feat: gamemaster
+123
-28
@@ -417,9 +417,9 @@ It also stores a denormalized runtime snapshot for convenience, at least:

* `engine_health_summary`.

Additionally, `Game Lobby` aggregates per-member game statistics from
`player_turn_stats` carried on each `runtime_snapshot_update` event:
current and running-max of `planets` and `population`. The aggregate is
retained from game start until capability evaluation at `game_finished`.

This prevents user-facing list/read flows from fanning out requests into `Game Master`.
@@ -544,7 +544,7 @@ background worker.

`RND.ReleaseAllByUser(user_id)` atomically with membership/application/invite
cancellations for the affected user.

## 8. [Game Master](gamemaster/README.md)

`Game Master` owns runtime and operational metadata of already running games.
@@ -561,6 +561,40 @@ It owns:

* engine version registry and version-specific engine options;
* runtime mapping `platform user_id -> engine player UUID` for each running game.

### Topology

`Game Master` runs as a single process in v1. The in-process scheduler is
authoritative; multi-instance with leader election is an explicit future
iteration. Every other service that interacts with `Game Master`
(`Edge Gateway`, `Game Lobby`, `Admin Service`, `Runtime Manager`) treats
GM as a singleton on the trusted network segment.

### Engine container contract

`Game Master` is the only platform component that talks to the engine. The
engine container exposes three route classes:

* admin paths under `/api/v1/admin/*` — `init`, `status`, `turn`, and
`race/banish`. They are unauthenticated and reachable only inside the
trusted network segment that connects GM to the engine container;
* player paths under `/api/v1/{command, order, report}` — invoked by GM on
behalf of an authenticated platform user; the actor field on each call
is set by GM from the verified user identity, never from the inbound
payload;
* `GET /healthz` — liveness probe used by `Runtime Manager` and operator
tooling.

Two engine-side parts of the contract deserve explicit mention:

* `StateResponse.finished:bool` — when `true` on a turn-generation
response, GM transitions the runtime to `finished`, publishes
`game_finished`, and dispatches the finish notification. The conditional
logic that flips the flag lives in the engine's domain code and is not
GM's concern;
* `POST /api/v1/admin/race/banish` with body `{race_name}` — invoked by GM
in response to the Lobby-driven banish flow after a permanent
platform-level membership removal. The engine returns `204` on success.

### Game Master status model

Minimum runtime-level status set:
@@ -571,8 +605,12 @@ Minimum runtime-level status set:

* `generation_failed`
* `stopped`
* `engine_unreachable`
* `finished`

`running` here means `running_accepting_commands`. `finished` is terminal:
the runtime record stays in this state indefinitely; no further turn
generation, command, or order is accepted, and operator cleanup is the
only path out.

### Game command routing

@@ -599,14 +637,25 @@ Private-game owner can use the subset allowed for the owner of that game.

### Turn cutoff and scheduling

`Game Master` is the owner of authoritative platform time for turn cutoff
decisions.

The cutoff is enforced by a single status compare-and-swap: every player
command, order, and report read requires `runtime_status=running` at the
moment of the call, and turn generation begins by CAS-ing
`running → generation_in_progress`. There is no separately tracked shadow
window or grace period — the status transition itself is the boundary.
Commands arriving after the CAS are rejected with `runtime_not_running`.
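The cutoff can be sketched as a single in-memory CAS; a hypothetical `gameRuntime` stands in for the durable runtime record, which the real GM would flip with one `UPDATE ... WHERE runtime_status = 'running'`:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// gameRuntime is an in-memory stand-in for the GM runtime record.
type gameRuntime struct {
	mu     sync.Mutex
	status string
}

var errRuntimeNotRunning = errors.New("runtime_not_running")

// AcceptCommand admits a player command only while status is `running`.
func (r *gameRuntime) AcceptCommand() error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.status != "running" {
		return errRuntimeNotRunning
	}
	return nil
}

// BeginGeneration is the boundary itself: CAS running → generation_in_progress.
func (r *gameRuntime) BeginGeneration() bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.status != "running" {
		return false
	}
	r.status = "generation_in_progress"
	return true
}

func main() {
	rt := &gameRuntime{status: "running"}
	fmt.Println(rt.AcceptCommand()) // <nil>: admitted before the CAS
	fmt.Println(rt.BeginGeneration())
	fmt.Println(rt.AcceptCommand()) // runtime_not_running: rejected after
}
```

There is deliberately no second clock or window to check: a command either observes `running` or it does not.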

The scheduler is a subsystem inside `Game Master`. It triggers turn
generation according to the game schedule.

If a manual `force next turn` is executed, the next scheduled turn slot
must be skipped so that players still get at least one full normal
schedule interval before the following generated turn. The skip is
recorded as `runtime_records.skip_next_tick=true`; the scheduler advances
`next_generation_at` by one extra cron step the next time it computes the
tick and clears the flag.

### Runtime snapshot publishing

@@ -615,16 +664,27 @@ consumed by `Game Lobby`. Events include:

* `runtime_snapshot_update` — carries the current `current_turn`,
`runtime_status`, `engine_health_summary`, and a `player_turn_stats` array
with one entry per active member (`user_id`, `planets`, `population`).
`Game Lobby` maintains a per-game per-user stats aggregate from these
events for capability evaluation at game finish.
* `game_finished` — carries the final snapshot values and triggers the
platform status transition plus Race Name Directory capability evaluation
inside `Game Lobby`.

Publication cadence is event-driven. GM publishes a snapshot when:

* a turn was generated (success or failure);
* `runtime_status` transitioned (e.g.,
`running ↔ generation_in_progress`, `running → engine_unreachable`,
`* → finished`);
* `engine_health_summary` changed in response to a `runtime:health_events`
observation; consecutive observations with identical summaries are
debounced.
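The cadence above reduces to a small decision sketch; a hypothetical `snapshotPublisher` counts publications and debounces identical health summaries:

```go
package main

import "fmt"

// snapshotPublisher models the event-driven cadence: publish on turn
// generation and status transitions, debounce identical health summaries.
// Type and method names are illustrative, not GM's real API.
type snapshotPublisher struct {
	lastHealthSummary string
	published         int
}

func (p *snapshotPublisher) onTurnGenerated()    { p.published++ }
func (p *snapshotPublisher) onStatusTransition() { p.published++ }

func (p *snapshotPublisher) onHealthObservation(summary string) {
	if summary == p.lastHealthSummary {
		return // consecutive identical summaries are debounced
	}
	p.lastHealthSummary = summary
	p.published++
}

func main() {
	p := &snapshotPublisher{}
	p.onHealthObservation("healthy")
	p.onHealthObservation("healthy") // debounced, no publish
	p.onHealthObservation("degraded")
	p.onTurnGenerated()
	fmt.Println(p.published) // 3
}
```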

There is no periodic heartbeat. `Game Master` does not retain the
aggregate; it only publishes the per-turn observation. `Game Lobby` is
responsible for holding initial values and running maxima across the
lifetime of the game.
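The Lobby-side aggregate described here can be sketched as a pure update rule; the type and field names are illustrative:

```go
package main

import "fmt"

// statsAggregate holds the Lobby-side state for one (game, user): current
// values plus running maxima fed by each runtime_snapshot_update event.
type statsAggregate struct {
	planets, maxPlanets       int
	population, maxPopulation int
}

func (a *statsAggregate) apply(planets, population int) {
	a.planets, a.population = planets, population
	if planets > a.maxPlanets {
		a.maxPlanets = planets
	}
	if population > a.maxPopulation {
		a.maxPopulation = population
	}
}

func main() {
	var a statsAggregate
	a.apply(3, 1000)
	a.apply(5, 4000)
	a.apply(2, 2500) // losses lower the current values, not the maxima
	fmt.Println(a.planets, a.maxPlanets, a.population, a.maxPopulation) // 2 5 2500 4000
}
```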

### Runtime/engine finish flow

@@ -847,13 +907,17 @@ requests for no operational benefit.

* `Gateway -> Admin Service`
* `Gateway -> User Service`
* `Gateway -> Game Lobby`
* `Gateway -> Game Master` for verified player command, order, and report
calls;
* `Auth / Session Service -> User Service`
* `Auth / Session Service -> Mail Service`
* `Geo Profile Service -> Auth / Session Service`
* `Geo Profile Service -> User Service`
* `Game Lobby -> User Service`
* `Game Lobby -> Game Master` for `register-runtime` after a successful
container start, engine-version `image-ref` resolve, membership
invalidation hook, banish, and the liveness reply consumed by Lobby's
resume flow;
* `Game Master -> Runtime Manager` for inspect, restart, patch, stop, and cleanup REST calls
* `Admin Service -> Runtime Manager` for operational inspect, restart, patch, stop, and cleanup REST calls
@@ -864,11 +928,15 @@ requests for no operational benefit.

* `Lobby -> Runtime Manager` runtime jobs through `runtime:start_jobs` (`{game_id, image_ref, requested_at_ms}`) and `runtime:stop_jobs` (`{game_id, reason, requested_at_ms}`);
* `Runtime Manager -> Lobby` job outcomes through `runtime:job_results`;
* `Runtime Manager -> Notification Service` admin-only failure intents (image pull, container start, start config) through `notification:intents`;
* `Runtime Manager` outbound technical health stream `runtime:health_events`
consumed by `Game Master`; `Game Lobby` and `Admin Service` are reserved
as future consumers;
* all event-bus propagation;
* `Game Master -> Game Lobby` runtime snapshot updates (including
`player_turn_stats` for capability aggregation) and game-finish events
through the `gm:lobby_events` Redis Stream consumed by `Game Lobby`,
published event-only with no periodic heartbeat (turn generation,
status transition, or debounced engine-health summary change);
* `User Service -> Game Lobby` user lifecycle events
(`user.lifecycle.permanent_blocked`, `user.lifecycle.deleted`) through the
`user:lifecycle_events` Redis Stream, consumed by `Game Lobby` to cascade
@@ -908,6 +976,10 @@ PostgreSQL is the source of truth for table-shaped business state:

registry (registered/reservation/pending tiers);
* runtime manager runtime records (`game_id -> current_container_id`),
per-operation audit log, and latest health snapshot per game;
* game master runtime records (`game_id -> engine_endpoint`,
status/turn/scheduling), the engine version registry (`engine_versions`),
per-game player mappings (`game_id, user_id -> race_name,
engine_player_uuid`), and the GM operation log;
* idempotency records, expressed as `UNIQUE` constraints on the durable
table — not as a separate kv;
* retry scheduling state, expressed as a `next_attempt_at` column on the
@@ -931,9 +1003,9 @@ Redis is the source of truth for ephemeral and runtime-coordination state:

### Database topology

* Single PostgreSQL database `galaxy`.
* Schema per service: `user`, `mail`, `notification`, `lobby`, `rtmanager`,
`gamemaster`. Reserved for future use: `geoprofile`. Not allocated unless
needed: `gateway`, `authsession`.
* Each service connects with its own PostgreSQL role whose grants are
restricted to its own schema (defense-in-depth).
* Authentication is username + password only. `sslmode=disable`. No client
@@ -1012,7 +1084,8 @@ crossing the SQL boundary carry `time.UTC` as their location.

### Configuration

For each service `<S>` ∈ { `USERSERVICE`, `MAIL`, `NOTIFICATION`,
`LOBBY`, `RTMANAGER`, `GAMEMASTER`, `GATEWAY`, `AUTHSESSION` }, the Redis
connection accepts:

* `<S>_REDIS_MASTER_ADDR` (required)
* `<S>_REDIS_REPLICA_ADDRS` (optional, comma-separated)
@@ -1020,7 +1093,7 @@ For each service `<S>` ∈ { `USERSERVICE`, `MAIL`, `NOTIFICATION`,
* `<S>_REDIS_DB`, `<S>_REDIS_OPERATION_TIMEOUT`

For PG-backed services (`USERSERVICE`, `MAIL`, `NOTIFICATION`, `LOBBY`,
`RTMANAGER`, `GAMEMASTER`) the Postgres connection accepts:

* `<S>_POSTGRES_PRIMARY_DSN` (required;
`postgres://<role>:<pwd>@<host>:5432/galaxy?search_path=<schema>&sslmode=disable`)

@@ -1384,7 +1457,17 @@ Rules:

* upgrade during a running game is allowed only as a patch update within the same major/minor line;
* game-engine version management is manual in v1;
* each engine version may carry version-specific engine options;
* `Game Master` owns the engine version registry from v1 — `(version,
image_ref, options, status)` rows live in the `gamemaster` schema and
are managed exclusively through GM's internal REST surface;
* `Game Lobby` resolves `image_ref` synchronously through GM at game start
by calling `GET /api/v1/internal/engine-versions/{version}/image-ref`;
`LOBBY_ENGINE_IMAGE_TEMPLATE` and any Lobby-side template-based
resolution are removed without a backward-compat shim. If GM is
unavailable when Lobby attempts the resolve, the start fails with
`service_unavailable` and `runtime:start_jobs` is never published;
* `Runtime Manager` continues to receive a verbatim `image_ref` from the
start envelope and never resolves engine versions itself.

## Administrative Access Model

@@ -1457,7 +1540,7 @@ Recommended order for implementation is:

6. **Game Lobby Service** (implemented)
Platform game records, membership, invites, applications, approvals, schedules, user-facing lists, pre-start lifecycle.

7. **Runtime Manager** (implemented)
Dedicated Docker-control service for container lifecycle (start, stop,
restart, semver-patch, cleanup) and inspect/health monitoring through
Docker events, periodic inspect, and active HTTP probes. Driven
@@ -1466,7 +1549,19 @@ Recommended order for implementation is:
`Admin Service` via the trusted internal REST surface.

8. **Game Master**
Single-instance running-game orchestrator. Owns the runtime state
(`game_id → engine_endpoint`, status, current turn, scheduling, engine
health), the engine version registry consumed synchronously by
`Game Lobby` for `image_ref` resolution, and the platform mapping
`(user_id, race_name, engine_player_uuid)` per running game. Drives
the turn scheduler with the force-next-turn skip rule, mediates every
engine HTTP call (admin paths under `/api/v1/admin/*`, player paths
under `/api/v1/{command, order, report}`), and reacts to
`StateResponse.finished` by transitioning the runtime to `finished` and
publishing `game_finished`. Drives `Runtime Manager` synchronously over
REST for stop, restart, and patch; consumes `runtime:health_events`
from RTM; publishes `gm:lobby_events` (event-only, no heartbeat) and
`notification:intents`. Never opens the Docker SDK.

9. **Admin Service**
Admin UI backend that orchestrates trusted APIs of other services.

-920
@@ -1,920 +0,0 @@

# PostgreSQL Migration Plan

This plan has already been implemented and stays here for historical reasons.

It should NOT be treated as a source of truth for service functionality.

## Context

The Galaxy Game project currently uses Redis as the only persistence backend
across all implemented services (`user`, `mail`, `notification`, `lobby`,
`gateway`, `authsession`). Redis serves both kinds of state: ephemeral and
runtime-coordination state (where it shines — Streams, caches, replay keys,
runtime queues, session caches, leases) and table-shaped business state where
it is a poor fit (durable user accounts, entitlements/sanctions, mail audit
records, notification routes/idempotency, lobby memberships and invites).
Replication and standby for Redis are not configured anywhere. There is no
SQL/migration tooling in the repo at all.

We migrate to a Redis + PostgreSQL split where each backend owns the data it
serves best. PostgreSQL becomes the source of truth for table-shaped business
state and gives us ACID transactions, mature physical/logical replication, and
backup/restore via `pg_dump` and WAL archiving. Redis remains the source of
truth for streams, pub/sub, caches, leases, replay keys, rate limits, session
caches, runtime queues, and stream consumer offsets.

The plan migrates only services already implemented and explicitly excludes
`galaxy/game`. It targets steady-state architecture rules first (one
authoritative document, `ARCHITECTURE.md`), then walks each service end to end
— code, tests, service-local README/docs, and integration suites — so that no
intermediate commit leaves docs and code in conflict.

## Confirmed decisions (with project owner)

1. **Documentation strategy**: `ARCHITECTURE.md` is updated as the very first
stage with the architecture-wide rules. Each per-service README and
per-service `docs/` change inside that service's own stage, paired with code
and tests. This keeps `ARCHITECTURE.md` ≡ policy, README ≡ current state,
and ensures any commit can be checked out without code/doc divergence.
2. **Service scope**: full migration of durable storage to PostgreSQL for
`user`, `mail`, `notification`, `lobby`. Only Redis configuration refactor
(master/replica + mandatory password, drop `TLS_ENABLED` / `USERNAME`) for
`gateway` and `authsession` — these services intentionally stay
Redis-only. `geoprofile` has no implementation; its `PLAN.md` and `README.md`
absorb the new persistence rules so future implementation follows them.
3. **Idempotency and retry-schedule placement**: idempotency records and
retry schedule queues live in PostgreSQL on the same table as the durable
record they protect (`(producer, idempotency_key)` UNIQUE on `records`,
`next_attempt_at` column on `deliveries` / `routes`). One source of truth,
no dual-write hazard between PG and Redis ZSETs.
4. **Stack**: `github.com/jackc/pgx/v5` driver, exposed as `*sql.DB` via
`github.com/jackc/pgx/v5/stdlib`. `github.com/go-jet/jet/v2` for
type-safe query building + code generation, generated against a
testcontainers PostgreSQL instance with migrations applied (Makefile
target per service). `github.com/pressly/goose/v3` library API for
embedded migrations applied at service startup; the `goose` CLI may be
used for local development and rollback investigations but is not in the
service binary path.
5. **Code**: all Postgres queries must use pre-generated code with `jet` and
appropriate builders rather than raw SQL queries, unless this usage cannot
achieve the goal of the business scenario due to a lack of `go-jet`
functionality.

## Architectural rules (target steady-state)

These rules land in `ARCHITECTURE.md` in Stage 0 and govern every subsequent
service stage.

### Backend assignment

PostgreSQL is the source of truth for:

- Domain entities with table-shaped business state (`accounts`,
`entitlement_records`, `sanction_records`, `limit_records`,
`blocked_emails`, `deliveries`, `attempts`, `dead_letters`,
`malformed_commands`, `notification_records`, `notification_routes`,
`games`, `applications`, `invites`, `memberships`, `race_names`).
- Idempotency records (UNIQUE constraint on the durable table, not a
separate kv).
- Retry scheduling state (`next_attempt_at` column + supporting index on the
durable table).
- Audit history records that must outlive any Redis snapshot.

Redis is the source of truth for:

- Redis Streams used as the event bus (`user:domain_events`,
`user:lifecycle_events`, `gm:lobby_events`, `runtime:job_results`,
`notification:intents`, `gateway:client-events`, `mail:delivery_commands`).
- Stream consumer offsets (small runtime coordination state, rebuildable).
- Caches and projections (gateway session cache).
- Replay reservation keys.
- Rate limit counters.
- Runtime coordination locks/leases (e.g. notification `route_leases`).
- Authentication challenge state and active session tokens (TTL-bounded; loss
is recoverable by re-authentication).
- Ephemeral per-game runtime aggregates that are deleted at game finish
(lobby `game_turn_stats`, `gap_activated_at`, capability evaluation
marker).

### Database topology

- Single PostgreSQL database `galaxy`.
- Schema-per-service: `user`, `mail`, `notification`, `lobby`. Reserved for
later: `geoprofile`. Not allocated unless needed: `gateway`, `authsession`.
- Per-service PostgreSQL role with grants restricted to its own schema
(defense-in-depth, simple to express in the initial migration).
- Authentication: username + password only. `sslmode=disable`. No client
certificates, no SCRAM channel binding, no custom auth plugins.
- Each service connects to one primary plus zero-or-more read-only replicas.
In this iteration only the primary is used; the replica pool is wired but
receives no traffic. Future read-routing is non-breaking.

### Redis topology

- Each service connects to one master Redis plus zero-or-more replica Redis
hosts.
- All connections use a mandatory password. `USERNAME`/ACL not used. TLS off.
- In this iteration only the master is used; the replica list is wired but
unused — non-breaking switch later when the app starts routing reads.
- Existing env vars `*_REDIS_TLS_ENABLED`, `*_REDIS_USERNAME` are removed
(hard rename; no backward-compat shim — fresh project, no production
deploys to migrate).

### Library stack

- Driver: `github.com/jackc/pgx/v5` (modern, actively maintained), exposed
to `database/sql` via `github.com/jackc/pgx/v5/stdlib` so go-jet's
`qrm.Queryable` interface is satisfied without changes.
- Query layer: `github.com/go-jet/jet/v2` (PostgreSQL dialect). Generated
code lives under each service's `internal/adapters/postgres/jet/`,
regenerated via a `make jet` target and committed to the repo.
- Migrations: `github.com/pressly/goose/v3` library API; migration files
embedded via `//go:embed *.sql`; applied at startup, before opening any
HTTP/gRPC listener; non-zero exit on failure.
- Test infrastructure: `github.com/testcontainers/testcontainers-go` plus
the `modules/postgres` submodule; the same setup is reused by `make jet`
to host a transient instance for jet codegen.

### Migration discipline

- Forward-only sequence-numbered files: `00001_init.sql`, `00002_*.sql`, …
- Lowercase snake_case names; goose `-- +goose Up` / `-- +goose Down`
markers; statements that need transaction-wrapping use
`-- +goose StatementBegin` / `-- +goose StatementEnd`.
- Migrations apply at service startup; service exits non-zero on failure.
- Per-service decision record at `galaxy/<service>/docs/postgres-migration.md`
captures schema decisions and any non-trivial deviation from the rules.

### Per-service code organisation

```text
galaxy/<service>/
  internal/
    adapters/
      postgres/
        migrations/    # *.sql files + migrations.go (//go:embed)
        jet/           # generated; commit-checked
      <portname>/      # adapter implementations matching internal/ports
    config/
      config.go        # adds Postgres + new Redis schema
  Makefile             # `jet` target: testcontainers + goose + jet
```

### Test patterns

- Per-service unit tests against a real PostgreSQL via
`testcontainers-go`; replace the corresponding miniredis test path where
storage moved to PG.
- Shared port-test suites (e.g. `lobby/internal/ports/racenamedirtest/`)
gain a Postgres harness; they remain backend-agnostic in shape.
- `integration/internal/harness/postgres_container.go` is added; integration
suites that need PG declare it next to their existing Redis container.
- Stub adapters (`*stub/`) are kept where the in-memory port is useful for
tests that don't need a real backend. Redis adapters that previously
implemented these ports are removed (no dead code).

### Configuration env vars (target)

For each service `<S>` ∈ { `USERSERVICE`, `MAIL`, `NOTIFICATION`, `LOBBY`,
`GATEWAY`, `AUTHSESSION` }:

- `<S>_REDIS_MASTER_ADDR` (required)
- `<S>_REDIS_REPLICA_ADDRS` (optional, comma-separated; default empty)
- `<S>_REDIS_PASSWORD` (required)
- `<S>_REDIS_DB` (default 0)
- `<S>_REDIS_OPERATION_TIMEOUT` (default 250ms)

For PG-backed services (`USERSERVICE`, `MAIL`, `NOTIFICATION`, `LOBBY`):

- `<S>_POSTGRES_PRIMARY_DSN` (required;
e.g. `postgres://userservice:secret@postgres:5432/galaxy?search_path=user&sslmode=disable`)
- `<S>_POSTGRES_REPLICA_DSNS` (optional, comma-separated)
- `<S>_POSTGRES_OPERATION_TIMEOUT` (default 1s)
- `<S>_POSTGRES_MAX_OPEN_CONNS` (default 25)
- `<S>_POSTGRES_MAX_IDLE_CONNS` (default 5)
- `<S>_POSTGRES_CONN_MAX_LIFETIME` (default 30m)

DSN sets `search_path=<schema>` so unqualified table references resolve into
the service-owned schema; `sslmode=disable` is set explicitly per the
"no TLS" requirement.
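The DSN convention can be sketched as a tiny builder; only the standard library is used, and the host/port are those from the examples above:

```go
package main

import "fmt"

// buildDSN assembles a per-service DSN under the rules above: role-scoped
// credentials, shared `galaxy` database, schema routed via search_path,
// TLS explicitly off.
func buildDSN(role, password, host, schema string) string {
	return fmt.Sprintf(
		"postgres://%s:%s@%s:5432/galaxy?search_path=%s&sslmode=disable",
		role, password, host, schema,
	)
}

func main() {
	fmt.Println(buildDSN("userservice", "secret", "postgres", "user"))
	// postgres://userservice:secret@postgres:5432/galaxy?search_path=user&sslmode=disable
}
```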

Service-prefix-specific stream/keyspace env vars (`*_REDIS_DOMAIN_EVENTS_STREAM`,
`*_REDIS_LIFECYCLE_EVENTS_STREAM`, `*_REDIS_KEYSPACE_PREFIX`,
`MAIL_REDIS_COMMAND_STREAM`, etc.) keep their current names and semantics —
they describe stream/key shapes, not connection topology.

---

## Stages

Each stage is independently executable and shippable.

### ~~Stage 0~~ — Architecture-wide rules and PG_PLAN.md materialisation

This stage is implemented.

**Goal**: land the steady-state rules in `ARCHITECTURE.md` and place
`PG_PLAN.md` at the project root so subsequent `/stage-implementation`
invocations have an authoritative reference.

**Actions**:

1. Write the contents of this plan file to `/Users/id/src/go/galaxy/PG_PLAN.md`.
2. Add a new section to `ARCHITECTURE.md` (e.g. `§9 Persistence Backends`)
capturing every rule under the *Architectural rules* heading above:
backend assignment, database/Redis topology, library stack, migration
discipline, code organisation, test patterns, env-var conventions.
3. Add a short *Migration Window* sub-section to `ARCHITECTURE.md` noting
that until all `PG_PLAN.md` stages complete, each service's `README.md`
continues to describe its actual current state — this caveat is removed
in Stage 9.
4. Adjust `ARCHITECTURE.md §8` (publisher rules) so cross-references
distinguish "Redis Stream" (event bus, stays Redis) from "PG-backed
table" (durable record).

**Files (modified / new)**:

- `/Users/id/src/go/galaxy/PG_PLAN.md` — new
- `/Users/id/src/go/galaxy/ARCHITECTURE.md` — modified

**Out of scope**: zero service code, zero per-service README/docs, zero
`go.mod` changes, zero new dependencies in service modules.

**Verification**:

- `git diff --stat` reports two paths only: `PG_PLAN.md`, `ARCHITECTURE.md`.
- `ARCHITECTURE.md` reads coherently end to end, with the new section
cross-referenced from §8 and from any other place that today says
"Redis is the v1 backend".
- Manual: read `PG_PLAN.md` top to bottom, confirm every architectural
decision matches the section in `ARCHITECTURE.md`.

---

### ~~Stage 1~~ — Shared infrastructure packages (`pkg/postgres`, `pkg/redisconn`)

This stage is implemented.

**Goal**: provide one canonical helper each for Postgres and Redis so
per-service stages don't reinvent connection/migration wiring. No service
consumes them yet.

**Files (new)**:

- `pkg/postgres/config.go` — `Config` struct (PrimaryDSN, ReplicaDSNs,
OperationTimeout, MaxOpenConns, MaxIdleConns, ConnMaxLifetime); helper
`LoadFromEnv(prefix string) (Config, error)` that reads
`<prefix>_POSTGRES_*`.
- `pkg/postgres/open.go` — `OpenPrimary(ctx, cfg) (*sql.DB, error)` and
`OpenReplicas(ctx, cfg) ([]*sql.DB, error)` using
`pgx.ConnConfig` → `stdlib.OpenDB(...)`; configures pool sizes and
per-statement context timeout.
- `pkg/postgres/migrate.go` — `RunMigrations(ctx context.Context, db *sql.DB,
fs embed.FS) error` wrapping `goose.SetBaseFS(fs)` + `goose.UpContext`.
- `pkg/postgres/otel.go` — `Instrument(db *sql.DB, telemetry telemetry.Runtime)`
applying `otelsql.RegisterDBStatsMetrics` and statement spans.
- `pkg/postgres/postgres_test.go` — testcontainers-backed smoke test:
open primary, run a one-line migration, insert/select.
- `pkg/redisconn/config.go` — `Config` struct (MasterAddr, ReplicaAddrs,
Password, DB, OperationTimeout); helper `LoadFromEnv(prefix string)
(Config, error)` that reads `<prefix>_REDIS_*` (the new shape only;
rejects deprecated TLS/USERNAME vars with a clear error).
- `pkg/redisconn/client.go` — `NewMasterClient(cfg) *redis.Client` and
`NewReplicaClients(cfg) []*redis.Client` (latter returns nil/empty when
replicas not configured).
- `pkg/redisconn/otel.go` — `Instrument(client *redis.Client,
telemetry telemetry.Runtime)` applying `redisotel.InstrumentTracing` /
`InstrumentMetrics`.
- `pkg/redisconn/redisconn_test.go` — miniredis-backed config and master
client tests.

**Files (touched)**:

- `pkg/go.mod` — add `github.com/jackc/pgx/v5`,
`github.com/jackc/pgx/v5/stdlib`, `github.com/pressly/goose/v3`,
`github.com/testcontainers/testcontainers-go/modules/postgres`,
`github.com/XSAM/otelsql` (for db instrumentation; alternative:
`go.nhat.io/otelsql` — pick one in implementation).
- `go.work` — confirm `pkg/` is registered (already is).

**Verification**:

- `cd /Users/id/src/go/galaxy/pkg && go test ./postgres/... ./redisconn/...`
passes locally with Docker available.
- `go vet ./...` clean.

---
|
||||
|
||||
### ~~Stage 2~~ — Integration test harness extension

This stage is implemented.

**Goal**: extend `integration/internal/harness/` with a Postgres container
helper and a service-bootstrap helper that builds the per-service DSN with
the right `search_path`. All existing integration suites stay green.
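
A minimal sketch of what building such a DSN can look like, assuming the base DSN is a `postgres://` URL and that `search_path` is passed as a runtime parameter in the query string (pgx forwards unrecognised query parameters to the server at connect time). The function name and signature are illustrative stand-ins for the harness method:

```go
package main

import (
	"fmt"
	"net/url"
)

// dsnForSchema pins a per-service DSN to its schema by injecting role
// credentials and a search_path runtime parameter into the base URL.
func dsnForSchema(baseDSN, schema, role, password string) (string, error) {
	u, err := url.Parse(baseDSN)
	if err != nil {
		return "", err
	}
	u.User = url.UserPassword(role, password)
	q := u.Query()
	q.Set("search_path", schema)
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	dsn, _ := dsnForSchema("postgres://localhost:5432/galaxy?sslmode=disable", "user", "userservice", "secret")
	fmt.Println(dsn)
}
```
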
**Files (new)**:

- `integration/internal/harness/postgres_container.go` —
  `StartPostgresContainer(t testing.TB) *PostgresRuntime`. The runtime
  exposes `BaseDSN()`, `DSNForSchema(schema, role string) string`, and
  `EnsureRoleAndSchema(ctx, schema, role, password string) error` so each
  test can prepare an isolated schema for the service it is booting.
- `integration/internal/harness/postgres_container_test.go` — smoke test.

**Files (touched)**:

- `integration/internal/harness/binary.go` — extend `Process`/launch
  helpers with `WithPostgres(rt *PostgresRuntime, schema, role string)`
  that injects the right `<S>_POSTGRES_PRIMARY_DSN`. (The existing API
  already takes `env map[string]string`; this is a thin wrapper.)
- `integration/go.mod` — add the testcontainers Postgres module.

**Out of scope**: no integration suite is wired to Postgres yet; each
service stage wires in its own suites.

**Verification**:

- `cd integration && go test ./internal/harness/...` passes.
- `cd integration && go test ./...` still green for all existing suites
  (Redis-only services remain Redis-only).

---
### ~~Stage 3~~ — User Service migration (pilot)

**Goal**: replace User Service's Redis durable storage with PostgreSQL. The
two Redis Streams (`user:domain_events`, `user:lifecycle_events`) remain on
Redis. This stage is the pilot; subsequent service stages copy its shape.

**Schema (`user` schema)**:

- `accounts` (user_id PK, email UNIQUE, user_name UNIQUE, display_name,
  preferred_language, time_zone, declared_country, created_at, updated_at,
  deleted_at).
- `blocked_emails` (email PK, reason_code, blocked_at, actor_type, actor_id,
  resolved_user_id).
- `entitlement_records` (record_id PK, user_id FK, plan_code, is_paid,
  starts_at, ends_at, source, actor_type, actor_id, reason_code,
  updated_at).
- `entitlement_snapshots` (user_id PK FK → accounts, …current effective
  values mirroring Redis snapshot shape).
- `sanction_records` (record_id PK, user_id FK, sanction_code, scope,
  reason_code, actor_type, actor_id, applied_at, expires_at, removed_at,
  removed_by_type, removed_by_id, removed_reason_code).
- `sanction_active` (user_id, sanction_code, record_id) PRIMARY KEY
  (user_id, sanction_code).
- `limit_records`, `limit_active` — analogous to sanctions.
- Indexes: `accounts(created_at DESC, user_id DESC)` for newest-first
  pagination; `accounts(declared_country)`;
  `entitlement_snapshots(plan_code, is_paid)`;
  `entitlement_snapshots(ends_at) WHERE is_paid AND ends_at IS NOT NULL`;
  `sanction_active(sanction_code)`; `limit_active(limit_code)`. Eligibility
  flags become computed predicates on these columns.
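
Newest-first pagination against the `accounts(created_at DESC, user_id DESC)` index maps naturally onto keyset (cursor) pagination. A sketch of the query construction; the helper name and the selected column list are illustrative, only the table, index columns, and ordering come from the schema above:

```go
package main

import "fmt"

// accountsPageQuery builds the newest-first page query. The composite
// row-value comparison keeps the scan on the (created_at DESC, user_id
// DESC) index; the cursor is the (created_at, user_id) pair of the last
// row of the previous page.
func accountsPageQuery(withCursor bool) string {
	base := "SELECT user_id, email, created_at FROM accounts WHERE deleted_at IS NULL"
	order := " ORDER BY created_at DESC, user_id DESC"
	if withCursor {
		return base + " AND (created_at, user_id) < ($1, $2)" + order + " LIMIT $3"
	}
	return base + order + " LIMIT $1"
}

func main() {
	fmt.Println(accountsPageQuery(true))
}
```
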
**Files (new)**:

- `galaxy/user/internal/adapters/postgres/migrations/00001_init.sql` —
  full schema with grants (`GRANT USAGE ON SCHEMA "user" TO userservice;
  GRANT … ON ALL TABLES …;` — note that `user` is a reserved word in
  PostgreSQL and must be quoted when used as an identifier).
- `galaxy/user/internal/adapters/postgres/migrations/migrations.go` —
  `//go:embed *.sql` and a `Migrations() embed.FS` accessor.
- `galaxy/user/internal/adapters/postgres/jet/...` — generated code
  (commit-checked).
- `galaxy/user/internal/adapters/postgres/userstore/store.go` — Postgres
  implementation of `ports.UserAccountStore` and `ports.AuthDirectoryStore`.
- `galaxy/user/internal/adapters/postgres/userstore/entitlement_store.go` —
  Postgres implementation of `EntitlementSnapshotStore` and
  `EntitlementHistoryStore`.
- `galaxy/user/internal/adapters/postgres/userstore/policy_store.go` —
  Postgres implementation of `SanctionStore` and `LimitStore`.
- `galaxy/user/internal/adapters/postgres/userstore/list_store.go` —
  Postgres implementation of `UserListStore` (pagination + filters
  expressed as SQL).
- `galaxy/user/internal/adapters/postgres/userstore/store_test.go` and
  siblings — testcontainers-backed unit tests covering the same matrix the
  current Redis tests cover.
- `galaxy/user/Makefile` — `jet` target.
- `galaxy/user/docs/postgres-migration.md` — decision record (schema
  shape, why we keep `entitlement_snapshots` denormalised, eligibility
  expressed as SQL predicates, schema role grants).
**Files (touched)**:

- `galaxy/user/internal/config/config.go` — add Postgres config; refactor
  Redis config to master/replica/password (drop `TLS_ENABLED`, `USERNAME`).
- `galaxy/user/internal/config/config_test.go` — update to the new env shape.
- `galaxy/user/internal/app/runtime.go` — open Postgres pool, run
  migrations on startup before listeners open, wire postgres adapters
  into services. The Redis client now serves only the two stream publishers.
- `galaxy/user/README.md` — replace "Redis-backed user state" with the
  new persistence model, update the env-var section.
- `galaxy/user/docs/runbook.md`, `galaxy/user/docs/runtime.md`,
  `galaxy/user/docs/examples.md` — update storage references and
  config sections.
- `galaxy/user/go.mod` — add `github.com/jackc/pgx/v5{,/stdlib}`,
  `github.com/pressly/goose/v3`, `github.com/go-jet/jet/v2`,
  `github.com/testcontainers/testcontainers-go/modules/postgres`. Use
  `pkg/postgres`, `pkg/redisconn`.

**Files (deleted)**:

- `galaxy/user/internal/adapters/redis/userstore/` — entire directory.
- The portions of `galaxy/user/internal/adapters/redisstate/keyspace.go`
  that defined account/entitlement/sanction/limit/index keys (keep only
  what `domainevents` and `lifecycleevents` publishers still require — if
  none, delete the file outright).

**Files retained on Redis**:

- `galaxy/user/internal/adapters/redis/domainevents/publisher.go`.
- `galaxy/user/internal/adapters/redis/lifecycleevents/publisher.go`.

**Touched integration suites** (each gets a Postgres container in addition
to the existing Redis one):

- `integration/authsessionuser/`
- `integration/gatewayauthsessionuser/`
- `integration/gatewayauthsessionusermail/`
- `integration/notificationuser/`
- `integration/lobbyuser/`

**Verification**:

- `cd galaxy/user && make jet && go test ./...` (Docker needed).
- `cd integration && go test ./authsessionuser/... ./gatewayauthsessionuser/... ./gatewayauthsessionusermail/... ./notificationuser/... ./lobbyuser/...`
- Manual smoke against a `docker-compose` stack (PG + Redis with
  passwords) using flows from `galaxy/user/docs/examples.md`.

---
### ~~Stage 4~~ — Mail Service migration

This stage is implemented.

**Goal**: move durable mail storage (deliveries, attempts, dead letters,
malformed commands, payloads, idempotency, attempt schedule) into
PostgreSQL. Keep Redis only for the inbound `mail:delivery_commands`
stream and its consumer offset.

**Schema (`mail` schema)**:

- `deliveries` (delivery_id PK, source, status, recipient_envelope JSONB,
  subject, text_body, html_body, payload_mode, template_id,
  idempotency_source, idempotency_key, locale_fallback_used,
  next_attempt_at, attempt_count, max_attempts, created_at, updated_at).
  - INDEX (status, next_attempt_at) for the scheduler.
  - UNIQUE (idempotency_source, idempotency_key) — the idempotency record
    IS this row (no separate kv).
  - INDEX (created_at DESC) for operator listings; INDEX on status, source,
    template_id, recipient as needed.
- `attempts` (delivery_id FK, attempt_no, status, provider_summary,
  scheduled_for_ms, started_at_ms, completed_at_ms, PRIMARY KEY
  (delivery_id, attempt_no)).
- `dead_letters` (delivery_id PK FK, final_attempt_count, max_attempts,
  failure_classification, failure_message, created_at_ms).
- `delivery_payloads` (delivery_id PK FK, template_variables JSONB).
- `malformed_commands` (stream_entry_id PK, failure_code, failure_message,
  raw_fields JSONB, recorded_at_ms; INDEX (recorded_at_ms)).
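
Because the idempotency record is the `deliveries` row itself, accepting a command reduces to a single conditional insert. A sketch, with the column list abbreviated to the idempotency-relevant fields:

```sql
INSERT INTO mail.deliveries
    (delivery_id, source, status, idempotency_source, idempotency_key)
VALUES
    ($1, $2, 'queued', $3, $4)
ON CONFLICT (idempotency_source, idempotency_key) DO NOTHING
RETURNING delivery_id;

-- No row returned => the command was already accepted; load the existing
-- delivery by (idempotency_source, idempotency_key) instead.
```
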
**Files**: mirror Stage 3 (postgres adapter package, migrations, jet
codegen, Makefile, decision record, removal of the corresponding
`internal/adapters/redisstate/*` files for migrated entities, retention
of stream offset and consumer wiring on Redis).

**Worker change**: the mail attempt scheduler loop replaces
`ZRANGEBYSCORE` over `mail:attempt_schedule` with
`SELECT … FROM deliveries WHERE status IN ('queued','retry_pending') AND next_attempt_at <= now() ORDER BY next_attempt_at LIMIT N FOR UPDATE SKIP LOCKED`.
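
The claim query above can be wrapped in a small transactional helper. A sketch assuming the `deliveries` columns from this stage's schema; `claimDue` is a hypothetical name, not the implemented function:

```go
package main

import (
	"context"
	"database/sql"
)

// claimQuery selects due work; SKIP LOCKED lets competing scheduler
// transactions pass over rows another worker has already claimed.
const claimQuery = `SELECT delivery_id FROM deliveries
WHERE status IN ('queued', 'retry_pending') AND next_attempt_at <= now()
ORDER BY next_attempt_at
LIMIT $1
FOR UPDATE SKIP LOCKED`

// claimDue returns up to n due delivery IDs, locked by tx until commit.
func claimDue(ctx context.Context, tx *sql.Tx, n int) ([]string, error) {
	rows, err := tx.QueryContext(ctx, claimQuery, n)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var ids []string
	for rows.Next() {
		var id string
		if err := rows.Scan(&id); err != nil {
			return nil, err
		}
		ids = append(ids, id)
	}
	return ids, rows.Err()
}

func main() {}
```

Each claimed row stays locked until the surrounding transaction commits, so the worker marks the attempt outcome in the same transaction.
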
**Files (deleted)**:

- `galaxy/mail/internal/adapters/redisstate/auth_acceptance_store.go`
- `galaxy/mail/internal/adapters/redisstate/generic_acceptance_store.go`
- `galaxy/mail/internal/adapters/redisstate/attempt_execution_store.go`
- `galaxy/mail/internal/adapters/redisstate/operator_store.go`
- `galaxy/mail/internal/adapters/redisstate/malformed_command_store.go`
- `galaxy/mail/internal/adapters/redisstate/render_store.go`
- The portions of `galaxy/mail/internal/adapters/redisstate/keyspace.go`
  no longer used (`mail:attempt_schedule`, `mail:idempotency:*`, all
  delivery/attempt/dead-letter/index keys).

**Files retained on Redis**:

- `galaxy/mail/internal/adapters/redisstate/stream_offset_store.go` (offset
  for the `mail:delivery_commands` consumer).
- The command stream consumer wiring itself.

**Touched integration suites**:

- `integration/authsessionmail/`
- `integration/gatewayauthsessionmail/`
- `integration/gatewayauthsessionusermail/`
- `integration/notificationmail/`

**Verification**: per the Stage 3 pattern; plus an end-to-end smoke that
pushes a delivery through retry_pending → provider_accepted using the
SMTP stub.

---
### ~~Stage 5~~ — Notification Service migration

This stage is implemented.

**Goal**: move durable notification storage (records, routes, idempotency,
dead letters, malformed intents) into PostgreSQL. Keep Redis for the
inbound `notification:intents` stream, the outbound `gateway:client-events`
stream, the outbound `mail:delivery_commands` stream, the corresponding
stream offsets, and the short-lived per-route lease (`route_leases:*`).

**Schema (`notification` schema)**:

- `records` (notification_id PK, notification_type, producer, audience_kind,
  recipient_user_ids JSONB, payload JSONB, idempotency_key,
  request_fingerprint, request_id, trace_id, occurred_at_ms,
  accepted_at_ms, updated_at_ms).
  - UNIQUE (producer, idempotency_key) — the idempotency record IS this row.
- `routes` (notification_id, route_id, channel, recipient_ref, status,
  attempt_count, max_attempts, next_attempt_at_ms, resolved_email,
  resolved_locale, last_error_classification, last_error_message,
  last_error_at_ms, created_at_ms, updated_at_ms, published_at_ms,
  dead_lettered_at_ms, skipped_at_ms, PRIMARY KEY
  (notification_id, route_id)).
  - INDEX (status, next_attempt_at_ms) for the scheduler.
- `dead_letters` (notification_id, route_id PK FK, channel, recipient_ref,
  final_attempt_count, max_attempts, failure_classification,
  failure_message, recovery_hint, created_at_ms).
- `malformed_intents` (stream_entry_id PK, notification_type, producer,
  idempotency_key, failure_code, failure_message, raw_fields JSONB,
  recorded_at_ms).

**Worker change**: the route publisher selects work via the same
`FOR UPDATE SKIP LOCKED` pattern as Mail. The Redis lease is still used
as a short-lived, per-process exclusivity hint atop the SQL claim.
**Files (deleted)**:

- `galaxy/notification/internal/adapters/redisstate/acceptance_store.go`
- `galaxy/notification/internal/adapters/redisstate/route_state_store.go`
- `galaxy/notification/internal/adapters/redisstate/malformed_intent_store.go`
- The portions of
  `galaxy/notification/internal/adapters/redisstate/keyspace.go` no longer
  used (records, routes, idempotency, dead_letters, malformed_intents).

**Files retained on Redis**:

- `galaxy/notification/internal/adapters/redisstate/stream_offset_store.go`.
- The route lease key generator (still under `redisstate/`, narrowed to
  leases only).
- All stream consumer/publisher wiring.

**Touched integration suites**:

- `integration/notificationgateway/`
- `integration/notificationmail/`
- `integration/notificationuser/`

---
### ~~Stage 6A~~ — Lobby Service: core enrollment entities

**Goal**: move `Game`, `Application`, `Invite`, `Membership` records and
their indexes into PostgreSQL. RaceNameDirectory, GameTurnStats,
GapActivation, EvaluationGuard, StreamOffset remain on Redis until later
sub-stages.

**Schema (`lobby` schema, partial)**:

- `games` (game_id PK, owner_id, kind ('public'|'private'), status,
  created_at, updated_at, runtime_snapshot JSONB, runtime_binding JSONB,
  …other denormalised game settings).
  - INDEX (status, created_at).
  - INDEX (owner_id) WHERE kind = 'private'.
- `applications` (application_id PK, game_id FK, user_id, status,
  canonical_key, submitted_at, decided_at).
  - PARTIAL UNIQUE INDEX (user_id, game_id) WHERE status = 'active' —
    enforces the single-active constraint at the DB level (replaces
    `lobby:user_game_application:*:*`).
  - INDEX (game_id), INDEX (user_id).
- `invites` (invite_id PK, game_id FK, inviter_id, invitee_id, race_name,
  status, created_at, expires_at, decided_at).
  - INDEX (game_id), INDEX (invitee_id), INDEX (inviter_id).
  - INDEX (status, expires_at) for any expiration scanner if needed.
- `memberships` (membership_id PK, game_id FK, user_id, status, joined_at,
  canonical_key, …).
  - INDEX (game_id), INDEX (user_id).
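
The single-active constraint above translates directly into DDL. A sketch; the index name is invented for illustration:

```sql
CREATE UNIQUE INDEX applications_one_active_per_user_game
    ON lobby.applications (user_id, game_id)
    WHERE status = 'active';

-- A second INSERT of an active application for the same (user_id,
-- game_id) now fails with unique_violation (SQLSTATE 23505), replacing
-- the lobby:user_game_application:*:* guard key on Redis.
```
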
**Files (new)**:

- `galaxy/lobby/internal/adapters/postgres/migrations/00001_core_entities.sql`.
- `galaxy/lobby/internal/adapters/postgres/migrations/migrations.go`.
- `galaxy/lobby/internal/adapters/postgres/jet/...`.
- `galaxy/lobby/internal/adapters/postgres/gamestore/store.go`.
- `galaxy/lobby/internal/adapters/postgres/applicationstore/store.go`.
- `galaxy/lobby/internal/adapters/postgres/invitestore/store.go`.
- `galaxy/lobby/internal/adapters/postgres/membershipstore/store.go`.
- Test files for each store using the existing test patterns.
- `galaxy/lobby/Makefile` (`jet` target).
- `galaxy/lobby/docs/postgres-migration.md` (decision record covering
  this sub-stage and what is intentionally left for 6B/6C).

**Files (touched)**:

- `galaxy/lobby/internal/config/config.go` — add Postgres config; refactor
  Redis config to the new shape.
- `galaxy/lobby/internal/app/runtime.go` — open Postgres pool, run
  migrations on startup, wire core PG-backed stores into services.
  RaceNameDirectory and stats/guard stores stay wired to Redis until 6B/6C.
- `galaxy/lobby/README.md` and `galaxy/lobby/docs/runbook.md` — updated
  to describe core entities on PG, RND/stats still on Redis until 6B/6C.

**Files (deleted)**:

- `galaxy/lobby/internal/adapters/redisstate/gamestore.go`,
  `applicationstore.go`, `invitestore.go`, `membershipstore.go`.
- The corresponding sections of `redisstate/keyspace.go`.

**Stub adapters retained**: `gamestub/`, `applicationstub/`, `invitestub/`,
`membershipstub/` stay — they are pure in-memory ports useful for tests
that don't need real PG.

**Touched integration suites**:

- `integration/lobbyuser/`
- `integration/lobbynotification/`

**Verification**: per the Stage 3 pattern; plus the existing lobby HTTP
contract tests against the public/internal ports.

---
### ~~Stage 6B~~ — Lobby Service: RaceNameDirectory

This stage is implemented.

**Goal**: replace the Lua-backed Redis `RaceNameDirectory` with a PG
implementation that preserves the two-tier model (registered / reservation /
pending_registration) and atomic registration semantics via SQL
transactions and (where required) advisory locks.

**Schema (additions to `lobby` schema)**:

- `race_names` (canonical_key PK, holder_user_id, binding_kind ('registered'
  | 'reserved' | 'pending_registration'), source_game_id, eligible_until_ms,
  registered_at_ms, reserved_at_ms).
  - INDEX (holder_user_id) for `ListRegistered`/`ListReservations`/
    `ListPendingRegistrations` queries.
  - PARTIAL INDEX (eligible_until_ms) WHERE binding_kind =
    'pending_registration' for the expiration scanner.
- The confusable-pair policy is enforced at write time inside
  `BEGIN … COMMIT` transactions; `Reserve`/`Register`/
  `MarkPendingRegistration` use `SELECT … FOR UPDATE` on the canonical
  keys involved (or PG advisory locks keyed by `hashtext(canonical_key)`)
  to serialise concurrent attempts.
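
The advisory-lock variant mentioned above can be sketched as a transaction-scoped lock keyed by the canonical name; the surrounding statements are illustrative placeholders:

```sql
BEGIN;
-- pg_advisory_xact_lock holds until COMMIT/ROLLBACK, so concurrent
-- Reserve/Register/MarkPendingRegistration calls on the same canonical
-- key serialise here without touching row locks.
SELECT pg_advisory_xact_lock(hashtext($1));  -- $1 = canonical_key
-- ... check confusable pairs, then INSERT/UPDATE race_names ...
COMMIT;
```
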
**Files (new)**:

- `galaxy/lobby/internal/adapters/postgres/migrations/00002_race_names.sql`.
- `galaxy/lobby/internal/adapters/postgres/racenamedir/directory.go` —
  Postgres implementation of `ports.RaceNameDirectory`.
- `galaxy/lobby/internal/adapters/postgres/racenamedir/directory_test.go`
  — runs the existing shared suite at
  `galaxy/lobby/internal/ports/racenamedirtest/suite.go`.

**Files (touched)**:

- `galaxy/lobby/internal/app/runtime.go` — wire the PG RND.
- `galaxy/lobby/internal/ports/racenamedirtest/suite.go` — only
  shape-preserving updates if the suite assumed Redis-only behaviour
  (e.g. SCAN-based list ordering).
- `galaxy/lobby/README.md`, `galaxy/lobby/docs/runbook.md` — RND now PG-
  backed; the canonical_lookup cache is no longer needed (PG indexed
  lookup is fast enough; remove the Redis cache key from
  `redisstate/keyspace.go`).

**Files (deleted)**:

- `galaxy/lobby/internal/adapters/redisstate/racenamedir.go` and the
  embedded Lua scripts.

**Files retained**: `galaxy/lobby/internal/adapters/racenamestub/` stays
(useful for unit tests that don't need PG).

**Worker change**: the pending-registration expiration worker switches
from `ZRANGEBYSCORE` on `lobby:race_names:pending_index` to
`SELECT … FROM race_names WHERE binding_kind = 'pending_registration' AND eligible_until_ms <= (extract(epoch FROM now()) * 1000)::bigint`
(the column stores epoch milliseconds, so the comparison converts `now()`
to the same unit).

**Verification**: shared port suite (`racenamedirtest`) green against the
PG adapter; lobby unit tests green; `integration/lobbyuser/`,
`integration/lobbynotification/` green.

---
### ~~Stage 6C~~ — Lobby Service: workers, ephemeral stores, cleanup

This stage is implemented.

**Goal**: finish the lobby migration. Confirm what stays Redis-only,
update workers that touch both backends, drop dead Redis adapters.

**Stays on Redis (per architectural rules)**:

- `GameTurnStatsStore` — ephemeral per-game aggregate, deleted at game
  finish, rebuildable from GM events.
- `EvaluationGuardStore` — ephemeral marker.
- `GapActivationStore` — short-lived gap-window timestamp cache.
- `StreamOffsetStore` — runtime coordination per the architectural rule.
- All stream consumers and publishers (`gm:lobby_events`,
  `runtime:job_results`, `user:lifecycle_events`, `notification:intents`).

This is documented in `galaxy/lobby/docs/postgres-migration.md`.

**Files (touched)**:

- `galaxy/lobby/internal/worker/gmevents/consumer.go` — write durable
  updates via the PG-backed `GameStore`.
- `galaxy/lobby/internal/worker/runtimejobresult/consumer.go` — same.
- `galaxy/lobby/internal/adapters/userlifecycle/consumer.go` (and the
  worker that drives it) — RND release and the
  membership/application/invite cascade all flow through PG.
- `galaxy/lobby/internal/worker/pendingregistration/worker.go` — PG-based
  scan, no Redis ZSET.
- `galaxy/lobby/internal/worker/enrollmentautomation/worker.go` — uses PG
  `GameStore.GetByStatus("enrollment_open")`.
- `galaxy/lobby/internal/adapters/redisstate/keyspace.go` — pruned to the
  remaining Redis keys (turn stats, gap activation, evaluation guard,
  stream offsets, lifecycle stream consumer state).
- `galaxy/lobby/README.md`, `galaxy/lobby/docs/runtime.md`,
  `galaxy/lobby/docs/runbook.md`, `galaxy/lobby/docs/examples.md` —
  finalised storage descriptions.

**Files (deleted)**:

- Anything left in `galaxy/lobby/internal/adapters/redisstate/` whose
  only consumer was a port now PG-backed (see the 6A/6B deletions).

**Verification**:

- All previously-green lobby unit tests pass with PG-backed adapters.
- `integration/lobbyuser/`, `integration/lobbynotification/` pass.
- `grep -rn "redisstate" galaxy/lobby/internal/` returns only the keys
  intentionally retained on Redis.

---
### ~~Stage 7~~ — Gateway and Auth/Session: Redis configuration refactor

This stage is implemented.

**Goal**: apply the new Redis configuration shape (master/replica/password,
drop TLS/USERNAME) to Gateway and Auth/Session. No PG migration; these
services intentionally stay Redis-only.

**Files (touched)**:

- `galaxy/gateway/internal/config/config.go` — switch `RedisConfig`
  fields to the `pkg/redisconn.Config` shape; update the three
  prefixes: `GATEWAY_SESSION_CACHE_REDIS_*`, `GATEWAY_REPLAY_REDIS_*`,
  `GATEWAY_SESSION_EVENTS_REDIS_*`. Drop `TLS_ENABLED`, `USERNAME`.
- `galaxy/gateway/internal/session/redis.go`,
  `galaxy/gateway/internal/replay/redis.go`,
  `galaxy/gateway/internal/events/subscriber.go` — adopt the new client
  constructor via `pkg/redisconn`.
- `galaxy/gateway/internal/config/config_test.go`,
  `galaxy/gateway/internal/session/redis_test.go`,
  `galaxy/gateway/internal/replay/redis_test.go` — updated to the new env
  shape.
- `galaxy/authsession/internal/config/config.go` — same pattern; drop
  TLS, USERNAME.
- `galaxy/authsession/internal/adapters/redis/sessionstore/store.go`,
  `challengestore/store.go`, `projectionpublisher/publisher.go`,
  `sendemailcodeabuse/protector.go`, `configprovider/store.go` — adopt
  the new client.
- `galaxy/authsession/internal/config/config_test.go` — updated.
- `galaxy/gateway/README.md`, `galaxy/authsession/README.md`,
  `galaxy/gateway/docs/runbook.md`, `galaxy/authsession/docs/runbook.md`
  — note that Redis-only is intentional and reference the `ARCHITECTURE.md`
  rule on TTL-bounded auth state.

**No deletions of business logic**; only env-var refactor and adapter
plumbing through `pkg/redisconn`.

**Touched integration suites**:

- `integration/gatewayauthsession/`
- `integration/authsession/`
- (every suite that boots gateway or authsession picks up the new env vars
  via the harness; confirm none still pass `*_REDIS_TLS_ENABLED`).

**Verification**:

- `cd galaxy/gateway && go test ./...`
- `cd galaxy/authsession && go test ./...`
- `cd integration && go test ./gatewayauthsession/... ./authsession/...`

---
### ~~Stage 8~~ — GeoProfile: documentation only

**Goal**: ensure the GeoProfile plan and README reflect the new
persistence rules so its future implementation follows them. No code
exists yet.

**Files (touched)**:

- `galaxy/geoprofile/PLAN.md` — add a stage referencing `pkg/postgres`
  and `pkg/redisconn`; specify that observed-country aggregates,
  declared_country history and review records will live in a `geoprofile`
  schema, while ephemeral per-session signals (if any) stay on Redis.
- `galaxy/geoprofile/README.md` — note ownership of the `geoprofile`
  schema and the stack choices.

**No code change**.

---
### ~~Stage 9~~ — Final sweep

**Goal**: confirm no dead Redis adapter code, no orphaned stub, no
broken doc reference. Remove the *Migration Window* caveat from
`ARCHITECTURE.md` once all stages are done.

**Activities**:

- Walk every PG-backed service: `grep -rn "redis" galaxy/<svc>/internal/adapters/`
  and verify every match belongs to a still-active stream/cache/runtime
  use case.
- Walk integration suites: confirm each one provisions only the
  containers it actually needs; no stale env vars.
- Update `ARCHITECTURE.md` to drop the *Migration Window* sub-section.
- Squash each service's sequence of migration `.sql` files into a single
  initial file. Rewrite the SQL, don't just concatenate. The project is
  still in development, so all schema updates can go directly into the
  one-and-only first migration of each service; record this convention in
  `ARCHITECTURE.md` as well.
- One round of `go test ./...` in every module plus
  `cd integration && go test ./...`.

**Verification**:

- All tests pass in every module.
- No file matches `// TODO.*postgres` or `// TODO.*migrate`.
- `git grep -nE 'REDIS_(TLS_ENABLED|USERNAME)'` returns nothing under
  `galaxy/` (these env vars are fully retired).

---
## Verification strategy (whole project)

After each stage:

- `cd /Users/id/src/go/galaxy/pkg && go test ./...`
- `cd /Users/id/src/go/galaxy/<changed_service> && go test ./...`
  (with Docker available for testcontainers).
- `cd /Users/id/src/go/galaxy/integration && go test ./<affected_suites>/...`
- Manual smoke against a `docker-compose` stack (PG + Redis, both with
  passwords) using the example flows in each service's `docs/examples.md`.

After Stage 9:

- `cd /Users/id/src/go/galaxy/integration && go test ./...` end to end
  against real PG + real Redis.
- Confirm `git grep -nE 'REDIS_(TLS_ENABLED|USERNAME)'` returns nothing
  under `galaxy/`.
- Confirm `git grep -nE 'TODO.*(postgres|migrate)'` returns nothing.
## Out of scope

- `galaxy/game` — explicitly excluded by the project owner.
- Production deployment manifests (Helm/k8s) — local `docker-compose` is
  enough for development.
- Backup/restore tooling configuration — `pg_dump` and WAL archiving are
  available out of the box; operational setup is not part of this plan.
- Sentinel/Cluster Redis topology code paths — config exposes replica
  addresses for future use; no failover routing implemented yet.
- Read-traffic routing to PG replicas — config exposes
  `*_POSTGRES_REPLICA_DSNS` for future use; no routing implemented yet.
- `golangci-lint` config addition — not part of this migration.
- CI pipeline — no `.github/workflows/` exists; not added by this plan.
## Risks and notes

- **`go-jet` codegen requires a live database**. The `make jet` target
  per service uses `testcontainers-go` to bring up a transient PG, applies
  the same goose migrations the service applies at startup, then runs
  `jet -dsn=… -path=internal/adapters/postgres/jet`. Generated code is
  committed; consumers don't need Docker just to build.
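
A sketch of what such a `jet` target can look like. The bootstrap helper path and the `JET_DSN` variable are hypothetical; only the `jet` flags mirror the plan text, and the real DSN is produced by the transient container at run time:

```make
# Hypothetical sketch: boot a transient Postgres (testcontainers-backed
# helper, name invented), apply the service's goose migrations, then
# generate models against the migrated schema.
jet:
	go run ./internal/tools/jetbootstrap  # exports JET_DSN after migrating
	jet -source=postgres -dsn="$(JET_DSN)" -path=internal/adapters/postgres/jet
```
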
- **Schema-per-service vs single-DB cross-service joins**: there are no
  cross-schema joins in this plan. Each service reads only its own schema;
  cross-service data flows go via Redis Streams (event bus) or HTTP
  contracts (User Service is queried by Lobby for eligibility) — same as
  today. The DB-level role grants enforce this.
- **Pending registration expiration worker**: under Redis it scanned a
  global ZSET; under PG it does an indexed scan. The partial index on
  `eligible_until_ms WHERE binding_kind='pending_registration'` keeps the
  scan cheap.
- **Idempotency under crash**: with idempotency expressed as a UNIQUE
  constraint on the durable record, recovery is "the row either exists or
  it doesn't" — no Redis-loss window where duplicates can sneak through.
- **lib/pq vs pgx (revisit)**: confirmed pgx/v5 + jet via the stdlib
  adapter. The `make jet` target will pass `-source=postgres` to jet (the
  dialect is independent of which Go driver runs the queries at runtime).
- **No backward-compat shim for env vars**: `*_REDIS_TLS_ENABLED` and
  `*_REDIS_USERNAME` are retired in one cut. Any external dev environment
  that sets these will start failing fast at startup with a clear error
  emitted by `pkg/redisconn.LoadFromEnv`.
+1
-1
@@ -58,7 +58,7 @@ require (

github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect
github.com/oasdiff/yaml v0.0.9 // indirect
github.com/oasdiff/yaml3 v0.0.9 // indirect
github.com/oasdiff/yaml3 v0.0.12 // indirect
github.com/pelletier/go-toml/v2 v2.3.0 // indirect
github.com/perimeterx/marshmallow v1.1.5 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect

+1
-2
@@ -87,8 +87,7 @@ github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9

github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/oasdiff/yaml v0.0.9 h1:zQOvd2UKoozsSsAknnWoDJlSK4lC0mpmjfDsfqNwX48=
github.com/oasdiff/yaml v0.0.9/go.mod h1:8lvhgJG4xiKPj3HN5lDow4jZHPlx1i7dIwzkdAo6oAM=
github.com/oasdiff/yaml3 v0.0.9 h1:rWPrKccrdUm8J0F3sGuU+fuh9+1K/RdJlWF7O/9yw2g=
github.com/oasdiff/yaml3 v0.0.9/go.mod h1:y5+oSEHCPT/DGrS++Wc/479ERge0zTFxaF8PbGKcg2o=
github.com/oasdiff/yaml3 v0.0.12 h1:75urAtPeDg2/iDEWwzNrLOWxI9N/dCh81nTTJtokt2M=
github.com/pelletier/go-toml/v2 v2.3.0 h1:k59bC/lIZREW0/iVaQR8nDHxVq8OVlIzYCOJf421CaM=
github.com/pelletier/go-toml/v2 v2.3.0/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s=
+47
-6
@@ -39,13 +39,54 @@ do not pass per-game limits.
## Endpoints

The contract is the union of `openapi.yaml` and the technical liveness probe
described below.
described below. Endpoints split into two route classes:

| Class | Path | Caller | Purpose |
| --- | --- | --- | --- |
| Admin (GM-only) | `POST /api/v1/admin/init` | `Game Master` | Initialise the engine with the race roster. |
| Admin (GM-only) | `GET /api/v1/admin/status` | `Game Master` | Read the full game state. |
| Admin (GM-only) | `PUT /api/v1/admin/turn` | `Game Master` | Generate the next turn. |
| Admin (GM-only) | `POST /api/v1/admin/race/banish` | `Game Master` | Deactivate a race after a permanent platform removal. |
| Player | `PUT /api/v1/command` | `Game Master` (forwarded from `Edge Gateway`) | Execute a batch of player commands. |
| Player | `PUT /api/v1/order` | `Game Master` | Validate and store a batch of player orders. |
| Player | `GET /api/v1/report` | `Game Master` | Fetch the per-player turn report. |
| Probe | `GET /healthz` | `Runtime Manager` | Technical liveness probe. |

Admin paths are unauthenticated but are routed only from inside the
trusted network segment that connects `Game Master` to the engine
container. The engine does not enforce caller identity — network-level
segmentation is the boundary. Player paths apply the same rule and rely
on `Game Master` to forward only verified player payloads.

### Game endpoints

Documented in [`openapi.yaml`](openapi.yaml). When the engine has not been
initialised through `POST /api/v1/init`, game endpoints respond `501 Not
Implemented` to make the uninitialised state unambiguous.
initialised through `POST /api/v1/admin/init`, game endpoints respond
`501 Not Implemented` to make the uninitialised state unambiguous.

### `StateResponse.finished`

`StateResponse` (returned by `GET /api/v1/admin/status` and
`PUT /api/v1/admin/turn`) carries a required boolean `finished` field.
The engine sets it to `true` exactly once on the turn-generation response
that ends the game; otherwise it stays `false`. `Game Master` uses this
field as the sole signal to run the platform finish flow. The conditional
logic that flips `finished` to `true` lives in the engine's domain code
and is owned by the engine maintainers.

### `POST /api/v1/admin/race/banish`

Deactivates a race after a permanent platform-level membership removal.
`Game Master` calls this endpoint synchronously after a Lobby-driven
remove-and-banish flow.

- Request body: `{ "race_name": "<name>" }`. `race_name` must be
  non-empty and must match an existing race in the engine's roster.
- Successful response: `204 No Content` with an empty body.
- Error responses follow the same `400` / `500` envelope shape as the
  other admin endpoints. The engine-side mechanics of `banish` (what
  exactly happens to the race's planets, fleets, and pending orders) are
  owned by the engine maintainers.

### `GET /healthz`

@@ -53,9 +94,9 @@ Technical liveness probe used by Runtime Manager and operator tooling.

- Returns `{"status":"ok"}` with HTTP `200` whenever the HTTP server is
  serving requests, regardless of whether the engine has been initialised
  through `POST /api/v1/init`.
- Carries no game-state semantics. Use `GET /api/v1/status` for game-state
  inspection.
  through `POST /api/v1/admin/init`.
- Carries no game-state semantics. Use `GET /api/v1/admin/status` for
  game-state inspection.

This endpoint exists so that `Runtime Manager` can probe a freshly started
container before `init` runs.

+1
-1
@@ -34,7 +34,7 @@ require (
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect
github.com/oasdiff/yaml v0.0.9 // indirect
github.com/oasdiff/yaml3 v0.0.9 // indirect
github.com/oasdiff/yaml3 v0.0.12 // indirect
github.com/pelletier/go-toml/v2 v2.3.0 // indirect
github.com/perimeterx/marshmallow v1.1.5 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect

+1
-2
@@ -66,8 +66,7 @@ github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/oasdiff/yaml v0.0.9 h1:zQOvd2UKoozsSsAknnWoDJlSK4lC0mpmjfDsfqNwX48=
github.com/oasdiff/yaml v0.0.9/go.mod h1:8lvhgJG4xiKPj3HN5lDow4jZHPlx1i7dIwzkdAo6oAM=
github.com/oasdiff/yaml3 v0.0.9 h1:rWPrKccrdUm8J0F3sGuU+fuh9+1K/RdJlWF7O/9yw2g=
github.com/oasdiff/yaml3 v0.0.9/go.mod h1:y5+oSEHCPT/DGrS++Wc/479ERge0zTFxaF8PbGKcg2o=
github.com/oasdiff/yaml3 v0.0.12 h1:75urAtPeDg2/iDEWwzNrLOWxI9N/dCh81nTTJtokt2M=
github.com/pelletier/go-toml/v2 v2.3.0 h1:k59bC/lIZREW0/iVaQR8nDHxVq8OVlIzYCOJf421CaM=
github.com/pelletier/go-toml/v2 v2.3.0/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s=

@@ -19,6 +19,15 @@ func (c Controller) RaceID(actor string) (uuid.UUID, error) {
	return c.Cache.g.Race[ri].ID, nil
}

func (c Controller) RaceBanish(actor string) error {
	ri, err := c.Cache.validRace(actor)
	if err != nil {
		return err
	}
	c.Cache.g.Race[ri].Extinct = true
	return nil
}

func (c Controller) RaceQuit(actor string) error {
	ri, err := c.Cache.validRace(actor)
	if err != nil {

@@ -134,6 +134,14 @@ func ValidateOrder(configure func(*Param), actor string, cmd ...order.DecodableC
	return ec.validateOrder(actor, cmd...)
}

func BanishRace(configure func(*Param), actor string) error {
	ec, err := NewRepoController(configure)
	if err != nil {
		return err
	}
	return ec.banishRace(actor)
}

func GameState(configure func(*Param)) (s game.State, err error) {
	ec, err := NewRepoController(configure)
	if err != nil {
@@ -149,6 +157,7 @@ func GameState(configure func(*Param)) (s game.State, err error) {
		ID:       g.ID,
		Turn:     g.Turn,
		Stage:    g.Stage,
		Finished: g.Finished(),
		Players:  make([]game.PlayerState, len(g.Race)),
	}

@@ -243,6 +252,16 @@ func (ec *RepoController) executeCommand(consumer func(*Controller) error) (err
	})
}

func (ec *RepoController) banishRace(actor string) (err error) {
	return ec.executeLocked(func(c *Controller) error {
		err = c.RaceBanish(actor)
		if err != nil {
			return err
		}
		return c.saveState()
	})
}

func (ec *RepoController) executeSafe(consumer func(uint, *Controller) error) (err error) {
	g, err := ec.Repo.LoadStateSafe()
	if err != nil {

@@ -118,7 +118,7 @@ func (c *Cache) raceTechLevel(ri int, t game.Tech, v float64) {

func (c *Cache) TurnWipeExtinctRaces() {
	for i := range c.listRaceActingIdx() {
		if c.g.Race[i].TTL == 0 {
		if (c.g.Race[i].Extinct && c.g.Race[i].TTL > 0) || (!c.g.Race[i].Extinct && c.g.Race[i].TTL == 0) {
			c.wipeRace(i)
		}
	}

@@ -7,6 +7,7 @@ type State struct {
	Turn     uint
	Stage    uint
	Players  []PlayerState
	Finished bool
}

type PlayerState struct {

@@ -0,0 +1,45 @@
package router_test

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"galaxy/model/rest"

	"github.com/stretchr/testify/assert"
)

const apiBanishPath = "/api/v1/admin/race/banish"

func TestBanishHappyPath(t *testing.T) {
	r := setupRouter()

	w := httptest.NewRecorder()
	req, _ := http.NewRequest(http.MethodPost, apiBanishPath, asBody(rest.BanishRequest{RaceName: "Aelinari"}))
	r.ServeHTTP(w, req)

	assert.Equal(t, http.StatusNoContent, w.Code, w.Body)
	assert.Empty(t, w.Body.String())
}

func TestBanishValidation(t *testing.T) {
	r := setupRouter()

	for _, tc := range []struct {
		description string
		body        any
	}{
		{"missing race_name", struct{}{}},
		{"empty race_name", rest.BanishRequest{RaceName: ""}},
		{"blank race_name", rest.BanishRequest{RaceName: " "}},
	} {
		t.Run(tc.description, func(t *testing.T) {
			w := httptest.NewRecorder()
			req, _ := http.NewRequest(http.MethodPost, apiBanishPath, asBody(tc.body))
			r.ServeHTTP(w, req)

			assert.Equal(t, http.StatusBadRequest, w.Code, w.Body)
		})
	}
}
@@ -0,0 +1,22 @@
package handler

import (
	"net/http"

	"galaxy/model/rest"

	"github.com/gin-gonic/gin"
)

func BanishHandler(c *gin.Context, executor CommandExecutor) {
	var req rest.BanishRequest
	if errorResponse(c, c.ShouldBindJSON(&req)) {
		return
	}

	if errorResponse(c, executor.BanishRace(req.RaceName)) {
		return
	}

	c.Status(http.StatusNoContent)
}
@@ -23,6 +23,7 @@ type CommandExecutor interface {
	GenerateGame([]string) (rest.StateResponse, error)
	GenerateTurn() (rest.StateResponse, error)
	GameState() (rest.StateResponse, error)
	BanishRace(string) error
	LoadReport(actor string, turn uint) (*report.Report, error)
	Execute(cmd ...Command) error
	ValidateOrder(actor string, cmd ...order.DecodableCommand) error
@@ -103,6 +104,10 @@ func (e *executor) GameState() (rest.StateResponse, error) {
	return stateResponse(s), nil
}

func (e *executor) BanishRace(raceName string) error {
	return controller.BanishRace(e.cfg, raceName)
}

func (e *executor) LoadReport(actor string, turn uint) (*report.Report, error) {
	return controller.LoadReport(e.cfg, actor, turn)
}
@@ -112,6 +117,7 @@ func stateResponse(s game.State) rest.StateResponse {
		ID:       s.ID,
		Turn:     s.Turn,
		Stage:    s.Stage,
		Finished: s.Finished,
		Players:  make([]rest.PlayerState, len(s.Players)),
	}
	for i := range s.Players {

@@ -8,7 +8,7 @@ import (

// HealthzHandler is the technical liveness probe used by Runtime Manager
// and operator tooling. It returns 200 with {"status":"ok"} regardless
// of whether the engine has been initialised through POST /api/v1/init.
// of whether the engine has been initialised through POST /api/v1/admin/init.
func HealthzHandler(c *gin.Context) {
	c.JSON(http.StatusOK, gin.H{"status": "ok"})
}

@@ -27,7 +27,7 @@ func TestInit(t *testing.T) {
	payload := generateInitRequest(10)

	w := httptest.NewRecorder()
	req, _ := http.NewRequest("POST", "/api/v1/init", asBody(payload))
	req, _ := http.NewRequest("POST", "/api/v1/admin/init", asBody(payload))
	r.ServeHTTP(w, req)

	assert.Equal(t, http.StatusCreated, w.Code, w.Body)
@@ -42,7 +42,7 @@ func TestInitValidators(t *testing.T) {
	payload := generateInitRequest(9)

	w := httptest.NewRecorder()
	req, _ := http.NewRequest("POST", "/api/v1/init", asBody(payload))
	req, _ := http.NewRequest("POST", "/api/v1/admin/init", asBody(payload))
	r.ServeHTTP(w, req)

	assert.Equal(t, http.StatusBadRequest, w.Code, w.Body)

@@ -24,7 +24,7 @@ func TestGetReport(t *testing.T) {
	payload := generateInitRequest(10)

	w := httptest.NewRecorder()
	req, _ := http.NewRequest("POST", "/api/v1/init", asBody(payload))
	req, _ := http.NewRequest("POST", "/api/v1/admin/init", asBody(payload))
	r.ServeHTTP(w, req)

	assert.Equal(t, http.StatusCreated, w.Code, w.Body)

@@ -67,12 +67,15 @@ func setupRouter(executor handler.CommandExecutor) *gin.Engine {

	groupV1 := r.Group("/api/v1")

	groupV1.GET("/status", func(ctx *gin.Context) { handler.StatusHandler(ctx, executor) })
	groupV1.POST("/init", func(ctx *gin.Context) { handler.InitHandler(ctx, executor) })
	groupAdmin := groupV1.Group("/admin")
	groupAdmin.GET("/status", func(ctx *gin.Context) { handler.StatusHandler(ctx, executor) })
	groupAdmin.POST("/init", func(ctx *gin.Context) { handler.InitHandler(ctx, executor) })
	groupAdmin.PUT("/turn", func(ctx *gin.Context) { handler.TurnHandler(ctx, executor) })
	groupAdmin.POST("/race/banish", func(ctx *gin.Context) { handler.BanishHandler(ctx, executor) })

	groupV1.GET("/report", func(ctx *gin.Context) { handler.ReportHandler(ctx, executor) })
	groupV1.PUT("/command", LimitMiddleware(1), func(ctx *gin.Context) { handler.CommandHandler(ctx, executor) })
	groupV1.PUT("/order", func(ctx *gin.Context) { handler.OrderHandler(ctx, executor) })
	groupV1.PUT("/turn", func(ctx *gin.Context) { handler.TurnHandler(ctx, executor) })

	return r
}

@@ -52,6 +52,10 @@ func (e *dummyExecutor) GenerateTurn() (rest.StateResponse, error) {
	return rest.StateResponse{}, nil
}

func (e *dummyExecutor) BanishRace(raceName string) error {
	return nil
}

func (e *dummyExecutor) GameState() (rest.StateResponse, error) {
	return rest.StateResponse{}, nil
}

@@ -27,7 +27,7 @@ func TestGetStatus(t *testing.T) {
	payload := generateInitRequest(10)

	w := httptest.NewRecorder()
	req, _ := http.NewRequest("POST", "/api/v1/init", asBody(payload))
	req, _ := http.NewRequest("POST", "/api/v1/admin/init", asBody(payload))
	r.ServeHTTP(w, req)

	assert.Equal(t, http.StatusCreated, w.Code, w.Body)
@@ -37,7 +37,7 @@ func TestGetStatus(t *testing.T) {
	assert.NotEqual(t, uuid.Nil, uuid.MustParse(initResponse.ID.String()))

	w = httptest.NewRecorder()
	req, _ = http.NewRequest("GET", "/api/v1/status", nil)
	req, _ = http.NewRequest("GET", "/api/v1/admin/status", nil)
	r.ServeHTTP(w, req)

	assert.Equal(t, http.StatusOK, w.Code, w.Body)
@@ -47,6 +47,7 @@ func TestGetStatus(t *testing.T) {
	assert.Equal(t, initResponse.ID, stateResponse.ID)
	assert.Equal(t, uint(0), stateResponse.Turn)
	assert.Equal(t, uint(0), stateResponse.Stage)
	assert.False(t, stateResponse.Finished)
	assert.Len(t, stateResponse.Players, 10)
	for i := range stateResponse.Players {
		assert.NoError(t, uuid.Validate(stateResponse.Players[i].ID.String()))

@@ -29,7 +29,7 @@ func TestGetTurn(t *testing.T) {
	payload := generateInitRequest(10)

	w := httptest.NewRecorder()
	req, _ := http.NewRequest("POST", "/api/v1/init", asBody(payload))
	req, _ := http.NewRequest("POST", "/api/v1/admin/init", asBody(payload))
	r.ServeHTTP(w, req)

	assert.Equal(t, http.StatusCreated, w.Code, w.Body)
@@ -50,7 +50,7 @@ func TestGetTurn(t *testing.T) {
	// generate next turn

	w = httptest.NewRecorder()
	req, _ = http.NewRequest("PUT", "/api/v1/turn", nil)
	req, _ = http.NewRequest("PUT", "/api/v1/admin/turn", nil)
	r.ServeHTTP(w, req)

	assert.Equal(t, http.StatusOK, w.Code, w.Body)
@@ -72,7 +72,7 @@ func TestGetTurn(t *testing.T) {
	// validate status

	w = httptest.NewRecorder()
	req, _ = http.NewRequest("GET", "/api/v1/status", nil)
	req, _ = http.NewRequest("GET", "/api/v1/admin/status", nil)
	r.ServeHTTP(w, req)

	assert.Equal(t, http.StatusOK, w.Code, w.Body)

+63
-13
@@ -30,16 +30,17 @@ tags:
  - name: Health
    description: Technical liveness probes used by Runtime Manager and operator tooling.
paths:
  /api/v1/status:
  /api/v1/admin/status:
    get:
      tags:
        - GameLifecycle
      operationId: getGameStatus
      operationId: adminGetGameStatus
      summary: Get the current game state
      description: |
        Returns the current game state including turn number, stage, and a
        summary of all players. Returns `501` if the game has not yet been
        initialized.
        initialized. Routed only from the trusted network segment that
        connects `Game Master` to the engine container.
      responses:
        "200":
          description: Current game state.
@@ -51,15 +52,17 @@ paths:
          description: Game has not been initialized yet.
        "500":
          $ref: "#/components/responses/InternalError"
  /api/v1/init:
  /api/v1/admin/init:
    post:
      tags:
        - GameLifecycle
      operationId: initGame
      operationId: adminInitGame
      summary: Initialize a new game
      description: |
        Generates a new game instance with the supplied list of races.
        Requires at least 10 race entries.
        Requires at least 10 race entries. Routed only from the trusted
        network segment that connects `Game Master` to the engine
        container.
      requestBody:
        required: true
        content:
@@ -77,6 +80,30 @@ paths:
          $ref: "#/components/responses/ValidationError"
        "500":
          $ref: "#/components/responses/InternalError"
  /api/v1/admin/race/banish:
    post:
      tags:
        - GameLifecycle
      operationId: adminBanishRace
      summary: Deactivate a race after a permanent platform-level removal
      description: |
        Deactivates the named race in the running engine. Called by `Game
        Master` after a Lobby-driven permanent membership removal. Routed
        only from the trusted network segment that connects `Game Master`
        to the engine container.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/BanishRequest"
      responses:
        "204":
          description: Race deactivated; no response body.
        "400":
          $ref: "#/components/responses/ValidationError"
        "500":
          $ref: "#/components/responses/InternalError"
  /api/v1/report:
    get:
      tags:
@@ -148,15 +175,16 @@ paths:
          $ref: "#/components/responses/ValidationError"
        "500":
          $ref: "#/components/responses/InternalError"
  /api/v1/turn:
  /api/v1/admin/turn:
    put:
      tags:
        - GameLifecycle
      operationId: generateTurn
      operationId: adminGenerateTurn
      summary: Advance the game to the next turn
      description: |
        Processes the current turn and generates the next one. Returns the
        updated game state.
        updated game state. Routed only from the trusted network segment
        that connects `Game Master` to the engine container.
      responses:
        "200":
          description: Updated game state after turn generation.
@@ -175,10 +203,10 @@ paths:
      description: |
        Returns `{"status":"ok"}` with HTTP `200` whenever the HTTP server
        is serving requests, regardless of whether the engine has been
        initialised through `POST /api/v1/init`. Used by `Runtime Manager`
        to probe a freshly started container before `init` runs. Carries
        no game-state semantics; use `GET /api/v1/status` for game-state
        inspection.
        initialised through `POST /api/v1/admin/init`. Used by `Runtime
        Manager` to probe a freshly started container before `init` runs.
        Carries no game-state semantics; use `GET /api/v1/admin/status`
        for game-state inspection.
      responses:
        "200":
          description: Engine HTTP server is up.
@@ -225,6 +253,7 @@ components:
        - turn
        - stage
        - player
        - finished
      properties:
        id:
          type: string
@@ -243,6 +272,15 @@ components:
          description: Summary state for each player participating in the game.
          items:
            $ref: "#/components/schemas/PlayerState"
        finished:
          type: boolean
          description: |
            True exactly once on the turn-generation response that ends the
            game; otherwise false. Server default: false. `Game Master`
            uses this flag as the sole signal to run the platform finish
            flow. The conditional logic that flips it to true lives in
            the engine's domain code and is owned by the engine
            maintainers.
    PlayerState:
      type: object
      description: Brief player state returned as part of the game state response.
@@ -292,6 +330,18 @@ components:
          type: string
          description: Name of the race. Must be non-blank and satisfy the entity-name format.
          minLength: 1
    BanishRequest:
      type: object
      description: |
        Request body for the admin banish endpoint. `race_name` must
        identify an existing race in the engine roster.
      required:
        - race_name
      properties:
        race_name:
          type: string
          description: Name of the race to banish. Must be non-blank.
          minLength: 1
    CommandRequest:
      type: object
      description: |

@@ -31,15 +31,15 @@ func TestGameOpenAPISpecFreezesResponseSchemas(t *testing.T) {
		wantRef string
	}{
		{
			name:    "get game status",
			path:    "/api/v1/status",
			name:    "admin get game status",
			path:    "/api/v1/admin/status",
			method:  http.MethodGet,
			status:  http.StatusOK,
			wantRef: "#/components/schemas/StateResponse",
		},
		{
			name:    "init game",
			path:    "/api/v1/init",
			name:    "admin init game",
			path:    "/api/v1/admin/init",
			method:  http.MethodPost,
			status:  http.StatusCreated,
			wantRef: "#/components/schemas/StateResponse",
@@ -52,8 +52,8 @@ func TestGameOpenAPISpecFreezesResponseSchemas(t *testing.T) {
			wantRef: "#/components/schemas/Report",
		},
		{
			name:    "generate turn",
			path:    "/api/v1/turn",
			name:    "admin generate turn",
			path:    "/api/v1/admin/turn",
			method:  http.MethodPut,
			status:  http.StatusOK,
			wantRef: "#/components/schemas/StateResponse",
@@ -81,7 +81,7 @@ func TestGameOpenAPISpecFreezesInitRequest(t *testing.T) {
	t.Parallel()

	doc := loadOpenAPISpec(t)
	operation := getOpenAPIOperation(t, doc, "/api/v1/init", http.MethodPost)
	operation := getOpenAPIOperation(t, doc, "/api/v1/admin/init", http.MethodPost)

	assertSchemaRef(t, requestSchemaRef(t, operation), "#/components/schemas/InitRequest", "init request schema")

@@ -93,6 +93,68 @@ func TestGameOpenAPISpecFreezesInitRequest(t *testing.T) {
	require.Equal(t, uint64(10), racesSchema.Value.MinItems, "InitRequest.races minItems must be 10")
}

func TestGameOpenAPISpecFreezesAdminOperationIDs(t *testing.T) {
	t.Parallel()

	doc := loadOpenAPISpec(t)

	tests := []struct {
		path   string
		method string
		opID   string
	}{
		{"/api/v1/admin/init", http.MethodPost, "adminInitGame"},
		{"/api/v1/admin/status", http.MethodGet, "adminGetGameStatus"},
		{"/api/v1/admin/turn", http.MethodPut, "adminGenerateTurn"},
		{"/api/v1/admin/race/banish", http.MethodPost, "adminBanishRace"},
	}

	for _, tt := range tests {
		t.Run(tt.opID, func(t *testing.T) {
			t.Parallel()

			operation := getOpenAPIOperation(t, doc, tt.path, tt.method)
			require.Equal(t, tt.opID, operation.OperationID, "operation id for %s %s", tt.method, tt.path)
		})
	}
}

func TestGameOpenAPISpecFreezesBanishRequest(t *testing.T) {
	t.Parallel()

	doc := loadOpenAPISpec(t)
	operation := getOpenAPIOperation(t, doc, "/api/v1/admin/race/banish", http.MethodPost)

	assertSchemaRef(t, requestSchemaRef(t, operation), "#/components/schemas/BanishRequest", "banish request schema")

	if operation.Responses == nil {
		require.FailNow(t, "banish operation is missing responses")
	}
	noContent := operation.Responses.Status(http.StatusNoContent)
	require.NotNil(t, noContent, "banish operation must declare 204 response")
	require.NotNil(t, noContent.Value, "banish 204 response must have a value")

	schema := componentSchemaRef(t, doc, "BanishRequest")
	assertRequiredFields(t, schema, "race_name")

	raceNameSchema := schema.Value.Properties["race_name"]
	require.NotNil(t, raceNameSchema, "BanishRequest.race_name schema must exist")
	require.Equal(t, uint64(1), raceNameSchema.Value.MinLength, "BanishRequest.race_name minLength must be 1")
}

func TestGameOpenAPISpecFreezesStateResponseFinished(t *testing.T) {
	t.Parallel()

	doc := loadOpenAPISpec(t)
	schema := componentSchemaRef(t, doc, "StateResponse")

	assertRequiredFields(t, schema, "id", "turn", "stage", "player", "finished")

	finishedSchema := schema.Value.Properties["finished"]
	require.NotNil(t, finishedSchema, "StateResponse.finished schema must exist")
	require.True(t, finishedSchema.Value.Type.Is("boolean"), "StateResponse.finished must be boolean")
}

func TestGameOpenAPISpecFreezesCommandRequest(t *testing.T) {
	t.Parallel()

@@ -0,0 +1,32 @@
# Makefile for galaxy/gamemaster.
#
# The `jet` target regenerates the go-jet/v2 query-builder code under
# internal/adapters/postgres/jet/ against a transient PostgreSQL container
# brought up by cmd/jetgen. Generated code is committed; running this
# target requires a reachable Docker daemon (testcontainers spins up a
# postgres:16-alpine container).
#
# The `mocks` target regenerates the gomock-driven mocks via the
# //go:generate directives that live next to the interfaces they cover:
# - internal/ports/ — port interfaces (PLAN stage 10)
# - internal/api/internalhttp/handlers/ — REST handler service ports (PLAN stage 19)
# Generated code is committed.
#
# The `integration` target runs the service-local end-to-end suite under
# integration/ (PLAN stage 21). It requires a reachable Docker daemon
# (`/var/run/docker.sock` or `DOCKER_HOST`); without one the helpers in
# integration/harness call t.Skip and the tests are no-ops.

.PHONY: jet mocks integration

jet:
	go run ./cmd/jetgen

mocks:
	go generate ./internal/ports/...
	@if [ -d ./internal/api/internalhttp/handlers ]; then \
		go generate ./internal/api/internalhttp/handlers/...; \
	fi

integration:
	go test -tags=integration -count=1 ./integration/...
+1276
File diff suppressed because it is too large
@@ -0,0 +1,975 @@
# Game Master

`Game Master` (GM) is the only Galaxy platform service permitted to talk to
running game engine containers. It owns runtime and operational state of
already-running games, the engine version registry, the platform mapping of
`(user_id ↔ race_name ↔ engine_player_uuid)`, the per-game turn scheduler,
and the synchronous and asynchronous boundaries that other services use to
interact with running games.

## References

- [`../ARCHITECTURE.md`](../ARCHITECTURE.md) — system architecture, §8 Game
  Master.
- [`../TESTING.md`](../TESTING.md) §8 — testing matrix for GM.
- [`./PLAN.md`](./PLAN.md) — staged implementation plan.
- [`./docs/README.md`](./docs/README.md) — service-local documentation entry
  point (created at PLAN stage 24).
- [`./docs/stage06-contract-files.md`](./docs/stage06-contract-files.md) —
  decisions behind the OpenAPI and AsyncAPI specs frozen at PLAN stage 06.
- [`./docs/stage07-notification-catalog-audit.md`](./docs/stage07-notification-catalog-audit.md) —
  notification catalog audit and producer-side freeze test added at PLAN stage 07.
- [`./docs/stage08-module-skeleton.md`](./docs/stage08-module-skeleton.md) —
  module skeleton wiring decisions (config groups, telemetry instruments,
  Makefile targets, deferred dependencies) recorded at PLAN stage 08.
- [`./docs/stage09-postgres-migration.md`](./docs/stage09-postgres-migration.md) —
  PostgreSQL schema, embedded migration, jet generation pipeline, and
  runtime wiring landed at PLAN stage 09.
- [`./docs/stage10-domain-and-ports.md`](./docs/stage10-domain-and-ports.md) —
  domain types, port interfaces, and the six stage-10 decisions
  (operation domain package, membership DTO placement, engine-version
  options shape, schedule wrapper signature, recovery transition,
  deferred mock destination) landed at PLAN stage 10.
- [`./docs/stage11-persistence-adapters.md`](./docs/stage11-persistence-adapters.md) —
  PostgreSQL stores (`runtimerecordstore`, `engineversionstore`,
  `playermappingstore`, `operationlog`), the Redis offset store, and
  the eight stage-11 decisions (sqlx/pgtest local clones, CAS
  pattern, port-level Now extension, domain conflict sentinels, jsonb
  cast, idempotent Deprecate, multi-row BulkInsert, miniredis
  dependency) landed at PLAN stage 11.
- [`./docs/stage12-external-clients.md`](./docs/stage12-external-clients.md) —
  outbound adapters (engine, Lobby, Runtime Manager, notification
  intent publisher, lobby-events publisher) and the seven stage-12
  decisions (per-call engine base URL, dual engine timeout dispatch,
  engine population rounding, Lobby pagination cap, no extra RTM
  sentinels, AsyncAPI-aligned XADD encoding for `gm:lobby_events`,
  Makefile mocks-target guard) landed at PLAN stage 12.
- [`./docs/stage13-register-runtime.md`](./docs/stage13-register-runtime.md) —
  register-runtime service-layer orchestrator and the five
  stage-13 decisions (`RuntimeRecordStore.Delete` extension, engine
  4xx/5xx classification split, engine response validated as
  `engine_protocol_violation`, initial snapshot carries `player_turn_stats`
  from `/admin/init`, two-flag rollback gating) landed at PLAN
  stage 13.
- [`./docs/stage14-engine-version-registry.md`](./docs/stage14-engine-version-registry.md) —
  engine version registry service-layer orchestrator (List, Get,
  Create, Update, Deprecate, Delete, ResolveImageRef) and the five
  stage-14 decisions (`EngineVersionStore.Delete` port extension,
  reference probe before hard delete, new `engine_version_delete`
  op_kind in schema and domain, `operation_log.game_id` overloaded
  as audit subject for registry entries, JSON-object validation for
  `options`) landed at PLAN stage 14.
- [`./docs/stage15-scheduler-and-turn-generation.md`](./docs/stage15-scheduler-and-turn-generation.md) —
  scheduler ticker, turn-generation orchestrator, and snapshot
  publisher and the seven stage-15 decisions
  (`LobbyClient.GetGameSummary` extension with fail-soft `game_name`
  fallback, telemetry-only `Trigger` parameter, two-CAS pattern with
  external-mutation conflict, single-snapshot-per-outcome cadence,
  player_mappings as recipient source, stateless scheduler utility,
  in-flight set on the ticker) landed at PLAN stage 15.
- [`./docs/stage16-membership-cache-and-invalidation.md`](./docs/stage16-membership-cache-and-invalidation.md) —
  hot-path services (`commandexecute`, `orderput`, `reportget`),
  membership cache, and the six stage-16 decisions (no
  `runtime_not_running` for reports, GM-side envelope rewrite
  `commands`→`cmd` with injected `actor`, hot-path skips
  `operation_log`, hand-rolled per-game inflight tracker, raw status
  string return, missing-mapping surfaces as `forbidden`) landed at
  PLAN stage 16.
- [`./docs/stage17-admin-operations.md`](./docs/stage17-admin-operations.md) —
  admin service-layer operations (`adminstop`, `adminforce`,
  `adminpatch`, `adminbanish`, `livenessreply`) and the six
  stage-17 decisions (`RuntimeRecordStore.UpdateImage` extension,
  `adminstop` idempotent on terminal statuses and `conflict` on
  `starting`, `adminforce` always sets `skip_next_tick`,
  `adminbanish` without status check and missing race surfaces as
  `forbidden`, `livenessreply` 200 + empty status on
  `runtime_not_found`, RTM failures map to `service_unavailable`)
  landed at PLAN stage 17.
- [`./docs/stage18-health-events-consumer.md`](./docs/stage18-health-events-consumer.md) —
  `runtime:health_events` consumer worker and the seven stage-18
  decisions (event-type taxonomy expanded to seven values with
  `container_started` and `probe_recovered`, CAS-conflict fallback to
|
||||
health-only update, new `RuntimeRecordStore.UpdateEngineHealth`
|
||||
port method, in-memory dedupe of last-emitted summaries,
|
||||
read-after-write snapshot construction, `health_events` stream
|
||||
offset label, worker wiring deferred to Stage 19) landed at PLAN
|
||||
stage 18.
|
||||
- [`./api/internal-openapi.yaml`](./api/internal-openapi.yaml) — internal
|
||||
trusted REST contract.
|
||||
- [`./api/runtime-events-asyncapi.yaml`](./api/runtime-events-asyncapi.yaml) —
|
||||
`gm:lobby_events` Redis Stream contract.
|
||||
- [`../game/README.md`](../game/README.md) — game engine container contract
|
||||
(env, ports, admin and player REST surfaces, `/healthz`).
|
||||
- [`../lobby/README.md`](../lobby/README.md) — Game Lobby integration with GM.
|
||||
- [`../rtmanager/README.md`](../rtmanager/README.md) — Runtime Manager
|
||||
contract used synchronously by GM admin operations.
|
||||
|
||||
## Purpose

A running Galaxy game lives in exactly one Docker container managed by
`Runtime Manager`. The platform must:

- register a freshly started container with platform-level membership;
- initialise the engine with the agreed race roster;
- accept and forward player commands and orders to the engine;
- route per-player report reads;
- generate turns according to a schedule;
- detect game finish and propagate it back to platform-level state;
- expose runtime/operational controls (force-next-turn, stop, patch, banish);
- own the catalogue of supported engine versions and resolve `image_ref`
  values for `Game Lobby`.

`Game Master` is the single component that performs these actions. It does
**not** own platform metadata of games (that is `Game Lobby`), Docker control
(that is `Runtime Manager`), or the full game state (that is the engine
container). Engine state on disk is the engine's domain; GM never reads or
writes the bind-mounted state directory.

## Scope

`Game Master` is the source of truth for:

- the runtime mapping `game_id → engine_endpoint` for every running game;
- the runtime status (`starting | running | generation_in_progress |
  generation_failed | stopped | engine_unreachable | finished`);
- the current turn number and the next-tick timestamp;
- the per-game `(user_id, race_name, engine_player_uuid)` triple;
- the engine version registry: `(version, image_ref, options, status)`;
- the durable history of every operation GM performed (`operation_log`);
- the latest engine health summary per game.

`Game Master` is **not** the source of truth for:

- platform game records (created, draft, enrollment, finished metadata) —
  owned by `Game Lobby`;
- container lifecycle and Docker reality — owned by `Runtime Manager`;
- in-game world state (planets, ships, science, reports) — owned by the
  engine container;
- platform user identity and entitlements — owned by `User Service`;
- in-game `race_name` reservations and the Race Name Directory — owned by
  `Game Lobby`.

## Non-Goals

- Multi-instance operation in v1. GM runs as a single process; the in-process
  scheduler is authoritative. Multi-instance with leader election is an
  explicit future iteration.
- Direct Docker access. GM never imports the Docker SDK; every container
  operation goes through `Runtime Manager` over trusted internal REST.
- Player removal/block at platform level. `Game Lobby` owns that decision;
  GM only performs the engine-side `banish` call when explicitly invoked.
- Pause/resume of a running game on the platform side. `Game Lobby.paused`
  is a platform-only state; GM only answers a liveness probe used by
  Lobby's resume flow.
- Automatic semver-patch upgrades. Patch is always an explicit admin
  operation against a target engine version present in the registry.
- TLS or mTLS on the internal listener. GM trusts its network segment.
- Direct delivery of player-visible push events. `Notification Service`
  owns user-targeted push delivery; GM publishes notification intents only.
- A separate Admin Service. GM exposes its trusted internal REST surface;
  Admin Service will adopt it in a later iteration.
- Engine state file management. Backup, archival, and cleanup of the
  bind-mounted state directories are operator concerns.

## Position in the System

```mermaid
flowchart LR
    Gateway["Edge Gateway"]
    Lobby["Game Lobby"]
    Admin["Admin Service\n(future)"]
    GM["Game Master"]
    RTM["Runtime Manager"]
    Notify["Notification Service"]
    Engine["Game Engine container\n(galaxy/game)"]
    Postgres["PostgreSQL\nschema gamemaster"]
    Redis["Redis\nstreams + caches"]

    Gateway -- "verified player commands\n(REST/JSON)" --> GM
    Lobby -- "register-runtime,\nimage-ref resolve,\nmemberships invalidate" --> GM
    Admin -- "internal REST" --> GM
    GM -- "engine HTTP API" --> Engine
    GM -- "stop / restart / patch" --> RTM
    GM -- "notification:intents" --> Notify
    GM -- "gm:lobby_events" --> Redis
    Redis -- "runtime:health_events" --> GM
    GM --> Postgres
```

`Edge Gateway` routes verified player message types (`game.command.execute`,
`game.order.put`, `game.report.get`) to GM as trusted REST/JSON after
transcoding from FlatBuffers. `Game Lobby` calls GM synchronously to
register runtimes after a successful container start, to resolve `image_ref`
from the engine version registry, to invalidate membership cache on roster
changes, and to verify GM liveness during platform resume. `Game Master`
calls `Runtime Manager` synchronously over REST for stop, restart, and
patch. `Runtime Manager` publishes `runtime:health_events`, which GM
consumes asynchronously. GM publishes `gm:lobby_events` consumed by
`Game Lobby`, and `notification:intents` consumed by `Notification Service`.

## Responsibility Boundaries

`Game Master` is responsible for:

- registering a freshly started container into platform-level runtime state;
- initialising the engine with the race roster received from Lobby;
- maintaining the platform mapping of `user_id`, `race_name`, and
  `engine_player_uuid`;
- forwarding player commands, orders, and report reads to the engine after
  authorising the actor;
- generating turns on schedule, including the force-next-turn skip rule;
- evaluating engine finish on every turn boundary;
- publishing runtime snapshot updates and the final game-finish event;
- consuming runtime health events from `Runtime Manager` and updating its
  per-game health summary;
- exposing the engine version registry CRUD;
- driving admin-level runtime operations (stop, force-next-turn, patch,
  banish) by calling `Runtime Manager` and the engine on demand.

`Game Master` is not responsible for:

- creating or stopping containers on Docker (that is `Runtime Manager`);
- evaluating whether a game is allowed to start (that is `Game Lobby`);
- deriving recipient user lists for non-game notifications (that is
  `Notification Service`);
- verifying authenticated transport, signatures, freshness, and replay
  (that is `Edge Gateway`);
- mapping `user_id` to platform-level membership (that is `Game Lobby`).

## Engine Container Contract

The engine container is `galaxy/game`. GM itself calls two route classes,
admin and player; the probe route is served for `Runtime Manager` and
operator tooling:

| Class | Path | Purpose |
| --- | --- | --- |
| Admin (GM-only) | `POST /api/v1/admin/init` | Initialise the engine with a race roster. |
| Admin (GM-only) | `GET /api/v1/admin/status` | Read the full game state. |
| Admin (GM-only) | `PUT /api/v1/admin/turn` | Generate the next turn. |
| Admin (GM-only) | `POST /api/v1/admin/race/banish` | Deactivate a race after permanent platform removal. Body `{race_name}`. |
| Player | `PUT /api/v1/command` | Execute a batch of player commands. |
| Player | `PUT /api/v1/order` | Validate and store a batch of player orders. |
| Player | `GET /api/v1/report` | Fetch per-player turn report. |
| Probe | `GET /healthz` | Liveness probe used by `Runtime Manager` and operator tooling. |

Admin paths are unauthenticated but routed only from inside the trusted
network segment that connects GM to the engine container. The engine does
not enforce caller identity — network-level segmentation is the boundary.

`StateResponse` carries an extra boolean `finished` field. When `true` on a
turn-generation response, GM treats the game as finished and runs the
finish flow described below. The conditional logic that flips `finished`
to `true` lives in the engine's domain code and is not GM's concern.

The engine endpoint URL is the `engine_endpoint` value handed to GM by
`Game Lobby` during `register-runtime`: `http://galaxy-game-{game_id}:8080`.
The DNS name is stable across restart and patch.

## Runtime Surface

### Listeners

| Listener | Default address | Purpose |
| --- | --- | --- |
| Internal HTTP | `:8097` (`GAMEMASTER_INTERNAL_HTTP_ADDR`) | Probes (`/healthz`, `/readyz`) and the trusted REST surface for `Edge Gateway`, `Game Lobby`, and `Admin Service`. |

There is no public listener. The internal listener is unauthenticated and
assumes a trusted network segment. Authentication of player commands has
already happened at `Edge Gateway`; GM enforces authorisation only.

### Background workers

| Worker | Driver | Description |
| --- | --- | --- |
| Scheduler ticker | 1 s loop | Scans `runtime_records` for due `next_generation_at`, runs the turn-generation service for each, recomputes `next_generation_at` from `turn_schedule` (skipping one tick when `skip_next_tick=true` is set). |
| `runtime:health_events` consumer | Redis Stream | XREADs from `runtime:health_events` (produced by RTM), updates `runtime_records.engine_health` summary, debounces `runtime_snapshot_update` publication. |

### Startup dependencies

In start order:

1. PostgreSQL primary (`GAMEMASTER_POSTGRES_PRIMARY_DSN`). Embedded goose
   migrations apply synchronously before any listener opens.
2. Redis master (`GAMEMASTER_REDIS_MASTER_ADDR`).
3. Telemetry exporter (OTLP grpc/http or stdout).
4. Internal HTTP listener.
5. Health-events consumer worker.
6. Scheduler ticker worker.

A failure in any step exits the process non-zero.

### Probes

`/healthz` reports liveness — it returns `200` whenever the process's HTTP
server is up and serving.

`/readyz` reports readiness — `200` only when the PostgreSQL pool can ping
the primary and the Redis master client can ping. No deeper dependency is
checked synchronously; the engine is reached only on demand.

Both probes are documented in
[`./api/internal-openapi.yaml`](./api/internal-openapi.yaml).

## Lifecycles

### Register-runtime

**Triggered by:** `Game Lobby` after a successful container start, calling
`POST /api/v1/internal/games/{game_id}/register-runtime` with body
`{engine_endpoint, members:[{user_id, race_name}], target_engine_version,
turn_schedule}`.

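The request body above can be modelled with Go types like the following. The struct and function names are illustrative, not GM's actual code; the field names mirror the JSON contract, and `validate` mirrors the request-shape check that rejects incomplete bodies with `invalid_request`.

```go
package main

import "fmt"

// Member mirrors one entry of the members array.
type Member struct {
	UserID   string `json:"user_id"`
	RaceName string `json:"race_name"`
}

// RegisterRuntimeRequest mirrors the register-runtime JSON body.
type RegisterRuntimeRequest struct {
	EngineEndpoint      string   `json:"engine_endpoint"`
	Members             []Member `json:"members"`
	TargetEngineVersion string   `json:"target_engine_version"`
	TurnSchedule        string   `json:"turn_schedule"`
}

// validate rejects the request when any required field is missing.
func validate(r RegisterRuntimeRequest) error {
	switch {
	case r.EngineEndpoint == "":
		return fmt.Errorf("invalid_request: engine_endpoint missing")
	case len(r.Members) == 0:
		return fmt.Errorf("invalid_request: members missing")
	case r.TargetEngineVersion == "":
		return fmt.Errorf("invalid_request: target_engine_version missing")
	case r.TurnSchedule == "":
		return fmt.Errorf("invalid_request: turn_schedule missing")
	}
	return nil
}

func main() {
	req := RegisterRuntimeRequest{
		EngineEndpoint:      "http://galaxy-game-42:8080",
		Members:             []Member{{UserID: "u1", RaceName: "zorg"}},
		TargetEngineVersion: "1.4.2",
		TurnSchedule:        "0 */6 * * *",
	}
	fmt.Println(validate(req)) // a complete request passes validation
}
```
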
**Flow on success:**

1. Validate request shape; reject with `invalid_request` if any required
   field is missing.
2. Reject with `conflict` if `runtime_records.{game_id}` already exists.
3. Resolve `image_ref` for `target_engine_version` from `engine_versions`;
   reject with `engine_version_not_found` when missing.
4. Persist `runtime_records` with `status=starting`, `engine_endpoint`,
   `current_image_ref`, `current_engine_version`, `turn_schedule`, and
   `created_at`.
5. Call engine `POST /api/v1/admin/init` with the race-name list derived
   from `members`.
6. Read `StateResponse` and persist one `player_mappings` row per player:
   `(game_id, user_id, race_name, engine_player_uuid)`.
7. CAS `runtime_records.status: starting → running`. Persist
   `current_turn=0` and `next_generation_at` computed from `turn_schedule`.
8. Append `operation_log` entry (`op_kind=register_runtime`,
   `outcome=success`).
9. Publish `runtime_snapshot_update` to `gm:lobby_events`.
10. Return `200` with the persisted `runtime_records` row.

**Failure paths:**

| Failure | Side effect | Outcome to caller |
| --- | --- | --- |
| Invalid envelope | None | `400 invalid_request` |
| `runtime_records` already exists | None | `409 conflict` |
| Engine `/admin/init` returns 4xx | Roll back `runtime_records`; append failure to `operation_log` | `502 engine_validation_error` |
| Engine `/admin/init` returns 5xx or fails at the transport layer | Roll back; append failure | `502 engine_unreachable` |
| Engine response missing players or contains races not in roster | Roll back; append failure | `502 engine_protocol_violation` |
| PostgreSQL transaction failure | Roll back; append failure if possible | `503 service_unavailable` |

A failed `register-runtime` leaves no `runtime_records` row and no
`player_mappings` rows. `Game Lobby` then transitions the platform game
record to `paused` (per the architecture's flow §4 forced-pause path).

### Turn generation

**Triggered by:** the scheduler ticker when `now >= next_generation_at`
for a game in `status=running`, or by an admin invocation of
`force-next-turn`.

**Flow on success:**

1. CAS `runtime_records.status: running → generation_in_progress`. If the
   CAS fails (status changed concurrently), the tick is skipped silently.
2. Call engine `PUT /api/v1/admin/turn`. Engine returns `StateResponse`
   with the new `turn` and the updated `player[]` array.
3. Persist `runtime_records.current_turn` and refresh
   `runtime_records.engine_health` summary.
4. If `StateResponse.finished == true`:
   - CAS `runtime_records.status: generation_in_progress → finished`;
   - publish `game_finished` to `gm:lobby_events` with
     `{game_id, final_turn_number, finished_at_ms, player_turn_stats[]}`;
   - publish `game.finished` notification intent to all `active` members.
5. If `StateResponse.finished == false`:
   - CAS `runtime_records.status: generation_in_progress → running`;
   - recompute `next_generation_at` from `turn_schedule`. If
     `skip_next_tick=true`, advance by one extra cron step and clear the
     flag;
   - publish `runtime_snapshot_update` to `gm:lobby_events` with
     `{game_id, current_turn, runtime_status, engine_health_summary,
     player_turn_stats[]}`;
   - publish `game.turn.ready` notification intent to all `active`
     members.
6. Append `operation_log` entry (`op_kind=turn_generation`,
   `outcome=success`).

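The two-CAS pattern that brackets the engine call can be illustrated with an in-memory model. In GM this is a conditional `UPDATE ... WHERE status = $from` whose affected-row count decides success; `casStatus` and the map are a hypothetical stand-in for that, showing how an external mutation between the two CAS steps surfaces as a conflict and causes the tick outcome to be discarded.

```go
package main

import "fmt"

// casStatus flips a game's status from `from` to `to` only if the
// current value still equals `from`. A false return means the status
// was mutated externally (e.g. an admin stop) and the caller skips.
func casStatus(statuses map[string]string, gameID, from, to string) bool {
	if statuses[gameID] != from {
		return false
	}
	statuses[gameID] = to
	return true
}

func main() {
	statuses := map[string]string{"g1": "running"}

	// First CAS: running → generation_in_progress.
	fmt.Println(casStatus(statuses, "g1", "running", "generation_in_progress"))

	// ... engine PUT /api/v1/admin/turn happens here ...

	// Second CAS: generation_in_progress → running (finished=false case).
	fmt.Println(casStatus(statuses, "g1", "generation_in_progress", "running"))

	// A concurrent admin stop makes a later second CAS fail:
	statuses["g1"] = "stopped"
	fmt.Println(casStatus(statuses, "g1", "generation_in_progress", "running")) // false
}
```
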
**Failure paths:**

| Failure | Side effect | Outcome |
| --- | --- | --- |
| Engine timeout / 5xx | CAS `status: generation_in_progress → generation_failed`; publish `runtime_snapshot_update`; publish `game.generation_failed` admin notification | Logged; ticker leaves the game in `generation_failed` until manual recovery (admin issues `force-next-turn` or `stop`). |
| Persistence failure after engine success | Append failure to `operation_log`; status stays `generation_in_progress` | Health-summary update on next probe will resync. |

`player_turn_stats[]` is built from `StateResponse.player[]` by mapping
`raceName → user_id` through `player_mappings` and projecting
`{user_id, planets, population}`. `ships_built` is intentionally absent
(see [`./docs/stage01-architecture-sync.md`](./docs/stage01-architecture-sync.md)).

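That projection can be sketched as follows; `EnginePlayer` and `buildPlayerTurnStats` are illustrative names, and dropping unmapped races is an assumption of this sketch rather than a documented rule.

```go
package main

import "fmt"

// EnginePlayer is a minimal stand-in for one StateResponse.player[] entry.
type EnginePlayer struct {
	RaceName   string
	Planets    int
	Population int
}

// PlayerTurnStat mirrors one player_turn_stats[] element.
type PlayerTurnStat struct {
	UserID     string `json:"user_id"`
	Planets    int    `json:"planets"`
	Population int    `json:"population"`
}

// buildPlayerTurnStats maps raceName → user_id through the player
// mappings and projects {user_id, planets, population}. Races without
// a mapping are skipped; ships_built is deliberately not projected.
func buildPlayerTurnStats(players []EnginePlayer, raceToUser map[string]string) []PlayerTurnStat {
	var stats []PlayerTurnStat
	for _, p := range players {
		userID, ok := raceToUser[p.RaceName]
		if !ok {
			continue
		}
		stats = append(stats, PlayerTurnStat{UserID: userID, Planets: p.Planets, Population: p.Population})
	}
	return stats
}

func main() {
	players := []EnginePlayer{{RaceName: "zorg", Planets: 3, Population: 120}}
	fmt.Println(buildPlayerTurnStats(players, map[string]string{"zorg": "u1"}))
}
```
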
### Force-next-turn

**Triggered by:** `Admin Service` or system-admin via
`POST /api/v1/internal/runtimes/{game_id}/force-next-turn`.

**Pre-conditions:** runtime exists, `status=running`.

**Flow:**

1. Run the turn-generation flow synchronously (the same code path the
   scheduler uses).
2. After success, set `runtime_records.skip_next_tick = true`. The next
   regular tick computed from `turn_schedule` is then advanced by one
   extra step before being persisted as `next_generation_at`.
3. Append `operation_log` entry (`op_kind=force_next_turn`).

The skip rule guarantees that the inter-turn spacing is never shorter than
one schedule interval, regardless of when the force is issued.

### Game finish

The finish flow is driven entirely by the engine signal `finished:bool`.
GM never decides finish independently. After `game_finished` is published,
`Game Lobby` transitions its platform record to `finished`, runs the
capability evaluation, and finalises Race Name Directory state. The GM
record stays in `status=finished` indefinitely; cleanup is operator-driven.

### Banish (engine-side player removal)

**Triggered by:** `Game Lobby` synchronously calling
`POST /api/v1/internal/games/{game_id}/race/{race_name}/banish` after a
permanent membership removal at platform level.

**Pre-conditions:** runtime exists; `race_name` resolves to an existing
`player_mappings` row.

**Flow:**

1. Call engine `POST /api/v1/admin/race/banish` with `{race_name}`.
2. On engine success, append `operation_log` entry (`op_kind=banish`,
   `outcome=success`).
3. Return `204` to Lobby.

**Failure path:** engine error returns `502 engine_unreachable`. Lobby
treats this as a degraded state and may retry; the platform-level
membership stays `removed` regardless.

### Stop

**Triggered by:** system-admin via
`POST /api/v1/internal/runtimes/{game_id}/stop` with body `{reason}`,
where `reason ∈ {admin_request, finished, timeout}`.

**Flow:**

1. Call `Runtime Manager` `POST /api/v1/internal/runtimes/{game_id}/stop`
   with the same `reason`.
2. CAS `runtime_records.status: * → stopped`.
3. Append `operation_log` entry.
4. Publish `runtime_snapshot_update` reflecting the stopped status.

### Patch

**Triggered by:** system-admin via
`POST /api/v1/internal/runtimes/{game_id}/patch` with body `{version}`.

**Pre-conditions:**

- `engine_versions.{version}` exists with `status=active`;
- the new version is a semver-patch of the current version (same major and
  minor); otherwise reject with `semver_patch_only`.

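The semver-patch precondition (identical major and minor) boils down to a comparison like the one below. The parsing is deliberately naive for the sketch; a real implementation would lean on a semver library, and `sameMajorMinor` is a hypothetical helper name.

```go
package main

import (
	"fmt"
	"strings"
)

// sameMajorMinor reports whether target is a semver patch of current:
// same major and minor components, only the patch component may differ.
func sameMajorMinor(current, target string) bool {
	c := strings.SplitN(current, ".", 3)
	t := strings.SplitN(target, ".", 3)
	if len(c) < 3 || len(t) < 3 {
		return false
	}
	return c[0] == t[0] && c[1] == t[1]
}

func main() {
	fmt.Println(sameMajorMinor("1.4.2", "1.4.3")) // patch upgrade: allowed
	fmt.Println(sameMajorMinor("1.4.2", "1.5.0")) // minor bump: reject with semver_patch_only
}
```
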
**Flow:**

1. Resolve `image_ref` from `engine_versions.{version}`.
2. Call `Runtime Manager`
   `POST /api/v1/internal/runtimes/{game_id}/patch` with `{image_ref}`.
3. On success, persist new `current_image_ref` and `current_engine_version`
   on `runtime_records`.
4. Append `operation_log` entry.

The engine container is recreated by RTM with the same DNS name; the
`engine_endpoint` is unchanged. GM does not call `/admin/init` again —
the bind-mounted state directory is preserved and the engine resumes from
the previous turn.

### Liveness reply (Lobby resume)

**Triggered by:** `Game Lobby` resuming a paused game, calling
`GET /api/v1/internal/games/{game_id}/liveness`.

**Flow:** if `runtime_records.{game_id}` exists and `status=running`,
return `200 {ready: true}`. Otherwise return `200 {ready: false, status:
"<observed status>"}`.

This endpoint never calls the engine; it reflects GM's own view only.

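The liveness decision is a pure function of GM's own record. A minimal sketch, with `livenessFor` as an illustrative name; the empty-status behaviour for a missing runtime follows the stage-17 decision (`livenessreply` 200 + empty status on `runtime_not_found`).

```go
package main

import "fmt"

// LivenessReply mirrors the liveness response body.
type LivenessReply struct {
	Ready  bool   `json:"ready"`
	Status string `json:"status,omitempty"`
}

// livenessFor reflects GM's own view only: ready iff a runtime record
// exists and is running; otherwise the observed status, left empty
// when no record exists.
func livenessFor(status string, exists bool) LivenessReply {
	if exists && status == "running" {
		return LivenessReply{Ready: true}
	}
	var observed string
	if exists {
		observed = status
	}
	return LivenessReply{Ready: false, Status: observed}
}

func main() {
	fmt.Println(livenessFor("running", true))
	fmt.Println(livenessFor("generation_failed", true))
	fmt.Println(livenessFor("", false)) // missing runtime: not ready, empty status
}
```
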
## Hot Path

### Player commands and orders

Both `game.command.execute` and `game.order.put` use the same FlatBuffers
schema (`pkg/schema/fbs/order.fbs` `Order{updated_at, commands:[…]}`). The
gateway transcodes the verified payload to JSON via
`pkg/transcoder/order.go` before calling GM.

**GM endpoints:**

- `POST /api/v1/internal/games/{game_id}/commands` — execute now; engine
  `PUT /api/v1/command`.
- `POST /api/v1/internal/games/{game_id}/orders` — validate-and-store;
  engine `PUT /api/v1/order`.

Both endpoints accept body `{commands:[{cmd_id, @type, …}, …]}` and the
`X-User-ID` header. The actor field on the engine call is **always** set
by GM from the authenticated user identity; GM never trusts a payload
field for actor identification.

**Pre-conditions:**

- `runtime_records.{game_id}` exists with `status=running`;
- the user is an `active` member of the game (cache lookup);
- `player_mappings.(game_id, user_id)` exists.

**Errors:**

- `runtime_not_found` — runtime missing.
- `runtime_not_running` — `runtime_status` is anything other than
  `running`.
- `forbidden` — caller is not an active member.
- `engine_unreachable` — engine returned 5xx.
- `engine_validation_error` — engine returned 4xx; the body carries the
  engine's per-command result (`cmd_applied`, `cmd_error_code`).

### Reports

**GM endpoint:** `GET /api/v1/internal/games/{game_id}/reports/{turn}`
with the `X-User-ID` header.

**Flow:**

1. Authorise: caller must be an active member of the game.
2. Resolve `race_name` from `player_mappings`.
3. Call engine `GET /api/v1/report?player={race_name}&turn={turn}`.
4. Return the engine response verbatim. Reports are full per-player
   payloads and are never cached at the platform layer; the engine remains
   the source of truth.

### Membership cache and invalidation

GM holds an in-process per-game TTL cache (default 30 s) of memberships
loaded from `Lobby /api/v1/internal/games/{id}/memberships`. The cache
shape is `map[user_id]MembershipStatus` plus a load timestamp. TTL is
the safety-net fallback.

The primary invalidation mechanism is an explicit hook from Lobby:

- Endpoint: `POST /api/v1/internal/games/{game_id}/memberships/invalidate`.
- Lobby invokes it post-commit on every operation that mutates roster:
  application approval, application rejection, invite redeem, member
  remove, member block, user-lifecycle cascade.
- Failed invalidation does not roll back Lobby state; the TTL safety net
  catches stale data within the next 30 s.

This is a deliberate tight coupling. The trade-off is recorded in
[`./PLAN.md` Stage 16](./PLAN.md).

## Engine Version Registry

The registry is the source of truth for which engine versions are
deployable. CRUD is exposed on the GM internal port; `Game Lobby`
consumes it synchronously to resolve `image_ref` for `target_engine_version`
just before publishing a `runtime:start_jobs` envelope.

| Method | Path | Purpose |
| --- | --- | --- |
| `GET` | `/api/v1/internal/engine-versions` | List versions; supports `status` filter. |
| `POST` | `/api/v1/internal/engine-versions` | Create a new version with `version`, `image_ref`, optional `options`. Validates semver shape and Docker reference. |
| `GET` | `/api/v1/internal/engine-versions/{version}` | Read one version. |
| `PATCH` | `/api/v1/internal/engine-versions/{version}` | Update `image_ref`, `options`, or `status`. |
| `DELETE` | `/api/v1/internal/engine-versions/{version}` | Soft-deprecate (`status=deprecated`). Hard delete is rejected if the version is referenced by any non-finished `runtime_records` row. |
| `GET` | `/api/v1/internal/engine-versions/{version}/image-ref` | Resolve `image_ref` only. Used by Lobby's start flow. |

`options` is a free-form `jsonb` document stored verbatim. v1 does not
enforce a schema; future engine-side options follow the engine's own
contract.

`status` values: `active` (deployable), `deprecated` (rejected on new
starts; existing runtimes unaffected). Hard removal of a deprecated
version requires that no runtime references it.

Lobby resolves `image_ref` synchronously per game start. If the resolve
call fails or the version is missing, Lobby fails the start with
`engine_version_not_found` and never publishes `runtime:start_jobs`.

## Trusted Surfaces

### Internal REST

The internal REST surface is consumed by:

- `Edge Gateway` — verified player commands and report reads;
- `Game Lobby` — register-runtime, image-ref resolve, membership invalidate,
  banish, liveness reply;
- `Admin Service` (future) — full administrative operations;
- platform probes — `/healthz`, `/readyz`.

The listener is unauthenticated; downstream services rely on network
segmentation. Caller identity for audit is recorded from the optional
`X-Galaxy-Caller` header (`gateway`, `lobby`, `admin`) and reflected as
`op_source` in `operation_log` (`gateway_player`, `lobby_internal`,
`admin_rest`); when missing or unrecognised, GM defaults to
`op_source=admin_rest`.

For player-command endpoints, the additional `X-User-ID` header is
required and authoritative for the acting user identity.

Request and response shapes are defined in
[`./api/internal-openapi.yaml`](./api/internal-openapi.yaml). Unknown JSON
fields are rejected with `invalid_request`.

## Async Stream Contracts

### `gm:lobby_events` (out)

Producer: `Game Master`. Consumer: `Game Lobby`.

Two message types share the stream, discriminated by `event_type`:

| `event_type` | Body |
| --- | --- |
| `runtime_snapshot_update` | `{game_id, current_turn, runtime_status, engine_health_summary, player_turn_stats:[{user_id, planets, population}], occurred_at_ms}` |
| `game_finished` | `{game_id, final_turn_number, runtime_status:"finished", player_turn_stats:[…], finished_at_ms}` |

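Building the field set for one such message can be sketched as follows. The exact wire encoding is owned by the AsyncAPI contract, so the flat-map shape and the JSON-encoded stats field here are assumptions of this sketch; `snapshotFields` is a hypothetical helper, and the real publisher hands the map to a Redis `XADD`.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// snapshotFields assembles the fields of a runtime_snapshot_update
// event: the event_type discriminator, scalar fields as-is, and the
// player_turn_stats array JSON-encoded into a single field.
func snapshotFields(gameID string, currentTurn int, status, health string, stats []map[string]any, occurredAtMs int64) (map[string]any, error) {
	encoded, err := json.Marshal(stats)
	if err != nil {
		return nil, err
	}
	return map[string]any{
		"event_type":            "runtime_snapshot_update",
		"game_id":               gameID,
		"current_turn":          currentTurn,
		"runtime_status":        status,
		"engine_health_summary": health,
		"player_turn_stats":     string(encoded),
		"occurred_at_ms":        occurredAtMs,
	}, nil
}

func main() {
	fields, _ := snapshotFields("g1", 7, "running", "healthy",
		[]map[string]any{{"user_id": "u1", "planets": 3, "population": 120}}, 1700000000000)
	fmt.Println(fields["event_type"], fields["current_turn"])
}
```
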
Publication cadence: events only. GM publishes a snapshot when:

- a turn was generated (success or failure);
- `runtime_status` transitioned (e.g., `running ↔ generation_in_progress`,
  `running → engine_unreachable`, `* → finished`);
- `engine_health_summary` changed in response to a `runtime:health_events`
  observation (debounced — duplicates are suppressed when the summary did
  not change).

There is no periodic heartbeat. `Game Lobby` consumes these events to
update its denormalised runtime snapshot and to feed the per-game
`player_turn_stats` aggregate used at game finish.

The first `runtime_snapshot_update` published right after a successful
`register-runtime` carries `player_turn_stats` projected from the
engine `/admin/init` response — the per-player baseline (`planets`,
`population`) at turn 0. Lobby treats this baseline as the reference
point against which subsequent turn deltas are measured. For other
status transitions that fire without a fresh engine state payload
(e.g., a pure health-summary change), `player_turn_stats` is empty.

The full schema is enforced by
[`./api/runtime-events-asyncapi.yaml`](./api/runtime-events-asyncapi.yaml).

### `runtime:health_events` (in)

Producer: `Runtime Manager`. Consumer: `Game Master`.

GM consumes the stream to update `runtime_records.engine_health` summary
per game. The schema is owned by `Runtime Manager` and documented in
[`../rtmanager/api/runtime-health-asyncapi.yaml`](../rtmanager/api/runtime-health-asyncapi.yaml).
GM never modifies `runtime:health_events`; it is read-only.

GM does not publish notifications in response to runtime health changes
in v1; the operator surface is `gm:lobby_events` plus the GM REST
inspect endpoints.

## Notification Contracts

`Game Master` publishes notification intents to `notification:intents`
using the shared `pkg/notificationintent` producer module:

| Trigger | `notification_type` | Audience | Channels |
| --- | --- | --- | --- |
| Successful turn generation | `game.turn.ready` | active members of the game | `push+email` |
| Game finish | `game.finished` | active members of the game | `push+email` |
| Turn generation failed | `game.generation_failed` | configured admin email list | `email` |

Recipient resolution: GM materialises `recipient_user_ids` from its own
membership cache (loaded from Lobby) at publish time; admin recipients
are resolved by `Notification Service` from configuration.

A failed publication is a notification degradation and must not roll back
already committed runtime state. Failed publications are logged and
counted via `gamemaster.notification.publish_attempts`.
## Persistence Layout

### PostgreSQL durable state (schema `gamemaster`)

| Table | Purpose | Key |
| --- | --- | --- |
| `runtime_records` | One row per game; latest known runtime status and scheduling state. | `game_id` |
| `engine_versions` | Engine version registry. | `version` |
| `player_mappings` | `(game_id, user_id) → race_name + engine_player_uuid`. | composite `(game_id, user_id)` |
| `operation_log` | Append-only audit of every GM operation. | `id` (auto) |

`runtime_records` columns:

- `game_id` — primary key, references Lobby's identifier.
- `status` — `starting | running | generation_in_progress |
  generation_failed | stopped | engine_unreachable | finished`.
- `engine_endpoint` — `http://galaxy-game-{game_id}:8080`.
- `current_image_ref` — Docker reference of the running image.
- `current_engine_version` — semver string registered in `engine_versions`.
- `turn_schedule` — five-field cron expression copied from Lobby.
- `current_turn` — last completed turn number; `0` until the first turn
  generates.
- `next_generation_at` — UTC timestamp of the next due tick.
- `skip_next_tick` — boolean; set by `force-next-turn`, cleared after the
  first cron step is skipped.
- `engine_health` — short text summary derived from
  `runtime:health_events`.
- `created_at`, `updated_at`, `started_at`, `stopped_at`, `finished_at` —
  lifecycle timestamps.
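
The `skip_next_tick` semantics can be sketched as a small decision function; the function name is illustrative, not GM's actual scheduler code:

```go
package main

import "fmt"

// shouldGenerate sketches the force-next-turn skip rule described for
// runtime_records: when skip_next_tick is set, the first due cron step
// is skipped and the flag is cleared, so the forced early turn does not
// produce a double generation. Illustrative names only.
func shouldGenerate(due bool, skipNextTick *bool) bool {
	if !due {
		return false
	}
	if *skipNextTick {
		*skipNextTick = false // consume the flag on the first skipped step
		return false
	}
	return true
}

func main() {
	skip := true // set by force-next-turn after it generated a turn early
	fmt.Println(shouldGenerate(true, &skip)) // first due tick is skipped
	fmt.Println(shouldGenerate(true, &skip)) // flag cleared: generate again
}
```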
`engine_versions` columns:

- `version` — primary key; semver string.
- `image_ref` — non-empty Docker reference.
- `options` — `jsonb`, free-form, default `'{}'`.
- `status` — `active | deprecated`.
- `created_at`, `updated_at`.

`player_mappings` columns:

- composite primary key `(game_id, user_id)`.
- `race_name` — non-empty string; unique per `game_id`.
- `engine_player_uuid` — UUID returned by the engine `/admin/init`.
- `created_at`.

`operation_log` columns:

- `id`, `game_id`, `op_kind` (`register_runtime | turn_generation |
  force_next_turn | banish | stop | patch | engine_version_create |
  engine_version_update | engine_version_deprecate |
  engine_version_delete`), `op_source`, `source_ref` (request id
  when known), `outcome` (`success | failure`), `error_code`,
  `error_message`, `started_at`, `finished_at`.

For engine-version registry entries (`op_kind` starting with
`engine_version_`), the `game_id` column doubles as the audit subject
and stores the canonical `version` string instead of a platform game
identifier; the registry is global, not per-game. This convention is
documented in
[`./docs/stage14-engine-version-registry.md`](./docs/stage14-engine-version-registry.md).

Indexes:

- `runtime_records (status, next_generation_at)` — drives the scheduler
  ticker scan.
- `operation_log (game_id, started_at DESC)` — drives audit reads.
- UNIQUE on `player_mappings (game_id, race_name)` — enforces the
  one-race-per-game invariant.

Per-game roster reads (`WHERE game_id = $1`) are served by the
leftmost prefix of the composite primary key on
`player_mappings (game_id, user_id)`; no extra single-column index is
added.

Migrations are embedded as `00001_init.sql` (single-init pre-launch policy
from `ARCHITECTURE.md §Persistence Backends`).
### Redis runtime-coordination state

| Key shape | Purpose |
| --- | --- |
| `gamemaster:stream_offsets:{label}` | Last processed entry id per consumer (`health_events`). Same shape as Lobby and RTM. |

GM does not persist the membership cache to Redis in v1; the cache is
in-process. This trade-off is documented in [`./PLAN.md` Stage 16](./PLAN.md).
## Error Model

Error envelope: `{ "error": { "code": "...", "message": "..." } }`,
identical to Lobby and RTM.

Stable error codes:

| Code | Meaning |
| --- | --- |
| `invalid_request` | Malformed JSON, unknown fields, or a missing required parameter. |
| `runtime_not_found` | `runtime_records.{game_id}` does not exist. |
| `runtime_not_running` | Operation requires `status=running`. |
| `conflict` | State transition not allowed. |
| `forbidden` | Caller is not an active member or not authorised. |
| `engine_version_not_found` | `engine_versions.{version}` does not exist. |
| `engine_version_in_use` | Hard-delete attempt against a version referenced by a non-finished runtime. |
| `semver_patch_only` | Patch attempt across a major/minor boundary. |
| `engine_unreachable` | Engine returned 5xx or a connection error. |
| `engine_protocol_violation` | Engine response is missing required fields or carries an unexpected payload. |
| `engine_validation_error` | Engine returned 4xx with per-command results. |
| `service_unavailable` | Dependency (PostgreSQL, Redis, Lobby, RTM) unavailable. |
| `internal_error` | Unspecified failure. |
## Configuration

All variables use the `GAMEMASTER_` prefix. A missing required variable
causes a fail-fast error on startup.

### Required

- `GAMEMASTER_INTERNAL_HTTP_ADDR`
- `GAMEMASTER_POSTGRES_PRIMARY_DSN`
- `GAMEMASTER_REDIS_MASTER_ADDR`
- `GAMEMASTER_REDIS_PASSWORD`
- `GAMEMASTER_LOBBY_INTERNAL_BASE_URL`
- `GAMEMASTER_RTM_INTERNAL_BASE_URL`

### Configuration groups

**Listener:**

- `GAMEMASTER_INTERNAL_HTTP_ADDR` (e.g., `:8097`).
- `GAMEMASTER_INTERNAL_HTTP_READ_TIMEOUT` (default `5s`).
- `GAMEMASTER_INTERNAL_HTTP_WRITE_TIMEOUT` (default `30s`).
- `GAMEMASTER_INTERNAL_HTTP_IDLE_TIMEOUT` (default `60s`).

**PostgreSQL:**

- `GAMEMASTER_POSTGRES_PRIMARY_DSN`
  (`postgres://gamemaster:<pwd>@<host>:5432/galaxy?search_path=gamemaster&sslmode=disable`).
- `GAMEMASTER_POSTGRES_REPLICA_DSNS` (optional, comma-separated; not used
  in v1).
- `GAMEMASTER_POSTGRES_OPERATION_TIMEOUT` (default `2s`).
- `GAMEMASTER_POSTGRES_MAX_OPEN_CONNS` (default `10`).
- `GAMEMASTER_POSTGRES_MAX_IDLE_CONNS` (default `2`).
- `GAMEMASTER_POSTGRES_CONN_MAX_LIFETIME` (default `30m`).

**Redis:**

- `GAMEMASTER_REDIS_MASTER_ADDR`.
- `GAMEMASTER_REDIS_REPLICA_ADDRS` (optional, comma-separated).
- `GAMEMASTER_REDIS_PASSWORD`.
- `GAMEMASTER_REDIS_DB` (default `0`).
- `GAMEMASTER_REDIS_OPERATION_TIMEOUT` (default `2s`).

**Streams:**

- `GAMEMASTER_REDIS_LOBBY_EVENTS_STREAM` (default `gm:lobby_events`).
- `GAMEMASTER_REDIS_HEALTH_EVENTS_STREAM` (default
  `runtime:health_events`).
- `GAMEMASTER_REDIS_NOTIFICATION_INTENTS_STREAM` (default
  `notification:intents`).
- `GAMEMASTER_STREAM_BLOCK_TIMEOUT` (default `5s`).

**Engine client:**

- `GAMEMASTER_ENGINE_CALL_TIMEOUT` (default `30s` — covers turn generation
  on large games).
- `GAMEMASTER_ENGINE_PROBE_TIMEOUT` (default `5s` — for inspect-style
  reads).

**Lobby internal client:**

- `GAMEMASTER_LOBBY_INTERNAL_BASE_URL`.
- `GAMEMASTER_LOBBY_INTERNAL_TIMEOUT` (default `2s`).

**Runtime Manager internal client:**

- `GAMEMASTER_RTM_INTERNAL_BASE_URL`.
- `GAMEMASTER_RTM_INTERNAL_TIMEOUT` (default `5s`).

**Scheduler:**

- `GAMEMASTER_SCHEDULER_TICK_INTERVAL` (default `1s`).
- `GAMEMASTER_TURN_GENERATION_TIMEOUT` (default `60s`).

**Membership cache:**

- `GAMEMASTER_MEMBERSHIP_CACHE_TTL` (default `30s`).
- `GAMEMASTER_MEMBERSHIP_CACHE_MAX_GAMES` (default `4096`; LRU eviction).

**Logging:**

- `GAMEMASTER_LOG_LEVEL` (default `info`).

**Lifecycle:**

- `GAMEMASTER_SHUTDOWN_TIMEOUT` (default `30s`).

**Telemetry:** uses the standard OTLP env vars
(`OTEL_EXPORTER_OTLP_ENDPOINT`, `OTEL_EXPORTER_OTLP_PROTOCOL`, etc.)
shared with other Galaxy services.

## Observability

### Metrics (OpenTelemetry, low cardinality)

- `gamemaster.register_runtime.outcomes` — counter; labels `outcome`,
  `error_code`.
- `gamemaster.turn_generation.outcomes` — counter; labels `outcome`,
  `error_code`, `trigger` (`scheduler | force`).
- `gamemaster.command_execute.outcomes` — counter; labels `outcome`,
  `error_code`.
- `gamemaster.order_put.outcomes` — counter; labels `outcome`,
  `error_code`.
- `gamemaster.report_get.outcomes` — counter; labels `outcome`,
  `error_code`.
- `gamemaster.banish.outcomes` — counter; labels `outcome`, `error_code`.
- `gamemaster.engine_call.latency` — histogram; label `op` (`init |
  status | turn | banish | command | order | report`).
- `gamemaster.runtime_records_by_status` — gauge; label `status`.
- `gamemaster.scheduler.due_games` — gauge.
- `gamemaster.health_events.consumed` — counter.
- `gamemaster.lobby_events.published` — counter; label `event_type`.
- `gamemaster.notification.publish_attempts` — counter; labels
  `notification_type`, `result` (`ok | error`).
- `gamemaster.membership_cache.hits` — counter; label `result` (`hit |
  miss | invalidate`).
- `gamemaster.engine_versions_total` — gauge.

Metrics avoid high-cardinality attributes such as `game_id` and `user_id`.

### Structured logs (slog JSON to stdout)

Common fields on every entry: `service=gamemaster`, `request_id`,
`trace_id`, `span_id`, `game_id` (when known), `user_id` (when known),
`op_kind`, `op_source`, `outcome`, `error_code`.

Worker-specific fields: `event_type` (lobby-events publisher),
`stream_entry_id` (health-events consumer), `turn` (turn-generation),
`engine_endpoint` (engine calls).

## Verification

Service-level (per [`./PLAN.md`](./PLAN.md)):

- Unit tests for every service-layer operation against mocked engine,
  Lobby, RTM, notification publisher, and lobby-events publisher.
- Adapter tests using `testcontainers-go` for PostgreSQL and Redis.
- Contract tests for `internal-openapi.yaml` and
  `runtime-events-asyncapi.yaml`.

Service-local integration suite under `gamemaster/integration/`:

- Register-runtime + first turn happy path against the real
  `galaxy/game` test image.
- Force-next-turn skip behaviour.
- Engine version registry CRUD + resolve.
- Admin stop via synchronous REST.
- Banish round-trip.
- Membership invalidation hook.
- `runtime:health_events` consumption.

Inter-service suite under `integration/lobbygm/` and
`integration/lobbygmrtm/`:

- `lobbygm`: real Lobby + real GM + real engine + stub RTM. Covers
  enrollment → register-runtime → first turn → finish + capability
  evaluation.
- `lobbygmrtm`: full Lobby + GM + RTM + engine. Covers the happy path and
  the documented failure paths from `ARCHITECTURE.md` flow §4.

Manual smoke (development):

```sh
docker network create galaxy-net  # once
GAMEMASTER_INTERNAL_HTTP_ADDR=:8097 \
GAMEMASTER_POSTGRES_PRIMARY_DSN='postgres://gamemaster:secret@localhost:5432/galaxy?search_path=gamemaster&sslmode=disable' \
GAMEMASTER_REDIS_MASTER_ADDR=localhost:6379 \
GAMEMASTER_REDIS_PASSWORD=secret \
GAMEMASTER_LOBBY_INTERNAL_BASE_URL=http://localhost:8095 \
GAMEMASTER_RTM_INTERNAL_BASE_URL=http://localhost:8096 \
... go run ./gamemaster/cmd/gamemaster
```

After start, `curl http://localhost:8097/readyz` returns `200`. Driving
Lobby through its public start flow brings up `galaxy-game-{game_id}`
containers, GM registers each runtime, generates turns on the configured
schedule, and propagates events to Lobby.

File diff suppressed because it is too large
@@ -0,0 +1,204 @@
asyncapi: 3.1.0
info:
  title: Galaxy Game Master Runtime Events Contract
  version: 1.0.0
  description: |
    Stable Redis Streams contract for runtime snapshot updates and game
    finish events published by `Game Master` toward `Game Lobby` on the
    `gm:lobby_events` stream.

    Two distinct message types share the channel and are discriminated
    by the `event_type` field on the payload:

    - `RuntimeSnapshotUpdate` (`event_type=runtime_snapshot_update`) is
      published whenever a turn was generated (success or failure), the
      runtime status transitioned, or the engine health summary changed
      in response to a `runtime:health_events` observation. Duplicates
      are suppressed when the summary did not change.
    - `GameFinished` (`event_type=game_finished`) is published once
      when the engine reports `finished:true` on a turn-generation
      response. The runtime stays in `status=finished` indefinitely;
      no further events are published for the game.

    Both payload schemas are closed (`additionalProperties: false`).
    Adding a field to either payload after this contract was frozen is
    a breaking change that requires a contract bump and a coordinated
    consumer update.

    Polymorphism: the AsyncAPI surface uses two messages on one channel
    and one `send` operation per message. The
    `runtime_health-asyncapi.yaml` style of a single message with
    `oneOf` details is not used here because the two payload shapes
    have no shared field set beyond the discriminator and the
    `game_id`. See `gamemaster/docs/stage06-contract-files.md`.
channels:
  lobbyEvents:
    address: gm:lobby_events
    messages:
      runtimeSnapshotUpdate:
        $ref: '#/components/messages/RuntimeSnapshotUpdate'
      gameFinished:
        $ref: '#/components/messages/GameFinished'
operations:
  publishRuntimeSnapshotUpdate:
    action: send
    summary: Publish a runtime snapshot update for Game Lobby.
    channel:
      $ref: '#/channels/lobbyEvents'
    messages:
      - $ref: '#/channels/lobbyEvents/messages/runtimeSnapshotUpdate'
  publishGameFinished:
    action: send
    summary: Publish a game finish event for Game Lobby.
    channel:
      $ref: '#/channels/lobbyEvents'
    messages:
      - $ref: '#/channels/lobbyEvents/messages/gameFinished'
components:
  messages:
    RuntimeSnapshotUpdate:
      name: RuntimeSnapshotUpdate
      title: Runtime snapshot update
      summary: Snapshot of one game's runtime state, published on transitions and health changes.
      payload:
        $ref: '#/components/schemas/RuntimeSnapshotUpdatePayload'
      examples:
        - name: runningTurnReady
          summary: Snapshot published after a successful turn generation.
          payload:
            event_type: runtime_snapshot_update
            game_id: game-123
            current_turn: 17
            runtime_status: running
            engine_health_summary: healthy
            player_turn_stats:
              - user_id: user-1
                planets: 4
                population: 12000
              - user_id: user-2
                planets: 3
                population: 9000
            occurred_at_ms: 1775121700000
    GameFinished:
      name: GameFinished
      title: Game finished
      summary: Terminal event published once when the engine reports finished:true on a turn-generation response.
      payload:
        $ref: '#/components/schemas/GameFinishedPayload'
      examples:
        - name: gameFinished
          summary: Game finished on turn 42; final per-player stats included.
          payload:
            event_type: game_finished
            game_id: game-123
            final_turn_number: 42
            runtime_status: finished
            player_turn_stats:
              - user_id: user-1
                planets: 6
                population: 25000
              - user_id: user-2
                planets: 0
                population: 0
            finished_at_ms: 1775130000000
  schemas:
    RuntimeStatus:
      type: string
      enum:
        - starting
        - running
        - generation_in_progress
        - generation_failed
        - stopped
        - engine_unreachable
        - finished
      description: Runtime status enum; identical to the value used in the internal REST contract.
    PlayerTurnStat:
      type: object
      additionalProperties: false
      required:
        - user_id
        - planets
        - population
      properties:
        user_id:
          type: string
          description: Platform user identifier of the player.
        planets:
          type: integer
          minimum: 0
          description: Number of planets controlled by the player at the snapshot turn.
        population:
          type: integer
          minimum: 0
          description: Total population controlled by the player at the snapshot turn.
    RuntimeSnapshotUpdatePayload:
      type: object
      additionalProperties: false
      required:
        - event_type
        - game_id
        - current_turn
        - runtime_status
        - engine_health_summary
        - player_turn_stats
        - occurred_at_ms
      properties:
        event_type:
          type: string
          const: runtime_snapshot_update
          description: Discriminator pinned to `runtime_snapshot_update`; consumers dispatch on this value.
        game_id:
          type: string
          description: Opaque stable game identifier.
        current_turn:
          type: integer
          minimum: 0
          description: Last completed turn number; zero when the snapshot reflects the pre-first-turn state.
        runtime_status:
          $ref: '#/components/schemas/RuntimeStatus'
        engine_health_summary:
          type: string
          description: Short text summary of engine health; empty until the first health observation.
        player_turn_stats:
          type: array
          items:
            $ref: '#/components/schemas/PlayerTurnStat'
          description: Per-player stats projection; empty before any turn has generated.
        occurred_at_ms:
          type: integer
          format: int64
          description: UTC Unix milliseconds when Game Master observed the underlying transition.
    GameFinishedPayload:
      type: object
      additionalProperties: false
      required:
        - event_type
        - game_id
        - final_turn_number
        - runtime_status
        - player_turn_stats
        - finished_at_ms
      properties:
        event_type:
          type: string
          const: game_finished
          description: Discriminator pinned to `game_finished`; consumers dispatch on this value.
        game_id:
          type: string
          description: Opaque stable game identifier.
        final_turn_number:
          type: integer
          minimum: 0
          description: Last turn number generated before the engine reported finished:true.
        runtime_status:
          $ref: '#/components/schemas/RuntimeStatus'
        player_turn_stats:
          type: array
          items:
            $ref: '#/components/schemas/PlayerTurnStat'
          description: Final per-player stats projection at the finish turn.
        finished_at_ms:
          type: integer
          format: int64
          description: UTC Unix milliseconds when Game Master persisted the finish transition.
@@ -0,0 +1,46 @@
// Binary gamemaster is the runnable Game Master process entrypoint.
package main

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"syscall"

	"galaxy/gamemaster/internal/app"
	"galaxy/gamemaster/internal/config"
	"galaxy/gamemaster/internal/logging"
)

func main() {
	if err := run(); err != nil {
		_, _ = fmt.Fprintf(os.Stderr, "gamemaster: %v\n", err)
		os.Exit(1)
	}
}

func run() error {
	cfg, err := config.LoadFromEnv()
	if err != nil {
		return err
	}

	logger, err := logging.New(cfg.Logging.Level)
	if err != nil {
		return err
	}

	rootCtx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()

	runtime, err := app.NewRuntime(rootCtx, cfg, logger)
	if err != nil {
		return err
	}
	defer func() {
		_ = runtime.Close()
	}()

	return runtime.Run(rootCtx)
}
@@ -0,0 +1,237 @@
// Command jetgen regenerates the go-jet/v2 query-builder code under
// galaxy/gamemaster/internal/adapters/postgres/jet/ against a transient
// PostgreSQL instance.
//
// The program is intended to be invoked as `go run ./cmd/jetgen` (or via
// the `make jet` Makefile target) from within `galaxy/gamemaster`. It is
// not part of the runtime binary.
//
// Steps:
//
//  1. start a postgres:16-alpine container via testcontainers-go
//  2. open it through pkg/postgres as the superuser
//  3. CREATE ROLE gamemasterservice and CREATE SCHEMA "gamemaster"
//     AUTHORIZATION gamemasterservice
//  4. open a second pool as gamemasterservice with search_path=gamemaster
//     and apply the embedded goose migrations
//  5. run jet's PostgreSQL generator against schema=gamemaster, writing
//     into ../internal/adapters/postgres/jet
package main

import (
	"context"
	"errors"
	"fmt"
	"log"
	"net/url"
	"os"
	"path/filepath"
	"runtime"
	"time"

	"galaxy/postgres"

	"galaxy/gamemaster/internal/adapters/postgres/migrations"

	jetpostgres "github.com/go-jet/jet/v2/generator/postgres"
	testcontainers "github.com/testcontainers/testcontainers-go"
	tcpostgres "github.com/testcontainers/testcontainers-go/modules/postgres"
	"github.com/testcontainers/testcontainers-go/wait"
)

const (
	postgresImage      = "postgres:16-alpine"
	superuserName      = "galaxy"
	superuserPassword  = "galaxy"
	superuserDatabase  = "galaxy_gamemaster"
	serviceRole        = "gamemasterservice"
	servicePassword    = "gamemasterservice"
	serviceSchema      = "gamemaster"
	containerStartup   = 90 * time.Second
	defaultOpTimeout   = 10 * time.Second
	jetOutputDirSuffix = "internal/adapters/postgres/jet"
)

func main() {
	if err := run(context.Background()); err != nil {
		log.Fatalf("jetgen: %v", err)
	}
}

func run(ctx context.Context) error {
	outputDir, err := jetOutputDir()
	if err != nil {
		return err
	}

	container, err := tcpostgres.Run(ctx, postgresImage,
		tcpostgres.WithDatabase(superuserDatabase),
		tcpostgres.WithUsername(superuserName),
		tcpostgres.WithPassword(superuserPassword),
		testcontainers.WithWaitStrategy(
			wait.ForLog("database system is ready to accept connections").
				WithOccurrence(2).
				WithStartupTimeout(containerStartup),
		),
	)
	if err != nil {
		return fmt.Errorf("start postgres container: %w", err)
	}
	defer func() {
		if termErr := testcontainers.TerminateContainer(container); termErr != nil {
			log.Printf("jetgen: terminate container: %v", termErr)
		}
	}()

	baseDSN, err := container.ConnectionString(ctx, "sslmode=disable")
	if err != nil {
		return fmt.Errorf("resolve container dsn: %w", err)
	}

	if err := provisionRoleAndSchema(ctx, baseDSN); err != nil {
		return err
	}

	scopedDSN, err := dsnForServiceRole(baseDSN)
	if err != nil {
		return err
	}
	if err := applyMigrations(ctx, scopedDSN); err != nil {
		return err
	}

	if err := os.RemoveAll(outputDir); err != nil {
		return fmt.Errorf("remove existing jet output %q: %w", outputDir, err)
	}
	if err := os.MkdirAll(filepath.Dir(outputDir), 0o755); err != nil {
		return fmt.Errorf("ensure jet output parent: %w", err)
	}

	jetCfg := postgres.DefaultConfig()
	jetCfg.PrimaryDSN = scopedDSN
	jetCfg.OperationTimeout = defaultOpTimeout
	jetDB, err := postgres.OpenPrimary(ctx, jetCfg)
	if err != nil {
		return fmt.Errorf("open scoped pool for jet generation: %w", err)
	}
	defer func() { _ = jetDB.Close() }()

	if err := jetpostgres.GenerateDB(jetDB, serviceSchema, outputDir); err != nil {
		return fmt.Errorf("jet generate: %w", err)
	}

	log.Printf("jetgen: generated jet code into %s (schema=%s)", outputDir, serviceSchema)
	return nil
}

func provisionRoleAndSchema(ctx context.Context, baseDSN string) error {
	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = baseDSN
	cfg.OperationTimeout = defaultOpTimeout
	db, err := postgres.OpenPrimary(ctx, cfg)
	if err != nil {
		return fmt.Errorf("open admin pool: %w", err)
	}
	defer func() { _ = db.Close() }()

	statements := []string{
		fmt.Sprintf(`DO $$ BEGIN
			IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = %s) THEN
				CREATE ROLE %s LOGIN PASSWORD %s;
			END IF;
		END $$;`, sqlLiteral(serviceRole), sqlIdentifier(serviceRole), sqlLiteral(servicePassword)),
		fmt.Sprintf(`CREATE SCHEMA IF NOT EXISTS %s AUTHORIZATION %s;`,
			sqlIdentifier(serviceSchema), sqlIdentifier(serviceRole)),
		fmt.Sprintf(`GRANT USAGE ON SCHEMA %s TO %s;`,
			sqlIdentifier(serviceSchema), sqlIdentifier(serviceRole)),
	}
	for _, statement := range statements {
		if _, err := db.ExecContext(ctx, statement); err != nil {
			return fmt.Errorf("provision %q/%q: %w", serviceSchema, serviceRole, err)
		}
	}
	return nil
}

func dsnForServiceRole(baseDSN string) (string, error) {
	parsed, err := url.Parse(baseDSN)
	if err != nil {
		return "", fmt.Errorf("parse base dsn: %w", err)
	}
	values := url.Values{}
	values.Set("search_path", serviceSchema)
	values.Set("sslmode", "disable")
	scoped := url.URL{
		Scheme:   parsed.Scheme,
		User:     url.UserPassword(serviceRole, servicePassword),
		Host:     parsed.Host,
		Path:     parsed.Path,
		RawQuery: values.Encode(),
	}
	return scoped.String(), nil
}

func applyMigrations(ctx context.Context, dsn string) error {
	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = dsn
	cfg.OperationTimeout = defaultOpTimeout
	db, err := postgres.OpenPrimary(ctx, cfg)
	if err != nil {
		return fmt.Errorf("open scoped pool: %w", err)
	}
	defer func() { _ = db.Close() }()

	if err := postgres.Ping(ctx, db, defaultOpTimeout); err != nil {
		return err
	}
	if err := postgres.RunMigrations(ctx, db, migrations.FS(), "."); err != nil {
		return fmt.Errorf("run migrations: %w", err)
	}
	return nil
}

// jetOutputDir returns the absolute path that jet should write into. We
// rely on the runtime caller info to anchor it to galaxy/gamemaster
// regardless of the invoking working directory.
func jetOutputDir() (string, error) {
	_, file, _, ok := runtime.Caller(0)
	if !ok {
		return "", errors.New("resolve runtime caller for jet output path")
	}
	dir := filepath.Dir(file)
	// dir = .../galaxy/gamemaster/cmd/jetgen
	moduleRoot := filepath.Clean(filepath.Join(dir, "..", ".."))
	return filepath.Join(moduleRoot, jetOutputDirSuffix), nil
}

func sqlIdentifier(name string) string {
	return `"` + escapeDoubleQuotes(name) + `"`
}

func sqlLiteral(value string) string {
	return "'" + escapeSingleQuotes(value) + "'"
}

func escapeDoubleQuotes(value string) string {
	out := make([]byte, 0, len(value))
	for index := 0; index < len(value); index++ {
		if value[index] == '"' {
			out = append(out, '"', '"')
			continue
		}
		out = append(out, value[index])
	}
	return string(out)
}

func escapeSingleQuotes(value string) string {
	out := make([]byte, 0, len(value))
	for index := 0; index < len(value); index++ {
		if value[index] == '\'' {
			out = append(out, '\'', '\'')
			continue
		}
		out = append(out, value[index])
	}
	return string(out)
}
@@ -0,0 +1,360 @@
|
||||
package gamemaster
|
||||
|
||||
import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
"gopkg.in/yaml.v3"
|
||||
)
|
||||
|
||||
type runtimeEventPayloadExpectation struct {
|
||||
schemaName string
|
||||
eventTypeConst string
|
||||
required []string
|
||||
}
|
||||
|
||||
var expectedRuntimeEventPayloads = []runtimeEventPayloadExpectation{
|
||||
{
|
||||
schemaName: "RuntimeSnapshotUpdatePayload",
|
||||
eventTypeConst: "runtime_snapshot_update",
|
||||
required: []string{
|
||||
"event_type",
|
||||
"game_id",
|
||||
"current_turn",
|
||||
"runtime_status",
|
||||
"engine_health_summary",
|
||||
"player_turn_stats",
|
||||
"occurred_at_ms",
|
||||
},
|
||||
},
|
||||
{
|
||||
schemaName: "GameFinishedPayload",
|
||||
eventTypeConst: "game_finished",
|
||||
required: []string{
|
||||
"event_type",
|
||||
"game_id",
|
||||
"final_turn_number",
|
||||
"runtime_status",
|
||||
"player_turn_stats",
|
||||
"finished_at_ms",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
var expectedRuntimeStatusEnum = []string{
|
||||
"starting",
|
||||
"running",
|
||||
"generation_in_progress",
|
||||
"generation_failed",
|
||||
"stopped",
|
||||
"engine_unreachable",
|
||||
"finished",
|
||||
}
|
||||
|
||||
// TestRuntimeEventsAsyncAPISpecLoads verifies the spec parses as YAML and is
|
||||
// pinned to AsyncAPI 3.1.0.
|
||||
func TestRuntimeEventsAsyncAPISpecLoads(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
doc := loadAsyncAPISpec(t)
|
||||
require.Equal(t, "3.1.0", getStringValue(t, doc, "asyncapi"))
|
||||
}
|
||||
|
||||
// TestRuntimeEventsAsyncAPIChannel verifies the single channel address and
// the two message references attached to it.
func TestRuntimeEventsAsyncAPIChannel(t *testing.T) {
	t.Parallel()

	doc := loadAsyncAPISpec(t)
	channel := getMapValue(t, doc, "channels", "lobbyEvents")

	require.Equal(t, "gm:lobby_events", getStringValue(t, channel, "address"))

	channelMessages := getMapValue(t, channel, "messages")
	require.ElementsMatch(t,
		[]string{"runtimeSnapshotUpdate", "gameFinished"},
		mapKeys(channelMessages))

	require.Equal(t,
		"#/components/messages/RuntimeSnapshotUpdate",
		getStringValue(t, getMapValue(t, channelMessages, "runtimeSnapshotUpdate"), "$ref"))
	require.Equal(t,
		"#/components/messages/GameFinished",
		getStringValue(t, getMapValue(t, channelMessages, "gameFinished"), "$ref"))
}

// TestRuntimeEventsAsyncAPIOperations verifies that each message has its own
// `send` operation with the correct channel and message reference. Game
// Master is the publisher; no `receive` operations exist on this stream.
func TestRuntimeEventsAsyncAPIOperations(t *testing.T) {
	t.Parallel()

	doc := loadAsyncAPISpec(t)
	operations := getMapValue(t, doc, "operations")

	require.ElementsMatch(t,
		[]string{"publishRuntimeSnapshotUpdate", "publishGameFinished"},
		mapKeys(operations))

	cases := []struct {
		operationName string
		messageKey    string
	}{
		{"publishRuntimeSnapshotUpdate", "runtimeSnapshotUpdate"},
		{"publishGameFinished", "gameFinished"},
	}

	for _, tc := range cases {
		tc := tc
		t.Run(tc.operationName, func(t *testing.T) {
			t.Parallel()

			op := getMapValue(t, operations, tc.operationName)
			require.Equal(t, "send", getStringValue(t, op, "action"))
			require.Equal(t, "#/channels/lobbyEvents",
				getStringValue(t, getMapValue(t, op, "channel"), "$ref"))

			messageRefs := getSliceValue(t, op, "messages")
			require.Len(t, messageRefs, 1, "%s must reference exactly one message", tc.operationName)

			ref, ok := messageRefs[0].(map[string]any)
			require.True(t, ok, "%s message reference must be a map", tc.operationName)
			require.Equal(t,
				"#/channels/lobbyEvents/messages/"+tc.messageKey,
				getStringValue(t, ref, "$ref"))
		})
	}
}

// TestRuntimeEventsAsyncAPIMessageNames verifies that components.messages
// contains exactly the two message names frozen by Stage 06.
func TestRuntimeEventsAsyncAPIMessageNames(t *testing.T) {
	t.Parallel()

	doc := loadAsyncAPISpec(t)
	messages := getMapValue(t, doc, "components", "messages")

	require.ElementsMatch(t,
		[]string{"RuntimeSnapshotUpdate", "GameFinished"},
		mapKeys(messages))

	for _, name := range []string{"RuntimeSnapshotUpdate", "GameFinished"} {
		message := getMapValue(t, messages, name)
		require.Equal(t, name, getStringValue(t, message, "name"),
			"message %s must declare its own name", name)
		require.Equal(t,
			"#/components/schemas/"+name+"Payload",
			getStringValue(t, getMapValue(t, message, "payload"), "$ref"),
			"message %s must reference its payload schema", name)
	}
}

// TestRuntimeEventsAsyncAPIPayloadFreeze verifies that each payload schema
// has the expected required-field set, the correct `event_type` const, and
// `additionalProperties: false`.
func TestRuntimeEventsAsyncAPIPayloadFreeze(t *testing.T) {
	t.Parallel()

	doc := loadAsyncAPISpec(t)
	schemas := getMapValue(t, doc, "components", "schemas")

	for _, expectation := range expectedRuntimeEventPayloads {
		expectation := expectation
		t.Run(expectation.schemaName, func(t *testing.T) {
			t.Parallel()

			payload := getMapValue(t, schemas, expectation.schemaName)

			require.Equal(t, false, getScalarValue(t, payload, "additionalProperties"),
				"%s must reject unknown fields", expectation.schemaName)

			require.ElementsMatch(t,
				toAnySlice(expectation.required),
				getSliceValue(t, payload, "required"),
				"%s required field set", expectation.schemaName)

			properties := getMapValue(t, payload, "properties")

			eventType := getMapValue(t, properties, "event_type")
			require.Equal(t, "string", getStringValue(t, eventType, "type"))
			require.Equal(t, expectation.eventTypeConst,
				getScalarValue(t, eventType, "const"),
				"%s.event_type const must be %q", expectation.schemaName, expectation.eventTypeConst)

			runtimeStatus := getMapValue(t, properties, "runtime_status")
			require.Equal(t, "#/components/schemas/RuntimeStatus",
				getStringValue(t, runtimeStatus, "$ref"),
				"%s.runtime_status must reference RuntimeStatus", expectation.schemaName)

			playerTurnStats := getMapValue(t, properties, "player_turn_stats")
			require.Equal(t, "array", getStringValue(t, playerTurnStats, "type"))
			require.Equal(t, "#/components/schemas/PlayerTurnStat",
				getStringValue(t, getMapValue(t, playerTurnStats, "items"), "$ref"),
				"%s.player_turn_stats items must reference PlayerTurnStat", expectation.schemaName)
		})
	}
}

// TestRuntimeEventsAsyncAPIPlayerTurnStat verifies the per-player stat
// schema shape from gamemaster/README.md §Async Stream Contracts.
func TestRuntimeEventsAsyncAPIPlayerTurnStat(t *testing.T) {
	t.Parallel()

	doc := loadAsyncAPISpec(t)
	stat := getMapValue(t, doc, "components", "schemas", "PlayerTurnStat")

	require.Equal(t, false, getScalarValue(t, stat, "additionalProperties"))
	require.ElementsMatch(t,
		[]any{"user_id", "planets", "population"},
		getSliceValue(t, stat, "required"))

	properties := getMapValue(t, stat, "properties")
	require.Equal(t, "string", getStringValue(t, getMapValue(t, properties, "user_id"), "type"))
	require.Equal(t, "integer", getStringValue(t, getMapValue(t, properties, "planets"), "type"))
	require.Equal(t, "integer", getStringValue(t, getMapValue(t, properties, "population"), "type"))
}

// TestRuntimeEventsAsyncAPIRuntimeStatusEnum verifies the RuntimeStatus
// enum copied locally for the AsyncAPI surface contains the same seven
// values as the OpenAPI surface.
func TestRuntimeEventsAsyncAPIRuntimeStatusEnum(t *testing.T) {
	t.Parallel()

	doc := loadAsyncAPISpec(t)
	schema := getMapValue(t, doc, "components", "schemas", "RuntimeStatus")

	require.ElementsMatch(t, expectedRuntimeStatusEnum, getStringSlice(t, schema, "enum"))
}

// loadAsyncAPISpec reads api/runtime-events-asyncapi.yaml from the module
// root and decodes it into a generic YAML document.
func loadAsyncAPISpec(t *testing.T) map[string]any {
	t.Helper()

	payload := loadTextFile(t, filepath.Join("api", "runtime-events-asyncapi.yaml"))

	var doc map[string]any
	if err := yaml.Unmarshal([]byte(payload), &doc); err != nil {
		require.Failf(t, "test failed", "decode spec: %v", err)
	}

	return doc
}

// loadTextFile reads a file at a path relative to the module root and
// returns its contents as a string.
func loadTextFile(t *testing.T, relativePath string) string {
	t.Helper()

	path := filepath.Join(moduleRoot(t), relativePath)
	payload, err := os.ReadFile(path)
	if err != nil {
		require.Failf(t, "test failed", "read file %s: %v", path, err)
	}

	return string(payload)
}

// moduleRoot resolves the directory containing this test file via
// runtime.Caller, so the tests are independent of the working directory.
func moduleRoot(t *testing.T) string {
	t.Helper()

	_, thisFile, _, ok := runtime.Caller(0)
	if !ok {
		require.FailNow(t, "runtime.Caller failed")
	}

	return filepath.Dir(thisFile)
}

// getMapValue walks the given key path and fails the test if any segment is
// missing or is not a map.
func getMapValue(t *testing.T, value map[string]any, path ...string) map[string]any {
	t.Helper()

	current := value
	for _, segment := range path {
		raw, ok := current[segment]
		if !ok {
			require.Failf(t, "test failed", "missing map key %s", segment)
		}
		next, ok := raw.(map[string]any)
		if !ok {
			require.Failf(t, "test failed", "value at %s is not a map", segment)
		}
		current = next
	}

	return current
}

// getStringValue returns the string stored at key, failing the test if the
// key is absent or holds a non-string value.
func getStringValue(t *testing.T, value map[string]any, key string) string {
	t.Helper()

	raw, ok := value[key]
	if !ok {
		require.Failf(t, "test failed", "missing key %s", key)
	}
	result, ok := raw.(string)
	if !ok {
		require.Failf(t, "test failed", "value at %s is not a string", key)
	}

	return result
}

// getStringSlice returns the slice stored at key coerced to []string,
// failing the test on any non-string element.
func getStringSlice(t *testing.T, value map[string]any, key string) []string {
	t.Helper()

	raw := getSliceValue(t, value, key)
	result := make([]string, 0, len(raw))
	for _, item := range raw {
		text, ok := item.(string)
		if !ok {
			require.Failf(t, "test failed", "value at %s is not a string slice", key)
		}
		result = append(result, text)
	}

	return result
}

// getScalarValue returns the raw value stored at key, failing the test if
// the key is absent.
func getScalarValue(t *testing.T, value map[string]any, key string) any {
	t.Helper()

	raw, ok := value[key]
	if !ok {
		require.Failf(t, "test failed", "missing key %s", key)
	}

	return raw
}

// getSliceValue returns the []any stored at key, failing the test if the
// key is absent or holds a non-slice value.
func getSliceValue(t *testing.T, value map[string]any, key string) []any {
	t.Helper()

	raw, ok := value[key]
	if !ok {
		require.Failf(t, "test failed", "missing key %s", key)
	}
	result, ok := raw.([]any)
	if !ok {
		require.Failf(t, "test failed", "value at %s is not a slice", key)
	}

	return result
}

// mapKeys returns the keys of value in unspecified order.
func mapKeys(value map[string]any) []string {
	keys := make([]string, 0, len(value))
	for key := range value {
		keys = append(keys, key)
	}

	return keys
}

// toAnySlice widens a []string to []any for require.ElementsMatch.
func toAnySlice(values []string) []any {
	result := make([]any, 0, len(values))
	for _, value := range values {
		result = append(result, value)
	}

	return result
}

@@ -0,0 +1,718 @@
package gamemaster

import (
	"context"
	"net/http"
	"path/filepath"
	"runtime"
	"testing"

	"github.com/getkin/kin-openapi/openapi3"
	"github.com/stretchr/testify/require"
)

// expectedInternalOperationIDs freezes the complete operationId surface of
// internal-openapi.yaml; TestInternalSpecHasAllOperationIDs enforces it.
var expectedInternalOperationIDs = []string{
	"internalHealthz",
	"internalReadyz",
	"internalRegisterRuntime",
	"internalGetRuntime",
	"internalListRuntimes",
	"internalForceNextTurn",
	"internalStopRuntime",
	"internalPatchRuntime",
	"internalBanishRace",
	"internalInvalidateMemberships",
	"internalGameLiveness",
	"internalListEngineVersions",
	"internalCreateEngineVersion",
	"internalGetEngineVersion",
	"internalUpdateEngineVersion",
	"internalDeprecateEngineVersion",
	"internalResolveEngineVersionImageRef",
	"internalExecuteCommands",
	"internalPutOrders",
	"internalGetReport",
}

// gmOwnedClosedSchemas lists every component schema for which Game Master
// owns the wire shape and therefore must reject unknown fields. The list
// is curated; the matching test fails if any schema in this list opens up.
var gmOwnedClosedSchemas = []string{
	"ProbeResponse",
	"LivenessResponse",
	"ImageRefResponse",
	"RegisterRuntimeMember",
	"RegisterRuntimeRequest",
	"RuntimeRecord",
	"RuntimeListResponse",
	"StopRuntimeRequest",
	"PatchRuntimeRequest",
	"EngineVersion",
	"EngineVersionListResponse",
	"CreateEngineVersionRequest",
	"UpdateEngineVersionRequest",
	"ErrorResponse",
	"ErrorBody",
}

// engineOwnedPassthroughSchemas lists every component schema that forwards
// engine-owned payloads verbatim and therefore deliberately uses
// `additionalProperties: true`. The matching test fails if any schema in
// this list closes up.
var engineOwnedPassthroughSchemas = []string{
	"ExecuteCommandsRequest",
	"ExecuteCommandsResponse",
	"PutOrdersRequest",
	"PutOrdersResponse",
	"ReportResponse",
}

// TestInternalOpenAPISpecValidates loads internal-openapi.yaml and verifies
// it is a syntactically valid OpenAPI 3.0 document.
func TestInternalOpenAPISpecValidates(t *testing.T) {
	t.Parallel()
	loadInternalSpec(t)
}

// TestInternalSpecHasAllOperationIDs verifies that the spec declares every
// operationId required by gamemaster/PLAN.md Stage 06 and no extras. Adding
// a new operation requires updating expectedInternalOperationIDs in the same
// patch as the spec change.
func TestInternalSpecHasAllOperationIDs(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)

	got := make([]string, 0, len(expectedInternalOperationIDs))
	for _, pathItem := range doc.Paths.Map() {
		for _, op := range pathItem.Operations() {
			require.NotEmpty(t, op.OperationID, "every operation must declare a non-empty operationId")
			got = append(got, op.OperationID)
		}
	}

	require.ElementsMatch(t, expectedInternalOperationIDs, got)
}

// TestInternalSpecRegisterRuntime verifies the register-runtime contract
// used by Game Lobby after a successful container start.
func TestInternalSpecRegisterRuntime(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)
	op := getOperation(t, doc, "/api/v1/internal/games/{game_id}/register-runtime", http.MethodPost)

	require.Equal(t, "internalRegisterRuntime", op.OperationID)
	assertOperationParameterRefs(t, op, "#/components/parameters/GameIDPath")
	assertSchemaRef(t, requestSchemaRef(t, op), "#/components/schemas/RegisterRuntimeRequest", "internalRegisterRuntime request")
	assertSchemaRef(t, responseSchemaRef(t, op, http.StatusOK), "#/components/schemas/RuntimeRecord", "internalRegisterRuntime 200")
	assertResponseRef(t, op, http.StatusBadRequest, "#/components/responses/InvalidRequestError")
	assertResponseRef(t, op, http.StatusNotFound, "#/components/responses/EngineVersionNotFoundError")
	assertResponseRef(t, op, http.StatusConflict, "#/components/responses/ConflictError")
	assertResponseRef(t, op, http.StatusBadGateway, "#/components/responses/EngineUnreachableError")
	assertResponseRef(t, op, http.StatusInternalServerError, "#/components/responses/InternalError")
	assertResponseRef(t, op, http.StatusServiceUnavailable, "#/components/responses/ServiceUnavailableError")

	req := componentSchemaRef(t, doc, "RegisterRuntimeRequest")
	assertRequiredFields(t, req,
		"engine_endpoint", "members", "target_engine_version", "turn_schedule")

	member := componentSchemaRef(t, doc, "RegisterRuntimeMember")
	assertRequiredFields(t, member, "user_id", "race_name")
}

// TestInternalSpecGetRuntime verifies the runtime read contract.
func TestInternalSpecGetRuntime(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)
	op := getOperation(t, doc, "/api/v1/internal/runtimes/{game_id}", http.MethodGet)

	require.Equal(t, "internalGetRuntime", op.OperationID)
	assertOperationParameterRefs(t, op, "#/components/parameters/GameIDPath")
	assertSchemaRef(t, responseSchemaRef(t, op, http.StatusOK), "#/components/schemas/RuntimeRecord", "internalGetRuntime 200")
	assertResponseRef(t, op, http.StatusNotFound, "#/components/responses/NotFoundError")
	assertResponseRef(t, op, http.StatusInternalServerError, "#/components/responses/InternalError")
}

// TestInternalSpecListRuntimes verifies the list contract and the optional
// status query parameter.
func TestInternalSpecListRuntimes(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)
	op := getOperation(t, doc, "/api/v1/internal/runtimes", http.MethodGet)

	require.Equal(t, "internalListRuntimes", op.OperationID)
	assertOperationParameterRefs(t, op, "#/components/parameters/RuntimeStatusQuery")
	assertSchemaRef(t, responseSchemaRef(t, op, http.StatusOK), "#/components/schemas/RuntimeListResponse", "internalListRuntimes 200")
	assertResponseRef(t, op, http.StatusBadRequest, "#/components/responses/InvalidRequestError")
	assertResponseRef(t, op, http.StatusInternalServerError, "#/components/responses/InternalError")

	param := componentParameterRef(t, doc, "RuntimeStatusQuery")
	require.Equal(t, "status", param.Value.Name)
	require.Equal(t, "query", param.Value.In)
	require.False(t, param.Value.Required, "status filter must be optional")
	require.Equal(t, "#/components/schemas/RuntimeStatus", param.Value.Schema.Ref,
		"status filter schema must reference RuntimeStatus")
}

// TestInternalSpecForceNextTurn verifies the force-next-turn admin contract.
func TestInternalSpecForceNextTurn(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)
	op := getOperation(t, doc, "/api/v1/internal/runtimes/{game_id}/force-next-turn", http.MethodPost)

	require.Equal(t, "internalForceNextTurn", op.OperationID)
	require.Nil(t, op.RequestBody, "internalForceNextTurn must have no request body")
	assertOperationParameterRefs(t, op, "#/components/parameters/GameIDPath")
	assertSchemaRef(t, responseSchemaRef(t, op, http.StatusOK), "#/components/schemas/RuntimeRecord", "internalForceNextTurn 200")
	assertResponseRef(t, op, http.StatusNotFound, "#/components/responses/NotFoundError")
	assertResponseRef(t, op, http.StatusConflict, "#/components/responses/ConflictError")
	assertResponseRef(t, op, http.StatusBadGateway, "#/components/responses/EngineUnreachableError")
	assertResponseRef(t, op, http.StatusInternalServerError, "#/components/responses/InternalError")
}

// TestInternalSpecStopRuntime verifies the stop admin contract.
func TestInternalSpecStopRuntime(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)
	op := getOperation(t, doc, "/api/v1/internal/runtimes/{game_id}/stop", http.MethodPost)

	require.Equal(t, "internalStopRuntime", op.OperationID)
	assertOperationParameterRefs(t, op, "#/components/parameters/GameIDPath")
	assertSchemaRef(t, requestSchemaRef(t, op), "#/components/schemas/StopRuntimeRequest", "internalStopRuntime request")
	assertSchemaRef(t, responseSchemaRef(t, op, http.StatusOK), "#/components/schemas/RuntimeRecord", "internalStopRuntime 200")
	assertResponseRef(t, op, http.StatusBadRequest, "#/components/responses/InvalidRequestError")
	assertResponseRef(t, op, http.StatusNotFound, "#/components/responses/NotFoundError")
	assertResponseRef(t, op, http.StatusInternalServerError, "#/components/responses/InternalError")
	assertResponseRef(t, op, http.StatusServiceUnavailable, "#/components/responses/ServiceUnavailableError")

	req := componentSchemaRef(t, doc, "StopRuntimeRequest")
	assertRequiredFields(t, req, "reason")
	reason := req.Value.Properties["reason"]
	require.NotNil(t, reason)
	require.Equal(t, "#/components/schemas/StopReason", reason.Ref,
		"StopRuntimeRequest.reason must reference StopReason")
}

// TestInternalSpecPatchRuntime verifies the patch admin contract.
func TestInternalSpecPatchRuntime(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)
	op := getOperation(t, doc, "/api/v1/internal/runtimes/{game_id}/patch", http.MethodPost)

	require.Equal(t, "internalPatchRuntime", op.OperationID)
	assertOperationParameterRefs(t, op, "#/components/parameters/GameIDPath")
	assertSchemaRef(t, requestSchemaRef(t, op), "#/components/schemas/PatchRuntimeRequest", "internalPatchRuntime request")
	assertSchemaRef(t, responseSchemaRef(t, op, http.StatusOK), "#/components/schemas/RuntimeRecord", "internalPatchRuntime 200")
	assertResponseRef(t, op, http.StatusBadRequest, "#/components/responses/InvalidRequestError")
	assertResponseRef(t, op, http.StatusNotFound, "#/components/responses/NotFoundError")
	assertResponseRef(t, op, http.StatusConflict, "#/components/responses/ConflictError")
	assertResponseRef(t, op, http.StatusInternalServerError, "#/components/responses/InternalError")
	assertResponseRef(t, op, http.StatusServiceUnavailable, "#/components/responses/ServiceUnavailableError")

	req := componentSchemaRef(t, doc, "PatchRuntimeRequest")
	assertRequiredFields(t, req, "version")
}

// TestInternalSpecBanishRace verifies the engine-side race banish contract
// called by Game Lobby after a permanent membership removal.
func TestInternalSpecBanishRace(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)
	op := getOperation(t, doc, "/api/v1/internal/games/{game_id}/race/{race_name}/banish", http.MethodPost)

	require.Equal(t, "internalBanishRace", op.OperationID)
	require.Nil(t, op.RequestBody, "internalBanishRace must have no request body; the race_name is on the path")
	assertOperationParameterRefs(t, op,
		"#/components/parameters/GameIDPath",
		"#/components/parameters/RaceNamePath",
	)

	assertNoContentResponse(t, op, http.StatusNoContent)
	assertResponseRef(t, op, http.StatusNotFound, "#/components/responses/NotFoundError")
	assertResponseRef(t, op, http.StatusBadGateway, "#/components/responses/EngineUnreachableError")
	assertResponseRef(t, op, http.StatusInternalServerError, "#/components/responses/InternalError")
}

// TestInternalSpecInvalidateMemberships verifies the membership cache hook
// called by Game Lobby on every roster mutation.
func TestInternalSpecInvalidateMemberships(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)
	op := getOperation(t, doc, "/api/v1/internal/games/{game_id}/memberships/invalidate", http.MethodPost)

	require.Equal(t, "internalInvalidateMemberships", op.OperationID)
	require.Nil(t, op.RequestBody)
	assertOperationParameterRefs(t, op, "#/components/parameters/GameIDPath")

	assertNoContentResponse(t, op, http.StatusNoContent)
	assertResponseRef(t, op, http.StatusNotFound, "#/components/responses/NotFoundError")
	assertResponseRef(t, op, http.StatusInternalServerError, "#/components/responses/InternalError")
}

// TestInternalSpecGameLiveness verifies the liveness reply used by Lobby's
// resume flow.
func TestInternalSpecGameLiveness(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)
	op := getOperation(t, doc, "/api/v1/internal/games/{game_id}/liveness", http.MethodGet)

	require.Equal(t, "internalGameLiveness", op.OperationID)
	assertOperationParameterRefs(t, op, "#/components/parameters/GameIDPath")
	assertSchemaRef(t, responseSchemaRef(t, op, http.StatusOK), "#/components/schemas/LivenessResponse", "internalGameLiveness 200")
	assertResponseRef(t, op, http.StatusInternalServerError, "#/components/responses/InternalError")

	resp := componentSchemaRef(t, doc, "LivenessResponse")
	assertRequiredFields(t, resp, "ready", "status")
	status := resp.Value.Properties["status"]
	require.NotNil(t, status)
	require.Equal(t, "#/components/schemas/RuntimeStatus", status.Ref,
		"LivenessResponse.status must reference RuntimeStatus")
}

// TestInternalSpecEngineVersionsCRUD verifies all six engine version
// registry operations: list, create, get, update, deprecate, resolve.
func TestInternalSpecEngineVersionsCRUD(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)

	listOp := getOperation(t, doc, "/api/v1/internal/engine-versions", http.MethodGet)
	require.Equal(t, "internalListEngineVersions", listOp.OperationID)
	assertOperationParameterRefs(t, listOp, "#/components/parameters/EngineVersionStatusQuery")
	assertSchemaRef(t, responseSchemaRef(t, listOp, http.StatusOK), "#/components/schemas/EngineVersionListResponse", "internalListEngineVersions 200")

	createOp := getOperation(t, doc, "/api/v1/internal/engine-versions", http.MethodPost)
	require.Equal(t, "internalCreateEngineVersion", createOp.OperationID)
	assertSchemaRef(t, requestSchemaRef(t, createOp), "#/components/schemas/CreateEngineVersionRequest", "create request")
	assertSchemaRef(t, responseSchemaRef(t, createOp, http.StatusCreated), "#/components/schemas/EngineVersion", "internalCreateEngineVersion 201")
	assertResponseRef(t, createOp, http.StatusConflict, "#/components/responses/ConflictError")

	getOp := getOperation(t, doc, "/api/v1/internal/engine-versions/{version}", http.MethodGet)
	require.Equal(t, "internalGetEngineVersion", getOp.OperationID)
	assertOperationParameterRefs(t, getOp, "#/components/parameters/VersionPath")
	assertSchemaRef(t, responseSchemaRef(t, getOp, http.StatusOK), "#/components/schemas/EngineVersion", "internalGetEngineVersion 200")
	assertResponseRef(t, getOp, http.StatusNotFound, "#/components/responses/NotFoundError")

	updateOp := getOperation(t, doc, "/api/v1/internal/engine-versions/{version}", http.MethodPatch)
	require.Equal(t, "internalUpdateEngineVersion", updateOp.OperationID)
	assertOperationParameterRefs(t, updateOp, "#/components/parameters/VersionPath")
	assertSchemaRef(t, requestSchemaRef(t, updateOp), "#/components/schemas/UpdateEngineVersionRequest", "update request")
	assertSchemaRef(t, responseSchemaRef(t, updateOp, http.StatusOK), "#/components/schemas/EngineVersion", "internalUpdateEngineVersion 200")

	deprecateOp := getOperation(t, doc, "/api/v1/internal/engine-versions/{version}", http.MethodDelete)
	require.Equal(t, "internalDeprecateEngineVersion", deprecateOp.OperationID)
	assertNoContentResponse(t, deprecateOp, http.StatusNoContent)
	assertResponseRef(t, deprecateOp, http.StatusConflict, "#/components/responses/EngineVersionInUseError")

	resolveOp := getOperation(t, doc, "/api/v1/internal/engine-versions/{version}/image-ref", http.MethodGet)
	require.Equal(t, "internalResolveEngineVersionImageRef", resolveOp.OperationID)
	assertOperationParameterRefs(t, resolveOp, "#/components/parameters/VersionPath")
	assertSchemaRef(t, responseSchemaRef(t, resolveOp, http.StatusOK), "#/components/schemas/ImageRefResponse", "internalResolveEngineVersionImageRef 200")
	assertResponseRef(t, resolveOp, http.StatusNotFound, "#/components/responses/EngineVersionNotFoundError")

	createReq := componentSchemaRef(t, doc, "CreateEngineVersionRequest")
	assertRequiredFields(t, createReq, "version", "image_ref")
}

// TestInternalSpecHotPathContracts verifies the three Edge Gateway hot-path
// operations and their pass-through schema treatment.
func TestInternalSpecHotPathContracts(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)

	cmdOp := getOperation(t, doc, "/api/v1/internal/games/{game_id}/commands", http.MethodPost)
	require.Equal(t, "internalExecuteCommands", cmdOp.OperationID)
	assertOperationParameterRefs(t, cmdOp,
		"#/components/parameters/GameIDPath",
		"#/components/parameters/XUserIDHeader",
	)
	assertSchemaRef(t, requestSchemaRef(t, cmdOp), "#/components/schemas/ExecuteCommandsRequest", "internalExecuteCommands request")
	assertSchemaRef(t, responseSchemaRef(t, cmdOp, http.StatusOK), "#/components/schemas/ExecuteCommandsResponse", "internalExecuteCommands 200")
	assertResponseRef(t, cmdOp, http.StatusForbidden, "#/components/responses/ForbiddenError")
	assertResponseRef(t, cmdOp, http.StatusBadGateway, "#/components/responses/EngineUnreachableError")

	orderOp := getOperation(t, doc, "/api/v1/internal/games/{game_id}/orders", http.MethodPost)
	require.Equal(t, "internalPutOrders", orderOp.OperationID)
	assertOperationParameterRefs(t, orderOp,
		"#/components/parameters/GameIDPath",
		"#/components/parameters/XUserIDHeader",
	)
	assertSchemaRef(t, requestSchemaRef(t, orderOp), "#/components/schemas/PutOrdersRequest", "internalPutOrders request")
	assertSchemaRef(t, responseSchemaRef(t, orderOp, http.StatusOK), "#/components/schemas/PutOrdersResponse", "internalPutOrders 200")

	reportOp := getOperation(t, doc, "/api/v1/internal/games/{game_id}/reports/{turn}", http.MethodGet)
	require.Equal(t, "internalGetReport", reportOp.OperationID)
	assertOperationParameterRefs(t, reportOp,
		"#/components/parameters/GameIDPath",
		"#/components/parameters/TurnPath",
		"#/components/parameters/XUserIDHeader",
	)
	require.Nil(t, reportOp.RequestBody, "internalGetReport must have no request body")
	assertSchemaRef(t, responseSchemaRef(t, reportOp, http.StatusOK), "#/components/schemas/ReportResponse", "internalGetReport 200")
	assertResponseRef(t, reportOp, http.StatusForbidden, "#/components/responses/ForbiddenError")
	assertResponseRef(t, reportOp, http.StatusBadGateway, "#/components/responses/EngineUnreachableError")
}

// TestInternalSpecProbes verifies the two probe operations.
func TestInternalSpecProbes(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)

	for _, path := range []string{"/healthz", "/readyz"} {
		op := getOperation(t, doc, path, http.MethodGet)
		assertSchemaRef(t, responseSchemaRef(t, op, http.StatusOK), "#/components/schemas/ProbeResponse", op.OperationID+" 200")
		assertResponseRef(t, op, http.StatusServiceUnavailable, "#/components/responses/ServiceUnavailableError")
	}

	healthz := getOperation(t, doc, "/healthz", http.MethodGet)
	require.Equal(t, "internalHealthz", healthz.OperationID)
	readyz := getOperation(t, doc, "/readyz", http.MethodGet)
	require.Equal(t, "internalReadyz", readyz.OperationID)
}

// TestInternalSpecRuntimeRecordSchema verifies that RuntimeRecord declares
// the required field set documented in gamemaster/README.md §Persistence
// Layout, with the optional lifecycle timestamps present in properties.
func TestInternalSpecRuntimeRecordSchema(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)
	schema := componentSchemaRef(t, doc, "RuntimeRecord")

	assertRequiredFields(t, schema,
		"game_id",
		"runtime_status",
		"engine_endpoint",
		"current_image_ref",
		"current_engine_version",
		"turn_schedule",
		"current_turn",
		"next_generation_at",
		"skip_next_tick",
		"engine_health_summary",
		"created_at",
		"updated_at",
	)

	for _, optional := range []string{"started_at", "stopped_at", "finished_at"} {
		require.Contains(t, schema.Value.Properties, optional,
			"RuntimeRecord.%s must be present in properties", optional)
	}

	runtimeStatus := schema.Value.Properties["runtime_status"]
	require.NotNil(t, runtimeStatus)
	require.Equal(t, "#/components/schemas/RuntimeStatus", runtimeStatus.Ref,
		"RuntimeRecord.runtime_status must reference RuntimeStatus")
}

// TestInternalSpecEngineVersionSchema verifies the EngineVersion schema's
// required field set and the deliberate `additionalProperties: true` on
// the free-form `options` field.
func TestInternalSpecEngineVersionSchema(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)
	schema := componentSchemaRef(t, doc, "EngineVersion")

	assertRequiredFields(t, schema,
		"version", "image_ref", "options", "status", "created_at", "updated_at")

	options := schema.Value.Properties["options"]
	require.NotNil(t, options)
	require.NotNil(t, options.Value.AdditionalProperties.Has,
		"EngineVersion.options must declare additionalProperties explicitly")
	require.True(t, *options.Value.AdditionalProperties.Has,
		"EngineVersion.options is free-form jsonb and must keep additionalProperties: true")

	status := schema.Value.Properties["status"]
	require.NotNil(t, status)
	require.Equal(t, "#/components/schemas/EngineVersionStatus", status.Ref,
		"EngineVersion.status must reference EngineVersionStatus")
}

// TestInternalSpecRuntimeStatusEnum verifies the seven-value RuntimeStatus
// enum from gamemaster/README.md §Scope.
func TestInternalSpecRuntimeStatusEnum(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)
	schema := componentSchemaRef(t, doc, "RuntimeStatus")

	got := stringEnumValues(t, schema)
	require.ElementsMatch(t,
		[]string{
			"starting",
			"running",
			"generation_in_progress",
			"generation_failed",
			"stopped",
			"engine_unreachable",
			"finished",
		},
		got)
}

// TestInternalSpecEngineVersionStatusEnum verifies the EngineVersionStatus
// enum from gamemaster/README.md §Engine Version Registry.
func TestInternalSpecEngineVersionStatusEnum(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)
	schema := componentSchemaRef(t, doc, "EngineVersionStatus")

	got := stringEnumValues(t, schema)
	require.ElementsMatch(t, []string{"active", "deprecated"}, got)
}

// TestInternalSpecStopReasonEnum verifies the StopReason enum from
// gamemaster/README.md §Lifecycles -> Stop.
func TestInternalSpecStopReasonEnum(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)
	schema := componentSchemaRef(t, doc, "StopReason")

	got := stringEnumValues(t, schema)
	require.ElementsMatch(t, []string{"admin_request", "finished", "timeout"}, got)
}

|
||||
// must be identical to the Lobby and Runtime Manager envelopes.
|
||||
func TestInternalSpecErrorEnvelope(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
doc := loadInternalSpec(t)
|
||||
|
||||
envelope := componentSchemaRef(t, doc, "ErrorResponse")
|
||||
assertRequiredFields(t, envelope, "error")
|
||||
assertAdditionalPropertiesFalse(t, envelope, "ErrorResponse")
|
||||
errRef := envelope.Value.Properties["error"]
|
||||
require.NotNil(t, errRef)
|
||||
require.Equal(t, "#/components/schemas/ErrorBody", errRef.Ref,
|
||||
"ErrorResponse.error must reference ErrorBody")
|
||||
|
||||
body := componentSchemaRef(t, doc, "ErrorBody")
|
||||
assertRequiredFields(t, body, "code", "message")
|
||||
assertAdditionalPropertiesFalse(t, body, "ErrorBody")
|
||||
}
|
||||
|
||||
// TestInternalSpecGMOwnedSchemasAreClosed verifies that every schema for
// which Game Master owns the wire shape rejects unknown fields.
func TestInternalSpecGMOwnedSchemasAreClosed(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)

	for _, name := range gmOwnedClosedSchemas {
		name := name
		t.Run(name, func(t *testing.T) {
			t.Parallel()
			schema := componentSchemaRef(t, doc, name)
			assertAdditionalPropertiesFalse(t, schema, name)
		})
	}
}

// TestInternalSpecHotPathSchemasArePassthrough verifies that every engine
// pass-through schema deliberately keeps `additionalProperties: true`.
// The matching test guards against a refactor that closes these by mistake.
func TestInternalSpecHotPathSchemasArePassthrough(t *testing.T) {
	t.Parallel()

	doc := loadInternalSpec(t)

	for _, name := range engineOwnedPassthroughSchemas {
		name := name
		t.Run(name, func(t *testing.T) {
			t.Parallel()
			schema := componentSchemaRef(t, doc, name)
			require.NotNil(t, schema.Value.AdditionalProperties.Has,
				"%s must declare additionalProperties explicitly", name)
			require.True(t, *schema.Value.AdditionalProperties.Has,
				"%s must keep additionalProperties: true (engine pass-through)", name)
		})
	}
}

// loadInternalSpec loads and validates gamemaster/api/internal-openapi.yaml
// relative to this test file.
func loadInternalSpec(t *testing.T) *openapi3.T {
	t.Helper()
	return loadSpec(t, filepath.Join("api", "internal-openapi.yaml"))
}

func loadSpec(t *testing.T, rel string) *openapi3.T {
	t.Helper()

	_, thisFile, _, ok := runtime.Caller(0)
	if !ok {
		require.FailNow(t, "runtime.Caller failed")
	}

	specPath := filepath.Join(filepath.Dir(thisFile), rel)
	loader := openapi3.NewLoader()
	doc, err := loader.LoadFromFile(specPath)
	if err != nil {
		require.Failf(t, "test failed", "load spec %s: %v", specPath, err)
	}
	if doc == nil {
		require.Failf(t, "test failed", "load spec %s: returned nil document", specPath)
	}
	if err := doc.Validate(context.Background()); err != nil {
		require.Failf(t, "test failed", "validate spec %s: %v", specPath, err)
	}

	return doc
}

func getOperation(t *testing.T, doc *openapi3.T, path, method string) *openapi3.Operation {
	t.Helper()

	if doc.Paths == nil {
		require.FailNow(t, "spec is missing paths")
	}
	pathItem := doc.Paths.Value(path)
	if pathItem == nil {
		require.Failf(t, "test failed", "spec is missing path %s", path)
	}
	op := pathItem.GetOperation(method)
	if op == nil {
		require.Failf(t, "test failed", "spec is missing %s operation for path %s", method, path)
	}

	return op
}

func requestSchemaRef(t *testing.T, op *openapi3.Operation) *openapi3.SchemaRef {
	t.Helper()

	if op.RequestBody == nil || op.RequestBody.Value == nil {
		require.FailNow(t, "operation is missing request body")
	}
	mt := op.RequestBody.Value.Content.Get("application/json")
	if mt == nil || mt.Schema == nil {
		require.FailNow(t, "operation is missing application/json request schema")
	}

	return mt.Schema
}

func responseSchemaRef(t *testing.T, op *openapi3.Operation, status int) *openapi3.SchemaRef {
	t.Helper()

	ref := op.Responses.Status(status)
	if ref == nil || ref.Value == nil {
		require.Failf(t, "test failed", "operation is missing %d response", status)
	}
	mt := ref.Value.Content.Get("application/json")
	if mt == nil || mt.Schema == nil {
		require.Failf(t, "test failed", "operation is missing application/json schema for %d response", status)
	}

	return mt.Schema
}

func componentSchemaRef(t *testing.T, doc *openapi3.T, name string) *openapi3.SchemaRef {
	t.Helper()

	if doc.Components.Schemas == nil {
		require.FailNow(t, "spec is missing component schemas")
	}
	ref := doc.Components.Schemas[name]
	if ref == nil {
		require.Failf(t, "test failed", "spec is missing component schema %s", name)
	}

	return ref
}

func componentParameterRef(t *testing.T, doc *openapi3.T, name string) *openapi3.ParameterRef {
	t.Helper()

	if doc.Components.Parameters == nil {
		require.FailNow(t, "spec is missing component parameters")
	}
	ref := doc.Components.Parameters[name]
	if ref == nil {
		require.Failf(t, "test failed", "spec is missing component parameter %s", name)
	}

	return ref
}

func assertSchemaRef(t *testing.T, schemaRef *openapi3.SchemaRef, want, name string) {
	t.Helper()
	require.NotNil(t, schemaRef, "%s schema ref", name)
	require.Equal(t, want, schemaRef.Ref, "%s schema ref", name)
}

func assertRequiredFields(t *testing.T, schemaRef *openapi3.SchemaRef, fields ...string) {
	t.Helper()
	require.NotNil(t, schemaRef)
	require.ElementsMatch(t, fields, schemaRef.Value.Required)
}

func assertOperationParameterRefs(t *testing.T, op *openapi3.Operation, refs ...string) {
	t.Helper()

	got := make([]string, 0, len(op.Parameters))
	for _, p := range op.Parameters {
		got = append(got, p.Ref)
	}

	require.ElementsMatch(t, refs, got)
}

func assertResponseRef(t *testing.T, op *openapi3.Operation, status int, want string) {
	t.Helper()

	ref := op.Responses.Status(status)
	if ref == nil {
		require.Failf(t, "test failed", "operation %s is missing %d response", op.OperationID, status)
	}
	require.Equal(t, want, ref.Ref,
		"operation %s response %d must reference %s", op.OperationID, status, want)
}

func assertNoContentResponse(t *testing.T, op *openapi3.Operation, status int) {
	t.Helper()

	ref := op.Responses.Status(status)
	if ref == nil || ref.Value == nil {
		require.Failf(t, "test failed", "operation %s is missing %d response", op.OperationID, status)
	}
	require.Empty(t, ref.Value.Content,
		"operation %s response %d must have no content body", op.OperationID, status)
}

func assertAdditionalPropertiesFalse(t *testing.T, schemaRef *openapi3.SchemaRef, name string) {
	t.Helper()
	require.NotNil(t, schemaRef.Value.AdditionalProperties.Has,
		"%s must declare additionalProperties explicitly", name)
	require.False(t, *schemaRef.Value.AdditionalProperties.Has,
		"%s must reject unknown fields (additionalProperties: false)", name)
}

func stringEnumValues(t *testing.T, schemaRef *openapi3.SchemaRef) []string {
	t.Helper()

	require.NotNil(t, schemaRef)
	got := make([]string, 0, len(schemaRef.Value.Enum))
	for _, value := range schemaRef.Value.Enum {
		s, ok := value.(string)
		require.True(t, ok, "enum value %v is not a string", value)
		got = append(got, s)
	}
	return got
}
@@ -0,0 +1,62 @@
# Stage 01 — Architecture sync

This decision record captures the non-obvious choice from
[`../PLAN.md` Stage 01](../PLAN.md#stage-01-update-architecturemd):
the drop of `ships_built` from every architectural mention of
`player_turn_stats`.

## Context

Before Stage 01, `ARCHITECTURE.md` and `lobby/README.md` described
`player_turn_stats` as carrying `{user_id, planets, population,
ships_built}`, and the Race Name Directory capability rule was wired in
prose as if `ships_built` could affect the outcome. In practice, the
formal capability rule was already
`max_planets > initial_planets AND max_population > initial_population`
— `ships_built` was named in the stats payload but never referenced by
the rule.
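
The capability rule can be sketched as a pure predicate over the aggregated stats. This is an illustrative Go sketch, not the Lobby's actual code; the `memberGameStats` struct and its field names are assumptions.

```go
package main

import "fmt"

// memberGameStats is a hypothetical aggregate shape tracking initial and
// running-max planets/population per member. ships_built is deliberately
// absent, matching the post-Stage-01 contract.
type memberGameStats struct {
	InitialPlanets    int
	InitialPopulation int
	MaxPlanets        int
	MaxPopulation     int
}

// qualifiesForRaceName applies the rule as written:
// max_planets > initial_planets AND max_population > initial_population.
func qualifiesForRaceName(s memberGameStats) bool {
	return s.MaxPlanets > s.InitialPlanets && s.MaxPopulation > s.InitialPopulation
}

func main() {
	grew := memberGameStats{InitialPlanets: 1, InitialPopulation: 100, MaxPlanets: 3, MaxPopulation: 250}
	stalled := memberGameStats{InitialPlanets: 1, InitialPopulation: 100, MaxPlanets: 1, MaxPopulation: 250}
	fmt.Println(qualifiesForRaceName(grew), qualifiesForRaceName(stalled))
}
```

Nothing in the predicate reads `ships_built`, which is exactly why carrying it in the payload bought nothing.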

## Decision

`player_turn_stats` carries `{user_id, planets, population}` only.
`ships_built` is removed from:

- `ARCHITECTURE.md §8 Game Master` — `runtime_snapshot_update` payload
  description.
- `ARCHITECTURE.md §7 Game Lobby` — per-member aggregate description
  (`current and running-max of planets and population`).
- `gamemaster/README.md` — already aligned at the stage-02 README
  freeze.

The capability rule wording is unchanged because it was already
`planets`/`population`-only; only the surrounding prose mentioning the
unused field was inaccurate.

This is a documentation-only change. No runtime behaviour, wire format,
schema, or test fixture is affected.

## Why

`ships_built` was unused. Naming it in the contract obliged the
producer (GM) and the Lobby aggregator to populate and forward a field
that nothing read. Dropping it now — before any GM code lands — keeps
the contract minimal and avoids future drift between "what the spec
lists" and "what the code uses". `lobby/README.md` and the lobby
aggregate code are aligned in Stage 03 of the same plan.

## Alternatives considered

- **Keep `ships_built` in the contract for future use.** Rejected: no
  concrete plan exists for a `ships_built`-driven capability or stat
  surface; speculative fields rot.
- **Add `ships_built` only as an opaque stat without changing the
  capability rule.** Rejected: the runtime cost of carrying it is
  negligible, but the documentation burden of explaining why an unused
  field is in the payload is not.

## References

- [`../PLAN.md` Stage 01](../PLAN.md)
- [`../../ARCHITECTURE.md` §7 Game Lobby](../../ARCHITECTURE.md)
- [`../../ARCHITECTURE.md` §8 Game Master](../../ARCHITECTURE.md)
- [`../README.md`](../README.md) — `player_turn_stats[]` description.
@@ -0,0 +1,124 @@
---
stage: 03
title: Existing-service docs sync (Lobby, Notification, Game, RTM)
---

# Stage 03 — Existing-service docs sync

This decision record captures the non-obvious choices made while
synchronising every touched-service README with the post-Game-Master
contract before any code change lands. The mechanical edits
(strikethrough renames, drop of `ships_built`, replacement of the
`engineimage.Resolver` block) are not enumerated here — they are direct
consequences of the rules already recorded in
[`../README.md`](../README.md) and
[`../../ARCHITECTURE.md`](../../ARCHITECTURE.md).

## Context

Stage 03 had to reach a state where every README in the repository
agreed on three new contractual rules before any service-level code
landed:

- `image_ref` is resolved synchronously from `Game Master`'s engine
  version registry, not from a Go template held by `Game Lobby`.
- A new outgoing `POST /api/v1/internal/games/{game_id}/memberships/invalidate`
  hook from `Game Lobby` into `Game Master` fires post-commit on every
  roster mutation.
- The engine container splits its REST surface into `/api/v1/admin/*`
  (GM-only) and `/api/v1/{command,order,report}` (player), and
  `StateResponse` carries a new boolean `finished` field that GM uses
  as the sole finish signal.

Three decisions were not derivable from the GM README and required a
deliberate choice while editing `lobby/README.md`, `game/README.md`,
and `rtmanager/README.md`.

## Decision 1 — `lobby.game.start` failure modes for GM-driven image resolve

`Game Lobby` now calls
`GET /api/v1/internal/engine-versions/{version}/image-ref` synchronously
before publishing `runtime:start_jobs`. The contract defines two new
failure modes for the `lobby.game.start` command:

- GM unreachable (network error, timeout, `5xx`) ⇒
  `lobby.game.start` returns `service_unavailable`; the game stays in
  `ready_to_start`. No container is created, no envelope is published.
- GM reports the version as missing or deprecated (`404` or an
  `engine_version_not_found` payload) ⇒ `lobby.game.start` returns
  `engine_version_not_found`; the game stays in `ready_to_start`.

Both error codes were added to the stable error code list in
`lobby/README.md`. They are deliberately distinct from the existing
GM-unavailable-after-container-start path, which transitions the game to
`paused` (the container is alive; only platform tracking is missing).
Conflating the two would force operators to inspect the `paused` set
for misconfigurations that never produced a container.

Alternatives considered and rejected:

- treat GM-unavailable at resolve time as `paused` for symmetry with the
  later path — rejected because no container exists, so the
  `lobby.runtime_paused_after_start` admin notification (which announces
  a stranded container) would be a lie;
- silently fall back to a Go-template default when GM is unreachable —
  rejected because it brings back the very coupling the stage is
  retiring and lets a misconfigured registry slip through unnoticed.

## Decision 2 — Membership invalidate hook is fail-open

The new outgoing
`POST /api/v1/internal/games/{game_id}/memberships/invalidate` call from
`approveapplication`, `rejectapplication`, `redeeminvite`,
`removemember`, `blockmember`, and the user-lifecycle cascade worker is
documented as **fail-open**: a non-2xx response is logged and metered
but never rolls back the Lobby commit. GM's TTL safety net catches
stale data within the next cache TTL window.

This matches the architectural rule that a failed cross-service hook
must not invalidate an already committed business state. The TTL on
GM's in-process membership cache (default `30s`) bounds the staleness
window; the explicit hook only optimises for the time between commit
and TTL expiry.

Alternatives considered and rejected:

- two-phase commit across Lobby and GM — rejected: GM is allowed to be
  unavailable without rolling back Lobby's roster mutation;
- queue the invalidation on a Redis Stream and let GM consume it
  asynchronously — rejected for v1 because it introduces a new stream
  contract for a rare event, and the synchronous post-commit call is
  cheap enough that the staleness reduction beats the operational cost.

## Decision 3 — Keep `runtime:start_jobs` envelope shape unchanged

The `runtime:start_jobs` envelope continues to carry `image_ref` as a
top-level string field. Only the source of that string changes (from a
Lobby-side template substitution to a Lobby-side synchronous call into
GM). `Runtime Manager` does not need a contract change in this stage
and does not learn about engine versions — it still receives a
ready-to-pull Docker reference.

Alternatives considered and rejected:

- replace `image_ref` with `engine_version` and have RTM resolve the
  image — rejected: it would force RTM to call GM, which violates the
  rule that RTM has no upstream service dependencies for runtime
  operations;
- attach the resolved version metadata to the envelope alongside
  `image_ref` — rejected: RTM has no consumer for the metadata and
  carrying it would invite divergence between Lobby and RTM views of
  the engine version registry.

## References

- [`../PLAN.md` Stage 03](../PLAN.md)
- [`../README.md`](../README.md) — Game Master service description.
- [`../../lobby/README.md`](../../lobby/README.md) — updated Game Start
  Flow, internal trusted REST, configuration, and error codes.
- [`../../game/README.md`](../../game/README.md) — admin path layout,
  `StateResponse.finished`, `/admin/race/banish` shape.
- [`../../rtmanager/README.md`](../../rtmanager/README.md) —
  `runtime:health_events` consumer note.
- [`../../notification/README.md`](../../notification/README.md) — GM as
  the producer of the three `game.*` notification types.
@@ -0,0 +1,177 @@
---
stage: 06
title: Contract files and contract tests
---

# Stage 06 — Contract files and contract tests

This decision record captures the non-obvious choices made while
producing the machine-readable contracts for `Game Master`:
[`../api/internal-openapi.yaml`](../api/internal-openapi.yaml),
[`../api/runtime-events-asyncapi.yaml`](../api/runtime-events-asyncapi.yaml),
and the matching contract tests in the `gamemaster` package.

## Context

[`../PLAN.md` Stage 06](../PLAN.md) freezes the GM REST and event
contracts before any handler is written, so later stages have a target
spec. The plan enumerates the 20 internal REST `operationId` values and
the two `gm:lobby_events` message types and asks contract tests to
fail loudly if anything drifts.

Three decisions were not derivable from `../README.md` or
[`../../ARCHITECTURE.md`](../../ARCHITECTURE.md) and required a
deliberate choice while writing the YAML.

## Decision 1 — Two messages and two send operations on one channel

`gm:lobby_events` carries two distinct message types — a recurring
`runtime_snapshot_update` and a terminal `game_finished`. The AsyncAPI
3.1.0 surface encodes them as **two separate messages on one channel
with one `send` operation per message**:

```yaml
channels:
  lobbyEvents:
    address: gm:lobby_events
    messages:
      runtimeSnapshotUpdate: { $ref: '#/components/messages/RuntimeSnapshotUpdate' }
      gameFinished: { $ref: '#/components/messages/GameFinished' }
operations:
  publishRuntimeSnapshotUpdate: { action: send, ... }
  publishGameFinished: { action: send, ... }
```

The `notification:intents` contract uses a single message with
`allOf`-conditional discriminator branches; the `runtime:health_events`
contract uses a single message with a `oneOf` `details` field. Both
patterns work when most fields are shared and only one variant slot
differs.

For `gm:lobby_events` the two payloads share only `event_type`,
`game_id`, `runtime_status`, and `player_turn_stats[]`. The remaining
fields (`current_turn`, `engine_health_summary`, `occurred_at_ms` on
the snapshot vs `final_turn_number`, `finished_at_ms` on the finish
event) have no overlap, and their semantics differ — the snapshot is
recurring, the finish event is terminal. Two messages reflect this
asymmetry directly and keep each payload schema closed without
needing per-variant `if/then` rules.

Alternatives considered:

- **One message with `allOf` discriminator** — rejected: would force
  every shared field to be optional at the envelope level and
  re-required inside each `if/then` branch, doubling the schema size
  and complicating the contract test. The notification spec accepts
  this cost because it has 18 message types and the payload-shape
  asymmetry is the whole point; here it's two types with no field
  overlap.
- **Two channels** — rejected: would require Game Lobby to subscribe
  to two streams, breaking the cadence guarantees in `../README.md`
  §Async Stream Contracts ("snapshot transitions and finish are
  ordered relative to each other on the same stream").

## Decision 2 — `event_type` is a required schema-level `const`

[`../PLAN.md` Stage 06](../PLAN.md) lists the "frozen field set per
message" without naming `event_type`. The implementation pins
`event_type` as a required schema property with a `const` value:

```yaml
RuntimeSnapshotUpdatePayload:
  required: [event_type, ...]
  properties:
    event_type: { type: string, const: runtime_snapshot_update }
```

Reasons:

1. The wire payload must carry a discriminator; consumers (Game Lobby)
   dispatch on `event_type` after `XREAD`. Omitting it from the schema
   would require Game Master to inject the value at publish time
   without spec backing.
2. `const` at the schema level lets the contract test assert the
   discriminator value, which is the only meaningful check Stage 06
   asks for ("`event_type` discriminator values"). Asserting only the
   message component name without the on-wire `event_type` would not
   protect consumers from a misconfigured publisher.
3. `rtmanager/api/runtime-health-asyncapi.yaml` already uses
   `event_type` as a schema-level enum-typed discriminator; treating
   `gm:lobby_events` the same way keeps the patterns consistent for a
   reader cross-walking the two specs.
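
Consumer-side, the discriminator drives a plain switch after decoding the stream entry. A sketch of the idea; the function and return values are illustrative, not the Lobby's real handler names:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// dispatch peeks only at the on-wire event_type discriminator (the field
// pinned by the schema-level const) and routes to the matching handler.
func dispatch(raw []byte) (string, error) {
	var head struct {
		EventType string `json:"event_type"`
	}
	if err := json.Unmarshal(raw, &head); err != nil {
		return "", err
	}
	switch head.EventType {
	case "runtime_snapshot_update":
		return "snapshot", nil // recurring: update the aggregate
	case "game_finished":
		return "finish", nil // terminal: evaluate capabilities
	default:
		return "", fmt.Errorf("unknown event_type %q", head.EventType)
	}
}

func main() {
	kind, _ := dispatch([]byte(`{"event_type":"game_finished","game_id":"g1"}`))
	fmt.Println(kind)
}
```

A publisher that renames or drops `event_type` would fail both the contract test and this switch's default branch.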

Alternatives considered:

- **Leave `event_type` out of the spec and produce it only at the
  publish-side adapter** — rejected: hides the discriminator from the
  contract test, which then cannot fail when the publisher renames or
  drops it.
- **Encode discrimination through AsyncAPI message names alone**
  (relying on `header.X-Message-Type` or similar) — rejected: Redis
  Streams have no message-headers concept; everything travels in the
  payload field set.

## Decision 3 — `additionalProperties: true` on engine pass-through schemas

Three internal REST operations forward engine-owned payloads without
modification:

- `internalExecuteCommands` — `POST /api/v1/command` on the engine
- `internalPutOrders` — `PUT /api/v1/order` on the engine
- `internalGetReport` — `GET /api/v1/report` on the engine

Their request and response bodies use `additionalProperties: true`:

```yaml
ExecuteCommandsRequest:
  type: object
  additionalProperties: true
  required: [commands]
  properties:
    commands:
      type: array
      items: { type: object, additionalProperties: true }
```

Game Master does not own the shape of these payloads — `galaxy/game/openapi.yaml`
is the source of truth — and freezing them in the GM contract would
turn every engine-side schema bump into a coordinated GM release. The
same reasoning applies to `EngineVersion.options`, which is a
free-form `jsonb` document Game Master stores verbatim.

To prevent the open-by-default flag from spreading by accident, the
contract test
[`../contract_openapi_test.go`](../contract_openapi_test.go) maintains
two explicit allowlists:

- `gmOwnedClosedSchemas` — every schema for which Game Master owns
  the wire shape; the test asserts each one closes with
  `additionalProperties: false`.
- `engineOwnedPassthroughSchemas` — the five pass-through schemas
  (request and response bodies of the three hot-path operations); the
  test asserts each one keeps `additionalProperties: true`.

Adding a new GM schema requires registering it in
`gmOwnedClosedSchemas`; the test fails loudly if it isn't.

Alternatives considered:

- **Close the pass-through schemas with `additionalProperties: false`
  and hand-mirror every engine field** — rejected: `galaxy/game` and
  `galaxy/gamemaster` would have to release in lockstep; even cosmetic
  field renames in the engine would break Edge Gateway routing.
- **Rely on a `# pass-through` comment in the YAML alone** — rejected:
  comments do not survive automated reformatters and provide no
  test-time signal.

## References

- [`../PLAN.md` Stage 06](../PLAN.md)
- [`../README.md` §Hot Path](../README.md), [`../README.md` §Async Stream Contracts](../README.md)
- [`../api/internal-openapi.yaml`](../api/internal-openapi.yaml)
- [`../api/runtime-events-asyncapi.yaml`](../api/runtime-events-asyncapi.yaml)
- [`../contract_openapi_test.go`](../contract_openapi_test.go)
- [`../contract_asyncapi_test.go`](../contract_asyncapi_test.go)
- [`../../lobby/contract_openapi_test.go`](../../lobby/contract_openapi_test.go) — OpenAPI test pattern reused here.
- [`../../notification/contract_asyncapi_test.go`](../../notification/contract_asyncapi_test.go) — YAML walker pattern reused here.
- [`../../rtmanager/api/runtime-health-asyncapi.yaml`](../../rtmanager/api/runtime-health-asyncapi.yaml) — `event_type` const precedent.
@@ -0,0 +1,125 @@
|
||||
---
|
||||
stage: 07
|
||||
title: Notification catalog audit
|
||||
---
|
||||
|
||||
# Stage 07 — Notification catalog audit
|
||||
|
||||
This decision record captures the audit outcome and the freeze-test
|
||||
choice made for the GM-owned notification types
|
||||
(`game.turn.ready`, `game.finished`, `game.generation_failed`).
|
||||
|
||||
## Context
|
||||
|
||||
[`../PLAN.md` Stage 07](../PLAN.md) asks for confirmation that the three
|
||||
notification types `Game Master` will produce in Stage 15 are already
|
||||
wired through the shared producer module
|
||||
[`../../pkg/notificationintent/`](../../pkg/notificationintent/), the
|
||||
`notification` service AsyncAPI contract
|
||||
[`../../notification/api/intents-asyncapi.yaml`](../../notification/api/intents-asyncapi.yaml),
|
||||
and the catalog freeze in
|
||||
[`../../notification/contract_asyncapi_test.go`](../../notification/contract_asyncapi_test.go).
|
||||
The stage is described as «no-op or minor»: edits land elsewhere only if
|
||||
the audit finds drift.
|
||||
|
||||
The producer-side surface is consumed in Stage 15 by
|
||||
`gamemaster/internal/adapters/notificationpublisher/`; this stage locks
|
||||
the contract before the publisher is implemented.
|
||||
|
||||
## Audit outcome — no drift
|
||||
|
||||
Each artefact already matches the `Game Master` notification table at
|
||||
[`../README.md` §Notification Contracts](../README.md):
|
||||
|
||||
- [`../../pkg/notificationintent/intent.go`](../../pkg/notificationintent/intent.go)
  declares `NotificationTypeGameTurnReady`, `NotificationTypeGameFinished`,
  `NotificationTypeGameGenerationFailed`; `ExpectedProducer` maps the
  three to `ProducerGameMaster`; `SupportsAudience` and `SupportsChannel`
  encode `user + (push|email)` for the first two and `admin_email + email`
  for the failure type.
- [`../../pkg/notificationintent/payloads.go`](../../pkg/notificationintent/payloads.go)
  defines `GameTurnReadyPayload`, `GameFinishedPayload`,
  `GameGenerationFailedPayload` with the exact field set required by the
  README table, and exposes `NewGameTurnReadyIntent`,
  `NewGameFinishedIntent`, `NewGameGenerationFailedIntent`. The
  user-targeted constructors take `recipientUserIDs`; the admin-email
  constructor does not.
- [`../../notification/api/intents-asyncapi.yaml`](../../notification/api/intents-asyncapi.yaml)
  carries the three values in the `notification_type` enum, declares
  one `if/then` branch each on the envelope, and defines the
  `GameTurnReadyPayload`, `GameFinishedPayload`,
  `GameGenerationFailedPayload` schemas with the per-type required
  fields.
- [`../../notification/contract_asyncapi_test.go`](../../notification/contract_asyncapi_test.go)
  freezes the three types inside `expectedNotificationCatalog` and
  exercises them through `TestIntentAsyncAPISpecFreezesNotificationCatalogBranches`
  and `TestNotificationCatalogDocsStayInSync`.

There is no separate «catalog data table» inside `notification/internal/`:
the routing decisions live in `pkg/notificationintent/intent.go` and are
shared by every producer and by the notification service itself.
Consequently no edits to
`notification/api/intents-asyncapi.yaml`,
`notification/internal/...`, or
`notification/contract_asyncapi_test.go` are required by this stage.

## Decision — producer-side compile-time freeze in addition to the YAML freeze

[`../notificationintent_audit_test.go`](../notificationintent_audit_test.go)
imports `galaxy/notificationintent` from inside the `gamemaster`
package. Because the test names every constant, constructor, and
payload struct field directly, any rename or removal in
`pkg/notificationintent` breaks `go build ./gamemaster/...` before the
test even runs. At runtime the test additionally asserts:

- the wire value of every `NotificationType` constant
  (`game.turn.ready`, `game.finished`, `game.generation_failed`);
- the `Producer`, `AudienceKind`, recipient handling, and `Validate()`
  outcome of the constructed intent;
- the on-wire field names through `Contains` checks against
  `Intent.PayloadJSON` (catches a JSON tag rename even when the Go
  struct field name stays);
- the audience/channel matrix via `SupportsAudience` and
  `SupportsChannel`.

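The `Contains`-style wire-field assertion can be sketched in isolation. The payload type, JSON tags, and constant below are local stand-ins, not the real `pkg/notificationintent` API; only the checking pattern mirrors the audit test:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Hypothetical stand-ins for pkg/notificationintent declarations.
type GameTurnReadyPayload struct {
	GameID string `json:"game_id"`
	Turn   int    `json:"turn"`
}

const NotificationTypeGameTurnReady = "game.turn.ready"

// assertWireFields mimics the audit test's Contains checks against the
// marshalled payload: a JSON tag rename is caught even when the Go
// struct field name stays the same.
func assertWireFields(payload any, fields ...string) error {
	raw, err := json.Marshal(payload)
	if err != nil {
		return err
	}
	for _, f := range fields {
		if !strings.Contains(string(raw), `"`+f+`"`) {
			return fmt.Errorf("wire field %q missing from %s", f, raw)
		}
	}
	return nil
}

func main() {
	p := GameTurnReadyPayload{GameID: "g-1", Turn: 7}
	if err := assertWireFields(p, "game_id", "turn"); err != nil {
		panic(err)
	}
	fmt.Println(NotificationTypeGameTurnReady, "wire fields verified")
}
```

Because the check runs against the marshalled bytes rather than reflection over field names, it fails exactly when the on-wire contract drifts.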
Reasons for adding this in addition to the YAML freeze in
`notification/contract_asyncapi_test.go`:

1. The YAML freeze runs in the `notification` module. A drift in
   `pkg/notificationintent` that is *consistent* with a drift in
   `notification/api/intents-asyncapi.yaml` would still be caught, but
   the failure surface is on the consumer side, not the producer side.
   The GM-side test fails first and points the engineer at the producer
   they own.
2. The test binds the contract at compile time. A field rename in
   `pkg/notificationintent/payloads.go` cannot land without breaking
   the build of `gamemaster/notificationintent_audit_test.go`, even
   before `go test` runs.
3. Stage 15 will introduce a publisher adapter that calls the same
   constructors. Locking the constructor signatures here removes one
   class of churn from that stage — the test serves as a contract
   reference that the adapter has to satisfy.

Alternatives considered:

- **YAML re-parse in `gamemaster/`** — rejected: would duplicate the
  walker logic already present in
  `notification/contract_asyncapi_test.go` and bind the GM module to
  the YAML file path through a relative `../notification/` reference.
  The Go-import test catches the relevant drift class with no
  cross-module file lookups.
- **No GM-side test, rely on the YAML freeze alone** — rejected:
  Stage 07's exit criterion is «the freeze test passes», which the
  PLAN explicitly anchors to a new file under `gamemaster/`. The YAML
  freeze alone would also miss a Go-side rename that the test author
  forgot to mirror in the YAML in the same change.

## References

- [`../PLAN.md` Stage 07](../PLAN.md)
- [`../README.md` §Notification Contracts](../README.md)
- [`../notificationintent_audit_test.go`](../notificationintent_audit_test.go)
- [`../../pkg/notificationintent/intent.go`](../../pkg/notificationintent/intent.go)
- [`../../pkg/notificationintent/payloads.go`](../../pkg/notificationintent/payloads.go)
- [`../../notification/api/intents-asyncapi.yaml`](../../notification/api/intents-asyncapi.yaml)
- [`../../notification/contract_asyncapi_test.go`](../../notification/contract_asyncapi_test.go) — YAML-level catalog freeze.

@@ -0,0 +1,145 @@

---
stage: 08
title: Module skeleton
---

# Stage 08 — GM module skeleton

This decision record captures the wiring choices made when bootstrapping
the runnable `gamemaster` binary on top of the contracts and freeze
tests landed by Stages 01–07.

## Context

[`../PLAN.md` Stage 08](../PLAN.md) calls for a buildable `gamemaster`
process that loads its environment-driven configuration, opens
PostgreSQL and Redis pools, installs the OpenTelemetry runtime, exposes
`/healthz` and `/readyz` on the trusted internal HTTP listener, and
exits cleanly on `SIGTERM` within `GAMEMASTER_SHUTDOWN_TIMEOUT`. No
business endpoints, no workers, and no persistence stores yet.

The reference implementation is `rtmanager`, the most recently landed
Galaxy service that follows the platform-wide skeleton conventions
(layered `cmd / internal/{app, api, config, logging, telemetry}`,
`app.Component` lifecycle, OpenTelemetry runtime with deferred
observable gauges, fail-fast environment loader). Stage 08 mirrors that
skeleton with two deliberate divergences described below.

## Decisions

### 1. `go.mod` scope is minimal at Stage 08

Only modules actually imported by Stage 08 code land in
[`../go.mod`](../go.mod):

- `galaxy/postgres`, `galaxy/redisconn`, `galaxy/notificationintent`
  (the last one was already present from the Stage 07 freeze test);
- the OpenTelemetry stack (`otel`, `metric`, `trace`, `sdk`,
  `sdk/metric`, OTLP exporters for traces and metrics over gRPC and
  HTTP, stdout exporters);
- `go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp`;
- `github.com/redis/go-redis/v9` (promoted from indirect to direct);
- `github.com/jackc/pgx/v5` (transitive via `pkg/postgres`).

PLAN-listed modules that arrive with later consumers (`go-jet/jet/v2`,
`pressly/goose/v3`, the testcontainers modules, `go.uber.org/mock`,
`galaxy/cronutil`, `galaxy/error`, `galaxy/util`) are deliberately left
out of Stage 08's `go.mod`. They join the module together with their
first consumers in Stages 09 / 10 / 11 / 12.

Reasoning: keeping `go mod tidy` honest at every stage is cheaper than
pre-declaring blank-import stubs. The PLAN's full list is the eventual
shape of the module across the series, not a Stage 08 contract.

### 2. `ShutdownTimeout` lives at the top level of `Config`

The README §Configuration groups one variable —
`GAMEMASTER_SHUTDOWN_TIMEOUT` — under a documentation group called
"Lifecycle". The Go struct does not split that single field into a
substruct: `Config.ShutdownTimeout` mirrors the
`rtmanager.Config.ShutdownTimeout` shape so the two services stay
isomorphic. The "Lifecycle" group remains a documentation grouping in
[`../README.md`](../README.md) only.

### 3. Telemetry — counters and histograms now, observable gauges later

`internal/telemetry/runtime.go` registers every counter and histogram
listed under [`../README.md` §Observability](../README.md) at process
start (`buildRuntime`). The three observable gauges
(`gamemaster.runtime_records_by_status`,
`gamemaster.scheduler.due_games`, `gamemaster.engine_versions_total`)
are declared up front but their callbacks are installed via a deferred
`Runtime.RegisterGauges(deps)` call. The wiring layer at Stages 11 / 14
/ 15 supplies the probes (per-status row count, due-now scheduler
count, registered engine versions) once the persistence stores and the
scheduler exist.

This matches the `rtmanager` pattern where
`runtime_records_by_status` is registered through an analogous
`RegisterGauges` plumbing.

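The declare-now, wire-later pattern can be shown in a dependency-free sketch. This is not the real `internal/telemetry` API (which sits on OpenTelemetry instruments); the struct and method names here only illustrate how probes arrive after the instruments are declared:

```go
package main

import "fmt"

// Runtime declares gauge names at construction; probes stay nil until
// the wiring layer supplies them via RegisterGauges.
type Runtime struct {
	probes map[string]func() int64 // gauge name -> probe, nil until registered
}

func buildRuntime() *Runtime {
	return &Runtime{probes: map[string]func() int64{
		"gamemaster.runtime_records_by_status": nil,
		"gamemaster.scheduler.due_games":       nil,
	}}
}

// RegisterGauges installs probes; unknown names are rejected so a typo
// in the wiring layer fails loudly instead of silently no-opping.
func (r *Runtime) RegisterGauges(probes map[string]func() int64) error {
	for name, probe := range probes {
		if _, declared := r.probes[name]; !declared {
			return fmt.Errorf("gauge %q was never declared", name)
		}
		r.probes[name] = probe
	}
	return nil
}

// observe reports the gauge value, or false while the probe is unwired.
func (r *Runtime) observe(name string) (int64, bool) {
	probe := r.probes[name]
	if probe == nil {
		return 0, false
	}
	return probe(), true
}

func main() {
	rt := buildRuntime()
	if _, ok := rt.observe("gamemaster.scheduler.due_games"); ok {
		panic("probe should not be wired yet")
	}
	_ = rt.RegisterGauges(map[string]func() int64{
		"gamemaster.scheduler.due_games": func() int64 { return 3 },
	})
	v, _ := rt.observe("gamemaster.scheduler.due_games")
	fmt.Println("due games:", v)
}
```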
### 4. PostgreSQL migrations are deferred to Stage 09

The README §Startup dependencies states "Embedded goose migrations
apply synchronously before any listener opens." Stage 08 opens,
instruments, and pings the PostgreSQL pool but **does not** call
`postgres.RunMigrations`. The migrations package
(`internal/adapters/postgres/migrations/`) is shipped by Stage 09; the
runtime adds the one-line `RunMigrations` call at that stage.

Until then, the runtime is buildable, listener-ready, and serves
`/healthz` + `/readyz` against a fresh PostgreSQL pool with no schema
applied. This is acceptable because Stage 08 ships no business handlers
and no workers; nothing reads or writes `gamemaster.*` tables yet.

### 5. Makefile mirrors `rtmanager`

[`../Makefile`](../Makefile) declares `jet`, `mocks`, and `integration`
targets identical in shape to `rtmanager/Makefile`. The `jet` target
runs `go run ./cmd/jetgen`; the binary lands in Stage 09. The `mocks`
target runs `go generate ./internal/ports/...
./internal/api/internalhttp/handlers/...`; the `//go:generate`
directives land in Stages 10 / 12 / 19. Both targets fail until their
prerequisites land — accepted because Stage 08 does not require either
to succeed; only `go build` and `go test ./gamemaster/...` matter.

### 6. No Docker dependency

`Game Master` is forbidden from importing the Docker SDK
([`../README.md` §Non-Goals](../README.md)). The skeleton therefore
drops the `newDockerClient` / `pingDocker` helpers from
`internal/app/bootstrap.go` and the Docker-related fields from
`internal/app/wiring.go`. The readiness probe pings PostgreSQL and
Redis only.

## Files landed

- `cmd/gamemaster/main.go` — process entrypoint.
- `internal/config/{config.go, env.go, validation.go, config_test.go}` —
  GAMEMASTER-prefixed env loader plus required-vars fail-fast.
- `internal/logging/{logger.go, context.go}` — slog JSON-stdout logger
  with request id and span id helpers.
- `internal/telemetry/{runtime.go, runtime_test.go}` — OpenTelemetry
  runtime, instruments listed in §Observability, deferred gauge
  plumbing.
- `internal/api/internalhttp/{server.go, server_test.go}` — `/healthz`
  and `/readyz` listener with observability middleware.
- `internal/app/{app.go, app_test.go, bootstrap.go, runtime.go,
  wiring.go}` — process lifecycle (component supervisor + reverse-order
  cleanup), Redis bootstrap helpers, minimal placeholder wiring.
- `Makefile` — `jet`, `mocks`, `integration` target stubs.
- Updated `go.mod` / `go.sum` with the dependencies and replace
  directives for `galaxy/postgres` and `galaxy/redisconn`.

## Verification

- `go build ./gamemaster/...` succeeds.
- `go test ./gamemaster/...` passes (existing contract / freeze tests
  plus the four new test files).
- Manual smoke against a local Postgres + Redis confirms:
  `/healthz` returns `200 ok`, `/readyz` returns `200 ready` while both
  dependencies respond, and `503 service_unavailable` once one of them
  is brought down.
- `SIGTERM` ends the process within `GAMEMASTER_SHUTDOWN_TIMEOUT`,
  releasing the PostgreSQL pool, Redis client, and telemetry providers
  in reverse construction order.

@@ -0,0 +1,257 @@

---
stage: 09
title: PostgreSQL schema, migrations, jet
---

# Stage 09 — PostgreSQL schema, migrations, jet

This decision record captures the schema and code-generation pipeline
landed for Game Master at PLAN Stage 09. It is a service-local mirror
of [`../../rtmanager/docs/postgres-migration.md`](../../rtmanager/docs/postgres-migration.md)
but only documents the decisions specific to Stage 09; the Stage 24
[`postgres-migration.md`](postgres-migration.md) reorganisation will
later subsume and supersede this record.

## Context

[`../PLAN.md` Stage 09](../PLAN.md) finalises the persistence schema
and the code-generation pipeline. Stage 08 already opens, instruments,
and pings the PostgreSQL pool but does not apply any migrations. The
durable surface for runtime state, engine version registry, player
mappings, and the audit log is described in
[`../README.md` §Persistence Layout](../README.md). Stage 09 ships:

- `internal/adapters/postgres/migrations/00001_init.sql` plus the
  matching embed package;
- `cmd/jetgen` — a testcontainers-driven regeneration pipeline for
  the go-jet/v2 query builder code;
- the generated jet code under
  `internal/adapters/postgres/jet/gamemaster/{model,table}/`,
  committed verbatim;
- the `postgres.RunMigrations` call in `internal/app/runtime.go`,
  applied after the PostgreSQL pool ping and before any listener is
  built.

The reference precedent is `rtmanager`, the most recently landed
PG-backed service in the workspace.

## Decisions

### 1. Schema and role provisioning are excluded from `00001_init.sql`

**Decision.** The `gamemaster` schema and the matching
`gamemasterservice` role are created outside the migration sequence
(in tests by [`../cmd/jetgen/main.go`](../cmd/jetgen/main.go)
`provisionRoleAndSchema`; in production by an ops init script not in
scope for this stage). The embedded migration `00001_init.sql` only
contains DDL for the four service-owned tables and indexes and assumes
it runs as the schema owner with `search_path=gamemaster`.

**Why.** [`../../ARCHITECTURE.md` §Database topology](../../ARCHITECTURE.md)
mandates that each service connects with its own role whose grants are
restricted to its own schema. Mixing role creation, schema creation,
and table DDL into one script forces the migration to run as a
superuser on every replica boot and effectively relaxes the per-service
role boundary. The `rtmanager` precedent settled on the split first;
GM follows it for the same architectural reason. This is a deliberate
deviation from PLAN Stage 09's literal `CREATE SCHEMA IF NOT EXISTS
gamemaster;` instruction, called out in the comment header at the top
of `00001_init.sql`.

### 2. Natural primary keys mirror the platform identifiers

**Decision.** Every PK is a natural identifier already owned by another
component:

- `runtime_records.game_id` — Lobby's platform identifier;
- `engine_versions.version` — semver string from the registry;
- `player_mappings (game_id, user_id)` — composite, both columns owned
  by Lobby/User Service;
- `operation_log.id` — `bigserial`, the only synthetic PK because the
  audit table has no natural identity per row.

**Why.** The same reasoning as in
[`../../rtmanager/docs/postgres-migration.md` §2](../../rtmanager/docs/postgres-migration.md)
applies: surrogate keys would force every cross-service join through a
lookup table, while the natural keys keep the persistence layer
pin-compatible with the contracts (every `register-runtime` envelope
already names `game_id`, every Lobby resolve names `version`, every
player command names `user_id`).

### 3. Defense-in-depth CHECK constraints on every status enum

**Decision.** Five CHECK constraints reproduce the Go-level enums in
the schema:

- `runtime_records_status_chk` — seven runtime statuses
  (`starting`, `running`, `generation_in_progress`, `generation_failed`,
  `stopped`, `engine_unreachable`, `finished`);
- `engine_versions_status_chk` — `active | deprecated`;
- `operation_log_op_kind_chk` — nine operation kinds
  (`register_runtime`, `turn_generation`, `force_next_turn`, `banish`,
  `stop`, `patch`, `engine_version_create`, `engine_version_update`,
  `engine_version_deprecate`);
- `operation_log_op_source_chk` — three op sources
  (`gateway_player`, `lobby_internal`, `admin_rest`);
- `operation_log_outcome_chk` — `success | failure`.

The Go-level enums in the domain layer (added in Stage 10) remain the
source of truth for application code.

**Why.** The same defense-in-depth argument as for `rtmanager`: the
storage boundary catches an adapter regression that would otherwise
persist an unexpected string. Operator-side queries (`SELECT … WHERE
op_kind = 'patch'`) benefit from the enum being verifiable directly in
psql without consulting the Go source. PostgreSQL's `CREATE TYPE … AS
ENUM` was rejected because adding values to a PG enum type requires
`ALTER TYPE` outside a transaction and complicates the single-init
pre-launch policy (decision §6).

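The Go-side source of truth that the CHECK constraints mirror can be sketched as a `Status string` enum in the `rtmanager` convention described in the Stage 10 record below; `IsKnown` and `AllStatuses` are the conventional names, while this exact file layout is illustrative:

```go
package main

import "fmt"

// Status mirrors runtime_records_status_chk: the seven runtime
// statuses listed above, kept in one place so the SQL CHECK and the
// Go enum can be diffed by eye.
type Status string

const (
	StatusStarting             Status = "starting"
	StatusRunning              Status = "running"
	StatusGenerationInProgress Status = "generation_in_progress"
	StatusGenerationFailed     Status = "generation_failed"
	StatusStopped              Status = "stopped"
	StatusEngineUnreachable    Status = "engine_unreachable"
	StatusFinished             Status = "finished"
)

// AllStatuses enumerates the closed set, matching the CHECK constraint.
func AllStatuses() []Status {
	return []Status{StatusStarting, StatusRunning, StatusGenerationInProgress,
		StatusGenerationFailed, StatusStopped, StatusEngineUnreachable, StatusFinished}
}

// IsKnown reports whether s is one of the declared statuses.
func (s Status) IsKnown() bool {
	for _, known := range AllStatuses() {
		if s == known {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(Status("running").IsKnown(), Status("paused").IsKnown()) // true false
}
```

An adapter bug that fabricates a status is thus caught twice: `IsKnown` in the application and the CHECK constraint at the storage boundary.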
### 4. Indexes derive from concrete query shapes

**Decision.** Three secondary indexes ship with `00001_init.sql`:

- `runtime_records (status, next_generation_at)` — drives the
  scheduler ticker scan
  (`WHERE status='running' AND next_generation_at <= now()` once per
  second);
- `player_mappings (game_id, race_name)` UNIQUE — enforces the
  one-race-per-game invariant at the storage boundary;
- `operation_log (game_id, started_at DESC)` — drives audit reads
  ordered by recency.

The README §Persistence Layout list also mentions `player_mappings
(game_id)`, which is intentionally **not** added: the composite
primary key on `(game_id, user_id)` already serves as a leftmost-prefix
index for `WHERE game_id = $1`, and a one-column duplicate would only
double the write cost for no plan-stability gain. The README's
indexes list is corrected in the same patch to drop the redundant
entry.

**Why.** Each remaining index has a single concrete read shape behind
it. The composite ordering on `(status, next_generation_at)` lets the
planner satisfy the scheduler scan with one index sweep. The descending
ordering on `(game_id, started_at DESC)` matches the
`ListByGame ORDER BY started_at DESC` shape already established by
`rtmanager.operationlogstore.ListByGame`.

### 5. `next_generation_at` is nullable

**Decision.** `runtime_records.next_generation_at timestamptz` admits
NULL; `runtime_records.skip_next_tick boolean NOT NULL DEFAULT false`
does not.

**Why.** A row enters the table at register-runtime with
`status='starting'` and no scheduled tick yet — the tick is only
computed once the engine `/admin/init` succeeds and the CAS flips the
status to `running`. NULL captures «no tick scheduled» without forcing
a sentinel value into the column. The scheduler index
`(status, next_generation_at)` still works correctly: the predicate
`next_generation_at <= now()` evaluates to NULL (not true) for NULL
inputs, so PG excludes those rows from the result set, which is the
desired behaviour. `skip_next_tick` is a boolean knob set or cleared by
the force-next-turn flow; NULL would be a third state with no
semantics, so the column is NOT NULL with a `false` default.

### 6. Single-init pre-launch policy applies as documented

**Decision.** `00001_init.sql` evolves in place until first production
deploy. Adding a column, an index, or a new table during the
pre-launch development window edits this file directly rather than
producing `00002_*.sql`. The runtime applies the migration on every
boot; if the schema is already at head, `pkg/postgres`'s goose
adapter exits zero.

**Why.** The schema-per-service architectural rule
([`../../ARCHITECTURE.md` §Persistence Backends](../../ARCHITECTURE.md))
endorses a single-init policy for pre-launch services. The pre-launch
window allows non-additive changes (column rename, type narrowing,
CHECK tightening) that a multi-step migration sequence would force into
awkward two-step rewrites. Once the service ships to production, the
next schema change becomes `00002_*.sql` and the policy lifts.

### 7. `cmd/jetgen` is a one-to-one mirror of `rtmanager/cmd/jetgen`

**Decision.** [`../cmd/jetgen/main.go`](../cmd/jetgen/main.go) follows
the same shape as
[`../../rtmanager/cmd/jetgen/main.go`](../../rtmanager/cmd/jetgen/main.go):
spin up a `postgres:16-alpine` testcontainer, open it as superuser,
provision the role and schema, open a second pool with
`search_path=gamemaster`, apply the embedded goose migrations, then
invoke `github.com/go-jet/jet/v2/generator/postgres.GenerateDB` with
`schema=gamemaster`. Constants differ (`gamemasterservice`,
`gamemaster`, `galaxy_gamemaster`) but the algorithm and helper shape
are intentionally identical.

**Why.** Two PG-backed services should not diverge on a dev-only code
generator that nothing else in the workspace relies on. Mirroring
`rtmanager` keeps `make -C <service> jet` interchangeable for
operators and minimises the cognitive overhead of moving between
services.

### 8. Generated jet code is committed

**Decision.** The output of `make -C gamemaster jet` lands under
[`../internal/adapters/postgres/jet/gamemaster/{model,table}/`](../internal/adapters/postgres/jet/gamemaster)
and is committed verbatim.

**Why.** `go build ./...` from the repository root must work without
Docker; CI runners and contributor machines without a local Docker
daemon must still pass `go test ./gamemaster/...` for the non-PG-store
parts of the module. The generation pipeline itself remains available
behind `make jet` for everyone who wants to regenerate.

### 9. Migrations apply synchronously before any listener opens

**Decision.** [`../internal/app/runtime.go`](../internal/app/runtime.go)
calls `postgres.RunMigrations(ctx, pgPool, migrations.FS(), ".")`
immediately after the `postgres.Ping` succeeds and before
`newWiring`/`internalhttp.NewServer` are constructed. A non-zero exit
on migration failure follows the `pkg/postgres` policy.

**Why.** [`../README.md` §Startup dependencies](../README.md)
specifies that «embedded goose migrations apply synchronously before
any listener opens». Repeated process boots against a head schema
return goose's «no work to do» success — this is how the policy stays
operationally cheap, since a freshly-spawned replica re-applies the
same `00001_init.sql` with no work and proceeds straight to opening
its listeners.

## Files landed

- [`../internal/adapters/postgres/migrations/00001_init.sql`](../internal/adapters/postgres/migrations/00001_init.sql)
  — full schema for the four service tables plus indexes and CHECK
  constraints.
- [`../internal/adapters/postgres/migrations/migrations.go`](../internal/adapters/postgres/migrations/migrations.go)
  — `//go:embed *.sql` and `FS()` exporter.
- [`../cmd/jetgen/main.go`](../cmd/jetgen/main.go) — testcontainers +
  goose + jet pipeline.
- [`../internal/adapters/postgres/jet/gamemaster/`](../internal/adapters/postgres/jet/gamemaster)
  — generated model and table packages.
- [`../internal/app/runtime.go`](../internal/app/runtime.go) — wired
  `postgres.RunMigrations` call after the pool ping.
- [`../Makefile`](../Makefile) — refreshed `jet` target comment now
  that the pipeline is real.
- [`../go.mod`](../go.mod), [`../go.sum`](../go.sum) — promoted
  `github.com/go-jet/jet/v2`, `github.com/testcontainers/testcontainers-go`,
  and `github.com/testcontainers/testcontainers-go/modules/postgres`
  to direct dependencies.
- [`../README.md`](../README.md) — corrected §Persistence Layout
  indexes list (dropped redundant `player_mappings (game_id)` entry)
  and added a §References pointer to this record.

## Verification

- `cd gamemaster && go mod tidy` — no missing dependency, no
  superfluous indirect.
- `make -C gamemaster jet` — brings up `postgres:16-alpine`, applies
  `00001_init.sql`, regenerates `internal/adapters/postgres/jet/...`;
  `git status` is clean after a second run.
- `go build ./gamemaster/...` succeeds (including the generated jet
  code).
- `go test ./gamemaster/...` passes — existing contract, freeze, and
  config/telemetry/HTTP tests are unaffected.
- Manual smoke against a local PostgreSQL with an empty `gamemaster`
  schema and a `gamemasterservice` role: the process applies the
  migration, `/readyz` returns `200`, and a second boot exits zero on
  the «no work to do» path.

@@ -0,0 +1,184 @@

---
stage: 10
title: Domain layer and ports
---

# Stage 10 — Domain layer and ports

This decision record captures the non-obvious choices made while
introducing the in-memory domain model and port interfaces of Game
Master at PLAN Stage 10.

## Context

[`../PLAN.md` Stage 10](../PLAN.md) freezes the domain types and the
port surfaces that adapters (Stages 11/12), services (Stages 13–17),
and workers (Stage 18) will adopt. No adapter or service code lands
here; the stage exists so every consumer of these types in later stages
can import a stable contract.

The reference precedent is `rtmanager`, the most recently landed
PG-backed service. Its
[`internal/domain/`](../../rtmanager/internal/domain) and
[`internal/ports/`](../../rtmanager/internal/ports) directories define
the shape every Stage 10 file follows: `Status string` enums with
`IsKnown` / `AllStatuses`; `*InvalidTransitionError` wrapping
`ErrInvalidTransition`; transition tables keyed by `(from, to)` pairs;
input structs with `Validate()` methods on every store mutation.

Six decisions deviate from a direct copy of `rtmanager` or extend the
literal task list of PLAN Stage 10. Each is recorded below.

## Decisions

### 1. `internal/domain/operation/` is added beyond the literal task list

**Decision.** Stage 10 ships
[`internal/domain/operation/log.go`](../internal/domain/operation/log.go)
with `OperationEntry`, `OpKind`, `OpSource`, and `Outcome` types even
though PLAN Stage 10's bullet list does not enumerate them.

**Why.** The Stage 09
[`00001_init.sql`](../internal/adapters/postgres/migrations/00001_init.sql)
schema already declares CHECK constraints on `op_kind`, `op_source`,
and `outcome`. The
[`ports/operationlog.go`](../internal/ports/operationlog.go) interface
accepts and returns `OperationEntry` values, so the type must either
live in the domain layer or be redefined inside `ports`. The
`rtmanager` precedent
([`rtmanager/internal/domain/operation/log.go`](../../rtmanager/internal/domain/operation/log.go))
treats it as a domain package; mirroring that keeps Game Master's layout
recognisable and lets later service code import a single canonical
type. The alternative (defining the type on the port file) would
duplicate the SQL CHECK enums in two places once Stage 11's adapter
ships and would force every service-layer caller to import the port
package for what is structurally a value type.

### 2. `Membership` lives on `ports/lobbyclient.go`, not in the domain

**Decision.** The DTO consumed by `LobbyClient.GetMemberships` is
declared inside
[`ports/lobbyclient.go`](../internal/ports/lobbyclient.go) rather than a
new `internal/domain/membership/` package.

**Why.** Game Master does not own membership state — Game Lobby does
([`../../ARCHITECTURE.md` §Membership rules](../../ARCHITECTURE.md)).
Anything GM holds about membership is a remote projection used solely
for hot-path authorisation. Treating it as a port-level DTO matches
`rtmanager`'s precedent for cross-service projections
([`rtmanager/internal/ports/lobbyinternal.go:LobbyGameRecord`](../../rtmanager/internal/ports/lobbyinternal.go))
and keeps the domain layer free of types that GM does not author.
Promoting it to a domain package later costs nothing if a real
GM-owned invariant ever attaches to it, but the v1 surface has none.

### 3. `EngineVersion.Options` is `[]byte`, not `map[string]any`

**Decision.**
[`engineversion.EngineVersion.Options`](../internal/domain/engineversion/model.go)
is declared as `[]byte` carrying the raw `jsonb` document.

**Why.** The OpenAPI contract
([`../api/internal-openapi.yaml`](../api/internal-openapi.yaml)) marks
`EngineVersion.options` as `additionalProperties: true` — the engine
owns the schema, GM is a pass-through registry. A `map[string]any` Go
field would encourage callers to introspect or mutate keys, breaking
that pass-through guarantee. `[]byte` matches how `rtmanager` keeps
`Details json.RawMessage` on health snapshots
([`rtmanager/internal/domain/health/snapshot.go`](../../rtmanager/internal/domain/health/snapshot.go))
for the same reason. Schema-aware handling can introduce a typed shape
in a future iteration without disturbing existing rows.

### 4. `Schedule.Next(after, skip)` returns `skipConsumed`, not mutated state

**Decision.** The wrapper at
[`internal/domain/schedule/nexttick.go`](../internal/domain/schedule/nexttick.go)
exposes `Next(after time.Time, skip bool) (time.Time, bool)`. The
boolean return reports whether the skip flag was consumed; the wrapper
itself stores no state.

**Why.** Persisting `skip_next_tick=false` is a column update on the
`runtime_records` row and belongs to the service layer (Stage 15),
together with the `next_generation_at` write. Encapsulating that
mutation inside the schedule wrapper would couple a pure value type to
the store; the boolean return keeps the wrapper trivially testable and
lets the caller (service layer) issue the column update via an
existing `UpdateScheduling` port call.

### 5. The transition table includes `engine_unreachable → running`

**Decision.** The runtime transitions map
([`internal/domain/runtime/transitions.go`](../internal/domain/runtime/transitions.go))
permits `engine_unreachable → running` even though Stage 10's task
list does not introduce a producer for that edge.

**Why.** The Stage 18 health-events consumer
([`../PLAN.md` Stage 18](../PLAN.md)) must be
able to recover an engine that previously appeared unreachable when a
subsequent health observation reports `healthy`. Declaring the edge in
Stage 10 means Stage 18 needs no `transitions.go` edit — the consumer
calls `UpdateStatus` with the existing CAS guard. The alternative
(wait until Stage 18 to add the edge) would couple two unrelated
stages and force a domain-level edit during a worker stage.

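The shape of such a transitions map can be sketched as follows; the
status names follow the README's enum, but only the subset needed to show
the §5 edge appears here (the real seven-status table is larger):

```go
package main

import "fmt"

// Status mirrors the runtime status enum; only three of the seven
// statuses are shown in this illustrative sketch.
type Status string

const (
	StatusRunning           Status = "running"
	StatusEngineUnreachable Status = "engine_unreachable"
	StatusFinished          Status = "finished"
)

// transitions sketches the allowed-edge map; the full edge set lives in
// transitions.go and differs from this subset.
var transitions = map[Status]map[Status]bool{
	StatusRunning:           {StatusEngineUnreachable: true, StatusFinished: true},
	StatusEngineUnreachable: {StatusRunning: true, StatusFinished: true}, // §5 recovery edge
}

// CanTransition reports whether from → to is a declared edge.
func CanTransition(from, to Status) bool { return transitions[from][to] }

func main() {
	fmt.Println(CanTransition(StatusEngineUnreachable, StatusRunning)) // true
}
```
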
### 6. mockgen directives target `internal/adapters/mocks/` (deferred)

**Decision.** Every port file carries a
`//go:generate go run go.uber.org/mock/mockgen
-destination=../adapters/mocks/mock_<file>.go -package=mocks
galaxy/gamemaster/internal/ports <Interface>` directive even though
the destination directory does not exist yet.

**Why.** Stage 12 ships the
[`internal/adapters/mocks/`](../internal/adapters/mocks) directory and
the first regeneration of `make mocks`. Putting the directives in
place during Stage 10 means Stage 12 only adds the directory and the
generated files; no port file has to be edited then. The directives
are inert until the destination directory exists; running
`go generate ./internal/ports/...` before Stage 12 is expected to
fail. The
[`Makefile`](../Makefile)'s `mocks` target already references the
directives, matching the lobby and rtmanager pattern
([`../../lobby/internal/ports/gmclient.go`](../../lobby/internal/ports/gmclient.go),
[`../../rtmanager/internal/ports/dockerclient.go`](../../rtmanager/internal/ports/dockerclient.go)).

## Files landed

- [`../internal/domain/runtime/{model,errors,transitions}.go`](../internal/domain/runtime)
  with seven-status enum, `RuntimeRecord` struct, and the transition
  table from PLAN Stage 10 plus decision §5.
- [`../internal/domain/engineversion/{model,semver}.go`](../internal/domain/engineversion)
  with the registry status enum, `EngineVersion` struct, and the
  `ParseSemver` / `IsPatchUpgrade` helpers.
- [`../internal/domain/playermapping/model.go`](../internal/domain/playermapping/model.go)
  carrying the (game_id, user_id) → race_name + engine_player_uuid
  projection.
- [`../internal/domain/operation/log.go`](../internal/domain/operation/log.go)
  per decision §1.
- [`../internal/domain/schedule/nexttick.go`](../internal/domain/schedule/nexttick.go)
  per decision §4.
- Ten port files under
  [`../internal/ports/`](../internal/ports) covering the runtime
  record, engine version, player mapping, operation log, stream
  offset, engine, lobby, runtime manager, notification publisher, and
  lobby events surfaces.
- Unit tests next to every source file; the suite covers status
  enums, transition matrix, validators, semver normalisation, and
  schedule skip semantics.
- [`../go.mod`](../go.mod) gains direct dependencies on
  `galaxy/cronutil` and `golang.org/x/mod` for the schedule wrapper
  and the semver helpers.

## Verification

- `cd gamemaster && go build ./...` — clean.
- `cd gamemaster && go test ./internal/domain/... ./internal/ports/...`
  — green; transition matrix exhaustively asserts every allowed and
  forbidden pair, semver parser rejects shortened forms, schedule
  wrapper honours both `skip` modes.
- `cd gamemaster && go vet ./internal/...` — clean.
- `gofmt -l gamemaster/internal` — empty.
- Stage 09 contract tests
  ([`../contract_openapi_test.go`](../contract_openapi_test.go),
  [`../contract_asyncapi_test.go`](../contract_asyncapi_test.go),
  [`../notificationintent_audit_test.go`](../notificationintent_audit_test.go))
  remain green; Stage 10 introduces no contract changes.

@@ -0,0 +1,242 @@
---
stage: 11
title: Persistence adapters
---

# Stage 11 — Persistence adapters

This decision record captures the non-obvious choices made while
implementing the four PostgreSQL stores and the Redis offset store of
Game Master at PLAN Stage 11.

## Context

[`../PLAN.md` Stage 11](../PLAN.md) ships the persistence layer that
the service-layer stages (13–17) and the worker stage (18) consume.
Stage 09 already shipped the schema, embedded migration, and the
generated jet code; Stage 10 fixed the domain types and the port
interfaces. Stage 11 plugs concrete adapters into those ports.

The reference precedent is `rtmanager`, the most recently landed
PG-backed service. Its
[`internal/adapters/postgres/`](../../rtmanager/internal/adapters/postgres)
and
[`internal/adapters/redisstate/`](../../rtmanager/internal/adapters/redisstate)
trees define the shape every Stage 11 file follows: per-store package
under `postgres/<store>/store.go`, helper packages under
`internal/sqlx` and `internal/pgtest`, a `Config`/`Store`/`New`
triple, ColumnList-driven canonical SELECTs, and the shared
`sqlx.WithTimeout`/`sqlx.IsNoRows`/`sqlx.IsUniqueViolation` boundary
helpers.

Eight decisions either deviate from a literal copy of `rtmanager` or
extend the literal task list of PLAN Stage 11. Each is recorded below.

## Decisions

### 1. `internal/sqlx` and `internal/pgtest` are local clones, not a shared module

**Decision.**
[`internal/adapters/postgres/internal/sqlx/sqlx.go`](../internal/adapters/postgres/internal/sqlx/sqlx.go)
and
[`internal/adapters/postgres/internal/pgtest/pgtest.go`](../internal/adapters/postgres/internal/pgtest/pgtest.go)
are full copies of `rtmanager`'s sibling files, with the few constants
that name the schema and role (`gamemaster`, `gamemasterservice`,
`galaxy_gamemaster`) replaced verbatim.

**Why.** Each PG-backed service owns its own role, schema, and
migration FS. Promoting these helpers into `pkg/postgres` would force
that package to either know about every schema or take them as
configuration; either path adds surface area for a runtime helper that
already covers exactly one boundary. The `rtmanager` precedent settled
on the per-service clone first, and Game Master mirrors it for the
same architectural reason. The duplication cost is small (≈250 lines
total, mechanical) and the alternative would couple services through a
testing concern that has no business in production code.

### 2. CAS via `(game_id, status)` predicate, not `SELECT … FOR UPDATE`

**Decision.**
[`runtimerecordstore.UpdateStatus`](../internal/adapters/postgres/runtimerecordstore/store.go)
encodes the compare-and-swap as a `WHERE game_id = $1 AND status = $2`
predicate on a single `UPDATE`, then probes the row's existence on
`RowsAffected == 0` to distinguish `runtime.ErrConflict` (status
changed concurrently) from `runtime.ErrNotFound` (row absent).

**Why.** Same reasoning as
[`rtmanager/docs/postgres-migration.md` §CAS](../../rtmanager/docs/postgres-migration.md):
holding a `SELECT … FOR UPDATE` lock would block every other tick on
the same game while the Go code computed the next status, lengthening
the locked region for no correctness gain. The CAS-only path is
verified by `TestUpdateStatusConcurrentCAS` (8 goroutines, exactly one
winner).

### 3. Port-level deviation: `UpdateEngineVersionInput.Now` and `Deprecate(ctx, version, now)`

**Decision.**
[`ports/engineversionstore.go`](../internal/ports/engineversionstore.go)
gains a `Now time.Time` field on `UpdateEngineVersionInput` (validated
by `Validate` to be non-zero) and a `now time.Time` argument on
`Deprecate`. The corresponding port-level test fixtures in
`engineversionstore_test.go` are updated to carry the new value.

**Why.** Stage 10's literal port did not include a wall-clock for the
engine-version mutators, while
[`UpdateStatusInput`](../internal/ports/runtimerecordstore.go) and
[`UpdateSchedulingInput`](../internal/ports/runtimerecordstore.go) do.
Without `Now` in the input, the adapter would have to either call
`time.Now()` directly (losing test determinism) or accept a `Clock`
dependency in `Config` (adding adapter infrastructure for a single use
case). Aligning the inputs is a small, targeted contract change
allowed by the pre-launch single-init policy and consistent with the
clock-from-input convention adopted everywhere else in the service.

### 4. Domain-level conflict sentinels `engineversion.ErrConflict` and `playermapping.ErrConflict`

**Decision.** The domain packages
[`engineversion`](../internal/domain/engineversion/model.go) and
[`playermapping`](../internal/domain/playermapping/model.go) gain
`ErrConflict` sentinels. Adapters surface PostgreSQL unique violations
as `fmt.Errorf("...: %w", <pkg>.ErrConflict)` so service callers can
branch with `errors.Is`.

**Why.** `runtime.ErrConflict` already exists in the runtime package,
and the rest of the codebase (lobby, rtmanager, notification) uses
domain-level conflict sentinels (e.g. `membership.ErrConflict`,
`runtime.ErrConflict`). Returning a generic wrapped error for
engine-version and player-mapping conflicts would break the
established pattern and force the service layer to carry adapter
implementation knowledge (`sqlx.IsUniqueViolation`). Adding two
sentinels is a small, idiomatic deviation from PLAN Stage 11's bullet
list, called out here so future contract diffs do not re-litigate it.

### 5. `Options` jsonb requires explicit `CAST(... AS jsonb)` in dynamic UPDATE

**Decision.** In
[`engineversionstore.Update`](../internal/adapters/postgres/engineversionstore/store.go)
the dynamic assignment for `options` wraps the value in
`pg.StringExp(pg.CAST(pg.String(...)).AS("jsonb"))`. The plain
`pg.String(...)` literal makes PostgreSQL infer the right-hand side as
`text`, and the assignment to a `jsonb` column then fails with
SQLSTATE `42804` (`column is of type jsonb but expression is of type
text`).

**Why.** `INSERT ... VALUES(...)` paths bind the `[]byte` through pgx,
which knows how to coerce text into jsonb at the protocol level.
Dynamic `UPDATE … SET options = '...'` does not go through that bind
because the SQL contains a string literal directly; PostgreSQL applies
its own type inference and fails. Using
[`jet`'s `CAST`](https://pkg.go.dev/github.com/go-jet/jet/v2/postgres#CAST)
is the cleanest way to force the right-hand-side type without dropping
to raw SQL. Storing `'{}'::jsonb` as the empty default mirrors the SQL
column default.

### 6. `Deprecate` is idempotent through a pre-check `Get`

**Decision.**
[`engineversionstore.Deprecate`](../internal/adapters/postgres/engineversionstore/store.go)
runs `Get(version)` first to distinguish three cases: row absent
(return `engineversion.ErrNotFound`), row already deprecated (return
`nil` with no further mutation), row active (run the
`UPDATE ... SET status='deprecated'`). Without the pre-check the
adapter would have to interpret `RowsAffected == 0` against an
ambiguous SQL guard (`WHERE version = ? AND status != 'deprecated'`).

**Why.** Deprecation is a relatively rare admin operation; the extra
read costs roughly one millisecond and removes the ambiguity. The
alternative is the same `classifyMissingUpdate` probe pattern used by
`UpdateStatus`, which would still need a `Get` to tell "missing" from
"already deprecated". The pre-check is the simplest path.

### 7. `BulkInsert` ships every row in one multi-row `INSERT`, not a transaction

**Decision.**
[`playermappingstore.BulkInsert`](../internal/adapters/postgres/playermappingstore/store.go)
emits a single `INSERT ... VALUES (a), (b), …` with as many tuples as
the input slice. Any unique violation rolls back every row in the same
statement.

**Why.** The atomicity guarantee Game Master needs (no partial
roster) is already provided by PostgreSQL's per-statement implicit
transaction; wrapping the same rows in `BEGIN; INSERT; INSERT; COMMIT`
buys nothing and adds round-trips. The multi-row form is also the
only path that lets jet's
[`InsertStatement.VALUES(...)`](https://pkg.go.dev/github.com/go-jet/jet/v2/postgres#InsertStatement)
chain without escape hatches. Atomicity is verified end-to-end by
[`TestBulkInsertAtomicConflictRaceName`](../internal/adapters/postgres/playermappingstore/store_test.go)
(3 valid rows + 1 conflicting → 0 rows persisted).

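What "one statement, many tuples" means for placeholder numbering can be
sketched directly (the column list here is illustrative, not the real
`player_mappings` schema; jet generates the equivalent SQL):

```go
package main

import (
	"fmt"
	"strings"
)

// bulkInsertSQL renders the single multi-row statement; every row adds
// one parenthesised tuple, so a unique violation rolls back all of them.
func bulkInsertSQL(rows int) string {
	const cols = 3 // game_id, user_id, engine_player_uuid (illustrative)
	tuples := make([]string, 0, rows)
	for r := 0; r < rows; r++ {
		ph := make([]string, cols)
		for c := 0; c < cols; c++ {
			ph[c] = fmt.Sprintf("$%d", r*cols+c+1)
		}
		tuples = append(tuples, "("+strings.Join(ph, ", ")+")")
	}
	return "INSERT INTO player_mappings (game_id, user_id, engine_player_uuid) VALUES " +
		strings.Join(tuples, ", ")
}

func main() {
	fmt.Println(bulkInsertSQL(2))
	// INSERT INTO player_mappings (game_id, user_id, engine_player_uuid) VALUES ($1, $2, $3), ($4, $5, $6)
}
```
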
### 8. `miniredis/v2` is a direct gamemaster dependency

**Decision.**
[`go.mod`](../go.mod) gains `github.com/alicebob/miniredis/v2` as a
direct dependency. The
[`streamoffsets` test suite](../internal/adapters/redisstate/streamoffsets/store_test.go)
uses `miniredis.RunT(t)` per test for full isolation.

**Why.** Same reasoning as `rtmanager`: an in-memory Redis is faster
than a testcontainers Redis, fully isolated per test, and fits the
shape of the offset-store API. Adding it as a direct dep matches the
pattern in the repo (`rtmanager`, `notification`, `lobby` all do this
for similar adapter test suites).

## Files landed

- [`../internal/domain/engineversion/model.go`](../internal/domain/engineversion/model.go)
  — `ErrConflict` sentinel.
- [`../internal/domain/playermapping/model.go`](../internal/domain/playermapping/model.go)
  — `ErrConflict` sentinel.
- [`../internal/ports/engineversionstore.go`](../internal/ports/engineversionstore.go)
  — `Now` field, `Deprecate(ctx, version, now)` signature.
- [`../internal/ports/engineversionstore_test.go`](../internal/ports/engineversionstore_test.go)
  — port-level fixtures plus the new `now must not be zero` reject
  case.
- [`../internal/adapters/postgres/internal/sqlx/sqlx.go`](../internal/adapters/postgres/internal/sqlx/sqlx.go)
  — `WithTimeout`, `IsNoRows`, `IsUniqueViolation`, `Nullable*`
  helpers (mirror of `rtmanager`).
- [`../internal/adapters/postgres/internal/pgtest/pgtest.go`](../internal/adapters/postgres/internal/pgtest/pgtest.go)
  — testcontainers harness scoped to the `gamemaster` schema and
  service role.
- [`../internal/adapters/postgres/runtimerecordstore/store.go`](../internal/adapters/postgres/runtimerecordstore/store.go)
  with full `_test.go`.
- [`../internal/adapters/postgres/engineversionstore/store.go`](../internal/adapters/postgres/engineversionstore/store.go)
  with full `_test.go`.
- [`../internal/adapters/postgres/playermappingstore/store.go`](../internal/adapters/postgres/playermappingstore/store.go)
  with full `_test.go`.
- [`../internal/adapters/postgres/operationlog/store.go`](../internal/adapters/postgres/operationlog/store.go)
  with full `_test.go`.
- [`../internal/adapters/redisstate/keyspace.go`](../internal/adapters/redisstate/keyspace.go).
- [`../internal/adapters/redisstate/streamoffsets/store.go`](../internal/adapters/redisstate/streamoffsets/store.go)
  with full `_test.go`.
- [`../go.mod`](../go.mod), [`../go.sum`](../go.sum) — `miniredis/v2`
  promoted to a direct dependency.
- [`../README.md`](../README.md) — §References pointer to this
  record.

## Verification

```sh
cd gamemaster

# Domain + port unit tests still pass after the Stage-11 contract
# touch-ups.
go test ./internal/domain/... ./internal/ports/...

# All adapter test suites (require Docker for testcontainers; without
# Docker, the pgtest helpers call t.Skip).
go test ./internal/adapters/postgres/...
go test ./internal/adapters/redisstate/...

# CAS race coverage with -race; the test must observe exactly one
# winner per run.
go test -count=3 -race -run TestUpdateStatusConcurrentCAS \
  ./internal/adapters/postgres/runtimerecordstore

# Stage 06/07 contract freeze tests stay green:
go test ./... -run Contract
go test ./... -run NotificationIntent
```

The full repo-level `go build ./...` from the workspace root also
succeeds; service-layer stages (13+) and the mocks regeneration
(Stage 12) are unaffected by Stage 11's adapter additions.

@@ -0,0 +1,211 @@
---
stage: 12
title: External clients
---

# Stage 12 — External clients

This decision record captures the non-obvious choices made while
implementing the five outbound adapters Game Master uses to talk to
the engine, Game Lobby, Runtime Manager, the notification stream, and
the lobby-events stream at PLAN Stage 12.

## Context

[`../PLAN.md` Stage 12](../PLAN.md) ships the adapter layer the
service-layer stages 13–18 depend on. Ports were frozen by Stage 10
([`stage10-domain-and-ports.md`](./stage10-domain-and-ports.md)) and
the AsyncAPI/OpenAPI contracts were frozen by Stage 06
([`stage06-contract-files.md`](./stage06-contract-files.md)). The
reference precedent is `rtmanager`'s adapter tree
([`rtmanager/internal/adapters/lobbyclient`](../../rtmanager/internal/adapters/lobbyclient),
[`rtmanager/internal/adapters/notificationpublisher`](../../rtmanager/internal/adapters/notificationpublisher),
[`rtmanager/internal/adapters/healtheventspublisher`](../../rtmanager/internal/adapters/healtheventspublisher)),
which Stage 11 already locked in as the canonical shape for Game
Master persistence adapters. Stage 12 extends that precedent to the
HTTP clients and stream publishers.

Six decisions deviate from a literal copy of the `rtmanager` precedent
or extend the literal task list of PLAN Stage 12. Each is recorded
below.

## Decisions

### 1. Engine client carries no `BaseURL` in `Config`

**Decision.**
[`engineclient.Config`](../internal/adapters/engineclient/client.go)
exposes only `CallTimeout` and `ProbeTimeout`. The engine endpoint
URL is supplied per call from `runtime_records.engine_endpoint`.

**Why.** Game Master operates on N concurrent games at runtime; each
game lives behind its own DNS hostname (`http://galaxy-game-{game_id}:8080`).
Binding a base URL at construction would force a per-game client
instance and complicate the caller. The port already reflects the
right shape (`baseURL` is a method parameter on every method), so the
adapter follows it. The `*http.Client` is shared, so the HTTP
connection pool stays single-instance.

### 2. Two timeouts on the engine client, dispatched per method

**Decision.** The engine client routes turn-generation-class methods
(`Init`, `Turn`, `BanishRace`, `ExecuteCommands`, `PutOrders`)
through `CallTimeout` and inspect-style methods (`Status`,
`GetReport`) through `ProbeTimeout`. Both are required and must be
positive at construction.

**Why.** README §Configuration already declares the two
(`GAMEMASTER_ENGINE_CALL_TIMEOUT=30s`,
`GAMEMASTER_ENGINE_PROBE_TIMEOUT=5s`) for exactly this dispatch:
turn generation on a large game can run for tens of seconds, while
status/report reads are bounded and benefit from a tight ceiling.
A single shared timeout would either starve the long calls or relax
the short ones; the dispatch keeps the contract consistent with the
documented intent.

### 3. Engine `population` (number) decoded into `int` via `math.Round`

**Decision.**
[`engineclient`](../internal/adapters/engineclient/client.go) decodes
each `PlayerState.population` (typed as `number` in `game/openapi.yaml`)
into a private `float64` field, then converts to the port-level `int`
through `int(math.Round(value))`. NaN, infinite, and negative values
are rejected as `ports.ErrEngineProtocolViolation`.

**Why.** The port (Stage 10) and the AsyncAPI for `gm:lobby_events`
both treat population as a non-negative integer; the engine spec is
the only place it is typed as `number`. The engine in practice
returns whole values, but a defensive `math.Round` removes any
floating-point noise that would otherwise propagate to Lobby.
Rejecting NaN/Inf/negative payloads keeps the protocol invariant
explicit at the trust boundary.

### 4. Lobby client walks pagination with a hard page cap

**Decision.**
[`lobbyclient.GetMemberships`](../internal/adapters/lobbyclient/client.go)
walks the `next_page_token` chain transparently with `page_size=200`,
stopping when the upstream response carries an empty
`next_page_token`. A hard cap of 64 pages (`maxPages`) surfaces as
`fmt.Errorf("%w: pagination overflow ...", ports.ErrLobbyUnavailable)`
when crossed.

**Why.** The port contract is "every membership of gameID, in any
status"; the only way to satisfy it across Lobby's paged contract is
to follow the chain. The 64-page cap is a defensive guard against a
broken upstream that keeps issuing tokens; 64 × 200 = 12 800
memberships per game, two orders of magnitude beyond any realistic
Galaxy roster, so legitimate traffic never trips it. Surfacing the
overflow as `ErrLobbyUnavailable` lets the membership cache treat it
the same as any other transport fault.

### 5. RTM client does not introduce `ErrSemverPatchOnly`

**Decision.** RTM's `409 Conflict` with `error_code=semver_patch_only`
is wrapped as `fmt.Errorf("%w: rtm patch: ... (error_code=semver_patch_only)", ports.ErrRTMUnavailable)`
without a dedicated typed sentinel.

**Why.** The Stage 10 port [`RTMClient.Patch`](../internal/ports/rtmclient.go)
declares only `ErrRTMUnavailable`. Adding `ErrSemverPatchOnly` here
would extend the port contract beyond Stage 10's frozen surface, and
the v1 service-layer caller (Stage 17, `adminpatch`) already
validates semver-patch eligibility against `engineversionstore`
before issuing the call. The 409 path is therefore a defence-in-depth
signal, not a primary branch; a single wrapped error keeps the port
narrow and lets the caller match on the message substring if it
ever needs to (today it does not).

### 6. Lobby-events publisher reuses the `rtmanager/healtheventspublisher` shape, with two methods sharing one stream

**Decision.**
[`lobbyeventspublisher.Publisher`](../internal/adapters/lobbyeventspublisher/publisher.go)
exposes `PublishSnapshotUpdate` and `PublishGameFinished`, both
hitting the same Redis Stream key (`cfg.Streams.LobbyEvents`,
default `gm:lobby_events`). Each XADD encodes the same field
vocabulary as `rtmanager/healtheventspublisher`: integer fields are
serialised through `strconv.FormatInt` / `strconv.Itoa`, the
per-player projection is JSON-encoded into one stream field
(`player_turn_stats`), and the discriminator field (`event_type`) is
a string literal pinned to one of the two AsyncAPI const values.
No MAXLEN cap is set on XADD; an empty `PlayerTurnStats` slice is
serialised as the literal `"[]"`. All `time.Time` fields are coerced
to UTC before `UnixMilli()` so the published timestamps match the
contract regardless of caller-supplied timezone.

**Why.** The two messages share one channel per the AsyncAPI spec
([`runtime-events-asyncapi.yaml`](../api/runtime-events-asyncapi.yaml));
the discriminator is the documented dispatch key for Lobby's
consumer. Using the existing field-encoding pattern from
`rtmanager/healtheventspublisher` keeps the wire format consistent
across services and lets Lobby reuse the same XADD-decoding helpers
it already runs against `runtime:health_events`. Setting MAXLEN was
considered and rejected: Game Master never processes the stream
itself, and the Lobby consumer owns its consumer-group offset, so
trimming would risk dropping unconsumed entries. The empty `"[]"`
default keeps the stream entry valid JSON for the field even before
the first turn is generated (when no per-player stats exist yet).

### 7. Defensive Makefile guard for `make mocks` between Stage 12 and Stage 19

**Decision.** The `mocks` Makefile target now skips the
`internal/api/internalhttp/handlers/...` line when that directory
does not yet exist:

```makefile
mocks:
	go generate ./internal/ports/...
	@if [ -d ./internal/api/internalhttp/handlers ]; then \
		go generate ./internal/api/internalhttp/handlers/...; \
	fi
```

**Why.** Stage 08 wired the Makefile to regenerate both port-level
and handler-level mocks, but the handlers directory only appears at
Stage 19. Without the guard, `make mocks` fails with `lstat: no such
file or directory` between Stage 12 and Stage 19 — exactly when GM
is being grown stage by stage. The guard makes the target idempotent
across stages and adds zero cost when the directory is finally
created.

## Files landed

- [`../internal/adapters/engineclient/client.go`](../internal/adapters/engineclient/client.go),
  [`../internal/adapters/engineclient/client_test.go`](../internal/adapters/engineclient/client_test.go)
- [`../internal/adapters/lobbyclient/client.go`](../internal/adapters/lobbyclient/client.go),
  [`../internal/adapters/lobbyclient/client_test.go`](../internal/adapters/lobbyclient/client_test.go)
- [`../internal/adapters/rtmclient/client.go`](../internal/adapters/rtmclient/client.go),
  [`../internal/adapters/rtmclient/client_test.go`](../internal/adapters/rtmclient/client_test.go)
- [`../internal/adapters/notificationpublisher/publisher.go`](../internal/adapters/notificationpublisher/publisher.go),
  [`../internal/adapters/notificationpublisher/publisher_test.go`](../internal/adapters/notificationpublisher/publisher_test.go)
- [`../internal/adapters/lobbyeventspublisher/publisher.go`](../internal/adapters/lobbyeventspublisher/publisher.go),
  [`../internal/adapters/lobbyeventspublisher/publisher_test.go`](../internal/adapters/lobbyeventspublisher/publisher_test.go)
- [`../internal/adapters/mocks/`](../internal/adapters/mocks) — ten
  generated `mockgen` files covering every Stage 10 port (engine,
  lobby, rtm, notification publisher, lobby-events publisher, plus
  the five store/log ports landed by Stage 11).
- [`../Makefile`](../Makefile) — defensive guard on the `mocks`
  target.
- [`../README.md`](../README.md) — §References pointer to this
  record.

## Verification

```sh
cd gamemaster

# Mocks regenerate cleanly with no diff after a second run.
make mocks
git diff --exit-code internal/adapters/mocks

# Adapter-level unit tests against httptest / miniredis.
go test ./internal/adapters/engineclient/...
go test ./internal/adapters/lobbyclient/...
go test ./internal/adapters/rtmclient/...
go test ./internal/adapters/notificationpublisher/...
go test ./internal/adapters/lobbyeventspublisher/...

# Full repo build remains green; Stage 06/07/09–11 contract and
# adapter tests are unaffected.
go test ./...
```

@@ -0,0 +1,230 @@
---
stage: 13
title: Register-runtime service
---

# Stage 13 — Register-runtime service

This decision record captures the non-obvious choices made while
implementing the `register-runtime` service-layer orchestrator at PLAN
Stage 13. The service is the single entry point Game Lobby uses (after
Runtime Manager has reported a successful container start) to install a
freshly started game in Game Master.

## Context

[`../PLAN.md` Stage 13](../PLAN.md) ships the first service-layer stage
of Game Master. It lays down the orchestrator pattern that Stages 14–17
will reuse (engine version registry CRUD, scheduler, hot path, admin
operations). The lifecycle the service drives is frozen by
[`../README.md` §Lifecycles → Register-runtime](../README.md):

1. validate request shape;
2. reject if `runtime_records.{game_id}` already exists;
3. resolve `image_ref` for `target_engine_version`;
4. persist `runtime_records` with `status=starting`;
5. call engine `POST /api/v1/admin/init`;
6. persist `player_mappings` from the engine response;
7. CAS `status: starting → running` and persist initial scheduling;
8. append `operation_log`;
9. publish `runtime_snapshot_update`;
10. return the persisted record.

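The happy path of the ten steps above can be sketched with closure
stand-ins for the real ports (every dependency name in this sketch is
illustrative, and the failure paths with the rollback from decision §1
are elided):

```go
package main

import (
	"errors"
	"fmt"
)

// deps bundles closure stand-ins for the real port interfaces.
type deps struct {
	exists          func(gameID string) bool
	resolveImage    func(version string) (string, error)
	insertStarting  func(gameID, imageRef string) error
	engineInit      func(gameID string) (players []string, err error)
	persistMappings func(gameID string, players []string) error
	casRunning      func(gameID string) error
	appendLog       func(op string)
	publish         func(gameID string)
}

func registerRuntime(d deps, gameID, version string) error {
	if gameID == "" || version == "" { // 1. validate request shape
		return errors.New("invalid request")
	}
	if d.exists(gameID) { // 2. duplicate guard
		return errors.New("already registered")
	}
	imageRef, err := d.resolveImage(version) // 3. resolve image_ref
	if err != nil {
		return err
	}
	if err := d.insertStarting(gameID, imageRef); err != nil { // 4. status=starting
		return err
	}
	players, err := d.engineInit(gameID) // 5. POST /api/v1/admin/init
	if err != nil {
		return err // real flow: roll back runtime_records (decision §1)
	}
	if err := d.persistMappings(gameID, players); err != nil { // 6.
		return err
	}
	if err := d.casRunning(gameID); err != nil { // 7. starting → running
		return err
	}
	d.appendLog("register-runtime") // 8. audit
	d.publish(gameID)               // 9. runtime_snapshot_update
	return nil                      // 10. caller returns the record
}

func main() {
	d := deps{
		exists:          func(string) bool { return false },
		resolveImage:    func(string) (string, error) { return "img:1.2.3", nil },
		insertStarting:  func(string, string) error { return nil },
		engineInit:      func(string) ([]string, error) { return []string{"p1"}, nil },
		persistMappings: func(string, []string) error { return nil },
		casRunning:      func(string) error { return nil },
		appendLog:       func(string) {},
		publish:         func(string) {},
	}
	fmt.Println(registerRuntime(d, "g1", "1.2.3")) // <nil>
}
```
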
The reference precedent is
[`rtmanager/internal/service/startruntime`](../../rtmanager/internal/service/startruntime),
which established the `Input` / `Result` / `Dependencies` / `NewService`
/ `Handle` shape, the `recordFailure` helper, and the
`bestEffortAppend` audit-log convention.

Five decisions deviate from a literal reading of either PLAN Stage 13
or the rtmanager precedent. Each is recorded below.

## Decisions

### 1. `RuntimeRecordStore.Delete` extension

**Decision.** [`ports.RuntimeRecordStore`](../internal/ports/runtimerecordstore.go)
gains an idempotent `Delete(ctx, gameID) error` method. The
PostgreSQL-backed adapter
[`runtimerecordstore.Store.Delete`](../internal/adapters/postgres/runtimerecordstore/store.go)
issues a single `DELETE FROM runtime_records WHERE game_id = $1` and
returns `nil` even when no row matches. The mock at
[`internal/adapters/mocks/mock_runtimerecordstore.go`](../internal/adapters/mocks/mock_runtimerecordstore.go)
is regenerated by `make -C gamemaster mocks`. A lone integration
test `TestDeleteIdempotent` mirrors `TestDeleteByGameIdempotent` in
`playermappingstore`.

**Why.** The README's failure paths for `register-runtime` mandate
"roll back `runtime_records`" on every post-Insert failure. The Stage 10
port surface had no Delete primitive, so the orchestrator could not
satisfy the README without one. Three alternatives were considered
and rejected:

- **Reorder the flow** (call engine init first, only then persist
  `runtime_records`): contradicts the README, which lists the Insert
  step before the engine call so that the in-flight `starting` row is
  observable to inspect surfaces and acts as a coordination point for
  concurrent register-runtime requests on the same game id.
- **Introduce a `removed` status enum**: changes the runtime status
  machine for one transient bookkeeping case; complicates indexes,
  filters, and the inspect surface; and is not described anywhere in
  README §Game Master status model.
- **Single SQL transaction across both stores**: requires the adapter
  layer to expose a transactional sub-interface, breaking the per-port
  abstraction Stage 10 set up. The cost of one extra method on a
  single port is far smaller.

This is the same pattern Stage 11 used for `UpdateEngineVersionInput.Now`
and `Deprecate(ctx, version, now)`: a small, targeted contract delta
admitted by the pre-launch single-init policy.

### 2. Engine 4xx → `engine_validation_error`, engine 5xx →
|
||||
`engine_unreachable`
|
||||
|
||||
**Decision.** When the engine `/admin/init` call returns 4xx, the
|
||||
service produces `Result{ErrorCode: engine_validation_error}`. When it
|
||||
returns 5xx (or fails at the transport layer), the service produces
|
||||
`Result{ErrorCode: engine_unreachable}`. The classification lives in
|
||||
[`classifyEngineError`](../internal/service/registerruntime/service.go)
|
||||
and dispatches on the engine port sentinels
|
||||
(`ports.ErrEngineValidation`, `ports.ErrEngineUnreachable`,
|
||||
`ports.ErrEngineProtocolViolation`).
**Why.** [`../PLAN.md` Stage 13](../PLAN.md) lists the two as separate
test cases ("engine 4xx (engine_validation_error), engine 5xx
(engine_unreachable)"), but [`../README.md` §Lifecycles →
Register-runtime](../README.md)'s failure-path table at the time of
Stage 13 lumped them together as `engine_unreachable`. PLAN's
classification is more useful operationally:

- 4xx from the engine signals a contract violation (the engine
  rejected the request shape, which is a Game Master bug or a stale
  contract). Treating this as `engine_unreachable` would push
  operators down the "is the engine alive?" branch when the right
  branch is "did the GM build send the right shape?".
- 5xx (and transport failures) signal that the engine is unreachable
  or unhealthy. `engine_unreachable` is the right code.

The README §Lifecycles failure-path table is updated in the same
patch to reflect the split, so the two documents agree.

### 3. Engine response validated as `engine_protocol_violation`

**Decision.** After a successful engine `/admin/init` HTTP response,
the service performs two extra checks before persisting any
`player_mappings`:

- the number of returned players must equal the input roster size;
- the set of `RaceName` values returned must exactly match the
  roster (no extra races, no missing races).

A failure on either check rolls back the runtime record and returns
`Result{ErrorCode: engine_protocol_violation}`.
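The two checks can be sketched as a pure function; the `Player` struct below is a stand-in for the engine response row, not the actual wire type:

```go
package main

import "fmt"

// Player is the minimal slice of the engine /admin/init response the
// check needs; the exact struct shape is assumed for illustration.
type Player struct {
	RaceName string
}

// validateInitResponse enforces both protocol checks: one player per
// roster entry, and returned race names exactly matching the roster.
func validateInitResponse(roster []string, players []Player) error {
	if len(players) != len(roster) {
		return fmt.Errorf("engine_protocol_violation: got %d players, roster has %d",
			len(players), len(roster))
	}
	want := make(map[string]bool, len(roster))
	for _, race := range roster {
		want[race] = true
	}
	for _, p := range players {
		if !want[p.RaceName] {
			return fmt.Errorf("engine_protocol_violation: race %q not in roster", p.RaceName)
		}
		delete(want, p.RaceName) // also rejects duplicate races
	}
	// Equal lengths plus unique, roster-known races imply nothing is missing.
	return nil
}

func main() {
	roster := []string{"feds", "klingons"}
	fmt.Println(validateInitResponse(roster, []Player{{"feds"}, {"klingons"}})) // nil
	fmt.Println(validateInitResponse(roster, []Player{{"feds"}, {"borg"}}))     // error
}
```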
**Why.** The README's failure-path table includes
`engine_protocol_violation` for "engine response missing players or
contains races not in roster". The engine adapter ([Stage 12,
`engineclient.decodeStateResponse`](../internal/adapters/engineclient/client.go))
validates the wire shape (presence of required fields, well-formed
numeric values), but it cannot validate against the roster Game Master
sent — only the service layer knows the roster. Splitting the two
checks keeps the adapter narrow and lets the service-layer error code
carry the semantic meaning.

### 4. Initial `runtime_snapshot_update` carries non-empty `player_turn_stats`

**Decision.** The first `runtime_snapshot_update` published by
register-runtime carries one
`PlayerTurnStats{UserID, Planets, Population}` row per active member,
projected from the `engine.Init` response by joining on `RaceName`
against the input roster. The projection is sorted by `UserID` for a
deterministic wire order.
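A sketch of the projection, assuming a simple `race → user` lookup built from the roster (the engine-row struct is illustrative, not the real wire type):

```go
package main

import (
	"fmt"
	"sort"
)

// PlayerTurnStats mirrors the event row named in the text.
type PlayerTurnStats struct {
	UserID     string
	Planets    int
	Population int
}

// enginePlayer stands in for one engine /admin/init response row.
type enginePlayer struct {
	RaceName   string
	Planets    int
	Population int
}

// projectTurnZero joins engine players onto the roster's race→user
// mapping and sorts by UserID for a deterministic wire order.
func projectTurnZero(raceToUser map[string]string, players []enginePlayer) []PlayerTurnStats {
	stats := make([]PlayerTurnStats, 0, len(players))
	for _, p := range players {
		userID, ok := raceToUser[p.RaceName]
		if !ok {
			continue // protocol validation has already rejected unknown races
		}
		stats = append(stats, PlayerTurnStats{UserID: userID, Planets: p.Planets, Population: p.Population})
	}
	sort.Slice(stats, func(i, j int) bool { return stats[i].UserID < stats[j].UserID })
	return stats
}

func main() {
	raceToUser := map[string]string{"feds": "u2", "klingons": "u1"}
	players := []enginePlayer{
		{RaceName: "feds", Planets: 1, Population: 100},
		{RaceName: "klingons", Planets: 1, Population: 100},
	}
	fmt.Println(projectTurnZero(raceToUser, players)) // u1's row sorts first
}
```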
**Why.** The README §Async Stream Contracts cadence note used to read
"empty when the snapshot is published for a status transition with no
new turn payload". For register-runtime there *is* a new payload — the
engine returns the initial player state in its `/admin/init` response,
including `Planets` and `Population`. That state is the turn-0
baseline against which Lobby's per-game stats aggregator measures
later deltas: without it, the first per-player delta after turn 1
would silently equal "everything" instead of "the change since
turn 0". The README cadence wording is updated in the same patch to
say the register-runtime snapshot carries the engine's turn-0 stats.

### 5. Best-effort rollback with two-flag gating

**Decision.** The service exposes a single `rollback(ctx, gameID,
playerMappingsInstalled)` helper that always tries `runtime_records.Delete`
and conditionally tries `playermappings.DeleteByGame`. The two booleans
on `recordFailure` (`runtimeInserted`, `playerMappingsInstalled`)
gate the rollback so that:

- a pre-Insert failure (`invalid_request`, `conflict` from `Get`,
  `engine_version_not_found`, `Insert`'s own `ErrConflict`) skips
  rollback entirely;
- a post-Insert / pre-BulkInsert failure deletes only the runtime
  row;
- a post-BulkInsert failure deletes both. Note that `BulkInsert` errors
  themselves never install rows (per Stage 11 D7's per-statement
  atomicity), so when `BulkInsert` returns `ErrConflict` the rollback
  flag for `player_mappings` stays `false`.

The rollback uses a fresh `context.Background()` with a 5-second
timeout so a cancelled request context does not strand the
`starting` row.
**Why.** A common pitfall in rollback paths is to call `Delete` on
state owned by another caller. The Insert-conflict branch is the
canonical example: when our `Insert` returns `ErrConflict`, another
request inserted the row first and owns it. Blindly deleting it
would corrupt that other caller's state. The two-flag gating makes
the ownership transfer explicit. The fresh background context
mirrors the same pattern in `rtmanager.startruntime.releaseLease`.

## Files landed

- [`../internal/ports/runtimerecordstore.go`](../internal/ports/runtimerecordstore.go)
  — added `Delete` to the interface and the comment block.
- [`../internal/adapters/postgres/runtimerecordstore/store.go`](../internal/adapters/postgres/runtimerecordstore/store.go)
  — implemented `Delete`.
- [`../internal/adapters/postgres/runtimerecordstore/store_test.go`](../internal/adapters/postgres/runtimerecordstore/store_test.go)
  — added `TestDeleteIdempotent` and `TestDeleteRejectsEmptyGameID`.
- [`../internal/adapters/mocks/mock_runtimerecordstore.go`](../internal/adapters/mocks/mock_runtimerecordstore.go)
  — regenerated.
- [`../internal/service/registerruntime/service.go`](../internal/service/registerruntime/service.go)
  with [`errors.go`](../internal/service/registerruntime/errors.go)
  and [`service_test.go`](../internal/service/registerruntime/service_test.go)
  — new orchestrator package and tests.
- [`../README.md`](../README.md) — §References pointer to this record
  plus one-line clarifications in §Lifecycles → Register-runtime
  (failure-path table now splits 4xx/5xx per **D2**) and §Async Stream
  Contracts (cadence note now says the register-runtime snapshot
  carries `player_turn_stats` from the engine-init response per **D4**).
- [`../PLAN.md`](../PLAN.md) — Stage 13 marked done.

## Verification

```sh
cd gamemaster

# Mocks regenerate cleanly with no diff after the port extension.
make mocks
git diff --exit-code internal/adapters/mocks

# Domain + port tests still pass.
go test ./internal/domain/... ./internal/ports/...

# Adapter test for the new Delete method.
go test ./internal/adapters/postgres/runtimerecordstore/...

# Service-level tests for the new orchestrator.
go test ./internal/service/registerruntime/...

# Stage 06/07/09–12 contract / adapter / freeze tests stay green.
go test ./...
```

The full repo-level `go build ./...` from the workspace root succeeds;
later stages (14+) build on the orchestrator shape Stage 13
establishes.
@@ -0,0 +1,220 @@
---
stage: 14
title: Engine version registry service
---

# Stage 14 — Engine version registry service

This decision record captures the non-obvious choices made while
implementing the `engine_version` registry service layer at PLAN
Stage 14. The service backs the
`/api/v1/internal/engine-versions/*` REST surface (Stage 19) and the
hot-path `image_ref` resolve called synchronously by Game Lobby's
start flow.

## Context

[`../PLAN.md` Stage 14](../PLAN.md) lists seven service methods:
`List`, `Get`, `Create`, `Update`, `Deprecate`, `Delete`,
`ResolveImageRef`. The lifecycle the service drives is frozen by
[`../README.md` §Engine Version Registry](../README.md). The reference
precedent for shape and audit semantics is
[`../internal/service/registerruntime`](../internal/service/registerruntime/service.go),
landed at Stage 13.

Five decisions deviate from a literal reading of either Stage 14 or
the existing port and migration shapes. Each is recorded below.

## Decisions

### 1. `EngineVersionStore.Delete` extension

**Decision.** [`ports.EngineVersionStore`](../internal/ports/engineversionstore.go)
gains a `Delete(ctx, version) error` method that returns
`engineversion.ErrNotFound` when no row matches. The PostgreSQL-backed
adapter [`engineversionstore.Store.Delete`](../internal/adapters/postgres/engineversionstore/store.go)
issues a single `DELETE FROM engine_versions WHERE version = $1` and
distinguishes "missing" from "removed" via `RowsAffected`. The mock at
[`internal/adapters/mocks/mock_engineversionstore.go`](../internal/adapters/mocks/mock_engineversionstore.go)
is regenerated by `make -C gamemaster mocks`. Three adapter tests
(`TestDeleteHappy`, `TestDeleteNotFound`, `TestDeleteRejectsEmptyVersion`)
mirror the pattern from the existing Deprecate tests.
**Why.** Stage 14 explicitly requires the service to expose a hard
`Delete` distinct from `Deprecate`. The Stage 11 port surface only
carried `Deprecate` (idempotent soft-mark) and
`IsReferencedByActiveRuntime` (read probe). Three alternatives were
considered and rejected:

- **Skip hard delete**: omits a Stage 14 deliverable and forces a port
  delta later. The OpenAPI 409 `engine_version_in_use` example would
  also become a dangling spec entry.
- **Reuse `Deprecate` for both soft and hard semantics**: contradicts
  README §Engine Version Registry ("`status` values: ... `deprecated`
  (rejected on new starts; existing runtimes unaffected)"). A
  referenced version must remain deprecable so the operator can phase
  in a successor while existing runtimes finish out — folding the
  reference check into Deprecate would break that flow.
- **Inline the SQL inside the service**: contradicts the per-port
  abstraction Stage 10 set up; the service must not import the jet
  table package.

This is the same pattern Stage 13 D1 used for
`RuntimeRecordStore.Delete`: a small, targeted contract delta admitted
by the pre-launch single-init policy.

### 2. Hard-delete reference probe runs before adapter `Delete`

**Decision.** [`Service.Delete`](../internal/service/engineversion/service.go)
calls `versions.IsReferencedByActiveRuntime` first; on a positive
result it surfaces `ErrInUse` without ever calling the adapter
`Delete`. Only when the probe reports zero references does the service
issue the SQL DELETE.
**Why.** Two alternatives were rejected:

- **Single transaction with `SELECT ... FOR UPDATE` plus DELETE**:
  requires the adapter to expose a transactional sub-interface and
  forces the service into store-internal locking semantics. The plan
  is single-instance (README §Non-Goals), so the small race window
  between probe and delete is acceptable and self-correcting (a
  late-arriving register-runtime against a deprecated version would
  fail at `runtime_records` insert anyway because the version row is
  gone — the eventual outcome is the same).
- **Probe-after-delete**: leaks the DELETE on transient probe
  failures and surfaces a misleading "deleted" outcome to the caller.

Surfacing `engine_version_in_use` before any mutation matches the
README §Error Model wording and the OpenAPI `EngineVersionInUseError`
example.

### 3. `engine_version_delete` op kind added to schema and domain

**Decision.** A new audit value `engine_version_delete` is added to:

- [`domain/operation.OpKind`](../internal/domain/operation/log.go)
  (constant, `IsKnown`, `AllOpKinds`);
- [`migrations/00001_init.sql`](../internal/adapters/postgres/migrations/00001_init.sql)
  (the `operation_log_op_kind_chk` CHECK constraint);
- README §Persistence Layout (the `op_kind` enum listing in the
  `operation_log` description).

The pre-launch single-init policy from
[`../../ARCHITECTURE.md` §Persistence Backends](../../ARCHITECTURE.md)
allows editing `00001_init.sql` until first production deploy.

**Why.** Two alternatives were rejected:

- **Reuse `engine_version_deprecate`** for hard delete: semantically
  weak; audit consumers would have to inspect outcome plus an
  out-of-band column to tell soft from hard, defeating the audit's
  signal value.
- **Skip audit for hard delete**: inconsistent with every other
  service-layer mutation (every Stage 13/14 mutation writes
  `operation_log`). Forensics on a destructive admin action are exactly
  where audit matters most.
### 4. `operation_log.game_id` column doubles as audit subject

**Decision.** Engine-version CRUD audit entries store the canonical
`version` string in the `OperationEntry.GameID` field (and therefore
in the `operation_log.game_id` column). For `OpKindEngineVersionCreate`
the canonical post-`ParseSemver` form is used (`v1.2.3`); for
`OpKindEngineVersionUpdate` / `Deprecate` / `Delete` the user-supplied
version is used so failed lookups still record the attempt verbatim.

**Why.** Three alternatives were considered and rejected:

- **Make `game_id` nullable and add a `subject_id` column**: requires
  a migration delta + jet regeneration + a domain field rename. Out
  of scope for Stage 14 and inconsistent with the minimal-diff
  principle.
- **Use a sentinel `engine_version:<v>` prefix**: harder to query
  alongside per-game audit reads; the index
  `operation_log (game_id, started_at DESC)` already covers
  subject-scoped reads, and a sentinel prefix would force callers to
  strip it.
- **Skip audit for engine-version CRUD**: README §Persistence Layout
  explicitly lists `engine_version_create | engine_version_update |
  engine_version_deprecate` as op_kind values; the audit table is
  the canonical surface.

The decision is recorded both here and in the README §Persistence
Layout note so future readers can find the overload rationale.

### 5. JSON-object validation for `Options`

**Decision.** [`Service.Create`](../internal/service/engineversion/service.go)
and `Service.Update` validate the `Options` byte slice as a JSON
object before persisting (raw bytes are decoded into
`map[string]any`; non-objects, including arrays and scalars, are
rejected with `invalid_request`). Empty/whitespace-only input passes
through as nil; the adapter (Stage 11 D5) already substitutes the
schema default `'{}'::jsonb`.
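A minimal sketch of that check (the error wording and the extra `null` guard are assumptions; decoding into `map[string]any` rejects arrays and scalars by itself):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// validateOptions passes empty/whitespace-only input through as nil
// (the adapter substitutes the '{}'::jsonb default) and otherwise
// requires the bytes to decode as a JSON object.
func validateOptions(raw []byte) ([]byte, error) {
	if strings.TrimSpace(string(raw)) == "" {
		return nil, nil
	}
	var decoded map[string]any
	if err := json.Unmarshal(raw, &decoded); err != nil {
		// Covers malformed JSON and non-objects alike: arrays and
		// scalars fail to unmarshal into a map.
		return nil, fmt.Errorf("invalid_request: options must be a JSON object: %w", err)
	}
	if decoded == nil {
		// JSON `null` unmarshals cleanly into a nil map; reject it too.
		return nil, fmt.Errorf("invalid_request: options must be a JSON object, got null")
	}
	return raw, nil
}

func main() {
	for _, in := range []string{`{"fog_of_war": true}`, `[1,2]`, `42`, `null`, `  `} {
		out, err := validateOptions([]byte(in))
		fmt.Printf("%q -> out=%q err=%v\n", in, out, err)
	}
}
```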
**Why.** The `engine_versions.options` column is `jsonb`. Persisting
an array, scalar, or malformed JSON would either be rejected by the
PostgreSQL parser at INSERT time (surfacing as a generic 500) or
accepted and break engine-side consumers that expect an object. The
service-layer validation surfaces a clear `invalid_request` early and
keeps the contract honest. README §Engine Version Registry already
describes `options` as a "free-form `jsonb` document" (object
implied); the validation makes that wording load-bearing.

## Files landed

- [`../internal/ports/engineversionstore.go`](../internal/ports/engineversionstore.go)
  — added `Delete` to the interface and the comment block.
- [`../internal/adapters/postgres/engineversionstore/store.go`](../internal/adapters/postgres/engineversionstore/store.go)
  — implemented `Delete`.
- [`../internal/adapters/postgres/engineversionstore/store_test.go`](../internal/adapters/postgres/engineversionstore/store_test.go)
  — added `TestDeleteHappy`, `TestDeleteNotFound`,
  `TestDeleteRejectsEmptyVersion`.
- [`../internal/adapters/mocks/mock_engineversionstore.go`](../internal/adapters/mocks/mock_engineversionstore.go)
  — regenerated.
- [`../internal/adapters/postgres/migrations/00001_init.sql`](../internal/adapters/postgres/migrations/00001_init.sql)
  — added `engine_version_delete` to `operation_log_op_kind_chk`.
- [`../internal/domain/operation/log.go`](../internal/domain/operation/log.go)
  with [`log_test.go`](../internal/domain/operation/log_test.go)
  — added `OpKindEngineVersionDelete` plus `IsKnown`/`AllOpKinds`
  membership.
- [`../internal/service/engineversion/service.go`](../internal/service/engineversion/service.go)
  with [`errors.go`](../internal/service/engineversion/errors.go)
  and [`service_test.go`](../internal/service/engineversion/service_test.go)
  — new orchestrator package and tests.
- [`../internal/service/registerruntime/service_test.go`](../internal/service/registerruntime/service_test.go)
  — `fakeEngineVersions` gains a stub `Delete` to satisfy the
  extended port.
- [`../README.md`](../README.md) — §References pointer to this
  record; §Persistence Layout note that engine-version CRUD audit
  entries store `version` in the `game_id` column and that
  `engine_version_delete` joins the op_kind enum.
- [`../PLAN.md`](../PLAN.md) — Stage 14 marked done.

## Verification

```sh
cd gamemaster

# Mocks regenerate cleanly with no diff after the port extension is
# committed alongside this stage.
make mocks
git diff --exit-code internal/adapters/mocks

# Domain + port tests still pass (operation log enum membership).
go test ./internal/domain/... ./internal/ports/...

# Adapter test for the new Delete method and the migration's CHECK
# constraint.
go test ./internal/adapters/postgres/engineversionstore/...
go test ./internal/adapters/postgres/operationlog/...

# Service-level tests for the new orchestrator.
go test ./internal/service/engineversion/...

# Stage 13 service tests still pass (the fake gains a stub Delete).
go test ./internal/service/registerruntime/...

# Repo build succeeds at the workspace root.
go build ./...
```
@@ -0,0 +1,297 @@
---
stage: 15
title: Scheduler, turn generation, and snapshot publisher
---

# Stage 15 — Scheduler, turn generation, and snapshot publisher

This decision record captures the non-obvious choices made while
implementing the scheduler ticker, the turn-generation orchestrator,
and the publication of `gm:lobby_events` plus `notification:intents`
at PLAN Stage 15. It is the heart of Game Master: every running game
flows through this code path on every scheduled or admin-forced turn.

## Context

[`../PLAN.md` Stage 15](../PLAN.md) ships three components that
together drive a turn:

1. `service/turngeneration` — the orchestrator that CASes `running →
   generation_in_progress`, calls the engine `/admin/turn`, branches
   on `finished`, and publishes a `runtime_snapshot_update` /
   `game_finished` event plus the corresponding `game.turn.ready` /
   `game.finished` / `game.generation_failed` notification.
2. `service/scheduler` — a thin, stateless wrapper around
   `domain/schedule.Schedule.Next` reused by the turn-generation
   recompute step and (in Stage 17) by `service/adminforce`.
3. `worker/schedulerticker` — the 1-second loop that scans
   `runtime_records.ListDueRunning(now)` and dispatches one
   `turngeneration.Handle` per due game.

The lifecycle the orchestrator drives is frozen by
[`../README.md` §Lifecycles → Turn generation](../README.md), and the
publication cadence by [§Async Stream Contracts](../README.md) and
[§Notification Contracts](../README.md). The reference precedent for
the orchestrator shape (Input / Result / Dependencies / NewService /
Handle) is Stage 13's `service/registerruntime`.

Seven decisions deviate from a literal reading of either PLAN Stage 15,
the README, or the Stage 13 precedent. Each is recorded below.

## Decisions
### D1. Resolve `game_name` synchronously from Lobby per notification

**Decision.** [`ports.LobbyClient`](../internal/ports/lobbyclient.go)
gains a `GetGameSummary(ctx, gameID) (GameSummary, error)` method plus
a narrow `GameSummary{GameID, GameName, Status}` type. The
HTTP-backed adapter at
[`internal/adapters/lobbyclient/client.go`](../internal/adapters/lobbyclient/client.go)
issues a `GET /api/v1/internal/games/{game_id}` against the Lobby
internal listener, decodes the `GameRecord` shape (Lobby's frozen
contract), and wraps every non-success outcome with
`ports.ErrLobbyUnavailable`. The `turngeneration` service calls it
before publishing each `notification:intents` entry; on any error the
orchestrator falls back to using `game_id` as `game_name` and logs a
`warn` event with `error_code=lobby_unavailable`.

**Why.** `notificationintent.GameTurnReadyPayload`,
`GameFinishedPayload`, and `GameGenerationFailedPayload` all require a
`game_name` string, but Game Master does not own the platform name and
the `register-runtime` envelope does not carry it. Three alternatives
were considered and rejected:

- **Extend the `register-runtime` contract with `game_name` and
  persist it on `runtime_records`.** Cleanest architecturally, but
  requires editing the Stage 06 frozen OpenAPI spec, the contract
  test, the Stage 09 migration, the Stage 10 domain type, the
  Stage 11 store and tests, the Stage 13 register-runtime service and
  tests, and the regenerated jet code. Substantial cross-stage churn
  for a single denormalised string.
- **Use `game_id` as the `game_name` placeholder unconditionally.**
  Zero change cost, but every push notification a user receives
  carries the opaque platform identifier — a user-visible regression.
- **Defer notification publication to Stage 16.** Contradicts the
  PLAN Stage 15 task list, which explicitly enumerates
  `game.turn.ready`, `game.finished`, and `game.generation_failed`
  publication.

The chosen design adds one method and one return type to a port
already established in Stage 12, with fail-soft fallback semantics
that keep notification publication best-effort.

### D2. `Trigger` parameter classifies telemetry, never logic

**Decision.** The plan's input shape `{gameID, trigger ∈ {scheduler,
force}}` is preserved as `turngeneration.Input.Trigger`. The value
flows into the `gamemaster.turn_generation.outcomes` counter as a
`trigger` label and into structured logs; it does **not** branch the
orchestrator's persistence path. The skip-tick mechanic is driven
exclusively by the runtime record's `skip_next_tick` column.

**Why.** [`../README.md` §Force-next-turn](../README.md) describes
adminforce as: "Run the turn-generation flow synchronously (the same
code path the scheduler uses). After success, set
`runtime_records.skip_next_tick = true`." Adminforce flips the flag
*after* the forced turn completes; the *next* scheduler-driven
generation consumes it. Forking the orchestrator on `Trigger` would
duplicate the recompute logic in two places and reopen the question
"what if a force fires while `skip_next_tick` is already true?".
Single-path makes the answer fall out of the existing rule (read the
flag at start, clear at recompute) without special cases.
### D3. Two-CAS pattern with cleanup on engine failure

**Decision.** Persistence steps mirror Stage 13's CAS-then-rollback
pattern with two CAS transitions per generation:

1. `running → generation_in_progress` at the start. On
   `runtime.ErrConflict` (concurrent stop / external mutation) the
   orchestrator returns `Result{ErrorCode: conflict}` without
   publishing events; the external mutation is responsible for its
   own snapshot.
2. After the engine call:
   - success + `finished=true` → `generation_in_progress → finished`;
   - success + `finished=false` → `generation_in_progress → running`;
   - engine error → `generation_in_progress → generation_failed`.

The post-engine CAS surfaces `runtime.ErrConflict` only when an
external mutation (typical cause: admin issued a stop while the engine
was generating) overtook the orchestrator. The engine call has
already mutated state, but the runtime row is owned by the new actor;
the orchestrator records the audit failure with `conflict` and exits.

**Why.** This keeps Stage 13's pattern intact: every CAS knows what
state the row should be in before the call, and a mismatch always
yields `conflict`. Mixing the two CAS guards with a single combined
status update (e.g., a transactional "running and not stopped") would
require the adapter to expose multi-status CAS predicates, breaking
the per-row CAS abstraction Stage 11 settled on.

### D4. Snapshot cadence: one publication per outcome

**Decision.** The orchestrator publishes exactly one
`runtime_snapshot_update` *or* `game_finished` per turn-generation
call:

- success + not finished → `PublishSnapshotUpdate` with full
  `player_turn_stats`;
- success + finished → `PublishGameFinished` with full
  `player_turn_stats`;
- engine failure → `PublishSnapshotUpdate` with
  `RuntimeStatus=generation_failed` and empty `player_turn_stats`
  (no fresh engine payload).

The intermediate `running → generation_in_progress` transition is
**not** broadcast.

**Why.** The README cadence enumerates "transitioned" cases as
examples (`running ↔ generation_in_progress`), but PLAN Stage 15
explicitly anchors publication on the outcome side. Publishing twice
would double Lobby's processing cost without delivering new
information, because `generation_in_progress` carries no fresh engine
state and Lobby cannot act on the in-progress moment.

### D5. Notification recipients = `playermappingstore.ListByGame`

**Decision.** `game.turn.ready` and `game.finished` use
`AudienceKindUser` and need a sorted, unique, non-empty
`recipient_user_ids` list. The orchestrator derives it from
`playermappingstore.ListByGame(gameID)` projected to `UserID` values,
deduplicated and sorted ascending. Empty rosters cause the
notification to be skipped silently with a `warn` log; the runtime
mutation persists.
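The projection is a small pure function; the `mapping` struct below is a stand-in for the store's row type, of which only `UserID` matters here:

```go
package main

import (
	"fmt"
	"sort"
)

type mapping struct{ UserID string }

// recipientUserIDs projects player-mapping rows to a sorted, unique
// recipient list; an empty result tells the caller to skip the
// notification (with a warn log) rather than publish an invalid intent.
func recipientUserIDs(rows []mapping) []string {
	seen := make(map[string]bool, len(rows))
	ids := make([]string, 0, len(rows))
	for _, r := range rows {
		if !seen[r.UserID] {
			seen[r.UserID] = true
			ids = append(ids, r.UserID)
		}
	}
	sort.Strings(ids)
	return ids
}

func main() {
	rows := []mapping{{"u2"}, {"u1"}, {"u2"}}
	fmt.Println(recipientUserIDs(rows)) // [u1 u2]
	fmt.Println(len(recipientUserIDs(nil)) == 0)
}
```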
**Why.** This is the only roster data Game Master owns until Stage 16
delivers the membership cache. After Stage 17 wires `banish`, the
`player_mappings` rows still represent the engine-known roster and
remain a correct conservative recipient set (banished members will be
filtered separately by Notification Service's user resolution if
absent in `User Service`). Adding a synchronous Lobby
`GetMemberships` call here would duplicate the work Stage 16 is
already on the hook to provide.

### D6. Scheduler service is a stateless utility

**Decision.**
[`service/scheduler.Service`](../internal/service/scheduler/service.go)
exposes a single `ComputeNext(turnSchedule, after, skipNextTick)
(time.Time, bool, error)` method that wraps `schedule.Parse(...).Next(after,
skipNextTick)`. The service holds no dependencies and no clock; the
caller passes `after`. `turngeneration` injects a
`*scheduler.Service` and uses it during the post-success recompute;
Stage 17 will reuse the same instance from `adminforce`.
**Why.** Centralising the parse-then-next sequence keeps the skip rule
in one place and makes the future Stage 17 caller trivial. Holding no
state means tests are pure value tests against the `domain/schedule`
wrapper; no clock injection or dependency wiring is required.

### D7. Per-game in-flight set on the scheduler ticker

**Decision.**
[`worker/schedulerticker.Worker`](../internal/worker/schedulerticker/worker.go)
holds a `sync.Map` keyed by game id of currently-dispatched games. At
each tick the worker scans `RuntimeRecords.ListDueRunning(now)` and
launches one goroutine per due game; if `LoadOrStore` reports the game
is already in-flight, the worker logs at `debug` and skips. The
goroutine releases the slot via `defer w.inflight.Delete(gameID)`.
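The guard can be sketched as follows; only the dispatch path is shown (the real worker scans `ListDueRunning` and calls `turngeneration.Handle` inside the goroutine):

```go
package main

import (
	"fmt"
	"sync"
)

// Worker sketches the ticker's per-game in-flight set.
type Worker struct {
	inflight sync.Map // game id -> struct{}
	wg       sync.WaitGroup
}

// dispatch launches handle(gameID) unless that game is already
// in-flight; LoadOrStore makes the check-and-claim a single atomic step.
func (w *Worker) dispatch(gameID string, handle func(string)) bool {
	if _, loaded := w.inflight.LoadOrStore(gameID, struct{}{}); loaded {
		return false // already in-flight: log at debug and skip
	}
	w.wg.Add(1)
	go func() {
		defer w.wg.Done()
		defer w.inflight.Delete(gameID) // release the slot
		handle(gameID)
	}()
	return true
}

// Wait drains in-flight work; Run waits on the same group on shutdown.
func (w *Worker) Wait() { w.wg.Wait() }

func main() {
	var w Worker
	gate := make(chan struct{})

	fmt.Println(w.dispatch("game-1", func(string) { <-gate })) // true: claimed
	fmt.Println(w.dispatch("game-1", func(string) {}))         // false: still in-flight
	close(gate)
	w.Wait()
	fmt.Println(w.dispatch("game-1", func(string) {})) // true: slot released
	w.Wait()
}
```

The claim happens synchronously inside `dispatch` (before the goroutine starts), so a second tick observing the same due row deterministically skips it.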
|
||||
|
||||
**Why.** A 1-second tick is shorter than typical engine call latency
|
||||
plus PostgreSQL round-trips, so two ticks can observe the same due row
|
||||
before the first completes. The CAS in `turngeneration` is the
|
||||
authoritative protection (only one goroutine can flip `running →
|
||||
generation_in_progress`), but two goroutines doing the engine call and
|
||||
discarding the loser as `conflict` would waste an engine call and
|
||||
inflate `engine_validation_error` / `engine_unreachable` counters with
|
||||
spurious entries. The in-flight set is a 4-line optimisation that
|
||||
removes the spurious work.
|
||||
|
||||
`Worker.Wait` exposes the in-flight `sync.WaitGroup` so tests (and
|
||||
Stage 19's wiring) can drive `Tick` deterministically and observe
|
||||
completion. `Run` itself waits on the same group before returning so
|
||||
context cancellation gracefully drains in-flight work.
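
The `LoadOrStore` / deferred-`Delete` pattern can be sketched as follows; `dispatcher`, `tick`, and the blocking channels are illustrative names, not the real worker's API:

```go
package main

import (
	"fmt"
	"sync"
)

// dispatcher sketches the per-game in-flight set: LoadOrStore makes
// one tick the owner of a game; later ticks that observe the same
// due row skip it until the owning goroutine releases the slot.
type dispatcher struct {
	inflight sync.Map // gameID -> struct{}
	wg       sync.WaitGroup
}

// tick launches one goroutine per due game unless that game is
// already in flight.
func (d *dispatcher) tick(due []string, generate func(string)) {
	for _, gameID := range due {
		if _, loaded := d.inflight.LoadOrStore(gameID, struct{}{}); loaded {
			continue // already dispatched; the real worker logs at debug
		}
		d.wg.Add(1)
		go func(id string) {
			defer d.wg.Done()
			defer d.inflight.Delete(id) // release the slot
			generate(id)
		}(gameID)
	}
}

// wait mirrors Worker.Wait: it blocks until in-flight work drains.
func (d *dispatcher) wait() { d.wg.Wait() }

func main() {
	var d dispatcher
	var mu sync.Mutex
	calls := 0
	started := make(chan struct{})
	block := make(chan struct{})
	gen := func(string) {
		mu.Lock()
		calls++
		mu.Unlock()
		close(started)
		<-block // hold the slot so the second tick overlaps
	}
	d.tick([]string{"g1"}, gen)
	<-started
	d.tick([]string{"g1"}, gen) // skipped: g1 still in flight
	close(block)
	d.wait()
	fmt.Println(calls)
}
```

The `WaitGroup` gives both `Run` (graceful drain on cancellation) and tests (deterministic `Tick` driving) a single completion signal, matching the paragraph above.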

## Files landed

**Modified:**

- [`../internal/ports/lobbyclient.go`](../internal/ports/lobbyclient.go)
  — added `GetGameSummary` to the interface plus the `GameSummary`
  type.
- [`../internal/adapters/lobbyclient/client.go`](../internal/adapters/lobbyclient/client.go)
  — implemented `GetGameSummary` with the same `ErrLobbyUnavailable`
  wrapping precedent as `GetMemberships`.
- [`../internal/adapters/lobbyclient/client_test.go`](../internal/adapters/lobbyclient/client_test.go)
  — table-driven tests for the happy path, 404, 5xx, malformed JSON,
  missing required fields, timeout, and bad input.
- [`../internal/adapters/mocks/mock_lobbyclient.go`](../internal/adapters/mocks/mock_lobbyclient.go)
  — regenerated.

**Created:**

- [`../internal/service/scheduler/service.go`](../internal/service/scheduler/service.go),
  [`../internal/service/scheduler/service_test.go`](../internal/service/scheduler/service_test.go)
  — stateless scheduler utility.
- [`../internal/service/turngeneration/service.go`](../internal/service/turngeneration/service.go),
  [`../internal/service/turngeneration/errors.go`](../internal/service/turngeneration/errors.go),
  [`../internal/service/turngeneration/service_test.go`](../internal/service/turngeneration/service_test.go)
  — turn-generation orchestrator and tests.
- [`../internal/worker/schedulerticker/worker.go`](../internal/worker/schedulerticker/worker.go),
  [`../internal/worker/schedulerticker/worker_test.go`](../internal/worker/schedulerticker/worker_test.go)
  — scheduler ticker worker and tests.
- This decision record.

**Reused (not modified):**

- `internal/domain/runtime/{model.go, transitions.go}` —
  `running → generation_in_progress`, `generation_in_progress →
  running`, `generation_in_progress → generation_failed`, and
  `generation_in_progress → finished` were all permitted by the
  Stage 10 transitions table.
- `internal/domain/schedule/nexttick.go` — the cron + skip wrapper.
- `internal/domain/operation/log.go` — the `OpKindTurnGeneration`
  enum value already in place.
- `internal/ports/{runtimerecordstore.go, engineclient.go,
  playermappingstore.go, operationlog.go,
  notificationpublisher.go, lobbyeventspublisher.go}` — every store
  and publisher used by the orchestrator was already present.
- `internal/telemetry/runtime.go` — `RecordTurnGenerationOutcome`,
  `RecordLobbyEventPublished`, `RecordNotificationPublishAttempt`.
- `pkg/notificationintent.NewGameTurnReadyIntent`,
  `NewGameFinishedIntent`, `NewGameGenerationFailedIntent`.

## Verification

```sh
cd gamemaster

# Mock regeneration must produce the GetGameSummary additions and
# nothing else.
make mocks
git diff --stat internal/adapters/mocks

# Domain + ports tests still pass.
go test ./internal/domain/... ./internal/ports/...

# Scheduler utility.
go test ./internal/service/scheduler/...

# Turn-generation orchestrator.
go test ./internal/service/turngeneration/...

# Scheduler ticker worker.
go test ./internal/worker/schedulerticker/...

# Updated lobby client adapter.
go test ./internal/adapters/lobbyclient/...

# Module-wide build remains green.
go test ./...
```

Out-of-scope for this stage: app wiring (Stage 19), service-local
integration suite (Stage 21), and cross-service Lobby ↔ GM tests
(Stage 22).
@@ -0,0 +1,256 @@
---
stage: 16
title: Hot-path services and membership cache
---

# Stage 16 — Hot-path services and membership cache

This decision record captures the non-obvious choices made while
implementing the gateway-facing trio of player services
(`commandexecute`, `orderput`, `reportget`) and the in-process membership
cache that authorises every hot-path call. It precedes the remaining
service-layer work in Stage 17 (admin operations) and the REST handlers
and wiring in Stage 19.

## Context

[`../PLAN.md` Stage 16](../PLAN.md) ships four components that together
make the player surface usable:

1. `service/membership` — concurrent in-process LRU cache holding the
   per-game `user_id → status` projection from
   `Lobby /api/v1/internal/games/{game_id}/memberships`. TTL is the
   safety net; the explicit invalidation hook from Lobby is the
   primary staleness control.
2. `service/commandexecute` — orchestrator behind
   `POST /api/v1/internal/games/{game_id}/commands`. Authorises the
   caller, resolves `actor=race_name`, reshapes the JSON envelope, and
   forwards `PUT /api/v1/command` to the engine.
3. `service/orderput` — same shape as `commandexecute`, targeting the
   engine `PUT /api/v1/order`.
4. `service/reportget` — orchestrator behind
   `GET /api/v1/internal/games/{game_id}/reports/{turn}`. Authorises
   the caller, resolves `race_name`, and forwards
   `GET /api/v1/report?player=<race>&turn=<turn>` to the engine.

The reference precedent for the orchestrator shape (Input / Result /
Dependencies / NewService / Handle, plus a private `classifyEngineError`
helper) is Stage 15's `service/turngeneration`. Six decisions deviate
from a literal reading of the README, the OpenAPI surface, or the
turngeneration precedent. Each is recorded below.

## Decisions

### D1. `reportget` does not require `runtime_records.status = running`

**Decision.**
[`service/reportget`](../internal/service/reportget/service.go) accepts
any non-deleted runtime row and forwards the read to the engine.
`runtime_not_running` is **not** part of `reportget`'s error vocabulary
([`errors.go`](../internal/service/reportget/errors.go)).
`commandexecute` and `orderput`, by contrast, reject anything other than
`StatusRunning` with `runtime_not_running`.

**Why.** Three signals point at the same conclusion:

- The OpenAPI surface for `internalGetReport`
  (`api/internal-openapi.yaml` lines 546–575) lists only
  `403 / 404 / 502 / 500` responses; there is no 409 / `runtime_not_running`
  on the report path. The matching error response on commands and
  orders (lines 502, 540) does include 409.
- The README §Reports flow (`../README.md` lines 508–520) lists only
  authorisation, race-name resolution, and engine forwarding. The
  preceding §Player commands and orders block (lines 492–506) lists the
  `status=running` precondition explicitly. The two sections are
  separately worded by design.
- A finished or stopped runtime is a normal target for a post-mortem
  read of older turns. Refusing the read forces operators to use ad-hoc
  database access for the same data the engine already exposes.

The `engine_unreachable` outcome remains the natural failure mode when
the engine container is genuinely gone (e.g., on `engine_unreachable`
status); no extra branch is required.

This decision was confirmed with the user during plan-mode review.

### D2. GM rewrites the engine envelope (`commands` → `cmd`, inject `actor`)

**Decision.**
[`commandexecute.rewriteCommandPayload`](../internal/service/commandexecute/service.go)
and the parallel
[`orderput.rewriteOrderPayload`](../internal/service/orderput/service.go)
unmarshal the GM `ExecuteCommandsRequest` / `PutOrdersRequest` body as
`map[string]json.RawMessage`, take the `commands` field, and emit a
fresh JSON object containing only `actor` (set to the resolved race
name) and `cmd` (carrying the original array). Every other top-level
key is dropped. The OpenAPI descriptions for `ExecuteCommandsRequest`
and `PutOrdersRequest` were updated in the same patch to document the
rewrite.

**Why.** The literal "forwarded verbatim" wording in the original
Stage 06 OpenAPI description conflicted with two upstream constraints:

- The engine `CommandRequest` schema in `game/openapi.yaml` lines
  345–364 declares `actor` and `cmd` as required, with no top-level
  `commands`.
- The README §Hot Path rule "GM never trusts a payload field for actor
  identification" (`../README.md` lines 487–490) requires GM to set
  `actor` from the authenticated user identity.

Two alternatives were rejected:

- **Move the rewrite into `engineclient`.** The adapter's role is thin
  transport; injecting actor (an authorisation concern) into transport
  would muddle the boundary and make the adapter test harness
  authorisation-aware. The service is the right home.
- **Inject `actor` only and keep the `commands` key.** The engine schema
  requires `cmd`; this would require an engine contract change outside
  the Stage 16 scope and break Stage 05's frozen path.

The transform is duplicated across the two services rather than
extracted to a shared package. Each implementation is twelve lines and
each service is otherwise independent; a shared package would add
import-edge surface for marginal savings, and the project convention is
to prefer the minimal diff (`CLAUDE.md §Priorities`). The duplication is
explicitly documented in both file-level comments.

This decision was confirmed with the user during plan-mode review.
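
A minimal sketch of the envelope rewrite under the raw-field approach described above; the function name and error texts are hypothetical, and the rewrite deliberately ignores any `actor` the client sent:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rewritePayload decodes the gateway body as raw top-level fields,
// keeps only the named array field, and emits
// {"actor": <race>, "cmd": <array>}. Every other key is dropped;
// actor always comes from the authenticated identity, never from
// the payload.
func rewritePayload(body []byte, arrayField, actor string) ([]byte, error) {
	var fields map[string]json.RawMessage
	if err := json.Unmarshal(body, &fields); err != nil {
		return nil, fmt.Errorf("decode envelope: %w", err)
	}
	cmd, ok := fields[arrayField]
	if !ok {
		return nil, fmt.Errorf("missing %q field", arrayField)
	}
	actorJSON, err := json.Marshal(actor)
	if err != nil {
		return nil, err
	}
	return json.Marshal(map[string]json.RawMessage{
		"actor": actorJSON,
		"cmd":   cmd, // original array forwarded byte-for-byte
	})
}

func main() {
	// A spoofed client-side actor and an extra key are both dropped.
	in := []byte(`{"commands":[{"op":"move"}],"actor":"spoofed","extra":1}`)
	out, err := rewritePayload(in, "commands", "The Solar Federation")
	fmt.Println(string(out), err)
}
```

Using `json.RawMessage` keeps the command array untouched, so GM never re-encodes (and never needs to understand) the engine's command grammar.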

### D3. Hot-path services do not append to `operation_log`

**Decision.** None of the three services emit an `operation_log` entry.
The `Input` shape carries no `OpSource`/`SourceRef` fields. Telemetry
counters (`gamemaster.command_execute.outcomes`,
`gamemaster.order_put.outcomes`, `gamemaster.report_get.outcomes`) are
the only audit surface.

**Why.** The `operation.OpKind` enum
(`internal/domain/operation/log.go`) intentionally has no value for
command, order, or report — it stops at admin and lifecycle operations.
Every hot-path call would multiply audit volume by the order rate
without adding investigative value: the telemetry counters already
expose the outcome distribution, and the engine itself is the source of
truth for per-command results. Adding three new `OpKind` values would
also bloat the SQL CHECK on `operation_log` with no operational
consumer.

### D4. Membership cache uses a hand-rolled per-game inflight tracker

**Decision.**
[`Cache.fetch`](../internal/service/membership/cache.go) coordinates
concurrent misses on the same `game_id` through a tiny
`map[gameID]*flight` plus a per-flight `done` channel. Joiners block on
`select { case <-existing.done: case <-ctx.Done(): }`. The leader
populates `members` (or `err`) on the flight before closing the channel.

**Why.** `golang.org/x/sync/singleflight` would be a sharper tool, but
adding it as a *direct* dependency (it is currently only an indirect
transitive of other modules in the workspace) has to clear the
"justification for direct deps" bar set by `CLAUDE.md §Dependencies`.
The cache is the only consumer in `gamemaster`, the implementation is
~30 lines, and a context-cancellable wait is one extra `select` line we
would otherwise have to wrap around `singleflight.Do` anyway. The
cache-internal helper is the cheaper choice.

### D5. Cache returns the raw status string

**Decision.**
[`Cache.Resolve`](../internal/service/membership/cache.go) returns
`(status string, err error)` where the status is the verbatim Lobby
vocabulary (`"active"`, `"removed"`, `"blocked"`) plus the empty string
when the user is not in the roster. Callers compare against
`membershipStatusActive = "active"` directly. There is no typed
wrapper.

**Why.** `ports.Membership.Status` is already `string`
(`internal/ports/lobbyclient.go` line 56); introducing a `MembershipStatus`
domain type purely to be passed through would add boilerplate without
enforcing any invariant Go's type system can check. The hot-path
services need only a single equality check, so a typed enum buys
nothing; it would also need a fallback for unknown vocabulary to
defend against future Lobby additions, which is more decision
surface than the cache should own.

### D6. Empty roster slot surfaces as `forbidden`

**Decision.** Two distinct underlying conditions both surface as
`ErrorCodeForbidden` from the three services:

- The membership cache returns the empty string for the requested
  `(gameID, userID)`: the user is not present in the Lobby roster.
- The membership cache returns `"active"` but
  `playermappingstore.Get(gameID, userID)` returns
  `playermapping.ErrNotFound`: the user is an active platform member
  but has no engine roster slot.

The second condition is an internal inconsistency (register-runtime
should have installed the row), but the user-visible semantics — "you
are not authorised to act on this game" — are identical to the first.
The structured log captures the underlying cause.

**Why.** Surfacing the second condition as `internal_error` would
expose a 500 to a perfectly routine "user not part of the engine roster"
case and obscure the actual outcome from the gateway and the user. The
inconsistency, if it ever materialises, is an operator concern visible
in the warn-level log and the `forbidden` metric attribution; treating
it as a 5xx would help neither operators (who would learn to ignore the
false alarm) nor users (who only care that they cannot act).

## Files landed

**Created:**

- [`../internal/service/membership/{errors.go, cache.go, cache_test.go}`](../internal/service/membership/)
  — concurrent LRU cache plus `ErrLobbyUnavailable` sentinel.
- [`../internal/service/commandexecute/{errors.go, service.go, service_test.go}`](../internal/service/commandexecute/)
  — command-execute orchestrator and tests.
- [`../internal/service/orderput/{errors.go, service.go, service_test.go}`](../internal/service/orderput/)
  — order-put orchestrator and tests.
- [`../internal/service/reportget/{errors.go, service.go, service_test.go}`](../internal/service/reportget/)
  — report-get orchestrator and tests.
- This decision record.

**Modified:**

- [`../api/internal-openapi.yaml`](../api/internal-openapi.yaml) —
  rewrote the description fields of `ExecuteCommandsRequest` and
  `PutOrdersRequest` to document the GM-side envelope rewrite.

**Reused (not modified):**

- `internal/ports/{engineclient.go, lobbyclient.go,
  playermappingstore.go, runtimerecordstore.go}` — every interface and
  sentinel was already present.
- `internal/domain/runtime/model.go` — `StatusRunning` constant + the
  whole status vocabulary.
- `internal/domain/playermapping/model.go` — `PlayerMapping` and
  `ErrNotFound`.
- `internal/domain/operation/log.go` — `Outcome` enum.
- `internal/config/config.go` — `MembershipCacheConfig.{TTL, MaxGames}`
  with defaults `30s` / `4096`.
- `internal/telemetry/runtime.go` —
  `RecordCommandExecuteOutcome`, `RecordOrderPutOutcome`,
  `RecordReportGetOutcome`, `RecordMembershipCacheResult`,
  `RecordEngineCall` (already wired in Stage 08).

## Verification

```sh
cd gamemaster

# Membership cache (race-clean concurrency).
go test -race ./internal/service/membership/...

# Each new player service.
go test ./internal/service/commandexecute/...
go test ./internal/service/orderput/...
go test ./internal/service/reportget/...

# Module-wide build + suite.
go build ./...
go test ./...
```

Out-of-scope for this stage: app wiring (Stage 19), service-local
integration suite (Stage 21), cross-service Lobby ↔ GM tests (Stage 22).
@@ -0,0 +1,264 @@
---
stage: 17
title: Admin operations and Lobby-facing liveness
---

# Stage 17 — Admin operations and Lobby-facing liveness

This decision record captures the non-obvious choices made while
implementing the five Game Master admin/inspect service-layer
operations and the Lobby-facing liveness reply
(`adminstop`, `adminforce`, `adminpatch`, `adminbanish`,
`livenessreply`). Stage 17 is the last service-layer stage before
Stage 18 (health-events consumer) and Stage 19 (REST handlers and
wiring).

## Context

[`../PLAN.md` Stage 17](../PLAN.md) ships five services that close
the GM service surface:

1. `service/adminstop` — orchestrator behind
   `POST /api/v1/internal/runtimes/{game_id}/stop`. Calls Runtime
   Manager and CASes `runtime_records.status → stopped`.
2. `service/adminforce` — orchestrator behind
   `POST /api/v1/internal/runtimes/{game_id}/force-next-turn`. Runs
   the inner `service/turngeneration` flow synchronously, then sets
   `runtime_records.skip_next_tick = true`.
3. `service/adminpatch` — orchestrator behind
   `POST /api/v1/internal/runtimes/{game_id}/patch`. Calls Runtime
   Manager and rotates `runtime_records.current_image_ref` plus
   `current_engine_version`.
4. `service/adminbanish` — orchestrator behind
   `POST /api/v1/internal/games/{game_id}/race/{race_name}/banish`.
   Resolves the race and calls the engine `/admin/race/banish`.
5. `service/livenessreply` — orchestrator behind
   `GET /api/v1/internal/games/{game_id}/liveness`. Reflects GM's own
   view of the runtime without ever calling the engine.

The reference precedent for the orchestrator shape (`Input` /
`Result` / `Dependencies` / `NewService` / `Handle`) is Stage 13's
`service/registerruntime` and Stage 15's `service/turngeneration`.
Six decisions deviate from a literal reading of the README, the
OpenAPI surface, or the turngeneration precedent. Each is recorded
below.

## Decisions

### D1. `RuntimeRecordStore` grows a dedicated `UpdateImage` method

**Decision.**
[`ports/runtimerecordstore.go`](../internal/ports/runtimerecordstore.go)
adds a new `UpdateImage(ctx, UpdateImageInput) error` method with its
own `UpdateImageInput` struct and `Validate`. The Postgres adapter
gains a matching SQL UPDATE under a CAS guard on `(game_id, status)`.
The existing `UpdateStatus` is **not** repurposed for patch updates.

**Why.** `UpdateStatusInput.Validate()` (Stage 11) calls
`runtime.Transition(ExpectedFrom, To)` and rejects every pair where
`ExpectedFrom == To`. Patch deliberately keeps the runtime in
`running`, so any attempt to feed `UpdateStatus` with
`ExpectedFrom == To == running` is rejected before the SQL even
runs. Three alternatives were on the table:

- Drop the `runtime.Transition` invariant from `UpdateStatusInput`
  to allow self-transitions. That would weaken the CAS validator
  for every existing caller — register-runtime, turngeneration,
  health-events consumer — and reintroduce the «accidental no-op
  status update» class of bugs the validator was added to catch.
- Introduce a synthetic `runtime.StatusRunning → runtime.StatusRunning`
  edge in `domain/runtime/transitions.go`. Same blast radius as
  above, only with stronger semantic baggage in the transition table.
- Add a dedicated `UpdateImage` method that only writes the two
  image columns plus `updated_at`. Bounded blast radius (one new
  method, one new input struct, one new SQL UPDATE), preserves the
  CAS invariant, and matches how Stage 11 already separated
  `UpdateScheduling` from `UpdateStatus` for the same reason.

The third option is what shipped. Existing fakes (`registerruntime`,
`turngeneration`, hot-path tests, schedulerticker) carry a no-op
`UpdateImage` stub that returns `errors.New(...)` so a test that
accidentally exercises the new path fails loudly.
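
A sketch of what the dedicated method's input and guarded UPDATE might look like. The field names, error text, and SQL column set are assumptions drawn from the surrounding text, not the shipped adapter:

```go
package main

import "fmt"

// UpdateImageInput carries only the image columns plus the CAS
// guard — deliberately narrower than UpdateStatusInput, so the
// self-transition problem never arises.
type UpdateImageInput struct {
	GameID         string
	ExpectedStatus string // CAS guard: row must still be in this status
	ImageRef       string
	EngineVersion  string
}

// Validate rejects incomplete input before any SQL runs.
func (in UpdateImageInput) Validate() error {
	if in.GameID == "" || in.ExpectedStatus == "" || in.ImageRef == "" || in.EngineVersion == "" {
		return fmt.Errorf("update image: all fields are required")
	}
	return nil
}

// updateImageSQL sketches the guarded write: the WHERE clause on
// (game_id, status) is the CAS; zero rows affected means the
// runtime left `running` between the read and the write.
const updateImageSQL = `
UPDATE runtime_records
   SET current_image_ref = $1,
       current_engine_version = $2,
       updated_at = now()
 WHERE game_id = $3
   AND status = $4`

func main() {
	in := UpdateImageInput{
		GameID:         "g1",
		ExpectedStatus: "running",
		ImageRef:       "engine:1.4.2",
		EngineVersion:  "1.4.2",
	}
	fmt.Println(in.Validate())
}
```

The narrow input type is what makes the third alternative cheap: the CAS invariant of `UpdateStatus` is untouched, and the new write cannot express a status change at all.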

### D2. `adminstop` is idempotent on `stopped` and `finished`, rejects `starting`

**Decision.**
[`service/adminstop`](../internal/service/adminstop/service.go) reads
the runtime row first; if `Status ∈ {stopped, finished}`, the service
returns `OutcomeSuccess` without calling Runtime Manager and without
publishing a `runtime_snapshot_update`. If `Status == starting`, the
service returns `conflict` with `OutcomeFailure`. Every other
non-terminal status (`running`, `generation_in_progress`,
`generation_failed`, `engine_unreachable`) takes the regular path:
RTM call → CAS → snapshot publication.

**Why.** The README §Stop says «CAS `runtime_records.status: * →
stopped`» but in practice three considerations pull the service away
from a literal CAS-only implementation:

- `stopped` and `finished` are common operator races: an admin clicks
  «stop» on a UI list while another admin already pressed it (or the
  game finished naturally). Returning `conflict` would force the UI
  to retry the read and confuse the operator. Idempotent success is
  the smallest-surprise behaviour and matches how Lobby's other
  admin-cancel flows handle terminal states.
- `starting` is the active engine-init window. RTM has just been
  asked to start the container; an admin stop here would race the
  init flow and almost certainly leave the system in a partially
  cleaned state. The transition table in Stage 10 deliberately
  excludes `starting → stopped` for the same reason. Returning
  `conflict` lets the admin tooling surface «runtime is mid-init,
  retry in a moment» instead of pretending the stop succeeded.
- The «obvious» fourth path — letting the CAS validator reject
  `starting → stopped` and surface that as the natural conflict —
  was rejected because it depends on validator implementation
  detail leaking through; the explicit pre-CAS check makes the
  intent obvious in the audit log and the structured logs.

The audit log records every pre-CAS rejection with
`outcome=failure / error_code=conflict`, and every idempotent no-op
with `outcome=success`, so operators can distinguish the cases in
post-hoc analysis.
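
The pre-CAS status check reduces to a small dispatch. The status strings follow the record's vocabulary above; the outcome names here are illustrative, not the service's real types:

```go
package main

import "fmt"

// decideStop classifies the runtime status before any RTM call or
// CAS is attempted, mirroring the decision above.
func decideStop(status string) (outcome string) {
	switch status {
	case "stopped", "finished":
		return "idempotent_success" // operator race: already terminal
	case "starting":
		return "conflict" // mid-init; stopping would race the init flow
	default:
		// running, generation_in_progress, generation_failed,
		// engine_unreachable: take the regular RTM → CAS → snapshot path.
		return "proceed"
	}
}

func main() {
	for _, s := range []string{"stopped", "starting", "running"} {
		fmt.Println(s, "->", decideStop(s))
	}
}
```

Keeping the classification explicit (rather than leaning on the CAS validator's rejection) is exactly what makes the intent legible in the audit log.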

### D3. `adminforce` always sets `skip_next_tick=true`, even on a finishing turn

**Decision.**
[`service/adminforce`](../internal/service/adminforce/service.go)
issues `UpdateScheduling{SkipNextTick=true,
NextGenerationAt=turnResult.Record.NextGenerationAt,
CurrentTurn=turnResult.Record.CurrentTurn}` after every successful
inner turn-generation, regardless of whether `Result.Finished` is
`true`.

**Why.** The cleaner branch — «skip the scheduling write when the
turn just finished the game» — was considered and rejected:

- `turngeneration` already cleared `next_generation_at` and updated
  `current_turn` on the finishing branch (Stage 15
  `completeFinished`). A redundant write that re-affirms those
  values plus sets `skip_next_tick=true` does no harm: the row is
  already in `status=finished` and no scheduler tick will ever
  consume the flag.
- The branchless code is shorter and the test contract is simpler
  («adminforce always writes the skip flag on success»). One extra
  conditional saves zero SQL on the production path but doubles the
  set of cases the test matrix has to assert.
- The README §Force-next-turn wording «After success, set
  `runtime_records.skip_next_tick = true`» is unconditional. Adding
  a runtime-side branch would silently weaken that contract.

The driver `op_kind=force_next_turn` audit row records the eventual
outcome (success / failure with the same error code that
turngeneration surfaced) so audit consumers can tell apart a forced
turn that finished the game from a forced turn that prepared the
next regular tick.

### D4. `adminbanish` does not check runtime status; missing race surfaces as `forbidden`

**Decision.**
[`service/adminbanish`](../internal/service/adminbanish/service.go)
reads the runtime row only to retrieve the `engine_endpoint`, then
calls `playermappingstore.GetByRace`. A missing row maps to
`error_code=forbidden`. The runtime status itself is **not**
inspected; banish is dispatched even when the runtime is in
`stopped`, `finished`, or `engine_unreachable`.

**Why.** Two threads informed the choice:

- README §Banish lists only two preconditions: «runtime exists»
  and «`race_name` resolves to an existing player_mappings row».
  Adding a status guard would silently extend the contract beyond
  what Lobby is allowed to depend on, and would make the banish
  flow fail differently from the documented set.
- A banish on a stopped/finished runtime is a no-op at the engine
  side (the container is exited or absent). The engine call will
  fail with `engine_unreachable`, which is the right error for the
  caller to see — it means «the runtime was stopped before banish
  could land». Pre-rejecting with a different code would hide the
  real state from the operator.

The `forbidden` mapping for missing race mirrors Stage 16 D6 («empty
roster surfaces as `forbidden`»). The frozen error vocabulary does
not contain a `race_not_found` code, and `forbidden` is the
semantically closest match: «the platform user this race belonged
to is no longer authorised to act on the runtime».

### D5. `livenessreply` returns 200 / `status=""` on `runtime_not_found`

**Decision.**
[`service/livenessreply`](../internal/service/livenessreply/service.go)
absorbs `runtime.ErrNotFound` into a successful Result with
`Ready=false` and `Status=runtime.Status("")`. The Go-level error
return is reserved for non-business failures only (nil context, nil
receiver, store-read errors, invalid input). A handler that wraps
this service answers 200 with body `{"ready": false, "status": ""}`
when GM has no record for the requested game.

**Why.** README §Liveness reply specifies that the endpoint «never
calls the engine; it reflects GM's own view only» and explicitly says
it returns 200 even when the runtime is not running. Three response
shapes were considered:

- 200 with `status="runtime_not_found"`. Mixes runtime-status
  values with error codes in the same field, breaking the
  caller's enum-match dispatch.
- 404 `runtime_not_found`. Contradicts the README §Liveness reply
  «return `200`» wording and forces Lobby's resume flow to add a
  404 handler that means «no observation» — semantically the same
  as `Ready=false`.
- 200 with `status=""`. The empty status reads naturally as «GM
  has no observation»; Lobby's resume flow already needs to handle
  the `Ready=false` branch, and the empty status is exactly what
  «no observation» looks like in practice. Chosen for the smallest
  caller-side complexity.
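
A sketch of the absorb-not-found behaviour, with a hypothetical `lookup` closure standing in for the runtime-record read and a local sentinel standing in for `runtime.ErrNotFound`:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// errNotFound stands in for runtime.ErrNotFound.
var errNotFound = errors.New("runtime not found")

// livenessResult mirrors the 200-always reply shape above.
type livenessResult struct {
	Ready  bool   `json:"ready"`
	Status string `json:"status"`
}

// reply absorbs the not-found case into a successful "no
// observation" result; only non-business failures (store-read
// errors etc.) propagate as Go errors.
func reply(lookup func() (string, error)) (livenessResult, error) {
	status, err := lookup()
	if errors.Is(err, errNotFound) {
		return livenessResult{Ready: false, Status: ""}, nil // GM has no observation
	}
	if err != nil {
		return livenessResult{}, err
	}
	return livenessResult{Ready: status == "running", Status: status}, nil
}

func main() {
	res, _ := reply(func() (string, error) { return "", errNotFound })
	body, _ := json.Marshal(res)
	fmt.Println(string(body))
}
```

The handler can then serialise every successful Result straight to a 200, keeping the `status` field a pure runtime-status enum for the Lobby caller.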

### D6. RTM client errors surface as `service_unavailable`, not a dedicated code

**Decision.** Both `service/adminstop` and `service/adminpatch` map
every error from `RTMClient.Stop` / `RTMClient.Patch` to
`error_code=service_unavailable`, regardless of whether the
underlying failure is `ErrRTMUnavailable`, a wrapped HTTP 5xx, or a
dialler-level transport error.

**Why.** The frozen error vocabulary in
[`gamemaster/api/internal-openapi.yaml`](../api/internal-openapi.yaml)
does not contain a `runtime_manager_unavailable` code. Three options
were on the table:

- Add a new code. Rejected: the OpenAPI surface is contract-frozen
  from Stage 06 and adding a new error code is a wire-format change
  that pulls every consumer into a re-validation. Stage 17 deals
  with service-layer code only; no contract change is in scope.
- Map RTM failures to `engine_unreachable`. Rejected: the RTM call
  is a sibling-service hop, not an engine call; mixing the two in
  a single label confuses operators reading metric / log labels.
- Map RTM failures to `service_unavailable`. Accepted: the
  vocabulary already documents `service_unavailable` as «a
  steady-state dependency was unreachable for this call», which is
  exactly what an RTM outage looks like from GM's perspective.

The Stage 12 D5 decision record in
[`stage12-external-clients.md`](./stage12-external-clients.md)
already records that the RTM adapter wraps every non-success
outcome in `ports.ErrRTMUnavailable` without distinguishing
sub-cases; Stage 17 simply consumes the unified sentinel.

## Cross-stage consequences

- The new port surface `RuntimeRecordStore.UpdateImage` is
  available to every later consumer; Stage 18 and Stage 19 do not
  use it. Existing hand-rolled fakes carry a no-op stub.
- `OpKindStop`, `OpKindForceNextTurn`, `OpKindPatch`, `OpKindBanish`
  were introduced in Stage 09 / Stage 10 already; Stage 17 is their
  first writer.
- The telemetry counter `gamemaster.banish.outcomes` (declared in
  Stage 08) gets its first call site in `service/adminbanish`. No
  new counters are introduced for `adminstop` / `adminforce` /
  `adminpatch` / `livenessreply`; the README §Observability list
  does not mention them and Stage 17 deliberately stays inside the
  declared instrument set.
- The Stage 19 REST handlers consume the five services without
  service-layer changes: each handler decodes the JSON envelope,
  fills `Input.OpSource` / `Input.SourceRef` from the
  `X-Galaxy-Caller` header convention, and translates `Result.ErrorCode`
  into the standard error envelope.
@@ -0,0 +1,171 @@
---
stage: 18
title: runtime:health_events consumer
---

# Stage 18 — `runtime:health_events` consumer

This decision record captures the non-obvious choices made while
implementing the asynchronous consumer of the `runtime:health_events`
Redis Stream produced by Runtime Manager. The consumer translates RTM
observations into three effects on Game Master state:

1. Updates `runtime_records.engine_health` per game with a short
   summary string.
2. For terminal container events applies a CAS
   `running → engine_unreachable`; for `probe_recovered` applies the
   symmetric recovery CAS `engine_unreachable → running`.
3. Publishes a debounced `runtime_snapshot_update` on `gm:lobby_events`
   only when the engine-health summary or the runtime status actually
   changed.

The reference precedent for the worker shape (`Dependencies` /
`NewWorker` / `Run` / `Shutdown` / exported `HandleMessage`) is the
Lobby `gmevents` consumer at `lobby/internal/worker/gmevents`. Seven
decisions deviate from a literal reading of [`../PLAN.md`](../PLAN.md)
or are sharp enough to surface here.

## Decisions

### D1. Event-type taxonomy expanded to seven values

**Decision.** The consumer maps all seven values published by RTM
([`rtmanager/internal/domain/health/snapshot.go`](../../rtmanager/internal/domain/health/snapshot.go)),
not the six listed in PLAN Stage 18. The added values are
`container_started` and `probe_recovered`. Both are mapped to the
summary string `healthy`. `probe_recovered` additionally attempts the
recovery CAS `engine_unreachable → running`. `container_started` does
not transition status — Game Master owns runtime startup through the
register-runtime flow, so RTM's `container_started` observation is
informational at the consumer level.

**Why.** The transition table in
[`internal/domain/runtime/transitions.go`](../internal/domain/runtime/transitions.go)
already declares `engine_unreachable → running` with the comment
`reserved for the Stage 18 consumer; declared here so Stage 18 needs
no transitions edit`. The reserved transition is only useful when an
event in the input stream actually triggers it; the only such event in
RTM's vocabulary is `probe_recovered`. Leaving the two extra event
types unmapped would either drop information (if ignored entirely) or
keep the recovery transition forever unreachable. Mapping them now is
the minimum diff that closes the loop.
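Under assumed event names, the D1 taxonomy reduces to one switch. Only `container_oom`, `container_started`, and `probe_recovered` appear in this record; the other four values below are illustrative placeholders, not RTM's real vocabulary.

```go
package main

import "fmt"

// plan is a hypothetical per-event effect; the real Stage 18 types differ.
type plan struct {
	Summary    string // engine_health summary to write
	FromStatus string // CAS source; empty when no transition is planned
	ToStatus   string // CAS target
}

// mapHealthEvent sketches the seven-value taxonomy: terminal container
// events plan running → engine_unreachable, probe_recovered plans the
// symmetric recovery, and container_started refreshes the summary only.
func mapHealthEvent(eventType string) (plan, bool) {
	switch eventType {
	case "container_oom", "container_exited", "container_crashed", "probe_failed":
		// Terminal/unhealthy observations: summary carries the event name.
		return plan{Summary: eventType, FromStatus: "running", ToStatus: "engine_unreachable"}, true
	case "probe_recovered":
		return plan{Summary: "healthy", FromStatus: "engine_unreachable", ToStatus: "running"}, true
	case "container_started", "probe_ok":
		// Informational: no status transition at the consumer level.
		return plan{Summary: "healthy"}, true
	default:
		return plan{}, false // unknown event types are skipped
	}
}

func main() {
	p, _ := mapHealthEvent("probe_recovered")
	fmt.Println(p.Summary, p.FromStatus, p.ToStatus)
}
```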

### D2. CAS conflict on a status mutation falls back to a health-only update

**Decision.** When the worker plans a status transition (e.g.,
`running → engine_unreachable` for `container_oom`) and
`RuntimeRecordStore.UpdateStatus` returns `runtime.ErrConflict` or
`runtime.ErrInvalidTransition`, the worker logs the conflict at debug
and falls back to `RuntimeRecordStore.UpdateEngineHealth`. The summary
column is refreshed; the status column stays under whatever the
concurrent flow holds.

**Why.** Two flows can hold the runtime row when an RTM event arrives:
turn generation (`generation_in_progress`) and admin operations
(`stopped`, `finished`). Forcing the consumer to win over those flows
would either reintroduce stale-status writes or require expanding the
allowed-transitions table to include every non-terminal source — the
latter weakens the guard that turn generation relies on. The failure
semantics turn-generation already implements (engine call timeout →
`generation_failed`) cover the case where an `oom` arrives while a
turn is in flight: the engine call from turngeneration will fail
naturally a moment later. The consumer's job in that window is to keep
the summary current so operators see «last known: oom» on
`gm:lobby_events`.
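The fallback can be sketched as follows; the sentinel names and the cut-down port shape are assumptions, not the real store interface.

```go
package main

import (
	"errors"
	"fmt"
)

// Assumed sentinel errors; the real ones live in the runtime domain package.
var (
	ErrConflict          = errors.New("runtime: conflict")
	ErrInvalidTransition = errors.New("runtime: invalid transition")
)

// store is a cut-down view of the RuntimeRecordStore port for this sketch.
type store interface {
	UpdateStatus(gameID, expectedFrom, to string) error
	UpdateEngineHealth(gameID, summary string) error
}

// applyStatusEffect attempts the planned CAS; on ErrConflict or
// ErrInvalidTransition it degrades to a health-only write, leaving the
// status column to whatever concurrent flow holds it.
func applyStatusEffect(s store, gameID, from, to, summary string) (statusChanged bool, err error) {
	uerr := s.UpdateStatus(gameID, from, to)
	switch {
	case uerr == nil:
		return true, nil
	case errors.Is(uerr, ErrConflict), errors.Is(uerr, ErrInvalidTransition):
		// A concurrent flow owns the status; keep the summary current anyway.
		return false, s.UpdateEngineHealth(gameID, summary)
	default:
		return false, uerr
	}
}

// fakeStore lets the sketch run without a database.
type fakeStore struct {
	statusErr error
	health    string
}

func (f *fakeStore) UpdateStatus(_, _, _ string) error { return f.statusErr }
func (f *fakeStore) UpdateEngineHealth(_, s string) error {
	f.health = s
	return nil
}

func main() {
	f := &fakeStore{statusErr: ErrConflict}
	changed, err := applyStatusEffect(f, "g1", "running", "engine_unreachable", "oom")
	fmt.Println(changed, err, f.health) // false <nil> oom
}
```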

### D3. New port method `UpdateEngineHealth`

**Decision.** [`internal/ports/runtimerecordstore.go`](../internal/ports/runtimerecordstore.go)
gains a new method `UpdateEngineHealth(ctx, UpdateEngineHealthInput) error`
with its own input struct and `Validate`. The Postgres adapter gains a
matching `UPDATE runtime_records SET engine_health = $1, updated_at =
$2 WHERE game_id = $3`. The existing `UpdateStatus` is **not**
repurposed for health-only updates.

**Why.** `UpdateStatusInput.Validate` calls
`runtime.Transition(ExpectedFrom, To)` and rejects every pair where
`ExpectedFrom == To` (Stage 17 D1). A health-only update keeps the
runtime in its current status, so any attempt to feed `UpdateStatus`
with `ExpectedFrom == To` is rejected before the SQL even runs. The
same precedent led Stage 17 to add `UpdateImage` rather than relax the
self-transition guard. Stage 18 follows that precedent.

In addition, the health update is not gated on a CAS at all:
late-arriving events should still bookkeep the summary regardless of
the current status (including `stopped` and `finished`). A guarded
`UpdateStatus`-shaped variant would have to enumerate every source
status the consumer might observe; an unguarded `UpdateEngineHealth`
sidesteps the question.

### D4. In-memory dedupe of last-emitted summaries per game

**Decision.** The worker keeps a `map[string]string` (`gameID →
lastEmittedSummary`) under a `sync.RWMutex`. A snapshot is published
when either the status transitioned in this iteration or when the new
summary differs from the cached one for the same game. The cache is
process-local; on restart it is empty.

**Why.** [`./README.md` §`gm:lobby_events`](../README.md) freezes the
publication rule: snapshots are emitted on transitions and on
health-summary changes («debounced — duplicates are suppressed when
the summary did not change»). Stage 18 chooses an in-process map over
a Redis-backed dedupe for two reasons:

1. Game Master is single-instance in v1
   ([`./README.md` §Non-Goals](../README.md)); a per-process map is
   sufficient for v1 correctness.
2. Losing the cache on restart causes at most one extra snapshot per
   game right after restart — Lobby's `gmevents` consumer is
   idempotent (CAS-protected status transitions, deterministic
   snapshot blob), so the extra emission is benign.

A Redis-backed dedupe is cheap to introduce later if multi-instance
Game Master ever lands; until then the simpler choice ships less code.
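The publication rule can be sketched as a small guarded cache; type and method names here are illustrative, not the worker's real identifiers.

```go
package main

import (
	"fmt"
	"sync"
)

// dedupe is a process-local cache of the last emitted summary per game,
// guarded by an RWMutex as in D4.
type dedupe struct {
	mu   sync.RWMutex
	last map[string]string // gameID → lastEmittedSummary
}

func newDedupe() *dedupe { return &dedupe{last: make(map[string]string)} }

// shouldPublish applies the frozen rule: emit on a status transition or
// when the summary changed since the last emission; suppress duplicates.
func (d *dedupe) shouldPublish(gameID, summary string, statusChanged bool) bool {
	d.mu.RLock()
	prev, seen := d.last[gameID]
	d.mu.RUnlock()
	if !statusChanged && seen && prev == summary {
		return false // duplicate: suppressed
	}
	d.mu.Lock()
	d.last[gameID] = summary
	d.mu.Unlock()
	return true
}

func main() {
	d := newDedupe()
	fmt.Println(d.shouldPublish("g1", "healthy", false)) // true: first sighting
	fmt.Println(d.shouldPublish("g1", "healthy", false)) // false: suppressed
	fmt.Println(d.shouldPublish("g1", "oom", false))     // true: summary changed
}
```

An empty map after restart makes the first `shouldPublish` per game return true, which is exactly the benign extra snapshot the Why section accepts.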

### D5. Snapshot construction reads the runtime row again after the mutation

**Decision.** Whenever the worker decides to publish, it re-reads the
runtime record (`RuntimeRecordStore.Get`) and builds the
`RuntimeSnapshotUpdate` from that fresh row. The `EngineHealthSummary`,
`RuntimeStatus`, and `CurrentTurn` fields therefore reflect whatever
the database holds after the mutation, rather than what the worker
just intended to write.

**Why.** Two paths can produce the same publish decision: the CAS
succeeded (status changed, summary changed), or the CAS conflicted and
the fallback `UpdateEngineHealth` took over (status unchanged from the
worker's point of view, but possibly mutated by a concurrent flow
between the conflict and the read). A single read-after-write reduces
both paths to the same envelope-building code and keeps the snapshot
honest about what is actually in the database. `PlayerTurnStats` is
intentionally left as `nil`: the consumer does not have a fresh engine
state payload, so per-player stats stay empty until the next turn
(this matches [`./README.md` §`gm:lobby_events`](../README.md) for
status-only transitions).

### D6. Stream-offset label is `health_events`

**Decision.** The consumer uses the short label `health_events` for
`StreamOffsetStore.Load` / `Save`. The corresponding Redis key is
`gamemaster:stream_offsets:health_events`.

**Why.** The label convention is documented in
[`./README.md` §Persistence Layout / Redis runtime-coordination state](../README.md):
short logical identifier of the consumer, stable across renames of the
underlying stream key. The Lobby `gmevents` consumer follows the same
shape (`gm_lobby_events`).

### D7. Worker wiring deferred to Stage 19

**Decision.** Stage 18 ships the worker package and unit/loop tests but
does not register the worker as an `app.Component` in
`internal/app/runtime.go`. Wiring is deferred to Stage 19.

**Why.** The same pattern is already in place for the scheduler ticker
introduced at Stage 15: the worker exists in the source tree but is
not wired into `runtime.app = New(cfg, internalServer)`. Stage 19
explicitly bundles handler wiring with worker wiring (see PLAN
Stage 19), so deferring is consistent with the precedent. The
configuration values the wiring will need (stream name, block timeout,
offset-store DSN) are already loaded by `internal/config` and were
introduced in Stage 08.
@@ -0,0 +1,230 @@
---
stage: 19
title: Internal REST handlers
---

# Stage 19 — Internal REST handlers

This decision record captures the non-obvious choices made while
bringing the trusted internal REST listener of Game Master to full
contract coverage. The handlers wire the existing service layer
(stages 13–17) and the membership cache (stage 16) to the eighteen
operations frozen by
[`../api/internal-openapi.yaml`](../api/internal-openapi.yaml). The
listener lifecycle, OpenTelemetry middleware, and the `/healthz` /
`/readyz` probes were established in stage 08; this stage adds the
per-operation handler subpackage, widens the listener `Dependencies`
struct to thread every service port, and grows
[`../internal/app/wiring.go`](../internal/app/wiring.go) to construct
the entire dependency graph (stores, adapters, services, workers).

The reference precedent for the handler shape is the rtmanager
`internal/api/internalhttp/handlers` tree; the conformance test
mirrors `rtmanager/internal/api/internalhttp/conformance_test.go`.
Eight decisions deviate from a literal reading of
[`../PLAN.md`](../PLAN.md) or are sharp enough to surface here.

## Decisions

### D1. Conformance test lives inside the listener package

**Decision.** The OpenAPI conformance test ships at
[`../internal/api/internalhttp/conformance_test.go`](../internal/api/internalhttp/conformance_test.go),
in the `internalhttp` package, not at
`gamemaster/api/openapi_conformance_test.go` as the literal text of
PLAN.md Stage 19 suggests.

**Why.** The test instantiates the live `Server.handler` through
`NewServer(...)` with stub services and replays each documented
operation against it. That requires reading the unexported
`handler` field and wiring stub implementations of the
handler-package interfaces; both are package-internal concerns that a
sibling test under `gamemaster/api/` would not have access to without
exporting hooks that exist solely for the test. The rtmanager
service ships the analogous test inside its own `internalhttp`
package; we follow the same idiom.

**How to apply.** Future surface-shape audits go in this file. The
PLAN.md text is treated as drift; the constraint that the spec is
covered by a kin-openapi-driven validation is honoured exactly.

### D2. `DELETE /engine-versions/{version}` calls `Service.Deprecate`

**Decision.** The handler bound to the OpenAPI operation
`internalDeprecateEngineVersion` calls
[`engineversion.Service.Deprecate`](../internal/service/engineversion/service.go)
and never `Service.Delete`. The 409 response declared by the
spec for `engine_version_in_use` is therefore unreachable on this
endpoint.

**Why.** The operation id and the first sentence of the description
explicitly say «Sets the engine version status to `deprecated`». The
sentence about hard removal and `engine_version_in_use` is a
leftover of an earlier intent — `Service.Deprecate` does not consult
`IsReferencedByActiveRuntime`, so the in-use rejection cannot fire
through this code path. Hard delete is a future Admin Service
operation; v1 does not expose it through REST.

**How to apply.** Calls that need to release the registry row
permanently must use `Service.Delete` directly (not yet wired through
REST). The spec's leftover 409 example is recorded here so a future
contract reviewer does not chase a phantom failure mode.

### D3. Workers wired and started alongside the listener

**Decision.** This stage constructs the scheduler ticker (stage 15)
and the `runtime:health_events` consumer (stage 18) inside
`wiring.buildWorkers` and registers them as `App.Component`s next
to the internal HTTP server.

**Why.** Stage 19's narrow text says «ship the gateway-, Lobby- and
Admin-facing REST surface backed by the service layer». But the
service-layer collaborators referenced from the listener (turn
generation, membership cache, runtime record store, etc.) only make
sense inside a process that is also producing turns and consuming
health events. Keeping the workers idle would leave the wiring graph
half-built and the dev experience surprising. Constructing and
starting them here makes a freshly deployed process production-ready
the moment the listener accepts traffic.

**How to apply.** The two workers are owned by `App.Run` exactly
like the listener: both `Run` (long-lived) and `Shutdown` are part
of `App.Component`. See D4 for the trivial `Shutdown` added on the
scheduler ticker.

### D4. `schedulerticker.Worker.Shutdown` is a no-op

**Decision.** The scheduler ticker adds a one-line
`Shutdown(_ context.Context) error { return nil }` so the type
satisfies `app.Component`.

**Why.** The worker's `Run` already returns when the supplied
context is cancelled, and `wg.Wait` drains the in-flight per-game
goroutines before `Run` returns. There is nothing additional to
release. The `healtheventsconsumer.Worker` already had a `Shutdown`
from stage 18; this just brings the two workers to the same shape.

**How to apply.** When future workers grow real shutdown logic
(buffered output to flush, persistent connections to drain), they
should embed it inside `Shutdown` rather than relying on context
cancellation alone.

### D5. New `RuntimeRecordStore.List(ctx)` method

**Decision.** The port grows a fifth read method:
`List(ctx) ([]runtime.RuntimeRecord, error)`. The PostgreSQL
adapter implements it as one SELECT ordered by
`(created_at DESC, game_id ASC)`.

**Why.** The OpenAPI operation `internalListRuntimes` accepts an
optional `status` query parameter. With the parameter set, the
existing `ListByStatus` answers; without it, no method on the port
returned every record. Composing the unfiltered list as a
loop-over-statuses would dilute the ordering guarantee and double
the round-trip cost. The new method is additive — every other
caller keeps using its narrow read.

**How to apply.** Test fakes (`fakeRuntimeRecords` in service tests,
`fakeRuntimeRecordsBackend` in scheduler-ticker tests) gained the
method as well. The handler-side `RuntimeRecordsReader` interface
exposes only the three read methods (`Get`, `List`, `ListByStatus`)
so the listener cannot accidentally mutate runtime state.

### D6. `next_generation_at` encodes as `0` when unscheduled

**Decision.** The wire `RuntimeRecord.next_generation_at` field is
declared `required: true` and `format: int64`. The domain holds
`*time.Time` and may carry `nil` — typically while a runtime is in
status `starting` and the first scheduling write has not yet
landed. The encoder writes `0` in that case and writes the UTC
millisecond value otherwise.

**Why.** Encoding `nil` as `0` keeps the wire shape JSON-Schema-valid
without forcing every record reader to handle a missing field.
Optional pointer-typed timestamps (`started_at`, `stopped_at`,
`finished_at`) are still omitted from the JSON form via `omitempty`,
matching the `required` list in the spec.

**How to apply.** Readers must treat `next_generation_at == 0` as
«not yet scheduled» when the status warrants it; the field will
turn into a real Unix-millisecond value once the scheduler's first
write lands. The conformance test seeds a non-nil
`NextGenerationAt`, so the strict response validator never sees
this edge case at the wire boundary.

### D7. Hot-path bodies are pass-through, not strict-decoded

**Decision.** Handlers `internalExecuteCommands` and `internalPutOrders`
read the request body as raw bytes. The body is rejected only when
empty or not valid JSON; unknown fields pass through.

**Why.** The OpenAPI request schemas for these two operations carry
`additionalProperties: true` because the envelopes are engine-owned
(`galaxy/game/openapi.yaml`). Strict decoding here would reject
legitimate engine extensions and force every contract bump to land
in two services in lockstep.

**How to apply.** Engine `engine_validation_error` responses still
surface as the canonical Game Master error envelope at HTTP 502 —
the engine response body is recorded in `result.RawResponse` for
audit, but the OpenAPI spec mandates the error envelope on this code
path. If a future contract version requires forwarding the engine's
4xx body to the gateway, a separate response shape needs to land in
the spec first.
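The body check reduces to a validity gate; this is a sketch, with `json.Valid` standing in for whatever check the real handlers use.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// validateHotPathBody applies the D7 rule: accept any well-formed JSON
// unchanged, reject empty or malformed bodies. There is no struct
// decoding, so unknown engine-owned fields pass through untouched.
func validateHotPathBody(body []byte) error {
	if len(body) == 0 {
		return fmt.Errorf("empty body")
	}
	if !json.Valid(body) {
		return fmt.Errorf("body is not valid JSON")
	}
	return nil // forward the raw bytes to the engine as-is
}

func main() {
	fmt.Println(validateHotPathBody([]byte(`{"orders":[],"engine_ext":true}`))) // <nil>
	fmt.Println(validateHotPathBody([]byte(`{broken`)) != nil)                  // true
}
```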

### D8. `X-Galaxy-Caller` mapping with admin default

**Decision.** The `resolveOpSource` helper maps the
`X-Galaxy-Caller` header values to
[`operation.OpSource`](../internal/domain/operation/log.go) as
follows: `gateway → OpSourceGatewayPlayer`,
`lobby → OpSourceLobbyInternal`, `admin → OpSourceAdminRest`.
Missing or unrecognised values fall back to `OpSourceAdminRest`,
matching the contract documented in
[`../README.md` §«Internal REST API»](../README.md).

**Why.** The default is conservative: an Admin Service request
without the header still records as admin instead of being dropped.
The other two values are reserved for the documented callers, and
the helper trims and lowercases the header tolerantly so a casing
slip in development does not produce a confusing audit row.

**How to apply.** New REST callers should set the header
explicitly. Adding a fourth caller type requires an `OpSource`
constant alongside the mapping change.
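The mapping can be sketched as follows; the `OpSource` string values are assumed, only the constant names and the header values come from the record.

```go
package main

import (
	"fmt"
	"strings"
)

// OpSource mirrors the document's constant names; the underlying string
// values are illustrative.
type OpSource string

const (
	OpSourceGatewayPlayer OpSource = "gateway_player"
	OpSourceLobbyInternal OpSource = "lobby_internal"
	OpSourceAdminRest     OpSource = "admin_rest"
)

// resolveOpSource trims and lowercases tolerantly and defaults to admin
// for missing or unrecognised X-Galaxy-Caller values, per D8.
func resolveOpSource(header string) OpSource {
	switch strings.ToLower(strings.TrimSpace(header)) {
	case "gateway":
		return OpSourceGatewayPlayer
	case "lobby":
		return OpSourceLobbyInternal
	default: // includes "admin", empty, and unknown values
		return OpSourceAdminRest
	}
}

func main() {
	fmt.Println(resolveOpSource(" Gateway "), resolveOpSource(""), resolveOpSource("cron"))
}
```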

## What ships

- Eighteen operation handlers under
  [`../internal/api/internalhttp/handlers`](../internal/api/internalhttp/handlers).
- The probe-only `internal/api/internalhttp/server.go` now widens
  `Dependencies` and forwards the per-operation services to
  `handlers.Register`.
- Full dependency graph in
  [`../internal/app/wiring.go`](../internal/app/wiring.go): five
  stores, five external adapters, eleven services, two workers.
- `RuntimeRecordStore.List(ctx)` plus its PostgreSQL adapter
  implementation and regression tests
  ([`../internal/adapters/postgres/runtimerecordstore`](../internal/adapters/postgres/runtimerecordstore)).
- `schedulerticker.Worker.Shutdown` so the worker is an
  `App.Component`.
- Mockgen-generated handler-port mocks under
  [`../internal/api/internalhttp/handlers/mocks`](../internal/api/internalhttp/handlers/mocks).
- A kin-openapi-driven conformance test
  ([`../internal/api/internalhttp/conformance_test.go`](../internal/api/internalhttp/conformance_test.go))
  that validates request and response shapes for every documented
  operation against
  [`../api/internal-openapi.yaml`](../api/internal-openapi.yaml).
- Per-handler unit tests covering happy paths, error-code mapping,
  unknown-field rejection, and header validation.

## What remains for later stages

- Lobby refactor (stage 20) flips Lobby's start flow to call
  `GET /api/v1/internal/engine-versions/{version}/image-ref`
  synchronously and adds the `InvalidateMemberships` outbound call
  on every roster mutation.
- Service-local integration suite (stage 21) drives the listener
  end-to-end against a real engine container.
- Cross-service integration tests (stages 22–23) cover Lobby + GM,
  Lobby + GM + RTM happy and failure paths.
@@ -0,0 +1,128 @@
module galaxy/gamemaster

go 1.26.2

require (
	galaxy/cronutil v0.0.0-00010101000000-000000000000
	galaxy/notificationintent v0.0.0-00010101000000-000000000000
	galaxy/postgres v0.0.0-00010101000000-000000000000
	galaxy/redisconn v0.0.0-00010101000000-000000000000
	github.com/alicebob/miniredis/v2 v2.37.0
	github.com/getkin/kin-openapi v0.135.0
	github.com/go-jet/jet/v2 v2.14.1
	github.com/jackc/pgx/v5 v5.9.2
	github.com/redis/go-redis/v9 v9.18.0
	github.com/stretchr/testify v1.11.1
	github.com/testcontainers/testcontainers-go v0.42.0
	github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0
	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0
	go.opentelemetry.io/otel v1.43.0
	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0
	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.43.0
	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0
	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0
	go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.43.0
	go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0
	go.opentelemetry.io/otel/metric v1.43.0
	go.opentelemetry.io/otel/sdk v1.43.0
	go.opentelemetry.io/otel/sdk/metric v1.43.0
	go.opentelemetry.io/otel/trace v1.43.0
	golang.org/x/mod v0.35.0
	gopkg.in/yaml.v3 v3.0.1
)

require (
	dario.cat/mergo v1.0.2 // indirect
	github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
	github.com/Microsoft/go-winio v0.6.2 // indirect
	github.com/XSAM/otelsql v0.42.0 // indirect
	github.com/cenkalti/backoff/v4 v4.3.0 // indirect
	github.com/cenkalti/backoff/v5 v5.0.3 // indirect
	github.com/cespare/xxhash/v2 v2.3.0 // indirect
	github.com/containerd/errdefs v1.0.0 // indirect
	github.com/containerd/errdefs/pkg v0.3.0 // indirect
	github.com/containerd/log v0.1.0 // indirect
	github.com/containerd/platforms v0.2.1 // indirect
	github.com/cpuguy83/dockercfg v0.3.2 // indirect
	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
	github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
	github.com/distribution/reference v0.6.0 // indirect
	github.com/docker/go-connections v0.7.0 // indirect
	github.com/docker/go-units v0.5.0 // indirect
	github.com/ebitengine/purego v0.10.0 // indirect
	github.com/felixge/httpsnoop v1.0.4 // indirect
	github.com/go-logr/logr v1.4.3 // indirect
	github.com/go-logr/stdr v1.2.2 // indirect
	github.com/go-ole/go-ole v1.2.6 // indirect
	github.com/go-openapi/jsonpointer v0.21.0 // indirect
	github.com/go-openapi/swag v0.23.0 // indirect
	github.com/google/uuid v1.6.0 // indirect
	github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 // indirect
	github.com/jackc/chunkreader/v2 v2.0.1 // indirect
	github.com/jackc/pgconn v1.14.3 // indirect
	github.com/jackc/pgio v1.0.0 // indirect
	github.com/jackc/pgpassfile v1.0.0 // indirect
	github.com/jackc/pgproto3/v2 v2.3.3 // indirect
	github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
	github.com/jackc/pgtype v1.14.4 // indirect
	github.com/jackc/puddle/v2 v2.2.2 // indirect
	github.com/josharian/intern v1.0.0 // indirect
	github.com/klauspost/compress v1.18.5 // indirect
	github.com/lib/pq v1.10.9 // indirect
	github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
	github.com/magiconair/properties v1.8.10 // indirect
	github.com/mailru/easyjson v0.7.7 // indirect
	github.com/mfridman/interpolate v0.0.2 // indirect
	github.com/moby/docker-image-spec v1.3.1 // indirect
	github.com/moby/go-archive v0.2.0 // indirect
	github.com/moby/moby/api v1.54.2 // indirect
	github.com/moby/moby/client v0.4.1 // indirect
	github.com/moby/patternmatcher v0.6.1 // indirect
	github.com/moby/sys/sequential v0.6.0 // indirect
	github.com/moby/sys/user v0.4.0 // indirect
	github.com/moby/sys/userns v0.1.0 // indirect
	github.com/moby/term v0.5.2 // indirect
	github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect
	github.com/oasdiff/yaml v0.0.9 // indirect
	github.com/oasdiff/yaml3 v0.0.12 // indirect
	github.com/opencontainers/go-digest v1.0.0 // indirect
	github.com/opencontainers/image-spec v1.1.1 // indirect
	github.com/perimeterx/marshmallow v1.1.5 // indirect
	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
	github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
	github.com/pressly/goose/v3 v3.27.1 // indirect
	github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0 // indirect
	github.com/redis/go-redis/extra/redisotel/v9 v9.18.0 // indirect
	github.com/robfig/cron/v3 v3.0.1 // indirect
	github.com/sethvargo/go-retry v0.3.0 // indirect
	github.com/shirou/gopsutil/v4 v4.26.3 // indirect
	github.com/sirupsen/logrus v1.9.4 // indirect
	github.com/tklauser/go-sysconf v0.3.16 // indirect
	github.com/tklauser/numcpus v0.11.0 // indirect
	github.com/ugorji/go/codec v1.3.1 // indirect
	github.com/woodsbury/decimal128 v1.3.0 // indirect
	github.com/yuin/gopher-lua v1.1.1 // indirect
	github.com/yusufpapurcu/wmi v1.2.4 // indirect
	go.opentelemetry.io/auto/sdk v1.2.1 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 // indirect
	go.opentelemetry.io/proto/otlp v1.10.0 // indirect
	go.uber.org/atomic v1.11.0 // indirect
	go.uber.org/multierr v1.11.0 // indirect
	golang.org/x/crypto v0.50.0 // indirect
	golang.org/x/net v0.53.0 // indirect
	golang.org/x/sync v0.20.0 // indirect
	golang.org/x/sys v0.43.0 // indirect
	golang.org/x/text v0.36.0 // indirect
	google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529 // indirect
	google.golang.org/grpc v1.80.0 // indirect
	google.golang.org/protobuf v1.36.11 // indirect
)

replace galaxy/cronutil => ../pkg/cronutil

replace galaxy/notificationintent => ../pkg/notificationintent

replace galaxy/postgres => ../pkg/postgres

replace galaxy/redisconn => ../pkg/redisconn
@@ -0,0 +1,463 @@
|
||||
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
|
||||
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
|
||||
github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6 h1:He8afgbRMd7mFxO99hRNu+6tazq8nFF9lIwo9JFroBk=
|
||||
github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
|
||||
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=
|
||||
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
|
||||
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
||||
github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs=
|
||||
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/XSAM/otelsql v0.42.0 h1:Li0xF4eJUxG2e0x3D4rvRlys1f27yJKvjTh7ljkUP5o=
github.com/XSAM/otelsql v0.42.0/go.mod h1:4mOrEv+cS1KmKzrvTktvJnstr5GtKSAK+QHvFR9OcpI=
github.com/alicebob/miniredis/v2 v2.37.0 h1:RheObYW32G1aiJIj81XVt78ZHJpHonHLHW7OLIshq68=
github.com/alicebob/miniredis/v2 v2.37.0/go.mod h1:TcL7YfarKPGDAthEtl5NBeHZfeUQj6OXMm/+iu5cLMM=
github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cockroachdb/apd v1.1.0/go.mod h1:8Sl8LxpKi29FqWXR16WEFZRNSz3SoPzUzeMeY4+DwBQ=
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
github.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE=
github.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk=
github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/containerd/platforms v0.2.1 h1:zvwtM3rz2YHPQsF2CHYM8+KtB5dvhISiXh5ZpSBQv6A=
github.com/containerd/platforms v0.2.1/go.mod h1:XHCb+2/hzowdiut9rkudds9bE5yJ7npe7dG/wG+uFPw=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190719114852-fd7a80b32e1f/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/cpuguy83/dockercfg v0.3.2 h1:DlJTyZGBDlXqUZ2Dk2Q3xHs/FtnooJJVaad2S9GKorA=
github.com/cpuguy83/dockercfg v0.3.2/go.mod h1:sugsbF4//dDlL/i+S+rtpIWp+5h0BHJHfjj5/jFyUJc=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/docker/go-connections v0.7.0 h1:6SsRfJddP22WMrCkj19x9WKjEDTB+ahsdiGYf0mN39c=
github.com/docker/go-connections v0.7.0/go.mod h1:no1qkHdjq7kLMGUXYAduOhYPSJxxvgWBh7ogVvptn3Q=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/ebitengine/purego v0.10.0 h1:QIw4xfpWT6GWTzaW5XEKy3HXoqrJGx1ijYHzTF0/ISU=
github.com/ebitengine/purego v0.10.0/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/getkin/kin-openapi v0.135.0 h1:751SjYfbiwqukYuVjwYEIKNfrSwS5YpA7DZnKSwQgtg=
github.com/getkin/kin-openapi v0.135.0/go.mod h1:6dd5FJl6RdX4usBtFBaQhk9q62Yb2J0Mk5IhUO/QqFI=
github.com/go-jet/jet/v2 v2.14.1 h1:wsfD9e7CGP9h46+IFNlftfncBcmVnKddikbTtapQM3M=
github.com/go-jet/jet/v2 v2.14.1/go.mod h1:dqTAECV2Mo3S2NFjbm4vJ1aDruZjhaJ1RAAR8rGUkkc=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-test/deep v1.0.8 h1:TDsG77qcSprGbC6vTN8OuXp5g+J+b5Pcguhf7Zt61VM=
github.com/go-test/deep v1.0.8/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE=
github.com/gofrs/uuid v4.0.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 h1:HWRh5R2+9EifMyIHV7ZV+MIZqgz+PMpZ14Jynv3O2Zs=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0/go.mod h1:JfhWUomR1baixubs02l85lZYYOm7LV6om4ceouMv45c=
github.com/jackc/chunkreader v1.0.0/go.mod h1:RT6O25fNZIuasFJRyZ4R/Y2BbhasbmZXF9QQ7T3kePo=
github.com/jackc/chunkreader/v2 v2.0.0/go.mod h1:odVSm741yZoC3dpHEUXIqA9tQRhFrgOHwnPIn9lDKlk=
github.com/jackc/chunkreader/v2 v2.0.1 h1:i+RDz65UE+mmpjTfyz0MoVTnzeYxroil2G82ki7MGG8=
github.com/jackc/chunkreader/v2 v2.0.1/go.mod h1:odVSm741yZoC3dpHEUXIqA9tQRhFrgOHwnPIn9lDKlk=
github.com/jackc/pgconn v0.0.0-20190420214824-7e0022ef6ba3/go.mod h1:jkELnwuX+w9qN5YIfX0fl88Ehu4XC3keFuOJJk9pcnA=
github.com/jackc/pgconn v0.0.0-20190824142844-760dd75542eb/go.mod h1:lLjNuW/+OfW9/pnVKPazfWOgNfH2aPem8YQ7ilXGvJE=
github.com/jackc/pgconn v0.0.0-20190831204454-2fabfa3c18b7/go.mod h1:ZJKsE/KZfsUgOEh9hBm+xYTstcNHg7UPMVJqRfQxq4s=
github.com/jackc/pgconn v1.8.0/go.mod h1:1C2Pb36bGIP9QHGBYCjnyhqu7Rv3sGshaQUvmfGIB/o=
github.com/jackc/pgconn v1.9.0/go.mod h1:YctiPyvzfU11JFxoXokUOOKQXQmDMoJL9vJzHH8/2JY=
github.com/jackc/pgconn v1.9.1-0.20210724152538-d89c8390a530/go.mod h1:4z2w8XhRbP1hYxkpTuBjTS3ne3J48K83+u0zoyvg2pI=
github.com/jackc/pgconn v1.14.3 h1:bVoTr12EGANZz66nZPkMInAV/KHD2TxH9npjXXgiB3w=
github.com/jackc/pgconn v1.14.3/go.mod h1:RZbme4uasqzybK2RK5c65VsHxoyaml09lx3tXOcO/VM=
github.com/jackc/pgio v1.0.0 h1:g12B9UwVnzGhueNavwioyEEpAmqMe1E/BN9ES+8ovkE=
github.com/jackc/pgio v1.0.0/go.mod h1:oP+2QK2wFfUWgr+gxjoBH9KGBb31Eio69xUb0w5bYf8=
github.com/jackc/pgmock v0.0.0-20190831213851-13a1b77aafa2/go.mod h1:fGZlG77KXmcq05nJLRkk0+p82V8B8Dw8KN2/V9c/OAE=
github.com/jackc/pgmock v0.0.0-20201204152224-4fe30f7445fd/go.mod h1:hrBW0Enj2AZTNpt/7Y5rr2xe/9Mn757Wtb2xeBzPv2c=
github.com/jackc/pgmock v0.0.0-20210724152146-4ad1a8207f65 h1:DadwsjnMwFjfWc9y5Wi/+Zz7xoE5ALHsRQlOctkOiHc=
github.com/jackc/pgmock v0.0.0-20210724152146-4ad1a8207f65/go.mod h1:5R2h2EEX+qri8jOWMbJCtaPWkrrNc7OHwsp2TCqp7ak=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgproto3 v1.1.0/go.mod h1:eR5FA3leWg7p9aeAqi37XOTgTIbkABlvcPB3E5rlc78=
github.com/jackc/pgproto3/v2 v2.0.0-alpha1.0.20190420180111-c116219b62db/go.mod h1:bhq50y+xrl9n5mRYyCBFKkpRVTLYJVWeCc+mEAI3yXA=
github.com/jackc/pgproto3/v2 v2.0.0-alpha1.0.20190609003834-432c2951c711/go.mod h1:uH0AWtUmuShn0bcesswc4aBTWGvw0cAxIJp+6OB//Wg=
github.com/jackc/pgproto3/v2 v2.0.0-rc3/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM=
github.com/jackc/pgproto3/v2 v2.0.0-rc3.0.20190831210041-4c03ce451f29/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM=
github.com/jackc/pgproto3/v2 v2.0.6/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
github.com/jackc/pgproto3/v2 v2.1.1/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
github.com/jackc/pgproto3/v2 v2.3.3 h1:1HLSx5H+tXR9pW3in3zaztoEwQYRC9SQaYUHjTSUOag=
github.com/jackc/pgproto3/v2 v2.3.3/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
github.com/jackc/pgservicefile v0.0.0-20200714003250-2b9c44734f2b/go.mod h1:vsD4gTJCa9TptPL8sPkXrLZ+hDuNrZCnj29CQpr4X1E=
github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
github.com/jackc/pgtype v0.0.0-20190421001408-4ed0de4755e0/go.mod h1:hdSHsc1V01CGwFsrv11mJRHWJ6aifDLfdV3aVjFF0zg=
github.com/jackc/pgtype v0.0.0-20190824184912-ab885b375b90/go.mod h1:KcahbBH1nCMSo2DXpzsoWOAfFkdEtEJpPbVLq8eE+mc=
github.com/jackc/pgtype v0.0.0-20190828014616-a8802b16cc59/go.mod h1:MWlu30kVJrUS8lot6TQqcg7mtthZ9T0EoIBFiJcmcyw=
github.com/jackc/pgtype v1.8.1-0.20210724151600-32e20a603178/go.mod h1:C516IlIV9NKqfsMCXTdChteoXmwgUceqaLfjg2e3NlM=
github.com/jackc/pgtype v1.14.0/go.mod h1:LUMuVrfsFfdKGLw+AFFVv6KtHOFMwRgDDzBt76IqCA4=
github.com/jackc/pgtype v1.14.4 h1:fKuNiCumbKTAIxQwXfB/nsrnkEI6bPJrrSiMKgbJ2j8=
github.com/jackc/pgtype v1.14.4/go.mod h1:aKeozOde08iifGosdJpz9MBZonJOUJxqNpPBcMJTlVA=
github.com/jackc/pgx/v4 v4.0.0-20190420224344-cc3461e65d96/go.mod h1:mdxmSJJuR08CZQyj1PVQBHy9XOp5p8/SHH6a0psbY9Y=
github.com/jackc/pgx/v4 v4.0.0-20190421002000-1b8f0016e912/go.mod h1:no/Y67Jkk/9WuGR0JG/JseM9irFbnEPbuWV2EELPNuM=
github.com/jackc/pgx/v4 v4.0.0-pre1.0.20190824185557-6972a5742186/go.mod h1:X+GQnOEnf1dqHGpw7JmHqHc1NxDoalibchSk9/RWuDc=
github.com/jackc/pgx/v4 v4.12.1-0.20210724153913-640aa07df17c/go.mod h1:1QD0+tgSXP7iUjYm9C1NxKhny7lq6ee99u/z+IHFcgs=
github.com/jackc/pgx/v4 v4.18.2/go.mod h1:Ey4Oru5tH5sB6tV7hDmfWFahwF15Eb7DNXlRKx2CkVw=
github.com/jackc/pgx/v4 v4.18.3 h1:dE2/TrEsGX3RBprb3qryqSV9Y60iZN1C6i8IrmW9/BA=
github.com/jackc/pgx/v4 v4.18.3/go.mod h1:Ey4Oru5tH5sB6tV7hDmfWFahwF15Eb7DNXlRKx2CkVw=
github.com/jackc/pgx/v5 v5.9.2 h1:3ZhOzMWnR4yJ+RW1XImIPsD1aNSz4T4fyP7zlQb56hw=
github.com/jackc/pgx/v5 v5.9.2/go.mod h1:mal1tBGAFfLHvZzaYh77YS/eC6IX9OWbRV1QIIM0Jn4=
github.com/jackc/puddle v0.0.0-20190413234325-e4ced69a3a2b/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v0.0.0-20190608224051-11cab39313c9/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v1.1.3/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v1.3.0/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE=
github.com/klauspost/compress v1.18.5/go.mod h1:cwPg85FWrGar70rWktvGQj8/hthj3wpl0PGDogxkrSQ=
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.8/go.mod h1:O1sed60cT9XZ5uDucP5qwvh+TE3NnUj51EiZO/lmSfw=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.1.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.10.2/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/magiconair/properties v1.8.10 h1:s31yESBquKXCV9a/ScB3ESkOjUYYv+X0rg8SYxI99mE=
github.com/magiconair/properties v1.8.10/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=
github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.7/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.21 h1:xYae+lCNBP7QuW4PUnNG61ffM4hVIfm+zUzDuSzYLGs=
github.com/mattn/go-isatty v0.0.21/go.mod h1:ZXfXG4SQHsB/w3ZeOYbR0PrPwLy+n6xiMrJlRFqopa4=
github.com/mdelapenya/tlscert v0.2.0 h1:7H81W6Z/4weDvZBNOfQte5GpIMo0lGYEeWbkGp5LJHI=
github.com/mdelapenya/tlscert v0.2.0/go.mod h1:O4njj3ELLnJjGdkN7M/vIVCpZ+Cf0L6muqOG4tLSl8o=
github.com/mfridman/interpolate v0.0.2 h1:pnuTK7MQIxxFz1Gr+rjSIx9u7qVjf5VOoM/u6BbAxPY=
github.com/mfridman/interpolate v0.0.2/go.mod h1:p+7uk6oE07mpE/Ik1b8EckO0O4ZXiGAfshKBWLUM9Xg=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/moby/go-archive v0.2.0 h1:zg5QDUM2mi0JIM9fdQZWC7U8+2ZfixfTYoHL7rWUcP8=
github.com/moby/go-archive v0.2.0/go.mod h1:mNeivT14o8xU+5q1YnNrkQVpK+dnNe/K6fHqnTg4qPU=
github.com/moby/moby/api v1.54.2 h1:wiat9QAhnDQjA7wk1kh/TqHz2I1uUA7M7t9SAl/JNXg=
github.com/moby/moby/api v1.54.2/go.mod h1:+RQ6wluLwtYaTd1WnPLykIDPekkuyD/ROWQClE83pzs=
github.com/moby/moby/client v0.4.1 h1:DMQgisVoMkmMs7fp3ROSdiBnoAu8+vo3GggFl06M/wY=
github.com/moby/moby/client v0.4.1/go.mod h1:z52C9O2POPOsnxZAy//WtKcQ32P+jT/NGeXu/7nfjGQ=
github.com/moby/patternmatcher v0.6.1 h1:qlhtafmr6kgMIJjKJMDmMWq7WLkKIo23hsrpR3x084U=
github.com/moby/patternmatcher v0.6.1/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc=
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
github.com/moby/sys/sequential v0.6.0/go.mod h1:uyv8EUTrca5PnDsdMGXhZe6CCe8U/UiTWd+lL+7b/Ko=
github.com/moby/sys/user v0.4.0 h1:jhcMKit7SA80hivmFJcbB1vqmw//wU61Zdui2eQXuMs=
github.com/moby/sys/user v0.4.0/go.mod h1:bG+tYYYJgaMtRKgEmuueC0hJEAZWwtIbZTB+85uoHjs=
github.com/moby/sys/userns v0.1.0 h1:tVLXkFOxVu9A64/yh59slHVv9ahO9UIev4JZusOLG/g=
github.com/moby/sys/userns v0.1.0/go.mod h1:IHUYgu/kao6N8YZlp9Cf444ySSvCmDlmzUcYfDHOl28=
github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9M+97sNutRR1RKhG96O6jWumTTnw=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/oasdiff/yaml v0.0.9 h1:zQOvd2UKoozsSsAknnWoDJlSK4lC0mpmjfDsfqNwX48=
github.com/oasdiff/yaml v0.0.9/go.mod h1:8lvhgJG4xiKPj3HN5lDow4jZHPlx1i7dIwzkdAo6oAM=
github.com/oasdiff/yaml3 v0.0.12 h1:75urAtPeDg2/iDEWwzNrLOWxI9N/dCh81nTTJtokt2M=
github.com/oasdiff/yaml3 v0.0.12/go.mod h1:y5+oSEHCPT/DGrS++Wc/479ERge0zTFxaF8PbGKcg2o=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s=
github.com/perimeterx/marshmallow v1.1.5/go.mod h1:dsXbUu8CRzfYP5a87xpp0xq9S3u0Vchtcl8we9tYaXw=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/pressly/goose/v3 v3.27.1 h1:6uEvcprBybDmW4hcz3gYujhARhye+GoWKhEWyzD5sh4=
github.com/pressly/goose/v3 v3.27.1/go.mod h1:maruOxsPnIG2yHHyo8UqKWXYKFcH7Q76csUV7+7KYoM=
github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0 h1:QY4nmPHLFAJjtT5O4OMUEOxP8WVaRNOFpcbmxT2NLZU=
github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0/go.mod h1:WH8cY/0fT41Bsf341qzo8v4nx0GCE8FykAA23IVbVmo=
github.com/redis/go-redis/extra/redisotel/v9 v9.18.0 h1:2dKdoEYBJ0CZCLPiCdvvc7luz3DPwY6hKdzjL6m1eHE=
github.com/redis/go-redis/extra/redisotel/v9 v9.18.0/go.mod h1:WzkrVG9ro9BwCQD0eJOWn6AGL4Z1CleGflM45w1hu10=
github.com/redis/go-redis/v9 v9.18.0 h1:pMkxYPkEbMPwRdenAzUNyFNrDgHx9U+DrBabWNfSRQs=
github.com/redis/go-redis/v9 v9.18.0/go.mod h1:k3ufPphLU5YXwNTUcCRXGxUoF1fqxnhFQmscfkCoDA0=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/zerolog v1.13.0/go.mod h1:YbFCdg8HfsridGWAh22vktObvhZbQsZXe4/zB0OKkWU=
github.com/rs/zerolog v1.15.0/go.mod h1:xYTKnLHcpfU2225ny5qZjxnj9NvkumZYjJHlAThCjNc=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sethvargo/go-retry v0.3.0 h1:EEt31A35QhrcRZtrYFDTBg91cqZVnFL2navjDrah2SE=
github.com/sethvargo/go-retry v0.3.0/go.mod h1:mNX17F0C/HguQMyMyJxcnU471gOZGxCLyYaFyAZraas=
github.com/shirou/gopsutil/v4 v4.26.3 h1:2ESdQt90yU3oXF/CdOlRCJxrP+Am1aBYubTMTfxJ1qc=
github.com/shirou/gopsutil/v4 v4.26.3/go.mod h1:LZ6ewCSkBqUpvSOf+LsTGnRinC6iaNUNMGBtDkJBaLQ=
github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24/go.mod h1:M+9NzErvs504Cn4c5DxATwIqPbtswREoFCre64PpcG4=
github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w=
github.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.3 h1:jmXUvGomnU1o3W/V5h2VEradbpJDwGrzugQQvL0POH4=
github.com/stretchr/objx v0.5.3/go.mod h1:rDQraq+vQZU7Fde9LOZLr8Tax6zZvy4kuNKF+QYS+U0=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/testcontainers/testcontainers-go v0.42.0 h1:He3IhTzTZOygSXLJPMX7n44XtK+qhjat1nI9cneBbUY=
github.com/testcontainers/testcontainers-go v0.42.0/go.mod h1:vZjdY1YmUA1qEForxOIOazfsrdyORJAbhi0bp8plN30=
github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0 h1:GCbb1ndrF7OTDiIvxXyItaDab4qkzTFJ48LKFdM7EIo=
github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0/go.mod h1:IRPBaI8jXdrNfD0e4Zm7Fbcgaz5shKxOQv4axiL09xs=
github.com/tklauser/go-sysconf v0.3.16 h1:frioLaCQSsF5Cy1jgRBrzr6t502KIIwQ0MArYICU0nA=
github.com/tklauser/go-sysconf v0.3.16/go.mod h1:/qNL9xxDhc7tx3HSRsLWNnuzbVfh3e7gh/BmM179nYI=
github.com/tklauser/numcpus v0.11.0 h1:nSTwhKH5e1dMNsCdVBukSZrURJRoHbSEQjdEbY+9RXw=
github.com/tklauser/numcpus v0.11.0/go.mod h1:z+LwcLq54uWZTX0u/bGobaV34u6V7KNlTZejzM6/3MQ=
github.com/ugorji/go/codec v1.3.1 h1:waO7eEiFDwidsBN6agj1vJQ4AG7lh2yqXyOXqhgQuyY=
github.com/ugorji/go/codec v1.3.1/go.mod h1:pRBVtBSKl77K30Bv8R2P+cLSGaTtex6fsA2Wjqmfxj4=
github.com/woodsbury/decimal128 v1.3.0 h1:8pffMNWIlC0O5vbyHWFZAt5yWvWcrHA+3ovIIjVWss0=
github.com/woodsbury/decimal128 v1.3.0/go.mod h1:C5UTmyTjW3JftjUFzOVhC20BEQa2a4ZKOB5I6Zjb+ds=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yuin/gopher-lua v1.1.1 h1:kYKnWBjvbNP4XLT3+bPEwAXJx262OhaHDWDVOPjL46M=
github.com/yuin/gopher-lua v1.1.1/go.mod h1:GBR0iDaNXjAgGg9zfCvksxSRnQx76gclCIb7kdAd1Pw=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0=
github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA=
github.com/zenazn/goji v0.9.0/go.mod h1:7S9M489iMyHBNxwZnk9/EHS098H4/F6TATF2mIxtB1Q=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0 h1:CqXxU8VOmDefoh0+ztfGaymYbhdB/tT3zs79QaZTNGY=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0/go.mod h1:BuhAPThV8PBHBvg8ZzZ/Ok3idOdhWIodywz2xEcRbJo=
go.opentelemetry.io/otel v1.43.0 h1:mYIM03dnh5zfN7HautFE4ieIig9amkNANT+xcVxAj9I=
go.opentelemetry.io/otel v1.43.0/go.mod h1:JuG+u74mvjvcm8vj8pI5XiHy1zDeoCS2LB1spIq7Ay0=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0 h1:8UQVDcZxOJLtX6gxtDt3vY2WTgvZqMQRzjsqiIHQdkc=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0/go.mod h1:2lmweYCiHYpEjQ/lSJBYhj9jP1zvCvQW4BqL9dnT7FQ=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.43.0 h1:w1K+pCJoPpQifuVpsKamUdn9U0zM3xUziVOqsGksUrY=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.43.0/go.mod h1:HBy4BjzgVE8139ieRI75oXm3EcDN+6GhD88JT1Kjvxg=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 h1:88Y4s2C8oTui1LGM6bTWkw0ICGcOLCAI5l6zsD1j20k=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0/go.mod h1:Vl1/iaggsuRlrHf/hfPJPvVag77kKyvrLeD10kpMl+A=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0 h1:RAE+JPfvEmvy+0LzyUA25/SGawPwIUbZ6u0Wug54sLc=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0/go.mod h1:AGmbycVGEsRx9mXMZ75CsOyhSP6MFIcj/6dnG+vhVjk=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0 h1:3iZJKlCZufyRzPzlQhUIWVmfltrXuGyfjREgGP3UUjc=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0/go.mod h1:/G+nUPfhq2e+qiXMGxMwumDrP5jtzU+mWN7/sjT2rak=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.43.0 h1:TC+BewnDpeiAmcscXbGMfxkO+mwYUwE/VySwvw88PfA=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.43.0/go.mod h1:J/ZyF4vfPwsSr9xJSPyQ4LqtcTPULFR64KwTikGLe+A=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0 h1:mS47AX77OtFfKG4vtp+84kuGSFZHTyxtXIN269vChY0=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0/go.mod h1:PJnsC41lAGncJlPUniSwM81gc80GkgWJWr3cu2nKEtU=
go.opentelemetry.io/otel/metric v1.43.0 h1:d7638QeInOnuwOONPp4JAOGfbCEpYb+K6DVWvdxGzgM=
go.opentelemetry.io/otel/metric v1.43.0/go.mod h1:RDnPtIxvqlgO8GRW18W6Z/4P462ldprJtfxHxyKd2PY=
go.opentelemetry.io/otel/sdk v1.43.0 h1:pi5mE86i5rTeLXqoF/hhiBtUNcrAGHLKQdhg4h4V9Dg=
go.opentelemetry.io/otel/sdk v1.43.0/go.mod h1:P+IkVU3iWukmiit/Yf9AWvpyRDlUeBaRg6Y+C58QHzg=
go.opentelemetry.io/otel/sdk/metric v1.43.0 h1:S88dyqXjJkuBNLeMcVPRFXpRw2fuwdvfCGLEo89fDkw=
go.opentelemetry.io/otel/sdk/metric v1.43.0/go.mod h1:C/RJtwSEJ5hzTiUz5pXF1kILHStzb9zFlIEe85bhj6A=
go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09nk+3A=
go.opentelemetry.io/otel/trace v1.43.0/go.mod h1:/QJhyVBUUswCphDVxq+8mld+AvhXZLhe+8WVFxiFff0=
go.opentelemetry.io/proto/otlp v1.10.0 h1:IQRWgT5srOCYfiWnpqUYz9CVmbO8bFmKcwYxpuCSL2g=
go.opentelemetry.io/proto/otlp v1.10.0/go.mod h1:/CV4QoCR/S9yaPj8utp3lvQPoqMtxXdzn7ozvvozVqk=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
go.uber.org/zap v1.9.1/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190411191339-88737f569e3a/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201203163018-be400aefbc4c/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.20.0/go.mod h1:Xwo95rrVNIoSMx9wa1JroENMToLWn3RNVrTBpLHgZPQ=
golang.org/x/crypto v0.50.0 h1:zO47/JPrL6vsNkINmLoo/PH1gcxpls50DNogFvB5ZGI=
golang.org/x/crypto v0.50.0/go.mod h1:3muZ7vA7PBCE6xgPX7nkzzjiUq87kRItoJQM1Yo8S+Q=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.35.0 h1:Ww1D637e6Pg+Zb2KrWfHQUnH2dQRLBQyAtpr/haaJeM=
golang.org/x/mod v0.35.0/go.mod h1:+GwiRhIInF8wPm+4AoT6L0FA1QWAad3OMdTRx4tFYlU=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.53.0 h1:d+qAbo5L0orcWAr0a9JweQpjXF19LMXJE8Ey7hwOdUA=
golang.org/x/net v0.53.0/go.mod h1:JvMuJH7rrdiCfbeHoo3fCQU24Lf5JJwT9W3sJFulfgs=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=
|
||||
golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=
|
||||
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
golang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=
|
||||
golang.org/x/sys v0.43.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
|
||||
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
|
||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
|
||||
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
|
||||
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
|
||||
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
|
||||
golang.org/x/term v0.42.0 h1:UiKe+zDFmJobeJ5ggPwOshJIVt6/Ft0rcfrXZDLWAWY=
|
||||
golang.org/x/term v0.42.0/go.mod h1:Dq/D+snpsbazcBG5+F9Q1n2rXV8Ma+71xEjTRufARgY=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
|
||||
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
|
||||
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
|
||||
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
|
||||
golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg=
|
||||
golang.org/x/text v0.36.0/go.mod h1:NIdBknypM8iqVmPiuco0Dh6P5Jcdk8lJL0CUebqK164=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190425163242-31fd60d6bfdc/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||
golang.org/x/tools v0.0.0-20190823170909-c4a336ef6a2f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20200103221440-774c71fcf114/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
||||
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
|
||||
golang.org/x/xerrors v0.0.0-20190410155217-1f06c39b4373/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
|
||||
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 h1:VPWxll4HlMw1Vs/qXtN7BvhZqsS9cdAittCNvVENElA=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:7QBABkRtR8z+TEnmXTqIqwJLlzrZKVfAUm7tY3yGv0M=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529 h1:XF8+t6QQiS0o9ArVan/HW8Q7cycNPGsJf6GA2nXxYAg=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
|
||||
google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM=
|
||||
google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4=
|
||||
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
|
||||
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
|
||||
gopkg.in/inconshreveable/log15.v2 v2.0.0-20180818164646-67afb5ed74ec/go.mod h1:aPpfJ7XW+gOuirDoZ8gHhLh3kZ1B08FtV2bbmy7Jv3s=
|
||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
|
||||
gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
|
||||
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
|
||||
modernc.org/libc v1.72.1 h1:db1xwJ6u1kE3KHTFTTbe2GCrczHPKzlURP0aDC4NGD0=
|
||||
modernc.org/libc v1.72.1/go.mod h1:HRMiC/PhPGLIPM7GzAFCbI+oSgE3dhZ8FWftmRrHVlY=
|
||||
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
|
||||
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
|
||||
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
|
||||
modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
|
||||
modernc.org/sqlite v1.49.1 h1:dYGHTKcX1sJ+EQDnUzvz4TJ5GbuvhNJa8Fg6ElGx73U=
|
||||
modernc.org/sqlite v1.49.1/go.mod h1:m0w8xhwYUVY3H6pSDwc3gkJ/irZT/0YEXwBlhaxQEew=
|
||||
pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
|
||||
pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=
|
||||
@@ -0,0 +1,441 @@
// Package engineclient provides the trusted-internal HTTP client Game
// Master uses to talk to the engine container. The adapter implements
// `ports.EngineClient` over the routes documented in
// `galaxy/game/openapi.yaml`:
//
// - admin paths under `/api/v1/admin/*` (init, status, turn,
//   race/banish);
// - player paths under `/api/v1/{command, order, report}`.
//
// The engine endpoint URL is per-call (Game Master keeps it on
// `runtime_records.engine_endpoint`), so the client does not bind a
// base URL at construction time. Only the per-call timeouts are wired
// through `Config`: `CallTimeout` covers turn-generation-class
// operations, `ProbeTimeout` covers inspect-style reads.
package engineclient

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"math"
	"net/http"
	"net/url"
	"strconv"
	"strings"
	"time"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"

	"galaxy/gamemaster/internal/ports"
)

const (
	pathAdminInit       = "/api/v1/admin/init"
	pathAdminStatus     = "/api/v1/admin/status"
	pathAdminTurn       = "/api/v1/admin/turn"
	pathAdminRaceBanish = "/api/v1/admin/race/banish"
	pathPlayerCommand   = "/api/v1/command"
	pathPlayerOrder     = "/api/v1/order"
	pathPlayerReport    = "/api/v1/report"
)

// Config configures one HTTP-backed engine client.
type Config struct {
	// CallTimeout bounds turn-generation-class operations: init, turn,
	// banish, command, order. Mirrors `GAMEMASTER_ENGINE_CALL_TIMEOUT`.
	CallTimeout time.Duration

	// ProbeTimeout bounds inspect-style reads: status, report. Mirrors
	// `GAMEMASTER_ENGINE_PROBE_TIMEOUT`.
	ProbeTimeout time.Duration
}

// Client speaks REST/JSON to the engine container.
type Client struct {
	callTimeout          time.Duration
	probeTimeout         time.Duration
	httpClient           *http.Client
	closeIdleConnections func()
}

// NewClient constructs an engine client with `otelhttp`-instrumented
// transport cloned from `http.DefaultTransport`. The returned `Close`
// hook releases idle connections owned by that transport.
func NewClient(cfg Config) (*Client, error) {
	transport, ok := http.DefaultTransport.(*http.Transport)
	if !ok {
		return nil, errors.New("new engine client: default transport is not *http.Transport")
	}
	cloned := transport.Clone()
	return newClient(cfg, &http.Client{Transport: otelhttp.NewTransport(cloned)}, cloned.CloseIdleConnections)
}

func newClient(cfg Config, httpClient *http.Client, closeIdleConnections func()) (*Client, error) {
	switch {
	case cfg.CallTimeout <= 0:
		return nil, errors.New("new engine client: call timeout must be positive")
	case cfg.ProbeTimeout <= 0:
		return nil, errors.New("new engine client: probe timeout must be positive")
	case httpClient == nil:
		return nil, errors.New("new engine client: http client must not be nil")
	}
	return &Client{
		callTimeout:          cfg.CallTimeout,
		probeTimeout:         cfg.ProbeTimeout,
		httpClient:           httpClient,
		closeIdleConnections: closeIdleConnections,
	}, nil
}

// Close releases idle HTTP connections owned by the underlying
// transport. Safe to call multiple times.
func (client *Client) Close() error {
	if client == nil || client.closeIdleConnections == nil {
		return nil
	}
	client.closeIdleConnections()
	return nil
}

// Init calls POST /api/v1/admin/init.
func (client *Client) Init(ctx context.Context, baseURL string, request ports.InitRequest) (ports.StateResponse, error) {
	if err := client.validateBase(baseURL); err != nil {
		return ports.StateResponse{}, err
	}
	if len(request.Races) == 0 {
		return ports.StateResponse{}, errors.New("engine init: races must not be empty")
	}
	body, err := encodeInitRequest(request)
	if err != nil {
		return ports.StateResponse{}, fmt.Errorf("engine init: encode request: %w", err)
	}
	payload, status, doErr := client.doRequest(ctx, http.MethodPost, baseURL+pathAdminInit, body, client.callTimeout)
	if doErr != nil {
		return ports.StateResponse{}, fmt.Errorf("%w: engine init: %w", ports.ErrEngineUnreachable, doErr)
	}
	switch status {
	case http.StatusOK, http.StatusCreated:
		return decodeStateResponse(payload, "engine init")
	case http.StatusBadRequest:
		return ports.StateResponse{}, fmt.Errorf("%w: engine init: %s", ports.ErrEngineValidation, summariseEngineError(payload, status))
	default:
		return ports.StateResponse{}, fmt.Errorf("%w: engine init: %s", ports.ErrEngineUnreachable, summariseEngineError(payload, status))
	}
}

// Status calls GET /api/v1/admin/status.
func (client *Client) Status(ctx context.Context, baseURL string) (ports.StateResponse, error) {
	if err := client.validateBase(baseURL); err != nil {
		return ports.StateResponse{}, err
	}
	payload, status, doErr := client.doRequest(ctx, http.MethodGet, baseURL+pathAdminStatus, nil, client.probeTimeout)
	if doErr != nil {
		return ports.StateResponse{}, fmt.Errorf("%w: engine status: %w", ports.ErrEngineUnreachable, doErr)
	}
	switch status {
	case http.StatusOK:
		return decodeStateResponse(payload, "engine status")
	case http.StatusBadRequest:
		return ports.StateResponse{}, fmt.Errorf("%w: engine status: %s", ports.ErrEngineValidation, summariseEngineError(payload, status))
	default:
		return ports.StateResponse{}, fmt.Errorf("%w: engine status: %s", ports.ErrEngineUnreachable, summariseEngineError(payload, status))
	}
}

// Turn calls PUT /api/v1/admin/turn.
func (client *Client) Turn(ctx context.Context, baseURL string) (ports.StateResponse, error) {
	if err := client.validateBase(baseURL); err != nil {
		return ports.StateResponse{}, err
	}
	payload, status, doErr := client.doRequest(ctx, http.MethodPut, baseURL+pathAdminTurn, nil, client.callTimeout)
	if doErr != nil {
		return ports.StateResponse{}, fmt.Errorf("%w: engine turn: %w", ports.ErrEngineUnreachable, doErr)
	}
	switch status {
	case http.StatusOK:
		return decodeStateResponse(payload, "engine turn")
	case http.StatusBadRequest:
		return ports.StateResponse{}, fmt.Errorf("%w: engine turn: %s", ports.ErrEngineValidation, summariseEngineError(payload, status))
	default:
		return ports.StateResponse{}, fmt.Errorf("%w: engine turn: %s", ports.ErrEngineUnreachable, summariseEngineError(payload, status))
	}
}

// BanishRace calls POST /api/v1/admin/race/banish with body
// `{race_name}`. Engine returns 204 on success.
func (client *Client) BanishRace(ctx context.Context, baseURL, raceName string) error {
	if err := client.validateBase(baseURL); err != nil {
		return err
	}
	if strings.TrimSpace(raceName) == "" {
		return errors.New("engine banish: race name must not be empty")
	}
	body, err := json.Marshal(banishRequestEnvelope{RaceName: raceName})
	if err != nil {
		return fmt.Errorf("engine banish: encode request: %w", err)
	}
	payload, status, doErr := client.doRequest(ctx, http.MethodPost, baseURL+pathAdminRaceBanish, body, client.callTimeout)
	if doErr != nil {
		return fmt.Errorf("%w: engine banish: %w", ports.ErrEngineUnreachable, doErr)
	}
	switch status {
	case http.StatusNoContent, http.StatusOK:
		return nil
	case http.StatusBadRequest:
		return fmt.Errorf("%w: engine banish: %s", ports.ErrEngineValidation, summariseEngineError(payload, status))
	default:
		return fmt.Errorf("%w: engine banish: %s", ports.ErrEngineUnreachable, summariseEngineError(payload, status))
	}
}

// ExecuteCommands calls PUT /api/v1/command with payload forwarded
// verbatim. The engine response body is returned verbatim; on 4xx the
// body is returned alongside `ports.ErrEngineValidation` so callers can
// forward the per-command errors.
func (client *Client) ExecuteCommands(ctx context.Context, baseURL string, payload json.RawMessage) (json.RawMessage, error) {
	return client.forwardPlayerWrite(ctx, baseURL, pathPlayerCommand, payload, "engine command")
}

// PutOrders calls PUT /api/v1/order with the same forwarding semantics
// as ExecuteCommands.
func (client *Client) PutOrders(ctx context.Context, baseURL string, payload json.RawMessage) (json.RawMessage, error) {
	return client.forwardPlayerWrite(ctx, baseURL, pathPlayerOrder, payload, "engine order")
}

// GetReport calls GET /api/v1/report?player=<raceName>&turn=<turn> and
// returns the engine response body verbatim.
func (client *Client) GetReport(ctx context.Context, baseURL, raceName string, turn int) (json.RawMessage, error) {
	if err := client.validateBase(baseURL); err != nil {
		return nil, err
	}
	if strings.TrimSpace(raceName) == "" {
		return nil, errors.New("engine report: race name must not be empty")
	}
	if turn < 0 {
		return nil, fmt.Errorf("engine report: turn must not be negative, got %d", turn)
	}
	values := url.Values{}
	values.Set("player", raceName)
	values.Set("turn", strconv.Itoa(turn))
	target := baseURL + pathPlayerReport + "?" + values.Encode()
	body, status, doErr := client.doRequest(ctx, http.MethodGet, target, nil, client.probeTimeout)
	if doErr != nil {
		return nil, fmt.Errorf("%w: engine report: %w", ports.ErrEngineUnreachable, doErr)
	}
	switch status {
	case http.StatusOK:
		if len(body) == 0 {
			return nil, fmt.Errorf("%w: engine report: empty response body", ports.ErrEngineProtocolViolation)
		}
		return json.RawMessage(body), nil
	case http.StatusBadRequest:
		return json.RawMessage(body), fmt.Errorf("%w: engine report: %s", ports.ErrEngineValidation, summariseEngineError(body, status))
	default:
		return nil, fmt.Errorf("%w: engine report: %s", ports.ErrEngineUnreachable, summariseEngineError(body, status))
	}
}

func (client *Client) forwardPlayerWrite(ctx context.Context, baseURL, requestPath string, payload json.RawMessage, opLabel string) (json.RawMessage, error) {
	if err := client.validateBase(baseURL); err != nil {
		return nil, err
	}
	if len(bytes.TrimSpace(payload)) == 0 {
		return nil, fmt.Errorf("%s: payload must not be empty", opLabel)
	}
	body, status, doErr := client.doRequest(ctx, http.MethodPut, baseURL+requestPath, []byte(payload), client.callTimeout)
	if doErr != nil {
		return nil, fmt.Errorf("%w: %s: %w", ports.ErrEngineUnreachable, opLabel, doErr)
	}
	switch status {
	case http.StatusNoContent, http.StatusOK:
		if len(body) == 0 {
			return nil, nil
		}
		return json.RawMessage(body), nil
	case http.StatusBadRequest:
		return json.RawMessage(body), fmt.Errorf("%w: %s: %s", ports.ErrEngineValidation, opLabel, summariseEngineError(body, status))
	default:
		return nil, fmt.Errorf("%w: %s: %s", ports.ErrEngineUnreachable, opLabel, summariseEngineError(body, status))
	}
}

// validateBase rejects nil clients and malformed engine endpoints
// up-front so transport-layer plumbing does not need to handle them;
// nil or already-cancelled contexts are rejected in doRequest.
func (client *Client) validateBase(baseURL string) error {
	if client == nil || client.httpClient == nil {
		return errors.New("engine client: nil client")
	}
	if strings.TrimSpace(baseURL) == "" {
		return errors.New("engine client: base url must not be empty")
	}
	parsed, err := url.Parse(baseURL)
	if err != nil {
		return fmt.Errorf("engine client: parse base url: %w", err)
	}
	if parsed.Scheme == "" || parsed.Host == "" {
		return fmt.Errorf("engine client: base url %q must be absolute", baseURL)
	}
	return nil
}

func (client *Client) doRequest(ctx context.Context, method, target string, body []byte, timeout time.Duration) ([]byte, int, error) {
	if ctx == nil {
		return nil, 0, errors.New("nil context")
	}
	if err := ctx.Err(); err != nil {
		return nil, 0, err
	}
	attemptCtx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	var reader io.Reader
	if len(body) > 0 {
		reader = bytes.NewReader(body)
	}
	req, err := http.NewRequestWithContext(attemptCtx, method, target, reader)
	if err != nil {
		return nil, 0, fmt.Errorf("build request: %w", err)
	}
	req.Header.Set("Accept", "application/json")
	if len(body) > 0 {
		req.Header.Set("Content-Type", "application/json")
	}
	resp, err := client.httpClient.Do(req)
	if err != nil {
		return nil, 0, err
	}
	defer resp.Body.Close()
	respBody, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, resp.StatusCode, fmt.Errorf("read response body: %w", err)
	}
	return respBody, resp.StatusCode, nil
}

// encodeInitRequest serialises ports.InitRequest into the engine spec
// shape (`InitRequest`/`InitRace`).
func encodeInitRequest(request ports.InitRequest) ([]byte, error) {
	envelope := initRequestEnvelope{Races: make([]initRaceEnvelope, 0, len(request.Races))}
	for _, race := range request.Races {
		if strings.TrimSpace(race.RaceName) == "" {
			return nil, errors.New("init race: race name must not be empty")
		}
		envelope.Races = append(envelope.Races, initRaceEnvelope{RaceName: race.RaceName})
	}
	return json.Marshal(envelope)
}

// decodeStateResponse decodes the engine StateResponse payload into the
// port-level StateResponse projection. Unknown fields are tolerated;
// missing required ones surface as ErrEngineProtocolViolation.
func decodeStateResponse(payload []byte, opLabel string) (ports.StateResponse, error) {
	if len(payload) == 0 {
		return ports.StateResponse{}, fmt.Errorf("%w: %s: empty response body", ports.ErrEngineProtocolViolation, opLabel)
	}
	var envelope stateResponseEnvelope
	decoder := json.NewDecoder(bytes.NewReader(payload))
	if err := decoder.Decode(&envelope); err != nil {
		return ports.StateResponse{}, fmt.Errorf("%w: %s: decode body: %w", ports.ErrEngineProtocolViolation, opLabel, err)
	}
	if strings.TrimSpace(envelope.ID) == "" {
		return ports.StateResponse{}, fmt.Errorf("%w: %s: missing id", ports.ErrEngineProtocolViolation, opLabel)
	}
	if envelope.Player == nil {
		return ports.StateResponse{}, fmt.Errorf("%w: %s: missing player array", ports.ErrEngineProtocolViolation, opLabel)
	}
	state := ports.StateResponse{
		Turn:     envelope.Turn,
		Finished: envelope.Finished,
		Players:  make([]ports.PlayerState, 0, len(envelope.Player)),
	}
	for index, player := range envelope.Player {
		if strings.TrimSpace(player.RaceName) == "" {
			return ports.StateResponse{}, fmt.Errorf("%w: %s: player[%d] missing raceName", ports.ErrEngineProtocolViolation, opLabel, index)
		}
		if strings.TrimSpace(player.ID) == "" {
			return ports.StateResponse{}, fmt.Errorf("%w: %s: player[%d] missing id", ports.ErrEngineProtocolViolation, opLabel, index)
		}
		if player.Planets < 0 {
			return ports.StateResponse{}, fmt.Errorf("%w: %s: player[%d] negative planets", ports.ErrEngineProtocolViolation, opLabel, index)
		}
		if math.IsNaN(player.Population) || math.IsInf(player.Population, 0) || player.Population < 0 {
			return ports.StateResponse{}, fmt.Errorf("%w: %s: player[%d] invalid population", ports.ErrEngineProtocolViolation, opLabel, index)
		}
		state.Players = append(state.Players, ports.PlayerState{
			RaceName:         player.RaceName,
			EnginePlayerUUID: player.ID,
			Planets:          player.Planets,
			Population:       int(math.Round(player.Population)),
		})
	}
	return state, nil
}

// summariseEngineError extracts a short, human-readable summary from
// the engine's validation/internal-error envelopes for the wrapped
// error message.
func summariseEngineError(payload []byte, status int) string {
	trimmed := bytes.TrimSpace(payload)
	if len(trimmed) == 0 {
		return fmt.Sprintf("status=%d", status)
	}
	var envelope engineErrorEnvelope
	if err := json.Unmarshal(trimmed, &envelope); err == nil {
		switch {
		case envelope.GenericError != "":
			return fmt.Sprintf("status=%d generic_error=%q code=%d", status, envelope.GenericError, envelope.Code)
		case envelope.Error != "":
			return fmt.Sprintf("status=%d error=%q", status, envelope.Error)
		}
	}
	return fmt.Sprintf("status=%d", status)
}

// stateResponseEnvelope mirrors `StateResponse` from
// `game/openapi.yaml`. Unknown fields are tolerated by encoding/json.
type stateResponseEnvelope struct {
	ID       string                `json:"id"`
	Turn     int                   `json:"turn"`
	Stage    int                   `json:"stage"`
	Player   []playerStateEnvelope `json:"player"`
	Finished bool                  `json:"finished"`
}

// playerStateEnvelope mirrors `PlayerState`. Population is `number`
// per the engine spec, so the adapter decodes into float64 and rounds
// to the port-level int (engine in practice always returns whole
// numbers; rounding is a defensive guard against floating-point
// noise).
type playerStateEnvelope struct {
	ID         string  `json:"id"`
	RaceName   string  `json:"raceName"`
	Planets    int     `json:"planets"`
	Population float64 `json:"population"`
	Extinct    bool    `json:"extinct"`
}

type initRequestEnvelope struct {
	Races []initRaceEnvelope `json:"races"`
}

type initRaceEnvelope struct {
	RaceName string `json:"raceName"`
}

type banishRequestEnvelope struct {
	RaceName string `json:"race_name"`
}

type engineErrorEnvelope struct {
	Error        string `json:"error"`
	GenericError string `json:"generic_error"`
	Code         int    `json:"code"`
}

// Compile-time assertion: Client implements ports.EngineClient.
var _ ports.EngineClient = (*Client)(nil)
@@ -0,0 +1,363 @@
package engineclient

import (
	"context"
	"encoding/json"
	"errors"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"galaxy/gamemaster/internal/ports"
)

func newTestClient(t *testing.T, callTimeout, probeTimeout time.Duration) *Client {
	t.Helper()
	client, err := NewClient(Config{CallTimeout: callTimeout, ProbeTimeout: probeTimeout})
	require.NoError(t, err)
	t.Cleanup(func() { _ = client.Close() })
	return client
}

func TestNewClientValidatesConfig(t *testing.T) {
	cases := map[string]Config{
		"non-positive call timeout":  {CallTimeout: 0, ProbeTimeout: time.Second},
		"non-positive probe timeout": {CallTimeout: time.Second, ProbeTimeout: 0},
	}
	for name, cfg := range cases {
		t.Run(name, func(t *testing.T) {
			_, err := NewClient(cfg)
			require.Error(t, err)
		})
	}
}

func TestInitHappyPath(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		require.Equal(t, http.MethodPost, r.Method)
		require.Equal(t, "/api/v1/admin/init", r.URL.Path)
		require.Equal(t, "application/json", r.Header.Get("Content-Type"))

		body, err := io.ReadAll(r.Body)
		require.NoError(t, err)
		var got initRequestEnvelope
		require.NoError(t, json.Unmarshal(body, &got))
		require.Equal(t, []initRaceEnvelope{{RaceName: "Human"}, {RaceName: "Klingon"}}, got.Races)

		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusCreated)
		_, _ = w.Write([]byte(`{
			"id": "00000000-0000-0000-0000-000000000001",
			"turn": 0,
			"stage": 0,
			"finished": false,
			"player": [
				{"id":"00000000-0000-0000-0000-000000000010","raceName":"Human","planets":3,"population":1500,"extinct":false},
				{"id":"00000000-0000-0000-0000-000000000011","raceName":"Klingon","planets":3,"population":1500,"extinct":false}
			]
		}`))
	}))
	defer server.Close()

	client := newTestClient(t, time.Second, time.Second)
	state, err := client.Init(context.Background(), server.URL, ports.InitRequest{
		Races: []ports.InitRace{{RaceName: "Human"}, {RaceName: "Klingon"}},
	})
	require.NoError(t, err)
	assert.Equal(t, 0, state.Turn)
	assert.False(t, state.Finished)
	require.Len(t, state.Players, 2)
	assert.Equal(t, "Human", state.Players[0].RaceName)
	assert.Equal(t, "00000000-0000-0000-0000-000000000010", state.Players[0].EnginePlayerUUID)
	assert.Equal(t, 3, state.Players[0].Planets)
	assert.Equal(t, 1500, state.Players[0].Population)
}

func TestInitRejectsEmptyRaces(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// t.Error, not t.Fatal: the handler runs on a non-test goroutine,
		// and FailNow must only be called from the test goroutine.
		t.Error("must not contact engine on empty races")
	}))
	defer server.Close()

	client := newTestClient(t, time.Second, time.Second)
	_, err := client.Init(context.Background(), server.URL, ports.InitRequest{})
	require.Error(t, err)
}

func TestInitValidationErrorMapsToEngineValidation(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusBadRequest)
		_, _ = w.Write([]byte(`{"error":"races must contain at least 10 entries"}`))
	}))
	defer server.Close()

	client := newTestClient(t, time.Second, time.Second)
	_, err := client.Init(context.Background(), server.URL, ports.InitRequest{
		Races: []ports.InitRace{{RaceName: "X"}},
	})
	require.Error(t, err)
	assert.True(t, errors.Is(err, ports.ErrEngineValidation))
	assert.Contains(t, err.Error(), "must contain at least 10")
}

func TestInitInternalErrorMapsToUnreachable(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusInternalServerError)
		_, _ = w.Write([]byte(`{"generic_error":"boom","code":42}`))
	}))
	defer server.Close()

	client := newTestClient(t, time.Second, time.Second)
	_, err := client.Init(context.Background(), server.URL, ports.InitRequest{Races: []ports.InitRace{{RaceName: "X"}}})
	require.Error(t, err)
	assert.True(t, errors.Is(err, ports.ErrEngineUnreachable))
	assert.Contains(t, err.Error(), "code=42")
}

func TestStatusHappyPath(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		require.Equal(t, http.MethodGet, r.Method)
		require.Equal(t, "/api/v1/admin/status", r.URL.Path)
		_, _ = w.Write([]byte(`{
			"id": "g-1",
			"turn": 5,
			"stage": 0,
			"finished": false,
			"player": [
				{"id":"p-1","raceName":"Human","planets":4,"population":1700.0,"extinct":false}
			]
		}`))
	}))
	defer server.Close()

	client := newTestClient(t, time.Second, time.Second)
	state, err := client.Status(context.Background(), server.URL)
	require.NoError(t, err)
	assert.Equal(t, 5, state.Turn)
	require.Len(t, state.Players, 1)
	assert.Equal(t, "Human", state.Players[0].RaceName)
	assert.Equal(t, 1700, state.Players[0].Population)
}

func TestStatusUsesProbeTimeout(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(120 * time.Millisecond)
		_, _ = w.Write([]byte(`{}`))
	}))
	defer server.Close()

	client := newTestClient(t, time.Second, 30*time.Millisecond)
	_, err := client.Status(context.Background(), server.URL)
	require.Error(t, err)
	assert.True(t, errors.Is(err, ports.ErrEngineUnreachable))
}

func TestTurnFinishedFlagPropagates(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		require.Equal(t, http.MethodPut, r.Method)
		require.Equal(t, "/api/v1/admin/turn", r.URL.Path)
		_, _ = w.Write([]byte(`{
			"id":"g","turn":42,"stage":0,"finished":true,
			"player":[{"id":"p1","raceName":"Human","planets":0,"population":0,"extinct":true}]
		}`))
	}))
	defer server.Close()

	client := newTestClient(t, time.Second, time.Second)
	state, err := client.Turn(context.Background(), server.URL)
	require.NoError(t, err)
	assert.Equal(t, 42, state.Turn)
	assert.True(t, state.Finished)
}

func TestDecodeProtocolViolations(t *testing.T) {
	cases := map[string]string{
		"missing id":          `{"turn":0,"stage":0,"finished":false,"player":[]}`,
		"missing player":      `{"id":"g","turn":0,"stage":0,"finished":false}`,
		"missing race name":   `{"id":"g","turn":0,"stage":0,"finished":false,"player":[{"id":"p","planets":0,"population":0,"extinct":false}]}`,
		"missing player id":   `{"id":"g","turn":0,"stage":0,"finished":false,"player":[{"raceName":"X","planets":0,"population":0,"extinct":false}]}`,
		"negative planets":    `{"id":"g","turn":0,"stage":0,"finished":false,"player":[{"id":"p","raceName":"X","planets":-1,"population":0,"extinct":false}]}`,
		"infinite population": `{"id":"g","turn":0,"stage":0,"finished":false,"player":[{"id":"p","raceName":"X","planets":1,"population":1e400,"extinct":false}]}`,
	}
	for name, body := range cases {
		t.Run(name, func(t *testing.T) {
			server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
				_, _ = w.Write([]byte(body))
			}))
			defer server.Close()
			client := newTestClient(t, time.Second, time.Second)
			_, err := client.Status(context.Background(), server.URL)
			require.Error(t, err)
			assert.True(t, errors.Is(err, ports.ErrEngineProtocolViolation), "case %q: %v", name, err)
		})
	}
}

func TestBanishRaceHappyPath(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		require.Equal(t, http.MethodPost, r.Method)
		require.Equal(t, "/api/v1/admin/race/banish", r.URL.Path)
		var got banishRequestEnvelope
		require.NoError(t, json.NewDecoder(r.Body).Decode(&got))
		assert.Equal(t, "Klingon", got.RaceName)
		w.WriteHeader(http.StatusNoContent)
	}))
	defer server.Close()

	client := newTestClient(t, time.Second, time.Second)
	require.NoError(t, client.BanishRace(context.Background(), server.URL, "Klingon"))
}

func TestBanishRaceRejectsBlankName(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		t.Error("must not contact engine on blank race name")
	}))
	defer server.Close()
	client := newTestClient(t, time.Second, time.Second)
	require.Error(t, client.BanishRace(context.Background(), server.URL, " "))
}

func TestBanishRaceValidationError(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusBadRequest)
		_, _ = w.Write([]byte(`{"error":"unknown race"}`))
	}))
	defer server.Close()
	client := newTestClient(t, time.Second, time.Second)
	err := client.BanishRace(context.Background(), server.URL, "Vulcan")
	require.Error(t, err)
	assert.True(t, errors.Is(err, ports.ErrEngineValidation))
}

func TestExecuteCommandsHappyPath(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		require.Equal(t, http.MethodPut, r.Method)
		require.Equal(t, "/api/v1/command", r.URL.Path)
		body, _ := io.ReadAll(r.Body)
		assert.JSONEq(t, `{"actor":"Human","cmd":[]}`, string(body))
		w.WriteHeader(http.StatusNoContent)
	}))
	defer server.Close()

	client := newTestClient(t, time.Second, time.Second)
	body, err := client.ExecuteCommands(context.Background(), server.URL, json.RawMessage(`{"actor":"Human","cmd":[]}`))
	require.NoError(t, err)
	assert.Nil(t, body)
}

func TestExecuteCommandsValidationReturnsBody(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusBadRequest)
		_, _ = w.Write([]byte(`{"error":"bad command"}`))
	}))
	defer server.Close()

	client := newTestClient(t, time.Second, time.Second)
	body, err := client.ExecuteCommands(context.Background(), server.URL, json.RawMessage(`{"actor":"Human","cmd":[{}]}`))
	require.Error(t, err)
	assert.True(t, errors.Is(err, ports.ErrEngineValidation))
	assert.JSONEq(t, `{"error":"bad command"}`, string(body))
}

func TestExecuteCommandsRejectsEmptyPayload(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		t.Error("must not contact engine with empty payload")
	}))
	defer server.Close()
	client := newTestClient(t, time.Second, time.Second)
	_, err := client.ExecuteCommands(context.Background(), server.URL, json.RawMessage(` `))
	require.Error(t, err)
}

func TestPutOrdersHappyPath(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		require.Equal(t, http.MethodPut, r.Method)
		require.Equal(t, "/api/v1/order", r.URL.Path)
		w.WriteHeader(http.StatusNoContent)
	}))
	defer server.Close()
	client := newTestClient(t, time.Second, time.Second)
	body, err := client.PutOrders(context.Background(), server.URL, json.RawMessage(`{"actor":"Human","cmd":[]}`))
	require.NoError(t, err)
	assert.Nil(t, body)
}

func TestGetReportHappyPath(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		require.Equal(t, http.MethodGet, r.Method)
		require.Equal(t, "/api/v1/report", r.URL.Path)
		assert.Equal(t, "Human", r.URL.Query().Get("player"))
		assert.Equal(t, "7", r.URL.Query().Get("turn"))
		_, _ = w.Write([]byte(`{"version":"1","turn":7,"race":"Human"}`))
	}))
	defer server.Close()

	client := newTestClient(t, time.Second, time.Second)
	body, err := client.GetReport(context.Background(), server.URL, "Human", 7)
	require.NoError(t, err)
	assert.JSONEq(t, `{"version":"1","turn":7,"race":"Human"}`, string(body))
}

func TestGetReportEmptyBodyIsProtocolViolation(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	defer server.Close()
	client := newTestClient(t, time.Second, time.Second)
	_, err := client.GetReport(context.Background(), server.URL, "Human", 0)
	require.Error(t, err)
	assert.True(t, errors.Is(err, ports.ErrEngineProtocolViolation))
}

func TestGetReportRejectsBadInput(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		t.Error("must not contact engine on bad input")
	}))
	defer server.Close()
	client := newTestClient(t, time.Second, time.Second)
	_, err := client.GetReport(context.Background(), server.URL, " ", 0)
	require.Error(t, err)
	_, err = client.GetReport(context.Background(), server.URL, "Human", -1)
	require.Error(t, err)
}

func TestValidateBaseRejectsBadURLs(t *testing.T) {
	client := newTestClient(t, time.Second, time.Second)
	_, err := client.Status(context.Background(), "")
	require.Error(t, err)
	_, err = client.Status(context.Background(), "engine:8080")
	require.Error(t, err)
	require.Contains(t, err.Error(), "absolute")
}

func TestCancelledContextSurfaces(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		t.Error("must not contact engine with cancelled context")
	}))
	defer server.Close()
	client := newTestClient(t, time.Second, time.Second)
	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	_, err := client.Status(ctx, server.URL)
	require.Error(t, err)
	assert.True(t, errors.Is(err, context.Canceled))
}

func TestSummariseEngineErrorFallback(t *testing.T) {
	got := summariseEngineError([]byte("not json"), 502)
	assert.True(t, strings.Contains(got, "status=502"))
}

func TestCloseIsIdempotent(t *testing.T) {
	client := newTestClient(t, time.Second, time.Second)
	require.NoError(t, client.Close())
	require.NoError(t, client.Close())
}
@@ -0,0 +1,343 @@
// Package lobbyclient provides the trusted-internal Lobby REST client
// Game Master uses to fetch membership lists for the in-process
// authorization cache and to resolve the human-readable `game_name`
// consumed by notification intents.
//
// Two endpoints are mounted today:
//
//   - `GET /api/v1/internal/games/{game_id}/memberships` — pagination is
//     handled internally so callers always receive every membership of
//     the game;
//   - `GET /api/v1/internal/games/{game_id}` — single read used by the
//     turn-generation orchestrator to resolve `game_name` per
//     notification.
package lobbyclient

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strconv"
	"strings"
	"time"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"

	"galaxy/gamemaster/internal/ports"
)

const (
	membershipsPathTemplate = "/api/v1/internal/games/%s/memberships"

	gameRecordPathTemplate = "/api/v1/internal/games/%s"

	// pageSize is the per-call page size; matches the Lobby spec
	// maximum (200) so we walk fewer pages on large rosters.
	pageSize = 200

	// maxPages caps the page walk to defend against an upstream that
	// keeps returning a `next_page_token` indefinitely. 64 pages of
	// 200 items each cover 12_800 memberships per game — orders of
	// magnitude beyond any realistic Galaxy roster.
	maxPages = 64
)

// Config configures one HTTP-backed Lobby internal client.
type Config struct {
	// BaseURL stores the absolute base URL of the Lobby internal HTTP
	// listener (e.g. `http://lobby:8095`).
	BaseURL string

	// RequestTimeout bounds one outbound page request. The total
	// wall-clock for `GetMemberships` is at most
	// `RequestTimeout * <pages>`, capped indirectly by the per-page
	// limit and `maxPages`.
	RequestTimeout time.Duration
}

// Client resolves Lobby memberships through the trusted internal HTTP
// API.
type Client struct {
	baseURL              string
	requestTimeout       time.Duration
	httpClient           *http.Client
	closeIdleConnections func()
}

type membershipListEnvelope struct {
	Items         []membershipRecordEnvelope `json:"items"`
	NextPageToken string                     `json:"next_page_token"`
}

type membershipRecordEnvelope struct {
	MembershipID string `json:"membership_id"`
	GameID       string `json:"game_id"`
	UserID       string `json:"user_id"`
	RaceName     string `json:"race_name"`
	Status       string `json:"status"`
	JoinedAt     int64  `json:"joined_at"`
	RemovedAt    *int64 `json:"removed_at,omitempty"`
}

// gameRecordEnvelope captures the fields GM consumes from Lobby's
// `GameRecord` schema. Lobby may carry additional fields; the JSON
// decoder ignores them.
type gameRecordEnvelope struct {
	GameID   string `json:"game_id"`
	GameName string `json:"game_name"`
	Status   string `json:"status"`
}

type errorEnvelope struct {
	Error *errorBody `json:"error"`
}

type errorBody struct {
	Code    string `json:"code"`
	Message string `json:"message"`
}

// NewClient constructs a Lobby internal client with otelhttp-wrapped
// transport cloned from `http.DefaultTransport`. Call `Close` to
// release idle connections at shutdown.
func NewClient(cfg Config) (*Client, error) {
	transport, ok := http.DefaultTransport.(*http.Transport)
	if !ok {
		return nil, errors.New("new lobby client: default transport is not *http.Transport")
	}
	cloned := transport.Clone()
	return newClient(cfg, &http.Client{Transport: otelhttp.NewTransport(cloned)}, cloned.CloseIdleConnections)
}

func newClient(cfg Config, httpClient *http.Client, closeIdleConnections func()) (*Client, error) {
	switch {
	case strings.TrimSpace(cfg.BaseURL) == "":
		return nil, errors.New("new lobby client: base url must not be empty")
	case cfg.RequestTimeout <= 0:
		return nil, errors.New("new lobby client: request timeout must be positive")
	case httpClient == nil:
		return nil, errors.New("new lobby client: http client must not be nil")
	}
	parsed, err := url.Parse(strings.TrimRight(strings.TrimSpace(cfg.BaseURL), "/"))
	if err != nil {
		return nil, fmt.Errorf("new lobby client: parse base url: %w", err)
	}
	if parsed.Scheme == "" || parsed.Host == "" {
		return nil, errors.New("new lobby client: base url must be absolute")
	}
	return &Client{
		baseURL:              parsed.String(),
		requestTimeout:       cfg.RequestTimeout,
		httpClient:           httpClient,
		closeIdleConnections: closeIdleConnections,
	}, nil
}

// Close releases idle HTTP connections owned by the underlying
// transport. Safe to call multiple times.
func (client *Client) Close() error {
	if client == nil || client.closeIdleConnections == nil {
		return nil
	}
	client.closeIdleConnections()
	return nil
}

// GetMemberships returns every membership of gameID, walking the
// pagination chain transparently. Transport faults, non-2xx responses,
// malformed payloads, and pagination overflow all surface as
// `ports.ErrLobbyUnavailable` so callers can branch with `errors.Is`.
func (client *Client) GetMemberships(ctx context.Context, gameID string) ([]ports.Membership, error) {
	if client == nil || client.httpClient == nil {
		return nil, errors.New("lobby get memberships: nil client")
	}
	if ctx == nil {
		return nil, errors.New("lobby get memberships: nil context")
	}
	if err := ctx.Err(); err != nil {
		return nil, err
	}
	if strings.TrimSpace(gameID) == "" {
		return nil, errors.New("lobby get memberships: game id must not be empty")
	}

	var memberships []ports.Membership
	pathPrefix := fmt.Sprintf(membershipsPathTemplate, url.PathEscape(gameID))
	pageToken := ""
	for range maxPages {
		payload, statusCode, err := client.doRequest(ctx, http.MethodGet, buildPagedQuery(pathPrefix, pageToken))
		if err != nil {
			return nil, fmt.Errorf("%w: %w", ports.ErrLobbyUnavailable, err)
		}
		if statusCode != http.StatusOK {
			errorCode := decodeErrorCode(payload)
			if errorCode != "" {
				return nil, fmt.Errorf("%w: unexpected status %d (error_code=%s)", ports.ErrLobbyUnavailable, statusCode, errorCode)
			}
			return nil, fmt.Errorf("%w: unexpected status %d", ports.ErrLobbyUnavailable, statusCode)
		}
		var envelope membershipListEnvelope
		if err := decodeJSONPayload(payload, &envelope); err != nil {
			return nil, fmt.Errorf("%w: decode response: %w", ports.ErrLobbyUnavailable, err)
		}
		for index, item := range envelope.Items {
			converted, err := toMembership(item)
			if err != nil {
				return nil, fmt.Errorf("%w: items[%d]: %w", ports.ErrLobbyUnavailable, index, err)
			}
			memberships = append(memberships, converted)
		}
		if strings.TrimSpace(envelope.NextPageToken) == "" {
			return memberships, nil
		}
		pageToken = envelope.NextPageToken
	}
	return nil, fmt.Errorf("%w: pagination overflow after %d pages", ports.ErrLobbyUnavailable, maxPages)
}

// GetGameSummary returns the narrow projection of Lobby's GameRecord
// (game id, game name, lifecycle status) for gameID. Transport faults,
// non-2xx responses, malformed payloads, and missing required fields
// surface as `ports.ErrLobbyUnavailable` so callers can branch with
// `errors.Is`.
func (client *Client) GetGameSummary(ctx context.Context, gameID string) (ports.GameSummary, error) {
	if client == nil || client.httpClient == nil {
		return ports.GameSummary{}, errors.New("lobby get game summary: nil client")
	}
	if ctx == nil {
		return ports.GameSummary{}, errors.New("lobby get game summary: nil context")
	}
	if err := ctx.Err(); err != nil {
		return ports.GameSummary{}, err
	}
	if strings.TrimSpace(gameID) == "" {
		return ports.GameSummary{}, errors.New("lobby get game summary: game id must not be empty")
	}

	requestPath := fmt.Sprintf(gameRecordPathTemplate, url.PathEscape(gameID))
	payload, statusCode, err := client.doRequest(ctx, http.MethodGet, requestPath)
	if err != nil {
		return ports.GameSummary{}, fmt.Errorf("%w: %w", ports.ErrLobbyUnavailable, err)
	}
	if statusCode != http.StatusOK {
		errorCode := decodeErrorCode(payload)
		if errorCode != "" {
			return ports.GameSummary{}, fmt.Errorf(
				"%w: unexpected status %d (error_code=%s)",
				ports.ErrLobbyUnavailable, statusCode, errorCode,
			)
		}
		return ports.GameSummary{}, fmt.Errorf(
			"%w: unexpected status %d", ports.ErrLobbyUnavailable, statusCode,
		)
	}
	var envelope gameRecordEnvelope
	if err := decodeJSONPayload(payload, &envelope); err != nil {
		return ports.GameSummary{}, fmt.Errorf("%w: decode response: %w", ports.ErrLobbyUnavailable, err)
	}
	if strings.TrimSpace(envelope.GameID) == "" {
		return ports.GameSummary{}, fmt.Errorf("%w: missing game_id", ports.ErrLobbyUnavailable)
	}
	if strings.TrimSpace(envelope.GameName) == "" {
		return ports.GameSummary{}, fmt.Errorf("%w: missing game_name", ports.ErrLobbyUnavailable)
	}
	if strings.TrimSpace(envelope.Status) == "" {
		return ports.GameSummary{}, fmt.Errorf("%w: missing status", ports.ErrLobbyUnavailable)
	}
	return ports.GameSummary{
		GameID:   envelope.GameID,
		GameName: envelope.GameName,
		Status:   envelope.Status,
	}, nil
}

func buildPagedQuery(path, pageToken string) string {
	params := url.Values{}
	params.Set("page_size", strconv.Itoa(pageSize))
	if pageToken != "" {
		params.Set("page_token", pageToken)
	}
	return path + "?" + params.Encode()
}

func toMembership(record membershipRecordEnvelope) (ports.Membership, error) {
	if strings.TrimSpace(record.UserID) == "" {
		return ports.Membership{}, errors.New("missing user_id")
	}
	if strings.TrimSpace(record.RaceName) == "" {
		return ports.Membership{}, errors.New("missing race_name")
	}
	if strings.TrimSpace(record.Status) == "" {
		return ports.Membership{}, errors.New("missing status")
	}
	membership := ports.Membership{
		UserID:   record.UserID,
		RaceName: record.RaceName,
		Status:   record.Status,
		JoinedAt: time.UnixMilli(record.JoinedAt).UTC(),
	}
	if record.RemovedAt != nil {
		removedAt := time.UnixMilli(*record.RemovedAt).UTC()
		membership.RemovedAt = &removedAt
	}
	return membership, nil
}

func (client *Client) doRequest(ctx context.Context, method, requestPath string) ([]byte, int, error) {
	attemptCtx, cancel := context.WithTimeout(ctx, client.requestTimeout)
	defer cancel()

	req, err := http.NewRequestWithContext(attemptCtx, method, client.baseURL+requestPath, nil)
	if err != nil {
		return nil, 0, fmt.Errorf("build request: %w", err)
	}
	req.Header.Set("Accept", "application/json")

	resp, err := client.httpClient.Do(req)
	if err != nil {
		return nil, 0, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, 0, fmt.Errorf("read response body: %w", err)
	}
	return body, resp.StatusCode, nil
}

func decodeJSONPayload(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}
	return nil
}

func decodeErrorCode(payload []byte) string {
	if len(payload) == 0 {
		return ""
	}
	var envelope errorEnvelope
	if err := json.Unmarshal(payload, &envelope); err != nil {
		return ""
	}
	if envelope.Error == nil {
		return ""
	}
	return envelope.Error.Code
}

// Compile-time assertion: Client implements ports.LobbyClient.
var _ ports.LobbyClient = (*Client)(nil)
@@ -0,0 +1,344 @@
package lobbyclient

import (
	"context"
	"errors"
	"net/http"
	"net/http/httptest"
	"strconv"
	"sync/atomic"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"galaxy/gamemaster/internal/ports"
)

func newTestClient(t *testing.T, baseURL string, timeout time.Duration) *Client {
	t.Helper()
	client, err := NewClient(Config{BaseURL: baseURL, RequestTimeout: timeout})
	require.NoError(t, err)
	t.Cleanup(func() { _ = client.Close() })
	return client
}

func TestNewClientValidatesConfig(t *testing.T) {
	cases := map[string]Config{
		"empty base url":        {BaseURL: "", RequestTimeout: time.Second},
		"non-absolute base url": {BaseURL: "lobby:8095", RequestTimeout: time.Second},
		"non-positive timeout":  {BaseURL: "http://lobby:8095", RequestTimeout: 0},
	}
	for name, cfg := range cases {
		t.Run(name, func(t *testing.T) {
			_, err := NewClient(cfg)
			require.Error(t, err)
		})
	}
}

func TestGetMembershipsHappyPathSinglePage(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		require.Equal(t, http.MethodGet, r.Method)
		require.Equal(t, "/api/v1/internal/games/game-1/memberships", r.URL.Path)
		assert.Equal(t, strconv.Itoa(pageSize), r.URL.Query().Get("page_size"))
		assert.Empty(t, r.URL.Query().Get("page_token"))

		w.Header().Set("Content-Type", "application/json")
		_, _ = w.Write([]byte(`{
			"items": [
				{"membership_id":"m1","game_id":"game-1","user_id":"u1","race_name":"Human","status":"active","joined_at":1700000000000},
				{"membership_id":"m2","game_id":"game-1","user_id":"u2","race_name":"Klingon","status":"removed","joined_at":1700000010000,"removed_at":1700000020000}
			]
		}`))
	}))
	defer server.Close()

	client := newTestClient(t, server.URL, time.Second)
	memberships, err := client.GetMemberships(context.Background(), "game-1")
	require.NoError(t, err)
	require.Len(t, memberships, 2)

	assert.Equal(t, "u1", memberships[0].UserID)
	assert.Equal(t, "Human", memberships[0].RaceName)
	assert.Equal(t, "active", memberships[0].Status)
	assert.Equal(t, time.UnixMilli(1700000000000).UTC(), memberships[0].JoinedAt)
	assert.Nil(t, memberships[0].RemovedAt)

	assert.Equal(t, "removed", memberships[1].Status)
	require.NotNil(t, memberships[1].RemovedAt)
	assert.Equal(t, time.UnixMilli(1700000020000).UTC(), *memberships[1].RemovedAt)
}

func TestGetMembershipsFollowsPagination(t *testing.T) {
	var calls atomic.Int32
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		call := calls.Add(1)
		w.Header().Set("Content-Type", "application/json")
		switch call {
		case 1:
			assert.Empty(t, r.URL.Query().Get("page_token"))
			_, _ = w.Write([]byte(`{
				"items":[{"membership_id":"m1","game_id":"g","user_id":"u1","race_name":"Human","status":"active","joined_at":1}],
				"next_page_token":"tok-2"
			}`))
		case 2:
			assert.Equal(t, "tok-2", r.URL.Query().Get("page_token"))
			_, _ = w.Write([]byte(`{
				"items":[{"membership_id":"m2","game_id":"g","user_id":"u2","race_name":"Klingon","status":"active","joined_at":2}],
				"next_page_token":"tok-3"
			}`))
		case 3:
			assert.Equal(t, "tok-3", r.URL.Query().Get("page_token"))
			_, _ = w.Write([]byte(`{
				"items":[{"membership_id":"m3","game_id":"g","user_id":"u3","race_name":"Vulcan","status":"blocked","joined_at":3}]
			}`))
		default:
			t.Errorf("unexpected extra call %d", call)
		}
	}))
	defer server.Close()

	client := newTestClient(t, server.URL, time.Second)
	memberships, err := client.GetMemberships(context.Background(), "g")
	require.NoError(t, err)
	require.Len(t, memberships, 3)
	assert.Equal(t, "u1", memberships[0].UserID)
	assert.Equal(t, "u2", memberships[1].UserID)
	assert.Equal(t, "u3", memberships[2].UserID)
	assert.Equal(t, int32(3), calls.Load())
}
|
||||
|
||||
func TestGetMembershipsPaginationOverflow(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
|
||||
_, _ = w.Write([]byte(`{"items":[],"next_page_token":"never-ends"}`))
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := newTestClient(t, server.URL, time.Second)
|
||||
_, err := client.GetMemberships(context.Background(), "g")
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, ports.ErrLobbyUnavailable))
|
||||
assert.Contains(t, err.Error(), "pagination overflow")
|
||||
}
|
||||
|
||||
func TestGetMembershipsInternalErrorMapsToUnavailable(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
|
||||
w.WriteHeader(http.StatusInternalServerError)
|
||||
_, _ = w.Write([]byte(`{"error":{"code":"internal_error","message":"boom"}}`))
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := newTestClient(t, server.URL, time.Second)
|
||||
_, err := client.GetMemberships(context.Background(), "g")
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, ports.ErrLobbyUnavailable))
|
||||
assert.Contains(t, err.Error(), "internal_error")
|
||||
}
|
||||
|
||||
func TestGetMembershipsTimeoutMapsToUnavailable(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
|
||||
time.Sleep(120 * time.Millisecond)
|
||||
_, _ = w.Write([]byte(`{}`))
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := newTestClient(t, server.URL, 30*time.Millisecond)
|
||||
_, err := client.GetMemberships(context.Background(), "g")
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, ports.ErrLobbyUnavailable))
|
||||
}
|
||||
|
||||
func TestGetMembershipsRejectsBadInput(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
|
||||
t.Fatal("must not contact lobby on bad input")
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := newTestClient(t, server.URL, time.Second)
|
||||
_, err := client.GetMemberships(context.Background(), " ")
|
||||
require.Error(t, err)
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
cancel()
|
||||
_, err = client.GetMemberships(ctx, "g")
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, context.Canceled))
|
||||
}
|
||||
|
||||
func TestGetMembershipsMalformedPayload(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
|
||||
_, _ = w.Write([]byte(`{"items":[{"membership_id":"m","game_id":"g","user_id":"","race_name":"","status":"active","joined_at":1}]}`))
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := newTestClient(t, server.URL, time.Second)
|
||||
_, err := client.GetMemberships(context.Background(), "g")
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, ports.ErrLobbyUnavailable))
|
||||
}
|
||||
|
||||
func TestGetMembershipsEmptyList(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
|
||||
_, _ = w.Write([]byte(`{"items":[]}`))
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := newTestClient(t, server.URL, time.Second)
|
||||
memberships, err := client.GetMemberships(context.Background(), "g")
|
||||
require.NoError(t, err)
|
||||
assert.Empty(t, memberships)
|
||||
}
|
||||
|
||||
func TestGetMembershipsTrailingJSONIsRejected(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
|
||||
_, _ = w.Write([]byte(`{"items":[]}{"items":[]}`))
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := newTestClient(t, server.URL, time.Second)
|
||||
_, err := client.GetMemberships(context.Background(), "g")
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, ports.ErrLobbyUnavailable))
|
||||
}
|
||||
|
||||
func TestGetGameSummaryHappyPath(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
require.Equal(t, http.MethodGet, r.Method)
|
||||
require.Equal(t, "/api/v1/internal/games/game-1", r.URL.Path)
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
_, _ = w.Write([]byte(`{
|
||||
"game_id":"game-1",
|
||||
"game_name":"Andromeda Conquest",
|
||||
"game_type":"public",
|
||||
"owner_user_id":"",
|
||||
"status":"running",
|
||||
"min_players":2,
|
||||
"max_players":8,
|
||||
"start_gap_hours":2,
|
||||
"start_gap_players":4,
|
||||
"enrollment_ends_at":1700000000,
|
||||
"turn_schedule":"0 18 * * *",
|
||||
"target_engine_version":"v1.2.3",
|
||||
"created_at":1700000000000,
|
||||
"updated_at":1700000000000,
|
||||
"current_turn":0,
|
||||
"runtime_status":"",
|
||||
"engine_health_summary":""
|
||||
}`))
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := newTestClient(t, server.URL, time.Second)
|
||||
summary, err := client.GetGameSummary(context.Background(), "game-1")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, ports.GameSummary{
|
||||
GameID: "game-1",
|
||||
GameName: "Andromeda Conquest",
|
||||
Status: "running",
|
||||
}, summary)
|
||||
}
|
||||
|
||||
func TestGetGameSummaryNotFoundMapsToUnavailable(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
|
||||
w.WriteHeader(http.StatusNotFound)
|
||||
_, _ = w.Write([]byte(`{"error":{"code":"not_found","message":"game not found"}}`))
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := newTestClient(t, server.URL, time.Second)
|
||||
_, err := client.GetGameSummary(context.Background(), "missing")
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, ports.ErrLobbyUnavailable))
|
||||
assert.Contains(t, err.Error(), "not_found")
|
||||
}
|
||||
|
||||
func TestGetGameSummaryInternalErrorMapsToUnavailable(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
|
||||
w.WriteHeader(http.StatusInternalServerError)
|
||||
_, _ = w.Write([]byte(`{"error":{"code":"internal_error","message":"boom"}}`))
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := newTestClient(t, server.URL, time.Second)
|
||||
_, err := client.GetGameSummary(context.Background(), "g")
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, ports.ErrLobbyUnavailable))
|
||||
assert.Contains(t, err.Error(), "internal_error")
|
||||
}
|
||||
|
||||
func TestGetGameSummaryTimeoutMapsToUnavailable(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
|
||||
time.Sleep(120 * time.Millisecond)
|
||||
_, _ = w.Write([]byte(`{}`))
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := newTestClient(t, server.URL, 30*time.Millisecond)
|
||||
_, err := client.GetGameSummary(context.Background(), "g")
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, ports.ErrLobbyUnavailable))
|
||||
}
|
||||
|
||||
func TestGetGameSummaryMalformedJSON(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
|
||||
_, _ = w.Write([]byte(`{not-json}`))
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := newTestClient(t, server.URL, time.Second)
|
||||
_, err := client.GetGameSummary(context.Background(), "g")
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, ports.ErrLobbyUnavailable))
|
||||
}
|
||||
|
||||
func TestGetGameSummaryMissingRequiredFields(t *testing.T) {
|
||||
cases := map[string]string{
|
||||
"missing game_id": `{"game_name":"Andromeda","status":"running"}`,
|
||||
"missing game_name": `{"game_id":"g","status":"running"}`,
|
||||
"missing status": `{"game_id":"g","game_name":"Andromeda"}`,
|
||||
}
|
||||
for name, body := range cases {
|
||||
t.Run(name, func(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
|
||||
_, _ = w.Write([]byte(body))
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := newTestClient(t, server.URL, time.Second)
|
||||
_, err := client.GetGameSummary(context.Background(), "g")
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, ports.ErrLobbyUnavailable))
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestGetGameSummaryRejectsBadInput(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(_ http.ResponseWriter, _ *http.Request) {
|
||||
t.Fatal("must not contact lobby on bad input")
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := newTestClient(t, server.URL, time.Second)
|
||||
_, err := client.GetGameSummary(context.Background(), " ")
|
||||
require.Error(t, err)
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
cancel()
|
||||
_, err = client.GetGameSummary(ctx, "g")
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, context.Canceled))
|
||||
}
|
||||
|
||||
func TestCloseIsIdempotent(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
|
||||
_, _ = w.Write([]byte(`{"items":[]}`))
|
||||
}))
|
||||
defer server.Close()
|
||||
client := newTestClient(t, server.URL, time.Second)
|
||||
_, _ = client.GetMemberships(context.Background(), "g")
|
||||
require.NoError(t, client.Close())
|
||||
require.NoError(t, client.Close())
|
||||
}
|
||||
|
||||
@@ -0,0 +1,180 @@
// Package lobbyeventspublisher provides the Redis-Streams-backed
// publisher for `gm:lobby_events`. The stream carries two distinct
// message types — `runtime_snapshot_update` and `game_finished` —
// discriminated by the `event_type` field as fixed by
// `gamemaster/api/runtime-events-asyncapi.yaml`.
//
// The adapter mirrors `rtmanager/internal/adapters/healtheventspublisher`
// behaviourally: the publisher validates the message before XADDing,
// emits one entry per call, and never trims the stream (consumers own
// their consumer-group offsets).
package lobbyeventspublisher

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"strconv"

	"github.com/redis/go-redis/v9"

	"galaxy/gamemaster/internal/domain/runtime"
	"galaxy/gamemaster/internal/ports"
)

// Wire field names used by the Redis Streams payload. Frozen by
// `gamemaster/api/runtime-events-asyncapi.yaml`; renaming any of them
// breaks Game Lobby's consumer.
const (
	fieldEventType           = "event_type"
	fieldGameID              = "game_id"
	fieldCurrentTurn         = "current_turn"
	fieldFinalTurnNumber     = "final_turn_number"
	fieldRuntimeStatus       = "runtime_status"
	fieldEngineHealthSummary = "engine_health_summary"
	fieldPlayerTurnStats     = "player_turn_stats"
	fieldOccurredAtMS        = "occurred_at_ms"
	fieldFinishedAtMS        = "finished_at_ms"

	eventTypeRuntimeSnapshotUpdate = "runtime_snapshot_update"
	eventTypeGameFinished          = "game_finished"

	emptyPlayerTurnStatsJSON = "[]"
)

// Config groups the dependencies and stream name required to
// construct a Publisher.
type Config struct {
	// Client appends entries to Redis Streams. Must be non-nil.
	Client *redis.Client

	// Stream stores the Redis Stream key events are published to.
	// Must not be empty (typically `gm:lobby_events`).
	Stream string
}

// Publisher implements `ports.LobbyEventsPublisher` on top of a shared
// Redis client.
type Publisher struct {
	client *redis.Client
	stream string
}

// NewPublisher constructs a Publisher from cfg. Validation errors
// surface the missing collaborator verbatim.
func NewPublisher(cfg Config) (*Publisher, error) {
	if cfg.Client == nil {
		return nil, errors.New("new gamemaster lobby events publisher: nil redis client")
	}
	if cfg.Stream == "" {
		return nil, errors.New("new gamemaster lobby events publisher: stream must not be empty")
	}
	return &Publisher{client: cfg.Client, stream: cfg.Stream}, nil
}

// PublishSnapshotUpdate appends a `runtime_snapshot_update` message to
// the stream after validating msg through msg.Validate.
func (publisher *Publisher) PublishSnapshotUpdate(ctx context.Context, msg ports.RuntimeSnapshotUpdate) error {
	if err := publisher.guardCall(ctx); err != nil {
		return err
	}
	if err := msg.Validate(); err != nil {
		return fmt.Errorf("publish runtime snapshot update: %w", err)
	}
	statsJSON, err := encodePlayerTurnStats(msg.PlayerTurnStats)
	if err != nil {
		return fmt.Errorf("publish runtime snapshot update: %w", err)
	}
	values := map[string]any{
		fieldEventType:           eventTypeRuntimeSnapshotUpdate,
		fieldGameID:              msg.GameID,
		fieldCurrentTurn:         strconv.Itoa(msg.CurrentTurn),
		fieldRuntimeStatus:       string(msg.RuntimeStatus),
		fieldEngineHealthSummary: msg.EngineHealthSummary,
		fieldPlayerTurnStats:     statsJSON,
		fieldOccurredAtMS:        strconv.FormatInt(msg.OccurredAt.UTC().UnixMilli(), 10),
	}
	if err := publisher.client.XAdd(ctx, &redis.XAddArgs{
		Stream: publisher.stream,
		Values: values,
	}).Err(); err != nil {
		return fmt.Errorf("publish runtime snapshot update: xadd: %w", err)
	}
	return nil
}

// PublishGameFinished appends a `game_finished` message to the stream
// after validating msg through msg.Validate.
func (publisher *Publisher) PublishGameFinished(ctx context.Context, msg ports.GameFinished) error {
	if err := publisher.guardCall(ctx); err != nil {
		return err
	}
	if err := msg.Validate(); err != nil {
		return fmt.Errorf("publish game finished: %w", err)
	}
	if msg.RuntimeStatus != runtime.StatusFinished {
		return fmt.Errorf("publish game finished: runtime status must be %q, got %q", runtime.StatusFinished, msg.RuntimeStatus)
	}
	statsJSON, err := encodePlayerTurnStats(msg.PlayerTurnStats)
	if err != nil {
		return fmt.Errorf("publish game finished: %w", err)
	}
	values := map[string]any{
		fieldEventType:       eventTypeGameFinished,
		fieldGameID:          msg.GameID,
		fieldFinalTurnNumber: strconv.Itoa(msg.FinalTurnNumber),
		fieldRuntimeStatus:   string(msg.RuntimeStatus),
		fieldPlayerTurnStats: statsJSON,
		fieldFinishedAtMS:    strconv.FormatInt(msg.FinishedAt.UTC().UnixMilli(), 10),
	}
	if err := publisher.client.XAdd(ctx, &redis.XAddArgs{
		Stream: publisher.stream,
		Values: values,
	}).Err(); err != nil {
		return fmt.Errorf("publish game finished: xadd: %w", err)
	}
	return nil
}

func (publisher *Publisher) guardCall(ctx context.Context) error {
	if publisher == nil || publisher.client == nil {
		return errors.New("nil publisher")
	}
	if ctx == nil {
		return errors.New("nil context")
	}
	return nil
}

// encodePlayerTurnStats returns the JSON serialisation of the per-player
// stats array. Empty input becomes the literal `[]` so the stream entry
// always carries a valid JSON document for the field.
func encodePlayerTurnStats(stats []ports.PlayerTurnStats) (string, error) {
	if len(stats) == 0 {
		return emptyPlayerTurnStatsJSON, nil
	}
	envelope := make([]playerTurnStatEnvelope, 0, len(stats))
	for _, item := range stats {
		envelope = append(envelope, playerTurnStatEnvelope{
			UserID:     item.UserID,
			Planets:    item.Planets,
			Population: item.Population,
		})
	}
	encoded, err := json.Marshal(envelope)
	if err != nil {
		return "", fmt.Errorf("encode player turn stats: %w", err)
	}
	return string(encoded), nil
}

type playerTurnStatEnvelope struct {
	UserID     string `json:"user_id"`
	Planets    int    `json:"planets"`
	Population int    `json:"population"`
}

// Compile-time assertion: Publisher implements
// ports.LobbyEventsPublisher.
var _ ports.LobbyEventsPublisher = (*Publisher)(nil)
@@ -0,0 +1,186 @@
package lobbyeventspublisher

import (
	"context"
	"encoding/json"
	"strconv"
	"testing"
	"time"

	"github.com/alicebob/miniredis/v2"
	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"galaxy/gamemaster/internal/domain/runtime"
	"galaxy/gamemaster/internal/ports"
)

const testStream = "gm:lobby_events"

func newTestPublisher(t *testing.T) (*Publisher, *redis.Client) {
	t.Helper()
	server := miniredis.RunT(t)
	client := redis.NewClient(&redis.Options{Addr: server.Addr()})
	t.Cleanup(func() { _ = client.Close() })
	publisher, err := NewPublisher(Config{Client: client, Stream: testStream})
	require.NoError(t, err)
	return publisher, client
}

func TestNewPublisherValidation(t *testing.T) {
	t.Run("nil client", func(t *testing.T) {
		_, err := NewPublisher(Config{Stream: testStream})
		require.Error(t, err)
	})
	t.Run("empty stream", func(t *testing.T) {
		client := redis.NewClient(&redis.Options{Addr: "127.0.0.1:0"})
		t.Cleanup(func() { _ = client.Close() })
		_, err := NewPublisher(Config{Client: client})
		require.Error(t, err)
	})
}

func TestPublishSnapshotUpdateHappyPath(t *testing.T) {
	publisher, client := newTestPublisher(t)

	occurredAt := time.Date(2026, 4, 27, 12, 0, 0, 0, time.UTC)
	msg := ports.RuntimeSnapshotUpdate{
		GameID:              "game-1",
		CurrentTurn:         17,
		RuntimeStatus:       runtime.StatusRunning,
		EngineHealthSummary: "healthy",
		PlayerTurnStats: []ports.PlayerTurnStats{
			{UserID: "user-1", Planets: 4, Population: 12000},
			{UserID: "user-2", Planets: 3, Population: 9000},
		},
		OccurredAt: occurredAt,
	}
	require.NoError(t, publisher.PublishSnapshotUpdate(context.Background(), msg))

	entries, err := client.XRange(context.Background(), testStream, "-", "+").Result()
	require.NoError(t, err)
	require.Len(t, entries, 1)
	values := entries[0].Values
	assert.Equal(t, "runtime_snapshot_update", values[fieldEventType])
	assert.Equal(t, "game-1", values[fieldGameID])
	assert.Equal(t, "17", values[fieldCurrentTurn])
	assert.Equal(t, "running", values[fieldRuntimeStatus])
	assert.Equal(t, "healthy", values[fieldEngineHealthSummary])
	assert.Equal(t, strconv.FormatInt(occurredAt.UnixMilli(), 10), values[fieldOccurredAtMS])

	statsRaw, ok := values[fieldPlayerTurnStats].(string)
	require.True(t, ok)
	var stats []playerTurnStatEnvelope
	require.NoError(t, json.Unmarshal([]byte(statsRaw), &stats))
	assert.Equal(t, []playerTurnStatEnvelope{
		{UserID: "user-1", Planets: 4, Population: 12000},
		{UserID: "user-2", Planets: 3, Population: 9000},
	}, stats)
}

func TestPublishSnapshotUpdateEmptyStatsBecomesArray(t *testing.T) {
	publisher, client := newTestPublisher(t)
	msg := ports.RuntimeSnapshotUpdate{
		GameID:              "g",
		CurrentTurn:         0,
		RuntimeStatus:       runtime.StatusStarting,
		EngineHealthSummary: "",
		OccurredAt:          time.Now().UTC(),
	}
	require.NoError(t, publisher.PublishSnapshotUpdate(context.Background(), msg))

	entries, err := client.XRange(context.Background(), testStream, "-", "+").Result()
	require.NoError(t, err)
	require.Len(t, entries, 1)
	assert.Equal(t, "[]", entries[0].Values[fieldPlayerTurnStats])
}

func TestPublishSnapshotUpdateRejectsInvalid(t *testing.T) {
	publisher, client := newTestPublisher(t)
	require.Error(t, publisher.PublishSnapshotUpdate(context.Background(), ports.RuntimeSnapshotUpdate{}))

	entries, err := client.XRange(context.Background(), testStream, "-", "+").Result()
	require.NoError(t, err)
	assert.Empty(t, entries, "invalid messages must not reach the stream")
}

func TestPublishGameFinishedHappyPath(t *testing.T) {
	publisher, client := newTestPublisher(t)

	finishedAt := time.Date(2026, 4, 28, 8, 30, 0, 0, time.UTC)
	msg := ports.GameFinished{
		GameID:          "game-1",
		FinalTurnNumber: 42,
		RuntimeStatus:   runtime.StatusFinished,
		PlayerTurnStats: []ports.PlayerTurnStats{
			{UserID: "user-1", Planets: 6, Population: 25000},
			{UserID: "user-2", Planets: 0, Population: 0},
		},
		FinishedAt: finishedAt,
	}
	require.NoError(t, publisher.PublishGameFinished(context.Background(), msg))

	entries, err := client.XRange(context.Background(), testStream, "-", "+").Result()
	require.NoError(t, err)
	require.Len(t, entries, 1)
	values := entries[0].Values
	assert.Equal(t, "game_finished", values[fieldEventType])
	assert.Equal(t, "game-1", values[fieldGameID])
	assert.Equal(t, "42", values[fieldFinalTurnNumber])
	assert.Equal(t, "finished", values[fieldRuntimeStatus])
	assert.Equal(t, strconv.FormatInt(finishedAt.UnixMilli(), 10), values[fieldFinishedAtMS])

	_, hasOccurred := values[fieldOccurredAtMS]
	assert.False(t, hasOccurred, "game_finished must not carry occurred_at_ms")
	_, hasCurrentTurn := values[fieldCurrentTurn]
	assert.False(t, hasCurrentTurn, "game_finished must not carry current_turn")
	_, hasHealth := values[fieldEngineHealthSummary]
	assert.False(t, hasHealth, "game_finished must not carry engine_health_summary")
}

func TestPublishGameFinishedRejectsBadStatus(t *testing.T) {
	publisher, client := newTestPublisher(t)
	require.Error(t, publisher.PublishGameFinished(context.Background(), ports.GameFinished{
		GameID:          "g",
		FinalTurnNumber: 1,
		RuntimeStatus:   runtime.StatusRunning, // wrong status
		FinishedAt:      time.Now().UTC(),
	}))

	entries, err := client.XRange(context.Background(), testStream, "-", "+").Result()
	require.NoError(t, err)
	assert.Empty(t, entries)
}

func TestTimestampsNormalisedToUTC(t *testing.T) {
	publisher, client := newTestPublisher(t)
	loc, err := time.LoadLocation("Asia/Tokyo")
	require.NoError(t, err)

	msg := ports.RuntimeSnapshotUpdate{
		GameID:        "g",
		CurrentTurn:   1,
		RuntimeStatus: runtime.StatusRunning,
		OccurredAt:    time.Date(2026, 4, 27, 21, 0, 0, 0, loc),
	}
	require.NoError(t, publisher.PublishSnapshotUpdate(context.Background(), msg))

	entries, err := client.XRange(context.Background(), testStream, "-", "+").Result()
	require.NoError(t, err)
	require.Len(t, entries, 1)
	wantMs := msg.OccurredAt.UTC().UnixMilli()
	assert.Equal(t, strconv.FormatInt(wantMs, 10), entries[0].Values[fieldOccurredAtMS])
}

func TestRejectsNilContext(t *testing.T) {
	publisher, _ := newTestPublisher(t)
	//nolint:staticcheck // explicitly testing nil-context rejection.
	err := publisher.PublishSnapshotUpdate(nil, ports.RuntimeSnapshotUpdate{
		GameID:        "g",
		CurrentTurn:   0,
		RuntimeStatus: runtime.StatusStarting,
		OccurredAt:    time.Now().UTC(),
	})
	require.Error(t, err)
}

@@ -0,0 +1,147 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: galaxy/gamemaster/internal/ports (interfaces: EngineClient)
//
// Generated by this command:
//
//	mockgen -destination=../adapters/mocks/mock_engineclient.go -package=mocks galaxy/gamemaster/internal/ports EngineClient
//

// Package mocks is a generated GoMock package.
package mocks

import (
	context "context"
	json "encoding/json"
	ports "galaxy/gamemaster/internal/ports"
	reflect "reflect"

	gomock "go.uber.org/mock/gomock"
)

// MockEngineClient is a mock of EngineClient interface.
type MockEngineClient struct {
	ctrl     *gomock.Controller
	recorder *MockEngineClientMockRecorder
	isgomock struct{}
}

// MockEngineClientMockRecorder is the mock recorder for MockEngineClient.
type MockEngineClientMockRecorder struct {
	mock *MockEngineClient
}

// NewMockEngineClient creates a new mock instance.
func NewMockEngineClient(ctrl *gomock.Controller) *MockEngineClient {
	mock := &MockEngineClient{ctrl: ctrl}
	mock.recorder = &MockEngineClientMockRecorder{mock}
	return mock
}

// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockEngineClient) EXPECT() *MockEngineClientMockRecorder {
	return m.recorder
}

// BanishRace mocks base method.
func (m *MockEngineClient) BanishRace(ctx context.Context, baseURL, raceName string) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "BanishRace", ctx, baseURL, raceName)
	ret0, _ := ret[0].(error)
	return ret0
}

// BanishRace indicates an expected call of BanishRace.
func (mr *MockEngineClientMockRecorder) BanishRace(ctx, baseURL, raceName any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "BanishRace", reflect.TypeOf((*MockEngineClient)(nil).BanishRace), ctx, baseURL, raceName)
}

// ExecuteCommands mocks base method.
func (m *MockEngineClient) ExecuteCommands(ctx context.Context, baseURL string, payload json.RawMessage) (json.RawMessage, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "ExecuteCommands", ctx, baseURL, payload)
	ret0, _ := ret[0].(json.RawMessage)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// ExecuteCommands indicates an expected call of ExecuteCommands.
func (mr *MockEngineClientMockRecorder) ExecuteCommands(ctx, baseURL, payload any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ExecuteCommands", reflect.TypeOf((*MockEngineClient)(nil).ExecuteCommands), ctx, baseURL, payload)
}

// GetReport mocks base method.
func (m *MockEngineClient) GetReport(ctx context.Context, baseURL, raceName string, turn int) (json.RawMessage, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "GetReport", ctx, baseURL, raceName, turn)
	ret0, _ := ret[0].(json.RawMessage)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// GetReport indicates an expected call of GetReport.
func (mr *MockEngineClientMockRecorder) GetReport(ctx, baseURL, raceName, turn any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetReport", reflect.TypeOf((*MockEngineClient)(nil).GetReport), ctx, baseURL, raceName, turn)
}

// Init mocks base method.
func (m *MockEngineClient) Init(ctx context.Context, baseURL string, request ports.InitRequest) (ports.StateResponse, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Init", ctx, baseURL, request)
	ret0, _ := ret[0].(ports.StateResponse)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// Init indicates an expected call of Init.
func (mr *MockEngineClientMockRecorder) Init(ctx, baseURL, request any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Init", reflect.TypeOf((*MockEngineClient)(nil).Init), ctx, baseURL, request)
}

// PutOrders mocks base method.
func (m *MockEngineClient) PutOrders(ctx context.Context, baseURL string, payload json.RawMessage) (json.RawMessage, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "PutOrders", ctx, baseURL, payload)
	ret0, _ := ret[0].(json.RawMessage)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// PutOrders indicates an expected call of PutOrders.
func (mr *MockEngineClientMockRecorder) PutOrders(ctx, baseURL, payload any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PutOrders", reflect.TypeOf((*MockEngineClient)(nil).PutOrders), ctx, baseURL, payload)
}

// Status mocks base method.
func (m *MockEngineClient) Status(ctx context.Context, baseURL string) (ports.StateResponse, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Status", ctx, baseURL)
	ret0, _ := ret[0].(ports.StateResponse)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// Status indicates an expected call of Status.
func (mr *MockEngineClientMockRecorder) Status(ctx, baseURL any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Status", reflect.TypeOf((*MockEngineClient)(nil).Status), ctx, baseURL)
}

// Turn mocks base method.
func (m *MockEngineClient) Turn(ctx context.Context, baseURL string) (ports.StateResponse, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Turn", ctx, baseURL)
	ret0, _ := ret[0].(ports.StateResponse)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// Turn indicates an expected call of Turn.
func (mr *MockEngineClientMockRecorder) Turn(ctx, baseURL any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Turn", reflect.TypeOf((*MockEngineClient)(nil).Turn), ctx, baseURL)
}
@@ -0,0 +1,145 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: galaxy/gamemaster/internal/ports (interfaces: EngineVersionStore)
//
// Generated by this command:
//
// mockgen -destination=../adapters/mocks/mock_engineversionstore.go -package=mocks galaxy/gamemaster/internal/ports EngineVersionStore
//

// Package mocks is a generated GoMock package.
package mocks

import (
	context "context"
	engineversion "galaxy/gamemaster/internal/domain/engineversion"
	ports "galaxy/gamemaster/internal/ports"
	reflect "reflect"
	time "time"

	gomock "go.uber.org/mock/gomock"
)

// MockEngineVersionStore is a mock of EngineVersionStore interface.
type MockEngineVersionStore struct {
	ctrl     *gomock.Controller
	recorder *MockEngineVersionStoreMockRecorder
	isgomock struct{}
}

// MockEngineVersionStoreMockRecorder is the mock recorder for MockEngineVersionStore.
type MockEngineVersionStoreMockRecorder struct {
	mock *MockEngineVersionStore
}

// NewMockEngineVersionStore creates a new mock instance.
func NewMockEngineVersionStore(ctrl *gomock.Controller) *MockEngineVersionStore {
	mock := &MockEngineVersionStore{ctrl: ctrl}
	mock.recorder = &MockEngineVersionStoreMockRecorder{mock}
	return mock
}

// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockEngineVersionStore) EXPECT() *MockEngineVersionStoreMockRecorder {
	return m.recorder
}

// Delete mocks base method.
func (m *MockEngineVersionStore) Delete(ctx context.Context, version string) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Delete", ctx, version)
	ret0, _ := ret[0].(error)
	return ret0
}

// Delete indicates an expected call of Delete.
func (mr *MockEngineVersionStoreMockRecorder) Delete(ctx, version any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Delete", reflect.TypeOf((*MockEngineVersionStore)(nil).Delete), ctx, version)
}

// Deprecate mocks base method.
func (m *MockEngineVersionStore) Deprecate(ctx context.Context, version string, now time.Time) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Deprecate", ctx, version, now)
	ret0, _ := ret[0].(error)
	return ret0
}

// Deprecate indicates an expected call of Deprecate.
func (mr *MockEngineVersionStoreMockRecorder) Deprecate(ctx, version, now any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Deprecate", reflect.TypeOf((*MockEngineVersionStore)(nil).Deprecate), ctx, version, now)
}

// Get mocks base method.
func (m *MockEngineVersionStore) Get(ctx context.Context, version string) (engineversion.EngineVersion, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Get", ctx, version)
	ret0, _ := ret[0].(engineversion.EngineVersion)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// Get indicates an expected call of Get.
func (mr *MockEngineVersionStoreMockRecorder) Get(ctx, version any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Get", reflect.TypeOf((*MockEngineVersionStore)(nil).Get), ctx, version)
}

// Insert mocks base method.
func (m *MockEngineVersionStore) Insert(ctx context.Context, record engineversion.EngineVersion) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Insert", ctx, record)
	ret0, _ := ret[0].(error)
	return ret0
}

// Insert indicates an expected call of Insert.
func (mr *MockEngineVersionStoreMockRecorder) Insert(ctx, record any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Insert", reflect.TypeOf((*MockEngineVersionStore)(nil).Insert), ctx, record)
}

// IsReferencedByActiveRuntime mocks base method.
func (m *MockEngineVersionStore) IsReferencedByActiveRuntime(ctx context.Context, version string) (bool, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "IsReferencedByActiveRuntime", ctx, version)
	ret0, _ := ret[0].(bool)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// IsReferencedByActiveRuntime indicates an expected call of IsReferencedByActiveRuntime.
func (mr *MockEngineVersionStoreMockRecorder) IsReferencedByActiveRuntime(ctx, version any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "IsReferencedByActiveRuntime", reflect.TypeOf((*MockEngineVersionStore)(nil).IsReferencedByActiveRuntime), ctx, version)
}

// List mocks base method.
func (m *MockEngineVersionStore) List(ctx context.Context, statusFilter *engineversion.Status) ([]engineversion.EngineVersion, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "List", ctx, statusFilter)
	ret0, _ := ret[0].([]engineversion.EngineVersion)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// List indicates an expected call of List.
func (mr *MockEngineVersionStoreMockRecorder) List(ctx, statusFilter any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "List", reflect.TypeOf((*MockEngineVersionStore)(nil).List), ctx, statusFilter)
}

// Update mocks base method.
func (m *MockEngineVersionStore) Update(ctx context.Context, input ports.UpdateEngineVersionInput) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Update", ctx, input)
	ret0, _ := ret[0].(error)
	return ret0
}

// Update indicates an expected call of Update.
func (mr *MockEngineVersionStoreMockRecorder) Update(ctx, input any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Update", reflect.TypeOf((*MockEngineVersionStore)(nil).Update), ctx, input)
}
@@ -0,0 +1,72 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: galaxy/gamemaster/internal/ports (interfaces: LobbyClient)
//
// Generated by this command:
//
// mockgen -destination=../adapters/mocks/mock_lobbyclient.go -package=mocks galaxy/gamemaster/internal/ports LobbyClient
//

// Package mocks is a generated GoMock package.
package mocks

import (
	context "context"
	ports "galaxy/gamemaster/internal/ports"
	reflect "reflect"

	gomock "go.uber.org/mock/gomock"
)

// MockLobbyClient is a mock of LobbyClient interface.
type MockLobbyClient struct {
	ctrl     *gomock.Controller
	recorder *MockLobbyClientMockRecorder
	isgomock struct{}
}

// MockLobbyClientMockRecorder is the mock recorder for MockLobbyClient.
type MockLobbyClientMockRecorder struct {
	mock *MockLobbyClient
}

// NewMockLobbyClient creates a new mock instance.
func NewMockLobbyClient(ctrl *gomock.Controller) *MockLobbyClient {
	mock := &MockLobbyClient{ctrl: ctrl}
	mock.recorder = &MockLobbyClientMockRecorder{mock}
	return mock
}

// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockLobbyClient) EXPECT() *MockLobbyClientMockRecorder {
	return m.recorder
}

// GetGameSummary mocks base method.
func (m *MockLobbyClient) GetGameSummary(ctx context.Context, gameID string) (ports.GameSummary, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "GetGameSummary", ctx, gameID)
	ret0, _ := ret[0].(ports.GameSummary)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// GetGameSummary indicates an expected call of GetGameSummary.
func (mr *MockLobbyClientMockRecorder) GetGameSummary(ctx, gameID any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetGameSummary", reflect.TypeOf((*MockLobbyClient)(nil).GetGameSummary), ctx, gameID)
}

// GetMemberships mocks base method.
func (m *MockLobbyClient) GetMemberships(ctx context.Context, gameID string) ([]ports.Membership, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "GetMemberships", ctx, gameID)
	ret0, _ := ret[0].([]ports.Membership)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// GetMemberships indicates an expected call of GetMemberships.
func (mr *MockLobbyClientMockRecorder) GetMemberships(ctx, gameID any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetMemberships", reflect.TypeOf((*MockLobbyClient)(nil).GetMemberships), ctx, gameID)
}
@@ -0,0 +1,70 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: galaxy/gamemaster/internal/ports (interfaces: LobbyEventsPublisher)
//
// Generated by this command:
//
// mockgen -destination=../adapters/mocks/mock_lobbyeventspublisher.go -package=mocks galaxy/gamemaster/internal/ports LobbyEventsPublisher
//

// Package mocks is a generated GoMock package.
package mocks

import (
	context "context"
	ports "galaxy/gamemaster/internal/ports"
	reflect "reflect"

	gomock "go.uber.org/mock/gomock"
)

// MockLobbyEventsPublisher is a mock of LobbyEventsPublisher interface.
type MockLobbyEventsPublisher struct {
	ctrl     *gomock.Controller
	recorder *MockLobbyEventsPublisherMockRecorder
	isgomock struct{}
}

// MockLobbyEventsPublisherMockRecorder is the mock recorder for MockLobbyEventsPublisher.
type MockLobbyEventsPublisherMockRecorder struct {
	mock *MockLobbyEventsPublisher
}

// NewMockLobbyEventsPublisher creates a new mock instance.
func NewMockLobbyEventsPublisher(ctrl *gomock.Controller) *MockLobbyEventsPublisher {
	mock := &MockLobbyEventsPublisher{ctrl: ctrl}
	mock.recorder = &MockLobbyEventsPublisherMockRecorder{mock}
	return mock
}

// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockLobbyEventsPublisher) EXPECT() *MockLobbyEventsPublisherMockRecorder {
	return m.recorder
}

// PublishGameFinished mocks base method.
func (m *MockLobbyEventsPublisher) PublishGameFinished(ctx context.Context, msg ports.GameFinished) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "PublishGameFinished", ctx, msg)
	ret0, _ := ret[0].(error)
	return ret0
}

// PublishGameFinished indicates an expected call of PublishGameFinished.
func (mr *MockLobbyEventsPublisherMockRecorder) PublishGameFinished(ctx, msg any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PublishGameFinished", reflect.TypeOf((*MockLobbyEventsPublisher)(nil).PublishGameFinished), ctx, msg)
}

// PublishSnapshotUpdate mocks base method.
func (m *MockLobbyEventsPublisher) PublishSnapshotUpdate(ctx context.Context, msg ports.RuntimeSnapshotUpdate) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "PublishSnapshotUpdate", ctx, msg)
	ret0, _ := ret[0].(error)
	return ret0
}

// PublishSnapshotUpdate indicates an expected call of PublishSnapshotUpdate.
func (mr *MockLobbyEventsPublisherMockRecorder) PublishSnapshotUpdate(ctx, msg any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PublishSnapshotUpdate", reflect.TypeOf((*MockLobbyEventsPublisher)(nil).PublishSnapshotUpdate), ctx, msg)
}
@@ -0,0 +1,56 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: galaxy/gamemaster/internal/ports (interfaces: NotificationIntentPublisher)
//
// Generated by this command:
//
// mockgen -destination=../adapters/mocks/mock_notificationpublisher.go -package=mocks galaxy/gamemaster/internal/ports NotificationIntentPublisher
//

// Package mocks is a generated GoMock package.
package mocks

import (
	context "context"
	notificationintent "galaxy/notificationintent"
	reflect "reflect"

	gomock "go.uber.org/mock/gomock"
)

// MockNotificationIntentPublisher is a mock of NotificationIntentPublisher interface.
type MockNotificationIntentPublisher struct {
	ctrl     *gomock.Controller
	recorder *MockNotificationIntentPublisherMockRecorder
	isgomock struct{}
}

// MockNotificationIntentPublisherMockRecorder is the mock recorder for MockNotificationIntentPublisher.
type MockNotificationIntentPublisherMockRecorder struct {
	mock *MockNotificationIntentPublisher
}

// NewMockNotificationIntentPublisher creates a new mock instance.
func NewMockNotificationIntentPublisher(ctrl *gomock.Controller) *MockNotificationIntentPublisher {
	mock := &MockNotificationIntentPublisher{ctrl: ctrl}
	mock.recorder = &MockNotificationIntentPublisherMockRecorder{mock}
	return mock
}

// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockNotificationIntentPublisher) EXPECT() *MockNotificationIntentPublisherMockRecorder {
	return m.recorder
}

// Publish mocks base method.
func (m *MockNotificationIntentPublisher) Publish(ctx context.Context, intent notificationintent.Intent) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Publish", ctx, intent)
	ret0, _ := ret[0].(error)
	return ret0
}

// Publish indicates an expected call of Publish.
func (mr *MockNotificationIntentPublisherMockRecorder) Publish(ctx, intent any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Publish", reflect.TypeOf((*MockNotificationIntentPublisher)(nil).Publish), ctx, intent)
}
@@ -0,0 +1,72 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: galaxy/gamemaster/internal/ports (interfaces: OperationLogStore)
//
// Generated by this command:
//
// mockgen -destination=../adapters/mocks/mock_operationlog.go -package=mocks galaxy/gamemaster/internal/ports OperationLogStore
//

// Package mocks is a generated GoMock package.
package mocks

import (
	context "context"
	operation "galaxy/gamemaster/internal/domain/operation"
	reflect "reflect"

	gomock "go.uber.org/mock/gomock"
)

// MockOperationLogStore is a mock of OperationLogStore interface.
type MockOperationLogStore struct {
	ctrl     *gomock.Controller
	recorder *MockOperationLogStoreMockRecorder
	isgomock struct{}
}

// MockOperationLogStoreMockRecorder is the mock recorder for MockOperationLogStore.
type MockOperationLogStoreMockRecorder struct {
	mock *MockOperationLogStore
}

// NewMockOperationLogStore creates a new mock instance.
func NewMockOperationLogStore(ctrl *gomock.Controller) *MockOperationLogStore {
	mock := &MockOperationLogStore{ctrl: ctrl}
	mock.recorder = &MockOperationLogStoreMockRecorder{mock}
	return mock
}

// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockOperationLogStore) EXPECT() *MockOperationLogStoreMockRecorder {
	return m.recorder
}

// Append mocks base method.
func (m *MockOperationLogStore) Append(ctx context.Context, entry operation.OperationEntry) (int64, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Append", ctx, entry)
	ret0, _ := ret[0].(int64)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// Append indicates an expected call of Append.
func (mr *MockOperationLogStoreMockRecorder) Append(ctx, entry any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Append", reflect.TypeOf((*MockOperationLogStore)(nil).Append), ctx, entry)
}

// ListByGame mocks base method.
func (m *MockOperationLogStore) ListByGame(ctx context.Context, gameID string, limit int) ([]operation.OperationEntry, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "ListByGame", ctx, gameID, limit)
	ret0, _ := ret[0].([]operation.OperationEntry)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// ListByGame indicates an expected call of ListByGame.
func (mr *MockOperationLogStoreMockRecorder) ListByGame(ctx, gameID, limit any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListByGame", reflect.TypeOf((*MockOperationLogStore)(nil).ListByGame), ctx, gameID, limit)
}
@@ -0,0 +1,115 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: galaxy/gamemaster/internal/ports (interfaces: PlayerMappingStore)
//
// Generated by this command:
//
// mockgen -destination=../adapters/mocks/mock_playermappingstore.go -package=mocks galaxy/gamemaster/internal/ports PlayerMappingStore
//

// Package mocks is a generated GoMock package.
package mocks

import (
	context "context"
	playermapping "galaxy/gamemaster/internal/domain/playermapping"
	reflect "reflect"

	gomock "go.uber.org/mock/gomock"
)

// MockPlayerMappingStore is a mock of PlayerMappingStore interface.
type MockPlayerMappingStore struct {
	ctrl     *gomock.Controller
	recorder *MockPlayerMappingStoreMockRecorder
	isgomock struct{}
}

// MockPlayerMappingStoreMockRecorder is the mock recorder for MockPlayerMappingStore.
type MockPlayerMappingStoreMockRecorder struct {
	mock *MockPlayerMappingStore
}

// NewMockPlayerMappingStore creates a new mock instance.
func NewMockPlayerMappingStore(ctrl *gomock.Controller) *MockPlayerMappingStore {
	mock := &MockPlayerMappingStore{ctrl: ctrl}
	mock.recorder = &MockPlayerMappingStoreMockRecorder{mock}
	return mock
}

// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockPlayerMappingStore) EXPECT() *MockPlayerMappingStoreMockRecorder {
	return m.recorder
}

// BulkInsert mocks base method.
func (m *MockPlayerMappingStore) BulkInsert(ctx context.Context, records []playermapping.PlayerMapping) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "BulkInsert", ctx, records)
	ret0, _ := ret[0].(error)
	return ret0
}

// BulkInsert indicates an expected call of BulkInsert.
func (mr *MockPlayerMappingStoreMockRecorder) BulkInsert(ctx, records any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "BulkInsert", reflect.TypeOf((*MockPlayerMappingStore)(nil).BulkInsert), ctx, records)
}

// DeleteByGame mocks base method.
func (m *MockPlayerMappingStore) DeleteByGame(ctx context.Context, gameID string) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "DeleteByGame", ctx, gameID)
	ret0, _ := ret[0].(error)
	return ret0
}

// DeleteByGame indicates an expected call of DeleteByGame.
func (mr *MockPlayerMappingStoreMockRecorder) DeleteByGame(ctx, gameID any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteByGame", reflect.TypeOf((*MockPlayerMappingStore)(nil).DeleteByGame), ctx, gameID)
}

// Get mocks base method.
func (m *MockPlayerMappingStore) Get(ctx context.Context, gameID, userID string) (playermapping.PlayerMapping, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Get", ctx, gameID, userID)
	ret0, _ := ret[0].(playermapping.PlayerMapping)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// Get indicates an expected call of Get.
func (mr *MockPlayerMappingStoreMockRecorder) Get(ctx, gameID, userID any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Get", reflect.TypeOf((*MockPlayerMappingStore)(nil).Get), ctx, gameID, userID)
}

// GetByRace mocks base method.
func (m *MockPlayerMappingStore) GetByRace(ctx context.Context, gameID, raceName string) (playermapping.PlayerMapping, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "GetByRace", ctx, gameID, raceName)
	ret0, _ := ret[0].(playermapping.PlayerMapping)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// GetByRace indicates an expected call of GetByRace.
func (mr *MockPlayerMappingStoreMockRecorder) GetByRace(ctx, gameID, raceName any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetByRace", reflect.TypeOf((*MockPlayerMappingStore)(nil).GetByRace), ctx, gameID, raceName)
}

// ListByGame mocks base method.
func (m *MockPlayerMappingStore) ListByGame(ctx context.Context, gameID string) ([]playermapping.PlayerMapping, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "ListByGame", ctx, gameID)
	ret0, _ := ret[0].([]playermapping.PlayerMapping)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// ListByGame indicates an expected call of ListByGame.
func (mr *MockPlayerMappingStoreMockRecorder) ListByGame(ctx, gameID any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListByGame", reflect.TypeOf((*MockPlayerMappingStore)(nil).ListByGame), ctx, gameID)
}
@@ -0,0 +1,69 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: galaxy/gamemaster/internal/ports (interfaces: RTMClient)
//
// Generated by this command:
//
// mockgen -destination=../adapters/mocks/mock_rtmclient.go -package=mocks galaxy/gamemaster/internal/ports RTMClient
//

// Package mocks is a generated GoMock package.
package mocks

import (
	context "context"
	reflect "reflect"

	gomock "go.uber.org/mock/gomock"
)

// MockRTMClient is a mock of RTMClient interface.
type MockRTMClient struct {
	ctrl     *gomock.Controller
	recorder *MockRTMClientMockRecorder
	isgomock struct{}
}

// MockRTMClientMockRecorder is the mock recorder for MockRTMClient.
type MockRTMClientMockRecorder struct {
	mock *MockRTMClient
}

// NewMockRTMClient creates a new mock instance.
func NewMockRTMClient(ctrl *gomock.Controller) *MockRTMClient {
	mock := &MockRTMClient{ctrl: ctrl}
	mock.recorder = &MockRTMClientMockRecorder{mock}
	return mock
}

// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockRTMClient) EXPECT() *MockRTMClientMockRecorder {
	return m.recorder
}

// Patch mocks base method.
func (m *MockRTMClient) Patch(ctx context.Context, gameID, imageRef string) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Patch", ctx, gameID, imageRef)
	ret0, _ := ret[0].(error)
	return ret0
}

// Patch indicates an expected call of Patch.
func (mr *MockRTMClientMockRecorder) Patch(ctx, gameID, imageRef any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Patch", reflect.TypeOf((*MockRTMClient)(nil).Patch), ctx, gameID, imageRef)
}

// Stop mocks base method.
func (m *MockRTMClient) Stop(ctx context.Context, gameID, reason string) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Stop", ctx, gameID, reason)
	ret0, _ := ret[0].(error)
	return ret0
}

// Stop indicates an expected call of Stop.
func (mr *MockRTMClientMockRecorder) Stop(ctx, gameID, reason any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Stop", reflect.TypeOf((*MockRTMClient)(nil).Stop), ctx, gameID, reason)
}
@@ -0,0 +1,188 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: galaxy/gamemaster/internal/ports (interfaces: RuntimeRecordStore)
//
// Generated by this command:
//
// mockgen -destination=../adapters/mocks/mock_runtimerecordstore.go -package=mocks galaxy/gamemaster/internal/ports RuntimeRecordStore
//

// Package mocks is a generated GoMock package.
package mocks

import (
	context "context"
	runtime "galaxy/gamemaster/internal/domain/runtime"
	ports "galaxy/gamemaster/internal/ports"
	reflect "reflect"
	time "time"

	gomock "go.uber.org/mock/gomock"
)

// MockRuntimeRecordStore is a mock of RuntimeRecordStore interface.
type MockRuntimeRecordStore struct {
	ctrl     *gomock.Controller
	recorder *MockRuntimeRecordStoreMockRecorder
	isgomock struct{}
}

// MockRuntimeRecordStoreMockRecorder is the mock recorder for MockRuntimeRecordStore.
type MockRuntimeRecordStoreMockRecorder struct {
	mock *MockRuntimeRecordStore
}

// NewMockRuntimeRecordStore creates a new mock instance.
func NewMockRuntimeRecordStore(ctrl *gomock.Controller) *MockRuntimeRecordStore {
	mock := &MockRuntimeRecordStore{ctrl: ctrl}
	mock.recorder = &MockRuntimeRecordStoreMockRecorder{mock}
	return mock
}

// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockRuntimeRecordStore) EXPECT() *MockRuntimeRecordStoreMockRecorder {
	return m.recorder
}

// Delete mocks base method.
func (m *MockRuntimeRecordStore) Delete(ctx context.Context, gameID string) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Delete", ctx, gameID)
	ret0, _ := ret[0].(error)
	return ret0
}

// Delete indicates an expected call of Delete.
func (mr *MockRuntimeRecordStoreMockRecorder) Delete(ctx, gameID any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Delete", reflect.TypeOf((*MockRuntimeRecordStore)(nil).Delete), ctx, gameID)
}

// Get mocks base method.
func (m *MockRuntimeRecordStore) Get(ctx context.Context, gameID string) (runtime.RuntimeRecord, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Get", ctx, gameID)
	ret0, _ := ret[0].(runtime.RuntimeRecord)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// Get indicates an expected call of Get.
func (mr *MockRuntimeRecordStoreMockRecorder) Get(ctx, gameID any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Get", reflect.TypeOf((*MockRuntimeRecordStore)(nil).Get), ctx, gameID)
}

// Insert mocks base method.
func (m *MockRuntimeRecordStore) Insert(ctx context.Context, record runtime.RuntimeRecord) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Insert", ctx, record)
	ret0, _ := ret[0].(error)
	return ret0
}

// Insert indicates an expected call of Insert.
func (mr *MockRuntimeRecordStoreMockRecorder) Insert(ctx, record any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Insert", reflect.TypeOf((*MockRuntimeRecordStore)(nil).Insert), ctx, record)
}

// List mocks base method.
func (m *MockRuntimeRecordStore) List(ctx context.Context) ([]runtime.RuntimeRecord, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "List", ctx)
	ret0, _ := ret[0].([]runtime.RuntimeRecord)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// List indicates an expected call of List.
func (mr *MockRuntimeRecordStoreMockRecorder) List(ctx any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "List", reflect.TypeOf((*MockRuntimeRecordStore)(nil).List), ctx)
}

// ListByStatus mocks base method.
func (m *MockRuntimeRecordStore) ListByStatus(ctx context.Context, status runtime.Status) ([]runtime.RuntimeRecord, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "ListByStatus", ctx, status)
	ret0, _ := ret[0].([]runtime.RuntimeRecord)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// ListByStatus indicates an expected call of ListByStatus.
func (mr *MockRuntimeRecordStoreMockRecorder) ListByStatus(ctx, status any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListByStatus", reflect.TypeOf((*MockRuntimeRecordStore)(nil).ListByStatus), ctx, status)
}

// ListDueRunning mocks base method.
func (m *MockRuntimeRecordStore) ListDueRunning(ctx context.Context, now time.Time) ([]runtime.RuntimeRecord, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "ListDueRunning", ctx, now)
	ret0, _ := ret[0].([]runtime.RuntimeRecord)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// ListDueRunning indicates an expected call of ListDueRunning.
func (mr *MockRuntimeRecordStoreMockRecorder) ListDueRunning(ctx, now any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListDueRunning", reflect.TypeOf((*MockRuntimeRecordStore)(nil).ListDueRunning), ctx, now)
}

// UpdateEngineHealth mocks base method.
func (m *MockRuntimeRecordStore) UpdateEngineHealth(ctx context.Context, input ports.UpdateEngineHealthInput) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "UpdateEngineHealth", ctx, input)
	ret0, _ := ret[0].(error)
	return ret0
}

// UpdateEngineHealth indicates an expected call of UpdateEngineHealth.
func (mr *MockRuntimeRecordStoreMockRecorder) UpdateEngineHealth(ctx, input any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateEngineHealth", reflect.TypeOf((*MockRuntimeRecordStore)(nil).UpdateEngineHealth), ctx, input)
}

// UpdateImage mocks base method.
func (m *MockRuntimeRecordStore) UpdateImage(ctx context.Context, input ports.UpdateImageInput) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "UpdateImage", ctx, input)
	ret0, _ := ret[0].(error)
	return ret0
}

// UpdateImage indicates an expected call of UpdateImage.
func (mr *MockRuntimeRecordStoreMockRecorder) UpdateImage(ctx, input any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateImage", reflect.TypeOf((*MockRuntimeRecordStore)(nil).UpdateImage), ctx, input)
}

// UpdateScheduling mocks base method.
func (m *MockRuntimeRecordStore) UpdateScheduling(ctx context.Context, input ports.UpdateSchedulingInput) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "UpdateScheduling", ctx, input)
	ret0, _ := ret[0].(error)
	return ret0
}

// UpdateScheduling indicates an expected call of UpdateScheduling.
func (mr *MockRuntimeRecordStoreMockRecorder) UpdateScheduling(ctx, input any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateScheduling", reflect.TypeOf((*MockRuntimeRecordStore)(nil).UpdateScheduling), ctx, input)
}

// UpdateStatus mocks base method.
func (m *MockRuntimeRecordStore) UpdateStatus(ctx context.Context, input ports.UpdateStatusInput) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "UpdateStatus", ctx, input)
	ret0, _ := ret[0].(error)
	return ret0
}

// UpdateStatus indicates an expected call of UpdateStatus.
func (mr *MockRuntimeRecordStoreMockRecorder) UpdateStatus(ctx, input any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateStatus", reflect.TypeOf((*MockRuntimeRecordStore)(nil).UpdateStatus), ctx, input)
}
@@ -0,0 +1,71 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: galaxy/gamemaster/internal/ports (interfaces: StreamOffsetStore)
//
// Generated by this command:
//
//	mockgen -destination=../adapters/mocks/mock_streamoffsetstore.go -package=mocks galaxy/gamemaster/internal/ports StreamOffsetStore
//

// Package mocks is a generated GoMock package.
package mocks

import (
	context "context"
	reflect "reflect"

	gomock "go.uber.org/mock/gomock"
)

// MockStreamOffsetStore is a mock of StreamOffsetStore interface.
type MockStreamOffsetStore struct {
	ctrl     *gomock.Controller
	recorder *MockStreamOffsetStoreMockRecorder
	isgomock struct{}
}

// MockStreamOffsetStoreMockRecorder is the mock recorder for MockStreamOffsetStore.
type MockStreamOffsetStoreMockRecorder struct {
	mock *MockStreamOffsetStore
}

// NewMockStreamOffsetStore creates a new mock instance.
func NewMockStreamOffsetStore(ctrl *gomock.Controller) *MockStreamOffsetStore {
	mock := &MockStreamOffsetStore{ctrl: ctrl}
	mock.recorder = &MockStreamOffsetStoreMockRecorder{mock}
	return mock
}

// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockStreamOffsetStore) EXPECT() *MockStreamOffsetStoreMockRecorder {
	return m.recorder
}

// Load mocks base method.
func (m *MockStreamOffsetStore) Load(ctx context.Context, stream string) (string, bool, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Load", ctx, stream)
	ret0, _ := ret[0].(string)
	ret1, _ := ret[1].(bool)
	ret2, _ := ret[2].(error)
	return ret0, ret1, ret2
}

// Load indicates an expected call of Load.
func (mr *MockStreamOffsetStoreMockRecorder) Load(ctx, stream any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Load", reflect.TypeOf((*MockStreamOffsetStore)(nil).Load), ctx, stream)
}

// Save mocks base method.
func (m *MockStreamOffsetStore) Save(ctx context.Context, stream, entryID string) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Save", ctx, stream, entryID)
	ret0, _ := ret[0].(error)
	return ret0
}

// Save indicates an expected call of Save.
func (mr *MockStreamOffsetStoreMockRecorder) Save(ctx, stream, entryID any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Save", reflect.TypeOf((*MockStreamOffsetStore)(nil).Save), ctx, stream, entryID)
}
@@ -0,0 +1,73 @@
// Package notificationpublisher provides the Redis-Streams-backed
// notification-intent publisher Game Master uses for the three GM-owned
// types listed in `gamemaster/README.md §Notification Contracts`:
// `game.turn.ready`, `game.finished`, `game.generation_failed`.
//
// The adapter is a thin shim over `galaxy/notificationintent.Publisher`
// that drops the entry id at the wrapper boundary; it mirrors
// `rtmanager/internal/adapters/notificationpublisher` byte-for-byte
// (`rtmanager/docs/domain-and-ports.md §7` justifies that decision and
// applies here for the same reason).
package notificationpublisher

import (
	"context"
	"errors"
	"fmt"

	"github.com/redis/go-redis/v9"

	"galaxy/notificationintent"

	"galaxy/gamemaster/internal/ports"
)

// Config groups the dependencies and stream name required to construct
// a Publisher.
type Config struct {
	// Client appends entries to Redis Streams. Must be non-nil.
	Client *redis.Client

	// Stream stores the Redis Stream key intents are published to.
	// When empty, `notificationintent.DefaultIntentsStream` is used.
	Stream string
}

// Publisher implements `ports.NotificationIntentPublisher` on top of
// the shared `notificationintent.Publisher`.
type Publisher struct {
	inner *notificationintent.Publisher
}

// NewPublisher constructs a Publisher from cfg. Validation errors and
// transport errors propagate verbatim.
func NewPublisher(cfg Config) (*Publisher, error) {
	if cfg.Client == nil {
		return nil, errors.New("new gamemaster notification publisher: nil redis client")
	}
	inner, err := notificationintent.NewPublisher(notificationintent.PublisherConfig{
		Client: cfg.Client,
		Stream: cfg.Stream,
	})
	if err != nil {
		return nil, fmt.Errorf("new gamemaster notification publisher: %w", err)
	}
	return &Publisher{inner: inner}, nil
}

// Publish forwards intent to the underlying notificationintent
// publisher and discards the resulting Redis Stream entry id. A failed
// publish surfaces as the underlying error.
func (publisher *Publisher) Publish(ctx context.Context, intent notificationintent.Intent) error {
	if publisher == nil || publisher.inner == nil {
		return errors.New("publish notification intent: nil publisher")
	}
	if _, err := publisher.inner.Publish(ctx, intent); err != nil {
		return err
	}
	return nil
}

// Compile-time assertion: Publisher implements
// ports.NotificationIntentPublisher.
var _ ports.NotificationIntentPublisher = (*Publisher)(nil)
@@ -0,0 +1,167 @@
package notificationpublisher

import (
	"context"
	"encoding/json"
	"testing"
	"time"

	"github.com/alicebob/miniredis/v2"
	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"galaxy/notificationintent"
)

func newRedis(t *testing.T) (*redis.Client, *miniredis.Miniredis) {
	t.Helper()
	server := miniredis.RunT(t)
	client := redis.NewClient(&redis.Options{Addr: server.Addr()})
	t.Cleanup(func() { _ = client.Close() })
	return client, server
}

func readStream(t *testing.T, client *redis.Client, stream string) []redis.XMessage {
	t.Helper()
	messages, err := client.XRange(context.Background(), stream, "-", "+").Result()
	require.NoError(t, err)
	return messages
}

func TestNewPublisherValidation(t *testing.T) {
	t.Run("nil client", func(t *testing.T) {
		_, err := NewPublisher(Config{})
		require.Error(t, err)
		assert.Contains(t, err.Error(), "nil redis client")
	})
}

func TestPublishGameTurnReady(t *testing.T) {
	client, _ := newRedis(t)

	publisher, err := NewPublisher(Config{Client: client, Stream: "notification:intents"})
	require.NoError(t, err)

	intent, err := notificationintent.NewGameTurnReadyIntent(
		notificationintent.Metadata{
			IdempotencyKey: "gamemaster:turn:game-1:42",
			OccurredAt:     time.UnixMilli(1714200000000).UTC(),
		},
		[]string{"u-2", "u-1"},
		notificationintent.GameTurnReadyPayload{
			GameID:     "game-1",
			GameName:   "Galaxy",
			TurnNumber: 42,
		},
	)
	require.NoError(t, err)
	require.NoError(t, publisher.Publish(context.Background(), intent))

	messages := readStream(t, client, "notification:intents")
	require.Len(t, messages, 1)
	values := messages[0].Values
	assert.Equal(t, "game.turn.ready", values["notification_type"])
	assert.Equal(t, "game_master", values["producer"])
	assert.Equal(t, "user", values["audience_kind"])
	assert.Equal(t, "gamemaster:turn:game-1:42", values["idempotency_key"])

	recipients, ok := values["recipient_user_ids_json"].(string)
	require.True(t, ok)
	var ids []string
	require.NoError(t, json.Unmarshal([]byte(recipients), &ids))
	assert.ElementsMatch(t, []string{"u-1", "u-2"}, ids)

	payloadRaw, ok := values["payload_json"].(string)
	require.True(t, ok)
	var payload map[string]any
	require.NoError(t, json.Unmarshal([]byte(payloadRaw), &payload))
	assert.Equal(t, "game-1", payload["game_id"])
	assert.Equal(t, float64(42), payload["turn_number"])
}

func TestPublishGameFinished(t *testing.T) {
	client, _ := newRedis(t)
	publisher, err := NewPublisher(Config{Client: client, Stream: "notification:intents"})
	require.NoError(t, err)

	intent, err := notificationintent.NewGameFinishedIntent(
		notificationintent.Metadata{
			IdempotencyKey: "gamemaster:finished:g-1",
			OccurredAt:     time.UnixMilli(1714200000000).UTC(),
		},
		[]string{"u-1"},
		notificationintent.GameFinishedPayload{
			GameID:          "g-1",
			GameName:        "Galaxy",
			FinalTurnNumber: 100,
		},
	)
	require.NoError(t, err)
	require.NoError(t, publisher.Publish(context.Background(), intent))

	messages := readStream(t, client, "notification:intents")
	require.Len(t, messages, 1)
	assert.Equal(t, "game.finished", messages[0].Values["notification_type"])
	assert.Equal(t, "user", messages[0].Values["audience_kind"])
}

func TestPublishGameGenerationFailed(t *testing.T) {
	client, _ := newRedis(t)
	publisher, err := NewPublisher(Config{Client: client, Stream: "notification:intents"})
	require.NoError(t, err)

	intent, err := notificationintent.NewGameGenerationFailedIntent(
		notificationintent.Metadata{
			IdempotencyKey: "gamemaster:gen-failed:g-1:42",
			OccurredAt:     time.UnixMilli(1714200000000).UTC(),
		},
		notificationintent.GameGenerationFailedPayload{
			GameID:        "g-1",
			GameName:      "Galaxy",
			FailureReason: "engine timeout",
		},
	)
	require.NoError(t, err)
	require.NoError(t, publisher.Publish(context.Background(), intent))

	messages := readStream(t, client, "notification:intents")
	require.Len(t, messages, 1)
	values := messages[0].Values
	assert.Equal(t, "game.generation_failed", values["notification_type"])
	assert.Equal(t, "admin_email", values["audience_kind"])
	_, hasRecipients := values["recipient_user_ids_json"]
	assert.False(t, hasRecipients, "admin_email audience must not carry recipient ids")
}

func TestPublishForwardsValidationError(t *testing.T) {
	client, _ := newRedis(t)
	publisher, err := NewPublisher(Config{Client: client})
	require.NoError(t, err)

	bad := notificationintent.Intent{
		NotificationType: notificationintent.NotificationTypeGameTurnReady,
		Producer:         notificationintent.ProducerGameMaster,
		AudienceKind:     notificationintent.AudienceKindUser,
		IdempotencyKey:   "k",
		PayloadJSON:      `{"game_id":"g","game_name":"x","turn_number":1}`,
	}
	require.Error(t, publisher.Publish(context.Background(), bad))
}

func TestPublishDefaultStream(t *testing.T) {
	client, _ := newRedis(t)
	publisher, err := NewPublisher(Config{Client: client, Stream: ""})
	require.NoError(t, err)

	intent, err := notificationintent.NewGameTurnReadyIntent(
		notificationintent.Metadata{IdempotencyKey: "k", OccurredAt: time.UnixMilli(1).UTC()},
		[]string{"u-1"},
		notificationintent.GameTurnReadyPayload{GameID: "g", GameName: "n", TurnNumber: 1},
	)
	require.NoError(t, err)
	require.NoError(t, publisher.Publish(context.Background(), intent))

	messages := readStream(t, client, notificationintent.DefaultIntentsStream)
	require.Len(t, messages, 1)
}
@@ -0,0 +1,416 @@
// Package engineversionstore implements the PostgreSQL-backed adapter
// for `ports.EngineVersionStore`.
//
// The package owns the on-disk shape of the `engine_versions` table
// defined in
// `galaxy/gamemaster/internal/adapters/postgres/migrations/00001_init.sql`
// and translates the schema-agnostic `ports.EngineVersionStore`
// interface declared in `internal/ports/engineversionstore.go` into
// concrete go-jet/v2 statements driven by the pgx driver.
//
// Insert maps PostgreSQL unique violations to engineversion.ErrConflict;
// Update applies a partial UPDATE driven by the non-nil pointer fields
// of UpdateEngineVersionInput; Deprecate is idempotent on the
// already-deprecated row; IsReferencedByActiveRuntime probes the
// runtime_records table for non-finished, non-stopped references.
package engineversionstore

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"strings"
	"time"

	"galaxy/gamemaster/internal/adapters/postgres/internal/sqlx"
	pgtable "galaxy/gamemaster/internal/adapters/postgres/jet/gamemaster/table"
	"galaxy/gamemaster/internal/domain/engineversion"
	"galaxy/gamemaster/internal/domain/runtime"
	"galaxy/gamemaster/internal/ports"

	pg "github.com/go-jet/jet/v2/postgres"
)

// emptyOptionsJSON is the default value persisted when a caller hands
// us an empty Options slice. It matches the SQL column default.
var emptyOptionsJSON = []byte("{}")

// Config configures one PostgreSQL-backed engine-version store. The
// store does not own the underlying *sql.DB lifecycle.
type Config struct {
	DB               *sql.DB
	OperationTimeout time.Duration
}

// Store persists Game Master engine-version registry rows in
// PostgreSQL.
type Store struct {
	db               *sql.DB
	operationTimeout time.Duration
}

// New constructs one PostgreSQL-backed engine-version store from cfg.
func New(cfg Config) (*Store, error) {
	if cfg.DB == nil {
		return nil, errors.New("new postgres engine version store: db must not be nil")
	}
	if cfg.OperationTimeout <= 0 {
		return nil, errors.New("new postgres engine version store: operation timeout must be positive")
	}
	return &Store{
		db:               cfg.DB,
		operationTimeout: cfg.OperationTimeout,
	}, nil
}

// engineVersionSelectColumns matches scanRow's column order.
var engineVersionSelectColumns = pg.ColumnList{
	pgtable.EngineVersions.Version,
	pgtable.EngineVersions.ImageRef,
	pgtable.EngineVersions.Options,
	pgtable.EngineVersions.Status,
	pgtable.EngineVersions.CreatedAt,
	pgtable.EngineVersions.UpdatedAt,
}

// Get returns the row identified by version. Returns
// engineversion.ErrNotFound when no row exists.
func (store *Store) Get(ctx context.Context, version string) (engineversion.EngineVersion, error) {
	if store == nil || store.db == nil {
		return engineversion.EngineVersion{}, errors.New("get engine version: nil store")
	}
	if strings.TrimSpace(version) == "" {
		return engineversion.EngineVersion{}, fmt.Errorf("get engine version: version must not be empty")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get engine version", store.operationTimeout)
	if err != nil {
		return engineversion.EngineVersion{}, err
	}
	defer cancel()

	stmt := pg.SELECT(engineVersionSelectColumns).
		FROM(pgtable.EngineVersions).
		WHERE(pgtable.EngineVersions.Version.EQ(pg.String(version)))

	query, args := stmt.Sql()
	row := store.db.QueryRowContext(operationCtx, query, args...)
	got, err := scanRow(row)
	if sqlx.IsNoRows(err) {
		return engineversion.EngineVersion{}, engineversion.ErrNotFound
	}
	if err != nil {
		return engineversion.EngineVersion{}, fmt.Errorf("get engine version: %w", err)
	}
	return got, nil
}

// List returns every row whose status matches statusFilter (when
// non-nil), ordered by version ASC.
func (store *Store) List(ctx context.Context, statusFilter *engineversion.Status) ([]engineversion.EngineVersion, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("list engine versions: nil store")
	}
	if statusFilter != nil && !statusFilter.IsKnown() {
		return nil, fmt.Errorf("list engine versions: status %q is unsupported", *statusFilter)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "list engine versions", store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	stmt := pg.SELECT(engineVersionSelectColumns).
		FROM(pgtable.EngineVersions)
	if statusFilter != nil {
		stmt = stmt.WHERE(pgtable.EngineVersions.Status.EQ(pg.String(string(*statusFilter))))
	}
	stmt = stmt.ORDER_BY(pgtable.EngineVersions.Version.ASC())

	query, args := stmt.Sql()
	rows, err := store.db.QueryContext(operationCtx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("list engine versions: %w", err)
	}
	defer rows.Close()

	versions := make([]engineversion.EngineVersion, 0)
	for rows.Next() {
		got, err := scanRow(rows)
		if err != nil {
			return nil, fmt.Errorf("list engine versions: scan: %w", err)
		}
		versions = append(versions, got)
	}
	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("list engine versions: %w", err)
	}
	if len(versions) == 0 {
		return nil, nil
	}
	return versions, nil
}

// Insert installs record into the registry. Returns
// engineversion.ErrConflict when a row with the same version already
// exists.
func (store *Store) Insert(ctx context.Context, record engineversion.EngineVersion) error {
	if store == nil || store.db == nil {
		return errors.New("insert engine version: nil store")
	}
	if err := record.Validate(); err != nil {
		return fmt.Errorf("insert engine version: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "insert engine version", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	options := record.Options
	if len(options) == 0 {
		options = emptyOptionsJSON
	}

	stmt := pgtable.EngineVersions.INSERT(
		pgtable.EngineVersions.Version,
		pgtable.EngineVersions.ImageRef,
		pgtable.EngineVersions.Options,
		pgtable.EngineVersions.Status,
		pgtable.EngineVersions.CreatedAt,
		pgtable.EngineVersions.UpdatedAt,
	).VALUES(
		record.Version,
		record.ImageRef,
		string(options),
		string(record.Status),
		record.CreatedAt.UTC(),
		record.UpdatedAt.UTC(),
	)

	query, args := stmt.Sql()
	if _, err := store.db.ExecContext(operationCtx, query, args...); err != nil {
		if sqlx.IsUniqueViolation(err) {
			return fmt.Errorf("insert engine version: %w", engineversion.ErrConflict)
		}
		return fmt.Errorf("insert engine version: %w", err)
	}
	return nil
}

// Update applies a partial update to one engine-version row.
// updated_at is always refreshed from input.Now. Returns
// engineversion.ErrNotFound when the row is absent.
func (store *Store) Update(ctx context.Context, input ports.UpdateEngineVersionInput) error {
	if store == nil || store.db == nil {
		return errors.New("update engine version: nil store")
	}
	if err := input.Validate(); err != nil {
		return err
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update engine version", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	now := input.Now.UTC()
	assignments := []any{
		pgtable.EngineVersions.UpdatedAt.SET(pg.TimestampzT(now)),
	}
	if input.ImageRef != nil {
		assignments = append(assignments,
			pgtable.EngineVersions.ImageRef.SET(pg.String(*input.ImageRef)))
	}
	if input.Options != nil {
		options := *input.Options
		if len(options) == 0 {
			options = emptyOptionsJSON
		}
		assignments = append(assignments,
			pgtable.EngineVersions.Options.SET(
				pg.StringExp(pg.CAST(pg.String(string(options))).AS("jsonb")),
			))
	}
	if input.Status != nil {
		assignments = append(assignments,
			pgtable.EngineVersions.Status.SET(pg.String(string(*input.Status))))
	}

	stmt := pgtable.EngineVersions.UPDATE(pgtable.EngineVersions.UpdatedAt).
		SET(assignments[0], assignments[1:]...).
		WHERE(pgtable.EngineVersions.Version.EQ(pg.String(input.Version)))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update engine version: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update engine version: rows affected: %w", err)
	}
	if affected == 0 {
		return engineversion.ErrNotFound
	}
	return nil
}

// Deprecate sets `status=deprecated` and refreshes `updated_at` for
// version. Returns engineversion.ErrNotFound when no row exists.
// Calling Deprecate on an already deprecated row succeeds with no
// further mutation (idempotent).
func (store *Store) Deprecate(ctx context.Context, version string, now time.Time) error {
	if store == nil || store.db == nil {
		return errors.New("deprecate engine version: nil store")
	}
	if strings.TrimSpace(version) == "" {
		return fmt.Errorf("deprecate engine version: version must not be empty")
	}
	if now.IsZero() {
		return fmt.Errorf("deprecate engine version: now must not be zero")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "deprecate engine version", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	// Pre-check the row's existence so we can surface a precise
	// ErrNotFound; a zero-rows-affected result from the UPDATE alone
	// could mean "missing" or "already deprecated".
	current, err := store.Get(operationCtx, version)
	if err != nil {
		return err
	}
	if current.Status == engineversion.StatusDeprecated {
		return nil
	}

	stmt := pgtable.EngineVersions.UPDATE(pgtable.EngineVersions.Status).
		SET(
			pgtable.EngineVersions.Status.SET(pg.String(string(engineversion.StatusDeprecated))),
			pgtable.EngineVersions.UpdatedAt.SET(pg.TimestampzT(now.UTC())),
		).
		WHERE(pgtable.EngineVersions.Version.EQ(pg.String(version)))

	query, args := stmt.Sql()
	if _, err := store.db.ExecContext(operationCtx, query, args...); err != nil {
		return fmt.Errorf("deprecate engine version: %w", err)
	}
	return nil
}

// Delete removes the row identified by version. Returns
// engineversion.ErrNotFound when no row matches. The adapter does not
// inspect runtime_records; the service layer guards against active
// references through IsReferencedByActiveRuntime before issuing Delete.
func (store *Store) Delete(ctx context.Context, version string) error {
	if store == nil || store.db == nil {
		return errors.New("delete engine version: nil store")
	}
	if strings.TrimSpace(version) == "" {
		return fmt.Errorf("delete engine version: version must not be empty")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "delete engine version", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	stmt := pgtable.EngineVersions.DELETE().
		WHERE(pgtable.EngineVersions.Version.EQ(pg.String(version)))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("delete engine version: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("delete engine version: rows affected: %w", err)
	}
	if affected == 0 {
		return engineversion.ErrNotFound
	}
	return nil
}

// IsReferencedByActiveRuntime reports whether any non-finished and
// non-stopped runtime row currently references version through
// `current_engine_version`.
func (store *Store) IsReferencedByActiveRuntime(ctx context.Context, version string) (bool, error) {
	if store == nil || store.db == nil {
		return false, errors.New("is referenced by active runtime: nil store")
	}
	if strings.TrimSpace(version) == "" {
		return false, fmt.Errorf("is referenced by active runtime: version must not be empty")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "is referenced by active runtime", store.operationTimeout)
	if err != nil {
		return false, err
	}
	defer cancel()

	stmt := pg.SELECT(pg.Int32(1).AS("present")).
		FROM(pgtable.RuntimeRecords).
		WHERE(pg.AND(
			pgtable.RuntimeRecords.CurrentEngineVersion.EQ(pg.String(version)),
			pgtable.RuntimeRecords.Status.NOT_IN(
				pg.String(string(runtime.StatusFinished)),
				pg.String(string(runtime.StatusStopped)),
			),
		)).
		LIMIT(1)

	query, args := stmt.Sql()
	row := store.db.QueryRowContext(operationCtx, query, args...)
	var present int32
	if err := row.Scan(&present); err != nil {
		if sqlx.IsNoRows(err) {
			return false, nil
		}
		return false, fmt.Errorf("is referenced by active runtime: %w", err)
	}
	return true, nil
}

// rowScanner abstracts *sql.Row and *sql.Rows so scanRow can be shared
// across single-row and iterated reads.
type rowScanner interface {
	Scan(dest ...any) error
}

// scanRow scans one engine_versions row from rs.
func scanRow(rs rowScanner) (engineversion.EngineVersion, error) {
	var (
		version   string
		imageRef  string
		options   string
		status    string
		createdAt time.Time
		updatedAt time.Time
	)
	if err := rs.Scan(&version, &imageRef, &options, &status, &createdAt, &updatedAt); err != nil {
		return engineversion.EngineVersion{}, err
	}
	return engineversion.EngineVersion{
		Version:   version,
		ImageRef:  imageRef,
		Options:   []byte(options),
		Status:    engineversion.Status(status),
		CreatedAt: createdAt.UTC(),
		UpdatedAt: updatedAt.UTC(),
	}, nil
}

// Ensure Store satisfies the ports.EngineVersionStore interface at
// compile time.
var _ ports.EngineVersionStore = (*Store)(nil)
@@ -0,0 +1,403 @@
package engineversionstore_test

import (
	"context"
	"database/sql"
	"errors"
	"testing"
	"time"

	"galaxy/gamemaster/internal/adapters/postgres/engineversionstore"
	"galaxy/gamemaster/internal/adapters/postgres/internal/pgtest"
	"galaxy/gamemaster/internal/domain/engineversion"
	"galaxy/gamemaster/internal/domain/runtime"
	"galaxy/gamemaster/internal/ports"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestMain(m *testing.M) { pgtest.RunMain(m) }

func newStore(t *testing.T) *engineversionstore.Store {
	t.Helper()
	pgtest.TruncateAll(t)
	store, err := engineversionstore.New(engineversionstore.Config{
		DB:               pgtest.Ensure(t).Pool(),
		OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	return store
}

// poolOnly returns the shared pool for tests that have to seed
// runtime_records directly (e.g. TestIsReferencedByActiveRuntime).
func poolOnly(t *testing.T) *sql.DB {
	t.Helper()
	pgtest.TruncateAll(t)
	return pgtest.Ensure(t).Pool()
}

func validVersion(version string, createdAt time.Time, status engineversion.Status) engineversion.EngineVersion {
	return engineversion.EngineVersion{
		Version:   version,
		ImageRef:  "ghcr.io/galaxy/game:" + version,
		Options:   []byte(`{"max_planets":120}`),
		Status:    status,
		CreatedAt: createdAt,
		UpdatedAt: createdAt,
	}
}

func TestNewRejectsInvalidConfig(t *testing.T) {
	_, err := engineversionstore.New(engineversionstore.Config{})
	require.Error(t, err)

	store, err := engineversionstore.New(engineversionstore.Config{
		DB:               pgtest.Ensure(t).Pool(),
		OperationTimeout: 0,
	})
	require.Error(t, err)
	require.Nil(t, store)
}

func TestInsertGetRoundTrip(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	record := validVersion("v1.2.3", now, engineversion.StatusActive)

	require.NoError(t, store.Insert(ctx, record))

	got, err := store.Get(ctx, "v1.2.3")
	require.NoError(t, err)
	assert.Equal(t, record.Version, got.Version)
	assert.Equal(t, record.ImageRef, got.ImageRef)
	assert.JSONEq(t, `{"max_planets":120}`, string(got.Options))
	assert.Equal(t, engineversion.StatusActive, got.Status)
	assert.True(t, got.CreatedAt.Equal(now))
	assert.True(t, got.UpdatedAt.Equal(now))
	assert.Equal(t, time.UTC, got.CreatedAt.Location())
	assert.Equal(t, time.UTC, got.UpdatedAt.Location())
}

func TestInsertEmptyOptionsDefaultsToObject(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	record := validVersion("v1.2.3", now, engineversion.StatusActive)
	record.Options = nil

	require.NoError(t, store.Insert(ctx, record))

	got, err := store.Get(ctx, "v1.2.3")
	require.NoError(t, err)
	assert.JSONEq(t, `{}`, string(got.Options))
}

func TestInsertConflict(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	record := validVersion("v1.2.3", now, engineversion.StatusActive)
	require.NoError(t, store.Insert(ctx, record))

	err := store.Insert(ctx, record)
	require.Error(t, err)
	require.True(t, errors.Is(err, engineversion.ErrConflict), "want ErrConflict, got %v", err)
}

func TestGetNotFound(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	_, err := store.Get(ctx, "v9.9.9")
	require.Error(t, err)
	require.True(t, errors.Is(err, engineversion.ErrNotFound))
}

func TestListNoFilter(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	require.NoError(t, store.Insert(ctx, validVersion("v1.2.0", now, engineversion.StatusDeprecated)))
	require.NoError(t, store.Insert(ctx, validVersion("v1.2.3", now, engineversion.StatusActive)))
	require.NoError(t, store.Insert(ctx, validVersion("v1.3.0", now, engineversion.StatusActive)))

	all, err := store.List(ctx, nil)
	require.NoError(t, err)
	require.Len(t, all, 3)
	assert.Equal(t, "v1.2.0", all[0].Version)
	assert.Equal(t, "v1.2.3", all[1].Version)
	assert.Equal(t, "v1.3.0", all[2].Version)
}

func TestListByStatusFilter(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
require.NoError(t, store.Insert(ctx, validVersion("v1.2.0", now, engineversion.StatusDeprecated)))
|
||||
require.NoError(t, store.Insert(ctx, validVersion("v1.2.3", now, engineversion.StatusActive)))
|
||||
require.NoError(t, store.Insert(ctx, validVersion("v1.3.0", now, engineversion.StatusActive)))
|
||||
|
||||
active := engineversion.StatusActive
|
||||
got, err := store.List(ctx, &active)
|
||||
require.NoError(t, err)
|
||||
require.Len(t, got, 2)
|
||||
assert.Equal(t, "v1.2.3", got[0].Version)
|
||||
assert.Equal(t, "v1.3.0", got[1].Version)
|
||||
|
||||
deprecated := engineversion.StatusDeprecated
|
||||
got, err = store.List(ctx, &deprecated)
|
||||
require.NoError(t, err)
|
||||
require.Len(t, got, 1)
|
||||
assert.Equal(t, "v1.2.0", got[0].Version)
|
||||
}
|
||||
|
||||
func TestListUnknownStatusRejected(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
exotic := engineversion.Status("exotic")
|
||||
_, err := store.List(ctx, &exotic)
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestUpdateImageRefOnly(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
require.NoError(t, store.Insert(ctx, validVersion("v1.2.3", now, engineversion.StatusActive)))
|
||||
|
||||
newRef := "ghcr.io/galaxy/game:v1.2.4"
|
||||
updateAt := now.Add(time.Minute)
|
||||
require.NoError(t, store.Update(ctx, ports.UpdateEngineVersionInput{
|
||||
Version: "v1.2.3",
|
||||
ImageRef: &newRef,
|
||||
Now: updateAt,
|
||||
}))
|
||||
|
||||
got, err := store.Get(ctx, "v1.2.3")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, newRef, got.ImageRef)
|
||||
assert.Equal(t, engineversion.StatusActive, got.Status)
|
||||
assert.True(t, got.UpdatedAt.Equal(updateAt))
|
||||
}
|
||||
|
||||
func TestUpdateAllFields(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
require.NoError(t, store.Insert(ctx, validVersion("v1.2.3", now, engineversion.StatusActive)))
|
||||
|
||||
newRef := "ghcr.io/galaxy/game:v1.2.4"
|
||||
newOptions := []byte(`{"max_planets":240,"hot_seat":true}`)
|
||||
deprecated := engineversion.StatusDeprecated
|
||||
updateAt := now.Add(time.Minute)
|
||||
require.NoError(t, store.Update(ctx, ports.UpdateEngineVersionInput{
|
||||
Version: "v1.2.3",
|
||||
ImageRef: &newRef,
|
||||
Options: &newOptions,
|
||||
Status: &deprecated,
|
||||
Now: updateAt,
|
||||
}))
|
||||
|
||||
got, err := store.Get(ctx, "v1.2.3")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, newRef, got.ImageRef)
|
||||
assert.JSONEq(t, string(newOptions), string(got.Options))
|
||||
assert.Equal(t, engineversion.StatusDeprecated, got.Status)
|
||||
assert.True(t, got.UpdatedAt.Equal(updateAt))
|
||||
}
|
||||
|
||||
func TestUpdateNotFound(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
newRef := "ghcr.io/galaxy/game:v1.2.4"
|
||||
err := store.Update(ctx, ports.UpdateEngineVersionInput{
|
||||
Version: "v9.9.9",
|
||||
ImageRef: &newRef,
|
||||
Now: time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC),
|
||||
})
|
||||
require.Error(t, err)
|
||||
require.True(t, errors.Is(err, engineversion.ErrNotFound))
|
||||
}
|
||||
|
||||
func TestDeprecateHappy(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
require.NoError(t, store.Insert(ctx, validVersion("v1.2.3", now, engineversion.StatusActive)))
|
||||
|
||||
deprecateAt := now.Add(time.Hour)
|
||||
require.NoError(t, store.Deprecate(ctx, "v1.2.3", deprecateAt))
|
||||
|
||||
got, err := store.Get(ctx, "v1.2.3")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, engineversion.StatusDeprecated, got.Status)
|
||||
assert.True(t, got.UpdatedAt.Equal(deprecateAt))
|
||||
}
|
||||
|
||||
func TestDeprecateIdempotent(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
require.NoError(t, store.Insert(ctx, validVersion("v1.2.3", now, engineversion.StatusDeprecated)))
|
||||
|
||||
require.NoError(t, store.Deprecate(ctx, "v1.2.3", now.Add(time.Hour)))
|
||||
|
||||
got, err := store.Get(ctx, "v1.2.3")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, engineversion.StatusDeprecated, got.Status)
|
||||
// updated_at must remain at the original insert value because the
|
||||
// idempotent path performs no UPDATE.
|
||||
assert.True(t, got.UpdatedAt.Equal(now))
|
||||
}
|
||||
|
||||
func TestDeprecateNotFound(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
err := store.Deprecate(ctx, "v9.9.9", time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC))
|
||||
require.Error(t, err)
|
||||
require.True(t, errors.Is(err, engineversion.ErrNotFound))
|
||||
}
|
||||
|
||||
func TestDeprecateRejectsZeroNow(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
err := store.Deprecate(ctx, "v1.2.3", time.Time{})
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestDeleteHappy(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
require.NoError(t, store.Insert(ctx, validVersion("v1.2.3", now, engineversion.StatusActive)))
|
||||
|
||||
require.NoError(t, store.Delete(ctx, "v1.2.3"))
|
||||
|
||||
_, err := store.Get(ctx, "v1.2.3")
|
||||
require.Error(t, err)
|
||||
require.True(t, errors.Is(err, engineversion.ErrNotFound))
|
||||
}
|
||||
|
||||
func TestDeleteNotFound(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
err := store.Delete(ctx, "v9.9.9")
|
||||
require.Error(t, err)
|
||||
require.True(t, errors.Is(err, engineversion.ErrNotFound))
|
||||
}
|
||||
|
||||
func TestDeleteRejectsEmptyVersion(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
err := store.Delete(ctx, "")
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
// TestIsReferencedByActiveRuntime exercises the join between
|
||||
// engine_versions and runtime_records. The runtime rows are seeded by
|
||||
// inserting directly through the shared pool, since the
|
||||
// runtimerecordstore adapter lives in a sibling package.
|
||||
func TestIsReferencedByActiveRuntime(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
pool := poolOnly(t)
|
||||
store, err := engineversionstore.New(engineversionstore.Config{
|
||||
DB: pool,
|
||||
OperationTimeout: pgtest.OperationTimeout,
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
require.NoError(t, store.Insert(ctx, validVersion("v1.2.3", now, engineversion.StatusActive)))
|
||||
require.NoError(t, store.Insert(ctx, validVersion("v1.2.4", now, engineversion.StatusActive)))
|
||||
|
||||
insertRuntime(t, pool, "game-running", runtime.StatusRunning, "v1.2.3", now)
|
||||
insertRuntime(t, pool, "game-finished", runtime.StatusFinished, "v1.2.3", now)
|
||||
insertRuntime(t, pool, "game-stopped", runtime.StatusStopped, "v1.2.3", now)
|
||||
|
||||
used, err := store.IsReferencedByActiveRuntime(ctx, "v1.2.3")
|
||||
require.NoError(t, err)
|
||||
assert.True(t, used, "v1.2.3 must be reported referenced (game-running uses it)")
|
||||
|
||||
unused, err := store.IsReferencedByActiveRuntime(ctx, "v1.2.4")
|
||||
require.NoError(t, err)
|
||||
assert.False(t, unused, "v1.2.4 has no active runtime reference")
|
||||
|
||||
missing, err := store.IsReferencedByActiveRuntime(ctx, "v9.9.9")
|
||||
require.NoError(t, err)
|
||||
assert.False(t, missing)
|
||||
}
|
||||
|
||||
// insertRuntime seeds one runtime_records row directly via raw SQL. The
|
||||
// adapter under test is engineversionstore; using the runtimerecordstore
|
||||
// here would couple two adapter test suites unnecessarily.
|
||||
func insertRuntime(t *testing.T, pool *sql.DB, gameID string, status runtime.Status, engineVersion string, createdAt time.Time) {
|
||||
t.Helper()
|
||||
at := createdAt.UTC()
|
||||
var stoppedAt, finishedAt any
|
||||
switch status {
|
||||
case runtime.StatusStopped:
|
||||
stoppedAt = at
|
||||
case runtime.StatusFinished:
|
||||
finishedAt = at
|
||||
}
|
||||
const stmt = `
|
||||
INSERT INTO runtime_records (
|
||||
game_id, status, engine_endpoint, current_image_ref,
|
||||
current_engine_version, turn_schedule, current_turn,
|
||||
next_generation_at, skip_next_tick, engine_health,
|
||||
created_at, updated_at, started_at, stopped_at, finished_at
|
||||
) VALUES (
|
||||
$1, $2, 'http://galaxy-game-' || $1 || ':8080', 'ghcr.io/galaxy/game:' || $3,
|
||||
$3, '0 18 * * *', 0,
|
||||
NULL, false, '',
|
||||
$4, $5, $6, $7, $8
|
||||
)`
|
||||
_, err := pool.ExecContext(context.Background(), stmt,
|
||||
gameID, string(status), engineVersion,
|
||||
at, at, at, stoppedAt, finishedAt,
|
||||
)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestIsReferencedRejectsEmptyVersion(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
_, err := store.IsReferencedByActiveRuntime(ctx, "")
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestGetRejectsEmpty(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
_, err := store.Get(ctx, "")
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestUpdateRejectsInvalidInput(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
err := store.Update(ctx, ports.UpdateEngineVersionInput{Version: "v1.2.3"})
|
||||
require.Error(t, err)
|
||||
}
|
||||
@@ -0,0 +1,211 @@
// Package pgtest exposes the testcontainers-backed PostgreSQL bootstrap
// shared by every Game Master PG adapter test. The package is regular
// Go code — not a `_test.go` file — so it can be imported by the
// `_test.go` files in the four sibling store packages
// (`runtimerecordstore`, `engineversionstore`, `playermappingstore`,
// `operationlog`).
//
// No production code in `cmd/gamemaster` or in the runtime imports this
// package. The testcontainers-go dependency therefore stays out of the
// production binary's import graph.
package pgtest

import (
	"context"
	"database/sql"
	"net/url"
	"os"
	"sync"
	"testing"
	"time"

	"galaxy/postgres"

	"galaxy/gamemaster/internal/adapters/postgres/migrations"

	testcontainers "github.com/testcontainers/testcontainers-go"
	tcpostgres "github.com/testcontainers/testcontainers-go/modules/postgres"
	"github.com/testcontainers/testcontainers-go/wait"
)

const (
	postgresImage    = "postgres:16-alpine"
	superUser        = "galaxy"
	superPassword    = "galaxy"
	superDatabase    = "galaxy_gamemaster"
	serviceRole      = "gamemasterservice"
	servicePassword  = "gamemasterservice"
	serviceSchema    = "gamemaster"
	containerStartup = 90 * time.Second

	// OperationTimeout is the per-statement timeout used by every store
	// constructed via the per-package newStore helpers. Tests may pass a
	// smaller value if they need to assert deadline behaviour explicitly.
	OperationTimeout = 10 * time.Second
)

// Env holds the per-process container plus the *sql.DB pool already
// provisioned with the gamemaster schema, role, and migrations applied.
type Env struct {
	container *tcpostgres.PostgresContainer
	pool      *sql.DB
}

// Pool returns the shared pool. Tests truncate per-table state before
// each run via TruncateAll.
func (env *Env) Pool() *sql.DB { return env.pool }

var (
	once   sync.Once
	cur    *Env
	curErr error
)

// Ensure starts the PostgreSQL container on first invocation and applies
// the embedded goose migrations. Subsequent invocations reuse the same
// container/pool. When Docker is unavailable Ensure calls t.Skip with the
// underlying error so the test suite still passes on machines without
// Docker.
func Ensure(t testing.TB) *Env {
	t.Helper()
	once.Do(func() {
		cur, curErr = start()
	})
	if curErr != nil {
		t.Skipf("postgres container start failed (Docker unavailable?): %v", curErr)
	}
	return cur
}

// TruncateAll wipes every Game Master table inside the shared pool,
// leaving the schema and indexes intact. Use it from each test that
// needs a clean slate.
func TruncateAll(t testing.TB) {
	t.Helper()
	env := Ensure(t)
	const stmt = `TRUNCATE TABLE runtime_records, engine_versions, player_mappings, operation_log RESTART IDENTITY CASCADE`
	if _, err := env.pool.ExecContext(context.Background(), stmt); err != nil {
		t.Fatalf("truncate gamemaster tables: %v", err)
	}
}

// Shutdown terminates the shared container and closes the pool. It is
// invoked from each test package's TestMain after `m.Run` returns so the
// container is released even if individual tests panic.
func Shutdown() {
	if cur == nil {
		return
	}
	if cur.pool != nil {
		_ = cur.pool.Close()
	}
	if cur.container != nil {
		_ = testcontainers.TerminateContainer(cur.container)
	}
	cur = nil
}

// RunMain is a convenience helper for each store package's TestMain: it
// runs the test main, captures the exit code, shuts the container down,
// and exits. Wiring it through one helper keeps every TestMain to two
// lines.
func RunMain(m *testing.M) {
	code := m.Run()
	Shutdown()
	os.Exit(code)
}

func start() (*Env, error) {
	ctx := context.Background()
	container, err := tcpostgres.Run(ctx, postgresImage,
		tcpostgres.WithDatabase(superDatabase),
		tcpostgres.WithUsername(superUser),
		tcpostgres.WithPassword(superPassword),
		testcontainers.WithWaitStrategy(
			wait.ForLog("database system is ready to accept connections").
				WithOccurrence(2).
				WithStartupTimeout(containerStartup),
		),
	)
	if err != nil {
		return nil, err
	}
	baseDSN, err := container.ConnectionString(ctx, "sslmode=disable")
	if err != nil {
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	if err := provisionRoleAndSchema(ctx, baseDSN); err != nil {
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	scopedDSN, err := dsnForServiceRole(baseDSN)
	if err != nil {
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = scopedDSN
	cfg.OperationTimeout = OperationTimeout
	pool, err := postgres.OpenPrimary(ctx, cfg)
	if err != nil {
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	if err := postgres.Ping(ctx, pool, OperationTimeout); err != nil {
		_ = pool.Close()
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	if err := postgres.RunMigrations(ctx, pool, migrations.FS(), "."); err != nil {
		_ = pool.Close()
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	return &Env{container: container, pool: pool}, nil
}

func provisionRoleAndSchema(ctx context.Context, baseDSN string) error {
	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = baseDSN
	cfg.OperationTimeout = OperationTimeout
	db, err := postgres.OpenPrimary(ctx, cfg)
	if err != nil {
		return err
	}
	defer func() { _ = db.Close() }()

	statements := []string{
		`DO $$ BEGIN
			IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'gamemasterservice') THEN
				CREATE ROLE gamemasterservice LOGIN PASSWORD 'gamemasterservice';
			END IF;
		END $$;`,
		`CREATE SCHEMA IF NOT EXISTS gamemaster AUTHORIZATION gamemasterservice;`,
		`GRANT USAGE ON SCHEMA gamemaster TO gamemasterservice;`,
	}
	for _, statement := range statements {
		if _, err := db.ExecContext(ctx, statement); err != nil {
			return err
		}
	}
	return nil
}

func dsnForServiceRole(baseDSN string) (string, error) {
	parsed, err := url.Parse(baseDSN)
	if err != nil {
		return "", err
	}
	values := url.Values{}
	values.Set("search_path", serviceSchema)
	values.Set("sslmode", "disable")
	scoped := url.URL{
		Scheme:   parsed.Scheme,
		User:     url.UserPassword(serviceRole, servicePassword),
		Host:     parsed.Host,
		Path:     parsed.Path,
		RawQuery: values.Encode(),
	}
	return scoped.String(), nil
}
@@ -0,0 +1,111 @@
// Package sqlx contains the small set of helpers shared by every Game
// Master PostgreSQL adapter (runtimerecordstore, engineversionstore,
// playermappingstore, operationlog). The helpers centralise the
// boundary translations for nullable timestamps and the pgx SQLSTATE
// codes the adapters interpret as domain conflicts.
package sqlx

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"time"

	"github.com/jackc/pgx/v5/pgconn"
)

// PgUniqueViolationCode identifies the SQLSTATE returned by PostgreSQL
// when a UNIQUE constraint is violated by INSERT or UPDATE.
const PgUniqueViolationCode = "23505"

// IsUniqueViolation reports whether err is a PostgreSQL unique
// violation, regardless of constraint name.
func IsUniqueViolation(err error) bool {
	var pgErr *pgconn.PgError
	if !errors.As(err, &pgErr) {
		return false
	}
	return pgErr.Code == PgUniqueViolationCode
}

// IsNoRows reports whether err is sql.ErrNoRows.
func IsNoRows(err error) bool {
	return errors.Is(err, sql.ErrNoRows)
}

// NullableTime returns t.UTC() when non-zero, otherwise nil so the column
// is bound as SQL NULL.
func NullableTime(t time.Time) any {
	if t.IsZero() {
		return nil
	}
	return t.UTC()
}

// NullableTimePtr returns t.UTC() when t is non-nil and non-zero,
// otherwise nil. Companion of NullableTime for domain types that use
// *time.Time to express absent timestamps.
func NullableTimePtr(t *time.Time) any {
	if t == nil {
		return nil
	}
	return NullableTime(*t)
}

// NullableString returns value when non-empty, otherwise nil so the
// column is bound as SQL NULL.
func NullableString(value string) any {
	if value == "" {
		return nil
	}
	return value
}

// StringFromNullable copies an optional sql.NullString into a domain
// string. NULL becomes the empty string, matching the Game Master
// domain convention that empty == NULL for nullable text columns.
func StringFromNullable(value sql.NullString) string {
	if !value.Valid {
		return ""
	}
	return value.String
}

// TimeFromNullable copies an optional sql.NullTime into a domain
// time.Time, applying the global UTC normalisation rule. NULL values
// become the zero time.Time.
func TimeFromNullable(value sql.NullTime) time.Time {
	if !value.Valid {
		return time.Time{}
	}
	return value.Time.UTC()
}

// TimePtrFromNullable copies an optional sql.NullTime into a domain
// *time.Time. NULL becomes nil; non-NULL values are wrapped after UTC
// normalisation.
func TimePtrFromNullable(value sql.NullTime) *time.Time {
	if !value.Valid {
		return nil
	}
	t := value.Time.UTC()
	return &t
}

// WithTimeout derives a child context bounded by timeout and prefixes
// context errors with operation. Callers must always invoke the returned
// cancel.
func WithTimeout(ctx context.Context, operation string, timeout time.Duration) (context.Context, context.CancelFunc, error) {
	if ctx == nil {
		return nil, nil, fmt.Errorf("%s: nil context", operation)
	}
	if err := ctx.Err(); err != nil {
		return nil, nil, fmt.Errorf("%s: %w", operation, err)
	}
	if timeout <= 0 {
		return nil, nil, fmt.Errorf("%s: operation timeout must be positive", operation)
	}
	bounded, cancel := context.WithTimeout(ctx, timeout)
	return bounded, cancel, nil
}
@@ -0,0 +1,21 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package model

import (
	"time"
)

type EngineVersions struct {
	Version   string `sql:"primary_key"`
	ImageRef  string
	Options   string
	Status    string
	CreatedAt time.Time
	UpdatedAt time.Time
}
@@ -0,0 +1,19 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package model

import (
	"time"
)

type GooseDbVersion struct {
	ID        int32 `sql:"primary_key"`
	VersionID int64
	IsApplied bool
	Tstamp    time.Time
}
@@ -0,0 +1,25 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package model

import (
	"time"
)

type OperationLog struct {
	ID           int64 `sql:"primary_key"`
	GameID       string
	OpKind       string
	OpSource     string
	SourceRef    string
	Outcome      string
	ErrorCode    string
	ErrorMessage string
	StartedAt    time.Time
	FinishedAt   *time.Time
}
@@ -0,0 +1,20 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package model

import (
	"time"
)

type PlayerMappings struct {
	GameID           string `sql:"primary_key"`
	UserID           string `sql:"primary_key"`
	RaceName         string
	EnginePlayerUUID string
	CreatedAt        time.Time
}
@@ -0,0 +1,30 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package model

import (
	"time"
)

type RuntimeRecords struct {
	GameID               string `sql:"primary_key"`
	Status               string
	EngineEndpoint       string
	CurrentImageRef      string
	CurrentEngineVersion string
	TurnSchedule         string
	CurrentTurn          int32
	NextGenerationAt     *time.Time
	SkipNextTick         bool
	EngineHealth         string
	CreatedAt            time.Time
	UpdatedAt            time.Time
	StartedAt            *time.Time
	StoppedAt            *time.Time
	FinishedAt           *time.Time
}
@@ -0,0 +1,93 @@
|
||||
//
|
||||
// Code generated by go-jet DO NOT EDIT.
|
||||
//
|
||||
// WARNING: Changes to this file may cause incorrect behavior
|
||||
// and will be lost if the code is regenerated
|
||||
//
|
||||
|
||||
package table
|
||||
|
||||
import (
|
||||
"github.com/go-jet/jet/v2/postgres"
|
||||
)
|
||||
|
||||
var EngineVersions = newEngineVersionsTable("gamemaster", "engine_versions", "")
|
||||
|
||||
type engineVersionsTable struct {
|
||||
postgres.Table
|
||||
|
||||
// Columns
|
||||
Version postgres.ColumnString
|
||||
ImageRef postgres.ColumnString
|
||||
Options postgres.ColumnString
|
||||
Status postgres.ColumnString
|
||||
CreatedAt postgres.ColumnTimestampz
|
||||
UpdatedAt postgres.ColumnTimestampz
|
||||
|
||||
AllColumns postgres.ColumnList
|
||||
MutableColumns postgres.ColumnList
|
||||
DefaultColumns postgres.ColumnList
|
||||
}
|
||||
|
||||
type EngineVersionsTable struct {
|
||||
engineVersionsTable
|
||||
|
||||
EXCLUDED engineVersionsTable
|
||||
}
|
||||
|
||||
// AS creates new EngineVersionsTable with assigned alias
|
||||
func (a EngineVersionsTable) AS(alias string) *EngineVersionsTable {
|
||||
return newEngineVersionsTable(a.SchemaName(), a.TableName(), alias)
|
||||
}
|
||||
|
||||
// Schema creates new EngineVersionsTable with assigned schema name
|
||||
func (a EngineVersionsTable) FromSchema(schemaName string) *EngineVersionsTable {
|
||||
return newEngineVersionsTable(schemaName, a.TableName(), a.Alias())
|
||||
}
|
||||
|
||||
// WithPrefix creates new EngineVersionsTable with assigned table prefix
|
||||
func (a EngineVersionsTable) WithPrefix(prefix string) *EngineVersionsTable {
|
||||
return newEngineVersionsTable(a.SchemaName(), prefix+a.TableName(), a.TableName())
|
||||
}
|
||||
|
||||
// WithSuffix creates new EngineVersionsTable with assigned table suffix
|
||||
func (a EngineVersionsTable) WithSuffix(suffix string) *EngineVersionsTable {
|
||||
return newEngineVersionsTable(a.SchemaName(), a.TableName()+suffix, a.TableName())
|
||||
}
|
||||
|
||||
func newEngineVersionsTable(schemaName, tableName, alias string) *EngineVersionsTable {
|
||||
return &EngineVersionsTable{
|
||||
engineVersionsTable: newEngineVersionsTableImpl(schemaName, tableName, alias),
|
||||
EXCLUDED: newEngineVersionsTableImpl("", "excluded", ""),
|
||||
}
|
||||
}
|
||||
|
||||
func newEngineVersionsTableImpl(schemaName, tableName, alias string) engineVersionsTable {
|
||||
var (
|
||||
VersionColumn = postgres.StringColumn("version")
|
||||
ImageRefColumn = postgres.StringColumn("image_ref")
|
||||
OptionsColumn = postgres.StringColumn("options")
|
||||
StatusColumn = postgres.StringColumn("status")
|
||||
CreatedAtColumn = postgres.TimestampzColumn("created_at")
|
||||
UpdatedAtColumn = postgres.TimestampzColumn("updated_at")
|
||||
allColumns = postgres.ColumnList{VersionColumn, ImageRefColumn, OptionsColumn, StatusColumn, CreatedAtColumn, UpdatedAtColumn}
|
||||
mutableColumns = postgres.ColumnList{ImageRefColumn, OptionsColumn, StatusColumn, CreatedAtColumn, UpdatedAtColumn}
|
||||
defaultColumns = postgres.ColumnList{OptionsColumn}
|
||||
)
|
||||
|
||||
return engineVersionsTable{
|
||||
Table: postgres.NewTable(schemaName, tableName, alias, allColumns...),
|
||||
|
||||
//Columns
|
||||
Version: VersionColumn,
|
||||
ImageRef: ImageRefColumn,
|
||||
Options: OptionsColumn,
|
||||
Status: StatusColumn,
|
||||
CreatedAt: CreatedAtColumn,
|
||||
UpdatedAt: UpdatedAtColumn,
|
||||
|
||||
AllColumns: allColumns,
|
||||
MutableColumns: mutableColumns,
|
||||
DefaultColumns: defaultColumns,
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,87 @@
|
||||
//
|
||||
// Code generated by go-jet DO NOT EDIT.
|
||||
//
|
||||
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package table

import (
	"github.com/go-jet/jet/v2/postgres"
)

var GooseDbVersion = newGooseDbVersionTable("gamemaster", "goose_db_version", "")

type gooseDbVersionTable struct {
	postgres.Table

	// Columns
	ID        postgres.ColumnInteger
	VersionID postgres.ColumnInteger
	IsApplied postgres.ColumnBool
	Tstamp    postgres.ColumnTimestamp

	AllColumns     postgres.ColumnList
	MutableColumns postgres.ColumnList
	DefaultColumns postgres.ColumnList
}

type GooseDbVersionTable struct {
	gooseDbVersionTable

	EXCLUDED gooseDbVersionTable
}

// AS creates new GooseDbVersionTable with assigned alias
func (a GooseDbVersionTable) AS(alias string) *GooseDbVersionTable {
	return newGooseDbVersionTable(a.SchemaName(), a.TableName(), alias)
}

// Schema creates new GooseDbVersionTable with assigned schema name
func (a GooseDbVersionTable) FromSchema(schemaName string) *GooseDbVersionTable {
	return newGooseDbVersionTable(schemaName, a.TableName(), a.Alias())
}

// WithPrefix creates new GooseDbVersionTable with assigned table prefix
func (a GooseDbVersionTable) WithPrefix(prefix string) *GooseDbVersionTable {
	return newGooseDbVersionTable(a.SchemaName(), prefix+a.TableName(), a.TableName())
}

// WithSuffix creates new GooseDbVersionTable with assigned table suffix
func (a GooseDbVersionTable) WithSuffix(suffix string) *GooseDbVersionTable {
	return newGooseDbVersionTable(a.SchemaName(), a.TableName()+suffix, a.TableName())
}

func newGooseDbVersionTable(schemaName, tableName, alias string) *GooseDbVersionTable {
	return &GooseDbVersionTable{
		gooseDbVersionTable: newGooseDbVersionTableImpl(schemaName, tableName, alias),
		EXCLUDED:            newGooseDbVersionTableImpl("", "excluded", ""),
	}
}

func newGooseDbVersionTableImpl(schemaName, tableName, alias string) gooseDbVersionTable {
	var (
		IDColumn        = postgres.IntegerColumn("id")
		VersionIDColumn = postgres.IntegerColumn("version_id")
		IsAppliedColumn = postgres.BoolColumn("is_applied")
		TstampColumn    = postgres.TimestampColumn("tstamp")
		allColumns      = postgres.ColumnList{IDColumn, VersionIDColumn, IsAppliedColumn, TstampColumn}
		mutableColumns  = postgres.ColumnList{VersionIDColumn, IsAppliedColumn, TstampColumn}
		defaultColumns  = postgres.ColumnList{TstampColumn}
	)

	return gooseDbVersionTable{
		Table: postgres.NewTable(schemaName, tableName, alias, allColumns...),

		//Columns
		ID:        IDColumn,
		VersionID: VersionIDColumn,
		IsApplied: IsAppliedColumn,
		Tstamp:    TstampColumn,

		AllColumns:     allColumns,
		MutableColumns: mutableColumns,
		DefaultColumns: defaultColumns,
	}
}
@@ -0,0 +1,105 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package table

import (
	"github.com/go-jet/jet/v2/postgres"
)

var OperationLog = newOperationLogTable("gamemaster", "operation_log", "")

type operationLogTable struct {
	postgres.Table

	// Columns
	ID           postgres.ColumnInteger
	GameID       postgres.ColumnString
	OpKind       postgres.ColumnString
	OpSource     postgres.ColumnString
	SourceRef    postgres.ColumnString
	Outcome      postgres.ColumnString
	ErrorCode    postgres.ColumnString
	ErrorMessage postgres.ColumnString
	StartedAt    postgres.ColumnTimestampz
	FinishedAt   postgres.ColumnTimestampz

	AllColumns     postgres.ColumnList
	MutableColumns postgres.ColumnList
	DefaultColumns postgres.ColumnList
}

type OperationLogTable struct {
	operationLogTable

	EXCLUDED operationLogTable
}

// AS creates new OperationLogTable with assigned alias
func (a OperationLogTable) AS(alias string) *OperationLogTable {
	return newOperationLogTable(a.SchemaName(), a.TableName(), alias)
}

// Schema creates new OperationLogTable with assigned schema name
func (a OperationLogTable) FromSchema(schemaName string) *OperationLogTable {
	return newOperationLogTable(schemaName, a.TableName(), a.Alias())
}

// WithPrefix creates new OperationLogTable with assigned table prefix
func (a OperationLogTable) WithPrefix(prefix string) *OperationLogTable {
	return newOperationLogTable(a.SchemaName(), prefix+a.TableName(), a.TableName())
}

// WithSuffix creates new OperationLogTable with assigned table suffix
func (a OperationLogTable) WithSuffix(suffix string) *OperationLogTable {
	return newOperationLogTable(a.SchemaName(), a.TableName()+suffix, a.TableName())
}

func newOperationLogTable(schemaName, tableName, alias string) *OperationLogTable {
	return &OperationLogTable{
		operationLogTable: newOperationLogTableImpl(schemaName, tableName, alias),
		EXCLUDED:          newOperationLogTableImpl("", "excluded", ""),
	}
}

func newOperationLogTableImpl(schemaName, tableName, alias string) operationLogTable {
	var (
		IDColumn           = postgres.IntegerColumn("id")
		GameIDColumn       = postgres.StringColumn("game_id")
		OpKindColumn       = postgres.StringColumn("op_kind")
		OpSourceColumn     = postgres.StringColumn("op_source")
		SourceRefColumn    = postgres.StringColumn("source_ref")
		OutcomeColumn      = postgres.StringColumn("outcome")
		ErrorCodeColumn    = postgres.StringColumn("error_code")
		ErrorMessageColumn = postgres.StringColumn("error_message")
		StartedAtColumn    = postgres.TimestampzColumn("started_at")
		FinishedAtColumn   = postgres.TimestampzColumn("finished_at")
		allColumns         = postgres.ColumnList{IDColumn, GameIDColumn, OpKindColumn, OpSourceColumn, SourceRefColumn, OutcomeColumn, ErrorCodeColumn, ErrorMessageColumn, StartedAtColumn, FinishedAtColumn}
		mutableColumns     = postgres.ColumnList{GameIDColumn, OpKindColumn, OpSourceColumn, SourceRefColumn, OutcomeColumn, ErrorCodeColumn, ErrorMessageColumn, StartedAtColumn, FinishedAtColumn}
		defaultColumns     = postgres.ColumnList{IDColumn, SourceRefColumn, ErrorCodeColumn, ErrorMessageColumn}
	)

	return operationLogTable{
		Table: postgres.NewTable(schemaName, tableName, alias, allColumns...),

		//Columns
		ID:           IDColumn,
		GameID:       GameIDColumn,
		OpKind:       OpKindColumn,
		OpSource:     OpSourceColumn,
		SourceRef:    SourceRefColumn,
		Outcome:      OutcomeColumn,
		ErrorCode:    ErrorCodeColumn,
		ErrorMessage: ErrorMessageColumn,
		StartedAt:    StartedAtColumn,
		FinishedAt:   FinishedAtColumn,

		AllColumns:     allColumns,
		MutableColumns: mutableColumns,
		DefaultColumns: defaultColumns,
	}
}
@@ -0,0 +1,90 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package table

import (
	"github.com/go-jet/jet/v2/postgres"
)

var PlayerMappings = newPlayerMappingsTable("gamemaster", "player_mappings", "")

type playerMappingsTable struct {
	postgres.Table

	// Columns
	GameID           postgres.ColumnString
	UserID           postgres.ColumnString
	RaceName         postgres.ColumnString
	EnginePlayerUUID postgres.ColumnString
	CreatedAt        postgres.ColumnTimestampz

	AllColumns     postgres.ColumnList
	MutableColumns postgres.ColumnList
	DefaultColumns postgres.ColumnList
}

type PlayerMappingsTable struct {
	playerMappingsTable

	EXCLUDED playerMappingsTable
}

// AS creates new PlayerMappingsTable with assigned alias
func (a PlayerMappingsTable) AS(alias string) *PlayerMappingsTable {
	return newPlayerMappingsTable(a.SchemaName(), a.TableName(), alias)
}

// Schema creates new PlayerMappingsTable with assigned schema name
func (a PlayerMappingsTable) FromSchema(schemaName string) *PlayerMappingsTable {
	return newPlayerMappingsTable(schemaName, a.TableName(), a.Alias())
}

// WithPrefix creates new PlayerMappingsTable with assigned table prefix
func (a PlayerMappingsTable) WithPrefix(prefix string) *PlayerMappingsTable {
	return newPlayerMappingsTable(a.SchemaName(), prefix+a.TableName(), a.TableName())
}

// WithSuffix creates new PlayerMappingsTable with assigned table suffix
func (a PlayerMappingsTable) WithSuffix(suffix string) *PlayerMappingsTable {
	return newPlayerMappingsTable(a.SchemaName(), a.TableName()+suffix, a.TableName())
}

func newPlayerMappingsTable(schemaName, tableName, alias string) *PlayerMappingsTable {
	return &PlayerMappingsTable{
		playerMappingsTable: newPlayerMappingsTableImpl(schemaName, tableName, alias),
		EXCLUDED:            newPlayerMappingsTableImpl("", "excluded", ""),
	}
}

func newPlayerMappingsTableImpl(schemaName, tableName, alias string) playerMappingsTable {
	var (
		GameIDColumn           = postgres.StringColumn("game_id")
		UserIDColumn           = postgres.StringColumn("user_id")
		RaceNameColumn         = postgres.StringColumn("race_name")
		EnginePlayerUUIDColumn = postgres.StringColumn("engine_player_uuid")
		CreatedAtColumn        = postgres.TimestampzColumn("created_at")
		allColumns             = postgres.ColumnList{GameIDColumn, UserIDColumn, RaceNameColumn, EnginePlayerUUIDColumn, CreatedAtColumn}
		mutableColumns         = postgres.ColumnList{RaceNameColumn, EnginePlayerUUIDColumn, CreatedAtColumn}
		defaultColumns         = postgres.ColumnList{}
	)

	return playerMappingsTable{
		Table: postgres.NewTable(schemaName, tableName, alias, allColumns...),

		//Columns
		GameID:           GameIDColumn,
		UserID:           UserIDColumn,
		RaceName:         RaceNameColumn,
		EnginePlayerUUID: EnginePlayerUUIDColumn,
		CreatedAt:        CreatedAtColumn,

		AllColumns:     allColumns,
		MutableColumns: mutableColumns,
		DefaultColumns: defaultColumns,
	}
}
@@ -0,0 +1,120 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package table

import (
	"github.com/go-jet/jet/v2/postgres"
)

var RuntimeRecords = newRuntimeRecordsTable("gamemaster", "runtime_records", "")

type runtimeRecordsTable struct {
	postgres.Table

	// Columns
	GameID               postgres.ColumnString
	Status               postgres.ColumnString
	EngineEndpoint       postgres.ColumnString
	CurrentImageRef      postgres.ColumnString
	CurrentEngineVersion postgres.ColumnString
	TurnSchedule         postgres.ColumnString
	CurrentTurn          postgres.ColumnInteger
	NextGenerationAt     postgres.ColumnTimestampz
	SkipNextTick         postgres.ColumnBool
	EngineHealth         postgres.ColumnString
	CreatedAt            postgres.ColumnTimestampz
	UpdatedAt            postgres.ColumnTimestampz
	StartedAt            postgres.ColumnTimestampz
	StoppedAt            postgres.ColumnTimestampz
	FinishedAt           postgres.ColumnTimestampz

	AllColumns     postgres.ColumnList
	MutableColumns postgres.ColumnList
	DefaultColumns postgres.ColumnList
}

type RuntimeRecordsTable struct {
	runtimeRecordsTable

	EXCLUDED runtimeRecordsTable
}

// AS creates new RuntimeRecordsTable with assigned alias
func (a RuntimeRecordsTable) AS(alias string) *RuntimeRecordsTable {
	return newRuntimeRecordsTable(a.SchemaName(), a.TableName(), alias)
}

// Schema creates new RuntimeRecordsTable with assigned schema name
func (a RuntimeRecordsTable) FromSchema(schemaName string) *RuntimeRecordsTable {
	return newRuntimeRecordsTable(schemaName, a.TableName(), a.Alias())
}

// WithPrefix creates new RuntimeRecordsTable with assigned table prefix
func (a RuntimeRecordsTable) WithPrefix(prefix string) *RuntimeRecordsTable {
	return newRuntimeRecordsTable(a.SchemaName(), prefix+a.TableName(), a.TableName())
}

// WithSuffix creates new RuntimeRecordsTable with assigned table suffix
func (a RuntimeRecordsTable) WithSuffix(suffix string) *RuntimeRecordsTable {
	return newRuntimeRecordsTable(a.SchemaName(), a.TableName()+suffix, a.TableName())
}

func newRuntimeRecordsTable(schemaName, tableName, alias string) *RuntimeRecordsTable {
	return &RuntimeRecordsTable{
		runtimeRecordsTable: newRuntimeRecordsTableImpl(schemaName, tableName, alias),
		EXCLUDED:            newRuntimeRecordsTableImpl("", "excluded", ""),
	}
}

func newRuntimeRecordsTableImpl(schemaName, tableName, alias string) runtimeRecordsTable {
	var (
		GameIDColumn               = postgres.StringColumn("game_id")
		StatusColumn               = postgres.StringColumn("status")
		EngineEndpointColumn       = postgres.StringColumn("engine_endpoint")
		CurrentImageRefColumn      = postgres.StringColumn("current_image_ref")
		CurrentEngineVersionColumn = postgres.StringColumn("current_engine_version")
		TurnScheduleColumn         = postgres.StringColumn("turn_schedule")
		CurrentTurnColumn          = postgres.IntegerColumn("current_turn")
		NextGenerationAtColumn     = postgres.TimestampzColumn("next_generation_at")
		SkipNextTickColumn         = postgres.BoolColumn("skip_next_tick")
		EngineHealthColumn         = postgres.StringColumn("engine_health")
		CreatedAtColumn            = postgres.TimestampzColumn("created_at")
		UpdatedAtColumn            = postgres.TimestampzColumn("updated_at")
		StartedAtColumn            = postgres.TimestampzColumn("started_at")
		StoppedAtColumn            = postgres.TimestampzColumn("stopped_at")
		FinishedAtColumn           = postgres.TimestampzColumn("finished_at")
		allColumns                 = postgres.ColumnList{GameIDColumn, StatusColumn, EngineEndpointColumn, CurrentImageRefColumn, CurrentEngineVersionColumn, TurnScheduleColumn, CurrentTurnColumn, NextGenerationAtColumn, SkipNextTickColumn, EngineHealthColumn, CreatedAtColumn, UpdatedAtColumn, StartedAtColumn, StoppedAtColumn, FinishedAtColumn}
		mutableColumns             = postgres.ColumnList{StatusColumn, EngineEndpointColumn, CurrentImageRefColumn, CurrentEngineVersionColumn, TurnScheduleColumn, CurrentTurnColumn, NextGenerationAtColumn, SkipNextTickColumn, EngineHealthColumn, CreatedAtColumn, UpdatedAtColumn, StartedAtColumn, StoppedAtColumn, FinishedAtColumn}
		defaultColumns             = postgres.ColumnList{CurrentTurnColumn, SkipNextTickColumn, EngineHealthColumn}
	)

	return runtimeRecordsTable{
		Table: postgres.NewTable(schemaName, tableName, alias, allColumns...),

		//Columns
		GameID:               GameIDColumn,
		Status:               StatusColumn,
		EngineEndpoint:       EngineEndpointColumn,
		CurrentImageRef:      CurrentImageRefColumn,
		CurrentEngineVersion: CurrentEngineVersionColumn,
		TurnSchedule:         TurnScheduleColumn,
		CurrentTurn:          CurrentTurnColumn,
		NextGenerationAt:     NextGenerationAtColumn,
		SkipNextTick:         SkipNextTickColumn,
		EngineHealth:         EngineHealthColumn,
		CreatedAt:            CreatedAtColumn,
		UpdatedAt:            UpdatedAtColumn,
		StartedAt:            StartedAtColumn,
		StoppedAt:            StoppedAtColumn,
		FinishedAt:           FinishedAtColumn,

		AllColumns:     allColumns,
		MutableColumns: mutableColumns,
		DefaultColumns: defaultColumns,
	}
}
@@ -0,0 +1,18 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package table

// UseSchema sets a new schema name for all generated table SQL builder types. It is recommended to invoke
// this method only once at the beginning of the program.
func UseSchema(schema string) {
	EngineVersions = EngineVersions.FromSchema(schema)
	GooseDbVersion = GooseDbVersion.FromSchema(schema)
	OperationLog = OperationLog.FromSchema(schema)
	PlayerMappings = PlayerMappings.FromSchema(schema)
	RuntimeRecords = RuntimeRecords.FromSchema(schema)
}
@@ -0,0 +1,136 @@
-- +goose Up
-- Initial Game Master PostgreSQL schema.
--
-- Four tables cover the durable surface of the service:
--   * runtime_records — one row per game with the latest known runtime
--     status, scheduling state, and engine health summary;
--   * engine_versions — the deployable engine version registry consumed
--     by Lobby's start flow and the GM admin/patch flow;
--   * player_mappings — the (game_id, user_id) → (race_name,
--     engine_player_uuid) projection installed at register-runtime;
--   * operation_log — append-only audit of every register-runtime,
--     turn-generation, force-next-turn, banish, stop, patch, and
--     engine-version mutation GM performed.
--
-- The schema and the matching `gamemasterservice` role are provisioned
-- outside this script (in tests via cmd/jetgen/main.go::provisionRoleAndSchema;
-- in production via an ops init script). This migration runs as the
-- schema owner with `search_path=gamemaster` and contains only DDL for
-- the service-owned tables and indexes. ARCHITECTURE.md §Database topology
-- mandates that each per-service role's grants stay restricted to its own
-- schema; consequently this file deliberately deviates from PLAN.md
-- Stage 09's literal `CREATE SCHEMA IF NOT EXISTS gamemaster;` instruction.

-- runtime_records holds one durable record per game with the latest
-- known runtime status, scheduling state, and engine health summary.
-- The status enum is enforced by a CHECK so domain code can rely on it
-- without reading every callsite. The composite (status,
-- next_generation_at) index drives the scheduler ticker scan that
-- selects `status='running' AND next_generation_at <= now()` once per
-- second. next_generation_at is nullable: a row enters with
-- status='starting' and a null tick, and only acquires a tick when the
-- register-runtime CAS flips it to 'running'.
CREATE TABLE runtime_records (
    game_id                text PRIMARY KEY,
    status                 text NOT NULL,
    engine_endpoint        text NOT NULL,
    current_image_ref      text NOT NULL,
    current_engine_version text NOT NULL,
    turn_schedule          text NOT NULL,
    current_turn           integer NOT NULL DEFAULT 0,
    next_generation_at     timestamptz,
    skip_next_tick         boolean NOT NULL DEFAULT false,
    engine_health          text NOT NULL DEFAULT '',
    created_at             timestamptz NOT NULL,
    updated_at             timestamptz NOT NULL,
    started_at             timestamptz,
    stopped_at             timestamptz,
    finished_at            timestamptz,
    CONSTRAINT runtime_records_status_chk
        CHECK (status IN (
            'starting', 'running', 'generation_in_progress',
            'generation_failed', 'stopped', 'engine_unreachable',
            'finished'
        ))
);

CREATE INDEX runtime_records_status_next_gen_idx
    ON runtime_records (status, next_generation_at);
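
-- Illustrative only: the ticker scan described above, written out as a
-- plain query. This is a sketch for readers; the actual statement is
-- built through go-jet in the scheduler, and the one-second cadence
-- lives in the scheduler process, not in SQL.
--
--   SELECT game_id
--     FROM runtime_records
--    WHERE status = 'running'
--      AND next_generation_at <= now()
--    ORDER BY next_generation_at;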

-- engine_versions is the deployable engine version registry. Each row
-- ties a semver string to a Docker image reference and a free-form
-- options document; the status enum gates the start flow (active
-- versions are accepted by Lobby's resolve, deprecated versions are
-- rejected on new starts but remain valid for already-running games).
-- `options` is jsonb: v1 stores it verbatim and never element-filters.
CREATE TABLE engine_versions (
    version    text PRIMARY KEY,
    image_ref  text NOT NULL,
    options    jsonb NOT NULL DEFAULT '{}'::jsonb,
    status     text NOT NULL,
    created_at timestamptz NOT NULL,
    updated_at timestamptz NOT NULL,
    CONSTRAINT engine_versions_status_chk
        CHECK (status IN ('active', 'deprecated'))
);

-- player_mappings carries the (game_id, user_id) → (race_name,
-- engine_player_uuid) projection installed at register-runtime. The
-- composite primary key serves both the lookups by (game_id, user_id)
-- on every command/order/report request and, via its leftmost prefix,
-- the per-game roster reads (`WHERE game_id = $1`). The UNIQUE index
-- on (game_id, race_name) enforces the one-race-per-game invariant at
-- the storage boundary.
CREATE TABLE player_mappings (
    game_id            text NOT NULL,
    user_id            text NOT NULL,
    race_name          text NOT NULL,
    engine_player_uuid text NOT NULL,
    created_at         timestamptz NOT NULL,
    PRIMARY KEY (game_id, user_id)
);

CREATE UNIQUE INDEX player_mappings_game_race_uniq
    ON player_mappings (game_id, race_name);

-- operation_log is an append-only audit of every operation Game Master
-- performed against a game's runtime or against the engine version
-- registry. The (game_id, started_at DESC) index drives audit reads
-- from the GM/Admin REST surface. finished_at is nullable for in-flight
-- rows even though the service layer always finalises the row. The
-- op_kind / op_source / outcome enums are enforced by CHECK constraints
-- to keep the audit schema honest without a separate Go validator.
CREATE TABLE operation_log (
    id            bigserial PRIMARY KEY,
    game_id       text NOT NULL,
    op_kind       text NOT NULL,
    op_source     text NOT NULL,
    source_ref    text NOT NULL DEFAULT '',
    outcome       text NOT NULL,
    error_code    text NOT NULL DEFAULT '',
    error_message text NOT NULL DEFAULT '',
    started_at    timestamptz NOT NULL,
    finished_at   timestamptz,
    CONSTRAINT operation_log_op_kind_chk
        CHECK (op_kind IN (
            'register_runtime', 'turn_generation', 'force_next_turn',
            'banish', 'stop', 'patch',
            'engine_version_create', 'engine_version_update',
            'engine_version_deprecate', 'engine_version_delete'
        )),
    CONSTRAINT operation_log_op_source_chk
        CHECK (op_source IN (
            'gateway_player', 'lobby_internal', 'admin_rest'
        )),
    CONSTRAINT operation_log_outcome_chk
        CHECK (outcome IN ('success', 'failure'))
);

CREATE INDEX operation_log_game_started_idx
    ON operation_log (game_id, started_at DESC);

-- +goose Down
DROP TABLE IF EXISTS operation_log;
DROP TABLE IF EXISTS player_mappings;
DROP TABLE IF EXISTS engine_versions;
DROP TABLE IF EXISTS runtime_records;
@@ -0,0 +1,19 @@
// Package migrations exposes the embedded goose migration files used by
// Game Master to provision its `gamemaster` schema in PostgreSQL.
//
// The embedded filesystem is consumed by `pkg/postgres.RunMigrations`
// during gamemaster-service startup and by `cmd/jetgen` when regenerating
// the `internal/adapters/postgres/jet/` code against a transient
// PostgreSQL instance.
package migrations

import "embed"

//go:embed *.sql
var fs embed.FS

// FS returns the embedded filesystem containing every numbered goose
// migration shipped with Game Master.
func FS() embed.FS {
	return fs
}
@@ -0,0 +1,221 @@
// Package operationlog implements the PostgreSQL-backed adapter for
// `ports.OperationLogStore`.
//
// The package owns the on-disk shape of the `operation_log` table
// defined in
// `galaxy/gamemaster/internal/adapters/postgres/migrations/00001_init.sql`
// and translates the schema-agnostic `ports.OperationLogStore`
// interface declared in `internal/ports/operationlog.go` into
// concrete go-jet/v2 statements driven by the pgx driver.
//
// Append uses `INSERT ... RETURNING id` to surface the bigserial id
// back to callers; ListByGame is index-driven by
// `operation_log_game_started_idx`.
package operationlog

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"strings"
	"time"

	"galaxy/gamemaster/internal/adapters/postgres/internal/sqlx"
	pgtable "galaxy/gamemaster/internal/adapters/postgres/jet/gamemaster/table"
	"galaxy/gamemaster/internal/domain/operation"
	"galaxy/gamemaster/internal/ports"

	pg "github.com/go-jet/jet/v2/postgres"
)

// Config configures one PostgreSQL-backed operation-log store.
type Config struct {
	DB               *sql.DB
	OperationTimeout time.Duration
}

// Store persists Game Master operation-log entries in PostgreSQL.
type Store struct {
	db               *sql.DB
	operationTimeout time.Duration
}

// New constructs one PostgreSQL-backed operation-log store from cfg.
func New(cfg Config) (*Store, error) {
	if cfg.DB == nil {
		return nil, errors.New("new postgres operation log store: db must not be nil")
	}
	if cfg.OperationTimeout <= 0 {
		return nil, errors.New("new postgres operation log store: operation timeout must be positive")
	}
	return &Store{
		db:               cfg.DB,
		operationTimeout: cfg.OperationTimeout,
	}, nil
}

// operationLogSelectColumns matches scanRow's column order.
var operationLogSelectColumns = pg.ColumnList{
	pgtable.OperationLog.ID,
	pgtable.OperationLog.GameID,
	pgtable.OperationLog.OpKind,
	pgtable.OperationLog.OpSource,
	pgtable.OperationLog.SourceRef,
	pgtable.OperationLog.Outcome,
	pgtable.OperationLog.ErrorCode,
	pgtable.OperationLog.ErrorMessage,
	pgtable.OperationLog.StartedAt,
	pgtable.OperationLog.FinishedAt,
}

// Append inserts entry into the operation log and returns the
// generated bigserial id. entry is validated through
// operation.OperationEntry.Validate before the SQL is issued.
func (store *Store) Append(ctx context.Context, entry operation.OperationEntry) (int64, error) {
	if store == nil || store.db == nil {
		return 0, errors.New("append operation log entry: nil store")
	}
	if err := entry.Validate(); err != nil {
		return 0, fmt.Errorf("append operation log entry: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "append operation log entry", store.operationTimeout)
	if err != nil {
		return 0, err
	}
	defer cancel()

	stmt := pgtable.OperationLog.INSERT(
		pgtable.OperationLog.GameID,
		pgtable.OperationLog.OpKind,
		pgtable.OperationLog.OpSource,
		pgtable.OperationLog.SourceRef,
		pgtable.OperationLog.Outcome,
		pgtable.OperationLog.ErrorCode,
		pgtable.OperationLog.ErrorMessage,
		pgtable.OperationLog.StartedAt,
		pgtable.OperationLog.FinishedAt,
	).VALUES(
		entry.GameID,
		string(entry.OpKind),
		string(entry.OpSource),
		entry.SourceRef,
		string(entry.Outcome),
		entry.ErrorCode,
		entry.ErrorMessage,
		entry.StartedAt.UTC(),
		sqlx.NullableTimePtr(entry.FinishedAt),
	).RETURNING(pgtable.OperationLog.ID)

	query, args := stmt.Sql()
	row := store.db.QueryRowContext(operationCtx, query, args...)
	var id int64
	if err := row.Scan(&id); err != nil {
		return 0, fmt.Errorf("append operation log entry: %w", err)
	}
	return id, nil
}

// ListByGame returns the most recent entries for gameID, ordered by
// started_at descending and id descending (a tie-breaker that keeps
// the order stable when two rows share a started_at). The result is
// capped by limit; a non-positive limit is rejected.
func (store *Store) ListByGame(ctx context.Context, gameID string, limit int) ([]operation.OperationEntry, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("list operation log entries by game: nil store")
	}
	if strings.TrimSpace(gameID) == "" {
		return nil, fmt.Errorf("list operation log entries by game: game id must not be empty")
	}
	if limit <= 0 {
		return nil, fmt.Errorf("list operation log entries by game: limit must be positive, got %d", limit)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "list operation log entries by game", store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	stmt := pg.SELECT(operationLogSelectColumns).
		FROM(pgtable.OperationLog).
		WHERE(pgtable.OperationLog.GameID.EQ(pg.String(gameID))).
		ORDER_BY(pgtable.OperationLog.StartedAt.DESC(), pgtable.OperationLog.ID.DESC()).
		LIMIT(int64(limit))

	query, args := stmt.Sql()
	rows, err := store.db.QueryContext(operationCtx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("list operation log entries by game: %w", err)
	}
	defer rows.Close()

	entries := make([]operation.OperationEntry, 0)
	for rows.Next() {
		got, err := scanRow(rows)
		if err != nil {
			return nil, fmt.Errorf("list operation log entries by game: scan: %w", err)
		}
		entries = append(entries, got)
	}
	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("list operation log entries by game: %w", err)
	}
	if len(entries) == 0 {
		return nil, nil
	}
	return entries, nil
}

// rowScanner abstracts *sql.Row and *sql.Rows so scanRow can be shared
// across single-row and iterated reads.
type rowScanner interface {
	Scan(dest ...any) error
}

// scanRow scans one operation_log row from rs.
func scanRow(rs rowScanner) (operation.OperationEntry, error) {
	var (
		id           int64
		gameID       string
		opKind       string
		opSource     string
		sourceRef    string
		outcome      string
		errorCode    string
		errorMessage string
		startedAt    time.Time
		finishedAt   sql.NullTime
	)
	if err := rs.Scan(
		&id,
		&gameID,
		&opKind,
		&opSource,
		&sourceRef,
		&outcome,
		&errorCode,
		&errorMessage,
		&startedAt,
		&finishedAt,
	); err != nil {
		return operation.OperationEntry{}, err
	}
	return operation.OperationEntry{
		ID:           id,
		GameID:       gameID,
		OpKind:       operation.OpKind(opKind),
		OpSource:     operation.OpSource(opSource),
		SourceRef:    sourceRef,
		Outcome:      operation.Outcome(outcome),
		ErrorCode:    errorCode,
		ErrorMessage: errorMessage,
		StartedAt:    startedAt.UTC(),
		FinishedAt:   sqlx.TimePtrFromNullable(finishedAt),
	}, nil
}

// Ensure Store satisfies the ports.OperationLogStore interface at
// compile time.
var _ ports.OperationLogStore = (*Store)(nil)
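
// Usage sketch (illustrative only; assumes a configured *sql.DB named
// db and a populated operation.OperationEntry named entry — neither is
// defined in this package):
//
//	store, err := New(Config{DB: db, OperationTimeout: 5 * time.Second})
//	if err != nil {
//		// handle construction error
//	}
//	id, err := store.Append(ctx, entry)   // id is the bigserial row id
//	entries, err := store.ListByGame(ctx, entry.GameID, 20)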
@@ -0,0 +1,190 @@
package operationlog_test

import (
	"context"
	"testing"
	"time"

	"galaxy/gamemaster/internal/adapters/postgres/internal/pgtest"
	"galaxy/gamemaster/internal/adapters/postgres/operationlog"
	"galaxy/gamemaster/internal/domain/operation"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestMain(m *testing.M) { pgtest.RunMain(m) }

func newStore(t *testing.T) *operationlog.Store {
	t.Helper()
	pgtest.TruncateAll(t)
	store, err := operationlog.New(operationlog.Config{
		DB:               pgtest.Ensure(t).Pool(),
		OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	return store
}

func successEntry(gameID string, kind operation.OpKind, source operation.OpSource, startedAt time.Time) operation.OperationEntry {
	finishedAt := startedAt.Add(50 * time.Millisecond)
	return operation.OperationEntry{
		GameID:     gameID,
		OpKind:     kind,
		OpSource:   source,
		SourceRef:  "req-001",
		Outcome:    operation.OutcomeSuccess,
		StartedAt:  startedAt,
		FinishedAt: &finishedAt,
	}
}

func TestNewRejectsInvalidConfig(t *testing.T) {
	_, err := operationlog.New(operationlog.Config{})
	require.Error(t, err)

	store, err := operationlog.New(operationlog.Config{
		DB:               pgtest.Ensure(t).Pool(),
		OperationTimeout: 0,
	})
	require.Error(t, err)
	require.Nil(t, store)
}

func TestAppendSuccessEntry(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	at := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	entry := successEntry("game-001", operation.OpKindRegisterRuntime, operation.OpSourceLobbyInternal, at)

	id, err := store.Append(ctx, entry)
	require.NoError(t, err)
	assert.Greater(t, id, int64(0))

	entries, err := store.ListByGame(ctx, "game-001", 10)
	require.NoError(t, err)
	require.Len(t, entries, 1)
	got := entries[0]
	assert.Equal(t, id, got.ID)
	assert.Equal(t, entry.GameID, got.GameID)
	assert.Equal(t, entry.OpKind, got.OpKind)
	assert.Equal(t, entry.OpSource, got.OpSource)
	assert.Equal(t, entry.SourceRef, got.SourceRef)
	assert.Equal(t, operation.OutcomeSuccess, got.Outcome)
	assert.Empty(t, got.ErrorCode)
	assert.Empty(t, got.ErrorMessage)
	assert.True(t, got.StartedAt.Equal(at))
	require.NotNil(t, got.FinishedAt)
	assert.Equal(t, time.UTC, got.StartedAt.Location())
	assert.Equal(t, time.UTC, got.FinishedAt.Location())
}

func TestAppendFailureEntry(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	at := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	finishedAt := at.Add(time.Second)
	entry := operation.OperationEntry{
		GameID:       "game-001",
		OpKind:       operation.OpKindTurnGeneration,
		OpSource:     operation.OpSourceAdminRest,
		Outcome:      operation.OutcomeFailure,
		ErrorCode:    "engine_unreachable",
		ErrorMessage: "connection refused",
		StartedAt:    at,
		FinishedAt:   &finishedAt,
	}

	_, err := store.Append(ctx, entry)
	require.NoError(t, err)

	got, err := store.ListByGame(ctx, "game-001", 1)
	require.NoError(t, err)
	require.Len(t, got, 1)
	assert.Equal(t, operation.OutcomeFailure, got[0].Outcome)
	assert.Equal(t, "engine_unreachable", got[0].ErrorCode)
	assert.Equal(t, "connection refused", got[0].ErrorMessage)
}

func TestAppendIDsAreMonotonic(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	at := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	id1, err := store.Append(ctx, successEntry("game-001", operation.OpKindRegisterRuntime, operation.OpSourceLobbyInternal, at))
	require.NoError(t, err)

	id2, err := store.Append(ctx, successEntry("game-001", operation.OpKindTurnGeneration, operation.OpSourceLobbyInternal, at.Add(time.Second)))
	require.NoError(t, err)

	assert.Greater(t, id2, id1, "bigserial ids must be monotonic across appends")
}

func TestAppendValidationRejection(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	bad := operation.OperationEntry{}
	_, err := store.Append(ctx, bad)
	require.Error(t, err)
}

func TestListByGameOrderingDesc(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	at := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	_, err := store.Append(ctx, successEntry("game-001", operation.OpKindRegisterRuntime, operation.OpSourceLobbyInternal, at))
	require.NoError(t, err)
	_, err = store.Append(ctx, successEntry("game-001", operation.OpKindTurnGeneration, operation.OpSourceLobbyInternal, at.Add(time.Second)))
	require.NoError(t, err)
	_, err = store.Append(ctx, successEntry("game-001", operation.OpKindStop, operation.OpSourceAdminRest, at.Add(2*time.Second)))
	require.NoError(t, err)

	got, err := store.ListByGame(ctx, "game-001", 10)
	require.NoError(t, err)
	require.Len(t, got, 3)
	assert.Equal(t, operation.OpKindStop, got[0].OpKind)
	assert.Equal(t, operation.OpKindTurnGeneration, got[1].OpKind)
	assert.Equal(t, operation.OpKindRegisterRuntime, got[2].OpKind)
}

func TestListByGameRespectsLimit(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	at := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	for index := range 5 {
		_, err := store.Append(ctx, successEntry("game-001", operation.OpKindTurnGeneration, operation.OpSourceLobbyInternal, at.Add(time.Duration(index)*time.Second)))
		require.NoError(t, err)
	}

	got, err := store.ListByGame(ctx, "game-001", 2)
	require.NoError(t, err)
	require.Len(t, got, 2)
}

func TestListByGameUnknownGame(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	got, err := store.ListByGame(ctx, "unknown-game", 10)
	require.NoError(t, err)
	assert.Empty(t, got)
}

func TestListByGameRejectsBadArgs(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	_, err := store.ListByGame(ctx, "", 10)
	require.Error(t, err)

	_, err = store.ListByGame(ctx, "game-001", 0)
	require.Error(t, err)

	_, err = store.ListByGame(ctx, "game-001", -1)
	require.Error(t, err)
}
@@ -0,0 +1,292 @@
// Package playermappingstore implements the PostgreSQL-backed adapter
// for `ports.PlayerMappingStore`.
//
// The package owns the on-disk shape of the `player_mappings` table
// defined in
// `galaxy/gamemaster/internal/adapters/postgres/migrations/00001_init.sql`
// and translates the schema-agnostic `ports.PlayerMappingStore`
// interface declared in `internal/ports/playermappingstore.go` into
// concrete go-jet/v2 statements driven by the pgx driver.
//
// BulkInsert ships every row in a single multi-row INSERT so the
// operation is atomic — any unique-constraint violation rolls back the
// whole batch and is mapped to playermapping.ErrConflict.
package playermappingstore

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"strings"
	"time"

	"galaxy/gamemaster/internal/adapters/postgres/internal/sqlx"
	pgtable "galaxy/gamemaster/internal/adapters/postgres/jet/gamemaster/table"
	"galaxy/gamemaster/internal/domain/playermapping"
	"galaxy/gamemaster/internal/ports"

	pg "github.com/go-jet/jet/v2/postgres"
)

// Config configures one PostgreSQL-backed player-mapping store.
type Config struct {
	DB               *sql.DB
	OperationTimeout time.Duration
}

// Store persists Game Master player mappings in PostgreSQL.
type Store struct {
	db               *sql.DB
	operationTimeout time.Duration
}

// New constructs one PostgreSQL-backed player-mapping store from cfg.
func New(cfg Config) (*Store, error) {
	if cfg.DB == nil {
		return nil, errors.New("new postgres player mapping store: db must not be nil")
	}
	if cfg.OperationTimeout <= 0 {
		return nil, errors.New("new postgres player mapping store: operation timeout must be positive")
	}
	return &Store{
		db:               cfg.DB,
		operationTimeout: cfg.OperationTimeout,
	}, nil
}

// playerMappingSelectColumns matches scanRow's column order.
var playerMappingSelectColumns = pg.ColumnList{
	pgtable.PlayerMappings.GameID,
	pgtable.PlayerMappings.UserID,
	pgtable.PlayerMappings.RaceName,
	pgtable.PlayerMappings.EnginePlayerUUID,
	pgtable.PlayerMappings.CreatedAt,
}

// BulkInsert installs every mapping in records using a single
// multi-row INSERT. Either every row is persisted or none of them is.
// Any PostgreSQL unique-violation
// (`(game_id, user_id)` PK or `(game_id, race_name)` UNIQUE) is mapped
// to playermapping.ErrConflict.
func (store *Store) BulkInsert(ctx context.Context, records []playermapping.PlayerMapping) error {
	if store == nil || store.db == nil {
		return errors.New("bulk insert player mappings: nil store")
	}
	if len(records) == 0 {
		return nil
	}
	for index, record := range records {
		if err := record.Validate(); err != nil {
			return fmt.Errorf("bulk insert player mappings: record %d: %w", index, err)
		}
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "bulk insert player mappings", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	stmt := pgtable.PlayerMappings.INSERT(
		pgtable.PlayerMappings.GameID,
		pgtable.PlayerMappings.UserID,
		pgtable.PlayerMappings.RaceName,
		pgtable.PlayerMappings.EnginePlayerUUID,
		pgtable.PlayerMappings.CreatedAt,
	)
	for _, record := range records {
		stmt = stmt.VALUES(
			record.GameID,
			record.UserID,
			record.RaceName,
			record.EnginePlayerUUID,
			record.CreatedAt.UTC(),
		)
	}

	query, args := stmt.Sql()
	if _, err := store.db.ExecContext(operationCtx, query, args...); err != nil {
		if sqlx.IsUniqueViolation(err) {
			return fmt.Errorf("bulk insert player mappings: %w", playermapping.ErrConflict)
		}
		return fmt.Errorf("bulk insert player mappings: %w", err)
	}
	return nil
}

// Get returns the mapping identified by (gameID, userID).
func (store *Store) Get(ctx context.Context, gameID, userID string) (playermapping.PlayerMapping, error) {
	if store == nil || store.db == nil {
		return playermapping.PlayerMapping{}, errors.New("get player mapping: nil store")
	}
	if strings.TrimSpace(gameID) == "" {
		return playermapping.PlayerMapping{}, fmt.Errorf("get player mapping: game id must not be empty")
	}
	if strings.TrimSpace(userID) == "" {
		return playermapping.PlayerMapping{}, fmt.Errorf("get player mapping: user id must not be empty")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get player mapping", store.operationTimeout)
	if err != nil {
		return playermapping.PlayerMapping{}, err
	}
	defer cancel()

	stmt := pg.SELECT(playerMappingSelectColumns).
		FROM(pgtable.PlayerMappings).
		WHERE(pg.AND(
			pgtable.PlayerMappings.GameID.EQ(pg.String(gameID)),
			pgtable.PlayerMappings.UserID.EQ(pg.String(userID)),
		))

	query, args := stmt.Sql()
	row := store.db.QueryRowContext(operationCtx, query, args...)
	got, err := scanRow(row)
	if sqlx.IsNoRows(err) {
		return playermapping.PlayerMapping{}, playermapping.ErrNotFound
	}
	if err != nil {
		return playermapping.PlayerMapping{}, fmt.Errorf("get player mapping: %w", err)
	}
	return got, nil
}

// GetByRace returns the mapping identified by (gameID, raceName).
func (store *Store) GetByRace(ctx context.Context, gameID, raceName string) (playermapping.PlayerMapping, error) {
	if store == nil || store.db == nil {
		return playermapping.PlayerMapping{}, errors.New("get player mapping by race: nil store")
	}
	if strings.TrimSpace(gameID) == "" {
		return playermapping.PlayerMapping{}, fmt.Errorf("get player mapping by race: game id must not be empty")
	}
	if strings.TrimSpace(raceName) == "" {
		return playermapping.PlayerMapping{}, fmt.Errorf("get player mapping by race: race name must not be empty")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get player mapping by race", store.operationTimeout)
	if err != nil {
		return playermapping.PlayerMapping{}, err
	}
	defer cancel()

	stmt := pg.SELECT(playerMappingSelectColumns).
		FROM(pgtable.PlayerMappings).
		WHERE(pg.AND(
			pgtable.PlayerMappings.GameID.EQ(pg.String(gameID)),
			pgtable.PlayerMappings.RaceName.EQ(pg.String(raceName)),
		))

	query, args := stmt.Sql()
	row := store.db.QueryRowContext(operationCtx, query, args...)
	got, err := scanRow(row)
	if sqlx.IsNoRows(err) {
		return playermapping.PlayerMapping{}, playermapping.ErrNotFound
	}
	if err != nil {
		return playermapping.PlayerMapping{}, fmt.Errorf("get player mapping by race: %w", err)
	}
	return got, nil
}

// ListByGame returns every mapping owned by gameID, ordered by user_id
// ascending.
func (store *Store) ListByGame(ctx context.Context, gameID string) ([]playermapping.PlayerMapping, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("list player mappings by game: nil store")
	}
	if strings.TrimSpace(gameID) == "" {
		return nil, fmt.Errorf("list player mappings by game: game id must not be empty")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "list player mappings by game", store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	stmt := pg.SELECT(playerMappingSelectColumns).
		FROM(pgtable.PlayerMappings).
		WHERE(pgtable.PlayerMappings.GameID.EQ(pg.String(gameID))).
		ORDER_BY(pgtable.PlayerMappings.UserID.ASC())

	query, args := stmt.Sql()
	rows, err := store.db.QueryContext(operationCtx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("list player mappings by game: %w", err)
	}
	defer rows.Close()

	mappings := make([]playermapping.PlayerMapping, 0)
	for rows.Next() {
		got, err := scanRow(rows)
		if err != nil {
			return nil, fmt.Errorf("list player mappings by game: scan: %w", err)
		}
		mappings = append(mappings, got)
	}
	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("list player mappings by game: %w", err)
	}
	if len(mappings) == 0 {
		return nil, nil
	}
	return mappings, nil
}

// DeleteByGame removes every mapping owned by gameID. The call is
// idempotent: it returns nil even when no rows were deleted.
func (store *Store) DeleteByGame(ctx context.Context, gameID string) error {
	if store == nil || store.db == nil {
		return errors.New("delete player mappings by game: nil store")
	}
	if strings.TrimSpace(gameID) == "" {
		return fmt.Errorf("delete player mappings by game: game id must not be empty")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "delete player mappings by game", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	stmt := pgtable.PlayerMappings.DELETE().
		WHERE(pgtable.PlayerMappings.GameID.EQ(pg.String(gameID)))

	query, args := stmt.Sql()
	if _, err := store.db.ExecContext(operationCtx, query, args...); err != nil {
		return fmt.Errorf("delete player mappings by game: %w", err)
	}
	return nil
}

// rowScanner abstracts *sql.Row and *sql.Rows so scanRow can be shared
// across single-row and iterated reads.
type rowScanner interface {
	Scan(dest ...any) error
}

// scanRow scans one player_mappings row from rs.
func scanRow(rs rowScanner) (playermapping.PlayerMapping, error) {
	var (
		gameID           string
		userID           string
		raceName         string
		enginePlayerUUID string
		createdAt        time.Time
	)
	if err := rs.Scan(&gameID, &userID, &raceName, &enginePlayerUUID, &createdAt); err != nil {
		return playermapping.PlayerMapping{}, err
	}
	return playermapping.PlayerMapping{
		GameID:           gameID,
		UserID:           userID,
		RaceName:         raceName,
		EnginePlayerUUID: enginePlayerUUID,
		CreatedAt:        createdAt.UTC(),
	}, nil
}

// Ensure Store satisfies the ports.PlayerMappingStore interface at
// compile time.
var _ ports.PlayerMappingStore = (*Store)(nil)
@@ -0,0 +1,264 @@
package playermappingstore_test

import (
	"context"
	"errors"
	"testing"
	"time"

	"galaxy/gamemaster/internal/adapters/postgres/internal/pgtest"
	"galaxy/gamemaster/internal/adapters/postgres/playermappingstore"
	"galaxy/gamemaster/internal/domain/playermapping"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestMain(m *testing.M) { pgtest.RunMain(m) }

func newStore(t *testing.T) *playermappingstore.Store {
	t.Helper()
	pgtest.TruncateAll(t)
	store, err := playermappingstore.New(playermappingstore.Config{
		DB:               pgtest.Ensure(t).Pool(),
		OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	return store
}

func mapping(gameID, userID, raceName, uuid string, createdAt time.Time) playermapping.PlayerMapping {
	return playermapping.PlayerMapping{
		GameID:           gameID,
		UserID:           userID,
		RaceName:         raceName,
		EnginePlayerUUID: uuid,
		CreatedAt:        createdAt,
	}
}

func TestNewRejectsInvalidConfig(t *testing.T) {
	_, err := playermappingstore.New(playermappingstore.Config{})
	require.Error(t, err)

	store, err := playermappingstore.New(playermappingstore.Config{
		DB:               pgtest.Ensure(t).Pool(),
		OperationTimeout: 0,
	})
	require.Error(t, err)
	require.Nil(t, store)
}

func TestBulkInsertHappy(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	records := []playermapping.PlayerMapping{
		mapping("game-001", "user-1", "Aelinari", "uuid-1", now),
		mapping("game-001", "user-2", "Drazi", "uuid-2", now),
		mapping("game-001", "user-3", "Voltori", "uuid-3", now),
	}
	require.NoError(t, store.BulkInsert(ctx, records))

	for _, want := range records {
		got, err := store.Get(ctx, want.GameID, want.UserID)
		require.NoError(t, err)
		assert.Equal(t, want.RaceName, got.RaceName)
		assert.Equal(t, want.EnginePlayerUUID, got.EnginePlayerUUID)
		assert.True(t, got.CreatedAt.Equal(now))
		assert.Equal(t, time.UTC, got.CreatedAt.Location())
	}
}

func TestBulkInsertEmpty(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)
	require.NoError(t, store.BulkInsert(ctx, nil))
	require.NoError(t, store.BulkInsert(ctx, []playermapping.PlayerMapping{}))

	got, err := store.ListByGame(ctx, "game-001")
	require.NoError(t, err)
	assert.Empty(t, got)
}

func TestBulkInsertAtomicConflictRaceName(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	// user-3 reuses Aelinari (already taken by user-1) inside the same
	// game — the unique (game_id, race_name) index must reject the
	// whole batch.
	records := []playermapping.PlayerMapping{
		mapping("game-001", "user-1", "Aelinari", "uuid-1", now),
		mapping("game-001", "user-2", "Drazi", "uuid-2", now),
		mapping("game-001", "user-3", "Aelinari", "uuid-3", now),
	}
	err := store.BulkInsert(ctx, records)
	require.Error(t, err)
	require.True(t, errors.Is(err, playermapping.ErrConflict), "want ErrConflict, got %v", err)

	got, err := store.ListByGame(ctx, "game-001")
	require.NoError(t, err)
	assert.Empty(t, got, "atomic batch must roll back every row when any row fails")
}

func TestBulkInsertAtomicConflictUserID(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	records := []playermapping.PlayerMapping{
		mapping("game-001", "user-1", "Aelinari", "uuid-1", now),
		mapping("game-001", "user-1", "Drazi", "uuid-2", now), // user-1 twice
	}
	err := store.BulkInsert(ctx, records)
	require.Error(t, err)
	require.True(t, errors.Is(err, playermapping.ErrConflict))
}

func TestBulkInsertConflictAcrossCalls(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	require.NoError(t, store.BulkInsert(ctx, []playermapping.PlayerMapping{
		mapping("game-001", "user-1", "Aelinari", "uuid-1", now),
	}))

	err := store.BulkInsert(ctx, []playermapping.PlayerMapping{
		mapping("game-001", "user-1", "DifferentRace", "uuid-2", now),
	})
	require.Error(t, err)
	require.True(t, errors.Is(err, playermapping.ErrConflict))
}

func TestBulkInsertRejectsInvalid(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	bad := []playermapping.PlayerMapping{
		mapping("game-001", "user-1", "Aelinari", "uuid-1", now),
		{GameID: "game-001", UserID: "", RaceName: "Drazi", EnginePlayerUUID: "uuid-2", CreatedAt: now},
	}
	err := store.BulkInsert(ctx, bad)
	require.Error(t, err)
	require.False(t, errors.Is(err, playermapping.ErrConflict))

	got, err := store.ListByGame(ctx, "game-001")
	require.NoError(t, err)
	assert.Empty(t, got, "validation rejection must not insert any row")
}

func TestGetMissingReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	_, err := store.Get(ctx, "game-001", "user-1")
	require.Error(t, err)
	require.True(t, errors.Is(err, playermapping.ErrNotFound))
}

func TestGetByRace(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	require.NoError(t, store.BulkInsert(ctx, []playermapping.PlayerMapping{
		mapping("game-001", "user-1", "Aelinari", "uuid-1", now),
		mapping("game-001", "user-2", "Drazi", "uuid-2", now),
	}))

	got, err := store.GetByRace(ctx, "game-001", "Aelinari")
	require.NoError(t, err)
	assert.Equal(t, "user-1", got.UserID)

	_, err = store.GetByRace(ctx, "game-001", "Voltori")
	require.Error(t, err)
	require.True(t, errors.Is(err, playermapping.ErrNotFound))
}

func TestListByGameSortedByUserID(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	require.NoError(t, store.BulkInsert(ctx, []playermapping.PlayerMapping{
		mapping("game-001", "user-c", "Aelinari", "uuid-1", now),
		mapping("game-001", "user-a", "Drazi", "uuid-2", now),
		mapping("game-001", "user-b", "Voltori", "uuid-3", now),
		// other game's mappings must not leak
		mapping("game-002", "user-z", "Outsider", "uuid-4", now),
	}))

	got, err := store.ListByGame(ctx, "game-001")
	require.NoError(t, err)
	require.Len(t, got, 3)
	assert.Equal(t, "user-a", got[0].UserID)
	assert.Equal(t, "user-b", got[1].UserID)
	assert.Equal(t, "user-c", got[2].UserID)
}

func TestListByGameUnknown(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	got, err := store.ListByGame(ctx, "unknown-game")
	require.NoError(t, err)
	assert.Empty(t, got)
}

func TestDeleteByGameIdempotent(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	require.NoError(t, store.BulkInsert(ctx, []playermapping.PlayerMapping{
		mapping("game-001", "user-1", "Aelinari", "uuid-1", now),
		mapping("game-001", "user-2", "Drazi", "uuid-2", now),
	}))

	require.NoError(t, store.DeleteByGame(ctx, "game-001"))
	got, err := store.ListByGame(ctx, "game-001")
	require.NoError(t, err)
	assert.Empty(t, got)

	// Second call must be a no-op.
	require.NoError(t, store.DeleteByGame(ctx, "game-001"))
}

func TestGetRejectsEmptyArgs(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	_, err := store.Get(ctx, "", "user-1")
	require.Error(t, err)
	_, err = store.Get(ctx, "game-001", "")
	require.Error(t, err)
}

func TestGetByRaceRejectsEmptyArgs(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	_, err := store.GetByRace(ctx, "", "Aelinari")
	require.Error(t, err)
	_, err = store.GetByRace(ctx, "game-001", "")
	require.Error(t, err)
}

func TestListByGameRejectsEmpty(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)
	_, err := store.ListByGame(ctx, "")
	require.Error(t, err)
}

func TestDeleteByGameRejectsEmpty(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)
	err := store.DeleteByGame(ctx, "")
	require.Error(t, err)
}
@@ -0,0 +1,636 @@
|
||||
// Package runtimerecordstore implements the PostgreSQL-backed adapter
|
||||
// for `ports.RuntimeRecordStore`.
|
||||
//
|
||||
// The package owns the on-disk shape of the `runtime_records` table
|
||||
// defined in
|
||||
// `galaxy/gamemaster/internal/adapters/postgres/migrations/00001_init.sql`
|
||||
// and translates the schema-agnostic `ports.RuntimeRecordStore`
|
||||
// interface declared in `internal/ports/runtimerecordstore.go` into
|
||||
// concrete go-jet/v2 statements driven by the pgx driver.
|
||||
//
|
||||
// Lifecycle transitions (UpdateStatus) use compare-and-swap on
|
||||
// `(game_id, status)` rather than holding a SELECT ... FOR UPDATE lock
|
||||
// across the caller's logic, mirroring the pattern used by
|
||||
// `rtmanager/internal/adapters/postgres/runtimerecordstore`.
|
||||
package runtimerecordstore
|
||||
|
||||
import (
|
||||
"context"
|
||||
"database/sql"
|
||||
"errors"
|
||||
"fmt"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"galaxy/gamemaster/internal/adapters/postgres/internal/sqlx"
|
||||
pgtable "galaxy/gamemaster/internal/adapters/postgres/jet/gamemaster/table"
|
||||
"galaxy/gamemaster/internal/domain/runtime"
|
||||
"galaxy/gamemaster/internal/ports"
|
||||
|
||||
pg "github.com/go-jet/jet/v2/postgres"
|
||||
)
|
||||
|
||||
// Config configures one PostgreSQL-backed runtime-record store. The
|
||||
// store does not own the underlying *sql.DB lifecycle; the caller
|
||||
// (typically the service runtime) opens, instruments, migrates, and
|
||||
// closes the pool.
|
||||
type Config struct {
|
||||
// DB stores the connection pool the store uses for every query.
|
||||
DB *sql.DB
|
||||
|
||||
// OperationTimeout bounds one round trip. The store creates a
|
||||
// derived context for each operation so callers cannot starve the
|
||||
// pool with an unbounded ctx.
|
||||
OperationTimeout time.Duration
|
||||
}
|
||||
|
||||
// Store persists Game Master runtime records in PostgreSQL.
|
||||
type Store struct {
|
||||
db *sql.DB
|
||||
operationTimeout time.Duration
|
||||
}
|
||||
|
||||
// New constructs one PostgreSQL-backed runtime-record store from cfg.
|
||||
func New(cfg Config) (*Store, error) {
|
||||
if cfg.DB == nil {
|
||||
return nil, errors.New("new postgres runtime record store: db must not be nil")
|
||||
}
|
||||
if cfg.OperationTimeout <= 0 {
|
||||
return nil, errors.New("new postgres runtime record store: operation timeout must be positive")
|
||||
}
|
||||
return &Store{
|
||||
db: cfg.DB,
|
||||
operationTimeout: cfg.OperationTimeout,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// runtimeSelectColumns is the canonical SELECT list for the
|
||||
// runtime_records table, matching scanRecord's column order.
|
||||
var runtimeSelectColumns = pg.ColumnList{
|
||||
pgtable.RuntimeRecords.GameID,
|
||||
pgtable.RuntimeRecords.Status,
|
||||
pgtable.RuntimeRecords.EngineEndpoint,
|
||||
pgtable.RuntimeRecords.CurrentImageRef,
|
||||
pgtable.RuntimeRecords.CurrentEngineVersion,
|
||||
pgtable.RuntimeRecords.TurnSchedule,
|
||||
pgtable.RuntimeRecords.CurrentTurn,
|
||||
pgtable.RuntimeRecords.NextGenerationAt,
|
||||
pgtable.RuntimeRecords.SkipNextTick,
|
||||
pgtable.RuntimeRecords.EngineHealth,
|
||||
pgtable.RuntimeRecords.CreatedAt,
|
||||
pgtable.RuntimeRecords.UpdatedAt,
|
||||
pgtable.RuntimeRecords.StartedAt,
|
||||
pgtable.RuntimeRecords.StoppedAt,
|
||||
pgtable.RuntimeRecords.FinishedAt,
|
||||
}
|
||||
|
||||
// Get returns the record identified by gameID. It returns
|
||||
// runtime.ErrNotFound when no record exists.
|
||||
func (store *Store) Get(ctx context.Context, gameID string) (runtime.RuntimeRecord, error) {
|
||||
if store == nil || store.db == nil {
|
||||
return runtime.RuntimeRecord{}, errors.New("get runtime record: nil store")
|
||||
}
|
||||
if strings.TrimSpace(gameID) == "" {
|
||||
return runtime.RuntimeRecord{}, fmt.Errorf("get runtime record: game id must not be empty")
|
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get runtime record", store.operationTimeout)
	if err != nil {
		return runtime.RuntimeRecord{}, err
	}
	defer cancel()

	stmt := pg.SELECT(runtimeSelectColumns).
		FROM(pgtable.RuntimeRecords).
		WHERE(pgtable.RuntimeRecords.GameID.EQ(pg.String(gameID)))

	query, args := stmt.Sql()
	row := store.db.QueryRowContext(operationCtx, query, args...)
	record, err := scanRecord(row)
	if sqlx.IsNoRows(err) {
		return runtime.RuntimeRecord{}, runtime.ErrNotFound
	}
	if err != nil {
		return runtime.RuntimeRecord{}, fmt.Errorf("get runtime record: %w", err)
	}
	return record, nil
}

// Insert installs record into the store. Returns runtime.ErrConflict
// when a row already exists for record.GameID.
func (store *Store) Insert(ctx context.Context, record runtime.RuntimeRecord) error {
	if store == nil || store.db == nil {
		return errors.New("insert runtime record: nil store")
	}
	if err := record.Validate(); err != nil {
		return fmt.Errorf("insert runtime record: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "insert runtime record", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	stmt := pgtable.RuntimeRecords.INSERT(
		pgtable.RuntimeRecords.GameID,
		pgtable.RuntimeRecords.Status,
		pgtable.RuntimeRecords.EngineEndpoint,
		pgtable.RuntimeRecords.CurrentImageRef,
		pgtable.RuntimeRecords.CurrentEngineVersion,
		pgtable.RuntimeRecords.TurnSchedule,
		pgtable.RuntimeRecords.CurrentTurn,
		pgtable.RuntimeRecords.NextGenerationAt,
		pgtable.RuntimeRecords.SkipNextTick,
		pgtable.RuntimeRecords.EngineHealth,
		pgtable.RuntimeRecords.CreatedAt,
		pgtable.RuntimeRecords.UpdatedAt,
		pgtable.RuntimeRecords.StartedAt,
		pgtable.RuntimeRecords.StoppedAt,
		pgtable.RuntimeRecords.FinishedAt,
	).VALUES(
		record.GameID,
		string(record.Status),
		record.EngineEndpoint,
		record.CurrentImageRef,
		record.CurrentEngineVersion,
		record.TurnSchedule,
		int32(record.CurrentTurn),
		sqlx.NullableTimePtr(record.NextGenerationAt),
		record.SkipNextTick,
		record.EngineHealth,
		record.CreatedAt.UTC(),
		record.UpdatedAt.UTC(),
		sqlx.NullableTimePtr(record.StartedAt),
		sqlx.NullableTimePtr(record.StoppedAt),
		sqlx.NullableTimePtr(record.FinishedAt),
	)

	query, args := stmt.Sql()
	if _, err := store.db.ExecContext(operationCtx, query, args...); err != nil {
		if sqlx.IsUniqueViolation(err) {
			return fmt.Errorf("insert runtime record: %w", runtime.ErrConflict)
		}
		return fmt.Errorf("insert runtime record: %w", err)
	}
	return nil
}

// UpdateStatus applies one status transition with a compare-and-swap
// guard on (game_id, status). The destination's lifecycle timestamps
// (started_at, stopped_at, finished_at) and the optional fields
// (engine_health, current_image_ref, current_engine_version) are
// written only when applicable.
func (store *Store) UpdateStatus(ctx context.Context, input ports.UpdateStatusInput) error {
	if store == nil || store.db == nil {
		return errors.New("update runtime status: nil store")
	}
	if err := input.Validate(); err != nil {
		return err
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update runtime status", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	assignments := buildUpdateStatusAssignments(input, input.Now.UTC())

	// The first positional argument to UPDATE is required by jet's
	// API but ignored when SET receives ColumnAssignment values
	// (jet then serialises SetClauseNew instead of clauseSet).
	stmt := pgtable.RuntimeRecords.UPDATE(pgtable.RuntimeRecords.Status).
		SET(assignments[0], assignments[1:]...).
		WHERE(pg.AND(
			pgtable.RuntimeRecords.GameID.EQ(pg.String(input.GameID)),
			pgtable.RuntimeRecords.Status.EQ(pg.String(string(input.ExpectedFrom))),
		))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update runtime status: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update runtime status: rows affected: %w", err)
	}
	if affected == 0 {
		return store.classifyMissingUpdate(operationCtx, input.GameID)
	}
	return nil
}

// buildUpdateStatusAssignments returns the slice of column assignments
// produced by one UpdateStatus call. Mandatory assignments (status,
// updated_at) are always present; lifecycle timestamps and optional
// fields appear only when relevant to the destination status or when
// the corresponding pointer is non-nil.
//
// The slice element type is `any` so the result can be spread into
// `UpdateStatement.SET(value any, values ...any)` without manual
// boxing at the call site.
func buildUpdateStatusAssignments(input ports.UpdateStatusInput, now time.Time) []any {
	nowExpr := pg.TimestampzT(now)
	assignments := []any{
		pgtable.RuntimeRecords.Status.SET(pg.String(string(input.To))),
		pgtable.RuntimeRecords.UpdatedAt.SET(nowExpr),
	}

	if input.To == runtime.StatusRunning && input.ExpectedFrom == runtime.StatusStarting {
		assignments = append(assignments, pgtable.RuntimeRecords.StartedAt.SET(nowExpr))
	}
	if input.To == runtime.StatusStopped {
		assignments = append(assignments, pgtable.RuntimeRecords.StoppedAt.SET(nowExpr))
	}
	if input.To == runtime.StatusFinished {
		assignments = append(assignments, pgtable.RuntimeRecords.FinishedAt.SET(nowExpr))
	}
	if input.EngineHealthSummary != nil {
		assignments = append(assignments, pgtable.RuntimeRecords.EngineHealth.SET(pg.String(*input.EngineHealthSummary)))
	}
	if input.CurrentImageRef != nil {
		assignments = append(assignments, pgtable.RuntimeRecords.CurrentImageRef.SET(pg.String(*input.CurrentImageRef)))
	}
	if input.CurrentEngineVersion != nil {
		assignments = append(assignments, pgtable.RuntimeRecords.CurrentEngineVersion.SET(pg.String(*input.CurrentEngineVersion)))
	}

	return assignments
}

// classifyMissingUpdate distinguishes ErrNotFound from ErrConflict
// after an UPDATE that affected zero rows. A row that is absent yields
// ErrNotFound; a row whose status does not match the CAS predicate
// yields ErrConflict.
func (store *Store) classifyMissingUpdate(ctx context.Context, gameID string) error {
	probe := pg.SELECT(pgtable.RuntimeRecords.Status).
		FROM(pgtable.RuntimeRecords).
		WHERE(pgtable.RuntimeRecords.GameID.EQ(pg.String(gameID)))
	probeQuery, probeArgs := probe.Sql()

	var current string
	row := store.db.QueryRowContext(ctx, probeQuery, probeArgs...)
	if err := row.Scan(&current); err != nil {
		if sqlx.IsNoRows(err) {
			return runtime.ErrNotFound
		}
		return fmt.Errorf("update runtime status: probe: %w", err)
	}
	return runtime.ErrConflict
}

// UpdateImage rotates the `current_image_ref` and
// `current_engine_version` columns of one runtime row under a
// compare-and-swap guard on `(game_id, status)`. The destination
// status is preserved; only `updated_at` and the two image columns
// change. Returns runtime.ErrNotFound when no row matches and
// runtime.ErrConflict when the stored status differs from
// input.ExpectedStatus. Used by the admin patch flow (Stage 17) where
// Runtime Manager recreates the engine container with a new image
// while the runtime stays `running`.
func (store *Store) UpdateImage(ctx context.Context, input ports.UpdateImageInput) error {
	if store == nil || store.db == nil {
		return errors.New("update runtime image: nil store")
	}
	if err := input.Validate(); err != nil {
		return err
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update runtime image", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	now := input.Now.UTC()
	stmt := pgtable.RuntimeRecords.UPDATE(
		pgtable.RuntimeRecords.CurrentImageRef,
		pgtable.RuntimeRecords.CurrentEngineVersion,
		pgtable.RuntimeRecords.UpdatedAt,
	).SET(
		pg.String(input.CurrentImageRef),
		pg.String(input.CurrentEngineVersion),
		pg.TimestampzT(now),
	).WHERE(pg.AND(
		pgtable.RuntimeRecords.GameID.EQ(pg.String(input.GameID)),
		pgtable.RuntimeRecords.Status.EQ(pg.String(string(input.ExpectedStatus))),
	))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update runtime image: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update runtime image: rows affected: %w", err)
	}
	if affected == 0 {
		return store.classifyMissingUpdate(operationCtx, input.GameID)
	}
	return nil
}

// UpdateEngineHealth rotates the `engine_health` column of one runtime
// row plus `updated_at`. The destination status is preserved and no
// CAS guard is applied so late-arriving runtime:health_events still
// refresh the summary regardless of the current runtime status. Used
// by the Stage 18 health-events consumer. Returns runtime.ErrNotFound
// when no row exists for input.GameID.
func (store *Store) UpdateEngineHealth(ctx context.Context, input ports.UpdateEngineHealthInput) error {
	if store == nil || store.db == nil {
		return errors.New("update runtime engine health: nil store")
	}
	if err := input.Validate(); err != nil {
		return err
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update runtime engine health", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	stmt := pgtable.RuntimeRecords.UPDATE(
		pgtable.RuntimeRecords.EngineHealth,
		pgtable.RuntimeRecords.UpdatedAt,
	).SET(
		pg.String(input.EngineHealthSummary),
		pg.TimestampzT(input.Now.UTC()),
	).WHERE(pgtable.RuntimeRecords.GameID.EQ(pg.String(input.GameID)))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update runtime engine health: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update runtime engine health: rows affected: %w", err)
	}
	if affected == 0 {
		return runtime.ErrNotFound
	}
	return nil
}

// UpdateScheduling mutates the scheduling columns of one runtime row
// (`next_generation_at`, `skip_next_tick`, `current_turn`) plus
// `updated_at`. Returns runtime.ErrNotFound when no row exists.
func (store *Store) UpdateScheduling(ctx context.Context, input ports.UpdateSchedulingInput) error {
	if store == nil || store.db == nil {
		return errors.New("update runtime scheduling: nil store")
	}
	if err := input.Validate(); err != nil {
		return err
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update runtime scheduling", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	var nextGenExpr pg.Expression
	if input.NextGenerationAt != nil {
		nextGenExpr = pg.TimestampzT(input.NextGenerationAt.UTC())
	} else {
		nextGenExpr = pg.NULL
	}

	stmt := pgtable.RuntimeRecords.UPDATE(
		pgtable.RuntimeRecords.NextGenerationAt,
		pgtable.RuntimeRecords.SkipNextTick,
		pgtable.RuntimeRecords.CurrentTurn,
		pgtable.RuntimeRecords.UpdatedAt,
	).SET(
		nextGenExpr,
		pg.Bool(input.SkipNextTick),
		pg.Int32(int32(input.CurrentTurn)),
		pg.TimestampzT(input.Now.UTC()),
	).WHERE(pgtable.RuntimeRecords.GameID.EQ(pg.String(input.GameID)))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update runtime scheduling: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update runtime scheduling: rows affected: %w", err)
	}
	if affected == 0 {
		return runtime.ErrNotFound
	}
	return nil
}

// Delete removes the record identified by gameID. The call is
// idempotent: it returns nil even when no row matches (mirrors
// PlayerMappingStore.DeleteByGame). Used by the register-runtime
// rollback path (Stage 13) when engine /admin/init or any later setup
// step fails after the row has been installed with status=starting.
func (store *Store) Delete(ctx context.Context, gameID string) error {
	if store == nil || store.db == nil {
		return errors.New("delete runtime record: nil store")
	}
	if strings.TrimSpace(gameID) == "" {
		return fmt.Errorf("delete runtime record: game id must not be empty")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "delete runtime record", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	stmt := pgtable.RuntimeRecords.DELETE().
		WHERE(pgtable.RuntimeRecords.GameID.EQ(pg.String(gameID)))

	query, args := stmt.Sql()
	if _, err := store.db.ExecContext(operationCtx, query, args...); err != nil {
		return fmt.Errorf("delete runtime record: %w", err)
	}
	return nil
}

// ListDueRunning returns every record whose status is `running` and
// whose `next_generation_at <= now`. The order is
// (next_generation_at ASC, game_id ASC), matching the
// `runtime_records_status_next_gen_idx` direction.
func (store *Store) ListDueRunning(ctx context.Context, now time.Time) ([]runtime.RuntimeRecord, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("list due runtime records: nil store")
	}
	if now.IsZero() {
		return nil, fmt.Errorf("list due runtime records: now must not be zero")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "list due runtime records", store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	cutoff := pg.TimestampzT(now.UTC())
	stmt := pg.SELECT(runtimeSelectColumns).
		FROM(pgtable.RuntimeRecords).
		WHERE(pg.AND(
			pgtable.RuntimeRecords.Status.EQ(pg.String(string(runtime.StatusRunning))),
			pgtable.RuntimeRecords.NextGenerationAt.LT_EQ(cutoff),
		)).
		ORDER_BY(
			pgtable.RuntimeRecords.NextGenerationAt.ASC(),
			pgtable.RuntimeRecords.GameID.ASC(),
		)

	return store.queryRecords(operationCtx, stmt, "list due runtime records")
}

// List returns every record in the store, ordered by `created_at`
// descending and by `game_id` ascending as a tie-breaker. Used by the
// `internalListRuntimes` REST handler when no status filter is
// supplied.
func (store *Store) List(ctx context.Context) ([]runtime.RuntimeRecord, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("list runtime records: nil store")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "list runtime records", store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	stmt := pg.SELECT(runtimeSelectColumns).
		FROM(pgtable.RuntimeRecords).
		ORDER_BY(
			pgtable.RuntimeRecords.CreatedAt.DESC(),
			pgtable.RuntimeRecords.GameID.ASC(),
		)

	return store.queryRecords(operationCtx, stmt, "list runtime records")
}

// ListByStatus returns every record currently indexed under status,
// ordered by game_id ASC.
func (store *Store) ListByStatus(ctx context.Context, status runtime.Status) ([]runtime.RuntimeRecord, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("list runtime records by status: nil store")
	}
	if !status.IsKnown() {
		return nil, fmt.Errorf("list runtime records by status: status %q is unsupported", status)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "list runtime records by status", store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	stmt := pg.SELECT(runtimeSelectColumns).
		FROM(pgtable.RuntimeRecords).
		WHERE(pgtable.RuntimeRecords.Status.EQ(pg.String(string(status)))).
		ORDER_BY(pgtable.RuntimeRecords.GameID.ASC())

	return store.queryRecords(operationCtx, stmt, "list runtime records by status")
}

// queryRecords runs a SELECT statement and scans every returned row
// into a runtime.RuntimeRecord slice. opName is used only to prefix
// error messages.
func (store *Store) queryRecords(ctx context.Context, stmt pg.SelectStatement, opName string) ([]runtime.RuntimeRecord, error) {
	query, args := stmt.Sql()
	rows, err := store.db.QueryContext(ctx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("%s: %w", opName, err)
	}
	defer rows.Close()

	records := make([]runtime.RuntimeRecord, 0)
	for rows.Next() {
		record, err := scanRecord(rows)
		if err != nil {
			return nil, fmt.Errorf("%s: scan: %w", opName, err)
		}
		records = append(records, record)
	}
	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("%s: %w", opName, err)
	}
	if len(records) == 0 {
		return nil, nil
	}
	return records, nil
}

// rowScanner abstracts *sql.Row and *sql.Rows so scanRecord can be
// shared across both single-row and iterated reads.
type rowScanner interface {
	Scan(dest ...any) error
}

// scanRecord scans one runtime_records row from rs. Returns
// sql.ErrNoRows verbatim so callers can distinguish "no row" from a
// hard error.
func scanRecord(rs rowScanner) (runtime.RuntimeRecord, error) {
	var (
		gameID               string
		status               string
		engineEndpoint       string
		currentImageRef      string
		currentEngineVersion string
		turnSchedule         string
		currentTurn          int32
		nextGenerationAt     sql.NullTime
		skipNextTick         bool
		engineHealth         string
		createdAt            time.Time
		updatedAt            time.Time
		startedAt            sql.NullTime
		stoppedAt            sql.NullTime
		finishedAt           sql.NullTime
	)
	if err := rs.Scan(
		&gameID,
		&status,
		&engineEndpoint,
		&currentImageRef,
		&currentEngineVersion,
		&turnSchedule,
		&currentTurn,
		&nextGenerationAt,
		&skipNextTick,
		&engineHealth,
		&createdAt,
		&updatedAt,
		&startedAt,
		&stoppedAt,
		&finishedAt,
	); err != nil {
		return runtime.RuntimeRecord{}, err
	}
	return runtime.RuntimeRecord{
		GameID:               gameID,
		Status:               runtime.Status(status),
		EngineEndpoint:       engineEndpoint,
		CurrentImageRef:      currentImageRef,
		CurrentEngineVersion: currentEngineVersion,
		TurnSchedule:         turnSchedule,
		CurrentTurn:          int(currentTurn),
		NextGenerationAt:     sqlx.TimePtrFromNullable(nextGenerationAt),
		SkipNextTick:         skipNextTick,
		EngineHealth:         engineHealth,
		CreatedAt:            createdAt.UTC(),
		UpdatedAt:            updatedAt.UTC(),
		StartedAt:            sqlx.TimePtrFromNullable(startedAt),
		StoppedAt:            sqlx.TimePtrFromNullable(stoppedAt),
		FinishedAt:           sqlx.TimePtrFromNullable(finishedAt),
	}, nil
}

// Ensure Store satisfies the ports.RuntimeRecordStore interface at
// compile time.
var _ ports.RuntimeRecordStore = (*Store)(nil)
@@ -0,0 +1,718 @@
package runtimerecordstore_test

import (
	"context"
	"errors"
	"sync"
	"testing"
	"time"

	"galaxy/gamemaster/internal/adapters/postgres/internal/pgtest"
	"galaxy/gamemaster/internal/adapters/postgres/runtimerecordstore"
	"galaxy/gamemaster/internal/domain/runtime"
	"galaxy/gamemaster/internal/ports"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestMain(m *testing.M) { pgtest.RunMain(m) }

func newStore(t *testing.T) *runtimerecordstore.Store {
	t.Helper()
	pgtest.TruncateAll(t)
	store, err := runtimerecordstore.New(runtimerecordstore.Config{
		DB:               pgtest.Ensure(t).Pool(),
		OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	return store
}

func startingRecord(gameID string, createdAt time.Time) runtime.RuntimeRecord {
	return runtime.RuntimeRecord{
		GameID:               gameID,
		Status:               runtime.StatusStarting,
		EngineEndpoint:       "http://galaxy-game-" + gameID + ":8080",
		CurrentImageRef:      "ghcr.io/galaxy/game:v1.2.3",
		CurrentEngineVersion: "v1.2.3",
		TurnSchedule:         "0 18 * * *",
		CurrentTurn:          0,
		EngineHealth:         "",
		CreatedAt:            createdAt,
		UpdatedAt:            createdAt,
	}
}

func runningRecord(gameID string, createdAt time.Time, nextGen time.Time) runtime.RuntimeRecord {
	startedAt := createdAt.Add(time.Second)
	return runtime.RuntimeRecord{
		GameID:               gameID,
		Status:               runtime.StatusRunning,
		EngineEndpoint:       "http://galaxy-game-" + gameID + ":8080",
		CurrentImageRef:      "ghcr.io/galaxy/game:v1.2.3",
		CurrentEngineVersion: "v1.2.3",
		TurnSchedule:         "0 18 * * *",
		CurrentTurn:          1,
		NextGenerationAt:     &nextGen,
		EngineHealth:         "healthy",
		CreatedAt:            createdAt,
		UpdatedAt:            startedAt,
		StartedAt:            &startedAt,
	}
}

func TestNewRejectsInvalidConfig(t *testing.T) {
	_, err := runtimerecordstore.New(runtimerecordstore.Config{})
	require.Error(t, err)

	store, err := runtimerecordstore.New(runtimerecordstore.Config{
		DB:               pgtest.Ensure(t).Pool(),
		OperationTimeout: 0,
	})
	require.Error(t, err)
	require.Nil(t, store)
}

func TestInsertGetRoundTrip(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	record := startingRecord("game-001", now)

	require.NoError(t, store.Insert(ctx, record))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, record.GameID, got.GameID)
	assert.Equal(t, runtime.StatusStarting, got.Status)
	assert.Equal(t, record.EngineEndpoint, got.EngineEndpoint)
	assert.Equal(t, record.CurrentImageRef, got.CurrentImageRef)
	assert.Equal(t, record.CurrentEngineVersion, got.CurrentEngineVersion)
	assert.Equal(t, record.TurnSchedule, got.TurnSchedule)
	assert.Equal(t, 0, got.CurrentTurn)
	assert.Nil(t, got.NextGenerationAt)
	assert.False(t, got.SkipNextTick)
	assert.Equal(t, "", got.EngineHealth)
	assert.True(t, got.CreatedAt.Equal(now), "created_at: want %v, got %v", now, got.CreatedAt)
	assert.Equal(t, time.UTC, got.CreatedAt.Location())
	assert.True(t, got.UpdatedAt.Equal(now))
	assert.Equal(t, time.UTC, got.UpdatedAt.Location())
	assert.Nil(t, got.StartedAt)
	assert.Nil(t, got.StoppedAt)
	assert.Nil(t, got.FinishedAt)
}

func TestInsertRejectsDuplicate(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	record := startingRecord("game-001", now)
	require.NoError(t, store.Insert(ctx, record))

	err := store.Insert(ctx, record)
	require.Error(t, err)
	require.True(t, errors.Is(err, runtime.ErrConflict), "want ErrConflict, got %v", err)
}

func TestInsertRejectsInvalidRecord(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	bad := runtime.RuntimeRecord{} // empty
	err := store.Insert(ctx, bad)
	require.Error(t, err)
	require.False(t, errors.Is(err, runtime.ErrConflict))
}

func TestGetReturnsErrNotFound(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	_, err := store.Get(ctx, "missing")
	require.Error(t, err)
	require.True(t, errors.Is(err, runtime.ErrNotFound))
}

func TestUpdateStatusStartingToRunningSetsStartedAt(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	require.NoError(t, store.Insert(ctx, startingRecord("game-001", created)))

	now := created.Add(2 * time.Second)
	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       "game-001",
		ExpectedFrom: runtime.StatusStarting,
		To:           runtime.StatusRunning,
		Now:          now,
	}))

	got, err := store.Get(ctx, "game-001")
	require.NoError(t, err)
	assert.Equal(t, runtime.StatusRunning, got.Status)
	require.NotNil(t, got.StartedAt)
	assert.True(t, got.StartedAt.Equal(now))
	assert.True(t, got.UpdatedAt.Equal(now))
	assert.Nil(t, got.StoppedAt)
	assert.Nil(t, got.FinishedAt)
}

func TestUpdateStatusToFinishedSetsFinishedAt(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	nextGen := created.Add(time.Hour)
	require.NoError(t, store.Insert(ctx, runningRecord("game-001", created, nextGen)))

	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       "game-001",
		ExpectedFrom: runtime.StatusRunning,
		To:           runtime.StatusGenerationInProgress,
		Now:          created.Add(2 * time.Second),
	}))

	finishAt := created.Add(time.Hour)
	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       "game-001",
		ExpectedFrom: runtime.StatusGenerationInProgress,
		To:           runtime.StatusFinished,
		Now:          finishAt,
	}))

	got, err := store.Get(ctx, "game-001")
	require.NoError(t, err)
	assert.Equal(t, runtime.StatusFinished, got.Status)
	require.NotNil(t, got.FinishedAt)
	assert.True(t, got.FinishedAt.Equal(finishAt))
	assert.True(t, got.UpdatedAt.Equal(finishAt))
}

func TestUpdateStatusToStoppedSetsStoppedAt(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	nextGen := created.Add(time.Hour)
	require.NoError(t, store.Insert(ctx, runningRecord("game-001", created, nextGen)))

	stopAt := created.Add(2 * time.Hour)
	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       "game-001",
		ExpectedFrom: runtime.StatusRunning,
		To:           runtime.StatusStopped,
		Now:          stopAt,
	}))

	got, err := store.Get(ctx, "game-001")
	require.NoError(t, err)
	assert.Equal(t, runtime.StatusStopped, got.Status)
	require.NotNil(t, got.StoppedAt)
	assert.True(t, got.StoppedAt.Equal(stopAt))
	require.NotNil(t, got.StartedAt, "started_at must remain set after stop")
	assert.Nil(t, got.FinishedAt)
}

func TestUpdateStatusEngineUnreachableRecoveryKeepsStartedAt(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	nextGen := created.Add(time.Hour)
	original := runningRecord("game-001", created, nextGen)
	require.NoError(t, store.Insert(ctx, original))

	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       "game-001",
		ExpectedFrom: runtime.StatusRunning,
		To:           runtime.StatusEngineUnreachable,
		Now:          created.Add(time.Minute),
	}))

	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       "game-001",
		ExpectedFrom: runtime.StatusEngineUnreachable,
		To:           runtime.StatusRunning,
		Now:          created.Add(2 * time.Minute),
	}))

	got, err := store.Get(ctx, "game-001")
	require.NoError(t, err)
	assert.Equal(t, runtime.StatusRunning, got.Status)
	require.NotNil(t, got.StartedAt)
	assert.True(t, got.StartedAt.Equal(*original.StartedAt),
		"recovery transition must not overwrite started_at")
}

func TestUpdateStatusOptionalFields(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	nextGen := created.Add(time.Hour)
	require.NoError(t, store.Insert(ctx, runningRecord("game-001", created, nextGen)))

	healthy := "engine_unreachable_summary"
	imageRef := "ghcr.io/galaxy/game:v1.2.4"
	engineVersion := "v1.2.4"
	now := created.Add(time.Minute)

	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:               "game-001",
		ExpectedFrom:         runtime.StatusRunning,
		To:                   runtime.StatusGenerationInProgress,
		Now:                  now,
		EngineHealthSummary:  &healthy,
		CurrentImageRef:      &imageRef,
		CurrentEngineVersion: &engineVersion,
	}))

	got, err := store.Get(ctx, "game-001")
	require.NoError(t, err)
	assert.Equal(t, runtime.StatusGenerationInProgress, got.Status)
	assert.Equal(t, healthy, got.EngineHealth)
	assert.Equal(t, imageRef, got.CurrentImageRef)
	assert.Equal(t, engineVersion, got.CurrentEngineVersion)
}

func TestUpdateStatusOnMissingReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	err := store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       "ghost",
		ExpectedFrom: runtime.StatusRunning,
		To:           runtime.StatusStopped,
		Now:          time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC),
	})
	require.Error(t, err)
	require.True(t, errors.Is(err, runtime.ErrNotFound), "want ErrNotFound, got %v", err)
}

func TestUpdateStatusStaleCASReturnsConflict(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	require.NoError(t, store.Insert(ctx, startingRecord("game-001", created)))

	err := store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       "game-001",
		ExpectedFrom: runtime.StatusRunning,
		To:           runtime.StatusStopped,
		Now:          created.Add(time.Second),
	})
	require.Error(t, err)
	require.True(t, errors.Is(err, runtime.ErrConflict), "want ErrConflict, got %v", err)
}

func TestUpdateStatusConcurrentCAS(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	nextGen := created.Add(time.Hour)
	require.NoError(t, store.Insert(ctx, runningRecord("game-001", created, nextGen)))

	const concurrency = 8
	results := make([]error, concurrency)
	var wg sync.WaitGroup
	wg.Add(concurrency)
	for index := range concurrency {
		go func() {
			defer wg.Done()
			results[index] = store.UpdateStatus(ctx, ports.UpdateStatusInput{
				GameID:       "game-001",
				ExpectedFrom: runtime.StatusRunning,
				To:           runtime.StatusStopped,
				Now:          created.Add(time.Duration(index+1) * time.Second),
			})
		}()
	}
	wg.Wait()

	wins, conflicts := 0, 0
	for _, err := range results {
		switch {
		case err == nil:
			wins++
		case errors.Is(err, runtime.ErrConflict):
			conflicts++
		default:
			t.Errorf("unexpected error: %v", err)
		}
	}
	assert.Equal(t, 1, wins, "exactly one caller must win the CAS race")
	assert.Equal(t, concurrency-1, conflicts, "the rest must observe runtime.ErrConflict")
}

func TestUpdateImageHappy(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	nextGen := created.Add(time.Hour)
	require.NoError(t, store.Insert(ctx, runningRecord("game-001", created, nextGen)))

	now := nextGen.Add(time.Second)
	require.NoError(t, store.UpdateImage(ctx, ports.UpdateImageInput{
		GameID:               "game-001",
		ExpectedStatus:       runtime.StatusRunning,
		CurrentImageRef:      "ghcr.io/galaxy/game:v1.2.4",
		CurrentEngineVersion: "v1.2.4",
		Now:                  now,
	}))

	got, err := store.Get(ctx, "game-001")
	require.NoError(t, err)
	assert.Equal(t, runtime.StatusRunning, got.Status, "patch must not change status")
	assert.Equal(t, "ghcr.io/galaxy/game:v1.2.4", got.CurrentImageRef)
	assert.Equal(t, "v1.2.4", got.CurrentEngineVersion)
	assert.True(t, got.UpdatedAt.Equal(now))
	require.NotNil(t, got.NextGenerationAt, "next_generation_at must remain untouched")
	assert.True(t, got.NextGenerationAt.Equal(nextGen))
	assert.Equal(t, 1, got.CurrentTurn, "current_turn must remain untouched")
}

func TestUpdateImageStaleStatusReturnsConflict(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
	require.NoError(t, store.Insert(ctx, startingRecord("game-001", created)))

	err := store.UpdateImage(ctx, ports.UpdateImageInput{
		GameID:               "game-001",
		ExpectedStatus:       runtime.StatusRunning,
		CurrentImageRef:      "ghcr.io/galaxy/game:v1.2.4",
		CurrentEngineVersion: "v1.2.4",
		Now:                  created.Add(time.Second),
	})
	require.Error(t, err)
	require.True(t, errors.Is(err, runtime.ErrConflict), "want ErrConflict, got %v", err)
}

func TestUpdateImageOnMissingReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	err := store.UpdateImage(ctx, ports.UpdateImageInput{
|
||||
GameID: "ghost",
|
||||
ExpectedStatus: runtime.StatusRunning,
|
||||
CurrentImageRef: "ghcr.io/galaxy/game:v1.2.4",
|
||||
CurrentEngineVersion: "v1.2.4",
|
||||
Now: time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC),
|
||||
})
|
||||
require.Error(t, err)
|
||||
require.True(t, errors.Is(err, runtime.ErrNotFound), "want ErrNotFound, got %v", err)
|
||||
}
|
||||
|
||||
func TestUpdateImageRejectsInvalidInput(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
err := store.UpdateImage(ctx, ports.UpdateImageInput{
|
||||
GameID: "",
|
||||
ExpectedStatus: runtime.StatusRunning,
|
||||
CurrentImageRef: "ghcr.io/galaxy/game:v1.2.4",
|
||||
CurrentEngineVersion: "v1.2.4",
|
||||
Now: time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC),
|
||||
})
|
||||
require.Error(t, err)
|
||||
require.False(t, errors.Is(err, runtime.ErrConflict))
|
||||
require.False(t, errors.Is(err, runtime.ErrNotFound))
|
||||
}
|
||||
|
||||
func TestUpdateEngineHealthHappy(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
nextGen := created.Add(time.Hour)
|
||||
require.NoError(t, store.Insert(ctx, runningRecord("game-001", created, nextGen)))
|
||||
|
||||
now := nextGen.Add(2 * time.Second)
|
||||
require.NoError(t, store.UpdateEngineHealth(ctx, ports.UpdateEngineHealthInput{
|
||||
GameID: "game-001",
|
||||
EngineHealthSummary: "probe_failed",
|
||||
Now: now,
|
||||
}))
|
||||
|
||||
got, err := store.Get(ctx, "game-001")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, runtime.StatusRunning, got.Status, "engine health update must not change status")
|
||||
assert.Equal(t, "probe_failed", got.EngineHealth)
|
||||
assert.True(t, got.UpdatedAt.Equal(now))
|
||||
require.NotNil(t, got.NextGenerationAt, "next_generation_at must remain untouched")
|
||||
assert.True(t, got.NextGenerationAt.Equal(nextGen))
|
||||
assert.Equal(t, 1, got.CurrentTurn, "current_turn must remain untouched")
|
||||
}
|
||||
|
||||
func TestUpdateEngineHealthAcceptsEmptySummary(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
nextGen := created.Add(time.Hour)
|
||||
require.NoError(t, store.Insert(ctx, runningRecord("game-001", created, nextGen)))
|
||||
|
||||
now := nextGen.Add(time.Second)
|
||||
require.NoError(t, store.UpdateEngineHealth(ctx, ports.UpdateEngineHealthInput{
|
||||
GameID: "game-001",
|
||||
EngineHealthSummary: "",
|
||||
Now: now,
|
||||
}))
|
||||
|
||||
got, err := store.Get(ctx, "game-001")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "", got.EngineHealth)
|
||||
}
|
||||
|
||||
func TestUpdateEngineHealthOnMissingReturnsNotFound(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
err := store.UpdateEngineHealth(ctx, ports.UpdateEngineHealthInput{
|
||||
GameID: "ghost",
|
||||
EngineHealthSummary: "exited",
|
||||
Now: time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC),
|
||||
})
|
||||
require.Error(t, err)
|
||||
require.True(t, errors.Is(err, runtime.ErrNotFound), "want ErrNotFound, got %v", err)
|
||||
}
|
||||
|
||||
func TestUpdateEngineHealthRejectsInvalidInput(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
err := store.UpdateEngineHealth(ctx, ports.UpdateEngineHealthInput{
|
||||
GameID: "",
|
||||
EngineHealthSummary: "healthy",
|
||||
Now: time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC),
|
||||
})
|
||||
require.Error(t, err)
|
||||
require.False(t, errors.Is(err, runtime.ErrConflict))
|
||||
require.False(t, errors.Is(err, runtime.ErrNotFound))
|
||||
}
|
||||
|
||||
func TestUpdateEngineHealthAppliesFromAnyStatus(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
require.NoError(t, store.Insert(ctx, startingRecord("game-001", created)))
|
||||
|
||||
now := created.Add(time.Second)
|
||||
require.NoError(t, store.UpdateEngineHealth(ctx, ports.UpdateEngineHealthInput{
|
||||
GameID: "game-001",
|
||||
EngineHealthSummary: "exited",
|
||||
Now: now,
|
||||
}))
|
||||
|
||||
got, err := store.Get(ctx, "game-001")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, runtime.StatusStarting, got.Status, "no status mutation expected")
|
||||
assert.Equal(t, "exited", got.EngineHealth)
|
||||
}
|
||||
|
||||
func TestUpdateSchedulingHappy(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
nextGen := created.Add(time.Hour)
|
||||
require.NoError(t, store.Insert(ctx, runningRecord("game-001", created, nextGen)))
|
||||
|
||||
updated := nextGen.Add(time.Hour)
|
||||
now := nextGen.Add(time.Second)
|
||||
require.NoError(t, store.UpdateScheduling(ctx, ports.UpdateSchedulingInput{
|
||||
GameID: "game-001",
|
||||
NextGenerationAt: &updated,
|
||||
SkipNextTick: true,
|
||||
CurrentTurn: 5,
|
||||
Now: now,
|
||||
}))
|
||||
|
||||
got, err := store.Get(ctx, "game-001")
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, got.NextGenerationAt)
|
||||
assert.True(t, got.NextGenerationAt.Equal(updated))
|
||||
assert.True(t, got.SkipNextTick)
|
||||
assert.Equal(t, 5, got.CurrentTurn)
|
||||
assert.True(t, got.UpdatedAt.Equal(now))
|
||||
}
|
||||
|
||||
func TestUpdateSchedulingClearsNextGen(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
nextGen := created.Add(time.Hour)
|
||||
require.NoError(t, store.Insert(ctx, runningRecord("game-001", created, nextGen)))
|
||||
|
||||
now := nextGen.Add(time.Second)
|
||||
require.NoError(t, store.UpdateScheduling(ctx, ports.UpdateSchedulingInput{
|
||||
GameID: "game-001",
|
||||
NextGenerationAt: nil,
|
||||
SkipNextTick: false,
|
||||
CurrentTurn: 0,
|
||||
Now: now,
|
||||
}))
|
||||
|
||||
got, err := store.Get(ctx, "game-001")
|
||||
require.NoError(t, err)
|
||||
assert.Nil(t, got.NextGenerationAt)
|
||||
assert.False(t, got.SkipNextTick)
|
||||
}
|
||||
|
||||
func TestUpdateSchedulingOnMissingReturnsNotFound(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
err := store.UpdateScheduling(ctx, ports.UpdateSchedulingInput{
|
||||
GameID: "ghost",
|
||||
CurrentTurn: 0,
|
||||
Now: time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC),
|
||||
})
|
||||
require.Error(t, err)
|
||||
require.True(t, errors.Is(err, runtime.ErrNotFound))
|
||||
}
|
||||
|
||||
func TestListDueRunning(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
createdEarlier := time.Date(2026, time.April, 27, 10, 0, 0, 0, time.UTC)
|
||||
created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
due := created.Add(-time.Minute) // due before now
|
||||
future := created.Add(time.Hour) // not due yet
|
||||
|
||||
dueRecord := runningRecord("game-due", created, due)
|
||||
require.NoError(t, store.Insert(ctx, dueRecord))
|
||||
|
||||
futureRecord := runningRecord("game-future", created, future)
|
||||
require.NoError(t, store.Insert(ctx, futureRecord))
|
||||
|
||||
// A stopped record whose next_generation_at is in the past must
|
||||
// still be excluded by the running-status filter.
|
||||
stoppedRecord := startingRecord("game-stopped", createdEarlier)
|
||||
stoppedRecord.Status = runtime.StatusStopped
|
||||
startedAt := createdEarlier.Add(time.Second)
|
||||
stoppedAt := createdEarlier.Add(time.Minute)
|
||||
stoppedRecord.StartedAt = &startedAt
|
||||
stoppedRecord.StoppedAt = &stoppedAt
|
||||
stoppedRecord.UpdatedAt = stoppedAt
|
||||
stalePast := created.Add(-30 * time.Minute)
|
||||
stoppedRecord.NextGenerationAt = &stalePast
|
||||
require.NoError(t, store.Insert(ctx, stoppedRecord))
|
||||
|
||||
results, err := store.ListDueRunning(ctx, created)
|
||||
require.NoError(t, err)
|
||||
require.Len(t, results, 1)
|
||||
assert.Equal(t, "game-due", results[0].GameID)
|
||||
}
|
||||
|
||||
func TestListByStatus(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
created := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
require.NoError(t, store.Insert(ctx, runningRecord("game-r1", created, created.Add(time.Hour))))
|
||||
require.NoError(t, store.Insert(ctx, runningRecord("game-r2", created, created.Add(time.Hour))))
|
||||
require.NoError(t, store.Insert(ctx, startingRecord("game-s1", created)))
|
||||
|
||||
running, err := store.ListByStatus(ctx, runtime.StatusRunning)
|
||||
require.NoError(t, err)
|
||||
require.Len(t, running, 2)
|
||||
assert.Equal(t, "game-r1", running[0].GameID)
|
||||
assert.Equal(t, "game-r2", running[1].GameID)
|
||||
|
||||
starting, err := store.ListByStatus(ctx, runtime.StatusStarting)
|
||||
require.NoError(t, err)
|
||||
require.Len(t, starting, 1)
|
||||
assert.Equal(t, "game-s1", starting[0].GameID)
|
||||
|
||||
finished, err := store.ListByStatus(ctx, runtime.StatusFinished)
|
||||
require.NoError(t, err)
|
||||
assert.Empty(t, finished)
|
||||
}
|
||||
|
||||
func TestListReturnsEveryRecordOrderedByCreatedAtDesc(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
earliest := time.Date(2026, time.April, 27, 10, 0, 0, 0, time.UTC)
|
||||
middle := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
latest := time.Date(2026, time.April, 27, 14, 0, 0, 0, time.UTC)
|
||||
|
||||
require.NoError(t, store.Insert(ctx, startingRecord("game-earliest", earliest)))
|
||||
require.NoError(t, store.Insert(ctx, runningRecord("game-middle", middle, middle.Add(time.Hour))))
|
||||
require.NoError(t, store.Insert(ctx, runningRecord("game-latest", latest, latest.Add(time.Hour))))
|
||||
|
||||
records, err := store.List(ctx)
|
||||
require.NoError(t, err)
|
||||
require.Len(t, records, 3)
|
||||
assert.Equal(t, "game-latest", records[0].GameID)
|
||||
assert.Equal(t, "game-middle", records[1].GameID)
|
||||
assert.Equal(t, "game-earliest", records[2].GameID)
|
||||
}
|
||||
|
||||
func TestListReturnsEmptySliceWhenStoreIsEmpty(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
records, err := store.List(ctx)
|
||||
require.NoError(t, err)
|
||||
assert.Empty(t, records)
|
||||
}
|
||||
|
||||
func TestListByStatusUnknownRejected(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
_, err := store.ListByStatus(ctx, runtime.Status("exotic"))
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestListDueRunningRejectsZeroNow(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
_, err := store.ListDueRunning(ctx, time.Time{})
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestGetRejectsEmptyGameID(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
_, err := store.Get(ctx, "")
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestDeleteIdempotent(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
now := time.Date(2026, time.April, 27, 12, 0, 0, 0, time.UTC)
|
||||
require.NoError(t, store.Insert(ctx, startingRecord("game-001", now)))
|
||||
|
||||
require.NoError(t, store.Delete(ctx, "game-001"))
|
||||
|
||||
_, err := store.Get(ctx, "game-001")
|
||||
require.ErrorIs(t, err, runtime.ErrNotFound)
|
||||
|
||||
// Second call must be a no-op.
|
||||
require.NoError(t, store.Delete(ctx, "game-001"))
|
||||
}
|
||||
|
||||
func TestDeleteRejectsEmptyGameID(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store := newStore(t)
|
||||
|
||||
require.Error(t, store.Delete(ctx, ""))
|
||||
}
|
||||
@@ -0,0 +1,38 @@
// Package redisstate hosts the Game Master Redis adapters that share a
// single keyspace. The sole sibling subpackage in v1 is
// `streamoffsets` (the per-consumer offset for the
// runtime:health_events stream); membership cache lives in process and
// does not touch Redis.
//
// The package itself only declares the keyspace; concrete stores live
// in nested packages so dependencies (miniredis, testcontainers) stay
// out of consumer build graphs that do not need them.
package redisstate

import "encoding/base64"

// defaultPrefix is the mandatory `gamemaster:` namespace prefix shared
// by every Game Master Redis key.
const defaultPrefix = "gamemaster:"

// Keyspace builds the Game Master Redis keys. The namespace covers
// stream consumer offsets in v1.
//
// Dynamic key segments are encoded with base64url so raw key structure
// does not depend on caller-provided characters; this matches the
// encoding chosen by `lobby/internal/adapters/redisstate.Keyspace` and
// `rtmanager/internal/adapters/redisstate.Keyspace`.
type Keyspace struct{}

// StreamOffset returns the Redis key that stores the last successfully
// processed entry id for one Redis Stream consumer. The streamLabel is
// the short logical identifier of the consumer (e.g. `health_events`),
// not the full stream name; it stays stable when the underlying stream
// key is renamed.
func (Keyspace) StreamOffset(streamLabel string) string {
	return defaultPrefix + "stream_offsets:" + encodeKeyComponent(streamLabel)
}

func encodeKeyComponent(value string) string {
	return base64.RawURLEncoding.EncodeToString([]byte(value))
}
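For reference, the resulting key layout can be checked with a short standalone sketch. It re-implements the prefix and raw (unpadded) base64url encoding locally instead of importing the module, so the helper name `streamOffsetKey` is illustrative only:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// streamOffsetKey mirrors Keyspace.StreamOffset: fixed namespace
// prefix, section name, then the base64url-encoded (unpadded) label.
func streamOffsetKey(streamLabel string) string {
	return "gamemaster:stream_offsets:" +
		base64.RawURLEncoding.EncodeToString([]byte(streamLabel))
}

func main() {
	fmt.Println(streamOffsetKey("health_events"))
	// Output: gamemaster:stream_offsets:aGVhbHRoX2V2ZW50cw
}
```

Because the label is encoded, caller-provided characters such as `:` or `/` can never split the key into unexpected segments.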
@@ -0,0 +1,94 @@
// Package streamoffsets implements the Redis-backed adapter for
// `ports.StreamOffsetStore`.
//
// In v1 the only consumer that calls Load/Save is the
// runtime:health_events worker (PLAN stage 18). Keys are produced by
// `redisstate.Keyspace.StreamOffset`, mirroring the lobby and rtmanager
// patterns.
package streamoffsets

import (
	"context"
	"errors"
	"fmt"
	"strings"

	"galaxy/gamemaster/internal/adapters/redisstate"
	"galaxy/gamemaster/internal/ports"

	"github.com/redis/go-redis/v9"
)

// Config configures one Redis-backed stream-offset store. The store
// does not own the redis client lifecycle; the caller (typically the
// service runtime) opens and closes it.
type Config struct {
	Client *redis.Client
}

// Store persists Game Master stream consumer offsets in Redis.
type Store struct {
	client *redis.Client
	keys   redisstate.Keyspace
}

// New constructs one Redis-backed stream-offset store from cfg.
func New(cfg Config) (*Store, error) {
	if cfg.Client == nil {
		return nil, errors.New("new gamemaster stream offset store: nil redis client")
	}
	return &Store{
		client: cfg.Client,
		keys:   redisstate.Keyspace{},
	}, nil
}

// Load returns the last processed entry id for streamLabel when one
// is stored. A missing key returns ("", false, nil).
func (store *Store) Load(ctx context.Context, streamLabel string) (string, bool, error) {
	if store == nil || store.client == nil {
		return "", false, errors.New("load gamemaster stream offset: nil store")
	}
	if ctx == nil {
		return "", false, errors.New("load gamemaster stream offset: nil context")
	}
	if strings.TrimSpace(streamLabel) == "" {
		return "", false, errors.New("load gamemaster stream offset: stream label must not be empty")
	}

	value, err := store.client.Get(ctx, store.keys.StreamOffset(streamLabel)).Result()
	switch {
	case errors.Is(err, redis.Nil):
		return "", false, nil
	case err != nil:
		return "", false, fmt.Errorf("load gamemaster stream offset: %w", err)
	}
	return value, true, nil
}

// Save stores entryID as the new offset for streamLabel. The key has
// no TTL; offsets are durable and only overwritten by subsequent
// Saves.
func (store *Store) Save(ctx context.Context, streamLabel, entryID string) error {
	if store == nil || store.client == nil {
		return errors.New("save gamemaster stream offset: nil store")
	}
	if ctx == nil {
		return errors.New("save gamemaster stream offset: nil context")
	}
	if strings.TrimSpace(streamLabel) == "" {
		return errors.New("save gamemaster stream offset: stream label must not be empty")
	}
	if strings.TrimSpace(entryID) == "" {
		return errors.New("save gamemaster stream offset: entry id must not be empty")
	}

	if err := store.client.Set(ctx, store.keys.StreamOffset(streamLabel), entryID, 0).Err(); err != nil {
		return fmt.Errorf("save gamemaster stream offset: %w", err)
	}
	return nil
}

// Ensure Store satisfies the ports.StreamOffsetStore interface at
// compile time.
var _ ports.StreamOffsetStore = (*Store)(nil)
@@ -0,0 +1,93 @@
package streamoffsets_test

import (
	"context"
	"testing"

	"galaxy/gamemaster/internal/adapters/redisstate"
	"galaxy/gamemaster/internal/adapters/redisstate/streamoffsets"

	"github.com/alicebob/miniredis/v2"
	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func newOffsetStore(t *testing.T) (*streamoffsets.Store, *miniredis.Miniredis) {
	t.Helper()
	server := miniredis.RunT(t)
	client := redis.NewClient(&redis.Options{Addr: server.Addr()})
	t.Cleanup(func() { _ = client.Close() })

	store, err := streamoffsets.New(streamoffsets.Config{Client: client})
	require.NoError(t, err)
	return store, server
}

func TestNewRejectsNilClient(t *testing.T) {
	_, err := streamoffsets.New(streamoffsets.Config{})
	require.Error(t, err)
}

func TestLoadMissingReturnsNotFound(t *testing.T) {
	store, _ := newOffsetStore(t)
	id, found, err := store.Load(context.Background(), "health_events")
	require.NoError(t, err)
	assert.False(t, found)
	assert.Empty(t, id)
}

func TestSaveLoadRoundTrip(t *testing.T) {
	store, server := newOffsetStore(t)

	const entryID = "1700000000000-0"
	require.NoError(t, store.Save(context.Background(), "health_events", entryID))

	id, found, err := store.Load(context.Background(), "health_events")
	require.NoError(t, err)
	assert.True(t, found)
	assert.Equal(t, entryID, id)

	// Verify the namespace prefix lands as expected.
	expectedKey := redisstate.Keyspace{}.StreamOffset("health_events")
	assert.True(t, server.Exists(expectedKey),
		"key %q must exist after Save", expectedKey)
}

func TestSaveOverwritesPreviousValue(t *testing.T) {
	store, _ := newOffsetStore(t)

	require.NoError(t, store.Save(context.Background(), "health_events", "1-0"))
	require.NoError(t, store.Save(context.Background(), "health_events", "2-0"))

	id, found, err := store.Load(context.Background(), "health_events")
	require.NoError(t, err)
	assert.True(t, found)
	assert.Equal(t, "2-0", id)
}

func TestSaveRejectsBadInputs(t *testing.T) {
	store, _ := newOffsetStore(t)

	require.Error(t, store.Save(context.Background(), "", "1-0"))
	require.Error(t, store.Save(context.Background(), "health_events", ""))
	//nolint:staticcheck // intentional nil ctx test
	require.Error(t, store.Save(nil, "health_events", "1-0"))
}

func TestLoadRejectsBadInputs(t *testing.T) {
	store, _ := newOffsetStore(t)

	_, _, err := store.Load(context.Background(), "")
	require.Error(t, err)
	//nolint:staticcheck // intentional nil ctx test
	_, _, err = store.Load(nil, "health_events")
	require.Error(t, err)
}

func TestNilStoreOperationsRejected(t *testing.T) {
	var store *streamoffsets.Store
	_, _, err := store.Load(context.Background(), "health_events")
	require.Error(t, err)
	require.Error(t, store.Save(context.Background(), "health_events", "1-0"))
}
@@ -0,0 +1,225 @@
// Package rtmclient provides the trusted-internal Runtime Manager
// REST client Game Master uses for synchronous lifecycle operations
// against an already-running container. Two routes are mounted:
//
//   - POST /api/v1/internal/runtimes/{game_id}/stop
//   - POST /api/v1/internal/runtimes/{game_id}/patch
//
// `Restart` is reserved per `gamemaster/PLAN.md` Stage 10 and is not
// part of the v1 surface.
package rtmclient

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
	"time"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"

	"galaxy/gamemaster/internal/ports"
)

const (
	stopPathTemplate  = "/api/v1/internal/runtimes/%s/stop"
	patchPathTemplate = "/api/v1/internal/runtimes/%s/patch"
)

// Config configures one HTTP-backed Runtime Manager internal client.
type Config struct {
	// BaseURL stores the absolute base URL of the Runtime Manager
	// internal HTTP listener (e.g. `http://rtmanager:8096`).
	BaseURL string

	// RequestTimeout bounds one outbound stop/patch request.
	RequestTimeout time.Duration
}

// Client speaks REST/JSON to the Runtime Manager internal API.
type Client struct {
	baseURL              string
	requestTimeout       time.Duration
	httpClient           *http.Client
	closeIdleConnections func()
}

type stopRequestEnvelope struct {
	Reason string `json:"reason"`
}

type patchRequestEnvelope struct {
	ImageRef string `json:"image_ref"`
}

type errorEnvelope struct {
	Error *errorBody `json:"error"`
}

type errorBody struct {
	Code    string `json:"code"`
	Message string `json:"message"`
}

// NewClient constructs an RTM internal client with otelhttp-wrapped
// transport cloned from `http.DefaultTransport`. Call `Close` to
// release idle connections at shutdown.
func NewClient(cfg Config) (*Client, error) {
	transport, ok := http.DefaultTransport.(*http.Transport)
	if !ok {
		return nil, errors.New("new rtm client: default transport is not *http.Transport")
	}
	cloned := transport.Clone()
	return newClient(cfg, &http.Client{Transport: otelhttp.NewTransport(cloned)}, cloned.CloseIdleConnections)
}

func newClient(cfg Config, httpClient *http.Client, closeIdleConnections func()) (*Client, error) {
	switch {
	case strings.TrimSpace(cfg.BaseURL) == "":
		return nil, errors.New("new rtm client: base url must not be empty")
	case cfg.RequestTimeout <= 0:
		return nil, errors.New("new rtm client: request timeout must be positive")
	case httpClient == nil:
		return nil, errors.New("new rtm client: http client must not be nil")
	}
	parsed, err := url.Parse(strings.TrimRight(strings.TrimSpace(cfg.BaseURL), "/"))
	if err != nil {
		return nil, fmt.Errorf("new rtm client: parse base url: %w", err)
	}
	if parsed.Scheme == "" || parsed.Host == "" {
		return nil, errors.New("new rtm client: base url must be absolute")
	}
	return &Client{
		baseURL:              parsed.String(),
		requestTimeout:       cfg.RequestTimeout,
		httpClient:           httpClient,
		closeIdleConnections: closeIdleConnections,
	}, nil
}

// Close releases idle HTTP connections owned by the underlying
// transport. Safe to call multiple times.
func (client *Client) Close() error {
	if client == nil || client.closeIdleConnections == nil {
		return nil
	}
	client.closeIdleConnections()
	return nil
}

// Stop calls POST /api/v1/internal/runtimes/{game_id}/stop with body
// `{reason}`. Any non-success outcome is wrapped with
// `ports.ErrRTMUnavailable`.
func (client *Client) Stop(ctx context.Context, gameID, reason string) error {
	if err := client.validate(ctx, gameID); err != nil {
		return err
	}
	if strings.TrimSpace(reason) == "" {
		return errors.New("rtm stop: reason must not be empty")
	}
	body, err := json.Marshal(stopRequestEnvelope{Reason: reason})
	if err != nil {
		return fmt.Errorf("rtm stop: encode request: %w", err)
	}
	return client.callMutation(ctx, fmt.Sprintf(stopPathTemplate, url.PathEscape(gameID)), body, "rtm stop")
}

// Patch calls POST /api/v1/internal/runtimes/{game_id}/patch with body
// `{image_ref}`. A `409 conflict` from RTM (semver violation) is also
// wrapped with `ports.ErrRTMUnavailable`; the underlying `error_code`
// is preserved in the wrapped error message so callers can branch on
// the substring if needed.
func (client *Client) Patch(ctx context.Context, gameID, imageRef string) error {
	if err := client.validate(ctx, gameID); err != nil {
		return err
	}
	if strings.TrimSpace(imageRef) == "" {
		return errors.New("rtm patch: image ref must not be empty")
	}
	body, err := json.Marshal(patchRequestEnvelope{ImageRef: imageRef})
	if err != nil {
		return fmt.Errorf("rtm patch: encode request: %w", err)
	}
	return client.callMutation(ctx, fmt.Sprintf(patchPathTemplate, url.PathEscape(gameID)), body, "rtm patch")
}

func (client *Client) validate(ctx context.Context, gameID string) error {
	if client == nil || client.httpClient == nil {
		return errors.New("rtm client: nil client")
	}
	if ctx == nil {
		return errors.New("rtm client: nil context")
	}
	if err := ctx.Err(); err != nil {
		return err
	}
	if strings.TrimSpace(gameID) == "" {
		return errors.New("rtm client: game id must not be empty")
	}
	return nil
}

func (client *Client) callMutation(ctx context.Context, requestPath string, body []byte, opLabel string) error {
	payload, statusCode, err := client.doRequest(ctx, http.MethodPost, requestPath, body)
	if err != nil {
		return fmt.Errorf("%w: %s: %w", ports.ErrRTMUnavailable, opLabel, err)
	}
	if statusCode >= 200 && statusCode < 300 {
		return nil
	}
	errorCode := decodeErrorCode(payload)
	if errorCode != "" {
		return fmt.Errorf("%w: %s: unexpected status %d (error_code=%s)", ports.ErrRTMUnavailable, opLabel, statusCode, errorCode)
	}
	return fmt.Errorf("%w: %s: unexpected status %d", ports.ErrRTMUnavailable, opLabel, statusCode)
}

func (client *Client) doRequest(ctx context.Context, method, requestPath string, body []byte) ([]byte, int, error) {
	attemptCtx, cancel := context.WithTimeout(ctx, client.requestTimeout)
	defer cancel()

	var reader io.Reader
	if len(body) > 0 {
		reader = bytes.NewReader(body)
	}
	req, err := http.NewRequestWithContext(attemptCtx, method, client.baseURL+requestPath, reader)
	if err != nil {
		return nil, 0, fmt.Errorf("build request: %w", err)
	}
	req.Header.Set("Accept", "application/json")
	if len(body) > 0 {
		req.Header.Set("Content-Type", "application/json")
	}
	resp, err := client.httpClient.Do(req)
	if err != nil {
		return nil, 0, err
	}
	defer resp.Body.Close()
	respBody, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, resp.StatusCode, fmt.Errorf("read response body: %w", err)
	}
	return respBody, resp.StatusCode, nil
}

func decodeErrorCode(payload []byte) string {
	if len(payload) == 0 {
		return ""
	}
	var envelope errorEnvelope
	if err := json.Unmarshal(payload, &envelope); err != nil {
		return ""
	}
	if envelope.Error == nil {
		return ""
	}
	return envelope.Error.Code
}

// Compile-time assertion: Client implements ports.RTMClient.
var _ ports.RTMClient = (*Client)(nil)
@@ -0,0 +1,156 @@
package rtmclient

import (
	"context"
	"encoding/json"
	"errors"
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"galaxy/gamemaster/internal/ports"
)

func newTestClient(t *testing.T, baseURL string, timeout time.Duration) *Client {
	t.Helper()
	client, err := NewClient(Config{BaseURL: baseURL, RequestTimeout: timeout})
	require.NoError(t, err)
	t.Cleanup(func() { _ = client.Close() })
	return client
}

func TestNewClientValidatesConfig(t *testing.T) {
	cases := map[string]Config{
		"empty base url":   {BaseURL: "", RequestTimeout: time.Second},
		"non-absolute":     {BaseURL: "rtm:8096", RequestTimeout: time.Second},
		"zero timeout":     {BaseURL: "http://rtm:8096", RequestTimeout: 0},
		"negative timeout": {BaseURL: "http://rtm:8096", RequestTimeout: -time.Second},
	}
	for name, cfg := range cases {
		t.Run(name, func(t *testing.T) {
			_, err := NewClient(cfg)
			require.Error(t, err)
		})
	}
}

func TestStopHappyPath(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		require.Equal(t, http.MethodPost, r.Method)
		require.Equal(t, "/api/v1/internal/runtimes/game-1/stop", r.URL.Path)
		require.Equal(t, "application/json", r.Header.Get("Content-Type"))
		body, err := io.ReadAll(r.Body)
		require.NoError(t, err)
		var got stopRequestEnvelope
		require.NoError(t, json.Unmarshal(body, &got))
		assert.Equal(t, "admin_request", got.Reason)
		w.Header().Set("Content-Type", "application/json")
		_, _ = w.Write([]byte(`{"game_id":"game-1","status":"stopped"}`))
	}))
	defer server.Close()

	client := newTestClient(t, server.URL, time.Second)
	require.NoError(t, client.Stop(context.Background(), "game-1", "admin_request"))
}

func TestStopRejectsBadInput(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		t.Fatal("must not contact rtm on bad input")
	}))
	defer server.Close()
	client := newTestClient(t, server.URL, time.Second)

	require.Error(t, client.Stop(context.Background(), " ", "admin_request"))
	require.Error(t, client.Stop(context.Background(), "g", " "))

	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	err := client.Stop(ctx, "g", "admin_request")
	require.Error(t, err)
	assert.True(t, errors.Is(err, context.Canceled))
}

func TestStopInternalError(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusInternalServerError)
		_, _ = w.Write([]byte(`{"error":{"code":"internal_error","message":"boom"}}`))
	}))
	defer server.Close()

	client := newTestClient(t, server.URL, time.Second)
	err := client.Stop(context.Background(), "g", "admin_request")
	require.Error(t, err)
	assert.True(t, errors.Is(err, ports.ErrRTMUnavailable))
	assert.Contains(t, err.Error(), "internal_error")
}

func TestStopTimeoutMapsToUnavailable(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		time.Sleep(120 * time.Millisecond)
		_, _ = w.Write([]byte(`{}`))
	}))
	defer server.Close()

	client := newTestClient(t, server.URL, 30*time.Millisecond)
	err := client.Stop(context.Background(), "g", "admin_request")
	require.Error(t, err)
	assert.True(t, errors.Is(err, ports.ErrRTMUnavailable))
}

func TestPatchHappyPath(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		require.Equal(t, http.MethodPost, r.Method)
		require.Equal(t, "/api/v1/internal/runtimes/g/patch", r.URL.Path)
		body, err := io.ReadAll(r.Body)
		require.NoError(t, err)
		var got patchRequestEnvelope
		require.NoError(t, json.Unmarshal(body, &got))
		assert.Equal(t, "galaxy/game:1.2.4", got.ImageRef)
		_, _ = w.Write([]byte(`{"game_id":"g","status":"running"}`))
	}))
	defer server.Close()

	client := newTestClient(t, server.URL, time.Second)
	require.NoError(t, client.Patch(context.Background(), "g", "galaxy/game:1.2.4"))
}

func TestPatchSemverConflictMapsToUnavailable(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusConflict)
		_, _ = w.Write([]byte(`{"error":{"code":"semver_patch_only","message":"cross-major patch not allowed"}}`))
	}))
	defer server.Close()

	client := newTestClient(t, server.URL, time.Second)
	err := client.Patch(context.Background(), "g", "galaxy/game:2.0.0")
	require.Error(t, err)
	assert.True(t, errors.Is(err, ports.ErrRTMUnavailable))
	assert.Contains(t, err.Error(), "semver_patch_only")
}

func TestPatchRejectsBadInput(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		t.Fatal("must not contact rtm on bad input")
	}))
	defer server.Close()

	client := newTestClient(t, server.URL, time.Second)
	require.Error(t, client.Patch(context.Background(), " ", "galaxy/game:1.0.0"))
	require.Error(t, client.Patch(context.Background(), "g", " "))
}

func TestCloseIsIdempotent(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		_, _ = w.Write([]byte(`{}`))
	}))
	defer server.Close()
	client := newTestClient(t, server.URL, time.Second)
	require.NoError(t, client.Stop(context.Background(), "g", "admin_request"))
	require.NoError(t, client.Close())
	require.NoError(t, client.Close())
}

@@ -0,0 +1,611 @@
package internalhttp

import (
	"bytes"
	"context"
	"encoding/json"
	"io"
	"net/http"
	"net/http/httptest"
	"path/filepath"
	"runtime"
	"strings"
	"sync"
	"testing"
	"time"

	"galaxy/gamemaster/internal/api/internalhttp/handlers"
	"galaxy/gamemaster/internal/domain/engineversion"
	"galaxy/gamemaster/internal/domain/operation"
	domainruntime "galaxy/gamemaster/internal/domain/runtime"
	"galaxy/gamemaster/internal/service/adminbanish"
	"galaxy/gamemaster/internal/service/adminforce"
	"galaxy/gamemaster/internal/service/adminpatch"
	"galaxy/gamemaster/internal/service/adminstop"
	"galaxy/gamemaster/internal/service/commandexecute"
	engineversionsvc "galaxy/gamemaster/internal/service/engineversion"
	"galaxy/gamemaster/internal/service/livenessreply"
	"galaxy/gamemaster/internal/service/orderput"
	"galaxy/gamemaster/internal/service/registerruntime"
	"galaxy/gamemaster/internal/service/reportget"
	"galaxy/gamemaster/internal/service/turngeneration"

	"github.com/getkin/kin-openapi/openapi3"
	"github.com/getkin/kin-openapi/openapi3filter"
	"github.com/getkin/kin-openapi/routers"
	"github.com/getkin/kin-openapi/routers/legacy"
	"github.com/stretchr/testify/require"
)

// TestInternalRESTConformance loads the OpenAPI specification, drives
// every internal REST operation against the live listener backed by
// stub services, and validates each request and response body
// against the spec via `openapi3filter.ValidateRequest` and
// `openapi3filter.ValidateResponse`. Failure-path response shapes
// are intentionally out of scope here; per-handler tests under
// `handlers/<op>_test.go` cover the failure branches.
func TestInternalRESTConformance(t *testing.T) {
	t.Parallel()

	doc := loadConformanceSpec(t)

	router, err := legacy.NewRouter(doc)
	require.NoError(t, err)

	deps := newConformanceDeps()
	server, err := NewServer(newConformanceConfig(), Dependencies{
		Logger:                nil,
		Telemetry:             nil,
		Readiness:             nil,
		RuntimeRecords:        deps.runtimeRecords,
		RegisterRuntime:       deps.registerRuntime,
		ForceNextTurn:         deps.forceNextTurn,
		StopRuntime:           deps.stopRuntime,
		PatchRuntime:          deps.patchRuntime,
		BanishRace:            deps.banishRace,
		InvalidateMemberships: deps.membership,
		GameLiveness:          deps.liveness,
		EngineVersions:        deps.engineVersions,
		CommandExecute:        deps.commandExecute,
		PutOrders:             deps.putOrders,
		GetReport:             deps.getReport,
	})
	require.NoError(t, err)

	cases := []conformanceCase{
		{name: "internalHealthz", method: http.MethodGet, path: "/healthz"},
		{name: "internalReadyz", method: http.MethodGet, path: "/readyz"},
		{
			name:        "internalRegisterRuntime",
			method:      http.MethodPost,
			path:        "/api/v1/internal/games/" + conformanceGameID + "/register-runtime",
			contentType: "application/json",
			body: `{
				"engine_endpoint": "http://galaxy-game-` + conformanceGameID + `:8080",
				"members": [{"user_id": "user-1", "race_name": "Aelinari"}],
				"target_engine_version": "1.2.3",
				"turn_schedule": "0 18 * * *"
			}`,
		},
		{
			name:           "internalBanishRace",
			method:         http.MethodPost,
			path:           "/api/v1/internal/games/" + conformanceGameID + "/race/Aelinari/banish",
			expectedStatus: http.StatusNoContent,
		},
		{
			name:           "internalInvalidateMemberships",
			method:         http.MethodPost,
			path:           "/api/v1/internal/games/" + conformanceGameID + "/memberships/invalidate",
			expectedStatus: http.StatusNoContent,
		},
		{
			name:   "internalGameLiveness",
			method: http.MethodGet,
			path:   "/api/v1/internal/games/" + conformanceGameID + "/liveness",
		},
		{name: "internalListRuntimes", method: http.MethodGet, path: "/api/v1/internal/runtimes"},
		{
			name:   "internalGetRuntime",
			method: http.MethodGet,
			path:   "/api/v1/internal/runtimes/" + conformanceGameID,
		},
		{
			name:   "internalForceNextTurn",
			method: http.MethodPost,
			path:   "/api/v1/internal/runtimes/" + conformanceGameID + "/force-next-turn",
		},
		{
			name:        "internalStopRuntime",
			method:      http.MethodPost,
			path:        "/api/v1/internal/runtimes/" + conformanceGameID + "/stop",
			contentType: "application/json",
			body:        `{"reason":"admin_request"}`,
		},
		{
			name:        "internalPatchRuntime",
			method:      http.MethodPost,
			path:        "/api/v1/internal/runtimes/" + conformanceGameID + "/patch",
			contentType: "application/json",
			body:        `{"version":"1.2.4"}`,
		},
		{name: "internalListEngineVersions", method: http.MethodGet, path: "/api/v1/internal/engine-versions"},
		{
			name:           "internalCreateEngineVersion",
			method:         http.MethodPost,
			path:           "/api/v1/internal/engine-versions",
			contentType:    "application/json",
			body:           `{"version":"1.2.5","image_ref":"galaxy/game:1.2.5"}`,
			expectedStatus: http.StatusCreated,
		},
		{
			name:   "internalGetEngineVersion",
			method: http.MethodGet,
			path:   "/api/v1/internal/engine-versions/1.2.3",
		},
		{
			name:        "internalUpdateEngineVersion",
			method:      http.MethodPatch,
			path:        "/api/v1/internal/engine-versions/1.2.3",
			contentType: "application/json",
			body:        `{"image_ref":"galaxy/game:1.2.3-patch"}`,
		},
		{
			name:           "internalDeprecateEngineVersion",
			method:         http.MethodDelete,
			path:           "/api/v1/internal/engine-versions/1.2.3",
			expectedStatus: http.StatusNoContent,
		},
		{
			name:   "internalResolveEngineVersionImageRef",
			method: http.MethodGet,
			path:   "/api/v1/internal/engine-versions/1.2.3/image-ref",
		},
		{
			name:         "internalExecuteCommands",
			method:       http.MethodPost,
			path:         "/api/v1/internal/games/" + conformanceGameID + "/commands",
			contentType:  "application/json",
			body:         `{"commands":[{"name":"build","args":{}}]}`,
			extraHeaders: map[string]string{userIDHeader: conformanceUserID},
		},
		{
			name:         "internalPutOrders",
			method:       http.MethodPost,
			path:         "/api/v1/internal/games/" + conformanceGameID + "/orders",
			contentType:  "application/json",
			body:         `{"commands":[{"name":"move","args":{}}]}`,
			extraHeaders: map[string]string{userIDHeader: conformanceUserID},
		},
		{
			name:         "internalGetReport",
			method:       http.MethodGet,
			path:         "/api/v1/internal/games/" + conformanceGameID + "/reports/0",
			extraHeaders: map[string]string{userIDHeader: conformanceUserID},
		},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel()
			runConformanceCase(t, server.handler, router, tc)
		})
	}
}

const (
	conformanceGameID    = "game-conformance"
	conformanceUserID    = "user-conformance"
	conformanceServerURL = "http://localhost:8097"
	userIDHeader         = "X-User-ID"
)

type conformanceCase struct {
	name           string
	method         string
	path           string
	contentType    string
	body           string
	expectedStatus int
	extraHeaders   map[string]string
}

func runConformanceCase(t *testing.T, handler http.Handler, router routers.Router, tc conformanceCase) {
	t.Helper()

	expectedStatus := tc.expectedStatus
	if expectedStatus == 0 {
		expectedStatus = http.StatusOK
	}

	var bodyReader io.Reader
	if tc.body != "" {
		bodyReader = strings.NewReader(tc.body)
	}
	request := httptest.NewRequest(tc.method, tc.path, bodyReader)
	if tc.contentType != "" {
		request.Header.Set("Content-Type", tc.contentType)
	}
	request.Header.Set("X-Galaxy-Caller", "admin")
	for key, value := range tc.extraHeaders {
		request.Header.Set(key, value)
	}

	recorder := httptest.NewRecorder()
	handler.ServeHTTP(recorder, request)
	require.Equalf(t, expectedStatus, recorder.Code,
		"operation %s returned %d: %s", tc.name, recorder.Code, recorder.Body.String())

	validationURL := conformanceServerURL + tc.path
	validationRequest := httptest.NewRequest(tc.method, validationURL, bodyReaderFor(tc.body))
	if tc.contentType != "" {
		validationRequest.Header.Set("Content-Type", tc.contentType)
	}
	validationRequest.Header.Set("X-Galaxy-Caller", "admin")
	for key, value := range tc.extraHeaders {
		validationRequest.Header.Set(key, value)
	}

	route, pathParams, err := router.FindRoute(validationRequest)
	require.NoError(t, err)

	requestInput := &openapi3filter.RequestValidationInput{
		Request:    validationRequest,
		PathParams: pathParams,
		Route:      route,
		Options: &openapi3filter.Options{
			IncludeResponseStatus: true,
		},
	}
	require.NoError(t, openapi3filter.ValidateRequest(context.Background(), requestInput))

	responseInput := &openapi3filter.ResponseValidationInput{
		RequestValidationInput: requestInput,
		Status:                 recorder.Code,
		Header:                 recorder.Header(),
		Options: &openapi3filter.Options{
			IncludeResponseStatus: true,
		},
	}
	responseInput.SetBodyBytes(recorder.Body.Bytes())
	require.NoError(t, openapi3filter.ValidateResponse(context.Background(), responseInput))
}

func loadConformanceSpec(t *testing.T) *openapi3.T {
	t.Helper()

	_, thisFile, _, ok := runtime.Caller(0)
	require.True(t, ok)

	specPath := filepath.Join(filepath.Dir(thisFile), "..", "..", "..", "api", "internal-openapi.yaml")
	loader := openapi3.NewLoader()
	doc, err := loader.LoadFromFile(specPath)
	require.NoError(t, err)
	require.NoError(t, doc.Validate(context.Background()))
	return doc
}

func bodyReaderFor(raw string) io.Reader {
	if raw == "" {
		return http.NoBody
	}
	return bytes.NewBufferString(raw)
}

func newConformanceConfig() Config {
	return Config{
		Addr:              ":0",
		ReadHeaderTimeout: time.Second,
		ReadTimeout:       time.Second,
		WriteTimeout:      time.Second,
		IdleTimeout:       time.Second,
	}
}

// conformanceDeps groups the stub collaborators handed to the listener.
type conformanceDeps struct {
	runtimeRecords  *conformanceRuntimeRecords
	registerRuntime *conformanceRegister
	forceNextTurn   *conformanceForce
	stopRuntime     *conformanceStop
	patchRuntime    *conformancePatch
	banishRace      *conformanceBanish
	membership      *conformanceMembership
	liveness        *conformanceLiveness
	engineVersions  *conformanceEngineVersions
	commandExecute  *conformanceCommands
	putOrders       *conformanceOrders
	getReport       *conformanceReport
}

func newConformanceDeps() *conformanceDeps {
	return &conformanceDeps{
		runtimeRecords:  newConformanceRuntimeRecords(),
		registerRuntime: &conformanceRegister{},
		forceNextTurn:   &conformanceForce{},
		stopRuntime:     &conformanceStop{},
		patchRuntime:    &conformancePatch{},
		banishRace:      &conformanceBanish{},
		membership:      &conformanceMembership{},
		liveness:        &conformanceLiveness{},
		engineVersions:  newConformanceEngineVersions(),
		commandExecute:  &conformanceCommands{},
		putOrders:       &conformanceOrders{},
		getReport:       &conformanceReport{},
	}
}

// conformanceRuntimeRecord builds a canonical running runtime record used
// by every stub service.
func conformanceRuntimeRecord() domainruntime.RuntimeRecord {
	moment := time.Date(2026, 4, 30, 12, 0, 0, 0, time.UTC)
	next := moment.Add(time.Minute)
	started := moment
	return domainruntime.RuntimeRecord{
		GameID:               conformanceGameID,
		Status:               domainruntime.StatusRunning,
		EngineEndpoint:       "http://galaxy-game-" + conformanceGameID + ":8080",
		CurrentImageRef:      "galaxy/game:1.2.3",
		CurrentEngineVersion: "1.2.3",
		TurnSchedule:         "0 18 * * *",
		CurrentTurn:          0,
		NextGenerationAt:     &next,
		SkipNextTick:         false,
		EngineHealth:         "healthy",
		CreatedAt:            moment,
		UpdatedAt:            moment,
		StartedAt:            &started,
	}
}

func conformanceEngineVersionRecord(version string) engineversion.EngineVersion {
	moment := time.Date(2026, 4, 30, 12, 0, 0, 0, time.UTC)
	return engineversion.EngineVersion{
		Version:   version,
		ImageRef:  "galaxy/game:" + version,
		Options:   nil,
		Status:    engineversion.StatusActive,
		CreatedAt: moment,
		UpdatedAt: moment,
	}
}

// conformanceRuntimeRecords is an in-memory store seeded with the
// canonical record so the get/list endpoints have something to return.
type conformanceRuntimeRecords struct {
	mu     sync.Mutex
	stored map[string]domainruntime.RuntimeRecord
}

func newConformanceRuntimeRecords() *conformanceRuntimeRecords {
	return &conformanceRuntimeRecords{
		stored: map[string]domainruntime.RuntimeRecord{
			conformanceGameID: conformanceRuntimeRecord(),
		},
	}
}

func (s *conformanceRuntimeRecords) Get(_ context.Context, gameID string) (domainruntime.RuntimeRecord, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	record, ok := s.stored[gameID]
	if !ok {
		return domainruntime.RuntimeRecord{}, domainruntime.ErrNotFound
	}
	return record, nil
}

func (s *conformanceRuntimeRecords) List(_ context.Context) ([]domainruntime.RuntimeRecord, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	out := make([]domainruntime.RuntimeRecord, 0, len(s.stored))
	for _, record := range s.stored {
		out = append(out, record)
	}
	return out, nil
}

func (s *conformanceRuntimeRecords) ListByStatus(_ context.Context, status domainruntime.Status) ([]domainruntime.RuntimeRecord, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	out := make([]domainruntime.RuntimeRecord, 0, len(s.stored))
	for _, record := range s.stored {
		if record.Status == status {
			out = append(out, record)
		}
	}
	return out, nil
}

type conformanceRegister struct{}

func (s *conformanceRegister) Handle(_ context.Context, _ registerruntime.Input) (registerruntime.Result, error) {
	return registerruntime.Result{
		Record:  conformanceRuntimeRecord(),
		Outcome: operation.OutcomeSuccess,
	}, nil
}

type conformanceForce struct{}

func (s *conformanceForce) Handle(_ context.Context, _ adminforce.Input) (adminforce.Result, error) {
	return adminforce.Result{
		TurnGeneration: turngeneration.Result{Record: conformanceRuntimeRecord()},
		SkipScheduled:  true,
		Outcome:        operation.OutcomeSuccess,
	}, nil
}

type conformanceStop struct{}

func (s *conformanceStop) Handle(_ context.Context, _ adminstop.Input) (adminstop.Result, error) {
	rec := conformanceRuntimeRecord()
	rec.Status = domainruntime.StatusStopped
	stopped := rec.UpdatedAt.Add(time.Second)
	rec.StoppedAt = &stopped
	rec.UpdatedAt = stopped
	return adminstop.Result{Record: rec, Outcome: operation.OutcomeSuccess}, nil
}

type conformancePatch struct{}

func (s *conformancePatch) Handle(_ context.Context, in adminpatch.Input) (adminpatch.Result, error) {
	rec := conformanceRuntimeRecord()
	if in.Version != "" {
		rec.CurrentImageRef = "galaxy/game:" + in.Version
		rec.CurrentEngineVersion = in.Version
	}
	return adminpatch.Result{Record: rec, Outcome: operation.OutcomeSuccess}, nil
}

type conformanceBanish struct{}

func (s *conformanceBanish) Handle(_ context.Context, _ adminbanish.Input) (adminbanish.Result, error) {
	return adminbanish.Result{Outcome: operation.OutcomeSuccess}, nil
}

type conformanceMembership struct{}

func (m *conformanceMembership) Invalidate(string) {}

type conformanceLiveness struct{}

func (s *conformanceLiveness) Handle(_ context.Context, _ livenessreply.Input) (livenessreply.Result, error) {
	return livenessreply.Result{
		Ready:  true,
		Status: domainruntime.StatusRunning,
	}, nil
}

type conformanceEngineVersions struct {
	mu       sync.Mutex
	versions map[string]engineversion.EngineVersion
}

func newConformanceEngineVersions() *conformanceEngineVersions {
	return &conformanceEngineVersions{
		versions: map[string]engineversion.EngineVersion{
			"1.2.3": conformanceEngineVersionRecord("1.2.3"),
		},
	}
}

func (s *conformanceEngineVersions) List(_ context.Context, _ *engineversion.Status) ([]engineversion.EngineVersion, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	out := make([]engineversion.EngineVersion, 0, len(s.versions))
	for _, version := range s.versions {
		out = append(out, version)
	}
	return out, nil
}

func (s *conformanceEngineVersions) Get(_ context.Context, version string) (engineversion.EngineVersion, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	v, ok := s.versions[version]
	if !ok {
		return engineversion.EngineVersion{}, engineversionsvc.ErrNotFound
	}
	return v, nil
}

func (s *conformanceEngineVersions) ResolveImageRef(_ context.Context, version string) (string, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	v, ok := s.versions[version]
	if !ok {
		return "", engineversionsvc.ErrNotFound
	}
	return v.ImageRef, nil
}

func (s *conformanceEngineVersions) Create(_ context.Context, in engineversionsvc.CreateInput) (engineversion.EngineVersion, error) {
	rec := engineversion.EngineVersion{
		Version:   in.Version,
		ImageRef:  in.ImageRef,
		Options:   in.Options,
		Status:    engineversion.StatusActive,
		CreatedAt: time.Date(2026, 4, 30, 12, 0, 0, 0, time.UTC),
		UpdatedAt: time.Date(2026, 4, 30, 12, 0, 0, 0, time.UTC),
	}
	s.mu.Lock()
	s.versions[in.Version] = rec
	s.mu.Unlock()
	return rec, nil
}

func (s *conformanceEngineVersions) Update(_ context.Context, in engineversionsvc.UpdateInput) (engineversion.EngineVersion, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	rec, ok := s.versions[in.Version]
	if !ok {
		return engineversion.EngineVersion{}, engineversionsvc.ErrNotFound
	}
	if in.ImageRef != nil {
		rec.ImageRef = *in.ImageRef
	}
	if in.Status != nil {
		rec.Status = *in.Status
	}
	rec.UpdatedAt = time.Date(2026, 4, 30, 13, 0, 0, 0, time.UTC)
	s.versions[in.Version] = rec
	return rec, nil
}

func (s *conformanceEngineVersions) Deprecate(_ context.Context, in engineversionsvc.DeprecateInput) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	rec, ok := s.versions[in.Version]
	if !ok {
		return engineversionsvc.ErrNotFound
	}
	rec.Status = engineversion.StatusDeprecated
	rec.UpdatedAt = time.Date(2026, 4, 30, 14, 0, 0, 0, time.UTC)
	s.versions[in.Version] = rec
	return nil
}

type conformanceCommands struct{}

func (s *conformanceCommands) Handle(_ context.Context, _ commandexecute.Input) (commandexecute.Result, error) {
	return commandexecute.Result{
		Outcome:     operation.OutcomeSuccess,
		RawResponse: json.RawMessage(`{"results":[]}`),
	}, nil
}

type conformanceOrders struct{}

func (s *conformanceOrders) Handle(_ context.Context, _ orderput.Input) (orderput.Result, error) {
	return orderput.Result{
		Outcome:     operation.OutcomeSuccess,
		RawResponse: json.RawMessage(`{"results":[]}`),
	}, nil
}

type conformanceReport struct{}

func (s *conformanceReport) Handle(_ context.Context, _ reportget.Input) (reportget.Result, error) {
	return reportget.Result{
		Outcome:     operation.OutcomeSuccess,
		RawResponse: json.RawMessage(`{"player":"Aelinari","turn":0}`),
	}, nil
}

// Compile-time guards that the stubs satisfy the handler-level
// service interfaces accepted by the listener.
var (
	_ handlers.RegisterRuntimeService = (*conformanceRegister)(nil)
	_ handlers.ForceNextTurnService   = (*conformanceForce)(nil)
	_ handlers.StopRuntimeService     = (*conformanceStop)(nil)
	_ handlers.PatchRuntimeService    = (*conformancePatch)(nil)
	_ handlers.BanishRaceService      = (*conformanceBanish)(nil)
	_ handlers.MembershipInvalidator  = (*conformanceMembership)(nil)
	_ handlers.LivenessService        = (*conformanceLiveness)(nil)
	_ handlers.EngineVersionService   = (*conformanceEngineVersions)(nil)
	_ handlers.CommandExecuteService  = (*conformanceCommands)(nil)
	_ handlers.OrderPutService        = (*conformanceOrders)(nil)
	_ handlers.ReportGetService       = (*conformanceReport)(nil)
	_ handlers.RuntimeRecordsReader   = (*conformanceRuntimeRecords)(nil)
)

@@ -0,0 +1,54 @@
package handlers

import (
	"net/http"

	"galaxy/gamemaster/internal/domain/operation"
	"galaxy/gamemaster/internal/service/adminbanish"
)

// newBanishRaceHandler returns the handler for
// `POST /api/v1/internal/games/{game_id}/race/{race_name}/banish`. The
// request has no body; both identifiers come from the URL path.
// Success returns `204 No Content`.
func newBanishRaceHandler(deps Dependencies) http.HandlerFunc {
	logger := loggerFor(deps.Logger, "internal_rest.banish_race")
	return func(writer http.ResponseWriter, request *http.Request) {
		if deps.BanishRace == nil {
			writeError(writer, http.StatusInternalServerError, errorCodeInternal, "banish race service is not wired")
			return
		}

		gameID, ok := extractGameID(writer, request)
		if !ok {
			return
		}
		raceName, ok := extractRaceName(writer, request)
		if !ok {
			return
		}

		result, err := deps.BanishRace.Handle(request.Context(), adminbanish.Input{
			GameID:    gameID,
			RaceName:  raceName,
			OpSource:  resolveOpSource(request),
			SourceRef: requestSourceRef(request),
		})
		if err != nil {
			logger.ErrorContext(request.Context(), "banish race service errored",
				"game_id", gameID,
				"race_name", raceName,
				"err", err.Error(),
			)
			writeError(writer, http.StatusInternalServerError, errorCodeInternal, "banish race service failed")
			return
		}

		if result.Outcome == operation.OutcomeFailure {
			writeFailure(writer, result.ErrorCode, result.ErrorMessage)
			return
		}

		writeNoContent(writer)
	}
}

@@ -0,0 +1,422 @@
|
||||
package handlers
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"io"
|
||||
"log/slog"
|
||||
"net/http"
|
||||
"strings"
|
||||
|
||||
"galaxy/gamemaster/internal/domain/engineversion"
|
||||
"galaxy/gamemaster/internal/domain/operation"
|
||||
"galaxy/gamemaster/internal/domain/runtime"
|
||||
engineversionsvc "galaxy/gamemaster/internal/service/engineversion"
|
||||
)
|
||||
|
||||
// jsonContentType is the Content-Type used by every internal REST
|
||||
// response body except the engine pass-through bodies which retain
|
||||
// the engine's chosen Content-Type.
|
||||
const jsonContentType = "application/json; charset=utf-8"
|
||||
|
||||
// callerHeader is the optional caller-classification header used to
|
||||
// attribute each request to a specific entry point. Documented in
|
||||
// `gamemaster/README.md` §«Internal REST API». Missing or unknown
|
||||
// values map to OpSourceAdminRest.
|
||||
const callerHeader = "X-Galaxy-Caller"
|
||||
|
||||
// userIDHeader carries the verified player identity propagated by
|
||||
// Edge Gateway on hot-path operations. Required for
|
||||
// `internalExecuteCommands`, `internalPutOrders`, and
|
||||
// `internalGetReport`.
|
||||
const userIDHeader = "X-User-ID"
|
||||
|
||||
// requestIDHeader is read into `operation_log.source_ref` when present
|
||||
// so REST callers can correlate audit rows with their requests.
|
||||
const requestIDHeader = "X-Request-ID"
|
||||
|
||||
// gameIDPathParam, raceNamePathParam, versionPathParam, turnPathParam
|
||||
// mirror the parameter names declared in
|
||||
// `gamemaster/api/internal-openapi.yaml`.
|
||||
const (
|
||||
gameIDPathParam = "game_id"
|
||||
raceNamePathParam = "race_name"
|
||||
versionPathParam = "version"
|
||||
turnPathParam = "turn"
|
||||
)

// Stable error codes used by the handler layer when no service result
// is available (e.g., the service is not wired or the request shape
// failed pre-decode validation). The values match the vocabulary
// frozen by `gamemaster/README.md` §«Error Model» and
// `gamemaster/api/internal-openapi.yaml`.
const (
	errorCodeInvalidRequest        = "invalid_request"
	errorCodeForbidden             = "forbidden"
	errorCodeRuntimeNotFound       = "runtime_not_found"
	errorCodeEngineVersionNotFound = "engine_version_not_found"
	errorCodeEngineVersionInUse    = "engine_version_in_use"
	errorCodeConflict              = "conflict"
	errorCodeRuntimeNotRunning     = "runtime_not_running"
	errorCodeSemverPatchOnly       = "semver_patch_only"
	errorCodeEngineUnreachable     = "engine_unreachable"
	errorCodeEngineValidationError = "engine_validation_error"
	errorCodeEngineProtocolError   = "engine_protocol_violation"
	errorCodeServiceUnavailable    = "service_unavailable"
	errorCodeInternal              = "internal_error"
)

// errorBody mirrors the `error` element of the OpenAPI ErrorResponse
// schema.
type errorBody struct {
	Code    string `json:"code"`
	Message string `json:"message"`
}

// errorResponse mirrors the OpenAPI ErrorResponse envelope.
type errorResponse struct {
	Error errorBody `json:"error"`
}

// runtimeRecordResponse mirrors the OpenAPI RuntimeRecord schema.
// Required timestamps are always present and encode as int64 UTC
// milliseconds; optional ones use `*int64` so an absent value is
// omitted from the JSON form (rather than encoded as `null`).
type runtimeRecordResponse struct {
	GameID               string `json:"game_id"`
	RuntimeStatus        string `json:"runtime_status"`
	EngineEndpoint       string `json:"engine_endpoint"`
	CurrentImageRef      string `json:"current_image_ref"`
	CurrentEngineVersion string `json:"current_engine_version"`
	TurnSchedule         string `json:"turn_schedule"`
	CurrentTurn          int    `json:"current_turn"`
	NextGenerationAt     int64  `json:"next_generation_at"`
	SkipNextTick         bool   `json:"skip_next_tick"`
	EngineHealthSummary  string `json:"engine_health_summary"`
	CreatedAt            int64  `json:"created_at"`
	UpdatedAt            int64  `json:"updated_at"`
	StartedAt            *int64 `json:"started_at,omitempty"`
	StoppedAt            *int64 `json:"stopped_at,omitempty"`
	FinishedAt           *int64 `json:"finished_at,omitempty"`
}

// runtimeListResponse mirrors the OpenAPI RuntimeListResponse schema.
// Runtimes is always non-nil so an empty result encodes as
// `{"runtimes":[]}` rather than `{"runtimes":null}`.
type runtimeListResponse struct {
	Runtimes []runtimeRecordResponse `json:"runtimes"`
}

// engineVersionResponse mirrors the OpenAPI EngineVersion schema.
// Options is a `json.RawMessage` so the engine-side document passes
// through verbatim.
type engineVersionResponse struct {
	Version   string          `json:"version"`
	ImageRef  string          `json:"image_ref"`
	Options   json.RawMessage `json:"options"`
	Status    string          `json:"status"`
	CreatedAt int64           `json:"created_at"`
	UpdatedAt int64           `json:"updated_at"`
}

// engineVersionListResponse mirrors the OpenAPI
// EngineVersionListResponse schema.
type engineVersionListResponse struct {
	Versions []engineVersionResponse `json:"versions"`
}

// imageRefResponse mirrors the OpenAPI ImageRefResponse schema.
type imageRefResponse struct {
	ImageRef string `json:"image_ref"`
}

// livenessResponse mirrors the OpenAPI LivenessResponse schema.
type livenessResponse struct {
	Ready  bool   `json:"ready"`
	Status string `json:"status"`
}

// encodeRuntimeRecord turns a domain RuntimeRecord into its wire shape.
// Required `next_generation_at` encodes as `0` when the record carries
// no scheduled tick (e.g., status=starting before the first
// scheduling write); optional lifecycle timestamps are omitted when
// nil.
func encodeRuntimeRecord(record runtime.RuntimeRecord) runtimeRecordResponse {
	resp := runtimeRecordResponse{
		GameID:               record.GameID,
		RuntimeStatus:        string(record.Status),
		EngineeEndpoint:      record.EngineEndpoint,
		CurrentImageRef:      record.CurrentImageRef,
		CurrentEngineVersion: record.CurrentEngineVersion,
		TurnSchedule:         record.TurnSchedule,
		CurrentTurn:          record.CurrentTurn,
		SkipNextTick:         record.SkipNextTick,
		EngineHealthSummary:  record.EngineHealth,
		CreatedAt:            record.CreatedAt.UTC().UnixMilli(),
		UpdatedAt:            record.UpdatedAt.UTC().UnixMilli(),
	}
	if record.NextGenerationAt != nil {
		resp.NextGenerationAt = record.NextGenerationAt.UTC().UnixMilli()
	}
	if record.StartedAt != nil {
		v := record.StartedAt.UTC().UnixMilli()
		resp.StartedAt = &v
	}
	if record.StoppedAt != nil {
		v := record.StoppedAt.UTC().UnixMilli()
		resp.StoppedAt = &v
	}
	if record.FinishedAt != nil {
		v := record.FinishedAt.UTC().UnixMilli()
		resp.FinishedAt = &v
	}
	return resp
}

// encodeRuntimeList turns a domain RuntimeRecord slice into a wire
// list response. records may be nil (empty store); the result still
// carries an empty Runtimes slice so the JSON form is `{"runtimes":[]}`.
func encodeRuntimeList(records []runtime.RuntimeRecord) runtimeListResponse {
	resp := runtimeListResponse{
		Runtimes: make([]runtimeRecordResponse, 0, len(records)),
	}
	for _, record := range records {
		resp.Runtimes = append(resp.Runtimes, encodeRuntimeRecord(record))
	}
	return resp
}

// encodeEngineVersion turns a domain EngineVersion into its wire shape.
// Empty Options bytes encode as the JSON object literal `{}` to
// satisfy the schema (`type: object`).
func encodeEngineVersion(version engineversion.EngineVersion) engineVersionResponse {
	options := json.RawMessage(version.Options)
	if len(options) == 0 {
		options = json.RawMessage("{}")
	}
	return engineVersionResponse{
		Version:   version.Version,
		ImageRef:  version.ImageRef,
		Options:   options,
		Status:    string(version.Status),
		CreatedAt: version.CreatedAt.UTC().UnixMilli(),
		UpdatedAt: version.UpdatedAt.UTC().UnixMilli(),
	}
}

// encodeEngineVersionList turns a slice of domain EngineVersions into
// a wire list response. The Versions slice is always non-nil.
func encodeEngineVersionList(versions []engineversion.EngineVersion) engineVersionListResponse {
	resp := engineVersionListResponse{
		Versions: make([]engineVersionResponse, 0, len(versions)),
	}
	for _, version := range versions {
		resp.Versions = append(resp.Versions, encodeEngineVersion(version))
	}
	return resp
}

// writeJSON writes payload as a JSON response with the given status
// code.
func writeJSON(writer http.ResponseWriter, statusCode int, payload any) {
	writer.Header().Set("Content-Type", jsonContentType)
	writer.WriteHeader(statusCode)
	_ = json.NewEncoder(writer).Encode(payload)
}

// writeNoContent writes `204 No Content` with no body. The
// Content-Type header is intentionally omitted so kin-openapi's
// response validator does not look for a body.
func writeNoContent(writer http.ResponseWriter) {
	writer.WriteHeader(http.StatusNoContent)
}

// writeRawJSON writes raw, already-encoded JSON bytes as the response
// body with the given status code. Used by the hot-path handlers
// where the engine's response body is forwarded verbatim.
func writeRawJSON(writer http.ResponseWriter, statusCode int, body []byte) {
	writer.Header().Set("Content-Type", jsonContentType)
	writer.WriteHeader(statusCode)
	_, _ = writer.Write(body)
}

// writeError writes the canonical error envelope at statusCode.
func writeError(writer http.ResponseWriter, statusCode int, code, message string) {
	writeJSON(writer, statusCode, errorResponse{
		Error: errorBody{Code: code, Message: message},
	})
}

// writeFailure writes the canonical error envelope using the HTTP
// status mapped from code via mapErrorCodeToStatus. Used by every
// service-backed handler when its service returns `Outcome=failure`.
func writeFailure(writer http.ResponseWriter, code, message string) {
	writeError(writer, mapErrorCodeToStatus(code), code, message)
}

// mapErrorCodeToStatus maps a stable error code to the HTTP status
// declared by `gamemaster/api/internal-openapi.yaml`. Unknown codes
// degrade to 500 so a future error code that ships ahead of its
// handler-layer mapping still produces a structurally valid response.
func mapErrorCodeToStatus(code string) int {
	switch code {
	case errorCodeInvalidRequest:
		return http.StatusBadRequest
	case errorCodeForbidden:
		return http.StatusForbidden
	case errorCodeRuntimeNotFound, errorCodeEngineVersionNotFound:
		return http.StatusNotFound
	case errorCodeConflict,
		errorCodeRuntimeNotRunning,
		errorCodeSemverPatchOnly,
		errorCodeEngineVersionInUse:
		return http.StatusConflict
	case errorCodeEngineUnreachable,
		errorCodeEngineValidationError,
		errorCodeEngineProtocolError:
		return http.StatusBadGateway
	case errorCodeServiceUnavailable:
		return http.StatusServiceUnavailable
	default:
		return http.StatusInternalServerError
	}
}

// mapServiceError translates one of the `engineversionsvc` sentinel
// errors into the corresponding HTTP status, error code, and message.
// Unknown errors degrade to `500 internal_error`.
func mapServiceError(err error) (int, string, string) {
	switch {
	case errors.Is(err, engineversionsvc.ErrInvalidRequest):
		return http.StatusBadRequest, errorCodeInvalidRequest, err.Error()
	case errors.Is(err, engineversionsvc.ErrNotFound):
		return http.StatusNotFound, errorCodeEngineVersionNotFound, err.Error()
	case errors.Is(err, engineversionsvc.ErrConflict):
		return http.StatusConflict, errorCodeConflict, err.Error()
	case errors.Is(err, engineversionsvc.ErrInUse):
		return http.StatusConflict, errorCodeEngineVersionInUse, err.Error()
	case errors.Is(err, engineversionsvc.ErrServiceUnavailable):
		return http.StatusServiceUnavailable, errorCodeServiceUnavailable, err.Error()
	default:
		return http.StatusInternalServerError, errorCodeInternal, "internal server error"
	}
}

// decodeStrictJSON decodes one request body into target with strict
// JSON semantics: unknown fields are rejected and trailing content is
// rejected. Mirrors the helper used by lobby and rtmanager.
func decodeStrictJSON(body io.Reader, target any) error {
	decoder := json.NewDecoder(body)
	decoder.DisallowUnknownFields()
	if err := decoder.Decode(target); err != nil {
		return err
	}
	if decoder.More() {
		return errors.New("unexpected trailing content after JSON body")
	}
	return nil
}

// readRawJSONBody returns the raw request body provided it parses as
// a JSON value. The hot-path handlers use this helper because the
// envelope is engine-owned (`additionalProperties: true` on
// ExecuteCommandsRequest / PutOrdersRequest); strict decoding would
// reject legitimate extra fields.
func readRawJSONBody(reader io.Reader) ([]byte, error) {
	if reader == nil {
		return nil, errors.New("request body is required")
	}
	body, err := io.ReadAll(reader)
	if err != nil {
		return nil, err
	}
	if len(body) == 0 {
		return nil, errors.New("request body is required")
	}
	if !json.Valid(body) {
		return nil, errors.New("request body is not valid JSON")
	}
	return body, nil
}

// extractGameID pulls the {game_id} path variable from request. An
// empty or whitespace-only value writes a `400 invalid_request` and
// returns ok=false so callers can short-circuit.
func extractGameID(writer http.ResponseWriter, request *http.Request) (string, bool) {
	raw := request.PathValue(gameIDPathParam)
	if strings.TrimSpace(raw) == "" {
		writeError(writer, http.StatusBadRequest, errorCodeInvalidRequest, "game id is required")
		return "", false
	}
	return raw, true
}

// extractRaceName pulls the {race_name} path variable.
func extractRaceName(writer http.ResponseWriter, request *http.Request) (string, bool) {
	raw := request.PathValue(raceNamePathParam)
	if strings.TrimSpace(raw) == "" {
		writeError(writer, http.StatusBadRequest, errorCodeInvalidRequest, "race name is required")
		return "", false
	}
	return raw, true
}

// extractVersion pulls the {version} path variable.
func extractVersion(writer http.ResponseWriter, request *http.Request) (string, bool) {
	raw := request.PathValue(versionPathParam)
	if strings.TrimSpace(raw) == "" {
		writeError(writer, http.StatusBadRequest, errorCodeInvalidRequest, "version is required")
		return "", false
	}
	return raw, true
}

// extractUserID pulls the verified player identity from the
// X-User-ID header. The hot-path operations require this header per
// the OpenAPI spec; absent or whitespace-only values short-circuit
// with `400 invalid_request`.
func extractUserID(writer http.ResponseWriter, request *http.Request) (string, bool) {
	raw := strings.TrimSpace(request.Header.Get(userIDHeader))
	if raw == "" {
		writeError(writer, http.StatusBadRequest, errorCodeInvalidRequest, "X-User-ID header is required")
		return "", false
	}
	return raw, true
}

// resolveOpSource maps the X-Galaxy-Caller header value to an
// `operation.OpSource`. Missing or unknown values default to
// OpSourceAdminRest, matching the documented contract in
// `gamemaster/README.md` §«Internal REST API».
func resolveOpSource(request *http.Request) operation.OpSource {
	switch strings.ToLower(strings.TrimSpace(request.Header.Get(callerHeader))) {
	case "gateway":
		return operation.OpSourceGatewayPlayer
	case "lobby":
		return operation.OpSourceLobbyInternal
	case "admin":
		return operation.OpSourceAdminRest
	default:
		return operation.OpSourceAdminRest
	}
}

// requestSourceRef returns an opaque per-request reference recorded
// in `operation_log.source_ref`. v1 reads the X-Request-ID header
// when present so callers may correlate REST requests with audit
// rows.
func requestSourceRef(request *http.Request) string {
	return strings.TrimSpace(request.Header.Get(requestIDHeader))
}

// loggerFor returns a logger annotated with the operation tag. Each
// handler scopes its logs by op so operators filtering on
// `op=internal_rest.<operation>` see exactly the lifecycle they care
// about.
func loggerFor(parent *slog.Logger, op string) *slog.Logger {
	if parent == nil {
		parent = slog.Default()
	}
	return parent.With("component", "internal_http.handlers", "op", op)
}

@@ -0,0 +1,205 @@
package handlers

import (
	"errors"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
	"time"

	"galaxy/gamemaster/internal/domain/engineversion"
	"galaxy/gamemaster/internal/domain/operation"
	"galaxy/gamemaster/internal/domain/runtime"
	engineversionsvc "galaxy/gamemaster/internal/service/engineversion"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestMapErrorCodeToStatusCoversEveryDocumentedCode(t *testing.T) {
	t.Parallel()

	cases := map[string]int{
		errorCodeInvalidRequest:        http.StatusBadRequest,
		errorCodeForbidden:             http.StatusForbidden,
		errorCodeRuntimeNotFound:       http.StatusNotFound,
		errorCodeEngineVersionNotFound: http.StatusNotFound,
		errorCodeConflict:              http.StatusConflict,
		errorCodeRuntimeNotRunning:     http.StatusConflict,
		errorCodeSemverPatchOnly:       http.StatusConflict,
		errorCodeEngineVersionInUse:    http.StatusConflict,
		errorCodeEngineUnreachable:     http.StatusBadGateway,
		errorCodeEngineValidationError: http.StatusBadGateway,
		errorCodeEngineProtocolError:   http.StatusBadGateway,
		errorCodeServiceUnavailable:    http.StatusServiceUnavailable,
		errorCodeInternal:              http.StatusInternalServerError,
		"unknown_code":                 http.StatusInternalServerError,
	}

	for code, expected := range cases {
		assert.Equalf(t, expected, mapErrorCodeToStatus(code), "code %q", code)
	}
}

func TestMapServiceErrorMapsEverySentinel(t *testing.T) {
	t.Parallel()

	cases := []struct {
		err    error
		status int
		code   string
	}{
		{engineversionsvc.ErrInvalidRequest, http.StatusBadRequest, errorCodeInvalidRequest},
		{engineversionsvc.ErrNotFound, http.StatusNotFound, errorCodeEngineVersionNotFound},
		{engineversionsvc.ErrConflict, http.StatusConflict, errorCodeConflict},
		{engineversionsvc.ErrInUse, http.StatusConflict, errorCodeEngineVersionInUse},
		{engineversionsvc.ErrServiceUnavailable, http.StatusServiceUnavailable, errorCodeServiceUnavailable},
		{errors.New("plain go error"), http.StatusInternalServerError, errorCodeInternal},
	}

	for _, tc := range cases {
		status, code, _ := mapServiceError(tc.err)
		assert.Equalf(t, tc.status, status, "status for %v", tc.err)
		assert.Equalf(t, tc.code, code, "code for %v", tc.err)
	}
}

func TestResolveOpSourceMapsCallerHeader(t *testing.T) {
	t.Parallel()

	cases := map[string]operation.OpSource{
		"":        operation.OpSourceAdminRest,
		"unknown": operation.OpSourceAdminRest,
		"GATEWAY": operation.OpSourceGatewayPlayer,
		" lobby ": operation.OpSourceLobbyInternal,
		"admin":   operation.OpSourceAdminRest,
	}

	for value, expected := range cases {
		request := httptest.NewRequest(http.MethodGet, "/", nil)
		if value != "" {
			request.Header.Set(callerHeader, value)
		}
		assert.Equalf(t, expected, resolveOpSource(request), "header %q", value)
	}
}

func TestRequestSourceRefReadsXRequestID(t *testing.T) {
	t.Parallel()

	request := httptest.NewRequest(http.MethodGet, "/", nil)
	assert.Empty(t, requestSourceRef(request))

	request.Header.Set(requestIDHeader, " trace-123 ")
	assert.Equal(t, "trace-123", requestSourceRef(request))
}

func TestDecodeStrictJSONRejectsUnknownFieldsAndTrailingContent(t *testing.T) {
	t.Parallel()

	type input struct {
		Field string `json:"field"`
	}

	var ok input
	require.NoError(t, decodeStrictJSON(strings.NewReader(`{"field":"value"}`), &ok))
	assert.Equal(t, "value", ok.Field)

	var rejected input
	err := decodeStrictJSON(strings.NewReader(`{"field":"v","extra":1}`), &rejected)
	require.Error(t, err)

	var trailing input
	err = decodeStrictJSON(strings.NewReader(`{"field":"v"}{"another":true}`), &trailing)
	require.Error(t, err)
}

func TestReadRawJSONBodyValidatesPayload(t *testing.T) {
	t.Parallel()

	body, err := readRawJSONBody(strings.NewReader(`{"commands":[]}`))
	require.NoError(t, err)
	assert.JSONEq(t, `{"commands":[]}`, string(body))

	_, err = readRawJSONBody(strings.NewReader(""))
	require.Error(t, err)

	_, err = readRawJSONBody(strings.NewReader("not json"))
	require.Error(t, err)
}
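
// Sketch, not part of the original commit: exercises extractGameID via
// Go 1.22's Request.SetPathValue, which stands in here for the ServeMux
// {game_id} routing the handlers rely on. Assumes only symbols defined
// in this package.
func TestExtractGameIDSketch(t *testing.T) {
	t.Parallel()

	request := httptest.NewRequest(http.MethodGet, "/internal/games/game-1", nil)
	request.SetPathValue(gameIDPathParam, "game-1")
	gameID, ok := extractGameID(httptest.NewRecorder(), request)
	assert.True(t, ok)
	assert.Equal(t, "game-1", gameID)

	// A whitespace-only value short-circuits with 400 invalid_request.
	blank := httptest.NewRequest(http.MethodGet, "/internal/games/", nil)
	blank.SetPathValue(gameIDPathParam, "   ")
	recorder := httptest.NewRecorder()
	_, ok = extractGameID(recorder, blank)
	assert.False(t, ok)
	assert.Equal(t, http.StatusBadRequest, recorder.Code)
}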

func TestEncodeRuntimeRecordIncludesEveryRequiredField(t *testing.T) {
	t.Parallel()

	moment := time.Date(2026, 5, 1, 9, 30, 0, 0, time.UTC)
	next := moment.Add(time.Minute)
	record := runtime.RuntimeRecord{
		GameID:               "game-1",
		Status:               runtime.StatusRunning,
		EngineEndpoint:       "http://example:8080",
		CurrentImageRef:      "galaxy/game:1.2.3",
		CurrentEngineVersion: "1.2.3",
		TurnSchedule:         "0 18 * * *",
		CurrentTurn:          7,
		NextGenerationAt:     &next,
		SkipNextTick:         true,
		EngineHealth:         "healthy",
		CreatedAt:            moment,
		UpdatedAt:            moment,
		StartedAt:            &moment,
	}

	encoded := encodeRuntimeRecord(record)
	assert.Equal(t, "game-1", encoded.GameID)
	assert.Equal(t, "running", encoded.RuntimeStatus)
	assert.Equal(t, moment.UnixMilli(), encoded.CreatedAt)
	assert.Equal(t, next.UnixMilli(), encoded.NextGenerationAt)
	require.NotNil(t, encoded.StartedAt)
	assert.Equal(t, moment.UnixMilli(), *encoded.StartedAt)
	assert.Nil(t, encoded.StoppedAt)
	assert.Nil(t, encoded.FinishedAt)
}

func TestEncodeRuntimeRecordZerosNextGenerationWhenNil(t *testing.T) {
	t.Parallel()

	moment := time.Date(2026, 5, 1, 9, 30, 0, 0, time.UTC)
	record := runtime.RuntimeRecord{
		GameID:               "game-1",
		Status:               runtime.StatusStarting,
		EngineEndpoint:       "http://example:8080",
		CurrentImageRef:      "galaxy/game:1.2.3",
		CurrentEngineVersion: "1.2.3",
		TurnSchedule:         "0 18 * * *",
		CreatedAt:            moment,
		UpdatedAt:            moment,
	}

	encoded := encodeRuntimeRecord(record)
	assert.Equal(t, int64(0), encoded.NextGenerationAt)
	assert.Nil(t, encoded.StartedAt)
}

func TestEncodeEngineVersionDefaultsEmptyOptionsToObject(t *testing.T) {
	t.Parallel()

	moment := time.Date(2026, 5, 1, 9, 30, 0, 0, time.UTC)
	encoded := encodeEngineVersion(engineversion.EngineVersion{
		Version:   "1.2.3",
		ImageRef:  "galaxy/game:1.2.3",
		Status:    engineversion.StatusActive,
		CreatedAt: moment,
		UpdatedAt: moment,
	})
	assert.Equal(t, "{}", string(encoded.Options))
	assert.Equal(t, "active", encoded.Status)
}

func TestEncodeRuntimeListAlwaysReturnsNonNilSlice(t *testing.T) {
	t.Parallel()

	resp := encodeRuntimeList(nil)
	require.NotNil(t, resp.Runtimes)
	assert.Empty(t, resp.Runtimes)
}
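
// Sketch, not part of the original commit: writeFailure composes
// writeError with mapErrorCodeToStatus, so a stable code alone yields
// both the mapped status and the canonical envelope. Assumes only
// symbols defined in this package.
func TestWriteFailureSketch(t *testing.T) {
	t.Parallel()

	recorder := httptest.NewRecorder()
	writeFailure(recorder, errorCodeRuntimeNotFound, "no such runtime")
	assert.Equal(t, http.StatusNotFound, recorder.Code)
	assert.Equal(t, jsonContentType, recorder.Header().Get("Content-Type"))
	assert.JSONEq(t, `{"error":{"code":"runtime_not_found","message":"no such runtime"}}`, recorder.Body.String())
}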