feat: use postgres

@@ -0,0 +1,10 @@
# Makefile for galaxy/lobby.
#
# The `jet` target regenerates the go-jet/v2 query-builder code under
# internal/adapters/postgres/jet/ against a transient PostgreSQL container
# brought up by cmd/jetgen. Generated code is committed.

.PHONY: jet

jet:
	go run ./cmd/jetgen
@@ -137,7 +137,16 @@ The service starts two HTTP listeners and one Redis Stream consumer pipeline.

### Startup dependencies

- one reachable Redis deployment at `LOBBY_REDIS_MASTER_ADDR` (mandatory
  password via `LOBBY_REDIS_PASSWORD`; replicas optional via
  `LOBBY_REDIS_REPLICA_ADDRS`). Used for streams, race-name directory,
  per-game runtime aggregates, and stream offsets.
- one reachable PostgreSQL primary at `LOBBY_POSTGRES_PRIMARY_DSN` (DSN
  must include `search_path=lobby&sslmode=disable`). Embedded goose
  migrations apply at startup before any listener opens; on migration or
  ping failure the service exits non-zero. The four core enrollment
  entities (game / application / invite / membership) live here after
  PG_PLAN.md §6A; `docs/postgres-migration.md` is the decision record.
- `User Service` reachable at `LOBBY_USER_SERVICE_BASE_URL` (startup check only;
  runtime failures are surfaced as request errors, not boot failures)
- `Game Master` at `LOBBY_GM_BASE_URL` (same policy — startup check omitted;
@@ -147,7 +156,7 @@ The service starts two HTTP listeners and one Redis Stream consumer pipeline.

- `GET /healthz` on both ports returns `{"status":"ok"}`
- `GET /readyz` on both ports returns `{"status":"ready"}` after successful
  startup; no live Redis or PostgreSQL ping per request

## Game Record Model

@@ -576,10 +585,14 @@ Sentinel errors: `ErrNameTaken`, `ErrInvalidName`, `ErrPendingMissing`,

### v1 backends

- **PostgreSQL** (`lobby/internal/adapters/postgres/racenamedir/directory.go`)
  — the production adapter; one row per binding under
  `lobby.race_names`, transactional writes guarded by
  `pg_advisory_xact_lock(hashtextextended(canonical_key, 0))`. See
  `docs/postgres-migration.md` §6B for the full schema and decision
  record.
- **Stub** (`lobby/internal/adapters/racenamestub/directory.go`) — in-process
  implementation for unit tests that do not need PostgreSQL. Chosen by
  `LOBBY_RACE_NAME_DIRECTORY_BACKEND=stub`.

A future dedicated `Race Name Service` replaces the adapter without changing
@@ -1060,7 +1073,9 @@ Stable error codes:

### Required

- `LOBBY_REDIS_MASTER_ADDR`
- `LOBBY_REDIS_PASSWORD`
- `LOBBY_POSTGRES_PRIMARY_DSN`
- `LOBBY_USER_SERVICE_BASE_URL`
- `LOBBY_GM_BASE_URL`

@@ -1087,11 +1102,28 @@ Internal HTTP:

Redis connectivity:

- `LOBBY_REDIS_MASTER_ADDR` (required)
- `LOBBY_REDIS_REPLICA_ADDRS` (optional, comma-separated; not consumed yet)
- `LOBBY_REDIS_PASSWORD` (required)
- `LOBBY_REDIS_DB` (default 0)
- `LOBBY_REDIS_OPERATION_TIMEOUT` (default 250ms)

The legacy `LOBBY_REDIS_ADDR`, `LOBBY_REDIS_USERNAME`, and
`LOBBY_REDIS_TLS_ENABLED` env vars were retired in PG_PLAN.md §6A; setting
either of the latter two now fails fast at startup. See
`ARCHITECTURE.md §Persistence Backends` for the architectural rules.

PostgreSQL connectivity (PG_PLAN.md §6A and §6B; durable game /
application / invite / membership records and the Race Name Directory
live here):

- `LOBBY_POSTGRES_PRIMARY_DSN` (required;
  e.g. `postgres://lobbyservice:secret@postgres:5432/galaxy?search_path=lobby&sslmode=disable`)
- `LOBBY_POSTGRES_REPLICA_DSNS` (optional, comma-separated; not consumed yet)
- `LOBBY_POSTGRES_OPERATION_TIMEOUT` (default 1s)
- `LOBBY_POSTGRES_MAX_OPEN_CONNS` (default 25)
- `LOBBY_POSTGRES_MAX_IDLE_CONNS` (default 5)
- `LOBBY_POSTGRES_CONN_MAX_LIFETIME` (default 30m)

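The `search_path=lobby&sslmode=disable` requirement on the primary DSN can be checked mechanically before boot. A hypothetical pre-flight sketch (the helper name and error texts are ours, not the service's; the real runtime discovers violations only at migration/ping time):

```go
package main

import (
	"fmt"
	"net/url"
)

// validatePrimaryDSN checks the two query parameters the service requires
// on LOBBY_POSTGRES_PRIMARY_DSN. It inspects the URL only; it never dials.
func validatePrimaryDSN(dsn string) error {
	u, err := url.Parse(dsn)
	if err != nil {
		return fmt.Errorf("parse dsn: %w", err)
	}
	q := u.Query()
	if got := q.Get("search_path"); got != "lobby" {
		return fmt.Errorf("dsn must set search_path=lobby, got %q", got)
	}
	if got := q.Get("sslmode"); got != "disable" {
		return fmt.Errorf("dsn must set sslmode=disable, got %q", got)
	}
	return nil
}

func main() {
	good := "postgres://lobbyservice:secret@postgres:5432/galaxy?search_path=lobby&sslmode=disable"
	bad := "postgres://lobbyservice:secret@postgres:5432/galaxy"
	fmt.Println(validatePrimaryDSN(good), validatePrimaryDSN(bad))
}
```

Running such a check in CI catches a malformed DSN before the service pays a full container boot to find out.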
Stream names:

@@ -1114,8 +1146,9 @@ Enrollment automation:

Race Name Directory:

- `LOBBY_RACE_NAME_DIRECTORY_BACKEND` with default `postgres`
  (alternate: `stub` for in-process tests; PG_PLAN.md §6B retired the
  `redis` backend)
- `LOBBY_RACE_NAME_EXPIRATION_INTERVAL` with default `1h` — pending
  registration expiration worker tick

@@ -1135,39 +1168,35 @@ OpenTelemetry:

- `LOBBY_OTEL_STDOUT_TRACES_ENABLED`
- `LOBBY_OTEL_STDOUT_METRICS_ENABLED`

## Persistence Layout

Game / application / invite / membership records live in PostgreSQL after
PG_PLAN.md §6A; the Race Name Directory followed in §6B. See
`docs/postgres-migration.md` for the schema and decision records. The
`lobby` schema owns five tables — `games`, `applications`, `invites`,
`memberships`, `race_names` — plus the partial UNIQUE index on
`applications(applicant_user_id, game_id) WHERE status <> 'rejected'` that
enforces the single-active-application invariant and the partial UNIQUE
index on `race_names(canonical_key) WHERE binding_kind = 'registered'`
that enforces single-registered-per-canonical.

The Redis-backed keys below survive both stages. Redis owns the
runtime-coordination state — per-game runtime aggregates, gap activation,
capability-evaluation guards, and stream consumer offsets — plus the
event-bus streams themselves.

### Redis key table

Storage rules for Redis:

- durable records are stored as strict JSON blobs
- timestamps are stored in Unix milliseconds unless noted otherwise
- dynamic key segments are base64url-encoded

| Logical artifact | Redis key |
| --- | --- |
| game record | `lobby:games:<game_id>` |
| game index by status | `lobby:games_by_status:<status>` (sorted set; score = created_at) |
| games by owner | `lobby:games_by_owner:<user_id>` (set of game_ids; populated for private games on Save) |
| application record | `lobby:applications:<application_id>` |
| applications by game | `lobby:game_applications:<game_id>` (set of application_ids) |
| applications by user | `lobby:user_applications:<user_id>` (set of application_ids) |
| active application per (user, game) | `lobby:user_game_application:<user_id>:<game_id>` → `application_id` |
| invite record | `lobby:invites:<invite_id>` |
| invites by game | `lobby:game_invites:<game_id>` (set of invite_ids) |
| invites by user (invitee) | `lobby:user_invites:<user_id>` (set of invite_ids) |
| invites by inviter | `lobby:user_inviter_invites:<user_id>` (set of invite_ids) |
| membership record | `lobby:memberships:<membership_id>` |
| memberships by game | `lobby:game_memberships:<game_id>` (set of membership_ids) |
| memberships by user | `lobby:user_memberships:<user_id>` (set of membership_ids) |
| registered race name | `lobby:race_names:registered:<canonical_key>` → JSON `{user_id, race_name, source_game_id, registered_at}` |
| user → registered canonical keys | `lobby:race_names:user_registered:<user_id>` (set of `canonical_key`) |
| per-game race name reservation | `lobby:race_names:reservations:<game_id>:<canonical_key>` → JSON `{user_id, race_name, reserved_at, status ∈ reserved/pending_registration, eligible_until_ms?}` |
| user → reservations index | `lobby:race_names:user_reservations:<user_id>` (set of `game_id:canonical_key`) |
| pending-registration expiry index | `lobby:race_names:pending_index` (sorted set; score = `eligible_until_ms`) |
| canonical-key lookup cache | `lobby:race_names:canonical_lookup:<canonical_key>` → JSON `{kind, holder_user_id, game_id?}` |
| per-game per-user stats aggregate | `lobby:game_turn_stats:<game_id>:<user_id>` → JSON aggregate |
| per-game stats user index | `lobby:game_turn_stats_by_game:<game_id>` (set of `user_id`) |
| capability-evaluation guard | `lobby:capability_evaluation:done:<game_id>` (sentinel string) |
| GM event stream offset | `lobby:stream_offsets:gm_events` |
| runtime job result offset | `lobby:stream_offsets:runtime_results` |
| user lifecycle stream offset | `lobby:stream_offsets:user_lifecycle` |
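The base64url rule for dynamic key segments can be sketched as follows, using the reservation key from the table above (the helper name is ours, and we assume unpadded base64url; the real adapter's encoding helper may differ):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// reservationKey builds the per-game race-name reservation key with both
// dynamic segments base64url-encoded (unpadded), so a raw ID can never
// inject a ':' separator into the key.
func reservationKey(gameID, canonicalKey string) string {
	enc := base64.RawURLEncoding.EncodeToString
	return "lobby:race_names:reservations:" +
		enc([]byte(gameID)) + ":" + enc([]byte(canonicalKey))
}

func main() {
	fmt.Println(reservationKey("game-01HZ", "aurora"))
}
```

Because base64url output is limited to `[A-Za-z0-9_-]`, splitting the key on `:` always recovers the original segments.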
@@ -1175,12 +1204,18 @@ Storage rules:

### Frozen record fields

The five durable records are stored in PostgreSQL columns; the field set
per record is unchanged from the previous Redis JSON shape and is
documented inline with the migration scripts under
`internal/adapters/postgres/migrations/`.

| Record | Frozen fields |
| --- | --- |
| game record | all game fields listed in Game Record Model section |
| application record | `application_id`, `game_id`, `applicant_user_id`, `race_name`, `status`, `created_at`, `decided_at` |
| invite record | `invite_id`, `game_id`, `inviter_user_id`, `invitee_user_id`, `race_name` (set at redeem), `status`, `created_at`, `expires_at`, `decided_at` |
| membership record | all membership fields listed in Membership Model section |
| race_names row | `canonical_key`, `game_id`, `holder_user_id`, `race_name`, `binding_kind`, `source_game_id`, `reserved_at_ms`, `eligible_until_ms` (pending only), `registered_at_ms` (registered only) |

## Observability

@@ -0,0 +1,236 @@
// Command jetgen regenerates the go-jet/v2 query-builder code under
// galaxy/lobby/internal/adapters/postgres/jet/ against a transient
// PostgreSQL instance.
//
// The program is intended to be invoked as `go run ./cmd/jetgen` (or via the
// `make jet` Makefile target) from within `galaxy/lobby`. It is not part of
// the runtime binary.
//
// Steps:
//
//  1. start a postgres:16-alpine container via testcontainers-go
//  2. open it through pkg/postgres as the superuser
//  3. CREATE ROLE lobbyservice and CREATE SCHEMA "lobby"
//     AUTHORIZATION lobbyservice
//  4. open a second pool as lobbyservice with search_path=lobby and apply
//     the embedded goose migrations
//  5. run jet's PostgreSQL generator against schema=lobby, writing into
//     ../internal/adapters/postgres/jet
package main

import (
	"context"
	"errors"
	"fmt"
	"log"
	"net/url"
	"os"
	"path/filepath"
	"runtime"
	"time"

	"galaxy/lobby/internal/adapters/postgres/migrations"
	"galaxy/postgres"

	jetpostgres "github.com/go-jet/jet/v2/generator/postgres"
	testcontainers "github.com/testcontainers/testcontainers-go"
	tcpostgres "github.com/testcontainers/testcontainers-go/modules/postgres"
	"github.com/testcontainers/testcontainers-go/wait"
)

const (
	postgresImage      = "postgres:16-alpine"
	superuserName      = "galaxy"
	superuserPassword  = "galaxy"
	superuserDatabase  = "galaxy_lobby"
	serviceRole        = "lobbyservice"
	servicePassword    = "lobbyservice"
	serviceSchema      = "lobby"
	containerStartup   = 90 * time.Second
	defaultOpTimeout   = 10 * time.Second
	jetOutputDirSuffix = "internal/adapters/postgres/jet"
)

func main() {
	if err := run(context.Background()); err != nil {
		log.Fatalf("jetgen: %v", err)
	}
}

func run(ctx context.Context) error {
	outputDir, err := jetOutputDir()
	if err != nil {
		return err
	}

	container, err := tcpostgres.Run(ctx, postgresImage,
		tcpostgres.WithDatabase(superuserDatabase),
		tcpostgres.WithUsername(superuserName),
		tcpostgres.WithPassword(superuserPassword),
		testcontainers.WithWaitStrategy(
			wait.ForLog("database system is ready to accept connections").
				WithOccurrence(2).
				WithStartupTimeout(containerStartup),
		),
	)
	if err != nil {
		return fmt.Errorf("start postgres container: %w", err)
	}
	defer func() {
		if termErr := testcontainers.TerminateContainer(container); termErr != nil {
			log.Printf("jetgen: terminate container: %v", termErr)
		}
	}()

	baseDSN, err := container.ConnectionString(ctx, "sslmode=disable")
	if err != nil {
		return fmt.Errorf("resolve container dsn: %w", err)
	}

	if err := provisionRoleAndSchema(ctx, baseDSN); err != nil {
		return err
	}

	scopedDSN, err := dsnForServiceRole(baseDSN)
	if err != nil {
		return err
	}
	if err := applyMigrations(ctx, scopedDSN); err != nil {
		return err
	}

	if err := os.RemoveAll(outputDir); err != nil {
		return fmt.Errorf("remove existing jet output %q: %w", outputDir, err)
	}
	if err := os.MkdirAll(filepath.Dir(outputDir), 0o755); err != nil {
		return fmt.Errorf("ensure jet output parent: %w", err)
	}

	jetCfg := postgres.DefaultConfig()
	jetCfg.PrimaryDSN = scopedDSN
	jetCfg.OperationTimeout = defaultOpTimeout
	jetDB, err := postgres.OpenPrimary(ctx, jetCfg)
	if err != nil {
		return fmt.Errorf("open scoped pool for jet generation: %w", err)
	}
	defer func() { _ = jetDB.Close() }()

	if err := jetpostgres.GenerateDB(jetDB, serviceSchema, outputDir); err != nil {
		return fmt.Errorf("jet generate: %w", err)
	}

	log.Printf("jetgen: generated jet code into %s (schema=%s)", outputDir, serviceSchema)
	return nil
}

func provisionRoleAndSchema(ctx context.Context, baseDSN string) error {
	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = baseDSN
	cfg.OperationTimeout = defaultOpTimeout
	db, err := postgres.OpenPrimary(ctx, cfg)
	if err != nil {
		return fmt.Errorf("open admin pool: %w", err)
	}
	defer func() { _ = db.Close() }()

	statements := []string{
		fmt.Sprintf(`DO $$ BEGIN
	IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = %s) THEN
		CREATE ROLE %s LOGIN PASSWORD %s;
	END IF;
END $$;`, sqlLiteral(serviceRole), sqlIdentifier(serviceRole), sqlLiteral(servicePassword)),
		fmt.Sprintf(`CREATE SCHEMA IF NOT EXISTS %s AUTHORIZATION %s;`,
			sqlIdentifier(serviceSchema), sqlIdentifier(serviceRole)),
		fmt.Sprintf(`GRANT USAGE ON SCHEMA %s TO %s;`,
			sqlIdentifier(serviceSchema), sqlIdentifier(serviceRole)),
	}
	for _, statement := range statements {
		if _, err := db.ExecContext(ctx, statement); err != nil {
			return fmt.Errorf("provision %q/%q: %w", serviceSchema, serviceRole, err)
		}
	}
	return nil
}

func dsnForServiceRole(baseDSN string) (string, error) {
	parsed, err := url.Parse(baseDSN)
	if err != nil {
		return "", fmt.Errorf("parse base dsn: %w", err)
	}
	values := url.Values{}
	values.Set("search_path", serviceSchema)
	values.Set("sslmode", "disable")
	scoped := url.URL{
		Scheme:   parsed.Scheme,
		User:     url.UserPassword(serviceRole, servicePassword),
		Host:     parsed.Host,
		Path:     parsed.Path,
		RawQuery: values.Encode(),
	}
	return scoped.String(), nil
}

func applyMigrations(ctx context.Context, dsn string) error {
	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = dsn
	cfg.OperationTimeout = defaultOpTimeout
	db, err := postgres.OpenPrimary(ctx, cfg)
	if err != nil {
		return fmt.Errorf("open scoped pool: %w", err)
	}
	defer func() { _ = db.Close() }()

	if err := postgres.Ping(ctx, db, defaultOpTimeout); err != nil {
		return err
	}
	if err := postgres.RunMigrations(ctx, db, migrations.FS(), "."); err != nil {
		return fmt.Errorf("run migrations: %w", err)
	}
	return nil
}

// jetOutputDir returns the absolute path that jet should write into. We rely
// on the runtime caller info to anchor it to galaxy/lobby regardless of the
// invoking working directory.
func jetOutputDir() (string, error) {
	_, file, _, ok := runtime.Caller(0)
	if !ok {
		return "", errors.New("resolve runtime caller for jet output path")
	}
	dir := filepath.Dir(file)
	// dir = .../galaxy/lobby/cmd/jetgen
	moduleRoot := filepath.Clean(filepath.Join(dir, "..", ".."))
	return filepath.Join(moduleRoot, jetOutputDirSuffix), nil
}

func sqlIdentifier(name string) string {
	return `"` + escapeDoubleQuotes(name) + `"`
}

func sqlLiteral(value string) string {
	return "'" + escapeSingleQuotes(value) + "'"
}

func escapeDoubleQuotes(value string) string {
	out := make([]byte, 0, len(value))
	for index := 0; index < len(value); index++ {
		if value[index] == '"' {
			out = append(out, '"', '"')
			continue
		}
		out = append(out, value[index])
	}
	return string(out)
}

func escapeSingleQuotes(value string) string {
	out := make([]byte, 0, len(value))
	for index := 0; index < len(value); index++ {
		if value[index] == '\'' {
			out = append(out, '\'', '\'')
			continue
		}
		out = append(out, value[index])
	}
	return string(out)
}
@@ -6,10 +6,14 @@ and timestamps with values that match the deployment under inspection.

## Example `.env`

A minimum-viable `LOBBY_*` set for a local run against a single Redis
container plus a PostgreSQL container with the `lobby` schema and the
`lobbyservice` role provisioned. The full list with defaults lives in
`../README.md` §Configuration.

```bash
LOBBY_REDIS_MASTER_ADDR=127.0.0.1:6379
LOBBY_REDIS_PASSWORD=local
LOBBY_POSTGRES_PRIMARY_DSN=postgres://lobbyservice:lobbyservice@127.0.0.1:5432/galaxy?search_path=lobby&sslmode=disable
LOBBY_USER_SERVICE_BASE_URL=http://127.0.0.1:8083
LOBBY_GM_BASE_URL=http://127.0.0.1:8096

@@ -19,7 +23,7 @@ LOBBY_INTERNAL_HTTP_ADDR=:8095
LOBBY_LOG_LEVEL=info
LOBBY_SHUTDOWN_TIMEOUT=30s

LOBBY_RACE_NAME_DIRECTORY_BACKEND=postgres
LOBBY_ENROLLMENT_AUTOMATION_INTERVAL=30s
LOBBY_RACE_NAME_EXPIRATION_INTERVAL=1h

@@ -115,16 +119,36 @@ curl -s http://localhost:8095/api/v1/internal/games/game-01HZ...
curl -s http://localhost:8095/api/v1/internal/games/game-01HZ.../memberships
```

## Storage Inspection Examples

### Inspect a game record (PostgreSQL)

```bash
psql "$LOBBY_POSTGRES_PRIMARY_DSN" -c \
  "SELECT * FROM lobby.games WHERE game_id = 'game-01HZ...'"
```

The columns mirror the fields documented in `../README.md` §Game Record Model.

### Inspect open enrollment games (sorted by created_at)

```bash
psql "$LOBBY_POSTGRES_PRIMARY_DSN" -c \
  "SELECT game_id, game_name, created_at FROM lobby.games
   WHERE status = 'enrollment_open'
   ORDER BY created_at DESC"
```

### Inspect a Race Name Directory binding

```bash
psql "$LOBBY_POSTGRES_PRIMARY_DSN" -c \
  "SELECT canonical_key, game_id, holder_user_id, race_name, binding_kind,
   source_game_id, eligible_until_ms, registered_at_ms
   FROM lobby.race_names WHERE race_name = 'Aurora'"
```

## Redis Examples

### Publish a runtime job result (Runtime Manager simulation)

@@ -162,12 +186,6 @@ redis-cli XADD gm:lobby_events '*' \
  finished_at_ms 1714123456789
```

## Notification Intent Format

Lobby produces every notification through `pkg/notificationintent` and

@@ -0,0 +1,386 @@
# PostgreSQL Migration

PG_PLAN.md §6A migrated the four core enrollment entities of Game Lobby
Service — `Game`, `Application`, `Invite`, `Membership` — from Redis-only
durable storage to the steady-state Redis + PostgreSQL split codified in
`ARCHITECTURE.md §Persistence Backends`. PG_PLAN.md §6B then moved the
Race Name Directory onto PostgreSQL, retiring the Redis Lua scripts and
canonical-lookup cache that backed it. PG_PLAN.md §6C confirmed which
runtime-coordination state intentionally stays on Redis (per-game
`game_turn_stats`, `gap_activated_at`, `capability_evaluation:done:*`,
`stream_offsets:*`, plus the event-bus streams themselves) and pruned the
remaining redisstate keyspace.

This document records the schema decisions and the non-obvious agreements
behind them. Use it together with the migration scripts under
`internal/adapters/postgres/migrations/` and the runtime wiring
(`internal/app/runtime.go`).

## Outcomes

- Schema `lobby` (provisioned externally) holds four tables: `games`,
  `applications`, `invites`, `memberships`. A partial UNIQUE index on
  `applications(applicant_user_id, game_id) WHERE status <> 'rejected'`
  enforces the single-active-application constraint at the database
  level.
- The runtime opens one PostgreSQL pool via `pkg/postgres.OpenPrimary`,
  applies embedded goose migrations strictly before any HTTP listener
  becomes ready, and exits non-zero when migration or ping fails.
- The runtime opens one shared `*redis.Client` via
  `pkg/redisconn.NewMasterClient` and passes it to the Race Name
  Directory adapter, the per-game stats / gap-activation /
  evaluation-guard / stream-offset stores, the consumer pipelines, and
  the notification-intent publisher.
- The Redis adapter package (`internal/adapters/redisstate/`) keeps the
  surviving stores (`racenamedir`, `gameturnstatsstore`,
  `gapactivationstore`, `evaluationguardstore`, `streamoffsetstore`,
  `streamlagprobe`) and the keyspace methods that back them; the
  game/application/invite/membership stores, codecs, tests, and
  per-record TTL constants are gone.
- Configuration drops `LOBBY_REDIS_ADDR`, `LOBBY_REDIS_USERNAME`, and
  `LOBBY_REDIS_TLS_ENABLED` and introduces `LOBBY_REDIS_MASTER_ADDR`,
  `LOBBY_REDIS_REPLICA_ADDRS`, `LOBBY_REDIS_PASSWORD`,
  `LOBBY_POSTGRES_PRIMARY_DSN`, `LOBBY_POSTGRES_REPLICA_DSNS`, plus
  the standard `LOBBY_POSTGRES_*` pool tuning knobs. Setting
  `LOBBY_REDIS_USERNAME` or `LOBBY_REDIS_TLS_ENABLED` now fails fast at
  startup via the shared `pkg/redisconn.LoadFromEnv` rejection path.

## Decisions

### 1. One schema, externally provisioned role

**Decision.** The `lobby` schema and the matching `lobbyservice` role
are created outside the migration sequence (in tests, by
`integration/internal/harness/postgres_container.go::EnsureRoleAndSchema`;
in production, by an ops init script not in scope for this stage). The
embedded migration `00001_init.sql` contains only DDL for tables and
indexes and assumes it runs as the schema owner with
`search_path=lobby`.

**Why.** Mirrors the precedent set by Notification Stage 5 and Mail
Stage 4 and matches the schema-per-service architectural rule
(`ARCHITECTURE.md §Persistence Backends`). Mixing role + schema + table
DDL into one script would force every consumer of the migration to run
as a superuser; splitting them lines up with the operational split
(ops provisions roles and schemas, the service applies schema-scoped
migrations).

### 2. Single-active application = partial UNIQUE on `applications`

**Decision.** `applications` carries a partial UNIQUE index on
`(applicant_user_id, game_id) WHERE status <> 'rejected'`. INSERT
attempts that violate the constraint are surfaced to the service layer
as `application.ErrConflict` via the shared
`sqlx.IsUniqueViolation` helper.

**Why.** Replaces the Redis lookup key `lobby:user_game_application:*:*`
with a deterministic database-level invariant. Multiple `rejected`
rows are intentionally allowed (one applicant may submit, get rejected,
and resubmit), and the UNIQUE only fires on a second concurrent
submitted/approved row for the same `(user, game)`. The constraint is
race-safe: under concurrent submission attempts one INSERT wins and the
others fail with conflict.

### 3. Public games carry an empty `owner_user_id`; partial index excludes them

**Decision.** `games.owner_user_id` is `text NOT NULL DEFAULT ''`, and
the secondary `games_owner_idx` is partial: `WHERE game_type = 'private'`.
Public games (admin-owned) carry an empty owner string and are excluded
from the index entirely.

**Why.** Mirrors the previous Redis behaviour, where `games_by_owner:*`
sets were created only for private games. The partial index keeps the
owner lookup tight (only private-game rows participate) while letting
the column stay non-nullable and consistent with the domain model.

### 4. JSONB columns for runtime snapshot and runtime binding

**Decision.** `games.runtime_snapshot` is `jsonb NOT NULL DEFAULT
'{}'::jsonb`; `games.runtime_binding` is `jsonb NULL`. The JSON shapes
used inside both columns are stable and live in
`internal/adapters/postgres/gamestore/codecs.go`. `runtime_binding`
binds NULL when the domain pointer is nil, otherwise an object with
`container_id`, `engine_endpoint`, `runtime_job_id`, and `bound_at_ms`
fields.

**Why.** Both fields are opaque to queries — Lobby never filters on
their internals. JSONB matches the "everything outside primary
fields is JSON" pattern Notification Stage 5 already established and
allows a future GIN index without a schema rewrite. The `bound_at_ms`
field inside the binding stays in Unix milliseconds so the encoded
payload is directly comparable across Redis and PostgreSQL audits during
the transition window.

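Under decision 4, a nil domain pointer binds SQL NULL and a non-nil one binds a four-field JSON object. A hypothetical codec sketch (the struct and helper names are ours; the real shapes live in `gamestore/codecs.go`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// runtimeBinding mirrors the stable JSON shape of games.runtime_binding.
type runtimeBinding struct {
	ContainerID    string `json:"container_id"`
	EngineEndpoint string `json:"engine_endpoint"`
	RuntimeJobID   string `json:"runtime_job_id"`
	BoundAtMs      int64  `json:"bound_at_ms"` // Unix milliseconds
}

// encodeBinding returns (nil, true) for a nil pointer so the caller can
// bind SQL NULL; otherwise it returns the JSONB payload bytes.
func encodeBinding(b *runtimeBinding) ([]byte, bool) {
	if b == nil {
		return nil, true // bind NULL
	}
	raw, err := json.Marshal(b)
	if err != nil {
		panic(err) // a struct of plain fields cannot fail to marshal
	}
	return raw, false
}

func main() {
	_, isNull := encodeBinding(nil)
	raw, _ := encodeBinding(&runtimeBinding{ContainerID: "c1", BoundAtMs: 1714123456789})
	fmt.Println(isNull, string(raw))
}
```

The `(bytes, isNull)` pair maps directly onto a nullable `jsonb` bind parameter.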
### 5. Optimistic concurrency via current-status compare-and-swap

**Decision.** `UpdateStatus` on every store is implemented as `UPDATE …
WHERE id = $X AND status = $expected`. A zero-rows result is
disambiguated with a follow-up `SELECT status` probe — missing rows map
to the per-domain `ErrNotFound`, mismatches map to `ErrConflict`.
Snapshot/binding overrides on `games` use the same pattern but only
guard on the primary key (no expected-status gate).

**Why.** Mirrors the previous Redis WATCH/TxPipelined behaviour without
holding a `SELECT … FOR UPDATE` lock across application logic. The
compare-and-swap is local to one statement, never spans more than one
network round trip, and produces the same observable error semantics
the service layer already depends on.

### 6. Memberships store `race_name` and `canonical_key` side by side
|
||||
|
||||
**Decision.** `memberships` carries both `race_name` (original casing)
|
||||
and `canonical_key` (policy-derived form) as separate `text NOT NULL`
|
||||
columns. There is no UNIQUE constraint on `canonical_key`.
|
||||
|
||||
**Why.** Downstream consumers — capability evaluation and the
|
||||
user-lifecycle cascade — read the canonical form directly without
|
||||
re-deriving it from `race_name`, which is the same arrangement the
|
||||
Redis JSON record had. Race-name uniqueness across the platform
|
||||
remains the responsibility of the Race Name Directory; enforcing a
|
||||
UNIQUE on memberships' canonical_key now would duplicate the RND
|
||||
invariant and create deadlock potential between the two stores.
|
||||
|
||||
### 7. ON DELETE CASCADE from games to children
|
||||
|
||||
**Decision.** Each child table (`applications`, `invites`,
|
||||
`memberships`) declares its `game_id` as `REFERENCES games(game_id) ON
|
||||
DELETE CASCADE`.
|
||||
|
||||
**Why.** Lobby code never deletes games today — every status terminal
|
||||
is a soft state — so the cascade has no live trigger. It exists for
|
||||
two future paths: scheduled cleanup of `cancelled` games far past
|
||||
retention, and explicit operator/test resets. CASCADE keeps those paths
|
||||
trivial and free of dangling references.
|
||||
|
||||
### 8. Listing order: most-recent-first for games, oldest-first for child tables
|
||||
|
||||
**Decision.** `GetByStatus` and `GetByOwner` on `games` order by
|
||||
`created_at DESC, game_id DESC`. The per-game/per-user listings on
|
||||
`applications`, `invites`, `memberships` order by `created_at ASC,
|
||||
<id> ASC` (memberships order by `joined_at ASC`).
|
||||
|
||||
**Why.** Game listings serve user-facing feeds where most-recent-first
|
||||
is the natural expectation, matching the previous Redis sorted-set
|
||||
score and the `accounts.created_at DESC` convention from User Stage 3.
|
||||
Child-table listings serve administrative and cascade flows where the
|
||||
chronological order helps operators reason about the sequence of
|
||||
events. The ports doc explicitly says "order is adapter-defined", so
|
||||
either convention is contract-compatible.
|
||||
|
||||
### 9. Heavy `runtime_test.go` / `runtime_smoke_test.go` deleted; integration coverage

**Decision.** The service-local `internal/app/runtime_test.go` and
`runtime_smoke_test.go` were removed. Black-box runtime coverage moves
to the `integration/lobbyuser` and `integration/lobbynotification`
suites, which now spin up both a PostgreSQL container (via
`harness.StartLobbyServicePersistence`) and the existing Redis
container.

**Why.** Mirrors the Mail Stage 4 / Notification Stage 5 precedent.
Booting a full Lobby runtime now requires both PostgreSQL and Redis,
which is the integration-suite shape; duplicating that bootstrap
inside `internal/app/` would be heavy and fragile. The remaining
service-local tests cover units that do not require the full runtime.
### 10. Query layer is `go-jet/jet/v2`

**Decision.** All four PG-store packages build SQL through the jet
builder API (`pgtable.<Table>.INSERT/SELECT/UPDATE/DELETE` plus the
`pg.AND/OR/SET/COALESCE/...` DSL). Generated table models live under
`internal/adapters/postgres/jet/lobby/{model,table}/` and are
regenerated by `make jet` (which spins up a transient PostgreSQL via
testcontainers, applies the embedded goose migrations, and runs jet's
generator). Generated code is committed.

**Why.** Aligns with `PG_PLAN.md` §Library stack ("Query layer:
`github.com/go-jet/jet/v2` (PostgreSQL dialect). Generated code lives
under each service `internal/adapters/postgres/jet/`, regenerated via
a `make jet` target and committed to the repo"). PostgreSQL constructs
that the jet builder does not cover natively (`FOR UPDATE`,
`COALESCE`, `LOWER` on subselects, JSONB params) are expressed through
the dialect's DSL helpers (`.FOR(pg.UPDATE())`, `pg.COALESCE`,
`pg.LOWER`, direct `[]byte`/string params for JSONB columns). Manual
`rowScanner` helpers (`scanGame`, `scanApplication`, `scanInvite`,
`scanMembership`) preserve the codecs.go boundary translations and
domain-type mapping; jet only owns SQL construction.
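A minimal sketch of the scanner boundary this describes: jet builds the SQL, and a hand-written helper owns the raw-value-to-domain translation. The real `scanGame` works on jet/`database/sql` rows against the actual domain `Game` type, so the names, fields, and validation below are hypothetical stand-ins.

```go
package main

import "fmt"

// Hypothetical domain type; the real Game lives in the service's domain
// package and carries more fields.
type Game struct {
	GameID    string
	Status    string
	CreatedAt int64 // Unix milliseconds
}

// mapGameRow plays the scanGame role: raw column values in, domain type
// out, so SQL construction (jet) and domain mapping stay separate.
func mapGameRow(gameID, status string, createdAtMS int64) (Game, error) {
	if gameID == "" {
		return Game{}, fmt.Errorf("mapGameRow: empty game_id")
	}
	return Game{GameID: gameID, Status: status, CreatedAt: createdAtMS}, nil
}

func main() {
	g, err := mapGameRow("g-123", "enrollment_open", 1700000000000)
	fmt.Println(g.GameID, g.Status, err == nil)
	// prints: g-123 enrollment_open true
}
```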
## Out of scope for §6A

- Read routing through `LOBBY_POSTGRES_REPLICA_DSNS` — config exposes
  the field, runtime ignores it.
- Production provisioning of the `lobby` schema and `lobbyservice`
  role — operational concern handled outside the service binary.
## §6B — Race Name Directory on PostgreSQL

§6B replaces the Redis-backed Race Name Directory (one Lua script + a
canonical-lookup cache + a pending-index ZSET + per-binding string keys)
with a single PostgreSQL table `race_names` whose rows back all three
binding kinds (`registered`, `reservation`, `pending_registration`).
The `race_names` DDL lives in `00001_init.sql` next to the four core
enrollment tables (it was originally introduced as a separate
`00002_race_names.sql`; PG_PLAN.md §9 collapsed the two files into one
init migration during the pre-launch development window). The adapter
`internal/adapters/postgres/racenamedir/directory.go` is the canonical
reference; the architecture rule is unchanged from §6A.
### 11. One table, composite primary key `(canonical_key, game_id)`

**Decision.** `race_names` carries one row per binding under the
composite primary key `(canonical_key, game_id)`. Reservations and
pending_registrations write the actual game id; registered rows write
`game_id = ''` and keep the source game in `source_game_id`. A partial
UNIQUE index on `(canonical_key)` filtered to `binding_kind =
'registered'` enforces the single-registered-per-canonical rule.

**Why.** PG_PLAN.md §6B sketched the table as `(canonical_key PK, …)`,
but the existing port semantics (`testReserveCrossGame`,
`testReleaseReservationKeepsCrossGame` in
`internal/ports/racenamedirtest/suite.go`) require the same user to hold
several per-game reservations on one canonical key concurrently. A flat
single-PK table cannot model that without losing the per-game
identity. The composite PK matches both invariants — at most one row per
(canonical, game) and at most one registered row per canonical — without
splitting the data into two tables (which would force every write
operation to touch two unrelated indexes and reproduce the old
canonical-lookup cache invariant manually).
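Pulling decisions 11, 13, and 15 together, the table shape can be sketched as DDL. Only the names this document states (`race_names`, the composite PK, the `binding_kind` literals, `source_game_id`, `eligible_until_ms`, `race_names_pending_eligible_idx`) are grounded; the column types, the `holder_user_id` placement, and the registered-unique index name are assumptions, not the migration's actual text.

```sql
-- Hedged sketch of the race_names shape; see 00001_init.sql for the
-- authoritative DDL.
CREATE TABLE race_names (
    canonical_key     TEXT   NOT NULL,
    game_id           TEXT   NOT NULL,  -- '' for registered rows
    source_game_id    TEXT,
    holder_user_id    TEXT   NOT NULL,
    binding_kind      TEXT   NOT NULL
        CHECK (binding_kind IN
               ('registered', 'reservation', 'pending_registration')),
    eligible_until_ms BIGINT,
    PRIMARY KEY (canonical_key, game_id)
);

-- At most one registered row per canonical key (index name assumed).
CREATE UNIQUE INDEX race_names_registered_uq
    ON race_names (canonical_key)
    WHERE binding_kind = 'registered';

-- Serves the expirer scan in decision 15.
CREATE INDEX race_names_pending_eligible_idx
    ON race_names (eligible_until_ms)
    WHERE binding_kind = 'pending_registration';
```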
### 12. Concurrency: PostgreSQL transactional advisory locks

**Decision.** Every write operation (`Reserve`, `MarkPendingRegistration`,
`Register`, `ReleaseReservation`, the per-row branch of
`ExpirePendingRegistrations`) opens a `BEGIN; …; COMMIT` and acquires
`pg_advisory_xact_lock(hashtextextended($canonical_key, 0))` as the very
first statement. The lock auto-releases on commit or rollback.
`ReleaseAllByUser` is a single `DELETE WHERE holder_user_id = $1` and
takes no advisory lock — it runs on permanent_blocked / deleted
lifecycle events, so the user being deleted cannot be a concurrent
writer on those bindings.

**Why.** PG_PLAN.md §6B explicitly authorised either `SELECT … FOR
UPDATE` or advisory locks. `SELECT … FOR UPDATE` cannot serialize
against not-yet-existing rows (e.g. concurrent first-time `Reserve`s for
the same canonical), so advisory locks are required for race-free
INSERTs. Hashing through `hashtextextended` produces a full 64-bit lock
key covering arbitrary canonical strings, avoiding the narrower 32-bit
space of the older `hashtext`. Holding the lock for one transaction
keeps the contention surface tight and matches the Notification §5
"narrow CAS, no application-logic-bound row locks" precedent.
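The write-path transaction shape reads roughly as follows. This is a hand-written sketch with named `$placeholders` for readability; the real adapter issues jet-built statements with positional `$1`-style parameters, and the inserted column list follows the hedged DDL assumptions above rather than the actual migration.

```sql
-- Sketch of a Reserve call's transaction shape (decision 12).
BEGIN;

-- Serializes all writers on this canonical key, including writers about
-- to INSERT a row that does not exist yet; released at COMMIT/ROLLBACK.
SELECT pg_advisory_xact_lock(hashtextextended($canonical_key, 0));

INSERT INTO race_names
    (canonical_key, game_id, holder_user_id, binding_kind)
VALUES
    ($canonical_key, $game_id, $holder_user_id, 'reservation');

COMMIT;  -- advisory lock auto-releases here
```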
### 13. `binding_kind` values match `ports.Kind*` verbatim

**Decision.** `race_names.binding_kind` stores `"registered"`,
`"reservation"`, or `"pending_registration"` — the same string literals
exported by `ports.KindRegistered`, `ports.KindReservation`,
`ports.KindPendingRegistration`. The adapter returns the raw value
directly through `Availability.Kind` without translation. A `CHECK`
constraint on the column rejects anything else.

**Why.** Avoids one boundary translation and one synonym ("reserved" vs
"reservation") that the Redis adapter carried internally as
`reservationStatusReserved = "reserved"`. With the port-equivalent
literals on disk, future operator-side queries (`SELECT … WHERE
binding_kind = 'reservation'`) match the Go-level constants 1:1, and
the adapter saves a `switch` per `Check` call.
### 14. `Check` returns the strongest binding via in-process priority

**Decision.** `Check` issues `SELECT holder_user_id, binding_kind FROM
race_names WHERE canonical_key = $1` and picks the strongest binding in
Go using a priority rank `registered > pending_registration >
reservation`. There is no SQL `CASE` expression in the ORDER BY.

**Why.** The dataset per canonical is bounded (at most one registered +
one row per active game) and is read frequently by every `Check`. The
Go-side rank avoids a SQL DSL detour that go-jet/v2 would express via
raw SQL anyway, and it keeps the query plan a single index scan on
`canonical_key`.
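The in-process pick can be sketched in Go. Only the kind literals and the rank order come from the decision; the function names and row shape below are hypothetical.

```go
package main

import "fmt"

// kindRank implements the documented priority: registered >
// pending_registration > reservation. Higher rank wins.
func kindRank(kind string) int {
	switch kind {
	case "registered":
		return 3
	case "pending_registration":
		return 2
	case "reservation":
		return 1
	default:
		return 0
	}
}

// row mirrors the two columns Check selects.
type row struct{ holderUserID, bindingKind string }

// strongest picks the winning binding from the rows Check fetched for
// one canonical key; the bool reports whether any row existed.
func strongest(rows []row) (row, bool) {
	var best row
	found := false
	for _, r := range rows {
		if !found || kindRank(r.bindingKind) > kindRank(best.bindingKind) {
			best, found = r, true
		}
	}
	return best, found
}

func main() {
	rows := []row{
		{"u1", "reservation"},
		{"u2", "pending_registration"},
	}
	best, ok := strongest(rows)
	fmt.Println(ok, best.holderUserID, best.bindingKind)
	// prints: true u2 pending_registration
}
```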
### 15. `ExpirePendingRegistrations` scans then locks per row

**Decision.** The expirer first runs an indexed scan
`WHERE binding_kind = 'pending_registration' AND eligible_until_ms <=
$cutoff` (served by `race_names_pending_eligible_idx`), then re-reads
each candidate inside its own advisory-locked transaction, asserts the
binding is still pending and still expired, and DELETEs it. Concurrent
`Register` or `ReleaseReservation` simply causes the per-row branch to
skip without error.

**Why.** Mirrors the Redis adapter's two-phase `ZRANGEBYSCORE` +
per-member release loop. A bulk `DELETE … WHERE eligible_until_ms <= …`
would not produce the per-entry `ports.ExpiredPending` slice the worker
needs for telemetry, and would race with `Register` (which targets the
same row).
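The two-phase shape can be illustrated with an in-memory sketch. The real adapter runs phase 2 inside an advisory-locked transaction against PostgreSQL; the map-backed store, names, and types here are purely illustrative.

```go
package main

import "fmt"

// binding is a minimal stand-in for a race_names row.
type binding struct {
	kind            string
	eligibleUntilMS int64
}

// expire mirrors the two-phase shape: phase 1 collects candidates (the
// indexed scan), phase 2 re-checks each candidate and deletes it only if
// it is still pending and still expired, silently skipping rows that a
// concurrent Register or ReleaseReservation changed in between.
func expire(store map[string]binding, cutoffMS int64) []string {
	var candidates []string
	for key, b := range store {
		if b.kind == "pending_registration" && b.eligibleUntilMS <= cutoffMS {
			candidates = append(candidates, key)
		}
	}
	var expired []string
	for _, key := range candidates {
		b, ok := store[key]
		// Re-check before deleting; skip without error on any change.
		if !ok || b.kind != "pending_registration" || b.eligibleUntilMS > cutoffMS {
			continue
		}
		delete(store, key)
		expired = append(expired, key)
	}
	return expired
}

func main() {
	store := map[string]binding{
		"alpha": {"pending_registration", 100},
		"beta":  {"registered", 50}, // a concurrent Register already won
	}
	fmt.Println(expire(store, 200), len(store))
	// prints: [alpha] 1
}
```

The returned slice is the per-entry result the worker needs for telemetry, which is exactly what a bulk DELETE would not provide.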
### 16. Shared port test suite stays on PostgreSQL via a serial harness

**Decision.** The shared `racenamedirtest` suite no longer calls
`t.Parallel()` from its subtests. Every subtest goes through the
factory; the factory truncates the lobby tables and constructs a fresh
adapter against the package-shared testcontainers PostgreSQL.

**Why.** The PostgreSQL adapter relies on `pgtest.TruncateAll` between
factory invocations; running subtests in parallel against one shared
container would race truncate against other subtests' INSERTs. Spinning
up a per-subtest schema would multiply container provisioning cost
significantly (the PG generation step alone takes minutes per fresh
container), and the suite is fast enough serially. The Redis-only
backend retired in §6B no longer needs the parallelism either; only the
in-process stub remains in scope, and it has trivial setup cost.
## §6C — Workers, ephemeral stores, cleanup

§6C closes the Lobby migration: it confirms what intentionally stays on
Redis, prunes the dead Redis adapter code, and finalises the
service-layer documentation.
### 17. Workers stayed on ports — no functional change

**Decision.** The four Lobby workers (`pendingregistration`,
`gmevents`, `runtimejobresult`, `userlifecycle`) and the
`enrollmentautomation` worker shipped in §6A already consume their
storage through ports. After §6B the `RaceNameDirectory` port resolves
to the PostgreSQL adapter; no worker required code changes.

**Why.** §6A established the port-on-storage seam for `GameStore`,
`ApplicationStore`, `InviteStore`, `MembershipStore`. §6B kept the same
contract for `RaceNameDirectory`. Worker logic depends on the contract,
not the backend, so the migration completes via a wiring switch in
`internal/app/wiring.go::buildRaceNameDirectory` without re-touching
worker code.
### 18. `redisstate` retains only runtime-coordination adapters

**Decision.** After §6C the `internal/adapters/redisstate/` package
implements only `GameTurnStatsStore`, `GapActivationStore`,
`EvaluationGuardStore`, `StreamOffsetStore`, and the `StreamLagProbe`.
The legacy `racenamedir.go`, `racenamedir_lua.go`,
`racenamedir_test.go`, `codecs_racename.go`, and the dead game
codecs (`codecs.go`'s `MarshalGame`/`UnmarshalGame`) are removed. The
`Keyspace` type only builds keys for the surviving adapters
(`GapActivatedAt`, `StreamOffset`, `GameTurnStat`,
`GameTurnStatsByGame`, `CapabilityEvaluationGuard`).

**Why.** Architectural rule (`ARCHITECTURE.md §Persistence Backends`):
Redis owns runtime-coordination state, PostgreSQL owns durable business
state. The retained Redis stores back ephemeral per-game aggregates
(`game_turn_stats`), short-lived sentinels (`gap_activated_at`,
`capability_evaluation:done:*`), and the consumer-offset coordination
state (`stream_offsets:*`) — all rebuildable or losable without
durability impact. Streams stay on Redis because they *are* the event
bus.
### 19. Default Race Name Directory backend is `postgres`

**Decision.** `LOBBY_RACE_NAME_DIRECTORY_BACKEND` defaults to
`"postgres"`. The accepted values are `postgres` (production) and
`stub` (in-process for unit tests that do not need a real PostgreSQL).
The `redis` value, the corresponding `RaceNameDirectoryBackendRedis`
constant, and the wiring branch are removed.

**Why.** The Redis adapter is gone; keeping the value in the validator
would produce a misleading "configuration accepted, but startup fails
when wiring resolves the directory" path. Leaving `stub` as a valid
backend lets per-service unit tests run against a small, fast
in-process directory; integration suites use `postgres` via the
testcontainers harness.
+47
-18
@@ -7,8 +7,23 @@ readiness, shutdown, and the handful of recovery paths specific to Lobby.

Before starting the process, confirm:

- `LOBBY_REDIS_ADDR` points to the Redis deployment used for state and the
  five Lobby-related streams.
- `LOBBY_REDIS_MASTER_ADDR` and `LOBBY_REDIS_PASSWORD` point to the Redis
  deployment used for the runtime-coordination state that intentionally
  stays on Redis: stream consumers/publishers, stream offsets, per-game
  turn-stats aggregates, gap-activation timestamps, and the
  capability-evaluation guard. The deprecated `LOBBY_REDIS_ADDR`,
  `LOBBY_REDIS_USERNAME`, and `LOBBY_REDIS_TLS_ENABLED` env vars were
  retired in PG_PLAN.md §6A; setting either of the latter two now fails
  fast at startup.
- `LOBBY_POSTGRES_PRIMARY_DSN` points to the PostgreSQL primary that
  hosts the `lobby` schema. The DSN must include `search_path=lobby` and
  `sslmode=disable`. Embedded goose migrations apply at startup before
  any HTTP listener opens; a migration or ping failure terminates the
  process with a non-zero exit. After PG_PLAN.md §6A the schema holds
  `games`, `applications`, `invites`, `memberships`; after §6B it also
  holds `race_names`. The schema and the `lobbyservice` role are
  provisioned externally (operator init script in production, the
  testcontainers harness in tests).
- `LOBBY_USER_SERVICE_BASE_URL` and `LOBBY_GM_BASE_URL` are reachable from
  the network the Lobby pods run in. Lobby does not ping these at boot,
  but transport failures against them will surface as request errors.
@@ -19,11 +34,13 @@ Before starting the process, confirm:
- `LOBBY_RUNTIME_JOB_RESULTS_STREAM` (default `runtime:job_results`)
- `LOBBY_USER_LIFECYCLE_STREAM` (default `user:lifecycle_events`)
- `LOBBY_NOTIFICATION_INTENTS_STREAM` (default `notification:intents`)
- `LOBBY_RACE_NAME_DIRECTORY_BACKEND` is `redis` for production; the
  `stub` value is only for unit tests.
- `LOBBY_RACE_NAME_DIRECTORY_BACKEND` is `postgres` for production
  (the default after PG_PLAN.md §6B); the `stub` value is only for
  unit tests that do not need a real PostgreSQL.

At startup the process performs a bounded `PING` against Redis. Startup
fails fast if the ping fails. There are no liveness checks against User
At startup the process opens the PostgreSQL pool, applies migrations,
pings PostgreSQL, then opens the Redis client and pings Redis. Startup
fails fast if any step fails. There are no liveness checks against User
Service or Game Master at boot; those are surfaced at request time.

Expected listener state after a healthy start:
@@ -160,11 +177,15 @@ is reachable again.
To inspect the backlog:

```bash
redis-cli ZRANGE lobby:race_names:pending_index 0 -1 WITHSCORES
psql -c "SELECT canonical_key, game_id, holder_user_id, eligible_until_ms
         FROM lobby.race_names
         WHERE binding_kind = 'pending_registration'
         ORDER BY eligible_until_ms ASC"
```

Entries with `score < now()` (Unix milliseconds) are expirable on the next
tick.
Rows whose `eligible_until_ms` is at or below `extract(epoch from now()) * 1000`
are expirable on the next tick. The partial index
`race_names_pending_eligible_idx` keeps this scan cheap.
## Cascade Release Operator Notes

@@ -195,26 +216,34 @@ out-of-band.

## Diagnostic Queries

A handful of Redis CLI snippets help during incidents:
Durable enrollment state and Race Name Directory bindings live in
PostgreSQL; runtime coordination state stays in Redis. A handful of CLI
snippets help during incidents:

```bash
# Live game count by status
redis-cli ZCARD lobby:games_by_status:enrollment_open
redis-cli ZCARD lobby:games_by_status:running
# Live game count by status (PostgreSQL)
psql -c "SELECT status, COUNT(*) FROM lobby.games GROUP BY status"

# Inspect a specific game record
redis-cli GET lobby:games:<game_id>
psql -c "SELECT * FROM lobby.games WHERE game_id = '<game_id>'"

# Member roster for a game
redis-cli SMEMBERS lobby:game_memberships:<game_id>
psql -c "SELECT user_id, race_name, status, joined_at
         FROM lobby.memberships
         WHERE game_id = '<game_id>'
         ORDER BY joined_at"

# Race name pending entries (oldest first)
redis-cli ZRANGE lobby:race_names:pending_index 0 -1 WITHSCORES
psql -c "SELECT canonical_key, game_id, holder_user_id, eligible_until_ms
         FROM lobby.race_names
         WHERE binding_kind = 'pending_registration'
         ORDER BY eligible_until_ms ASC"

# Stream lag inspection
# Stream lag inspection (Redis)
redis-cli XINFO STREAM gm:lobby_events
redis-cli GET lobby:stream_offsets:gm_events
```

The gauges and counters surfaced through OpenTelemetry are the primary
observability surface; raw Redis access is for last-resort triage.
observability surface; raw PostgreSQL and Redis access is for last-resort
triage.
+19
-11
@@ -56,9 +56,10 @@ flowchart LR

Notes:

- `cmd/lobby` refuses startup when Redis connectivity is misconfigured. User
  Service and Game Master reachability are not verified at boot; transport
  failures surface as request errors.
- `cmd/lobby` refuses startup when Redis connectivity is misconfigured, when
  PostgreSQL is unreachable, or when the embedded goose migrations fail to
  apply. User Service and Game Master reachability are not verified at boot;
  transport failures surface as request errors.
- Both HTTP listeners expose `/healthz` and `/readyz` independently so health
  checks can target either port.
- `register-runtime` is an outgoing call from Lobby to Game Master after the
@@ -85,7 +86,7 @@ Probe routes:

- `GET /healthz` returns `{"status":"ok"}`
- `GET /readyz` returns `{"status":"ready"}` once startup wiring completes.
- Neither probe performs a live Redis ping per request.
- Neither probe performs a live Redis or PostgreSQL ping per request.
- There is no `/metrics` route. Metrics flow through OpenTelemetry exporters.

## Background Workers
@@ -130,13 +131,20 @@ lags or stalls, the gauge climbs and stays high.
The full env-var list with defaults lives in `../README.md` §Configuration.
The groups below summarize the structure:

- **Required** — `LOBBY_REDIS_ADDR`, `LOBBY_USER_SERVICE_BASE_URL`,
- **Required** — `LOBBY_REDIS_MASTER_ADDR`, `LOBBY_REDIS_PASSWORD`,
  `LOBBY_POSTGRES_PRIMARY_DSN`, `LOBBY_USER_SERVICE_BASE_URL`,
  `LOBBY_GM_BASE_URL`.
- **Process and logging** — `LOBBY_SHUTDOWN_TIMEOUT`, `LOBBY_LOG_LEVEL`.
- **HTTP listeners** — `LOBBY_PUBLIC_HTTP_*`, `LOBBY_INTERNAL_HTTP_*`.
- **Redis connectivity** — `LOBBY_REDIS_USERNAME`, `LOBBY_REDIS_PASSWORD`,
  `LOBBY_REDIS_DB`, `LOBBY_REDIS_TLS_ENABLED`,
  `LOBBY_REDIS_OPERATION_TIMEOUT`.
- **Redis connectivity** — `LOBBY_REDIS_MASTER_ADDR`,
  `LOBBY_REDIS_REPLICA_ADDRS`, `LOBBY_REDIS_PASSWORD`, `LOBBY_REDIS_DB`,
  `LOBBY_REDIS_OPERATION_TIMEOUT` (legacy `LOBBY_REDIS_ADDR`,
  `LOBBY_REDIS_TLS_ENABLED`, `LOBBY_REDIS_USERNAME` removed in PG_PLAN.md
  §6A).
- **PostgreSQL connectivity** — `LOBBY_POSTGRES_PRIMARY_DSN`,
  `LOBBY_POSTGRES_REPLICA_DSNS`, `LOBBY_POSTGRES_OPERATION_TIMEOUT`,
  `LOBBY_POSTGRES_MAX_OPEN_CONNS`, `LOBBY_POSTGRES_MAX_IDLE_CONNS`,
  `LOBBY_POSTGRES_CONN_MAX_LIFETIME`.
- **Streams** — `LOBBY_GM_EVENTS_STREAM`, `LOBBY_RUNTIME_START_JOBS_STREAM`,
  `LOBBY_RUNTIME_STOP_JOBS_STREAM`, `LOBBY_RUNTIME_JOB_RESULTS_STREAM`,
  `LOBBY_NOTIFICATION_INTENTS_STREAM`, `LOBBY_USER_LIFECYCLE_STREAM`.
@@ -152,9 +160,9 @@ The groups below summarize the structure:

- `Game Lobby` owns platform game state. Game Master may cache snapshots but
  is not the source of truth.
- The Race Name Directory ships a Redis adapter and an in-process stub; the
  stub is intended for unit tests and is selected via
  `LOBBY_RACE_NAME_DIRECTORY_BACKEND=stub`.
- The Race Name Directory ships a PostgreSQL adapter (default after
  PG_PLAN.md §6B) and an in-process stub. The stub is intended for unit
  tests and is selected via `LOBBY_RACE_NAME_DIRECTORY_BACKEND=stub`.
- A `permanent_block` or `deleted` event from User Service fans out
  asynchronously through the `user:lifecycle_events` consumer; in-flight
  games owned by the affected user receive a stop-job and transition to
+33
-11
@@ -3,15 +3,17 @@ module galaxy/lobby
go 1.26.1

require (
	galaxy/postgres v0.0.0-00010101000000-000000000000
	github.com/alicebob/miniredis/v2 v2.37.0
	github.com/disciplinedware/go-confusables v0.1.1
	github.com/getkin/kin-openapi v0.135.0
	github.com/redis/go-redis/extra/redisotel/v9 v9.18.0
	github.com/go-jet/jet/v2 v2.14.1
	github.com/jackc/pgx/v5 v5.9.2
	github.com/redis/go-redis/v9 v9.18.0
	github.com/robfig/cron/v3 v3.0.1
	github.com/stretchr/testify v1.11.1
	github.com/testcontainers/testcontainers-go v0.42.0
	github.com/testcontainers/testcontainers-go/modules/redis v0.42.0
	github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0
	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0
	go.opentelemetry.io/otel v1.43.0
	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0
@@ -28,6 +30,24 @@ require (
	golang.org/x/text v0.36.0
)

require (
	github.com/XSAM/otelsql v0.42.0 // indirect
	github.com/jackc/chunkreader/v2 v2.0.1 // indirect
	github.com/jackc/pgconn v1.14.3 // indirect
	github.com/jackc/pgio v1.0.0 // indirect
	github.com/jackc/pgpassfile v1.0.0 // indirect
	github.com/jackc/pgproto3/v2 v2.3.3 // indirect
	github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
	github.com/jackc/pgtype v1.14.4 // indirect
	github.com/jackc/puddle/v2 v2.2.2 // indirect
	github.com/lib/pq v1.10.9 // indirect
	github.com/mfridman/interpolate v0.0.2 // indirect
	github.com/pressly/goose/v3 v3.27.1 // indirect
	github.com/sethvargo/go-retry v0.3.0 // indirect
	go.uber.org/multierr v1.11.0 // indirect
	golang.org/x/sync v0.20.0 // indirect
)

require (
	dario.cat/mergo v1.0.2 // indirect
	galaxy/notificationintent v0.0.0
@@ -44,7 +64,7 @@ require (
	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
	github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
	github.com/distribution/reference v0.6.0 // indirect
	github.com/docker/go-connections v0.6.0 // indirect
	github.com/docker/go-connections v0.7.0 // indirect
	github.com/docker/go-units v0.5.0 // indirect
	github.com/ebitengine/purego v0.10.0 // indirect
	github.com/felixge/httpsnoop v1.0.4 // indirect
@@ -60,11 +80,10 @@ require (
	github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
	github.com/magiconair/properties v1.8.10 // indirect
	github.com/mailru/easyjson v0.7.7 // indirect
	github.com/mdelapenya/tlscert v0.2.0 // indirect
	github.com/moby/docker-image-spec v1.3.1 // indirect
	github.com/moby/go-archive v0.2.0 // indirect
	github.com/moby/moby/api v1.54.1 // indirect
	github.com/moby/moby/client v0.4.0 // indirect
	github.com/moby/moby/api v1.54.2 // indirect
	github.com/moby/moby/client v0.4.1 // indirect
	github.com/moby/patternmatcher v0.6.1 // indirect
	github.com/moby/sys/sequential v0.6.0 // indirect
	github.com/moby/sys/user v0.4.0 // indirect
@@ -78,7 +97,6 @@ require (
	github.com/perimeterx/marshmallow v1.1.5 // indirect
	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
	github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
	github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0 // indirect
	github.com/shirou/gopsutil/v4 v4.26.3 // indirect
	github.com/sirupsen/logrus v1.9.4 // indirect
	github.com/tklauser/go-sysconf v0.3.16 // indirect
@@ -91,14 +109,18 @@ require (
	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 // indirect
	go.opentelemetry.io/proto/otlp v1.10.0 // indirect
	go.uber.org/atomic v1.11.0 // indirect
	golang.org/x/crypto v0.49.0 // indirect
	golang.org/x/net v0.52.0 // indirect
	golang.org/x/sys v0.42.0 // indirect
	golang.org/x/crypto v0.50.0 // indirect
	golang.org/x/net v0.53.0 // indirect
	golang.org/x/sys v0.43.0 // indirect
	google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529 // indirect
	google.golang.org/grpc v1.80.0 // indirect
	google.golang.org/protobuf v1.36.11 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
)

replace galaxy/notificationintent => ../pkg/notificationintent

replace galaxy/postgres => ../pkg/postgres

replace galaxy/redisconn => ../pkg/redisconn
+254
-22
@@ -4,8 +4,12 @@ github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6 h1:He8af
|
||||
github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
|
||||
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=
|
||||
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
|
||||
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
||||
github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs=
|
||||
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
|
||||
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
|
||||
github.com/XSAM/otelsql v0.42.0 h1:Li0xF4eJUxG2e0x3D4rvRlys1f27yJKvjTh7ljkUP5o=
|
||||
github.com/XSAM/otelsql v0.42.0/go.mod h1:4mOrEv+cS1KmKzrvTktvJnstr5GtKSAK+QHvFR9OcpI=
|
||||
github.com/alicebob/miniredis/v2 v2.37.0 h1:RheObYW32G1aiJIj81XVt78ZHJpHonHLHW7OLIshq68=
|
||||
github.com/alicebob/miniredis/v2 v2.37.0/go.mod h1:TcL7YfarKPGDAthEtl5NBeHZfeUQj6OXMm/+iu5cLMM=
|
||||
github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
|
||||
@@ -18,6 +22,7 @@ github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1x
|
||||
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
|
||||
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
|
||||
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
|
||||
github.com/cockroachdb/apd v1.1.0/go.mod h1:8Sl8LxpKi29FqWXR16WEFZRNSz3SoPzUzeMeY4+DwBQ=
|
||||
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
|
||||
github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
|
||||
github.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE=
|
||||
@@ -26,10 +31,15 @@ github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
|
||||
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
|
||||
github.com/containerd/platforms v0.2.1 h1:zvwtM3rz2YHPQsF2CHYM8+KtB5dvhISiXh5ZpSBQv6A=
|
||||
github.com/containerd/platforms v0.2.1/go.mod h1:XHCb+2/hzowdiut9rkudds9bE5yJ7npe7dG/wG+uFPw=
|
||||
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
|
||||
github.com/coreos/go-systemd v0.0.0-20190719114852-fd7a80b32e1f/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
|
||||
github.com/cpuguy83/dockercfg v0.3.2 h1:DlJTyZGBDlXqUZ2Dk2Q3xHs/FtnooJJVaad2S9GKorA=
|
||||
github.com/cpuguy83/dockercfg v0.3.2/go.mod h1:sugsbF4//dDlL/i+S+rtpIWp+5h0BHJHfjj5/jFyUJc=
|
||||
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
|
||||
github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
|
||||
github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
|
||||
@@ -38,16 +48,22 @@ github.com/disciplinedware/go-confusables v0.1.1 h1:l/JVOsdrEDHo7nvL+tQfRO1F14Uy
|
||||
github.com/disciplinedware/go-confusables v0.1.1/go.mod h1:2hAXIAtpSqx+tMKdCzgRNv4J/kmz/oGfSHTBGJjVgfc=
|
||||
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
|
||||
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
|
||||
github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
|
||||
github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE=
|
||||
github.com/docker/go-connections v0.7.0 h1:6SsRfJddP22WMrCkj19x9WKjEDTB+ahsdiGYf0mN39c=
|
||||
github.com/docker/go-connections v0.7.0/go.mod h1:no1qkHdjq7kLMGUXYAduOhYPSJxxvgWBh7ogVvptn3Q=
|
||||
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
|
||||
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
|
||||
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
|
||||
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
|
||||
github.com/ebitengine/purego v0.10.0 h1:QIw4xfpWT6GWTzaW5XEKy3HXoqrJGx1ijYHzTF0/ISU=
|
||||
github.com/ebitengine/purego v0.10.0/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
|
||||
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
|
||||
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
|
||||
github.com/getkin/kin-openapi v0.135.0 h1:751SjYfbiwqukYuVjwYEIKNfrSwS5YpA7DZnKSwQgtg=
|
||||
github.com/getkin/kin-openapi v0.135.0/go.mod h1:6dd5FJl6RdX4usBtFBaQhk9q62Yb2J0Mk5IhUO/QqFI=
|
||||
github.com/go-jet/jet/v2 v2.14.1 h1:wsfD9e7CGP9h46+IFNlftfncBcmVnKddikbTtapQM3M=
|
||||
github.com/go-jet/jet/v2 v2.14.1/go.mod h1:dqTAECV2Mo3S2NFjbm4vJ1aDruZjhaJ1RAAR8rGUkkc=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
@@ -59,43 +75,123 @@ github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-test/deep v1.0.8 h1:TDsG77qcSprGbC6vTN8OuXp5g+J+b5Pcguhf7Zt61VM=
github.com/go-test/deep v1.0.8/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE=
github.com/gofrs/uuid v4.0.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 h1:HWRh5R2+9EifMyIHV7ZV+MIZqgz+PMpZ14Jynv3O2Zs=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0/go.mod h1:JfhWUomR1baixubs02l85lZYYOm7LV6om4ceouMv45c=
github.com/jackc/chunkreader v1.0.0/go.mod h1:RT6O25fNZIuasFJRyZ4R/Y2BbhasbmZXF9QQ7T3kePo=
github.com/jackc/chunkreader/v2 v2.0.0/go.mod h1:odVSm741yZoC3dpHEUXIqA9tQRhFrgOHwnPIn9lDKlk=
github.com/jackc/chunkreader/v2 v2.0.1 h1:i+RDz65UE+mmpjTfyz0MoVTnzeYxroil2G82ki7MGG8=
github.com/jackc/chunkreader/v2 v2.0.1/go.mod h1:odVSm741yZoC3dpHEUXIqA9tQRhFrgOHwnPIn9lDKlk=
github.com/jackc/pgconn v0.0.0-20190420214824-7e0022ef6ba3/go.mod h1:jkELnwuX+w9qN5YIfX0fl88Ehu4XC3keFuOJJk9pcnA=
github.com/jackc/pgconn v0.0.0-20190824142844-760dd75542eb/go.mod h1:lLjNuW/+OfW9/pnVKPazfWOgNfH2aPem8YQ7ilXGvJE=
github.com/jackc/pgconn v0.0.0-20190831204454-2fabfa3c18b7/go.mod h1:ZJKsE/KZfsUgOEh9hBm+xYTstcNHg7UPMVJqRfQxq4s=
github.com/jackc/pgconn v1.8.0/go.mod h1:1C2Pb36bGIP9QHGBYCjnyhqu7Rv3sGshaQUvmfGIB/o=
github.com/jackc/pgconn v1.9.0/go.mod h1:YctiPyvzfU11JFxoXokUOOKQXQmDMoJL9vJzHH8/2JY=
github.com/jackc/pgconn v1.9.1-0.20210724152538-d89c8390a530/go.mod h1:4z2w8XhRbP1hYxkpTuBjTS3ne3J48K83+u0zoyvg2pI=
github.com/jackc/pgconn v1.14.3 h1:bVoTr12EGANZz66nZPkMInAV/KHD2TxH9npjXXgiB3w=
github.com/jackc/pgconn v1.14.3/go.mod h1:RZbme4uasqzybK2RK5c65VsHxoyaml09lx3tXOcO/VM=
github.com/jackc/pgio v1.0.0 h1:g12B9UwVnzGhueNavwioyEEpAmqMe1E/BN9ES+8ovkE=
github.com/jackc/pgio v1.0.0/go.mod h1:oP+2QK2wFfUWgr+gxjoBH9KGBb31Eio69xUb0w5bYf8=
github.com/jackc/pgmock v0.0.0-20190831213851-13a1b77aafa2/go.mod h1:fGZlG77KXmcq05nJLRkk0+p82V8B8Dw8KN2/V9c/OAE=
github.com/jackc/pgmock v0.0.0-20201204152224-4fe30f7445fd/go.mod h1:hrBW0Enj2AZTNpt/7Y5rr2xe/9Mn757Wtb2xeBzPv2c=
github.com/jackc/pgmock v0.0.0-20210724152146-4ad1a8207f65 h1:DadwsjnMwFjfWc9y5Wi/+Zz7xoE5ALHsRQlOctkOiHc=
github.com/jackc/pgmock v0.0.0-20210724152146-4ad1a8207f65/go.mod h1:5R2h2EEX+qri8jOWMbJCtaPWkrrNc7OHwsp2TCqp7ak=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgproto3 v1.1.0/go.mod h1:eR5FA3leWg7p9aeAqi37XOTgTIbkABlvcPB3E5rlc78=
github.com/jackc/pgproto3/v2 v2.0.0-alpha1.0.20190420180111-c116219b62db/go.mod h1:bhq50y+xrl9n5mRYyCBFKkpRVTLYJVWeCc+mEAI3yXA=
github.com/jackc/pgproto3/v2 v2.0.0-alpha1.0.20190609003834-432c2951c711/go.mod h1:uH0AWtUmuShn0bcesswc4aBTWGvw0cAxIJp+6OB//Wg=
github.com/jackc/pgproto3/v2 v2.0.0-rc3/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM=
github.com/jackc/pgproto3/v2 v2.0.0-rc3.0.20190831210041-4c03ce451f29/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM=
github.com/jackc/pgproto3/v2 v2.0.6/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
github.com/jackc/pgproto3/v2 v2.1.1/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
github.com/jackc/pgproto3/v2 v2.3.3 h1:1HLSx5H+tXR9pW3in3zaztoEwQYRC9SQaYUHjTSUOag=
github.com/jackc/pgproto3/v2 v2.3.3/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
github.com/jackc/pgservicefile v0.0.0-20200714003250-2b9c44734f2b/go.mod h1:vsD4gTJCa9TptPL8sPkXrLZ+hDuNrZCnj29CQpr4X1E=
github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
github.com/jackc/pgtype v0.0.0-20190421001408-4ed0de4755e0/go.mod h1:hdSHsc1V01CGwFsrv11mJRHWJ6aifDLfdV3aVjFF0zg=
github.com/jackc/pgtype v0.0.0-20190824184912-ab885b375b90/go.mod h1:KcahbBH1nCMSo2DXpzsoWOAfFkdEtEJpPbVLq8eE+mc=
github.com/jackc/pgtype v0.0.0-20190828014616-a8802b16cc59/go.mod h1:MWlu30kVJrUS8lot6TQqcg7mtthZ9T0EoIBFiJcmcyw=
github.com/jackc/pgtype v1.8.1-0.20210724151600-32e20a603178/go.mod h1:C516IlIV9NKqfsMCXTdChteoXmwgUceqaLfjg2e3NlM=
github.com/jackc/pgtype v1.14.0/go.mod h1:LUMuVrfsFfdKGLw+AFFVv6KtHOFMwRgDDzBt76IqCA4=
github.com/jackc/pgtype v1.14.4 h1:fKuNiCumbKTAIxQwXfB/nsrnkEI6bPJrrSiMKgbJ2j8=
github.com/jackc/pgtype v1.14.4/go.mod h1:aKeozOde08iifGosdJpz9MBZonJOUJxqNpPBcMJTlVA=
github.com/jackc/pgx/v4 v4.0.0-20190420224344-cc3461e65d96/go.mod h1:mdxmSJJuR08CZQyj1PVQBHy9XOp5p8/SHH6a0psbY9Y=
github.com/jackc/pgx/v4 v4.0.0-20190421002000-1b8f0016e912/go.mod h1:no/Y67Jkk/9WuGR0JG/JseM9irFbnEPbuWV2EELPNuM=
github.com/jackc/pgx/v4 v4.0.0-pre1.0.20190824185557-6972a5742186/go.mod h1:X+GQnOEnf1dqHGpw7JmHqHc1NxDoalibchSk9/RWuDc=
github.com/jackc/pgx/v4 v4.12.1-0.20210724153913-640aa07df17c/go.mod h1:1QD0+tgSXP7iUjYm9C1NxKhny7lq6ee99u/z+IHFcgs=
github.com/jackc/pgx/v4 v4.18.2/go.mod h1:Ey4Oru5tH5sB6tV7hDmfWFahwF15Eb7DNXlRKx2CkVw=
github.com/jackc/pgx/v4 v4.18.3 h1:dE2/TrEsGX3RBprb3qryqSV9Y60iZN1C6i8IrmW9/BA=
github.com/jackc/pgx/v4 v4.18.3/go.mod h1:Ey4Oru5tH5sB6tV7hDmfWFahwF15Eb7DNXlRKx2CkVw=
github.com/jackc/pgx/v5 v5.9.2 h1:3ZhOzMWnR4yJ+RW1XImIPsD1aNSz4T4fyP7zlQb56hw=
github.com/jackc/pgx/v5 v5.9.2/go.mod h1:mal1tBGAFfLHvZzaYh77YS/eC6IX9OWbRV1QIIM0Jn4=
github.com/jackc/puddle v0.0.0-20190413234325-e4ced69a3a2b/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v0.0.0-20190608224051-11cab39313c9/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v1.1.3/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v1.3.0/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE=
github.com/klauspost/compress v1.18.5/go.mod h1:cwPg85FWrGar70rWktvGQj8/hthj3wpl0PGDogxkrSQ=
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.8/go.mod h1:O1sed60cT9XZ5uDucP5qwvh+TE3NnUj51EiZO/lmSfw=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.1.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.10.2/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/magiconair/properties v1.8.10 h1:s31yESBquKXCV9a/ScB3ESkOjUYYv+X0rg8SYxI99mE=
github.com/magiconair/properties v1.8.10/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=
github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.7/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.21 h1:xYae+lCNBP7QuW4PUnNG61ffM4hVIfm+zUzDuSzYLGs=
github.com/mattn/go-isatty v0.0.21/go.mod h1:ZXfXG4SQHsB/w3ZeOYbR0PrPwLy+n6xiMrJlRFqopa4=
github.com/mdelapenya/tlscert v0.2.0 h1:7H81W6Z/4weDvZBNOfQte5GpIMo0lGYEeWbkGp5LJHI=
github.com/mdelapenya/tlscert v0.2.0/go.mod h1:O4njj3ELLnJjGdkN7M/vIVCpZ+Cf0L6muqOG4tLSl8o=
github.com/mfridman/interpolate v0.0.2 h1:pnuTK7MQIxxFz1Gr+rjSIx9u7qVjf5VOoM/u6BbAxPY=
github.com/mfridman/interpolate v0.0.2/go.mod h1:p+7uk6oE07mpE/Ik1b8EckO0O4ZXiGAfshKBWLUM9Xg=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/moby/go-archive v0.2.0 h1:zg5QDUM2mi0JIM9fdQZWC7U8+2ZfixfTYoHL7rWUcP8=
github.com/moby/go-archive v0.2.0/go.mod h1:mNeivT14o8xU+5q1YnNrkQVpK+dnNe/K6fHqnTg4qPU=
github.com/moby/moby/api v1.54.1 h1:TqVzuJkOLsgLDDwNLmYqACUuTehOHRGKiPhvH8V3Nn4=
github.com/moby/moby/api v1.54.1/go.mod h1:+RQ6wluLwtYaTd1WnPLykIDPekkuyD/ROWQClE83pzs=
github.com/moby/moby/client v0.4.0 h1:S+2XegzHQrrvTCvF6s5HFzcrywWQmuVnhOXe2kiWjIw=
github.com/moby/moby/client v0.4.0/go.mod h1:QWPbvWchQbxBNdaLSpoKpCdf5E+WxFAgNHogCWDoa7g=
github.com/moby/moby/api v1.54.2 h1:wiat9QAhnDQjA7wk1kh/TqHz2I1uUA7M7t9SAl/JNXg=
github.com/moby/moby/api v1.54.2/go.mod h1:+RQ6wluLwtYaTd1WnPLykIDPekkuyD/ROWQClE83pzs=
github.com/moby/moby/client v0.4.1 h1:DMQgisVoMkmMs7fp3ROSdiBnoAu8+vo3GggFl06M/wY=
github.com/moby/moby/client v0.4.1/go.mod h1:z52C9O2POPOsnxZAy//WtKcQ32P+jT/NGeXu/7nfjGQ=
github.com/moby/patternmatcher v0.6.1 h1:qlhtafmr6kgMIJjKJMDmMWq7WLkKIo23hsrpR3x084U=
github.com/moby/patternmatcher v0.6.1/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc=
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
@@ -108,6 +204,8 @@ github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9M+97sNutRR1RKhG96O6jWumTTnw=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/oasdiff/yaml v0.0.9 h1:zQOvd2UKoozsSsAknnWoDJlSK4lC0mpmjfDsfqNwX48=
github.com/oasdiff/yaml v0.0.9/go.mod h1:8lvhgJG4xiKPj3HN5lDow4jZHPlx1i7dIwzkdAo6oAM=
github.com/oasdiff/yaml3 v0.0.9 h1:rWPrKccrdUm8J0F3sGuU+fuh9+1K/RdJlWF7O/9yw2g=
@@ -118,32 +216,58 @@ github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJw
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s=
github.com/perimeterx/marshmallow v1.1.5/go.mod h1:dsXbUu8CRzfYP5a87xpp0xq9S3u0Vchtcl8we9tYaXw=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0 h1:QY4nmPHLFAJjtT5O4OMUEOxP8WVaRNOFpcbmxT2NLZU=
github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0/go.mod h1:WH8cY/0fT41Bsf341qzo8v4nx0GCE8FykAA23IVbVmo=
github.com/redis/go-redis/extra/redisotel/v9 v9.18.0 h1:2dKdoEYBJ0CZCLPiCdvvc7luz3DPwY6hKdzjL6m1eHE=
github.com/redis/go-redis/extra/redisotel/v9 v9.18.0/go.mod h1:WzkrVG9ro9BwCQD0eJOWn6AGL4Z1CleGflM45w1hu10=
github.com/pressly/goose/v3 v3.27.1 h1:6uEvcprBybDmW4hcz3gYujhARhye+GoWKhEWyzD5sh4=
github.com/pressly/goose/v3 v3.27.1/go.mod h1:maruOxsPnIG2yHHyo8UqKWXYKFcH7Q76csUV7+7KYoM=
github.com/redis/go-redis/v9 v9.18.0 h1:pMkxYPkEbMPwRdenAzUNyFNrDgHx9U+DrBabWNfSRQs=
github.com/redis/go-redis/v9 v9.18.0/go.mod h1:k3ufPphLU5YXwNTUcCRXGxUoF1fqxnhFQmscfkCoDA0=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/zerolog v1.13.0/go.mod h1:YbFCdg8HfsridGWAh22vktObvhZbQsZXe4/zB0OKkWU=
github.com/rs/zerolog v1.15.0/go.mod h1:xYTKnLHcpfU2225ny5qZjxnj9NvkumZYjJHlAThCjNc=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sethvargo/go-retry v0.3.0 h1:EEt31A35QhrcRZtrYFDTBg91cqZVnFL2navjDrah2SE=
github.com/sethvargo/go-retry v0.3.0/go.mod h1:mNX17F0C/HguQMyMyJxcnU471gOZGxCLyYaFyAZraas=
github.com/shirou/gopsutil/v4 v4.26.3 h1:2ESdQt90yU3oXF/CdOlRCJxrP+Am1aBYubTMTfxJ1qc=
github.com/shirou/gopsutil/v4 v4.26.3/go.mod h1:LZ6ewCSkBqUpvSOf+LsTGnRinC6iaNUNMGBtDkJBaLQ=
github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24/go.mod h1:M+9NzErvs504Cn4c5DxATwIqPbtswREoFCre64PpcG4=
github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w=
github.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.3 h1:jmXUvGomnU1o3W/V5h2VEradbpJDwGrzugQQvL0POH4=
github.com/stretchr/objx v0.5.3/go.mod h1:rDQraq+vQZU7Fde9LOZLr8Tax6zZvy4kuNKF+QYS+U0=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/testcontainers/testcontainers-go v0.42.0 h1:He3IhTzTZOygSXLJPMX7n44XtK+qhjat1nI9cneBbUY=
github.com/testcontainers/testcontainers-go v0.42.0/go.mod h1:vZjdY1YmUA1qEForxOIOazfsrdyORJAbhi0bp8plN30=
github.com/testcontainers/testcontainers-go/modules/redis v0.42.0 h1:id/6LH8ZeDrtAUVSuNvZUAJ1kVpb82y1pr9yweAWsRg=
github.com/testcontainers/testcontainers-go/modules/redis v0.42.0/go.mod h1:uF0jI8FITagQpBNOgweGBmPf6rP4K0SeL1XFPbsZSSY=
github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0 h1:GCbb1ndrF7OTDiIvxXyItaDab4qkzTFJ48LKFdM7EIo=
github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0/go.mod h1:IRPBaI8jXdrNfD0e4Zm7Fbcgaz5shKxOQv4axiL09xs=
github.com/tklauser/go-sysconf v0.3.16 h1:frioLaCQSsF5Cy1jgRBrzr6t502KIIwQ0MArYICU0nA=
github.com/tklauser/go-sysconf v0.3.16/go.mod h1:/qNL9xxDhc7tx3HSRsLWNnuzbVfh3e7gh/BmM179nYI=
github.com/tklauser/numcpus v0.11.0 h1:nSTwhKH5e1dMNsCdVBukSZrURJRoHbSEQjdEbY+9RXw=
@@ -152,12 +276,14 @@ github.com/ugorji/go/codec v1.3.1 h1:waO7eEiFDwidsBN6agj1vJQ4AG7lh2yqXyOXqhgQuyY
github.com/ugorji/go/codec v1.3.1/go.mod h1:pRBVtBSKl77K30Bv8R2P+cLSGaTtex6fsA2Wjqmfxj4=
github.com/woodsbury/decimal128 v1.3.0 h1:8pffMNWIlC0O5vbyHWFZAt5yWvWcrHA+3ovIIjVWss0=
github.com/woodsbury/decimal128 v1.3.0/go.mod h1:C5UTmyTjW3JftjUFzOVhC20BEQa2a4ZKOB5I6Zjb+ds=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yuin/gopher-lua v1.1.1 h1:kYKnWBjvbNP4XLT3+bPEwAXJx262OhaHDWDVOPjL46M=
github.com/yuin/gopher-lua v1.1.1/go.mod h1:GBR0iDaNXjAgGg9zfCvksxSRnQx76gclCIb7kdAd1Pw=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0=
github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA=
github.com/zenazn/goji v0.9.0/go.mod h1:7S9M489iMyHBNxwZnk9/EHS098H4/F6TATF2mIxtB1Q=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0 h1:CqXxU8VOmDefoh0+ztfGaymYbhdB/tT3zs79QaZTNGY=
@@ -188,42 +314,148 @@ go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09
go.opentelemetry.io/otel/trace v1.43.0/go.mod h1:/QJhyVBUUswCphDVxq+8mld+AvhXZLhe+8WVFxiFff0=
go.opentelemetry.io/proto/otlp v1.10.0 h1:IQRWgT5srOCYfiWnpqUYz9CVmbO8bFmKcwYxpuCSL2g=
go.opentelemetry.io/proto/otlp v1.10.0/go.mod h1:/CV4QoCR/S9yaPj8utp3lvQPoqMtxXdzn7ozvvozVqk=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
go.uber.org/zap v1.9.1/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190411191339-88737f569e3a/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201203163018-be400aefbc4c/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.20.0/go.mod h1:Xwo95rrVNIoSMx9wa1JroENMToLWn3RNVrTBpLHgZPQ=
golang.org/x/crypto v0.50.0 h1:zO47/JPrL6vsNkINmLoo/PH1gcxpls50DNogFvB5ZGI=
golang.org/x/crypto v0.50.0/go.mod h1:3muZ7vA7PBCE6xgPX7nkzzjiUq87kRItoJQM1Yo8S+Q=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.35.0 h1:Ww1D637e6Pg+Zb2KrWfHQUnH2dQRLBQyAtpr/haaJeM=
golang.org/x/mod v0.35.0/go.mod h1:+GwiRhIInF8wPm+4AoT6L0FA1QWAad3OMdTRx4tFYlU=
golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0=
golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.53.0 h1:d+qAbo5L0orcWAr0a9JweQpjXF19LMXJE8Ey7hwOdUA=
golang.org/x/net v0.53.0/go.mod h1:JvMuJH7rrdiCfbeHoo3fCQU24Lf5JJwT9W3sJFulfgs=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=
golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo=
golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
golang.org/x/term v0.41.0 h1:QCgPso/Q3RTJx2Th4bDLqML4W6iJiaXFq2/ftQF13YU=
golang.org/x/term v0.41.0/go.mod h1:3pfBgksrReYfZ5lvYM0kSO0LIkAl4Yl2bXOkKP7Ec2A=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=
golang.org/x/sys v0.43.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.42.0 h1:UiKe+zDFmJobeJ5ggPwOshJIVt6/Ft0rcfrXZDLWAWY=
golang.org/x/term v0.42.0/go.mod h1:Dq/D+snpsbazcBG5+F9Q1n2rXV8Ma+71xEjTRufARgY=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
|
||||
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
|
||||
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
|
||||
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
|
||||
golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg=
|
||||
golang.org/x/text v0.36.0/go.mod h1:NIdBknypM8iqVmPiuco0Dh6P5Jcdk8lJL0CUebqK164=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190425163242-31fd60d6bfdc/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||
golang.org/x/tools v0.0.0-20190823170909-c4a336ef6a2f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20200103221440-774c71fcf114/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
||||
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
|
||||
golang.org/x/xerrors v0.0.0-20190410155217-1f06c39b4373/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
|
||||
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 h1:VPWxll4HlMw1Vs/qXtN7BvhZqsS9cdAittCNvVENElA=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:7QBABkRtR8z+TEnmXTqIqwJLlzrZKVfAUm7tY3yGv0M=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 h1:m8qni9SQFH0tJc1X0vmnpw/0t+AImlSvp30sEupozUg=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529 h1:XF8+t6QQiS0o9ArVan/HW8Q7cycNPGsJf6GA2nXxYAg=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
|
||||
google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM=
|
||||
google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4=
|
||||
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
|
||||
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
|
||||
gopkg.in/inconshreveable/log15.v2 v2.0.0-20180818164646-67afb5ed74ec/go.mod h1:aPpfJ7XW+gOuirDoZ8gHhLh3kZ1B08FtV2bbmy7Jv3s=
|
||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
|
||||
gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
|
||||
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
|
||||
modernc.org/libc v1.72.1 h1:db1xwJ6u1kE3KHTFTTbe2GCrczHPKzlURP0aDC4NGD0=
|
||||
modernc.org/libc v1.72.1/go.mod h1:HRMiC/PhPGLIPM7GzAFCbI+oSgE3dhZ8FWftmRrHVlY=
|
||||
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
|
||||
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
|
||||
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
|
||||
modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
|
||||
modernc.org/sqlite v1.49.1 h1:dYGHTKcX1sJ+EQDnUzvz4TJ5GbuvhNJa8Fg6ElGx73U=
|
||||
modernc.org/sqlite v1.49.1/go.mod h1:m0w8xhwYUVY3H6pSDwc3gkJ/irZT/0YEXwBlhaxQEew=
|
||||
pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
|
||||
pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=
|
||||
|
||||
@@ -0,0 +1,310 @@
// Package applicationstore implements the PostgreSQL-backed adapter for
// `ports.ApplicationStore`.
//
// PG_PLAN.md §6A migrates Game Lobby Service away from Redis-backed durable
// application records; see `galaxy/lobby/docs/postgres-migration.md` for
// the full decision record.
package applicationstore

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"strings"
	"time"

	"galaxy/lobby/internal/adapters/postgres/internal/sqlx"
	pgtable "galaxy/lobby/internal/adapters/postgres/jet/lobby/table"
	"galaxy/lobby/internal/domain/application"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/ports"

	pg "github.com/go-jet/jet/v2/postgres"
)

// Config configures one PostgreSQL-backed application store instance.
type Config struct {
	// DB stores the connection pool the store uses for every query.
	DB *sql.DB

	// OperationTimeout bounds one round trip.
	OperationTimeout time.Duration
}

// Store persists Game Lobby application records in PostgreSQL.
type Store struct {
	db               *sql.DB
	operationTimeout time.Duration
}

// New constructs one PostgreSQL-backed application store from cfg.
func New(cfg Config) (*Store, error) {
	if cfg.DB == nil {
		return nil, errors.New("new postgres application store: db must not be nil")
	}
	if cfg.OperationTimeout <= 0 {
		return nil, errors.New("new postgres application store: operation timeout must be positive")
	}
	return &Store{
		db:               cfg.DB,
		operationTimeout: cfg.OperationTimeout,
	}, nil
}

// applicationSelectColumns is the canonical SELECT list for the applications
// table, matching scanApplication's column order.
var applicationSelectColumns = pg.ColumnList{
	pgtable.Applications.ApplicationID,
	pgtable.Applications.GameID,
	pgtable.Applications.ApplicantUserID,
	pgtable.Applications.RaceName,
	pgtable.Applications.Status,
	pgtable.Applications.CreatedAt,
	pgtable.Applications.DecidedAt,
}

// Save persists a new submitted application record. The single-active
// constraint is enforced by the partial unique index
// `applications_active_per_user_game_uidx`.
func (store *Store) Save(ctx context.Context, record application.Application) error {
	if store == nil || store.db == nil {
		return errors.New("save application: nil store")
	}
	if err := record.Validate(); err != nil {
		return fmt.Errorf("save application: %w", err)
	}
	if record.Status != application.StatusSubmitted {
		return fmt.Errorf(
			"save application: status must be %q, got %q",
			application.StatusSubmitted, record.Status,
		)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "save application", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	stmt := pgtable.Applications.INSERT(
		pgtable.Applications.ApplicationID,
		pgtable.Applications.GameID,
		pgtable.Applications.ApplicantUserID,
		pgtable.Applications.RaceName,
		pgtable.Applications.Status,
		pgtable.Applications.CreatedAt,
		pgtable.Applications.DecidedAt,
	).VALUES(
		record.ApplicationID.String(),
		record.GameID.String(),
		record.ApplicantUserID,
		record.RaceName,
		string(record.Status),
		record.CreatedAt.UTC(),
		sqlx.NullableTimePtr(record.DecidedAt),
	)

	query, args := stmt.Sql()
	if _, err := store.db.ExecContext(operationCtx, query, args...); err != nil {
		if sqlx.IsUniqueViolation(err) {
			return fmt.Errorf("save application: %w", application.ErrConflict)
		}
		return fmt.Errorf("save application: %w", err)
	}
	return nil
}

// Get returns the record identified by applicationID. It returns
// application.ErrNotFound when no record exists.
func (store *Store) Get(ctx context.Context, applicationID common.ApplicationID) (application.Application, error) {
	if store == nil || store.db == nil {
		return application.Application{}, errors.New("get application: nil store")
	}
	if err := applicationID.Validate(); err != nil {
		return application.Application{}, fmt.Errorf("get application: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get application", store.operationTimeout)
	if err != nil {
		return application.Application{}, err
	}
	defer cancel()

	stmt := pg.SELECT(applicationSelectColumns).
		FROM(pgtable.Applications).
		WHERE(pgtable.Applications.ApplicationID.EQ(pg.String(applicationID.String())))

	query, args := stmt.Sql()
	row := store.db.QueryRowContext(operationCtx, query, args...)
	record, err := scanApplication(row)
	if sqlx.IsNoRows(err) {
		return application.Application{}, application.ErrNotFound
	}
	if err != nil {
		return application.Application{}, fmt.Errorf("get application: %w", err)
	}
	return record, nil
}

// GetByGame returns every application attached to gameID. Sorted by
// created_at ASC then application_id ASC.
func (store *Store) GetByGame(ctx context.Context, gameID common.GameID) ([]application.Application, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get applications by game: nil store")
	}
	if err := gameID.Validate(); err != nil {
		return nil, fmt.Errorf("get applications by game: %w", err)
	}

	stmt := pg.SELECT(applicationSelectColumns).
		FROM(pgtable.Applications).
		WHERE(pgtable.Applications.GameID.EQ(pg.String(gameID.String()))).
		ORDER_BY(pgtable.Applications.CreatedAt.ASC(), pgtable.Applications.ApplicationID.ASC())

	return store.queryList(ctx, "get applications by game", stmt)
}

// GetByUser returns every application submitted by applicantUserID.
func (store *Store) GetByUser(ctx context.Context, applicantUserID string) ([]application.Application, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get applications by user: nil store")
	}
	trimmed := strings.TrimSpace(applicantUserID)
	if trimmed == "" {
		return nil, errors.New("get applications by user: applicant user id must not be empty")
	}

	stmt := pg.SELECT(applicationSelectColumns).
		FROM(pgtable.Applications).
		WHERE(pgtable.Applications.ApplicantUserID.EQ(pg.String(trimmed))).
		ORDER_BY(pgtable.Applications.CreatedAt.ASC(), pgtable.Applications.ApplicationID.ASC())

	return store.queryList(ctx, "get applications by user", stmt)
}

func (store *Store) queryList(ctx context.Context, operation string, stmt pg.SelectStatement) ([]application.Application, error) {
	operationCtx, cancel, err := sqlx.WithTimeout(ctx, operation, store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	query, args := stmt.Sql()
	rows, err := store.db.QueryContext(operationCtx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("%s: %w", operation, err)
	}
	defer rows.Close()

	records := make([]application.Application, 0)
	for rows.Next() {
		record, err := scanApplication(rows)
		if err != nil {
			return nil, fmt.Errorf("%s: scan: %w", operation, err)
		}
		records = append(records, record)
	}
	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("%s: %w", operation, err)
	}
	if len(records) == 0 {
		return nil, nil
	}
	return records, nil
}

// UpdateStatus applies one status transition with compare-and-swap on the
// current status column.
func (store *Store) UpdateStatus(ctx context.Context, input ports.UpdateApplicationStatusInput) error {
	if store == nil || store.db == nil {
		return errors.New("update application status: nil store")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update application status: %w", err)
	}
	if err := application.Transition(input.ExpectedFrom, input.To); err != nil {
		return err
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update application status", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	at := input.At.UTC()
	stmt := pgtable.Applications.UPDATE(pgtable.Applications.Status, pgtable.Applications.DecidedAt).
		SET(string(input.To), at).
		WHERE(pg.AND(
			pgtable.Applications.ApplicationID.EQ(pg.String(input.ApplicationID.String())),
			pgtable.Applications.Status.EQ(pg.String(string(input.ExpectedFrom))),
		))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update application status: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update application status: rows affected: %w", err)
	}
	if affected == 0 {
		probe := pg.SELECT(pgtable.Applications.Status).
			FROM(pgtable.Applications).
			WHERE(pgtable.Applications.ApplicationID.EQ(pg.String(input.ApplicationID.String())))
		probeQuery, probeArgs := probe.Sql()

		var current string
		row := store.db.QueryRowContext(operationCtx, probeQuery, probeArgs...)
		if err := row.Scan(&current); err != nil {
			if sqlx.IsNoRows(err) {
				return application.ErrNotFound
			}
			return fmt.Errorf("update application status: probe: %w", err)
		}
		return fmt.Errorf("update application status: %w", application.ErrConflict)
	}
	return nil
}

type rowScanner interface {
	Scan(dest ...any) error
}

func scanApplication(rs rowScanner) (application.Application, error) {
	var (
		applicationID   string
		gameID          string
		applicantUserID string
		raceName        string
		status          string
		createdAt       time.Time
		decidedAt       sql.NullTime
	)
	if err := rs.Scan(
		&applicationID,
		&gameID,
		&applicantUserID,
		&raceName,
		&status,
		&createdAt,
		&decidedAt,
	); err != nil {
		return application.Application{}, err
	}
	return application.Application{
		ApplicationID:   common.ApplicationID(applicationID),
		GameID:          common.GameID(gameID),
		ApplicantUserID: applicantUserID,
		RaceName:        raceName,
		Status:          application.Status(status),
		CreatedAt:       createdAt.UTC(),
		DecidedAt:       sqlx.TimePtrFromNullable(decidedAt),
	}, nil
}

// Ensure Store satisfies the ports.ApplicationStore interface at compile
// time.
var _ ports.ApplicationStore = (*Store)(nil)
@@ -0,0 +1,194 @@
package applicationstore_test

import (
	"context"
	"testing"
	"time"

	"galaxy/lobby/internal/adapters/postgres/applicationstore"
	"galaxy/lobby/internal/adapters/postgres/gamestore"
	"galaxy/lobby/internal/adapters/postgres/internal/pgtest"
	"galaxy/lobby/internal/domain/application"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/game"
	"galaxy/lobby/internal/ports"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestMain(m *testing.M) { pgtest.RunMain(m) }

func newStores(t *testing.T) (*gamestore.Store, *applicationstore.Store) {
	t.Helper()
	pgtest.TruncateAll(t)
	gs, err := gamestore.New(gamestore.Config{
		DB: pgtest.Ensure(t).Pool(), OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	as, err := applicationstore.New(applicationstore.Config{
		DB: pgtest.Ensure(t).Pool(), OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	return gs, as
}

func seedGame(t *testing.T, gs *gamestore.Store, id string) game.Game {
	t.Helper()
	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	g, err := game.New(game.NewGameInput{
		GameID:              common.GameID(id),
		GameName:            "Game " + id,
		GameType:            game.GameTypePublic,
		MinPlayers:          2,
		MaxPlayers:          8,
		StartGapHours:       12,
		StartGapPlayers:     2,
		EnrollmentEndsAt:    now.Add(7 * 24 * time.Hour),
		TurnSchedule:        "0 18 * * *",
		TargetEngineVersion: "v1.0.0",
		Now:                 now,
	})
	require.NoError(t, err)
	require.NoError(t, gs.Save(context.Background(), g))
	return g
}

func newApplication(t *testing.T, id, gameID, userID string) application.Application {
	t.Helper()
	a, err := application.New(application.NewApplicationInput{
		ApplicationID:   common.ApplicationID(id),
		GameID:          common.GameID(gameID),
		ApplicantUserID: userID,
		RaceName:        "Pilot " + id,
		Now:             time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC),
	})
	require.NoError(t, err)
	return a
}

func TestSaveAndGet(t *testing.T) {
	ctx := context.Background()
	gs, as := newStores(t)
	seedGame(t, gs, "game-001")

	rec := newApplication(t, "application-001", "game-001", "user-a")
	require.NoError(t, as.Save(ctx, rec))

	got, err := as.Get(ctx, rec.ApplicationID)
	require.NoError(t, err)
	assert.Equal(t, rec.ApplicationID, got.ApplicationID)
	assert.Equal(t, application.StatusSubmitted, got.Status)
	assert.Equal(t, "user-a", got.ApplicantUserID)
	assert.Nil(t, got.DecidedAt)
}

func TestSaveRejectsNonSubmittedRecord(t *testing.T) {
	ctx := context.Background()
	gs, as := newStores(t)
	seedGame(t, gs, "game-001")

	rec := newApplication(t, "application-001", "game-001", "user-a")
	rec.Status = application.StatusApproved
	require.Error(t, as.Save(ctx, rec))
}

func TestSavePartialUniqueRejectsSecondActiveForSameUserGame(t *testing.T) {
	ctx := context.Background()
	gs, as := newStores(t)
	seedGame(t, gs, "game-001")

	a1 := newApplication(t, "application-001", "game-001", "user-a")
	require.NoError(t, as.Save(ctx, a1))

	// A second submission by the same user against the same game must fail.
	a2 := newApplication(t, "application-002", "game-001", "user-a")
	err := as.Save(ctx, a2)
	require.ErrorIs(t, err, application.ErrConflict)
}

func TestSavePartialUniqueAllowsResubmitAfterRejection(t *testing.T) {
	ctx := context.Background()
	gs, as := newStores(t)
	seedGame(t, gs, "game-001")

	a1 := newApplication(t, "application-001", "game-001", "user-a")
	require.NoError(t, as.Save(ctx, a1))

	require.NoError(t, as.UpdateStatus(ctx, ports.UpdateApplicationStatusInput{
		ApplicationID: a1.ApplicationID,
		ExpectedFrom:  application.StatusSubmitted,
		To:            application.StatusRejected,
		At:            a1.CreatedAt.Add(time.Minute),
	}))

	// After rejection a new submission for the same (user, game) is allowed.
	a2 := newApplication(t, "application-002", "game-001", "user-a")
	require.NoError(t, as.Save(ctx, a2))
}

func TestUpdateStatusReturnsConflictOnExpectedFromMismatch(t *testing.T) {
	ctx := context.Background()
	gs, as := newStores(t)
	seedGame(t, gs, "game-001")

	rec := newApplication(t, "application-001", "game-001", "user-a")
	require.NoError(t, as.Save(ctx, rec))

	// First, transition the row to approved.
	require.NoError(t, as.UpdateStatus(ctx, ports.UpdateApplicationStatusInput{
		ApplicationID: rec.ApplicationID,
		ExpectedFrom:  application.StatusSubmitted,
		To:            application.StatusApproved,
		At:            rec.CreatedAt.Add(time.Minute),
	}))

	// The second attempt claims the status is still submitted: (submitted,
	// rejected) is a valid domain transition, but the row is already
	// approved, so the adapter must surface ErrConflict on the row-level
	// mismatch.
	err := as.UpdateStatus(ctx, ports.UpdateApplicationStatusInput{
		ApplicationID: rec.ApplicationID,
		ExpectedFrom:  application.StatusSubmitted,
		To:            application.StatusRejected,
		At:            rec.CreatedAt.Add(2 * time.Minute),
	})
	require.ErrorIs(t, err, application.ErrConflict)
}

func TestUpdateStatusReturnsNotFoundForMissing(t *testing.T) {
	ctx := context.Background()
	_, as := newStores(t)
	err := as.UpdateStatus(ctx, ports.UpdateApplicationStatusInput{
		ApplicationID: common.ApplicationID("application-missing"),
		ExpectedFrom:  application.StatusSubmitted,
		To:            application.StatusApproved,
		At:            time.Now().UTC(),
	})
	require.ErrorIs(t, err, application.ErrNotFound)
}

func TestGetByGameAndGetByUser(t *testing.T) {
	ctx := context.Background()
	gs, as := newStores(t)
	seedGame(t, gs, "game-001")
	seedGame(t, gs, "game-002")

	require.NoError(t, as.Save(ctx, newApplication(t, "application-001", "game-001", "user-a")))
	require.NoError(t, as.Save(ctx, newApplication(t, "application-002", "game-001", "user-b")))
	require.NoError(t, as.Save(ctx, newApplication(t, "application-003", "game-002", "user-a")))

	g1, err := as.GetByGame(ctx, common.GameID("game-001"))
	require.NoError(t, err)
	assert.Len(t, g1, 2)

	userA, err := as.GetByUser(ctx, "user-a")
	require.NoError(t, err)
	assert.Len(t, userA, 2)
}

func TestGetMissingReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	_, as := newStores(t)
	_, err := as.Get(ctx, common.ApplicationID("application-missing"))
	require.ErrorIs(t, err, application.ErrNotFound)
}
@@ -0,0 +1,94 @@
package gamestore

import (
	"encoding/json"
	"fmt"
	"time"

	"galaxy/lobby/internal/domain/game"
)

// runtimeSnapshotJSON is the on-disk JSONB shape used for the denormalised
// runtime snapshot column on `games`. Keys mirror the field names in
// `game.RuntimeSnapshot` so a round-tripped value stays directly comparable
// with ==.
type runtimeSnapshotJSON struct {
	CurrentTurn         int    `json:"current_turn"`
	RuntimeStatus       string `json:"runtime_status,omitempty"`
	EngineHealthSummary string `json:"engine_health_summary,omitempty"`
}

func marshalRuntimeSnapshot(snapshot game.RuntimeSnapshot) ([]byte, error) {
	payload := runtimeSnapshotJSON{
		CurrentTurn:         snapshot.CurrentTurn,
		RuntimeStatus:       snapshot.RuntimeStatus,
		EngineHealthSummary: snapshot.EngineHealthSummary,
	}
	encoded, err := json.Marshal(payload)
	if err != nil {
		return nil, fmt.Errorf("marshal runtime snapshot: %w", err)
	}
	return encoded, nil
}

func unmarshalRuntimeSnapshot(payload []byte) (game.RuntimeSnapshot, error) {
	if len(payload) == 0 {
		return game.RuntimeSnapshot{}, nil
	}
	var stored runtimeSnapshotJSON
	if err := json.Unmarshal(payload, &stored); err != nil {
		return game.RuntimeSnapshot{}, fmt.Errorf("unmarshal runtime snapshot: %w", err)
	}
	return game.RuntimeSnapshot{
		CurrentTurn:         stored.CurrentTurn,
		RuntimeStatus:       stored.RuntimeStatus,
		EngineHealthSummary: stored.EngineHealthSummary,
	}, nil
}

// runtimeBindingJSON is the on-disk JSONB shape used for the optional
// runtime binding column on `games`. The `bound_at_ms` field stores Unix
// milliseconds so the JSON serialisation matches the previous Redis JSON
// shape; the timezone is irrelevant inside the JSON payload itself, and the
// adapter re-wraps the decoded time.Time with .UTC() before exposing it to
// callers.
type runtimeBindingJSON struct {
	ContainerID    string `json:"container_id"`
	EngineEndpoint string `json:"engine_endpoint"`
	RuntimeJobID   string `json:"runtime_job_id"`
	BoundAtMS      int64  `json:"bound_at_ms"`
}

// marshalRuntimeBinding returns nil bytes (SQL NULL) when binding is nil,
// otherwise the JSON encoding of the binding.
func marshalRuntimeBinding(binding *game.RuntimeBinding) ([]byte, error) {
	if binding == nil {
		return nil, nil
	}
	payload := runtimeBindingJSON{
		ContainerID:    binding.ContainerID,
		EngineEndpoint: binding.EngineEndpoint,
		RuntimeJobID:   binding.RuntimeJobID,
		BoundAtMS:      binding.BoundAt.UTC().UnixMilli(),
	}
	encoded, err := json.Marshal(payload)
	if err != nil {
		return nil, fmt.Errorf("marshal runtime binding: %w", err)
	}
	return encoded, nil
}

func unmarshalRuntimeBinding(payload []byte) (*game.RuntimeBinding, error) {
	if len(payload) == 0 {
		return nil, nil
	}
	var stored runtimeBindingJSON
	if err := json.Unmarshal(payload, &stored); err != nil {
		return nil, fmt.Errorf("unmarshal runtime binding: %w", err)
	}
	return &game.RuntimeBinding{
		ContainerID:    stored.ContainerID,
		EngineEndpoint: stored.EngineEndpoint,
		RuntimeJobID:   stored.RuntimeJobID,
		BoundAt:        time.UnixMilli(stored.BoundAtMS).UTC(),
	}, nil
}
@@ -0,0 +1,610 @@
|
||||
// Package gamestore implements the PostgreSQL-backed adapter for
|
||||
// `ports.GameStore`.
|
||||
//
|
||||
// The package owns the on-disk shape of the `games` table (defined in
|
||||
// `galaxy/lobby/internal/adapters/postgres/migrations`) and translates the
|
||||
// schema-agnostic GameStore interface declared in `internal/ports` into
|
||||
// concrete go-jet/v2 statements driven by the pgx driver. Per-row
|
||||
// lifecycle transitions (Save/UpdateStatus/UpdateRuntimeSnapshot/
|
||||
// UpdateRuntimeBinding) use optimistic concurrency on the `updated_at`
|
||||
// column rather than retaining a `SELECT ... FOR UPDATE` lock across the
|
||||
// caller's logic, mirroring the Notification Stage 5 pattern.
|
||||
//
|
||||
// PG_PLAN.md §6A migrates Game Lobby Service away from Redis-backed durable
|
||||
// game records; see `galaxy/lobby/docs/postgres-migration.md` for the full
|
||||
// decision record.
|
||||
package gamestore
|
||||
|
||||
import (
|
||||
"context"
|
||||
"database/sql"
|
||||
"errors"
|
||||
"fmt"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"galaxy/lobby/internal/adapters/postgres/internal/sqlx"
|
||||
pgtable "galaxy/lobby/internal/adapters/postgres/jet/lobby/table"
|
||||
"galaxy/lobby/internal/domain/common"
|
||||
	"galaxy/lobby/internal/domain/game"
	"galaxy/lobby/internal/ports"

	pg "github.com/go-jet/jet/v2/postgres"
)

// Config configures one PostgreSQL-backed game store instance. The store
// does not own the underlying *sql.DB lifecycle: the caller (typically the
// service runtime) opens, instruments, migrates, and closes the pool.
type Config struct {
	// DB stores the connection pool the store uses for every query.
	DB *sql.DB

	// OperationTimeout bounds one round trip. The store creates a derived
	// context for each operation so callers cannot starve the pool with an
	// unbounded ctx.
	OperationTimeout time.Duration
}

// Store persists Game Lobby game records in PostgreSQL.
type Store struct {
	db               *sql.DB
	operationTimeout time.Duration
}

// New constructs one PostgreSQL-backed game store from cfg.
func New(cfg Config) (*Store, error) {
	if cfg.DB == nil {
		return nil, errors.New("new postgres game store: db must not be nil")
	}
	if cfg.OperationTimeout <= 0 {
		return nil, errors.New("new postgres game store: operation timeout must be positive")
	}
	return &Store{
		db:               cfg.DB,
		operationTimeout: cfg.OperationTimeout,
	}, nil
}

// gameSelectColumns is the canonical SELECT list for the games table,
// matching scanGame's column order.
var gameSelectColumns = pg.ColumnList{
	pgtable.Games.GameID,
	pgtable.Games.GameName,
	pgtable.Games.Description,
	pgtable.Games.GameType,
	pgtable.Games.OwnerUserID,
	pgtable.Games.Status,
	pgtable.Games.MinPlayers,
	pgtable.Games.MaxPlayers,
	pgtable.Games.StartGapHours,
	pgtable.Games.StartGapPlayers,
	pgtable.Games.EnrollmentEndsAt,
	pgtable.Games.TurnSchedule,
	pgtable.Games.TargetEngineVersion,
	pgtable.Games.CreatedAt,
	pgtable.Games.UpdatedAt,
	pgtable.Games.StartedAt,
	pgtable.Games.FinishedAt,
	pgtable.Games.RuntimeSnapshot,
	pgtable.Games.RuntimeBinding,
}

// Save upserts record. Status lookups are served by the
// `games_status_created_idx` secondary index, so callers see the same effect
// as the previous Redis adapter without an explicit index rewrite.
//
// The implementation is INSERT ... ON CONFLICT (game_id) DO UPDATE: the
// adapter cannot use a plain INSERT because callers (notably the create-game
// service and admin updates) expect Save to be an upsert.
func (store *Store) Save(ctx context.Context, record game.Game) error {
	if store == nil || store.db == nil {
		return errors.New("save game: nil store")
	}
	if err := record.Validate(); err != nil {
		return fmt.Errorf("save game: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "save game", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	snapshot, err := marshalRuntimeSnapshot(record.RuntimeSnapshot)
	if err != nil {
		return fmt.Errorf("save game: %w", err)
	}
	binding, err := marshalRuntimeBinding(record.RuntimeBinding)
	if err != nil {
		return fmt.Errorf("save game: %w", err)
	}

	stmt := pgtable.Games.INSERT(
		pgtable.Games.GameID,
		pgtable.Games.GameName,
		pgtable.Games.Description,
		pgtable.Games.GameType,
		pgtable.Games.OwnerUserID,
		pgtable.Games.Status,
		pgtable.Games.MinPlayers,
		pgtable.Games.MaxPlayers,
		pgtable.Games.StartGapHours,
		pgtable.Games.StartGapPlayers,
		pgtable.Games.EnrollmentEndsAt,
		pgtable.Games.TurnSchedule,
		pgtable.Games.TargetEngineVersion,
		pgtable.Games.CreatedAt,
		pgtable.Games.UpdatedAt,
		pgtable.Games.StartedAt,
		pgtable.Games.FinishedAt,
		pgtable.Games.RuntimeSnapshot,
		pgtable.Games.RuntimeBinding,
	).VALUES(
		record.GameID.String(),
		record.GameName,
		record.Description,
		string(record.GameType),
		record.OwnerUserID,
		string(record.Status),
		record.MinPlayers,
		record.MaxPlayers,
		record.StartGapHours,
		record.StartGapPlayers,
		record.EnrollmentEndsAt.UTC(),
		record.TurnSchedule,
		record.TargetEngineVersion,
		record.CreatedAt.UTC(),
		record.UpdatedAt.UTC(),
		sqlx.NullableTimePtr(record.StartedAt),
		sqlx.NullableTimePtr(record.FinishedAt),
		snapshot,
		binding,
	).ON_CONFLICT(pgtable.Games.GameID).DO_UPDATE(
		pg.SET(
			pgtable.Games.GameName.SET(pgtable.Games.EXCLUDED.GameName),
			pgtable.Games.Description.SET(pgtable.Games.EXCLUDED.Description),
			pgtable.Games.GameType.SET(pgtable.Games.EXCLUDED.GameType),
			pgtable.Games.OwnerUserID.SET(pgtable.Games.EXCLUDED.OwnerUserID),
			pgtable.Games.Status.SET(pgtable.Games.EXCLUDED.Status),
			pgtable.Games.MinPlayers.SET(pgtable.Games.EXCLUDED.MinPlayers),
			pgtable.Games.MaxPlayers.SET(pgtable.Games.EXCLUDED.MaxPlayers),
			pgtable.Games.StartGapHours.SET(pgtable.Games.EXCLUDED.StartGapHours),
			pgtable.Games.StartGapPlayers.SET(pgtable.Games.EXCLUDED.StartGapPlayers),
			pgtable.Games.EnrollmentEndsAt.SET(pgtable.Games.EXCLUDED.EnrollmentEndsAt),
			pgtable.Games.TurnSchedule.SET(pgtable.Games.EXCLUDED.TurnSchedule),
			pgtable.Games.TargetEngineVersion.SET(pgtable.Games.EXCLUDED.TargetEngineVersion),
			pgtable.Games.UpdatedAt.SET(pgtable.Games.EXCLUDED.UpdatedAt),
			pgtable.Games.StartedAt.SET(pgtable.Games.EXCLUDED.StartedAt),
			pgtable.Games.FinishedAt.SET(pgtable.Games.EXCLUDED.FinishedAt),
			pgtable.Games.RuntimeSnapshot.SET(pgtable.Games.EXCLUDED.RuntimeSnapshot),
			pgtable.Games.RuntimeBinding.SET(pgtable.Games.EXCLUDED.RuntimeBinding),
		),
	)

	query, args := stmt.Sql()
	if _, err := store.db.ExecContext(operationCtx, query, args...); err != nil {
		return fmt.Errorf("save game: %w", err)
	}
	return nil
}

// Get returns the record identified by gameID. It returns game.ErrNotFound
// when no record exists.
func (store *Store) Get(ctx context.Context, gameID common.GameID) (game.Game, error) {
	if store == nil || store.db == nil {
		return game.Game{}, errors.New("get game: nil store")
	}
	if err := gameID.Validate(); err != nil {
		return game.Game{}, fmt.Errorf("get game: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get game", store.operationTimeout)
	if err != nil {
		return game.Game{}, err
	}
	defer cancel()

	stmt := pg.SELECT(gameSelectColumns).
		FROM(pgtable.Games).
		WHERE(pgtable.Games.GameID.EQ(pg.String(gameID.String())))

	query, args := stmt.Sql()
	row := store.db.QueryRowContext(operationCtx, query, args...)
	record, err := scanGame(row)
	if sqlx.IsNoRows(err) {
		return game.Game{}, game.ErrNotFound
	}
	if err != nil {
		return game.Game{}, fmt.Errorf("get game: %w", err)
	}
	return record, nil
}

// GetByStatus returns every record whose status equals status. Records are
// sorted by created_at DESC then game_id DESC, matching the most-recent-first
// ordering Lobby's listing services expect.
func (store *Store) GetByStatus(ctx context.Context, status game.Status) ([]game.Game, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get games by status: nil store")
	}
	if !status.IsKnown() {
		return nil, fmt.Errorf("get games by status: status %q is unsupported", status)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get games by status", store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	stmt := pg.SELECT(gameSelectColumns).
		FROM(pgtable.Games).
		WHERE(pgtable.Games.Status.EQ(pg.String(string(status)))).
		ORDER_BY(pgtable.Games.CreatedAt.DESC(), pgtable.Games.GameID.DESC())

	query, args := stmt.Sql()
	rows, err := store.db.QueryContext(operationCtx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("get games by status: %w", err)
	}
	defer rows.Close()

	records, err := scanAllGames(rows)
	if err != nil {
		return nil, fmt.Errorf("get games by status: %w", err)
	}
	return records, nil
}

// CountByStatus returns the number of records under each known status.
func (store *Store) CountByStatus(ctx context.Context) (map[game.Status]int, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("count games by status: nil store")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "count games by status", store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	countAlias := pg.COUNT(pg.STAR).AS("count")
	stmt := pg.SELECT(pgtable.Games.Status, countAlias).
		FROM(pgtable.Games).
		GROUP_BY(pgtable.Games.Status)

	query, args := stmt.Sql()
	rows, err := store.db.QueryContext(operationCtx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("count games by status: %w", err)
	}
	defer rows.Close()

	counts := make(map[game.Status]int, len(game.AllStatuses()))
	for _, status := range game.AllStatuses() {
		counts[status] = 0
	}
	for rows.Next() {
		var status string
		var count int
		if err := rows.Scan(&status, &count); err != nil {
			return nil, fmt.Errorf("count games by status: scan: %w", err)
		}
		counts[game.Status(status)] = count
	}
	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("count games by status: %w", err)
	}
	return counts, nil
}

// GetByOwner returns every record whose owner_user_id equals userID. The
// underlying `games_owner_idx` is partial (game_type = 'private'); public
// games carry an empty owner_user_id and are excluded from the index, matching
// the Redis-backed behaviour.
func (store *Store) GetByOwner(ctx context.Context, userID string) ([]game.Game, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get games by owner: nil store")
	}
	trimmed := strings.TrimSpace(userID)
	if trimmed == "" {
		return nil, errors.New("get games by owner: user id must not be empty")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get games by owner", store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	stmt := pg.SELECT(gameSelectColumns).
		FROM(pgtable.Games).
		WHERE(pgtable.Games.OwnerUserID.EQ(pg.String(trimmed))).
		ORDER_BY(pgtable.Games.CreatedAt.DESC(), pgtable.Games.GameID.DESC())

	query, args := stmt.Sql()
	rows, err := store.db.QueryContext(operationCtx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("get games by owner: %w", err)
	}
	defer rows.Close()

	records, err := scanAllGames(rows)
	if err != nil {
		return nil, fmt.Errorf("get games by owner: %w", err)
	}
	return records, nil
}

// UpdateStatus applies one status transition with compare-and-swap on the
// current status column. The domain transition gate runs before any SQL
// touches the row.
func (store *Store) UpdateStatus(ctx context.Context, input ports.UpdateStatusInput) error {
	if store == nil || store.db == nil {
		return errors.New("update game status: nil store")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update game status: %w", err)
	}
	if err := game.Transition(input.ExpectedFrom, input.To, input.Trigger); err != nil {
		return err
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update game status", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	at := input.At.UTC()

	// COALESCE keeps an existing started_at/finished_at: the column is only
	// stamped with `at` the first time the game enters the running/finished
	// state; every other transition rewrites the column with its current
	// value.
	startedExpr := pg.COALESCE(pgtable.Games.StartedAt, pg.TimestampzT(at))
	if input.To != game.StatusRunning {
		startedExpr = pgtable.Games.StartedAt
	}
	finishedExpr := pg.COALESCE(pgtable.Games.FinishedAt, pg.TimestampzT(at))
	if input.To != game.StatusFinished {
		finishedExpr = pgtable.Games.FinishedAt
	}

	stmt := pgtable.Games.UPDATE(
		pgtable.Games.Status,
		pgtable.Games.UpdatedAt,
		pgtable.Games.StartedAt,
		pgtable.Games.FinishedAt,
	).SET(
		pg.String(string(input.To)),
		pg.TimestampzT(at),
		startedExpr,
		finishedExpr,
	).WHERE(pg.AND(
		pgtable.Games.GameID.EQ(pg.String(input.GameID.String())),
		pgtable.Games.Status.EQ(pg.String(string(input.ExpectedFrom))),
	))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update game status: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update game status: rows affected: %w", err)
	}
	if affected == 0 {
		// Distinguish "not found" from "status mismatch" with a follow-up read.
		probe := pg.SELECT(pgtable.Games.Status).
			FROM(pgtable.Games).
			WHERE(pgtable.Games.GameID.EQ(pg.String(input.GameID.String())))
		probeQuery, probeArgs := probe.Sql()

		var current string
		row := store.db.QueryRowContext(operationCtx, probeQuery, probeArgs...)
		if err := row.Scan(&current); err != nil {
			if sqlx.IsNoRows(err) {
				return game.ErrNotFound
			}
			return fmt.Errorf("update game status: probe: %w", err)
		}
		return fmt.Errorf("update game status: %w", game.ErrConflict)
	}
	return nil
}

// UpdateRuntimeSnapshot overwrites the denormalised runtime snapshot fields.
func (store *Store) UpdateRuntimeSnapshot(ctx context.Context, input ports.UpdateRuntimeSnapshotInput) error {
	if store == nil || store.db == nil {
		return errors.New("update runtime snapshot: nil store")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update runtime snapshot: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update runtime snapshot", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	snapshot, err := marshalRuntimeSnapshot(input.Snapshot)
	if err != nil {
		return fmt.Errorf("update runtime snapshot: %w", err)
	}
	at := input.At.UTC()

	stmt := pgtable.Games.UPDATE(pgtable.Games.RuntimeSnapshot, pgtable.Games.UpdatedAt).
		SET(snapshot, at).
		WHERE(pgtable.Games.GameID.EQ(pg.String(input.GameID.String())))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update runtime snapshot: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update runtime snapshot: rows affected: %w", err)
	}
	if affected == 0 {
		return game.ErrNotFound
	}
	return nil
}

// UpdateRuntimeBinding overwrites the runtime binding metadata.
func (store *Store) UpdateRuntimeBinding(ctx context.Context, input ports.UpdateRuntimeBindingInput) error {
	if store == nil || store.db == nil {
		return errors.New("update runtime binding: nil store")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update runtime binding: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update runtime binding", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	binding := input.Binding
	encoded, err := marshalRuntimeBinding(&binding)
	if err != nil {
		return fmt.Errorf("update runtime binding: %w", err)
	}
	at := input.At.UTC()

	stmt := pgtable.Games.UPDATE(pgtable.Games.RuntimeBinding, pgtable.Games.UpdatedAt).
		SET(encoded, at).
		WHERE(pgtable.Games.GameID.EQ(pg.String(input.GameID.String())))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update runtime binding: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update runtime binding: rows affected: %w", err)
	}
	if affected == 0 {
		return game.ErrNotFound
	}
	return nil
}

// rowScanner abstracts *sql.Row and *sql.Rows so scanGame can be shared
// across both single-row reads and iterated reads.
type rowScanner interface {
	Scan(dest ...any) error
}

// scanGame scans one games row from rs. It returns sql.ErrNoRows verbatim so
// callers can distinguish "no row" from a hard error.
func scanGame(rs rowScanner) (game.Game, error) {
	var (
		gameID              string
		gameName            string
		description         string
		gameType            string
		ownerUserID         string
		status              string
		minPlayers          int
		maxPlayers          int
		startGapHours       int
		startGapPlayers     int
		enrollmentEndsAt    time.Time
		turnSchedule        string
		targetEngineVersion string
		createdAt           time.Time
		updatedAt           time.Time
		startedAt           sql.NullTime
		finishedAt          sql.NullTime
		runtimeSnapshot     []byte
		runtimeBinding      []byte
	)
	if err := rs.Scan(
		&gameID,
		&gameName,
		&description,
		&gameType,
		&ownerUserID,
		&status,
		&minPlayers,
		&maxPlayers,
		&startGapHours,
		&startGapPlayers,
		&enrollmentEndsAt,
		&turnSchedule,
		&targetEngineVersion,
		&createdAt,
		&updatedAt,
		&startedAt,
		&finishedAt,
		&runtimeSnapshot,
		&runtimeBinding,
	); err != nil {
		return game.Game{}, err
	}

	snapshot, err := unmarshalRuntimeSnapshot(runtimeSnapshot)
	if err != nil {
		return game.Game{}, err
	}
	binding, err := unmarshalRuntimeBinding(runtimeBinding)
	if err != nil {
		return game.Game{}, err
	}

	return game.Game{
		GameID:              common.GameID(gameID),
		GameName:            gameName,
		Description:         description,
		GameType:            game.GameType(gameType),
		OwnerUserID:         ownerUserID,
		Status:              game.Status(status),
		MinPlayers:          minPlayers,
		MaxPlayers:          maxPlayers,
		StartGapHours:       startGapHours,
		StartGapPlayers:     startGapPlayers,
		EnrollmentEndsAt:    enrollmentEndsAt.UTC(),
		TurnSchedule:        turnSchedule,
		TargetEngineVersion: targetEngineVersion,
		CreatedAt:           createdAt.UTC(),
		UpdatedAt:           updatedAt.UTC(),
		StartedAt:           sqlx.TimePtrFromNullable(startedAt),
		FinishedAt:          sqlx.TimePtrFromNullable(finishedAt),
		RuntimeSnapshot:     snapshot,
		RuntimeBinding:      binding,
	}, nil
}

// scanAllGames drains rows into a slice, returning nil for an empty result
// set.
func scanAllGames(rows *sql.Rows) ([]game.Game, error) {
	var records []game.Game
	for rows.Next() {
		record, err := scanGame(rows)
		if err != nil {
			return nil, fmt.Errorf("scan: %w", err)
		}
		records = append(records, record)
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	return records, nil
}

// Ensure Store satisfies the ports.GameStore interface at compile time.
var _ ports.GameStore = (*Store)(nil)
@@ -0,0 +1,338 @@
package gamestore_test

import (
	"context"
	"testing"
	"time"

	"galaxy/lobby/internal/adapters/postgres/gamestore"
	"galaxy/lobby/internal/adapters/postgres/internal/pgtest"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/game"
	"galaxy/lobby/internal/ports"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestMain(m *testing.M) { pgtest.RunMain(m) }

func newStore(t *testing.T) *gamestore.Store {
	t.Helper()
	pgtest.TruncateAll(t)
	store, err := gamestore.New(gamestore.Config{
		DB:               pgtest.Ensure(t).Pool(),
		OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	return store
}

func fixturePublicGame(t *testing.T, id string) game.Game {
	t.Helper()
	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	record, err := game.New(game.NewGameInput{
		GameID:              common.GameID(id),
		GameName:            "Spring Classic " + id,
		Description:         "first public game",
		GameType:            game.GameTypePublic,
		MinPlayers:          4,
		MaxPlayers:          8,
		StartGapHours:       24,
		StartGapPlayers:     2,
		EnrollmentEndsAt:    now.Add(7 * 24 * time.Hour),
		TurnSchedule:        "0 18 * * *",
		TargetEngineVersion: "v1.2.3",
		Now:                 now,
	})
	require.NoError(t, err)
	return record
}

func fixturePrivateGame(t *testing.T, id, ownerID string) game.Game {
	t.Helper()
	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	record, err := game.New(game.NewGameInput{
		GameID:              common.GameID(id),
		GameName:            "Private " + id,
		GameType:            game.GameTypePrivate,
		OwnerUserID:         ownerID,
		MinPlayers:          2,
		MaxPlayers:          6,
		StartGapHours:       12,
		StartGapPlayers:     2,
		EnrollmentEndsAt:    now.Add(7 * 24 * time.Hour),
		TurnSchedule:        "0 18 * * *",
		TargetEngineVersion: "v1.0.0",
		Now:                 now,
	})
	require.NoError(t, err)
	return record
}

func TestSaveAndGet(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	record := fixturePublicGame(t, "game-001")
	require.NoError(t, store.Save(ctx, record))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, record.GameID, got.GameID)
	assert.Equal(t, record.GameName, got.GameName)
	assert.Equal(t, record.Status, got.Status)
	assert.Equal(t, record.MinPlayers, got.MinPlayers)
	assert.Equal(t, record.MaxPlayers, got.MaxPlayers)
	assert.True(t, record.EnrollmentEndsAt.Equal(got.EnrollmentEndsAt))
	assert.Equal(t, time.UTC, got.CreatedAt.Location())
	assert.Equal(t, time.UTC, got.UpdatedAt.Location())
}

func TestGetReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)
	_, err := store.Get(ctx, common.GameID("game-missing-x"))
	require.ErrorIs(t, err, game.ErrNotFound)
}

func TestSaveIsUpsert(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	record := fixturePublicGame(t, "game-001")
	require.NoError(t, store.Save(ctx, record))

	// Edit a few fields, save again.
	record.GameName = "Renamed"
	record.UpdatedAt = record.UpdatedAt.Add(time.Minute)
	require.NoError(t, store.Save(ctx, record))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, "Renamed", got.GameName)
}

func TestUpdateStatusHappyPath(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	record := fixturePublicGame(t, "game-001")
	require.NoError(t, store.Save(ctx, record))

	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       record.GameID,
		ExpectedFrom: game.StatusDraft,
		To:           game.StatusEnrollmentOpen,
		Trigger:      game.TriggerCommand,
		At:           record.UpdatedAt.Add(time.Minute),
	}))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, game.StatusEnrollmentOpen, got.Status)
}

func TestUpdateStatusReturnsConflictOnExpectedFromMismatch(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	record := fixturePublicGame(t, "game-001")
	require.NoError(t, store.Save(ctx, record))

	err := store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       record.GameID,
		ExpectedFrom: game.StatusEnrollmentOpen, // wrong
		To:           game.StatusReadyToStart,
		Trigger:      game.TriggerManual,
		At:           record.UpdatedAt.Add(time.Minute),
	})
	require.ErrorIs(t, err, game.ErrConflict)
}

func TestUpdateStatusReturnsNotFoundForMissing(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	err := store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       common.GameID("game-missing-x"),
		ExpectedFrom: game.StatusDraft,
		To:           game.StatusEnrollmentOpen,
		Trigger:      game.TriggerCommand,
		At:           time.Now().UTC(),
	})
	require.ErrorIs(t, err, game.ErrNotFound)
}

func TestUpdateStatusSetsStartedAtOnRunning(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	record := fixturePublicGame(t, "game-001")
	require.NoError(t, store.Save(ctx, record))
	advance := func(from, to game.Status, trigger game.Trigger, at time.Time) {
		require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
			GameID: record.GameID, ExpectedFrom: from, To: to, Trigger: trigger, At: at,
		}))
	}
	now := record.UpdatedAt.Add(time.Minute)
	advance(game.StatusDraft, game.StatusEnrollmentOpen, game.TriggerCommand, now)
	advance(game.StatusEnrollmentOpen, game.StatusReadyToStart, game.TriggerManual, now.Add(time.Minute))
	advance(game.StatusReadyToStart, game.StatusStarting, game.TriggerCommand, now.Add(2*time.Minute))
	startedAt := now.Add(3 * time.Minute)
	advance(game.StatusStarting, game.StatusRunning, game.TriggerRuntimeEvent, startedAt)

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, game.StatusRunning, got.Status)
	require.NotNil(t, got.StartedAt)
	assert.True(t, got.StartedAt.Equal(startedAt))
}

func TestGetByStatusReturnsExpectedRecords(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	a := fixturePublicGame(t, "game-aaa")
	b := fixturePublicGame(t, "game-bbb")
	c := fixturePublicGame(t, "game-ccc")
	for _, r := range []game.Game{a, b, c} {
		require.NoError(t, store.Save(ctx, r))
	}
	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       b.GameID,
		ExpectedFrom: game.StatusDraft,
		To:           game.StatusEnrollmentOpen,
		Trigger:      game.TriggerCommand,
		At:           b.UpdatedAt.Add(time.Minute),
	}))

	drafts, err := store.GetByStatus(ctx, game.StatusDraft)
	require.NoError(t, err)
	gotIDs := map[common.GameID]struct{}{}
	for _, r := range drafts {
		gotIDs[r.GameID] = struct{}{}
	}
	assert.Contains(t, gotIDs, a.GameID)
	assert.Contains(t, gotIDs, c.GameID)
	assert.NotContains(t, gotIDs, b.GameID)

	open, err := store.GetByStatus(ctx, game.StatusEnrollmentOpen)
	require.NoError(t, err)
	require.Len(t, open, 1)
	assert.Equal(t, b.GameID, open[0].GameID)
}

func TestGetByOwnerOnlyReturnsPrivateGames(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	owner := "user-123"
	pub := fixturePublicGame(t, "game-pub-001")
	priv1 := fixturePrivateGame(t, "game-priv-001", owner)
	priv2 := fixturePrivateGame(t, "game-priv-002", owner)
	priv3 := fixturePrivateGame(t, "game-priv-003", "user-other")
	for _, r := range []game.Game{pub, priv1, priv2, priv3} {
		require.NoError(t, store.Save(ctx, r))
	}

	got, err := store.GetByOwner(ctx, owner)
	require.NoError(t, err)
	ids := map[common.GameID]struct{}{}
	for _, r := range got {
		ids[r.GameID] = struct{}{}
	}
	assert.Contains(t, ids, priv1.GameID)
	assert.Contains(t, ids, priv2.GameID)
	assert.NotContains(t, ids, priv3.GameID)
	assert.NotContains(t, ids, pub.GameID)
}

func TestCountByStatusIncludesAllBuckets(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	require.NoError(t, store.Save(ctx, fixturePublicGame(t, "game-aaa")))
	require.NoError(t, store.Save(ctx, fixturePublicGame(t, "game-bbb")))

	counts, err := store.CountByStatus(ctx)
	require.NoError(t, err)
	for _, status := range game.AllStatuses() {
		_, ok := counts[status]
		assert.Truef(t, ok, "missing bucket for %q", status)
	}
	assert.Equal(t, 2, counts[game.StatusDraft])
}

func TestUpdateRuntimeSnapshotRoundTripsValues(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	record := fixturePublicGame(t, "game-001")
	require.NoError(t, store.Save(ctx, record))

	snapshot := game.RuntimeSnapshot{
		CurrentTurn:         42,
		RuntimeStatus:       "running_accepting_commands",
		EngineHealthSummary: "ok",
	}
	require.NoError(t, store.UpdateRuntimeSnapshot(ctx, ports.UpdateRuntimeSnapshotInput{
		GameID:   record.GameID,
		Snapshot: snapshot,
		At:       record.UpdatedAt.Add(time.Minute),
	}))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, snapshot, got.RuntimeSnapshot)
}

func TestUpdateRuntimeBindingRoundTripsValues(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	record := fixturePublicGame(t, "game-001")
	require.NoError(t, store.Save(ctx, record))

	at := record.UpdatedAt.Add(time.Minute)
	require.NoError(t, store.UpdateRuntimeBinding(ctx, ports.UpdateRuntimeBindingInput{
		GameID: record.GameID,
		Binding: game.RuntimeBinding{
			ContainerID:    "container-7",
			EngineEndpoint: "10.0.0.5:9000",
			RuntimeJobID:   "1700000000-0",
			BoundAt:        at,
		},
		At: at,
	}))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	require.NotNil(t, got.RuntimeBinding)
	assert.Equal(t, "container-7", got.RuntimeBinding.ContainerID)
	assert.Equal(t, "10.0.0.5:9000", got.RuntimeBinding.EngineEndpoint)
	assert.Equal(t, "1700000000-0", got.RuntimeBinding.RuntimeJobID)
	assert.True(t, got.RuntimeBinding.BoundAt.Equal(at))
	assert.Equal(t, time.UTC, got.RuntimeBinding.BoundAt.Location())
}

func TestUpdateRuntimeSnapshotReturnsNotFoundForMissing(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)
	err := store.UpdateRuntimeSnapshot(ctx, ports.UpdateRuntimeSnapshotInput{
		GameID:   common.GameID("game-missing-x"),
		Snapshot: game.RuntimeSnapshot{CurrentTurn: 1},
		At:       time.Now().UTC(),
	})
	require.ErrorIs(t, err, game.ErrNotFound)
}

func TestNewRejectsNilDB(t *testing.T) {
	_, err := gamestore.New(gamestore.Config{OperationTimeout: time.Second})
	require.Error(t, err)
}

func TestNewRejectsNonPositiveTimeout(t *testing.T) {
	_, err := gamestore.New(gamestore.Config{DB: pgtest.Ensure(t).Pool()})
	require.Error(t, err)
}

@@ -0,0 +1,208 @@
// Package pgtest exposes the testcontainers-backed PostgreSQL bootstrap
// shared by every Game Lobby PG adapter test. The package is regular Go
// code — not a `_test.go` file — so it can be imported by the `_test.go`
// files in the four sibling store packages (`gamestore`, `applicationstore`,
// `invitestore`, `membershipstore`).
//
// No production code in `cmd/lobby` or in the runtime imports this package.
// The testcontainers-go dependency therefore stays out of the production
// binary's import graph.
package pgtest

import (
	"context"
	"database/sql"
	"net/url"
	"os"
	"sync"
	"testing"
	"time"

	"galaxy/lobby/internal/adapters/postgres/migrations"
	"galaxy/postgres"

	testcontainers "github.com/testcontainers/testcontainers-go"
	tcpostgres "github.com/testcontainers/testcontainers-go/modules/postgres"
	"github.com/testcontainers/testcontainers-go/wait"
)

const (
	postgresImage    = "postgres:16-alpine"
	superUser        = "galaxy"
	superPassword    = "galaxy"
	superDatabase    = "galaxy_lobby"
	serviceRole      = "lobbyservice"
	servicePassword  = "lobbyservice"
	serviceSchema    = "lobby"
	containerStartup = 90 * time.Second

	// OperationTimeout is the per-statement timeout used by every store
	// constructed via NewStoreConfig. Tests may pass a smaller value if they
	// need to assert deadline behaviour explicitly.
	OperationTimeout = 10 * time.Second
)

// Env holds the per-process container plus the *sql.DB pool already
// provisioned with the lobby schema, role, and migrations applied.
type Env struct {
	container *tcpostgres.PostgresContainer
	pool      *sql.DB
}

// Pool returns the shared pool. Tests truncate per-table state before each
// run via TruncateAll.
func (env *Env) Pool() *sql.DB { return env.pool }

var (
	once  sync.Once
	cur   *Env
	curEr error
)

// Ensure starts the PostgreSQL container on first invocation and applies
// the embedded goose migrations. Subsequent invocations reuse the same
// container/pool. When Docker is unavailable Ensure calls t.Skip with the
// underlying error so the test suite still passes on machines without
// Docker.
func Ensure(t testing.TB) *Env {
	t.Helper()
	once.Do(func() {
		cur, curEr = start()
	})
	if curEr != nil {
		t.Skipf("postgres container start failed (Docker unavailable?): %v", curEr)
	}
	return cur
}

// TruncateAll wipes every Game Lobby table inside the shared pool, leaving
// the schema and indexes intact. Use it from each test that needs a clean
// slate.
func TruncateAll(t testing.TB) {
	t.Helper()
	env := Ensure(t)
	const stmt = `TRUNCATE TABLE memberships, invites, applications, games, race_names RESTART IDENTITY CASCADE`
	if _, err := env.pool.ExecContext(context.Background(), stmt); err != nil {
		t.Fatalf("truncate lobby tables: %v", err)
	}
}

// Shutdown terminates the shared container and closes the pool. It is
// invoked from each test package's TestMain after `m.Run` returns so the
// container is released even if individual tests panic.
func Shutdown() {
	if cur == nil {
		return
	}
	if cur.pool != nil {
		_ = cur.pool.Close()
	}
	if cur.container != nil {
		_ = testcontainers.TerminateContainer(cur.container)
	}
	cur = nil
}

// RunMain is a convenience helper for each store package's TestMain: it
// runs the package's tests, captures the exit code, shuts the container
// down, and exits. Wiring it through one helper keeps every TestMain to a
// single line.
func RunMain(m *testing.M) {
	code := m.Run()
	Shutdown()
	os.Exit(code)
}

func start() (*Env, error) {
	ctx := context.Background()
	container, err := tcpostgres.Run(ctx, postgresImage,
		tcpostgres.WithDatabase(superDatabase),
		tcpostgres.WithUsername(superUser),
		tcpostgres.WithPassword(superPassword),
		testcontainers.WithWaitStrategy(
			wait.ForLog("database system is ready to accept connections").
				WithOccurrence(2).
				WithStartupTimeout(containerStartup),
		),
	)
	if err != nil {
		return nil, err
	}
	baseDSN, err := container.ConnectionString(ctx, "sslmode=disable")
	if err != nil {
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	if err := provisionRoleAndSchema(ctx, baseDSN); err != nil {
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	scopedDSN, err := dsnForServiceRole(baseDSN)
	if err != nil {
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = scopedDSN
	cfg.OperationTimeout = OperationTimeout
	pool, err := postgres.OpenPrimary(ctx, cfg)
	if err != nil {
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	if err := postgres.Ping(ctx, pool, OperationTimeout); err != nil {
		_ = pool.Close()
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	if err := postgres.RunMigrations(ctx, pool, migrations.FS(), "."); err != nil {
		_ = pool.Close()
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	return &Env{container: container, pool: pool}, nil
}

func provisionRoleAndSchema(ctx context.Context, baseDSN string) error {
	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = baseDSN
	cfg.OperationTimeout = OperationTimeout
	db, err := postgres.OpenPrimary(ctx, cfg)
	if err != nil {
		return err
	}
	defer func() { _ = db.Close() }()

	statements := []string{
		`DO $$ BEGIN
			IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'lobbyservice') THEN
				CREATE ROLE lobbyservice LOGIN PASSWORD 'lobbyservice';
			END IF;
		END $$;`,
		`CREATE SCHEMA IF NOT EXISTS lobby AUTHORIZATION lobbyservice;`,
		`GRANT USAGE ON SCHEMA lobby TO lobbyservice;`,
	}
	for _, statement := range statements {
		if _, err := db.ExecContext(ctx, statement); err != nil {
			return err
		}
	}
	return nil
}

func dsnForServiceRole(baseDSN string) (string, error) {
	parsed, err := url.Parse(baseDSN)
	if err != nil {
		return "", err
	}
	values := url.Values{}
	values.Set("search_path", serviceSchema)
	values.Set("sslmode", "disable")
	scoped := url.URL{
		Scheme:   parsed.Scheme,
		User:     url.UserPassword(serviceRole, servicePassword),
		Host:     parsed.Host,
		Path:     parsed.Path,
		RawQuery: values.Encode(),
	}
	return scoped.String(), nil
}
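The DSN-scoping step in `dsnForServiceRole` can be sketched standalone. This is a minimal, self-contained version of the same `net/url` rewrite; the host, port, and database name below are illustrative values only, not taken from the real container:

```go
package main

import (
	"fmt"
	"net/url"
)

// rewriteDSN swaps the superuser credentials in a postgres URL for the
// service role and pins search_path/sslmode, mirroring dsnForServiceRole.
func rewriteDSN(baseDSN, role, password, schema string) (string, error) {
	parsed, err := url.Parse(baseDSN)
	if err != nil {
		return "", err
	}
	values := url.Values{}
	values.Set("search_path", schema)
	values.Set("sslmode", "disable")
	scoped := url.URL{
		Scheme:   parsed.Scheme,
		User:     url.UserPassword(role, password),
		Host:     parsed.Host,
		Path:     parsed.Path,
		RawQuery: values.Encode(),
	}
	return scoped.String(), nil
}

func main() {
	dsn, err := rewriteDSN(
		"postgres://galaxy:galaxy@localhost:55432/galaxy_lobby?sslmode=disable",
		"lobbyservice", "lobbyservice", "lobby",
	)
	if err != nil {
		panic(err)
	}
	// prints postgres://lobbyservice:lobbyservice@localhost:55432/galaxy_lobby?search_path=lobby&sslmode=disable
	fmt.Println(dsn)
}
```

Note that any query parameters on the base DSN are dropped and rebuilt from scratch, so the scoped DSN always carries exactly `search_path` and `sslmode`.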
@@ -0,0 +1,96 @@
// Package sqlx contains the small set of helpers shared by every Game Lobby
// PostgreSQL adapter (gamestore, applicationstore, invitestore,
// membershipstore). The helpers centralise the boundary translations from
// the per-service ARCHITECTURE.md timestamp-handling rules and from the pgx
// SQLSTATE codes the adapters interpret as domain conflicts.
package sqlx

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"time"

	"github.com/jackc/pgx/v5/pgconn"
)

// PgUniqueViolationCode identifies the SQLSTATE returned by PostgreSQL when
// a UNIQUE constraint is violated by INSERT or UPDATE.
const PgUniqueViolationCode = "23505"

// IsUniqueViolation reports whether err is a PostgreSQL unique-violation,
// regardless of constraint name.
func IsUniqueViolation(err error) bool {
	var pgErr *pgconn.PgError
	if !errors.As(err, &pgErr) {
		return false
	}
	return pgErr.Code == PgUniqueViolationCode
}

// IsNoRows reports whether err is sql.ErrNoRows.
func IsNoRows(err error) bool {
	return errors.Is(err, sql.ErrNoRows)
}

// NullableTime returns t.UTC() when non-zero, otherwise nil so the column
// is bound as SQL NULL. Several Lobby domain records use *time.Time to
// express absent timestamps; for those, callers translate the pointer with
// NullableTimePtr instead.
func NullableTime(t time.Time) any {
	if t.IsZero() {
		return nil
	}
	return t.UTC()
}

// NullableTimePtr returns t.UTC() when t is non-nil and non-zero, otherwise
// nil. The helper is the *time.Time companion of NullableTime: every Lobby
// domain record has at least one optional `*time.Time` field
// (`StartedAt`, `FinishedAt`, `DecidedAt`, `RemovedAt`) that maps to a
// nullable timestamptz column.
func NullableTimePtr(t *time.Time) any {
	if t == nil {
		return nil
	}
	return NullableTime(*t)
}

// TimeFromNullable copies an optional sql.NullTime read from PostgreSQL
// into a domain time.Time, applying the global UTC normalisation rule.
// Invalid (NULL) values become the zero time.Time.
func TimeFromNullable(value sql.NullTime) time.Time {
	if !value.Valid {
		return time.Time{}
	}
	return value.Time.UTC()
}

// TimePtrFromNullable copies an optional sql.NullTime into a domain
// *time.Time. NULL becomes nil; non-NULL values are wrapped after UTC
// normalisation.
func TimePtrFromNullable(value sql.NullTime) *time.Time {
	if !value.Valid {
		return nil
	}
	t := value.Time.UTC()
	return &t
}

// WithTimeout derives a child context bounded by timeout and prefixes
// context errors with operation. Callers must always invoke the returned
// cancel.
func WithTimeout(ctx context.Context, operation string, timeout time.Duration) (context.Context, context.CancelFunc, error) {
	if ctx == nil {
		return nil, nil, fmt.Errorf("%s: nil context", operation)
	}
	if err := ctx.Err(); err != nil {
		return nil, nil, fmt.Errorf("%s: %w", operation, err)
	}
	if timeout <= 0 {
		return nil, nil, fmt.Errorf("%s: operation timeout must be positive", operation)
	}
	bounded, cancel := context.WithTimeout(ctx, timeout)
	return bounded, cancel, nil
}
@@ -0,0 +1,348 @@
// Package invitestore implements the PostgreSQL-backed adapter for
// `ports.InviteStore`.
//
// PG_PLAN.md §6A migrates Game Lobby Service away from Redis-backed durable
// invite records.
package invitestore

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"strings"
	"time"

	"galaxy/lobby/internal/adapters/postgres/internal/sqlx"
	pgtable "galaxy/lobby/internal/adapters/postgres/jet/lobby/table"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/invite"
	"galaxy/lobby/internal/ports"

	pg "github.com/go-jet/jet/v2/postgres"
)

// Config configures one PostgreSQL-backed invite store instance.
type Config struct {
	DB               *sql.DB
	OperationTimeout time.Duration
}

// Store persists Game Lobby invite records in PostgreSQL.
type Store struct {
	db               *sql.DB
	operationTimeout time.Duration
}

// New constructs one PostgreSQL-backed invite store from cfg.
func New(cfg Config) (*Store, error) {
	if cfg.DB == nil {
		return nil, errors.New("new postgres invite store: db must not be nil")
	}
	if cfg.OperationTimeout <= 0 {
		return nil, errors.New("new postgres invite store: operation timeout must be positive")
	}
	return &Store{
		db:               cfg.DB,
		operationTimeout: cfg.OperationTimeout,
	}, nil
}

// inviteSelectColumns is the canonical SELECT list for the invites table,
// matching scanInvite's column order.
var inviteSelectColumns = pg.ColumnList{
	pgtable.Invites.InviteID,
	pgtable.Invites.GameID,
	pgtable.Invites.InviterUserID,
	pgtable.Invites.InviteeUserID,
	pgtable.Invites.RaceName,
	pgtable.Invites.Status,
	pgtable.Invites.CreatedAt,
	pgtable.Invites.ExpiresAt,
	pgtable.Invites.DecidedAt,
}

// Save persists a new created invite record. Save is create-only; a second
// save against the same invite id maps the unique-violation to
// invite.ErrConflict.
func (store *Store) Save(ctx context.Context, record invite.Invite) error {
	if store == nil || store.db == nil {
		return errors.New("save invite: nil store")
	}
	if err := record.Validate(); err != nil {
		return fmt.Errorf("save invite: %w", err)
	}
	if record.Status != invite.StatusCreated {
		return fmt.Errorf(
			"save invite: status must be %q, got %q",
			invite.StatusCreated, record.Status,
		)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "save invite", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	stmt := pgtable.Invites.INSERT(
		pgtable.Invites.InviteID,
		pgtable.Invites.GameID,
		pgtable.Invites.InviterUserID,
		pgtable.Invites.InviteeUserID,
		pgtable.Invites.RaceName,
		pgtable.Invites.Status,
		pgtable.Invites.CreatedAt,
		pgtable.Invites.ExpiresAt,
		pgtable.Invites.DecidedAt,
	).VALUES(
		record.InviteID.String(),
		record.GameID.String(),
		record.InviterUserID,
		record.InviteeUserID,
		record.RaceName,
		string(record.Status),
		record.CreatedAt.UTC(),
		record.ExpiresAt.UTC(),
		sqlx.NullableTimePtr(record.DecidedAt),
	)

	query, args := stmt.Sql()
	if _, err := store.db.ExecContext(operationCtx, query, args...); err != nil {
		if sqlx.IsUniqueViolation(err) {
			return fmt.Errorf("save invite: %w", invite.ErrConflict)
		}
		return fmt.Errorf("save invite: %w", err)
	}
	return nil
}

// Get returns the record identified by inviteID.
func (store *Store) Get(ctx context.Context, inviteID common.InviteID) (invite.Invite, error) {
	if store == nil || store.db == nil {
		return invite.Invite{}, errors.New("get invite: nil store")
	}
	if err := inviteID.Validate(); err != nil {
		return invite.Invite{}, fmt.Errorf("get invite: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get invite", store.operationTimeout)
	if err != nil {
		return invite.Invite{}, err
	}
	defer cancel()

	stmt := pg.SELECT(inviteSelectColumns).
		FROM(pgtable.Invites).
		WHERE(pgtable.Invites.InviteID.EQ(pg.String(inviteID.String())))

	query, args := stmt.Sql()
	row := store.db.QueryRowContext(operationCtx, query, args...)
	record, err := scanInvite(row)
	if sqlx.IsNoRows(err) {
		return invite.Invite{}, invite.ErrNotFound
	}
	if err != nil {
		return invite.Invite{}, fmt.Errorf("get invite: %w", err)
	}
	return record, nil
}

// GetByGame returns every invite attached to gameID.
func (store *Store) GetByGame(ctx context.Context, gameID common.GameID) ([]invite.Invite, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get invites by game: nil store")
	}
	if err := gameID.Validate(); err != nil {
		return nil, fmt.Errorf("get invites by game: %w", err)
	}

	stmt := pg.SELECT(inviteSelectColumns).
		FROM(pgtable.Invites).
		WHERE(pgtable.Invites.GameID.EQ(pg.String(gameID.String()))).
		ORDER_BY(pgtable.Invites.CreatedAt.ASC(), pgtable.Invites.InviteID.ASC())

	return store.queryList(ctx, "get invites by game", stmt)
}

// GetByUser returns every invite addressed to inviteeUserID.
func (store *Store) GetByUser(ctx context.Context, inviteeUserID string) ([]invite.Invite, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get invites by user: nil store")
	}
	trimmed := strings.TrimSpace(inviteeUserID)
	if trimmed == "" {
		return nil, fmt.Errorf("get invites by user: invitee user id must not be empty")
	}

	stmt := pg.SELECT(inviteSelectColumns).
		FROM(pgtable.Invites).
		WHERE(pgtable.Invites.InviteeUserID.EQ(pg.String(trimmed))).
		ORDER_BY(pgtable.Invites.CreatedAt.ASC(), pgtable.Invites.InviteID.ASC())

	return store.queryList(ctx, "get invites by user", stmt)
}

// GetByInviter returns every invite created by inviterUserID.
func (store *Store) GetByInviter(ctx context.Context, inviterUserID string) ([]invite.Invite, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get invites by inviter: nil store")
	}
	trimmed := strings.TrimSpace(inviterUserID)
	if trimmed == "" {
		return nil, fmt.Errorf("get invites by inviter: inviter user id must not be empty")
	}

	stmt := pg.SELECT(inviteSelectColumns).
		FROM(pgtable.Invites).
		WHERE(pgtable.Invites.InviterUserID.EQ(pg.String(trimmed))).
		ORDER_BY(pgtable.Invites.CreatedAt.ASC(), pgtable.Invites.InviteID.ASC())

	return store.queryList(ctx, "get invites by inviter", stmt)
}

func (store *Store) queryList(ctx context.Context, operation string, stmt pg.SelectStatement) ([]invite.Invite, error) {
	operationCtx, cancel, err := sqlx.WithTimeout(ctx, operation, store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	query, args := stmt.Sql()
	rows, err := store.db.QueryContext(operationCtx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("%s: %w", operation, err)
	}
	defer rows.Close()

	records := make([]invite.Invite, 0)
	for rows.Next() {
		record, err := scanInvite(rows)
		if err != nil {
			return nil, fmt.Errorf("%s: scan: %w", operation, err)
		}
		records = append(records, record)
	}
	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("%s: %w", operation, err)
	}
	if len(records) == 0 {
		return nil, nil
	}
	return records, nil
}

// UpdateStatus applies one status transition with compare-and-swap on the
// current status column. When transitioning to redeemed the row's race_name
// column is replaced with the trimmed input value.
func (store *Store) UpdateStatus(ctx context.Context, input ports.UpdateInviteStatusInput) error {
	if store == nil || store.db == nil {
		return errors.New("update invite status: nil store")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update invite status: %w", err)
	}
	if err := invite.Transition(input.ExpectedFrom, input.To); err != nil {
		return err
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update invite status", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	at := input.At.UTC()
	raceName := strings.TrimSpace(input.RaceName)

	// race_name is replaced only when the caller supplies a non-empty value;
	// otherwise the existing value is preserved (CASE WHEN '' THEN race_name).
	raceExpr := pg.CASE().
		WHEN(pg.String(raceName).EQ(pg.String(""))).THEN(pgtable.Invites.RaceName).
		ELSE(pg.String(raceName))

	stmt := pgtable.Invites.UPDATE(
		pgtable.Invites.Status,
		pgtable.Invites.DecidedAt,
		pgtable.Invites.RaceName,
	).SET(
		pg.String(string(input.To)),
		pg.TimestampzT(at),
		raceExpr,
	).WHERE(pg.AND(
		pgtable.Invites.InviteID.EQ(pg.String(input.InviteID.String())),
		pgtable.Invites.Status.EQ(pg.String(string(input.ExpectedFrom))),
	))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update invite status: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update invite status: rows affected: %w", err)
	}
	if affected == 0 {
		probe := pg.SELECT(pgtable.Invites.Status).
			FROM(pgtable.Invites).
			WHERE(pgtable.Invites.InviteID.EQ(pg.String(input.InviteID.String())))
		probeQuery, probeArgs := probe.Sql()

		var current string
		row := store.db.QueryRowContext(operationCtx, probeQuery, probeArgs...)
		if err := row.Scan(&current); err != nil {
			if sqlx.IsNoRows(err) {
				return invite.ErrNotFound
			}
			return fmt.Errorf("update invite status: probe: %w", err)
		}
		return fmt.Errorf("update invite status: %w", invite.ErrConflict)
	}
	return nil
}

type rowScanner interface {
	Scan(dest ...any) error
}

func scanInvite(rs rowScanner) (invite.Invite, error) {
	var (
		inviteID      string
		gameID        string
		inviterUserID string
		inviteeUserID string
		raceName      string
		status        string
		createdAt     time.Time
		expiresAt     time.Time
		decidedAt     sql.NullTime
	)
	if err := rs.Scan(
		&inviteID,
		&gameID,
		&inviterUserID,
		&inviteeUserID,
		&raceName,
		&status,
		&createdAt,
		&expiresAt,
		&decidedAt,
	); err != nil {
		return invite.Invite{}, err
	}
	return invite.Invite{
		InviteID:      common.InviteID(inviteID),
		GameID:        common.GameID(gameID),
		InviterUserID: inviterUserID,
		InviteeUserID: inviteeUserID,
		RaceName:      raceName,
		Status:        invite.Status(status),
		CreatedAt:     createdAt.UTC(),
		ExpiresAt:     expiresAt.UTC(),
		DecidedAt:     sqlx.TimePtrFromNullable(decidedAt),
	}, nil
}

// Ensure Store satisfies the ports.InviteStore interface at compile time.
var _ ports.InviteStore = (*Store)(nil)
@@ -0,0 +1,199 @@
package invitestore_test

import (
	"context"
	"testing"
	"time"

	"galaxy/lobby/internal/adapters/postgres/gamestore"
	"galaxy/lobby/internal/adapters/postgres/internal/pgtest"
	"galaxy/lobby/internal/adapters/postgres/invitestore"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/game"
	"galaxy/lobby/internal/domain/invite"
	"galaxy/lobby/internal/ports"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestMain(m *testing.M) { pgtest.RunMain(m) }

func newStores(t *testing.T) (*gamestore.Store, *invitestore.Store) {
	t.Helper()
	pgtest.TruncateAll(t)
	gs, err := gamestore.New(gamestore.Config{
		DB: pgtest.Ensure(t).Pool(), OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	is, err := invitestore.New(invitestore.Config{
		DB: pgtest.Ensure(t).Pool(), OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	return gs, is
}

func seedPrivateGame(t *testing.T, gs *gamestore.Store, id, ownerID string) game.Game {
	t.Helper()
	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	g, err := game.New(game.NewGameInput{
		GameID:              common.GameID(id),
		GameName:            "Private " + id,
		GameType:            game.GameTypePrivate,
		OwnerUserID:         ownerID,
		MinPlayers:          2,
		MaxPlayers:          6,
		StartGapHours:       12,
		StartGapPlayers:     2,
		EnrollmentEndsAt:    now.Add(7 * 24 * time.Hour),
		TurnSchedule:        "0 18 * * *",
		TargetEngineVersion: "v1.0.0",
		Now:                 now,
	})
	require.NoError(t, err)
	require.NoError(t, gs.Save(context.Background(), g))
	return g
}

func newInvite(t *testing.T, id, gameID, inviter, invitee string) invite.Invite {
	t.Helper()
	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	rec, err := invite.New(invite.NewInviteInput{
		InviteID:      common.InviteID(id),
		GameID:        common.GameID(gameID),
		InviterUserID: inviter,
		InviteeUserID: invitee,
		Now:           now,
		ExpiresAt:     now.Add(7 * 24 * time.Hour),
	})
	require.NoError(t, err)
	return rec
}

func TestSaveAndGet(t *testing.T) {
	ctx := context.Background()
	gs, is := newStores(t)
	seedPrivateGame(t, gs, "game-001", "owner-1")

	rec := newInvite(t, "invite-001", "game-001", "owner-1", "invitee-1")
	require.NoError(t, is.Save(ctx, rec))

	got, err := is.Get(ctx, rec.InviteID)
	require.NoError(t, err)
	assert.Equal(t, rec.InviteID, got.InviteID)
	assert.Equal(t, invite.StatusCreated, got.Status)
	assert.Equal(t, "invitee-1", got.InviteeUserID)
	assert.True(t, got.ExpiresAt.Equal(rec.ExpiresAt))
}

func TestSaveRejectsNonCreated(t *testing.T) {
	ctx := context.Background()
	gs, is := newStores(t)
	seedPrivateGame(t, gs, "game-001", "owner-1")

	rec := newInvite(t, "invite-001", "game-001", "owner-1", "invitee-1")
	rec.Status = invite.StatusRedeemed
	require.Error(t, is.Save(ctx, rec))
}

func TestSaveDuplicateReturnsConflict(t *testing.T) {
	ctx := context.Background()
	gs, is := newStores(t)
	seedPrivateGame(t, gs, "game-001", "owner-1")

	rec := newInvite(t, "invite-001", "game-001", "owner-1", "invitee-1")
	require.NoError(t, is.Save(ctx, rec))
	err := is.Save(ctx, rec)
	require.ErrorIs(t, err, invite.ErrConflict)
}

func TestUpdateStatusRedeemSetsRaceName(t *testing.T) {
	ctx := context.Background()
	gs, is := newStores(t)
	seedPrivateGame(t, gs, "game-001", "owner-1")

	rec := newInvite(t, "invite-001", "game-001", "owner-1", "invitee-1")
	require.NoError(t, is.Save(ctx, rec))

	require.NoError(t, is.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
		InviteID:     rec.InviteID,
		ExpectedFrom: invite.StatusCreated,
		To:           invite.StatusRedeemed,
		At:           rec.CreatedAt.Add(time.Minute),
		RaceName:     "PilotRedeemed",
	}))

	got, err := is.Get(ctx, rec.InviteID)
	require.NoError(t, err)
	assert.Equal(t, invite.StatusRedeemed, got.Status)
	assert.Equal(t, "PilotRedeemed", got.RaceName)
	require.NotNil(t, got.DecidedAt)
}

func TestUpdateStatusReturnsConflictOnExpectedFromMismatch(t *testing.T) {
	ctx := context.Background()
	gs, is := newStores(t)
	seedPrivateGame(t, gs, "game-001", "owner-1")

	rec := newInvite(t, "invite-001", "game-001", "owner-1", "invitee-1")
	require.NoError(t, is.Save(ctx, rec))

	// Move row out of `created` so the next attempt's `WHERE status = ?`
	// fails on persistence even though the (created → revoked) transition is
	// itself valid in the domain table.
	require.NoError(t, is.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
		InviteID:     rec.InviteID,
		ExpectedFrom: invite.StatusCreated,
		To:           invite.StatusDeclined,
		At:           rec.CreatedAt.Add(time.Minute),
	}))
	err := is.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
		InviteID:     rec.InviteID,
		ExpectedFrom: invite.StatusCreated,
		To:           invite.StatusRevoked,
		At:           rec.CreatedAt.Add(2 * time.Minute),
	})
	require.ErrorIs(t, err, invite.ErrConflict)
}

func TestUpdateStatusReturnsNotFoundForMissing(t *testing.T) {
	ctx := context.Background()
	_, is := newStores(t)
	err := is.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
		InviteID:     common.InviteID("invite-missing"),
		ExpectedFrom: invite.StatusCreated,
		To:           invite.StatusDeclined,
		At:           time.Now().UTC(),
	})
	require.ErrorIs(t, err, invite.ErrNotFound)
}

func TestGetByGameUserInviter(t *testing.T) {
	ctx := context.Background()
	gs, is := newStores(t)
	seedPrivateGame(t, gs, "game-001", "owner-1")
	seedPrivateGame(t, gs, "game-002", "owner-2")

	require.NoError(t, is.Save(ctx, newInvite(t, "invite-001", "game-001", "owner-1", "invitee-1")))
	require.NoError(t, is.Save(ctx, newInvite(t, "invite-002", "game-001", "owner-1", "invitee-2")))
	require.NoError(t, is.Save(ctx, newInvite(t, "invite-003", "game-002", "owner-2", "invitee-1")))

	g1, err := is.GetByGame(ctx, common.GameID("game-001"))
	require.NoError(t, err)
	assert.Len(t, g1, 2)

	user1, err := is.GetByUser(ctx, "invitee-1")
	require.NoError(t, err)
	assert.Len(t, user1, 2)

	by1, err := is.GetByInviter(ctx, "owner-1")
	require.NoError(t, err)
	assert.Len(t, by1, 2)
}

func TestGetMissingReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	_, is := newStores(t)
	_, err := is.Get(ctx, common.InviteID("invite-missing"))
	require.ErrorIs(t, err, invite.ErrNotFound)
}
@@ -0,0 +1,22 @@
|
||||
//
|
||||
// Code generated by go-jet DO NOT EDIT.
|
||||
//
|
||||
// WARNING: Changes to this file may cause incorrect behavior
|
||||
// and will be lost if the code is regenerated
|
||||
//
|
||||
|
||||
package model
|
||||
|
||||
import (
|
||||
"time"
|
||||
)
|
||||
|
||||
type Applications struct {
|
||||
ApplicationID string `sql:"primary_key"`
|
||||
GameID string
|
||||
ApplicantUserID string
|
||||
RaceName string
|
||||
Status string
|
||||
CreatedAt time.Time
|
||||
DecidedAt *time.Time
|
||||
}
|
||||
@@ -0,0 +1,34 @@
|
||||
//
|
||||
// Code generated by go-jet DO NOT EDIT.
|
||||
//
|
||||
// WARNING: Changes to this file may cause incorrect behavior
|
||||
// and will be lost if the code is regenerated
|
||||
//
|
||||
|
||||
package model
|
||||
|
||||
import (
|
||||
"time"
|
||||
)
|
||||
|
||||
type Games struct {
|
||||
GameID string `sql:"primary_key"`
|
||||
GameName string
|
||||
Description string
|
||||
GameType string
|
||||
OwnerUserID string
|
||||
Status string
|
||||
MinPlayers int32
|
||||
MaxPlayers int32
|
||||
StartGapHours int32
|
||||
StartGapPlayers int32
|
||||
EnrollmentEndsAt time.Time
|
||||
TurnSchedule string
|
||||
TargetEngineVersion string
|
||||
CreatedAt time.Time
|
||||
UpdatedAt time.Time
|
||||
StartedAt *time.Time
|
||||
FinishedAt *time.Time
|
||||
RuntimeSnapshot string
|
||||
RuntimeBinding *string
|
||||
}
|
||||
@@ -0,0 +1,19 @@
|
||||
//
|
||||
// Code generated by go-jet DO NOT EDIT.
|
||||
//
|
||||
// WARNING: Changes to this file may cause incorrect behavior
|
||||
// and will be lost if the code is regenerated
|
||||
//
|
||||
|
||||
package model
|
||||
|
||||
import (
|
||||
"time"
|
||||
)
|
||||
|
||||
type GooseDbVersion struct {
|
||||
ID int32 `sql:"primary_key"`
|
||||
VersionID int64
|
||||
IsApplied bool
|
||||
Tstamp time.Time
|
||||
}
|
||||
@@ -0,0 +1,24 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package model

import (
	"time"
)

type Invites struct {
	InviteID      string `sql:"primary_key"`
	GameID        string
	InviterUserID string
	InviteeUserID string
	RaceName      string
	Status        string
	CreatedAt     time.Time
	ExpiresAt     time.Time
	DecidedAt     *time.Time
}
@@ -0,0 +1,23 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package model

import (
	"time"
)

type Memberships struct {
	MembershipID string `sql:"primary_key"`
	GameID       string
	UserID       string
	RaceName     string
	CanonicalKey string
	Status       string
	JoinedAt     time.Time
	RemovedAt    *time.Time
}
@@ -0,0 +1,20 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package model

type RaceNames struct {
	CanonicalKey    string `sql:"primary_key"`
	GameID          string `sql:"primary_key"`
	HolderUserID    string
	RaceName        string
	BindingKind     string
	SourceGameID    string
	ReservedAtMs    int64
	EligibleUntilMs *int64
	RegisteredAtMs  *int64
}
@@ -0,0 +1,96 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package table

import (
	"github.com/go-jet/jet/v2/postgres"
)

var Applications = newApplicationsTable("lobby", "applications", "")

type applicationsTable struct {
	postgres.Table

	// Columns
	ApplicationID   postgres.ColumnString
	GameID          postgres.ColumnString
	ApplicantUserID postgres.ColumnString
	RaceName        postgres.ColumnString
	Status          postgres.ColumnString
	CreatedAt       postgres.ColumnTimestampz
	DecidedAt       postgres.ColumnTimestampz

	AllColumns     postgres.ColumnList
	MutableColumns postgres.ColumnList
	DefaultColumns postgres.ColumnList
}

type ApplicationsTable struct {
	applicationsTable

	EXCLUDED applicationsTable
}

// AS creates new ApplicationsTable with assigned alias
func (a ApplicationsTable) AS(alias string) *ApplicationsTable {
	return newApplicationsTable(a.SchemaName(), a.TableName(), alias)
}

// Schema creates new ApplicationsTable with assigned schema name
func (a ApplicationsTable) FromSchema(schemaName string) *ApplicationsTable {
	return newApplicationsTable(schemaName, a.TableName(), a.Alias())
}

// WithPrefix creates new ApplicationsTable with assigned table prefix
func (a ApplicationsTable) WithPrefix(prefix string) *ApplicationsTable {
	return newApplicationsTable(a.SchemaName(), prefix+a.TableName(), a.TableName())
}

// WithSuffix creates new ApplicationsTable with assigned table suffix
func (a ApplicationsTable) WithSuffix(suffix string) *ApplicationsTable {
	return newApplicationsTable(a.SchemaName(), a.TableName()+suffix, a.TableName())
}

func newApplicationsTable(schemaName, tableName, alias string) *ApplicationsTable {
	return &ApplicationsTable{
		applicationsTable: newApplicationsTableImpl(schemaName, tableName, alias),
		EXCLUDED:          newApplicationsTableImpl("", "excluded", ""),
	}
}

func newApplicationsTableImpl(schemaName, tableName, alias string) applicationsTable {
	var (
		ApplicationIDColumn   = postgres.StringColumn("application_id")
		GameIDColumn          = postgres.StringColumn("game_id")
		ApplicantUserIDColumn = postgres.StringColumn("applicant_user_id")
		RaceNameColumn        = postgres.StringColumn("race_name")
		StatusColumn          = postgres.StringColumn("status")
		CreatedAtColumn       = postgres.TimestampzColumn("created_at")
		DecidedAtColumn       = postgres.TimestampzColumn("decided_at")
		allColumns            = postgres.ColumnList{ApplicationIDColumn, GameIDColumn, ApplicantUserIDColumn, RaceNameColumn, StatusColumn, CreatedAtColumn, DecidedAtColumn}
		mutableColumns        = postgres.ColumnList{GameIDColumn, ApplicantUserIDColumn, RaceNameColumn, StatusColumn, CreatedAtColumn, DecidedAtColumn}
		defaultColumns        = postgres.ColumnList{}
	)

	return applicationsTable{
		Table: postgres.NewTable(schemaName, tableName, alias, allColumns...),

		//Columns
		ApplicationID:   ApplicationIDColumn,
		GameID:          GameIDColumn,
		ApplicantUserID: ApplicantUserIDColumn,
		RaceName:        RaceNameColumn,
		Status:          StatusColumn,
		CreatedAt:       CreatedAtColumn,
		DecidedAt:       DecidedAtColumn,

		AllColumns:     allColumns,
		MutableColumns: mutableColumns,
		DefaultColumns: defaultColumns,
	}
}
@@ -0,0 +1,132 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package table

import (
	"github.com/go-jet/jet/v2/postgres"
)

var Games = newGamesTable("lobby", "games", "")

type gamesTable struct {
	postgres.Table

	// Columns
	GameID              postgres.ColumnString
	GameName            postgres.ColumnString
	Description         postgres.ColumnString
	GameType            postgres.ColumnString
	OwnerUserID         postgres.ColumnString
	Status              postgres.ColumnString
	MinPlayers          postgres.ColumnInteger
	MaxPlayers          postgres.ColumnInteger
	StartGapHours       postgres.ColumnInteger
	StartGapPlayers     postgres.ColumnInteger
	EnrollmentEndsAt    postgres.ColumnTimestampz
	TurnSchedule        postgres.ColumnString
	TargetEngineVersion postgres.ColumnString
	CreatedAt           postgres.ColumnTimestampz
	UpdatedAt           postgres.ColumnTimestampz
	StartedAt           postgres.ColumnTimestampz
	FinishedAt          postgres.ColumnTimestampz
	RuntimeSnapshot     postgres.ColumnString
	RuntimeBinding      postgres.ColumnString

	AllColumns     postgres.ColumnList
	MutableColumns postgres.ColumnList
	DefaultColumns postgres.ColumnList
}

type GamesTable struct {
	gamesTable

	EXCLUDED gamesTable
}

// AS creates new GamesTable with assigned alias
func (a GamesTable) AS(alias string) *GamesTable {
	return newGamesTable(a.SchemaName(), a.TableName(), alias)
}

// Schema creates new GamesTable with assigned schema name
func (a GamesTable) FromSchema(schemaName string) *GamesTable {
	return newGamesTable(schemaName, a.TableName(), a.Alias())
}

// WithPrefix creates new GamesTable with assigned table prefix
func (a GamesTable) WithPrefix(prefix string) *GamesTable {
	return newGamesTable(a.SchemaName(), prefix+a.TableName(), a.TableName())
}

// WithSuffix creates new GamesTable with assigned table suffix
func (a GamesTable) WithSuffix(suffix string) *GamesTable {
	return newGamesTable(a.SchemaName(), a.TableName()+suffix, a.TableName())
}

func newGamesTable(schemaName, tableName, alias string) *GamesTable {
	return &GamesTable{
		gamesTable: newGamesTableImpl(schemaName, tableName, alias),
		EXCLUDED:   newGamesTableImpl("", "excluded", ""),
	}
}

func newGamesTableImpl(schemaName, tableName, alias string) gamesTable {
	var (
		GameIDColumn              = postgres.StringColumn("game_id")
		GameNameColumn            = postgres.StringColumn("game_name")
		DescriptionColumn         = postgres.StringColumn("description")
		GameTypeColumn            = postgres.StringColumn("game_type")
		OwnerUserIDColumn         = postgres.StringColumn("owner_user_id")
		StatusColumn              = postgres.StringColumn("status")
		MinPlayersColumn          = postgres.IntegerColumn("min_players")
		MaxPlayersColumn          = postgres.IntegerColumn("max_players")
		StartGapHoursColumn       = postgres.IntegerColumn("start_gap_hours")
		StartGapPlayersColumn     = postgres.IntegerColumn("start_gap_players")
		EnrollmentEndsAtColumn    = postgres.TimestampzColumn("enrollment_ends_at")
		TurnScheduleColumn        = postgres.StringColumn("turn_schedule")
		TargetEngineVersionColumn = postgres.StringColumn("target_engine_version")
		CreatedAtColumn           = postgres.TimestampzColumn("created_at")
		UpdatedAtColumn           = postgres.TimestampzColumn("updated_at")
		StartedAtColumn           = postgres.TimestampzColumn("started_at")
		FinishedAtColumn          = postgres.TimestampzColumn("finished_at")
		RuntimeSnapshotColumn     = postgres.StringColumn("runtime_snapshot")
		RuntimeBindingColumn      = postgres.StringColumn("runtime_binding")
		allColumns                = postgres.ColumnList{GameIDColumn, GameNameColumn, DescriptionColumn, GameTypeColumn, OwnerUserIDColumn, StatusColumn, MinPlayersColumn, MaxPlayersColumn, StartGapHoursColumn, StartGapPlayersColumn, EnrollmentEndsAtColumn, TurnScheduleColumn, TargetEngineVersionColumn, CreatedAtColumn, UpdatedAtColumn, StartedAtColumn, FinishedAtColumn, RuntimeSnapshotColumn, RuntimeBindingColumn}
		mutableColumns            = postgres.ColumnList{GameNameColumn, DescriptionColumn, GameTypeColumn, OwnerUserIDColumn, StatusColumn, MinPlayersColumn, MaxPlayersColumn, StartGapHoursColumn, StartGapPlayersColumn, EnrollmentEndsAtColumn, TurnScheduleColumn, TargetEngineVersionColumn, CreatedAtColumn, UpdatedAtColumn, StartedAtColumn, FinishedAtColumn, RuntimeSnapshotColumn, RuntimeBindingColumn}
		defaultColumns            = postgres.ColumnList{DescriptionColumn, OwnerUserIDColumn, RuntimeSnapshotColumn}
	)

	return gamesTable{
		Table: postgres.NewTable(schemaName, tableName, alias, allColumns...),

		//Columns
		GameID:              GameIDColumn,
		GameName:            GameNameColumn,
		Description:         DescriptionColumn,
		GameType:            GameTypeColumn,
		OwnerUserID:         OwnerUserIDColumn,
		Status:              StatusColumn,
		MinPlayers:          MinPlayersColumn,
		MaxPlayers:          MaxPlayersColumn,
		StartGapHours:       StartGapHoursColumn,
		StartGapPlayers:     StartGapPlayersColumn,
		EnrollmentEndsAt:    EnrollmentEndsAtColumn,
		TurnSchedule:        TurnScheduleColumn,
		TargetEngineVersion: TargetEngineVersionColumn,
		CreatedAt:           CreatedAtColumn,
		UpdatedAt:           UpdatedAtColumn,
		StartedAt:           StartedAtColumn,
		FinishedAt:          FinishedAtColumn,
		RuntimeSnapshot:     RuntimeSnapshotColumn,
		RuntimeBinding:      RuntimeBindingColumn,

		AllColumns:     allColumns,
		MutableColumns: mutableColumns,
		DefaultColumns: defaultColumns,
	}
}
@@ -0,0 +1,87 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package table

import (
	"github.com/go-jet/jet/v2/postgres"
)

var GooseDbVersion = newGooseDbVersionTable("lobby", "goose_db_version", "")

type gooseDbVersionTable struct {
	postgres.Table

	// Columns
	ID        postgres.ColumnInteger
	VersionID postgres.ColumnInteger
	IsApplied postgres.ColumnBool
	Tstamp    postgres.ColumnTimestamp

	AllColumns     postgres.ColumnList
	MutableColumns postgres.ColumnList
	DefaultColumns postgres.ColumnList
}

type GooseDbVersionTable struct {
	gooseDbVersionTable

	EXCLUDED gooseDbVersionTable
}

// AS creates new GooseDbVersionTable with assigned alias
func (a GooseDbVersionTable) AS(alias string) *GooseDbVersionTable {
	return newGooseDbVersionTable(a.SchemaName(), a.TableName(), alias)
}

// Schema creates new GooseDbVersionTable with assigned schema name
func (a GooseDbVersionTable) FromSchema(schemaName string) *GooseDbVersionTable {
	return newGooseDbVersionTable(schemaName, a.TableName(), a.Alias())
}

// WithPrefix creates new GooseDbVersionTable with assigned table prefix
func (a GooseDbVersionTable) WithPrefix(prefix string) *GooseDbVersionTable {
	return newGooseDbVersionTable(a.SchemaName(), prefix+a.TableName(), a.TableName())
}

// WithSuffix creates new GooseDbVersionTable with assigned table suffix
func (a GooseDbVersionTable) WithSuffix(suffix string) *GooseDbVersionTable {
	return newGooseDbVersionTable(a.SchemaName(), a.TableName()+suffix, a.TableName())
}

func newGooseDbVersionTable(schemaName, tableName, alias string) *GooseDbVersionTable {
	return &GooseDbVersionTable{
		gooseDbVersionTable: newGooseDbVersionTableImpl(schemaName, tableName, alias),
		EXCLUDED:            newGooseDbVersionTableImpl("", "excluded", ""),
	}
}

func newGooseDbVersionTableImpl(schemaName, tableName, alias string) gooseDbVersionTable {
	var (
		IDColumn        = postgres.IntegerColumn("id")
		VersionIDColumn = postgres.IntegerColumn("version_id")
		IsAppliedColumn = postgres.BoolColumn("is_applied")
		TstampColumn    = postgres.TimestampColumn("tstamp")
		allColumns      = postgres.ColumnList{IDColumn, VersionIDColumn, IsAppliedColumn, TstampColumn}
		mutableColumns  = postgres.ColumnList{VersionIDColumn, IsAppliedColumn, TstampColumn}
		defaultColumns  = postgres.ColumnList{TstampColumn}
	)

	return gooseDbVersionTable{
		Table: postgres.NewTable(schemaName, tableName, alias, allColumns...),

		//Columns
		ID:        IDColumn,
		VersionID: VersionIDColumn,
		IsApplied: IsAppliedColumn,
		Tstamp:    TstampColumn,

		AllColumns:     allColumns,
		MutableColumns: mutableColumns,
		DefaultColumns: defaultColumns,
	}
}
@@ -0,0 +1,102 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package table

import (
	"github.com/go-jet/jet/v2/postgres"
)

var Invites = newInvitesTable("lobby", "invites", "")

type invitesTable struct {
	postgres.Table

	// Columns
	InviteID      postgres.ColumnString
	GameID        postgres.ColumnString
	InviterUserID postgres.ColumnString
	InviteeUserID postgres.ColumnString
	RaceName      postgres.ColumnString
	Status        postgres.ColumnString
	CreatedAt     postgres.ColumnTimestampz
	ExpiresAt     postgres.ColumnTimestampz
	DecidedAt     postgres.ColumnTimestampz

	AllColumns     postgres.ColumnList
	MutableColumns postgres.ColumnList
	DefaultColumns postgres.ColumnList
}

type InvitesTable struct {
	invitesTable

	EXCLUDED invitesTable
}

// AS creates new InvitesTable with assigned alias
func (a InvitesTable) AS(alias string) *InvitesTable {
	return newInvitesTable(a.SchemaName(), a.TableName(), alias)
}

// Schema creates new InvitesTable with assigned schema name
func (a InvitesTable) FromSchema(schemaName string) *InvitesTable {
	return newInvitesTable(schemaName, a.TableName(), a.Alias())
}

// WithPrefix creates new InvitesTable with assigned table prefix
func (a InvitesTable) WithPrefix(prefix string) *InvitesTable {
	return newInvitesTable(a.SchemaName(), prefix+a.TableName(), a.TableName())
}

// WithSuffix creates new InvitesTable with assigned table suffix
func (a InvitesTable) WithSuffix(suffix string) *InvitesTable {
	return newInvitesTable(a.SchemaName(), a.TableName()+suffix, a.TableName())
}

func newInvitesTable(schemaName, tableName, alias string) *InvitesTable {
	return &InvitesTable{
		invitesTable: newInvitesTableImpl(schemaName, tableName, alias),
		EXCLUDED:     newInvitesTableImpl("", "excluded", ""),
	}
}

func newInvitesTableImpl(schemaName, tableName, alias string) invitesTable {
	var (
		InviteIDColumn      = postgres.StringColumn("invite_id")
		GameIDColumn        = postgres.StringColumn("game_id")
		InviterUserIDColumn = postgres.StringColumn("inviter_user_id")
		InviteeUserIDColumn = postgres.StringColumn("invitee_user_id")
		RaceNameColumn      = postgres.StringColumn("race_name")
		StatusColumn        = postgres.StringColumn("status")
		CreatedAtColumn     = postgres.TimestampzColumn("created_at")
		ExpiresAtColumn     = postgres.TimestampzColumn("expires_at")
		DecidedAtColumn     = postgres.TimestampzColumn("decided_at")
		allColumns          = postgres.ColumnList{InviteIDColumn, GameIDColumn, InviterUserIDColumn, InviteeUserIDColumn, RaceNameColumn, StatusColumn, CreatedAtColumn, ExpiresAtColumn, DecidedAtColumn}
		mutableColumns      = postgres.ColumnList{GameIDColumn, InviterUserIDColumn, InviteeUserIDColumn, RaceNameColumn, StatusColumn, CreatedAtColumn, ExpiresAtColumn, DecidedAtColumn}
		defaultColumns      = postgres.ColumnList{RaceNameColumn}
	)

	return invitesTable{
		Table: postgres.NewTable(schemaName, tableName, alias, allColumns...),

		//Columns
		InviteID:      InviteIDColumn,
		GameID:        GameIDColumn,
		InviterUserID: InviterUserIDColumn,
		InviteeUserID: InviteeUserIDColumn,
		RaceName:      RaceNameColumn,
		Status:        StatusColumn,
		CreatedAt:     CreatedAtColumn,
		ExpiresAt:     ExpiresAtColumn,
		DecidedAt:     DecidedAtColumn,

		AllColumns:     allColumns,
		MutableColumns: mutableColumns,
		DefaultColumns: defaultColumns,
	}
}
@@ -0,0 +1,99 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package table

import (
	"github.com/go-jet/jet/v2/postgres"
)

var Memberships = newMembershipsTable("lobby", "memberships", "")

type membershipsTable struct {
	postgres.Table

	// Columns
	MembershipID postgres.ColumnString
	GameID       postgres.ColumnString
	UserID       postgres.ColumnString
	RaceName     postgres.ColumnString
	CanonicalKey postgres.ColumnString
	Status       postgres.ColumnString
	JoinedAt     postgres.ColumnTimestampz
	RemovedAt    postgres.ColumnTimestampz

	AllColumns     postgres.ColumnList
	MutableColumns postgres.ColumnList
	DefaultColumns postgres.ColumnList
}

type MembershipsTable struct {
	membershipsTable

	EXCLUDED membershipsTable
}

// AS creates new MembershipsTable with assigned alias
func (a MembershipsTable) AS(alias string) *MembershipsTable {
	return newMembershipsTable(a.SchemaName(), a.TableName(), alias)
}

// Schema creates new MembershipsTable with assigned schema name
func (a MembershipsTable) FromSchema(schemaName string) *MembershipsTable {
	return newMembershipsTable(schemaName, a.TableName(), a.Alias())
}

// WithPrefix creates new MembershipsTable with assigned table prefix
func (a MembershipsTable) WithPrefix(prefix string) *MembershipsTable {
	return newMembershipsTable(a.SchemaName(), prefix+a.TableName(), a.TableName())
}

// WithSuffix creates new MembershipsTable with assigned table suffix
func (a MembershipsTable) WithSuffix(suffix string) *MembershipsTable {
	return newMembershipsTable(a.SchemaName(), a.TableName()+suffix, a.TableName())
}

func newMembershipsTable(schemaName, tableName, alias string) *MembershipsTable {
	return &MembershipsTable{
		membershipsTable: newMembershipsTableImpl(schemaName, tableName, alias),
		EXCLUDED:         newMembershipsTableImpl("", "excluded", ""),
	}
}

func newMembershipsTableImpl(schemaName, tableName, alias string) membershipsTable {
	var (
		MembershipIDColumn = postgres.StringColumn("membership_id")
		GameIDColumn       = postgres.StringColumn("game_id")
		UserIDColumn       = postgres.StringColumn("user_id")
		RaceNameColumn     = postgres.StringColumn("race_name")
		CanonicalKeyColumn = postgres.StringColumn("canonical_key")
		StatusColumn       = postgres.StringColumn("status")
		JoinedAtColumn     = postgres.TimestampzColumn("joined_at")
		RemovedAtColumn    = postgres.TimestampzColumn("removed_at")
		allColumns         = postgres.ColumnList{MembershipIDColumn, GameIDColumn, UserIDColumn, RaceNameColumn, CanonicalKeyColumn, StatusColumn, JoinedAtColumn, RemovedAtColumn}
		mutableColumns     = postgres.ColumnList{GameIDColumn, UserIDColumn, RaceNameColumn, CanonicalKeyColumn, StatusColumn, JoinedAtColumn, RemovedAtColumn}
		defaultColumns     = postgres.ColumnList{}
	)

	return membershipsTable{
		Table: postgres.NewTable(schemaName, tableName, alias, allColumns...),

		//Columns
		MembershipID: MembershipIDColumn,
		GameID:       GameIDColumn,
		UserID:       UserIDColumn,
		RaceName:     RaceNameColumn,
		CanonicalKey: CanonicalKeyColumn,
		Status:       StatusColumn,
		JoinedAt:     JoinedAtColumn,
		RemovedAt:    RemovedAtColumn,

		AllColumns:     allColumns,
		MutableColumns: mutableColumns,
		DefaultColumns: defaultColumns,
	}
}
@@ -0,0 +1,102 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package table

import (
	"github.com/go-jet/jet/v2/postgres"
)

var RaceNames = newRaceNamesTable("lobby", "race_names", "")

type raceNamesTable struct {
	postgres.Table

	// Columns
	CanonicalKey    postgres.ColumnString
	GameID          postgres.ColumnString
	HolderUserID    postgres.ColumnString
	RaceName        postgres.ColumnString
	BindingKind     postgres.ColumnString
	SourceGameID    postgres.ColumnString
	ReservedAtMs    postgres.ColumnInteger
	EligibleUntilMs postgres.ColumnInteger
	RegisteredAtMs  postgres.ColumnInteger

	AllColumns     postgres.ColumnList
	MutableColumns postgres.ColumnList
	DefaultColumns postgres.ColumnList
}

type RaceNamesTable struct {
	raceNamesTable

	EXCLUDED raceNamesTable
}

// AS creates new RaceNamesTable with assigned alias
func (a RaceNamesTable) AS(alias string) *RaceNamesTable {
	return newRaceNamesTable(a.SchemaName(), a.TableName(), alias)
}

// Schema creates new RaceNamesTable with assigned schema name
func (a RaceNamesTable) FromSchema(schemaName string) *RaceNamesTable {
	return newRaceNamesTable(schemaName, a.TableName(), a.Alias())
}

// WithPrefix creates new RaceNamesTable with assigned table prefix
func (a RaceNamesTable) WithPrefix(prefix string) *RaceNamesTable {
	return newRaceNamesTable(a.SchemaName(), prefix+a.TableName(), a.TableName())
}

// WithSuffix creates new RaceNamesTable with assigned table suffix
func (a RaceNamesTable) WithSuffix(suffix string) *RaceNamesTable {
	return newRaceNamesTable(a.SchemaName(), a.TableName()+suffix, a.TableName())
}

func newRaceNamesTable(schemaName, tableName, alias string) *RaceNamesTable {
	return &RaceNamesTable{
		raceNamesTable: newRaceNamesTableImpl(schemaName, tableName, alias),
		EXCLUDED:       newRaceNamesTableImpl("", "excluded", ""),
	}
}

func newRaceNamesTableImpl(schemaName, tableName, alias string) raceNamesTable {
	var (
		CanonicalKeyColumn    = postgres.StringColumn("canonical_key")
		GameIDColumn          = postgres.StringColumn("game_id")
		HolderUserIDColumn    = postgres.StringColumn("holder_user_id")
		RaceNameColumn        = postgres.StringColumn("race_name")
		BindingKindColumn     = postgres.StringColumn("binding_kind")
		SourceGameIDColumn    = postgres.StringColumn("source_game_id")
		ReservedAtMsColumn    = postgres.IntegerColumn("reserved_at_ms")
		EligibleUntilMsColumn = postgres.IntegerColumn("eligible_until_ms")
		RegisteredAtMsColumn  = postgres.IntegerColumn("registered_at_ms")
		allColumns            = postgres.ColumnList{CanonicalKeyColumn, GameIDColumn, HolderUserIDColumn, RaceNameColumn, BindingKindColumn, SourceGameIDColumn, ReservedAtMsColumn, EligibleUntilMsColumn, RegisteredAtMsColumn}
		mutableColumns        = postgres.ColumnList{HolderUserIDColumn, RaceNameColumn, BindingKindColumn, SourceGameIDColumn, ReservedAtMsColumn, EligibleUntilMsColumn, RegisteredAtMsColumn}
		defaultColumns        = postgres.ColumnList{GameIDColumn, SourceGameIDColumn, ReservedAtMsColumn}
	)

	return raceNamesTable{
		Table: postgres.NewTable(schemaName, tableName, alias, allColumns...),

		//Columns
		CanonicalKey:    CanonicalKeyColumn,
		GameID:          GameIDColumn,
		HolderUserID:    HolderUserIDColumn,
		RaceName:        RaceNameColumn,
		BindingKind:     BindingKindColumn,
		SourceGameID:    SourceGameIDColumn,
		ReservedAtMs:    ReservedAtMsColumn,
		EligibleUntilMs: EligibleUntilMsColumn,
		RegisteredAtMs:  RegisteredAtMsColumn,

		AllColumns:     allColumns,
		MutableColumns: mutableColumns,
		DefaultColumns: defaultColumns,
	}
}
@@ -0,0 +1,19 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package table

// UseSchema sets a new schema name for all generated table SQL builder types. It is recommended to invoke
// this method only once at the beginning of the program.
func UseSchema(schema string) {
	Applications = Applications.FromSchema(schema)
	Games = Games.FromSchema(schema)
	GooseDbVersion = GooseDbVersion.FromSchema(schema)
	Invites = Invites.FromSchema(schema)
	Memberships = Memberships.FromSchema(schema)
	RaceNames = RaceNames.FromSchema(schema)
}
@@ -0,0 +1,346 @@
// Package membershipstore implements the PostgreSQL-backed adapter for
// `ports.MembershipStore`.
//
// PG_PLAN.md §6A migrates Game Lobby Service away from Redis-backed durable
// membership records.
package membershipstore

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"strings"
	"time"

	"galaxy/lobby/internal/adapters/postgres/internal/sqlx"
	pgtable "galaxy/lobby/internal/adapters/postgres/jet/lobby/table"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/membership"
	"galaxy/lobby/internal/ports"

	pg "github.com/go-jet/jet/v2/postgres"
)

// Config configures one PostgreSQL-backed membership store instance.
type Config struct {
	DB               *sql.DB
	OperationTimeout time.Duration
}

// Store persists Game Lobby membership records in PostgreSQL.
type Store struct {
	db               *sql.DB
	operationTimeout time.Duration
}

// New constructs one PostgreSQL-backed membership store from cfg.
func New(cfg Config) (*Store, error) {
	if cfg.DB == nil {
		return nil, errors.New("new postgres membership store: db must not be nil")
	}
	if cfg.OperationTimeout <= 0 {
		return nil, errors.New("new postgres membership store: operation timeout must be positive")
	}
	return &Store{
		db:               cfg.DB,
		operationTimeout: cfg.OperationTimeout,
	}, nil
}

// membershipSelectColumns is the canonical SELECT list for the memberships
// table, matching scanMembership's column order.
var membershipSelectColumns = pg.ColumnList{
	pgtable.Memberships.MembershipID,
	pgtable.Memberships.GameID,
	pgtable.Memberships.UserID,
	pgtable.Memberships.RaceName,
	pgtable.Memberships.CanonicalKey,
	pgtable.Memberships.Status,
	pgtable.Memberships.JoinedAt,
	pgtable.Memberships.RemovedAt,
}

// Save persists a new active membership record. Save is create-only; a
// second save against the same membership id maps the unique violation to
// membership.ErrConflict.
func (store *Store) Save(ctx context.Context, record membership.Membership) error {
	if store == nil || store.db == nil {
		return errors.New("save membership: nil store")
	}
	if err := record.Validate(); err != nil {
		return fmt.Errorf("save membership: %w", err)
	}
	if record.Status != membership.StatusActive {
		return fmt.Errorf(
			"save membership: status must be %q, got %q",
			membership.StatusActive, record.Status,
		)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "save membership", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	stmt := pgtable.Memberships.INSERT(
		pgtable.Memberships.MembershipID,
		pgtable.Memberships.GameID,
		pgtable.Memberships.UserID,
		pgtable.Memberships.RaceName,
		pgtable.Memberships.CanonicalKey,
		pgtable.Memberships.Status,
		pgtable.Memberships.JoinedAt,
		pgtable.Memberships.RemovedAt,
	).VALUES(
		record.MembershipID.String(),
		record.GameID.String(),
		record.UserID,
		record.RaceName,
		record.CanonicalKey,
		string(record.Status),
		record.JoinedAt.UTC(),
		sqlx.NullableTimePtr(record.RemovedAt),
	)

	query, args := stmt.Sql()
	if _, err := store.db.ExecContext(operationCtx, query, args...); err != nil {
		if sqlx.IsUniqueViolation(err) {
			return fmt.Errorf("save membership: %w", membership.ErrConflict)
		}
		return fmt.Errorf("save membership: %w", err)
	}
	return nil
}

// Get returns the record identified by membershipID.
func (store *Store) Get(ctx context.Context, membershipID common.MembershipID) (membership.Membership, error) {
	if store == nil || store.db == nil {
		return membership.Membership{}, errors.New("get membership: nil store")
	}
	if err := membershipID.Validate(); err != nil {
		return membership.Membership{}, fmt.Errorf("get membership: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get membership", store.operationTimeout)
	if err != nil {
		return membership.Membership{}, err
	}
	defer cancel()

	stmt := pg.SELECT(membershipSelectColumns).
		FROM(pgtable.Memberships).
		WHERE(pgtable.Memberships.MembershipID.EQ(pg.String(membershipID.String())))

	query, args := stmt.Sql()
	row := store.db.QueryRowContext(operationCtx, query, args...)
	record, err := scanMembership(row)
	if sqlx.IsNoRows(err) {
		return membership.Membership{}, membership.ErrNotFound
	}
	if err != nil {
		return membership.Membership{}, fmt.Errorf("get membership: %w", err)
	}
	return record, nil
}

// GetByGame returns every membership attached to gameID.
func (store *Store) GetByGame(ctx context.Context, gameID common.GameID) ([]membership.Membership, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get memberships by game: nil store")
	}
	if err := gameID.Validate(); err != nil {
		return nil, fmt.Errorf("get memberships by game: %w", err)
	}

	stmt := pg.SELECT(membershipSelectColumns).
		FROM(pgtable.Memberships).
		WHERE(pgtable.Memberships.GameID.EQ(pg.String(gameID.String()))).
		ORDER_BY(pgtable.Memberships.JoinedAt.ASC(), pgtable.Memberships.MembershipID.ASC())

	return store.queryList(ctx, "get memberships by game", stmt)
}

// GetByUser returns every membership held by userID.
func (store *Store) GetByUser(ctx context.Context, userID string) ([]membership.Membership, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get memberships by user: nil store")
	}
	trimmed := strings.TrimSpace(userID)
	if trimmed == "" {
		return nil, fmt.Errorf("get memberships by user: user id must not be empty")
	}

	stmt := pg.SELECT(membershipSelectColumns).
		FROM(pgtable.Memberships).
		WHERE(pgtable.Memberships.UserID.EQ(pg.String(trimmed))).
		ORDER_BY(pgtable.Memberships.JoinedAt.ASC(), pgtable.Memberships.MembershipID.ASC())

	return store.queryList(ctx, "get memberships by user", stmt)
}

func (store *Store) queryList(ctx context.Context, operation string, stmt pg.SelectStatement) ([]membership.Membership, error) {
	operationCtx, cancel, err := sqlx.WithTimeout(ctx, operation, store.operationTimeout)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer cancel()
|
||||
|
||||
query, args := stmt.Sql()
|
||||
rows, err := store.db.QueryContext(operationCtx, query, args...)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%s: %w", operation, err)
|
||||
}
|
||||
defer rows.Close()
|
||||
|
||||
records := make([]membership.Membership, 0)
|
||||
for rows.Next() {
|
||||
record, err := scanMembership(rows)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%s: scan: %w", operation, err)
|
||||
}
|
||||
records = append(records, record)
|
||||
}
|
||||
if err := rows.Err(); err != nil {
|
||||
return nil, fmt.Errorf("%s: %w", operation, err)
|
||||
}
|
||||
if len(records) == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
return records, nil
|
||||
}
|
||||
|
||||
// UpdateStatus applies one status transition with compare-and-swap on the
|
||||
// current status column. RemovedAt is set to input.At when transitioning out
|
||||
// of active.
|
||||
func (store *Store) UpdateStatus(ctx context.Context, input ports.UpdateMembershipStatusInput) error {
|
||||
if store == nil || store.db == nil {
|
||||
return errors.New("update membership status: nil store")
|
||||
}
|
||||
if err := input.Validate(); err != nil {
|
||||
return fmt.Errorf("update membership status: %w", err)
|
||||
}
|
||||
if err := membership.Transition(input.ExpectedFrom, input.To); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update membership status", store.operationTimeout)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer cancel()
|
||||
|
||||
at := input.At.UTC()
|
||||
stmt := pgtable.Memberships.UPDATE(pgtable.Memberships.Status, pgtable.Memberships.RemovedAt).
|
||||
SET(string(input.To), at).
|
||||
WHERE(pg.AND(
|
||||
pgtable.Memberships.MembershipID.EQ(pg.String(input.MembershipID.String())),
|
||||
pgtable.Memberships.Status.EQ(pg.String(string(input.ExpectedFrom))),
|
||||
))
|
||||
|
||||
query, args := stmt.Sql()
|
||||
result, err := store.db.ExecContext(operationCtx, query, args...)
|
||||
if err != nil {
|
||||
return fmt.Errorf("update membership status: %w", err)
|
||||
}
|
||||
affected, err := result.RowsAffected()
|
||||
if err != nil {
|
||||
return fmt.Errorf("update membership status: rows affected: %w", err)
|
||||
}
|
||||
if affected == 0 {
|
||||
probe := pg.SELECT(pgtable.Memberships.Status).
|
||||
FROM(pgtable.Memberships).
|
||||
WHERE(pgtable.Memberships.MembershipID.EQ(pg.String(input.MembershipID.String())))
|
||||
probeQuery, probeArgs := probe.Sql()
|
||||
|
||||
var current string
|
||||
row := store.db.QueryRowContext(operationCtx, probeQuery, probeArgs...)
|
||||
if err := row.Scan(¤t); err != nil {
|
||||
if sqlx.IsNoRows(err) {
|
||||
return membership.ErrNotFound
|
||||
}
|
||||
return fmt.Errorf("update membership status: probe: %w", err)
|
||||
}
|
||||
return fmt.Errorf("update membership status: %w", membership.ErrConflict)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Delete removes the membership record identified by membershipID. The
|
||||
// pre-start removemember path uses Delete; the post-start path uses
|
||||
// UpdateStatus(active → removed).
|
||||
func (store *Store) Delete(ctx context.Context, membershipID common.MembershipID) error {
|
||||
if store == nil || store.db == nil {
|
||||
return errors.New("delete membership: nil store")
|
||||
}
|
||||
if err := membershipID.Validate(); err != nil {
|
||||
return fmt.Errorf("delete membership: %w", err)
|
||||
}
|
||||
|
||||
operationCtx, cancel, err := sqlx.WithTimeout(ctx, "delete membership", store.operationTimeout)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer cancel()
|
||||
|
||||
stmt := pgtable.Memberships.DELETE().
|
||||
WHERE(pgtable.Memberships.MembershipID.EQ(pg.String(membershipID.String())))
|
||||
|
||||
query, args := stmt.Sql()
|
||||
result, err := store.db.ExecContext(operationCtx, query, args...)
|
||||
if err != nil {
|
||||
return fmt.Errorf("delete membership: %w", err)
|
||||
}
|
||||
affected, err := result.RowsAffected()
|
||||
if err != nil {
|
||||
return fmt.Errorf("delete membership: rows affected: %w", err)
|
||||
}
|
||||
if affected == 0 {
|
||||
return membership.ErrNotFound
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
type rowScanner interface {
|
||||
Scan(dest ...any) error
|
||||
}
|
||||
|
||||
func scanMembership(rs rowScanner) (membership.Membership, error) {
|
||||
var (
|
||||
membershipID string
|
||||
gameID string
|
||||
userID string
|
||||
raceName string
|
||||
canonicalKey string
|
||||
status string
|
||||
joinedAt time.Time
|
||||
removedAt sql.NullTime
|
||||
)
|
||||
if err := rs.Scan(
|
||||
&membershipID,
|
||||
&gameID,
|
||||
&userID,
|
||||
&raceName,
|
||||
&canonicalKey,
|
||||
&status,
|
||||
&joinedAt,
|
||||
&removedAt,
|
||||
); err != nil {
|
||||
return membership.Membership{}, err
|
||||
}
|
||||
return membership.Membership{
|
||||
MembershipID: common.MembershipID(membershipID),
|
||||
GameID: common.GameID(gameID),
|
||||
UserID: userID,
|
||||
RaceName: raceName,
|
||||
CanonicalKey: canonicalKey,
|
||||
Status: membership.Status(status),
|
||||
JoinedAt: joinedAt.UTC(),
|
||||
RemovedAt: sqlx.TimePtrFromNullable(removedAt),
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Ensure Store satisfies the ports.MembershipStore interface at compile
|
||||
// time.
|
||||
var _ ports.MembershipStore = (*Store)(nil)
|
||||
@@ -0,0 +1,213 @@
package membershipstore_test

import (
	"context"
	"testing"
	"time"

	"galaxy/lobby/internal/adapters/postgres/gamestore"
	"galaxy/lobby/internal/adapters/postgres/internal/pgtest"
	"galaxy/lobby/internal/adapters/postgres/membershipstore"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/game"
	"galaxy/lobby/internal/domain/membership"
	"galaxy/lobby/internal/ports"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestMain(m *testing.M) { pgtest.RunMain(m) }

func newStores(t *testing.T) (*gamestore.Store, *membershipstore.Store) {
	t.Helper()
	pgtest.TruncateAll(t)
	gs, err := gamestore.New(gamestore.Config{
		DB: pgtest.Ensure(t).Pool(), OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	ms, err := membershipstore.New(membershipstore.Config{
		DB: pgtest.Ensure(t).Pool(), OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	return gs, ms
}

func seedGame(t *testing.T, gs *gamestore.Store, id string) game.Game {
	t.Helper()
	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	g, err := game.New(game.NewGameInput{
		GameID:              common.GameID(id),
		GameName:            "G " + id,
		GameType:            game.GameTypePublic,
		MinPlayers:          2,
		MaxPlayers:          8,
		StartGapHours:       12,
		StartGapPlayers:     2,
		EnrollmentEndsAt:    now.Add(7 * 24 * time.Hour),
		TurnSchedule:        "0 18 * * *",
		TargetEngineVersion: "v1.0.0",
		Now:                 now,
	})
	require.NoError(t, err)
	require.NoError(t, gs.Save(context.Background(), g))
	return g
}

func newMembership(t *testing.T, id, gameID, userID, race, canon string) membership.Membership {
	t.Helper()
	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	rec, err := membership.New(membership.NewMembershipInput{
		MembershipID: common.MembershipID(id),
		GameID:       common.GameID(gameID),
		UserID:       userID,
		RaceName:     race,
		CanonicalKey: canon,
		Now:          now,
	})
	require.NoError(t, err)
	return rec
}

func TestSaveAndGet(t *testing.T) {
	ctx := context.Background()
	gs, ms := newStores(t)
	seedGame(t, gs, "game-001")

	rec := newMembership(t, "membership-001", "game-001", "user-a", "Pilot Alpha", "pilot-alpha")
	require.NoError(t, ms.Save(ctx, rec))

	got, err := ms.Get(ctx, rec.MembershipID)
	require.NoError(t, err)
	assert.Equal(t, rec.MembershipID, got.MembershipID)
	assert.Equal(t, "Pilot Alpha", got.RaceName)
	assert.Equal(t, "pilot-alpha", got.CanonicalKey)
	assert.Equal(t, membership.StatusActive, got.Status)
	assert.Nil(t, got.RemovedAt)
}

func TestSaveRejectsNonActive(t *testing.T) {
	ctx := context.Background()
	gs, ms := newStores(t)
	seedGame(t, gs, "game-001")

	rec := newMembership(t, "membership-001", "game-001", "user-a", "Pilot", "pilot")
	rec.Status = membership.StatusRemoved
	require.Error(t, ms.Save(ctx, rec))
}

func TestSaveDuplicateReturnsConflict(t *testing.T) {
	ctx := context.Background()
	gs, ms := newStores(t)
	seedGame(t, gs, "game-001")

	rec := newMembership(t, "membership-001", "game-001", "user-a", "Pilot", "pilot")
	require.NoError(t, ms.Save(ctx, rec))
	err := ms.Save(ctx, rec)
	require.ErrorIs(t, err, membership.ErrConflict)
}

func TestUpdateStatusToRemovedSetsRemovedAt(t *testing.T) {
	ctx := context.Background()
	gs, ms := newStores(t)
	seedGame(t, gs, "game-001")

	rec := newMembership(t, "membership-001", "game-001", "user-a", "Pilot", "pilot")
	require.NoError(t, ms.Save(ctx, rec))
	at := rec.JoinedAt.Add(time.Minute)
	require.NoError(t, ms.UpdateStatus(ctx, ports.UpdateMembershipStatusInput{
		MembershipID: rec.MembershipID,
		ExpectedFrom: membership.StatusActive,
		To:           membership.StatusRemoved,
		At:           at,
	}))
	got, err := ms.Get(ctx, rec.MembershipID)
	require.NoError(t, err)
	assert.Equal(t, membership.StatusRemoved, got.Status)
	require.NotNil(t, got.RemovedAt)
	assert.True(t, got.RemovedAt.Equal(at))
}

func TestUpdateStatusReturnsConflictOnExpectedFromMismatch(t *testing.T) {
	ctx := context.Background()
	gs, ms := newStores(t)
	seedGame(t, gs, "game-001")

	rec := newMembership(t, "membership-001", "game-001", "user-a", "Pilot", "pilot")
	require.NoError(t, ms.Save(ctx, rec))

	// Move the row out of `active` first; the next attempt's
	// `WHERE status = 'active'` then fails at the persistence layer even
	// though (active → blocked) is itself a valid transition in the domain
	// table.
	require.NoError(t, ms.UpdateStatus(ctx, ports.UpdateMembershipStatusInput{
		MembershipID: rec.MembershipID,
		ExpectedFrom: membership.StatusActive,
		To:           membership.StatusRemoved,
		At:           rec.JoinedAt.Add(time.Minute),
	}))
	err := ms.UpdateStatus(ctx, ports.UpdateMembershipStatusInput{
		MembershipID: rec.MembershipID,
		ExpectedFrom: membership.StatusActive,
		To:           membership.StatusBlocked,
		At:           rec.JoinedAt.Add(2 * time.Minute),
	})
	require.ErrorIs(t, err, membership.ErrConflict)
}

func TestUpdateStatusReturnsNotFoundForMissing(t *testing.T) {
	ctx := context.Background()
	_, ms := newStores(t)
	err := ms.UpdateStatus(ctx, ports.UpdateMembershipStatusInput{
		MembershipID: common.MembershipID("membership-missing"),
		ExpectedFrom: membership.StatusActive,
		To:           membership.StatusRemoved,
		At:           time.Now().UTC(),
	})
	require.ErrorIs(t, err, membership.ErrNotFound)
}

func TestDeleteRemovesRecord(t *testing.T) {
	ctx := context.Background()
	gs, ms := newStores(t)
	seedGame(t, gs, "game-001")

	rec := newMembership(t, "membership-001", "game-001", "user-a", "Pilot", "pilot")
	require.NoError(t, ms.Save(ctx, rec))
	require.NoError(t, ms.Delete(ctx, rec.MembershipID))

	_, err := ms.Get(ctx, rec.MembershipID)
	require.ErrorIs(t, err, membership.ErrNotFound)
}

func TestDeleteReturnsNotFoundForMissing(t *testing.T) {
	ctx := context.Background()
	_, ms := newStores(t)
	err := ms.Delete(ctx, common.MembershipID("membership-missing"))
	require.ErrorIs(t, err, membership.ErrNotFound)
}

func TestGetByGameAndUser(t *testing.T) {
	ctx := context.Background()
	gs, ms := newStores(t)
	seedGame(t, gs, "game-001")
	seedGame(t, gs, "game-002")

	require.NoError(t, ms.Save(ctx, newMembership(t, "membership-001", "game-001", "user-a", "P-a", "p-a")))
	require.NoError(t, ms.Save(ctx, newMembership(t, "membership-002", "game-001", "user-b", "P-b", "p-b")))
	require.NoError(t, ms.Save(ctx, newMembership(t, "membership-003", "game-002", "user-a", "P-a2", "p-a2")))

	g1, err := ms.GetByGame(ctx, common.GameID("game-001"))
	require.NoError(t, err)
	assert.Len(t, g1, 2)

	userA, err := ms.GetByUser(ctx, "user-a")
	require.NoError(t, err)
	assert.Len(t, userA, 2)
}

func TestGetMissingReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	_, ms := newStores(t)
	_, err := ms.Get(ctx, common.MembershipID("membership-missing"))
	require.ErrorIs(t, err, membership.ErrNotFound)
}
@@ -0,0 +1,169 @@
-- +goose Up
-- Initial Game Lobby PostgreSQL schema.
--
-- Five tables cover the durable surface of the service:
--   * games, applications, invites, memberships — the four core
--     enrollment entities;
--   * race_names — the Race Name Directory, holding the registered /
--     reservation / pending_registration bindings keyed by canonical key.
--
-- Schema and the matching `lobbyservice` role are provisioned outside
-- this script (in tests via
-- integration/internal/harness/postgres_container.go::EnsureRoleAndSchema;
-- in production via an ops init script). This migration runs as the
-- schema owner with `search_path=lobby` and only contains DDL for the
-- service-owned tables and indexes.

-- games holds one durable record per platform game session. The status +
-- created_at index serves the listing/scheduler queries that previously
-- read `lobby:games_by_status:*`. The partial owner index serves the
-- per-owner listings used by user-lifecycle cascade and "my games"
-- listings; public games carry an empty owner_user_id and never enter
-- the index.
CREATE TABLE games (
    game_id               text PRIMARY KEY,
    game_name             text NOT NULL,
    description           text NOT NULL DEFAULT '',
    game_type             text NOT NULL,
    owner_user_id         text NOT NULL DEFAULT '',
    status                text NOT NULL,
    min_players           integer NOT NULL,
    max_players           integer NOT NULL,
    start_gap_hours       integer NOT NULL,
    start_gap_players     integer NOT NULL,
    enrollment_ends_at    timestamptz NOT NULL,
    turn_schedule         text NOT NULL,
    target_engine_version text NOT NULL,
    created_at            timestamptz NOT NULL,
    updated_at            timestamptz NOT NULL,
    started_at            timestamptz,
    finished_at           timestamptz,
    runtime_snapshot      jsonb NOT NULL DEFAULT '{}'::jsonb,
    runtime_binding       jsonb
);

CREATE INDEX games_status_created_idx
    ON games (status, created_at DESC, game_id DESC);

CREATE INDEX games_owner_idx
    ON games (owner_user_id) WHERE game_type = 'private';

-- applications carries one row per public-game enrollment request. The
-- partial UNIQUE on (applicant_user_id, game_id) WHERE status <> 'rejected'
-- replaces the Redis lookup key `lobby:user_game_application:*:*` and
-- enforces the single-active constraint at the database level. Rejected
-- applications are kept (one applicant may produce multiple rejected rows
-- before submitting a successful one).
CREATE TABLE applications (
    application_id    text PRIMARY KEY,
    game_id           text NOT NULL REFERENCES games(game_id) ON DELETE CASCADE,
    applicant_user_id text NOT NULL,
    race_name         text NOT NULL,
    status            text NOT NULL,
    created_at        timestamptz NOT NULL,
    decided_at        timestamptz
);

CREATE INDEX applications_game_idx ON applications (game_id);

CREATE INDEX applications_user_idx ON applications (applicant_user_id);

CREATE UNIQUE INDEX applications_active_per_user_game_uidx
    ON applications (applicant_user_id, game_id)
    WHERE status <> 'rejected';

-- invites carries one row per private-game invitation. race_name is empty
-- until the invite transitions to redeemed. The (status, expires_at) index
-- serves the enrollment-automation expiration sweep; the per-game,
-- per-invitee, and per-inviter indexes serve listing queries from the
-- service layer.
CREATE TABLE invites (
    invite_id       text PRIMARY KEY,
    game_id         text NOT NULL REFERENCES games(game_id) ON DELETE CASCADE,
    inviter_user_id text NOT NULL,
    invitee_user_id text NOT NULL,
    race_name       text NOT NULL DEFAULT '',
    status          text NOT NULL,
    created_at      timestamptz NOT NULL,
    expires_at      timestamptz NOT NULL,
    decided_at      timestamptz
);

CREATE INDEX invites_game_idx ON invites (game_id);
CREATE INDEX invites_invitee_idx ON invites (invitee_user_id);
CREATE INDEX invites_inviter_idx ON invites (inviter_user_id);
CREATE INDEX invites_status_expires_idx ON invites (status, expires_at);

-- memberships carries one row per platform roster entry. Both race_name
-- (original casing) and canonical_key are stored explicitly because
-- downstream readers (capability evaluation, cascade release) consume the
-- canonical form without re-deriving it from race_name. Race-name
-- uniqueness is enforced by the Race Name Directory (the race_names
-- table below) — this table intentionally has no unique constraint on
-- canonical_key.
CREATE TABLE memberships (
    membership_id text PRIMARY KEY,
    game_id       text NOT NULL REFERENCES games(game_id) ON DELETE CASCADE,
    user_id       text NOT NULL,
    race_name     text NOT NULL,
    canonical_key text NOT NULL,
    status        text NOT NULL,
    joined_at     timestamptz NOT NULL,
    removed_at    timestamptz
);

CREATE INDEX memberships_game_idx ON memberships (game_id);
CREATE INDEX memberships_user_idx ON memberships (user_id);

-- race_names is the durable Race Name Directory store. One row covers one
-- of three bindings on a canonical key: a registered name (one per
-- canonical_key, immutable holder), a per-game reservation, or a
-- pending_registration that is waiting on lobby.race_name.register inside
-- the eligible_until_ms window. The composite primary key (canonical_key,
-- game_id) lets the same user hold reservations for the same race name
-- across multiple active games concurrently, matching the behaviour the
-- shared port test suite (lobby/internal/ports/racenamedirtest) covers.
-- Registered rows store game_id = '' and keep the source game in
-- source_game_id so the per-canonical uniqueness rule expresses cleanly
-- as a partial UNIQUE index. Cross-user uniqueness on canonical_key is
-- enforced at write time inside transactions guarded by
-- pg_advisory_xact_lock(hashtextextended(canonical_key, 0)).
CREATE TABLE race_names (
    canonical_key     text NOT NULL,
    game_id           text NOT NULL DEFAULT '',
    holder_user_id    text NOT NULL,
    race_name         text NOT NULL,
    binding_kind      text NOT NULL,
    source_game_id    text NOT NULL DEFAULT '',
    reserved_at_ms    bigint NOT NULL DEFAULT 0,
    eligible_until_ms bigint,
    registered_at_ms  bigint,
    PRIMARY KEY (canonical_key, game_id),
    CONSTRAINT race_names_binding_kind_chk
        CHECK (binding_kind IN ('registered', 'reservation', 'pending_registration'))
);

-- Exactly one registered binding per canonical_key. Reservations and
-- pending_registration entries are differentiated by game_id within the
-- primary key.
CREATE UNIQUE INDEX race_names_registered_uidx
    ON race_names (canonical_key)
    WHERE binding_kind = 'registered';

-- Per-user listings used by ListRegistered / ListReservations /
-- ListPendingRegistrations.
CREATE INDEX race_names_holder_idx
    ON race_names (holder_user_id, binding_kind);

-- Pending-registration expiration scanner reads only the pending subset
-- ordered by eligible_until_ms.
CREATE INDEX race_names_pending_eligible_idx
    ON race_names (eligible_until_ms)
    WHERE binding_kind = 'pending_registration';

-- +goose Down
DROP TABLE IF EXISTS race_names;
DROP TABLE IF EXISTS memberships;
DROP TABLE IF EXISTS invites;
DROP TABLE IF EXISTS applications;
DROP TABLE IF EXISTS games;
@@ -0,0 +1,19 @@
// Package migrations exposes the embedded goose migration files used by
// Game Lobby Service to provision its `lobby` schema in PostgreSQL.
//
// The embedded filesystem is consumed by `pkg/postgres.RunMigrations` during
// lobby-service startup and by `cmd/jetgen` when regenerating the
// `internal/adapters/postgres/jet/` code against a transient PostgreSQL
// instance.
package migrations

import "embed"

//go:embed *.sql
var fs embed.FS

// FS returns the embedded filesystem containing every numbered goose
// migration shipped with Game Lobby Service.
func FS() embed.FS {
	return fs
}
File diff suppressed because it is too large
@@ -0,0 +1,193 @@
package racenamedir_test

import (
	"context"
	"database/sql"
	"strconv"
	"testing"
	"time"

	"galaxy/lobby/internal/adapters/postgres/internal/pgtest"
	"galaxy/lobby/internal/adapters/postgres/racenamedir"
	"galaxy/lobby/internal/domain/racename"
	"galaxy/lobby/internal/ports"
	"galaxy/lobby/internal/ports/racenamedirtest"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// TestMain wires the per-package PostgreSQL container shared by every
// store test in this module.
func TestMain(m *testing.M) { pgtest.RunMain(m) }

// newDirectory builds one Race Name Directory adapter against a freshly
// truncated lobby schema. now selects between the deterministic clock the
// shared suite supplies and the default time.Now.
func newDirectory(t *testing.T, now func() time.Time) *racenamedir.Directory {
	t.Helper()
	pgtest.TruncateAll(t)
	policy, err := racename.NewPolicy()
	require.NoError(t, err)
	cfg := racenamedir.Config{
		DB:               pgtest.Ensure(t).Pool(),
		OperationTimeout: pgtest.OperationTimeout,
		Policy:           policy,
	}
	if now != nil {
		cfg.Clock = now
	}
	directory, err := racenamedir.New(cfg)
	require.NoError(t, err)
	return directory
}

// TestRaceNameDirectoryContract runs the shared behavioural suite that
// every ports.RaceNameDirectory implementation must pass.
func TestRaceNameDirectoryContract(t *testing.T) {
	racenamedirtest.Run(t, func(now func() time.Time) ports.RaceNameDirectory {
		return newDirectory(t, now)
	})
}

func TestNewRejectsNilDB(t *testing.T) {
	policy, err := racename.NewPolicy()
	require.NoError(t, err)

	_, err = racenamedir.New(racenamedir.Config{
		OperationTimeout: pgtest.OperationTimeout,
		Policy:           policy,
	})
	require.Error(t, err)
}

func TestNewRejectsNilPolicy(t *testing.T) {
	_, err := racenamedir.New(racenamedir.Config{
		DB:               pgtest.Ensure(t).Pool(),
		OperationTimeout: pgtest.OperationTimeout,
	})
	require.Error(t, err)
}

func TestNewRejectsNonPositiveTimeout(t *testing.T) {
	policy, err := racename.NewPolicy()
	require.NoError(t, err)

	_, err = racenamedir.New(racenamedir.Config{
		DB:     pgtest.Ensure(t).Pool(),
		Policy: policy,
	})
	require.Error(t, err)
}

// TestRegisteredRowShape validates the on-disk shape of a registered
// binding so future schema migrations have an explicit anchor.
func TestRegisteredRowShape(t *testing.T) {
	now := time.Date(2026, 5, 1, 12, 0, 0, 0, time.UTC)
	directory := newDirectory(t, func() time.Time { return now })
	ctx := context.Background()

	const (
		gameID   = "game-shape-1"
		userID   = "user-shape-1"
		raceName = "PilotNova"
	)

	require.NoError(t, directory.Reserve(ctx, gameID, userID, raceName))
	require.NoError(t, directory.MarkPendingRegistration(ctx, gameID, userID, raceName, now.Add(time.Hour)))
	require.NoError(t, directory.Register(ctx, gameID, userID, raceName))

	pool := pgtest.Ensure(t).Pool()

	canonical, err := directory.Canonicalize(raceName)
	require.NoError(t, err)

	row := pool.QueryRowContext(ctx, `
		SELECT canonical_key, game_id, holder_user_id, race_name, binding_kind,
		       source_game_id, reserved_at_ms, eligible_until_ms, registered_at_ms
		FROM race_names
		WHERE canonical_key = $1
	`, canonical)

	var (
		canonicalKey   string
		storedGameID   string
		holderUserID   string
		raceNameCol    string
		bindingKind    string
		sourceGameID   string
		reservedAtMs   int64
		eligibleAtMs   sql.NullInt64
		registeredAtMs sql.NullInt64
	)
	require.NoError(t, row.Scan(
		&canonicalKey,
		&storedGameID,
		&holderUserID,
		&raceNameCol,
		&bindingKind,
		&sourceGameID,
		&reservedAtMs,
		&eligibleAtMs,
		&registeredAtMs,
	))

	assert.Equal(t, canonical, canonicalKey)
	assert.Equal(t, "", storedGameID, "registered rows store game_id = ''")
	assert.Equal(t, userID, holderUserID)
	assert.Equal(t, raceName, raceNameCol)
	assert.Equal(t, ports.KindRegistered, bindingKind)
	assert.Equal(t, gameID, sourceGameID)
	assert.True(t, registeredAtMs.Valid)
	assert.Equal(t, now.UTC().UnixMilli(), registeredAtMs.Int64)
	assert.False(t, eligibleAtMs.Valid, "registered rows null out eligible_until_ms")
	assert.Equal(t, now.UTC().UnixMilli(), reservedAtMs, "reserved_at_ms is preserved across promote+register")
}

// TestRegisteredPartialUniqueIndex confirms that a second user cannot
// register the same canonical key, even when they own a separate
// reservation row at a different (canonical_key, game_id) PK.
func TestRegisteredPartialUniqueIndex(t *testing.T) {
	now := time.Date(2026, 5, 1, 12, 0, 0, 0, time.UTC)
	directory := newDirectory(t, func() time.Time { return now })
	ctx := context.Background()

	const (
		raceName = "PilotNova"
		gameA    = "game-unique-a"
		userA    = "user-unique-a"
		userB    = "user-unique-b"
	)

	require.NoError(t, directory.Reserve(ctx, gameA, userA, raceName))
	require.NoError(t, directory.MarkPendingRegistration(ctx, gameA, userA, raceName, now.Add(time.Hour)))
	require.NoError(t, directory.Register(ctx, gameA, userA, raceName))

	err := directory.Reserve(ctx, gameA, userB, raceName)
	require.ErrorIs(t, err, ports.ErrNameTaken)
}

// TestExpirePendingRegistrationsBatched seeds three pending entries with
// distinct canonical keys and asserts all of them are released by a single
// pass even when the worker iterates via separate advisory locks.
func TestExpirePendingRegistrationsBatched(t *testing.T) {
	now := time.Date(2026, 5, 1, 12, 0, 0, 0, time.UTC)
	directory := newDirectory(t, func() time.Time { return now })
	ctx := context.Background()

	for index := range 3 {
		gameID := "game-batch-" + strconv.Itoa(index)
		userID := "user-batch-" + strconv.Itoa(index)
		raceName := "PilotBatch" + strconv.Itoa(index)
		require.NoError(t, directory.Reserve(ctx, gameID, userID, raceName))
		require.NoError(t, directory.MarkPendingRegistration(ctx, gameID, userID, raceName, now.Add(time.Hour)))
	}

	expired, err := directory.ExpirePendingRegistrations(ctx, now.Add(2*time.Hour))
	require.NoError(t, err)
	require.Len(t, expired, 3)

	expired, err = directory.ExpirePendingRegistrations(ctx, now.Add(2*time.Hour))
	require.NoError(t, err)
	assert.Empty(t, expired, "second pass releases nothing")
}
@@ -1,277 +0,0 @@
package redisstate

import (
	"context"
	"errors"
	"fmt"
	"strings"

	"galaxy/lobby/internal/domain/application"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/ports"

	"github.com/redis/go-redis/v9"
)

// ApplicationStore provides Redis-backed durable storage for application
// records.
type ApplicationStore struct {
	client *redis.Client
	keys   Keyspace
}

// NewApplicationStore constructs one Redis-backed application store. It
// returns an error when client is nil.
func NewApplicationStore(client *redis.Client) (*ApplicationStore, error) {
	if client == nil {
		return nil, errors.New("new application store: nil redis client")
	}

	return &ApplicationStore{
		client: client,
		keys:   Keyspace{},
	}, nil
}

// Save persists a new submitted application record and enforces the
// single-active (non-rejected) constraint per (applicant, game) pair.
func (store *ApplicationStore) Save(ctx context.Context, record application.Application) error {
	if store == nil || store.client == nil {
		return errors.New("save application: nil store")
	}
	if ctx == nil {
		return errors.New("save application: nil context")
	}
	if err := record.Validate(); err != nil {
		return fmt.Errorf("save application: %w", err)
	}
	if record.Status != application.StatusSubmitted {
		return fmt.Errorf(
			"save application: status must be %q, got %q",
			application.StatusSubmitted, record.Status,
		)
	}

	payload, err := MarshalApplication(record)
	if err != nil {
		return fmt.Errorf("save application: %w", err)
	}

	primaryKey := store.keys.Application(record.ApplicationID)
	activeLookupKey := store.keys.UserGameApplication(record.ApplicantUserID, record.GameID)
	gameIndexKey := store.keys.ApplicationsByGame(record.GameID)
	userIndexKey := store.keys.ApplicationsByUser(record.ApplicantUserID)
	member := record.ApplicationID.String()

	watchErr := store.client.Watch(ctx, func(tx *redis.Tx) error {
		existingPrimary, getErr := tx.Exists(ctx, primaryKey).Result()
		if getErr != nil {
			return fmt.Errorf("save application: %w", getErr)
		}
		if existingPrimary != 0 {
			return fmt.Errorf("save application: %w", application.ErrConflict)
		}

		existingActive, getErr := tx.Exists(ctx, activeLookupKey).Result()
		if getErr != nil {
			return fmt.Errorf("save application: %w", getErr)
		}
		if existingActive != 0 {
			return fmt.Errorf("save application: %w", application.ErrConflict)
		}

		_, err := tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
			pipe.Set(ctx, primaryKey, payload, ApplicationRecordTTL)
			pipe.Set(ctx, activeLookupKey, member, ApplicationRecordTTL)
			pipe.SAdd(ctx, gameIndexKey, member)
			pipe.SAdd(ctx, userIndexKey, member)
			return nil
		})
		return err
	}, primaryKey, activeLookupKey)

	switch {
	case errors.Is(watchErr, redis.TxFailedErr):
		return fmt.Errorf("save application: %w", application.ErrConflict)
	case watchErr != nil:
		return watchErr
	default:
		return nil
	}
}

// Get returns the record identified by applicationID.
func (store *ApplicationStore) Get(ctx context.Context, applicationID common.ApplicationID) (application.Application, error) {
	if store == nil || store.client == nil {
		return application.Application{}, errors.New("get application: nil store")
	}
	if ctx == nil {
		return application.Application{}, errors.New("get application: nil context")
	}
	if err := applicationID.Validate(); err != nil {
		return application.Application{}, fmt.Errorf("get application: %w", err)
	}

	payload, err := store.client.Get(ctx, store.keys.Application(applicationID)).Bytes()
	switch {
	case errors.Is(err, redis.Nil):
		return application.Application{}, application.ErrNotFound
	case err != nil:
		return application.Application{}, fmt.Errorf("get application: %w", err)
	}

	record, err := UnmarshalApplication(payload)
	if err != nil {
		return application.Application{}, fmt.Errorf("get application: %w", err)
	}
	return record, nil
}

// GetByGame returns every application attached to gameID.
func (store *ApplicationStore) GetByGame(ctx context.Context, gameID common.GameID) ([]application.Application, error) {
	if store == nil || store.client == nil {
		return nil, errors.New("get applications by game: nil store")
	}
	if ctx == nil {
		return nil, errors.New("get applications by game: nil context")
	}
	if err := gameID.Validate(); err != nil {
		return nil, fmt.Errorf("get applications by game: %w", err)
	}

	return store.loadApplicationsBySet(ctx,
		"get applications by game",
		store.keys.ApplicationsByGame(gameID),
	)
}

// GetByUser returns every application submitted by applicantUserID.
func (store *ApplicationStore) GetByUser(ctx context.Context, applicantUserID string) ([]application.Application, error) {
	if store == nil || store.client == nil {
		return nil, errors.New("get applications by user: nil store")
	}
	if ctx == nil {
		return nil, errors.New("get applications by user: nil context")
	}
	trimmed := strings.TrimSpace(applicantUserID)
	if trimmed == "" {
		return nil, errors.New("get applications by user: applicant user id must not be empty")
	}

	return store.loadApplicationsBySet(ctx,
		"get applications by user",
		store.keys.ApplicationsByUser(trimmed),
	)
}

// loadApplicationsBySet materializes applications whose ids are stored in
// setKey. Stale set members (primary key removed out-of-band) are dropped
// silently, mirroring gamestore.GetByStatus.
func (store *ApplicationStore) loadApplicationsBySet(ctx context.Context, operation, setKey string) ([]application.Application, error) {
	members, err := store.client.SMembers(ctx, setKey).Result()
	if err != nil {
		return nil, fmt.Errorf("%s: %w", operation, err)
	}
	if len(members) == 0 {
		return nil, nil
	}

	primaryKeys := make([]string, len(members))
	for index, member := range members {
		primaryKeys[index] = store.keys.Application(common.ApplicationID(member))
	}

	payloads, err := store.client.MGet(ctx, primaryKeys...).Result()
	if err != nil {
		return nil, fmt.Errorf("%s: %w", operation, err)
	}

	records := make([]application.Application, 0, len(payloads))
	for _, entry := range payloads {
		if entry == nil {
			continue
		}
		raw, ok := entry.(string)
		if !ok {
			return nil, fmt.Errorf("%s: unexpected payload type %T", operation, entry)
		}
		record, err := UnmarshalApplication([]byte(raw))
		if err != nil {
			return nil, fmt.Errorf("%s: %w", operation, err)
		}
		records = append(records, record)
	}

	return records, nil
}

// UpdateStatus applies one status transition in a compare-and-swap fashion.
func (store *ApplicationStore) UpdateStatus(ctx context.Context, input ports.UpdateApplicationStatusInput) error {
	if store == nil || store.client == nil {
		return errors.New("update application status: nil store")
	}
	if ctx == nil {
		return errors.New("update application status: nil context")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update application status: %w", err)
	}

	if err := application.Transition(input.ExpectedFrom, input.To); err != nil {
		return err
	}

	primaryKey := store.keys.Application(input.ApplicationID)
	at := input.At.UTC()

	watchErr := store.client.Watch(ctx, func(tx *redis.Tx) error {
		payload, getErr := tx.Get(ctx, primaryKey).Bytes()
		switch {
		case errors.Is(getErr, redis.Nil):
			return application.ErrNotFound
		case getErr != nil:
			return fmt.Errorf("update application status: %w", getErr)
		}

		existing, err := UnmarshalApplication(payload)
		if err != nil {
			return fmt.Errorf("update application status: %w", err)
		}
		if existing.Status != input.ExpectedFrom {
			return fmt.Errorf("update application status: %w", application.ErrConflict)
		}

		existing.Status = input.To
		decidedAt := at
		existing.DecidedAt = &decidedAt

		encoded, err := MarshalApplication(existing)
		if err != nil {
			return fmt.Errorf("update application status: %w", err)
		}

		activeLookupKey := store.keys.UserGameApplication(existing.ApplicantUserID, existing.GameID)

		_, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
			pipe.Set(ctx, primaryKey, encoded, ApplicationRecordTTL)
			if input.To == application.StatusRejected {
				pipe.Del(ctx, activeLookupKey)
			}
			return nil
		})
		return err
	}, primaryKey)

	switch {
	case errors.Is(watchErr, redis.TxFailedErr):
		return fmt.Errorf("update application status: %w", application.ErrConflict)
	case watchErr != nil:
		return watchErr
	default:
		return nil
	}
}

// Ensure ApplicationStore satisfies the ports.ApplicationStore interface
// at compile time.
var _ ports.ApplicationStore = (*ApplicationStore)(nil)
@@ -1,360 +0,0 @@
package redisstate_test

import (
	"context"
	"errors"
	"sort"
	"sync"
	"sync/atomic"
	"testing"
	"time"

	"galaxy/lobby/internal/adapters/redisstate"
	"galaxy/lobby/internal/domain/application"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/ports"

	"github.com/alicebob/miniredis/v2"
	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func newApplicationTestStore(t *testing.T) (*redisstate.ApplicationStore, *miniredis.Miniredis, *redis.Client) {
	t.Helper()

	server := miniredis.RunT(t)
	client := redis.NewClient(&redis.Options{Addr: server.Addr()})
	t.Cleanup(func() {
		_ = client.Close()
	})

	store, err := redisstate.NewApplicationStore(client)
	require.NoError(t, err)

	return store, server, client
}

func fixtureApplication(t *testing.T, id common.ApplicationID, userID string, gameID common.GameID) application.Application {
	t.Helper()

	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	record, err := application.New(application.NewApplicationInput{
		ApplicationID:   id,
		GameID:          gameID,
		ApplicantUserID: userID,
		RaceName:        "Spring Racer",
		Now:             now,
	})
	require.NoError(t, err)
	return record
}

func TestNewApplicationStoreRejectsNilClient(t *testing.T) {
	_, err := redisstate.NewApplicationStore(nil)
	require.Error(t, err)
}

func TestApplicationStoreSaveAndGet(t *testing.T) {
	ctx := context.Background()
	store, _, client := newApplicationTestStore(t)

	record := fixtureApplication(t, "application-a", "user-1", "game-1")
	require.NoError(t, store.Save(ctx, record))

	got, err := store.Get(ctx, record.ApplicationID)
	require.NoError(t, err)
	assert.Equal(t, record.ApplicationID, got.ApplicationID)
	assert.Equal(t, record.GameID, got.GameID)
	assert.Equal(t, record.ApplicantUserID, got.ApplicantUserID)
	assert.Equal(t, record.RaceName, got.RaceName)
	assert.Equal(t, application.StatusSubmitted, got.Status)
	assert.Nil(t, got.DecidedAt)

	byGame, err := client.SMembers(ctx, "lobby:game_applications:"+base64URL(record.GameID.String())).Result()
	require.NoError(t, err)
	assert.ElementsMatch(t, []string{record.ApplicationID.String()}, byGame)

	byUser, err := client.SMembers(ctx, "lobby:user_applications:"+base64URL(record.ApplicantUserID)).Result()
	require.NoError(t, err)
	assert.ElementsMatch(t, []string{record.ApplicationID.String()}, byUser)

	active, err := client.Get(ctx,
		"lobby:user_game_application:"+base64URL(record.ApplicantUserID)+":"+base64URL(record.GameID.String()),
	).Result()
	require.NoError(t, err)
	assert.Equal(t, record.ApplicationID.String(), active)
}

func TestApplicationStoreGetReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newApplicationTestStore(t)

	_, err := store.Get(ctx, common.ApplicationID("application-missing"))
	require.ErrorIs(t, err, application.ErrNotFound)
}

func TestApplicationStoreSaveRejectsNonSubmitted(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newApplicationTestStore(t)

	record := fixtureApplication(t, "application-a", "user-1", "game-1")
	record.Status = application.StatusApproved
	decidedAt := record.CreatedAt.Add(time.Minute)
	record.DecidedAt = &decidedAt

	err := store.Save(ctx, record)
	require.Error(t, err)
	assert.False(t, errors.Is(err, application.ErrConflict))
}

func TestApplicationStoreSaveRejectsSecondActiveForSameUserGame(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newApplicationTestStore(t)

	first := fixtureApplication(t, "application-a", "user-1", "game-1")
	require.NoError(t, store.Save(ctx, first))

	second := fixtureApplication(t, "application-b", "user-1", "game-1")
	err := store.Save(ctx, second)
	require.Error(t, err)
	assert.True(t, errors.Is(err, application.ErrConflict))

	_, err = store.Get(ctx, second.ApplicationID)
	require.ErrorIs(t, err, application.ErrNotFound)
}

func TestApplicationStoreSaveRejectsDuplicateApplicationID(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newApplicationTestStore(t)

	first := fixtureApplication(t, "application-a", "user-1", "game-1")
	require.NoError(t, store.Save(ctx, first))

	err := store.Save(ctx, first)
	require.Error(t, err)
	assert.True(t, errors.Is(err, application.ErrConflict))
}

func TestApplicationStoreSaveAllowsSameUserDifferentGame(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newApplicationTestStore(t)

	first := fixtureApplication(t, "application-a", "user-1", "game-1")
	second := fixtureApplication(t, "application-b", "user-1", "game-2")

	require.NoError(t, store.Save(ctx, first))
	require.NoError(t, store.Save(ctx, second))

	byUser, err := store.GetByUser(ctx, "user-1")
	require.NoError(t, err)
	require.Len(t, byUser, 2)
}

func TestApplicationStoreUpdateStatusApproveKeepsActiveKey(t *testing.T) {
	ctx := context.Background()
	store, _, client := newApplicationTestStore(t)

	record := fixtureApplication(t, "application-a", "user-1", "game-1")
	require.NoError(t, store.Save(ctx, record))

	at := record.CreatedAt.Add(time.Hour)
	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateApplicationStatusInput{
		ApplicationID: record.ApplicationID,
		ExpectedFrom:  application.StatusSubmitted,
		To:            application.StatusApproved,
		At:            at,
	}))

	got, err := store.Get(ctx, record.ApplicationID)
	require.NoError(t, err)
	assert.Equal(t, application.StatusApproved, got.Status)
	require.NotNil(t, got.DecidedAt)
	assert.True(t, got.DecidedAt.Equal(at.UTC()))

	activeKey := "lobby:user_game_application:" + base64URL(record.ApplicantUserID) + ":" + base64URL(record.GameID.String())
	stored, err := client.Get(ctx, activeKey).Result()
	require.NoError(t, err)
	assert.Equal(t, record.ApplicationID.String(), stored)
}

func TestApplicationStoreUpdateStatusRejectClearsActiveKey(t *testing.T) {
	ctx := context.Background()
	store, _, client := newApplicationTestStore(t)

	record := fixtureApplication(t, "application-a", "user-1", "game-1")
	require.NoError(t, store.Save(ctx, record))

	at := record.CreatedAt.Add(time.Hour)
	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateApplicationStatusInput{
		ApplicationID: record.ApplicationID,
		ExpectedFrom:  application.StatusSubmitted,
		To:            application.StatusRejected,
		At:            at,
	}))

	got, err := store.Get(ctx, record.ApplicationID)
	require.NoError(t, err)
	assert.Equal(t, application.StatusRejected, got.Status)
	require.NotNil(t, got.DecidedAt)

	activeKey := "lobby:user_game_application:" + base64URL(record.ApplicantUserID) + ":" + base64URL(record.GameID.String())
	_, err = client.Get(ctx, activeKey).Result()
	require.ErrorIs(t, err, redis.Nil)

	// After rejection, the same user may re-apply to the same game.
	reapplied := fixtureApplication(t, "application-b", "user-1", "game-1")
	require.NoError(t, store.Save(ctx, reapplied))
}

func TestApplicationStoreUpdateStatusRejectsInvalidTransitionWithoutMutation(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newApplicationTestStore(t)

	record := fixtureApplication(t, "application-a", "user-1", "game-1")
	require.NoError(t, store.Save(ctx, record))

	err := store.UpdateStatus(ctx, ports.UpdateApplicationStatusInput{
		ApplicationID: record.ApplicationID,
		ExpectedFrom:  application.StatusApproved,
		To:            application.StatusSubmitted,
		At:            record.CreatedAt.Add(time.Minute),
	})
	require.Error(t, err)
	assert.True(t, errors.Is(err, application.ErrInvalidTransition))

	got, err := store.Get(ctx, record.ApplicationID)
	require.NoError(t, err)
	assert.Equal(t, application.StatusSubmitted, got.Status)
	assert.Nil(t, got.DecidedAt)
}

func TestApplicationStoreUpdateStatusReturnsConflictOnExpectedFromMismatch(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newApplicationTestStore(t)

	record := fixtureApplication(t, "application-a", "user-1", "game-1")
	require.NoError(t, store.Save(ctx, record))

	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateApplicationStatusInput{
		ApplicationID: record.ApplicationID,
		ExpectedFrom:  application.StatusSubmitted,
		To:            application.StatusApproved,
		At:            record.CreatedAt.Add(time.Minute),
	}))

	err := store.UpdateStatus(ctx, ports.UpdateApplicationStatusInput{
		ApplicationID: record.ApplicationID,
		ExpectedFrom:  application.StatusSubmitted,
		To:            application.StatusRejected,
		At:            record.CreatedAt.Add(2 * time.Minute),
	})
	require.Error(t, err)
	assert.True(t, errors.Is(err, application.ErrConflict))
}

func TestApplicationStoreUpdateStatusReturnsNotFoundForMissingRecord(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newApplicationTestStore(t)

	err := store.UpdateStatus(ctx, ports.UpdateApplicationStatusInput{
		ApplicationID: common.ApplicationID("application-missing"),
		ExpectedFrom:  application.StatusSubmitted,
		To:            application.StatusApproved,
		At:            time.Now().UTC(),
	})
	require.ErrorIs(t, err, application.ErrNotFound)
}

func TestApplicationStoreGetByGameAndByUser(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newApplicationTestStore(t)

	a1 := fixtureApplication(t, "application-a1", "user-1", "game-1")
	a2 := fixtureApplication(t, "application-a2", "user-2", "game-1")
	a3 := fixtureApplication(t, "application-a3", "user-1", "game-2")

	for _, record := range []application.Application{a1, a2, a3} {
		require.NoError(t, store.Save(ctx, record))
	}

	byGame1, err := store.GetByGame(ctx, "game-1")
	require.NoError(t, err)
	require.Len(t, byGame1, 2)

	byUser1, err := store.GetByUser(ctx, "user-1")
	require.NoError(t, err)
	require.Len(t, byUser1, 2)

	ids := collectApplicationIDs(byUser1)
	sort.Strings(ids)
	assert.Equal(t, []string{"application-a1", "application-a3"}, ids)

	byUser3, err := store.GetByUser(ctx, "user-missing")
	require.NoError(t, err)
	assert.Empty(t, byUser3)
}

func TestApplicationStoreGetByGameDropsStaleIndexEntries(t *testing.T) {
	ctx := context.Background()
	store, server, _ := newApplicationTestStore(t)

	record := fixtureApplication(t, "application-a", "user-1", "game-1")
	require.NoError(t, store.Save(ctx, record))

	server.Del("lobby:applications:" + base64URL(record.ApplicationID.String()))

	records, err := store.GetByGame(ctx, record.GameID)
	require.NoError(t, err)
	assert.Empty(t, records)
}

func TestApplicationStoreConcurrentSaveHasExactlyOneWinner(t *testing.T) {
	ctx := context.Background()
	_, _, client := newApplicationTestStore(t)

	storeA, err := redisstate.NewApplicationStore(client)
	require.NoError(t, err)
	storeB, err := redisstate.NewApplicationStore(client)
	require.NoError(t, err)

	recordA := fixtureApplication(t, "application-a", "user-1", "game-1")
	recordB := fixtureApplication(t, "application-b", "user-1", "game-1")

	var (
		wg        sync.WaitGroup
		successes atomic.Int32
		conflicts atomic.Int32
		others    atomic.Int32
	)

	apply := func(target *redisstate.ApplicationStore, record application.Application) {
		defer wg.Done()
		err := target.Save(ctx, record)
		switch {
		case err == nil:
			successes.Add(1)
		case errors.Is(err, application.ErrConflict):
			conflicts.Add(1)
		default:
			others.Add(1)
		}
	}

	wg.Add(2)
	go apply(storeA, recordA)
	go apply(storeB, recordB)
	wg.Wait()

	assert.Equal(t, int32(0), others.Load(), "unexpected non-conflict error")
	assert.Equal(t, int32(1), successes.Load(), "expected exactly one success")
	assert.Equal(t, int32(1), conflicts.Load(), "expected exactly one conflict")
}

func collectApplicationIDs(records []application.Application) []string {
	ids := make([]string, len(records))
	for index, record := range records {
		ids[index] = record.ApplicationID.String()
	}
	return ids
}
@@ -1,172 +0,0 @@
package redisstate

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"time"

	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/game"
)

// gameRecord stores the strict Redis JSON shape used for one game record.
type gameRecord struct {
	GameID              string                `json:"game_id"`
	GameName            string                `json:"game_name"`
	Description         string                `json:"description,omitempty"`
	GameType            game.GameType         `json:"game_type"`
	OwnerUserID         string                `json:"owner_user_id,omitempty"`
	Status              game.Status           `json:"status"`
	MinPlayers          int                   `json:"min_players"`
	MaxPlayers          int                   `json:"max_players"`
	StartGapHours       int                   `json:"start_gap_hours"`
	StartGapPlayers     int                   `json:"start_gap_players"`
	EnrollmentEndsAtSec int64                 `json:"enrollment_ends_at_sec"`
	TurnSchedule        string                `json:"turn_schedule"`
	TargetEngineVersion string                `json:"target_engine_version"`
	CreatedAtMS         int64                 `json:"created_at_ms"`
	UpdatedAtMS         int64                 `json:"updated_at_ms"`
	StartedAtMS         *int64                `json:"started_at_ms,omitempty"`
	FinishedAtMS        *int64                `json:"finished_at_ms,omitempty"`
	CurrentTurn         int                   `json:"current_turn"`
	RuntimeStatus       string                `json:"runtime_status,omitempty"`
	EngineHealthSummary string                `json:"engine_health_summary,omitempty"`
	RuntimeBinding      *runtimeBindingRecord `json:"runtime_binding,omitempty"`
}

// runtimeBindingRecord stores the strict Redis JSON shape used for the
// optional runtime binding object on one game record.
type runtimeBindingRecord struct {
	ContainerID    string `json:"container_id"`
	EngineEndpoint string `json:"engine_endpoint"`
	RuntimeJobID   string `json:"runtime_job_id"`
	BoundAtMS      int64  `json:"bound_at_ms"`
}

// MarshalGame encodes record into the strict Redis JSON shape used for
// game records. The record is re-validated before marshalling.
func MarshalGame(record game.Game) ([]byte, error) {
	if err := record.Validate(); err != nil {
		return nil, fmt.Errorf("marshal redis game record: %w", err)
	}

	stored := gameRecord{
		GameID:              record.GameID.String(),
		GameName:            record.GameName,
		Description:         record.Description,
		GameType:            record.GameType,
		OwnerUserID:         record.OwnerUserID,
		Status:              record.Status,
		MinPlayers:          record.MinPlayers,
		MaxPlayers:          record.MaxPlayers,
		StartGapHours:       record.StartGapHours,
		StartGapPlayers:     record.StartGapPlayers,
		EnrollmentEndsAtSec: record.EnrollmentEndsAt.UTC().Unix(),
		TurnSchedule:        record.TurnSchedule,
		TargetEngineVersion: record.TargetEngineVersion,
		CreatedAtMS:         record.CreatedAt.UTC().UnixMilli(),
		UpdatedAtMS:         record.UpdatedAt.UTC().UnixMilli(),
		StartedAtMS:         optionalUnixMilli(record.StartedAt),
		FinishedAtMS:        optionalUnixMilli(record.FinishedAt),
		CurrentTurn:         record.RuntimeSnapshot.CurrentTurn,
		RuntimeStatus:       record.RuntimeSnapshot.RuntimeStatus,
		EngineHealthSummary: record.RuntimeSnapshot.EngineHealthSummary,
	}
	if record.RuntimeBinding != nil {
		stored.RuntimeBinding = &runtimeBindingRecord{
			ContainerID:    record.RuntimeBinding.ContainerID,
			EngineEndpoint: record.RuntimeBinding.EngineEndpoint,
			RuntimeJobID:   record.RuntimeBinding.RuntimeJobID,
			BoundAtMS:      record.RuntimeBinding.BoundAt.UTC().UnixMilli(),
		}
	}

	payload, err := json.Marshal(stored)
	if err != nil {
		return nil, fmt.Errorf("marshal redis game record: %w", err)
	}

	return payload, nil
}

// UnmarshalGame decodes payload from the strict Redis JSON shape used for
// game records. The decoded record is validated before returning.
func UnmarshalGame(payload []byte) (game.Game, error) {
	var stored gameRecord
	if err := decodeStrictJSON("decode redis game record", payload, &stored); err != nil {
		return game.Game{}, err
	}

	record := game.Game{
		GameID:              common.GameID(stored.GameID),
		GameName:            stored.GameName,
		Description:         stored.Description,
		GameType:            stored.GameType,
		OwnerUserID:         stored.OwnerUserID,
		Status:              stored.Status,
		MinPlayers:          stored.MinPlayers,
		MaxPlayers:          stored.MaxPlayers,
		StartGapHours:       stored.StartGapHours,
		StartGapPlayers:     stored.StartGapPlayers,
		EnrollmentEndsAt:    time.Unix(stored.EnrollmentEndsAtSec, 0).UTC(),
		TurnSchedule:        stored.TurnSchedule,
		TargetEngineVersion: stored.TargetEngineVersion,
		CreatedAt:           time.UnixMilli(stored.CreatedAtMS).UTC(),
		UpdatedAt:           time.UnixMilli(stored.UpdatedAtMS).UTC(),
		StartedAt:           inflateOptionalTime(stored.StartedAtMS),
		FinishedAt:          inflateOptionalTime(stored.FinishedAtMS),
		RuntimeSnapshot: game.RuntimeSnapshot{
			CurrentTurn:         stored.CurrentTurn,
			RuntimeStatus:       stored.RuntimeStatus,
			EngineHealthSummary: stored.EngineHealthSummary,
		},
	}
	if stored.RuntimeBinding != nil {
		record.RuntimeBinding = &game.RuntimeBinding{
			ContainerID:    stored.RuntimeBinding.ContainerID,
			EngineEndpoint: stored.RuntimeBinding.EngineEndpoint,
			RuntimeJobID:   stored.RuntimeBinding.RuntimeJobID,
			BoundAt:        time.UnixMilli(stored.RuntimeBinding.BoundAtMS).UTC(),
		}
	}
	if err := record.Validate(); err != nil {
		return game.Game{}, fmt.Errorf("decode redis game record: %w", err)
	}

	return record, nil
}

func decodeStrictJSON(operation string, payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()

	if err := decoder.Decode(target); err != nil {
		return fmt.Errorf("%s: %w", operation, err)
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return fmt.Errorf("%s: unexpected trailing JSON input", operation)
		}
		return fmt.Errorf("%s: %w", operation, err)
	}

	return nil
}

func optionalUnixMilli(value *time.Time) *int64 {
	if value == nil {
		return nil
	}
	milliseconds := value.UTC().UnixMilli()
	return &milliseconds
}

func inflateOptionalTime(value *int64) *time.Time {
	if value == nil {
		return nil
	}
	converted := time.UnixMilli(*value).UTC()
	return &converted
}
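`decodeStrictJSON` rejects both unknown fields and trailing input after the first JSON value. A standalone sketch of the same helper, runnable without the rest of the package (`decodeStrict` and `record` are local illustrative names):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
)

// decodeStrict mirrors decodeStrictJSON as a standalone sketch: unknown
// fields and trailing JSON input are both treated as errors.
func decodeStrict(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()
	if err := decoder.Decode(target); err != nil {
		return err
	}
	// A second Decode must hit io.EOF; anything else means trailing input.
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return fmt.Errorf("unexpected trailing JSON input")
		}
		return err
	}
	return nil
}

type record struct {
	GameID string `json:"game_id"`
}

func main() {
	var r record
	fmt.Println(decodeStrict([]byte(`{"game_id":"g1"}`), &r))           // <nil>
	fmt.Println(decodeStrict([]byte(`{"game_id":"g1","x":1}`), &r) != nil) // true: unknown field
	fmt.Println(decodeStrict([]byte(`{"game_id":"g1"} {}`), &r) != nil)    // true: trailing input
}
```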
@@ -1,73 +0,0 @@
package redisstate

import (
	"encoding/json"
	"fmt"
	"time"

	"galaxy/lobby/internal/domain/application"
	"galaxy/lobby/internal/domain/common"
)

// applicationRecord stores the strict Redis JSON shape used for one
// application record.
type applicationRecord struct {
	ApplicationID   string             `json:"application_id"`
	GameID          string             `json:"game_id"`
	ApplicantUserID string             `json:"applicant_user_id"`
	RaceName        string             `json:"race_name"`
	Status          application.Status `json:"status"`
	CreatedAtMS     int64              `json:"created_at_ms"`
	DecidedAtMS     *int64             `json:"decided_at_ms,omitempty"`
}

// MarshalApplication encodes record into the strict Redis JSON shape
// used for application records. The record is re-validated before
// marshalling.
func MarshalApplication(record application.Application) ([]byte, error) {
	if err := record.Validate(); err != nil {
		return nil, fmt.Errorf("marshal redis application record: %w", err)
	}

	stored := applicationRecord{
		ApplicationID:   record.ApplicationID.String(),
		GameID:          record.GameID.String(),
		ApplicantUserID: record.ApplicantUserID,
		RaceName:        record.RaceName,
		Status:          record.Status,
		CreatedAtMS:     record.CreatedAt.UTC().UnixMilli(),
		DecidedAtMS:     optionalUnixMilli(record.DecidedAt),
	}

	payload, err := json.Marshal(stored)
	if err != nil {
		return nil, fmt.Errorf("marshal redis application record: %w", err)
	}

	return payload, nil
}

// UnmarshalApplication decodes payload from the strict Redis JSON shape
// used for application records. The decoded record is validated before
// returning.
func UnmarshalApplication(payload []byte) (application.Application, error) {
	var stored applicationRecord
	if err := decodeStrictJSON("decode redis application record", payload, &stored); err != nil {
		return application.Application{}, err
	}

	record := application.Application{
		ApplicationID:   common.ApplicationID(stored.ApplicationID),
		GameID:          common.GameID(stored.GameID),
		ApplicantUserID: stored.ApplicantUserID,
		RaceName:        stored.RaceName,
		Status:          stored.Status,
		CreatedAt:       time.UnixMilli(stored.CreatedAtMS).UTC(),
		DecidedAt:       inflateOptionalTime(stored.DecidedAtMS),
	}
	if err := record.Validate(); err != nil {
		return application.Application{}, fmt.Errorf("decode redis application record: %w", err)
	}

	return record, nil
}
@@ -1,77 +0,0 @@
package redisstate

import (
	"encoding/json"
	"fmt"
	"time"

	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/invite"
)

// inviteRecord stores the strict Redis JSON shape used for one invite
// record.
type inviteRecord struct {
	InviteID      string        `json:"invite_id"`
	GameID        string        `json:"game_id"`
	InviterUserID string        `json:"inviter_user_id"`
	InviteeUserID string        `json:"invitee_user_id"`
	RaceName      string        `json:"race_name,omitempty"`
	Status        invite.Status `json:"status"`
	CreatedAtMS   int64         `json:"created_at_ms"`
	ExpiresAtMS   int64         `json:"expires_at_ms"`
	DecidedAtMS   *int64        `json:"decided_at_ms,omitempty"`
}

// MarshalInvite encodes record into the strict Redis JSON shape used for
// invite records. The record is re-validated before marshalling.
func MarshalInvite(record invite.Invite) ([]byte, error) {
	if err := record.Validate(); err != nil {
		return nil, fmt.Errorf("marshal redis invite record: %w", err)
	}

	stored := inviteRecord{
		InviteID:      record.InviteID.String(),
		GameID:        record.GameID.String(),
		InviterUserID: record.InviterUserID,
		InviteeUserID: record.InviteeUserID,
		RaceName:      record.RaceName,
		Status:        record.Status,
		CreatedAtMS:   record.CreatedAt.UTC().UnixMilli(),
		ExpiresAtMS:   record.ExpiresAt.UTC().UnixMilli(),
		DecidedAtMS:   optionalUnixMilli(record.DecidedAt),
	}

	payload, err := json.Marshal(stored)
	if err != nil {
		return nil, fmt.Errorf("marshal redis invite record: %w", err)
	}

	return payload, nil
}

// UnmarshalInvite decodes payload from the strict Redis JSON shape used
// for invite records. The decoded record is validated before returning.
func UnmarshalInvite(payload []byte) (invite.Invite, error) {
	var stored inviteRecord
	if err := decodeStrictJSON("decode redis invite record", payload, &stored); err != nil {
		return invite.Invite{}, err
	}

	record := invite.Invite{
		InviteID:      common.InviteID(stored.InviteID),
		GameID:        common.GameID(stored.GameID),
		InviterUserID: stored.InviterUserID,
		InviteeUserID: stored.InviteeUserID,
		RaceName:      stored.RaceName,
		Status:        stored.Status,
		CreatedAt:     time.UnixMilli(stored.CreatedAtMS).UTC(),
		ExpiresAt:     time.UnixMilli(stored.ExpiresAtMS).UTC(),
		DecidedAt:     inflateOptionalTime(stored.DecidedAtMS),
	}
	if err := record.Validate(); err != nil {
		return invite.Invite{}, fmt.Errorf("decode redis invite record: %w", err)
	}

	return record, nil
}
@@ -1,75 +0,0 @@
package redisstate

import (
	"encoding/json"
	"fmt"
	"time"

	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/membership"
)

// membershipRecord stores the strict Redis JSON shape used for one
// membership record.
type membershipRecord struct {
	MembershipID string            `json:"membership_id"`
	GameID       string            `json:"game_id"`
	UserID       string            `json:"user_id"`
	RaceName     string            `json:"race_name"`
	CanonicalKey string            `json:"canonical_key"`
	Status       membership.Status `json:"status"`
	JoinedAtMS   int64             `json:"joined_at_ms"`
	RemovedAtMS  *int64            `json:"removed_at_ms,omitempty"`
}

// MarshalMembership encodes record into the strict Redis JSON shape used
// for membership records. The record is re-validated before marshalling.
func MarshalMembership(record membership.Membership) ([]byte, error) {
	if err := record.Validate(); err != nil {
		return nil, fmt.Errorf("marshal redis membership record: %w", err)
	}

	stored := membershipRecord{
		MembershipID: record.MembershipID.String(),
		GameID:       record.GameID.String(),
		UserID:       record.UserID,
		RaceName:     record.RaceName,
		CanonicalKey: record.CanonicalKey,
		Status:       record.Status,
		JoinedAtMS:   record.JoinedAt.UTC().UnixMilli(),
		RemovedAtMS:  optionalUnixMilli(record.RemovedAt),
	}

	payload, err := json.Marshal(stored)
	if err != nil {
		return nil, fmt.Errorf("marshal redis membership record: %w", err)
	}

	return payload, nil
}

// UnmarshalMembership decodes payload from the strict Redis JSON shape
// used for membership records. The decoded record is validated before
// returning.
func UnmarshalMembership(payload []byte) (membership.Membership, error) {
	var stored membershipRecord
	if err := decodeStrictJSON("decode redis membership record", payload, &stored); err != nil {
		return membership.Membership{}, err
	}

	record := membership.Membership{
		MembershipID: common.MembershipID(stored.MembershipID),
		GameID:       common.GameID(stored.GameID),
		UserID:       stored.UserID,
		RaceName:     stored.RaceName,
		CanonicalKey: stored.CanonicalKey,
		Status:       stored.Status,
		JoinedAt:     time.UnixMilli(stored.JoinedAtMS).UTC(),
		RemovedAt:    inflateOptionalTime(stored.RemovedAtMS),
	}
	if err := record.Validate(); err != nil {
		return membership.Membership{}, fmt.Errorf("decode redis membership record: %w", err)
	}

	return record, nil
}
@@ -1,111 +0,0 @@
package redisstate

import (
	"encoding/json"
	"fmt"
)

// registeredRecord stores the strict Redis JSON shape of one registered
// race name. The canonical key is stored only as the Redis key suffix and
// is not duplicated inside the blob.
type registeredRecord struct {
	UserID         string `json:"user_id"`
	RaceName       string `json:"race_name"`
	SourceGameID   string `json:"source_game_id"`
	RegisteredAtMS int64  `json:"registered_at_ms"`
}

// reservationStatusReserved marks a per-game race name reservation that
// has not yet been promoted by capability evaluation.
const reservationStatusReserved = "reserved"

// reservationStatusPending marks a reservation that has been promoted to
// pending_registration by the capability evaluator at game_finished.
const reservationStatusPending = "pending_registration"

// reservationRecord stores the strict Redis JSON shape of one per-game
// race name reservation. The game_id and canonical key are carried by the
// Redis key suffix; the blob never duplicates them.
type reservationRecord struct {
	UserID          string `json:"user_id"`
	RaceName        string `json:"race_name"`
	ReservedAtMS    int64  `json:"reserved_at_ms"`
	Status          string `json:"status"`
	EligibleUntilMS *int64 `json:"eligible_until_ms,omitempty"`
}

// canonicalLookupRecord stores the eager canonical-lookup cache entry
// used by Check to return availability without scanning the authoritative
// keys. GameID is populated only for reservation and pending_registration
// kinds; it is omitted for registered bindings.
type canonicalLookupRecord struct {
	Kind         string `json:"kind"`
	HolderUserID string `json:"holder_user_id"`
	GameID       string `json:"game_id,omitempty"`
}

// marshalRegisteredRecord encodes record into the strict Redis JSON shape
// used for registered race names.
func marshalRegisteredRecord(record registeredRecord) ([]byte, error) {
	payload, err := json.Marshal(record)
	if err != nil {
		return nil, fmt.Errorf("marshal redis registered race name record: %w", err)
	}

	return payload, nil
}

// unmarshalRegisteredRecord decodes payload from the strict Redis JSON
// shape used for registered race names.
func unmarshalRegisteredRecord(payload []byte) (registeredRecord, error) {
	var record registeredRecord
	if err := decodeStrictJSON("decode redis registered race name record", payload, &record); err != nil {
		return registeredRecord{}, err
	}

	return record, nil
}

// marshalReservationRecord encodes record into the strict Redis JSON
// shape used for per-game race name reservations.
func marshalReservationRecord(record reservationRecord) ([]byte, error) {
	payload, err := json.Marshal(record)
	if err != nil {
		return nil, fmt.Errorf("marshal redis race name reservation record: %w", err)
	}

	return payload, nil
}

// unmarshalReservationRecord decodes payload from the strict Redis JSON
// shape used for per-game race name reservations.
func unmarshalReservationRecord(payload []byte) (reservationRecord, error) {
	var record reservationRecord
	if err := decodeStrictJSON("decode redis race name reservation record", payload, &record); err != nil {
		return reservationRecord{}, err
	}

	return record, nil
}

// marshalCanonicalLookupRecord encodes record into the strict Redis JSON
// shape used for canonical-lookup cache entries.
func marshalCanonicalLookupRecord(record canonicalLookupRecord) ([]byte, error) {
	payload, err := json.Marshal(record)
	if err != nil {
		return nil, fmt.Errorf("marshal redis race name canonical lookup record: %w", err)
	}

	return payload, nil
}

// unmarshalCanonicalLookupRecord decodes payload from the strict Redis
// JSON shape used for canonical-lookup cache entries.
func unmarshalCanonicalLookupRecord(payload []byte) (canonicalLookupRecord, error) {
	var record canonicalLookupRecord
	if err := decodeStrictJSON("decode redis race name canonical lookup record", payload, &record); err != nil {
		return canonicalLookupRecord{}, err
	}

	return record, nil
}
@@ -1,10 +1,11 @@
-// Package redisstate defines the frozen Game Lobby Service Redis keyspace,
-// strict JSON record shapes, and low-level mutation helpers used by the
-// Game Lobby store adapters.
+// Package redisstate defines the Game Lobby Service Redis keyspace and
+// the adapters for the runtime-coordination state that intentionally
+// stays on Redis after the PG_PLAN.md §6A and §6B migrations.
 //
-// Adapters in this package implement ports.GameStore,
-// ports.ApplicationStore, ports.InviteStore, and ports.MembershipStore on
-// top of a `*redis.Client`. Every marshal and unmarshal round-trip calls
-// the domain-level Validate method to guarantee that the store never
-// exposes malformed records.
+// Adapters in this package implement ports.GameTurnStatsStore,
+// ports.GapActivationStore, ports.EvaluationGuardStore, and
+// ports.StreamOffsetStore plus the StreamLagProbe used for telemetry. The
+// durable enrollment entities (game, application, invite, membership)
+// and the Race Name Directory live in PostgreSQL; their previous Redis
+// adapters and codecs have been removed.
 package redisstate
@@ -1,454 +0,0 @@
package redisstate

import (
	"context"
	"errors"
	"fmt"
	"strings"

	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/game"
	"galaxy/lobby/internal/ports"

	"github.com/redis/go-redis/v9"
)

// GameStore provides Redis-backed durable storage for game records.
type GameStore struct {
	client *redis.Client
	keys   Keyspace
}

// NewGameStore constructs one Redis-backed game store. It returns an
// error when client is nil.
func NewGameStore(client *redis.Client) (*GameStore, error) {
	if client == nil {
		return nil, errors.New("new game store: nil redis client")
	}

	return &GameStore{
		client: client,
		keys:   Keyspace{},
	}, nil
}

// Save upserts record and rewrites the status secondary index when the
// status changes.
func (store *GameStore) Save(ctx context.Context, record game.Game) error {
	if store == nil || store.client == nil {
		return errors.New("save game: nil store")
	}
	if ctx == nil {
		return errors.New("save game: nil context")
	}
	if err := record.Validate(); err != nil {
		return fmt.Errorf("save game: %w", err)
	}

	payload, err := MarshalGame(record)
	if err != nil {
		return fmt.Errorf("save game: %w", err)
	}

	primaryKey := store.keys.Game(record.GameID)
	newIndexKey := store.keys.GamesByStatus(record.Status)
	member := record.GameID.String()
	createdAtScore := CreatedAtScore(record.CreatedAt)

	watchErr := store.client.Watch(ctx, func(tx *redis.Tx) error {
		var previousStatus game.Status
		existingPayload, getErr := tx.Get(ctx, primaryKey).Bytes()
		switch {
		case errors.Is(getErr, redis.Nil):
			previousStatus = ""
		case getErr != nil:
			return fmt.Errorf("save game: %w", getErr)
		default:
			existing, err := UnmarshalGame(existingPayload)
			if err != nil {
				return fmt.Errorf("save game: %w", err)
			}
			previousStatus = existing.Status
		}

		_, err := tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
			pipe.Set(ctx, primaryKey, payload, GameRecordTTL)
			if previousStatus != "" && previousStatus != record.Status {
				pipe.ZRem(ctx, store.keys.GamesByStatus(previousStatus), member)
			}
			pipe.ZAdd(ctx, newIndexKey, redis.Z{
				Score:  createdAtScore,
				Member: member,
			})
			if owner := strings.TrimSpace(record.OwnerUserID); owner != "" {
				pipe.SAdd(ctx, store.keys.GamesByOwner(owner), member)
			}
			return nil
		})
		return err
	}, primaryKey)

	switch {
	case errors.Is(watchErr, redis.TxFailedErr):
		return fmt.Errorf("save game: %w", game.ErrConflict)
	case watchErr != nil:
		return watchErr
	default:
		return nil
	}
}

// Get returns the record identified by gameID.
func (store *GameStore) Get(ctx context.Context, gameID common.GameID) (game.Game, error) {
	if store == nil || store.client == nil {
		return game.Game{}, errors.New("get game: nil store")
	}
	if ctx == nil {
		return game.Game{}, errors.New("get game: nil context")
	}
	if err := gameID.Validate(); err != nil {
		return game.Game{}, fmt.Errorf("get game: %w", err)
	}

	payload, err := store.client.Get(ctx, store.keys.Game(gameID)).Bytes()
	switch {
	case errors.Is(err, redis.Nil):
		return game.Game{}, game.ErrNotFound
	case err != nil:
		return game.Game{}, fmt.Errorf("get game: %w", err)
	}

	record, err := UnmarshalGame(payload)
	if err != nil {
		return game.Game{}, fmt.Errorf("get game: %w", err)
	}

	return record, nil
}

// GetByStatus returns every record indexed under status. Stale index
// entries (primary key removed out-of-band) are dropped silently.
func (store *GameStore) GetByStatus(ctx context.Context, status game.Status) ([]game.Game, error) {
	if store == nil || store.client == nil {
		return nil, errors.New("get games by status: nil store")
	}
	if ctx == nil {
		return nil, errors.New("get games by status: nil context")
	}
	if !status.IsKnown() {
		return nil, fmt.Errorf("get games by status: status %q is unsupported", status)
	}

	members, err := store.client.ZRange(ctx, store.keys.GamesByStatus(status), 0, -1).Result()
	if err != nil {
		return nil, fmt.Errorf("get games by status: %w", err)
	}
	if len(members) == 0 {
		return nil, nil
	}

	primaryKeys := make([]string, len(members))
	for index, member := range members {
		primaryKeys[index] = store.keys.Game(common.GameID(member))
	}

	payloads, err := store.client.MGet(ctx, primaryKeys...).Result()
	if err != nil {
		return nil, fmt.Errorf("get games by status: %w", err)
	}

	records := make([]game.Game, 0, len(payloads))
	for _, entry := range payloads {
		if entry == nil {
			continue
		}
		raw, ok := entry.(string)
		if !ok {
			return nil, fmt.Errorf("get games by status: unexpected payload type %T", entry)
		}
		record, err := UnmarshalGame([]byte(raw))
		if err != nil {
			return nil, fmt.Errorf("get games by status: %w", err)
		}
		records = append(records, record)
	}

	return records, nil
}

// CountByStatus returns the number of game identifiers indexed under each
// known status. The map carries one entry per game.AllStatuses, with zero
// counts for empty buckets. The implementation issues one ZCARD per status
// in a single Redis pipeline so the cost stays O(number of statuses).
func (store *GameStore) CountByStatus(ctx context.Context) (map[game.Status]int, error) {
	if store == nil || store.client == nil {
		return nil, errors.New("count games by status: nil store")
	}
	if ctx == nil {
		return nil, errors.New("count games by status: nil context")
	}

	statuses := game.AllStatuses()
	pipeline := store.client.Pipeline()
	results := make([]*redis.IntCmd, len(statuses))
	for index, status := range statuses {
		results[index] = pipeline.ZCard(ctx, store.keys.GamesByStatus(status))
	}
	if _, err := pipeline.Exec(ctx); err != nil {
		return nil, fmt.Errorf("count games by status: %w", err)
	}

	counts := make(map[game.Status]int, len(statuses))
	for index, status := range statuses {
		count, err := results[index].Result()
		if err != nil {
			return nil, fmt.Errorf("count games by status: %s: %w", status, err)
		}
		counts[status] = int(count)
	}
	return counts, nil
}

// GetByOwner returns every record whose OwnerUserID equals userID.
// Stale index entries (primary key removed out-of-band) are dropped
// silently. The slice order is adapter-defined.
func (store *GameStore) GetByOwner(ctx context.Context, userID string) ([]game.Game, error) {
	if store == nil || store.client == nil {
		return nil, errors.New("get games by owner: nil store")
	}
	if ctx == nil {
		return nil, errors.New("get games by owner: nil context")
	}
	trimmed := strings.TrimSpace(userID)
	if trimmed == "" {
		return nil, fmt.Errorf("get games by owner: user id must not be empty")
	}

	members, err := store.client.SMembers(ctx, store.keys.GamesByOwner(trimmed)).Result()
	if err != nil {
		return nil, fmt.Errorf("get games by owner: %w", err)
	}
	if len(members) == 0 {
		return nil, nil
	}

	primaryKeys := make([]string, len(members))
	for index, member := range members {
		primaryKeys[index] = store.keys.Game(common.GameID(member))
	}

	payloads, err := store.client.MGet(ctx, primaryKeys...).Result()
	if err != nil {
		return nil, fmt.Errorf("get games by owner: %w", err)
	}

	records := make([]game.Game, 0, len(payloads))
	for _, entry := range payloads {
		if entry == nil {
			continue
		}
		raw, ok := entry.(string)
		if !ok {
			return nil, fmt.Errorf("get games by owner: unexpected payload type %T", entry)
		}
		record, err := UnmarshalGame([]byte(raw))
		if err != nil {
			return nil, fmt.Errorf("get games by owner: %w", err)
		}
		records = append(records, record)
	}

	return records, nil
}

// UpdateStatus applies one status transition in a compare-and-swap
// fashion.
func (store *GameStore) UpdateStatus(ctx context.Context, input ports.UpdateStatusInput) error {
	if store == nil || store.client == nil {
		return errors.New("update game status: nil store")
	}
	if ctx == nil {
		return errors.New("update game status: nil context")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update game status: %w", err)
	}

	if err := game.Transition(input.ExpectedFrom, input.To, input.Trigger); err != nil {
		return err
	}

	primaryKey := store.keys.Game(input.GameID)
	member := input.GameID.String()
	at := input.At.UTC()

	watchErr := store.client.Watch(ctx, func(tx *redis.Tx) error {
		payload, getErr := tx.Get(ctx, primaryKey).Bytes()
		switch {
		case errors.Is(getErr, redis.Nil):
			return game.ErrNotFound
		case getErr != nil:
			return fmt.Errorf("update game status: %w", getErr)
		}

		existing, err := UnmarshalGame(payload)
		if err != nil {
			return fmt.Errorf("update game status: %w", err)
		}
		if existing.Status != input.ExpectedFrom {
			return fmt.Errorf("update game status: %w", game.ErrConflict)
		}

		existing.Status = input.To
		existing.UpdatedAt = at
		if input.To == game.StatusRunning && existing.StartedAt == nil {
			startedAt := at
			existing.StartedAt = &startedAt
		}
		if input.To == game.StatusFinished && existing.FinishedAt == nil {
			finishedAt := at
			existing.FinishedAt = &finishedAt
		}

		encoded, err := MarshalGame(existing)
		if err != nil {
			return fmt.Errorf("update game status: %w", err)
		}

		_, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
			pipe.Set(ctx, primaryKey, encoded, GameRecordTTL)
			pipe.ZRem(ctx, store.keys.GamesByStatus(input.ExpectedFrom), member)
			pipe.ZAdd(ctx, store.keys.GamesByStatus(input.To), redis.Z{
				Score:  CreatedAtScore(existing.CreatedAt),
				Member: member,
			})
			return nil
		})
		return err
	}, primaryKey)

	switch {
	case errors.Is(watchErr, redis.TxFailedErr):
		return fmt.Errorf("update game status: %w", game.ErrConflict)
	case watchErr != nil:
		return watchErr
	default:
		return nil
	}
}

// UpdateRuntimeSnapshot overwrites the denormalized runtime snapshot
// fields on the record identified by input.GameID.
func (store *GameStore) UpdateRuntimeSnapshot(ctx context.Context, input ports.UpdateRuntimeSnapshotInput) error {
	if store == nil || store.client == nil {
		return errors.New("update runtime snapshot: nil store")
	}
	if ctx == nil {
		return errors.New("update runtime snapshot: nil context")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update runtime snapshot: %w", err)
	}

	primaryKey := store.keys.Game(input.GameID)
	at := input.At.UTC()

	watchErr := store.client.Watch(ctx, func(tx *redis.Tx) error {
		payload, getErr := tx.Get(ctx, primaryKey).Bytes()
		switch {
		case errors.Is(getErr, redis.Nil):
			return game.ErrNotFound
		case getErr != nil:
			return fmt.Errorf("update runtime snapshot: %w", getErr)
		}

		existing, err := UnmarshalGame(payload)
		if err != nil {
			return fmt.Errorf("update runtime snapshot: %w", err)
		}

		existing.RuntimeSnapshot = input.Snapshot
		existing.UpdatedAt = at

		encoded, err := MarshalGame(existing)
		if err != nil {
			return fmt.Errorf("update runtime snapshot: %w", err)
		}

		_, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
			pipe.Set(ctx, primaryKey, encoded, GameRecordTTL)
			return nil
		})
		return err
	}, primaryKey)

	switch {
	case errors.Is(watchErr, redis.TxFailedErr):
		return fmt.Errorf("update runtime snapshot: %w", game.ErrConflict)
	case watchErr != nil:
		return watchErr
	default:
		return nil
	}
}

// UpdateRuntimeBinding overwrites the runtime binding metadata on the
// record identified by input.GameID. The runtimejobresult worker calls
// this method after a successful container start.
func (store *GameStore) UpdateRuntimeBinding(ctx context.Context, input ports.UpdateRuntimeBindingInput) error {
	if store == nil || store.client == nil {
		return errors.New("update runtime binding: nil store")
	}
	if ctx == nil {
		return errors.New("update runtime binding: nil context")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update runtime binding: %w", err)
	}

	primaryKey := store.keys.Game(input.GameID)
	at := input.At.UTC()

	watchErr := store.client.Watch(ctx, func(tx *redis.Tx) error {
		payload, getErr := tx.Get(ctx, primaryKey).Bytes()
		switch {
		case errors.Is(getErr, redis.Nil):
			return game.ErrNotFound
		case getErr != nil:
			return fmt.Errorf("update runtime binding: %w", getErr)
		}

		existing, err := UnmarshalGame(payload)
		if err != nil {
			return fmt.Errorf("update runtime binding: %w", err)
		}

		binding := input.Binding
		existing.RuntimeBinding = &binding
		existing.UpdatedAt = at

		encoded, err := MarshalGame(existing)
		if err != nil {
			return fmt.Errorf("update runtime binding: %w", err)
		}

		_, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
			pipe.Set(ctx, primaryKey, encoded, GameRecordTTL)
			return nil
		})
		return err
	}, primaryKey)

	switch {
	case errors.Is(watchErr, redis.TxFailedErr):
		return fmt.Errorf("update runtime binding: %w", game.ErrConflict)
	case watchErr != nil:
		return watchErr
	default:
		return nil
	}
}

// Ensure GameStore satisfies the ports.GameStore interface at compile
// time.
var _ ports.GameStore = (*GameStore)(nil)
@@ -1,557 +0,0 @@
|
||||
package redisstate_test
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/base64"
|
||||
"errors"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"galaxy/lobby/internal/adapters/redisstate"
|
||||
"galaxy/lobby/internal/domain/common"
|
||||
"galaxy/lobby/internal/domain/game"
|
||||
"galaxy/lobby/internal/ports"
|
||||
|
||||
"github.com/alicebob/miniredis/v2"
|
||||
"github.com/redis/go-redis/v9"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func newTestStore(t *testing.T) (*redisstate.GameStore, *miniredis.Miniredis, *redis.Client) {
|
||||
t.Helper()
|
||||
|
||||
server := miniredis.RunT(t)
|
||||
client := redis.NewClient(&redis.Options{Addr: server.Addr()})
|
||||
t.Cleanup(func() {
|
||||
_ = client.Close()
|
||||
})
|
||||
|
||||
store, err := redisstate.NewGameStore(client)
|
||||
require.NoError(t, err)
|
||||
|
||||
return store, server, client
|
||||
}
|
||||
|
||||
func fixtureGame(t *testing.T) game.Game {
|
||||
t.Helper()
|
||||
|
||||
now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
|
||||
record, err := game.New(game.NewGameInput{
|
||||
GameID: common.GameID("game-1"),
|
||||
GameName: "Spring Classic",
|
||||
Description: "first public game",
|
||||
GameType: game.GameTypePublic,
|
||||
MinPlayers: 4,
|
||||
MaxPlayers: 8,
|
||||
StartGapHours: 24,
|
||||
StartGapPlayers: 2,
|
||||
EnrollmentEndsAt: now.Add(7 * 24 * time.Hour),
|
||||
TurnSchedule: "0 18 * * *",
|
||||
TargetEngineVersion: "v1.2.3",
|
||||
Now: now,
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
return record
|
||||
}
|
||||
|
||||
func statusIndexMembers(t *testing.T, client *redis.Client, status game.Status) []string {
|
||||
t.Helper()
|
||||
|
||||
members, err := client.ZRange(context.Background(), "lobby:games_by_status:"+base64URL(string(status)), 0, -1).Result()
|
||||
require.NoError(t, err)
|
||||
return members
|
||||
}

func TestNewGameStoreRejectsNilClient(t *testing.T) {
	_, err := redisstate.NewGameStore(nil)
	require.Error(t, err)
}

func TestGameStoreSaveAndGet(t *testing.T) {
	ctx := context.Background()
	store, _, client := newTestStore(t)

	record := fixtureGame(t)
	require.NoError(t, store.Save(ctx, record))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, record.GameID, got.GameID)
	assert.Equal(t, record.Status, got.Status)
	assert.Equal(t, record.GameName, got.GameName)
	assert.Equal(t, record.MinPlayers, got.MinPlayers)
	assert.Equal(t, record.MaxPlayers, got.MaxPlayers)
	assert.Equal(t, record.EnrollmentEndsAt.Unix(), got.EnrollmentEndsAt.Unix())

	members := statusIndexMembers(t, client, game.StatusDraft)
	assert.Contains(t, members, record.GameID.String())
}

func TestGameStoreGetReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newTestStore(t)

	_, err := store.Get(ctx, common.GameID("game-missing"))
	require.ErrorIs(t, err, game.ErrNotFound)
}

func TestGameStoreSaveRewritesStatusIndexOnStatusChange(t *testing.T) {
	ctx := context.Background()
	store, _, client := newTestStore(t)

	record := fixtureGame(t)
	require.NoError(t, store.Save(ctx, record))

	record.Status = game.StatusEnrollmentOpen
	record.UpdatedAt = record.UpdatedAt.Add(time.Minute)
	require.NoError(t, store.Save(ctx, record))

	assert.Empty(t, statusIndexMembers(t, client, game.StatusDraft))
	assert.Contains(t, statusIndexMembers(t, client, game.StatusEnrollmentOpen), record.GameID.String())
}

func TestGameStoreCountByStatusReturnsAllBuckets(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newTestStore(t)

	record1 := fixtureGame(t)
	record1.GameID = common.GameID("game-count-a")

	record2 := fixtureGame(t)
	record2.GameID = common.GameID("game-count-b")
	record2.CreatedAt = record2.CreatedAt.Add(time.Second)
	record2.UpdatedAt = record2.CreatedAt

	record3 := fixtureGame(t)
	record3.GameID = common.GameID("game-count-c")
	record3.Status = game.StatusEnrollmentOpen

	for _, record := range []game.Game{record1, record2, record3} {
		require.NoError(t, store.Save(ctx, record))
	}

	counts, err := store.CountByStatus(ctx)
	require.NoError(t, err)

	for _, status := range game.AllStatuses() {
		_, present := counts[status]
		require.True(t, present, "expected %s bucket", status)
	}
	require.Equal(t, 2, counts[game.StatusDraft])
	require.Equal(t, 1, counts[game.StatusEnrollmentOpen])
	require.Equal(t, 0, counts[game.StatusRunning])
}

func TestGameStoreGetByStatusReturnsMatchingRecords(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newTestStore(t)

	record1 := fixtureGame(t)
	record1.GameID = common.GameID("game-a")

	record2 := fixtureGame(t)
	record2.GameID = common.GameID("game-b")
	record2.CreatedAt = record2.CreatedAt.Add(time.Second)
	record2.UpdatedAt = record2.CreatedAt

	record3 := fixtureGame(t)
	record3.GameID = common.GameID("game-c")
	record3.Status = game.StatusEnrollmentOpen

	for _, record := range []game.Game{record1, record2, record3} {
		require.NoError(t, store.Save(ctx, record))
	}

	drafts, err := store.GetByStatus(ctx, game.StatusDraft)
	require.NoError(t, err)
	require.Len(t, drafts, 2)
	gotIDs := []string{drafts[0].GameID.String(), drafts[1].GameID.String()}
	assert.Contains(t, gotIDs, record1.GameID.String())
	assert.Contains(t, gotIDs, record2.GameID.String())

	enrollment, err := store.GetByStatus(ctx, game.StatusEnrollmentOpen)
	require.NoError(t, err)
	require.Len(t, enrollment, 1)
	assert.Equal(t, record3.GameID, enrollment[0].GameID)

	running, err := store.GetByStatus(ctx, game.StatusRunning)
	require.NoError(t, err)
	assert.Empty(t, running)
}

func TestGameStoreGetByOwnerReturnsOwnedGames(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newTestStore(t)

	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	record1, err := game.New(game.NewGameInput{
		GameID:              common.GameID("game-priv-a"),
		GameName:            "Owner A first",
		GameType:            game.GameTypePrivate,
		OwnerUserID:         "user-owner-a",
		MinPlayers:          2,
		MaxPlayers:          4,
		StartGapHours:       1,
		StartGapPlayers:     1,
		EnrollmentEndsAt:    now.Add(48 * time.Hour),
		TurnSchedule:        "0 18 * * *",
		TargetEngineVersion: "v1.0.0",
		Now:                 now,
	})
	require.NoError(t, err)
	record2, err := game.New(game.NewGameInput{
		GameID:              common.GameID("game-priv-b"),
		GameName:            "Owner A second",
		GameType:            game.GameTypePrivate,
		OwnerUserID:         "user-owner-a",
		MinPlayers:          2,
		MaxPlayers:          4,
		StartGapHours:       1,
		StartGapPlayers:     1,
		EnrollmentEndsAt:    now.Add(48 * time.Hour),
		TurnSchedule:        "0 18 * * *",
		TargetEngineVersion: "v1.0.0",
		Now:                 now.Add(time.Second),
	})
	require.NoError(t, err)
	record3, err := game.New(game.NewGameInput{
		GameID:              common.GameID("game-priv-c"),
		GameName:            "Owner B",
		GameType:            game.GameTypePrivate,
		OwnerUserID:         "user-owner-b",
		MinPlayers:          2,
		MaxPlayers:          4,
		StartGapHours:       1,
		StartGapPlayers:     1,
		EnrollmentEndsAt:    now.Add(48 * time.Hour),
		TurnSchedule:        "0 18 * * *",
		TargetEngineVersion: "v1.0.0",
		Now:                 now,
	})
	require.NoError(t, err)
	publicRecord := fixtureGame(t)

	for _, record := range []game.Game{record1, record2, record3, publicRecord} {
		require.NoError(t, store.Save(ctx, record))
	}

	ownerA, err := store.GetByOwner(ctx, "user-owner-a")
	require.NoError(t, err)
	require.Len(t, ownerA, 2)

	ownerB, err := store.GetByOwner(ctx, "user-owner-b")
	require.NoError(t, err)
	require.Len(t, ownerB, 1)
	assert.Equal(t, record3.GameID, ownerB[0].GameID)

	ownerNone, err := store.GetByOwner(ctx, "user-owner-none")
	require.NoError(t, err)
	assert.Empty(t, ownerNone)
}

func TestGameStoreGetByStatusDropsStaleIndexEntries(t *testing.T) {
	ctx := context.Background()
	store, server, _ := newTestStore(t)

	record := fixtureGame(t)
	require.NoError(t, store.Save(ctx, record))

	// Delete the primary key out-of-band, leaving the index entry stale.
	server.Del("lobby:games:" + base64URL(record.GameID.String()))

	records, err := store.GetByStatus(ctx, game.StatusDraft)
	require.NoError(t, err)
	assert.Empty(t, records)
}

func TestGameStoreUpdateStatusValidTransition(t *testing.T) {
	ctx := context.Background()
	store, _, client := newTestStore(t)

	record := fixtureGame(t)
	require.NoError(t, store.Save(ctx, record))

	at := record.CreatedAt.Add(time.Hour)
	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       record.GameID,
		ExpectedFrom: game.StatusDraft,
		To:           game.StatusEnrollmentOpen,
		Trigger:      game.TriggerCommand,
		At:           at,
	}))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, game.StatusEnrollmentOpen, got.Status)
	assert.True(t, got.UpdatedAt.Equal(at.UTC()))
	assert.Nil(t, got.StartedAt)
	assert.Nil(t, got.FinishedAt)

	assert.Empty(t, statusIndexMembers(t, client, game.StatusDraft))
	assert.Contains(t, statusIndexMembers(t, client, game.StatusEnrollmentOpen), record.GameID.String())
}

func TestGameStoreUpdateStatusSetsStartedAtAndFinishedAt(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newTestStore(t)

	record := fixtureGame(t)
	record.Status = game.StatusStarting
	require.NoError(t, store.Save(ctx, record))

	startedAt := record.CreatedAt.Add(time.Hour)
	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       record.GameID,
		ExpectedFrom: game.StatusStarting,
		To:           game.StatusRunning,
		Trigger:      game.TriggerRuntimeEvent,
		At:           startedAt,
	}))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, game.StatusRunning, got.Status)
	require.NotNil(t, got.StartedAt)
	assert.True(t, got.StartedAt.Equal(startedAt.UTC()))
	assert.Nil(t, got.FinishedAt)

	finishedAt := startedAt.Add(2 * time.Hour)
	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       record.GameID,
		ExpectedFrom: game.StatusRunning,
		To:           game.StatusFinished,
		Trigger:      game.TriggerRuntimeEvent,
		At:           finishedAt,
	}))

	got, err = store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, game.StatusFinished, got.Status)
	require.NotNil(t, got.StartedAt)
	assert.True(t, got.StartedAt.Equal(startedAt.UTC()))
	require.NotNil(t, got.FinishedAt)
	assert.True(t, got.FinishedAt.Equal(finishedAt.UTC()))
}

func TestGameStoreUpdateStatusRejectsInvalidTransitionWithoutMutation(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newTestStore(t)

	record := fixtureGame(t)
	require.NoError(t, store.Save(ctx, record))

	err := store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       record.GameID,
		ExpectedFrom: game.StatusDraft,
		To:           game.StatusRunning,
		Trigger:      game.TriggerCommand,
		At:           record.CreatedAt.Add(time.Minute),
	})
	require.Error(t, err)
	assert.True(t, errors.Is(err, game.ErrInvalidTransition))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, game.StatusDraft, got.Status)
	assert.True(t, got.UpdatedAt.Equal(record.UpdatedAt))
}

func TestGameStoreUpdateStatusRejectsWrongTrigger(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newTestStore(t)

	record := fixtureGame(t)
	require.NoError(t, store.Save(ctx, record))

	err := store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       record.GameID,
		ExpectedFrom: game.StatusDraft,
		To:           game.StatusEnrollmentOpen,
		Trigger:      game.TriggerDeadline,
		At:           record.CreatedAt.Add(time.Minute),
	})
	require.Error(t, err)
	assert.True(t, errors.Is(err, game.ErrInvalidTransition))
}

func TestGameStoreUpdateStatusReturnsConflictOnExpectedFromMismatch(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newTestStore(t)

	record := fixtureGame(t)
	require.NoError(t, store.Save(ctx, record))

	err := store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       record.GameID,
		ExpectedFrom: game.StatusEnrollmentOpen,
		To:           game.StatusReadyToStart,
		Trigger:      game.TriggerManual,
		At:           record.CreatedAt.Add(time.Minute),
	})
	require.Error(t, err)
	assert.True(t, errors.Is(err, game.ErrConflict))
}

func TestGameStoreUpdateStatusReturnsNotFoundForMissingRecord(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newTestStore(t)

	err := store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       common.GameID("game-missing"),
		ExpectedFrom: game.StatusDraft,
		To:           game.StatusEnrollmentOpen,
		Trigger:      game.TriggerCommand,
		At:           time.Now().UTC(),
	})
	require.ErrorIs(t, err, game.ErrNotFound)
}

func TestGameStoreUpdateRuntimeSnapshot(t *testing.T) {
	ctx := context.Background()
	store, _, client := newTestStore(t)

	record := fixtureGame(t)
	record.Status = game.StatusRunning
	startedAt := record.CreatedAt.Add(time.Hour)
	record.StartedAt = &startedAt
	require.NoError(t, store.Save(ctx, record))

	at := startedAt.Add(10 * time.Minute)
	require.NoError(t, store.UpdateRuntimeSnapshot(ctx, ports.UpdateRuntimeSnapshotInput{
		GameID: record.GameID,
		Snapshot: game.RuntimeSnapshot{
			CurrentTurn:         5,
			RuntimeStatus:       "running_accepting_commands",
			EngineHealthSummary: "ok",
		},
		At: at,
	}))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, 5, got.RuntimeSnapshot.CurrentTurn)
	assert.Equal(t, "running_accepting_commands", got.RuntimeSnapshot.RuntimeStatus)
	assert.Equal(t, "ok", got.RuntimeSnapshot.EngineHealthSummary)
	assert.True(t, got.UpdatedAt.Equal(at.UTC()))
	assert.Equal(t, game.StatusRunning, got.Status)

	assert.Contains(t, statusIndexMembers(t, client, game.StatusRunning), record.GameID.String())
}

func TestGameStoreUpdateRuntimeSnapshotReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newTestStore(t)

	err := store.UpdateRuntimeSnapshot(ctx, ports.UpdateRuntimeSnapshotInput{
		GameID:   common.GameID("game-missing"),
		Snapshot: game.RuntimeSnapshot{},
		At:       time.Now().UTC(),
	})
	require.ErrorIs(t, err, game.ErrNotFound)
}

func TestGameStoreUpdateRuntimeBinding(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newTestStore(t)

	record := fixtureGame(t)
	record.Status = game.StatusStarting
	require.NoError(t, store.Save(ctx, record))

	bound := record.CreatedAt.Add(time.Hour)
	require.NoError(t, store.UpdateRuntimeBinding(ctx, ports.UpdateRuntimeBindingInput{
		GameID: record.GameID,
		Binding: game.RuntimeBinding{
			ContainerID:    "container-1",
			EngineEndpoint: "engine.local:9000",
			RuntimeJobID:   "1700000000000-0",
			BoundAt:        bound,
		},
		At: bound,
	}))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	require.NotNil(t, got.RuntimeBinding)
	assert.Equal(t, "container-1", got.RuntimeBinding.ContainerID)
	assert.Equal(t, "engine.local:9000", got.RuntimeBinding.EngineEndpoint)
	assert.Equal(t, "1700000000000-0", got.RuntimeBinding.RuntimeJobID)
	assert.True(t, got.RuntimeBinding.BoundAt.Equal(bound.UTC()))
	assert.Equal(t, game.StatusStarting, got.Status, "binding update must not change status")
	assert.True(t, got.UpdatedAt.Equal(bound.UTC()))
}

func TestGameStoreUpdateRuntimeBindingReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newTestStore(t)

	err := store.UpdateRuntimeBinding(ctx, ports.UpdateRuntimeBindingInput{
		GameID: common.GameID("game-missing"),
		Binding: game.RuntimeBinding{
			ContainerID:    "container-1",
			EngineEndpoint: "engine.local:9000",
			RuntimeJobID:   "1700000000000-0",
			BoundAt:        time.Now().UTC(),
		},
		At: time.Now().UTC(),
	})
	require.ErrorIs(t, err, game.ErrNotFound)
}

func TestGameStoreConcurrentUpdateStatusHasExactlyOneWinner(t *testing.T) {
	ctx := context.Background()
	store, _, client := newTestStore(t)

	record := fixtureGame(t)
	require.NoError(t, store.Save(ctx, record))

	storeA, err := redisstate.NewGameStore(client)
	require.NoError(t, err)
	storeB, err := redisstate.NewGameStore(client)
	require.NoError(t, err)

	var (
		wg        sync.WaitGroup
		successes atomic.Int32
		conflicts atomic.Int32
		others    atomic.Int32
	)

	apply := func(target *redisstate.GameStore) {
		defer wg.Done()
		err := target.UpdateStatus(ctx, ports.UpdateStatusInput{
			GameID:       record.GameID,
			ExpectedFrom: game.StatusDraft,
			To:           game.StatusEnrollmentOpen,
			Trigger:      game.TriggerCommand,
			At:           record.CreatedAt.Add(time.Minute),
		})
		switch {
		case err == nil:
			successes.Add(1)
		case errors.Is(err, game.ErrConflict):
			conflicts.Add(1)
		default:
			others.Add(1)
		}
	}

	wg.Add(2)
	go apply(storeA)
	go apply(storeB)
	wg.Wait()

	assert.Equal(t, int32(0), others.Load(), "unexpected non-conflict error")
	assert.Equal(t, int32(1), successes.Load(), "expected exactly one success")
	assert.Equal(t, int32(1), conflicts.Load(), "expected exactly one conflict")
}

// base64URL mirrors the private key-segment encoding used by Keyspace.
// The tests use it to assert on exact Redis key shapes.
func base64URL(value string) string {
	return base64.RawURLEncoding.EncodeToString([]byte(value))
}
@@ -1,284 +0,0 @@
package redisstate

import (
	"context"
	"errors"
	"fmt"
	"strings"

	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/invite"
	"galaxy/lobby/internal/ports"

	"github.com/redis/go-redis/v9"
)

// InviteStore provides Redis-backed durable storage for invite records.
type InviteStore struct {
	client *redis.Client
	keys   Keyspace
}

// NewInviteStore constructs one Redis-backed invite store. It returns an
// error when client is nil.
func NewInviteStore(client *redis.Client) (*InviteStore, error) {
	if client == nil {
		return nil, errors.New("new invite store: nil redis client")
	}

	return &InviteStore{
		client: client,
		keys:   Keyspace{},
	}, nil
}

// Save persists a new created invite record. Save is create-only; a
// second save against the same invite id returns invite.ErrConflict.
func (store *InviteStore) Save(ctx context.Context, record invite.Invite) error {
	if store == nil || store.client == nil {
		return errors.New("save invite: nil store")
	}
	if ctx == nil {
		return errors.New("save invite: nil context")
	}
	if err := record.Validate(); err != nil {
		return fmt.Errorf("save invite: %w", err)
	}
	if record.Status != invite.StatusCreated {
		return fmt.Errorf(
			"save invite: status must be %q, got %q",
			invite.StatusCreated, record.Status,
		)
	}

	payload, err := MarshalInvite(record)
	if err != nil {
		return fmt.Errorf("save invite: %w", err)
	}

	primaryKey := store.keys.Invite(record.InviteID)
	gameIndexKey := store.keys.InvitesByGame(record.GameID)
	userIndexKey := store.keys.InvitesByUser(record.InviteeUserID)
	inviterIndexKey := store.keys.InvitesByInviter(record.InviterUserID)
	member := record.InviteID.String()

	watchErr := store.client.Watch(ctx, func(tx *redis.Tx) error {
		existing, getErr := tx.Exists(ctx, primaryKey).Result()
		if getErr != nil {
			return fmt.Errorf("save invite: %w", getErr)
		}
		if existing != 0 {
			return fmt.Errorf("save invite: %w", invite.ErrConflict)
		}

		_, err := tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
			pipe.Set(ctx, primaryKey, payload, InviteRecordTTL)
			pipe.SAdd(ctx, gameIndexKey, member)
			pipe.SAdd(ctx, userIndexKey, member)
			pipe.SAdd(ctx, inviterIndexKey, member)
			return nil
		})
		return err
	}, primaryKey)

	switch {
	case errors.Is(watchErr, redis.TxFailedErr):
		return fmt.Errorf("save invite: %w", invite.ErrConflict)
	case watchErr != nil:
		return watchErr
	default:
		return nil
	}
}

// Get returns the record identified by inviteID.
func (store *InviteStore) Get(ctx context.Context, inviteID common.InviteID) (invite.Invite, error) {
	if store == nil || store.client == nil {
		return invite.Invite{}, errors.New("get invite: nil store")
	}
	if ctx == nil {
		return invite.Invite{}, errors.New("get invite: nil context")
	}
	if err := inviteID.Validate(); err != nil {
		return invite.Invite{}, fmt.Errorf("get invite: %w", err)
	}

	payload, err := store.client.Get(ctx, store.keys.Invite(inviteID)).Bytes()
	switch {
	case errors.Is(err, redis.Nil):
		return invite.Invite{}, invite.ErrNotFound
	case err != nil:
		return invite.Invite{}, fmt.Errorf("get invite: %w", err)
	}

	record, err := UnmarshalInvite(payload)
	if err != nil {
		return invite.Invite{}, fmt.Errorf("get invite: %w", err)
	}
	return record, nil
}

// GetByGame returns every invite attached to gameID.
func (store *InviteStore) GetByGame(ctx context.Context, gameID common.GameID) ([]invite.Invite, error) {
	if store == nil || store.client == nil {
		return nil, errors.New("get invites by game: nil store")
	}
	if ctx == nil {
		return nil, errors.New("get invites by game: nil context")
	}
	if err := gameID.Validate(); err != nil {
		return nil, fmt.Errorf("get invites by game: %w", err)
	}

	return store.loadInvitesBySet(ctx,
		"get invites by game",
		store.keys.InvitesByGame(gameID),
	)
}

// GetByUser returns every invite addressed to inviteeUserID.
func (store *InviteStore) GetByUser(ctx context.Context, inviteeUserID string) ([]invite.Invite, error) {
	if store == nil || store.client == nil {
		return nil, errors.New("get invites by user: nil store")
	}
	if ctx == nil {
		return nil, errors.New("get invites by user: nil context")
	}
	trimmed := strings.TrimSpace(inviteeUserID)
	if trimmed == "" {
		return nil, fmt.Errorf("get invites by user: invitee user id must not be empty")
	}

	return store.loadInvitesBySet(ctx,
		"get invites by user",
		store.keys.InvitesByUser(trimmed),
	)
}

// GetByInviter returns every invite created by inviterUserID.
func (store *InviteStore) GetByInviter(ctx context.Context, inviterUserID string) ([]invite.Invite, error) {
	if store == nil || store.client == nil {
		return nil, errors.New("get invites by inviter: nil store")
	}
	if ctx == nil {
		return nil, errors.New("get invites by inviter: nil context")
	}
	trimmed := strings.TrimSpace(inviterUserID)
	if trimmed == "" {
		return nil, fmt.Errorf("get invites by inviter: inviter user id must not be empty")
	}

	return store.loadInvitesBySet(ctx,
		"get invites by inviter",
		store.keys.InvitesByInviter(trimmed),
	)
}

// loadInvitesBySet materializes invites whose ids are stored in setKey.
// Stale set members (primary key removed out-of-band) are dropped silently.
func (store *InviteStore) loadInvitesBySet(ctx context.Context, operation, setKey string) ([]invite.Invite, error) {
	members, err := store.client.SMembers(ctx, setKey).Result()
	if err != nil {
		return nil, fmt.Errorf("%s: %w", operation, err)
	}
	if len(members) == 0 {
		return nil, nil
	}

	primaryKeys := make([]string, len(members))
	for index, member := range members {
		primaryKeys[index] = store.keys.Invite(common.InviteID(member))
	}

	payloads, err := store.client.MGet(ctx, primaryKeys...).Result()
	if err != nil {
		return nil, fmt.Errorf("%s: %w", operation, err)
	}

	records := make([]invite.Invite, 0, len(payloads))
	for _, entry := range payloads {
		if entry == nil {
			continue
		}
		raw, ok := entry.(string)
		if !ok {
			return nil, fmt.Errorf("%s: unexpected payload type %T", operation, entry)
		}
		record, err := UnmarshalInvite([]byte(raw))
		if err != nil {
			return nil, fmt.Errorf("%s: %w", operation, err)
		}
		records = append(records, record)
	}

	return records, nil
}

// UpdateStatus applies one status transition in a compare-and-swap fashion.
func (store *InviteStore) UpdateStatus(ctx context.Context, input ports.UpdateInviteStatusInput) error {
	if store == nil || store.client == nil {
		return errors.New("update invite status: nil store")
	}
	if ctx == nil {
		return errors.New("update invite status: nil context")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update invite status: %w", err)
	}

	if err := invite.Transition(input.ExpectedFrom, input.To); err != nil {
		return err
	}

	primaryKey := store.keys.Invite(input.InviteID)
	at := input.At.UTC()

	watchErr := store.client.Watch(ctx, func(tx *redis.Tx) error {
		payload, getErr := tx.Get(ctx, primaryKey).Bytes()
		switch {
		case errors.Is(getErr, redis.Nil):
			return invite.ErrNotFound
		case getErr != nil:
			return fmt.Errorf("update invite status: %w", getErr)
		}

		existing, err := UnmarshalInvite(payload)
		if err != nil {
			return fmt.Errorf("update invite status: %w", err)
		}
		if existing.Status != input.ExpectedFrom {
			return fmt.Errorf("update invite status: %w", invite.ErrConflict)
		}

		existing.Status = input.To
		decidedAt := at
		existing.DecidedAt = &decidedAt
		if input.To == invite.StatusRedeemed {
			existing.RaceName = strings.TrimSpace(input.RaceName)
		}

		encoded, err := MarshalInvite(existing)
		if err != nil {
			return fmt.Errorf("update invite status: %w", err)
		}

		_, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
			pipe.Set(ctx, primaryKey, encoded, InviteRecordTTL)
			return nil
		})
		return err
	}, primaryKey)

	switch {
	case errors.Is(watchErr, redis.TxFailedErr):
		return fmt.Errorf("update invite status: %w", invite.ErrConflict)
	case watchErr != nil:
		return watchErr
	default:
		return nil
	}
}

// Ensure InviteStore satisfies the ports.InviteStore interface at
// compile time.
var _ ports.InviteStore = (*InviteStore)(nil)
@@ -1,363 +0,0 @@
package redisstate_test

import (
	"context"
	"errors"
	"sort"
	"testing"
	"time"

	"galaxy/lobby/internal/adapters/redisstate"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/invite"
	"galaxy/lobby/internal/ports"

	"github.com/alicebob/miniredis/v2"
	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func newInviteTestStore(t *testing.T) (*redisstate.InviteStore, *miniredis.Miniredis, *redis.Client) {
	t.Helper()

	server := miniredis.RunT(t)
	client := redis.NewClient(&redis.Options{Addr: server.Addr()})
	t.Cleanup(func() {
		_ = client.Close()
	})

	store, err := redisstate.NewInviteStore(client)
	require.NoError(t, err)

	return store, server, client
}

func fixtureInvite(t *testing.T, id common.InviteID, inviter, invitee string, gameID common.GameID) invite.Invite {
	t.Helper()

	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	record, err := invite.New(invite.NewInviteInput{
		InviteID:      id,
		GameID:        gameID,
		InviterUserID: inviter,
		InviteeUserID: invitee,
		Now:           now,
		ExpiresAt:     now.Add(7 * 24 * time.Hour),
	})
	require.NoError(t, err)
	return record
}

func TestNewInviteStoreRejectsNilClient(t *testing.T) {
	_, err := redisstate.NewInviteStore(nil)
	require.Error(t, err)
}

func TestInviteStoreSaveAndGet(t *testing.T) {
	ctx := context.Background()
	store, _, client := newInviteTestStore(t)

	record := fixtureInvite(t, "invite-a", "user-owner", "user-guest", "game-1")
	require.NoError(t, store.Save(ctx, record))

	got, err := store.Get(ctx, record.InviteID)
	require.NoError(t, err)
	assert.Equal(t, record.InviteID, got.InviteID)
	assert.Equal(t, record.InviteeUserID, got.InviteeUserID)
	assert.Equal(t, invite.StatusCreated, got.Status)
	assert.Equal(t, "", got.RaceName)
	assert.Nil(t, got.DecidedAt)
	assert.True(t, got.ExpiresAt.Equal(record.ExpiresAt))

	byGame, err := client.SMembers(ctx, "lobby:game_invites:"+base64URL(record.GameID.String())).Result()
	require.NoError(t, err)
	assert.ElementsMatch(t, []string{record.InviteID.String()}, byGame)

	byUser, err := client.SMembers(ctx, "lobby:user_invites:"+base64URL(record.InviteeUserID)).Result()
	require.NoError(t, err)
	assert.ElementsMatch(t, []string{record.InviteID.String()}, byUser)
}

func TestInviteStoreGetReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newInviteTestStore(t)

	_, err := store.Get(ctx, common.InviteID("invite-missing"))
	require.ErrorIs(t, err, invite.ErrNotFound)
}

func TestInviteStoreSaveRejectsDuplicate(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newInviteTestStore(t)

	record := fixtureInvite(t, "invite-a", "user-owner", "user-guest", "game-1")
	require.NoError(t, store.Save(ctx, record))

	err := store.Save(ctx, record)
	require.Error(t, err)
	assert.True(t, errors.Is(err, invite.ErrConflict))
}

func TestInviteStoreSaveRejectsNonCreated(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newInviteTestStore(t)

	record := fixtureInvite(t, "invite-a", "user-owner", "user-guest", "game-1")
	record.Status = invite.StatusRevoked
	decidedAt := record.CreatedAt.Add(time.Minute)
	record.DecidedAt = &decidedAt

	err := store.Save(ctx, record)
	require.Error(t, err)
	assert.False(t, errors.Is(err, invite.ErrConflict))
}

func TestInviteStoreUpdateStatusRedeemSetsRaceName(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newInviteTestStore(t)

	record := fixtureInvite(t, "invite-a", "user-owner", "user-guest", "game-1")
	require.NoError(t, store.Save(ctx, record))

	at := record.CreatedAt.Add(time.Hour)
	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
		InviteID:     record.InviteID,
		ExpectedFrom: invite.StatusCreated,
		To:           invite.StatusRedeemed,
		At:           at,
		RaceName:     "Lunar Raider",
	}))

	got, err := store.Get(ctx, record.InviteID)
	require.NoError(t, err)
	assert.Equal(t, invite.StatusRedeemed, got.Status)
	assert.Equal(t, "Lunar Raider", got.RaceName)
	require.NotNil(t, got.DecidedAt)
	assert.True(t, got.DecidedAt.Equal(at.UTC()))
}

func TestInviteStoreUpdateStatusTerminalTransitions(t *testing.T) {
	cases := []struct {
		name   string
		target invite.Status
	}{
		{"declined", invite.StatusDeclined},
		{"revoked", invite.StatusRevoked},
		{"expired", invite.StatusExpired},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			ctx := context.Background()
			store, _, _ := newInviteTestStore(t)

			record := fixtureInvite(t, common.InviteID("invite-"+tc.name), "user-owner", "user-guest", "game-1")
			require.NoError(t, store.Save(ctx, record))

			at := record.CreatedAt.Add(30 * time.Minute)
			require.NoError(t, store.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
				InviteID:     record.InviteID,
				ExpectedFrom: invite.StatusCreated,
				To:           tc.target,
				At:           at,
			}))

			got, err := store.Get(ctx, record.InviteID)
			require.NoError(t, err)
			assert.Equal(t, tc.target, got.Status)
			assert.Equal(t, "", got.RaceName)
			require.NotNil(t, got.DecidedAt)
			assert.True(t, got.DecidedAt.Equal(at.UTC()))
		})
	}
}

func TestInviteStoreUpdateStatusRejectsRedeemWithoutRaceName(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newInviteTestStore(t)

	record := fixtureInvite(t, "invite-a", "user-owner", "user-guest", "game-1")
	require.NoError(t, store.Save(ctx, record))

	err := store.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
		InviteID:     record.InviteID,
		ExpectedFrom: invite.StatusCreated,
		To:           invite.StatusRedeemed,
		At:           record.CreatedAt.Add(time.Minute),
	})
	require.Error(t, err)
	assert.False(t, errors.Is(err, invite.ErrInvalidTransition))
}

func TestInviteStoreUpdateStatusRejectsRaceNameOnNonRedeem(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newInviteTestStore(t)

	record := fixtureInvite(t, "invite-a", "user-owner", "user-guest", "game-1")
	require.NoError(t, store.Save(ctx, record))

	err := store.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
		InviteID:     record.InviteID,
		ExpectedFrom: invite.StatusCreated,
		To:           invite.StatusDeclined,
		At:           record.CreatedAt.Add(time.Minute),
		RaceName:     "Nope",
	})
	require.Error(t, err)
	assert.False(t, errors.Is(err, invite.ErrInvalidTransition))
}

func TestInviteStoreUpdateStatusRejectsInvalidTransitionWithoutMutation(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newInviteTestStore(t)

	record := fixtureInvite(t, "invite-a", "user-owner", "user-guest", "game-1")
	require.NoError(t, store.Save(ctx, record))

	err := store.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
		InviteID:     record.InviteID,
		ExpectedFrom: invite.StatusRedeemed,
		To:           invite.StatusExpired,
		At:           record.CreatedAt.Add(time.Minute),
	})
	require.Error(t, err)
	assert.True(t, errors.Is(err, invite.ErrInvalidTransition))
}

func TestInviteStoreUpdateStatusReturnsConflictOnExpectedFromMismatch(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newInviteTestStore(t)

	record := fixtureInvite(t, "invite-a", "user-owner", "user-guest", "game-1")
	require.NoError(t, store.Save(ctx, record))

	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
|
||||
InviteID: record.InviteID,
|
||||
ExpectedFrom: invite.StatusCreated,
|
||||
To: invite.StatusRevoked,
|
||||
At: record.CreatedAt.Add(time.Minute),
|
||||
}))
|
||||
|
||||
err := store.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
|
||||
InviteID: record.InviteID,
|
||||
ExpectedFrom: invite.StatusCreated,
|
||||
To: invite.StatusDeclined,
|
||||
At: record.CreatedAt.Add(2 * time.Minute),
|
||||
})
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, invite.ErrConflict))
|
||||
}
|
||||
|
||||
func TestInviteStoreUpdateStatusReturnsNotFoundForMissingRecord(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store, _, _ := newInviteTestStore(t)
|
||||
|
||||
err := store.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
|
||||
InviteID: common.InviteID("invite-missing"),
|
||||
ExpectedFrom: invite.StatusCreated,
|
||||
To: invite.StatusDeclined,
|
||||
At: time.Now().UTC(),
|
||||
})
|
||||
require.ErrorIs(t, err, invite.ErrNotFound)
|
||||
}
|
||||
|
||||
func TestInviteStoreGetByGameAndByUser(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store, _, _ := newInviteTestStore(t)
|
||||
|
||||
i1 := fixtureInvite(t, "invite-a1", "user-owner", "user-1", "game-1")
|
||||
i2 := fixtureInvite(t, "invite-a2", "user-owner", "user-2", "game-1")
|
||||
i3 := fixtureInvite(t, "invite-a3", "user-owner", "user-1", "game-2")
|
||||
|
||||
for _, record := range []invite.Invite{i1, i2, i3} {
|
||||
require.NoError(t, store.Save(ctx, record))
|
||||
}
|
||||
|
||||
byGame1, err := store.GetByGame(ctx, "game-1")
|
||||
require.NoError(t, err)
|
||||
require.Len(t, byGame1, 2)
|
||||
|
||||
byUser1, err := store.GetByUser(ctx, "user-1")
|
||||
require.NoError(t, err)
|
||||
require.Len(t, byUser1, 2)
|
||||
|
||||
ids := collectInviteIDs(byUser1)
|
||||
sort.Strings(ids)
|
||||
assert.Equal(t, []string{"invite-a1", "invite-a3"}, ids)
|
||||
|
||||
byGameMissing, err := store.GetByGame(ctx, "game-missing")
|
||||
require.NoError(t, err)
|
||||
assert.Empty(t, byGameMissing)
|
||||
}
|
||||
|
||||
func TestInviteStoreGetByInviter(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store, _, _ := newInviteTestStore(t)
|
||||
|
||||
i1 := fixtureInvite(t, "invite-i1", "user-owner-a", "user-guest-1", "game-1")
|
||||
i2 := fixtureInvite(t, "invite-i2", "user-owner-a", "user-guest-2", "game-2")
|
||||
i3 := fixtureInvite(t, "invite-i3", "user-owner-b", "user-guest-1", "game-3")
|
||||
|
||||
for _, record := range []invite.Invite{i1, i2, i3} {
|
||||
require.NoError(t, store.Save(ctx, record))
|
||||
}
|
||||
|
||||
byInviterA, err := store.GetByInviter(ctx, "user-owner-a")
|
||||
require.NoError(t, err)
|
||||
require.Len(t, byInviterA, 2)
|
||||
idsA := collectInviteIDs(byInviterA)
|
||||
sort.Strings(idsA)
|
||||
assert.Equal(t, []string{"invite-i1", "invite-i2"}, idsA)
|
||||
|
||||
byInviterB, err := store.GetByInviter(ctx, "user-owner-b")
|
||||
require.NoError(t, err)
|
||||
require.Len(t, byInviterB, 1)
|
||||
assert.Equal(t, "invite-i3", byInviterB[0].InviteID.String())
|
||||
|
||||
byInviterMissing, err := store.GetByInviter(ctx, "user-owner-none")
|
||||
require.NoError(t, err)
|
||||
assert.Empty(t, byInviterMissing)
|
||||
}
|
||||
|
||||
func TestInviteStoreGetByInviterRetainsAfterStatusChange(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store, _, _ := newInviteTestStore(t)
|
||||
|
||||
record := fixtureInvite(t, "invite-i", "user-owner-a", "user-guest", "game-1")
|
||||
require.NoError(t, store.Save(ctx, record))
|
||||
|
||||
require.NoError(t, store.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
|
||||
InviteID: record.InviteID,
|
||||
ExpectedFrom: invite.StatusCreated,
|
||||
To: invite.StatusRevoked,
|
||||
At: record.CreatedAt.Add(time.Minute),
|
||||
}))
|
||||
|
||||
matches, err := store.GetByInviter(ctx, "user-owner-a")
|
||||
require.NoError(t, err)
|
||||
require.Len(t, matches, 1)
|
||||
assert.Equal(t, invite.StatusRevoked, matches[0].Status)
|
||||
}
|
||||
|
||||
func TestInviteStoreGetByGameDropsStaleIndexEntries(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store, server, _ := newInviteTestStore(t)
|
||||
|
||||
record := fixtureInvite(t, "invite-a", "user-owner", "user-guest", "game-1")
|
||||
require.NoError(t, store.Save(ctx, record))
|
||||
|
||||
server.Del("lobby:invites:" + base64URL(record.InviteID.String()))
|
||||
|
||||
records, err := store.GetByGame(ctx, record.GameID)
|
||||
require.NoError(t, err)
|
||||
assert.Empty(t, records)
|
||||
}
|
||||
|
||||
func collectInviteIDs(records []invite.Invite) []string {
|
||||
ids := make([]string, len(records))
|
||||
for index, record := range records {
|
||||
ids[index] = record.InviteID.String()
|
||||
}
|
||||
return ids
|
||||
}
|
||||
@@ -2,178 +2,25 @@ package redisstate

import (
	"encoding/base64"
	"time"

	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/game"
	"galaxy/lobby/internal/domain/racename"
)

// defaultPrefix is the mandatory `lobby:` namespace prefix shared by every
// Game Lobby Redis key.
const defaultPrefix = "lobby:"

// GameRecordTTL is the Redis retention applied to game records. The
// value is zero (no expiry); a future stage will revisit this
// choice when the platform locks in archival/GDPR policy.
const GameRecordTTL time.Duration = 0

// ApplicationRecordTTL is the Redis retention applied to application
// records. The value is zero (no expiry) to match game records; the
// archival policy will be revisited when the platform locks it in.
const ApplicationRecordTTL time.Duration = 0

// InviteRecordTTL is the Redis retention applied to invite records.
// The value is zero (no expiry); the `expires_at` field is a business
// deadline enforced by the service layer, not a Redis TTL.
const InviteRecordTTL time.Duration = 0

// MembershipRecordTTL is the Redis retention applied to membership
// records. The value is zero (no expiry) to match the other
// participant entities.
const MembershipRecordTTL time.Duration = 0

// Keyspace builds the Game Lobby Redis keys that survive the PG_PLAN.md
// §6A and §6B migrations: per-game ephemeral runtime aggregates,
// capability-evaluation guards, gap activation timestamps, and stream
// consumer offsets. The four core enrollment entities (game, application,
// invite, membership) and the Race Name Directory live in PostgreSQL —
// their previous keyspace methods are gone.
//
// All dynamic key segments are encoded with base64url so raw key structure
// does not depend on user-provided or caller-provided characters.
type Keyspace struct{}

// Game returns the primary Redis key for one game record.
func (Keyspace) Game(gameID common.GameID) string {
	return defaultPrefix + "games:" + encodeKeyComponent(gameID.String())
}

// GamesByStatus returns the sorted-set key that stores game identifiers
// indexed by their current status.
func (Keyspace) GamesByStatus(status game.Status) string {
	return defaultPrefix + "games_by_status:" + encodeKeyComponent(string(status))
}

// GamesByOwner returns the set key that stores game identifiers owned
// by one user. The set is maintained for private games whose
// OwnerUserID is non-empty (public games are admin-owned and carry an
// empty OwnerUserID, so they never enter the index).
func (Keyspace) GamesByOwner(userID string) string {
	return defaultPrefix + "games_by_owner:" + encodeKeyComponent(userID)
}

// Application returns the primary Redis key for one application record.
func (Keyspace) Application(applicationID common.ApplicationID) string {
	return defaultPrefix + "applications:" + encodeKeyComponent(applicationID.String())
}

// ApplicationsByGame returns the set key that stores application
// identifiers attached to one game.
func (Keyspace) ApplicationsByGame(gameID common.GameID) string {
	return defaultPrefix + "game_applications:" + encodeKeyComponent(gameID.String())
}

// ApplicationsByUser returns the set key that stores application
// identifiers submitted by one applicant.
func (Keyspace) ApplicationsByUser(applicantUserID string) string {
	return defaultPrefix + "user_applications:" + encodeKeyComponent(applicantUserID)
}

// UserGameApplication returns the lookup key that stores the single
// non-rejected application identifier for one (user, game) pair. Presence
// of this key blocks a second submitted/approved application for the
// same user and game.
func (Keyspace) UserGameApplication(applicantUserID string, gameID common.GameID) string {
	return defaultPrefix + "user_game_application:" +
		encodeKeyComponent(applicantUserID) + ":" +
		encodeKeyComponent(gameID.String())
}

// Invite returns the primary Redis key for one invite record.
func (Keyspace) Invite(inviteID common.InviteID) string {
	return defaultPrefix + "invites:" + encodeKeyComponent(inviteID.String())
}

// InvitesByGame returns the set key that stores invite identifiers
// attached to one game.
func (Keyspace) InvitesByGame(gameID common.GameID) string {
	return defaultPrefix + "game_invites:" + encodeKeyComponent(gameID.String())
}

// InvitesByUser returns the set key that stores invite identifiers
// addressed to one invitee.
func (Keyspace) InvitesByUser(inviteeUserID string) string {
	return defaultPrefix + "user_invites:" + encodeKeyComponent(inviteeUserID)
}

// InvitesByInviter returns the set key that stores invite identifiers
// created by one inviter (private-game owner). The set retains
// invite_ids regardless of subsequent status transitions; callers
// filter by status when needed.
func (Keyspace) InvitesByInviter(inviterUserID string) string {
	return defaultPrefix + "user_inviter_invites:" + encodeKeyComponent(inviterUserID)
}

// Membership returns the primary Redis key for one membership record.
func (Keyspace) Membership(membershipID common.MembershipID) string {
	return defaultPrefix + "memberships:" + encodeKeyComponent(membershipID.String())
}

// MembershipsByGame returns the set key that stores membership
// identifiers attached to one game.
func (Keyspace) MembershipsByGame(gameID common.GameID) string {
	return defaultPrefix + "game_memberships:" + encodeKeyComponent(gameID.String())
}

// MembershipsByUser returns the set key that stores membership
// identifiers held by one user.
func (Keyspace) MembershipsByUser(userID string) string {
	return defaultPrefix + "user_memberships:" + encodeKeyComponent(userID)
}

// RegisteredRaceName returns the Redis key that stores the registered
// race name bound to canonical.
func (Keyspace) RegisteredRaceName(canonical racename.CanonicalKey) string {
	return defaultPrefix + "race_names:registered:" + encodeKeyComponent(canonical.String())
}

// UserRegisteredRaceNames returns the set key that stores canonical keys
// of every registered race name owned by userID.
func (Keyspace) UserRegisteredRaceNames(userID string) string {
	return defaultPrefix + "race_names:user_registered:" + encodeKeyComponent(userID)
}

// RaceNameReservation returns the Redis key that stores the per-game race
// name reservation bound to (gameID, canonical).
func (Keyspace) RaceNameReservation(gameID common.GameID, canonical racename.CanonicalKey) string {
	return defaultPrefix + "race_names:reservations:" +
		encodeKeyComponent(gameID.String()) + ":" +
		encodeKeyComponent(canonical.String())
}

// UserRaceNameReservations returns the set key that stores
// `<encodedGameID>:<encodedCanonical>` tuples of every active reservation
// (including pending_registration) owned by userID.
func (Keyspace) UserRaceNameReservations(userID string) string {
	return defaultPrefix + "race_names:user_reservations:" + encodeKeyComponent(userID)
}

// RaceNameCanonicalLookup returns the Redis key that stores the eager
// canonical-lookup cache entry for canonical. The cache surfaces the
// strongest existing binding (registered > pending_registration >
// reservation) so Check remains an O(1) read.
func (Keyspace) RaceNameCanonicalLookup(canonical racename.CanonicalKey) string {
	return defaultPrefix + "race_names:canonical_lookup:" + encodeKeyComponent(canonical.String())
}

// PendingRaceNameIndex returns the singleton sorted-set key that indexes
// pending registrations by eligible_until_ms for the expiration worker.
func (Keyspace) PendingRaceNameIndex() string {
	return defaultPrefix + "race_names:pending_index"
}

// RaceNameReservationMember returns the canonical member representation
// stored inside UserRaceNameReservations and PendingRaceNameIndex for
// (gameID, canonical).
func (Keyspace) RaceNameReservationMember(gameID common.GameID, canonical racename.CanonicalKey) string {
	return encodeKeyComponent(gameID.String()) + ":" + encodeKeyComponent(canonical.String())
}

// GapActivatedAt returns the Redis key that stores the gap-window
// activation timestamp for one game.
func (Keyspace) GapActivatedAt(gameID common.GameID) string {
@@ -216,12 +63,6 @@ func (Keyspace) CapabilityEvaluationGuard(gameID common.GameID) string {
		encodeKeyComponent(gameID.String())
}

// CreatedAtScore returns the frozen sorted-set score representation for
// game creation timestamps stored in the status index.
func CreatedAtScore(createdAt time.Time) float64 {
	return float64(createdAt.UTC().UnixMilli())
}

func encodeKeyComponent(value string) string {
	return base64.RawURLEncoding.EncodeToString([]byte(value))
}
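As a sanity check of the key scheme, the snippet below inlines the same `base64.RawURLEncoding` call as the unexported `encodeKeyComponent` helper (a sketch, not the package itself) and shows why encoding matters: a `:` inside a caller-provided identifier can never split the key structure.

```go
package main

import (
	"encoding/base64"
	"fmt"
	"time"
)

// encodeKeyComponent mirrors the unexported helper: raw (unpadded)
// base64url over the identifier bytes.
func encodeKeyComponent(value string) string {
	return base64.RawURLEncoding.EncodeToString([]byte(value))
}

func main() {
	// "game-1" becomes an opaque token, so separators in user input
	// cannot collide with the `lobby:<entity>:` key layout.
	fmt.Println("lobby:games:" + encodeKeyComponent("game-1")) // lobby:games:Z2FtZS0x

	// Even a hostile id containing ':' stays a single key segment.
	fmt.Println("lobby:games:" + encodeKeyComponent("game:1:evil"))

	// Sorted-set scores are UTC milliseconds, as in CreatedAtScore.
	createdAt := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	fmt.Println(float64(createdAt.UTC().UnixMilli()))
}
```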
@@ -0,0 +1,10 @@
package redisstate_test

import "encoding/base64"

// base64URL is the test helper that mirrors the redisstate package's
// unexported encodeKeyComponent function. Per-store tests use it to assert
// the exact Redis key shape the adapter writes.
func base64URL(value string) string {
	return base64.RawURLEncoding.EncodeToString([]byte(value))
}
@@ -1,317 +0,0 @@
package redisstate

import (
	"context"
	"errors"
	"fmt"
	"strings"

	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/membership"
	"galaxy/lobby/internal/ports"

	"github.com/redis/go-redis/v9"
)

// MembershipStore provides Redis-backed durable storage for membership
// records.
type MembershipStore struct {
	client *redis.Client
	keys   Keyspace
}

// NewMembershipStore constructs one Redis-backed membership store. It
// returns an error when client is nil.
func NewMembershipStore(client *redis.Client) (*MembershipStore, error) {
	if client == nil {
		return nil, errors.New("new membership store: nil redis client")
	}

	return &MembershipStore{
		client: client,
		keys:   Keyspace{},
	}, nil
}

// Save persists a new active membership record. Save is create-only; a
// second save against the same membership id returns
// membership.ErrConflict.
func (store *MembershipStore) Save(ctx context.Context, record membership.Membership) error {
	if store == nil || store.client == nil {
		return errors.New("save membership: nil store")
	}
	if ctx == nil {
		return errors.New("save membership: nil context")
	}
	if err := record.Validate(); err != nil {
		return fmt.Errorf("save membership: %w", err)
	}
	if record.Status != membership.StatusActive {
		return fmt.Errorf(
			"save membership: status must be %q, got %q",
			membership.StatusActive, record.Status,
		)
	}

	payload, err := MarshalMembership(record)
	if err != nil {
		return fmt.Errorf("save membership: %w", err)
	}

	primaryKey := store.keys.Membership(record.MembershipID)
	gameIndexKey := store.keys.MembershipsByGame(record.GameID)
	userIndexKey := store.keys.MembershipsByUser(record.UserID)
	member := record.MembershipID.String()

	watchErr := store.client.Watch(ctx, func(tx *redis.Tx) error {
		existing, getErr := tx.Exists(ctx, primaryKey).Result()
		if getErr != nil {
			return fmt.Errorf("save membership: %w", getErr)
		}
		if existing != 0 {
			return fmt.Errorf("save membership: %w", membership.ErrConflict)
		}

		_, err := tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
			pipe.Set(ctx, primaryKey, payload, MembershipRecordTTL)
			pipe.SAdd(ctx, gameIndexKey, member)
			pipe.SAdd(ctx, userIndexKey, member)
			return nil
		})
		return err
	}, primaryKey)

	switch {
	case errors.Is(watchErr, redis.TxFailedErr):
		return fmt.Errorf("save membership: %w", membership.ErrConflict)
	case watchErr != nil:
		return watchErr
	default:
		return nil
	}
}

// Get returns the record identified by membershipID.
func (store *MembershipStore) Get(ctx context.Context, membershipID common.MembershipID) (membership.Membership, error) {
	if store == nil || store.client == nil {
		return membership.Membership{}, errors.New("get membership: nil store")
	}
	if ctx == nil {
		return membership.Membership{}, errors.New("get membership: nil context")
	}
	if err := membershipID.Validate(); err != nil {
		return membership.Membership{}, fmt.Errorf("get membership: %w", err)
	}

	payload, err := store.client.Get(ctx, store.keys.Membership(membershipID)).Bytes()
	switch {
	case errors.Is(err, redis.Nil):
		return membership.Membership{}, membership.ErrNotFound
	case err != nil:
		return membership.Membership{}, fmt.Errorf("get membership: %w", err)
	}

	record, err := UnmarshalMembership(payload)
	if err != nil {
		return membership.Membership{}, fmt.Errorf("get membership: %w", err)
	}
	return record, nil
}

// GetByGame returns every membership attached to gameID.
func (store *MembershipStore) GetByGame(ctx context.Context, gameID common.GameID) ([]membership.Membership, error) {
	if store == nil || store.client == nil {
		return nil, errors.New("get memberships by game: nil store")
	}
	if ctx == nil {
		return nil, errors.New("get memberships by game: nil context")
	}
	if err := gameID.Validate(); err != nil {
		return nil, fmt.Errorf("get memberships by game: %w", err)
	}

	return store.loadMembershipsBySet(ctx,
		"get memberships by game",
		store.keys.MembershipsByGame(gameID),
	)
}

// GetByUser returns every membership held by userID.
func (store *MembershipStore) GetByUser(ctx context.Context, userID string) ([]membership.Membership, error) {
	if store == nil || store.client == nil {
		return nil, errors.New("get memberships by user: nil store")
	}
	if ctx == nil {
		return nil, errors.New("get memberships by user: nil context")
	}
	trimmed := strings.TrimSpace(userID)
	if trimmed == "" {
		return nil, fmt.Errorf("get memberships by user: user id must not be empty")
	}

	return store.loadMembershipsBySet(ctx,
		"get memberships by user",
		store.keys.MembershipsByUser(trimmed),
	)
}

// loadMembershipsBySet materializes memberships whose ids are stored in
// setKey. Stale set members are dropped silently.
func (store *MembershipStore) loadMembershipsBySet(ctx context.Context, operation, setKey string) ([]membership.Membership, error) {
	members, err := store.client.SMembers(ctx, setKey).Result()
	if err != nil {
		return nil, fmt.Errorf("%s: %w", operation, err)
	}
	if len(members) == 0 {
		return nil, nil
	}

	primaryKeys := make([]string, len(members))
	for index, member := range members {
		primaryKeys[index] = store.keys.Membership(common.MembershipID(member))
	}

	payloads, err := store.client.MGet(ctx, primaryKeys...).Result()
	if err != nil {
		return nil, fmt.Errorf("%s: %w", operation, err)
	}

	records := make([]membership.Membership, 0, len(payloads))
	for _, entry := range payloads {
		if entry == nil {
			continue
		}
		raw, ok := entry.(string)
		if !ok {
			return nil, fmt.Errorf("%s: unexpected payload type %T", operation, entry)
		}
		record, err := UnmarshalMembership([]byte(raw))
		if err != nil {
			return nil, fmt.Errorf("%s: %w", operation, err)
		}
		records = append(records, record)
	}

	return records, nil
}

// UpdateStatus applies one status transition in a compare-and-swap fashion.
func (store *MembershipStore) UpdateStatus(ctx context.Context, input ports.UpdateMembershipStatusInput) error {
	if store == nil || store.client == nil {
		return errors.New("update membership status: nil store")
	}
	if ctx == nil {
		return errors.New("update membership status: nil context")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update membership status: %w", err)
	}

	if err := membership.Transition(input.ExpectedFrom, input.To); err != nil {
		return err
	}

	primaryKey := store.keys.Membership(input.MembershipID)
	at := input.At.UTC()

	watchErr := store.client.Watch(ctx, func(tx *redis.Tx) error {
		payload, getErr := tx.Get(ctx, primaryKey).Bytes()
		switch {
		case errors.Is(getErr, redis.Nil):
			return membership.ErrNotFound
		case getErr != nil:
			return fmt.Errorf("update membership status: %w", getErr)
		}

		existing, err := UnmarshalMembership(payload)
		if err != nil {
			return fmt.Errorf("update membership status: %w", err)
		}
		if existing.Status != input.ExpectedFrom {
			return fmt.Errorf("update membership status: %w", membership.ErrConflict)
		}

		existing.Status = input.To
		removedAt := at
		existing.RemovedAt = &removedAt

		encoded, err := MarshalMembership(existing)
		if err != nil {
			return fmt.Errorf("update membership status: %w", err)
		}

		_, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
			pipe.Set(ctx, primaryKey, encoded, MembershipRecordTTL)
			return nil
		})
		return err
	}, primaryKey)

	switch {
	case errors.Is(watchErr, redis.TxFailedErr):
		return fmt.Errorf("update membership status: %w", membership.ErrConflict)
	case watchErr != nil:
		return watchErr
	default:
		return nil
	}
}

// Delete removes the membership record identified by membershipID from
// the primary store and from the per-game and per-user index sets in
// one transaction. It returns membership.ErrNotFound when no record
// exists for the id and membership.ErrConflict when a concurrent
// mutation invalidates the watched key.
func (store *MembershipStore) Delete(ctx context.Context, membershipID common.MembershipID) error {
	if store == nil || store.client == nil {
		return errors.New("delete membership: nil store")
	}
	if ctx == nil {
		return errors.New("delete membership: nil context")
	}
	if err := membershipID.Validate(); err != nil {
		return fmt.Errorf("delete membership: %w", err)
	}

	primaryKey := store.keys.Membership(membershipID)
	member := membershipID.String()

	watchErr := store.client.Watch(ctx, func(tx *redis.Tx) error {
		payload, getErr := tx.Get(ctx, primaryKey).Bytes()
		switch {
		case errors.Is(getErr, redis.Nil):
			return membership.ErrNotFound
		case getErr != nil:
			return fmt.Errorf("delete membership: %w", getErr)
		}

		existing, err := UnmarshalMembership(payload)
		if err != nil {
			return fmt.Errorf("delete membership: %w", err)
		}

		gameIndexKey := store.keys.MembershipsByGame(existing.GameID)
		userIndexKey := store.keys.MembershipsByUser(existing.UserID)

		_, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
			pipe.Del(ctx, primaryKey)
			pipe.SRem(ctx, gameIndexKey, member)
			pipe.SRem(ctx, userIndexKey, member)
			return nil
		})
		return err
	}, primaryKey)

	switch {
	case errors.Is(watchErr, redis.TxFailedErr):
		return fmt.Errorf("delete membership: %w", membership.ErrConflict)
	case watchErr != nil:
		return watchErr
	default:
		return nil
	}
}

// Ensure MembershipStore satisfies the ports.MembershipStore interface at
// compile time.
var _ ports.MembershipStore = (*MembershipStore)(nil)
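The `loadMembershipsBySet` path above tolerates stale indexes: `MGET` returns nil for ids whose primary record has been deleted, and those entries are skipped rather than treated as errors. An in-memory sketch of that filtering (hypothetical map-backed store, not the real Redis client):

```go
package main

import "fmt"

// loadBySet sketches loadMembershipsBySet: resolve every id from the index
// set against the primary store and silently skip ids whose record is gone
// (the equivalent of a nil entry in an MGET reply).
func loadBySet(primary map[string]string, indexMembers []string) []string {
	records := make([]string, 0, len(indexMembers))
	for _, id := range indexMembers {
		payload, ok := primary[id]
		if !ok {
			continue // stale index entry: dropped silently
		}
		records = append(records, payload)
	}
	return records
}

func main() {
	primary := map[string]string{"m1": `{"id":"m1"}`}
	index := []string{"m1", "m2"} // m2 is stale: its primary record was deleted

	fmt.Println(loadBySet(primary, index)) // only m1's payload survives
}
```

This is why the test above can delete a primary key directly on the miniredis server and still expect an empty, error-free result from the by-game lookup.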
@@ -1,299 +0,0 @@
package redisstate_test

import (
	"context"
	"errors"
	"sort"
	"strings"
	"testing"
	"time"

	"galaxy/lobby/internal/adapters/redisstate"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/membership"
	"galaxy/lobby/internal/ports"

	"github.com/alicebob/miniredis/v2"
	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func newMembershipTestStore(t *testing.T) (*redisstate.MembershipStore, *miniredis.Miniredis, *redis.Client) {
	t.Helper()

	server := miniredis.RunT(t)
	client := redis.NewClient(&redis.Options{Addr: server.Addr()})
	t.Cleanup(func() {
		_ = client.Close()
	})

	store, err := redisstate.NewMembershipStore(client)
	require.NoError(t, err)

	return store, server, client
}

func fixtureMembership(t *testing.T, id common.MembershipID, userID, raceName string, gameID common.GameID) membership.Membership {
	t.Helper()

	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	record, err := membership.New(membership.NewMembershipInput{
		MembershipID: id,
		GameID:       gameID,
		UserID:       userID,
		RaceName:     raceName,
		CanonicalKey: strings.ToLower(strings.ReplaceAll(raceName, " ", "")),
		Now:          now,
	})
	require.NoError(t, err)
	return record
}

func TestNewMembershipStoreRejectsNilClient(t *testing.T) {
	_, err := redisstate.NewMembershipStore(nil)
	require.Error(t, err)
}

func TestMembershipStoreSaveAndGet(t *testing.T) {
	ctx := context.Background()
	store, _, client := newMembershipTestStore(t)

	record := fixtureMembership(t, "membership-a", "user-1", "Solar Pilot", "game-1")
	require.NoError(t, store.Save(ctx, record))

	got, err := store.Get(ctx, record.MembershipID)
	require.NoError(t, err)
	assert.Equal(t, record.MembershipID, got.MembershipID)
	assert.Equal(t, "Solar Pilot", got.RaceName)
	assert.Equal(t, membership.StatusActive, got.Status)
	assert.Nil(t, got.RemovedAt)

	byGame, err := client.SMembers(ctx, "lobby:game_memberships:"+base64URL(record.GameID.String())).Result()
	require.NoError(t, err)
	assert.ElementsMatch(t, []string{record.MembershipID.String()}, byGame)

	byUser, err := client.SMembers(ctx, "lobby:user_memberships:"+base64URL(record.UserID)).Result()
	require.NoError(t, err)
	assert.ElementsMatch(t, []string{record.MembershipID.String()}, byUser)
}

func TestMembershipStoreGetReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newMembershipTestStore(t)

	_, err := store.Get(ctx, common.MembershipID("membership-missing"))
	require.ErrorIs(t, err, membership.ErrNotFound)
}

func TestMembershipStoreSaveRejectsNonActive(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newMembershipTestStore(t)

	record := fixtureMembership(t, "membership-a", "user-1", "Solar Pilot", "game-1")
	record.Status = membership.StatusRemoved
	removedAt := record.JoinedAt.Add(time.Hour)
	record.RemovedAt = &removedAt

	err := store.Save(ctx, record)
	require.Error(t, err)
	assert.False(t, errors.Is(err, membership.ErrConflict))
}

func TestMembershipStoreSaveRejectsDuplicate(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newMembershipTestStore(t)

	record := fixtureMembership(t, "membership-a", "user-1", "Solar Pilot", "game-1")
	require.NoError(t, store.Save(ctx, record))

	err := store.Save(ctx, record)
	require.Error(t, err)
	assert.True(t, errors.Is(err, membership.ErrConflict))
}

func TestMembershipStoreUpdateStatusSetsRemovedAt(t *testing.T) {
	cases := []struct {
		name   string
		target membership.Status
	}{
		{"removed", membership.StatusRemoved},
		{"blocked", membership.StatusBlocked},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			ctx := context.Background()
			store, _, _ := newMembershipTestStore(t)

			record := fixtureMembership(t, common.MembershipID("membership-"+tc.name), "user-1", "Solar Pilot", "game-1")
			require.NoError(t, store.Save(ctx, record))

			at := record.JoinedAt.Add(2 * time.Hour)
			require.NoError(t, store.UpdateStatus(ctx, ports.UpdateMembershipStatusInput{
				MembershipID: record.MembershipID,
				ExpectedFrom: membership.StatusActive,
				To:           tc.target,
				At:           at,
			}))

			got, err := store.Get(ctx, record.MembershipID)
			require.NoError(t, err)
			assert.Equal(t, tc.target, got.Status)
			require.NotNil(t, got.RemovedAt)
			assert.True(t, got.RemovedAt.Equal(at.UTC()))
		})
	}
}

func TestMembershipStoreUpdateStatusRejectsInvalidTransitionWithoutMutation(t *testing.T) {
	ctx := context.Background()
	store, _, _ := newMembershipTestStore(t)
|
||||
|
||||
record := fixtureMembership(t, "membership-a", "user-1", "Solar Pilot", "game-1")
|
||||
require.NoError(t, store.Save(ctx, record))
|
||||
|
||||
err := store.UpdateStatus(ctx, ports.UpdateMembershipStatusInput{
|
||||
MembershipID: record.MembershipID,
|
||||
ExpectedFrom: membership.StatusRemoved,
|
||||
To: membership.StatusBlocked,
|
||||
At: record.JoinedAt.Add(time.Minute),
|
||||
})
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, membership.ErrInvalidTransition))
|
||||
|
||||
got, err := store.Get(ctx, record.MembershipID)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, membership.StatusActive, got.Status)
|
||||
assert.Nil(t, got.RemovedAt)
|
||||
}
|
||||
|
||||
func TestMembershipStoreUpdateStatusReturnsConflictWhenStatusDiverges(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store, _, _ := newMembershipTestStore(t)
|
||||
|
||||
record := fixtureMembership(t, "membership-a", "user-1", "Solar Pilot", "game-1")
|
||||
require.NoError(t, store.Save(ctx, record))
|
||||
|
||||
require.NoError(t, store.UpdateStatus(ctx, ports.UpdateMembershipStatusInput{
|
||||
MembershipID: record.MembershipID,
|
||||
ExpectedFrom: membership.StatusActive,
|
||||
To: membership.StatusBlocked,
|
||||
At: record.JoinedAt.Add(time.Minute),
|
||||
}))
|
||||
|
||||
err := store.UpdateStatus(ctx, ports.UpdateMembershipStatusInput{
|
||||
MembershipID: record.MembershipID,
|
||||
ExpectedFrom: membership.StatusActive,
|
||||
To: membership.StatusRemoved,
|
||||
At: record.JoinedAt.Add(2 * time.Minute),
|
||||
})
|
||||
require.Error(t, err)
|
||||
assert.True(t, errors.Is(err, membership.ErrConflict))
|
||||
}
|
||||
|
||||
func TestMembershipStoreUpdateStatusReturnsNotFoundForMissingRecord(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store, _, _ := newMembershipTestStore(t)
|
||||
|
||||
err := store.UpdateStatus(ctx, ports.UpdateMembershipStatusInput{
|
||||
MembershipID: common.MembershipID("membership-missing"),
|
||||
ExpectedFrom: membership.StatusActive,
|
||||
To: membership.StatusRemoved,
|
||||
At: time.Now().UTC(),
|
||||
})
|
||||
require.ErrorIs(t, err, membership.ErrNotFound)
|
||||
}
|
||||
|
||||
func TestMembershipStoreGetByGameAndByUser(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store, _, _ := newMembershipTestStore(t)
|
||||
|
||||
m1 := fixtureMembership(t, "membership-a1", "user-1", "Racer A", "game-1")
|
||||
m2 := fixtureMembership(t, "membership-a2", "user-2", "Racer B", "game-1")
|
||||
m3 := fixtureMembership(t, "membership-a3", "user-1", "Racer C", "game-2")
|
||||
|
||||
for _, record := range []membership.Membership{m1, m2, m3} {
|
||||
require.NoError(t, store.Save(ctx, record))
|
||||
}
|
||||
|
||||
byGame1, err := store.GetByGame(ctx, "game-1")
|
||||
require.NoError(t, err)
|
||||
require.Len(t, byGame1, 2)
|
||||
|
||||
byUser1, err := store.GetByUser(ctx, "user-1")
|
||||
require.NoError(t, err)
|
||||
require.Len(t, byUser1, 2)
|
||||
|
||||
ids := collectMembershipIDs(byUser1)
|
||||
sort.Strings(ids)
|
||||
assert.Equal(t, []string{"membership-a1", "membership-a3"}, ids)
|
||||
|
||||
byUserMissing, err := store.GetByUser(ctx, "user-missing")
|
||||
require.NoError(t, err)
|
||||
assert.Empty(t, byUserMissing)
|
||||
}
|
||||
|
||||
func TestMembershipStoreGetByUserDropsStaleIndexEntries(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store, server, _ := newMembershipTestStore(t)
|
||||
|
||||
record := fixtureMembership(t, "membership-a", "user-1", "Solar Pilot", "game-1")
|
||||
require.NoError(t, store.Save(ctx, record))
|
||||
|
||||
server.Del("lobby:memberships:" + base64URL(record.MembershipID.String()))
|
||||
|
||||
records, err := store.GetByUser(ctx, record.UserID)
|
||||
require.NoError(t, err)
|
||||
assert.Empty(t, records)
|
||||
}
|
||||
|
||||
func TestMembershipStoreDeleteRemovesPrimaryAndIndexes(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store, _, client := newMembershipTestStore(t)
|
||||
|
||||
record := fixtureMembership(t, "membership-a", "user-1", "Solar Pilot", "game-1")
|
||||
require.NoError(t, store.Save(ctx, record))
|
||||
|
||||
require.NoError(t, store.Delete(ctx, record.MembershipID))
|
||||
|
||||
_, err := store.Get(ctx, record.MembershipID)
|
||||
require.ErrorIs(t, err, membership.ErrNotFound)
|
||||
|
||||
byGame, err := client.SMembers(ctx, "lobby:game_memberships:"+base64URL(record.GameID.String())).Result()
|
||||
require.NoError(t, err)
|
||||
assert.Empty(t, byGame)
|
||||
|
||||
byUser, err := client.SMembers(ctx, "lobby:user_memberships:"+base64URL(record.UserID)).Result()
|
||||
require.NoError(t, err)
|
||||
assert.Empty(t, byUser)
|
||||
}
|
||||
|
||||
func TestMembershipStoreDeleteReturnsNotFoundForMissingRecord(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store, _, _ := newMembershipTestStore(t)
|
||||
|
||||
err := store.Delete(ctx, common.MembershipID("membership-missing"))
|
||||
require.ErrorIs(t, err, membership.ErrNotFound)
|
||||
}
|
||||
|
||||
func TestMembershipStoreDeleteIsIdempotentAfterFirstSuccess(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
store, _, _ := newMembershipTestStore(t)
|
||||
|
||||
record := fixtureMembership(t, "membership-a", "user-1", "Solar Pilot", "game-1")
|
||||
require.NoError(t, store.Save(ctx, record))
|
||||
|
||||
require.NoError(t, store.Delete(ctx, record.MembershipID))
|
||||
|
||||
err := store.Delete(ctx, record.MembershipID)
|
||||
require.ErrorIs(t, err, membership.ErrNotFound)
|
||||
}
|
||||
|
||||
func collectMembershipIDs(records []membership.Membership) []string {
|
||||
ids := make([]string, len(records))
|
||||
for index, record := range records {
|
||||
ids[index] = record.MembershipID.String()
|
||||
}
|
||||
return ids
|
||||
}
|
||||
File diff suppressed because it is too large
@@ -1,52 +0,0 @@
package redisstate

// releaseAllByUserScript atomically clears every registered, reservation,
// and pending_registration binding owned by one user. Inputs:
//
//	KEYS[1] — user_registered set key
//	KEYS[2] — user_reservations set key
//	KEYS[3] — pending_index sorted-set key
//	ARGV[1] — Lobby Redis key prefix (e.g. "lobby:")
//
// The script returns a three-entry table `{registeredCount,
// reservationsTotal, pendingCount}` so callers can emit telemetry without
// a second round-trip. reservationsTotal includes both reserved and
// pending_registration entries; pendingCount is the pending-only subset.
const releaseAllByUserScript = `
local userRegisteredKey = KEYS[1]
local userReservationsKey = KEYS[2]
local pendingIndexKey = KEYS[3]
local prefix = ARGV[1]

local registered = redis.call('SMEMBERS', userRegisteredKey)
for _, canonical in ipairs(registered) do
	redis.call('DEL', prefix .. 'race_names:registered:' .. canonical)
	redis.call('DEL', prefix .. 'race_names:canonical_lookup:' .. canonical)
end
local registeredCount = #registered
if registeredCount > 0 then
	redis.call('DEL', userRegisteredKey)
end

local reservations = redis.call('SMEMBERS', userReservationsKey)
local pendingCount = 0
for _, member in ipairs(reservations) do
	local sep = string.find(member, ':', 1, true)
	if sep then
		local encGame = string.sub(member, 1, sep - 1)
		local encCanonical = string.sub(member, sep + 1)
		redis.call('DEL', prefix .. 'race_names:reservations:' .. encGame .. ':' .. encCanonical)
		local pendingRemoved = redis.call('ZREM', pendingIndexKey, member)
		if pendingRemoved == 1 then
			pendingCount = pendingCount + 1
		end
		redis.call('DEL', prefix .. 'race_names:canonical_lookup:' .. encCanonical)
	end
end
local reservationsTotal = #reservations
if reservationsTotal > 0 then
	redis.call('DEL', userReservationsKey)
end

return {registeredCount, reservationsTotal, pendingCount}
`
@@ -1,244 +0,0 @@
package redisstate_test

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"errors"
	"testing"
	"time"

	"galaxy/lobby/internal/adapters/redisstate"
	"galaxy/lobby/internal/domain/racename"
	"galaxy/lobby/internal/ports"
	"galaxy/lobby/internal/ports/racenamedirtest"

	"github.com/alicebob/miniredis/v2"
	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func newRaceNameDirectoryAdapter(
	t *testing.T,
	now func() time.Time,
) (*redisstate.RaceNameDirectory, *miniredis.Miniredis, *redis.Client) {
	t.Helper()

	server := miniredis.RunT(t)
	client := redis.NewClient(&redis.Options{Addr: server.Addr()})
	t.Cleanup(func() {
		_ = client.Close()
	})

	policy, err := racename.NewPolicy()
	require.NoError(t, err)

	var opts []redisstate.RaceNameDirectoryOption
	if now != nil {
		opts = append(opts, redisstate.WithRaceNameDirectoryClock(now))
	}
	directory, err := redisstate.NewRaceNameDirectory(client, policy, opts...)
	require.NoError(t, err)

	return directory, server, client
}

func TestRaceNameDirectoryContract(t *testing.T) {
	racenamedirtest.Run(t, func(now func() time.Time) ports.RaceNameDirectory {
		directory, _, _ := newRaceNameDirectoryAdapter(t, now)
		return directory
	})
}

func TestNewRaceNameDirectoryRejectsNilClient(t *testing.T) {
	policy, err := racename.NewPolicy()
	require.NoError(t, err)

	_, err = redisstate.NewRaceNameDirectory(nil, policy)
	require.Error(t, err)
}

func TestNewRaceNameDirectoryRejectsNilPolicy(t *testing.T) {
	server := miniredis.RunT(t)
	client := redis.NewClient(&redis.Options{Addr: server.Addr()})
	t.Cleanup(func() { _ = client.Close() })

	_, err := redisstate.NewRaceNameDirectory(client, nil)
	require.Error(t, err)
}

func TestRaceNameDirectoryPersistsExactKeyShapes(t *testing.T) {
	ctx := context.Background()
	directory, server, _ := newRaceNameDirectoryAdapter(t, nil)

	const (
		gameID   = "game-shape"
		userID   = "user-shape"
		raceName = "PilotNova"
	)

	require.NoError(t, directory.Reserve(ctx, gameID, userID, raceName))

	canonical, err := directory.Canonicalize(raceName)
	require.NoError(t, err)

	encGame := base64URL(gameID)
	encUser := base64URL(userID)
	encCanonical := base64URL(canonical)

	require.True(t, server.Exists("lobby:race_names:reservations:"+encGame+":"+encCanonical))
	require.True(t, server.Exists("lobby:race_names:canonical_lookup:"+encCanonical))
	require.True(t, server.Exists("lobby:race_names:user_reservations:"+encUser))

	members, err := server.SMembers("lobby:race_names:user_reservations:" + encUser)
	require.NoError(t, err)
	require.Contains(t, members, encGame+":"+encCanonical)

	lookupPayload, err := server.Get("lobby:race_names:canonical_lookup:" + encCanonical)
	require.NoError(t, err)
	var lookup map[string]any
	require.NoError(t, json.Unmarshal([]byte(lookupPayload), &lookup))
	assert.Equal(t, ports.KindReservation, lookup["kind"])
	assert.Equal(t, userID, lookup["holder_user_id"])
	assert.Equal(t, gameID, lookup["game_id"])
}

func TestRaceNameDirectoryCanonicalLookupUpgradesOnPendingAndRegistered(t *testing.T) {
	now, _ := fixedNow(t)
	directory, server, _ := newRaceNameDirectoryAdapter(t, now)
	ctx := context.Background()

	const (
		gameID   = "game-upgrade"
		userID   = "user-upgrade"
		raceName = "UpgradePilot"
	)

	require.NoError(t, directory.Reserve(ctx, gameID, userID, raceName))

	canonical, err := directory.Canonicalize(raceName)
	require.NoError(t, err)
	lookupKey := "lobby:race_names:canonical_lookup:" + base64URL(canonical)

	lookupAfterReserve, err := server.Get(lookupKey)
	require.NoError(t, err)
	require.Contains(t, lookupAfterReserve, `"kind":"`+ports.KindReservation+`"`)

	eligibleUntil := now().Add(time.Hour)
	require.NoError(t, directory.MarkPendingRegistration(ctx, gameID, userID, raceName, eligibleUntil))

	lookupAfterPending, err := server.Get(lookupKey)
	require.NoError(t, err)
	require.Contains(t, lookupAfterPending, `"kind":"`+ports.KindPendingRegistration+`"`)

	require.NoError(t, directory.Register(ctx, gameID, userID, raceName))

	lookupAfterRegister, err := server.Get(lookupKey)
	require.NoError(t, err)
	require.Contains(t, lookupAfterRegister, `"kind":"`+ports.KindRegistered+`"`)
	require.NotContains(t, lookupAfterRegister, `"game_id"`, "registered lookup omits the game id")
}

func TestRaceNameDirectoryCanonicalLookupDowngradesOnReleaseCrossGame(t *testing.T) {
	directory, server, _ := newRaceNameDirectoryAdapter(t, nil)
	ctx := context.Background()

	const (
		gameA    = "game-keep-a"
		gameB    = "game-keep-b"
		userID   = "user-keep"
		raceName = "KeepPilot"
	)

	require.NoError(t, directory.Reserve(ctx, gameA, userID, raceName))
	require.NoError(t, directory.Reserve(ctx, gameB, userID, raceName))

	canonical, err := directory.Canonicalize(raceName)
	require.NoError(t, err)
	lookupKey := "lobby:race_names:canonical_lookup:" + base64URL(canonical)

	require.NoError(t, directory.ReleaseReservation(ctx, gameA, userID, raceName))

	payload, err := server.Get(lookupKey)
	require.NoError(t, err)
	require.Contains(t, payload, `"kind":"`+ports.KindReservation+`"`)
	require.Contains(t, payload, `"game_id":"`+gameB+`"`)

	require.NoError(t, directory.ReleaseReservation(ctx, gameB, userID, raceName))
	require.False(t, server.Exists(lookupKey))
}

func TestRaceNameDirectoryReleaseAllByUserLua(t *testing.T) {
	now, _ := fixedNow(t)
	directory, server, _ := newRaceNameDirectoryAdapter(t, now)
	ctx := context.Background()

	const (
		userID   = "user-lua"
		otherID  = "user-lua-other"
		raceName = "LuaPilot"
		otherRN  = "LuaVanguard"
		gameA    = "game-lua-a"
		gameB    = "game-lua-b"
	)

	require.NoError(t, directory.Reserve(ctx, gameA, userID, raceName))
	require.NoError(t, directory.MarkPendingRegistration(ctx, gameA, userID, raceName, now().Add(time.Hour)))
	require.NoError(t, directory.Register(ctx, gameA, userID, raceName))
	require.NoError(t, directory.Reserve(ctx, gameB, userID, otherRN))
	require.NoError(t, directory.MarkPendingRegistration(ctx, gameB, userID, otherRN, now().Add(2*time.Hour)))

	const isolatedRN = "LuaGoldenChain"
	require.NoError(t, directory.Reserve(ctx, gameA, otherID, isolatedRN))

	require.NoError(t, directory.ReleaseAllByUser(ctx, userID))

	require.False(t, server.Exists("lobby:race_names:user_registered:"+base64URL(userID)))
	require.False(t, server.Exists("lobby:race_names:user_reservations:"+base64URL(userID)))
	pendingMembers, err := server.ZMembers("lobby:race_names:pending_index")
	if err != nil {
		require.ErrorContains(t, err, "ERR no such key")
	} else {
		require.Empty(t, pendingMembers)
	}

	otherCanonical, err := directory.Canonicalize(isolatedRN)
	require.NoError(t, err)
	require.True(t, server.Exists("lobby:race_names:canonical_lookup:"+base64URL(otherCanonical)))

	reservations, err := directory.ListReservations(ctx, otherID)
	require.NoError(t, err)
	require.Len(t, reservations, 1)
}

func TestRaceNameDirectoryReleaseAllByUserIsSafeOnEmpty(t *testing.T) {
	directory, _, _ := newRaceNameDirectoryAdapter(t, nil)
	ctx := context.Background()

	require.NoError(t, directory.ReleaseAllByUser(ctx, "unknown-user"))
}

func TestRaceNameDirectoryCheckRejectsInvalidName(t *testing.T) {
	directory, _, _ := newRaceNameDirectoryAdapter(t, nil)

	_, err := directory.Check(context.Background(), "Pilot Nova", "user-x")
	require.Error(t, err)
	require.True(t, errors.Is(err, ports.ErrInvalidName))
}

func fixedNow(t *testing.T) (func() time.Time, func(delta time.Duration)) {
	t.Helper()

	instant := time.Date(2026, 5, 1, 12, 0, 0, 0, time.UTC)
	var mu struct {
		value time.Time
	}
	mu.value = instant
	return func() time.Time { return mu.value },
		func(delta time.Duration) { mu.value = mu.value.Add(delta) }
}

// base64URL is the package-level helper defined in gamestore_test.go;
// race-name adapter tests reuse it via the same test package.
var _ = base64.RawURLEncoding
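The exact key shapes asserted above can be reproduced standalone. This is a hedged sketch, not the adapter's code: `b64` mirrors the test package's `base64URL` helper, assumed here to be Go's `base64.RawURLEncoding`, and the canonical form is supplied directly rather than derived from the directory's `Policy` (whose rules are not shown in this diff).

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// b64 mirrors the adapter tests' base64URL helper (assumed to be
// base64.RawURLEncoding: URL-safe alphabet, no padding).
func b64(s string) string { return base64.RawURLEncoding.EncodeToString([]byte(s)) }

func main() {
	gameID := "game-shape"
	canonical := "pilotnova" // hypothetical canonical form of "PilotNova"

	// Key shapes asserted by TestRaceNameDirectoryPersistsExactKeyShapes.
	fmt.Println("lobby:race_names:reservations:" + b64(gameID) + ":" + b64(canonical))
	fmt.Println("lobby:race_names:canonical_lookup:" + b64(canonical))
}
```

Encoding each segment separately keeps the `:` separators unambiguous even when an ID itself contains a colon, which is also why the Lua release script can split `user_reservations` members on the first `:`.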
@@ -6,28 +6,23 @@ import (

	"galaxy/lobby/internal/config"
	"galaxy/lobby/internal/telemetry"
	"galaxy/redisconn"

	"github.com/redis/go-redis/extra/redisotel/v9"
	"github.com/redis/go-redis/v9"
)

// newRedisClient builds a Redis client wired with the configured timeouts
// and TLS settings taken from cfg.
// newRedisClient builds the master Redis client from cfg via the shared
// `pkg/redisconn` helper. Replica clients are not opened in this iteration
// per ARCHITECTURE.md §Persistence Backends; they will be wired when read
// routing is introduced.
func newRedisClient(cfg config.RedisConfig) *redis.Client {
	return redis.NewClient(&redis.Options{
		Addr:         cfg.Addr,
		Username:     cfg.Username,
		Password:     cfg.Password,
		DB:           cfg.DB,
		TLSConfig:    cfg.TLSConfig(),
		DialTimeout:  cfg.OperationTimeout,
		ReadTimeout:  cfg.OperationTimeout,
		WriteTimeout: cfg.OperationTimeout,
	})
	return redisconn.NewMasterClient(cfg.Conn)
}

// instrumentRedisClient attaches the OpenTelemetry tracing and metrics
// instrumentation to client when telemetryRuntime is available.
// instrumentation to client when telemetryRuntime is available. The actual
// instrumentation lives in `pkg/redisconn` so every Galaxy service shares one
// surface.
func instrumentRedisClient(client *redis.Client, telemetryRuntime *telemetry.Runtime) error {
	if client == nil {
		return fmt.Errorf("instrument redis client: nil client")
@@ -35,37 +30,14 @@ func instrumentRedisClient(client *redis.Client, telemetryRuntime *telemetry.Run
	if telemetryRuntime == nil {
		return nil
	}

	if err := redisotel.InstrumentTracing(
		client,
		redisotel.WithTracerProvider(telemetryRuntime.TracerProvider()),
		redisotel.WithDBStatement(false),
	); err != nil {
		return fmt.Errorf("instrument redis client tracing: %w", err)
	}
	if err := redisotel.InstrumentMetrics(
		client,
		redisotel.WithMeterProvider(telemetryRuntime.MeterProvider()),
	); err != nil {
		return fmt.Errorf("instrument redis client metrics: %w", err)
	}

	return nil
	return redisconn.Instrument(client,
		redisconn.WithTracerProvider(telemetryRuntime.TracerProvider()),
		redisconn.WithMeterProvider(telemetryRuntime.MeterProvider()),
	)
}

// pingRedis performs a single Redis PING bounded by cfg.OperationTimeout to
// confirm that the configured Redis endpoint is reachable at startup.
// pingRedis performs a single Redis PING bounded by cfg.Conn.OperationTimeout
// to confirm that the configured Redis endpoint is reachable at startup.
func pingRedis(ctx context.Context, cfg config.RedisConfig, client *redis.Client) error {
	if client == nil {
		return fmt.Errorf("ping redis: nil client")
	}

	pingCtx, cancel := context.WithTimeout(ctx, cfg.OperationTimeout)
	defer cancel()

	if err := client.Ping(pingCtx).Err(); err != nil {
		return fmt.Errorf("ping redis: %w", err)
	}

	return nil
	return redisconn.Ping(ctx, client, cfg.Conn.OperationTimeout)
}

@@ -6,20 +6,28 @@ import (
	"time"

	"galaxy/lobby/internal/config"
	"galaxy/redisconn"

	"github.com/alicebob/miniredis/v2"
	"github.com/stretchr/testify/require"
)

func newTestRedisCfg(addr string) config.RedisConfig {
	return config.RedisConfig{
		Conn: redisconn.Config{
			MasterAddr:       addr,
			Password:         "test",
			OperationTimeout: time.Second,
		},
	}
}

func TestPingRedisSucceedsAgainstMiniredis(t *testing.T) {
	t.Parallel()

	server := miniredis.RunT(t)

	redisCfg := config.RedisConfig{
		Addr:             server.Addr(),
		OperationTimeout: time.Second,
	}
	redisCfg := newTestRedisCfg(server.Addr())
	client := newRedisClient(redisCfg)
	t.Cleanup(func() { _ = client.Close() })

@@ -31,10 +39,7 @@ func TestPingRedisReturnsErrorWhenClosed(t *testing.T) {

	server := miniredis.RunT(t)

	redisCfg := config.RedisConfig{
		Addr:             server.Addr(),
		OperationTimeout: time.Second,
	}
	redisCfg := newTestRedisCfg(server.Addr())
	client := newRedisClient(redisCfg)
	require.NoError(t, client.Close())

@@ -45,7 +50,7 @@
func TestPingRedisNilClient(t *testing.T) {
	t.Parallel()

	err := pingRedis(context.Background(), config.RedisConfig{OperationTimeout: time.Second}, nil)
	err := pingRedis(context.Background(), newTestRedisCfg("127.0.0.1:0"), nil)
	require.Error(t, err)
	require.Contains(t, err.Error(), "nil client")
}
@@ -62,10 +67,7 @@ func TestInstrumentRedisClientNilTelemetryIsNoop(t *testing.T) {
	t.Parallel()

	server := miniredis.RunT(t)
	client := newRedisClient(config.RedisConfig{
		Addr:             server.Addr(),
		OperationTimeout: time.Second,
	})
	client := newRedisClient(newTestRedisCfg(server.Addr()))
	t.Cleanup(func() { _ = client.Close() })

	require.NoError(t, instrumentRedisClient(client, nil))

@@ -7,6 +7,7 @@ import (
	"log/slog"
	"time"

	"galaxy/lobby/internal/adapters/postgres/migrations"
	"galaxy/lobby/internal/adapters/redisstate"
	"galaxy/lobby/internal/api/internalhttp"
	"galaxy/lobby/internal/api/publichttp"
@@ -14,6 +15,7 @@ import (
	"galaxy/lobby/internal/domain/game"
	"galaxy/lobby/internal/ports"
	"galaxy/lobby/internal/telemetry"
	"galaxy/postgres"
)

// activeGamesProbe adapts ports.GameStore to telemetry.ActiveGamesProbe by
@@ -110,7 +112,31 @@ func NewRuntime(ctx context.Context, cfg config.Config, logger *slog.Logger) (*R
		return cleanupOnError(fmt.Errorf("new lobby runtime: %w", err))
	}

	wiring, err := newWiring(cfg, redisClient, time.Now, logger, telemetryRuntime)
	pgPool, err := postgres.OpenPrimary(ctx, cfg.Postgres.Conn,
		postgres.WithTracerProvider(telemetryRuntime.TracerProvider()),
		postgres.WithMeterProvider(telemetryRuntime.MeterProvider()),
	)
	if err != nil {
		return cleanupOnError(fmt.Errorf("new lobby runtime: open postgres: %w", err))
	}
	runtime.cleanupFns = append(runtime.cleanupFns, pgPool.Close)
	unregisterPGStats, err := postgres.InstrumentDBStats(pgPool,
		postgres.WithMeterProvider(telemetryRuntime.MeterProvider()),
	)
	if err != nil {
		return cleanupOnError(fmt.Errorf("new lobby runtime: instrument postgres: %w", err))
	}
	runtime.cleanupFns = append(runtime.cleanupFns, func() error {
		return unregisterPGStats()
	})
	if err := postgres.Ping(ctx, pgPool, cfg.Postgres.Conn.OperationTimeout); err != nil {
		return cleanupOnError(fmt.Errorf("new lobby runtime: ping postgres: %w", err))
	}
	if err := postgres.RunMigrations(ctx, pgPool, migrations.FS(), "."); err != nil {
		return cleanupOnError(fmt.Errorf("new lobby runtime: run postgres migrations: %w", err))
	}

	wiring, err := newWiring(cfg, redisClient, pgPool, time.Now, logger, telemetryRuntime)
	if err != nil {
		return cleanupOnError(fmt.Errorf("new lobby runtime: wiring: %w", err))
	}

@@ -1,159 +0,0 @@
package app

import (
	"context"
	"io"
	"log/slog"
	"net"
	"net/http"
	"os"
	"testing"
	"time"

	"galaxy/lobby/internal/api/internalhttp"
	"galaxy/lobby/internal/api/publichttp"
	"galaxy/lobby/internal/config"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	testcontainers "github.com/testcontainers/testcontainers-go"
	rediscontainer "github.com/testcontainers/testcontainers-go/modules/redis"
)

const (
	realRuntimeSmokeEnv   = "LOBBY_REAL_RUNTIME_SMOKE"
	realRuntimeRedisImage = "redis:7"
)

// TestRealRuntimeCompatibility boots the full Runtime against a real Redis
// container, verifies that both HTTP listeners serve /healthz and /readyz,
// and asserts graceful shutdown on context cancellation. The test is skipped
// unless LOBBY_REAL_RUNTIME_SMOKE=1 because it depends on Docker.
func TestRealRuntimeCompatibility(t *testing.T) {
	if os.Getenv(realRuntimeSmokeEnv) != "1" {
		t.Skipf("set %s=1 to run the real runtime smoke suite", realRuntimeSmokeEnv)
	}

	ctx := context.Background()

	redisContainer, err := rediscontainer.Run(ctx, realRuntimeRedisImage)
	require.NoError(t, err)
	testcontainers.CleanupContainer(t, redisContainer)

	redisAddr, err := redisContainer.Endpoint(ctx, "")
	require.NoError(t, err)

	cfg := config.DefaultConfig()
	cfg.Redis.Addr = redisAddr
	cfg.UserService.BaseURL = "http://127.0.0.1:1"
	cfg.GM.BaseURL = "http://127.0.0.1:1"
	cfg.PublicHTTP.Addr = mustFreeAddr(t)
	cfg.InternalHTTP.Addr = mustFreeAddr(t)
	cfg.ShutdownTimeout = 2 * time.Second
	cfg.Telemetry.TracesExporter = "none"
	cfg.Telemetry.MetricsExporter = "none"

	runtime, err := NewRuntime(context.Background(), cfg, testLogger())
	require.NoError(t, err)
	defer func() {
		require.NoError(t, runtime.Close())
	}()

	runCtx, cancel := context.WithCancel(context.Background())
	defer cancel()

	runErrCh := make(chan error, 1)
	go func() {
		runErrCh <- runtime.Run(runCtx)
	}()

	client := newTestHTTPClient(t)

	waitForRuntimeReady(t, client, cfg.PublicHTTP.Addr, publichttp.ReadyzPath)
	waitForRuntimeReady(t, client, cfg.InternalHTTP.Addr, internalhttp.ReadyzPath)

	assertHTTPStatus(t, client, "http://"+cfg.PublicHTTP.Addr+publichttp.HealthzPath, http.StatusOK)
	assertHTTPStatus(t, client, "http://"+cfg.PublicHTTP.Addr+publichttp.ReadyzPath, http.StatusOK)
	assertHTTPStatus(t, client, "http://"+cfg.InternalHTTP.Addr+internalhttp.HealthzPath, http.StatusOK)
	assertHTTPStatus(t, client, "http://"+cfg.InternalHTTP.Addr+internalhttp.ReadyzPath, http.StatusOK)

	cancel()
	waitForRunResult(t, runErrCh, cfg.ShutdownTimeout+2*time.Second)
}

func testLogger() *slog.Logger {
	return slog.New(slog.NewTextHandler(io.Discard, nil))
}

func newTestHTTPClient(t *testing.T) *http.Client {
	t.Helper()

	transport := &http.Transport{DisableKeepAlives: true}
	t.Cleanup(transport.CloseIdleConnections)

	return &http.Client{
		Timeout:   500 * time.Millisecond,
		Transport: transport,
	}
}

func waitForRuntimeReady(t *testing.T, client *http.Client, addr string, path string) {
	t.Helper()

	require.Eventually(t, func() bool {
		request, err := http.NewRequest(http.MethodGet, "http://"+addr+path, nil)
		if err != nil {
			return false
		}

		response, err := client.Do(request)
		if err != nil {
			return false
		}
		defer response.Body.Close()
		_, _ = io.Copy(io.Discard, response.Body)

		return response.StatusCode == http.StatusOK
	}, 5*time.Second, 25*time.Millisecond, "lobby runtime did not become reachable on %s", addr)
}

func waitForRunResult(t *testing.T, runErrCh <-chan error, waitTimeout time.Duration) {
	t.Helper()

	var err error
	require.Eventually(t, func() bool {
		select {
		case err = <-runErrCh:
			return true
		default:
			return false
		}
	}, waitTimeout, 10*time.Millisecond, "lobby runtime did not stop")
	require.NoError(t, err)
}

func assertHTTPStatus(t *testing.T, client *http.Client, target string, want int) {
	t.Helper()

	request, err := http.NewRequest(http.MethodGet, target, nil)
	require.NoError(t, err)

	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()
	_, _ = io.Copy(io.Discard, response.Body)

	require.Equal(t, want, response.StatusCode)
}

func mustFreeAddr(t *testing.T) string {
	t.Helper()

	listener, err := net.Listen("tcp", "127.0.0.1:0")
	require.NoError(t, err)
	defer func() {
		assert.NoError(t, listener.Close())
	}()

	return listener.Addr().String()
}
@@ -1,151 +0,0 @@
package app

import (
	"context"
	"net"
	"net/http"
	"testing"
	"time"

	"galaxy/lobby/internal/api/internalhttp"
	"galaxy/lobby/internal/api/publichttp"
	"galaxy/lobby/internal/config"

	"github.com/alicebob/miniredis/v2"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// newTestConfig builds a valid Config that listens on ephemeral ports and a
// miniredis instance provided by redisServer.
func newTestConfig(t *testing.T, redisAddr string) config.Config {
	t.Helper()

	reserve := func() string {
		listener, err := net.Listen("tcp", "127.0.0.1:0")
		require.NoError(t, err)
		addr := listener.Addr().String()
		require.NoError(t, listener.Close())
		return addr
	}

	cfg := config.DefaultConfig()
	cfg.Redis.Addr = redisAddr
	cfg.UserService.BaseURL = "http://127.0.0.1:1"
	cfg.GM.BaseURL = "http://127.0.0.1:1"
	cfg.PublicHTTP.Addr = reserve()
	cfg.InternalHTTP.Addr = reserve()

	return cfg
}

func TestNewRuntimeValidatesContext(t *testing.T) {
	t.Parallel()

	_, err := NewRuntime(nil, config.Config{}, nil) //nolint:staticcheck // test exercises the nil-context guard.
	require.Error(t, err)
	require.Contains(t, err.Error(), "nil context")
}

func TestNewRuntimeRejectsInvalidConfig(t *testing.T) {
	t.Parallel()

	_, err := NewRuntime(context.Background(), config.Config{}, nil)
	require.Error(t, err)
	require.Contains(t, err.Error(), "new lobby runtime")
}

func TestNewRuntimeSucceedsWithMiniredis(t *testing.T) {
	redisServer := miniredis.RunT(t)

	runtime, err := NewRuntime(context.Background(), newTestConfig(t, redisServer.Addr()), nil)
	require.NoError(t, err)
	require.NotNil(t, runtime)
	t.Cleanup(func() { _ = runtime.Close() })

	assert.NotNil(t, runtime.PublicServer())
	assert.NotNil(t, runtime.InternalServer())
}

func TestNewRuntimeWiresRaceNameDirectory(t *testing.T) {
	redisServer := miniredis.RunT(t)

	runtime, err := NewRuntime(context.Background(), newTestConfig(t, redisServer.Addr()), nil)
	require.NoError(t, err)
	t.Cleanup(func() { _ = runtime.Close() })

	require.NotNil(t, runtime.wiring)
	assert.NotNil(t, runtime.wiring.raceNameDirectory)
}

func TestNewRuntimeFailsWhenRedisUnreachable(t *testing.T) {
	t.Parallel()

	cfg := newTestConfig(t, "127.0.0.1:1") // guaranteed unreachable
	cfg.Redis.OperationTimeout = 100 * time.Millisecond

	_, err := NewRuntime(context.Background(), cfg, nil)
	require.Error(t, err)
	require.Contains(t, err.Error(), "ping redis")
}

func TestRuntimeCloseIsIdempotent(t *testing.T) {
	redisServer := miniredis.RunT(t)
	runtime, err := NewRuntime(context.Background(), newTestConfig(t, redisServer.Addr()), nil)
	require.NoError(t, err)

	require.NoError(t, runtime.Close())
	require.NoError(t, runtime.Close())
}

func TestRuntimeRunServesProbesAndStopsOnCancel(t *testing.T) {
	redisServer := miniredis.RunT(t)
	cfg := newTestConfig(t, redisServer.Addr())

	runtime, err := NewRuntime(context.Background(), cfg, nil)
	require.NoError(t, err)
	t.Cleanup(func() { _ = runtime.Close() })

	ctx, cancel := context.WithCancel(context.Background())
	t.Cleanup(cancel)

	runErr := make(chan error, 1)
	go func() {
		runErr <- runtime.Run(ctx)
	}()

	require.Eventually(t, func() bool {
		return runtime.PublicServer().Addr() != "" && runtime.InternalServer().Addr() != ""
	}, 2*time.Second, 10*time.Millisecond)

	for _, probe := range []struct {
		label string
		url   string
	}{
		{"public healthz", "http://" + runtime.PublicServer().Addr() + publichttp.HealthzPath},
		{"public readyz", "http://" + runtime.PublicServer().Addr() + publichttp.ReadyzPath},
		{"internal healthz", "http://" + runtime.InternalServer().Addr() + internalhttp.HealthzPath},
		{"internal readyz", "http://" + runtime.InternalServer().Addr() + internalhttp.ReadyzPath},
	} {
		resp, err := http.Get(probe.url)
		require.NoError(t, err, probe.label)
		_ = resp.Body.Close()
		assert.Equal(t, http.StatusOK, resp.StatusCode, probe.label)
	}

	cancel()

	select {
	case err := <-runErr:
		require.NoError(t, err)
	case <-time.After(3 * time.Second):
		t.Fatal("runtime did not stop after cancel")
	}
}

func TestRuntimeRunNilContext(t *testing.T) {
	t.Parallel()

	var runtime *Runtime
	require.Error(t, runtime.Run(context.Background()))
}
@@ -1,6 +1,7 @@
package app

import (
	"database/sql"
	"errors"
	"fmt"
	"log/slog"
@@ -10,6 +11,11 @@ import (
	"galaxy/lobby/internal/adapters/idgen"
	"galaxy/lobby/internal/adapters/metricsintentpub"
	"galaxy/lobby/internal/adapters/metricsracenamedir"
	pgapplicationstore "galaxy/lobby/internal/adapters/postgres/applicationstore"
	pggamestore "galaxy/lobby/internal/adapters/postgres/gamestore"
	pginvitestore "galaxy/lobby/internal/adapters/postgres/invitestore"
	pgmembershipstore "galaxy/lobby/internal/adapters/postgres/membershipstore"
	pgracenamedir "galaxy/lobby/internal/adapters/postgres/racenamedir"
	"galaxy/lobby/internal/adapters/racenameintents"
	"galaxy/lobby/internal/adapters/racenamestub"
	"galaxy/lobby/internal/adapters/redisstate"
@@ -234,6 +240,7 @@ type wiring struct {
func newWiring(
	cfg config.Config,
	redisClient *redis.Client,
	pgPool *sql.DB,
	clock func() time.Time,
	logger *slog.Logger,
	telemetryRuntime *telemetry.Runtime,
@@ -249,29 +256,47 @@ func newWiring(
		logger = slog.Default()
	}

	rawDirectory, err := buildRaceNameDirectory(cfg, redisClient, policy, clock)
	if redisClient == nil {
		return nil, errors.New("new lobby wiring: nil redis client")
	}
	if pgPool == nil {
		return nil, errors.New("new lobby wiring: nil postgres pool")
	}

	rawDirectory, err := buildRaceNameDirectory(cfg, pgPool, policy, clock)
	if err != nil {
		return nil, fmt.Errorf("new lobby wiring: %w", err)
	}
	directory := metricsracenamedir.New(rawDirectory, telemetryRuntime)

	if redisClient == nil {
		return nil, errors.New("new lobby wiring: nil redis client")
	pgStoreCfg := struct {
		DB               *sql.DB
		OperationTimeout time.Duration
	}{
		DB:               pgPool,
		OperationTimeout: cfg.Postgres.Conn.OperationTimeout,
	}

	gameStore, err := redisstate.NewGameStore(redisClient)
	gameStore, err := pggamestore.New(pggamestore.Config{
		DB: pgStoreCfg.DB, OperationTimeout: pgStoreCfg.OperationTimeout,
	})
	if err != nil {
		return nil, fmt.Errorf("new lobby wiring: %w", err)
	}
	applicationStore, err := redisstate.NewApplicationStore(redisClient)
	applicationStore, err := pgapplicationstore.New(pgapplicationstore.Config{
		DB: pgStoreCfg.DB, OperationTimeout: pgStoreCfg.OperationTimeout,
	})
	if err != nil {
		return nil, fmt.Errorf("new lobby wiring: %w", err)
	}
	inviteStore, err := redisstate.NewInviteStore(redisClient)
	inviteStore, err := pginvitestore.New(pginvitestore.Config{
		DB: pgStoreCfg.DB, OperationTimeout: pgStoreCfg.OperationTimeout,
	})
	if err != nil {
		return nil, fmt.Errorf("new lobby wiring: %w", err)
	}
	membershipStore, err := redisstate.NewMembershipStore(redisClient)
	membershipStore, err := pgmembershipstore.New(pgmembershipstore.Config{
		DB: pgStoreCfg.DB, OperationTimeout: pgStoreCfg.OperationTimeout,
	})
	if err != nil {
		return nil, fmt.Errorf("new lobby wiring: %w", err)
	}
@@ -763,20 +788,21 @@ func newWiring(
// selected by cfg.RaceNameDirectory.Backend.
func buildRaceNameDirectory(
	cfg config.Config,
	redisClient *redis.Client,
	pgPool *sql.DB,
	policy *racename.Policy,
	clock func() time.Time,
) (ports.RaceNameDirectory, error) {
	switch cfg.RaceNameDirectory.Backend {
	case config.RaceNameDirectoryBackendRedis:
		if redisClient == nil {
			return nil, errors.New("redis race name directory backend requires a Redis client")
	case config.RaceNameDirectoryBackendPostgres:
		if pgPool == nil {
			return nil, errors.New("postgres race name directory backend requires a Postgres pool")
		}
		return redisstate.NewRaceNameDirectory(
			redisClient,
			policy,
			redisstate.WithRaceNameDirectoryClock(clock),
		)
		return pgracenamedir.New(pgracenamedir.Config{
			DB:               pgPool,
			OperationTimeout: cfg.Postgres.Conn.OperationTimeout,
			Policy:           policy,
			Clock:            clock,
		})
	case config.RaceNameDirectoryBackendStub:
		return racenamestub.NewDirectory(racenamestub.WithClock(clock))
	default:
@@ -3,15 +3,18 @@
package config

import (
	"crypto/tls"
	"fmt"
	"strings"
	"time"

	"galaxy/lobby/internal/telemetry"
	"galaxy/postgres"
	"galaxy/redisconn"
)

const (
	envPrefix = "LOBBY"

	shutdownTimeoutEnvVar = "LOBBY_SHUTDOWN_TIMEOUT"
	logLevelEnvVar        = "LOBBY_LOG_LEVEL"

@@ -25,13 +28,6 @@ const (
	internalHTTPReadTimeoutEnvVar = "LOBBY_INTERNAL_HTTP_READ_TIMEOUT"
	internalHTTPIdleTimeoutEnvVar = "LOBBY_INTERNAL_HTTP_IDLE_TIMEOUT"

	redisAddrEnvVar             = "LOBBY_REDIS_ADDR"
	redisUsernameEnvVar         = "LOBBY_REDIS_USERNAME"
	redisPasswordEnvVar         = "LOBBY_REDIS_PASSWORD"
	redisDBEnvVar               = "LOBBY_REDIS_DB"
	redisTLSEnabledEnvVar       = "LOBBY_REDIS_TLS_ENABLED"
	redisOperationTimeoutEnvVar = "LOBBY_REDIS_OPERATION_TIMEOUT"

	gmEventsStreamEnvVar           = "LOBBY_GM_EVENTS_STREAM"
	gmEventsReadBlockTimeoutEnvVar = "LOBBY_GM_EVENTS_READ_BLOCK_TIMEOUT"
	userLifecycleStreamEnvVar      = "LOBBY_USER_LIFECYCLE_STREAM"
@@ -69,8 +65,6 @@ const (
	defaultReadHeaderTimeout = 2 * time.Second
	defaultReadTimeout       = 10 * time.Second
	defaultIdleTimeout       = time.Minute
	defaultRedisDB               = 0
	defaultRedisOperationTimeout = 2 * time.Second
	defaultGMEventsStream           = "gm:lobby_events"
	defaultGMEventsReadBlockTimeout = 2 * time.Second
	defaultUserLifecycleStream      = "user:lifecycle_events"
@@ -86,12 +80,13 @@ const (
	defaultRaceNameExpirationInterval = time.Hour
	defaultOTelServiceName            = "galaxy-lobby"

	// RaceNameDirectoryBackendRedis selects the Redis-backed Race Name
	// Directory adapter. It is the default production backend.
	RaceNameDirectoryBackendRedis = "redis"
	// RaceNameDirectoryBackendPostgres selects the PostgreSQL-backed
	// Race Name Directory adapter. It is the default production backend
	// after PG_PLAN.md §6B.
	RaceNameDirectoryBackendPostgres = "postgres"

	// RaceNameDirectoryBackendStub selects the in-process Race Name
	// Directory stub used by unit tests that do not need Redis.
	// Directory stub used by unit tests that do not need PostgreSQL.
	RaceNameDirectoryBackendStub = "stub"
)

@@ -115,6 +110,10 @@ type Config struct {
	// consumed by the runnable service skeleton and its future workers.
	Redis RedisConfig

	// Postgres configures the PostgreSQL-backed durable store consumed via
	// `pkg/postgres`.
	Postgres PostgresConfig

	// UserService configures the synchronous User Service eligibility client.
	UserService UserServiceConfig

@@ -143,7 +142,7 @@ type Config struct {
// is wired into the runtime.
type RaceNameDirectoryConfig struct {
	// Backend selects the Race Name Directory adapter. Accepted values
	// are RaceNameDirectoryBackendRedis and RaceNameDirectoryBackendStub.
	// are RaceNameDirectoryBackendPostgres and RaceNameDirectoryBackendStub.
	Backend string
}

@@ -151,14 +150,14 @@ type RaceNameDirectoryConfig struct {
// backend selector.
func (cfg RaceNameDirectoryConfig) Validate() error {
	switch cfg.Backend {
	case RaceNameDirectoryBackendRedis, RaceNameDirectoryBackendStub:
	case RaceNameDirectoryBackendPostgres, RaceNameDirectoryBackendStub:
		return nil
	case "":
		return fmt.Errorf("race name directory backend must not be empty")
	default:
		return fmt.Errorf("race name directory backend %q must be one of %q or %q",
			cfg.Backend,
			RaceNameDirectoryBackendRedis,
			RaceNameDirectoryBackendPostgres,
			RaceNameDirectoryBackendStub)
	}
}
@@ -237,26 +236,15 @@ func (cfg InternalHTTPConfig) Validate() error {
	}
}

// RedisConfig configures the shared Redis client and the Redis-owned
// Streams keys consumed by the runnable service skeleton.
// RedisConfig configures the Game Lobby Redis connection topology and the
// Redis Stream names Lobby reads from / writes to. Per-call timeouts and
// connection topology live inside `Conn`.
type RedisConfig struct {
	// Addr stores the Redis network address.
	Addr string

	// Username stores the optional Redis ACL username.
	Username string

	// Password stores the optional Redis ACL password.
	Password string

	// DB stores the Redis logical database index.
	DB int

	// TLSEnabled reports whether TLS must be used for Redis connections.
	TLSEnabled bool

	// OperationTimeout bounds one Redis round trip including the startup PING.
	OperationTimeout time.Duration
	// Conn carries the connection topology (master, replicas, password, db,
	// per-call timeout). Loaded via redisconn.LoadFromEnv("LOBBY"); rejects
	// the deprecated LOBBY_REDIS_TLS_ENABLED / LOBBY_REDIS_USERNAME env vars
	// at startup.
	Conn redisconn.Config

	// GMEventsStream stores the Redis Streams key for Game Master runtime
	// events consumed by Lobby.
@@ -297,27 +285,12 @@ type RedisConfig struct {
	UserLifecycleReadBlockTimeout time.Duration
}

// TLSConfig returns the conservative TLS configuration used by the Redis
// client when TLSEnabled is true.
func (cfg RedisConfig) TLSConfig() *tls.Config {
	if !cfg.TLSEnabled {
		return nil
	}

	return &tls.Config{MinVersion: tls.VersionTLS12}
}

// Validate reports whether cfg stores a usable Redis configuration.
func (cfg RedisConfig) Validate() error {
	if err := cfg.Conn.Validate(); err != nil {
		return err
	}
	switch {
	case strings.TrimSpace(cfg.Addr) == "":
		return fmt.Errorf("redis addr must not be empty")
	case !isTCPAddr(cfg.Addr):
		return fmt.Errorf("redis addr %q must use host:port form", cfg.Addr)
	case cfg.DB < 0:
		return fmt.Errorf("redis db must not be negative")
	case cfg.OperationTimeout <= 0:
		return fmt.Errorf("redis operation timeout must be positive")
	case strings.TrimSpace(cfg.GMEventsStream) == "":
		return fmt.Errorf("redis gm events stream must not be empty")
	case cfg.GMEventsReadBlockTimeout <= 0:
@@ -341,6 +314,19 @@ func (cfg RedisConfig) Validate() error {
	}
}

// PostgresConfig configures the PostgreSQL-backed durable store consumed via
// `pkg/postgres`. Topology and pool tuning live in `Conn`; loaded via
// `postgres.LoadFromEnv("LOBBY")`.
type PostgresConfig struct {
	// Conn carries the primary plus replica DSN topology and pool tuning.
	Conn postgres.Config
}

// Validate reports whether cfg stores a usable PostgreSQL configuration.
func (cfg PostgresConfig) Validate() error {
	return cfg.Conn.Validate()
}

// UserServiceConfig configures the synchronous User Service eligibility
// client used by the application flow.
type UserServiceConfig struct {
@@ -489,8 +475,7 @@ func DefaultConfig() Config {
			IdleTimeout:       defaultIdleTimeout,
		},
		Redis: RedisConfig{
			DB:               defaultRedisDB,
			OperationTimeout: defaultRedisOperationTimeout,
			Conn:                     redisconn.DefaultConfig(),
			GMEventsStream:           defaultGMEventsStream,
			GMEventsReadBlockTimeout: defaultGMEventsReadBlockTimeout,
			RuntimeStartJobsStream:   defaultRuntimeStartJobsStream,
@@ -501,6 +486,9 @@ func DefaultConfig() Config {
			UserLifecycleStream:           defaultUserLifecycleStream,
			UserLifecycleReadBlockTimeout: defaultUserLifecycleReadBlockTimeout,
		},
		Postgres: PostgresConfig{
			Conn: postgres.DefaultConfig(),
		},
		UserService: UserServiceConfig{
			Timeout: defaultUserServiceTimeout,
		},
@@ -511,7 +499,7 @@ func DefaultConfig() Config {
			Interval: defaultEnrollmentAutomationInterval,
		},
		RaceNameDirectory: RaceNameDirectoryConfig{
			Backend: RaceNameDirectoryBackendRedis,
			Backend: RaceNameDirectoryBackendPostgres,
		},
		PendingRegistration: PendingRegistrationConfig{
			Interval: defaultRaceNameExpirationInterval,
@@ -5,10 +5,21 @@ import (
	"testing"
	"time"

	"galaxy/postgres"
	"galaxy/redisconn"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

const (
	testDSN         = "postgres://lobbyservice:lobbyservice@127.0.0.1:5432/galaxy?search_path=lobby&sslmode=disable"
	testRedisAddr   = "127.0.0.1:6379"
	testRedisSecret = "secret"
	testUserBaseURL = "http://user.internal:8090"
	testGMBaseURL   = "http://gm.internal:8091"
)

func TestDefaultConfig(t *testing.T) {
	t.Parallel()

@@ -18,7 +29,8 @@ func TestDefaultConfig(t *testing.T) {
	assert.Equal(t, "info", cfg.Logging.Level)
	assert.Equal(t, ":8094", cfg.PublicHTTP.Addr)
	assert.Equal(t, ":8095", cfg.InternalHTTP.Addr)
	assert.Equal(t, 2*time.Second, cfg.Redis.OperationTimeout)
	assert.Equal(t, redisconn.DefaultOperationTimeout, cfg.Redis.Conn.OperationTimeout)
	assert.Equal(t, postgres.DefaultOperationTimeout, cfg.Postgres.Conn.OperationTimeout)
	assert.Equal(t, "gm:lobby_events", cfg.Redis.GMEventsStream)
	assert.Equal(t, "runtime:start_jobs", cfg.Redis.RuntimeStartJobsStream)
	assert.Equal(t, "runtime:stop_jobs", cfg.Redis.RuntimeStopJobsStream)
@@ -35,16 +47,20 @@ func TestDefaultConfig(t *testing.T) {

func TestLoadFromEnvAppliesRequiredFields(t *testing.T) {
	clearAllEnv(t)
	t.Setenv("LOBBY_REDIS_ADDR", "127.0.0.1:6379")
	t.Setenv("LOBBY_USER_SERVICE_BASE_URL", "http://user.internal:8090")
	t.Setenv("LOBBY_GM_BASE_URL", "http://gm.internal:8091")
	t.Setenv("LOBBY_REDIS_MASTER_ADDR", testRedisAddr)
	t.Setenv("LOBBY_REDIS_PASSWORD", testRedisSecret)
	t.Setenv("LOBBY_POSTGRES_PRIMARY_DSN", testDSN)
	t.Setenv("LOBBY_USER_SERVICE_BASE_URL", testUserBaseURL)
	t.Setenv("LOBBY_GM_BASE_URL", testGMBaseURL)

	cfg, err := LoadFromEnv()
	require.NoError(t, err)

	assert.Equal(t, "127.0.0.1:6379", cfg.Redis.Addr)
	assert.Equal(t, "http://user.internal:8090", cfg.UserService.BaseURL)
	assert.Equal(t, "http://gm.internal:8091", cfg.GM.BaseURL)
	assert.Equal(t, testRedisAddr, cfg.Redis.Conn.MasterAddr)
	assert.Equal(t, testRedisSecret, cfg.Redis.Conn.Password)
	assert.Equal(t, testDSN, cfg.Postgres.Conn.PrimaryDSN)
	assert.Equal(t, testUserBaseURL, cfg.UserService.BaseURL)
	assert.Equal(t, testGMBaseURL, cfg.GM.BaseURL)
}

func TestLoadFromEnvMissingRequiredFields(t *testing.T) {
@@ -52,21 +68,48 @@ func TestLoadFromEnvMissingRequiredFields(t *testing.T) {

	_, err := LoadFromEnv()
	require.Error(t, err)
	require.Contains(t, err.Error(), "redis addr must not be empty")
	require.Contains(t, err.Error(), "LOBBY_REDIS_MASTER_ADDR")
}

func TestLoadFromEnvRejectsDeprecatedRedisVars(t *testing.T) {
	tests := []struct {
		name    string
		envName string
	}{
		{name: "TLS_ENABLED", envName: "LOBBY_REDIS_TLS_ENABLED"},
		{name: "USERNAME", envName: "LOBBY_REDIS_USERNAME"},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			clearAllEnv(t)
			t.Setenv("LOBBY_REDIS_MASTER_ADDR", testRedisAddr)
			t.Setenv("LOBBY_REDIS_PASSWORD", testRedisSecret)
			t.Setenv("LOBBY_POSTGRES_PRIMARY_DSN", testDSN)
			t.Setenv("LOBBY_USER_SERVICE_BASE_URL", testUserBaseURL)
			t.Setenv("LOBBY_GM_BASE_URL", testGMBaseURL)
			t.Setenv(tt.envName, "anything")

			_, err := LoadFromEnv()
			require.Error(t, err)
			require.Contains(t, err.Error(), tt.envName)
		})
	}
}

func TestLoadFromEnvOverrides(t *testing.T) {
	clearAllEnv(t)
	t.Setenv("LOBBY_REDIS_ADDR", "127.0.0.1:6379")
	t.Setenv("LOBBY_USER_SERVICE_BASE_URL", "http://user.internal:8090")
	t.Setenv("LOBBY_GM_BASE_URL", "http://gm.internal:8091")
	t.Setenv("LOBBY_REDIS_MASTER_ADDR", testRedisAddr)
	t.Setenv("LOBBY_REDIS_PASSWORD", testRedisSecret)
	t.Setenv("LOBBY_POSTGRES_PRIMARY_DSN", testDSN)
	t.Setenv("LOBBY_USER_SERVICE_BASE_URL", testUserBaseURL)
	t.Setenv("LOBBY_GM_BASE_URL", testGMBaseURL)

	t.Setenv("LOBBY_SHUTDOWN_TIMEOUT", "12s")
	t.Setenv("LOBBY_LOG_LEVEL", "debug")
	t.Setenv("LOBBY_PUBLIC_HTTP_ADDR", "127.0.0.1:9001")
	t.Setenv("LOBBY_INTERNAL_HTTP_ADDR", "127.0.0.1:9002")
	t.Setenv("LOBBY_REDIS_DB", "5")
	t.Setenv("LOBBY_REDIS_TLS_ENABLED", "true")
	t.Setenv("LOBBY_REDIS_OPERATION_TIMEOUT", "300ms")
	t.Setenv("LOBBY_GM_EVENTS_STREAM", "alt:gm_events")
	t.Setenv("LOBBY_NOTIFICATION_INTENTS_STREAM", "alt:intents")
	t.Setenv("LOBBY_ENROLLMENT_AUTOMATION_INTERVAL", "45s")
@@ -80,21 +123,22 @@ func TestLoadFromEnvOverrides(t *testing.T) {
	assert.Equal(t, "debug", cfg.Logging.Level)
	assert.Equal(t, "127.0.0.1:9001", cfg.PublicHTTP.Addr)
	assert.Equal(t, "127.0.0.1:9002", cfg.InternalHTTP.Addr)
	assert.Equal(t, 5, cfg.Redis.DB)
	assert.True(t, cfg.Redis.TLSEnabled)
	assert.Equal(t, 5, cfg.Redis.Conn.DB)
	assert.Equal(t, 300*time.Millisecond, cfg.Redis.Conn.OperationTimeout)
	assert.Equal(t, "alt:gm_events", cfg.Redis.GMEventsStream)
	assert.Equal(t, "alt:intents", cfg.Redis.NotificationIntentsStream)
	assert.Equal(t, 45*time.Second, cfg.EnrollmentAutomation.Interval)
	assert.Equal(t, 15*time.Minute, cfg.PendingRegistration.Interval)
	assert.Equal(t, "galaxy-lobby-test", cfg.Telemetry.ServiceName)
	assert.NotNil(t, cfg.Redis.TLSConfig())
}

func TestLoadFromEnvInvalidDuration(t *testing.T) {
	clearAllEnv(t)
	t.Setenv("LOBBY_REDIS_ADDR", "127.0.0.1:6379")
	t.Setenv("LOBBY_USER_SERVICE_BASE_URL", "http://user.internal:8090")
	t.Setenv("LOBBY_GM_BASE_URL", "http://gm.internal:8091")
	t.Setenv("LOBBY_REDIS_MASTER_ADDR", testRedisAddr)
	t.Setenv("LOBBY_REDIS_PASSWORD", testRedisSecret)
	t.Setenv("LOBBY_POSTGRES_PRIMARY_DSN", testDSN)
	t.Setenv("LOBBY_USER_SERVICE_BASE_URL", testUserBaseURL)
	t.Setenv("LOBBY_GM_BASE_URL", testGMBaseURL)
	t.Setenv("LOBBY_SHUTDOWN_TIMEOUT", "not-a-duration")

	_, err := LoadFromEnv()
@@ -153,7 +197,8 @@ func TestRedisConfigValidate(t *testing.T) {
	t.Parallel()

	base := DefaultConfig().Redis
	base.Addr = "127.0.0.1:6379"
	base.Conn.MasterAddr = testRedisAddr
	base.Conn.Password = testRedisSecret
	require.NoError(t, base.Validate())

	tests := []struct {
@@ -161,10 +206,10 @@ func TestRedisConfigValidate(t *testing.T) {
		mutate  func(*RedisConfig)
		wantErr string
	}{
		{name: "empty addr", mutate: func(cfg *RedisConfig) { cfg.Addr = "" }, wantErr: "addr must not be empty"},
		{name: "bad addr", mutate: func(cfg *RedisConfig) { cfg.Addr = "weird" }, wantErr: "must use host:port"},
		{name: "negative db", mutate: func(cfg *RedisConfig) { cfg.DB = -1 }, wantErr: "must not be negative"},
		{name: "zero op timeout", mutate: func(cfg *RedisConfig) { cfg.OperationTimeout = 0 }, wantErr: "operation timeout"},
		{name: "empty master addr", mutate: func(cfg *RedisConfig) { cfg.Conn.MasterAddr = "" }, wantErr: "master addr"},
		{name: "empty password", mutate: func(cfg *RedisConfig) { cfg.Conn.Password = "" }, wantErr: "password"},
		{name: "negative db", mutate: func(cfg *RedisConfig) { cfg.Conn.DB = -1 }, wantErr: "must not be negative"},
		{name: "zero op timeout", mutate: func(cfg *RedisConfig) { cfg.Conn.OperationTimeout = 0 }, wantErr: "operation timeout"},
		{name: "empty gm stream", mutate: func(cfg *RedisConfig) { cfg.GMEventsStream = "" }, wantErr: "gm events stream"},
		{name: "zero gm block", mutate: func(cfg *RedisConfig) { cfg.GMEventsReadBlockTimeout = 0 }, wantErr: "gm events read block timeout"},
		{name: "empty start jobs", mutate: func(cfg *RedisConfig) { cfg.RuntimeStartJobsStream = "" }, wantErr: "runtime start jobs"},
@@ -188,6 +233,18 @@ func TestRedisConfigValidate(t *testing.T) {
	}
}

func TestPostgresConfigValidate(t *testing.T) {
	t.Parallel()

	base := DefaultConfig().Postgres
	base.Conn.PrimaryDSN = testDSN
	require.NoError(t, base.Validate())

	bad := base
	bad.Conn.PrimaryDSN = ""
	require.ErrorContains(t, bad.Validate(), "primary DSN")
}

func TestUserServiceConfigValidate(t *testing.T) {
	t.Parallel()

@@ -255,7 +312,9 @@ func TestConfigValidateLogLevel(t *testing.T) {
	t.Parallel()

	cfg := DefaultConfig()
	cfg.Redis.Addr = "127.0.0.1:6379"
	cfg.Redis.Conn.MasterAddr = testRedisAddr
	cfg.Redis.Conn.Password = testRedisSecret
	cfg.Postgres.Conn.PrimaryDSN = testDSN
	cfg.UserService.BaseURL = "http://u:1"
	cfg.GM.BaseURL = "http://gm:1"
	require.NoError(t, cfg.Validate())
@@ -266,18 +325,6 @@ func TestConfigValidateLogLevel(t *testing.T) {
	require.Contains(t, err.Error(), "slog level")
}

func TestLoadFromEnvBoolParseError(t *testing.T) {
	clearAllEnv(t)
	t.Setenv("LOBBY_REDIS_ADDR", "127.0.0.1:6379")
	t.Setenv("LOBBY_USER_SERVICE_BASE_URL", "http://u:1")
	t.Setenv("LOBBY_GM_BASE_URL", "http://gm:1")
	t.Setenv("LOBBY_REDIS_TLS_ENABLED", "not-bool")

	_, err := LoadFromEnv()
	require.Error(t, err)
	require.Contains(t, err.Error(), "LOBBY_REDIS_TLS_ENABLED")
}

// clearAllEnv unsets every environment variable the config package reads so
// tests can configure their expected values explicitly.
func clearAllEnv(t *testing.T) {
@@ -294,12 +341,19 @@ func clearAllEnv(t *testing.T) {
		internalHTTPReadHeaderTimeoutEnvVar,
		internalHTTPReadTimeoutEnvVar,
		internalHTTPIdleTimeoutEnvVar,
		redisAddrEnvVar,
		redisUsernameEnvVar,
		redisPasswordEnvVar,
		redisDBEnvVar,
		redisTLSEnabledEnvVar,
		redisOperationTimeoutEnvVar,
		"LOBBY_REDIS_MASTER_ADDR",
		"LOBBY_REDIS_REPLICA_ADDRS",
		"LOBBY_REDIS_PASSWORD",
		"LOBBY_REDIS_DB",
		"LOBBY_REDIS_OPERATION_TIMEOUT",
		"LOBBY_REDIS_TLS_ENABLED",
		"LOBBY_REDIS_USERNAME",
		"LOBBY_POSTGRES_PRIMARY_DSN",
		"LOBBY_POSTGRES_REPLICA_DSNS",
		"LOBBY_POSTGRES_OPERATION_TIMEOUT",
		"LOBBY_POSTGRES_MAX_OPEN_CONNS",
		"LOBBY_POSTGRES_MAX_IDLE_CONNS",
		"LOBBY_POSTGRES_CONN_MAX_LIFETIME",
		gmEventsStreamEnvVar,
		gmEventsReadBlockTimeoutEnvVar,
		runtimeStartJobsStreamEnvVar,
@@ -6,6 +6,9 @@ import (
	"strconv"
	"strings"
	"time"

	"galaxy/postgres"
	"galaxy/redisconn"
)

// LoadFromEnv builds Config from environment variables and validates the
@@ -50,21 +53,18 @@ func LoadFromEnv() (Config, error) {
		return Config{}, err
	}

	cfg.Redis.Addr = stringEnv(redisAddrEnvVar, cfg.Redis.Addr)
	cfg.Redis.Username = stringEnv(redisUsernameEnvVar, cfg.Redis.Username)
	cfg.Redis.Password = stringEnv(redisPasswordEnvVar, cfg.Redis.Password)
	cfg.Redis.DB, err = intEnv(redisDBEnvVar, cfg.Redis.DB)
	redisConn, err := redisconn.LoadFromEnv(envPrefix)
	if err != nil {
		return Config{}, err
	}
	cfg.Redis.TLSEnabled, err = boolEnv(redisTLSEnabledEnvVar, cfg.Redis.TLSEnabled)
	if err != nil {
		return Config{}, err
	}
	cfg.Redis.OperationTimeout, err = durationEnv(redisOperationTimeoutEnvVar, cfg.Redis.OperationTimeout)
	cfg.Redis.Conn = redisConn

	pgConn, err := postgres.LoadFromEnv(envPrefix)
	if err != nil {
		return Config{}, err
	}
	cfg.Postgres.Conn = pgConn

	cfg.Redis.GMEventsStream = stringEnv(gmEventsStreamEnvVar, cfg.Redis.GMEventsStream)
	cfg.Redis.GMEventsReadBlockTimeout, err = durationEnv(gmEventsReadBlockTimeoutEnvVar, cfg.Redis.GMEventsReadBlockTimeout)
	if err != nil {
@@ -1,7 +1,11 @@
// Package racenamedirtest exposes the shared behavioural test suite that
// every ports.RaceNameDirectory implementation must pass. The Redis
// every ports.RaceNameDirectory implementation must pass. The PostgreSQL
// adapter and the in-process stub run the same cases so both back ends
// stay behaviourally equivalent.
//
// Subtests run sequentially: the PostgreSQL adapter shares one
// testcontainers instance across the suite and relies on TruncateAll
// between factory invocations, which would race under t.Parallel.
package racenamedirtest

import (
@@ -29,144 +33,111 @@ func Run(t *testing.T, factory Factory) {
	t.Helper()

	t.Run("Canonicalize rejects invalid input", func(t *testing.T) {
		t.Parallel()
		testCanonicalizeRejectsInvalid(t, factory)
	})
	t.Run("Canonicalize is deterministic", func(t *testing.T) {
		t.Parallel()
		testCanonicalizeDeterministic(t, factory)
	})

	t.Run("Check empty directory", func(t *testing.T) {
		t.Parallel()
		testCheckEmpty(t, factory)
	})
	t.Run("Check treats actor as own holder", func(t *testing.T) {
		t.Parallel()
		testCheckActorNotTaken(t, factory)
	})
	t.Run("Check exposes holder and kind to other users", func(t *testing.T) {
		t.Parallel()
		testCheckHolderAndKind(t, factory)
	})

	t.Run("Reserve records new holding", func(t *testing.T) {
		t.Parallel()
		testReserveRecords(t, factory)
	})
	t.Run("Reserve idempotent for same holder same game", func(t *testing.T) {
		t.Parallel()
		testReserveIdempotent(t, factory)
	})
	t.Run("Reserve allows same user across games", func(t *testing.T) {
		t.Parallel()
		testReserveCrossGame(t, factory)
	})
	t.Run("Reserve rejects cross-user same game", func(t *testing.T) {
		t.Parallel()
		testReserveCrossUserSameGame(t, factory)
	})
	t.Run("Reserve rejects cross-user different games", func(t *testing.T) {
		t.Parallel()
		testReserveCrossUserDifferentGames(t, factory)
	})
	t.Run("Reserve rejects invalid name", func(t *testing.T) {
		t.Parallel()
		testReserveInvalidName(t, factory)
	})

	t.Run("ReleaseReservation missing", func(t *testing.T) {
		t.Parallel()
		testReleaseReservationMissing(t, factory)
	})
	t.Run("ReleaseReservation wrong holder", func(t *testing.T) {
		t.Parallel()
		testReleaseReservationWrongHolder(t, factory)
	})
	t.Run("ReleaseReservation clears sole binding", func(t *testing.T) {
		t.Parallel()
		testReleaseReservationClears(t, factory)
	})
	t.Run("ReleaseReservation swallows invalid name", func(t *testing.T) {
		t.Parallel()
		testReleaseReservationInvalidName(t, factory)
	})
	t.Run("ReleaseReservation keeps cross-game holding visible", func(t *testing.T) {
		t.Parallel()
		testReleaseReservationKeepsCrossGame(t, factory)
	})

	t.Run("MarkPendingRegistration promotes reservation", func(t *testing.T) {
		t.Parallel()
		testMarkPendingPromotes(t, factory)
	})
	t.Run("MarkPendingRegistration idempotent same eligible", func(t *testing.T) {
		t.Parallel()
		testMarkPendingIdempotent(t, factory)
	})
	t.Run("MarkPendingRegistration rejects different eligible", func(t *testing.T) {
		t.Parallel()
		testMarkPendingDifferentEligible(t, factory)
	})
	t.Run("MarkPendingRegistration rejects missing reservation", func(t *testing.T) {
		t.Parallel()
		testMarkPendingMissing(t, factory)
	})

	t.Run("ExpirePendingRegistrations empty", func(t *testing.T) {
		t.Parallel()
		testExpirePendingEmpty(t, factory)
	})
	t.Run("ExpirePendingRegistrations releases expired entries", func(t *testing.T) {
		t.Parallel()
		testExpirePendingReleasesExpired(t, factory)
	})
	t.Run("ExpirePendingRegistrations skips future entries", func(t *testing.T) {
		t.Parallel()
		testExpirePendingSkipsFuture(t, factory)
	})
	t.Run("ExpirePendingRegistrations idempotent replay", func(t *testing.T) {
		t.Parallel()
		testExpirePendingIdempotent(t, factory)
	})

	t.Run("Register converts pending to registered", func(t *testing.T) {
		t.Parallel()
		testRegisterConverts(t, factory)
	})
	t.Run("Register idempotent on repeat", func(t *testing.T) {
		t.Parallel()
		testRegisterIdempotent(t, factory)
	})
	t.Run("Register rejects missing pending", func(t *testing.T) {
		t.Parallel()
		testRegisterMissingPending(t, factory)
	})
	t.Run("Register rejects expired pending", func(t *testing.T) {
		t.Parallel()
		testRegisterExpiredPending(t, factory)
	})

	t.Run("List methods partition correctly", func(t *testing.T) {
		t.Parallel()
		testListsPartition(t, factory)
	})

	t.Run("ReleaseAllByUser clears every kind", func(t *testing.T) {
		t.Parallel()
		testReleaseAllByUserClears(t, factory)
	})
	t.Run("ReleaseAllByUser leaves other users intact", func(t *testing.T) {
		t.Parallel()
		testReleaseAllByUserIsolated(t, factory)
	})
	t.Run("ReleaseAllByUser idempotent", func(t *testing.T) {
		t.Parallel()
		testReleaseAllByUserIdempotent(t, factory)
	})

	t.Run("Honors canceled context", func(t *testing.T) {
		t.Parallel()
		testContextCancellation(t, factory)
	})
}