Compare commits

121 Commits

Author SHA1 Message Date
developer c2f811640b Merge pull request 'ui: plan 01-27 done' (#1) from ai/ui-client into main
ui-test / test (push) Failing after 10s
Reviewed-on: https://gitea.dev/developer/galaxy-game/pulls/1
2026-05-13 18:55:13 +00:00
Ilia Denisov 6921c70df7 ui/phase-27: mark stage done after local-ci run 14
ui-test / test (push) Failing after 11s
ui-test / test (pull_request) Failing after 56s
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-13 18:59:00 +02:00
Ilia Denisov bd11cd80da ui/phase-27: root-cause aggregation of duplicate (race, className) rows
Legacy reports list the same `(race, className)` pair across several
roster rows; the engine likewise creates one ShipGroup per arrival.
Both the legacy parser and `TransformBattle` were keyed on shipClass
without summing — only the last row / group's counts survived, so a
protocol's destroy count appeared to exceed the recorded initial
roster. The UI worked around this with phantom-frame logic.

Both parser and engine now SUM `Number`/`NumberLeft` across rows /
groups sharing the same class; the phantom-frame workaround is gone.
KNNTS041 turn 41 planet #7 reconciles: `Nails:pup` 1168 initial −
86 survivors = 1082 destroys.
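The summing fix can be sketched in a few lines (TypeScript with a hypothetical roster-row shape; the real code lives in the Go parser and `TransformBattle`):

```typescript
// Hypothetical roster row shape; field names are illustrative.
type RosterRow = { race: string; className: string; num: number; numLeft: number };

// Aggregate duplicate (race, className) rows by SUMMING counts instead of
// letting the last row win (the old keyed-overwrite bug).
function aggregateRoster(rows: RosterRow[]): RosterRow[] {
  const byKey = new Map<string, RosterRow>();
  for (const row of rows) {
    const key = `${row.race}\u0000${row.className}`;
    const seen = byKey.get(key);
    if (seen) {
      seen.num += row.num; // sum, don't overwrite
      seen.numLeft += row.numLeft;
    } else {
      byKey.set(key, { ...row });
    }
  }
  return [...byKey.values()];
}
```

With summing in place, initial − survivors equals destroys, as the KNNTS041 reconciliation above requires.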

The engine's previously latent nil-map write on `bg.Tech` (would
have panicked on any group with non-empty Tech) is fixed in the same
patch — it blocked the aggregation regression test.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-13 18:52:40 +02:00
Ilia Denisov 2e7478f5ea ui/phase-27: skip phantom frames during play + freeze final layout
Two more KNNTS041 viewer fixes:

1. Phantom-frame fast-forward. `buildFrames` now flags every frame
   whose shot landed on an already-empty defender group as
   `phantom: true`. During play the BattleViewer effect detects a
   phantom frame and chains a 0 ms timer to the next non-phantom,
   so streaks of phantoms (the ~30 frames between shots 224 and
   255, and the 401..414 stretch) collapse from a dead stretch the
   player would otherwise sit through into a single visual tick.
   Step controls and the scrubber can still land on a phantom
   deliberately for protocol inspection.

2. Final-frame layout freeze. `displayFrame` derives from the raw
   `frames[i]` and, on the very last frame when `activeRaceIds`
   shrinks vs the penultimate frame (the killing blow eliminates a
   race), substitutes the penultimate's `remaining` and
   `activeRaceIds` while keeping the current `shotIndex` and
   `lastAction`. The result: the surviving cluster no longer
   reflows onto the planet ring on the very last shot — the user
   sees the killing line + defender flash rendered against the
   picture they saw a moment earlier.
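The fast-forward rule from item 1 can be sketched as a frame-index scan (a minimal sketch assuming a `phantom` flag per frame; the actual viewer chains a 0 ms timer per step):

```typescript
// Minimal sketch of the fast-forward rule, assuming a `phantom` flag per frame.
type Frame = { shotIndex: number; phantom: boolean };

// Next frame the player should actually dwell on: the first non-phantom at or
// after `from`. A streak of phantoms collapses into a single visual tick.
function nextVisibleFrame(frames: Frame[], from: number): number {
  let i = from;
  while (i < frames.length - 1 && frames[i].phantom) i++;
  return i;
}
```

Because the scan is only invoked from the play loop, step controls and the scrubber can still land on a phantom frame deliberately.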

Tests: `phantom-destroy clamp` case extended with `frame.phantom`
flag assertions across the protocol; 644 Vitest cases stay green,
4 Playwright `battle-viewer` cases stay green.

Docs: `ui/docs/battle-viewer-ux.md` documents the fast-forward
behaviour and the final-frame freeze.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-13 18:16:11 +02:00
Ilia Denisov e2aba856b5 ui/phase-27: viewer layout pass + static cluster + duel layout
Layout reshuffle so the scene captures the maximum viewer area:

- Header collapses three rows into one: `back to map` / `back to
  report` on the left, the centred title `Battle on planet <name>
  (#<number>)` (new i18n key `game.battle.header_title`), and the
  frame counter on the right. The wrapper `.active-view` no longer
  renders its own back-row; routes flow through props.
- Viewer drops the `max-width: 880px` cap so on a wide monitor the
  scene scales up across the full active-view-host.
- A drag-seek `<input type="range">` sits between the scene and the
  controls; dragging pauses playback and lands `frameIndex` on the
  chosen shot.
- Speed control is one cycling button: `1x → 2x → 4x → 6x → 1x`.
  The label shows the current speed; the new 6x adds a 67 ms frame
  interval for skimming a long timeline.
- The text protocol log is now collapsible behind a `Log ▲▼`
  toggle in the controls bar. The toggle is its own button; the
  default state stays expanded. Collapsing the log hands the
  remaining height to the scene.
- Numerical list markers (`1. 2. 3.`) are dropped from the log;
  `list-style: none` keeps each row visually clean.

Static cluster + visibility filter:

- `staticBucketsByRace` now locks bucket order, mass, radius and
  local Vogel-spiral positions for the lifetime of the viewer; it
  only re-derives when `report` or the wasm `core` change.
- `renderedByRace` overlays the per-frame `remaining` map and drops
  buckets whose `numLeft` hits zero. The surviving buckets keep
  their slots, so a class emptying never reshuffles the cluster —
  the empty bucket simply disappears.
- A shot whose attacker or defender bucket is no longer visible
  draws no line (phantom shots into already-empty buckets are
  silently skipped, matching the user expectation that pup at 0
  should stop attracting fire visually).
- Race label clamps to a minimum y inside the SVG viewport so
  three-or-more-race layouts with a north anchor never clip the
  top race name off-canvas.
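The overlay-and-filter step can be sketched as follows (TypeScript; bucket shape and keying are illustrative, not the real `renderedByRace` signature):

```typescript
// Sketch of the per-frame overlay: static buckets keep their slots, the frame's
// `remaining` map replaces the counts, and emptied classes simply disappear.
type Bucket = { className: string; x: number; y: number; num: number };

function renderedBuckets(staticBuckets: Bucket[], remaining: Map<string, number>): Bucket[] {
  return staticBuckets
    .map((b) => ({ ...b, num: remaining.get(b.className) ?? b.num }))
    .filter((b) => b.num > 0); // an emptied class vanishes; survivors keep their positions
}
```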

Duel layout (user suggestion):

- `layoutRaces` rotates the radial start angle by 90° when only
  two participants remain, so race 0 lands at 9 o'clock and race 1
  at 3 o'clock. The pair faces off horizontally; neither label
  pushes against the SVG top edge. The existing test for two-race
  positions is updated accordingly.
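The duel rotation reduces to a different start angle in the equal-spacing formula (a sketch; the start-angle and direction conventions here are assumptions, not the real `layoutRaces` code):

```typescript
// Angle assignment sketch: equal radial spacing starting at "north" (-90°),
// except a two-race duel rotates the start by 90° so race 0 lands at
// 9 o'clock (180°) and race 1 at 3 o'clock (0°/360°).
function raceAngles(count: number): number[] {
  const start = count === 2 ? Math.PI : -Math.PI / 2;
  return Array.from({ length: count }, (_, i) => start + (i * 2 * Math.PI) / count);
}
```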

Tests: the existing `layoutRaces` two-race case is rewritten for
the horizontal duel; the `game-shell-stubs` battle case checks the
loading placeholder (back buttons now live in the loaded viewer,
not the wrapper). 644 Vitest cases stay green; 4 Playwright
battle-viewer cases stay green.

Docs: `ui/docs/battle-viewer-ux.md` documents the static cluster /
visibility filter, the duel layout, the scrubber, the cycling
speed button and the collapsible log.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-13 17:38:46 +02:00
Ilia Denisov 17a3afd5e9 ui/phase-27: viewer polish + phantom-destroy clamp
Nine BattleViewer refinements from the latest review pass:

1. Mass radii were uniform in synthetic mode because
   `+layout.svelte` skipped `loadCore()` on the synthetic branch.
   The wasm bridge to `pkg/calc/ship.go` now boots in both modes
   so `computeBattleGroupMass` resolves a real FullMass and
   `radiusForMass` produces a per-battle scale.

2. Phantom-destroy clamp in `buildFrames`. Legacy emitters
   (KNNTS041 planet #7) log many more `Destroyed` lines against a
   group than the group's initial population — at frame 406 of
   2317 the race totals previously hit zero on phantom shots and
   the scene blanked while playback continued silently. We now
   only shrink the per-group remaining count and the race totals
   when the group still has ships. The line still draws on
   phantom frames; only the counters stay sane.

3. Vogel sunflower positions are now reassigned by inward dot
   product before being handed to ranks: the rank-0 bucket — the
   one with the largest initial ship count — always lands at the
   most-inward spiral slot. The previous quarter-step anchor bias
   was too weak; ranks r ≥ 2 routinely overtook rank-0 toward
   the planet. The anchor offset is gone.

4. Bucket order inside a cluster is locked at battle start by
   each bucket's *initial* ship count (`num`), not its live
   `numLeft`. The position of every class circle stays put for
   the whole battle; only the label number changes as ships die.

5. Shot line + defender flash blink on a per-frame timer during
   play. The line stays on for the first 90 % of frame duration,
   off for the last 10 %, so two consecutive shots from the same
   attacker on the same defender look like two distinct pulses.
   On pause the line and flash stay drawn for inspection.

6. The defender's class circle now flashes red (destroyed) or
   green (shielded) in sync with the shot line, so the eye
   catches *who* was hit, not just where the line lands.

7. Battle log rows are buttons. Click / Enter / Space pauses
   playback and seeks to that shot. The list also auto-scrolls
   the current row into view so the highlight does not race off
   the bottom on long battles.

8. Race labels now sit above the cloud's bounding top instead of
   a fixed offset, so a dense cluster does not swallow its own
   race name.

9. Planet glyph + label switch to neutral grey
   (`#2a2f40` / `#4a5066` / `#6d7388`), keeping the planet "in the
   background" rather than competing with the combatants.
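The phantom-destroy clamp of item 2 is essentially a guarded decrement (TypeScript sketch with illustrative map keys, not the real `buildFrames` internals):

```typescript
// Clamp sketch: only decrement when the defender group still has ships, so
// phantom "Destroyed" lines cannot drive the counters below zero.
type Remaining = Map<string, number>;

function applyDestroy(
  remaining: Remaining,
  groupKey: string,
  raceTotals: Map<string, number>,
  raceId: string,
): void {
  const left = remaining.get(groupKey) ?? 0;
  if (left <= 0) return; // phantom shot: the line still draws, counters stay put
  remaining.set(groupKey, left - 1);
  raceTotals.set(raceId, (raceTotals.get(raceId) ?? 0) - 1);
}
```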

Step-back icon switched to `◀︎◀︎` to mirror step-forward.

Tests: two new Vitest cases cover the phantom-destroy clamp
(single-race wipe, mixed-class race survives a class wipe). The
existing 642 Vitest tests stay green; all four `battle-viewer`
Playwright cases pass.

Docs: `ui/docs/battle-viewer-ux.md` rewrites the cluster section
(locked order + Vogel reassignment), adds Playback Details (blink
+ flash semantics), and a Phantom Destroys section explaining the
clamp.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-13 16:44:46 +02:00
Ilia Denisov 8c260f8715 ui/phase-27: mass-based circles + cloud cluster + height fit
Three Phase-27 BattleViewer refinements on top of the radial scene:

1. Height fit. The viewer is pinned to `calc(100dvh - 80px)` so it
   never pushes the in-game shell past the viewport. `.active-view`
   gains `overflow: hidden` + flex column; `.viewer` becomes a
   `flex: 1` child; the always-visible text log shrinks to a 30 dvh
   ceiling with its own scroll. A global `body { margin: 0 }`
   reset (added to `app.html`) plugs the 16 px the browser's
   default body margin used to leak.

2. Mass-based ship-class circles. New `lib/battle-player/mass.ts`
   carries the radius formula and the per-battle FullMass compute:
   `MIN_RADIUS + (MAX_RADIUS − MIN_RADIUS) * sqrt(mass / max)`,
   clamped to `[6, 24] px`. FullMass goes through the existing
   wasm bridge (`emptyMass` → `carryingMass` → `fullMass`) — no
   new wire fields. The viewer page resolves a
   `(race, className) → ShipClassRef` lookup from the parent
   GameReport's `localShipClass` + `otherShipClass` tables and
   passes it to the viewer via context. Unknown class or
   degenerate (weapons/armament) params fall back to MAX_RADIUS
   so the bucket stays visible.

3. Cloud cluster layout. Cluster key shifts from per-group
   `g.key` to `(raceId, className)` so tech-variants of the same
   hull collapse into one visual bucket. The horizontal
   classCircleX row is replaced by a Vogel sunflower spiral in
   the local `(u, v)` basis — `u` points from the race anchor to
   the planet, `v` is `u` rotated 90° clockwise. Buckets are
   sorted by NumberLeft desc; the cluster anchor is pushed inward
   by a quarter step so rank-0 sits closest to the planet. The
   step is adaptive (`min(baseStep, MAX_CLUSTER_RADIUS / sqrt(N))`)
   so clusters with many classes do not spill into neighbours.
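The radius formula from item 2 translates directly (the constants and fallback follow the commit message; the function name mirrors the one cited in the tests below):

```typescript
// Radius formula as stated: MIN + (MAX − MIN) * sqrt(mass / max), clamped to
// [6, 24] px. Unknown or degenerate inputs fall back to MAX_RADIUS so the
// bucket stays visible.
const MIN_RADIUS = 6;
const MAX_RADIUS = 24;

function radiusForMass(mass: number, maxMass: number): number {
  if (!(mass >= 0) || !(maxMass > 0)) return MAX_RADIUS; // degenerate fallback
  const r = MIN_RADIUS + (MAX_RADIUS - MIN_RADIUS) * Math.sqrt(mass / maxMass);
  return Math.min(MAX_RADIUS, Math.max(MIN_RADIUS, r));
}
```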

Tests:
- Vitest: `radiusForMass` covering zero / max / quarter-mass /
  out-of-range cases (6 cases).
- Playwright: new `battle-viewer.spec.ts` case asserts
  `document.documentElement.scrollHeight - window.innerHeight ≤ 4`
  at a 1280×720 desktop viewport. The existing fixture gains
  `localShipClass` + `otherShipClass` so the lookup has data to
  render proportional circles.

Docs: `ui/docs/battle-viewer-ux.md` rewrites the "Radial scene"
section (cloud layout, mass-based radius, height fit) and adds
a "Height fit" subsection. `docs/FUNCTIONAL.md` §6.5 (+ ru
mirror) get the one-line story about per-mass sizing, cluster
aggregation, and the viewport-locked layout.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-13 15:51:31 +02:00
Ilia Denisov b23649059f legacy-report: parse battles + envelope JSON output
Side activity on top of Phase 27: the legacy-report tool now extracts
the "Battle at (#N) Name" / "Battle Protocol" blocks the parser used
to skip. Both the per-battle summary (Report.Battle: []BattleSummary)
and the full BattleReport (rosters + protocol) flow through.

Parser:
- new sectionBattle / sectionBattleProtocol states, with handle()
  trapping the per-race "<Race> Groups" sub-headers so the roster
  stays attributed to the right race;
- parseBattleHeader extracts (planet, planetName) from
  "Battle at (#NN) <Name>";
- parseBattleRosterRow maps the 10-token row into
  BattleReportGroup; column 8 ("L") is NumberLeft, confirmed against
  KNNTS fixtures;
- parseBattleProtocolLine counts shots and builds
  BattleActionReport entries from the 8-token "X Y fires on A B :
  Destroyed|Shields" lines;
- flushPendingBattle finalises a battle on next "Battle at" or any
  top-level section change and appends both the summary and the
  full report;
- syntheticBattleID(idx) + syntheticBattleRaceID(name) synthesise
  stable UUIDs in dedicated namespaces so re-runs produce
  byte-identical JSON.
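The 8-token protocol line lends itself to a simple token split. A TypeScript sketch of the shape (the real parser is Go; the assumed line layout is `<attRace> <attClass> fires on <defRace> <defClass> : Destroyed|Shields`):

```typescript
type Shot = { attacker: [string, string]; defender: [string, string]; destroyed: boolean };

// Split the protocol line into its 8 tokens and validate the fixed words.
function parseProtocolLine(line: string): Shot | null {
  const t = line.trim().split(/\s+/);
  if (t.length !== 8 || t[2] !== "fires" || t[3] !== "on" || t[6] !== ":") return null;
  return {
    attacker: [t[0], t[1]],
    defender: [t[4], t[5]],
    destroyed: t[7] === "Destroyed",
  };
}
```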

Parse() signature widens to (Report, []BattleReport, error); the
single caller — the CLI — is updated.

CLI emits a v1 envelope:
  { "version": 1, "report": <Report>, "battles": { <uuid>: <BR>, ... } }
Bare-Report JSONs still load on the UI side for backward compat.

UI synthetic loader: loadSyntheticReportFromJSON detects the v1
envelope, decodes the report as before, and forwards every battle
through registerSyntheticBattle so the Battle Viewer resolves any
UUID offline. Pre-envelope JSON files (no `version` field) still
load — the battle registry stays empty for them.
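The envelope detection can be sketched like this (types and function name are illustrative; the real loader is `loadSyntheticReportFromJSON`):

```typescript
// A v1 envelope carries {version, report, battles}; bare-Report JSON has no
// `version` field and loads with an empty battle registry.
type Envelope = { version: number; report: unknown; battles?: Record<string, unknown> };

function splitEnvelope(json: unknown): { report: unknown; battles: Record<string, unknown> } {
  const maybe = json as Partial<Envelope>;
  if (typeof maybe?.version === "number" && maybe.version >= 1 && "report" in (maybe as object)) {
    return { report: maybe.report, battles: maybe.battles ?? {} };
  }
  return { report: json, battles: {} }; // pre-envelope backward compat
}
```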

Docs: legacy-report README moves Battles from "Skipped" to
in-scope, documents the envelope and UUID namespaces;
docs/FUNCTIONAL.md §6.5 (and the ru mirror) note that synthetic
mode is now end-to-end via the envelope.

Tests:
- TestParseBattles covers two battles with full rosters,
  per-shot destroyed/shielded mapping, NumberLeft from column 8,
  deterministic UUIDs across re-parses, and proves a trailing
  top-level section still parses (battle state closes cleanly);
- smokeWant gains a battles count; runSmoke cross-checks
  BattleSummary ↔ BattleReport alignment (id/planet/shots);
- all six real-fixture smoke tests pinned to their `Battle at`
  counts (28, 79, 56, 30, 83, 57);
- Vitest covers the synthetic-report envelope path (battles
  forwarded, missing-battles tolerated, bare-Report backward
  compat);
- KNNTS041.json regenerated against the new parser (existing
  diff was stale w.r.t. Phase 23 anyway; this commit brings it
  in line with the v1 envelope).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-13 14:22:53 +02:00
Ilia Denisov 46996ebf31 docs: clarify BattleSummary.shots scaling in FBS schema
Doc-only nit; triggers a CI rerun on the workflow's path filter to
verify the new Monitor permission lets local-CI polling run without
prompts.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-13 13:03:10 +02:00
Ilia Denisov 37cf34a587 ci: rerun local-ci to verify monitor permission
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-13 13:01:46 +02:00
Ilia Denisov 659ba00ebf ui/phase-27: mark stage done after local-ci run 7
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-13 12:58:34 +02:00
Ilia Denisov 969c0480ba ui/phase-27: battle viewer (radial scene, playback, map markers)
Engine wire change: Report.battle switched from []uuid.UUID to
[]BattleSummary{id, planet, shots} so the map can place battle
markers without N extra fetches. FBS schema + generated Go/TS
regenerated; transcoder + report controller updated; openapi
adds the BattleSummary schema with a freeze test.

Backend gateway forwards engine GET /api/v1/battle/:turn/:uuid as
/api/v1/user/games/{game_id}/battles/{turn}/{battle_id} (handler
plus engineclient.FetchBattle, contract test stub, openapi spec).

UI:
- BattleViewer (lib/battle-player/) is a logically isolated SVG
  radial scene that consumes a BattleReport prop. Planet at the
  centre, races on the outer ring at equal angular spacing, race
  clusters by (race, className) with <class>:<numLeft> labels;
  observer groups (inBattle: false) are not drawn; eliminated
  races drop out and survivors re-distribute on the next frame.
- Shot line per frame: red on destroyed, green otherwise; erased
  on the next frame. Playback controls: play/pause + step ± +
  rewind + 1x/2x/4x speed (400/200/100 ms per frame).
- Page wrapper (lib/active-view/battle.svelte) loads BattleReport
  via api/battle-fetch.ts; synthetic-gameId prefix routes to a
  fixture loader, otherwise REST through the gateway. Always-
  visible <ol> text protocol satisfies the accessibility ask.
- section-battles.svelte links every battle UUID into the viewer.
- map/battle-markers.ts: yellow X cross of 2 LinePrim through the
  corners of the planet's circumscribed square (stroke width
  clamps from 1 px at 1 shot to 5 px at 100+ shots); bombing
  marker is a stroke-only ring (yellow when damaged, red when
  wiped). Wired into state-binding.ts; click handler dispatches
  battle clicks to the viewer and bombing clicks to the matching
  Reports row.
- i18n keys for the viewer in en + ru.
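The marker stroke clamp only pins the endpoints (1 px at 1 shot, 5 px at 100+); the interpolation in between is not stated, so this sketch assumes a linear ramp:

```typescript
// Assumed linear ramp between the stated endpoints; clamped outside [1, 100].
function markerStrokeWidth(shots: number): number {
  const clamped = Math.min(Math.max(shots, 1), 100);
  return 1 + (4 * (clamped - 1)) / 99; // 1 px at 1 shot → 5 px at 100+ shots
}
```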

Docs: ui/docs/battle-viewer-ux.md, FUNCTIONAL.md §6.5 + ru
mirror, ui/PLAN.md Phase 27 decisions + deferred TODOs (push
event, richer class visuals, animated re-distribution).

Tests: Vitest unit (radial layout + timeline frame builder +
marker stroke formula + marker primitives), Playwright e2e for
the viewer (Reports link → viewer, playback step, not-found),
backend engineclient FetchBattle (200 / 404 / bad input), engine
openapi freezes (BattleReport, BattleReportGroup,
BattleActionReport, BattleSummary, Report.battle items).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-13 12:24:20 +02:00
Ilia Denisov 4ffcac00d0 tests, docs: game engine fetch battle api
ui-test / test (push) Failing after 37s
2026-05-13 11:28:28 +02:00
Ilia Denisov a9adbad7ef feat: game engine fetch battle api
ui-test / test (push) Failing after 47s
2026-05-13 10:50:45 +02:00
Ilia Denisov ce8e869731 ui/phase-26: mark stage done after local-ci run 6
ui-test / test (push) Failing after 41s
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-12 00:27:29 +02:00
Ilia Denisov 2d17760a5e ui/phase-26: history mode (turn navigator + read-only banner)
Split GameStateStore into currentTurn (server's latest) and viewedTurn
(displayed snapshot) so history excursions don't corrupt the resume
bookmark or the live-turn bound. Add viewTurn / returnToCurrent /
historyMode rune, plus a game-history cache namespace that stores
past-turn reports for fast re-entry. OrderDraftStore.bindClient takes
a getHistoryMode getter and short-circuits add / remove / move while
the user is viewing a past turn; RenderedReportSource skips the order
overlay in the same case. Header replaces the static "turn N" with a
clickable triplet (TurnNavigator), the layout mounts HistoryBanner
under the header, and visibility-refresh is a no-op while history is
active.
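The history-mode short-circuit in the draft store reduces to a getter-gated guard (class shape and method names here are illustrative, not the real OrderDraftStore API):

```typescript
// Sketch: mutations are refused while the getter reports history mode, so a
// past-turn excursion can never mutate the live order draft.
class OrderDraft {
  private orders: string[] = [];
  constructor(private getHistoryMode: () => boolean) {}

  add(order: string): boolean {
    if (this.getHistoryMode()) return false; // read-only while viewing a past turn
    this.orders.push(order);
    return true;
  }

  count(): number {
    return this.orders.length;
  }
}
```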

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-12 00:13:19 +02:00
Ilia Denisov 070fdc0ee5 update gitattributes
ui-test / test (push) Failing after 38s
2026-05-11 22:18:16 +02:00
Ilia Denisov e98e6bda73 ui/phase-25: mark stage done after local-ci run 5
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 22:07:03 +02:00
Ilia Denisov 2ca47eb4df ui/phase-25: backend turn-cutoff guard + auto-pause + UI sync protocol
Backend now owns the turn-cutoff and pause guards the order tab
relies on: the scheduler flips runtime_status between
generation_in_progress and running around every engine tick, a
failed tick auto-pauses the game through OnRuntimeSnapshot, and a
new game.paused notification kind fans out alongside
game.turn.ready. The user-games handlers reject submits with
HTTP 409 turn_already_closed or game_paused depending on the
runtime state.

UI delegates auto-sync to a new OrderQueue: offline detection,
single retry on reconnect, conflict / paused classification.
OrderDraftStore surfaces conflictBanner / pausedBanner runes,
clears them on local mutation or on a game.turn.ready push via
resetForNewTurn. The order tab renders the matching banners and
the new conflict per-row badge; i18n bundles cover en + ru.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 22:00:16 +02:00
Ilia Denisov bbdcc36e05 ui/phase-24: declare game.turn.ready as JSON-friendly catalog kind
ui-test / test (push) Failing after 40s
TestBuildClientPushEventCoversCatalog required every catalog kind to
encode through a FlatBuffers `preMarshaledEvent`. game.turn.ready
intentionally rides on the JSON fallback because its payload is just
`{game_id, turn}` and the only consumer (Phase 24 UI handler) parses
JSON inline. Make the policy explicit through a jsonFriendlyKinds
allow-list so the test still asserts each kind is covered and a future
producer that picks the wrong encoding fails loudly.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-11 17:27:29 +02:00
Ilia Denisov 5b07bb4e14 ui/phase-24: push events, turn-ready toast, single SubscribeEvents consumer
Wires the gateway's signed SubscribeEvents stream end-to-end:

- backend: emit game.turn.ready from lobby.OnRuntimeSnapshot on every
  current_turn advance, addressed to every active membership, push-only
  channel, idempotency key turn-ready:<game_id>:<turn>;
- ui: single EventStream singleton replaces revocation-watcher.ts and
  carries both per-event dispatch and revocation detection; toast
  primitive (store + host) lives in lib/; GameStateStore gains
  pendingTurn/markPendingTurn/advanceToPending and a persisted
  lastViewedTurn so a return after multiple turns surfaces the same
  "view now" affordance as a live push event;
- mandatory event-signature verification through ui/core
  (verifyPayloadHash + verifyEvent), full-jitter exponential backoff
  1s -> 30s on transient failure, signOut("revoked") on
  Unauthenticated or clean end-of-stream;
- catalog and migration accept the new kind; tests cover producer
  (testcontainers + capturing publisher), consumer (Vitest event
  stream, toast, game-state extensions), and a Playwright e2e
  delivering a signed frame to the live UI.
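The full-jitter backoff named above follows the standard recipe: double a cap from 1 s up to 30 s, then draw the actual delay uniformly from [0, cap]. A minimal sketch:

```typescript
// Full-jitter exponential backoff: cap doubles per attempt (1 s base, 30 s
// ceiling); the delay is drawn uniformly from [0, cap].
function backoffDelayMs(attempt: number, rng: () => number = Math.random): number {
  const capMs = Math.min(30000, 1000 * 2 ** attempt);
  return rng() * capMs;
}
```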

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-11 16:16:31 +02:00
Ilia Denisov 5a2a977dc6 ui/phase-23: mark stage done after local-ci run 2
ui-test / test (push) Failing after 2m11s
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-11 14:41:35 +02:00
Ilia Denisov c58027c034 ui/phase-23: turn-report view with twenty sections and TOC
Replaces the Phase 10 report stub with a scrollable orchestrator that
renders every FBS array as a dedicated section (galaxy summary, votes,
player status, my/foreign sciences, my/foreign ship classes, battles,
bombings, approaching groups, my/foreign/uninhabited/unknown planets,
ships in production, cargo routes, my fleets, my/foreign/unidentified
ship groups). A sticky table of contents (a <select> on mobile),
"back to map" affordance, IntersectionObserver-driven active-section
highlight, and SvelteKit Snapshot-based scroll save/restore round out
the view.

GameReport gains six new fields (players, otherScience, otherShipClass,
battleIds, bombings, shipProductions); decodeReport, the synthetic-
report loader, the e2e fixture builder, and EMPTY_SHIP_GROUPS extend
in lockstep. ~90 new i18n keys land in en + ru together.

The legacy-report parser is extended to populate the new sections from
the dg/gplus text formats (Your Sciences, <Race> Sciences, <Race> Ship
Types, Bombings, Ships In Production). Ships-in-production prod_used
is derived through a new pkg/calc.ShipBuildCost helper; the engine's
controller.ProduceShip refactors to call the same helper without any
behaviour change (engine tests stay unchanged and green). Battles
remain in the parser's Skipped list — the legacy text carries no
stable per-battle UUID.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-11 14:33:56 +02:00
Ilia Denisov 81d8be08b2 phase 22 2026-05-11 11:38:40 +02:00
Ilia Denisov e2a4790f6c ui/phase-22: skip the no-op stance click in the races table
Clicking the already-active WAR/PEACE button still appended a
`setDiplomaticStance` whose `relation` matched the row's current
value. The engine would accept the duplicate harmlessly, but the
order tab inflates with rows that say nothing and every auto-sync
re-ships the redundant payload. Compare against the overlayed
stance (so a queued-but-not-applied change suppresses a re-click
that matches the *intended* state, not just the server snapshot)
and short-circuit when they agree. Mirrors the vote picker, which
already had the same guard.

vitest.config.ts: `mergeConfig` refuses callback-form base
configs, so resolve `vite.config.ts`'s callback with the test
context first and merge the plain object. Surfaced after the
`loadEnv` migration switched the root config to the callback
form.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-11 11:19:57 +02:00
Ilia Denisov c0382117b8 ui: read dev-server config from .env files and add VITE_DEV_HOST opt-in
`vite.config.ts` read `VITE_DEV_PROXY_TARGET` /
`VITE_DEV_GRPC_PROXY_TARGET` straight from `process.env`, so the
gateway-override knob only worked when the variable was exported in
the shell that ran `pnpm dev`. Per-developer `.env.development.local`
files (the documented way to override) were silently ignored by the
config: Vite auto-populates `import.meta.env` for client code from
those files, but the config itself runs in Node and has to call
`loadEnv` explicitly.

Switch the config to the function-form + `loadEnv` so every
`VITE_*` entry in any `.env*` file reaches both client code and the
config. Now adding `VITE_DEV_PROXY_TARGET=http://localhost:18080` to
`.env.development.local` actually retargets the proxy, no shell
gymnastics required.
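The function-form + `loadEnv` pattern looks roughly like this (a sketch of the relevant `vite.config.ts` shape under the variable names above, not the project's actual config; the `VITE_DEV_HOST` handling is simplified):

```ts
import { defineConfig, loadEnv } from "vite";

export default defineConfig(({ mode }) => {
  // loadEnv reads .env, .env.<mode>, .env.<mode>.local, …; the third argument
  // filters the returned map to VITE_-prefixed keys.
  const env = loadEnv(mode, process.cwd(), "VITE_");
  return {
    server: {
      proxy: env.VITE_DEV_PROXY_TARGET
        ? { "/api": { target: env.VITE_DEV_PROXY_TARGET, changeOrigin: true } }
        : undefined,
      host: env.VITE_DEV_HOST === "true" ? true : env.VITE_DEV_HOST || undefined,
    },
  };
});
```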

While there, introduce `VITE_DEV_HOST` as an opt-in for wider
listener binding: unset (default) keeps Vite's loopback-only
behaviour; `true`/`1`/`yes` flips to "all interfaces" (`0.0.0.0` +
IPv6); any other string is passed through verbatim to pin a
specific LAN address. Useful when reaching the dev server through
SSH port forwarding, a VM, or a container requires a non-loopback
bind; it is intentionally opt-in so an unattended `pnpm dev` on a
laptop never exposes the unauthenticated dev surface to the LAN by
accident.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-11 10:46:08 +02:00
Ilia Denisov 5867afd168 local-dev: parameterize host-port mappings via LOCAL_DEV_*_PORT
The compose stack hard-coded host ports (postgres 5433, redis 6380,
mailpit 8025, gateway REST 8080, gateway gRPC 9090) — fine for a
clean dev machine, painful when those ports collide with other
services on the same host (e.g. a `crowdsec` sitting on
127.0.0.1:8080 or a Prometheus instance on :9090).

Every host-port mapping is now `${LOCAL_DEV_*_PORT:-<old-default>}`,
so the defaults match prior behaviour for everyone and a per-host
override is a single environment variable away. `.env` carries the
overrides as commented-out lines so the customisation surface is
discoverable without grepping the compose file. README's
"Port 8080 already in use" troubleshooting entry now points at the
new variables and the optional `docker-compose.override.yml`
workflow.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-11 10:23:42 +02:00
Ilia Denisov 9111dd955a ui/phase-22: races table with stance toggle and vote slot
Adds the Races View in the in-game shell. The table lists every
non-extinct other race with tech levels (percent), totals,
planets, votes received, and a per-row WAR | PEACE segmented
control. A single vote-recipient slot above the table queues a
`CommandRaceVote`; per-row buttons queue `CommandRaceRelation`.
Both commands flow through the existing order draft store with
collapse-by-acceptor (stance) and singleton (vote) rules.

`GameReport` widens with `races`, `myVotes`, `myVoteFor`; the
decoder walks `report.player[]` once for the richer projection.
The optimistic overlay flips stance and vote target immediately;
`votesReceived`, `myVotes`, and the alliance summary stay
server-authoritative — alliance grouping and the 2/3 victory
check are tallied on the server at turn cutoff and explicitly
not surfaced client-side (`rules.txt` keeps foreign races'
outgoing vote targets private).

Includes Vitest component coverage of stance and vote
collapse rules + a Playwright e2e that drives both commands
through the dispatcher route and verifies the gateway saw the
expected `CommandRaceRelation` / `CommandRaceVote` payloads.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-11 01:52:23 +02:00
Ilia Denisov 7a7f2e4b98 chore: claude settings 2026-05-11 01:10:32 +02:00
Ilia Denisov 9c29f03d66 ui/phase-21: make MapView's mounted flag reactive
The renderer-mount effect in `lib/active-view/map.svelte` reads
`mounted` to gate the runSerializedMount call, but the variable was
declared as a plain `let`, not `$state`. On the first navigation to
/map this is benign: the effect's first pass returns early (gameState
still hydrating, `report` null), and once `report` arrives the
effect re-fires — by which point `onMount` has already flipped
`mounted = true`.

On every subsequent return to /map the report is already loaded by
the long-lived gameState in the layout. The effect therefore makes
exactly one pass on the freshly-mounted component, gates on
`mounted === false` (the brand-new instance has not run `onMount`
yet), and never wakes up again because no tracked state changes
afterwards. Symptom: black canvas — fresh DOM, no mount-error
overlay, but Pixi never rebuilt the world on the new canvas.

Convert `mounted` to `$state(false)` so flipping it true inside
`onMount` triggers the effect's second pass, which now finds all
preconditions satisfied and proceeds to `runSerializedMount`. The
detailed lifecycle reasoning is preserved as a code comment so the
next reader can see why this one variable must be reactive.

Add tests/e2e/map-roundtrip.spec.ts: navigates /map → {report,
ship-class designer, science designer, mail} → /map for each
non-map view, then asserts the renderer republished primitives onto
the DEV `__galaxyDebug.getMapPrimitives()` surface. The pre-fix
build failed every variant; the patch lands all four green.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 22:58:32 +02:00
Ilia Denisov 85ea6f413e local-dev: thread pkg/calc into the dockerfile build context
Commit 408097e ('feat: move func to calc package') moved a helper
into pkg/calc and made pkg/util/map.go import galaxy/calc, but the
local-dev backend / gateway Dockerfiles never picked up the new
module. The synthesised go.work has no replace directive for
galaxy/calc and the build context never copies pkg/calc, so any
backend / gateway image rebuild fails with

    galaxy/calc@v0.0.0: malformed module path "galaxy/calc": missing
    dot in first path element

Add the missing COPY, the matching `use ./pkg/calc` line, and the
`galaxy/calc v0.0.0 => ./pkg/calc` replace to both local-dev
Dockerfiles. The local-dev stack now rebuilds cleanly and the
auto-heal flow (prune-broken-engines + pre-bootstrap reconciler
tick) finishes by spawning a fresh engine container for the new
sandbox game.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 22:45:54 +02:00
Ilia Denisov ff53cc0ad3 local-dev: prune broken engines on rebuild + document one-time bake
`make rebuild` runs `compose build --no-cache backend gateway` plus
a fresh `up -d --wait`. It must therefore also reap any engine
container whose bind-mount source went away during host downtime,
otherwise the new backend image boots into a stack with the same
orphan that triggered the heal flow in the first place.

Also extend the troubleshooting note: pulling the heal-cycle fix
requires one explicit `make rebuild` so the backend image picks up
the pre-bootstrap reconciler tick. Without that, `make up` runs
the new Makefile target but the legacy backend cannot follow
through, and the developer is left staring at a `cancelled`
sandbox with no running replacement.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 22:40:27 +02:00
Ilia Denisov edc9709bd6 local-dev: auto-recreate engine containers when bind-mount disappears
After a host reboot macOS clears /private/tmp, so the per-game
bind-mount source under /tmp/galaxy-game-state/<uuid> vanishes and
Docker refuses to restart the long-lived engine container under
`restart: unless-stopped`. The container then sits in `exited` state
and the dev sandbox is unreachable until the developer manually
removes it and runs `make up` twice.

Fix `make -C tools/local-dev up` to heal this in one cycle:

1. `prune-broken-engines` (new make target wired into `up`) walks
   every container labelled `galaxy-game-engine` and removes the ones
   not in `running` / `restarting` state. Healthy long-lived
   containers survive normal up/down cycles untouched.
2. The backend now runs a single reconciliation pass before the
   dev-sandbox bootstrap (`Reconciler().Tick(ctx)` in main.go).
   Without it, bootstrap would reuse the soon-to-be-cancelled game
   that the periodic ticker is about to mark `removed`. The pre-tick
   cascades the orphan runtime row through markRemoved → lobby
   cancel before bootstrap purges terminal sandbox games and creates
   a fresh one — so a single `make up` lands a working sandbox with
   a brand new state directory.

README troubleshooting section documents the symptom and the
recovery so the bind-mount-source error message is greppable.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 22:27:31 +02:00
Ilia Denisov 5a3bec5acd ui/phase-21: bump done marker to local-ci run 30
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 22:10:01 +02:00
Ilia Denisov e55355a2cf ui/phase-21: harden applyOrderOverlay against HMR-stale localScience
Fixes a black-canvas regression on /map after creating a science in
DEV: when Vite hot-reloads the decoder bump that adds the
`localScience` field, the live in-memory `gameState.report` keeps
its older shape with no such field, so the overlay's
`[...report.localScience]` throws inside the reactive getter and
silently aborts the map view's `$effect`. The fix wraps the spread
and the final return in `?? []` defaults — and matches the
ship-class branches for symmetry — so the overlay stays well-defined
for any partial report shape upstream consumers may carry across an
HMR boundary.

Also adds order-overlay regression tests covering the createScience /
removeScience branches plus the explicit HMR-stale shape, and a
Playwright e2e (sciences-map-regress.spec.ts) replaying the
user-reported flow: /map → /designer/science → save → /map, asserting
no map-mount-error overlay and no console errors.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 22:00:03 +02:00
Ilia Denisov f674c86e4b ui/phase-21: mark stage as done after local-ci run 29
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 21:42:29 +02:00
Ilia Denisov 7bea22b0b5 ui/phase-21: sciences CRUD list, designer, and production-picker integration
Lights up the player-defined sciences feature: a table view with sort
and filter, a designer with four percent inputs and a strict
sum-equals-100 gate, and a Research-sub-row integration so the
planet production picker lists the user's sciences alongside the
four tech buttons. Phase 21 decisions are baked back into ui/PLAN.md
(no UpdateScience on the wire — write-once via createScience +
removeScience; percentages instead of fractions; sciences live under
the existing Research segment).
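
The strict sum-equals-100 gate on the designer's four percent inputs
can be sketched as follows (a hypothetical validator, not the repo's
actual form code):

```typescript
// Accept exactly four whole-number percentages in [0, 100] that sum
// to precisely 100; anything else keeps the Save button disabled.
function percentsValid(percents: number[]): boolean {
  return (
    percents.length === 4 &&
    percents.every((p) => Number.isInteger(p) && p >= 0 && p <= 100) &&
    percents.reduce((a, b) => a + b, 0) === 100
  );
}
```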

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 21:32:37 +02:00
Ilia Denisov 0509f2cde2 ui/phase-20: bump done marker to local-ci run 28
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 18:06:20 +02:00
Ilia Denisov 54733bfb14 ui/phase-20: lock after Send + dashed tracks for in-flight & pending sends
Send joins Modernize / Dismantle / Transfer as a lockable command:
once any of the four lands in the draft for a group, every action
button on its inspector is disabled with a "command pending"
tooltip and the banner names the queued kind. Load / Unload /
Split / Join Fleet stay non-locking — they stack legitimately on
the engine side.

Two dashed overlays now run alongside the cargo-route arrows:

- Yellow dashed track for own in-space groups, drawn from the
  origin planet to the destination (matches the in-space point
  colour so the eye reads both as one entity).
- Green dashed track for every wire-valid sendShipGroup command
  in the order draft, drawn from the source group's orbit planet
  to the chosen destination. Disappears when the command is
  removed from the order tab, when the engine rejects it, or
  when the group has left orbit (in-space track replaces it).

Both tracks are wrap-aware via torusShortestDelta and never
participate in hit-test.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 17:55:43 +02:00
Ilia Denisov 2d201537ee ui/phase-20: bump done marker to local-ci run 27
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 17:30:46 +02:00
Ilia Denisov ac14eaff10 ui/phase-20: pick-first Send + lock after Modernize/Dismantle/Transfer
Send no longer carries a destination control inside the form: a
click on the action drops the inspector straight into map-pick
mode, and the form (ship count + confirm) only mounts after the
player chooses a destination. Cancelling the picker leaves no
form behind.

A queued Modernize / Dismantle / Transfer for a given group
locks every action button on its inspector and surfaces a banner
that points the player at the order list. Cancelling the queued
entry from the order tab releases the lock on the next render —
the derivation watches draft.commands directly. Send / Load /
Unload / Split / Join Fleet do not lock; Send is naturally
followed by an out-of-orbit state at turn cutoff, the rest can
stack legitimately.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 17:20:48 +02:00
Ilia Denisov de824dfc9a ui/phase-20: mark stage as done after local-ci run 26
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 16:38:16 +02:00
Ilia Denisov 3626998a33 ui/phase-20: ship-group inspector actions
Eight ship-group operations land on the inspector behind a single
inline-form panel: split, send, load, unload, modernize, dismantle,
transfer, join fleet. Each action either appends a typed command to
the local order draft or surfaces a tooltip explaining the
disabled state. Partial-ship operations emit an implicit
breakShipGroup command before the targeted action so the engine
sees a clean (Break, Action) pair on the wire.
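
The implicit-break rule can be sketched as a hypothetical helper
(command shapes here are illustrative, not the repo's wire types):

```typescript
// Acting on fewer ships than the group holds prepends a breakShipGroup
// command, so the engine always receives a clean (Break, Action) pair.
type Cmd = { kind: string; [k: string]: unknown };

function withImplicitBreak(
  groupId: string,
  groupSize: number,
  shipCount: number,
  action: Cmd,
): Cmd[] {
  if (shipCount >= groupSize) return [action]; // whole group: no break needed
  return [{ kind: "breakShipGroup", groupId, count: shipCount }, action];
}
```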

`pkg/calc.BlockUpgradeCost` migrates from
`game/internal/controller/ship_group_upgrade.go` so the calc
bridge can wrap a pure pkg/calc formula; the controller now
imports it. The bridge surfaces the function as
`core.blockUpgradeCost`, which the inspector calls once per ship
block to render the modernize cost preview.

`GameReport.otherRaces` is decoded from the report's player block
(non-extinct, ≠ self) and feeds the transfer-to-race picker. The
planet inspector's stationed-ship rows become clickable for own
groups so the actions panel is reachable from the standard click
flow (the renderer continues to hide on-planet groups).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 16:27:55 +02:00
Ilia Denisov f7109af55c ui/phase-19: torus-aware incoming track + on-planet groups in inspector
Two follow-up fixes after the initial Phase 19 landing:

  1. The IncomingGroup dashed trajectory was drawn between raw
     (x1, y1) and (x2, y2) world coordinates. On torus wrap mode
     this took the long way around when origin and destination
     sat near opposite seams. The line now picks endpoints via
     `torusShortestDelta` so the segment crosses the seam when
     that's the shorter visual path. The interpolated clickable
     point follows the same unwrapped vector. The same helper
     fixes the in-hyperspace position for local / foreign groups.
  2. On-planet local and foreign groups previously rendered as
     small offset points next to every populated planet, which
     turned the canvas into noise as soon as a player held more
     than a handful of planets. The map no longer renders any
     in-orbit group; the planet inspector grows a compact
     "stationed ship groups" subsection
     (`lib/inspectors/planet/ship-groups.svelte`) that lists
     each in-orbit group as a row of `<race> · <class> · <count>
     ships · <mass>`. Race attribution: LocalGroup → the player's
     race, OtherGroup on a foreign-owned planet → the planet's
     owner, OtherGroup elsewhere → "foreign" placeholder. Rows
     are non-interactive in Phase 19; Phase 21+ will deep-link
     into the ship-groups table view with a (planet, race) filter.

Tests:
  - `state-binding-groups.test.ts` swaps the on-planet rendering
    expectation for the new "no map primitive" rule, and adds a
    regression that asserts the incoming line crosses the torus
    seam via `torusShortestDelta`.
  - new `inspector-planet-ship-groups.test.ts` covers row
    composition, the destination-mismatch filter, the
    in-hyperspace exclusion, the foreign-planet owner fallback,
    and the empty-state collapse.
  - `inspector-planet.test.ts` and `inspector-ship-group.spec.ts`
    pick up the new prop chain (`localShipGroups`,
    `otherShipGroups`, `localRace`).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 15:08:41 +02:00
Ilia Denisov d63fe44618 pkg/calc: fix Deltas wrap on rectangular maps + add signed ShortestDelta
The pre-existing `Deltas` helper used the height to wrap the x-axis,
which silently produced wrong values on any rectangular galaxy
(`w != h`). Square galaxies — the only configuration the engine
ships today — masked the bug, so it stayed in tree.

`Deltas` is now a thin wrapper around the new `ShortestDelta(a, b,
size)`, which returns the signed per-axis shortest delta on a 1-D
circle (range `(-size/2, size/2]`). The signed flavour is what the
Phase 19 ship-group renderer needs to draw an IncomingGroup
trajectory across the torus seam; `Deltas` continues to return the
pair of absolute deltas for distance computation.
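
The signed shortest-delta math can be sketched in the UI-side shape
(a hypothetical mirror of the Go helper, matching the stated range
`(-size/2, size/2]` and the positive tie-break):

```typescript
// Signed shortest delta from a to b on a 1-D circle of circumference
// `size`. Result lies in (-size/2, size/2]; the half-circumference
// tie resolves to the positive direction.
function shortestDelta(a: number, b: number, size: number): number {
  let d = (b - a) % size; // raw delta, may go the long way around
  if (d <= -size / 2) d += size; // wrap deltas that are too far negative
  if (d > size / 2) d -= size; // wrap deltas that are too far positive
  return d;
}
```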

Adds `pkg/calc/map_test.go` with table-driven coverage for both
helpers, including a regression that exercises the rectangular
case the bug was hiding behind, and the half-circumference
tie-break.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 15:08:16 +02:00
Ilia Denisov 408097e3aa feat: move func to calc package 2026-05-10 14:55:14 +02:00
Ilia Denisov 92413575f3 ui/phase-19: mark stage as done after local-ci run 24
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 13:48:30 +02:00
Ilia Denisov 3694847792 ui/phase-19: seed an authenticated session in the synthetic-report e2e
The root +layout.svelte redirects anonymous traffic to /login, so a
fresh CI browser context never gets to render the lobby's
DEV-gated synthetic-report section — the previous spec relied on
leftover session state in the local browser and silently broke on
clean runners (local-ci run 23).

Bootstrap the session through /__debug/store before navigating to
/lobby: load a device keypair, set a deterministic device session
id. The synthetic flow itself still bypasses the gateway entirely;
the seed only ensures `session.status === "authenticated"` so the
layout guard lets the lobby through.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 13:39:08 +02:00
Ilia Denisov 86e77efe39 ui/phase-19: read-only ship-group inspector + sheet + tab dispatch
Closes Phase 19's UI surface. The inspector dispatches on the
selection variant: local / other groups render class, count, the
four tech levels, mass, cargo (type + amount when loaded),
location (planet name on-orbit, from/to/distance in hyperspace),
and — for local groups only — fleet membership + state. Incoming
groups surface origin / destination / distance / speed and the
inline ETA = ceil(distance / speed); zero speed collapses to the
designer's existing "—" placeholder. Unidentified groups render
just the (x, y) coordinates and the no-data hint, mirroring the
unidentified planet treatment.
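
The inline ETA rule above can be sketched as a hypothetical formatter
(the "—" placeholder matches the designer's existing convention):

```typescript
// ETA in turns is ceil(distance / speed); zero (or negative) speed
// collapses to the "—" placeholder instead of dividing by zero.
function formatEta(distance: number, speed: number): string {
  if (speed <= 0) return "—";
  return String(Math.ceil(distance / speed));
}
```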

Layout / inspector-tab plumbing:
  - inspector-tab.svelte derives selectedShipGroup against the
    rendered report and mounts <ShipGroup /> when the planet
    branch doesn't match. Stale refs (an index that no longer
    resolves after a turn refresh) collapse cleanly to the empty
    state.
  - +layout.svelte mounts <ShipGroupSheet /> alongside the
    existing planet sheet on mobile; both share the
    `effectiveTool === "map"` guard and clear-on-close.

i18n: en + ru both grow ~30 keys under
`game.inspector.ship_group.*`. Adding a key to one without the
other is a TS error (TranslationKey is `keyof typeof en`), so the
Russian mirror stays mandatory.

Tests:
  - inspector-ship-group.test.ts exercises every variant —
    on-planet local, in-hyperspace local, cargo-loaded local,
    foreign, incoming with ETA, incoming with zero speed,
    unidentified, plus the missing-planet `#NN` fallback.
  - tests/e2e/inspector-ship-group.spec.ts is a smoke spec that
    drives the DEV-only synthetic-report loader from /lobby
    through navigation to /games/synthetic-XXX/map.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 13:24:17 +02:00
Ilia Denisov 676556db4e ui/phase-19: ship-group decoder + map binding + selection store
Wires Phase 19's data and rendering layers without yet adding the
inspector UI:

  - game-state.ts grows ReportLocalShipGroup / ReportOtherShipGroup
    / ReportIncomingShipGroup / ReportUnidentifiedShipGroup /
    ReportLocalFleet types and walks the matching FlatBuffers
    vectors (LocalGroup, OtherGroup, IncomingGroup,
    UnidentifiedGroup, LocalFleet) inside decodeReport. The Tech
    map is folded into the fixed-shape ShipGroupTech struct;
    cargo strings normalise to the closed CargoLoadType | "NONE"
    union; UUIDs come back as canonical 36-char strings.
  - synthetic-report.ts mirrors the new fields so the DEV-only
    lobby loader can feed JSON produced by legacy-report-to-json
    straight into the live UI surface.
  - selection.svelte.ts widens its discriminated union with a
    `kind: "shipGroup"` branch carrying a ShipGroupRef
    (local UUID / other / incoming / unidentified by index).
  - world.ts adds Style.strokeDashPx and render.ts.drawLine
    honours it via manual segmentation (PixiJS v8 has no native
    dash API). Ignored on points and circles.
  - state-binding.ts now returns { world, hitLookup }: the
    hit-lookup map keys every primitive id back to a concrete
    HitTarget so the click handler can dispatch to selectPlanet
    or selectShipGroup. Ship-group primitives live in a separate
    ship-groups.ts that emits one point per local / other /
    unidentified group, plus a dashed origin→destination line +
    clickable point per incoming group. Position is interpolated
    along the trajectory for in-hyperspace groups.
  - map.svelte threads the hitLookup into handleMapClick.
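
The `strokeDashPx` manual segmentation noted above can be sketched as
a hypothetical standalone helper (not the repo's actual render.ts
code): split the line into dash/gap strides and emit only the dash
halves, since PixiJS v8 offers no native dash API.

```typescript
type Pt = { x: number; y: number };

// Walk the line in (dash + gap) strides of equal length and emit one
// sub-segment per dash; the final dash is clamped to the endpoint.
function dashSegments(a: Pt, b: Pt, dashPx: number): [Pt, Pt][] {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const len = Math.hypot(dx, dy);
  if (len === 0 || dashPx <= 0) return [];
  const out: [Pt, Pt][] = [];
  for (let t = 0; t < len; t += dashPx * 2) {
    const t2 = Math.min(t + dashPx, len); // clamp the final dash
    out.push([
      { x: a.x + (dx * t) / len, y: a.y + (dy * t) / len },
      { x: a.x + (dx * t2) / len, y: a.y + (dy * t2) / len },
    ]);
  }
  return out;
}
```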

Vitest:
  - tests/helpers/empty-ship-groups.ts exposes EMPTY_SHIP_GROUPS
    so existing fixtures can spread the new five empty arrays
    without enumerating every field.
  - state-binding-groups.test.ts covers each group variant's
    primitive geometry and lookup correctness.
  - All previously-existing fixture builders pick up the spread
    so GameReport stays a complete object.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 13:23:56 +02:00
Ilia Denisov 8839f46c25 ui/phase-19: legacy parser learns Your Groups / Your Fleets / Incoming Groups
The parity rule from ui/PLAN.md says every UI phase that decodes a
new Report field must extend the legacy converter in lockstep.
Phase 19 brings ship groups (LocalGroup / OtherGroup /
UnidentifiedGroup / IncomingGroup) and LocalFleet onto the wire-
compatible UI surface; this commit teaches
tools/local-dev/legacy-report to populate the three sections that
exist in the legacy text format:

  - "Your Groups" → []LocalGroup. Cargo type, load, fleet name,
    state, on-planet vs hyperspace position (origin / range) all
    decoded; LocalGroup.ID is synthesised deterministically from
    the per-report group index so re-running the converter
    produces byte-identical JSON. Speed is left zero — the legacy
    table doesn't expose it.
  - "Your Fleets" → []LocalFleet. Origin / range / state mirror
    the row layout used by Killer / Tancordia variants; gplus's
    state-less rows still resolve.
  - "Incoming Groups" → []IncomingGroup. Origin / destination
    names — and `#NN` by-id references — resolve against the
    parsed planet tables. Because the section can land before
    "Your Planets" in some engines, group / fleet / incoming rows
    are buffered and resolved in `parser.finish` after every
    planet is known.

Battles, OtherGroup (only ever in battle rosters), and
UnidentifiedGroup stay out of scope — README.md spells out what
remains not-derivable.

Adds Killer031–033 / TSERCON_Z032–033 / Tancordia036–039 fixtures
to the dg directory and exercises three of them through new
TestParseDg{Killer031,Tancordia037,KNNTS041} smoke tests, plus
inline tests for each new section parser. Drops the stale
KNNTS039.json artefact left over from Phase 18 development.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 13:23:17 +02:00
Ilia Denisov 132ed4e0db feat: load legacy reports 2026-05-10 12:16:08 +02:00
Ilia Denisov f5ac9fac59 ui/synthetic-report: PLAN parity rule + testing doc
Locks in the synthetic-report parity rule as a global "Assumptions
and Defaults" entry in ui/PLAN.md: every phase that extends the
server->UI report contract must also extend the legacy parser in
the same PR (or document in tools/local-dev/legacy-report/README.md
why the new field cannot be derived from legacy text). The Go side
already enforces shape compatibility via the pkg/model/report
import; this rule extends that mechanical guard to "did we remember
to wire the new field through".

ui/docs/testing.md grows a "Synthetic reports for visual testing"
section with the full conversion -> load -> compose loop and the
two operational gotchas (no network on synthetic ids, page reload
clears the in-memory map).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 11:08:13 +02:00
Ilia Denisov 8f320010c6 ui/synthetic-report: dev-only legacy report loader on lobby
Adds api/synthetic-report.ts, an in-memory registry + JSON->GameReport
decoder for synthetic-mode game sessions. The lobby grows a
import.meta.env.DEV-gated "Synthetic test reports" section with a
JSON file picker; loading a file registers the decoded report under
a synthetic-<uuid> id and navigates to /games/<id>/map.

The in-game shell layout detects the synthetic id range, takes the
report straight from the registry via gameState.initSynthetic, and
deliberately skips both galaxyClient.set and orderDraft.bindClient.
Order auto-sync stays silent: scheduleSync already short-circuits on
non-UUID game ids, and without a bound client the network path is
unreachable. applyOrderOverlay continues to project locally-valid
draft commands onto the rendered report so renames / production
choices / route edits are visible immediately.

A page reload loses the in-memory entry and redirects to /lobby —
synthetic mode is a debug affordance, not a session.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 11:08:05 +02:00
Ilia Denisov 99962b295f tools/local-dev: legacy-report-to-json CLI for synthetic UI testing
A Go module under tools/local-dev/legacy-report that converts the
"dg" / "gplus" engine .REP files in tools/local-dev/reports/ into the
JSON shape of pkg/model/report.Report. The output drives a DEV-only
synthetic-mode loader on the UI lobby so the map, inspectors, and
order-overlay can be exercised against rich game states without
playing many turns end-to-end.

Scope is intentionally narrow: only the fields the UI client decodes
today (planets, players, own ship classes, header). Importing
pkg/model/report keeps the parser and the typed contract in lockstep
— any backwards-incompatible schema change breaks the tool's
compilation before it ships. The README spells out the parity rule
for extending the parser alongside future UI decoders.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 11:07:50 +02:00
Ilia Denisov e0e0f00daf chore: legacy reports 2026-05-10 10:41:59 +02:00
Ilia Denisov e4dc0ce029 ui/phase-18: ship-class calc bridge with live designer preview
Wires pkg/calc/ship.go into the WASM Core boundary as seven thin
wrappers (DriveEffective, EmptyMass, WeaponsBlockMass, FullMass,
Speed, CargoCapacity, CarryingMass). The ship-class designer reads
Core through a new CORE_CONTEXT_KEY populated by the in-game layout
and renders a five-row preview pane (mass, full-load mass, max
speed, range at full load, cargo capacity) that updates reactively
on every form edit and on the player's localPlayer{Drive,Weapons,
Shields,Cargo} tech levels — three of which are now decoded from
the report's Player block alongside the existing localPlayerDrive.

CarryingMass is the seventh wrapper added to the original six-function
list so that "full-load mass" composes through pkg/calc functions
without putting math in TypeScript.
2026-05-09 23:14:40 +02:00
Ilia Denisov 721fa2172d ui/phase-17: mark stage as done after local-ci run 20
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 22:01:30 +02:00
Ilia Denisov c8332bb122 ui/phase-17: clamp ship-class name input to validator's 30-rune limit
Mirrors the validateEntityName MAX_LENGTH on the form input so the
field stops accepting characters once the limit is hit. The
validator still runs and surfaces the localised reason if a paste
overshoots; the maxlength is purely a typing-time guardrail.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 21:52:24 +02:00
Ilia Denisov 0068e065ea ui/phase-17: spell out overlay invariant in plan acceptance criteria
Adds an explicit acceptance line documenting that pending Save /
Delete actions reflect on the table immediately via
applyOrderOverlay (already exercised by the new Vitest + Playwright
suites). Touches a watched path so the local-ci runner picks up the
retrigger after the previous flaky run.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 21:50:35 +02:00
Ilia Denisov 2cfe427ce9 ui/phase-17: retrigger local-ci after flaky pkg/util test
Run 19 failed on a known flake in pkg/util.TestRandomSuffixGenerator
(line 289 — generator can produce the same 4-char suffix in two
consecutive draws out of 10000). Empty commit retriggers the
runner; flake is unrelated to Phase 17.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 21:49:11 +02:00
Ilia Denisov 785c3483f8 ui/phase-17: ship-class CRUD without calc
Phase 17 lights up the ship-class table and designer active views,
extends the order-draft pipeline with createShipClass and
removeShipClass commands, and projects pending Save/Delete actions
through applyOrderOverlay so the table reflects the player's
intent before auto-sync lands. The plan is corrected in the same
patch: per game/rules.txt, ship classes are designed once and
cannot be edited — the engine has no Update command, so the UI
exposes only Create + Delete.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 21:44:21 +02:00
Ilia Denisov 8a236bef14 ui/phase-16: pick any planet in reach + stronger pick-mode dim
The cargo-route picker filtered out unidentified planets, so an
early-game player who had spotted but not surveyed a destination
could not configure a route to it — the engine has no such
restriction (`game/internal/controller/route.go.PlanetRouteSet`
only checks ownership of the origin and `util.ShortDistance(...) <=
FligthDistance`). Drop the unidentified guard and document the
contract in `cargo-routes-ux.md` plus a comment over `reachableSet()`.

Pick-mode dim now drops both alpha and tint on out-of-reach
planets so bright shapes (`STYLE_LOCAL` is `0x6dd2ff`) collapse
into a single muted gray. The single-channel `dimAlpha=0.3` was too
gentle against the dark theme — the user reported the dim wasn't
visible. Tighten to `dimAlpha=0.35 + dimTint=0x303841`; restore
both on tear-down.

Also threads through the user's `pkg/calc/race.go.FligthDistance`
addition: `calc-bridge.md` records the new Go-side reference (the
engine's `Race.FlightDistance()` already wraps it), and the picker
comment points at the canonical formula location.

Tests:
- `inspector-planet-cargo-routes.test.ts` adds two cases — a
  reach-spans-every-kind case (own + foreign + uninhabited +
  unidentified all picked when in range) and a successful pick to
  an unidentified destination.
- All 356 vitest cases + chromium-desktop / webkit-desktop e2e
  cargo-routes pass.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 20:48:42 +02:00
Ilia Denisov 3442dc94f7 ui/phase-16: mark stage as done after local-ci run 17
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 20:10:56 +02:00
Ilia Denisov 7c8b5aeb23 ui/phase-16: cargo routes inspector + map pick foundation
Add per-planet cargo routes (COL/CAP/MAT/EMP) to the inspector with
a renderer-driven destination picker (faded out-of-reach planets,
cursor-line anchor, hover-highlight) and per-route arrows on the
map. The pick-mode primitives are exposed via `MapPickService` so
ship-group dispatch in Phase 19/20 can reuse the same surface.

Pass A — generic map foundation:
- hit-test now sizes the click zone to `pointRadiusPx + slopPx` so
  the visible disc is always part of the target.
- `RendererHandle` gains `onPointerMove`, `onHoverChange`,
  `setPickMode`, `getPickState`, `getPrimitiveAlpha`,
  `setExtraPrimitives`, `getPrimitives`. The click dispatcher is
  centralised: pick-mode swallows clicks atomically so the standard
  selection consumers do not race against teardown.
- `MapPickService` (`lib/map-pick.svelte.ts`) wraps the renderer
  contract in a promise-shaped `pick(...)`. The in-game shell
  layout owns the service so sidebar and bottom-sheet inspectors
  see the same instance.
- Debug-surface registry exposes `getMapPrimitives`,
  `getMapPickState`, `getMapCamera` to e2e specs without spawning a
  separate debug page after navigation.

Pass B — cargo-route feature:
- `CargoLoadType`, `setCargoRoute`, `removeCargoRoute` typed
  variants with `(source, loadType)` collapse rule on the order
  draft; round-trip through the FBS encoder/decoder.
- `GameReport` decodes `routes` and the local player's drive tech
  for the inline reach formula (40 × drive). `applyOrderOverlay`
  upserts/drops route entries for valid/submitting/applied
  commands.
- `lib/inspectors/planet/cargo-routes.svelte` renders the
  four-slot section. `Add` / `Edit` call `MapPickService.pick`,
  `Remove` emits `removeCargoRoute`.
- `map/cargo-routes.ts` builds shaft + arrowhead primitives per
  cargo type; the map view pushes them through
  `setExtraPrimitives` so the renderer never re-inits Pixi on
  route mutations (Pixi 8 doesn't support that on a reused
  canvas).

Docs:
- `docs/cargo-routes-ux.md` covers engine semantics + UI map.
- `docs/renderer.md` documents pick mode and the debug surface.
- `docs/calc-bridge.md` records the Phase 16 reach waiver.
- `PLAN.md` rewrites Phase 16 to reflect the foundation + feature
  split and the decisions baked in (map-driven picker, inline
  reach, optimistic overlay via `setExtraPrimitives`).

Tests:
- `tests/map-pick-mode.test.ts` — pure overlay-spec helper.
- `tests/map-cargo-routes.test.ts` — `buildCargoRouteLines`.
- `tests/inspector-planet-cargo-routes.test.ts` — slot rendering,
  picker invocation, collapse, cancel, remove.
- Extensions to `order-draft`, `submit`, `order-load`,
  `order-overlay`, `state-binding`, `inspector-planet`,
  `inspector-overlay`, `game-shell-sidebar`, `game-shell-header`.
- `tests/e2e/cargo-routes.spec.ts` — Playwright happy path: add
  COL, add CAP, remove COL, asserting both the inspector and the
  arrow count via `__galaxyDebug.getMapPrimitives()`.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 20:01:34 +02:00
Ilia Denisov 5fd67ed958 ui/phase-15: mark stage as done after local-ci run 16 2026-05-09 16:15:20 +02:00
Ilia Denisov 42731022fb ui/phase-15: update game-shell-inspector e2e for new production component
The desktop spec previously asserted the read-only
`inspector-planet-field-production` row for an owned planet. Phase 15 replaced that
row with the interactive production component on the local-planet
branch — the assertion now confirms the component is mounted and
the legacy field is absent.
2026-05-09 16:05:50 +02:00
Ilia Denisov 915b4372dd ui/phase-15: planet inspector production controls + order-draft collapse
Adds the second end-to-end command (`setProductionType`) with a
collapse-by-`planetNumber` rule on the order draft, the segmented
production-controls component on the planet inspector, the FBS
encoder/decoder pair for `CommandPlanetProduce`, and the
`localShipClass` projection on `GameReport`. Forecast number is
deferred and tracked in the new `ui/docs/calc-bridge.md`.
2026-05-09 15:54:30 +02:00
Ilia Denisov c4f1409329 ui/order-draft: silence hydrate path on non-UUID game ids + Phase 10 e2e fixture upgrade
Phase 14's auto-sync calls `uuidToHiLo` on every layout boot. The
existing Phase 10 e2e specs use a placeholder string `test-shell`
as the game id, which throws in the FBS request encoder and
surfaces as a noisy `console.warn` plus a flaky webkit-desktop
test on the local-ci ARM runner.

`OrderDraftStore.hydrateFromServer` and `scheduleSync` now skip
when the active game id isn't a real UUID — the auto-sync path is
inert for fixture data and the placeholder-warning is gone. The
Phase 10 spec switches to a deterministic UUID
(`10101010-1010-1010-1010-101010101010`) so future Phase 14+
specs don't have to special-case it.
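
The guard described above can be sketched as a hypothetical predicate
(the exact function name is illustrative): auto-sync runs only when
the active game id is a canonical 36-char UUID, so placeholder ids
like `test-shell` never reach the FBS request encoder.

```typescript
// Canonical 36-character hex UUID with dashes; anything else — fixture
// or synthetic ids — short-circuits the sync path.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isSyncableGameId(id: string): boolean {
  return UUID_RE.test(id);
}
```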

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 14:19:47 +02:00
Ilia Denisov 6d6a384bee local-dev: auto-purge terminal Dev Sandbox games on every boot
Previously a cancelled / finished / start_failed sandbox game would
hang in the dev user's lobby until manually cleaned up — `make up`
would create a new running game alongside it but the dead tiles
piled up. Now backend's `devsandbox.Bootstrap` deletes every
terminal sandbox game owned by the dev user before find-or-create
runs, so the lobby always shows exactly one running tile.

Schema: `runtime_records` and `player_mappings` gain
`ON DELETE CASCADE` on their `game_id` foreign keys so a single
`DELETE FROM games` cleans every referencing row in one write.
Pre-prod migration rule applies — change goes into
`00001_init.sql`, not a new migration.

API: `lobby.Service.DeleteGame` is the new destructive helper that
backs the bootstrap purge. It bypasses the cancel-cascade-notify
pipeline; production callers must stay on the regular lifecycle.
The dev-sandbox docs in `tools/local-dev/README.md` spell out the
new behaviour.

Tests:
- backend/internal/lobby/lobby_e2e_test.go gains
  `TestDeleteGameCascadesEverything` proving CASCADE works
  end-to-end against a real Postgres testcontainer.
- backend/internal/devsandbox keeps its existing terminal-status
  contract test; the new `purgeTerminalSandboxGames` helper rides
  on the same `terminalSandboxStatus` predicate.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 14:06:04 +02:00
Ilia Denisov 229c43beb5 ui/phase-14: auto-sync order draft + always GET on boot + header headline
Replaces the manual Submit button with an auto-sync pipeline driven
by `OrderDraftStore`: every successful add / remove / move
coalesces a `submitOrder` call so the engine always mirrors the
local draft. Removing the last command sends an empty cmd[] PUT —
the engine, repo, and rest model now accept that as a valid
"player cleared their draft" state.

`hydrateFromServer` is now invoked unconditionally on game boot so
a fresh device picks up the player's stored order, and the local
cache is overwritten by the server's view (server is the source of
truth).

Header replaces the static "race ?" + turn counter with a single
headline string `<race> @ <game>, turn <n>`, sourced from the
engine's Report.race + the lobby's GameSummary.gameName + the live
turn number, with a `?` fallback while any piece is loading.

Tests:
- engine: empty PUT round-trips, repo round-trips empty Commands
- order-draft: auto-sync sends full draft on every mutation,
  rejected response surfaces error sync status, rapid mutations
  coalesce, server hydration overwrites cache
- order-tab: per-row status flips through the auto-sync lifecycle,
  remove → empty cmd[] PUT, rejected → retry button
- inspector overlay: applied + valid + submitting all participate
  in the optimistic projection
- header: live race / game / turn rendering with fallback

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 13:34:10 +02:00
Ilia Denisov 68d8607eaa local-dev: spell out compose rebuild after Go-side changes
The Phase 14 follow-up surfaced a footgun: `make up` reuses any
pre-built backend / gateway images and silently misses route table
or transcoder edits. Add a dedicated section to the README that
points at `make rebuild`, plus the engine-only path through
`make stop-engines` + `docker rmi`.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 12:41:31 +02:00
Ilia Denisov 0aaa4473a4 ui/phase-14: regression tests for routes registry + overlay reactivity
The owner reported two symptoms after pulling the Phase 14 stack:

1. user.games.order.get answered with `unimplemented: message_type
   is not routed`. The gateway/backend code was correct, but the
   local-dev compose images were stale — `make rebuild` picked up
   the new routes table and the symptom went away. To prevent this
   class of regression from depending on docker-image freshness,
   gateway/internal/backendclient/routes_test.go now asserts that
   every authenticated MessageType constant declared in
   pkg/model/{user,lobby,order,report} is registered, and verifies
   that user.games.order.get specifically resolves to the game
   command client.

2. The inspector kept the un-renamed name after a successful submit.
   ui/frontend/tests/inspector-overlay.test.ts mounts the inspector
   tab against a real OrderDraftStore + a stubbed GameStateStore
   and walks the full happy path (add planetRename → markSubmitting
   → applied → simulate refresh) plus the integration scenario
   driven through the order-tab Submit button. Both cases pass —
   the underlying overlay path is reactive and resilient to a
   refresh that returns the un-renamed snapshot. The original
   in-browser symptom was the rebuilt-image freshness issue from
   point 1; this test pins the reactive contract for future
   refactors.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 12:40:33 +02:00
Ilia Denisov 57e053764a ui/phase-14: mark stage done after green local-ci run 11
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 11:59:49 +02:00
Ilia Denisov f80c623a74 ui/phase-14: rename planet end-to-end + order read-back
Wires the first end-to-end command through the full pipeline:
inspector rename action → local order draft → user.games.order
submit → optimistic overlay on map / inspector → server hydration
on cache miss via the new user.games.order.get message type.

Backend: GET /api/v1/user/games/{id}/orders forwards to engine
GET /api/v1/order. Gateway parses the engine PUT response into the
extended UserGamesOrderResponse FBS envelope and adds
executeUserGamesOrderGet for the read-back path. Frontend ports
ValidateTypeName to TS, lands the inline rename editor + Submit
button, and exposes a renderedReport context so consumers see the
overlay-applied snapshot.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 11:50:09 +02:00
Ilia Denisov 381e41b325 fix: game order api & tests 2026-05-09 10:55:55 +02:00
Ilia Denisov 2a1e80053a feat: game order api methods 2026-05-09 10:36:44 +02:00
Ilia Denisov f2a7f2b515 phase 13 2026-05-09 08:44:10 +02:00
Ilia Denisov 42a0de6537 ui/phase-13: mark stage done after green local-ci run 10
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 08:37:56 +02:00
Ilia Denisov 6364bba6fd ui/phase-13: planet inspector — read-only
Plumbs the map → inspector pathway: a click on a planet selects it
through the new SelectionStore, the sidebar Inspector tab swaps
its empty-state copy for a per-kind read-only field set, and a
mobile-only bottom-sheet mirrors the same content over the map.
Field projection in api/game-state.ts now surfaces every documented
planet field.
2026-05-09 08:29:03 +02:00
Ilia Denisov a3fdcfe9c5 ui/map-renderer: clarify rationale for synthetic moved-event type
Expanding the comment so future readers know the `type` field is
informational here — no `pixi-viewport@6` plugin or local listener
switches on it, so picking any literal from the closed union works.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 00:01:03 +02:00
Ilia Denisov 164f23fbed ui/map-renderer: pin synthetic moved-event type to a real literal
`MovedEvent.type` in pixi-viewport@6 is a closed union of built-in
plugin names; the prior `"manual"` value tripped svelte-check.
`"animate"` is the closest semantic match for a programmatic move
and the renderer's listeners read only `viewport`.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 23:46:24 +02:00
Ilia Denisov 3ed4531a01 ui/phase-12: mark stage done after green local-ci run 7
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 23:39:02 +02:00
Ilia Denisov 460591c159 ui/phase-12: order composer skeleton
OrderDraftStore persists per-game command drafts in Cache; the
sidebar Order tab renders the list with a per-row delete control.
The layout passes a `historyMode` prop through Sidebar / BottomTabs
as a constant `false`, so Phase 26 only flips the source.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 23:26:58 +02:00
Ilia Denisov e5dab2a43a ui/map-renderer: wrap torus camera into the central tile on pan
Even with the zoom-out clamp from cc004f9, panning still let the
user walk the camera centre out of the central tile of the 3×3
wrap layout — they would see the wrap copies one tile out and then
empty space beyond, because the renderer paints exactly nine
copies and nothing further. The fix is the standard torus trick:
treat camera coordinates modulo world dimensions. The toroidal
world looks identical at `(x, y)` and `(x mod W, y mod H)`, so
snapping the centre back into `[0, W) × [0, H)` is invisible to
the user, and the fixed 3×3 layout is then sufficient to cover
infinite pan in any direction.

Implementation:

- `src/map/torus.ts::wrapCameraTorus` — pure helper that returns
  the modulo-wrapped camera (positive remainder; scale preserved).
- `src/map/render.ts` — the torus-mode path now installs a
  `'moved'` listener that runs the wrap, with a re-entry guard
  because `viewport.moveCenter` itself fires the same event the
  listener subscribes to. The `'moved'` event is emitted by
  every `pixi-viewport` plugin that moves the camera (drag,
  wheel, decelerate, snap, pinch — confirmed against the v6
  source) so production drag inertia and wheel-pan both trigger
  the wrap.
- `src/routes/__debug/map/+page.svelte` — adds `setCameraCenter`
  to `__galaxyMap`, with an explicit `viewport.emit('moved')`
  after the programmatic `moveCenter` (the v6 source does not
  emit `'moved'` from `moveCenter`, only plugins do; the manual
  emit matches the user-drag semantics).
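The positive-remainder wrap can be sketched as follows (a minimal stand-in for `src/map/torus.ts::wrapCameraTorus`; JS `%` keeps the dividend's sign, so a plain modulo would leak negative coordinates):

```typescript
export interface Camera { x: number; y: number; scale: number }

// ((v % m) + m) % m folds any real v into [0, m), including negatives.
const mod = (v: number, m: number) => ((v % m) + m) % m;

// Snap the camera centre back into the central tile; scale preserved.
export function wrapCameraTorus(cam: Camera, w: number, h: number): Camera {
  return { x: mod(cam.x, w), y: mod(cam.y, h), scale: cam.scale };
}
```

With `W = 100, H = 80`, pushing the centre to `(5.4×W, 7.25×H)` folds it to `(0.4×W, 0.25×H)` — the same congruence the e2e spec asserts.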

Tests:

- `tests/map-torus.test.ts` — Vitest unit coverage for
  `wrapCameraTorus` (in-bounds noop, one tile / many tiles past
  on each axis, negative inputs never return negative, scale
  preserved, right/bottom edge folds to left/top,
  toroidal-congruence invariant).
- `tests/e2e/playground-map.spec.ts` — torus pan regression: push
  the camera to (5.4×W, 7.25×H) through the new debug entry,
  assert the centre lands in the central tile and matches the
  expected `(0.4×W, 0.25×H)` modulo position. Runs across all
  four Playwright projects.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 22:47:38 +02:00
Ilia Denisov cc004f935d ui/map-renderer: clamp torus zoom-out to minScaleNoWrap
The renderer's torus mode laid out the world in a 3×3 grid of wrap
copies (TORUS_OFFSETS) so the user could pan past an edge without
seeing a void. Below `minScale = max(viewport/world)` the world
shrinks below the viewport along at least one axis and the wrap
copies become visible side-by-side — the user reported a 9-tile
mosaic that pans and zooms as one rigid unit. The doc explicitly
deferred the fix ("if profiling ever reveals that users do this");
real usage is the trigger.

Apply `clampZoom({ minScale })` in both modes; torus still keeps
free pan (no `clamp({ direction: "all" })`) so the wrap copies
fill the cross-edge slack as designed. Resize re-evaluates the
clamp so a window resize does not strand the camera below the new
floor. Documentation in `ui/docs/renderer.md` updated to describe
the new shared invariant.

Regression test in `tests/e2e/playground-map.spec.ts` wheels out
aggressively in torus mode and asserts `camera.scale >= minScale`
across all four Playwright projects.
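The floor itself is just the `max(viewport/world)` formula from the commit; a self-contained sketch (function name mirrors the `map/no-wrap.ts` helper, signature assumed):

```typescript
// Below this scale the world is smaller than the viewport along at
// least one axis, so the 3×3 wrap copies become visible side-by-side.
export function minScaleNoWrap(
  viewportW: number, viewportH: number,
  worldW: number, worldH: number,
): number {
  return Math.max(viewportW / worldW, viewportH / worldH);
}
```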

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 21:45:01 +02:00
Ilia Denisov 12e666ba91 ui/phase-11: mark stage done after green local-ci run 4
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 21:24:26 +02:00
Ilia Denisov ce7a66b3e6 ui/phase-11: map wired to live game state
Replaces the Phase 10 map stub with live planet rendering driven by
`user.games.report`, and wires the header turn counter to the same
data. Phase 11's frontend sits on a per-game `GameStateStore` that
lives in `lib/game-state.svelte.ts`: the in-game shell layout
instantiates one per game, exposes it through Svelte context, and
disposes it on remount. The store discovers the game's current turn
through `lobby.my.games.list`, fetches the matching report, and
exposes a TS-friendly snapshot to the header turn counter, the map
view, and the inspector / order / calculator tabs that later phases
will plug onto the same instance.

The pipeline forced one cross-stage decision: the user surface needs
the current turn number to know which report to fetch, but
`GameSummary` did not expose it. Phase 11 extends the lobby
catalogue (FB schema, transcoder, Go model, backend
gameSummaryWire, gateway decoders, openapi, TS bindings,
api/lobby.ts) with `current_turn:int32`. The data was already
tracked in backend's `RuntimeSnapshot.CurrentTurn`; surfacing it is
a wire change only. Two alternatives were rejected: a brand-new
`user.games.state` message (full wire-flow for one field) and
hard-coding `turn=0` (works for the dev sandbox, which never
advances past zero, but renders the initial state for any real
game). The change crosses Phase 8's already-shipped catalogue per
the project's "decisions baked back into the live plan" rule —
existing tests and fixtures are updated in the same patch.

The state binding lives in `map/state-binding.ts::reportToWorld`:
one Point primitive per planet across all four kinds (local /
other / uninhabited / unidentified) with distinct fill colours,
fill alphas, and point radii so the user can tell them apart at a
glance. The planet engine number is reused as the primitive id so
a hit-test result resolves directly to a planet without an extra
lookup table. Zero-planet reports yield a well-formed empty world;
malformed dimensions fall back to 1×1 so a bad report cannot crash
the renderer.
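A hypothetical sketch of the two pieces above — the per-kind styling table and the 1×1 dimension fallback (names and colour values are illustrative, not the real `state-binding.ts` exports):

```typescript
type PlanetKind = "local" | "other" | "uninhabited" | "unidentified";

// One distinct fill / alpha / radius triple per kind, so all four
// planet kinds are distinguishable at a glance.
const STYLE: Record<PlanetKind, { fill: number; alpha: number; radius: number }> = {
  local:        { fill: 0x4caf50, alpha: 1.0, radius: 6 },
  other:        { fill: 0xef5350, alpha: 1.0, radius: 6 },
  uninhabited:  { fill: 0x90a4ae, alpha: 0.8, radius: 4 },
  unidentified: { fill: 0x546e7a, alpha: 0.5, radius: 3 },
};

// Malformed report dimensions must not crash the renderer.
export function safeDimensions(w: number, h: number): [number, number] {
  return Number.isFinite(w) && w > 0 && Number.isFinite(h) && h > 0
    ? [w, h]
    : [1, 1];
}
```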

The map view's mount effect creates the renderer once and skips
re-mount on no-op refreshes (same turn, same wrap mode); a turn
change or wrap-mode flip disposes and recreates it. The renderer's
external API does not yet expose `setWorld`; Phase 24 / 34 will
extract it once high-frequency updates land. The store installs a
`visibilitychange` listener that calls `refresh()` when the tab
regains focus.

Wrap-mode preference uses `Cache` namespace `game-prefs`, key
`<gameId>/wrap-mode`, default `torus`. Phase 11 reads through
`store.wrapMode`; Phase 29 wires the toggle UI on top of
`setWrapMode`.

Tests: Vitest unit coverage for `reportToWorld` (every kind,
ids, styling, empty / zero-dimension edges, priority order) and
for the store lifecycle (init success, missing-membership error,
forbidden-result error, `setTurn`, wrap-mode persistence across
instances, `failBootstrap`). Playwright e2e mocks the gateway for
`lobby.my.games.list` and `user.games.report` and asserts the
live data path: turn counter shows the reported turn,
`active-view-map` flips to `data-status="ready"`, and
`data-planet-count` matches the fixture count. The zero-planet
regression and the missing-membership error path are covered.

Phase 11 status stays `pending` in `ui/PLAN.md` until the local-ci
run lands green; flipping to `done` follows in the next commit per
the per-stage CI gate in `CLAUDE.md`.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 21:17:17 +02:00
Ilia Denisov ff524fabc6 ui/phase-10: mark stage done after green local-ci run 3
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 20:22:49 +02:00
Ilia Denisov fc371c7fe1 ui/phase-10: in-game shell with view-replacement skeleton
Wraps every in-game route under `/games/:id/*` in a responsive shell
with a header (race / turn placeholders, view-menu dropdown or mobile
hamburger, account menu), a three-tab sidebar (Calculator, Inspector,
Order), an active-view slot, and a mobile-only bottom-tabs row
`[Map, Calc, Order, More]`. Every view in the IA section
(`map`, `table/:entity`, `report`, `battle/:battleId?`, `mail`,
`designer/{ship-class,science}/:id?`) ships as a thin SvelteKit route
that mounts a `lib/active-view/<name>.svelte` stub rendering a
localised `coming soon` body. The lobby's `gotoGame` path now actually
lands on a rendered shell instead of a 404.

The "view router" mentioned in the plan is implemented as the file
system plus two-line route wrappers — no separate dispatch component.
Sidebar tab state lives as a `$state` rune inside `sidebar.svelte`,
which sits in the layout that SvelteKit keeps mounted across child
route swaps, so tab choice survives every active-view navigation for
free. A `?sidebar=calc|inspector|order` URL param seeds the initial
tab on first mount; the mobile bottom-tabs use a layout-owned
`mobileTool` rune with a URL-gated `effectiveTool` derivation so the
Calc / Order tool overlay only applies on `/map` and naturally drops
when the user navigates elsewhere.

Tablet ships with a click-toggle drawer for the sidebar rather than
the IA section's swipe-from-right gesture; the structural breakpoint
satisfies Phase 10's acceptance criterion and Phase 35 polish lands
the swipe. The mobile More drawer mirrors the header view-menu
content; the IA's narrower More list (Mail, Battle, Tables, History,
Settings, Logout) is also a Phase 35 polish target once History
exists.

Topic doc `ui/docs/navigation.md` captures the active-view model, the
sidebar state-preservation rule, the `?sidebar=` and `mobileTool`
conventions, and the transient map-overlay back-stack concept (with
the implementation deferred to Phase 34 alongside its first user).
i18n catalogues for `en` and `ru` add the full `game.shell.*`,
`game.view.*`, `game.sidebar.*`, `game.bottom_tabs.*` namespaces.

Tests: Vitest covers the header view-menu (every IA destination
including the Tables sub-list), the account-menu Logout / Language
wiring, the sidebar default tab / switching / `?sidebar=` seed /
close button, and every active-view stub. Playwright e2e boots an
authenticated session via `__galaxyDebug.setDeviceSessionId` (no
gateway calls — the shell makes none in Phase 10), exercises every
view through both the desktop dropdown and the mobile More drawer,
verifies sidebar tab survival across navigation, and uses
`setViewportSize` to validate the breakpoint switches at 768 px and
1024 px.

Phase 10 status stays `pending` in `ui/PLAN.md` until the local-ci
run lands green; flipping to `done` follows in the next commit per
the per-stage CI gate in `CLAUDE.md`.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 20:15:49 +02:00
Ilia Denisov 0f8f8698bd local-dev: rebuild dead sandbox + harden lobby card UX
Three fixes around the dev sandbox end-to-end path. Each one was
flushed out by an actual login walkthrough after the previous
commit.

Backend bootstrap now treats `cancelled`, `finished`, and
`start_failed` as terminal: the per-boot find-or-create skips such
games and provisions a fresh one. Without this, a single bad
shutdown cascade leaves the developer staring at a dead lobby tile
forever (cancelled games don't transition back). Covered by
TestTerminalSandboxStatus.

tools/local-dev: stop killing engine containers in `make down`. The
runtime treats the disappearance of an engine as a real failure
(cascading the lobby game to `cancelled`); leaving the container
running across `down/up` lets the runtime reconciler re-attach on
the next boot. The teardown happens only in `make clean`, where the
DB is wiped anyway. Compose now also exposes :9090 (authenticated
EdgeGateway listener) on the host so the Vite dev proxy can reach
the Connect-Web surface, and bumps the gateway anti-abuse limits
for `public_misc` so the same surface is not blanket-rejected with
413.

ui/frontend: the lobby's `My Games` cards are now clickable only
for the playable statuses (`running`, `paused`, `finished`). All
other statuses render as disabled buttons so a click on a draft or
cancelled game no longer drops the user on a 404 — the in-game
view at /games/:id/* doesn't exist before Phase 10 and never makes
sense for a cancelled game. Vite proxy splits the dev targets so
`/api/*` continues to talk to the REST listener and
`/galaxy.gateway.v1.EdgeGateway/*` is routed to the Connect-Web
listener via VITE_DEV_GRPC_PROXY_TARGET (defaults to :9090).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 19:32:44 +02:00
Ilia Denisov 82c4f70156 local-dev: stop spawned engine containers in down/clean
Backend's runtime spawns the engine container outside the compose
project, so `docker compose down` left a `galaxy-game-…` container
running. Add a `stop-engines` target that finds them by their OCI
image-title label (set in game/Dockerfile) and removes them forcibly;
make `down` and `clean` depend on it. `clean` additionally wipes
the per-game state directory under /tmp/galaxy-game-state.

Add a troubleshooting note for the related symptom: when the
browser holds a keypair from a previous DB and `make clean`
recreates everything, the lobby renders "no games yet" until the
user clears site data or opens an incognito window. The dev user
keeps the same email but receives a fresh user_id, which the old
keypair cannot authenticate against.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 19:04:05 +02:00
Ilia Denisov 804fdd2a72 local-dev: log memberships-ensured count in dev_sandbox bootstrap
Adds a single zap.Info line after the membership-insertion loop so
the boot log explicitly shows how many participants the sandbox
provisioned. The number is fixed by config (PlayerCount) but
surfacing it in the log makes troubleshooting "why is the lobby
empty" cases (typo in the email, partial failure) faster than
querying the DB.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 16:18:38 +02:00
Ilia Denisov e63748c344 local-dev: boot-time dev sandbox provisions a runnable game on up
Adds backend/internal/devsandbox: an idempotent boot-time hook that,
when BACKEND_DEV_SANDBOX_EMAIL is set, ensures (1) the configured
engine_version row, (2) the real dev user, (3) PlayerCount-1
deterministic dummy users, (4) a private "Dev Sandbox" game with a
year-out turn schedule, (5) memberships for every participant via
the new lobby.Service.InsertMembershipDirect helper, (6) a drive of
the lifecycle to running. Re-running on a populated DB is a no-op;
partial states from earlier crashes are recovered.

tools/local-dev gains the matching env vars in .env, surfaces them
in compose, and acquires a `make build-engine` target that builds
galaxy-engine:local-dev from game/Dockerfile (a prerequisite of
`up`/`rebuild`). The compose game-state mount is changed from a
named volume to a host bind on /tmp/galaxy-game-state so backend's
bind-mount source for spawned engine containers resolves on the
docker daemon.

After `make -C tools/local-dev up`, log in as dev@local.test with
the dev code 123456 and the Dev Sandbox already shows up in My
Games. Per-user behaviour for the same email survives a backend
restart.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 15:51:09 +02:00
Ilia Denisov 73fb0ae968 ui/phase-9: mark stage done after green local-ci run 15
Local-ci run 15 (b4f37d6) finished with status=success: Vitest
17 files / 150 tests, Playwright 64 cases across the four
projects, Go suites for backend/gateway/game/pkg/ui/core all
green.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 14:50:36 +02:00
Ilia Denisov b4f37d6669 ui/phase-9: revert premature done-mark, reuse minScaleNoWrap
Previous Phase 9 commit pre-marked PLAN.md with "Status: done"
before the local-ci gate ran green. Project rule
(galaxy/CLAUDE.md "Per-stage CI gate") allows the marker only
after the run is success; revert to "Status: pending".

Also folds the inline minScale formula in the playground page
into a call to map/no-wrap.ts:minScaleNoWrap so the playground
and the renderer share one source of truth for the floor.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 14:35:25 +02:00
Ilia Denisov db415f8aa4 ui/phase-9: PixiJS map renderer with torus and no-wrap modes
Stand up the vector map renderer in ui/frontend/src/map/ on top of
PixiJS v8 + pixi-viewport@^6. Torus mode renders nine container
copies for seamless wrap; no-wrap mode pins the camera at world
bounds and centres on an axis when the viewport exceeds the world
along that axis. Hit-test is a brute-force pass with deterministic
[-priority, distSq, kindOrder, id] ordering and torus-shortest
distance, validated by hand-built unit cases.

The development playground at /__debug/map exposes a window
debug surface for the Playwright spec, which forces WebGPU on
chromium-desktop, WebGL on webkit-desktop, and accepts the
auto-picked backend on mobile projects.

Algorithm spec lives in ui/docs/renderer.md, which also pins the
new deprecation status of galaxy/client (the entire Fyne client
module, including client/world). client/world/README.md and the
Phase 9 stub in ui/PLAN.md gain matching deprecation banners.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 14:06:23 +02:00
Ilia Denisov 9d2504c42d backend: embed tzdata so time.LoadLocation works in distroless/alpine
`time.LoadLocation` is called from
backend/internal/server/handlers_public_auth.go:108 (confirm-email-code)
and backend/internal/user/account.go:218 (user.settings.update). Both
runtime images shipped today have no tzdata — production
backend/Dockerfile uses gcr.io/distroless/static-debian12:nonroot, and
local-dev tools/local-dev/backend.Dockerfile uses alpine:3.20 without
the optional tzdata apk — so the container-side binary resolves only
the no-data fallback (UTC and fixed offsets) and rejects every real
IANA zone with HTTP 400 `invalid_request: time_zone must be a valid
IANA zone`.

Adding `import _ "time/tzdata"` to backend's main is the idiomatic
Go fix: the binary embeds the IANA database, time.LoadLocation works
on every base image, no Dockerfile changes needed. Cost is ~800 KB
of binary growth — invisible next to the existing /usr/local/bin/backend
size and well below any container layer threshold.

The OpenAPI spec already documents the field as "IANA time-zone
identifier" (gateway/openapi.yaml:205, backend/openapi.yaml:2334)
and the UI sends Intl.DateTimeFormat().resolvedOptions().timeZone,
so neither the contract nor the client needs a change.

Why this slipped through: backend unit tests run as a host Go test
process (developer's tzdata covers them), Playwright tests mock the
gateway (backend never reached), and the integration suite — the only
layer that exercises the real backend container — uses
RegisterSession which hardcoded `time_zone="UTC"`. Switching that
default to "Europe/Berlin" makes every integration scenario that
enrols a pilot exercise the tzdata path, so the next regression
surfaces in the integration run instead of escaping into manual
smoke. (The integration suite is not in the per-PR workflow yet; that
gap is tracked separately.)

Verified end-to-end against `tools/local-dev`:
  - Europe/Amsterdam, Asia/Tokyo, America/Los_Angeles → 200 +
    device_session_id (was 400 before this patch).
  - Mars/Olympus still → 400 (validation behaviour unchanged).
Host tests: backend/internal/{auth,user,config} green.
UI: pnpm test 14/14, CI=1 pnpm exec playwright test 44/44.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 11:58:47 +02:00
Ilia Denisov 6f6a854337 local-dev: Vite proxy for same-origin requests + upstream gateway Dockerfile fix
vite.config.ts now proxies `/api` and `/galaxy.gateway.v1.EdgeGateway`
to the gateway, so the browser sees only `localhost:5173` and never
trips a cross-origin preflight. `.env.development` accordingly points
`VITE_GATEWAY_BASE_URL` at the Vite origin. The proxy target is
overridable via `VITE_DEV_PROXY_TARGET=...` for non-default gateways
without touching the compose file.
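A minimal sketch of the proxy shape in `vite.config.ts` (the `VITE_DEV_PROXY_TARGET` env var name is from this commit; the default target port and the rest of the config are illustrative):

```typescript
import { defineConfig } from "vite";

// Overridable for non-default gateways; assumed default shown.
const target = process.env.VITE_DEV_PROXY_TARGET ?? "http://localhost:8080";

export default defineConfig({
  server: {
    proxy: {
      // Browser sees only the Vite origin — no cross-origin preflight.
      "/api": { target, changeOrigin: true },
      "/galaxy.gateway.v1.EdgeGateway": { target, changeOrigin: true },
    },
  },
});
```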

`gateway/Dockerfile` previously failed to build because gateway
imports `galaxy/core` (replaced to `../ui/core` in `gateway/go.mod`)
but the Dockerfile did not copy `ui/core/` into the build context
nor declare the replace in the synthesised `go.work`. Adding both
makes `docker build -f gateway/Dockerfile .` succeed; this is the
same fix already shipped in `tools/local-dev/gateway.Dockerfile`,
back-ported to upstream.

Verified:
- docker build -f gateway/Dockerfile . — builds cleanly
- pnpm test 14/14, pnpm exec playwright test 44/44 (with CI=1 to
  force a fresh dev server; reuse keeps the previous startup env)
- curl POST through localhost:5173/api/* and /galaxy.gateway.v1.* —
  reach the gateway, no CORS preflight on the browser side

tools/local-dev/README.md updated with the new network map and the
`VITE_DEV_PROXY_TARGET` override.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 11:04:00 +02:00
Ilia Denisov 69fa6b30e1 tools/local-dev: docker-compose stack for UI development
Adds tools/local-dev/ with postgres + redis + mailpit + backend +
gateway plus a Make wrapper, so `make -C tools/local-dev up` brings
the full authenticated stack online and `pnpm -C ui/frontend dev`
talks to it directly. The committed `.env.development` already
points at the stack and pins the matching gateway response public
key from the dev keypair under tools/local-dev/keys/.

The backend ships a new opt-in env, BACKEND_AUTH_DEV_FIXED_CODE
(`tools/local-dev/.env` defaults it to 123456). When set,
ConfirmEmailCode accepts that literal in addition to the real
bcrypt-verified code; SendEmailCode still queues a real email so
Mailpit captures the issued code at http://localhost:8025/, and
both paths coexist. The override is rejected as non-six-digit by
config validation and emits a loud warning at backend startup.

The local-dev Dockerfiles mirror backend/Dockerfile and
gateway/Dockerfile but switch the runtime stage to alpine so
docker-compose healthchecks can wget /healthz; the gateway
Dockerfile additionally copies ui/core/ into the build context
because gateway/go.mod's `replace galaxy/core => ../ui/core` is
required to compile the gateway main.

Smoke tested:
- `make -C tools/local-dev up` boots all five services to healthy.
- send-email-code + confirm-email-code with code=123456 returns a
  device_session_id; a real code in Mailpit also redeems
  successfully.
- `pnpm test` 14/14, `pnpm exec playwright test` 44/44.
- `go test ./backend/internal/config/...` green.

Docs: tools/local-dev/README.md, tools/local-dev/keys/README.md,
new "Local development stack" section in ui/docs/testing.md, and a
short pointer in ui/README.md.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 09:42:29 +02:00
Ilia Denisov f57a290432 phase 8: lobby UI + cross-stack lobby command catalog + TS FlatBuffers
- Extend pkg/model/lobby and pkg/schema/fbs/lobby.fbs with public-games
  list, my-applications/invites lists, game-create, application-submit,
  invite-redeem/decline. Mirror the matching transcoder pairs and Go
  fixture round-trip tests.
- Wire the seven new lobby message types through
  gateway/internal/backendclient/{routes,lobby_commands}.go with
  per-command REST helpers, JSON-tolerant decoding of backend wire
  shapes, and httptest-based unit coverage for success / 4xx / 5xx /
  503 across each command.
- Introduce TS-side FlatBuffers via the `flatbuffers` runtime dep, a
  `make fbs-ts` target driving flatc, and the generated bindings under
  ui/frontend/src/proto/galaxy/fbs. Phase 7's `user.account.get` decode
  now uses these bindings as well, closing the JSON.parse vs
  FlatBuffers gap that would have failed against a real local stack.
- Replace the placeholder lobby with five sections (my games, pending
  invitations, my applications, public games, create new game) and the
  /lobby/create form. Submit-application uses an inline race-name
  form on the public-game card; create-game keeps name / description /
  turn_schedule / enrollment_ends_at always visible and the rest under
  an Advanced toggle with TS-side defaults.
- Update lobby/+page.svelte to throw LobbyError on non-ok result codes;
  GalaxyClient.executeCommand now returns { resultCode, payloadBytes }.
- Vitest binding round-trips, lobby.ts wrapper unit tests, lobby-page
  + lobby-create component tests, Playwright lobby-flow.spec covering
  create / submit / accept across all four projects. Phase 7 e2e was
  migrated to the FlatBuffers fixtures and to click+fill against the
  Safari-autofill readonly inputs.
- Mark Phase 8 done in ui/PLAN.md, mirror the wire-format note into
  Phase 7, append the new lobby commands to gateway/README.md and
  docs/ARCHITECTURE.md, add ui/docs/lobby.md.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 18:05:08 +02:00
Ilia Denisov 5d2a3b79c5 phase 7 2026-05-07 16:56:19 +02:00
Ilia Denisov 9101aba816 phase 7+: i18n primitive + login language picker + autocomplete-off
Adds a minimal Svelte 5 i18n primitive (`src/lib/i18n/`) backing the
login form, the layout blocker page, and the lobby placeholder.
SUPPORTED_LOCALES drives both the picker and the runtime lookup;
adding a language is a two-step change inside `src/lib/i18n/`.

Login form gains a globe-icon language dropdown (English / Русский
in their native names), defaulting to navigator.languages with `en`
as the fallback. Switching the locale re-renders the form in place;
on submit, the locale rides in the JSON body of `send-email-code`
because Safari/WebKit silently drops JS-set Accept-Language. Gateway
gains a body `locale` field that takes priority over the request
header for preferred-language resolution.

Email and code inputs disable browser autofill / suggestions
(`autocomplete=off` + `autocorrect=off` + `autocapitalize=off` +
`spellcheck=false`) so Keychain / address-book pickers and
remembered-value dropdowns no longer fire on focus.
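The attribute set above, collected as one spreadable object (how the real form applies the attributes is an assumption):

```typescript
// The four attributes the commit lists, in one place.
const noAutofill = {
  autocomplete: "off",
  autocorrect: "off",
  autocapitalize: "off",
  spellcheck: "false",
} as const;
// In a Svelte template this would be applied roughly as <input {...noAutofill} />.
```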

Cross-cuts:
- backend & gateway openapi: clarify that body `locale` is honored.
- docs/FUNCTIONAL{,_ru}.md §1.2: document body-vs-header priority.
- gateway tests: body `locale` overrides Accept-Language; blank
  body `locale` falls back to header.
- new ui/docs/i18n.md; cross-links from auth-flow.md and ui/README.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 16:14:40 +02:00
Ilia Denisov 22b0710d04 phase 7: auth flow UI (email-code login + session resume + revocation)
Implements ui/PLAN.md Phase 7 end-to-end:

- /login two-step form (email -> code) over the gateway public REST
  surface; /lobby placeholder issues the first authenticated
  user.account.get and renders the decoded display name.
- SessionStore (Svelte 5 runes) with loading / unsupported / anonymous /
  authenticated states; layout-level route guard, browser-not-supported
  blocker, and a minimal SubscribeEvents revocation watcher that closes
  the active client within 1s on a clean stream end or
  Unauthenticated.
- VITE_GATEWAY_BASE_URL + VITE_GATEWAY_RESPONSE_PUBLIC_KEY config plus
  AuthError taxonomy in api/auth.ts.
- Vitest (auth-api, session-store, login-page) and Playwright e2e
  (auth-flow.spec.ts) on the four configured projects, with a fixture
  Ed25519 keypair forging Connect-Web JSON responses.
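The four SessionStore states can be sketched as a small state machine. The store itself uses Svelte 5 runes and is not shown, so this is plain TypeScript and the transition rules are assumptions inferred from the description.

```typescript
// The states named in the commit, plus assumed legal transitions.
type SessionState = "loading" | "unsupported" | "anonymous" | "authenticated";

const transitions: Record<SessionState, SessionState[]> = {
  loading: ["unsupported", "anonymous", "authenticated"],
  unsupported: [], // terminal: the browser blocker page takes over
  anonymous: ["authenticated"],
  authenticated: ["anonymous"], // revocation drops back to anonymous
};

function canTransition(from: SessionState, to: SessionState): boolean {
  return transitions[from].includes(to);
}
```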

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 15:24:21 +02:00
Ilia Denisov 390ad3196b phase 6: mark stage done after local-ci #7 green
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 14:14:24 +02:00
Ilia Denisov ecd2bc9348 phase 6: web storage layer (KeyStore, Cache, session)
KeyStore + Cache TS interfaces with WebCrypto non-extractable Ed25519
keys persisted via IndexedDB (idb), plus thin api/session.ts that
loads or creates the device session at app startup. Vitest unit
tests under fake-indexeddb cover both adapters; Playwright e2e
verifies the keypair survives reload and produces signatures still
verifiable under the persisted public key (gateway round-trip moves
to Phase 7's existing acceptance bullet).

Browser baseline: WebCrypto Ed25519 — Chrome >=137, Firefox >=130,
Safari >=17.4. No JS fallback; ui/docs/storage.md documents the
matrix and the WebKit non-determinism quirk.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 14:08:09 +02:00
Ilia Denisov 87a6694e2d phase 5 2026-05-07 13:41:33 +02:00
Ilia Denisov fbc0260720 phase 5: wasm core, GalaxyClient skeleton, Connect-Web stubs
Compile `ui/core` to WebAssembly via TinyGo (903 KB) and expose four
canonical-bytes / signature-verification functions on
`globalThis.galaxyCore` from `ui/wasm/main.go`. The TypeScript-side
`Core` interface plus a `WasmCore` adapter (browser + JSDOM loader)
bridge those into a typed shape, and a `GalaxyClient` skeleton wires
`Core.signRequest` → injected `Signer` → typed Connect client →
`Core.verifyPayloadHash` / `verifyResponse`.
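The chain above can be sketched shape-only. The real `Core`, `Signer`, and `GalaxyClient` types live in the ui code; the signatures below are assumptions made for illustration.

```typescript
// Assumed shapes for the two ends of the pipeline.
interface Core {
  signRequest(body: Uint8Array): Uint8Array; // canonical bytes to sign
  verifyPayloadHash(payload: Uint8Array, hash: Uint8Array): boolean;
}
interface Signer {
  sign(canonical: Uint8Array): Promise<Uint8Array>;
}

// One request through the chain: canonicalize, sign, send, verify.
async function call(
  core: Core,
  signer: Signer,
  transport: (
    body: Uint8Array,
    sig: Uint8Array,
  ) => Promise<{ payload: Uint8Array; hash: Uint8Array }>,
  body: Uint8Array,
): Promise<Uint8Array> {
  const canonical = core.signRequest(body);
  const sig = await signer.sign(canonical);
  const resp = await transport(body, sig);
  if (!core.verifyPayloadHash(resp.payload, resp.hash)) {
    throw new Error("response payload hash mismatch");
  }
  return resp.payload;
}
```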

Wire `ui/buf.gen.yaml` against the local
`@bufbuild/protoc-gen-es` v2 binary (devDependency) so the codegen
step does not depend on the buf.build BSR. Vitest covers the bridge
end-to-end: per-method WasmCore tests under JSDOM, byte-for-byte
canon parity against the gateway fixtures committed in Phase 3, and
a `GalaxyClient` orchestration test using
`createRouterTransport`. The committed `core.wasm` snapshot tracks
TinyGo output so contributors run `make wasm` only when `ui/core/`
changes; CI consumes the snapshot directly.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 12:58:37 +02:00
Ilia Denisov cd61868881 chore: add game .gitignore 2026-05-07 11:58:28 +02:00
Ilia Denisov 3acbbabcc4 chore: stop tracking .claude/scheduled_tasks.lock
The lock is harness runtime state; it must not be committed.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 11:52:35 +02:00
Ilia Denisov 89bf7e6576 phase 4: drop stale gRPC nomenclature from integration tests
Phase 4 replaced the gateway's authenticated edge listener with a
Connect-Go HTTP/h2c bootstrap that natively serves Connect, gRPC,
and gRPC-Web. Sweep the integration suite so test names, comments,
and helper docs match the new transport posture: rename
TestUserAccount_GetThroughGatewayGRPC to TestUserAccount_GetThroughGatewayEdge,
flip "authenticated gRPC" / "signed gRPC" / "gateway gRPC" comments
to "authenticated edge", and align testenv doc strings.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 11:52:17 +02:00
Ilia Denisov 118f7c17a2 phase 4: connectrpc on the gateway authenticated edge
Replace the native-gRPC server bootstrap with a single
`connectrpc.com/connect` HTTP/h2c listener. Connect-Go natively
serves Connect, gRPC, and gRPC-Web on the same port, so browsers can
now reach the authenticated surface without giving up the gRPC
framing native and desktop clients may use later. The decorator
stack (envelope → session → payload-hash → signature →
freshness/replay → rate-limit → routing/push) is reused unchanged
behind a small Connect → gRPC adapter and a `grpc.ServerStream`
shim around `*connect.ServerStream`.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 11:49:28 +02:00
Ilia Denisov 39b7b2ef29 ci: skip docs-only triggers; document per-stage local-ci gate
ui-test workflow gains a `!**/*.md` negation so commits touching only
markdown (READMEs, PLAN.md updates, topic docs) no longer kick off the
full Go + Vitest + Playwright pipeline. Mixed commits keep triggering
because at least one positive path (`ui/**`, `gateway/**`, …) still
matches.

Project CLAUDE.md adds a per-stage CI gate section so the local
Gitea Actions runner is exercised at the close of every stage from
any PLAN.md, with the push step pre-authorised.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 09:47:27 +02:00
Ilia Denisov dc1c9b109c phase 3 2026-05-07 09:40:37 +02:00
Ilia Denisov 63cccdc958 docs: testing.md — local gitea ci cheat sheet
Replaces the act-as-fallback section with the operations needed to
work with the local Gitea + arm64 act_runner shipped in tools/local-ci/:
how to bring it up, push, query run status from curl, and pull
zstd-compressed step logs from inside the gitea container. Keeps a
short act note as a syntax-only dry-run.

Also drops `client/**` from the path-filter list documented at the
top (the workflow excludes deprecated client/ from triggers and from
the go test command), and notes that the checkout step now uses
submodules: recursive so MaxMind-DB fixtures land for pkg/geoip.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 08:49:43 +02:00
Ilia Denisov 1b5749bd31 fix: make ci green on a fresh runner
Two issues surfaced by the first end-to-end ui-test.yaml run on a
clean Linux runner that don't reproduce locally:

- pkg/geoip tests load fixtures from the pkg/geoip/test-data git
  submodule (MaxMind-DB). actions/checkout@v4 does not fetch
  submodules by default, so the fixture path is missing on the
  runner. Both ui-test and ui-release workflows now check out with
  submodules: recursive.

- pkg/util/TestWritable asserts that /usr/lib is not writable, which
  holds for unprivileged users but fails inside the catthehacker
  workflow container that runs as root. Skip that branch when
  os.Geteuid() == 0; the root-only "the writable dir is writable"
  branch still runs.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 08:35:34 +02:00
Ilia Denisov 7450006ed3 phase 2: ui testing infrastructure
Vitest + @testing-library/jest-dom matchers wired through tests/setup.ts.
Playwright with four projects: chromium-desktop, webkit-desktop,
chromium-mobile-iphone-13, chromium-mobile-pixel-5; traces and
screenshots retained on failure.

.gitea/workflows/ui-test.yaml runs Tier 1 on every push and pull
request: monorepo Go service tests (backend with -p 1 to dodge
testcontainer contention; gateway, game, every pkg/<name> module),
pnpm install --frozen-lockfile, playwright install --with-deps,
pnpm test, pnpm exec playwright test. Uploads playwright-report
and test-results on failure. Integration suite stays gated behind
make -C integration integration; deprecated client/ excluded.

.gitea/workflows/ui-release.yaml mirrors Tier 1 on v* tag push and
keeps commented placeholders for visual regression (Phase 33) and
macOS iOS smoke (Phase 32).

ui/docs/testing.md documents both tiers and the local invocations
that mirror what CI runs. ui/PLAN.md Phase 2 marked done; Phase 3
gains a bullet to extend the go test command with ./ui/core/...;
Phase 36 has the renamed release workflow path.

tools/local-ci/ ships a self-contained docker-compose for verifying
workflows against a local Gitea + arm64 act_runner before pushing
to a real instance.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 08:24:44 +02:00
Ilia Denisov cf41be9eff fix: mock /healthz in runtime service e2e test
TestServiceStartGameEndToEnd's httptest server had no handler for
/healthz, the path engineclient.Healthz probes after a runtime
container starts. Without it the runtime never transitions out of
starting state and the test fails on its 5s deadline. Surfaced by
introducing CI that runs the backend service tests outside the
integration harness.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 08:24:25 +02:00
Ilia Denisov 7cc18159e9 phase 1 2026-05-07 07:18:55 +02:00
Ilia Denisov 7af57933eb chore: plan formatting 2026-05-07 06:35:58 +02:00
Ilia Denisov 08f1917bc1 docs: ui plan 2026-05-07 06:32:46 +02:00
634 changed files with 258810 additions and 4167 deletions
+9 -14
@@ -1,8 +1,10 @@
{
"permissions": {
"allow": [],
"defaultMode": "default"
},
"sandbox": {
"network": {
"allowLocalBinding": true,
"allowUnixSockets": ["/Users/id/.colima/default/docker.sock"],
"allowedDomains": [
"github.com",
"registry.npmjs.org",
@@ -11,18 +13,11 @@
"docker.io",
"gcr.io",
"*.golang.org"
]
],
"allowUnixSockets": [
"/var/run/docker.sock"
],
"allowLocalBinding": true
}
},
"enabledPlugins": {
"gopls-lsp@claude-plugins-official": true,
"context7@claude-plugins-official": true
},
"permissions": {
"defaultMode": "plan",
"allow": [
"mcp__context7__resolve-library-id",
"mcp__context7__get-library-docs"
]
}
}
+5
@@ -0,0 +1,5 @@
*.wasm binary
*.ts linguist-language=TypeScript
*.ts linguist-detectable=true
*.ts linguist-vendored=false
*.ts linguist-generated=false
+148
@@ -0,0 +1,148 @@
name: ui-release
# Tier 2 (release) workflow. Runs on tag push.
#
# Currently mirrors the Tier 1 step set. Visual regression baseline
# checks and the macOS-runner iOS smoke job are landed in later phases
# of ui/PLAN.md and live as commented sections at the end of this file
# until those phases ship.
on:
push:
tags:
- 'v*'
jobs:
test:
runs-on: ubuntu-latest
defaults:
run:
shell: bash
steps:
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version-file: go.work
cache: true
- name: Run Go tests
# client/ is the deprecated Fyne client; excluded from CI per
# ui/PLAN.md §74. -count=1 disables Go's test cache so a green
# run never depends on a previous runner's cached state. The
# backend suite is run with -p 1 because most backend packages
# spawn their own Postgres testcontainer, and parallel
# Postgres bootstraps starve each other on a constrained
# runner. pkg modules are listed one by one because ./pkg/...
# does not recurse across the independent go.work modules
# under pkg/.
run: |
go test -count=1 -p 1 ./backend/...
go test -count=1 \
./gateway/... \
./game/... \
./ui/core/... \
./pkg/calc/... \
./pkg/connector/... \
./pkg/cronutil/... \
./pkg/error/... \
./pkg/geoip/... \
./pkg/model/... \
./pkg/postgres/... \
./pkg/redisconn/... \
./pkg/schema/... \
./pkg/storage/... \
./pkg/transcoder/... \
./pkg/util/...
- name: Set up pnpm
uses: pnpm/action-setup@v4
with:
version: 11.0.7
- name: Set up Node
uses: actions/setup-node@v4
with:
node-version: 22
cache: pnpm
cache-dependency-path: ui/pnpm-lock.yaml
- name: Install npm dependencies
working-directory: ui
run: pnpm install --frozen-lockfile
- name: Install Playwright browsers
working-directory: ui/frontend
run: pnpm exec playwright install --with-deps
- name: Run Vitest
working-directory: ui/frontend
run: pnpm test
- name: Run Playwright
working-directory: ui/frontend
run: pnpm exec playwright test
- name: Upload Playwright report on failure
if: failure()
uses: actions/upload-artifact@v4
with:
name: playwright-report
path: ui/frontend/playwright-report/
retention-days: 14
- name: Upload Playwright traces on failure
if: failure()
uses: actions/upload-artifact@v4
with:
name: playwright-traces
path: ui/frontend/test-results/
retention-days: 14
# visual-regression: enabled in Phase 33 of ui/PLAN.md, once the PWA
# shell and service worker land and a snapshot baseline is committed
# under ui/frontend/tests/__snapshots__/.
#
# visual-regression:
# runs-on: ubuntu-latest
# needs: test
# steps:
# - uses: actions/checkout@v4
# - uses: pnpm/action-setup@v4
# with: { version: 11.0.7 }
# - uses: actions/setup-node@v4
# with:
# node-version: 22
# cache: pnpm
# cache-dependency-path: ui/pnpm-lock.yaml
# - working-directory: ui
# run: pnpm install --frozen-lockfile
# - working-directory: ui/frontend
# run: pnpm exec playwright install --with-deps
# - working-directory: ui/frontend
# run: pnpm exec playwright test --grep @visual
# ios-smoke: enabled in Phase 32 of ui/PLAN.md, once the Capacitor
# wrapper lands. Runs a Capacitor + Appium smoke against an iOS
# simulator on a macOS runner.
#
# ios-smoke:
# runs-on: macos-13
# needs: test
# steps:
# - uses: actions/checkout@v4
# - uses: pnpm/action-setup@v4
# with: { version: 11.0.7 }
# - uses: actions/setup-node@v4
# with:
# node-version: 22
# cache: pnpm
# cache-dependency-path: ui/pnpm-lock.yaml
# - working-directory: ui
# run: pnpm install --frozen-lockfile
# - working-directory: ui/mobile
# run: pnpm exec cap sync ios && pnpm exec appium-smoke ios
+128
@@ -0,0 +1,128 @@
name: ui-test
# Tier 1 (per-PR) workflow. Runs Vitest + Playwright for the UI client and
# the monorepo Go service tests (everything except the integration suite,
# which lives behind `make -C integration integration` and needs a Docker
# daemon set up for testcontainers).
#
# The path filter is intentionally broad until a dedicated go-test
# workflow is introduced; this is the only CI gate today.
on:
push:
paths:
- 'ui/**'
- 'backend/**'
- 'gateway/**'
- 'game/**'
- 'pkg/**'
- 'go.work'
- 'go.work.sum'
- '.gitea/workflows/ui-test.yaml'
# Skip docs-only commits. Negation removes pure markdown changes;
# mixed commits (code + .md) still match a positive pattern above
# and trigger the workflow. Image and other binary asset paths
# are already outside the positive list.
- '!**/*.md'
pull_request:
paths:
- 'ui/**'
- 'backend/**'
- 'gateway/**'
- 'game/**'
- 'pkg/**'
- 'go.work'
- 'go.work.sum'
- '.gitea/workflows/ui-test.yaml'
- '!**/*.md'
jobs:
test:
runs-on: ubuntu-latest
defaults:
run:
shell: bash
steps:
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version-file: go.work
cache: true
- name: Run Go tests
# client/ is the deprecated Fyne client; excluded from CI per
# ui/PLAN.md §74. -count=1 disables Go's test cache so a green
# run never depends on a previous runner's cached state. The
# backend suite is run with -p 1 because most backend packages
# spawn their own Postgres testcontainer, and parallel
# Postgres bootstraps starve each other on a constrained
# runner. pkg modules are listed one by one because ./pkg/...
# does not recurse across the independent go.work modules
# under pkg/.
run: |
go test -count=1 -p 1 ./backend/...
go test -count=1 \
./gateway/... \
./game/... \
./ui/core/... \
./pkg/calc/... \
./pkg/connector/... \
./pkg/cronutil/... \
./pkg/error/... \
./pkg/geoip/... \
./pkg/model/... \
./pkg/postgres/... \
./pkg/redisconn/... \
./pkg/schema/... \
./pkg/storage/... \
./pkg/transcoder/... \
./pkg/util/...
- name: Set up pnpm
uses: pnpm/action-setup@v4
with:
version: 11.0.7
- name: Set up Node
uses: actions/setup-node@v4
with:
node-version: 22
cache: pnpm
cache-dependency-path: ui/pnpm-lock.yaml
- name: Install npm dependencies
working-directory: ui
run: pnpm install --frozen-lockfile
- name: Install Playwright browsers
working-directory: ui/frontend
run: pnpm exec playwright install --with-deps
- name: Run Vitest
working-directory: ui/frontend
run: pnpm test
- name: Run Playwright
working-directory: ui/frontend
run: pnpm exec playwright test
- name: Upload Playwright report on failure
if: failure()
uses: actions/upload-artifact@v4
with:
name: playwright-report
path: ui/frontend/playwright-report/
retention-days: 14
- name: Upload Playwright traces on failure
if: failure()
uses: actions/upload-artifact@v4
with:
name: playwright-traces
path: ui/frontend/test-results/
retention-days: 14
+14 -1
@@ -1,3 +1,16 @@
.codex
.vscode/
artifacts/
artifacts/.claude/scheduled_tasks.lock
# Per-developer Claude Code overrides. The committed
# `.claude/settings.json` holds the shared project defaults;
# `settings.local.json` is each developer's local override
# (looser permissions, disabled sandbox) and must not be staged.
.claude/settings.local.json
# Per-developer Vite dotenv overrides. The committed
# `ui/frontend/.env.development` ships sane defaults for the
# `tools/local-dev/` stack; `.local` siblings stay personal and
# unstaged.
**/.env.local
**/.env.*.local
File diff suppressed because it is too large.
+66 -13
@@ -30,19 +30,56 @@ This repository hosts the Galaxy Game project.
- `galaxy/<service>/PLAN.md` — staged implementation plan for the service.
May already be complete and remain for historical reasons.
- `galaxy/<service>/docs/` — per-stage decision records
(one file per decision, re-organized after full implementation
of `PLAN.md`).
- `galaxy/<service>/docs/` — live topic-based documentation that's
deeper than what fits in `README.md` (per-feature design notes,
protocol specs, runbooks). Not stage-by-stage history.
## Decision records when implementing stages from PLAN.md
## Per-stage CI gate
- Stage-related discussion and decisions do NOT live in `README.md` or
`docs/ARCHITECTURE.md`. Those files describe the current state, not the history.
- Each non-trivial decision gets its own `.md` under the module's `docs/`,
referenced from the relevant `README.md`.
- Any agreement reached during interactive planning that is not obvious from
the code must be captured — either as a decision record or as an entry in
the module's README.
Every completed stage from any `PLAN.md` (per-service or `ui/PLAN.md`)
must be exercised on the local Gitea Actions runner before being
declared done. The runbook lives in `tools/local-ci/README.md`; the
short version is:
1. Commit the stage changes.
2. `make -C tools/local-ci push` — pushes `HEAD` to the local Gitea
instance and triggers every workflow that matches the changed
paths.
3. Poll the latest run via the API snippet in `ui/docs/testing.md`
(or the Gitea UI on `http://localhost:3000`) until it leaves
`running`. Inspect the log on failure.
4. Only after the run is `success` may the stage be marked done in
the corresponding `PLAN.md`.
This applies even when the local unit-test suite is green —
workflow-only failures (path filters, action-version mismatches,
missing secrets, runner-only environment differences) are cheap to
catch here and expensive to catch on a remote PR. The push step is
implicitly authorised: do not ask for confirmation on every stage.
If `tools/local-ci` is not running, bring it up first
(`make -C tools/local-ci up`); do not skip this gate. The single
exception is when the user explicitly waives it for a stage.
## Decisions during stage implementation
Stages from `PLAN.md` produce decisions. Those decisions never live in a
separate per-decision history file. Instead, every non-obvious decision is
baked back into the live state in three places:
1. **The plan itself.** Update the relevant stage's text, acceptance
criteria, or targeted tests so it reflects what was decided. If
earlier already-implemented stages need to follow the new agreement,
correct their code, tests, and live docs in the same patch.
2. **Later, not-yet-implemented stages.** When a decision affects later
stages — scope, dependencies, deliverables, or tests — update those
stages now, do not leave the future to re-derive them.
3. **Live documentation.** Module `README.md`, project
`docs/ARCHITECTURE.md`, `docs/FUNCTIONAL.md` (with its
`docs/FUNCTIONAL_ru.md` mirror), the affected service `openapi.yaml`
or `*.proto`, and any topic doc under `galaxy/<service>/docs/` that
the decision touches. `README.md` and `ARCHITECTURE.md` always
describe current state, not the history of how it was reached.
## Scope of PLAN.md changes
@@ -82,8 +119,8 @@ details.
The same behaviour is described in several parallel sources: code,
`docs/ARCHITECTURE.md`, `docs/FUNCTIONAL.md` (with its Russian mirror
`docs/FUNCTIONAL_ru.md`), the affected service `README.md`, the
relevant `openapi.yaml` or `*.proto`, and the per-stage decision
records under `galaxy/<service>/docs/`. They must never disagree.
relevant `openapi.yaml` or `*.proto`, and the topic-based docs under
`galaxy/<service>/docs/`. They must never disagree.
- Any patch that changes user-visible behaviour, an API contract, or a
cross-service flow updates every affected source in the same change
@@ -103,6 +140,22 @@ records under `galaxy/<service>/docs/`. They must never disagree.
`docs/FUNCTIONAL_ru.md` (translate only the touched paragraphs).
Skipping the mirror is treated as an incomplete patch.
## Code compactness
- Prefer compact code over speculative universality. Three similar
occurrences are not yet a pattern — wait for the third real caller
before extracting an abstraction.
- Do not add seams, hooks, or configuration knobs for hypothetical
future requirements. If the next stage of `PLAN.md` will need
something, the next stage will add it.
- A bug fix does not need surrounding cleanup; a one-shot operation
does not need a helper function; a single concrete value does not
need a parameter.
- When the plan can be satisfied by reusing an existing function or
type, do that instead of introducing a new one.
- This rule is about scope, not laziness — well-named identifiers,
precise types, and full test coverage stay non-negotiable.
## Dependencies
- Before adding a new module, check its upstream repository for the latest
-868
@@ -1,868 +0,0 @@
# backend — Implementation Plan
This plan has already been implemented and stays here for historical reasons.
It should NOT be treated as the source of truth for service functionality.
---
## Summary
This plan is the technical specification for implementing the
consolidated Galaxy `backend` service. It is read together with
`../docs/ARCHITECTURE.md` (architecture and security model) and
`README.md` (module layout, configuration, operations).
After reading those two documents and this plan, an implementing
engineer should not need to ask architectural questions. Every stage is
self-contained inside its domain area; stages run in order; each stage
has explicit Critical files.
The plan does not invent new domain concepts. It catalogues the work
required to assemble what the architecture document already defines.
## ~~Stage 1~~ — Repository cleanup
This stage was implemented and marked as done.
Goal: remove every module whose responsibility moves into `backend`,
and prepare the workspace for the new module.
Actions:
1. `git rm -r authsession/ lobby/ mail/ notification/ gamemaster/
rtmanager/ geoprofile/ user/ integration/ pkg/redisconn/
pkg/notificationintent/`.
2. Edit `go.work`:
- Remove `use` lines for the deleted modules.
- Remove `replace` lines for `galaxy/redisconn` and
`galaxy/notificationintent`.
- Do not add `./backend` yet — the module is created in Stage 2.
3. Confirm that surviving modules still build:
`go build ./gateway/... ./game/... ./client/... ./pkg/...`.
Any compile error here means a surviving module imported a
removed package and must be patched (the only realistic culprit is
`gateway`, which references `pkg/redisconn` and the deleted streams;
patches there belong to Stage 6, not Stage 1 — for Stage 1 it is
acceptable to leave gateway broken if and only if the only failures
come from imports of removed packages).
4. Run `go vet ./pkg/...` and confirm no diagnostic.
Out of scope: any code change inside surviving modules. Stage 1 is
purely deletion plus `go.work` edits.
Critical files:
- `go.work`
- the deletion of `authsession/`, `lobby/`, `mail/`, `notification/`,
`gamemaster/`, `rtmanager/`, `geoprofile/`, `user/`, `integration/`,
`pkg/redisconn/`, `pkg/notificationintent/`.
Done criteria:
- `git status` shows only deletions plus the `go.work` edit.
- `go build ./pkg/...` is clean.
- `go vet ./pkg/...` is clean.
## ~~Stage 2~~ — Backend skeleton & shared infrastructure
This stage was implemented and marked as done.
Goal: stand up the new module with its boot path, configuration,
telemetry, logger, HTTP listener, Postgres pool, and gRPC listener — all
with empty handlers. After this stage `go run ./backend/cmd/backend`
must boot to a state where probes return 200 and migrations run (with an
empty migration file).
Actions:
1. Create `backend/go.mod` with module path `galaxy/backend` and Go
version matching `go.work`. Add direct dependencies:
`github.com/gin-gonic/gin`, `github.com/jackc/pgx/v5`,
`github.com/go-jet/jet/v2`, `github.com/pressly/goose/v3`,
`go.uber.org/zap`, `go.opentelemetry.io/otel` and the OTLP
trace/metric exporters used by other services, and the `galaxy/*`
pkg modules (`postgres`, `model`, `geoip`, `cronutil`, `error`,
`util`).
2. Add `./backend` to `go.work` `use(...)`.
3. `backend/cmd/backend/main.go` — boot order:
1. Load `config.LoadFromEnv()`; `cfg.Validate()`.
2. Initialise telemetry (`telemetry.NewProcess(cfg.Telemetry)`). Set
global tracer and meter providers.
3. Construct the zap logger; inject trace fields helper.
4. Open Postgres pool. Apply embedded migrations with goose. Fail
fast on any error.
5. Construct module wiring (empty for now; populated in Stage 5).
6. Start the HTTP server (gin engine with empty route groups, plus
`/healthz` and `/readyz`).
7. Start the gRPC push server (no streams accepted yet — Stage 6).
8. Block on `signal.NotifyContext(ctx, SIGINT, SIGTERM)`; on signal,
drain in the order described in `README.md` §16.
4. `backend/internal/config/config.go` — env-loader following the
pattern used by surviving services. Cover every variable listed in
`README.md` §4. Provide `DefaultConfig()` and `Validate()`.
5. `backend/internal/telemetry/runtime.go` — port the existing service
pattern verbatim: configurable OTLP gRPC/HTTP exporter, optional
stdout exporter, Prometheus pull endpoint when configured. Expose
`TraceFieldsFromContext(ctx) []zap.Field`.
6. `backend/internal/server/server.go` — gin engine, three empty route
groups, request id middleware, panic recovery middleware, otel
middleware. Probe handlers in `server/probes.go`.
7. `backend/internal/postgres/pool.go` — pgx pool factory using the
shared `galaxy/postgres` helper.
8. `backend/internal/postgres/migrations/00001_init.sql` — empty file
containing the `-- +goose Up` and `-- +goose Down` markers and a
single `CREATE SCHEMA IF NOT EXISTS backend;` statement so the
migration is non-empty and can be verified.
9. `backend/internal/postgres/migrations/embed.go` — `embed.FS` and
exported `Migrations() fs.FS` helper.
10. `backend/internal/push/server.go` — gRPC server skeleton bound to
`cfg.GRPCPushListenAddr`. No service registered yet.
11. `backend/Makefile` — at minimum a `jet` target stub that prints
"not generated yet"; will be filled in Stage 4.
Critical files:
- `backend/go.mod`, `go.work`
- `backend/cmd/backend/main.go`
- `backend/internal/config/config.go`
- `backend/internal/telemetry/runtime.go`
- `backend/internal/server/server.go`, `backend/internal/server/probes.go`
- `backend/internal/postgres/pool.go`,
`backend/internal/postgres/migrations/00001_init.sql`,
`backend/internal/postgres/migrations/embed.go`
- `backend/internal/push/server.go`
- `backend/Makefile`
Done criteria:
- `go build ./backend/...` is clean.
- `go run ./backend/cmd/backend` starts, applies the placeholder
migration, opens HTTP and gRPC listeners, and serves `/healthz` 200
and `/readyz` 200.
- Telemetry output (stdout exporter) shows trace and metric activity on
a probe hit.
## ~~Stage 3~~ — API contract & routing
This stage was implemented and marked as done.
Goal: define the entire backend REST contract in `openapi.yaml` and
register every handler as a placeholder that returns
`501 Not Implemented`. Wire the middleware stack for each route group.
The contract test suite must validate every endpoint round-trip against
the OpenAPI document and pass on the placeholders.
Actions:
1. Author `backend/openapi.yaml` — single document with three tags
(`Public`, `User`, `Admin`) and the endpoint set below. Reuse
schemas from `pkg/model` where possible; keep the rest under
`components/schemas/*`.
2. Implement middleware in `backend/internal/server/middleware/`:
- `requestid` — assigns and propagates a request id (Stage 2 may
have already done this; consolidate here).
- `logging` — emits an access log entry with trace fields.
- `metrics` — counters and histograms per route group.
- `panicrecovery` — converts panics to 500 with structured logging.
- `userid` — required on `/api/v1/user/*`. Reads `X-User-ID`,
parses as UUID, places it in the request context. Rejects with
400 if missing or malformed. Backend trusts the value (see
architecture trust note).
- `basicauth` — required on `/api/v1/admin/*`. Stage 3 uses a stub
verifier that accepts any non-empty username and a fixed password
read from a test-only env var so contract tests can pass; Stage
5.3 replaces the verifier with the real Postgres-backed one.
3. Implement handlers per endpoint in
`backend/internal/server/handlers_<group>_<topic>.go`. Every handler
returns `501 Not Implemented` with the standard error body
`{"error":{"code":"not_implemented","message":"..."}}`.
4. Implement the contract test:
`backend/internal/server/contract_test.go`. Loads
`backend/openapi.yaml` via `kin-openapi`, builds the gin engine,
walks every operation, sends a representative request, and
validates both the request and response against the OpenAPI
document.
5. Document `openapi.yaml` location and contract test pattern in
`backend/docs/api-contract.md` (a brief decision record).
### Endpoint inventory
Public (`/api/v1/public/*`):
- `POST /auth/send-email-code` — request body `{email, locale?}`;
response `{challenge_id}`.
- `POST /auth/confirm-email-code` — request body
`{challenge_id, code, client_public_key, time_zone}`; response
`{device_session_id}`.
Probes (root):
- `GET /healthz` — `200` always when the process is alive.
- `GET /readyz` — `200` once Postgres reachable, migrations applied,
gRPC listener bound; `503` otherwise.
User (`/api/v1/user/*`, all require `X-User-ID`):
- `GET /account` — current account view (profile + settings +
entitlements).
- `PATCH /account/profile` — update mutable profile fields
(`display_name`).
- `PATCH /account/settings` — update `preferred_language`, `time_zone`.
- `POST /account/delete` — soft delete; cascade is in process.
- `GET /lobby/games` — public list with paging.
- `POST /lobby/games` — create.
- `GET /lobby/games/{game_id}`.
- `PATCH /lobby/games/{game_id}`.
- `POST /lobby/games/{game_id}/open-enrollment`.
- `POST /lobby/games/{game_id}/ready-to-start`.
- `POST /lobby/games/{game_id}/start`.
- `POST /lobby/games/{game_id}/pause`.
- `POST /lobby/games/{game_id}/resume`.
- `POST /lobby/games/{game_id}/cancel`.
- `POST /lobby/games/{game_id}/retry-start`.
- `POST /lobby/games/{game_id}/applications`.
- `POST /lobby/games/{game_id}/applications/{application_id}/approve`.
- `POST /lobby/games/{game_id}/applications/{application_id}/reject`.
- `POST /lobby/games/{game_id}/invites`.
- `POST /lobby/games/{game_id}/invites/{invite_id}/redeem`.
- `POST /lobby/games/{game_id}/invites/{invite_id}/decline`.
- `POST /lobby/games/{game_id}/invites/{invite_id}/revoke`.
- `GET /lobby/games/{game_id}/memberships`.
- `POST /lobby/games/{game_id}/memberships/{membership_id}/remove`.
- `POST /lobby/games/{game_id}/memberships/{membership_id}/block`.
- `GET /lobby/my/games`.
- `GET /lobby/my/applications`.
- `GET /lobby/my/invites`.
- `GET /lobby/my/race-names`.
- `POST /lobby/race-names/register` — promote a `pending_registration`
to `registered` within the 30-day window.
- `POST /games/{game_id}/commands` — proxy to engine command path.
- `POST /games/{game_id}/orders` — proxy to engine order validation.
- `GET /games/{game_id}/reports/{turn}` — proxy to engine report path.
Admin (`/api/v1/admin/*`, all require Basic Auth):
- `GET /admin-accounts`, `POST /admin-accounts`,
`GET /admin-accounts/{username}`,
`POST /admin-accounts/{username}/disable`,
`POST /admin-accounts/{username}/enable`,
`POST /admin-accounts/{username}/reset-password`.
- `GET /users`, `GET /users/{user_id}`,
`POST /users/{user_id}/sanctions`,
`POST /users/{user_id}/limits`,
`POST /users/{user_id}/entitlements`,
`POST /users/{user_id}/soft-delete`.
- `GET /games`, `GET /games/{game_id}`,
`POST /games/{game_id}/force-start`,
`POST /games/{game_id}/force-stop`,
`POST /games/{game_id}/ban-member`.
- `GET /runtimes/{game_id}`,
`POST /runtimes/{game_id}/restart`,
`POST /runtimes/{game_id}/patch`,
`POST /runtimes/{game_id}/force-next-turn`,
`GET /engine-versions`, `POST /engine-versions`,
`PATCH /engine-versions/{id}`,
`POST /engine-versions/{id}/disable`.
- `GET /mail/deliveries`,
`GET /mail/deliveries/{delivery_id}`,
`GET /mail/deliveries/{delivery_id}/attempts`,
`POST /mail/deliveries/{delivery_id}/resend`,
`GET /mail/dead-letters`.
- `GET /notifications`, `GET /notifications/{notification_id}`,
`GET /notifications/dead-letters`,
`GET /notifications/malformed`.
- `GET /geo/users/{user_id}/countries` — counter listing.
Internal (gateway-only, `/api/v1/internal/*`):
- `GET /sessions/{device_session_id}` — gateway session lookup.
- `POST /sessions/{device_session_id}/revoke` — admin or self revoke
passthrough; backend emits `session_invalidation`.
- `POST /sessions/users/{user_id}/revoke-all`.
- `GET /users/{user_id}/account-internal` — server-to-server fetch
used by gateway flows that need account state alongside the session.
The trust model treats the internal group as part of the user surface
(no extra auth in MVP).
Critical files:
- `backend/openapi.yaml`
- `backend/internal/server/router.go`
- `backend/internal/server/middleware/{requestid,logging,metrics,panicrecovery,userid,basicauth}.go`
- `backend/internal/server/handlers_*.go`
- `backend/internal/server/contract_test.go`
- `backend/docs/api-contract.md`
Done criteria:
- `go test ./backend/internal/server/...` is green; the contract test
exercises every endpoint and validates against `openapi.yaml`.
- Every endpoint returns `501 Not Implemented` with the standard error
body.
- gin route table at startup matches the OpenAPI inventory exactly.
## ~~Stage 4~~ — Persistence layer
This stage was implemented and marked as done.
Goal: define every `backend` schema table, generate jet code, and wire
the persistence layer so it is ready for the domain modules.
Actions:
1. Replace `backend/internal/postgres/migrations/00001_init.sql` with
the full DDL. The schema is `backend`. The expected tables and
their primary purposes:
Auth:
- `device_sessions(device_session_id uuid pk, user_id uuid not null,
client_public_key bytea not null, status text not null,
created_at, revoked_at, last_seen_at)` plus indexes on
`user_id` and `status`.
- `auth_challenges(challenge_id uuid pk, email text not null,
code_hash bytea not null, created_at, expires_at, consumed_at,
attempts int not null default 0)`. Index on `email`.
- `blocked_emails(email text pk, blocked_at, reason text)`.
User:
- `accounts(user_id uuid pk, email text unique not null,
user_name text unique not null, display_name text not null,
preferred_language text not null, time_zone text not null,
declared_country text, permanent_block bool not null default false,
created_at, updated_at, deleted_at)`.
- `entitlement_records(record_id uuid pk, user_id uuid not null,
tier text not null, source text not null, created_at)`.
- `entitlement_snapshots(user_id uuid pk, tier text not null,
max_registered_race_names int not null, taken_at timestamptz)`.
Updated on every entitlement change.
- `sanction_records`, `sanction_active`, `limit_records`,
`limit_active` — same shape as the previous `user` service had
(record + active rollup pattern).
Admin:
- `admin_accounts(username text pk, password_hash bytea not null,
created_at, last_used_at, disabled_at)`.
Lobby:
- `games(game_id uuid pk, owner_user_id uuid not null,
visibility text not null, status text not null, ...)` covering
enrollment state machine fields documented in
`ARCHITECTURE_deprecated.md` § Game Lobby.
- `applications(application_id uuid pk, game_id uuid not null,
applicant_user_id uuid not null, status text not null, ...)`.
- `invites(invite_id uuid pk, game_id uuid not null,
invited_user_id uuid, code text unique, status text, ...)`.
- `memberships(membership_id uuid pk, game_id uuid not null,
user_id uuid not null, race_name text not null, status text,
...)` plus `unique(game_id, user_id)`.
- `race_names(name text not null, canonical text not null,
status text not null, owner_user_id uuid, game_id uuid,
expires_at, registered_at, ...)` plus
`unique(canonical) where status in ('registered','reservation','pending_registration')`.
Runtime:
- `runtime_records(game_id uuid pk, current_container_id text,
status text not null, image_ref text, started_at, last_observed_at,
...)`.
- `engine_versions(version text pk, image_ref text not null,
enabled bool not null default true, created_at, ...)`.
- `player_mappings(game_id uuid not null, user_id uuid not null,
race_name text not null, engine_player_uuid uuid not null,
primary key(game_id, user_id))`.
- `runtime_operation_log(operation_id uuid pk, game_id uuid,
op text, status text, started_at, finished_at, error text)`.
- `runtime_health_snapshots(snapshot_id uuid pk, game_id uuid,
observed_at, payload jsonb)`.
Mail:
- `mail_deliveries(delivery_id uuid pk, template_id text not null,
idempotency_key text not null, status text not null,
attempts int not null default 0, next_attempt_at timestamptz,
payload_id uuid not null, created_at, ...)` plus
`unique(template_id, idempotency_key)`.
- `mail_recipients(recipient_id uuid pk, delivery_id uuid not null,
address text not null, kind text not null)`.
- `mail_attempts(attempt_id uuid pk, delivery_id uuid, attempt_no int,
started_at, finished_at, outcome text, error text)`.
- `mail_dead_letters(dead_letter_id uuid pk, delivery_id uuid,
archived_at, reason text)`.
- `mail_payloads(payload_id uuid pk, content_type text not null,
subject text, body bytea not null)`.
Notification:
- `notifications(notification_id uuid pk, kind text not null,
idempotency_key text not null, user_id uuid, payload jsonb,
created_at)` plus `unique(kind, idempotency_key)`.
- `notification_routes(route_id uuid pk, notification_id uuid,
channel text not null, status text not null, last_attempt_at,
...)`.
- `notification_dead_letters(dead_letter_id uuid pk, notification_id
uuid, archived_at, reason text)`.
- `notification_malformed_intents(id uuid pk, received_at, payload
jsonb, reason text)`.
Geo:
- `user_country_counters(user_id uuid not null, country text not null,
count bigint not null default 0, last_seen_at timestamptz,
primary key(user_id, country))`.
2. Add `created_at TIMESTAMPTZ DEFAULT now()` to every table; add
`updated_at` and `deleted_at` where the domain reasons in
`ARCHITECTURE_deprecated.md` apply. UTC normalisation is performed
in Go on read and write (the existing `pkg/postgres` helpers cover
this).
3. `backend/cmd/jetgen/main.go` — port the existing pattern from a
surviving reference (the previous services' `cmd/jetgen` is a good
template; adjust import paths to `galaxy/backend`). The tool spins
up a transient Postgres container, applies the embedded migrations,
and runs `jet -dsn=...` writing into `internal/postgres/jet/`.
4. `backend/Makefile` — fill in the `jet` target.
5. Run `make jet` and commit `internal/postgres/jet/`.
6. Add `backend/internal/postgres/jet/jet.go` — package doc and
`//go:generate` comment pointing to `cmd/jetgen`.
7. Sanity test in `backend/internal/postgres/migrations_test.go`:
spin up a Postgres testcontainer, apply migrations, assert that
the `backend` schema exists and that every expected table is
present.
Critical files:
- `backend/internal/postgres/migrations/00001_init.sql`
- `backend/internal/postgres/jet/**`
- `backend/cmd/jetgen/main.go`
- `backend/Makefile`
- `backend/internal/postgres/migrations_test.go`
Done criteria:
- `go test ./backend/internal/postgres/...` is green.
- `make jet` regenerates without diff.
- All tables listed above exist after a fresh migration.
## ~~Stage 5~~ — Domain implementation
Goal: implement domain modules in dependency order. After each substage
the backend is functional for the substage's slice of behaviour. The
contract tests from Stage 3 progressively flip from `501` to actual
responses as each substage replaces placeholders.
Substages run strictly in order. Each substage:
- Implements package code in `backend/internal/<domain>/`.
- Replaces the corresponding `501` handler bodies in
`backend/internal/server/handlers_*.go` with real logic that calls
the domain package.
- Adds focused unit and contract coverage for the substage's
endpoints.
- Wires the new package into `backend/cmd/backend/main.go`.
### ~~5.1~~ — auth
This substage was implemented and marked as done. See
[`docs/stage05_1-auth.md`](docs/stage05_1-auth.md) for the decisions
taken during implementation.
Behaviour:
- `POST /api/v1/public/auth/send-email-code` — generates a challenge,
hashes the code, persists in `auth_challenges`, calls
`mail.EnqueueLoginCode(email, code)`. Returns `{challenge_id}` for
every non-blocked email (existing user, new user, throttled — all
return identical shape; blocked email rejects with 400 only when the
block is permanent).
- `POST /api/v1/public/auth/confirm-email-code` — looks up the
challenge, verifies the code (constant-time), enforces attempt
ceiling, marks consumed, calls `user.EnsureByEmail(email,
preferred_language, time_zone)` to obtain the user_id, stores the
Ed25519 public key, creates a `device_session` row, populates the
in-memory cache, calls
`geo.SetDeclaredCountryAtRegistration(user_id, source_ip)`, and
returns `{device_session_id}`.
- `GET /api/v1/internal/sessions/{device_session_id}` — sync session
lookup for gateway.
- `POST /api/v1/internal/sessions/{device_session_id}/revoke` and
`POST /api/v1/internal/sessions/users/{user_id}/revoke-all` — mark
sessions revoked, evict from in-memory cache, emit
`session_invalidation` push event (Stage 6 wires the actual
emission; until then `auth` calls a no-op publisher injected at
wiring).
Cache: full session table read at startup; write-through on every
mutation.
### ~~5.2~~ — user
This substage was implemented and marked as done. See
[`docs/stage05_2-user.md`](docs/stage05_2-user.md) for the decisions
taken during implementation.
Behaviour:
- Account CRUD limited to allowed mutations on profile and settings.
- `EnsureByEmail` and `ResolveByEmail` for `auth`.
- Entitlement records and snapshots; tier downgrades never revoke
already-registered race names.
- Sanctions and limits using the record + active rollup pattern.
- Soft delete: writes `deleted_at` and triggers in-process cascade —
`lobby.OnUserDeleted(user_id)`, `notification.OnUserDeleted(user_id)`,
`geo.OnUserDeleted(user_id)`. Permanent block triggers
`lobby.OnUserBlocked(user_id)`.
- Cache: latest entitlement snapshot per user; warmed on startup;
write-through on entitlement mutation.
### ~~5.3~~ — admin
This substage was implemented and marked as done. See
[`docs/stage05_3-admin.md`](docs/stage05_3-admin.md) for the decisions
taken during implementation.
Behaviour:
- `admin_accounts` CRUD with bcrypt hashing.
- Bootstrap on startup via env vars (`BACKEND_ADMIN_BOOTSTRAP_USER`,
`BACKEND_ADMIN_BOOTSTRAP_PASSWORD`); idempotent.
- Replace the Stage 3 stub `basicauth` middleware with the real
Postgres-backed verifier. Constant-time comparison via bcrypt.
- Admin CRUD endpoints across users, games, runtime, mail,
notification, geo. Each admin endpoint delegates to the domain
package's admin-facing methods.
Cache: full admin table at startup; write-through on mutation.
### ~~5.4~~ — lobby
This substage was implemented and marked as done. See
[`docs/stage05_4-lobby.md`](docs/stage05_4-lobby.md) for the decisions
taken during implementation.
Behaviour:
- Games CRUD with the enrollment state machine.
- Applications and invites with their lifecycles.
- Memberships with race name binding.
- Race Name Directory: registered, reservation, and
pending_registration tiers; canonical key via `disciplinedware/go-confusables`;
uniqueness across all three tiers; capability promotion based on
`max_planets > initial AND max_population > initial` from the
runtime snapshot.
- Pending-registration sweeper: scheduled job, releases entries past
the 30-day window; uses `pkg/cronutil`. The same sweeper auto-closes
enrollment-expired games whose `approved_count >= min_players`.
- Hooks consumed from other modules:
- `OnUserBlocked(user_id)` — release all RND/applications/invites/
memberships in one transaction.
- `OnUserDeleted(user_id)` — same.
- `OnRuntimeSnapshot(snapshot)` — update denormalised runtime view
on the game (current_turn, status, per-member max stats).
- `OnGameFinished(game_id)` — drive race name promotion logic and
move game to `finished`.
Cache: active games and memberships, RND canonical set; warmed on
startup; write-through on mutation.
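The canonical-key idea behind the uniqueness rule can be shown with a deliberately simplified stand-in — the real canonicalisation goes through `disciplinedware/go-confusables`, which additionally folds visually confusable Unicode; here only case and whitespace are folded:

```go
package main

import (
	"fmt"
	"strings"
)

// canonicalKey is a toy version of the RND canonicalisation: fold
// case and collapse whitespace, so near-duplicate names collide on
// the unique(canonical) constraint across all three tiers.
func canonicalKey(name string) string {
	return strings.Join(strings.Fields(strings.ToLower(name)), " ")
}

func main() {
	fmt.Println(canonicalKey("Star  Nomads") == canonicalKey("star nomads")) // true
}
```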
### ~~5.5~~ — runtime (with dockerclient and engineclient)
This substage was implemented and marked as done. See
[`docs/stage05_5-runtime.md`](docs/stage05_5-runtime.md) for the
decisions taken during implementation.
Behaviour:
- Engine version registry CRUD.
- `engineclient` is a thin `net/http` client over `pkg/model` types,
one method per engine endpoint listed in `README.md` §8.
- `dockerclient` wraps `github.com/docker/docker` for: pull, create,
start, stop, remove, inspect, list (filtered by the
`galaxy.backend=1` label), patch (semver-only, validated against
`engine_versions`).
- Per-game serialisation: a `sync.Map[game_id]*sync.Mutex` ensures
concurrent ops on the same game are sequential.
- Worker pool for long-running operations: started in Stage 5.5; jobs
enqueued on a buffered channel; bounded concurrency.
- `runtime_operation_log` records every op (start time, finish time,
outcome, error).
- Reconciliation: on startup and on a `pkg/cronutil` schedule, list
containers labelled `galaxy.backend=1`, match against
`runtime_records`, adopt unrecorded labelled containers, mark
recorded but missing as removed. Emit
`lobby.OnRuntimeJobResult` for each removed.
- Snapshot publication: after every successful engine read or a
health-probe transition, synthesise a snapshot and call
`lobby.OnRuntimeSnapshot(snapshot)` synchronously.
- Turn scheduler: `pkg/cronutil` schedule per running game; each tick
invokes the engine `admin/turn`, on success snapshots and publishes;
force-next-turn sets a one-shot skip flag stored in
`runtime_records`.
Cache: active runtime records, engine version registry; warmed on
startup; write-through on mutation.
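The per-game serialisation bullet maps directly onto a tiny structure — a minimal sketch of the `sync.Map[game_id]*sync.Mutex` pattern named above:

```go
package main

import (
	"fmt"
	"sync"
)

// gameLocks hands out one mutex per game id, so concurrent runtime
// operations on the same game serialise while distinct games proceed
// in parallel.
type gameLocks struct{ m sync.Map }

func (g *gameLocks) lock(gameID string) *sync.Mutex {
	mu, _ := g.m.LoadOrStore(gameID, &sync.Mutex{})
	return mu.(*sync.Mutex)
}

func main() {
	var locks gameLocks
	fmt.Println(locks.lock("g1") == locks.lock("g1")) // true: same mutex per game
	fmt.Println(locks.lock("g1") == locks.lock("g2")) // false: games independent
}
```

A caller does `mu := locks.lock(gameID); mu.Lock(); defer mu.Unlock()` around each runtime operation.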
### ~~5.6~~ — mail
This substage was implemented and marked as done. See
[`docs/stage05_6-mail.md`](docs/stage05_6-mail.md) for the decisions
taken during implementation.
Behaviour:
- Outbox tables defined in Stage 4.
- Worker goroutine: scans `mail_deliveries` with
`SELECT ... FOR UPDATE SKIP LOCKED` ordered by `next_attempt_at`,
attempts SMTP delivery via `wneessen/go-mail`, records in
`mail_attempts`, updates status, schedules backoff with jitter, or
dead-letters past the configured maximum attempts.
- Drain on startup: replays all `pending` and `retrying` rows.
- Public API for producers: `EnqueueLoginCode(email, code, ttl)`,
`EnqueueTemplate(template_id, recipient, payload, idempotency_key)`.
- Admin endpoints implemented: list, view, resend.
### ~~5.7~~ — notification
This substage was implemented and marked as done. See
[`docs/stage05_7-notification.md`](docs/stage05_7-notification.md) for
the decisions taken during implementation.
Behaviour:
- `Submit(intent)` — validate intent shape, enforce idempotency,
persist `notifications`, materialise `notification_routes`, fan out
to push (Stage 6 wires the actual push emission; until then a no-op
publisher) and email (`mail.EnqueueTemplate`).
- Each kind has a fixed channel set documented in `README.md` §10.
- Malformed intents go to `notification_malformed_intents` and never
block the producer.
- Dead-letter handling: a failed route past max attempts moves to
`notification_dead_letters`.
- Producers (lobby, runtime, geo, auth) are wired via direct function
calls.
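The idempotency rule in `Submit(intent)` is enforced by the database's `unique(kind, idempotency_key)` constraint; an in-memory sketch of the same accept-once semantics (type and method names are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// submitLog accepts each (kind, idempotency_key) pair exactly once,
// mirroring what unique(kind, idempotency_key) guarantees in Postgres.
type submitLog struct {
	mu   sync.Mutex
	seen map[[2]string]bool
}

func (s *submitLog) submit(kind, idemKey string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	k := [2]string{kind, idemKey}
	if s.seen[k] {
		return false // duplicate intent: already persisted, nothing to do
	}
	s.seen[k] = true
	return true
}

func main() {
	log := &submitLog{seen: map[[2]string]bool{}}
	fmt.Println(log.submit("game.turn.ready", "turn-ready:g1:7")) // true
	fmt.Println(log.submit("game.turn.ready", "turn-ready:g1:7")) // false
}
```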
### ~~5.8~~ — geo
This substage was implemented and marked as done. See
[`docs/stage05_8-geo.md`](docs/stage05_8-geo.md) for the decisions
taken during implementation.
Behaviour:
- Load GeoLite2 Country DB at startup from `BACKEND_GEOIP_DB_PATH`.
- `SetDeclaredCountryAtRegistration(user_id, ip)` — sync; lookup,
update `accounts.declared_country`. No-op on lookup error.
- `IncrementCounterAsync(user_id, ip)` — fire-and-forget goroutine;
upsert `user_country_counters` with `count = count + 1`,
`last_seen_at = now()`.
- Middleware on `/api/v1/user/*` extracts the source IP from
`X-Forwarded-For` (or `RemoteAddr`) and calls
`IncrementCounterAsync` after the handler returns successfully.
- `OnUserDeleted(user_id)` — delete the user's counter rows.
Critical files (Stage 5 as a whole):
- `backend/internal/auth/**`
- `backend/internal/user/**`
- `backend/internal/admin/**`
- `backend/internal/lobby/**`
- `backend/internal/runtime/**`
- `backend/internal/dockerclient/**`
- `backend/internal/engineclient/**`
- `backend/internal/mail/**`
- `backend/internal/notification/**`
- `backend/internal/geo/**`
- `backend/internal/server/handlers_*.go` (replacing 501 stubs)
- `backend/cmd/backend/main.go` (wiring expansion)
Done criteria:
- All Stage 3 contract tests pass against real responses.
- Each substage adds focused unit tests (`testify`, mocks where
external boundaries justify them).
- `go run ./backend/cmd/backend` boots, all caches warm, all workers
start.
## ~~Stage 6~~ — Push gRPC interface and gateway adaptation
Goal: stand up the bidirectional control channel between backend and
gateway. Backend pushes `client_event` and `session_invalidation`;
gateway opens the stream, signs and forwards client events, immediately
acts on session invalidations. Remove every Redis dependency from
gateway except anti-replay reservations.
### ~~6.1~~ — Backend push server
This substage was implemented and marked as done. See
[`docs/stage06_1-push.md`](docs/stage06_1-push.md) for the decisions
taken during implementation.
Actions:
1. Author `backend/proto/push/v1/push.proto` with
`service Push { rpc SubscribePush(GatewaySubscribeRequest) returns
(stream PushEvent); }` and the message types defined in
`README.md` §7. Include a `cursor` field (string).
2. `backend/buf.yaml`, `backend/buf.gen.yaml` mirroring the gateway
pattern; generate Go bindings into `backend/proto/push/v1/`.
3. `backend/internal/push/server.go` — gRPC service implementation:
- Maintains a connection registry keyed by gateway client id (the
`GatewaySubscribeRequest` provides one; if multiple gateway
instances connect, each gets its own queue).
- Holds an in-memory ring buffer keyed by cursor, with TTL equal to
`BACKEND_FRESHNESS_WINDOW`. Cursors past TTL are discarded.
- Resume: if the client's cursor is still in the buffer, replay
from there; otherwise replay nothing and start fresh.
- Backpressure: per-connection buffered channel; on overflow, drop
the oldest events for that connection and log.
4. Provide a publisher API consumed by `auth`, `lobby`, `notification`,
and `runtime`:
- `push.PublishClientEvent(user_id, device_session_id?, payload, kind)`.
- `push.PublishSessionInvalidation(device_session_id|user_id, reason)`.
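The drop-oldest backpressure rule from step 3 can be sketched with a bounded channel (event payloads are plain strings here; the real queue carries `PushEvent` messages):

```go
package main

import "fmt"

// connQueue is a per-connection buffer that never blocks the
// publisher: on overflow it evicts the oldest queued event, then
// retries the send.
type connQueue struct {
	ch      chan string
	dropped int
}

func newConnQueue(size int) *connQueue {
	return &connQueue{ch: make(chan string, size)}
}

func (q *connQueue) publish(ev string) {
	for {
		select {
		case q.ch <- ev:
			return
		default:
			select {
			case <-q.ch: // evict oldest, count the drop, retry
				q.dropped++
			default:
			}
		}
	}
}

func main() {
	q := newConnQueue(2)
	for _, ev := range []string{"e1", "e2", "e3"} {
		q.publish(ev)
	}
	fmt.Println(len(q.ch), q.dropped) // 2 1
	fmt.Println(<-q.ch, <-q.ch)       // e2 e3
}
```

The server would also log each drop, per the plan; the gateway resumes from its last acknowledged cursor when its buffer fell behind.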
### ~~6.2~~ — Gateway adaptation
This substage was implemented and marked as done. See
[`docs/stage06_2-gateway.md`](docs/stage06_2-gateway.md) for the
decisions taken during implementation.
Actions:
1. Remove `redisconn` usage for session projection and for the two
stream consumers. Keep `redisconn` only for anti-replay
reservations.
2. Remove `gateway/internal/config` env vars
`GATEWAY_SESSION_EVENTS_REDIS_STREAM` and
`GATEWAY_CLIENT_EVENTS_REDIS_STREAM`. Add
`GATEWAY_BACKEND_HTTP_URL` and `GATEWAY_BACKEND_GRPC_PUSH_URL`.
3. Add `gateway/internal/backendclient/` with:
- `RESTClient` — HTTP client for `/api/v1/internal/sessions/...` and
for forwarding public/user requests.
- `PushClient` — gRPC client to `SubscribePush` with reconnect
loop, exponential backoff with jitter, and cursor persistence in
process memory.
4. Replace gateway session validation with a sync REST call to
backend per request.
5. Replace gateway client-events Redis consumer with the
`SubscribePush` consumer. On `client_event`: sign envelope (Ed25519)
and deliver to the matching client subscription. On
`session_invalidation`: look up active subscriptions for the target
sessions, close them, and reject any in-flight authenticated
request bound to those sessions.
6. Anti-replay request_id reservations remain in Redis (unchanged).
7. Update gateway tests to use a mocked backend HTTP and gRPC server.
Critical files:
- `backend/proto/push/v1/push.proto`
- `backend/buf.yaml`, `backend/buf.gen.yaml`
- `backend/internal/push/server.go`,
`backend/internal/push/publisher.go`
- `gateway/internal/backendclient/*.go`
- `gateway/internal/config/config.go` (env var changes)
- `gateway/internal/handlers/*.go` (route forwarding to backend)
- `gateway/internal/auth/*.go` (session lookup → REST)
- `gateway/internal/eventfanout/*.go` (replace Redis consumer with
gRPC consumer; rename if helpful)
Done criteria:
- `go run ./backend/cmd/backend` and `go run ./gateway/cmd/gateway`
cooperate end-to-end with no Redis stream usage.
- A revocation through the admin surface causes immediate stream
closure on the affected client.
- Gateway anti-replay still rejects duplicates.
- gateway test suite green.
## ~~Stage 7~~ — Integration testing
This stage was implemented and marked as done. See
[`docs/stage07-integration.md`](docs/stage07-integration.md) for the
decisions taken during implementation, including the testenv layout,
the signed-envelope gRPC client, and the per-scenario coverage notes.
Goal: end-to-end coverage of the platform with real binaries and real
infrastructure where practical.
Actions:
1. Recreate the top-level `integration/` module, registered in
`go.work`. The module hosts black-box test suites that drive
`gateway` from outside and verify behaviour at the public boundary
(with `backend` and `game` running in containers).
2. Add testcontainers fixtures: Postgres, an SMTP capture server (for
example `axllent/mailpit`), the `galaxy/game` engine image, the
`galaxy/backend` image (built from this repo), and the
`galaxy/gateway` image. The Docker daemon used by testcontainers
is the same one backend will use to manage engines.
3. Add a synthetic GeoLite2 mmdb (use `pkg/geoip/test-data/`).
4. Cover scenarios:
- Registration flow: send-email-code → confirm-email-code →
`declared_country` populated from synthetic mmdb.
- User account fetch: `X-User-ID` path returns the expected
account; geo counter increments per request.
- Lobby flow: create game → invite → application → ready-to-start
→ start (engine container starts, healthz green, status read) →
command → force-next-turn → finish → race name promotion.
- Mail flow: trigger an email-bound notification → SMTP capture
receives it → admin resend works.
- Notification flow: lobby invite triggers a push event reaching
the test client's gateway subscription, plus an email captured
by SMTP.
- Admin flow: bootstrap admin authenticates; CRUD admin creates a
second admin; second admin disables the first.
- Soft delete flow: user soft-delete cascades; their RND entries,
memberships, applications, invites, geo counters are released
or removed.
- Session revocation: admin revokes a session → push
`session_invalidation` arrives at gateway → active subscription
closes; subsequent requests with that `device_session_id`
rejected by gateway.
- Anti-replay: same `request_id` replayed within freshness window
is rejected by gateway.
5. CI: run `go test ./integration/... -tags=integration` (or whichever
flag the team prefers). Tests requiring real Docker run only when
a Docker daemon is available; otherwise they skip with a clear
message.
Critical files:
- `integration/go.mod`
- `integration/auth_flow_test.go`
- `integration/lobby_flow_test.go`
- `integration/mail_flow_test.go`
- `integration/notification_flow_test.go`
- `integration/admin_flow_test.go`
- `integration/soft_delete_test.go`
- `integration/session_revoke_test.go`
- `integration/anti_replay_test.go`
- `integration/testenv/*.go` (shared fixtures)
Done criteria:
- `go test ./integration/...` runs the full suite.
- All listed scenarios pass green on a developer machine with Docker
available.
- Failures produce actionable diagnostics (logs from each component
attached to the test report).
## Stage acceptance and decision records
After each stage, the implementing engineer writes a short decision
record under `backend/docs/stage<NN>-<topic>.md` capturing any
non-trivial choice made during implementation that is not obvious from
the code or from this plan. Records that contradict this plan must be
brought to the architecture conversation before merge — the plan and
the architecture document are the agreed contract.
@@ -333,15 +333,42 @@ cannot guarantee.
| `runtime.image_pull_failed` | admin email | `game_id`, `image_ref` |
| `runtime.container_start_failed` | admin email | `game_id` |
| `runtime.start_config_invalid` | admin email | `game_id`, `reason` |
| `game.turn.ready` | push | `game_id`, `turn` |
| `game.paused` | push | `game_id`, `turn`, `reason` |
Admin-channel kinds (`runtime.*`) deliver email to
`BACKEND_NOTIFICATION_ADMIN_EMAIL`; when the variable is empty, those
routes land in `notification_routes` with `status='skipped'` and the
operator log line records the configuration miss.
`game.*` (`game.started`, `game.turn.ready`, `game.generation.failed`,
`game.finished`) and `mail.dead_lettered` are reserved kinds without a
producer in the catalog; adding them is an additive change to the
`game.turn.ready` and `game.paused` are emitted by
`lobby.Service.OnRuntimeSnapshot`
(`backend/internal/lobby/runtime_hooks.go`):
- `game.turn.ready` fires whenever the engine's `current_turn`
advances. Idempotency key `turn-ready:<game_id>:<turn>`, JSON
payload `{game_id, turn}`.
- `game.paused` fires whenever the same hook flips the game
`running → paused` because a runtime snapshot landed with
`engine_unreachable` / `generation_failed`. Idempotency key
`paused:<game_id>:<turn>`, JSON payload
`{game_id, turn, reason}` (reason carries the runtime status
that triggered the transition). The runtime scheduler
(`backend/internal/runtime/scheduler.go`) forwards the failing
snapshot through `Service.publishFailureSnapshot` so a single
failing tick reliably reaches lobby.
Both kinds target every active membership and route through the
push channel only — per-turn / per-pause email would be spam — so
the UI's signed `SubscribeEvents` stream
(`ui/frontend/src/api/events.svelte.ts`) is the sole delivery
path. The order tab consumes them via
`OrderDraftStore.resetForNewTurn` / `markPaused`
(`ui/docs/sync-protocol.md`).
The remaining `game.*` (`game.started`, `game.generation.failed`,
`game.finished`) and `mail.dead_lettered` are reserved kinds without
a producer in the catalog; adding them is an additive change to the
catalog vocabulary and the migration CHECK constraint.
Templates ship in English only; localisation belongs to clients that
@@ -13,10 +13,18 @@ import (
"os/signal"
"syscall"
// time/tzdata embeds the IANA timezone database so time.LoadLocation
// works in container images without /usr/share/zoneinfo (distroless
// static, alpine without the tzdata apk). The auth and user-settings
// flows validate the caller's `time_zone` via time.LoadLocation;
// without this import only "UTC" and fixed offsets would resolve.
_ "time/tzdata"
"galaxy/backend/internal/admin"
"galaxy/backend/internal/app"
"galaxy/backend/internal/auth"
"galaxy/backend/internal/config"
"galaxy/backend/internal/devsandbox"
"galaxy/backend/internal/dockerclient"
"galaxy/backend/internal/engineclient"
"galaxy/backend/internal/geo"
@@ -258,6 +266,29 @@ func run(ctx context.Context) (err error) {
)
runtimeGateway.svc = runtimeSvc
// Run a single reconciliation pass before the dev-sandbox
// bootstrap so any runtime row pointing at a vanished engine
// container (host reboot wiped /tmp/galaxy-game-state/<uuid>;
// `tools/local-dev`'s `prune-broken-engines` target reaped the
// husk) is already cascaded through `markRemoved` → lobby
// `cancelled` by the time the bootstrap walks the sandbox list.
// Without this pre-tick the bootstrap would reuse the
// soon-to-be-cancelled game and force the developer into a
// second `make up` cycle to land a healthy sandbox. Failures are
// non-fatal: the periodic ticker started later catches up, and
// the worst case degrades to the legacy two-cycle recovery.
if err := runtimeSvc.Reconciler().Tick(ctx); err != nil {
logger.Warn("pre-bootstrap reconciler tick failed", zap.Error(err))
}
if err := devsandbox.Bootstrap(ctx, devsandbox.Deps{
Users: userSvc,
Lobby: lobbySvc,
EngineVersions: engineVersionSvc,
}, cfg.DevSandbox, logger); err != nil {
return fmt.Errorf("dev sandbox bootstrap: %w", err)
}
notifStore := notification.NewStore(db)
notifSvc := notification.NewService(notification.Deps{
Store: notifStore,
@@ -76,9 +76,30 @@ func NewService(deps Deps) *Service {
// not a security primitive, so a constant key is acceptable.
copy(key, []byte("galaxy-backend-auth-fallback-key"))
}
if deps.Config.DevFixedCode != "" {
// Loud, repeated warning so a stray production deployment cannot
// claim the operator was unaware. The override is intended for
// `tools/local-dev/` and never reaches production binaries in
// normal operation.
deps.Logger.Warn("DEV-MODE: BACKEND_AUTH_DEV_FIXED_CODE is set; ConfirmEmailCode accepts the literal code in addition to the bcrypt-verified one. NEVER use in production.")
}
return &Service{deps: deps, emailHashKey: key}
}
// devFixedCodeMatches reports whether the dev-mode fixed-code override
// is configured and the submitted code matches it verbatim. The
// override is opt-in via `BACKEND_AUTH_DEV_FIXED_CODE`; production
// deployments leave the field empty and devFixedCodeMatches always
// returns false. See `tools/local-dev/README.md` for the full
// rationale.
func (s *Service) devFixedCodeMatches(code string) bool {
fixed := s.deps.Config.DevFixedCode
if fixed == "" {
return false
}
return code == fixed
}
// hashEmail returns a stable, hex-encoded HMAC-SHA256 prefix of email
// suitable for use in structured logs. The key is per-process so the
// same email maps to the same hash across log lines emitted by this
@@ -185,6 +185,35 @@ func authConfig() config.AuthConfig {
}
}
// buildServiceWithConfig wires every dependency around db using cfg as
// the auth configuration. Returns only the service — assertions on the
// dev-mode override path do not inspect the recording fakes.
func buildServiceWithConfig(t *testing.T, db *sql.DB, cfg config.AuthConfig) *auth.Service {
t.Helper()
store := auth.NewStore(db)
cache := auth.NewCache()
if err := cache.Warm(context.Background(), store); err != nil {
t.Fatalf("warm cache: %v", err)
}
userStore := user.NewStore(db)
userSvc := user.NewService(user.Deps{
Store: userStore,
Cache: user.NewCache(),
UserNameMaxRetries: 10,
Now: time.Now,
})
return auth.NewService(auth.Deps{
Store: store,
Cache: cache,
User: userSvc,
Geo: newStubGeo(),
Mail: newRecordingMailer(),
Push: newRecordingPush(),
Config: cfg,
Now: time.Now,
})
}
// buildService wires every dependency around db and returns the service
// plus the recording fakes for assertions.
func buildService(t *testing.T, db *sql.DB) (*auth.Service, *recordingMailer, *recordingPush, *stubGeo) {
@@ -412,6 +441,55 @@ func TestSendEmailCodeThrottleReusesChallenge(t *testing.T) {
}
}
func TestConfirmEmailCodeDevFixedCodeBypass(t *testing.T) {
db := startPostgres(t)
cfg := authConfig()
cfg.DevFixedCode = "999999"
svc := buildServiceWithConfig(t, db, cfg)
ctx := context.Background()
id, err := svc.SendEmailCode(ctx, "dev-bypass@example.test", "en", "", "")
if err != nil {
t.Fatalf("send: %v", err)
}
session, err := svc.ConfirmEmailCode(ctx, auth.ConfirmInputs{
ChallengeID: id,
Code: "999999",
ClientPublicKey: randomKey(t),
TimeZone: "UTC",
})
if err != nil {
t.Fatalf("ConfirmEmailCode with dev fixed code: %v", err)
}
if session.DeviceSessionID == uuid.Nil {
t.Fatalf("dev fixed code did not produce a session")
}
}
func TestConfirmEmailCodeDevFixedCodeStillRejectsWrong(t *testing.T) {
db := startPostgres(t)
cfg := authConfig()
cfg.DevFixedCode = "999999"
svc := buildServiceWithConfig(t, db, cfg)
ctx := context.Background()
id, err := svc.SendEmailCode(ctx, "dev-bypass-wrong@example.test", "en", "", "")
if err != nil {
t.Fatalf("send: %v", err)
}
_, err = svc.ConfirmEmailCode(ctx, auth.ConfirmInputs{
ChallengeID: id,
Code: "111111",
ClientPublicKey: randomKey(t),
TimeZone: "UTC",
})
if !errors.Is(err, auth.ErrCodeMismatch) {
t.Fatalf("ConfirmEmailCode with neither real nor dev code = %v, want ErrCodeMismatch", err)
}
}
func TestConfirmEmailCodeWrongCode(t *testing.T) {
db := startPostgres(t)
svc, mailer, _, _ := buildService(t, db)
@@ -171,6 +171,7 @@ func (s *Service) ConfirmEmailCode(ctx context.Context, in ConfirmInputs) (Sessi
return Session{}, ErrTooManyAttempts
}
if !s.devFixedCodeMatches(in.Code) {
if err := verifyCode(loaded.CodeHash, in.Code); err != nil {
if errors.Is(err, ErrCodeMismatch) {
s.deps.Logger.Info("auth challenge code mismatch",
@@ -181,6 +182,11 @@ func (s *Service) ConfirmEmailCode(ctx context.Context, in ConfirmInputs) (Sessi
}
return Session{}, err
}
} else {
s.deps.Logger.Warn("auth challenge accepted via dev-mode fixed code override",
zap.String("challenge_id", in.ChallengeID.String()),
)
}
// Re-check permanent_block after verifying the code. SendEmailCode
// guards against fresh challenges for already-blocked addresses;
@@ -71,6 +71,7 @@ const (
envAuthChallengeThrottleWindow = "BACKEND_AUTH_CHALLENGE_THROTTLE_WINDOW"
envAuthChallengeThrottleMax = "BACKEND_AUTH_CHALLENGE_THROTTLE_MAX"
envAuthUserNameMaxRetries = "BACKEND_AUTH_USERNAME_MAX_RETRIES"
envAuthDevFixedCode = "BACKEND_AUTH_DEV_FIXED_CODE"
envLobbySweeperInterval = "BACKEND_LOBBY_SWEEPER_INTERVAL"
envLobbyPendingRegistrationTTL = "BACKEND_LOBBY_PENDING_REGISTRATION_TTL"
@@ -94,6 +95,11 @@ const (
envNotificationAdminEmail = "BACKEND_NOTIFICATION_ADMIN_EMAIL"
envNotificationWorkerInterval = "BACKEND_NOTIFICATION_WORKER_INTERVAL"
envNotificationMaxAttempts = "BACKEND_NOTIFICATION_MAX_ATTEMPTS"
envDevSandboxEmail = "BACKEND_DEV_SANDBOX_EMAIL"
envDevSandboxEngineImage = "BACKEND_DEV_SANDBOX_ENGINE_IMAGE"
envDevSandboxEngineVersion = "BACKEND_DEV_SANDBOX_ENGINE_VERSION"
envDevSandboxPlayerCount = "BACKEND_DEV_SANDBOX_PLAYER_COUNT"
)
// Default values applied when an environment variable is absent.
@@ -156,6 +162,9 @@ const (
defaultNotificationWorkerInterval = 5 * time.Second
defaultNotificationMaxAttempts = 8
defaultDevSandboxEngineVersion = "0.1.0"
defaultDevSandboxPlayerCount = 20
)
// Allowed values for the closed-set string options.
@@ -192,12 +201,29 @@ type Config struct {
Engine EngineConfig
Runtime RuntimeConfig
Notification NotificationConfig
DevSandbox DevSandboxConfig
// FreshnessWindow mirrors the gateway freshness window and is used by the
// push server to bound the cursor TTL.
FreshnessWindow time.Duration
}
// DevSandboxConfig configures the boot-time bootstrap implemented in
// `backend/internal/devsandbox`. When Email is empty the bootstrap
// is a no-op, which is the production posture. When Email is set —
// from `BACKEND_DEV_SANDBOX_EMAIL` in the `tools/local-dev` stack —
// the bootstrap idempotently provisions a real user, the configured
// number of dummy participants, a private "Dev Sandbox" game, the
// matching memberships, and drives the lifecycle to `running`. The
// engine image and engine version refer to a row that the bootstrap
// also seeds in `engine_versions`.
type DevSandboxConfig struct {
Email string
EngineImage string
EngineVersion string
PlayerCount int
}
// LoggingConfig stores the parameters used by the structured logger.
type LoggingConfig struct {
// Level is the zap level name (e.g. "debug", "info", "warn", "error").
@@ -293,6 +319,16 @@ type AuthConfig struct {
ChallengeMaxAttempts int
ChallengeThrottle AuthChallengeThrottleConfig
UserNameMaxRetries int
// DevFixedCode, when non-empty, makes ConfirmEmailCode accept this
// literal as a valid code in addition to the bcrypt-verified one
// stored on the challenge row. The override is intended for the
// `tools/local-dev` stack so a developer can log in without
// reading codes out of Mailpit. The variable MUST stay unset in
// production: validation requires a six-digit decimal value, and
// the auth service emits a loud startup warning when it picks the
// override up.
DevFixedCode string
}
// AuthChallengeThrottleConfig bounds how many un-consumed, non-expired
@@ -458,6 +494,10 @@ func DefaultConfig() Config {
WorkerInterval: defaultNotificationWorkerInterval,
MaxAttempts: defaultNotificationMaxAttempts,
},
DevSandbox: DevSandboxConfig{
EngineVersion: defaultDevSandboxEngineVersion,
PlayerCount: defaultDevSandboxPlayerCount,
},
Runtime: RuntimeConfig{
WorkerPoolSize: defaultRuntimeWorkerPoolSize,
JobQueueSize: defaultRuntimeJobQueueSize,
@@ -566,6 +606,7 @@ func LoadFromEnv() (Config, error) {
if cfg.Auth.UserNameMaxRetries, err = loadInt(envAuthUserNameMaxRetries, cfg.Auth.UserNameMaxRetries); err != nil {
return Config{}, err
}
cfg.Auth.DevFixedCode = loadString(envAuthDevFixedCode, cfg.Auth.DevFixedCode)
if cfg.Lobby.SweeperInterval, err = loadDuration(envLobbySweeperInterval, cfg.Lobby.SweeperInterval); err != nil {
return Config{}, err
@@ -616,6 +657,13 @@ func LoadFromEnv() (Config, error) {
return Config{}, err
}
cfg.DevSandbox.Email = strings.TrimSpace(loadString(envDevSandboxEmail, cfg.DevSandbox.Email))
cfg.DevSandbox.EngineImage = strings.TrimSpace(loadString(envDevSandboxEngineImage, cfg.DevSandbox.EngineImage))
cfg.DevSandbox.EngineVersion = strings.TrimSpace(loadString(envDevSandboxEngineVersion, cfg.DevSandbox.EngineVersion))
if cfg.DevSandbox.PlayerCount, err = loadInt(envDevSandboxPlayerCount, cfg.DevSandbox.PlayerCount); err != nil {
return Config{}, err
}
if err := cfg.Validate(); err != nil {
return Config{}, err
}
@@ -745,6 +793,11 @@ func (c Config) Validate() error {
if c.Auth.UserNameMaxRetries <= 0 {
return fmt.Errorf("%s must be positive", envAuthUserNameMaxRetries)
}
if c.Auth.DevFixedCode != "" && !isDecimalString(c.Auth.DevFixedCode, 6) {
return fmt.Errorf("%s must be a six-digit decimal string when set", envAuthDevFixedCode)
}
if c.Lobby.SweeperInterval <= 0 {
return fmt.Errorf("%s must be positive", envLobbySweeperInterval)
@@ -806,9 +859,36 @@ func (c Config) Validate() error {
}
}
if email := strings.TrimSpace(c.DevSandbox.Email); email != "" {
if _, err := netmail.ParseAddress(email); err != nil {
return fmt.Errorf("%s must be a valid RFC 5322 address: %w", envDevSandboxEmail, err)
}
if strings.TrimSpace(c.DevSandbox.EngineImage) == "" {
return fmt.Errorf("%s must not be empty when %s is set", envDevSandboxEngineImage, envDevSandboxEmail)
}
if strings.TrimSpace(c.DevSandbox.EngineVersion) == "" {
return fmt.Errorf("%s must not be empty when %s is set", envDevSandboxEngineVersion, envDevSandboxEmail)
}
if c.DevSandbox.PlayerCount <= 0 {
return fmt.Errorf("%s must be positive when %s is set", envDevSandboxPlayerCount, envDevSandboxEmail)
}
}
return nil
}
func isDecimalString(value string, length int) bool {
if len(value) != length {
return false
}
for _, r := range value {
if r < '0' || r > '9' {
return false
}
}
return true
}
func loadString(name, fallback string) string {
raw, ok := os.LookupEnv(name)
if !ok {
@@ -77,6 +77,40 @@ func TestValidateRejectsUnknownTracesExporter(t *testing.T) {
}
}
func TestLoadFromEnvAcceptsDevFixedCode(t *testing.T) {
env := validEnv()
env["BACKEND_AUTH_DEV_FIXED_CODE"] = "123456"
setEnv(t, env)
cfg, err := LoadFromEnv()
if err != nil {
t.Fatalf("LoadFromEnv returned error: %v", err)
}
if cfg.Auth.DevFixedCode != "123456" {
t.Fatalf("Auth.DevFixedCode = %q, want \"123456\"", cfg.Auth.DevFixedCode)
}
}
func TestValidateRejectsDevFixedCodeWrongLength(t *testing.T) {
env := validEnv()
env["BACKEND_AUTH_DEV_FIXED_CODE"] = "12345"
setEnv(t, env)
if _, err := LoadFromEnv(); err == nil || !strings.Contains(err.Error(), "BACKEND_AUTH_DEV_FIXED_CODE") {
t.Fatalf("expected DEV fixed-code length error, got %v", err)
}
}
func TestValidateRejectsDevFixedCodeNonDecimal(t *testing.T) {
env := validEnv()
env["BACKEND_AUTH_DEV_FIXED_CODE"] = "abcdef"
setEnv(t, env)
if _, err := LoadFromEnv(); err == nil || !strings.Contains(err.Error(), "BACKEND_AUTH_DEV_FIXED_CODE") {
t.Fatalf("expected DEV fixed-code decimal error, got %v", err)
}
}
func TestValidateRejectsPrometheusWithoutAddr(t *testing.T) {
cfg := DefaultConfig()
cfg.Postgres.DSN = "postgres://x:y@127.0.0.1/galaxy"
@@ -0,0 +1,287 @@
// Package devsandbox provisions a ready-to-play game on backend boot
// for the `tools/local-dev` stack.
//
// Bootstrap is invoked from `backend/cmd/backend/main.go` after the
// admin bootstrap and before the HTTP listener starts. It reads
// `cfg.DevSandbox`; when `Email` is empty (the production posture)
// the function logs "skipped" and returns nil. When set, it
// idempotently:
//
// 1. registers the configured engine version and image;
// 2. find-or-creates the real dev user with the configured email;
// 3. find-or-creates `cfg.PlayerCount - 1` deterministic dummy
// users so the engine's minimum-players constraint is met;
// 4. find-or-creates a private "Dev Sandbox" game owned by the
// real user with min/max_players = cfg.PlayerCount and a
// year-out turn schedule (effectively frozen at turn 1);
// 5. inserts memberships for all participants bypassing the
// application/approval flow;
// 6. drives the lifecycle to `running` (or as far as possible if
// the runtime is busy).
//
// The function is a no-op on subsequent boots once the game is
// running; partial states from earlier crashes are recovered.
package devsandbox
import (
"context"
"errors"
"fmt"
"time"
"galaxy/backend/internal/config"
"galaxy/backend/internal/lobby"
"galaxy/backend/internal/runtime"
"github.com/google/uuid"
"go.uber.org/zap"
)
// SandboxGameName is the display name used to identify the
// auto-provisioned game on subsequent reboots. The combination of
// game_name and owner_user_id is unique enough in practice — only
// the dev sandbox bootstrap creates a game owned by the configured
// real user with this exact name.
const SandboxGameName = "Dev Sandbox"
// SandboxTurnSchedule keeps the game on turn 1 by scheduling the
// next turn a year out. The runtime scheduler still parses this and
// will tick once a year, an interval long enough never to
// interfere with solo UI development.
const SandboxTurnSchedule = "0 0 1 1 *"
// UserEnsurer matches `auth.UserEnsurer`. We define a local
// interface to avoid importing the auth package and circular
// dependencies — the production wiring passes the same `*user.Service`
// instance used by auth.
type UserEnsurer interface {
EnsureByEmail(ctx context.Context, email, preferredLanguage, timeZone, declaredCountry string) (uuid.UUID, error)
}
// Deps aggregates the collaborators Bootstrap needs.
type Deps struct {
Users UserEnsurer
Lobby *lobby.Service
EngineVersions *runtime.EngineVersionService
}
// Bootstrap runs the provisioning flow described in the
// package doc comment. Errors are returned to the caller; the boot
// path in `cmd/backend/main.go` aborts startup if Bootstrap fails so
// a misconfigured dev environment surfaces immediately rather than
// silently leaving the lobby empty.
func Bootstrap(ctx context.Context, deps Deps, cfg config.DevSandboxConfig, logger *zap.Logger) error {
if logger == nil {
logger = zap.NewNop()
}
logger = logger.Named("dev_sandbox")
if cfg.Email == "" {
logger.Info("skipped (no email)")
return nil
}
if deps.Users == nil || deps.Lobby == nil || deps.EngineVersions == nil {
return errors.New("dev_sandbox: deps.Users, deps.Lobby and deps.EngineVersions are required")
}
if cfg.PlayerCount <= 0 {
return fmt.Errorf("dev_sandbox: PlayerCount must be positive, got %d", cfg.PlayerCount)
}
if err := ensureEngineVersion(ctx, deps.EngineVersions, cfg, logger); err != nil {
return err
}
realID, err := deps.Users.EnsureByEmail(ctx, cfg.Email, "en", "UTC", "")
if err != nil {
return fmt.Errorf("dev_sandbox: ensure real user: %w", err)
}
dummyIDs := make([]uuid.UUID, 0, cfg.PlayerCount-1)
for i := 1; i < cfg.PlayerCount; i++ {
email := fmt.Sprintf("dev-dummy-%02d@local.test", i)
id, err := deps.Users.EnsureByEmail(ctx, email, "en", "UTC", "")
if err != nil {
return fmt.Errorf("dev_sandbox: ensure dummy %d: %w", i, err)
}
dummyIDs = append(dummyIDs, id)
}
if err := purgeTerminalSandboxGames(ctx, deps.Lobby, realID, logger); err != nil {
return err
}
game, err := findOrCreateSandboxGame(ctx, deps.Lobby, realID, cfg)
if err != nil {
return err
}
game, err = ensureMembershipsAndDrive(ctx, deps.Lobby, game, realID, dummyIDs, logger)
if err != nil {
return err
}
logger.Info("bootstrap complete",
zap.String("user_id", realID.String()),
zap.String("game_id", game.GameID.String()),
zap.String("status", game.Status),
)
return nil
}
func ensureEngineVersion(ctx context.Context, svc *runtime.EngineVersionService, cfg config.DevSandboxConfig, logger *zap.Logger) error {
_, err := svc.Register(ctx, runtime.RegisterInput{
Version: cfg.EngineVersion,
ImageRef: cfg.EngineImage,
})
switch {
case err == nil:
logger.Info("engine version registered",
zap.String("version", cfg.EngineVersion),
zap.String("image", cfg.EngineImage),
)
return nil
case errors.Is(err, runtime.ErrEngineVersionTaken):
logger.Debug("engine version already registered",
zap.String("version", cfg.EngineVersion),
)
return nil
default:
return fmt.Errorf("dev_sandbox: register engine version: %w", err)
}
}
// terminalSandboxStatus reports whether a sandbox game has reached a
// state from which it can no longer be driven back to running. We
// treat such games as "absent" so the next bootstrap creates a fresh
// one rather than handing the developer a dead lobby tile.
func terminalSandboxStatus(status string) bool {
switch status {
case lobby.GameStatusCancelled, lobby.GameStatusFinished, lobby.GameStatusStartFailed:
return true
}
return false
}
// purgeTerminalSandboxGames deletes every previous "Dev Sandbox" game
// the dev user owns that has reached a terminal state
// (cancelled / finished / start_failed). The cascade declared in
// `00001_init.sql` removes the matching memberships, applications,
// invites, runtime records, and player mappings in the same write,
// so the developer's lobby never piles up dead tiles between
// `make rebuild` cycles. Non-terminal games are left untouched —
// a `running` sandbox from a previous boot is the happy path.
func purgeTerminalSandboxGames(ctx context.Context, svc *lobby.Service, ownerID uuid.UUID, logger *zap.Logger) error {
games, err := svc.ListMyGames(ctx, ownerID)
if err != nil {
return fmt.Errorf("dev_sandbox: list my games: %w", err)
}
for _, g := range games {
if g.GameName != SandboxGameName || g.OwnerUserID == nil || *g.OwnerUserID != ownerID {
continue
}
if !terminalSandboxStatus(g.Status) {
continue
}
if err := svc.DeleteGame(ctx, g.GameID); err != nil {
return fmt.Errorf("dev_sandbox: delete terminal sandbox %s: %w", g.GameID, err)
}
logger.Info("purged terminal sandbox game",
zap.String("game_id", g.GameID.String()),
zap.String("status", g.Status),
)
}
return nil
}
func findOrCreateSandboxGame(ctx context.Context, svc *lobby.Service, ownerID uuid.UUID, cfg config.DevSandboxConfig) (lobby.GameRecord, error) {
games, err := svc.ListMyGames(ctx, ownerID)
if err != nil {
return lobby.GameRecord{}, fmt.Errorf("dev_sandbox: list my games: %w", err)
}
for _, g := range games {
if g.GameName != SandboxGameName || g.OwnerUserID == nil || *g.OwnerUserID != ownerID {
continue
}
// `purgeTerminalSandboxGames` ran before us, so any sandbox
// game still in the list is either a live one we should
// reuse or a transient state we can drive forward.
return g, nil
}
rec, err := svc.CreateGame(ctx, lobby.CreateGameInput{
OwnerUserID: &ownerID,
Visibility: lobby.VisibilityPrivate,
GameName: SandboxGameName,
Description: "Auto-provisioned by backend/internal/devsandbox for solo UI development.",
MinPlayers: int32(cfg.PlayerCount),
MaxPlayers: int32(cfg.PlayerCount),
StartGapHours: 0,
StartGapPlayers: 0,
EnrollmentEndsAt: time.Now().Add(365 * 24 * time.Hour),
TurnSchedule: SandboxTurnSchedule,
TargetEngineVersion: cfg.EngineVersion,
})
if err != nil {
return lobby.GameRecord{}, fmt.Errorf("dev_sandbox: create game: %w", err)
}
return rec, nil
}
func ensureMembershipsAndDrive(ctx context.Context, svc *lobby.Service, game lobby.GameRecord, realID uuid.UUID, dummyIDs []uuid.UUID, logger *zap.Logger) (lobby.GameRecord, error) {
caller := realID
if game.Status == lobby.GameStatusDraft {
next, err := svc.OpenEnrollment(ctx, &caller, false, game.GameID)
if err != nil {
return game, fmt.Errorf("dev_sandbox: open enrollment: %w", err)
}
game = next
}
if game.Status == lobby.GameStatusEnrollmentOpen {
users := append([]uuid.UUID{realID}, dummyIDs...)
for i, uid := range users {
raceName := fmt.Sprintf("Sandbox-%02d", i+1)
if _, err := svc.InsertMembershipDirect(ctx, lobby.InsertMembershipDirectInput{
GameID: game.GameID,
UserID: uid,
RaceName: raceName,
}); err != nil {
return game, fmt.Errorf("dev_sandbox: insert membership %d: %w", i+1, err)
}
}
logger.Info("memberships ensured",
zap.Int("count", len(users)),
zap.String("game_id", game.GameID.String()),
)
next, err := svc.ReadyToStart(ctx, &caller, false, game.GameID)
if err != nil {
return game, fmt.Errorf("dev_sandbox: ready to start: %w", err)
}
game = next
}
if game.Status == lobby.GameStatusReadyToStart {
next, err := svc.Start(ctx, &caller, false, game.GameID)
if err != nil {
return game, fmt.Errorf("dev_sandbox: start: %w", err)
}
game = next
}
if game.Status == lobby.GameStatusStartFailed {
next, err := svc.RetryStart(ctx, &caller, false, game.GameID)
if err != nil {
logger.Warn("retry start failed", zap.Error(err))
return game, nil
}
game = next
if game.Status == lobby.GameStatusReadyToStart {
next, err := svc.Start(ctx, &caller, false, game.GameID)
if err != nil {
return game, fmt.Errorf("dev_sandbox: start after retry: %w", err)
}
game = next
}
}
return game, nil
}
@@ -0,0 +1,106 @@
package devsandbox
import (
"context"
"errors"
"testing"
"galaxy/backend/internal/config"
"github.com/google/uuid"
"go.uber.org/zap"
)
// TestBootstrapSkippedWhenEmailEmpty exercises the no-op branch: with
// the production posture (Email == "") Bootstrap must return without
// touching any dependency. The fact that Users/Lobby/EngineVersions
// are nil here doubles as a check that the early-return runs first.
func TestBootstrapSkippedWhenEmailEmpty(t *testing.T) {
err := Bootstrap(
context.Background(),
Deps{},
config.DevSandboxConfig{},
zap.NewNop(),
)
if err != nil {
t.Fatalf("expected nil error on empty email, got: %v", err)
}
}
// TestBootstrapRejectsZeroPlayerCount confirms the up-front checks
// short-circuit the flow before any DB call when PlayerCount is
// non-positive but Email is set. With Lobby and EngineVersions nil
// the required-deps check fires first; either way the error path is
// fast and never dereferences a nil service.
func TestBootstrapRejectsZeroPlayerCount(t *testing.T) {
err := Bootstrap(
context.Background(),
Deps{Users: stubEnsurer{}, Lobby: nil, EngineVersions: nil},
config.DevSandboxConfig{
Email: "dev@local.test",
EngineImage: "galaxy-engine:local-dev",
EngineVersion: "0.0.0-local-dev",
PlayerCount: 0,
},
zap.NewNop(),
)
if err == nil {
t.Fatal("expected error on zero PlayerCount, got nil")
}
}
// TestBootstrapRejectsMissingDeps checks that a misconfigured wiring
// (Email set but one of the required services nil) fails fast rather
// than panicking when the bootstrap reaches its first service call.
func TestBootstrapRejectsMissingDeps(t *testing.T) {
err := Bootstrap(
context.Background(),
Deps{Users: stubEnsurer{}, Lobby: nil, EngineVersions: nil},
config.DevSandboxConfig{
Email: "dev@local.test",
EngineImage: "galaxy-engine:local-dev",
EngineVersion: "0.0.0-local-dev",
PlayerCount: 20,
},
zap.NewNop(),
)
if err == nil {
t.Fatal("expected error on missing deps, got nil")
}
if !errors.Is(err, errMissingDepsSentinel) && err.Error() == "" {
// The exact wording is not part of the contract; this branch
// only asserts the error is non-nil and human-readable.
t.Fatalf("error has empty message: %v", err)
}
}
// errMissingDepsSentinel exists so the assertion above can compile;
// the real error is constructed via errors.New inside Bootstrap and
// is intentionally not exported. The test only needs to confirm the
// returned error has a message.
var errMissingDepsSentinel = errors.New("sentinel")
// TestTerminalSandboxStatus pins the contract that decides whether a
// previously created sandbox game gets purged on the next boot.
// Terminal states are deleted (cascade-style) so the developer's
// lobby never piles up dead tiles between `make rebuild` cycles.
func TestTerminalSandboxStatus(t *testing.T) {
terminal := []string{"cancelled", "finished", "start_failed"}
live := []string{"draft", "enrollment_open", "ready_to_start", "starting", "running", "paused"}
for _, status := range terminal {
if !terminalSandboxStatus(status) {
t.Errorf("expected %q to be terminal", status)
}
}
for _, status := range live {
if terminalSandboxStatus(status) {
t.Errorf("expected %q to be non-terminal", status)
}
}
}
type stubEnsurer struct{}
func (stubEnsurer) EnsureByEmail(_ context.Context, _, _, _, _ string) (uuid.UUID, error) {
return uuid.UUID{}, nil
}
@@ -26,6 +26,7 @@ const (
pathPlayerCommand = "/api/v1/command"
pathPlayerOrder = "/api/v1/order"
pathPlayerReport = "/api/v1/report"
pathPlayerBattle = "/api/v1/battle"
pathHealthz = "/healthz"
)
@@ -196,6 +197,46 @@ func (c *Client) PutOrders(ctx context.Context, baseURL string, payload json.Raw
return c.forwardPlayerWrite(ctx, baseURL, pathPlayerOrder, payload, "engine order")
}
// GetOrder calls `GET /api/v1/order?player=<raceName>&turn=<turn>` and
// returns the engine response body verbatim. A `204 No Content` body
// is signalled by `(nil, http.StatusNoContent, nil)` so callers can
// surface "no stored order" without parsing the empty payload.
// Other non-`200` statuses come back wrapped in `ErrEngineValidation`
// (4xx) or `ErrEngineUnreachable` (everything else), matching the
// existing player-write conventions.
func (c *Client) GetOrder(ctx context.Context, baseURL, raceName string, turn int) (json.RawMessage, int, error) {
if err := validateBaseURL(baseURL); err != nil {
return nil, 0, err
}
if strings.TrimSpace(raceName) == "" {
return nil, 0, errors.New("engineclient order get: race name must not be empty")
}
if turn < 0 {
return nil, 0, fmt.Errorf("engineclient order get: turn must not be negative, got %d", turn)
}
values := url.Values{}
values.Set("player", raceName)
values.Set("turn", strconv.Itoa(turn))
target := baseURL + pathPlayerOrder + "?" + values.Encode()
body, status, doErr := c.doRequest(ctx, http.MethodGet, target, nil, c.probeTimeout)
if doErr != nil {
return nil, 0, fmt.Errorf("%w: engine order get: %w", ErrEngineUnreachable, doErr)
}
switch status {
case http.StatusOK:
if len(body) == 0 {
return nil, status, fmt.Errorf("%w: engine order get: empty response body", ErrEngineProtocolViolation)
}
return json.RawMessage(body), status, nil
case http.StatusNoContent:
return nil, status, nil
case http.StatusBadRequest, http.StatusConflict:
return json.RawMessage(body), status, fmt.Errorf("%w: engine order get: %s", ErrEngineValidation, summariseEngineError(body, status))
default:
return nil, status, fmt.Errorf("%w: engine order get: %s", ErrEngineUnreachable, summariseEngineError(body, status))
}
}
// GetReport calls `GET /api/v1/report?player=<raceName>&turn=<turn>`
// and returns the engine response body verbatim.
func (c *Client) GetReport(ctx context.Context, baseURL, raceName string, turn int) (json.RawMessage, error) {
@@ -229,6 +270,41 @@ func (c *Client) GetReport(ctx context.Context, baseURL, raceName string, turn i
}
}
// FetchBattle calls `GET /api/v1/battle/<turn>/<battleID>` and returns
// the engine response body verbatim alongside the engine status code.
// 200 carries the BattleReport JSON; 404 means the battle is unknown
// and the body may be empty. Other 4xx statuses come back wrapped in
// ErrEngineValidation, everything else in ErrEngineUnreachable.
func (c *Client) FetchBattle(ctx context.Context, baseURL string, turn int, battleID string) (json.RawMessage, int, error) {
if err := validateBaseURL(baseURL); err != nil {
return nil, 0, err
}
if turn < 0 {
return nil, 0, fmt.Errorf("engineclient battle get: turn must not be negative, got %d", turn)
}
if strings.TrimSpace(battleID) == "" {
return nil, 0, errors.New("engineclient battle get: battle id must not be empty")
}
target := baseURL + pathPlayerBattle + "/" + strconv.Itoa(turn) + "/" + url.PathEscape(battleID)
body, status, doErr := c.doRequest(ctx, http.MethodGet, target, nil, c.probeTimeout)
if doErr != nil {
return nil, 0, fmt.Errorf("%w: engine battle get: %w", ErrEngineUnreachable, doErr)
}
switch status {
case http.StatusOK:
if len(body) == 0 {
return nil, status, fmt.Errorf("%w: engine battle get: empty response body", ErrEngineProtocolViolation)
}
return json.RawMessage(body), status, nil
case http.StatusNotFound:
return nil, status, nil
case http.StatusBadRequest, http.StatusConflict:
return json.RawMessage(body), status, fmt.Errorf("%w: engine battle get: %s", ErrEngineValidation, summariseEngineError(body, status))
default:
return nil, status, fmt.Errorf("%w: engine battle get: %s", ErrEngineUnreachable, summariseEngineError(body, status))
}
}
// Healthz calls `GET /healthz`. Returns nil on 2xx.
func (c *Client) Healthz(ctx context.Context, baseURL string) error {
if err := validateBaseURL(baseURL); err != nil {
@@ -195,6 +195,125 @@ func TestClientReportsForwardsQuery(t *testing.T) {
}
}
func TestClientGetOrderForwardsQuery(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != pathPlayerOrder {
t.Fatalf("unexpected path: %s", r.URL.Path)
}
if r.Method != http.MethodGet {
t.Fatalf("unexpected method: %s", r.Method)
}
if r.URL.Query().Get("player") != "alpha" {
t.Fatalf("player = %q", r.URL.Query().Get("player"))
}
if r.URL.Query().Get("turn") != "3" {
t.Fatalf("turn = %q", r.URL.Query().Get("turn"))
}
_, _ = w.Write([]byte(`{"game_id":"abc","updatedAt":99,"cmd":[]}`))
}))
t.Cleanup(srv.Close)
cli := newTestClient(t, srv)
body, status, err := cli.GetOrder(context.Background(), srv.URL, "alpha", 3)
if err != nil {
t.Fatalf("GetOrder: %v", err)
}
if status != http.StatusOK {
t.Fatalf("status = %d", status)
}
if !strings.Contains(string(body), `"updatedAt":99`) {
t.Fatalf("body = %s", body)
}
}
func TestClientGetOrderNoContent(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNoContent)
}))
t.Cleanup(srv.Close)
cli := newTestClient(t, srv)
body, status, err := cli.GetOrder(context.Background(), srv.URL, "alpha", 3)
if err != nil {
t.Fatalf("GetOrder: %v", err)
}
if status != http.StatusNoContent {
t.Fatalf("status = %d", status)
}
if body != nil {
t.Fatalf("expected nil body on 204, got %s", body)
}
}
func TestClientGetOrderRejectsBadInput(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
t.Fatal("server must not be hit on bad input")
}))
t.Cleanup(srv.Close)
cli := newTestClient(t, srv)
if _, _, err := cli.GetOrder(context.Background(), "http://example.com", "", 0); err == nil {
t.Fatal("expected error on empty race name")
}
if _, _, err := cli.GetOrder(context.Background(), "http://example.com", "alpha", -1); err == nil {
t.Fatal("expected error on negative turn")
}
}
func TestClientFetchBattleForwardsPath(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
t.Fatalf("unexpected method: %s", r.Method)
}
want := pathPlayerBattle + "/3/" + "11111111-1111-1111-1111-111111111111"
if r.URL.Path != want {
t.Fatalf("path = %q, want %q", r.URL.Path, want)
}
_, _ = w.Write([]byte(`{"id":"11111111-1111-1111-1111-111111111111","planet":4}`))
}))
t.Cleanup(srv.Close)
cli := newTestClient(t, srv)
body, status, err := cli.FetchBattle(context.Background(), srv.URL, 3, "11111111-1111-1111-1111-111111111111")
if err != nil {
t.Fatalf("FetchBattle: %v", err)
}
if status != http.StatusOK {
t.Fatalf("status = %d", status)
}
if !strings.Contains(string(body), `"planet":4`) {
t.Fatalf("body = %s", body)
}
}
func TestClientFetchBattleNotFound(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotFound)
}))
t.Cleanup(srv.Close)
cli := newTestClient(t, srv)
body, status, err := cli.FetchBattle(context.Background(), srv.URL, 0, "11111111-1111-1111-1111-111111111111")
if err != nil {
t.Fatalf("FetchBattle: %v", err)
}
if status != http.StatusNotFound {
t.Fatalf("status = %d", status)
}
if body != nil {
t.Fatalf("expected nil body on 404, got %s", body)
}
}
func TestClientFetchBattleRejectsBadInput(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
t.Fatal("server must not be hit on bad input")
}))
t.Cleanup(srv.Close)
cli := newTestClient(t, srv)
if _, _, err := cli.FetchBattle(context.Background(), "http://example.com", -1, "11111111-1111-1111-1111-111111111111"); err == nil {
t.Fatal("expected error on negative turn")
}
if _, _, err := cli.FetchBattle(context.Background(), "http://example.com", 0, ""); err == nil {
t.Fatal("expected error on empty battle id")
}
}
func TestClientHealthzSuccess(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != pathHealthz {
@@ -233,6 +233,24 @@ func (s *Service) ListMyGames(ctx context.Context, userID uuid.UUID) ([]GameReco
return s.deps.Store.ListMyGames(ctx, userID)
}
// DeleteGame removes the game and every referencing row (memberships,
// applications, invites, runtime_records, player_mappings) via the
// `ON DELETE CASCADE` constraints declared in `00001_init.sql`.
// Idempotent: returns nil when no game matches.
//
// Phase 14 introduces this method for the dev-sandbox bootstrap so a
// terminal "Dev Sandbox" tile from a previous local-dev session can
// be scrubbed before a fresh game spawns. Production callers must
// stay on the regular cancel / finish lifecycle — `DeleteGame` is
// destructive and bypasses the cascade-notification machinery.
func (s *Service) DeleteGame(ctx context.Context, gameID uuid.UUID) error {
if err := s.deps.Store.DeleteGame(ctx, gameID); err != nil {
return err
}
s.deps.Cache.RemoveGame(gameID)
return nil
}
// State-machine transition handlers below take the same shape: load the
// game (cache or store), check owner, validate the current status, run
// the transition write, refresh the cache, optionally tell the runtime
@@ -109,6 +109,8 @@ const (
NotificationLobbyRaceNameRegistered = "lobby.race_name.registered"
NotificationLobbyRaceNamePending = "lobby.race_name.pending"
NotificationLobbyRaceNameExpired = "lobby.race_name.expired"
NotificationGameTurnReady = "game.turn.ready"
NotificationGamePaused = "game.paused"
)
// Deps aggregates every collaborator the lobby Service depends on.
@@ -244,6 +244,70 @@ func TestEndToEndPrivateGameFlow(t *testing.T) {
}
}
// TestDeleteGameCascadesEverything pins the contract the dev-sandbox
// bootstrap relies on: removing a game wipes every referencing row
// (memberships, applications, invites, runtime_records,
// player_mappings) in a single SQL statement. Before this was wired,
// the developer's lobby piled up cancelled tiles between
// `make rebuild` cycles; with it, every boot starts from a clean
// slate.
func TestDeleteGameCascadesEverything(t *testing.T) {
db := startPostgres(t)
now := time.Now().UTC()
clock := func() time.Time { return now }
svc := newServiceForTest(t, db, clock, 5)
owner := uuid.New()
seedAccount(t, db, owner)
game, err := svc.CreateGame(context.Background(), lobby.CreateGameInput{
OwnerUserID: &owner,
Visibility: lobby.VisibilityPrivate,
GameName: "Doomed",
MinPlayers: 1,
MaxPlayers: 4,
StartGapHours: 1,
StartGapPlayers: 1,
EnrollmentEndsAt: now.Add(time.Hour),
TurnSchedule: "0 0 * * *",
TargetEngineVersion: "1.0.0",
})
if err != nil {
t.Fatalf("create game: %v", err)
}
if _, err := svc.OpenEnrollment(context.Background(), &owner, false, game.GameID); err != nil {
t.Fatalf("open enrollment: %v", err)
}
if _, err := svc.InsertMembershipDirect(context.Background(), lobby.InsertMembershipDirectInput{
GameID: game.GameID,
UserID: owner,
RaceName: "Owner",
}); err != nil {
t.Fatalf("insert membership: %v", err)
}
if err := svc.DeleteGame(context.Background(), game.GameID); err != nil {
t.Fatalf("delete game: %v", err)
}
// Verify cascade: the game must be gone, ListMyGames must drop
// it, and re-deleting the same id is a no-op.
if _, err := svc.GetGame(context.Background(), game.GameID); !errors.Is(err, lobby.ErrNotFound) {
t.Fatalf("get after delete: err = %v, want ErrNotFound", err)
}
games, err := svc.ListMyGames(context.Background(), owner)
if err != nil {
t.Fatalf("list my games: %v", err)
}
for _, g := range games {
if g.GameID == game.GameID {
t.Fatalf("ListMyGames still lists the deleted game")
}
}
if err := svc.DeleteGame(context.Background(), game.GameID); err != nil {
t.Fatalf("delete idempotent: %v", err)
}
}
func TestEndToEndPublicGameApplicationApproval(t *testing.T) {
db := startPostgres(t)
now := time.Now().UTC()
@@ -0,0 +1,96 @@
package lobby
import (
"context"
"fmt"
"github.com/google/uuid"
)
// InsertMembershipDirectInput is the parameter struct for
// Service.InsertMembershipDirect.
type InsertMembershipDirectInput struct {
GameID uuid.UUID
UserID uuid.UUID
RaceName string
}
// InsertMembershipDirect grants a membership to userID inside gameID
// bypassing the application/approval flow. It performs the same DB
// writes as ApproveApplication: the per-game race-name reservation
// row plus the membership row, and refreshes the in-memory caches.
//
// The method is intended for boot-time provisioning by
// `backend/internal/devsandbox` and similar trusted callers. It is
// not exposed through any HTTP handler. The caller must guarantee
// game.Status == GameStatusEnrollmentOpen — the function returns
// ErrConflict otherwise — and that the race-name policy and
// canonical-key invariants are honoured (the implementation reuses
// the lobby's own Policy and assertRaceNameAvailable so a duplicate
// or unsuitable name still fails).
//
// Idempotency: if a membership for (GameID, UserID) already exists
// the function returns the existing row without modifying state.
// This makes the helper safe to call on every backend boot from
// devsandbox.Bootstrap.
func (s *Service) InsertMembershipDirect(ctx context.Context, in InsertMembershipDirectInput) (Membership, error) {
displayName, err := ValidateDisplayName(in.RaceName)
if err != nil {
return Membership{}, err
}
game, err := s.GetGame(ctx, in.GameID)
if err != nil {
return Membership{}, err
}
if game.Status != GameStatusEnrollmentOpen {
return Membership{}, fmt.Errorf("%w: game status is %q, want enrollment_open", ErrConflict, game.Status)
}
canonical, err := s.deps.Policy.Canonical(displayName)
if err != nil {
return Membership{}, err
}
existing, err := s.deps.Store.ListMembershipsForGame(ctx, in.GameID)
if err != nil {
return Membership{}, err
}
for _, m := range existing {
if m.UserID == in.UserID && m.Status == MembershipStatusActive {
return m, nil
}
}
if err := s.assertRaceNameAvailable(ctx, canonical, in.UserID, in.GameID); err != nil {
return Membership{}, err
}
now := s.deps.Now().UTC()
if _, err := s.deps.Store.InsertRaceName(ctx, raceNameInsert{
Name: displayName,
Canonical: canonical,
Status: RaceNameStatusReservation,
OwnerUserID: in.UserID,
GameID: in.GameID,
ReservedAt: &now,
}); err != nil {
return Membership{}, err
}
membership, err := s.deps.Store.InsertMembership(ctx, membershipInsert{
MembershipID: uuid.New(),
GameID: in.GameID,
UserID: in.UserID,
RaceName: displayName,
CanonicalKey: canonical,
})
if err != nil {
_ = s.deps.Store.DeleteRaceName(ctx, canonical, in.GameID)
return Membership{}, err
}
s.deps.Cache.PutMembership(membership)
s.deps.Cache.PutRaceName(RaceNameEntry{
Name: displayName,
Canonical: canonical,
Status: RaceNameStatusReservation,
OwnerUserID: in.UserID,
GameID: in.GameID,
ReservedAt: &now,
})
return membership, nil
}
@@ -30,12 +30,14 @@ func (s *Service) OnRuntimeSnapshot(ctx context.Context, gameID uuid.UUID, snaps
if err != nil {
return err
}
prevTurn := game.RuntimeSnapshot.CurrentTurn
merged := mergeRuntimeSnapshot(game.RuntimeSnapshot, snapshot)
now := s.deps.Now().UTC()
updated, err := s.deps.Store.UpdateGameRuntimeSnapshot(ctx, gameID, merged, now)
if err != nil {
return err
}
transitionedToPaused := false
if next, transition := nextStatusFromSnapshot(updated.Status, snapshot); transition {
switch next {
case GameStatusFinished:
@@ -52,12 +54,115 @@ func (s *Service) OnRuntimeSnapshot(ctx context.Context, gameID uuid.UUID, snaps
return err
}
updated = rec
if next == GameStatusPaused {
transitionedToPaused = true
}
}
}
s.deps.Cache.PutGame(updated)
if merged.CurrentTurn > prevTurn {
s.publishTurnReady(ctx, gameID, merged.CurrentTurn)
}
if transitionedToPaused {
s.publishGamePaused(ctx, gameID, merged.CurrentTurn, snapshot.RuntimeStatus)
}
return nil
}
// publishTurnReady fans out a `game.turn.ready` notification to every
// active member of the game once the engine reports a new
// `current_turn`. The intent is best-effort: a publisher failure is
// logged at warn level (matching the rest of OnRuntimeSnapshot's
// notification calls) and does not abort the snapshot bookkeeping.
// Idempotency is anchored on (game_id, turn), so a duplicate snapshot
// for the same turn collapses into a single notification at the
// notification.Submit boundary.
func (s *Service) publishTurnReady(ctx context.Context, gameID uuid.UUID, turn int32) {
memberships, err := s.deps.Store.ListMembershipsForGame(ctx, gameID)
if err != nil {
s.deps.Logger.Warn("turn-ready notification: list memberships failed",
zap.String("game_id", gameID.String()),
zap.Int32("turn", turn),
zap.Error(err))
return
}
recipients := make([]uuid.UUID, 0, len(memberships))
for _, m := range memberships {
if m.Status != MembershipStatusActive {
continue
}
recipients = append(recipients, m.UserID)
}
if len(recipients) == 0 {
return
}
intent := LobbyNotification{
Kind: NotificationGameTurnReady,
IdempotencyKey: fmt.Sprintf("turn-ready:%s:%d", gameID, turn),
Recipients: recipients,
Payload: map[string]any{
"game_id": gameID.String(),
"turn": turn,
},
}
if pubErr := s.deps.Notification.PublishLobbyEvent(ctx, intent); pubErr != nil {
s.deps.Logger.Warn("turn-ready notification failed",
zap.String("game_id", gameID.String()),
zap.Int32("turn", turn),
zap.Error(pubErr))
}
}
// publishGamePaused fans out a `game.paused` notification to every
// active member of the game when the lobby flips the game to
// `paused` in reaction to a runtime snapshot (typically a failed
// turn generation). The intent is best-effort: a publisher failure
// is logged at warn level and does not abort the snapshot
// bookkeeping. Idempotency is anchored on (game_id, turn) so a
// repeated `generation_failed` snapshot for the same turn collapses
// into a single notification at the notification.Submit boundary.
//
// reason carries the raw runtime status that triggered the pause
// (`engine_unreachable` / `generation_failed`); the UI displays a
// status-agnostic banner today but the payload is preserved so a
// future revision of the order tab can differentiate.
func (s *Service) publishGamePaused(ctx context.Context, gameID uuid.UUID, turn int32, reason string) {
memberships, err := s.deps.Store.ListMembershipsForGame(ctx, gameID)
if err != nil {
s.deps.Logger.Warn("game-paused notification: list memberships failed",
zap.String("game_id", gameID.String()),
zap.Int32("turn", turn),
zap.Error(err))
return
}
recipients := make([]uuid.UUID, 0, len(memberships))
for _, m := range memberships {
if m.Status != MembershipStatusActive {
continue
}
recipients = append(recipients, m.UserID)
}
if len(recipients) == 0 {
return
}
intent := LobbyNotification{
Kind: NotificationGamePaused,
IdempotencyKey: fmt.Sprintf("paused:%s:%d", gameID, turn),
Recipients: recipients,
Payload: map[string]any{
"game_id": gameID.String(),
"turn": turn,
"reason": reason,
},
}
if pubErr := s.deps.Notification.PublishLobbyEvent(ctx, intent); pubErr != nil {
s.deps.Logger.Warn("game-paused notification failed",
zap.String("game_id", gameID.String()),
zap.Int32("turn", turn),
zap.Error(pubErr))
}
}
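Both publishers lean on the same downstream contract: intents sharing an IdempotencyKey collapse to a single delivery at the notification.Submit boundary. That mechanism is not shown in this diff; a minimal sketch of the collapse, with local stand-in types rather than the real pipeline, is:

```go
package main

import "fmt"

// intent is a local stand-in for the lobby notification intent; only the
// idempotency key matters for this sketch.
type intent struct {
	IdempotencyKey string
}

// dedupe keeps the first intent per key, so a replayed snapshot that
// re-emits e.g. `turn-ready:<game_id>:<turn>` collapses to one delivery.
func dedupe(intents []intent) []intent {
	seen := make(map[string]bool, len(intents))
	var out []intent
	for _, in := range intents {
		if seen[in.IdempotencyKey] {
			continue
		}
		seen[in.IdempotencyKey] = true
		out = append(out, in)
	}
	return out
}

func main() {
	in := []intent{{"turn-ready:g:1"}, {"turn-ready:g:1"}, {"turn-ready:g:2"}}
	fmt.Println(len(dedupe(in))) // 2
}
```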
// OnGameFinished completes the game lifecycle: marks the game as
// `finished`, evaluates capable-finish per active member, and
// transitions reservation rows to either `pending_registration`
@@ -230,13 +335,28 @@ func mergeRuntimeSnapshot(prev, next RuntimeSnapshot) RuntimeSnapshot {
// nextStatusFromSnapshot maps the runtime-reported runtime status into
// a lobby status transition. Returns (next, true) when the lobby
// status must change; (current, false) otherwise.
//
// The map intentionally distinguishes the pre-running boot path
// (`starting → start_failed`) from the in-flight failure path
// (`running → paused`). Paused games can be resumed by the admin via
// the explicit `/resume` transition; the runtime keeps the engine
// container alive, the scheduler short-circuits ticks while paused,
// and any user-games command/order is rejected by the order handler
// with `turn_already_closed` until the game resumes.
func nextStatusFromSnapshot(currentStatus string, snapshot RuntimeSnapshot) (string, bool) {
switch snapshot.RuntimeStatus {
case "running":
if currentStatus == GameStatusStarting {
return GameStatusRunning, true
}
-case "engine_unreachable", "start_failed", "generation_failed":
+case "engine_unreachable", "generation_failed":
if currentStatus == GameStatusStarting {
return GameStatusStartFailed, true
}
if currentStatus == GameStatusRunning {
return GameStatusPaused, true
}
case "start_failed":
if currentStatus == GameStatusStarting {
return GameStatusStartFailed, true
}
@@ -0,0 +1,207 @@
package lobby_test
import (
"context"
"database/sql"
"fmt"
"sync"
"testing"
"time"
"galaxy/backend/internal/config"
"galaxy/backend/internal/lobby"
"github.com/google/uuid"
)
// capturingPublisher records every `LobbyNotification` intent that the
// lobby service emits, so a test can assert the producer side without
// running the real notification.Submit pipeline.
type capturingPublisher struct {
mu sync.Mutex
items []lobby.LobbyNotification
}
func (p *capturingPublisher) PublishLobbyEvent(_ context.Context, ev lobby.LobbyNotification) error {
p.mu.Lock()
defer p.mu.Unlock()
p.items = append(p.items, ev)
return nil
}
func (p *capturingPublisher) byKind(kind string) []lobby.LobbyNotification {
p.mu.Lock()
defer p.mu.Unlock()
out := make([]lobby.LobbyNotification, 0, len(p.items))
for _, ev := range p.items {
if ev.Kind == kind {
out = append(out, ev)
}
}
return out
}
// newServiceWithPublisher mirrors `newServiceForTest` but lets the
// caller inject a custom NotificationPublisher; the runtime-hooks
// emit path needs to observe intents directly.
func newServiceWithPublisher(t *testing.T, db *sql.DB, now func() time.Time, max int32, publisher lobby.NotificationPublisher) *lobby.Service {
t.Helper()
store := lobby.NewStore(db)
cache := lobby.NewCache()
if err := cache.Warm(context.Background(), store); err != nil {
t.Fatalf("warm cache: %v", err)
}
svc, err := lobby.NewService(lobby.Deps{
Store: store,
Cache: cache,
Notification: publisher,
Entitlement: stubEntitlement{max: max},
Config: config.LobbyConfig{
SweeperInterval: time.Second,
PendingRegistrationTTL: time.Hour,
InviteDefaultTTL: time.Hour,
},
Now: now,
})
if err != nil {
t.Fatalf("new service: %v", err)
}
return svc
}
// TestOnRuntimeSnapshotEmitsTurnReady verifies that an engine snapshot
// advancing `current_turn` fans out a `game.turn.ready` intent to every
// active member, that the idempotency key is anchored on (game_id, turn),
// and that a snapshot with the same turn does not re-emit.
func TestOnRuntimeSnapshotEmitsTurnReady(t *testing.T) {
db := startPostgres(t)
now := time.Now().UTC()
clock := func() time.Time { return now }
publisher := &capturingPublisher{}
svc := newServiceWithPublisher(t, db, clock, 5, publisher)
owner := uuid.New()
seedAccount(t, db, owner)
game, err := svc.CreateGame(context.Background(), lobby.CreateGameInput{
OwnerUserID: &owner,
Visibility: lobby.VisibilityPrivate,
GameName: "Turn-Ready Fan-Out",
MinPlayers: 1,
MaxPlayers: 4,
StartGapHours: 1,
StartGapPlayers: 1,
EnrollmentEndsAt: now.Add(time.Hour),
TurnSchedule: "0 0 * * *",
TargetEngineVersion: "1.0.0",
})
if err != nil {
t.Fatalf("create game: %v", err)
}
if _, err := svc.OpenEnrollment(context.Background(), &owner, false, game.GameID); err != nil {
t.Fatalf("open enrollment: %v", err)
}
// Seed two active members through the store so the test focuses on
// the runtime hook, not the membership state machine.
store := lobby.NewStore(db)
canonicalPolicy, err := lobby.NewPolicy()
if err != nil {
t.Fatalf("new policy: %v", err)
}
memberA := uuid.New()
memberB := uuid.New()
seedAccount(t, db, memberA)
seedAccount(t, db, memberB)
for i, m := range []uuid.UUID{memberA, memberB} {
race := fmt.Sprintf("Race%d", i+1)
canonical, err := canonicalPolicy.Canonical(race)
if err != nil {
t.Fatalf("canonical %q: %v", race, err)
}
if _, err := db.ExecContext(context.Background(), `
INSERT INTO backend.memberships (
membership_id, game_id, user_id, race_name, canonical_key, status
) VALUES ($1, $2, $3, $4, $5, 'active')
`, uuid.New(), game.GameID, m, race, string(canonical)); err != nil {
t.Fatalf("seed membership %s: %v", m, err)
}
}
if err := svc.Cache().Warm(context.Background(), store); err != nil {
t.Fatalf("re-warm cache: %v", err)
}
if _, err := svc.ReadyToStart(context.Background(), &owner, false, game.GameID); err != nil {
t.Fatalf("ready-to-start: %v", err)
}
if _, err := svc.Start(context.Background(), &owner, false, game.GameID); err != nil {
t.Fatalf("start: %v", err)
}
// First snapshot: prev=0, current_turn=1 → emit on the very first
// turn after the engine starts producing.
if err := svc.OnRuntimeSnapshot(context.Background(), game.GameID, lobby.RuntimeSnapshot{
CurrentTurn: 1,
RuntimeStatus: "running",
}); err != nil {
t.Fatalf("on-runtime-snapshot 1: %v", err)
}
intents := publisher.byKind(lobby.NotificationGameTurnReady)
if len(intents) != 1 {
t.Fatalf("after turn 1 want 1 turn-ready intent, got %d", len(intents))
}
first := intents[0]
wantKey := fmt.Sprintf("turn-ready:%s:1", game.GameID)
if first.IdempotencyKey != wantKey {
t.Errorf("turn 1 idempotency key = %q, want %q", first.IdempotencyKey, wantKey)
}
if got := first.Payload["turn"]; got != int32(1) {
t.Errorf("turn 1 payload turn = %v, want 1", got)
}
if got := first.Payload["game_id"]; got != game.GameID.String() {
t.Errorf("turn 1 payload game_id = %v, want %s", got, game.GameID)
}
if len(first.Recipients) != 2 {
t.Errorf("turn 1 recipients = %d, want 2", len(first.Recipients))
}
recipientSet := map[uuid.UUID]struct{}{}
for _, r := range first.Recipients {
recipientSet[r] = struct{}{}
}
if _, ok := recipientSet[memberA]; !ok {
t.Errorf("turn 1 missing memberA in recipients")
}
if _, ok := recipientSet[memberB]; !ok {
t.Errorf("turn 1 missing memberB in recipients")
}
// Same turn re-delivered (duplicate snapshot, gateway replay) must
// not re-emit at the lobby layer: prev catches up to merged.
if err := svc.OnRuntimeSnapshot(context.Background(), game.GameID, lobby.RuntimeSnapshot{
CurrentTurn: 1,
RuntimeStatus: "running",
}); err != nil {
t.Fatalf("on-runtime-snapshot 1 replay: %v", err)
}
if got := len(publisher.byKind(lobby.NotificationGameTurnReady)); got != 1 {
t.Fatalf("after duplicate turn 1 want 1 intent, got %d", got)
}
// Next turn advances → second emit with key anchored on turn 2.
if err := svc.OnRuntimeSnapshot(context.Background(), game.GameID, lobby.RuntimeSnapshot{
CurrentTurn: 2,
RuntimeStatus: "running",
}); err != nil {
t.Fatalf("on-runtime-snapshot 2: %v", err)
}
intents = publisher.byKind(lobby.NotificationGameTurnReady)
if len(intents) != 2 {
t.Fatalf("after turn 2 want 2 turn-ready intents, got %d", len(intents))
}
wantKey2 := fmt.Sprintf("turn-ready:%s:2", game.GameID)
if intents[1].IdempotencyKey != wantKey2 {
t.Errorf("turn 2 idempotency key = %q, want %q", intents[1].IdempotencyKey, wantKey2)
}
if got := intents[1].Payload["turn"]; got != int32(2) {
t.Errorf("turn 2 payload turn = %v, want 2", got)
}
}
@@ -0,0 +1,127 @@
package lobby
import "testing"
// TestNextStatusFromSnapshot covers the pure status-mapping function
// that drives `OnRuntimeSnapshot`'s lifecycle transitions. The Phase
// 25 contribution is the `running → paused` branch on
// `engine_unreachable` / `generation_failed`: the order handler relies
// on the `paused` game status to reject late submits with
// `turn_already_closed`.
func TestNextStatusFromSnapshot(t *testing.T) {
t.Parallel()
tests := []struct {
name string
currentStatus string
runtimeStatus string
wantStatus string
wantTransit bool
}{
{
name: "starting then running flips to running",
currentStatus: GameStatusStarting,
runtimeStatus: "running",
wantStatus: GameStatusRunning,
wantTransit: true,
},
{
name: "running on running snapshot does not transit",
currentStatus: GameStatusRunning,
runtimeStatus: "running",
wantStatus: GameStatusRunning,
wantTransit: false,
},
{
name: "starting then engine_unreachable flips to start_failed",
currentStatus: GameStatusStarting,
runtimeStatus: "engine_unreachable",
wantStatus: GameStatusStartFailed,
wantTransit: true,
},
{
name: "starting then generation_failed flips to start_failed",
currentStatus: GameStatusStarting,
runtimeStatus: "generation_failed",
wantStatus: GameStatusStartFailed,
wantTransit: true,
},
{
name: "running then engine_unreachable flips to paused",
currentStatus: GameStatusRunning,
runtimeStatus: "engine_unreachable",
wantStatus: GameStatusPaused,
wantTransit: true,
},
{
name: "running then generation_failed flips to paused",
currentStatus: GameStatusRunning,
runtimeStatus: "generation_failed",
wantStatus: GameStatusPaused,
wantTransit: true,
},
{
name: "paused stays paused on repeated failed snapshot",
currentStatus: GameStatusPaused,
runtimeStatus: "generation_failed",
wantStatus: GameStatusPaused,
wantTransit: false,
},
{
name: "starting then start_failed flips to start_failed",
currentStatus: GameStatusStarting,
runtimeStatus: "start_failed",
wantStatus: GameStatusStartFailed,
wantTransit: true,
},
{
name: "running ignores start_failed",
currentStatus: GameStatusRunning,
runtimeStatus: "start_failed",
wantStatus: GameStatusRunning,
wantTransit: false,
},
{
name: "running on finished flips to finished",
currentStatus: GameStatusRunning,
runtimeStatus: "finished",
wantStatus: GameStatusFinished,
wantTransit: true,
},
{
name: "finished stays finished on finished snapshot",
currentStatus: GameStatusFinished,
runtimeStatus: "finished",
wantStatus: GameStatusFinished,
wantTransit: false,
},
{
name: "cancelled stays cancelled on finished snapshot",
currentStatus: GameStatusCancelled,
runtimeStatus: "finished",
wantStatus: GameStatusCancelled,
wantTransit: false,
},
{
name: "paused on stopped snapshot flips to finished",
currentStatus: GameStatusPaused,
runtimeStatus: "stopped",
wantStatus: GameStatusFinished,
wantTransit: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
got, transit := nextStatusFromSnapshot(tt.currentStatus, RuntimeSnapshot{
RuntimeStatus: tt.runtimeStatus,
})
if got != tt.wantStatus {
t.Errorf("status = %q, want %q", got, tt.wantStatus)
}
if transit != tt.wantTransit {
t.Errorf("transit = %v, want %v", transit, tt.wantTransit)
}
})
}
}
@@ -232,6 +232,22 @@ func (s *Store) ListMyGames(ctx context.Context, userID uuid.UUID) ([]GameRecord
return modelsToGameRecords(rows)
}
// DeleteGame removes the row at gameID. Cascades through every
// referencing table (memberships / applications / invites /
// runtime_records / player_mappings — all declared with ON DELETE
// CASCADE in `00001_init.sql`). Idempotent: returns nil when no row
// matches. Used by the dev-sandbox bootstrap to scrub terminal
// games on every backend boot so the developer's lobby never piles
// up cancelled tiles.
func (s *Store) DeleteGame(ctx context.Context, gameID uuid.UUID) error {
g := table.Games
stmt := g.DELETE().WHERE(g.GameID.EQ(postgres.UUID(gameID)))
if _, err := stmt.ExecContext(ctx, s.db); err != nil {
return fmt.Errorf("lobby store: delete game %s: %w", gameID, err)
}
return nil
}
// gameUpdate is the parameter struct for UpdateGame. Nil pointers leave
// the corresponding column alone.
type gameUpdate struct {
@@ -17,6 +17,8 @@ const (
KindRuntimeImagePullFailed = "runtime.image_pull_failed"
KindRuntimeContainerStartFailed = "runtime.container_start_failed"
KindRuntimeStartConfigInvalid = "runtime.start_config_invalid"
KindGameTurnReady = "game.turn.ready"
KindGamePaused = "game.paused"
)
// CatalogEntry describes the per-kind delivery policy: which channels
@@ -95,6 +97,12 @@ var catalog = map[string]CatalogEntry{
Admin: true,
MailTemplateID: KindRuntimeStartConfigInvalid,
},
KindGameTurnReady: {
Channels: []string{ChannelPush},
},
KindGamePaused: {
Channels: []string{ChannelPush},
},
}
// LookupCatalog returns the per-kind policy and a boolean reporting
@@ -123,5 +131,7 @@ func SupportedKinds() []string {
KindRuntimeImagePullFailed,
KindRuntimeContainerStartFailed,
KindRuntimeStartConfigInvalid,
KindGameTurnReady,
KindGamePaused,
}
}
@@ -39,6 +39,8 @@ func TestCatalogChannels(t *testing.T) {
KindRuntimeImagePullFailed: {ChannelEmail},
KindRuntimeContainerStartFailed: {ChannelEmail},
KindRuntimeStartConfigInvalid: {ChannelEmail},
KindGameTurnReady: {ChannelPush},
KindGamePaused: {ChannelPush},
}
for kind, want := range expect {
entry, ok := LookupCatalog(kind)
@@ -9,9 +9,31 @@ import (
"github.com/google/uuid"
)
// jsonFriendlyKinds lists catalog kinds whose payload is small and
// stable enough that the gateway-bound encoding stays JSON instead of
// FlatBuffers. The default for new producers is still FB; declaring a
// kind here is a deliberate decision baked into the build target's
// payload contract.
//
// `game.turn.ready` ships `{game_id, turn}` only, the UI parses it
// inline in `routes/games/[id]/+layout.svelte` (Phase 24), and no
// other consumer reads the payload — adopting the FB encoder would
// require a new TS notification stub set and the regen tooling for
// `pkg/schema/fbs/notification.fbs` without buying anything.
//
// `game.paused` (Phase 25) follows the same JSON-friendly contract:
// payload is `{game_id, turn, reason}` consumed by the same in-game
// shell layout, so there is no value in dragging a FB schema in for
// one consumer.
var jsonFriendlyKinds = map[string]bool{
KindGameTurnReady: true,
KindGamePaused: true,
}
// TestBuildClientPushEventCoversCatalog asserts that every catalog kind
-// returns a typed FB event (preMarshaledEvent) and that an unknown kind
-// falls through to the JSON safety net.
+// is exercised by this test, that FB-typed kinds return a
+// `preMarshaledEvent`, and that JSON-friendly kinds (see
+// `jsonFriendlyKinds` above) return a `push.JSONEvent`.
func TestBuildClientPushEventCoversCatalog(t *testing.T) {
t.Parallel()
@@ -57,6 +79,15 @@ func TestBuildClientPushEventCoversCatalog(t *testing.T) {
"game_id": gameID.String(),
"reason": "missing engine version",
}},
{"game turn ready", KindGameTurnReady, map[string]any{
"game_id": gameID.String(),
"turn": int32(7),
}},
{"game paused", KindGamePaused, map[string]any{
"game_id": gameID.String(),
"turn": int32(7),
"reason": "generation_failed",
}},
}
seenKinds := map[string]bool{}
@@ -78,8 +109,10 @@ func TestBuildClientPushEventCoversCatalog(t *testing.T) {
if len(bytes) == 0 {
t.Fatalf("Marshal returned empty bytes")
}
-if _, isJSON := event.(push.JSONEvent); isJSON {
-t.Fatalf("expected typed FB event for %s, got JSONEvent", tt.kind)
+_, isJSON := event.(push.JSONEvent)
+wantJSON := jsonFriendlyKinds[tt.kind]
+if isJSON != wantJSON {
+t.Fatalf("kind %s: JSONEvent=%v, want JSONEvent=%v", tt.kind, isJSON, wantJSON)
}
})
seenKinds[tt.kind] = true
@@ -418,7 +418,7 @@ CREATE INDEX race_names_pending_eligible_idx
-- finished) and the container-state escape hatch (removed) used by
-- reconciliation when the recorded container has disappeared.
CREATE TABLE runtime_records (
-game_id uuid PRIMARY KEY,
+game_id uuid PRIMARY KEY REFERENCES games (game_id) ON DELETE CASCADE,
status text NOT NULL,
current_container_id text,
current_image_ref text,
@@ -465,7 +465,7 @@ CREATE TABLE engine_versions (
-- roster reads. The partial UNIQUE on (game_id, race_name) enforces the
-- one-race-per-game invariant at the storage boundary.
CREATE TABLE player_mappings (
-game_id uuid NOT NULL,
+game_id uuid NOT NULL REFERENCES games (game_id) ON DELETE CASCADE,
user_id uuid NOT NULL,
race_name text NOT NULL,
engine_player_uuid uuid NOT NULL,
@@ -605,7 +605,8 @@ CREATE TABLE notifications (
'lobby.race_name.registered', 'lobby.race_name.pending',
'lobby.race_name.expired',
'runtime.image_pull_failed', 'runtime.container_start_failed',
-'runtime.start_config_invalid'
+'runtime.start_config_invalid',
+'game.turn.ready', 'game.paused'
))
);
@@ -42,4 +42,23 @@ var (
// ErrShutdown means the runtime service has stopped accepting
// work because the parent context was cancelled.
ErrShutdown = errors.New("runtime: shutting down")
// ErrTurnAlreadyClosed reports that the runtime is currently
// producing a turn — runtime status is `generation_in_progress`
// — and the engine is not accepting writes for the closing
// turn. Handlers map this to HTTP 409 with httperr code
// `turn_already_closed`; the UI shows a conflict banner and
// waits for the next `game.turn.ready` push.
ErrTurnAlreadyClosed = errors.New("runtime: turn already closed")
// ErrGamePaused reports that the game is not in a state that
// accepts user-games commands or orders: the runtime row
// carries `paused = true`, or the runtime status lands on any
// terminal value (`engine_unreachable`, `generation_failed`,
// `stopped`, `finished`, `removed`), or the game has not yet
// finished bootstrapping (`starting`). Handlers map this to
// HTTP 409 with httperr code `game_paused`; the UI surfaces a
// pause banner and waits for an admin resume or a fresh
// snapshot.
ErrGamePaused = errors.New("runtime: game paused")
)
@@ -0,0 +1,82 @@
package runtime
import (
"errors"
"testing"
)
// TestOrdersAcceptStatus pins down the Phase 25 pre-check that
// gates the user-games command/order handlers against the runtime
// record. The decision must distinguish a turn cutoff (engine is
// producing) from a paused game so the UI can surface the right
// banner; all other non-running runtime statuses collapse into
// `ErrGamePaused`.
func TestOrdersAcceptStatus(t *testing.T) {
t.Parallel()
tests := []struct {
name string
rec RuntimeRecord
want error
}{
{
name: "running and not paused accepts orders",
rec: RuntimeRecord{Status: RuntimeStatusRunning, Paused: false},
want: nil,
},
{
name: "running but paused returns game paused",
rec: RuntimeRecord{Status: RuntimeStatusRunning, Paused: true},
want: ErrGamePaused,
},
{
name: "generation in progress returns turn already closed",
rec: RuntimeRecord{Status: RuntimeStatusGenerationInProgress},
want: ErrTurnAlreadyClosed,
},
{
name: "generation failed returns game paused",
rec: RuntimeRecord{Status: RuntimeStatusGenerationFailed},
want: ErrGamePaused,
},
{
name: "engine unreachable returns game paused",
rec: RuntimeRecord{Status: RuntimeStatusEngineUnreachable},
want: ErrGamePaused,
},
{
name: "stopped returns game paused",
rec: RuntimeRecord{Status: RuntimeStatusStopped},
want: ErrGamePaused,
},
{
name: "finished returns game paused",
rec: RuntimeRecord{Status: RuntimeStatusFinished},
want: ErrGamePaused,
},
{
name: "removed returns game paused",
rec: RuntimeRecord{Status: RuntimeStatusRemoved},
want: ErrGamePaused,
},
{
name: "starting returns game paused",
rec: RuntimeRecord{Status: RuntimeStatusStarting},
want: ErrGamePaused,
},
{
name: "paused takes precedence over generation in progress",
rec: RuntimeRecord{Status: RuntimeStatusGenerationInProgress, Paused: true},
want: ErrGamePaused,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
got := OrdersAcceptStatus(tt.rec)
if !errors.Is(got, tt.want) {
t.Errorf("OrdersAcceptStatus = %v, want %v", got, tt.want)
}
})
}
}
@@ -7,6 +7,7 @@ import (
"time"
"galaxy/backend/internal/dockerclient"
"galaxy/backend/internal/engineclient"
"galaxy/cronutil"
"github.com/google/uuid"
@@ -213,6 +214,22 @@ func (sch *Scheduler) loop(ctx context.Context, rec RuntimeRecord, done chan str
// tick runs one engine /admin/turn call under the per-game mutex,
// publishes the resulting snapshot, and clears `skip_next_tick`.
//
// Phase 25 wraps the engine call between two runtime-status flips so
// the backend order handler can reject late submits while the engine
// is producing:
//
// - before `Engine.Turn`: runtime status moves to
// `generation_in_progress`; the loop's running-only guard tolerates
// this because the flip back happens inside the same tick.
// - on success: runtime status moves back to `running` (unless the
// engine reports `finished`, in which case `publishSnapshot` has
// already promoted the row to `finished`).
// - on error: runtime status moves to `generation_failed` (engine
// validation failure) or `engine_unreachable` (transport / 5xx).
// The matching snapshot is forwarded to lobby through
// `publishFailureSnapshot` so lobby can flip the game to `paused`
// and emit `game.paused`.
func (sch *Scheduler) tick(ctx context.Context, rec RuntimeRecord) error {
mu := sch.svc.gameLock(rec.GameID)
if !mu.TryLock() {
@@ -224,10 +241,24 @@ func (sch *Scheduler) tick(ctx context.Context, rec RuntimeRecord) error {
	if err != nil {
		return err
	}
	if _, err := sch.svc.transitionRuntimeStatus(ctx, rec.GameID, RuntimeStatusGenerationInProgress, ""); err != nil {
		sch.svc.completeOperation(ctx, op, err)
		return err
	}
	state, err := sch.svc.deps.Engine.Turn(ctx, rec.EngineEndpoint)
	if err != nil {
		sch.svc.completeOperation(ctx, op, err)
		_, _ = sch.svc.transitionRuntimeStatus(ctx, rec.GameID, RuntimeStatusEngineUnreachable, "")
		failureStatus := RuntimeStatusEngineUnreachable
		if errors.Is(err, engineclient.ErrEngineValidation) {
			failureStatus = RuntimeStatusGenerationFailed
		}
		_, _ = sch.svc.transitionRuntimeStatus(ctx, rec.GameID, failureStatus, "down")
		if pubErr := sch.svc.publishFailureSnapshot(ctx, rec.GameID, failureStatus); pubErr != nil {
			sch.svc.deps.Logger.Warn("publish failure snapshot to lobby",
				zap.String("game_id", rec.GameID.String()),
				zap.String("runtime_status", failureStatus),
				zap.Error(pubErr))
		}
		// On engine unreachable, also clear skip_next_tick so the next
		// real tick can start fresh.
		_ = sch.clearSkipFlag(ctx, rec.GameID)
@@ -244,6 +275,12 @@ func (sch *Scheduler) tick(ctx context.Context, rec RuntimeRecord) error {
		sch.svc.completeOperation(ctx, op, err)
		return err
	}
	if !state.Finished {
		// `publishSnapshot` patches CurrentTurn / EngineHealth but does
		// not reset the status column; reopen the orders window here so
		// the next loop iteration finds the runtime back in `running`.
		_, _ = sch.svc.transitionRuntimeStatus(ctx, rec.GameID, RuntimeStatusRunning, "ok")
	}
	sch.svc.completeOperation(ctx, op, nil)
	_ = sch.clearSkipFlag(ctx, rec.GameID)
	return nil
+78
@@ -257,6 +257,57 @@ func (s *Service) ResolvePlayerMapping(ctx context.Context, gameID, userID uuid.
	return s.deps.Store.LoadPlayerMapping(ctx, gameID, userID)
}
// CheckOrdersAccept verifies that the runtime is in a state that
// accepts user-games commands and orders. It is called by the user
// game-proxy handlers (`Commands`, `Orders`) before forwarding to
// engine, so the backend's turn-cutoff and pause guards run before
// network traffic leaves the host. The decision itself lives in the
// pure helper `OrdersAcceptStatus` so it can be unit-tested without
// constructing a full Service.
//
// A missing runtime row is surfaced as `ErrNotFound` so the handler
// keeps its existing 404 behaviour.
func (s *Service) CheckOrdersAccept(ctx context.Context, gameID uuid.UUID) error {
	rec, err := s.GetRuntime(ctx, gameID)
	if err != nil {
		return err
	}
	return OrdersAcceptStatus(rec)
}
// OrdersAcceptStatus inspects a runtime record and returns the
// matching sentinel for the user-games order/command pre-check:
//
// - `runtime_status = generation_in_progress` → `ErrTurnAlreadyClosed`.
// The cron-driven `Scheduler.tick` has flipped the row before
// calling the engine. The order window reopens once the tick
// completes successfully.
//
// - `runtime_status ∈ {engine_unreachable, generation_failed,
// stopped, finished, removed, starting}` → `ErrGamePaused`.
// The game is not in a state that accepts writes; the lobby
// state machine has either already flipped the game to
// `paused` / `finished` or is still bootstrapping.
//
// - `runtime.Paused = true` → `ErrGamePaused`. The lobby admin
// paused the game explicitly.
//
// - `runtime_status = running` and `Paused = false` → nil
// (forward).
func OrdersAcceptStatus(rec RuntimeRecord) error {
	if rec.Paused {
		return ErrGamePaused
	}
	switch rec.Status {
	case RuntimeStatusRunning:
		return nil
	case RuntimeStatusGenerationInProgress:
		return ErrTurnAlreadyClosed
	default:
		return ErrGamePaused
	}
}
// EngineEndpoint returns the engine endpoint URL for gameID. Used by
// the user game-proxy handlers.
func (s *Service) EngineEndpoint(ctx context.Context, gameID uuid.UUID) (string, error) {
@@ -812,6 +863,33 @@ func (s *Service) publishSnapshot(ctx context.Context, gameID uuid.UUID, state r
return nil
}
// publishFailureSnapshot forwards a runtime-failure observation to
// lobby so the game lifecycle can react (e.g. flipping `running` to
// `paused` on `engine_unreachable` / `generation_failed` per Phase
// 25). The snapshot carries the unchanged `current_turn` because no
// new turn has been produced; lobby uses the turn number to anchor
// the `game.paused` idempotency key.
//
// The call is best-effort: lobby errors are returned to the caller
// (the scheduler tick) so the warn-level logging stays in one place.
// A missing runtime cache entry (e.g. the row was just removed by
// the reconciler) collapses into a silent no-op.
func (s *Service) publishFailureSnapshot(ctx context.Context, gameID uuid.UUID, runtimeStatus string) error {
	if s.deps.Lobby == nil {
		return nil
	}
	rec, ok := s.deps.Cache.GetRuntime(gameID)
	if !ok {
		return nil
	}
	return s.deps.Lobby.OnRuntimeSnapshot(ctx, gameID, LobbySnapshot{
		CurrentTurn:   rec.CurrentTurn,
		RuntimeStatus: runtimeStatus,
		EngineHealth:  "down",
		ObservedAt:    s.deps.Now().UTC(),
	})
}
// transitionRuntimeStatus updates the status / engine_health columns
// and refreshes the cache.
func (s *Service) transitionRuntimeStatus(ctx context.Context, gameID uuid.UUID, status, health string) (RuntimeRecord, error) {
@@ -200,6 +200,8 @@ func TestServiceStartGameEndToEnd(t *testing.T) {
	engineSrv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		switch r.URL.Path {
		case "/healthz":
			w.WriteHeader(http.StatusOK)
		case "/api/v1/admin/init":
			_ = json.NewEncoder(w).Encode(rest.StateResponse{ID: gameID, Turn: 0, Players: []rest.PlayerState{{RaceName: "Alpha", Planets: 3, Population: 10}}})
		case "/api/v1/admin/status":
+37
@@ -45,11 +45,20 @@ var pathParamStubs = map[string]string{
	"delivery_id":       "00000000-0000-0000-0000-000000000006",
	"user_id":           "00000000-0000-0000-0000-000000000007",
	"device_session_id": "00000000-0000-0000-0000-000000000008",
	"battle_id":         "00000000-0000-0000-0000-000000000009",
	"id":                "1.2.3",
	"username":          "alice",
	"turn":              "42",
}
// queryParamStubs lists the deterministic substitutions used to fill
// query-string parameters declared in `openapi.yaml`. Every required
// query parameter must have an entry here; optional ones can stay
// blank (the contract test omits them when no stub is registered).
var queryParamStubs = map[string]string{
	"turn": "42",
}
// requestBodyStubs lists the JSON request bodies the contract test sends for
// each operationId. Operations missing from the map default to an empty
// object `{}`, which is a valid placeholder thanks to `additionalProperties:
@@ -323,6 +332,9 @@ func buildRequest(t *testing.T, c contractOperation) *http.Request {
	t.Helper()
	target := substitutePathParams(t, c.path)
	if query := buildQuery(t, c); query != "" {
		target += "?" + query
	}
	url := "http://backend.internal" + target
	body := bodyFor(t, c)
@@ -376,6 +388,31 @@ func bodyFor(t *testing.T, c contractOperation) requestBody {
}
}
func buildQuery(t *testing.T, c contractOperation) string {
	t.Helper()
	if c.op == nil {
		return ""
	}
	values := make([]string, 0, len(c.op.Parameters))
	for _, p := range c.op.Parameters {
		if p == nil || p.Value == nil {
			continue
		}
		if p.Value.In != "query" {
			continue
		}
		stub, ok := queryParamStubs[p.Value.Name]
		if !ok {
			if p.Value.Required {
				t.Fatalf("operation %q requires query parameter %q with no stub registered", c.operationID, p.Value.Name)
			}
			continue
		}
		values = append(values, p.Value.Name+"="+stub)
	}
	return strings.Join(values, "&")
}
func substitutePathParams(t *testing.T, templated string) string {
	t.Helper()
+126 -2
@@ -14,7 +14,6 @@ import (
	"galaxy/backend/internal/server/httperr"
	"galaxy/backend/internal/server/middleware/userid"
	"galaxy/backend/internal/telemetry"
	"galaxy/model/order"
	gamerest "galaxy/model/rest"

	"github.com/gin-gonic/gin"
@@ -61,6 +60,10 @@ func (h *UserGamesHandlers) Commands() gin.HandlerFunc {
			return
		}
		ctx := c.Request.Context()
		if err := h.runtime.CheckOrdersAccept(ctx, gameID); err != nil {
			respondGameProxyError(c, h.logger, "user games commands", ctx, err)
			return
		}
		mapping, err := h.runtime.ResolvePlayerMapping(ctx, gameID, userID)
		if err != nil {
			respondGameProxyError(c, h.logger, "user games commands", ctx, err)
@@ -106,6 +109,10 @@ func (h *UserGamesHandlers) Orders() gin.HandlerFunc {
			return
		}
		ctx := c.Request.Context()
		if err := h.runtime.CheckOrdersAccept(ctx, gameID); err != nil {
			respondGameProxyError(c, h.logger, "user games orders", ctx, err)
			return
		}
		mapping, err := h.runtime.ResolvePlayerMapping(ctx, gameID, userID)
		if err != nil {
			respondGameProxyError(c, h.logger, "user games orders", ctx, err)
@@ -123,7 +130,6 @@ func (h *UserGamesHandlers) Orders() gin.HandlerFunc {
		// handler. Per ARCHITECTURE.md §9 backend is the only caller
		// of the engine, so the body never carries a client-supplied
		// actor.
		_ = order.Order{}
		payload, err := rebindActor(body, mapping.RaceName)
		if err != nil {
			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "request body must be a JSON object")
@@ -138,6 +144,64 @@ func (h *UserGamesHandlers) Orders() gin.HandlerFunc {
	}
}
// GetOrders handles GET /api/v1/user/games/{game_id}/orders?turn=N.
// Forwards to the engine's `GET /api/v1/order` with the player rebound
// from the runtime mapping. The query parameter `turn` is required
// and must be a non-negative integer; the engine itself enforces the
// same rule, but rejecting up-front saves a network hop.
//
// On `204 No Content` the handler answers `204` so the gateway can
// translate the FBS envelope to `found = false`. On `200` the
// engine's body is forwarded verbatim — the gateway re-encodes the
// JSON `UserGamesOrder` shape into FlatBuffers.
func (h *UserGamesHandlers) GetOrders() gin.HandlerFunc {
	if h == nil || h.runtime == nil || h.engine == nil {
		return handlers.NotImplemented("userGamesGetOrders")
	}
	return func(c *gin.Context) {
		gameID, ok := parseGameIDParam(c)
		if !ok {
			return
		}
		turnRaw := c.Query("turn")
		if turnRaw == "" {
			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "turn is required")
			return
		}
		turn, err := strconv.Atoi(turnRaw)
		if err != nil || turn < 0 {
			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "turn must be a non-negative integer")
			return
		}
		userID, ok := userid.FromContext(c.Request.Context())
		if !ok {
			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "user id missing")
			return
		}
		ctx := c.Request.Context()
		mapping, err := h.runtime.ResolvePlayerMapping(ctx, gameID, userID)
		if err != nil {
			respondGameProxyError(c, h.logger, "user games get orders", ctx, err)
			return
		}
		endpoint, err := h.runtime.EngineEndpoint(ctx, gameID)
		if err != nil {
			respondGameProxyError(c, h.logger, "user games get orders", ctx, err)
			return
		}
		body, status, err := h.engine.GetOrder(ctx, endpoint, mapping.RaceName, turn)
		if err != nil {
			respondEngineProxyError(c, h.logger, "user games get orders", ctx, body, err)
			return
		}
		if status == http.StatusNoContent {
			c.Status(http.StatusNoContent)
			return
		}
		c.Data(http.StatusOK, "application/json", body)
	}
}
// Report handles GET /api/v1/user/games/{game_id}/reports/{turn}.
func (h *UserGamesHandlers) Report() gin.HandlerFunc {
	if h == nil || h.runtime == nil || h.engine == nil {
@@ -179,6 +243,60 @@ func (h *UserGamesHandlers) Report() gin.HandlerFunc {
	}
}
// Battle handles GET /api/v1/user/games/{game_id}/battles/{turn}/{battle_id}.
// Forwards to the engine's `GET /api/v1/battle/:turn/:uuid`. Path
// parameters are validated up-front to save a network hop. 404 from
// the engine is forwarded as 404. The recipient race is resolved
// from the runtime mapping but not forwarded — engine returns the
// battle by id; visibility is enforced by the engine state.
func (h *UserGamesHandlers) Battle() gin.HandlerFunc {
	if h == nil || h.runtime == nil || h.engine == nil {
		return handlers.NotImplemented("userGamesBattle")
	}
	return func(c *gin.Context) {
		gameID, ok := parseGameIDParam(c)
		if !ok {
			return
		}
		turnRaw := c.Param("turn")
		turn, err := strconv.Atoi(turnRaw)
		if err != nil || turn < 0 {
			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "turn must be a non-negative integer")
			return
		}
		battleID := c.Param("battle_id")
		if battleID == "" {
			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "battle id is required")
			return
		}
		userID, ok := userid.FromContext(c.Request.Context())
		if !ok {
			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "user id missing")
			return
		}
		ctx := c.Request.Context()
		if _, err := h.runtime.ResolvePlayerMapping(ctx, gameID, userID); err != nil {
			respondGameProxyError(c, h.logger, "user games battle", ctx, err)
			return
		}
		endpoint, err := h.runtime.EngineEndpoint(ctx, gameID)
		if err != nil {
			respondGameProxyError(c, h.logger, "user games battle", ctx, err)
			return
		}
		body, status, err := h.engine.FetchBattle(ctx, endpoint, turn, battleID)
		if err != nil {
			respondEngineProxyError(c, h.logger, "user games battle", ctx, body, err)
			return
		}
		if status == http.StatusNotFound {
			httperr.Abort(c, http.StatusNotFound, httperr.CodeNotFound, "battle not found")
			return
		}
		c.Data(http.StatusOK, "application/json", body)
	}
}
// rebindActor decodes a JSON object from raw, sets `actor` to
// raceName, and re-encodes. Backend never trusts the actor field
// supplied by the client (per ARCHITECTURE.md §9).
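The hunk above shows only `rebindActor`'s doc comment; its body is outside the diff. A minimal sketch of what such a helper could look like, assuming the comment's decode-set-reencode description (the function name, error text, and `map[string]any` representation here are illustrative, not the repository's actual code):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// rebindActorSketch decodes raw as a JSON object, overwrites the
// `actor` field with the server-resolved race name, and re-encodes.
// Non-object bodies (arrays, scalars) are rejected, matching the
// handler's "request body must be a JSON object" 400 path.
func rebindActorSketch(raw []byte, raceName string) ([]byte, error) {
	var obj map[string]any
	if err := json.Unmarshal(raw, &obj); err != nil {
		return nil, errors.New("request body must be a JSON object")
	}
	obj["actor"] = raceName // server-side identity always wins over the client's claim
	return json.Marshal(obj)
}

func main() {
	out, err := rebindActorSketch([]byte(`{"actor":"spoofed","cmd":[]}`), "Alpha")
	fmt.Println(string(out), err)
}
```

Round-tripping through `map[string]any` drops key order (Go re-marshals map keys sorted), which is harmless for the engine's JSON parser but worth noting if byte-identical passthrough were ever required.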
@@ -201,6 +319,12 @@ func respondGameProxyError(c *gin.Context, logger *zap.Logger, op string, ctx co
	switch {
	case errors.Is(err, runtime.ErrNotFound):
		httperr.Abort(c, http.StatusNotFound, httperr.CodeNotFound, "no runtime mapping for this user/game")
	case errors.Is(err, runtime.ErrTurnAlreadyClosed):
		httperr.Abort(c, http.StatusConflict, httperr.CodeTurnAlreadyClosed,
			"turn already closed; orders are not accepted while the engine is producing")
	case errors.Is(err, runtime.ErrGamePaused):
		httperr.Abort(c, http.StatusConflict, httperr.CodeGamePaused,
			"game is paused; orders are not accepted until it resumes")
	case errors.Is(err, runtime.ErrConflict):
		httperr.Abort(c, http.StatusConflict, httperr.CodeConflict, err.Error())
	default:
@@ -89,9 +89,12 @@ type gameSummaryWire struct {
	EnrollmentEndsAt string `json:"enrollment_ends_at"`
	CreatedAt        string `json:"created_at"`
	UpdatedAt        string `json:"updated_at"`
	CurrentTurn      int32  `json:"current_turn"`
}
// lobbyGameDetailWire mirrors `LobbyGameDetail` from openapi.yaml.
// `current_turn` is inherited from `gameSummaryWire`; the runtime
// fields below carry the runtime projection on top of it.
type lobbyGameDetailWire struct {
	gameSummaryWire

	Visibility string `json:"visibility"`
@@ -100,7 +103,6 @@ type lobbyGameDetailWire struct {
	TargetEngineVersion string  `json:"target_engine_version"`
	StartGapHours       int32   `json:"start_gap_hours"`
	StartGapPlayers     int32   `json:"start_gap_players"`
	CurrentTurn         int32   `json:"current_turn"`
	RuntimeStatus       string  `json:"runtime_status"`
	EngineHealth        string  `json:"engine_health,omitempty"`
	StartedAt           *string `json:"started_at,omitempty"`
@@ -118,6 +120,7 @@ func gameSummaryToWire(g lobby.GameRecord) gameSummaryWire {
		EnrollmentEndsAt: g.EnrollmentEndsAt.UTC().Format(timestampLayout),
		CreatedAt:        g.CreatedAt.UTC().Format(timestampLayout),
		UpdatedAt:        g.UpdatedAt.UTC().Format(timestampLayout),
		CurrentTurn:      g.RuntimeSnapshot.CurrentTurn,
	}
	if g.OwnerUserID != nil {
		s := g.OwnerUserID.String()
@@ -135,7 +138,6 @@ func lobbyGameDetailToWire(g lobby.GameRecord) lobbyGameDetailWire {
		TargetEngineVersion: g.TargetEngineVersion,
		StartGapHours:       g.StartGapHours,
		StartGapPlayers:     g.StartGapPlayers,
		CurrentTurn:         g.RuntimeSnapshot.CurrentTurn,
		RuntimeStatus:       g.RuntimeSnapshot.RuntimeStatus,
		EngineHealth:        g.RuntimeSnapshot.EngineHealth,
	}
@@ -23,6 +23,22 @@ const (
	CodeMethodNotAllowed   = "method_not_allowed"
	CodeInternalError      = "internal_error"
	CodeServiceUnavailable = "service_unavailable"

	// CodeTurnAlreadyClosed marks a user-games command or order rejection
	// caused by the backend's turn-cutoff guard: the request arrived
	// after the active turn started generating (runtime status
	// `generation_in_progress` / `generation_failed` / `engine_unreachable`)
	// and the engine no longer accepts writes for the closing turn. The
	// caller is expected to wait for the next `game.turn.ready` push and
	// resubmit against the new turn.
	CodeTurnAlreadyClosed = "turn_already_closed"

	// CodeGamePaused marks a user-games command or order rejection caused
	// by the lobby-side game lifecycle: the game is in `paused`,
	// `finished`, or any other status that does not accept writes. The
	// caller is expected to wait for the game to resume before
	// resubmitting.
	CodeGamePaused = "game_paused"
)
// Body stores the inner `error` object of the standard envelope.
+2
@@ -261,7 +261,9 @@ func registerUserRoutes(router *gin.Engine, instruments *metrics.Instruments, de
	userGames := group.Group("/games")
	userGames.POST("/:game_id/commands", deps.UserGames.Commands())
	userGames.POST("/:game_id/orders", deps.UserGames.Orders())
	userGames.GET("/:game_id/orders", deps.UserGames.GetOrders())
	userGames.GET("/:game_id/reports/:turn", deps.UserGames.Report())
	userGames.GET("/:game_id/battles/:turn/:battle_id", deps.UserGames.Battle())

	userSessions := group.Group("/sessions")
	userSessions.GET("", deps.UserSessions.List())
+104 -8
@@ -1023,7 +1023,11 @@ paths:
              $ref: "#/components/schemas/EngineOrder"
      responses:
        "200":
          description: Engine order validation result passed through.
          description: |
            Engine order validation result passed through. Body is the
            engine's `UserGamesOrder` shape — game_id, updatedAt, and
            the per-command `cmd[]` list with `cmdApplied` /
            `cmdErrorCode` populated by the engine.
          content:
            application/json:
              schema:
@@ -1036,6 +1040,46 @@ paths:
          $ref: "#/components/responses/NotImplementedError"
        "500":
          $ref: "#/components/responses/InternalError"
    get:
      tags: [User]
      operationId: userGamesGetOrders
      summary: Read the player's stored order for a turn
      description: |
        Forwards `GET /api/v1/order` against the engine container.
        The caller always knows the current turn from the lobby
        record at game boot, so `turn` is required.
      security:
        - UserHeader: []
      parameters:
        - $ref: "#/components/parameters/XUserID"
        - $ref: "#/components/parameters/GameID"
        - name: turn
          in: query
          required: true
          description: Turn number whose stored order to fetch. Non-negative.
          schema:
            type: integer
            format: int32
            minimum: 0
      responses:
        "200":
          description: |
            Engine returned the stored order for this player + turn.
            Body is the engine's `UserGamesOrder` shape.
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/PassthroughObject"
        "204":
          description: No order has been stored for this player on this turn.
        "400":
          $ref: "#/components/responses/InvalidRequestError"
        "404":
          $ref: "#/components/responses/NotFoundError"
        "501":
          $ref: "#/components/responses/NotImplementedError"
        "500":
          $ref: "#/components/responses/InternalError"
  /api/v1/user/games/{game_id}/reports/{turn}:
    get:
      tags: [User]
@@ -1062,6 +1106,44 @@ paths:
          $ref: "#/components/responses/NotImplementedError"
        "500":
          $ref: "#/components/responses/InternalError"
  /api/v1/user/games/{game_id}/battles/{turn}/{battle_id}:
    get:
      tags: [User]
      operationId: userGamesBattle
      summary: Read one engine battle report
      description: |
        Forwards to the engine's `GET /api/v1/battle/:turn/:uuid`. The
        engine response body is passed through verbatim. `404 Not Found`
        is returned when the battle does not exist for the supplied
        `turn` / `battle_id` pair.
      security:
        - UserHeader: []
      parameters:
        - $ref: "#/components/parameters/XUserID"
        - $ref: "#/components/parameters/GameID"
        - $ref: "#/components/parameters/Turn"
        - name: battle_id
          in: path
          required: true
          description: Battle identifier (RFC 4122 UUID).
          schema:
            type: string
            format: uuid
      responses:
        "200":
          description: Engine battle report passed through.
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/PassthroughObject"
        "400":
          $ref: "#/components/responses/InvalidRequestError"
        "404":
          $ref: "#/components/responses/NotFoundError"
        "501":
          $ref: "#/components/responses/NotImplementedError"
        "500":
          $ref: "#/components/responses/InternalError"
  /api/v1/user/sessions:
    get:
      tags: [User]
@@ -2270,9 +2352,10 @@ components:
          type: string
          description: |
            Stable machine-readable failure marker. The closed set is
            `not_implemented`, `invalid_request`, `unauthorized`, `not_found`,
            `conflict`, `method_not_allowed`, `internal_error`,
            `service_unavailable`.
            `not_implemented`, `invalid_request`, `unauthorized`,
            `forbidden`, `not_found`, `conflict`, `method_not_allowed`,
            `internal_error`, `service_unavailable`,
            `turn_already_closed`, `game_paused`.
          enum:
            - not_implemented
            - invalid_request
@@ -2283,6 +2366,8 @@ components:
            - method_not_allowed
            - internal_error
            - service_unavailable
            - turn_already_closed
            - game_paused
        message:
          type: string
          description: Human-readable client-safe failure description.
@@ -2303,7 +2388,13 @@ components:
          format: email
        locale:
          type: string
          description: Optional BCP 47 locale tag preferred for the delivered code.
          description: |
            Optional BCP 47 locale tag preferred for the delivered code.
            Read by the gateway in preference to the request
            `Accept-Language` header so Safari clients (which silently
            drop JS-set `Accept-Language`) can still pick a non-system
            mail language. Empty / malformed values fall back to the
            header, which in turn falls back to `en`.
    PublicAuthSendEmailCodeResponse:
      type: object
      additionalProperties: false
@@ -2509,6 +2600,7 @@ components:
        - enrollment_ends_at
        - created_at
        - updated_at
        - current_turn
      properties:
        game_id:
          type: string
@@ -2557,6 +2649,13 @@ components:
        updated_at:
          type: string
          format: date-time
        current_turn:
          type: integer
          description: |
            Most recent turn number observed by backend's runtime
            projection. Zero before the engine produces its first
            snapshot. The user surface uses it to fetch the matching
            `user.games.report` without a separate state query.
    GameSummaryPage:
      type: object
      additionalProperties: false
@@ -2714,7 +2813,6 @@ components:
        - target_engine_version
        - start_gap_hours
        - start_gap_players
        - current_turn
        - runtime_status
      properties:
        visibility:
@@ -2730,8 +2828,6 @@ components:
          type: integer
        start_gap_players:
          type: integer
        current_turn:
          type: integer
        runtime_status:
          type: string
        engine_health:
+7
@@ -1,5 +1,12 @@
# World rendering package
> **Deprecated.** This package belongs to the deprecated
> `galaxy/client` Fyne client. New code must not import it. The
> active map renderer lives in `ui/frontend/src/map/` (TypeScript
> + PixiJS), with its specification in `ui/docs/renderer.md`. The
> sources here remain for historical context only and are not the
> reference algorithm for the new renderer.
## Purpose
`world` is the client-side map model and renderer for a 2D world that normally
+45 -9
@@ -145,6 +145,15 @@ because they cross domain boundaries:
`X-User-ID`. Public games carry `owner_user_id IS NULL`; the partial
index on `(owner_user_id) WHERE visibility = 'private'` keeps the
private-owner lookup efficient.
- **Authenticated lobby commands** flow through the gateway envelope
by `message_type`. The catalog is `lobby.my.games.list`,
`lobby.public.games.list`, `lobby.my.applications.list`,
`lobby.my.invites.list`, `lobby.game.create`,
`lobby.game.open-enrollment`, `lobby.application.submit`,
`lobby.invite.redeem`, and `lobby.invite.decline`. Each lands on a
REST handler under `/api/v1/user/lobby/*`; the gateway forces
visibility to `private` on `lobby.game.create` before forwarding,
matching the user-surface invariant above.
| Package | Responsibility |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -362,11 +371,15 @@ Authenticated client traffic for in-game operations crosses three
serialisation boundaries: signed-gRPC FlatBuffers (client ↔ gateway),
JSON over REST (gateway ↔ backend), and JSON over REST again
(backend ↔ engine). Gateway owns the FB ↔ JSON transcoding for the
three message types `user.games.command`, `user.games.order`,
`user.games.report` (FB schemas in `pkg/schema/fbs/{order,report}`,
encoders in `pkg/transcoder`). Backend never touches FlatBuffers and
never re-interprets the JSON beyond rebinding the actor field from
the runtime player mapping (clients never carry a trusted actor).
four message types `user.games.command`, `user.games.order`,
`user.games.order.get`, `user.games.report` (FB schemas in
`pkg/schema/fbs/{order,report}`, encoders in `pkg/transcoder`).
`user.games.order.get` reads back the player's stored order for a
given turn — paired with the POST `user.games.order` so the client
can hydrate its local draft after a cache loss without re-deriving
from the report. Backend never touches FlatBuffers and never
re-interprets the JSON beyond rebinding the actor field from the
runtime player mapping (clients never carry a trusted actor).
Container state is owned by `backend/internal/runtime`:
@@ -531,6 +544,15 @@ This section describes the secure exchange model between client and
gateway. It applies at the public boundary and does not rely on backend
behaviour for any of its guarantees.
The authenticated edge listener is built on `connectrpc.com/connect` and
natively serves the Connect, gRPC, and gRPC-Web protocols on a single
HTTP/2 cleartext (`h2c`) port. Browser clients use Connect via
`@connectrpc/connect-web`; native iOS / Android / desktop clients can
use either Connect or raw gRPC framing against the same listener.
Envelope, signature, freshness, and anti-replay rules below are
protocol-agnostic — they apply identically to every supported wire
framing.
### Principles
- No browser cookies.
@@ -563,7 +585,9 @@ and revoke metadata.
the device.
- Browser/WASM clients use WebCrypto with non-exportable storage where
available. Loss of browser storage is acceptable and is recovered by
re-login.
re-login. The concrete browser baseline, IndexedDB schema, and
keystore lifecycle live in
[`ui/docs/storage.md`](../ui/docs/storage.md).
### Request envelope
@@ -761,9 +785,21 @@ Future scale-out hooks (not in MVP):
- **runtime snapshot** — engine-status read materialised into the lobby's
denormalised view: `current_turn`, `runtime_status`,
`engine_health_summary`, `player_turn_stats`.
- **turn cutoff** — the `running → generation_in_progress` CAS transition
that closes the command window. Commands arriving after the CAS are
rejected.
- **turn cutoff** — the `running → generation_in_progress` runtime-status
flip performed by `backend/internal/runtime/scheduler.go` before each
engine `/admin/turn` call. Commands and orders arriving while the
flag is set are rejected by the user-games handlers with HTTP 409
`turn_already_closed`. The matching reopening flip
(`generation_in_progress → running`) happens on a successful tick;
a failing tick instead drives the lobby to `paused` and fans out
`game.paused` (FUNCTIONAL.md §6.3, §6.5).
- **auto-pause** — the lobby reaction to a failed runtime snapshot
(`engine_unreachable` / `generation_failed`): the game flips
`running → paused`, the order handlers refuse new submits with
HTTP 409 `game_paused`, and `lobby.publishGamePaused` fans out the
push event. Only an admin `/resume` followed by a successful tick
recovers the game; the UI relies on the next `game.turn.ready` to
clear the paused banner.
- **outbox** — the durable queue of pending mail rows in
`mail_deliveries`, drained by the mail worker.
- **freshness window** — the symmetric ±5-minute interval around server
+155 -36
@@ -100,12 +100,15 @@ Branches inside backend:
new one. The client gets the same response shape and is unaware of
the reuse.
- **Otherwise.** Backend creates a new challenge with the resolved
preferred language (derived from the optional `Accept-Language`
header forwarded by gateway, falling back to a default), and
enqueues the auth-mail row directly into the outbox in the same
transaction. SMTP delivery is asynchronous; the auth response
returns as soon as the challenge and outbox rows are durably
committed.
preferred language (derived from the optional `locale` body field
the caller sends — which takes priority — or, if absent or blank,
from the `Accept-Language` header forwarded by gateway, falling
back to a default), and enqueues the auth-mail row directly into
the outbox in the same transaction. SMTP delivery is asynchronous;
the auth response returns as soon as the challenge and outbox rows
are durably committed. The body field is the canonical channel
because Safari silently drops JS-set `Accept-Language` headers;
non-Safari clients can still rely on the header alone.
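The precedence described above (body `locale` first, then the forwarded `Accept-Language` header, then a default) can be sketched as a small helper. This is an illustrative reconstruction, not the backend's actual code: the function name is hypothetical, and a real implementation would likely validate tags with a proper BCP 47 parser rather than a blank-check.

```go
package main

import (
	"fmt"
	"strings"
)

// resolveChallengeLocale sketches the locale-resolution order: the
// explicit body field wins, the Accept-Language header is the
// fallback, and the default closes the chain. Header parsing here is
// deliberately naive: take the first (highest-priority) tag and strip
// any ";q=" weight.
func resolveChallengeLocale(bodyLocale, acceptLanguage, fallback string) string {
	if l := strings.TrimSpace(bodyLocale); l != "" {
		return l
	}
	if h := strings.TrimSpace(acceptLanguage); h != "" {
		first := strings.Split(h, ",")[0]
		return strings.TrimSpace(strings.Split(first, ";")[0])
	}
	return fallback
}

func main() {
	// Header wins only when the body field is blank.
	fmt.Println(resolveChallengeLocale("", "ru-RU,ru;q=0.9,en;q=0.8", "en"))
	// The body field takes priority even when a header is present.
	fmt.Println(resolveChallengeLocale("de", "ru-RU", "en"))
}
```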
### 1.3 Confirming the challenge
@@ -139,9 +142,10 @@ consumed exactly once.
### 1.4 Per-request session lookup
Once the client holds a device session id and a private key, every
authenticated call is a signed gRPC request to gateway. Gateway is the
only component that ever sees the request signature; backend trusts
gateway's verdict.
authenticated call is a signed request to gateway over the
authenticated edge listener (Connect / gRPC / gRPC-Web on a single
HTTP/h2c port). Gateway is the only component that ever sees the
request signature; backend trusts gateway's verdict.
Gateway needs the session's public key to verify the signature, so each
authenticated request resolves the device session through an in-memory
@@ -602,13 +606,16 @@ not duplicated here.
### 6.2 Backend's role: pass-through with authorisation
The signed-gRPC pipeline for in-game traffic uses three message types
on the authenticated surface — `user.games.command`,
`user.games.order`, `user.games.report` each with a typed
FlatBuffers payload. Gateway transcodes the FB request into the JSON
shape backend expects, forwards over plain REST to the corresponding
`/api/v1/user/games/{game_id}/*` endpoint, then transcodes the JSON
response back into FB before signing the reply.
The signed authenticated-edge pipeline for in-game traffic uses four
message types on the authenticated surface — `user.games.command`,
`user.games.order`, `user.games.order.get`, `user.games.report`
each with a typed FlatBuffers payload. Gateway transcodes the FB
request into the JSON shape backend expects, forwards over plain
REST to the corresponding `/api/v1/user/games/{game_id}/*` endpoint,
then transcodes the JSON response back into FB before signing the
reply. `user.games.order.get` is the read-back companion to
`user.games.order`: clients use it to hydrate the local order draft
after a cache loss (fresh install, cleared storage, new device).
For every in-game endpoint the user surface acts as an authorised
pass-through to the engine container. Backend:
@@ -628,18 +635,40 @@ validity and ordering of in-game decisions. Gateway needs to know
the typed FB shape only to transcode the wire format; the per-command
semantics live in the engine.
### 6.3 Turn cutoff
### 6.3 Turn cutoff and auto-pause
A running game continuously alternates between a command-accepting
window and a generation phase. The transition `running →
generation_in_progress` is the cutoff: any command or order that
arrives after the cutoff is rejected by backend before forwarding,
because the engine no longer accepts writes for the closing turn.
After generation finishes, backend re-opens the window for the next
turn.
window and a generation phase, driven by the cron expression stored
in `runtime_records.turn_schedule`. The backend scheduler
(`backend/internal/runtime/scheduler.go`) wraps each engine
`/admin/turn` call between two `runtime_status` flips:
- Before the engine call: `running → generation_in_progress`.
The user-games command/order handlers
(`backend/internal/server/handlers_user_games.go`) consult the
per-game runtime record on every request and reject with
HTTP 409 + `code = turn_already_closed` while the runtime sits in
`generation_in_progress`. The error envelope mirrors backend's
standard `httperr` shape: `{"error": {"code":
"turn_already_closed", "message": "..."}}`.
- After a successful tick: `generation_in_progress → running`.
The order window re-opens for the new turn and the next
scheduled tick continues normally.
- After a failed tick (`engine_unreachable` /
`generation_failed`): the lobby's `OnRuntimeSnapshot` flips the
game from `running` to `paused` and publishes a `game.paused`
push event (see §6.6). The order handlers reject with HTTP 409
+ `code = game_paused` until an admin resume succeeds.
`force-next-turn` (admin) schedules a one-shot extra tick that
advances the next scheduled turn by one cron step; the same
status-flip and rejection rules apply.
Clients distinguish the two rejections by `code`:
`turn_already_closed` means "wait for the next `game.turn.ready`
and resubmit", whereas `game_paused` means "wait for an admin
resume". The web client implements both reactions as specified in
`ui/docs/sync-protocol.md`.
### 6.4 Reports
@@ -647,7 +676,79 @@ Per-turn reports are read-only views fetched from the engine on
demand. Backend authorises the caller and forwards the request;
there is no caching or denormalisation in this path.
The web client renders the report as one section per FBS array
(galaxy summary, votes, player status, my / foreign sciences, my /
foreign ship classes, battles, bombings, approaching groups, my /
foreign / uninhabited / unknown planets, ships in production,
cargo routes, my fleets, my / foreign / unidentified ship groups).
Empty sections render explicit empty-state copy. Section anchors
are exposed in a sticky table of contents (a `<select>` on mobile)
and the scroll position is preserved across active-view switches
via SvelteKit's `Snapshot` API.
The Bombings section is a flat read-only table — one row per
bombing event, columns for `attacker`, `attack_power`, `wiped`
state and the post-bombing resource snapshot. The Battles section
is a list of links into the Battle Viewer (see [§6.5](#65-battle-viewer)).
### 6.5 Battle viewer
The Battle Viewer is a dedicated view that replaces the map and
renders one battle at a time. Entry points:
- A row in the Reports view's Battles section (link with the
current turn pinned via `?turn=`).
- A battle marker on the map (yellow cross drawn through the
corners of the square that circumscribes the planet circle;
stroke width scales with the protocol length).
The viewer is a logically isolated component that consumes a
`BattleReport` (shape per `pkg/model/report/battle.go`). The page
loader (`ui/frontend/src/lib/active-view/battle.svelte`) fetches
the report through the backend gateway route
`GET /api/v1/user/games/{game_id}/battles/{turn}/{battle_id}`,
which forwards verbatim to the engine's
`GET /api/v1/battle/:turn/:uuid`.
The visual model is radial: the planet sits at the centre, races are
placed at equal angular spacing on an outer ring, and each race is
rendered as a cloud of ship-class circles arranged on a Vogel
sunflower spiral biased toward the planet (the largest group by
NumberLeft sits closest to the planet, lighter buckets fan behind).
Tech-variants of the same `(race, className)` collapse into one
visual bucket labelled `<className>:<numLeft>`; per-class detail
stays available in the Reports view. Circle radius scales with
per-ship FullMass (range `[6, 24] px`, per-battle normalisation)
so heavy ships visually dominate. Observer groups (`inBattle:
false`) are not drawn. Eliminated races drop out and the survivors
re-spread on the next frame. The viewer is pinned to the viewport
(scene grows, log scrolls internally) so no page-level scroll
appears.
Each frame is one protocol entry; the shot is drawn as a thin line
from attacker to defender, red on `destroyed`, green otherwise.
Continuous playback offers 1x / 2x / 4x speeds (400 / 200 / 100 ms
per frame), plus play/pause, step ±, and rewind. The accessibility
text protocol below the scene mirrors the same events line-by-line.
Bombings and battles are intentionally not mixed: bombings remain a
static table in the Reports view; the bombing marker on the map is
a thin stroke-only ring around the planet (yellow when damaged, red
when wiped) and a click scrolls the corresponding row into view.
The current report wire carries a `battle: [{ id, planet, shots }]`
summary per battle so the map markers know where to anchor without
fetching every full `BattleReport`.
For DEV / e2e the legacy-report CLI
(`tools/local-dev/legacy-report/cmd/legacy-report-to-json`) emits an
envelope `{version: 1, report, battles}` where `battles` carries the
full `BattleReport`s parsed out of legacy `Battle at (#N)` blocks.
The synthetic-report loader on the lobby unwraps the envelope and
hands every battle to `registerSyntheticBattle`, so the Battle Viewer
resolves any UUID without a network fetch.
### 6.6 Side effects
A successful turn generation publishes a runtime snapshot into the
lobby module, which updates the denormalised view (current turn,
@@ -655,15 +756,32 @@ runtime status, per-player stats). The engine's "game finished"
report drives the `running → finished` transition ([Section 3.5](#35-cancellation-and-finish))
and triggers Race Name Directory promotions ([Section 5](#5-race-name-directory)).
Among the `game.*` notification kinds, `game.turn.ready` and
`game.paused` are wired:
- `game.turn.ready`
`lobby.Service.OnRuntimeSnapshot` (`backend/internal/lobby/runtime_hooks.go`)
emits one intent per advancing `current_turn`, addressed to every
active membership of the game, with idempotency key
`turn-ready:<game_id>:<turn>` and JSON payload `{game_id, turn}`.
- `game.paused` — the same hook publishes one intent per transition
into `paused` driven by an `engine_unreachable` /
`generation_failed` runtime snapshot, addressed to every active
membership, with idempotency key `paused:<game_id>:<turn>` and
JSON payload `{game_id, turn, reason}`. The runtime status that
triggered the transition is carried as `reason` so the UI can
differentiate the copy in a future revision.
Both kinds route through the push channel only; email is
deliberately omitted to avoid per-turn / per-pause spam.
The remaining `game.*` kinds (`game.started`, `game.generation.failed`,
`game.finished`) and `mail.dead_lettered` are reserved without a
producer; adding one is purely additive (register the kind in the
catalog, extend the migration `CHECK` constraint, and call
`notification.Submit` from the appropriate domain module).
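The two wired producers can be sketched as a single decision rule. The types and the function below are simplified stand-ins for the lobby module, not its real signature; only the kinds, idempotency-key formats, and payload fields come from the text above.

```go
package main

import "fmt"

type snapshot struct {
	GameID      string
	CurrentTurn uint
	Status      string // e.g. running, engine_unreachable, generation_failed
}

type intent struct {
	Kind           string
	IdempotencyKey string
	Payload        map[string]any
}

// onRuntimeSnapshot sketches the documented rules: one turn-ready
// intent per advancing current_turn, one paused intent per transition
// into paused driven by a failed tick.
func onRuntimeSnapshot(prevTurn uint, s snapshot) *intent {
	switch {
	case s.CurrentTurn > prevTurn:
		return &intent{
			Kind:           "game.turn.ready",
			IdempotencyKey: fmt.Sprintf("turn-ready:%s:%d", s.GameID, s.CurrentTurn),
			Payload:        map[string]any{"game_id": s.GameID, "turn": s.CurrentTurn},
		}
	case s.Status == "engine_unreachable" || s.Status == "generation_failed":
		// reason carries the runtime status that triggered the pause
		return &intent{
			Kind:           "game.paused",
			IdempotencyKey: fmt.Sprintf("paused:%s:%d", s.GameID, s.CurrentTurn),
			Payload:        map[string]any{"game_id": s.GameID, "turn": s.CurrentTurn, "reason": s.Status},
		}
	}
	return nil
}

func main() {
	i := onRuntimeSnapshot(40, snapshot{GameID: "g1", CurrentTurn: 41, Status: "running"})
	fmt.Println(i.Kind, i.IdempotencyKey) // game.turn.ready turn-ready:g1:41
}
```

The idempotency keys make the submission safe to retry: re-publishing the same `<game_id>:<turn>` pair is a no-op at the notification layer.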
### 6.7 Cross-references
- Backend ↔ engine wire contract (`pkg/model/{order,report,rest}`):
[ARCHITECTURE.md §9](ARCHITECTURE.md#9-backend--game-engine-communication).
@@ -680,9 +798,10 @@ session invalidations).
### 7.1 Scope
In scope: the server-streaming subscription a client opens against
gateway (Connect / gRPC / gRPC-Web framing all map to the same
endpoint), the bootstrap event, the framing of forwarded events, and
the backend → gateway control channel that produces those events.
Out of scope: the catalog of event kinds — see [Section 8](#8-notifications-and-mail) for the
notification side and [`backend/README.md` §10](../backend/README.md#10-notification-catalog) for the closed list.
@@ -99,11 +99,15 @@ Backend issues an opaque identifier
backend reuses the most recent existing challenge instead of
creating a new one. The client receives the same response shape and
is unaware of the reuse.
- **Otherwise.** Backend creates a new challenge with a resolved
preferred_language (derived from the optional `locale` field in the
JSON body, which takes priority, or, when it is absent or empty,
from the `Accept-Language` header forwarded by gateway, with a
fall-back to the default) and in the same transaction puts the
auth-mail row straight into the outbox. SMTP delivery is
asynchronous; the auth response returns as soon as the challenge
and outbox rows are durably committed. The body field is the
canonical channel because Safari silently drops `Accept-Language`
headers set from JS; for non-Safari clients the header alone is
sufficient.
### 1.3 Challenge confirmation
@@ -138,9 +142,10 @@ Throttle reuse on the send side means
### 1.4 Per-request session lookup
Once the client holds a device-session identifier and a private
key, every authenticated call is a signed request to gateway over
the authenticated edge listener (Connect / gRPC / gRPC-Web on a
single HTTP/h2c port). Gateway is the only component that sees the
request signature; backend trusts gateway's verdict.
Gateway needs the session's public key to verify the signature, so
every authenticated request resolves the device session via
@@ -618,13 +623,18 @@ The wire format of commands, orders, and reports is the engine's own
### 6.2 Backend role: pass-through with authorisation
The signed authenticated-edge pipeline for in-game traffic uses
four message types on the authenticated surface
(`user.games.command`, `user.games.order`, `user.games.order.get`,
`user.games.report`), each with a typed FlatBuffers payload.
Gateway transcodes the FB request into the JSON shape backend
expects, forwards it over plain REST to the corresponding
`/api/v1/user/games/{game_id}/*` endpoint, then transcodes the
JSON response back into FB before signing.
`user.games.order.get` is the read-back companion to
`user.games.order`: the client uses it to restore the local order
draft after a cache loss (fresh install, cleared storage, new
device).
For every in-game endpoint the user surface acts as an authorising
pass-through to the engine container. Backend:
@@ -643,17 +653,40 @@ Backend does not parse the payload contents of commands or
the typed FB shape only to transcode the wire format; the
per-command semantics live in the engine.
### 6.3 Turn window and auto-pause
A running game continuously alternates between a command-accepting
window and a generation phase, driven by the cron expression from
`runtime_records.turn_schedule`. The backend scheduler
(`backend/internal/runtime/scheduler.go`) wraps each engine
`/admin/turn` call between two `runtime_status` flips:
- Before the engine call: `running → generation_in_progress`.
The user-games command/order handlers
(`backend/internal/server/handlers_user_games.go`) check the
per-game runtime record on every request and reject with
HTTP 409 + `code = turn_already_closed` while the runtime is in
`generation_in_progress`. The error body is the standard
`httperr` envelope: `{"error": {"code": "turn_already_closed",
"message": "..."}}`.
- After a successful tick: `generation_in_progress → running`.
The order window opens for the new turn and the next tick
proceeds as usual.
- After a failed tick (`engine_unreachable` /
`generation_failed`): `lobby.OnRuntimeSnapshot` moves the game
`running → paused` and publishes a `game.paused` push event
(see §6.6). The order handlers reject requests with HTTP 409 +
`code = game_paused` until an admin performs a resume.
`force-next-turn` (admin) schedules a one-shot extra tick that
advances the next scheduled turn by one cron step; the same
status-flip and rejection rules apply.
Clients distinguish the two rejections by `code`:
`turn_already_closed` means "wait for the next `game.turn.ready`
and resubmit", `game_paused` means "wait for an admin resume".
The web client implements both scenarios as specified in
`ui/docs/sync-protocol.md`.
### 6.4 Reports
@@ -661,7 +694,79 @@ Per-turn reports are read-only views fetched from the engine
Backend authorises the caller and forwards the request; this path
has no caching and no denormalisation.
The web client renders the report as one section per FBS array
(galaxy summary, votes, player status, my / foreign sciences, my /
foreign ship classes, battles, bombings, approaching groups, my /
foreign / uninhabited / unknown planets, ships in production,
cargo routes, my fleets, my / foreign / unidentified ship groups).
Empty sections get explicit empty-state copy. Section anchors are
exposed in a sticky TOC (a `<select>` on mobile); the scroll
position is preserved across active-view switches via the
SvelteKit `Snapshot` API.
The bombings section is a flat read-only table: one row per event,
with columns for `attacker`, `attack_power`, the `wiped` flag and
the post-strike resource snapshot. The battles section is a list
of links into the Battle Viewer (see [§6.5](#65-battle-viewer)).
### 6.5 Battle viewer
The Battle Viewer is a dedicated view that replaces the map and
shows one battle at a time. Entry points:
- A row in the battles section of Reports (a link that pins the
current turn via `?turn=`).
- A battle marker on the map (a yellow cross through the opposite
corners of the square circumscribing the planet circle; stroke
width grows with the protocol length).
The viewer itself is a logically isolated component consuming a
`BattleReport` in the shape of `pkg/model/report/battle.go`. The
wrapper page (`ui/frontend/src/lib/active-view/battle.svelte`)
fetches the report through the backend route
`GET /api/v1/user/games/{game_id}/battles/{turn}/{battle_id}`,
which proxies the response of the engine endpoint
`GET /api/v1/battle/:turn/:uuid`.
The visual model is radial: the planet at the centre, races on an
outer ring at equal angular intervals, and within each race a
cloud of per-ship-class circles laid out on a Vogel spiral biased
toward the planet (the most numerous group by NumberLeft sits
closest to the planet, the rest spiral out behind it). Tech
variants of one `(race, className)` collapse into a single visual
node `<className>:<numLeft>`; per-tech detail stays in Reports.
Circle radius scales with the ship's FullMass (range `[6, 24] px`,
normalised to the heaviest group in the battle), so heavy ships
visually dominate. Observers (`inBattle: false`) are not drawn.
Eliminated races are removed from the scene and the survivors
re-spread on the next frame. The viewer is pinned to the viewport
height: the scene stretches, the log scrolls internally, and no
page-level scrolling appears.
Each frame is one protocol entry; the shot is drawn as a thin line
from attacker to defender, red on `destroyed`, green otherwise.
Continuous playback: 1x / 2x / 4x (400 / 200 / 100 ms per frame),
plus play/pause, step forward/back, and rewind. The accessibility
text protocol below the scene mirrors the same events line by
line.
Bombings and battles are deliberately not mixed: bombings remain a
static table in Reports; the bombing marker on the map is a thin
ring around the planet (yellow when damaged, red when wiped), and
a click scrolls the corresponding row in Reports into view.
The current report wire shape carries `battle: [{ id, planet,
shots }]` per battle so the map markers can position themselves
without an extra fetch of the full `BattleReport`.
For DEV / e2e the legacy CLI
(`tools/local-dev/legacy-report/cmd/legacy-report-to-json`) emits
an envelope `{version: 1, report, battles}` where `battles`
carries the full `BattleReport`s parsed out of the
`Battle at (#N)` blocks. The synthetic-report loader in the lobby
unwraps the envelope and registers every battle via
`registerSyntheticBattle`, so the Battle Viewer opens any UUID
without a network fetch.
### 6.6 Side effects
A successful turn generation publishes a runtime snapshot into the
lobby module, which updates the denormalised view (current turn,
@@ -670,16 +775,34 @@ runtime status, per-player stats). The engine's "game finished" report drives
the transition ([Section 3.5](#35-отмена-и-завершение)) and
triggers Race Name Directory promotions
([Section 5](#5-реестр-названий-рас)).
Among the `game.*` notification kinds, `game.turn.ready` and
`game.paused` are wired:
- `game.turn.ready`
`lobby.Service.OnRuntimeSnapshot` (`backend/internal/lobby/runtime_hooks.go`)
emits one intent per increase of `current_turn`, addressed to
every active membership of the game, with idempotency key
`turn-ready:<game_id>:<turn>` and JSON payload `{game_id, turn}`.
- `game.paused`: the same hook publishes one intent per transition
into `paused` driven by a runtime snapshot
(`engine_unreachable` / `generation_failed`), addressed to every
active membership of the game, with idempotency key
`paused:<game_id>:<turn>` and JSON payload
`{game_id, turn, reason}`. `reason` carries the runtime status
that triggered the transition so the UI can differentiate the
copy in a future revision.
Both kinds route through the push channel only; email fan-out is
deliberately omitted to avoid per-turn / per-pause spam.
The remaining `game.*` kinds (`game.started`, `game.generation.failed`,
`game.finished`) and `mail.dead_lettered` are reserved without a
producer; adding one is purely additive (register the kind in the
catalog, extend the migration `CHECK` constraint, and call
`notification.Submit` from the appropriate domain module).
### 6.7 Cross-references
- Backend ↔ engine wire contract (`pkg/model/{order,report,rest}`):
[ARCHITECTURE.md §9](ARCHITECTURE.md#9-backend--game-engine-communication).
@@ -697,9 +820,10 @@ the notification catalog explicitly omits them
### 7.1 Scope
In scope: the server-streaming subscription a client opens against
gateway (Connect / gRPC / gRPC-Web frames all route to a single
endpoint), the bootstrap event, the framing of forwarded events,
and the backend → gateway control channel that produces those
events.
Out of scope: the catalog of event kinds; see
[Section 8](#8-уведомления-и-почта) for the notification side and
@@ -0,0 +1 @@
artifacts/
@@ -49,6 +49,7 @@ described below. Endpoints split into two route classes:
| Admin (GM-only) | `POST /api/v1/admin/race/banish` | `Game Master` | Deactivate a race after a permanent platform removal. |
| Player | `PUT /api/v1/command` | `Game Master` (forwarded from `Edge Gateway`) | Execute a batch of player commands. |
| Player | `PUT /api/v1/order` | `Game Master` | Validate and store a batch of player orders. |
| Player | `GET /api/v1/order` | `Game Master` | Fetch the previously stored player order for a turn. |
| Player | `GET /api/v1/report` | `Game Master` | Fetch the per-player turn report. |
| Probe | `GET /healthz` | `Runtime Manager` | Technical liveness probe. |
@@ -8,6 +8,7 @@ import (
"galaxy/calc"
"galaxy/game/internal/controller"
"galaxy/game/internal/model/game"
"galaxy/model/report"
"github.com/stretchr/testify/assert"
)
@@ -184,3 +185,89 @@ func TestProduceBattles(t *testing.T) {
assert.Zero(t, c.ShipGroup(3).Number)
}
}
// TestTransformBattleAggregatesSameShipClass guards against the
// engine-side variant of the duplicate-class bug. Several ShipGroups
// of the same ShipClass.ID can take part in the same battle (arrivals
// from different planets, tech splits, etc.); they must collapse into
// a single BattleReportGroup with summed Number and NumberLeft. The
// pre-fix engine cached the first group's index and silently dropped
// every subsequent group's initial / survivor counts, which manifested
// downstream as more Destroyed shots in the protocol than the
// recorded initial roster could account for.
func TestTransformBattleAggregatesSameShipClass(t *testing.T) {
c, g := newCache()
assert.NoError(t, g.RaceRelation(Race_0.Name, Race_1.Name, game.RelationWar.String()))
assert.NoError(t, g.RaceRelation(Race_1.Name, Race_0.Name, game.RelationWar.String()))
// Two Race_0 groups of the SAME ship class (Race_0_Gunship) plus
// one Race_1 group of Race_1_Gunship — all parked on Planet_0
// (owned by Race_0; the Race_1 group lands there via the Unsafe
// helper that bypasses the ownership check). Group indices land
// at 0, 1, 2 in creation order.
assert.NoError(t, c.CreateShips(Race_0_idx, Race_0_Gunship, R0_Planet_0_num, 10))
assert.NoError(t, c.CreateShips(Race_0_idx, Race_0_Gunship, R0_Planet_0_num, 10))
c.CreateShipsUnsafe_T(Race_1_idx, c.MustShipClass(Race_1_idx, Race_1_Gunship).ID, R0_Planet_0_num, 5)
// Simulate post-battle survivor counts: Group 0 ended the battle
// with 8 ships, Group 1 with 6. The aggregated BattleReportGroup
// must report NumberLeft = 8 + 6 = 14 (not just the last cached
// group's 6 — that's the regression).
c.ShipGroup(0).Number = 8
c.ShipGroup(1).Number = 6
b := &controller.Battle{
Planet: R0_Planet_0_num,
ObserverGroups: map[int]bool{0: true, 1: true, 2: true},
InitialNumbers: map[int]uint{0: 10, 1: 10, 2: 5},
// Protocol must reference every in-battle group at least once
// (otherwise TransformBattle won't register it through the
// `ship()` path). Two shots from Race_1 against each Race_0
// group hit both groupIds.
Protocol: []controller.BattleAction{
{Attacker: 2, Defender: 0, Destroyed: true},
{Attacker: 2, Defender: 1, Destroyed: true},
},
}
r := controller.TransformBattle(c, b)
// Two BattleReportGroup entries total: one merged Race_0_Gunship
// (groups 0 + 1) and one Race_1_Gunship. NOT three.
if got, want := len(r.Ships), 2; got != want {
t.Fatalf("len(r.Ships) = %d, want %d (duplicate ShipClass.ID must merge)", got, want)
}
var gunship0, gunship1 *report.BattleReportGroup
for i := range r.Ships {
grp := r.Ships[i]
switch grp.Race {
case Race_0.Name:
gunship0 = &grp
case Race_1.Name:
gunship1 = &grp
}
}
if gunship0 == nil || gunship1 == nil {
t.Fatalf("missing race entry: race0=%v race1=%v", gunship0, gunship1)
}
if gunship0.ClassName != Race_0_Gunship {
t.Errorf("race0.ClassName = %q, want %q", gunship0.ClassName, Race_0_Gunship)
}
if gunship0.Number != 20 {
t.Errorf("race0.Number = %d, want 20 (10+10)", gunship0.Number)
}
if gunship0.NumberLeft != 14 {
t.Errorf("race0.NumberLeft = %d, want 14 (8+6)", gunship0.NumberLeft)
}
if !gunship0.InBattle {
t.Errorf("race0.InBattle = false, want true (both source groups were in-battle)")
}
if gunship1.Number != 5 || gunship1.NumberLeft != 5 {
t.Errorf("race1 = (Number=%d, NumberLeft=%d), want (5, 5)",
gunship1.Number, gunship1.NumberLeft)
}
}
@@ -18,10 +18,35 @@ func TransformBattle(c *Cache, b *Battle) *report.BattleReport {
cacheShipClass := make(map[uuid.UUID]int)
cacheRaceName := make(map[uuid.UUID]int)
processedGroup := make(map[int]bool)
addShipGroup := func(groupId int, inBattle bool) int {
shipClass := c.ShipGroupShipClass(groupId)
sg := c.ShipGroup(groupId)
// Several ship-groups of the same race/class can take part
// in the same battle (different tech upgrades, arrivals from
// different planets, …). They share a single
// BattleReportGroup entry keyed by ShipClass.ID — when a
// later group lands on a cached class we add its Number and
// NumberLeft into the existing entry instead of dropping
// them, so the protocol's per-class destroy counts reconcile
// with the recorded totals. `processedGroup` guards against
// double-counting a single groupId across multiple shots in
// the protocol — `ship()` runs on every attacker and defender
// reference, the merge must happen once per groupId.
if existing, ok := cacheShipClass[shipClass.ID]; ok {
if !processedGroup[groupId] {
bg := r.Ships[existing]
bg.Number += b.InitialNumbers[groupId]
bg.NumberLeft += sg.Number
if inBattle {
bg.InBattle = true
}
r.Ships[existing] = bg
processedGroup[groupId] = true
}
return existing
}
itemNumber := len(r.Ships)
bg := &report.BattleReportGroup{
Race: c.g.Race[c.RaceIndex(sg.OwnerID)].Name,
@@ -31,23 +56,20 @@ func TransformBattle(c *Cache, b *Battle) *report.BattleReport {
ClassName: shipClass.Name,
LoadType: sg.CargoString(),
LoadQuantity: report.F(sg.Load.F()),
Tech: make(map[string]report.Float, len(sg.Tech)),
}
for t, v := range sg.Tech {
bg.Tech[t.String()] = report.F(v.F())
}
r.Ships[itemNumber] = *bg
cacheShipClass[shipClass.ID] = itemNumber
processedGroup[groupId] = true
return itemNumber
}
ship := func(groupId int) int {
shipClass := c.ShipGroupShipClass(groupId)
if v, ok := cacheShipClass[shipClass.ID]; ok {
return v
} else {
return addShipGroup(groupId, true)
}
}
race := func(groupId int) int {
race := c.ShipGroupOwnerRace(groupId)
@@ -2,6 +2,7 @@ package controller
import (
"errors"
"time"
"galaxy/game/internal/model/game"
@@ -37,6 +38,10 @@ type Repo interface {
// SaveBattle stores a new battle protocol and battle meta data for turn t
SaveBattle(uint, *report.BattleReport, *game.BattleMeta) error
// LoadBattle reads battle's protocol for turn t and battle id.
// Returns false if battle with such id was never stored at turn t
LoadBattle(t uint, id uuid.UUID) (*report.BattleReport, bool, error)
// SaveBombings stores all produced bombings for turn t
SaveBombings(uint, []*game.Bombing) error
@@ -47,10 +52,10 @@ type Repo interface {
LoadReport(uint, uuid.UUID) (*report.Report, error)
// SaveOrder stores order for given turn
SaveOrder(uint, uuid.UUID, *order.UserGamesOrder) error
// LoadOrder loads order for specific turn and player id
LoadOrder(uint, uuid.UUID) (*order.UserGamesOrder, bool, error)
}
type Ctrl interface {
@@ -126,14 +131,30 @@ func ExecuteCommand(configure func(*Param), consumer func(c Ctrl) error) (err er
return ec.executeCommand(func(c *Controller) error { return consumer(c) })
}
func ValidateOrder(configure func(*Param), actor string, cmd ...order.DecodableCommand) (*order.UserGamesOrder, error) {
ec, err := NewRepoController(configure)
if err != nil {
return nil, err
}
return ec.validateOrder(actor, cmd...)
}
func FetchOrder(configure func(*Param), actor string, turn uint) (o *order.UserGamesOrder, ok bool, err error) {
ec, err := NewRepoController(configure)
if err != nil {
return nil, false, err
}
return ec.fetchOrder(actor, turn)
}
func FetchBattle(configure func(*Param), turn uint, ID uuid.UUID) (b *report.BattleReport, exists bool, err error) {
ec, err := NewRepoController(configure)
if err != nil {
return nil, false, err
}
return ec.fetchBattle(turn, ID)
}
func BanishRace(configure func(*Param), actor string) error {
ec, err := NewRepoController(configure)
if err != nil {
@@ -213,8 +234,8 @@ func (ec *RepoController) NewGameController(g *game.Game) *Controller {
}
}
func (ec *RepoController) validateOrder(actor string, cmd ...order.DecodableCommand) (o *order.UserGamesOrder, err error) {
err = ec.executeSafe(func(t uint, c *Controller) error {
id, err := c.RaceID(actor)
if err != nil {
return err
@@ -223,10 +244,41 @@ func (ec *RepoController) validateOrder(actor string, cmd ...order.DecodableComm
if err != nil {
return err
}
o = &order.UserGamesOrder{
GameID: c.Cache.g.ID,
UpdatedAt: time.Now().UTC().UnixMilli(),
Commands: make([]order.DecodableCommand, len(cmd)),
}
copy(o.Commands, cmd)
return ec.Repo.SaveOrder(t, id, o)
})
if err != nil {
return nil, err
}
return
}
func (ec *RepoController) fetchOrder(actor string, turn uint) (o *order.UserGamesOrder, ok bool, err error) {
err = ec.executeSafe(func(t uint, c *Controller) error {
id, err := c.RaceID(actor)
if err != nil {
return err
}
o, ok, err = ec.Repo.LoadOrder(turn, id)
return err
})
if err != nil {
return
}
return
}
func (ec *RepoController) fetchBattle(turn uint, ID uuid.UUID) (b *report.BattleReport, exists bool, err error) {
err = ec.executeSafe(func(t uint, c *Controller) error {
b, exists, err = ec.Repo.LoadBattle(turn, ID)
return err
})
return
}
func (ec *RepoController) loadReport(actor string, turn uint) (r *report.Report, err error) {
@@ -1,8 +1,7 @@
package controller
import (
"galaxy/util"
"galaxy/calc"
e "galaxy/error"
"galaxy/game/internal/model/game"
@@ -25,7 +24,7 @@ func (c *Cache) FleetSend(ri, fi int, planetNumber uint) error {
if !ok {
return e.NewEntityNotExistsError("destination planet #%d", planetNumber)
}
rangeToDestination := calc.ShortDistance(c.g.Map.Width, c.g.Map.Height, p1.X.F(), p1.Y.F(), p2.X.F(), p2.Y.F())
if rangeToDestination > c.g.Race[ri].FlightDistance() {
return e.NewSendUnreachableDestinationError("range=%.03f", rangeToDestination)
}
@@ -114,6 +114,7 @@ func (c *Controller) applyCommand(actor string, cmd order.DecodableCommand) (err
func (c *Controller) applyOrders(t uint) error {
raceOrder := make(map[int][]order.DecodableCommand)
raceOrderUpdated := make(map[int]int64)
commandRace := make(map[string]string)
challenge := make(map[string]*order.CommandShipGroupUnload)
cmdApplied := make(map[string]bool)
@@ -127,6 +128,7 @@ func (c *Controller) applyOrders(t uint) error {
continue
}
raceOrder[ri] = o.Commands
raceOrderUpdated[ri] = o.UpdatedAt
for i := range o.Commands {
commandRace[o.Commands[i].CommandID()] = c.Cache.g.Race[ri].Name
if v, ok := order.AsCommand[*order.CommandShipGroupUnload](o.Commands[i]); ok {
@@ -156,10 +158,12 @@ func (c *Controller) applyOrders(t uint) error {
// any command might fail due to challenged planets colonization
_ = c.applyCommand(commandRace[cmd.CommandID()], cmd)
}
}
for ri := range c.Cache.listRaceActingIdx() {
// re-save the order to persist any changed command outcomes
if err := c.Repo.SaveOrder(t, c.Cache.g.Race[ri].ID, &order.UserGamesOrder{
GameID: c.Cache.g.ID,
UpdatedAt: raceOrderUpdated[ri],
Commands: raceOrder[ri],
}); err != nil {
return err
}
}
@@ -267,21 +267,20 @@ func (c *Cache) putMaterial(pn uint, v float64) {
c.MustPlanet(pn).Mat(v)
}
// ProduceShip returns number of ships with shipMass planet p can produce in one turn
func ProduceShip(p *game.Planet, productionAvailable, shipMass float64) uint {
if productionAvailable <= 0 {
return 0
}
ships := uint(0)
pa := productionAvailable
PRODcost := calc.ShipProductionCost(shipMass)
var MATneed, totalCost float64
for {
MATneed = shipMass - float64(p.Material)
if MATneed < 0 {
MATneed = 0
}
MATfarm = MATneed / float64(p.Resources)
totalCost = PRODcost + MATfarm
totalCost = calc.ShipBuildCost(shipMass, float64(p.Material), float64(p.Resources))
if pa < totalCost {
progress := pa / totalCost
pval := game.F(progress)
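The removed inline arithmetic documents what the new `calc.ShipBuildCost(shipMass, material, resources)` must return: the production cost plus the farming cost of whatever material the planet is still missing. A standalone sketch of that arithmetic, with the production cost passed in as a parameter because `calc.ShipProductionCost` is not shown in this diff:

```go
package main

import "fmt"

// shipBuildCost reproduces the inline math removed from ProduceShip:
// the per-ship cost is its production cost plus the cost of farming
// any missing material, spread over the planet's resources.
// productionCost stands in for calc.ShipProductionCost(shipMass),
// whose formula is not part of this changeset.
func shipBuildCost(shipMass, material, resources, productionCost float64) float64 {
	matNeed := shipMass - material
	if matNeed < 0 {
		matNeed = 0 // enough material on hand; nothing to farm
	}
	return productionCost + matNeed/resources
}

func main() {
	// A 10-mass ship with 4 material on hand and 2 resources: the 6
	// missing units cost 3 on top of a production cost of 5.
	fmt.Println(shipBuildCost(10, 4, 2, 5)) // 8
}
```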
+10 -8
@@ -9,8 +9,6 @@ import (
"galaxy/calc"
mr "galaxy/model/report"
"galaxy/util"
"galaxy/game/internal/model/game"
"github.com/google/uuid"
@@ -39,7 +37,7 @@ func (c *Cache) InitReport(t uint) *mr.Report {
OtherScience: make([]mr.OtherScience, 0, 10),
LocalShipClass: make([]mr.ShipClass, 0, 20),
OtherShipClass: make([]mr.OthersShipClass, 0, 50),
Battle: make([]uuid.UUID, 0, 10),
Battle: make([]mr.BattleSummary, 0, 10),
Bombing: make([]*mr.Bombing, 0, 10),
IncomingGroup: make([]mr.IncomingGroup, 0, 10),
OnPlanetGroupCache: make(map[uint][]int),
@@ -94,7 +92,7 @@ func (c *Cache) InitReport(t uint) *mr.Report {
}
for pi := range c.g.Map.Planet {
p2 := &c.g.Map.Planet[pi]
distance := util.ShortDistance(c.g.Map.Width, c.g.Map.Height, sg.StateInSpace.X.F(), sg.StateInSpace.Y.F(), p2.X.F(), p2.Y.F())
distance := calc.ShortDistance(c.g.Map.Width, c.g.Map.Height, sg.StateInSpace.X.F(), sg.StateInSpace.Y.F(), p2.X.F(), p2.Y.F())
report.InSpaceGroupRangeCache[sgi][p2.Number] = distance
}
} else {
@@ -344,7 +342,11 @@ func (c *Cache) ReportBattle(ri int, rep *mr.Report, br []*mr.BattleReport) {
}
sliceIndexValidate(&rep.Battle, i)
rep.Battle[i] = br[bi].ID
rep.Battle[i] = mr.BattleSummary{
ID: br[bi].ID,
Planet: br[bi].Planet,
Shots: uint(len(br[bi].Protocol)),
}
i++
}
}
@@ -396,7 +398,7 @@ func (c *Cache) ReportIncomingGroup(ri int, rep *mr.Report) {
continue
}
distance := util.ShortDistance(c.g.Map.Width, c.g.Map.Height, p1.X.F(), p1.Y.F(), p2.X.F(), p2.Y.F())
distance := calc.ShortDistance(c.g.Map.Width, c.g.Map.Height, p1.X.F(), p1.Y.F(), p2.X.F(), p2.Y.F())
var speed, mass float64
if sg.FleetID != nil {
speed, mass = c.FleetSpeedAndMass(c.MustFleetIndex(*sg.FleetID))
@@ -597,7 +599,7 @@ func (c *Cache) ReportLocalFleet(ri int, rep *mr.Report) {
if inSpace, ok := fleetState.InSpace(); ok {
rep.LocalFleet[i].Origin = &inSpace.Origin
p2 := c.MustPlanet(rep.LocalFleet[i].Destination)
rangeToDestination := mr.F(util.ShortDistance(c.g.Map.Width, c.g.Map.Height, inSpace.X.F(), inSpace.Y.F(), p2.X.F(), p2.Y.F()))
rangeToDestination := mr.F(calc.ShortDistance(c.g.Map.Width, c.g.Map.Height, inSpace.X.F(), inSpace.Y.F(), p2.X.F(), p2.Y.F()))
rep.LocalFleet[i].Range = &rangeToDestination
}
i++
@@ -726,7 +728,7 @@ func (c *Cache) otherGroup(v *mr.OtherGroup, sg *game.ShipGroup, st *game.ShipTy
if sg.State() == game.StateInSpace {
v.Origin = &sg.StateInSpace.Origin
p2 := c.MustPlanet(v.Destination)
rangeToDestination := mr.F(util.ShortDistance(c.g.Map.Width, c.g.Map.Height, sg.StateInSpace.X.F(), sg.StateInSpace.Y.F(), p2.X.F(), p2.Y.F()))
rangeToDestination := mr.F(calc.ShortDistance(c.g.Map.Width, c.g.Map.Height, sg.StateInSpace.X.F(), sg.StateInSpace.Y.F(), p2.X.F(), p2.Y.F()))
v.Range = &rangeToDestination
}
v.Speed = mr.F(sg.Speed(st))
+3 -3
@@ -8,7 +8,7 @@ import (
"math/rand/v2"
"slices"
"galaxy/util"
"galaxy/calc"
e "galaxy/error"
@@ -28,7 +28,7 @@ func (c *Cache) PlanetRouteSet(ri int, rt game.RouteType, origin, destination ui
if !ok {
return e.NewEntityNotExistsError("destination planet #%d", destination)
}
rangeToDestination := util.ShortDistance(c.g.Map.Width, c.g.Map.Height, p1.X.F(), p1.Y.F(), p2.X.F(), p2.Y.F())
rangeToDestination := calc.ShortDistance(c.g.Map.Width, c.g.Map.Height, p1.X.F(), p1.Y.F(), p2.X.F(), p2.Y.F())
if rangeToDestination > c.g.Race[ri].FlightDistance() {
return e.NewSendUnreachableDestinationError("range=%.03f max=%.03f", rangeToDestination, c.g.Race[ri].FlightDistance())
}
@@ -194,7 +194,7 @@ func (c *Cache) RemoveUnreachableRoutes() {
ri := c.RaceIndex(*p1.Owner)
for rt, destination := range p1.Route {
p2 := c.MustPlanet(destination)
rangeToDestination := util.ShortDistance(c.g.Map.Width, c.g.Map.Height, p1.X.F(), p1.Y.F(), p2.X.F(), p2.Y.F())
rangeToDestination := calc.ShortDistance(c.g.Map.Width, c.g.Map.Height, p1.X.F(), p1.Y.F(), p2.X.F(), p2.Y.F())
if rangeToDestination > c.g.Race[ri].FlightDistance() {
delete(p1.Route, rt)
}
+2 -3
@@ -1,8 +1,7 @@
package controller
import (
"galaxy/util"
"galaxy/calc"
e "galaxy/error"
"galaxy/game/internal/model/game"
@@ -47,7 +46,7 @@ func (c *Cache) shipGroupSend(ri int, groupID uuid.UUID, planetNumber uint) erro
if !ok {
return e.NewEntityNotExistsError("destination planet #%d", planetNumber)
}
rangeToDestination := util.ShortDistance(c.g.Map.Width, c.g.Map.Height, p1.X.F(), p1.Y.F(), p2.X.F(), p2.Y.F())
rangeToDestination := calc.ShortDistance(c.g.Map.Width, c.g.Map.Height, p1.X.F(), p1.Y.F(), p2.X.F(), p2.Y.F())
if rangeToDestination > c.g.Race[ri].FlightDistance() {
return e.NewSendUnreachableDestinationError("range=%.03f", rangeToDestination)
}
+6 -12
@@ -5,6 +5,7 @@ import (
"slices"
"strings"
"galaxy/calc"
e "galaxy/error"
"galaxy/game/internal/model/game"
@@ -156,26 +157,19 @@ func (uc UpgradeCalc) UpgradeMaxShips(resources float64) uint {
return uint(math.Floor(resources / uc.UpgradeCost(1)))
}
func BlockUpgradeCost(blockMass, currentBlockTech, targetBlockTech float64) float64 {
if blockMass == 0 || targetBlockTech <= currentBlockTech {
return 0
}
return (1 - currentBlockTech/targetBlockTech) * 10 * blockMass
}
func GroupUpgradeCost(sg *game.ShipGroup, st game.ShipType, drive, weapons, shields, cargo float64) UpgradeCalc {
uc := &UpgradeCalc{Cost: make(map[game.Tech]float64)}
if drive > 0 {
uc.Cost[game.TechDrive] = BlockUpgradeCost(st.DriveBlockMass(), sg.TechLevel(game.TechDrive).F(), drive)
uc.Cost[game.TechDrive] = calc.BlockUpgradeCost(st.DriveBlockMass(), sg.TechLevel(game.TechDrive).F(), drive)
}
if weapons > 0 {
uc.Cost[game.TechWeapons] = BlockUpgradeCost(st.WeaponsBlockMass(), sg.TechLevel(game.TechWeapons).F(), weapons)
uc.Cost[game.TechWeapons] = calc.BlockUpgradeCost(st.WeaponsBlockMass(), sg.TechLevel(game.TechWeapons).F(), weapons)
}
if shields > 0 {
uc.Cost[game.TechShields] = BlockUpgradeCost(st.ShieldsBlockMass(), sg.TechLevel(game.TechShields).F(), shields)
uc.Cost[game.TechShields] = calc.BlockUpgradeCost(st.ShieldsBlockMass(), sg.TechLevel(game.TechShields).F(), shields)
}
if cargo > 0 {
uc.Cost[game.TechCargo] = BlockUpgradeCost(st.CargoBlockMass(), sg.TechLevel(game.TechCargo).F(), cargo)
uc.Cost[game.TechCargo] = calc.BlockUpgradeCost(st.CargoBlockMass(), sg.TechLevel(game.TechCargo).F(), cargo)
}
return *uc
}
@@ -218,7 +212,7 @@ func UpgradeGroupPreference(sg game.ShipGroup, st game.ShipType, tech game.Tech,
ti = len(su.UpgradeTech) - 1
}
su.UpgradeTech[ti].Level = game.F(v)
su.UpgradeTech[ti].Cost = game.F(BlockUpgradeCost(st.BlockMass(tech), sg.TechLevel(tech).F(), v) * float64(sg.Number))
su.UpgradeTech[ti].Cost = game.F(calc.BlockUpgradeCost(st.BlockMass(tech), sg.TechLevel(tech).F(), v) * float64(sg.Number))
sg.StateUpgrade = &su
return sg
@@ -13,12 +13,6 @@ import (
"github.com/stretchr/testify/assert"
)
func TestBlockUpgradeCost(t *testing.T) {
assert.Equal(t, 00.0, controller.BlockUpgradeCost(1, 1.0, 1.0))
assert.Equal(t, 25.0, controller.BlockUpgradeCost(5, 1.0, 2.0))
assert.Equal(t, 50.0, controller.BlockUpgradeCost(10, 1.0, 2.0))
}
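The removed test vectors make the migrated formula easy to check by hand. A self-contained restatement of the body moved into `calc.BlockUpgradeCost`, verified against the same numbers:

```go
package main

import "fmt"

// blockUpgradeCost mirrors the formula being moved into calc:
// upgrading a block costs 10 × blockMass, scaled by how far the
// current tech level still has to go toward the target level.
func blockUpgradeCost(blockMass, currentBlockTech, targetBlockTech float64) float64 {
	if blockMass == 0 || targetBlockTech <= currentBlockTech {
		return 0
	}
	return (1 - currentBlockTech/targetBlockTech) * 10 * blockMass
}

func main() {
	// Matches TestBlockUpgradeCost: already at target → 0;
	// (1 - 1/2) * 10 * 5 = 25; (1 - 1/2) * 10 * 10 = 50.
	fmt.Println(blockUpgradeCost(1, 1.0, 1.0))  // 0
	fmt.Println(blockUpgradeCost(5, 1.0, 2.0))  // 25
	fmt.Println(blockUpgradeCost(10, 1.0, 2.0)) // 50
}
```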
func TestGroupUpgradeCost(t *testing.T) {
sg := &g.ShipGroup{
Tech: map[g.Tech]g.Float{
+2 -3
@@ -4,8 +4,7 @@ import (
"fmt"
"math/rand"
"galaxy/util"
"galaxy/calc"
"galaxy/game/internal/generator/plotter"
)
@@ -59,7 +58,7 @@ func (m Map) NewCoordinate(deadZoneRaduis float64) (Coordinate, error) {
}
func (m Map) ShortDistance(from, to Coordinate) float64 {
return util.ShortDistance(m.Width, m.Height, from.X, from.Y, to.X, to.Y)
return calc.ShortDistance(m.Width, m.Height, from.X, from.Y, to.X, to.Y)
}
// RandI returns a random float64 value between min and max
+3 -2
@@ -1,6 +1,7 @@
package game
import (
"galaxy/calc"
"strings"
"github.com/google/uuid"
@@ -54,9 +55,9 @@ func (r Race) TechLevel(t Tech) float64 {
}
func (r Race) FlightDistance() float64 {
return r.TechLevel(TechDrive) * 40
return calc.FligthDistance(r.TechLevel(TechDrive))
}
func (r Race) VisibilityDistance() float64 {
return r.TechLevel(TechDrive) * 30
return calc.VisibilityDistance(r.TechLevel(TechDrive))
}
+84 -29
@@ -12,8 +12,8 @@ package repo
import (
"encoding/json"
"errors"
"fmt"
"slices"
"galaxy/model/order"
"galaxy/model/report"
@@ -29,6 +29,8 @@ const (
)
type storedOrder struct {
GameID uuid.UUID `json:"game_id"`
UpdatedAt int64 `json:"updatedAt"`
Commands []json.RawMessage `json:"cmd"`
}
@@ -116,9 +118,25 @@ func loadMeta(s Storage) (*game.GameMeta, error) {
return result, nil
}
func saveMeta(s Storage, t uint, gm *game.GameMeta) error {
func loadTurnMeta(s Storage, turn uint) (*game.GameMeta, error) {
var result *game.GameMeta = new(game.GameMeta)
path := fmt.Sprintf("%s/%s", TurnDir(turn), metaPath)
exist, err := s.Exists(path)
if err != nil {
return nil, NewStorageError(err)
}
if !exist {
return result, nil
}
if err := s.ReadSafe(path, result); err != nil {
return nil, NewStorageError(err)
}
return result, nil
}
func saveMeta(s Storage, turn uint, gm *game.GameMeta) error {
// save turn's meta
path := fmt.Sprintf("%s/%s", TurnDir(t), metaPath)
path := fmt.Sprintf("%s/%s", TurnDir(turn), metaPath)
if err := s.Write(path, gm); err != nil {
return NewStorageError(err)
}
@@ -130,27 +148,43 @@ func saveMeta(s Storage, t uint, gm *game.GameMeta) error {
return nil
}
func (r *repo) SaveBattle(t uint, b *report.BattleReport, m *game.BattleMeta) error {
func (r *repo) LoadBattle(turn uint, id uuid.UUID) (*report.BattleReport, bool, error) {
meta, err := loadTurnMeta(r.s, turn)
if err != nil {
return nil, false, err
}
i := slices.IndexFunc(meta.Battles, func(m game.BattleMeta) bool { return m.BattleID == id })
if i < 0 {
return nil, false, nil
}
result, err := loadBattle(r.s, turn, meta.Battles[i].BattleID)
if err != nil {
return nil, false, err
}
return result, true, nil
}
func (r *repo) SaveBattle(turn uint, b *report.BattleReport, m *game.BattleMeta) error {
meta, err := loadMeta(r.s)
if err != nil {
return err
}
err = saveBattle(r.s, t, b)
err = saveBattle(r.s, turn, b)
if err != nil {
return err
}
meta.Battles = append(meta.Battles, *m)
return saveMeta(r.s, t, meta)
return saveMeta(r.s, turn, meta)
}
func saveBattle(s Storage, t uint, b *report.BattleReport) error {
path := fmt.Sprintf("%s/battle/%s.json", TurnDir(t), b.ID.String())
func saveBattle(s Storage, turn uint, b *report.BattleReport) error {
path := fmt.Sprintf("%s/battle/%s.json", TurnDir(turn), b.ID.String())
exist, err := s.Exists(path)
if err != nil {
return NewStorageError(err)
}
if exist {
return NewStateError(fmt.Sprintf("battle %v for turn %d has already been saved", b.ID, t))
return NewStateError(fmt.Sprintf("battle %v for turn %d has already been saved", b.ID, turn))
}
if err := s.Write(path, b); err != nil {
return NewStorageError(err)
@@ -158,7 +192,23 @@ func saveBattle(s Storage, t uint, b *report.BattleReport) error {
return nil
}
func (r *repo) SaveBombings(t uint, b []*game.Bombing) error {
func loadBattle(s Storage, turn uint, id uuid.UUID) (*report.BattleReport, error) {
path := fmt.Sprintf("%s/battle/%s.json", TurnDir(turn), id.String())
exist, err := s.Exists(path)
if err != nil {
return nil, NewStorageError(err)
}
if !exist {
return nil, NewStateError(fmt.Sprintf("battle %v for turn %d was never saved", id, turn))
}
result := new(report.BattleReport)
if err := s.ReadSafe(path, result); err != nil {
return nil, NewStorageError(err)
}
return result, nil
}
func (r *repo) SaveBombings(turn uint, b []*game.Bombing) error {
meta, err := loadMeta(r.s)
if err != nil {
return err
@@ -166,11 +216,11 @@ func (r *repo) SaveBombings(t uint, b []*game.Bombing) error {
for i := range b {
meta.Bombings = append(meta.Bombings, *b[i])
}
return saveMeta(r.s, t, meta)
return saveMeta(r.s, turn, meta)
}
func (r *repo) SaveReport(t uint, rep *report.Report) error {
return saveReport(r.s, t, rep)
func (r *repo) SaveReport(turn uint, rep *report.Report) error {
return saveReport(r.s, turn, rep)
}
func saveReport(s Storage, t uint, v *report.Report) error {
@@ -181,12 +231,12 @@ func saveReport(s Storage, t uint, v *report.Report) error {
return nil
}
func (r *repo) LoadReport(t uint, id uuid.UUID) (*report.Report, error) {
return loadReport(r.s, t, id)
func (r *repo) LoadReport(turn uint, id uuid.UUID) (*report.Report, error) {
return loadReport(r.s, turn, id)
}
func loadReport(s Storage, t uint, id uuid.UUID) (*report.Report, error) {
path := ReportDir(t, id)
func loadReport(s Storage, turn uint, id uuid.UUID) (*report.Report, error) {
path := ReportDir(turn, id)
result := new(report.Report)
exist, err := s.Exists(path)
if err != nil {
@@ -201,11 +251,11 @@ func loadReport(s Storage, t uint, id uuid.UUID) (*report.Report, error) {
return result, nil
}
func (r *repo) SaveOrder(t uint, id uuid.UUID, o *order.Order) error {
func (r *repo) SaveOrder(t uint, id uuid.UUID, o *order.UserGamesOrder) error {
return saveOrder(r.s, t, id, o)
}
func saveOrder(s Storage, t uint, id uuid.UUID, o *order.Order) error {
func saveOrder(s Storage, t uint, id uuid.UUID, o *order.UserGamesOrder) error {
path := OrderDir(t, id)
if err := s.WriteSafe(path, o); err != nil {
return NewStorageError(err)
@@ -213,11 +263,11 @@ func saveOrder(s Storage, t uint, id uuid.UUID, o *order.Order) error {
return nil
}
func (r *repo) LoadOrder(t uint, id uuid.UUID) (*order.Order, bool, error) {
func (r *repo) LoadOrder(t uint, id uuid.UUID) (*order.UserGamesOrder, bool, error) {
return loadOrder(r.s, t, id)
}
func loadOrder(s Storage, t uint, id uuid.UUID) (*order.Order, bool, error) {
func loadOrder(s Storage, t uint, id uuid.UUID) (*order.UserGamesOrder, bool, error) {
path := OrderDir(t, id)
exist, err := s.Exists(path)
@@ -228,17 +278,22 @@ func loadOrder(s Storage, t uint, id uuid.UUID) (*order.Order, bool, error) {
return nil, false, nil
}
cmd := new(storedOrder)
if err := s.ReadSafe(path, cmd); err != nil {
stored := new(storedOrder)
if err := s.ReadSafe(path, stored); err != nil {
return nil, false, NewStorageError(err)
}
result := &order.Order{Commands: make([]order.DecodableCommand, len(cmd.Commands))}
if len(cmd.Commands) == 0 {
return nil, false, errors.New("no commands were stored")
// An empty stored batch is a valid state — the player either
// cleared their draft or never added a command yet. We round-
// trip it as `(*UserGamesOrder, true, nil)` with an empty
// `Commands` slice so callers can distinguish "no order yet"
// (ok=false) from "order exists but is empty" (ok=true).
result := &order.UserGamesOrder{
GameID: stored.GameID,
UpdatedAt: stored.UpdatedAt,
Commands: make([]order.DecodableCommand, len(stored.Commands)),
}
for i := range cmd.Commands {
command, err := ParseOrder(cmd.Commands[i], nil)
for i := range stored.Commands {
command, err := ParseOrder(stored.Commands[i], nil)
if err != nil {
return nil, false, err
}
+2 -2
@@ -6,10 +6,10 @@ import (
"github.com/google/uuid"
)
func LoadOrder_T(s Storage, t uint, id uuid.UUID) (*order.Order, bool, error) {
func LoadOrder_T(s Storage, t uint, id uuid.UUID) (*order.UserGamesOrder, bool, error) {
return loadOrder(s, t, id)
}
func SaveOrder_T(s Storage, t uint, id uuid.UUID, o *order.Order) error {
func SaveOrder_T(s Storage, t uint, id uuid.UUID, o *order.UserGamesOrder) error {
return saveOrder(s, t, id, o)
}
+54 -3
@@ -3,6 +3,7 @@ package repo_test
import (
"path/filepath"
"testing"
"time"
"galaxy/model/order"
@@ -18,7 +19,11 @@ func TestSaveOrder(t *testing.T) {
s, err := fs.NewFileStorage(root)
assert.NoError(t, err)
id := uuid.New()
o := &order.Order{
gameID := uuid.New()
now := time.Now().UTC().UnixMilli()
o := &order.UserGamesOrder{
GameID: gameID,
UpdatedAt: now,
Commands: []order.DecodableCommand{
&order.CommandRaceVote{
CommandMeta: order.CommandMeta{
@@ -87,17 +92,63 @@ func TestSaveOrder(t *testing.T) {
LoadOrderTest(t, s, root, turn, id, o)
}
func LoadOrderTest(t *testing.T, s repo.Storage, root string, turn uint, id uuid.UUID, expected *order.Order) {
func LoadOrderTest(t *testing.T, s repo.Storage, root string, turn uint, id uuid.UUID, expected *order.UserGamesOrder) {
o, ok, err := repo.LoadOrder_T(s, turn, id)
assert.NoError(t, err)
assert.True(t, ok)
assert.Len(t, o.Commands, 5)
assert.Equal(t, expected.GameID, o.GameID)
assert.Equal(t, expected.UpdatedAt, o.UpdatedAt)
assert.ElementsMatch(t, expected.Commands, o.Commands)
CommandResultTest(t, o)
}
func CommandResultTest(t *testing.T, o *order.Order) {
func TestSaveOrderEmptyRoundTrip(t *testing.T) {
// An empty order is a legal player intent (the user removed
// every command from the draft). The repo round-trips it as an
// `(*UserGamesOrder, true, nil)` triple with `Commands` empty
// so the front-end can distinguish "no order yet" (ok=false)
// from "order exists but is empty" (ok=true).
root := t.ArtifactDir()
s, err := fs.NewFileStorage(root)
assert.NoError(t, err)
id := uuid.New()
gameID := uuid.New()
now := time.Now().UTC().UnixMilli()
o := &order.UserGamesOrder{
GameID: gameID,
UpdatedAt: now,
}
var turn uint = 3
assert.NoError(t, repo.SaveOrder_T(s, turn, id, o))
assert.FileExists(t, filepath.Join(root, repo.OrderDir(turn, id)))
loaded, ok, err := repo.LoadOrder_T(s, turn, id)
assert.NoError(t, err)
assert.True(t, ok, "empty order must surface as ok=true so callers can tell it apart from a missing one")
assert.NotNil(t, loaded)
assert.Equal(t, gameID, loaded.GameID)
assert.Equal(t, now, loaded.UpdatedAt)
assert.Empty(t, loaded.Commands)
}
func TestLoadOrderMissing(t *testing.T) {
// A turn that has never had a PUT must come back as
// `(nil, false, nil)` — the engine's "no stored order" path.
root := t.ArtifactDir()
s, err := fs.NewFileStorage(root)
assert.NoError(t, err)
id := uuid.New()
loaded, ok, err := repo.LoadOrder_T(s, 7, id)
assert.NoError(t, err)
assert.False(t, ok)
assert.Nil(t, loaded)
}
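Together the two tests above pin the repo's three-state contract for orders. A caller-side sketch of that contract against a plain in-memory map — a stand-in for the real `Storage`, so the names here are illustrative only:

```go
package main

import "fmt"

// userGamesOrder is a pared-down stand-in for order.UserGamesOrder.
type userGamesOrder struct {
	Commands []string
}

// loadOrder mimics the repo contract: (nil, false, nil) when nothing
// was ever stored, and (order, true, nil) even when the stored order
// has zero commands — an empty draft is a legal player state.
func loadOrder(store map[string]*userGamesOrder, key string) (*userGamesOrder, bool, error) {
	o, exists := store[key]
	if !exists {
		return nil, false, nil
	}
	return o, true, nil
}

// describe shows how a caller tells the three states apart.
func describe(store map[string]*userGamesOrder, key string) string {
	o, ok, err := loadOrder(store, key)
	switch {
	case err != nil:
		return "error"
	case !ok:
		return "no order yet"
	case len(o.Commands) == 0:
		return "order exists but is empty"
	default:
		return fmt.Sprintf("%d command(s)", len(o.Commands))
	}
}

func main() {
	store := map[string]*userGamesOrder{
		"turn3/raceA": {},                            // cleared draft
		"turn3/raceB": {Commands: []string{"vote"}},  // real order
	}
	fmt.Println(describe(store, "turn3/raceA")) // order exists but is empty
	fmt.Println(describe(store, "turn3/raceB")) // 1 command(s)
	fmt.Println(describe(store, "turn9/raceA")) // no order yet
}
```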
func CommandResultTest(t *testing.T, o *order.UserGamesOrder) {
assert.NotEmpty(t, o.Commands)
for i := range o.Commands {
if v, ok := order.AsCommand[*order.CommandRaceVote](o.Commands[i]); ok {
+152
@@ -0,0 +1,152 @@
package router_test
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"net/http/httptest"
"testing"
"galaxy/model/report"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestGetBattleValidation(t *testing.T) {
validUUID := uuid.New().String()
for _, tc := range []struct {
description string
turn string
battleID string
expectStatus int
}{
{"Negative turn", "-1", validUUID, http.StatusBadRequest},
{"Non-numeric turn", "abc", validUUID, http.StatusBadRequest},
{"Invalid uuid", "0", invalidId, http.StatusBadRequest},
} {
t.Run(tc.description, func(t *testing.T) {
e := &dummyExecutor{}
r := setupRouterExecutor(e)
w := httptest.NewRecorder()
path := fmt.Sprintf("/api/v1/battle/%s/%s", tc.turn, tc.battleID)
req, _ := http.NewRequest(http.MethodGet, path, nil)
r.ServeHTTP(w, req)
assert.Equal(t, tc.expectStatus, w.Code, w.Body)
assert.Equal(t, uuid.Nil, e.FetchBattleID, "FetchBattle must not be called on validation error")
})
}
}
func TestGetBattleFound(t *testing.T) {
id := uuid.New()
raceA := uuid.New()
raceB := uuid.New()
stored := &report.BattleReport{
ID: id,
Planet: 42,
PlanetName: "X-Prime",
Races: map[int]uuid.UUID{
0: raceA,
1: raceB,
},
Ships: map[int]report.BattleReportGroup{
10: {
Race: "Alpha",
ClassName: "Drone",
Tech: map[string]report.Float{"WEAPONS": report.F(1)},
Number: 5,
NumberLeft: 3,
LoadType: "EMP",
LoadQuantity: report.F(0),
InBattle: true,
},
20: {
Race: "Beta",
ClassName: "Spy",
Tech: map[string]report.Float{"SHIELDS": report.F(2)},
Number: 4,
NumberLeft: 0,
LoadType: "EMP",
LoadQuantity: report.F(0),
InBattle: true,
},
},
Protocol: []report.BattleActionReport{
{Attacker: 0, AttackerShipClass: 10, Defender: 1, DefenderShipClass: 20, Destroyed: true},
},
}
e := &dummyExecutor{
FetchBattleResult: stored,
FetchBattleOK: true,
}
r := setupRouterExecutor(e)
w := httptest.NewRecorder()
path := fmt.Sprintf("/api/v1/battle/%d/%s", 7, id.String())
req, _ := http.NewRequest(http.MethodGet, path, nil)
r.ServeHTTP(w, req)
require.Equal(t, http.StatusOK, w.Code, w.Body)
assert.Equal(t, uint(7), e.FetchBattleTurn)
assert.Equal(t, id, e.FetchBattleID)
var got report.BattleReport
require.NoError(t, json.Unmarshal(w.Body.Bytes(), &got))
assert.Equal(t, stored.ID, got.ID)
assert.Equal(t, stored.Planet, got.Planet)
assert.Equal(t, stored.PlanetName, got.PlanetName)
assert.Equal(t, stored.Races, got.Races)
require.Len(t, got.Ships, len(stored.Ships))
assert.Equal(t, stored.Ships[10].ClassName, got.Ships[10].ClassName)
assert.Equal(t, stored.Ships[20].NumberLeft, got.Ships[20].NumberLeft)
require.Len(t, got.Protocol, 1)
assert.Equal(t, stored.Protocol[0], got.Protocol[0])
}
func TestGetBattleTurnZero(t *testing.T) {
id := uuid.New()
e := &dummyExecutor{
FetchBattleResult: &report.BattleReport{ID: id},
FetchBattleOK: true,
}
r := setupRouterExecutor(e)
w := httptest.NewRecorder()
req, _ := http.NewRequest(http.MethodGet, fmt.Sprintf("/api/v1/battle/0/%s", id.String()), nil)
r.ServeHTTP(w, req)
require.Equal(t, http.StatusOK, w.Code, w.Body)
assert.Equal(t, uint(0), e.FetchBattleTurn)
assert.Equal(t, id, e.FetchBattleID)
}
func TestGetBattleNotFound(t *testing.T) {
id := uuid.New()
e := &dummyExecutor{FetchBattleOK: false}
r := setupRouterExecutor(e)
w := httptest.NewRecorder()
req, _ := http.NewRequest(http.MethodGet, fmt.Sprintf("/api/v1/battle/3/%s", id.String()), nil)
r.ServeHTTP(w, req)
assert.Equal(t, http.StatusNotFound, w.Code, w.Body)
assert.Equal(t, uint(3), e.FetchBattleTurn)
assert.Equal(t, id, e.FetchBattleID)
}
func TestGetBattleEngineError(t *testing.T) {
e := &dummyExecutor{FetchBattleErr: errors.New("engine boom")}
r := setupRouterExecutor(e)
w := httptest.NewRecorder()
req, _ := http.NewRequest(http.MethodGet, fmt.Sprintf("/api/v1/battle/3/%s", uuid.NewString()), nil)
r.ServeHTTP(w, req)
assert.Equal(t, http.StatusInternalServerError, w.Code, w.Body)
}
+37
@@ -0,0 +1,37 @@
package handler
import (
"net/http"
"strconv"
"github.com/gin-gonic/gin"
"github.com/google/uuid"
)
func BattleHandler(c *gin.Context, executor CommandExecutor) {
turn := c.Param("turn")
t, err := strconv.Atoi(turn)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
if t < 0 {
c.JSON(http.StatusBadRequest, gin.H{"error": "turn number can't be negative"})
return
}
id := c.Param("uuid")
battleID, err := uuid.Parse(id)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
r, exists, err := executor.FetchBattle(uint(t), battleID)
if errorResponse(c, err) {
return
}
if !exists {
c.JSON(http.StatusNotFound, gin.H{"error": "unknown battle"})
return
}
c.JSON(http.StatusOK, r)
}
+7 -3
@@ -2,7 +2,6 @@ package handler
import (
"encoding/json"
"errors"
"fmt"
"net/http"
@@ -33,7 +32,12 @@ func CommandHandler(c *gin.Context, executor CommandExecutor) {
commands[i] = command
}
if len(commands) == 0 {
errorResponse(c, errors.New("no commands given"))
// `PUT /api/v1/command` is the immediate-execution path —
// running an empty batch is a meaningless no-op, so we
// reject it with `400` rather than rely on the validator.
// `PUT /api/v1/order` keeps an empty list (the player
// cleared their draft) — see `OrderHandler`.
c.JSON(http.StatusBadRequest, gin.H{"error": "no commands given"})
return
}
@@ -41,7 +45,7 @@ func CommandHandler(c *gin.Context, executor CommandExecutor) {
return
}
c.Status(http.StatusNoContent)
c.Status(http.StatusAccepted)
}
func parseCommand(actor string, c json.RawMessage) (Command, error) {
+14 -2
@@ -17,6 +17,7 @@ import (
"github.com/gin-gonic/gin"
"github.com/go-playground/validator/v10"
"github.com/google/uuid"
)
type CommandExecutor interface {
@@ -25,8 +26,11 @@ type CommandExecutor interface {
GameState() (rest.StateResponse, error)
BanishRace(string) error
LoadReport(actor string, turn uint) (*report.Report, error)
// Execute is reserved for future use; any API request for orders should use ValidateOrder
Execute(cmd ...Command) error
ValidateOrder(actor string, cmd ...order.DecodableCommand) error
ValidateOrder(actor string, cmd ...order.DecodableCommand) (*order.UserGamesOrder, error)
FetchOrder(actor string, turn uint) (*order.UserGamesOrder, bool, error)
FetchBattle(turn uint, ID uuid.UUID) (*report.BattleReport, bool, error)
}
type Command func(controller.Ctrl) error
@@ -76,10 +80,18 @@ func (e *executor) Execute(cmd ...Command) error {
})
}
func (e *executor) ValidateOrder(actor string, cmd ...order.DecodableCommand) error {
func (e *executor) ValidateOrder(actor string, cmd ...order.DecodableCommand) (*order.UserGamesOrder, error) {
return controller.ValidateOrder(e.cfg, actor, cmd...)
}
func (e *executor) FetchOrder(actor string, turn uint) (*order.UserGamesOrder, bool, error) {
return controller.FetchOrder(e.cfg, actor, turn)
}
func (e *executor) FetchBattle(turn uint, ID uuid.UUID) (*report.BattleReport, bool, error) {
return controller.FetchBattle(e.cfg, turn, ID)
}
func (e *executor) GenerateGame(races []string) (rest.StateResponse, error) {
s, err := controller.GenerateGame(e.cfg, races)
if err != nil {
+36 -9
@@ -1,7 +1,6 @@
package handler
import (
"errors"
"net/http"
"galaxy/model/order"
@@ -12,12 +11,16 @@ import (
"github.com/gin-gonic/gin"
)
func OrderHandler(c *gin.Context, executor CommandExecutor) {
func PutOrderHandler(c *gin.Context, executor CommandExecutor) {
var cmd rest.Command
if errorResponse(c, c.ShouldBindJSON(&cmd)) {
return
}
// An empty `cmd` array is a valid PUT: the client clears its
// local order draft and expects the server to mirror that
// state. The engine stores the empty batch so the next GET
// returns the same empty list with the new `updatedAt`.
commands := make([]order.DecodableCommand, len(cmd.Commands))
for i := range cmd.Commands {
command, err := repo.ParseOrder(cmd.Commands[i], validateCommand)
@@ -26,14 +29,38 @@ func OrderHandler(c *gin.Context, executor CommandExecutor) {
}
commands[i] = command
}
if len(commands) == 0 {
errorResponse(c, errors.New("no commands given"))
result, err := executor.ValidateOrder(cmd.Actor, commands...)
if errorResponse(c, err) {
return
}
if errorResponse(c, executor.ValidateOrder(cmd.Actor, commands...)) {
return
}
c.Status(http.StatusNoContent)
c.JSON(http.StatusAccepted, result)
}
type orderParam struct {
Player string `form:"player" binding:"required,notblank"`
Turn int `form:"turn" binding:"gte=0"`
}
func GetOrderHandler(c *gin.Context, executor CommandExecutor) {
p := &orderParam{}
// ShouldBindQuery surfaces both validator failures and strconv parse
// errors; both are client-side faults, so 400 is the correct mapping.
if err := c.ShouldBindQuery(p); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
o, ok, err := executor.FetchOrder(p.Player, uint(p.Turn))
if errorResponse(c, err) {
return
}
if !ok {
// no order has been previously stored by the player for this turn
c.Status(http.StatusNoContent)
return
}
c.JSON(http.StatusOK, o)
}
+180 -3
@@ -2,6 +2,7 @@ package router_test
import (
"encoding/json"
"errors"
"net/http"
"net/http/httptest"
"testing"
@@ -9,7 +10,9 @@ import (
"galaxy/model/order"
"galaxy/model/rest"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestOrderRaceQuit(t *testing.T) {
@@ -57,16 +60,25 @@ func TestOrderRaceQuit(t *testing.T) {
assert.Equal(t, http.StatusBadRequest, w.Code, w.Body)
// error: no commands
// empty cmd[] is a valid PUT — the player cleared their draft;
// the engine stores the empty batch and answers with the
// canonical `UserGamesOrder` envelope. ValidateOrder receives a
// zero-length variadic and the response carries no commands.
payload = &rest.Command{
Actor: commandDefaultActor,
}
exec := &dummyExecutor{}
emptyRouter := setupRouterExecutor(exec)
w = httptest.NewRecorder()
req, _ = http.NewRequest(apiCommandMethod, apiOrderPath, asBody(payload))
r.ServeHTTP(w, req)
emptyRouter.ServeHTTP(w, req)
assert.Equal(t, http.StatusBadRequest, w.Code, w.Body)
assert.Equal(t, commandNoErrorsStatus, w.Code, w.Body)
assert.Equal(t, 0, exec.CommandsExecuted)
var stored order.UserGamesOrder
require.NoError(t, json.Unmarshal(w.Body.Bytes(), &stored))
assert.Empty(t, stored.Commands)
}
func TestOrderRaceVote(t *testing.T) {
@@ -940,3 +952,168 @@ func TestMultipleCommandOrder(t *testing.T) {
assert.Equal(t, 2, e.(*dummyExecutor).CommandsExecuted)
}
func TestPutOrderResponseBody(t *testing.T) {
e := &dummyExecutor{
ValidateOrderResult: &order.UserGamesOrder{
GameID: uuid.New(),
UpdatedAt: 1700,
Commands: []order.DecodableCommand{
&order.CommandRaceVote{
CommandMeta: order.CommandMeta{CmdID: id(), CmdType: order.CommandTypeRaceVote},
Acceptor: "Opponent",
},
},
},
}
r := setupRouterExecutor(e)
payload := &rest.Command{
Actor: commandDefaultActor,
Commands: []json.RawMessage{
encodeCommand(&order.CommandRaceVote{
CommandMeta: order.CommandMeta{CmdID: id(), CmdType: order.CommandTypeRaceVote},
Acceptor: "Opponent",
}),
},
}
w := httptest.NewRecorder()
req, _ := http.NewRequest(apiCommandMethod, apiOrderPath, asBody(payload))
r.ServeHTTP(w, req)
require.Equal(t, http.StatusAccepted, w.Code, w.Body)
var got struct {
GameID uuid.UUID `json:"game_id"`
UpdatedAt int64 `json:"updatedAt"`
Commands []json.RawMessage `json:"cmd"`
}
require.NoError(t, json.Unmarshal(w.Body.Bytes(), &got))
assert.Equal(t, e.ValidateOrderResult.GameID, got.GameID)
assert.Equal(t, e.ValidateOrderResult.UpdatedAt, got.UpdatedAt)
assert.Len(t, got.Commands, 1)
}
func TestPutOrderEngineError(t *testing.T) {
e := &dummyExecutor{ValidateOrderErr: errors.New("engine boom")}
r := setupRouterExecutor(e)
payload := &rest.Command{
Actor: commandDefaultActor,
Commands: []json.RawMessage{
encodeCommand(&order.CommandRaceVote{
CommandMeta: order.CommandMeta{CmdID: id(), CmdType: order.CommandTypeRaceVote},
Acceptor: "Opponent",
}),
},
}
w := httptest.NewRecorder()
req, _ := http.NewRequest(apiCommandMethod, apiOrderPath, asBody(payload))
r.ServeHTTP(w, req)
assert.Equal(t, http.StatusInternalServerError, w.Code, w.Body)
}
func TestGetOrderQueryValidation(t *testing.T) {
for _, tc := range []struct {
description string
query string
expectStatus int
}{
{"Missing player param", "", http.StatusBadRequest},
{"Empty player", "?player=", http.StatusBadRequest},
{"Blank player", "?player=%20%20%20", http.StatusBadRequest},
{"Negative turn", "?player=Race_01&turn=-1", http.StatusBadRequest},
{"Non-numeric turn", "?player=Race_01&turn=abc", http.StatusBadRequest},
} {
t.Run(tc.description, func(t *testing.T) {
e := &dummyExecutor{}
r := setupRouterExecutor(e)
w := httptest.NewRecorder()
req, _ := http.NewRequest(http.MethodGet, apiOrderPath+tc.query, nil)
r.ServeHTTP(w, req)
assert.Equal(t, tc.expectStatus, w.Code, w.Body)
assert.Empty(t, e.FetchOrderActor, "FetchOrder must not be called on validation error")
})
}
}
func TestGetOrderFound(t *testing.T) {
stored := &order.UserGamesOrder{
GameID: uuid.New(),
UpdatedAt: 4242,
Commands: []order.DecodableCommand{
&order.CommandRaceVote{
CommandMeta: order.CommandMeta{CmdID: id(), CmdType: order.CommandTypeRaceVote},
Acceptor: "Opponent",
},
},
}
e := &dummyExecutor{
FetchOrderResult: stored,
FetchOrderOK: true,
}
r := setupRouterExecutor(e)
w := httptest.NewRecorder()
req, _ := http.NewRequest(http.MethodGet, apiOrderPath+"?player=Race_01&turn=3", nil)
r.ServeHTTP(w, req)
require.Equal(t, http.StatusOK, w.Code, w.Body)
assert.Equal(t, "Race_01", e.FetchOrderActor)
assert.Equal(t, uint(3), e.FetchOrderTurn)
var got struct {
GameID uuid.UUID `json:"game_id"`
UpdatedAt int64 `json:"updatedAt"`
Commands []json.RawMessage `json:"cmd"`
}
require.NoError(t, json.Unmarshal(w.Body.Bytes(), &got))
assert.Equal(t, stored.GameID, got.GameID)
assert.Equal(t, stored.UpdatedAt, got.UpdatedAt)
assert.Len(t, got.Commands, 1)
}
func TestGetOrderTurnDefaultsToZero(t *testing.T) {
e := &dummyExecutor{
FetchOrderResult: &order.UserGamesOrder{GameID: uuid.New(), UpdatedAt: 1, Commands: []order.DecodableCommand{}},
FetchOrderOK: true,
}
r := setupRouterExecutor(e)
w := httptest.NewRecorder()
req, _ := http.NewRequest(http.MethodGet, apiOrderPath+"?player=Race_01", nil)
r.ServeHTTP(w, req)
require.Equal(t, http.StatusOK, w.Code, w.Body)
assert.Equal(t, uint(0), e.FetchOrderTurn)
}
func TestGetOrderNotFound(t *testing.T) {
e := &dummyExecutor{FetchOrderOK: false}
r := setupRouterExecutor(e)
w := httptest.NewRecorder()
req, _ := http.NewRequest(http.MethodGet, apiOrderPath+"?player=Race_01&turn=2", nil)
r.ServeHTTP(w, req)
assert.Equal(t, http.StatusNoContent, w.Code, w.Body)
assert.Empty(t, w.Body.Bytes(), "204 response must carry no body")
assert.Equal(t, "Race_01", e.FetchOrderActor)
assert.Equal(t, uint(2), e.FetchOrderTurn)
}
func TestGetOrderEngineError(t *testing.T) {
e := &dummyExecutor{FetchOrderErr: errors.New("engine boom")}
r := setupRouterExecutor(e)
w := httptest.NewRecorder()
req, _ := http.NewRequest(http.MethodGet, apiOrderPath+"?player=Race_01&turn=0", nil)
r.ServeHTTP(w, req)
assert.Equal(t, http.StatusInternalServerError, w.Code, w.Body)
}
+5 -1
@@ -74,8 +74,12 @@ func setupRouter(executor handler.CommandExecutor) *gin.Engine {
groupAdmin.POST("/race/banish", func(ctx *gin.Context) { handler.BanishHandler(ctx, executor) })
groupV1.GET("/report", func(ctx *gin.Context) { handler.ReportHandler(ctx, executor) })
+groupV1.PUT("/order", func(ctx *gin.Context) { handler.PutOrderHandler(ctx, executor) })
+groupV1.GET("/order", func(ctx *gin.Context) { handler.GetOrderHandler(ctx, executor) })
+groupV1.GET("/battle/:turn/:uuid", func(ctx *gin.Context) { handler.BattleHandler(ctx, executor) })
+// /command is reserved for future use; any API request for orders should use /order
groupV1.PUT("/command", LimitMiddleware(1), func(ctx *gin.Context) { handler.CommandHandler(ctx, executor) })
-groupV1.PUT("/order", func(ctx *gin.Context) { handler.OrderHandler(ctx, executor) })
return r
}
+45 -3
@@ -16,7 +16,7 @@ import (
)
var (
-commandNoErrorsStatus = http.StatusNoContent
+commandNoErrorsStatus = http.StatusAccepted
commandDefaultActor = "Gorlum"
apiCommandMethod = "PUT"
apiCommandPath = "/api/v1/command"
@@ -32,11 +32,53 @@ func id() string {
type dummyExecutor struct {
CommandsExecuted int
// ValidateOrderResult, when non-nil, is returned from ValidateOrder.
// When nil, ValidateOrder synthesises an order from the received args
// so the response body is non-empty for status assertions.
ValidateOrderResult *order.UserGamesOrder
ValidateOrderErr error
// FetchOrder controls and observes calls to FetchOrder.
FetchOrderActor string
FetchOrderTurn uint
FetchOrderResult *order.UserGamesOrder
FetchOrderOK bool
FetchOrderErr error
// FetchBattle controls and observes calls to FetchBattle.
FetchBattleTurn uint
FetchBattleID uuid.UUID
FetchBattleResult *report.BattleReport
FetchBattleOK bool
FetchBattleErr error
}
-func (e *dummyExecutor) ValidateOrder(actor string, cmd ...order.DecodableCommand) error {
+func (e *dummyExecutor) ValidateOrder(actor string, cmd ...order.DecodableCommand) (*order.UserGamesOrder, error) {
e.CommandsExecuted = len(cmd)
-return nil
+if e.ValidateOrderErr != nil {
+return nil, e.ValidateOrderErr
+}
+if e.ValidateOrderResult != nil {
+return e.ValidateOrderResult, nil
+}
+return &order.UserGamesOrder{
+GameID: uuid.New(),
+UpdatedAt: 1,
+Commands: append([]order.DecodableCommand(nil), cmd...),
+}, nil
}
func (e *dummyExecutor) FetchOrder(actor string, turn uint) (*order.UserGamesOrder, bool, error) {
e.FetchOrderActor = actor
e.FetchOrderTurn = turn
return e.FetchOrderResult, e.FetchOrderOK, e.FetchOrderErr
}
func (e *dummyExecutor) FetchBattle(turn uint, ID uuid.UUID) (*report.BattleReport, bool, error) {
e.FetchBattleTurn = turn
e.FetchBattleID = ID
return e.FetchBattleResult, e.FetchBattleOK, e.FetchBattleErr
}
func (e *dummyExecutor) Execute(command ...handler.Command) error {
+252 -10
@@ -136,8 +136,9 @@ paths:
description: |
Applies one or more game commands for the specified actor. Serialized
to one concurrent execution; requests that cannot acquire the execution
-slot within 100 ms return `504 Gateway Timeout`. Returns `204 No
-Content` on success.
+slot within 100 ms return `504 Gateway Timeout`. Returns `202 Accepted`
+with no body on success. Reserved for future use; player order
+submissions go through `/api/v1/order`.
requestBody:
required: true
content:
@@ -145,8 +146,8 @@ paths:
schema:
$ref: "#/components/schemas/CommandRequest"
responses:
-"204":
-description: All commands applied successfully.
+"202":
+description: All commands accepted.
"400":
$ref: "#/components/responses/ValidationError"
"504":
@@ -161,7 +162,9 @@ paths:
summary: Validate and store a player order without executing it
description: |
Validates and stores the game commands structurally without executing them.
-Returns `204 No Content` if the order is valid and accepted.
+On success returns `202 Accepted` with the stored order, including the
+engine-assigned `updatedAt` timestamp used by clients to detect stale
+submissions.
requestBody:
required: true
content:
@@ -169,12 +172,68 @@ paths:
schema:
$ref: "#/components/schemas/CommandRequest"
responses:
-"204":
-description: Order is structurally valid.
+"202":
+description: Order is structurally valid and stored.
+content:
+application/json:
+schema:
+$ref: "#/components/schemas/UserGamesOrder"
"400":
$ref: "#/components/responses/ValidationError"
"500":
$ref: "#/components/responses/InternalError"
get:
tags:
- PlayerActions
operationId: getOrder
summary: Fetch the stored order for a player and turn
description: |
Returns the order previously stored by `PUT /api/v1/order` for the
specified player and turn. Responds `204 No Content` when no order
has been stored for that turn.
parameters:
- $ref: "#/components/parameters/PlayerParam"
- $ref: "#/components/parameters/TurnParam"
responses:
"200":
description: Stored player order for the requested turn.
content:
application/json:
schema:
$ref: "#/components/schemas/UserGamesOrder"
"204":
description: No order has been stored for this player on this turn.
"400":
$ref: "#/components/responses/ValidationError"
"500":
$ref: "#/components/responses/InternalError"
/api/v1/battle/{turn}/{uuid}:
get:
tags:
- PlayerActions
operationId: getBattle
summary: Fetch a single battle report
description: |
Returns the full `BattleReport` for the supplied `turn` and battle
identifier. The `turn` segment must be a non-negative integer; the
`uuid` segment must be a valid RFC 4122 UUID. Responds with
`404 Not Found` when no battle is stored for the supplied pair.
parameters:
- $ref: "#/components/parameters/BattleTurnParam"
- $ref: "#/components/parameters/BattleIDParam"
responses:
"200":
description: Battle report for the supplied turn and identifier.
content:
application/json:
schema:
$ref: "#/components/schemas/BattleReport"
"400":
$ref: "#/components/responses/ValidationError"
"404":
description: No battle exists for the supplied turn and identifier.
"500":
$ref: "#/components/responses/InternalError"
/api/v1/admin/turn:
put:
tags:
@@ -233,6 +292,22 @@ components:
type: integer
minimum: 0
default: 0
BattleTurnParam:
name: turn
in: path
required: true
description: Turn number the battle was generated on.
schema:
type: integer
minimum: 0
BattleIDParam:
name: uuid
in: path
required: true
description: Battle identifier (RFC 4122 UUID).
schema:
type: string
format: uuid
schemas:
HealthzResponse:
type: object
@@ -362,6 +437,32 @@ components:
minItems: 1
items:
$ref: "#/components/schemas/Command"
UserGamesOrder:
type: object
description: |
Stored player order. Returned by `PUT /api/v1/order` after successful
validation and by `GET /api/v1/order` when fetching a previously stored
batch. `cmd` mirrors the command list submitted by the player; entries
carry per-command result fields (`cmdApplied`, `cmdErrorCode`) once the
order has been processed during turn generation.
required:
- game_id
- updatedAt
- cmd
properties:
game_id:
type: string
format: uuid
description: Identifier of the game this order belongs to.
updatedAt:
type: integer
format: int64
description: Engine-assigned UTC millisecond timestamp of the last write.
cmd:
type: array
description: Commands stored as part of this order, in submission order.
items:
$ref: "#/components/schemas/Command"
Command:
type: object
description: |
@@ -483,10 +584,9 @@ components:
$ref: "#/components/schemas/OtherShipClass"
battle:
type: array
-description: UUIDs of battle reports relevant to this turn.
+description: Battle summaries relevant to this turn.
items:
-type: string
-format: uuid
+$ref: "#/components/schemas/BattleSummary"
bombing:
type: array
description: Bombing events that occurred during this turn.
@@ -730,6 +830,148 @@ components:
wiped:
type: boolean
description: True when all population was eliminated by the bombing.
BattleSummary:
type: object
description: |
Identifies one battle relevant to the report recipient. Used by
clients to render a battle marker on the map without fetching
the full BattleReport. `planet` locates the marker; `shots`
scales the marker stroke with the battle length.
required:
- id
- planet
- shots
properties:
id:
type: string
format: uuid
description: Battle identifier; fetch the full report via `/api/v1/battle/{turn}/{uuid}`.
planet:
type: integer
minimum: 0
description: Planet number the battle took place on.
shots:
type: integer
minimum: 0
description: Number of shots exchanged during the battle.
BattleReport:
type: object
description: |
Full battle report. `races` and `ships` are JSON objects whose
keys are stringified integers used to cross-reference entries
from `protocol`: a `BattleActionReport` carries integer indices
into both maps. The serialised key is a string because JSON
object keys are always strings.
required:
- id
- planet
- planetName
- races
- ships
- protocol
properties:
id:
type: string
format: uuid
description: Battle identifier.
planet:
type: integer
minimum: 0
description: Planet number the battle took place on.
planetName:
type: string
description: Planet name at battle start.
races:
type: object
description: |
Participating races keyed by the integer index used in
`protocol.a` / `protocol.d`. Values are race identifiers.
additionalProperties:
type: string
format: uuid
ships:
type: object
description: |
Participating ship groups keyed by the integer index used
in `protocol.sa` / `protocol.sd`.
additionalProperties:
$ref: "#/components/schemas/BattleReportGroup"
protocol:
type: array
description: Ordered list of shots exchanged during the battle.
items:
$ref: "#/components/schemas/BattleActionReport"
BattleReportGroup:
type: object
description: One ship group participating in the battle.
required:
- race
- className
- tech
- num
- numLeft
- loadType
- loadQuantity
- inBattle
properties:
race:
type: string
description: Race name of the group owner.
className:
type: string
description: Ship class name; resolvable through `LocalShipClass` or `OtherShipClass`.
tech:
type: object
description: Technology levels keyed by tech type name.
additionalProperties:
type: number
num:
type: integer
minimum: 0
description: Initial number of ships in this group.
numLeft:
type: integer
minimum: 0
description: Number of ships remaining at the end of the battle.
loadType:
type: string
description: Type of cargo loaded.
loadQuantity:
type: number
description: Quantity of cargo loaded.
inBattle:
type: boolean
description: |
True when the group actually fights. A false group is present
in peace state: it observes the battle and never fires or takes damage.
BattleActionReport:
type: object
description: |
One shot in the battle. Attacker and defender indices reference
`BattleReport.races`; ship-class indices reference
`BattleReport.ships`.
required:
- a
- sa
- d
- sd
- x
properties:
a:
type: integer
description: Index into `BattleReport.races` for the attacker.
sa:
type: integer
description: Index into `BattleReport.ships` for the attacker's group.
d:
type: integer
description: Index into `BattleReport.races` for the defender.
sd:
type: integer
description: Index into `BattleReport.ships` for the defender's group.
x:
type: boolean
description: True when the defender ship was destroyed by this shot.
IncomingGroup:
type: object
description: An identified ship group inbound toward a planet of this race.
+166
@@ -58,6 +58,20 @@ func TestGameOpenAPISpecFreezesResponseSchemas(t *testing.T) {
status: http.StatusOK,
wantRef: "#/components/schemas/StateResponse",
},
{
name: "put order",
path: "/api/v1/order",
method: http.MethodPut,
status: http.StatusAccepted,
wantRef: "#/components/schemas/UserGamesOrder",
},
{
name: "get order",
path: "/api/v1/order",
method: http.MethodGet,
status: http.StatusOK,
wantRef: "#/components/schemas/UserGamesOrder",
},
{
name: "healthz probe",
path: "/healthz",
@@ -65,6 +79,13 @@ func TestGameOpenAPISpecFreezesResponseSchemas(t *testing.T) {
status: http.StatusOK,
wantRef: "#/components/schemas/HealthzResponse",
},
{
name: "get battle",
path: "/api/v1/battle/{turn}/{uuid}",
method: http.MethodGet,
status: http.StatusOK,
wantRef: "#/components/schemas/BattleReport",
},
}
for _, tt := range tests {
@@ -77,6 +98,86 @@ func TestGameOpenAPISpecFreezesResponseSchemas(t *testing.T) {
}
}
func TestGameOpenAPISpecFreezesEmptyResponses(t *testing.T) {
t.Parallel()
doc := loadOpenAPISpec(t)
tests := []struct {
name string
path string
method string
status int
}{
{
name: "command accepted",
path: "/api/v1/command",
method: http.MethodPut,
status: http.StatusAccepted,
},
{
name: "get order no content",
path: "/api/v1/order",
method: http.MethodGet,
status: http.StatusNoContent,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
operation := getOpenAPIOperation(t, doc, tt.path, tt.method)
require.NotNil(t, operation.Responses, "operation must declare responses")
response := operation.Responses.Status(tt.status)
require.NotNil(t, response, "operation must declare %d response", tt.status)
require.NotNil(t, response.Value, "%d response must have a value", tt.status)
require.Empty(t, response.Value.Content, "%d response must carry no body", tt.status)
})
}
}
func TestGameOpenAPISpecFreezesUserGamesOrder(t *testing.T) {
t.Parallel()
doc := loadOpenAPISpec(t)
schema := componentSchemaRef(t, doc, "UserGamesOrder")
assertRequiredFields(t, schema, "game_id", "updatedAt", "cmd")
gameIDSchema := schema.Value.Properties["game_id"]
require.NotNil(t, gameIDSchema, "UserGamesOrder.game_id schema must exist")
require.Equal(t, "uuid", gameIDSchema.Value.Format, "UserGamesOrder.game_id format must be uuid")
updatedAtSchema := schema.Value.Properties["updatedAt"]
require.NotNil(t, updatedAtSchema, "UserGamesOrder.updatedAt schema must exist")
require.True(t, updatedAtSchema.Value.Type.Is("integer"), "UserGamesOrder.updatedAt must be integer")
require.Equal(t, "int64", updatedAtSchema.Value.Format, "UserGamesOrder.updatedAt format must be int64")
cmdSchema := schema.Value.Properties["cmd"]
require.NotNil(t, cmdSchema, "UserGamesOrder.cmd schema must exist")
require.True(t, cmdSchema.Value.Type.Is("array"), "UserGamesOrder.cmd must be array")
require.NotNil(t, cmdSchema.Value.Items, "UserGamesOrder.cmd items must be defined")
assertSchemaRef(t, cmdSchema.Value.Items, "#/components/schemas/Command", "UserGamesOrder.cmd items schema")
}
func TestGameOpenAPISpecFreezesGetOrderOperation(t *testing.T) {
t.Parallel()
doc := loadOpenAPISpec(t)
operation := getOpenAPIOperation(t, doc, "/api/v1/order", http.MethodGet)
require.Equal(t, "getOrder", operation.OperationID, "GET /api/v1/order operation id")
paramRefs := make(map[string]bool)
for _, p := range operation.Parameters {
require.NotNil(t, p.Value, "parameter must have value")
paramRefs[p.Ref] = true
}
require.True(t, paramRefs["#/components/parameters/PlayerParam"], "GET /api/v1/order must reference PlayerParam")
require.True(t, paramRefs["#/components/parameters/TurnParam"], "GET /api/v1/order must reference TurnParam")
}
func TestGameOpenAPISpecFreezesInitRequest(t *testing.T) {
t.Parallel()
@@ -177,6 +278,71 @@ func TestGameOpenAPISpecFreezesCommandRequest(t *testing.T) {
require.Equal(t, uint64(1), cmdSchema.Value.MinItems, "CommandRequest.cmd minItems must be 1")
}
func TestGameOpenAPISpecFreezesGetBattleOperation(t *testing.T) {
t.Parallel()
doc := loadOpenAPISpec(t)
operation := getOpenAPIOperation(t, doc, "/api/v1/battle/{turn}/{uuid}", http.MethodGet)
require.Equal(t, "getBattle", operation.OperationID, "GET /api/v1/battle/{turn}/{uuid} operation id")
paramRefs := make(map[string]bool)
for _, p := range operation.Parameters {
require.NotNil(t, p.Value, "parameter must have value")
paramRefs[p.Ref] = true
}
require.True(t, paramRefs["#/components/parameters/BattleTurnParam"], "GET /api/v1/battle/{turn}/{uuid} must reference BattleTurnParam")
require.True(t, paramRefs["#/components/parameters/BattleIDParam"], "GET /api/v1/battle/{turn}/{uuid} must reference BattleIDParam")
require.NotNil(t, operation.Responses, "operation must declare responses")
notFound := operation.Responses.Status(http.StatusNotFound)
require.NotNil(t, notFound, "operation must declare 404 response")
require.NotNil(t, notFound.Value, "404 response must have a value")
}
func TestGameOpenAPISpecFreezesBattleReport(t *testing.T) {
t.Parallel()
doc := loadOpenAPISpec(t)
reportSchema := componentSchemaRef(t, doc, "BattleReport")
assertRequiredFields(t, reportSchema, "id", "planet", "planetName", "races", "ships", "protocol")
groupSchema := componentSchemaRef(t, doc, "BattleReportGroup")
assertRequiredFields(t, groupSchema, "race", "className", "tech", "num", "numLeft", "loadType", "loadQuantity", "inBattle")
actionSchema := componentSchemaRef(t, doc, "BattleActionReport")
assertRequiredFields(t, actionSchema, "a", "sa", "d", "sd", "x")
protocolSchema := reportSchema.Value.Properties["protocol"]
require.NotNil(t, protocolSchema, "BattleReport.protocol schema must exist")
require.True(t, protocolSchema.Value.Type.Is("array"), "BattleReport.protocol must be array")
require.NotNil(t, protocolSchema.Value.Items, "BattleReport.protocol items must be defined")
assertSchemaRef(t, protocolSchema.Value.Items, "#/components/schemas/BattleActionReport", "BattleReport.protocol items schema")
shipsSchema := reportSchema.Value.Properties["ships"]
require.NotNil(t, shipsSchema, "BattleReport.ships schema must exist")
require.True(t, shipsSchema.Value.Type.Is("object"), "BattleReport.ships must be object")
require.NotNil(t, shipsSchema.Value.AdditionalProperties.Schema, "BattleReport.ships additionalProperties must be a schema")
assertSchemaRef(t, shipsSchema.Value.AdditionalProperties.Schema, "#/components/schemas/BattleReportGroup", "BattleReport.ships additionalProperties schema")
}
func TestGameOpenAPISpecFreezesBattleSummary(t *testing.T) {
t.Parallel()
doc := loadOpenAPISpec(t)
summary := componentSchemaRef(t, doc, "BattleSummary")
assertRequiredFields(t, summary, "id", "planet", "shots")
report := componentSchemaRef(t, doc, "Report")
battle := report.Value.Properties["battle"]
require.NotNil(t, battle, "Report.battle schema must exist")
require.True(t, battle.Value.Type.Is("array"), "Report.battle must be array")
require.NotNil(t, battle.Value.Items, "Report.battle items must be defined")
assertSchemaRef(t, battle.Value.Items, "#/components/schemas/BattleSummary", "Report.battle items schema")
}
func TestGameOpenAPISpecHealthzStatusEnum(t *testing.T) {
t.Parallel()
+1448
File diff suppressed because it is too large.
+6 -3
@@ -1,9 +1,9 @@
# syntax=docker/dockerfile:1.7
# Build context is the workspace root (galaxy/), not the gateway/
-# subdirectory, because the gateway module pulls galaxy/{backend,model,
-# redisconn,transcoder} through the go.work replace directives. Build
-# with:
+# subdirectory, because the gateway module pulls
+# galaxy/{backend,core,model,redisconn,transcoder} through the
+# go.work replace directives. Build with:
#
# docker build -t galaxy/gateway:integration -f gateway/Dockerfile .
@@ -23,6 +23,7 @@ COPY pkg/redisconn/ ./pkg/redisconn/
COPY pkg/schema/ ./pkg/schema/
COPY pkg/transcoder/ ./pkg/transcoder/
COPY pkg/util/ ./pkg/util/
+COPY ui/core/ ./ui/core/
COPY backend/ ./backend/
COPY gateway/ ./gateway/
@@ -41,6 +42,7 @@ use (
./pkg/schema
./pkg/transcoder
./pkg/util
+./ui/core
)
replace (
@@ -53,6 +55,7 @@ replace (
galaxy/schema v0.0.0 => ./pkg/schema
galaxy/transcoder v0.0.0 => ./pkg/transcoder
galaxy/util v0.0.0 => ./pkg/util
+galaxy/core v0.0.0 => ./ui/core
)
EOF
-552
@@ -1,552 +0,0 @@
# Edge Gateway Implementation Plan
This plan has already been implemented and stays here for historical reasons.
It should NOT be treated as a source of truth for service functionality.
---
## Summary
This plan breaks implementation into small, reviewable phases.
Each phase has a single primary goal, clear deliverables, explicit dependencies,
acceptance criteria, and focused tests.
The intended v1 architecture is:
- unauthenticated public ingress over REST/JSON;
- authenticated ingress over gRPC on HTTP/2;
- FlatBuffers payloads for authenticated business commands;
- protobuf-based gRPC control envelopes;
- authenticated server-streaming push through gRPC;
- separate public traffic classes and isolated anti-abuse counters.
## Assumptions and Defaults
- `message_type` is the stable downstream routing key.
- `protocol_version` covers transport and envelope compatibility, not business
payload schema compatibility.
- FlatBuffers are used for business payload bytes only.
- Phase 3 public auth uses a challenge-token REST flow:
`send-email-code(email) -> challenge_id` and
`confirm-email-code(challenge_id, code, client_public_key) -> device_session_id`.
- Phase 3 uses a consumer-side `AuthServiceClient` inside `gateway`; the
default process wiring keeps public auth routes mounted and returns
`503 service_unavailable` until a concrete upstream adapter is added.
- Browser bootstrap and asset traffic are within gateway scope, even when backed
by a pluggable proxy or handler.
- Long-polling is out of scope for v1.
## ~~Phase 1.~~ Module Skeleton
Status: implemented.
Goal: create the runnable gateway process skeleton.
Artifacts:
- `cmd/gateway`
- `internal/app`
- base configuration types
- startup and shutdown wiring
Dependencies: none.
Acceptance criteria:
- the process starts with config;
- the process shuts down cleanly on signal;
- lifecycle wiring is testable.
Targeted tests:
- startup with valid config;
- shutdown without leaked goroutines.
## ~~Phase 2.~~ Public REST Server
Status: implemented.
Goal: add the unauthenticated HTTP server shell.
Artifacts:
- public REST listener
- `GET /healthz`
- `GET /readyz`
- base error serialization
- request classification hook
Dependencies: Phase 1.
Acceptance criteria:
- health endpoints respond deterministically;
- public requests are classified at least into `public_auth` and `browser_*`.
Targeted tests:
- health endpoint responses;
- request classification smoke tests.
## ~~Phase 3.~~ Public Auth REST Handlers
Status: implemented.
Goal: expose unauthenticated auth commands through REST/JSON.
Artifacts:
- `POST /api/v1/public/auth/send-email-code`
- `POST /api/v1/public/auth/confirm-email-code`
- request and response DTOs
- adapter calls into `AuthServiceClient`
Dependencies: Phase 2.
Acceptance criteria:
- no session authentication is required for these routes;
- handlers delegate only through the auth service adapter.
Targeted tests:
- success and validation errors for both routes;
- no session lookup on public auth paths.
## ~~Phase 4.~~ Public Traffic Classification
Status: implemented.
Goal: isolate public traffic into stable anti-abuse classes.
Artifacts:
- `PublicTrafficClassifier`
- classes `public_auth`, `browser_bootstrap`, `browser_asset`, `public_misc`
- isolated rate-limit bucket keys
Dependencies: Phase 2.
Acceptance criteria:
- browser traffic does not share buckets with public auth;
- auth counters remain unaffected by asset bursts.
Targeted tests:
- per-class routing tests;
- bucket isolation tests.
## ~~Phase 5.~~ Public REST Anti-Abuse
Status: implemented.
Goal: add coarse protection to unauthenticated REST traffic.
Artifacts:
- body size limits
- method allow-lists
- malformed request counters
- per-class rate-limit thresholds
Dependencies: Phase 4.
Acceptance criteria:
- first-load browser bursts are not marked hostile because of burst pattern
alone;
- malformed or oversized requests are rejected predictably.
Targeted tests:
- bootstrap burst stays outside auth abuse counters;
- invalid methods and oversized bodies are rejected.
## ~~Phase 6.~~ gRPC Server and Public Contracts
Status: implemented.
Goal: bring up authenticated transport over gRPC and HTTP/2.
Artifacts:
- gRPC listener
- protobuf service definitions
- `ExecuteCommand`
- `SubscribeEvents`
Dependencies: Phase 1.
Acceptance criteria:
- unary and server-streaming RPCs are reachable;
- the server runs only over HTTP/2.
Targeted tests:
- unary transport smoke test;
- stream transport smoke test.
## ~~Phase 7.~~ Envelope Parsing and Protocol Gate
Status: implemented.
Goal: validate the gRPC control envelope before security checks continue.
Artifacts:
- envelope parser
- required-field validation
- protocol version gate
Dependencies: Phase 6.
Acceptance criteria:
- unsupported or malformed envelopes are rejected before routing.
Targeted tests:
- missing field rejection;
- unsupported `protocol_version` rejection.
## ~~Phase 8.~~ Session Cache Lookup
Status: implemented.
Goal: resolve authenticated identity from cache.
Artifacts:
- `SessionCache`
- session lookup pipeline
- revoked versus active session handling
Dependencies: Phase 7.
Acceptance criteria:
- unknown and revoked sessions are blocked before signature verification.
Targeted tests:
- cache hit with active session;
- cache miss reject;
- revoked session reject.
## ~~Phase 9.~~ Payload Hash and Signing Input
Status: implemented.
Goal: verify payload integrity before signature verification.
Artifacts:
- `payload_hash` verification
- canonical signing input builder
Dependencies: Phase 8.
Acceptance criteria:
- changing payload bytes or envelope fields breaks the signing input.
Targeted tests:
- payload hash mismatch reject;
- canonical bytes differ when signed fields change.
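The canonical signing input named above might be built like this (the field set, ordering, and newline framing here are illustrative assumptions — the plan does not fix an encoding):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
)

// signingInput builds a canonical byte string over envelope fields the
// plan lists as signed. Newline framing is acceptable here only because
// the illustrative fields cannot contain newlines; a real codec should
// length-prefix each field to rule out ambiguity.
func signingInput(messageType, requestID string, payload []byte) []byte {
	payloadHash := sha256.Sum256(payload)
	canon := messageType + "\n" + requestID + "\n" + hex.EncodeToString(payloadHash[:])
	return []byte(canon)
}
```

Because the payload enters only through its hash, changing `payload_bytes`, `payload_hash`, `message_type`, or `request_id` each changes the signing input, which is the property the targeted tests above assert.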
## ~~Phase 10.~~ Client Signature Verification
Status: implemented.
Goal: authenticate the request origin using the session public key.
Artifacts:
- signature verifier
- deterministic auth reject mapping
Dependencies: Phase 9.
Acceptance criteria:
- wrong key and invalid signature produce stable rejects.
Targeted tests:
- success case with valid signature;
- bad signature reject;
- wrong-key reject.
## ~~Phase 11.~~ Freshness and Anti-Replay
Status: implemented.
Goal: enforce transport freshness and replay protection.
Artifacts:
- timestamp freshness window
- `ReplayStore`
- replay reservation and rejection logic
Dependencies: Phase 10.
Acceptance criteria:
- stale requests and duplicate `request_id` values are rejected.
Targeted tests:
- stale timestamp reject;
- replay reject for same session and request ID;
- distinct sessions do not collide.
## ~~Phase 12.~~ Authenticated Rate Limits and Policy
Status: implemented.
Goal: apply edge policy after transport authenticity is established.
Artifacts:
- rate-limit keys for IP, session, user, and message class
- authenticated policy evaluation hook
Dependencies: Phase 11.
Acceptance criteria:
- authenticated buckets are independent from public REST buckets.
Targeted tests:
- per-dimension throttling;
- bucket isolation from public traffic.
## ~~Phase 13.~~ Internal Authenticated Command and Routing
Status: implemented.
Note: delivered together with Phase 14 signed unary responses.
Goal: forward only verified context to downstream services.
Artifacts:
- `AuthenticatedCommand`
- `DownstreamRouter`
- `DownstreamClient`
Dependencies: Phase 12.
Acceptance criteria:
- downstream services receive verified context only;
- raw transport details do not leak as authoritative input.
Targeted tests:
- route selection by `message_type`;
- downstream receives the expected authenticated context.
## ~~Phase 14.~~ Signed Unary Responses
Status: implemented as part of Phase 13 delivery.
Goal: return verifiable server responses to authenticated clients.
Artifacts:
- response envelope builder
- payload hash generation
- `ResponseSigner`
Dependencies: Phase 13.
Acceptance criteria:
- unary responses always carry the original `request_id`, `payload_hash`, and
server signature.
Targeted tests:
- response correlation test;
- server signature generation test.
## ~~Phase 15.~~ Session Update and Revocation Events
Status: implemented.
Goal: keep gateway session state current without synchronous hot-path lookups.
Artifacts:
- `EventSubscriber`
- session update handlers
- session revoke handlers
Dependencies: Phase 8.
Acceptance criteria:
- session updates change gateway behavior without per-request sync calls to the
auth service.
Targeted tests:
- cache update from event;
- revocation event invalidates cached session.
## ~~Phase 16.~~ Authenticated Push Stream
Status: implemented.
Goal: open a verified server-streaming channel for client-facing delivery.
Artifacts:
- `SubscribeEvents` handler
- stream binding to `user_id` and `device_session_id`
- initial server time event
Dependencies: Phase 15.
Acceptance criteria:
- the stream opens only after the full auth pipeline succeeds.
Targeted tests:
- authorized stream open;
- rejected stream open for invalid session;
- first event contains server time.
## ~~Phase 17.~~ Event Fan-Out
Status: implemented.
Goal: deliver client-facing events from internal pub/sub to active streams.
Artifacts:
- `PushHub`
- event fan-out logic
- user and session targeting rules
Dependencies: Phase 16.
Acceptance criteria:
- events are delivered to the correct active streams only.
Targeted tests:
- single-session delivery;
- multi-device delivery for one user;
- unrelated sessions do not receive the event.
## ~~Phase 18.~~ Revocation-Driven Stream Teardown
Status: implemented.
Goal: terminate active delivery channels when a session is revoked.
Artifacts:
- stream teardown on revoke
- connection cleanup logic
Dependencies: Phase 17.
Acceptance criteria:
- revocation blocks new unary requests and closes active streams for the same
session.
Targeted tests:
- revoke closes active stream;
- revoked session cannot reopen the stream.
## ~~Phase 19.~~ Observability and Shutdown Hardening
Status: implemented.
Note: delivered with `zap` structured logging, OpenTelemetry tracing and
metrics, the optional private admin `/metrics` listener, timeout budgets, and
shutdown-driven push-stream teardown.
Goal: make the service operable in production.
Artifacts:
- structured logs
- metrics
- trace propagation
- timeout budgets
- graceful shutdown for unary and streaming traffic
Dependencies: Phase 18.
Acceptance criteria:
- shutdown is deterministic;
- logs and metrics expose stable edge outcomes without leaking secrets.
Targeted tests:
- shutdown closes listeners and active streams;
- secret and signature values are not logged.
## ~~Phase 20.~~ Acceptance Pass
Status: implemented.
Note: acceptance pass reconciled README/OpenAPI/root architecture
documentation, fixed the documented public-auth projected-error contract, and
added focused regression coverage including OpenAPI validation.
Goal: reconcile implementation, documentation, and regression coverage.
Artifacts:
- updated README and PLAN
- final protocol and interface review
- focused regression test run
Dependencies: Phases 1 through 19.
Acceptance criteria:
- implementation matches documented contracts and ordering guarantees;
- docs describe the actual gateway behavior.
Targeted tests:
- run focused package tests for gateway packages;
- rerun cross-cutting regression scenarios.
## Cross-Cutting Regression Scenarios
- `send_email_code` and `confirm_email_code` are available without session auth
and are still limited by public auth policy.
- Public browser bootstrap and asset bursts do not increase auth abuse counters
and are not rejected as hostile because of intensity alone.
- Any gRPC command without a valid session is rejected before routing.
- Unknown and revoked sessions are handled predictably and consistently where
policy requires identical behavior.
- Signature verification fails when `payload_bytes`, `payload_hash`,
`message_type`, `request_id`, or the signing key changes.
- `payload_hash` is verified before downstream execution.
- Requests outside the freshness window are rejected.
- Reused `request_id` values are rejected within the session replay window.
- Public REST and authenticated gRPC traffic use independent buckets and
independent abuse telemetry.
- Downstream services receive `AuthenticatedCommand`, not raw REST or gRPC
transport requests.
- Unary responses preserve `request_id` correlation and are server-signed.
- Streaming connections open only after the auth pipeline and close on revoke.
- Session cache updates from events change gateway behavior without synchronous
auth-service lookups per request.
- Graceful shutdown terminates unary and streaming traffic cleanly.
+48 -16
@@ -87,7 +87,15 @@ The gateway exposes two external transport classes.
| Transport | Audience | Authentication | Payload format | Primary use |
| --- | --- | --- | --- | --- |
| REST/JSON | Public, unauthenticated traffic | No device session auth | JSON | Health checks, public auth commands, and browser/bootstrap traffic |
| gRPC over HTTP/2 | Authenticated clients only | Required | FlatBuffers payload inside protobuf control envelope | Verified commands and push delivery |
| Connect / gRPC / gRPC-Web over HTTP/2 (h2c) | Authenticated clients only | Required | FlatBuffers payload inside protobuf control envelope | Verified commands and push delivery |
The authenticated edge listener is built on
[`connectrpc.com/connect`](https://connectrpc.com/) and natively serves
the Connect, gRPC, and gRPC-Web protocols on a single HTTP/2 cleartext
(`h2c`) port. Browser clients use `@connectrpc/connect-web`; native
clients can use either Connect or raw gRPC framing against the same
listener. Production TLS termination happens upstream of the gateway,
matching the previous gRPC-only deployment posture.
### Public REST Surface
@@ -181,16 +189,21 @@ The endpoint exposes metrics in the Prometheus text exposition format described
in the official Prometheus documentation:
<https://prometheus.io/docs/instrumenting/exposition_formats/>.
### Authenticated gRPC Surface
### Authenticated Edge Surface
All authenticated client requests use HTTP/2 and gRPC.
The listener address is configured by `GATEWAY_AUTHENTICATED_GRPC_ADDR`.
Inbound authenticated gRPC connection setup is bounded by
All authenticated client requests use HTTP/2 cleartext (`h2c`) and are
served through `connectrpc.com/connect`, which natively accepts the
Connect, gRPC, and gRPC-Web protocols on the same listener.
The listener address is configured by `GATEWAY_AUTHENTICATED_GRPC_ADDR`
(the env-var name retains the historical `GRPC` infix for operational
stability — it labels the authenticated edge tier, not the wire
protocol).
Inbound authenticated edge connection setup is bounded by
`GATEWAY_AUTHENTICATED_GRPC_CONNECTION_TIMEOUT`, which defaults to `5s`.
The accepted client timestamp skew is configured by
`GATEWAY_AUTHENTICATED_GRPC_FRESHNESS_WINDOW` and defaults to `5m`.
The public gRPC service exposes two methods:
The public service exposes two methods:
- `ExecuteCommand(ExecuteCommandRequest) returns (ExecuteCommandResponse)`
- `SubscribeEvents(SubscribeEventsRequest) returns (stream GatewayEvent)`
@@ -200,9 +213,12 @@ The gateway routes the request downstream by `message_type` after transport
verification succeeds.
Downstream unary execution is bounded by
`GATEWAY_AUTHENTICATED_DOWNSTREAM_TIMEOUT`, which defaults to `5s`.
When that timeout expires, the gateway preserves the authenticated gRPC
contract and returns gRPC `UNAVAILABLE` with message
`downstream service is unavailable`.
When that timeout expires, the gateway preserves the authenticated edge
contract and returns `UNAVAILABLE` with message
`downstream service is unavailable`. Reject codes are documented using
their gRPC names (`INVALID_ARGUMENT`, `UNAUTHENTICATED`, …); the same
codes flow back to Connect clients as the corresponding `connect.Code*`
values.
`SubscribeEvents` is an authenticated server-streaming RPC.
It binds the stream to `user_id` and `device_session_id` and starts by sending
@@ -211,8 +227,9 @@ a signed service event that includes the current server time in milliseconds.
The v1 protobuf contract lives in
`proto/galaxy/gateway/v1/edge_gateway.proto` under package
`galaxy.gateway.v1` and service `EdgeGateway`.
Generated Go bindings are committed under `proto/galaxy/gateway/v1/` and are
regenerated with:
Generated Go bindings are committed under
`proto/galaxy/gateway/v1/` (gRPC stubs and `gatewayv1connect/` Connect
handlers) and are regenerated with:
```bash
buf generate
```
@@ -286,8 +303,8 @@ affected stream is closed with gRPC `RESOURCE_EXHAUSTED` and message
same `device_session_id` was revoked, every active `SubscribeEvents` stream
bound to that exact session is closed with gRPC `FAILED_PRECONDITION` and
message `device session is revoked`. During gateway shutdown, the in-memory
push hub is closed before gRPC graceful stop, and every active
`SubscribeEvents` stream is terminated with gRPC `UNAVAILABLE` and message
push hub is closed before HTTP graceful stop, and every active
`SubscribeEvents` stream is terminated with `UNAVAILABLE` and message
`gateway is shutting down`.
Authenticated anti-abuse budgets are configured by the
`GATEWAY_AUTHENTICATED_GRPC_ANTI_ABUSE_*` environment variables.
@@ -352,6 +369,15 @@ The current direct `Gateway -> User` self-service boundary uses that pattern:
- `user.games.command`
- `user.games.order`
- `user.games.report`
- `lobby.my.games.list`
- `lobby.my.applications.list`
- `lobby.my.invites.list`
- `lobby.public.games.list`
- `lobby.game.create`
- `lobby.game.open-enrollment`
- `lobby.application.submit`
- `lobby.invite.redeem`
- `lobby.invite.decline`
- external payloads and responses:
- FlatBuffers
- internal downstream transport:
@@ -359,6 +385,12 @@ The current direct `Gateway -> User` self-service boundary uses that pattern:
- business error projection:
- gateway `result_code`
- FlatBuffers error payload mirroring User Service `code` and `message`
- User Service `code` values pass through verbatim as `result_code`
via `projectUserBackendError`; known non-`ok` codes that clients
branch on include `turn_already_closed` (Phase 25 turn cutoff,
HTTP 409 from `Orders` / `Commands` while the runtime is in
`generation_in_progress`) and `game_paused` (Phase 25 auto-pause,
HTTP 409 while the game is in `paused` / `finished` / `removed`).
The request envelope version literal is `v1`.
`payload_hash` is the raw 32-byte SHA-256 digest of `payload_bytes`.
@@ -851,9 +883,9 @@ subscribers, and telemetry runtime.
`GATEWAY_SHUTDOWN_TIMEOUT` configures the per-component graceful shutdown
budget and defaults to `5s`.
During authenticated gRPC shutdown, the in-memory `PushHub` closes active
streams before gRPC graceful stop, so active `SubscribeEvents` calls terminate
with gRPC `UNAVAILABLE` and message `gateway is shutting down`.
During authenticated edge shutdown, the in-memory `PushHub` closes active
streams before HTTP graceful stop, so active `SubscribeEvents` calls terminate
with `UNAVAILABLE` and message `gateway is shutting down`.
## Recommended Package Layout
+227
@@ -0,0 +1,227 @@
package authn_test
import (
"crypto/ed25519"
"crypto/rand"
"crypto/sha256"
"encoding/base64"
"testing"
"galaxy/core/canon"
"galaxy/core/keypair"
"galaxy/gateway/authn"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func sha256Of(payload []byte) []byte {
sum := sha256.Sum256(payload)
return sum[:]
}
// TestParityWithUICoreCanonicalBytes proves that the gateway-side
// authn package and the client-side ui/core canon package produce the
// exact same canonical signing input for every v1 envelope. Any drift
// here means a client signature would be silently rejected by the
// gateway (or vice versa).
func TestParityWithUICoreCanonicalBytes(t *testing.T) {
t.Parallel()
t.Run("request", func(t *testing.T) {
t.Parallel()
gatewayFields := authn.RequestSigningFields{
ProtocolVersion: "v1",
DeviceSessionID: "device-session-parity",
MessageType: "user.games.command",
TimestampMS: 1_700_000_000_000,
RequestID: "request-parity",
PayloadHash: sha256Of([]byte("payload")),
}
clientFields := canon.RequestSigningFields{
ProtocolVersion: gatewayFields.ProtocolVersion,
DeviceSessionID: gatewayFields.DeviceSessionID,
MessageType: gatewayFields.MessageType,
TimestampMS: gatewayFields.TimestampMS,
RequestID: gatewayFields.RequestID,
PayloadHash: gatewayFields.PayloadHash,
}
assert.Equal(t,
authn.BuildRequestSigningInput(gatewayFields),
canon.BuildRequestSigningInput(clientFields))
})
t.Run("response", func(t *testing.T) {
t.Parallel()
gatewayFields := authn.ResponseSigningFields{
ProtocolVersion: "v1",
RequestID: "request-parity",
TimestampMS: 1_700_000_000_500,
ResultCode: "ok",
PayloadHash: sha256Of([]byte("response-payload")),
}
clientFields := canon.ResponseSigningFields{
ProtocolVersion: gatewayFields.ProtocolVersion,
RequestID: gatewayFields.RequestID,
TimestampMS: gatewayFields.TimestampMS,
ResultCode: gatewayFields.ResultCode,
PayloadHash: gatewayFields.PayloadHash,
}
assert.Equal(t,
authn.BuildResponseSigningInput(gatewayFields),
canon.BuildResponseSigningInput(clientFields))
})
t.Run("event", func(t *testing.T) {
t.Parallel()
gatewayFields := authn.EventSigningFields{
EventType: "gateway.server_time",
EventID: "evt-parity",
TimestampMS: 1_700_000_001_000,
RequestID: "request-parity",
TraceID: "trace-parity",
PayloadHash: sha256Of([]byte("event-payload")),
}
clientFields := canon.EventSigningFields{
EventType: gatewayFields.EventType,
EventID: gatewayFields.EventID,
TimestampMS: gatewayFields.TimestampMS,
RequestID: gatewayFields.RequestID,
TraceID: gatewayFields.TraceID,
PayloadHash: gatewayFields.PayloadHash,
}
assert.Equal(t,
authn.BuildEventSigningInput(gatewayFields),
canon.BuildEventSigningInput(clientFields))
})
}
// TestParityRequestSignedByUICoreAcceptedByGateway proves that a
// request the client signs with `keypair.Sign` is accepted by the
// gateway's `authn.VerifyRequestSignature`. This is the acceptance
// criterion from `ui/PLAN.md` Phase 3.
func TestParityRequestSignedByUICoreAcceptedByGateway(t *testing.T) {
t.Parallel()
privateKey, publicKey, err := keypair.Generate(rand.Reader)
require.NoError(t, err)
clientFields := canon.RequestSigningFields{
ProtocolVersion: "v1",
DeviceSessionID: "device-session-parity",
MessageType: "user.account.get",
TimestampMS: 1_700_000_000_000,
RequestID: "request-parity",
PayloadHash: sha256Of([]byte("payload")),
}
signature, err := keypair.Sign(privateKey, canon.BuildRequestSigningInput(clientFields))
require.NoError(t, err)
encodedKey, err := keypair.MarshalPublicKey(publicKey)
require.NoError(t, err)
gatewayFields := authn.RequestSigningFields{
ProtocolVersion: clientFields.ProtocolVersion,
DeviceSessionID: clientFields.DeviceSessionID,
MessageType: clientFields.MessageType,
TimestampMS: clientFields.TimestampMS,
RequestID: clientFields.RequestID,
PayloadHash: clientFields.PayloadHash,
}
require.NoError(t,
authn.VerifyRequestSignature(encodedKey, signature, gatewayFields))
}
// TestParityResponseSignedByGatewayAcceptedByUICore proves that a
// response signed by the gateway's `Ed25519ResponseSigner` is
// accepted by the client's `canon.VerifyResponseSignature`. The
// reverse acceptance criterion from `ui/PLAN.md` Phase 3.
func TestParityResponseSignedByGatewayAcceptedByUICore(t *testing.T) {
t.Parallel()
_, privateKey, err := ed25519.GenerateKey(rand.Reader)
require.NoError(t, err)
signer, err := authn.NewEd25519ResponseSigner(privateKey)
require.NoError(t, err)
gatewayFields := authn.ResponseSigningFields{
ProtocolVersion: "v1",
RequestID: "request-parity",
TimestampMS: 1_700_000_000_500,
ResultCode: "ok",
PayloadHash: sha256Of([]byte("response-payload")),
}
signature, err := signer.SignResponse(gatewayFields)
require.NoError(t, err)
clientFields := canon.ResponseSigningFields{
ProtocolVersion: gatewayFields.ProtocolVersion,
RequestID: gatewayFields.RequestID,
TimestampMS: gatewayFields.TimestampMS,
ResultCode: gatewayFields.ResultCode,
PayloadHash: gatewayFields.PayloadHash,
}
require.NoError(t,
canon.VerifyResponseSignature(signer.PublicKey(), signature, clientFields))
}
// TestParityEventSignedByGatewayAcceptedByUICore proves that a
// stream event signed by the gateway's response signer (which signs
// both responses and events with the same key) is accepted by the
// client's `canon.VerifyEventSignature`.
func TestParityEventSignedByGatewayAcceptedByUICore(t *testing.T) {
t.Parallel()
_, privateKey, err := ed25519.GenerateKey(rand.Reader)
require.NoError(t, err)
signer, err := authn.NewEd25519ResponseSigner(privateKey)
require.NoError(t, err)
gatewayFields := authn.EventSigningFields{
EventType: "gateway.server_time",
EventID: "evt-parity",
TimestampMS: 1_700_000_001_000,
RequestID: "request-parity",
TraceID: "trace-parity",
PayloadHash: sha256Of([]byte("event-payload")),
}
signature, err := signer.SignEvent(gatewayFields)
require.NoError(t, err)
clientFields := canon.EventSigningFields{
EventType: gatewayFields.EventType,
EventID: gatewayFields.EventID,
TimestampMS: gatewayFields.TimestampMS,
RequestID: gatewayFields.RequestID,
TraceID: gatewayFields.TraceID,
PayloadHash: gatewayFields.PayloadHash,
}
require.NoError(t,
canon.VerifyEventSignature(signer.PublicKey(), signature, clientFields))
}
// TestParityClientPublicKeyEncodingMatchesBackend proves that the
// base64 encoding `keypair.MarshalPublicKey` produces is the exact
// string form `authn.VerifyRequestSignature` expects when the
// gateway reads a client public key out of session cache.
func TestParityClientPublicKeyEncodingMatchesBackend(t *testing.T) {
t.Parallel()
_, publicKey, err := keypair.Generate(rand.Reader)
require.NoError(t, err)
encoded, err := keypair.MarshalPublicKey(publicKey)
require.NoError(t, err)
expected := base64.StdEncoding.EncodeToString(publicKey)
require.Equal(t, expected, encoded)
}
+4
@@ -9,3 +9,7 @@ plugins:
out: proto
opt:
- paths=source_relative
- remote: buf.build/connectrpc/go:v1.19.2
out: proto
opt:
- paths=source_relative
+1 -1
@@ -75,6 +75,6 @@ sequenceDiagram
Dispatcher->>Hub: RevokeDeviceSession or RevokeAllForUser
Hub-->>Client: stream closes with FAILED_PRECONDITION
Note over Gateway,Hub: During shutdown the gateway closes PushHub before gRPC graceful stop.
Note over Gateway,Hub: During shutdown the gateway closes PushHub before HTTP graceful stop.
Hub-->>Client: stream closes with UNAVAILABLE
```
+2 -2
@@ -80,8 +80,8 @@ Shutdown behavior:
- the per-component shutdown budget is controlled by
`GATEWAY_SHUTDOWN_TIMEOUT`;
- internal subscribers are stopped as part of application shutdown;
- the in-memory `PushHub` is closed before gRPC graceful stop;
- active `SubscribeEvents` streams terminate with gRPC `UNAVAILABLE` and
- the in-memory `PushHub` is closed before HTTP graceful stop;
- active `SubscribeEvents` streams terminate with `UNAVAILABLE` and
message `gateway is shutting down`.
During planned restarts:
+7 -3
@@ -7,12 +7,12 @@ runtime dependencies.
flowchart LR
subgraph Clients
Public["Public REST clients"]
Authd["Authenticated gRPC clients"]
Authd["Authenticated edge clients\n(Connect / gRPC / gRPC-Web)"]
end
subgraph Gateway["Edge Gateway process"]
PublicHTTP["Public HTTP listener\n/healthz /readyz /api/v1/public/auth/*"]
AuthGRPC["Authenticated gRPC listener\nExecuteCommand / SubscribeEvents"]
AuthGRPC["Authenticated edge listener (h2c)\nConnect / gRPC / gRPC-Web\nExecuteCommand / SubscribeEvents"]
AdminHTTP["Optional admin HTTP listener\n/metrics"]
BackendREST["backendclient.RESTClient\nsessions + public auth + user/lobby"]
BackendPush["backendclient.PushClient\nSubscribePush consumer"]
@@ -48,9 +48,13 @@ Notes:
- `cmd/gateway` refuses startup when Redis connectivity, the backend endpoint,
or the response signer is misconfigured.
- Session lookup is synchronous: every authenticated gRPC request triggers one
- Session lookup is synchronous: every authenticated edge request triggers one
`GET /api/v1/internal/sessions/{id}` call to backend; there is no
process-local projection.
- The authenticated edge listener is built on `connectrpc.com/connect` and
natively serves the Connect, gRPC, and gRPC-Web protocols on a single
HTTP/2 cleartext (`h2c`) port. Browsers use Connect; native clients can
use either Connect or raw gRPC framing against the same listener.
- `backendclient.PushClient` keeps a long-lived `Push.SubscribePush` stream
open. The dispatcher converts inbound `pushv1.PushEvent` frames into either
`PushHub.Publish` (for client events) or `PushHub.RevokeDeviceSession` /
+7 -1
@@ -5,6 +5,8 @@ go 1.26.1
require (
buf.build/gen/go/bufbuild/protovalidate/protocolbuffers/go v1.36.11-20260209202127-80ab13bee0bf.1
buf.build/go/protovalidate v1.1.3
connectrpc.com/connect v1.19.2
galaxy/core v0.0.0-00010101000000-000000000000
galaxy/redisconn v0.0.0-00010101000000-000000000000
github.com/alicebob/miniredis/v2 v2.37.0
github.com/getkin/kin-openapi v0.135.0
@@ -16,6 +18,7 @@ require (
github.com/stretchr/testify v1.11.1
go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.68.0
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.67.0
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0
go.opentelemetry.io/otel v1.43.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0
@@ -25,6 +28,7 @@ require (
go.opentelemetry.io/otel/sdk/metric v1.43.0
go.opentelemetry.io/otel/trace v1.43.0
go.uber.org/zap v1.27.1
golang.org/x/net v0.53.0
golang.org/x/text v0.36.0
golang.org/x/time v0.15.0
google.golang.org/grpc v1.80.0
@@ -43,6 +47,7 @@ require (
github.com/cloudwego/base64x v0.1.6 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/gabriel-vasile/mimetype v1.4.13 // indirect
github.com/gin-contrib/sse v1.1.1 // indirect
github.com/go-logr/logr v1.4.3 // indirect
@@ -94,7 +99,6 @@ require (
golang.org/x/arch v0.25.0 // indirect
golang.org/x/crypto v0.50.0 // indirect
golang.org/x/exp v0.0.0-20260410095643-746e56fc9e2f // indirect
golang.org/x/net v0.53.0 // indirect
golang.org/x/sys v0.43.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529 // indirect
@@ -102,3 +106,5 @@ require (
)
replace galaxy/redisconn => ../pkg/redisconn
replace galaxy/core => ../ui/core
+6
@@ -4,6 +4,8 @@ buf.build/go/protovalidate v1.1.3 h1:m2GVEgQWd7rk+vIoAZ+f0ygGjvQTuqPQapBBdcpWVPE
buf.build/go/protovalidate v1.1.3/go.mod h1:9XIuohWz+kj+9JVn3WQneHA5LZP50mjvneZMnbLkiIE=
cel.dev/expr v0.25.1 h1:1KrZg61W6TWSxuNZ37Xy49ps13NUovb66QLprthtwi4=
cel.dev/expr v0.25.1/go.mod h1:hrXvqGP6G6gyx8UAHSHJ5RGk//1Oj5nXQ2NI02Nrsg4=
connectrpc.com/connect v1.19.2 h1:McQ83FGdzL+t60peksi0gXC7MQ/iLKgLduAnThbM0mo=
connectrpc.com/connect v1.19.2/go.mod h1:tN20fjdGlewnSFeZxLKb0xwIZ6ozc3OQs2hTXy4du9w=
github.com/alicebob/miniredis/v2 v2.37.0 h1:RheObYW32G1aiJIj81XVt78ZHJpHonHLHW7OLIshq68=
github.com/alicebob/miniredis/v2 v2.37.0/go.mod h1:TcL7YfarKPGDAthEtl5NBeHZfeUQj6OXMm/+iu5cLMM=
github.com/antlr4-go/antlr/v4 v4.13.1 h1:SqQKkuVZ+zWkMMNkjy5FZe5mr5WURWnlpmOuzYWrPrQ=
@@ -34,6 +36,8 @@ github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/gabriel-vasile/mimetype v1.4.13 h1:46nXokslUBsAJE/wMsp5gtO500a4F3Nkz9Ufpk2AcUM=
github.com/gabriel-vasile/mimetype v1.4.13/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
github.com/getkin/kin-openapi v0.135.0 h1:751SjYfbiwqukYuVjwYEIKNfrSwS5YpA7DZnKSwQgtg=
@@ -171,6 +175,8 @@ go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.
go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.68.0/go.mod h1:MdHW7tLtkeGJnR4TyOrnd5D0zUGZQB1l84uHCe8hRpE=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.67.0 h1:yI1/OhfEPy7J9eoa6Sj051C7n5dvpj0QX8g4sRchg04=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.67.0/go.mod h1:NoUCKYWK+3ecatC4HjkRktREheMeEtrXoQxrqYFeHSc=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0 h1:CqXxU8VOmDefoh0+ztfGaymYbhdB/tT3zs79QaZTNGY=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0/go.mod h1:BuhAPThV8PBHBvg8ZzZ/Ok3idOdhWIodywz2xEcRbJo=
go.opentelemetry.io/contrib/propagators/b3 v1.43.0 h1:CETqV3QLLPTy5yNrqyMr41VnAOOD4lsRved7n4QG00A=
go.opentelemetry.io/contrib/propagators/b3 v1.43.0/go.mod h1:Q4mCiCdziYzpNR0g+6UqVotAlCDZdzz6L8jwY4knOrw=
go.opentelemetry.io/otel v1.43.0 h1:mYIM03dnh5zfN7HautFE4ieIig9amkNANT+xcVxAj9I=
@@ -51,6 +51,12 @@ func (c *RESTClient) ExecuteGameCommand(ctx context.Context, command downstream.
return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute game command %q: %w", command.MessageType, err)
}
return c.executeUserGamesOrder(ctx, command.UserID, req)
case ordermodel.MessageTypeUserGamesOrderGet:
req, err := transcoder.PayloadToUserGamesOrderGet(command.PayloadBytes)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute game command %q: %w", command.MessageType, err)
}
return c.executeUserGamesOrderGet(ctx, command.UserID, req)
case reportmodel.MessageTypeUserGamesReport:
req, err := transcoder.PayloadToGameReportRequest(command.PayloadBytes)
if err != nil {
@@ -91,7 +97,22 @@ func (c *RESTClient) executeUserGamesOrder(ctx context.Context, userID string, r
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("execute user.games.order: %w", err)
}
return projectUserGamesAckResponse(status, respBody, transcoder.EmptyUserGamesOrderResponsePayload)
return projectUserGamesOrderResponse(status, respBody)
}
func (c *RESTClient) executeUserGamesOrderGet(ctx context.Context, userID string, req *ordermodel.UserGamesOrderGet) (downstream.UnaryResult, error) {
if req.GameID == uuid.Nil {
return downstream.UnaryResult{}, errors.New("execute user.games.order.get: game_id must not be empty")
}
if req.Turn < 0 {
return downstream.UnaryResult{}, fmt.Errorf("execute user.games.order.get: turn must be non-negative, got %d", req.Turn)
}
target := fmt.Sprintf("%s/api/v1/user/games/%s/orders?turn=%d", c.baseURL, url.PathEscape(req.GameID.String()), req.Turn)
respBody, status, err := c.do(ctx, http.MethodGet, target, userID, nil)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("execute user.games.order.get: %w", err)
}
return projectUserGamesOrderGetResponse(status, respBody)
}
func (c *RESTClient) executeUserGamesReport(ctx context.Context, userID string, req *reportmodel.GameReportRequest) (downstream.UnaryResult, error) {
@@ -122,10 +143,10 @@ func buildEngineCommandBody(commands []ordermodel.DecodableCommand) (gamerest.Co
return gamerest.Command{Actor: "", Commands: raw}, nil
}
// projectUserGamesAckResponse turns a backend response for command /
// order routes into a UnaryResult. Engine returns 204 on success, so
// any 2xx status is treated as ok and answered with the empty typed
// FB envelope produced by ackBuilder.
// projectUserGamesAckResponse turns a backend response for the
// `user.games.command` route into a UnaryResult. Engine returns 204
// on success, so any 2xx status is treated as ok and answered with
// the empty typed FB envelope produced by ackBuilder.
func projectUserGamesAckResponse(statusCode int, payload []byte, ackBuilder func() []byte) (downstream.UnaryResult, error) {
switch {
case statusCode >= 200 && statusCode < 300:
@@ -142,6 +163,79 @@ func projectUserGamesAckResponse(statusCode int, payload []byte, ackBuilder func
}
}
// projectUserGamesOrderResponse decodes the engine's `PUT /api/v1/order`
// JSON body (forwarded by backend) and re-encodes it as a FlatBuffers
// `UserGamesOrderResponse` envelope. The body carries per-command
// `cmdApplied` / `cmdErrorCode` plus the engine-assigned `updatedAt`,
// all of which round-trip into FB unchanged. An empty body falls back
// to a typed empty envelope so the gateway can ack a successful but
// unstructured 2xx without surfacing an error.
func projectUserGamesOrderResponse(statusCode int, payload []byte) (downstream.UnaryResult, error) {
switch {
case statusCode >= 200 && statusCode < 300:
var parsed *ordermodel.UserGamesOrder
if len(payload) > 0 {
decoded, jsonErr := transcoder.JSONToUserGamesOrder(payload)
if jsonErr != nil {
return downstream.UnaryResult{}, fmt.Errorf("decode engine order response: %w", jsonErr)
}
parsed = decoded
}
encoded, err := transcoder.UserGamesOrderResponseToPayload(parsed)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("encode order response payload: %w", err)
}
return downstream.UnaryResult{
ResultCode: userCommandResultCodeOK,
PayloadBytes: encoded,
}, nil
case statusCode == http.StatusServiceUnavailable:
return downstream.UnaryResult{}, downstream.ErrDownstreamUnavailable
case statusCode >= 400 && statusCode <= 599:
return projectUserBackendError(statusCode, payload)
default:
return downstream.UnaryResult{}, fmt.Errorf("unexpected HTTP status %d", statusCode)
}
}
// projectUserGamesOrderGetResponse decodes the engine's
// `GET /api/v1/order` JSON body and re-encodes it as a FlatBuffers
// `UserGamesOrderGetResponse` envelope. A `204 No Content` from the
// engine surfaces as `found = false` with no embedded order; `200`
// surfaces as `found = true` with the decoded order.
func projectUserGamesOrderGetResponse(statusCode int, payload []byte) (downstream.UnaryResult, error) {
switch {
case statusCode == http.StatusNoContent:
encoded, err := transcoder.UserGamesOrderGetResponseToPayload(nil, false)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("encode order get response payload: %w", err)
}
return downstream.UnaryResult{
ResultCode: userCommandResultCodeOK,
PayloadBytes: encoded,
}, nil
case statusCode >= 200 && statusCode < 300:
decoded, err := transcoder.JSONToUserGamesOrder(payload)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("decode engine order get response: %w", err)
}
encoded, err := transcoder.UserGamesOrderGetResponseToPayload(decoded, true)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("encode order get response payload: %w", err)
}
return downstream.UnaryResult{
ResultCode: userCommandResultCodeOK,
PayloadBytes: encoded,
}, nil
case statusCode == http.StatusServiceUnavailable:
return downstream.UnaryResult{}, downstream.ErrDownstreamUnavailable
case statusCode >= 400 && statusCode <= 599:
return projectUserBackendError(statusCode, payload)
default:
return downstream.UnaryResult{}, fmt.Errorf("unexpected HTTP status %d", statusCode)
}
}
// projectUserGamesReportResponse decodes the engine's Report JSON
// payload (forwarded verbatim by backend) and re-encodes it as a
// FlatBuffers Report for the signed-gRPC client.
@@ -0,0 +1,187 @@
package backendclient_test
import (
"context"
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"galaxy/gateway/internal/backendclient"
"galaxy/gateway/internal/downstream"
ordermodel "galaxy/model/order"
"galaxy/transcoder"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestExecuteUserGamesOrderForwardsAndDecodesResponse(t *testing.T) {
t.Parallel()
gameID := uuid.MustParse("11111111-2222-3333-4444-555555555555")
applied := true
source := &ordermodel.UserGamesOrder{
GameID: gameID,
Commands: []ordermodel.DecodableCommand{
&ordermodel.CommandPlanetRename{
CommandMeta: ordermodel.CommandMeta{
CmdType: ordermodel.CommandTypePlanetRename,
CmdID: "00000000-0000-0000-0000-00000000aaaa",
},
Number: 7,
Name: "alpha",
},
},
}
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, http.MethodPost, r.Method)
require.Equal(t, "/api/v1/user/games/"+gameID.String()+"/orders", r.URL.Path)
require.Equal(t, "user-1", r.Header.Get(backendclient.HeaderUserID))
writeJSON(t, w, http.StatusAccepted, map[string]any{
"game_id": gameID.String(),
"updatedAt": int64(99),
"cmd": []map[string]any{{
"@type": string(ordermodel.CommandTypePlanetRename),
"cmdId": "00000000-0000-0000-0000-00000000aaaa",
"cmdApplied": applied,
"planetNumber": 7,
"name": "alpha",
}},
})
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
payload, err := transcoder.UserGamesOrderToPayload(source)
require.NoError(t, err)
cmd := newAuthCommand(t, ordermodel.MessageTypeUserGamesOrder, payload)
result, err := client.ExecuteGameCommand(context.Background(), cmd)
require.NoError(t, err)
assert.Equal(t, "ok", result.ResultCode)
decoded, err := transcoder.PayloadToUserGamesOrderResponse(result.PayloadBytes)
require.NoError(t, err)
require.NotNil(t, decoded)
assert.Equal(t, gameID, decoded.GameID)
assert.Equal(t, int64(99), decoded.UpdatedAt)
require.Len(t, decoded.Commands, 1)
rename, ok := ordermodel.AsCommand[*ordermodel.CommandPlanetRename](decoded.Commands[0])
require.True(t, ok)
assert.Equal(t, "00000000-0000-0000-0000-00000000aaaa", rename.CmdID)
require.NotNil(t, rename.CmdApplied)
assert.True(t, *rename.CmdApplied)
}
func TestExecuteUserGamesOrderGetReturnsStored(t *testing.T) {
t.Parallel()
gameID := uuid.MustParse("22222222-3333-4444-5555-666666666666")
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, http.MethodGet, r.Method)
require.Equal(t, "/api/v1/user/games/"+gameID.String()+"/orders", r.URL.Path)
require.Equal(t, "5", r.URL.Query().Get("turn"))
writeJSON(t, w, http.StatusOK, map[string]any{
"game_id": gameID.String(),
"updatedAt": int64(42),
"cmd": []map[string]any{{
"@type": string(ordermodel.CommandTypePlanetRename),
"cmdId": "00000000-0000-0000-0000-00000000bbbb",
"planetNumber": 9,
"name": "stored",
}},
})
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
payload, err := transcoder.UserGamesOrderGetToPayload(&ordermodel.UserGamesOrderGet{GameID: gameID, Turn: 5})
require.NoError(t, err)
result, err := client.ExecuteGameCommand(context.Background(), newAuthCommand(t, ordermodel.MessageTypeUserGamesOrderGet, payload))
require.NoError(t, err)
assert.Equal(t, "ok", result.ResultCode)
stored, found, err := transcoder.PayloadToUserGamesOrderGetResponse(result.PayloadBytes)
require.NoError(t, err)
require.True(t, found)
require.NotNil(t, stored)
assert.Equal(t, gameID, stored.GameID)
assert.Equal(t, int64(42), stored.UpdatedAt)
require.Len(t, stored.Commands, 1)
rename, ok := ordermodel.AsCommand[*ordermodel.CommandPlanetRename](stored.Commands[0])
require.True(t, ok)
assert.Equal(t, 9, rename.Number)
assert.Equal(t, "stored", rename.Name)
}
func TestExecuteUserGamesOrderGetMapsNoContent(t *testing.T) {
t.Parallel()
gameID := uuid.MustParse("33333333-4444-5555-6666-777777777777")
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, "11", r.URL.Query().Get("turn"))
w.WriteHeader(http.StatusNoContent)
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
payload, err := transcoder.UserGamesOrderGetToPayload(&ordermodel.UserGamesOrderGet{GameID: gameID, Turn: 11})
require.NoError(t, err)
result, err := client.ExecuteGameCommand(context.Background(), newAuthCommand(t, ordermodel.MessageTypeUserGamesOrderGet, payload))
require.NoError(t, err)
assert.Equal(t, "ok", result.ResultCode)
stored, found, err := transcoder.PayloadToUserGamesOrderGetResponse(result.PayloadBytes)
require.NoError(t, err)
assert.False(t, found)
assert.Nil(t, stored)
}
func TestExecuteUserGamesOrderGetRejectsNegativeTurn(t *testing.T) {
t.Parallel()
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
t.Error("server must not be hit on negative turn")
w.WriteHeader(http.StatusInternalServerError)
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
gameID := uuid.MustParse("44444444-5555-6666-7777-888888888888")
// PayloadToUserGamesOrderGet rejects negative turns at decode
// time. Hand-crafting a payload (encoding Turn as 0 and then
// mutating the buffer) would be fragile, so instead exercise
// the encoder's own non-negative check directly.
_, err := transcoder.UserGamesOrderGetToPayload(&ordermodel.UserGamesOrderGet{GameID: gameID, Turn: -1})
require.Error(t, err)
// Also verify that the dispatch path surfaces the decoder error
// for a manually-built envelope: the request payload is nil, so
// the decoder reports "data is empty", which the dispatcher
// wraps with the message-type prefix.
_, err = client.ExecuteGameCommand(context.Background(), downstream.AuthenticatedCommand{
MessageType: ordermodel.MessageTypeUserGamesOrderGet,
PayloadBytes: nil,
UserID: "user-1",
})
require.Error(t, err)
assert.Contains(t, err.Error(), "user.games.order.get")
}
// The writeJSON helper used above is declared in a sibling
// _test.go file in this package; Go test files share one package
// scope, so the call sites here compile against that copy.
//
// TODO(phase14): collapse the two writeJSON copies once the package
// gains a shared `helpers_test.go`. Phase 14 keeps the duplicate to
// avoid touching unrelated tests.
var _ = json.Marshal // keep encoding/json import if writeJSON is hoisted
func init() {
// Reference strings so its import survives refactors; the
// writeJSON call sites above are what would actually fail to
// compile if the shared helper were removed by a future change.
_ = strings.TrimSpace
}
@@ -10,6 +10,7 @@ import (
"net/http"
"net/url"
"strings"
"time"
"galaxy/gateway/internal/downstream"
lobbymodel "galaxy/model/lobby"
@@ -55,12 +56,52 @@ func (c *RESTClient) ExecuteLobbyCommand(ctx context.Context, command downstream
return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute lobby command %q: %w", command.MessageType, err)
}
return c.executeLobbyMyGames(ctx, command.UserID)
case lobbymodel.MessageTypePublicGamesList:
req, err := transcoder.PayloadToPublicGamesListRequest(command.PayloadBytes)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute lobby command %q: %w", command.MessageType, err)
}
return c.executeLobbyPublicGames(ctx, command.UserID, req)
case lobbymodel.MessageTypeMyApplicationsList:
if _, err := transcoder.PayloadToMyApplicationsListRequest(command.PayloadBytes); err != nil {
return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute lobby command %q: %w", command.MessageType, err)
}
return c.executeLobbyMyApplications(ctx, command.UserID)
case lobbymodel.MessageTypeMyInvitesList:
if _, err := transcoder.PayloadToMyInvitesListRequest(command.PayloadBytes); err != nil {
return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute lobby command %q: %w", command.MessageType, err)
}
return c.executeLobbyMyInvites(ctx, command.UserID)
case lobbymodel.MessageTypeOpenEnrollment:
req, err := transcoder.PayloadToOpenEnrollmentRequest(command.PayloadBytes)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute lobby command %q: %w", command.MessageType, err)
}
return c.executeLobbyOpenEnrollment(ctx, command.UserID, req)
case lobbymodel.MessageTypeGameCreate:
req, err := transcoder.PayloadToGameCreateRequest(command.PayloadBytes)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute lobby command %q: %w", command.MessageType, err)
}
return c.executeLobbyGameCreate(ctx, command.UserID, req)
case lobbymodel.MessageTypeApplicationSubmit:
req, err := transcoder.PayloadToApplicationSubmitRequest(command.PayloadBytes)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute lobby command %q: %w", command.MessageType, err)
}
return c.executeLobbyApplicationSubmit(ctx, command.UserID, req)
case lobbymodel.MessageTypeInviteRedeem:
req, err := transcoder.PayloadToInviteRedeemRequest(command.PayloadBytes)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute lobby command %q: %w", command.MessageType, err)
}
return c.executeLobbyInviteRedeem(ctx, command.UserID, req)
case lobbymodel.MessageTypeInviteDecline:
req, err := transcoder.PayloadToInviteDeclineRequest(command.PayloadBytes)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute lobby command %q: %w", command.MessageType, err)
}
return c.executeLobbyInviteDecline(ctx, command.UserID, req)
default:
return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute lobby command: unsupported message type %q", command.MessageType)
}
@@ -88,6 +129,81 @@ func (c *RESTClient) executeLobbyMyGames(ctx context.Context, userID string) (do
return projectLobbyErrorResponse(status, body)
}
func (c *RESTClient) executeLobbyPublicGames(ctx context.Context, userID string, req *lobbymodel.PublicGamesListRequest) (downstream.UnaryResult, error) {
page := req.Page
if page <= 0 {
page = 1
}
pageSize := req.PageSize
if pageSize <= 0 {
pageSize = 50
}
target := fmt.Sprintf("%s/api/v1/user/lobby/games?page=%d&page_size=%d", c.baseURL, page, pageSize)
body, status, err := c.do(ctx, http.MethodGet, target, userID, nil)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("execute lobby.public.games.list: %w", err)
}
if status == http.StatusOK {
pageResponse, err := decodePublicGamesPage(body)
if err != nil {
return downstream.UnaryResult{}, err
}
payloadBytes, err := transcoder.PublicGamesListResponseToPayload(pageResponse)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("encode success response payload: %w", err)
}
return downstream.UnaryResult{
ResultCode: lobbyResultCodeOK,
PayloadBytes: payloadBytes,
}, nil
}
return projectLobbyErrorResponse(status, body)
}
func (c *RESTClient) executeLobbyMyApplications(ctx context.Context, userID string) (downstream.UnaryResult, error) {
body, status, err := c.do(ctx, http.MethodGet, c.baseURL+"/api/v1/user/lobby/my/applications", userID, nil)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("execute lobby.my.applications.list: %w", err)
}
if status == http.StatusOK {
response, err := decodeApplicationsList(body)
if err != nil {
return downstream.UnaryResult{}, err
}
payloadBytes, err := transcoder.MyApplicationsListResponseToPayload(response)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("encode success response payload: %w", err)
}
return downstream.UnaryResult{
ResultCode: lobbyResultCodeOK,
PayloadBytes: payloadBytes,
}, nil
}
return projectLobbyErrorResponse(status, body)
}
func (c *RESTClient) executeLobbyMyInvites(ctx context.Context, userID string) (downstream.UnaryResult, error) {
body, status, err := c.do(ctx, http.MethodGet, c.baseURL+"/api/v1/user/lobby/my/invites", userID, nil)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("execute lobby.my.invites.list: %w", err)
}
if status == http.StatusOK {
response, err := decodeInvitesList(body)
if err != nil {
return downstream.UnaryResult{}, err
}
payloadBytes, err := transcoder.MyInvitesListResponseToPayload(response)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("encode success response payload: %w", err)
}
return downstream.UnaryResult{
ResultCode: lobbyResultCodeOK,
PayloadBytes: payloadBytes,
}, nil
}
return projectLobbyErrorResponse(status, body)
}
func (c *RESTClient) executeLobbyOpenEnrollment(ctx context.Context, userID string, req *lobbymodel.OpenEnrollmentRequest) (downstream.UnaryResult, error) {
if req == nil || strings.TrimSpace(req.GameID) == "" {
return downstream.UnaryResult{}, errors.New("execute lobby.game.open-enrollment: game_id must not be empty")
@@ -122,6 +238,342 @@ func (c *RESTClient) executeLobbyOpenEnrollment(ctx context.Context, userID stri
return projectLobbyErrorResponse(status, body)
}
func (c *RESTClient) executeLobbyGameCreate(ctx context.Context, userID string, req *lobbymodel.GameCreateRequest) (downstream.UnaryResult, error) {
if req == nil || strings.TrimSpace(req.GameName) == "" {
return downstream.UnaryResult{}, errors.New("execute lobby.game.create: game_name must not be empty")
}
if strings.TrimSpace(req.TurnSchedule) == "" {
return downstream.UnaryResult{}, errors.New("execute lobby.game.create: turn_schedule must not be empty")
}
if strings.TrimSpace(req.TargetEngineVersion) == "" {
return downstream.UnaryResult{}, errors.New("execute lobby.game.create: target_engine_version must not be empty")
}
if req.MinPlayers <= 0 || req.MaxPlayers <= 0 {
return downstream.UnaryResult{}, errors.New("execute lobby.game.create: min_players and max_players must be positive")
}
if req.MinPlayers > req.MaxPlayers {
return downstream.UnaryResult{}, errors.New("execute lobby.game.create: min_players must not exceed max_players")
}
if req.EnrollmentEndsAt.IsZero() {
return downstream.UnaryResult{}, errors.New("execute lobby.game.create: enrollment_ends_at must be set")
}
body := map[string]any{
"game_name": req.GameName,
"visibility": "private",
"description": req.Description,
"min_players": int32(req.MinPlayers),
"max_players": int32(req.MaxPlayers),
"start_gap_hours": int32(req.StartGapHours),
"start_gap_players": int32(req.StartGapPlayers),
"enrollment_ends_at": req.EnrollmentEndsAt.UTC().Format(time.RFC3339Nano),
"turn_schedule": req.TurnSchedule,
"target_engine_version": req.TargetEngineVersion,
}
payload, status, err := c.do(ctx, http.MethodPost, c.baseURL+"/api/v1/user/lobby/games", userID, body)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("execute lobby.game.create: %w", err)
}
if status == http.StatusOK || status == http.StatusCreated {
summary, err := decodeGameSummaryFromGameDetail(payload)
if err != nil {
return downstream.UnaryResult{}, err
}
payloadBytes, err := transcoder.GameCreateResponseToPayload(&lobbymodel.GameCreateResponse{Game: summary})
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("encode success response payload: %w", err)
}
return downstream.UnaryResult{
ResultCode: lobbyResultCodeOK,
PayloadBytes: payloadBytes,
}, nil
}
return projectLobbyErrorResponse(status, payload)
}
func (c *RESTClient) executeLobbyApplicationSubmit(ctx context.Context, userID string, req *lobbymodel.ApplicationSubmitRequest) (downstream.UnaryResult, error) {
if req == nil || strings.TrimSpace(req.GameID) == "" {
return downstream.UnaryResult{}, errors.New("execute lobby.application.submit: game_id must not be empty")
}
if strings.TrimSpace(req.RaceName) == "" {
return downstream.UnaryResult{}, errors.New("execute lobby.application.submit: race_name must not be empty")
}
target := c.baseURL + "/api/v1/user/lobby/games/" + url.PathEscape(req.GameID) + "/applications"
body := map[string]any{"race_name": req.RaceName}
payload, status, err := c.do(ctx, http.MethodPost, target, userID, body)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("execute lobby.application.submit: %w", err)
}
if status == http.StatusOK || status == http.StatusCreated {
app, err := decodeApplicationDetail(payload)
if err != nil {
return downstream.UnaryResult{}, err
}
payloadBytes, err := transcoder.ApplicationSubmitResponseToPayload(&lobbymodel.ApplicationSubmitResponse{Application: app})
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("encode success response payload: %w", err)
}
return downstream.UnaryResult{
ResultCode: lobbyResultCodeOK,
PayloadBytes: payloadBytes,
}, nil
}
return projectLobbyErrorResponse(status, payload)
}
func (c *RESTClient) executeLobbyInviteRedeem(ctx context.Context, userID string, req *lobbymodel.InviteRedeemRequest) (downstream.UnaryResult, error) {
if req == nil || strings.TrimSpace(req.GameID) == "" || strings.TrimSpace(req.InviteID) == "" {
return downstream.UnaryResult{}, errors.New("execute lobby.invite.redeem: game_id and invite_id must not be empty")
}
target := c.baseURL + "/api/v1/user/lobby/games/" + url.PathEscape(req.GameID) + "/invites/" + url.PathEscape(req.InviteID) + "/redeem"
payload, status, err := c.do(ctx, http.MethodPost, target, userID, nil)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("execute lobby.invite.redeem: %w", err)
}
if status == http.StatusOK {
invite, err := decodeInviteDetail(payload)
if err != nil {
return downstream.UnaryResult{}, err
}
payloadBytes, err := transcoder.InviteRedeemResponseToPayload(&lobbymodel.InviteRedeemResponse{Invite: invite})
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("encode success response payload: %w", err)
}
return downstream.UnaryResult{
ResultCode: lobbyResultCodeOK,
PayloadBytes: payloadBytes,
}, nil
}
return projectLobbyErrorResponse(status, payload)
}
func (c *RESTClient) executeLobbyInviteDecline(ctx context.Context, userID string, req *lobbymodel.InviteDeclineRequest) (downstream.UnaryResult, error) {
if req == nil || strings.TrimSpace(req.GameID) == "" || strings.TrimSpace(req.InviteID) == "" {
return downstream.UnaryResult{}, errors.New("execute lobby.invite.decline: game_id and invite_id must not be empty")
}
target := c.baseURL + "/api/v1/user/lobby/games/" + url.PathEscape(req.GameID) + "/invites/" + url.PathEscape(req.InviteID) + "/decline"
payload, status, err := c.do(ctx, http.MethodPost, target, userID, nil)
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("execute lobby.invite.decline: %w", err)
}
if status == http.StatusOK {
invite, err := decodeInviteDetail(payload)
if err != nil {
return downstream.UnaryResult{}, err
}
payloadBytes, err := transcoder.InviteDeclineResponseToPayload(&lobbymodel.InviteDeclineResponse{Invite: invite})
if err != nil {
return downstream.UnaryResult{}, fmt.Errorf("encode success response payload: %w", err)
}
return downstream.UnaryResult{
ResultCode: lobbyResultCodeOK,
PayloadBytes: payloadBytes,
}, nil
}
return projectLobbyErrorResponse(status, payload)
}
// decodeGameSummaryFromGameDetail accepts the backend's full
// LobbyGameDetail wire shape and projects it onto the gateway's
// GameSummary contract. It uses non-strict JSON decoding so the
// gateway tolerates the runtime/engine fields it does not forward to
// the UI.
func decodeGameSummaryFromGameDetail(payload []byte) (lobbymodel.GameSummary, error) {
var wire struct {
GameID string `json:"game_id"`
GameName string `json:"game_name"`
GameType string `json:"game_type"`
Status string `json:"status"`
OwnerUserID *string `json:"owner_user_id"`
MinPlayers int `json:"min_players"`
MaxPlayers int `json:"max_players"`
EnrollmentEndsAt time.Time `json:"enrollment_ends_at"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
CurrentTurn int32 `json:"current_turn"`
}
if err := json.Unmarshal(payload, &wire); err != nil {
return lobbymodel.GameSummary{}, fmt.Errorf("decode success response: %w", err)
}
owner := ""
if wire.OwnerUserID != nil {
owner = *wire.OwnerUserID
}
return lobbymodel.GameSummary{
GameID: wire.GameID,
GameName: wire.GameName,
GameType: wire.GameType,
Status: wire.Status,
OwnerUserID: owner,
MinPlayers: wire.MinPlayers,
MaxPlayers: wire.MaxPlayers,
EnrollmentEndsAt: wire.EnrollmentEndsAt.UTC(),
CreatedAt: wire.CreatedAt.UTC(),
UpdatedAt: wire.UpdatedAt.UTC(),
CurrentTurn: wire.CurrentTurn,
}, nil
}
func decodePublicGamesPage(payload []byte) (*lobbymodel.PublicGamesListResponse, error) {
var wire struct {
Items []struct {
GameID string `json:"game_id"`
GameName string `json:"game_name"`
GameType string `json:"game_type"`
Status string `json:"status"`
OwnerUserID *string `json:"owner_user_id"`
MinPlayers int `json:"min_players"`
MaxPlayers int `json:"max_players"`
EnrollmentEndsAt time.Time `json:"enrollment_ends_at"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
CurrentTurn int32 `json:"current_turn"`
} `json:"items"`
Page int `json:"page"`
PageSize int `json:"page_size"`
Total int `json:"total"`
}
if err := json.Unmarshal(payload, &wire); err != nil {
return nil, fmt.Errorf("decode success response: %w", err)
}
out := &lobbymodel.PublicGamesListResponse{
Items: make([]lobbymodel.GameSummary, 0, len(wire.Items)),
Page: wire.Page,
PageSize: wire.PageSize,
Total: wire.Total,
}
for _, w := range wire.Items {
owner := ""
if w.OwnerUserID != nil {
owner = *w.OwnerUserID
}
out.Items = append(out.Items, lobbymodel.GameSummary{
GameID: w.GameID,
GameName: w.GameName,
GameType: w.GameType,
Status: w.Status,
OwnerUserID: owner,
MinPlayers: w.MinPlayers,
MaxPlayers: w.MaxPlayers,
EnrollmentEndsAt: w.EnrollmentEndsAt.UTC(),
CreatedAt: w.CreatedAt.UTC(),
UpdatedAt: w.UpdatedAt.UTC(),
CurrentTurn: w.CurrentTurn,
})
}
return out, nil
}
func decodeApplicationsList(payload []byte) (*lobbymodel.MyApplicationsListResponse, error) {
var wire struct {
Items []applicationDetailWire `json:"items"`
}
if err := json.Unmarshal(payload, &wire); err != nil {
return nil, fmt.Errorf("decode success response: %w", err)
}
out := &lobbymodel.MyApplicationsListResponse{
Items: make([]lobbymodel.ApplicationSummary, 0, len(wire.Items)),
}
for _, w := range wire.Items {
out.Items = append(out.Items, w.toModel())
}
return out, nil
}
func decodeApplicationDetail(payload []byte) (lobbymodel.ApplicationSummary, error) {
var wire applicationDetailWire
if err := json.Unmarshal(payload, &wire); err != nil {
return lobbymodel.ApplicationSummary{}, fmt.Errorf("decode success response: %w", err)
}
return wire.toModel(), nil
}
func decodeInvitesList(payload []byte) (*lobbymodel.MyInvitesListResponse, error) {
var wire struct {
Items []inviteDetailWire `json:"items"`
}
if err := json.Unmarshal(payload, &wire); err != nil {
return nil, fmt.Errorf("decode success response: %w", err)
}
out := &lobbymodel.MyInvitesListResponse{
Items: make([]lobbymodel.InviteSummary, 0, len(wire.Items)),
}
for _, w := range wire.Items {
out.Items = append(out.Items, w.toModel())
}
return out, nil
}
func decodeInviteDetail(payload []byte) (lobbymodel.InviteSummary, error) {
var wire inviteDetailWire
if err := json.Unmarshal(payload, &wire); err != nil {
return lobbymodel.InviteSummary{}, fmt.Errorf("decode success response: %w", err)
}
return wire.toModel(), nil
}
type applicationDetailWire struct {
ApplicationID string `json:"application_id"`
GameID string `json:"game_id"`
ApplicantUserID string `json:"applicant_user_id"`
RaceName string `json:"race_name"`
Status string `json:"status"`
CreatedAt time.Time `json:"created_at"`
DecidedAt *time.Time `json:"decided_at,omitempty"`
}
func (w applicationDetailWire) toModel() lobbymodel.ApplicationSummary {
out := lobbymodel.ApplicationSummary{
ApplicationID: w.ApplicationID,
GameID: w.GameID,
ApplicantUserID: w.ApplicantUserID,
RaceName: w.RaceName,
Status: w.Status,
CreatedAt: w.CreatedAt.UTC(),
}
if w.DecidedAt != nil {
t := w.DecidedAt.UTC()
out.DecidedAt = &t
}
return out
}
type inviteDetailWire struct {
InviteID string `json:"invite_id"`
GameID string `json:"game_id"`
InviterUserID string `json:"inviter_user_id"`
InvitedUserID *string `json:"invited_user_id,omitempty"`
Code *string `json:"code,omitempty"`
RaceName string `json:"race_name"`
Status string `json:"status"`
CreatedAt time.Time `json:"created_at"`
ExpiresAt time.Time `json:"expires_at"`
DecidedAt *time.Time `json:"decided_at,omitempty"`
}
func (w inviteDetailWire) toModel() lobbymodel.InviteSummary {
out := lobbymodel.InviteSummary{
InviteID: w.InviteID,
GameID: w.GameID,
InviterUserID: w.InviterUserID,
RaceName: w.RaceName,
Status: w.Status,
CreatedAt: w.CreatedAt.UTC(),
ExpiresAt: w.ExpiresAt.UTC(),
}
if w.InvitedUserID != nil {
out.InvitedUserID = *w.InvitedUserID
}
if w.Code != nil {
out.Code = *w.Code
}
if w.DecidedAt != nil {
t := w.DecidedAt.UTC()
out.DecidedAt = &t
}
return out
}
func projectLobbyErrorResponse(statusCode int, payload []byte) (downstream.UnaryResult, error) {
switch {
case statusCode == http.StatusServiceUnavailable:
@@ -0,0 +1,512 @@
package backendclient_test
import (
"context"
"encoding/json"
"errors"
"io"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
"galaxy/gateway/internal/backendclient"
"galaxy/gateway/internal/downstream"
lobbymodel "galaxy/model/lobby"
"galaxy/transcoder"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func newAuthCommand(t *testing.T, messageType string, payload []byte) downstream.AuthenticatedCommand {
t.Helper()
return downstream.AuthenticatedCommand{
MessageType: messageType,
PayloadBytes: payload,
UserID: "user-1",
}
}
func mustEncode[T any](t *testing.T, encode func(*T) ([]byte, error), value *T) []byte {
t.Helper()
bytes, err := encode(value)
require.NoError(t, err)
return bytes
}
func TestExecuteLobbyMyGamesListReturnsItems(t *testing.T) {
t.Parallel()
enrollment := time.Date(2026, 5, 15, 12, 0, 0, 0, time.UTC)
created := time.Date(2026, 5, 7, 10, 0, 0, 0, time.UTC)
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, http.MethodGet, r.Method)
require.Equal(t, "/api/v1/user/lobby/my/games", r.URL.Path)
require.Equal(t, "user-1", r.Header.Get(backendclient.HeaderUserID))
writeJSON(t, w, http.StatusOK, map[string]any{
"items": []map[string]any{{
"game_id": "game-1",
"game_name": "Test Game",
"game_type": "private",
"status": "draft",
"owner_user_id": "user-1",
"min_players": 2,
"max_players": 8,
"enrollment_ends_at": enrollment.Format(time.RFC3339Nano),
"created_at": created.Format(time.RFC3339Nano),
"updated_at": created.Format(time.RFC3339Nano),
}},
})
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
payload := mustEncode(t, transcoder.MyGamesListRequestToPayload, &lobbymodel.MyGamesListRequest{})
result, err := client.ExecuteLobbyCommand(context.Background(), newAuthCommand(t, lobbymodel.MessageTypeMyGamesList, payload))
require.NoError(t, err)
assert.Equal(t, "ok", result.ResultCode)
decoded, err := transcoder.PayloadToMyGamesListResponse(result.PayloadBytes)
require.NoError(t, err)
require.Len(t, decoded.Items, 1)
assert.Equal(t, "game-1", decoded.Items[0].GameID)
assert.Equal(t, enrollment, decoded.Items[0].EnrollmentEndsAt)
}
func TestExecuteLobbyPublicGamesListPaginatesAndDecodes(t *testing.T) {
t.Parallel()
enrollment := time.Date(2026, 6, 1, 12, 0, 0, 0, time.UTC)
created := time.Date(2026, 5, 1, 12, 0, 0, 0, time.UTC)
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, http.MethodGet, r.Method)
require.Equal(t, "/api/v1/user/lobby/games", r.URL.Path)
require.Equal(t, "2", r.URL.Query().Get("page"))
require.Equal(t, "10", r.URL.Query().Get("page_size"))
writeJSON(t, w, http.StatusOK, map[string]any{
"items": []map[string]any{{
"game_id": "public-1",
"game_name": "Open",
"game_type": "public",
"status": "enrollment_open",
"owner_user_id": nil,
"min_players": 4,
"max_players": 12,
"enrollment_ends_at": enrollment.Format(time.RFC3339Nano),
"created_at": created.Format(time.RFC3339Nano),
"updated_at": created.Format(time.RFC3339Nano),
}},
"page": 2,
"page_size": 10,
"total": 31,
})
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
payload := mustEncode(t, transcoder.PublicGamesListRequestToPayload, &lobbymodel.PublicGamesListRequest{Page: 2, PageSize: 10})
result, err := client.ExecuteLobbyCommand(context.Background(), newAuthCommand(t, lobbymodel.MessageTypePublicGamesList, payload))
require.NoError(t, err)
assert.Equal(t, "ok", result.ResultCode)
decoded, err := transcoder.PayloadToPublicGamesListResponse(result.PayloadBytes)
require.NoError(t, err)
assert.Equal(t, 2, decoded.Page)
assert.Equal(t, 10, decoded.PageSize)
assert.Equal(t, 31, decoded.Total)
require.Len(t, decoded.Items, 1)
assert.Empty(t, decoded.Items[0].OwnerUserID)
}
func TestExecuteLobbyMyApplicationsList(t *testing.T) {
t.Parallel()
created := time.Date(2026, 5, 5, 10, 0, 0, 0, time.UTC)
decided := time.Date(2026, 5, 6, 12, 0, 0, 0, time.UTC)
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, "/api/v1/user/lobby/my/applications", r.URL.Path)
writeJSON(t, w, http.StatusOK, map[string]any{
"items": []map[string]any{
{
"application_id": "app-1",
"game_id": "public-1",
"applicant_user_id": "user-1",
"race_name": "Vegan Federation",
"status": "pending",
"created_at": created.Format(time.RFC3339Nano),
},
{
"application_id": "app-2",
"game_id": "public-2",
"applicant_user_id": "user-1",
"race_name": "Lithic Compact",
"status": "approved",
"created_at": created.Format(time.RFC3339Nano),
"decided_at": decided.Format(time.RFC3339Nano),
},
},
})
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
payload := mustEncode(t, transcoder.MyApplicationsListRequestToPayload, &lobbymodel.MyApplicationsListRequest{})
result, err := client.ExecuteLobbyCommand(context.Background(), newAuthCommand(t, lobbymodel.MessageTypeMyApplicationsList, payload))
require.NoError(t, err)
assert.Equal(t, "ok", result.ResultCode)
decoded, err := transcoder.PayloadToMyApplicationsListResponse(result.PayloadBytes)
require.NoError(t, err)
require.Len(t, decoded.Items, 2)
assert.Equal(t, "pending", decoded.Items[0].Status)
assert.Nil(t, decoded.Items[0].DecidedAt)
require.NotNil(t, decoded.Items[1].DecidedAt)
assert.Equal(t, decided, *decoded.Items[1].DecidedAt)
}
func TestExecuteLobbyMyInvitesList(t *testing.T) {
t.Parallel()
created := time.Date(2026, 5, 5, 10, 0, 0, 0, time.UTC)
expires := time.Date(2026, 5, 8, 10, 0, 0, 0, time.UTC)
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, "/api/v1/user/lobby/my/invites", r.URL.Path)
writeJSON(t, w, http.StatusOK, map[string]any{
"items": []map[string]any{{
"invite_id": "invite-1",
"game_id": "private-1",
"inviter_user_id": "user-host",
"invited_user_id": "user-1",
"race_name": "Vegan Federation",
"status": "pending",
"created_at": created.Format(time.RFC3339Nano),
"expires_at": expires.Format(time.RFC3339Nano),
}},
})
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
payload := mustEncode(t, transcoder.MyInvitesListRequestToPayload, &lobbymodel.MyInvitesListRequest{})
result, err := client.ExecuteLobbyCommand(context.Background(), newAuthCommand(t, lobbymodel.MessageTypeMyInvitesList, payload))
require.NoError(t, err)
assert.Equal(t, "ok", result.ResultCode)
decoded, err := transcoder.PayloadToMyInvitesListResponse(result.PayloadBytes)
require.NoError(t, err)
require.Len(t, decoded.Items, 1)
assert.Equal(t, "user-1", decoded.Items[0].InvitedUserID)
assert.Empty(t, decoded.Items[0].Code)
assert.Equal(t, expires, decoded.Items[0].ExpiresAt)
}
func TestExecuteLobbyGameCreatePostsPrivateAndProjectsToSummary(t *testing.T) {
t.Parallel()
enrollment := time.Date(2026, 6, 1, 12, 0, 0, 0, time.UTC)
created := time.Date(2026, 5, 7, 10, 0, 0, 0, time.UTC)
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, http.MethodPost, r.Method)
require.Equal(t, "/api/v1/user/lobby/games", r.URL.Path)
var body map[string]any
raw, err := io.ReadAll(r.Body)
require.NoError(t, err)
require.NoError(t, json.Unmarshal(raw, &body))
assert.Equal(t, "private", body["visibility"])
assert.Equal(t, "First Contact", body["game_name"])
assert.Equal(t, "0 0 * * *", body["turn_schedule"])
// Backend always returns the full GameDetail including runtime
// snapshot fields the gateway must tolerate.
writeJSON(t, w, http.StatusCreated, map[string]any{
"game_id": "newly-created",
"game_name": "First Contact",
"game_type": "private",
"status": "draft",
"owner_user_id": "user-1",
"min_players": 2,
"max_players": 8,
"enrollment_ends_at": enrollment.Format(time.RFC3339Nano),
"created_at": created.Format(time.RFC3339Nano),
"updated_at": created.Format(time.RFC3339Nano),
"visibility": "private",
"description": "",
"turn_schedule": "0 0 * * *",
"target_engine_version": "v1",
"start_gap_hours": 24,
"start_gap_players": 2,
"current_turn": 0,
"runtime_status": "",
})
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
payload := mustEncode(t, transcoder.GameCreateRequestToPayload, &lobbymodel.GameCreateRequest{
GameName: "First Contact",
Description: "",
MinPlayers: 2,
MaxPlayers: 8,
StartGapHours: 24,
StartGapPlayers: 2,
EnrollmentEndsAt: enrollment,
TurnSchedule: "0 0 * * *",
TargetEngineVersion: "v1",
})
result, err := client.ExecuteLobbyCommand(context.Background(), newAuthCommand(t, lobbymodel.MessageTypeGameCreate, payload))
require.NoError(t, err)
assert.Equal(t, "ok", result.ResultCode)
decoded, err := transcoder.PayloadToGameCreateResponse(result.PayloadBytes)
require.NoError(t, err)
assert.Equal(t, "newly-created", decoded.Game.GameID)
assert.Equal(t, "draft", decoded.Game.Status)
}
func TestExecuteLobbyGameCreateRejectsEmptyGameName(t *testing.T) {
t.Parallel()
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
t.Errorf("backend must not be hit on validation failure")
w.WriteHeader(http.StatusInternalServerError)
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
payload := mustEncode(t, transcoder.GameCreateRequestToPayload, &lobbymodel.GameCreateRequest{
MinPlayers: 2,
MaxPlayers: 8,
EnrollmentEndsAt: time.Date(2026, 6, 1, 12, 0, 0, 0, time.UTC),
TurnSchedule: "0 0 * * *",
TargetEngineVersion: "v1",
})
_, err := client.ExecuteLobbyCommand(context.Background(), newAuthCommand(t, lobbymodel.MessageTypeGameCreate, payload))
require.Error(t, err)
assert.Contains(t, err.Error(), "game_name must not be empty")
}
func TestExecuteLobbyApplicationSubmitPostsRaceName(t *testing.T) {
t.Parallel()
created := time.Date(2026, 5, 5, 10, 0, 0, 0, time.UTC)
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, http.MethodPost, r.Method)
require.Equal(t, "/api/v1/user/lobby/games/public-1/applications", r.URL.Path)
var body map[string]any
raw, err := io.ReadAll(r.Body)
require.NoError(t, err)
require.NoError(t, json.Unmarshal(raw, &body))
assert.Equal(t, "Vegan Federation", body["race_name"])
writeJSON(t, w, http.StatusCreated, map[string]any{
"application_id": "app-3",
"game_id": "public-1",
"applicant_user_id": "user-1",
"race_name": "Vegan Federation",
"status": "pending",
"created_at": created.Format(time.RFC3339Nano),
})
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
payload := mustEncode(t, transcoder.ApplicationSubmitRequestToPayload, &lobbymodel.ApplicationSubmitRequest{
GameID: "public-1",
RaceName: "Vegan Federation",
})
result, err := client.ExecuteLobbyCommand(context.Background(), newAuthCommand(t, lobbymodel.MessageTypeApplicationSubmit, payload))
require.NoError(t, err)
assert.Equal(t, "ok", result.ResultCode)
decoded, err := transcoder.PayloadToApplicationSubmitResponse(result.PayloadBytes)
require.NoError(t, err)
assert.Equal(t, "app-3", decoded.Application.ApplicationID)
assert.Equal(t, "pending", decoded.Application.Status)
}
func TestExecuteLobbyInviteRedeemPostsToBackend(t *testing.T) {
t.Parallel()
created := time.Date(2026, 5, 5, 10, 0, 0, 0, time.UTC)
expires := time.Date(2026, 5, 8, 10, 0, 0, 0, time.UTC)
decided := time.Date(2026, 5, 6, 12, 0, 0, 0, time.UTC)
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, http.MethodPost, r.Method)
require.Equal(t, "/api/v1/user/lobby/games/private-1/invites/invite-1/redeem", r.URL.Path)
writeJSON(t, w, http.StatusOK, map[string]any{
"invite_id": "invite-1",
"game_id": "private-1",
"inviter_user_id": "user-host",
"invited_user_id": "user-1",
"race_name": "Vegan Federation",
"status": "accepted",
"created_at": created.Format(time.RFC3339Nano),
"expires_at": expires.Format(time.RFC3339Nano),
"decided_at": decided.Format(time.RFC3339Nano),
})
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
payload := mustEncode(t, transcoder.InviteRedeemRequestToPayload, &lobbymodel.InviteRedeemRequest{GameID: "private-1", InviteID: "invite-1"})
result, err := client.ExecuteLobbyCommand(context.Background(), newAuthCommand(t, lobbymodel.MessageTypeInviteRedeem, payload))
require.NoError(t, err)
assert.Equal(t, "ok", result.ResultCode)
decoded, err := transcoder.PayloadToInviteRedeemResponse(result.PayloadBytes)
require.NoError(t, err)
assert.Equal(t, "accepted", decoded.Invite.Status)
}
func TestExecuteLobbyInviteDeclinePostsToBackend(t *testing.T) {
t.Parallel()
created := time.Date(2026, 5, 5, 10, 0, 0, 0, time.UTC)
expires := time.Date(2026, 5, 8, 10, 0, 0, 0, time.UTC)
decided := time.Date(2026, 5, 6, 12, 0, 0, 0, time.UTC)
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, http.MethodPost, r.Method)
require.Equal(t, "/api/v1/user/lobby/games/private-1/invites/invite-1/decline", r.URL.Path)
writeJSON(t, w, http.StatusOK, map[string]any{
"invite_id": "invite-1",
"game_id": "private-1",
"inviter_user_id": "user-host",
"invited_user_id": "user-1",
"race_name": "Vegan Federation",
"status": "declined",
"created_at": created.Format(time.RFC3339Nano),
"expires_at": expires.Format(time.RFC3339Nano),
"decided_at": decided.Format(time.RFC3339Nano),
})
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
payload := mustEncode(t, transcoder.InviteDeclineRequestToPayload, &lobbymodel.InviteDeclineRequest{GameID: "private-1", InviteID: "invite-1"})
result, err := client.ExecuteLobbyCommand(context.Background(), newAuthCommand(t, lobbymodel.MessageTypeInviteDecline, payload))
require.NoError(t, err)
assert.Equal(t, "ok", result.ResultCode)
decoded, err := transcoder.PayloadToInviteDeclineResponse(result.PayloadBytes)
require.NoError(t, err)
assert.Equal(t, "declined", decoded.Invite.Status)
}
func TestExecuteLobbyProjectsBackendErrorAcrossCommands(t *testing.T) {
t.Parallel()
cases := []struct {
name string
messageType string
payload []byte
statusCode int
want string
}{
{
name: "public games conflict",
messageType: lobbymodel.MessageTypePublicGamesList,
payload: mustEncode(t, transcoder.PublicGamesListRequestToPayload, &lobbymodel.PublicGamesListRequest{Page: 1, PageSize: 50}),
statusCode: http.StatusConflict,
want: "conflict",
},
{
name: "applications forbidden",
messageType: lobbymodel.MessageTypeApplicationSubmit,
payload: mustEncode(t, transcoder.ApplicationSubmitRequestToPayload, &lobbymodel.ApplicationSubmitRequest{GameID: "g", RaceName: "r"}),
statusCode: http.StatusForbidden,
want: "forbidden",
},
{
name: "invite redeem not found",
messageType: lobbymodel.MessageTypeInviteRedeem,
payload: mustEncode(t, transcoder.InviteRedeemRequestToPayload, &lobbymodel.InviteRedeemRequest{GameID: "g", InviteID: "i"}),
statusCode: http.StatusNotFound,
want: "subject_not_found",
},
{
name: "create invalid request",
messageType: lobbymodel.MessageTypeGameCreate,
payload: mustEncode(t, transcoder.GameCreateRequestToPayload, validCreateRequest()),
statusCode: http.StatusBadRequest,
want: "invalid_request",
},
}
for _, tc := range cases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
writeJSON(t, w, tc.statusCode, map[string]any{
"error": map[string]any{"code": tc.want, "message": "from backend"},
})
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
result, err := client.ExecuteLobbyCommand(context.Background(), newAuthCommand(t, tc.messageType, tc.payload))
require.NoError(t, err)
assert.Equal(t, tc.want, result.ResultCode)
errResp, err := transcoder.PayloadToLobbyErrorResponse(result.PayloadBytes)
require.NoError(t, err)
assert.Equal(t, tc.want, errResp.Error.Code)
assert.Equal(t, "from backend", errResp.Error.Message)
})
}
}
func TestExecuteLobbyMapsServiceUnavailableToDownstreamError(t *testing.T) {
t.Parallel()
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
w.WriteHeader(http.StatusServiceUnavailable)
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
payload := mustEncode(t, transcoder.MyGamesListRequestToPayload, &lobbymodel.MyGamesListRequest{})
_, err := client.ExecuteLobbyCommand(context.Background(), newAuthCommand(t, lobbymodel.MessageTypeMyGamesList, payload))
require.Error(t, err)
assert.True(t, errors.Is(err, downstream.ErrDownstreamUnavailable))
}
func TestExecuteLobbyRejectsUnknownMessageType(t *testing.T) {
t.Parallel()
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
w.WriteHeader(http.StatusOK)
}))
t.Cleanup(server.Close)
client := newRESTClient(t, server)
_, err := client.ExecuteLobbyCommand(context.Background(), newAuthCommand(t, "lobby.unknown", []byte{0x01}))
require.Error(t, err)
assert.True(t, strings.Contains(err.Error(), "unsupported message type"))
}
func validCreateRequest() *lobbymodel.GameCreateRequest {
return &lobbymodel.GameCreateRequest{
GameName: "Test",
Description: "",
MinPlayers: 2,
MaxPlayers: 8,
StartGapHours: 24,
StartGapPlayers: 2,
EnrollmentEndsAt: time.Date(2026, 6, 1, 12, 0, 0, 0, time.UTC),
TurnSchedule: "0 0 * * *",
TargetEngineVersion: "v1",
}
}
@@ -39,7 +39,14 @@ func LobbyRoutes(client *RESTClient) map[string]downstream.Client {
}
return map[string]downstream.Client{
lobbymodel.MessageTypeMyGamesList: target,
lobbymodel.MessageTypePublicGamesList: target,
lobbymodel.MessageTypeMyApplicationsList: target,
lobbymodel.MessageTypeMyInvitesList: target,
lobbymodel.MessageTypeOpenEnrollment: target,
lobbymodel.MessageTypeGameCreate: target,
lobbymodel.MessageTypeApplicationSubmit: target,
lobbymodel.MessageTypeInviteRedeem: target,
lobbymodel.MessageTypeInviteDecline: target,
}
}
@@ -55,6 +62,7 @@ func GameRoutes(client *RESTClient) map[string]downstream.Client {
return map[string]downstream.Client{
ordermodel.MessageTypeUserGamesCommand: target,
ordermodel.MessageTypeUserGamesOrder: target,
ordermodel.MessageTypeUserGamesOrderGet: target,
reportmodel.MessageTypeUserGamesReport: target,
}
}
@@ -0,0 +1,106 @@
package backendclient_test
import (
"context"
"testing"
"galaxy/gateway/internal/backendclient"
"galaxy/gateway/internal/downstream"
lobbymodel "galaxy/model/lobby"
ordermodel "galaxy/model/order"
reportmodel "galaxy/model/report"
usermodel "galaxy/model/user"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Phase 14 follow-up: every authenticated message-type constant
// declared in `pkg/model/<service>` must be wired into the matching
// route table. Without this regression test, adding a new constant
// without registering it surfaces only at runtime as
// `unimplemented: message_type is not routed` — exactly what the
// owner saw when an outdated gateway image missed
// `user.games.order.get`.
func TestRoutesCoverAllAuthenticatedMessageTypes(t *testing.T) {
t.Parallel()
cases := map[string]struct {
expected []string
actual map[string]downstream.Client
}{
"user": {
expected: []string{
usermodel.MessageTypeGetMyAccount,
usermodel.MessageTypeUpdateMyProfile,
usermodel.MessageTypeUpdateMySettings,
usermodel.MessageTypeListMySessions,
usermodel.MessageTypeRevokeMySession,
usermodel.MessageTypeRevokeAllMySessions,
},
actual: backendclient.UserRoutes(nil),
},
"lobby": {
expected: []string{
lobbymodel.MessageTypeMyGamesList,
lobbymodel.MessageTypePublicGamesList,
lobbymodel.MessageTypeMyApplicationsList,
lobbymodel.MessageTypeMyInvitesList,
lobbymodel.MessageTypeOpenEnrollment,
lobbymodel.MessageTypeGameCreate,
lobbymodel.MessageTypeApplicationSubmit,
lobbymodel.MessageTypeInviteRedeem,
lobbymodel.MessageTypeInviteDecline,
},
actual: backendclient.LobbyRoutes(nil),
},
"game": {
expected: []string{
ordermodel.MessageTypeUserGamesCommand,
ordermodel.MessageTypeUserGamesOrder,
ordermodel.MessageTypeUserGamesOrderGet,
reportmodel.MessageTypeUserGamesReport,
},
actual: backendclient.GameRoutes(nil),
},
}
for name, tc := range cases {
name, tc := name, tc
t.Run(name, func(t *testing.T) {
t.Parallel()
require.Len(t, tc.actual, len(tc.expected),
"%s routes table size diverges from the expected message-type list", name)
for _, mt := range tc.expected {
client, ok := tc.actual[mt]
assert.Truef(t, ok, "%s routes are missing %q", name, mt)
assert.NotNilf(t, client, "%s routes resolve %q to a nil client", name, mt)
}
})
}
}
// Sanity-check that the order-get route really points at the game
// command client (and not, say, the lobby one if a future refactor
// reshuffles the helpers): the route table must dispatch through
// `gameCommandClient.ExecuteCommand`, which in turn calls
// `RESTClient.ExecuteGameCommand`. We exercise this through the
// public Router contract.
func TestUserGamesOrderGetRoutedToGameClient(t *testing.T) {
t.Parallel()
routes := backendclient.GameRoutes(nil)
router := downstream.NewStaticRouter(routes)
client, err := router.Route(ordermodel.MessageTypeUserGamesOrderGet)
require.NoError(t, err)
require.NotNil(t, client)
// Without a live RESTClient the client is the unavailable stub —
// calling ExecuteCommand surfaces the canonical "downstream
// service is unavailable" sentinel rather than the "not routed"
// error we want to keep regression-tested.
_, err = client.ExecuteCommand(context.Background(), downstream.AuthenticatedCommand{
MessageType: ordermodel.MessageTypeUserGamesOrderGet,
})
assert.ErrorIs(t, err, downstream.ErrDownstreamUnavailable)
}
@@ -11,14 +11,12 @@ import (
"galaxy/gateway/internal/config"
"galaxy/gateway/internal/downstream"
"galaxy/gateway/internal/testutil"
gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
"connectrpc.com/connect"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.opentelemetry.io/otel/trace"
"go.uber.org/zap"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
func TestExecuteCommandRoutesVerifiedCommandAndSignsResponse(t *testing.T) {
@@ -58,32 +56,27 @@ func TestExecuteCommandRoutesVerifiedCommandAndSignsResponse(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
response, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequest())
client := newEdgeClient(t, addr)
response, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequest()))
require.NoError(t, err)
assert.Equal(t, "v1", response.GetProtocolVersion())
assert.Equal(t, "request-123", response.GetRequestId())
assert.Equal(t, testCurrentTime.UnixMilli(), response.GetTimestampMs())
assert.Equal(t, "accepted", response.GetResultCode())
assert.Equal(t, []byte("downstream-response"), response.GetPayloadBytes())
assert.Equal(t, "v1", response.Msg.GetProtocolVersion())
assert.Equal(t, "request-123", response.Msg.GetRequestId())
assert.Equal(t, testCurrentTime.UnixMilli(), response.Msg.GetTimestampMs())
assert.Equal(t, "accepted", response.Msg.GetResultCode())
assert.Equal(t, []byte("downstream-response"), response.Msg.GetPayloadBytes())
assert.Equal(t, 1, moveClient.executeCalls)
assert.Zero(t, renameClient.executeCalls)
wantHash := sha256.Sum256([]byte("downstream-response"))
assert.Equal(t, wantHash[:], response.GetPayloadHash())
require.NoError(t, authn.VerifyPayloadHash(response.GetPayloadBytes(), response.GetPayloadHash()))
require.NoError(t, authn.VerifyResponseSignature(signer.PublicKey(), response.GetSignature(), authn.ResponseSigningFields{
ProtocolVersion: response.GetProtocolVersion(),
RequestID: response.GetRequestId(),
TimestampMS: response.GetTimestampMs(),
ResultCode: response.GetResultCode(),
PayloadHash: response.GetPayloadHash(),
assert.Equal(t, wantHash[:], response.Msg.GetPayloadHash())
require.NoError(t, authn.VerifyPayloadHash(response.Msg.GetPayloadBytes(), response.Msg.GetPayloadHash()))
require.NoError(t, authn.VerifyResponseSignature(signer.PublicKey(), response.Msg.GetSignature(), authn.ResponseSigningFields{
ProtocolVersion: response.Msg.GetProtocolVersion(),
RequestID: response.Msg.GetRequestId(),
TimestampMS: response.Msg.GetTimestampMs(),
ResultCode: response.Msg.GetResultCode(),
PayloadHash: response.Msg.GetPayloadHash(),
}))
}
@@ -99,16 +92,11 @@ func TestExecuteCommandRouteMissReturnsUnimplemented(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequest())
client := newEdgeClient(t, addr)
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequest()))
require.Error(t, err)
assert.Equal(t, codes.Unimplemented, status.Code(err))
assert.Equal(t, "message_type is not routed", status.Convert(err).Message())
assert.Equal(t, connect.CodeUnimplemented, connect.CodeOf(err))
assert.Equal(t, "message_type is not routed", connectErrorMessage(t, err))
}
func TestExecuteCommandMapsDownstreamUnavailableToUnavailable(t *testing.T) {
@@ -131,16 +119,11 @@ func TestExecuteCommandMapsDownstreamUnavailableToUnavailable(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequest())
client := newEdgeClient(t, addr)
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequest()))
require.Error(t, err)
assert.Equal(t, codes.Unavailable, status.Code(err))
assert.Equal(t, "downstream service is unavailable", status.Convert(err).Message())
assert.Equal(t, connect.CodeUnavailable, connect.CodeOf(err))
assert.Equal(t, "downstream service is unavailable", connectErrorMessage(t, err))
assert.Equal(t, 1, failingClient.executeCalls)
}
@@ -167,16 +150,11 @@ func TestExecuteCommandMapsDownstreamTimeoutToUnavailable(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequest())
client := newEdgeClient(t, addr)
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequest()))
require.Error(t, err)
assert.Equal(t, codes.Unavailable, status.Code(err))
assert.Equal(t, "downstream service is unavailable", status.Convert(err).Message())
assert.Equal(t, connect.CodeUnavailable, connect.CodeOf(err))
assert.Equal(t, "downstream service is unavailable", connectErrorMessage(t, err))
assert.Equal(t, 1, stallingClient.executeCalls)
}
@@ -203,16 +181,11 @@ func TestExecuteCommandFailsClosedWhenResponseSignerUnavailable(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequest())
client := newEdgeClient(t, addr)
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequest()))
require.Error(t, err)
assert.Equal(t, codes.Unavailable, status.Code(err))
assert.Equal(t, "response signer is unavailable", status.Convert(err).Message())
assert.Equal(t, connect.CodeUnavailable, connect.CodeOf(err))
assert.Equal(t, "response signer is unavailable", connectErrorMessage(t, err))
assert.Equal(t, 1, successClient.executeCalls)
}
@@ -250,13 +223,8 @@ func TestExecuteCommandPropagatesOTelSpanContextToDownstream(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequest())
client := newEdgeClient(t, addr)
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequest()))
require.NoError(t, err)
assert.True(t, seenSpanContext.IsValid())
@@ -290,15 +258,10 @@ func TestExecuteCommandDrainsInFlightUnaryDuringShutdown(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
client := newEdgeClient(t, addr)
resultCh := make(chan error, 1)
go func() {
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequest())
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequest()))
resultCh <- err
}()
@@ -353,13 +316,8 @@ func TestExecuteCommandLogsDoNotContainSensitiveTransportMaterial(t *testing.T)
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequest())
client := newEdgeClient(t, addr)
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequest()))
require.NoError(t, err)
logOutput := logBuffer.String()

@@ -0,0 +1,143 @@
package grpcapi
import (
"context"
"errors"
"fmt"
gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
"galaxy/gateway/proto/galaxy/gateway/v1/gatewayv1connect"
"connectrpc.com/connect"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/metadata"
grpcstatus "google.golang.org/grpc/status"
)
// connectEdgeAdapter exposes the existing gRPC-shaped authenticated edge
// service decorator stack (envelope → session → payload-hash → signature →
// freshness/replay → rate-limit → routing/push) through the
// gatewayv1connect.EdgeGatewayHandler interface. It owns no logic of its
// own; the underlying decorator stack carries the full ingress contract
// unchanged.
type connectEdgeAdapter struct {
impl gatewayv1.EdgeGatewayServer
}
// newConnectEdgeAdapter wraps impl as a Connect handler.
func newConnectEdgeAdapter(impl gatewayv1.EdgeGatewayServer) gatewayv1connect.EdgeGatewayHandler {
return &connectEdgeAdapter{impl: impl}
}
// ExecuteCommand unwraps the typed Connect request, calls the underlying
// service, and wraps the typed response. gRPC `status.Error` values
// returned by the decorator stack are translated to *connect.Error so
// the Connect client receives the matching code and message.
func (a *connectEdgeAdapter) ExecuteCommand(ctx context.Context, req *connect.Request[gatewayv1.ExecuteCommandRequest]) (*connect.Response[gatewayv1.ExecuteCommandResponse], error) {
resp, err := a.impl.ExecuteCommand(ctx, req.Msg)
if err != nil {
return nil, translateGRPCStatusError(err)
}
return connect.NewResponse(resp), nil
}
// SubscribeEvents adapts the Connect server stream to the
// grpc.ServerStreamingServer contract expected by the existing decorator
// stack. The decorator stack only ever calls Send and Context on the
// stream; the remaining grpc.ServerStream surface is satisfied by no-op
// shims so the interface contract is met without panicking. Errors
// returned by the decorator stack are translated to *connect.Error.
func (a *connectEdgeAdapter) SubscribeEvents(ctx context.Context, req *connect.Request[gatewayv1.SubscribeEventsRequest], stream *connect.ServerStream[gatewayv1.GatewayEvent]) error {
wrapped := &connectEdgeStream{ctx: ctx, stream: stream}
if err := a.impl.SubscribeEvents(req.Msg, wrapped); err != nil {
return translateGRPCStatusError(err)
}
return nil
}
// translateGRPCStatusError maps gRPC status.Error values returned by the
// decorator stack into *connect.Error with the equivalent code and message.
// Errors that are already *connect.Error pass through unchanged. Errors
// without a recognisable gRPC status are returned verbatim — connect-go
// renders those as CodeUnknown.
func translateGRPCStatusError(err error) error {
if err == nil {
return nil
}
var connectErr *connect.Error
if errors.As(err, &connectErr) {
return err
}
grpcStatus, ok := grpcstatus.FromError(err)
if !ok {
return err
}
if grpcStatus.Code() == codes.OK {
return nil
}
return connect.NewError(connect.Code(grpcStatus.Code()), errors.New(grpcStatus.Message()))
}
// connectEdgeStream satisfies grpc.ServerStreamingServer[gatewayv1.GatewayEvent]
// on top of *connect.ServerStream. The decorator stack reads the request
// context and pushes outbound events through Send; the rest of the
// grpc.ServerStream surface is not exercised in the gateway, so the no-op
// implementations preserve the type contract without surprising behaviour.
type connectEdgeStream struct {
ctx context.Context
stream *connect.ServerStream[gatewayv1.GatewayEvent]
}
// Send forwards a typed gateway event through the underlying Connect server
// stream.
func (s *connectEdgeStream) Send(event *gatewayv1.GatewayEvent) error {
return s.stream.Send(event)
}
// Context returns the request context handed to the Connect handler.
func (s *connectEdgeStream) Context() context.Context {
return s.ctx
}
// SetHeader is part of grpc.ServerStream. The Connect transport exposes
// response headers through ResponseHeader() at construction time; metadata
// supplied here is intentionally ignored because no decorator in the
// gateway exercises the gRPC-only metadata path.
func (s *connectEdgeStream) SetHeader(metadata.MD) error {
return nil
}
// SendHeader is part of grpc.ServerStream. Connect-served streams flush
// headers automatically on the first Send; manual header dispatch is not
// modelled.
func (s *connectEdgeStream) SendHeader(metadata.MD) error {
return nil
}
// SetTrailer is part of grpc.ServerStream. Trailer metadata has no
// corresponding Connect concept on server-streaming responses.
func (s *connectEdgeStream) SetTrailer(metadata.MD) {}
// SendMsg is part of grpc.ServerStream. The decorator stack never calls
// SendMsg directly; if a future caller does, the typed Send path is used
// when the message is a GatewayEvent.
func (s *connectEdgeStream) SendMsg(m any) error {
event, ok := m.(*gatewayv1.GatewayEvent)
if !ok {
return fmt.Errorf("connectEdgeStream.SendMsg: unsupported message type %T", m)
}
return s.stream.Send(event)
}
// RecvMsg is part of grpc.ServerStream. Server-streaming server handlers
// have no client messages to receive after the initial request, so this
// method is intentionally an error path.
func (s *connectEdgeStream) RecvMsg(any) error {
return errors.New("connectEdgeStream.RecvMsg: server-streaming has no client messages")
}
@@ -0,0 +1,110 @@
package grpcapi
import (
"context"
"net"
"time"
"galaxy/gateway/internal/telemetry"
"connectrpc.com/connect"
"go.uber.org/zap"
)
// observabilityConnectInterceptor returns a Connect interceptor that records
// the same structured log entry and authenticated edge metric pair as the
// gRPC instrumentation it replaced. It also injects the parsed peer IP into
// the request context so the rate-limit decorator can attribute requests
// without depending on the gRPC `peer` package.
func observabilityConnectInterceptor(logger *zap.Logger, metrics *telemetry.Runtime) connect.Interceptor {
if logger == nil {
logger = zap.NewNop()
}
return &connectObservability{logger: logger, metrics: metrics}
}
type connectObservability struct {
logger *zap.Logger
metrics *telemetry.Runtime
}
// WrapUnary records timing and outcome for a single unary edge call.
func (o *connectObservability) WrapUnary(next connect.UnaryFunc) connect.UnaryFunc {
return func(ctx context.Context, req connect.AnyRequest) (connect.AnyResponse, error) {
ctx = contextWithPeerIP(ctx, hostFromConnectPeerAddr(req.Peer().Addr))
start := time.Now()
resp, err := next(ctx, req)
var respValue any
if resp != nil {
respValue = resp.Any()
}
recordEdgeRequest(o.logger, o.metrics, ctx, "connect", req.Spec().Procedure, req.Any(), respValue, err, time.Since(start), "unary")
return resp, err
}
}
// WrapStreamingClient is the client-side hook required by the
// connect.Interceptor contract. The gateway only acts as a Connect server,
// so this hook is a pass-through.
func (o *connectObservability) WrapStreamingClient(next connect.StreamingClientFunc) connect.StreamingClientFunc {
return next
}
// WrapStreamingHandler records timing and outcome for one server-streaming
// edge call. The wrapped conn captures the first received request so the
// log/metric pair carries the same envelope fields the gRPC instrumentation
// emitted before.
func (o *connectObservability) WrapStreamingHandler(next connect.StreamingHandlerFunc) connect.StreamingHandlerFunc {
return func(ctx context.Context, conn connect.StreamingHandlerConn) error {
ctx = contextWithPeerIP(ctx, hostFromConnectPeerAddr(conn.Peer().Addr))
start := time.Now()
wrapped := &observabilityStreamingConn{StreamingHandlerConn: conn}
err := next(ctx, wrapped)
recordEdgeRequest(o.logger, o.metrics, ctx, "connect", conn.Spec().Procedure, wrapped.firstRequest, nil, err, time.Since(start), "stream")
return err
}
}
// observabilityStreamingConn captures the first received request so the
// streaming-handler interceptor can derive the envelope log fields after
// the handler returns.
type observabilityStreamingConn struct {
connect.StreamingHandlerConn
firstRequest any
}
// Receive forwards to the underlying conn and stores the first successful
// message, so envelopeFieldsFromRequest can read message_type, request_id,
// and trace_id from it.
func (c *observabilityStreamingConn) Receive(msg any) error {
err := c.StreamingHandlerConn.Receive(msg)
if err == nil && c.firstRequest == nil {
c.firstRequest = msg
}
return err
}
// hostFromConnectPeerAddr returns the host part of a "host:port" peer
// address, or the address verbatim when it cannot be split. Empty input
// yields an empty string so peerIPFromContext falls back to the canonical
// `unknown` bucket.
func hostFromConnectPeerAddr(addr string) string {
if addr == "" {
return ""
}
host, _, err := net.SplitHostPort(addr)
if err == nil && host != "" {
return host
}
return addr
}
@@ -4,8 +4,7 @@ import (
"bytes"
"context"
"fmt"
"galaxy/gateway/proto/galaxy/gateway/v1"
gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
"buf.build/go/protovalidate"
"google.golang.org/grpc"
@@ -3,7 +3,6 @@ package grpcapi
import (
"context"
"errors"
"io"
"sync"
"testing"
"time"
@@ -12,11 +11,10 @@ import (
"galaxy/gateway/internal/session"
gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
"connectrpc.com/connect"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
func TestExecuteCommandRejectsStaleTimestamp(t *testing.T) {
@@ -51,16 +49,11 @@ func TestExecuteCommandRejectsStaleTimestamp(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequestWithTimestamp("device-session-123", "request-123", tt.timestampMS))
client := newEdgeClient(t, addr)
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequestWithTimestamp("device-session-123", "request-123", tt.timestampMS)))
require.Error(t, err)
assert.Equal(t, codes.FailedPrecondition, status.Code(err))
assert.Equal(t, "request timestamp is outside the freshness window", status.Convert(err).Message())
assert.Equal(t, connect.CodeFailedPrecondition, connect.CodeOf(err))
assert.Equal(t, "request timestamp is outside the freshness window", connectErrorMessage(t, err))
assert.Zero(t, delegate.executeCalls)
})
}
@@ -98,16 +91,11 @@ func TestSubscribeEventsRejectsStaleTimestamp(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
client := newEdgeClient(t, addr)
err := subscribeEventsError(t, context.Background(), client, newValidSubscribeEventsRequestWithTimestamp("device-session-123", "request-123", tt.timestampMS))
require.Error(t, err)
assert.Equal(t, codes.FailedPrecondition, status.Code(err))
assert.Equal(t, "request timestamp is outside the freshness window", status.Convert(err).Message())
assert.Equal(t, connect.CodeFailedPrecondition, connect.CodeOf(err))
assert.Equal(t, "request timestamp is outside the freshness window", connectErrorMessage(t, err))
assert.Zero(t, delegate.subscribeCalls)
})
}
@@ -127,21 +115,16 @@ func TestExecuteCommandRejectsReplay(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
client := newEdgeClient(t, addr)
req := newValidExecuteCommandRequest()
_, err := client.ExecuteCommand(context.Background(), req)
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(req))
require.NoError(t, err)
_, err = client.ExecuteCommand(context.Background(), req)
_, err = client.ExecuteCommand(context.Background(), connect.NewRequest(req))
require.Error(t, err)
assert.Equal(t, codes.FailedPrecondition, status.Code(err))
assert.Equal(t, "request replay detected", status.Convert(err).Message())
assert.Equal(t, connect.CodeFailedPrecondition, connect.CodeOf(err))
assert.Equal(t, "request replay detected", connectErrorMessage(t, err))
assert.Equal(t, 1, delegate.executeCalls)
}
@@ -159,25 +142,20 @@ func TestSubscribeEventsRejectsReplay(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
client := newEdgeClient(t, addr)
req := newValidSubscribeEventsRequest()
stream, err := client.SubscribeEvents(context.Background(), req)
stream, err := client.SubscribeEvents(context.Background(), connect.NewRequest(req))
require.NoError(t, err)
event := recvBootstrapEvent(t, stream)
assertServerTimeBootstrapEvent(t, event, newTestResponseSignerPublicKey(), "request-123", "trace-123", testCurrentTime.UnixMilli())
_, err = stream.Recv()
require.ErrorIs(t, err, io.EOF)
require.False(t, stream.Receive())
require.NoError(t, stream.Err())
err = subscribeEventsError(t, context.Background(), client, req)
require.Error(t, err)
assert.Equal(t, codes.FailedPrecondition, status.Code(err))
assert.Equal(t, "request replay detected", status.Convert(err).Message())
assert.Equal(t, connect.CodeFailedPrecondition, connect.CodeOf(err))
assert.Equal(t, "request replay detected", connectErrorMessage(t, err))
assert.Equal(t, 1, delegate.subscribeCalls)
}
@@ -204,17 +182,12 @@ func TestExecuteCommandAllowsSameRequestIDAcrossDistinctSessions(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := newEdgeClient(t, addr)
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequestWithSessionAndRequestID("device-session-123", "request-shared"))
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequestWithSessionAndRequestID("device-session-123", "request-shared")))
require.NoError(t, err)
_, err = client.ExecuteCommand(context.Background(), newValidExecuteCommandRequestWithSessionAndRequestID("device-session-456", "request-shared"))
_, err = client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequestWithSessionAndRequestID("device-session-456", "request-shared")))
require.NoError(t, err)
assert.Equal(t, 2, delegate.executeCalls)
@@ -243,26 +216,21 @@ func TestSubscribeEventsAllowsSameRequestIDAcrossDistinctSessions(t *testing.T)
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := newEdgeClient(t, addr)
client := gatewayv1.NewEdgeGatewayClient(conn)
stream, err := client.SubscribeEvents(context.Background(), newValidSubscribeEventsRequestWithSessionAndRequestID("device-session-123", "request-shared"))
stream, err := client.SubscribeEvents(context.Background(), connect.NewRequest(newValidSubscribeEventsRequestWithSessionAndRequestID("device-session-123", "request-shared")))
require.NoError(t, err)
event := recvBootstrapEvent(t, stream)
assertServerTimeBootstrapEvent(t, event, newTestResponseSignerPublicKey(), "request-shared", "trace-123", testCurrentTime.UnixMilli())
_, err = stream.Recv()
require.ErrorIs(t, err, io.EOF)
require.False(t, stream.Receive())
require.NoError(t, stream.Err())
stream, err = client.SubscribeEvents(context.Background(), newValidSubscribeEventsRequestWithSessionAndRequestID("device-session-456", "request-shared"))
stream, err = client.SubscribeEvents(context.Background(), connect.NewRequest(newValidSubscribeEventsRequestWithSessionAndRequestID("device-session-456", "request-shared")))
require.NoError(t, err)
event = recvBootstrapEvent(t, stream)
assertServerTimeBootstrapEvent(t, event, newTestResponseSignerPublicKey(), "request-shared", "trace-123", testCurrentTime.UnixMilli())
_, err = stream.Recv()
require.ErrorIs(t, err, io.EOF)
require.False(t, stream.Receive())
require.NoError(t, stream.Err())
assert.Equal(t, 2, delegate.subscribeCalls)
}
@@ -283,16 +251,11 @@ func TestExecuteCommandRejectsReplayStoreUnavailable(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequest())
client := newEdgeClient(t, addr)
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequest()))
require.Error(t, err)
assert.Equal(t, codes.Unavailable, status.Code(err))
assert.Equal(t, "replay store is unavailable", status.Convert(err).Message())
assert.Equal(t, connect.CodeUnavailable, connect.CodeOf(err))
assert.Equal(t, "replay store is unavailable", connectErrorMessage(t, err))
assert.Zero(t, delegate.executeCalls)
}
@@ -312,16 +275,11 @@ func TestSubscribeEventsRejectsReplayStoreUnavailable(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
client := newEdgeClient(t, addr)
err := subscribeEventsError(t, context.Background(), client, newValidSubscribeEventsRequest())
require.Error(t, err)
assert.Equal(t, codes.Unavailable, status.Code(err))
assert.Equal(t, "replay store is unavailable", status.Convert(err).Message())
assert.Equal(t, connect.CodeUnavailable, connect.CodeOf(err))
assert.Equal(t, "replay store is unavailable", connectErrorMessage(t, err))
assert.Zero(t, delegate.subscribeCalls)
}
@@ -353,15 +311,10 @@ func TestExecuteCommandFreshRequestReachesDelegateAndUsesDynamicReplayTTL(t *tes
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
response, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequest())
client := newEdgeClient(t, addr)
response, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequest()))
require.NoError(t, err)
assert.Equal(t, "request-123", response.GetRequestId())
assert.Equal(t, "request-123", response.Msg.GetRequestId())
assert.Equal(t, "device-session-123", reservedDeviceSessionID)
assert.Equal(t, "request-123", reservedRequestID)
assert.Equal(t, testFreshnessWindow, reservedTTL)
@@ -394,18 +347,13 @@ func TestSubscribeEventsFreshRequestReachesDelegateAndUsesDynamicReplayTTL(t *te
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
stream, err := client.SubscribeEvents(context.Background(), newValidSubscribeEventsRequest())
client := newEdgeClient(t, addr)
stream, err := client.SubscribeEvents(context.Background(), connect.NewRequest(newValidSubscribeEventsRequest()))
require.NoError(t, err)
event := recvBootstrapEvent(t, stream)
assertServerTimeBootstrapEvent(t, event, newTestResponseSignerPublicKey(), "request-123", "trace-123", testCurrentTime.UnixMilli())
_, err = stream.Recv()
require.ErrorIs(t, err, io.EOF)
require.False(t, stream.Receive())
require.NoError(t, stream.Err())
assert.Equal(t, testFreshnessWindow, reservedTTL)
assert.Equal(t, 1, delegate.subscribeCalls)
}
@@ -434,15 +382,10 @@ func TestExecuteCommandFutureSkewUsesExtendedReplayTTL(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
client := newEdgeClient(t, addr)
_, err := client.ExecuteCommand(
context.Background(),
newValidExecuteCommandRequestWithTimestamp("device-session-123", "request-123", testCurrentTime.Add(2*time.Minute).UnixMilli()),
connect.NewRequest(newValidExecuteCommandRequestWithTimestamp("device-session-123", "request-123", testCurrentTime.Add(2*time.Minute).UnixMilli())),
)
require.NoError(t, err)
assert.Equal(t, 7*time.Minute, reservedTTL)
@@ -473,15 +416,10 @@ func TestExecuteCommandBoundaryFreshnessUsesMinimumReplayTTL(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
client := newEdgeClient(t, addr)
_, err := client.ExecuteCommand(
context.Background(),
newValidExecuteCommandRequestWithTimestamp("device-session-123", "request-123", testCurrentTime.Add(-testFreshnessWindow).UnixMilli()),
connect.NewRequest(newValidExecuteCommandRequestWithTimestamp("device-session-123", "request-123", testCurrentTime.Add(-testFreshnessWindow).UnixMilli())),
)
require.NoError(t, err)
assert.Equal(t, minimumReplayReservationTTL, reservedTTL)
@@ -12,59 +12,21 @@ import (
"go.opentelemetry.io/otel/attribute"
"go.uber.org/zap"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
func observabilityUnaryInterceptor(logger *zap.Logger, metrics *telemetry.Runtime) grpc.UnaryServerInterceptor {
if logger == nil {
logger = zap.NewNop()
}
return func(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) {
start := time.Now()
resp, err := handler(ctx, req)
recordGRPCRequest(logger, metrics, ctx, info.FullMethod, req, resp, err, time.Since(start), "unary")
return resp, err
}
}
func observabilityStreamInterceptor(logger *zap.Logger, metrics *telemetry.Runtime) grpc.StreamServerInterceptor {
if logger == nil {
logger = zap.NewNop()
}
return func(srv any, stream grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
start := time.Now()
wrapped := &observabilityServerStream{ServerStream: stream}
err := handler(srv, wrapped)
recordGRPCRequest(logger, metrics, stream.Context(), info.FullMethod, wrapped.request, nil, err, time.Since(start), "stream")
return err
}
}
type observabilityServerStream struct {
grpc.ServerStream
request any
}
func (s *observabilityServerStream) RecvMsg(m any) error {
err := s.ServerStream.RecvMsg(m)
if err == nil && s.request == nil {
s.request = m
}
return err
}
func recordGRPCRequest(logger *zap.Logger, metrics *telemetry.Runtime, ctx context.Context, fullMethod string, req any, resp any, err error, duration time.Duration, streamKind string) {
// recordEdgeRequest emits the structured log entry and the
// `gateway.authenticated_grpc.*` metric pair for one authenticated edge
// request or stream outcome. The transport parameter labels the wire
// protocol the request travelled over (`connect`, `grpc`, or `grpc-web`),
// preserving stable observability semantics across the unified Connect-go
// listener.
func recordEdgeRequest(logger *zap.Logger, metrics *telemetry.Runtime, ctx context.Context, transport string, fullMethod string, req any, resp any, err error, duration time.Duration, streamKind string) {
rpcMethod := path.Base(fullMethod)
messageType, requestID, traceID := grpcEnvelopeFields(req)
resultCode := grpcResultCode(resp)
grpcCode, grpcMessage, outcome := grpcOutcome(err)
messageType, requestID, traceID := envelopeFieldsFromRequest(req)
resultCode := resultCodeFromResponse(resp)
grpcCode, grpcMessage, outcome := outcomeFromError(err)
rejectReason := telemetry.RejectReason(outcome)
attrs := []attribute.KeyValue{
@@ -82,7 +44,7 @@ func recordGRPCRequest(logger *zap.Logger, metrics *telemetry.Runtime, ctx conte
fields := []zap.Field{
zap.String("component", "authenticated_grpc"),
zap.String("transport", "grpc"),
zap.String("transport", transport),
zap.String("stream_kind", streamKind),
zap.String("rpc_method", rpcMethod),
zap.String("message_type", messageType),
@@ -106,15 +68,15 @@ func recordGRPCRequest(logger *zap.Logger, metrics *telemetry.Runtime, ctx conte
switch outcome {
case telemetry.EdgeOutcomeSuccess:
logger.Info("authenticated gRPC request completed", fields...)
logger.Info("authenticated edge request completed", fields...)
case telemetry.EdgeOutcomeBackendUnavailable, telemetry.EdgeOutcomeDownstreamUnavailable, telemetry.EdgeOutcomeInternalError:
logger.Error("authenticated gRPC request failed", fields...)
logger.Error("authenticated edge request failed", fields...)
default:
logger.Warn("authenticated gRPC request rejected", fields...)
logger.Warn("authenticated edge request rejected", fields...)
}
}
func grpcEnvelopeFields(req any) (messageType string, requestID string, traceID string) {
func envelopeFieldsFromRequest(req any) (messageType string, requestID string, traceID string) {
switch typed := req.(type) {
case *gatewayv1.ExecuteCommandRequest:
return typed.GetMessageType(), typed.GetRequestId(), typed.GetTraceId()
@@ -125,7 +87,7 @@ func grpcEnvelopeFields(req any) (messageType string, requestID string, traceID
}
}
func grpcResultCode(resp any) string {
func resultCodeFromResponse(resp any) string {
typed, ok := resp.(*gatewayv1.ExecuteCommandResponse)
if !ok {
return ""
@@ -134,7 +96,7 @@ func grpcResultCode(resp any) string {
return typed.GetResultCode()
}
func grpcOutcome(err error) (codes.Code, string, telemetry.EdgeOutcome) {
func outcomeFromError(err error) (codes.Code, string, telemetry.EdgeOutcome) {
switch {
case err == nil:
return codes.OK, "", telemetry.EdgeOutcomeSuccess
@@ -6,12 +6,10 @@ import (
"testing"
"galaxy/gateway/internal/session"
gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
"connectrpc.com/connect"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
func TestExecuteCommandRejectsPayloadHashWithInvalidLength(t *testing.T) {
@@ -25,19 +23,15 @@ func TestExecuteCommandRejectsPayloadHashWithInvalidLength(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := newEdgeClient(t, addr)
req := newValidExecuteCommandRequest()
req.PayloadHash = []byte("short")
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), req)
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(req))
require.Error(t, err)
assert.Equal(t, codes.InvalidArgument, status.Code(err))
assert.Equal(t, "payload_hash must be a 32-byte SHA-256 digest", status.Convert(err).Message())
assert.Equal(t, connect.CodeInvalidArgument, connect.CodeOf(err))
assert.Equal(t, "payload_hash must be a 32-byte SHA-256 digest", connectErrorMessage(t, err))
assert.Zero(t, delegate.executeCalls)
}
@@ -52,20 +46,16 @@ func TestExecuteCommandRejectsPayloadHashMismatch(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := newEdgeClient(t, addr)
req := newValidExecuteCommandRequest()
sum := sha256.Sum256([]byte("other"))
req.PayloadHash = sum[:]
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), req)
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(req))
require.Error(t, err)
assert.Equal(t, codes.InvalidArgument, status.Code(err))
assert.Equal(t, "payload_hash does not match payload_bytes", status.Convert(err).Message())
assert.Equal(t, connect.CodeInvalidArgument, connect.CodeOf(err))
assert.Equal(t, "payload_hash does not match payload_bytes", connectErrorMessage(t, err))
assert.Zero(t, delegate.executeCalls)
}
@@ -80,19 +70,15 @@ func TestSubscribeEventsRejectsPayloadHashWithInvalidLength(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := newEdgeClient(t, addr)
req := newValidSubscribeEventsRequest()
req.PayloadHash = []byte("short")
client := gatewayv1.NewEdgeGatewayClient(conn)
err := subscribeEventsError(t, context.Background(), client, req)
require.Error(t, err)
assert.Equal(t, codes.InvalidArgument, status.Code(err))
assert.Equal(t, "payload_hash must be a 32-byte SHA-256 digest", status.Convert(err).Message())
assert.Equal(t, connect.CodeInvalidArgument, connect.CodeOf(err))
assert.Equal(t, "payload_hash must be a 32-byte SHA-256 digest", connectErrorMessage(t, err))
assert.Zero(t, delegate.subscribeCalls)
}
@@ -107,19 +93,15 @@ func TestSubscribeEventsRejectsPayloadHashMismatch(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := newEdgeClient(t, addr)
req := newValidSubscribeEventsRequest()
sum := sha256.Sum256([]byte("other"))
req.PayloadHash = sum[:]
client := gatewayv1.NewEdgeGatewayClient(conn)
err := subscribeEventsError(t, context.Background(), client, req)
require.Error(t, err)
assert.Equal(t, codes.InvalidArgument, status.Code(err))
assert.Equal(t, "payload_hash does not match payload_bytes", status.Convert(err).Message())
assert.Equal(t, connect.CodeInvalidArgument, connect.CodeOf(err))
assert.Equal(t, "payload_hash does not match payload_bytes", connectErrorMessage(t, err))
assert.Zero(t, delegate.subscribeCalls)
}
@@ -3,8 +3,6 @@ package grpcapi
import (
"context"
"errors"
"net"
"strings"
"galaxy/gateway/internal/config"
"galaxy/gateway/internal/ratelimit"
@@ -13,7 +11,6 @@ import (
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/peer"
"google.golang.org/grpc/status"
)
@@ -41,7 +38,7 @@ var (
ErrAuthenticatedPolicyUnavailable = errors.New("authenticated request policy is unavailable")
)
// AuthenticatedRequestLimiter applies authenticated gRPC rate-limit policy to
// AuthenticatedRequestLimiter applies authenticated edge rate-limit policy to
// one concrete bucket key.
type AuthenticatedRequestLimiter interface {
// Reserve evaluates key under policy and reports whether the request may
@@ -52,10 +49,11 @@ type AuthenticatedRequestLimiter interface {
// AuthenticatedRequest describes the authenticated request metadata exposed to
// the edge-policy hook.
type AuthenticatedRequest struct {
// RPCMethod identifies the public gRPC method being processed.
// RPCMethod identifies the public RPC method being processed.
RPCMethod string
// PeerIP is the transport peer IP derived from the gRPC connection.
// PeerIP is the transport peer IP host part derived from the
// authenticated edge HTTP listener peer address.
PeerIP string
// MessageClass is the stable rate-limit and policy class. The gateway uses
@@ -258,23 +256,21 @@ func authenticatedMessageClass(messageType string) string {
return messageType
}
type peerIPContextKey struct{}
// contextWithPeerIP attaches the authenticated edge transport peer IP to ctx.
// It is set by the transport interceptor before the service decorator stack
// runs, and read back via peerIPFromContext.
func contextWithPeerIP(ctx context.Context, ip string) context.Context {
return context.WithValue(ctx, peerIPContextKey{}, ip)
}
func peerIPFromContext(ctx context.Context) string {
peerInfo, ok := peer.FromContext(ctx)
if !ok || peerInfo.Addr == nil {
if ip, ok := ctx.Value(peerIPContextKey{}).(string); ok && ip != "" {
return ip
}
return unknownAuthenticatedPeerIP
}
value := strings.TrimSpace(peerInfo.Addr.String())
if value == "" {
return unknownAuthenticatedPeerIP
}
host, _, err := net.SplitHostPort(value)
if err == nil && host != "" {
return host
}
return value
}
type noopAuthenticatedRequestPolicy struct{}
@@ -3,7 +3,6 @@ package grpcapi
import (
"context"
"fmt"
"io"
"net"
"net/http"
"strings"
@@ -17,10 +16,9 @@ import (
"galaxy/gateway/internal/session"
gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
"connectrpc.com/connect"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
func TestExecuteCommandRateLimitsByIP(t *testing.T) {
@@ -41,20 +39,15 @@ func TestExecuteCommandRateLimitsByIP(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := newEdgeClient(t, addr)
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequestWithSessionAndRequestID("device-session-1", "request-1"))
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequestWithSessionAndRequestID("device-session-1", "request-1")))
require.NoError(t, err)
_, err = client.ExecuteCommand(context.Background(), newValidExecuteCommandRequestWithSessionAndRequestID("device-session-2", "request-2"))
_, err = client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequestWithSessionAndRequestID("device-session-2", "request-2")))
require.Error(t, err)
assert.Equal(t, codes.ResourceExhausted, status.Code(err))
assert.Equal(t, "authenticated request rate limit exceeded", status.Convert(err).Message())
assert.Equal(t, connect.CodeResourceExhausted, connect.CodeOf(err))
assert.Equal(t, "authenticated request rate limit exceeded", connectErrorMessage(t, err))
assert.Equal(t, 1, delegate.executeCalls)
}
@@ -76,21 +69,16 @@ func TestExecuteCommandRateLimitsBySession(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := newEdgeClient(t, addr)
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequestWithSessionAndRequestID("device-session-1", "request-1"))
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequestWithSessionAndRequestID("device-session-1", "request-1")))
require.NoError(t, err)
_, err = client.ExecuteCommand(context.Background(), newValidExecuteCommandRequestWithSessionAndRequestID("device-session-1", "request-2"))
_, err = client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequestWithSessionAndRequestID("device-session-1", "request-2")))
require.Error(t, err)
assert.Equal(t, codes.ResourceExhausted, status.Code(err))
assert.Equal(t, connect.CodeResourceExhausted, connect.CodeOf(err))
_, err = client.ExecuteCommand(context.Background(), newValidExecuteCommandRequestWithSessionAndRequestID("device-session-2", "request-3"))
_, err = client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequestWithSessionAndRequestID("device-session-2", "request-3")))
require.NoError(t, err)
assert.Equal(t, 2, delegate.executeCalls)
@@ -118,21 +106,16 @@ func TestExecuteCommandRateLimitsByUser(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := newEdgeClient(t, addr)
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequestWithSessionAndRequestID("device-session-1", "request-1"))
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequestWithSessionAndRequestID("device-session-1", "request-1")))
require.NoError(t, err)
_, err = client.ExecuteCommand(context.Background(), newValidExecuteCommandRequestWithSessionAndRequestID("device-session-2", "request-2"))
_, err = client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequestWithSessionAndRequestID("device-session-2", "request-2")))
require.Error(t, err)
assert.Equal(t, codes.ResourceExhausted, status.Code(err))
assert.Equal(t, connect.CodeResourceExhausted, connect.CodeOf(err))
_, err = client.ExecuteCommand(context.Background(), newValidExecuteCommandRequestWithSessionAndRequestID("device-session-3", "request-3"))
_, err = client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequestWithSessionAndRequestID("device-session-3", "request-3")))
require.NoError(t, err)
assert.Equal(t, 2, delegate.executeCalls)
@@ -159,21 +142,16 @@ func TestExecuteCommandRateLimitsByMessageClass(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := newEdgeClient(t, addr)
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequestWithMessageType("device-session-1", "request-1", "fleet.move"))
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequestWithMessageType("device-session-1", "request-1", "fleet.move")))
require.NoError(t, err)
_, err = client.ExecuteCommand(context.Background(), newValidExecuteCommandRequestWithMessageType("device-session-2", "request-2", "fleet.move"))
_, err = client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequestWithMessageType("device-session-2", "request-2", "fleet.move")))
require.Error(t, err)
assert.Equal(t, codes.ResourceExhausted, status.Code(err))
assert.Equal(t, connect.CodeResourceExhausted, connect.CodeOf(err))
_, err = client.ExecuteCommand(context.Background(), newValidExecuteCommandRequestWithMessageType("device-session-2", "request-3", "fleet.rename"))
_, err = client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequestWithMessageType("device-session-2", "request-3", "fleet.rename")))
require.NoError(t, err)
assert.Equal(t, 2, delegate.executeCalls)
@@ -193,13 +171,8 @@ func TestAuthenticatedPolicyHookReceivesVerifiedRequest(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequest())
client := newEdgeClient(t, addr)
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequest()))
require.NoError(t, err)
require.Len(t, policy.requests, 1)
@@ -228,16 +201,11 @@ func TestExecuteCommandPolicyRejectMapsToPermissionDenied(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequest())
client := newEdgeClient(t, addr)
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequest()))
require.Error(t, err)
assert.Equal(t, codes.PermissionDenied, status.Code(err))
assert.Equal(t, "authenticated request rejected by edge policy", status.Convert(err).Message())
assert.Equal(t, connect.CodePermissionDenied, connect.CodeOf(err))
assert.Equal(t, "authenticated request rejected by edge policy", connectErrorMessage(t, err))
assert.Zero(t, delegate.executeCalls)
}
@@ -259,24 +227,19 @@ func TestSubscribeEventsRateLimitRejectsStream(t *testing.T) {
defer runGateway.stop(t)
addr := waitForListenAddr(t, server)
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := newEdgeClient(t, addr)
client := gatewayv1.NewEdgeGatewayClient(conn)
stream, err := client.SubscribeEvents(context.Background(), newValidSubscribeEventsRequestWithSessionAndRequestID("device-session-1", "request-1"))
stream, err := client.SubscribeEvents(context.Background(), connect.NewRequest(newValidSubscribeEventsRequestWithSessionAndRequestID("device-session-1", "request-1")))
require.NoError(t, err)
event := recvBootstrapEvent(t, stream)
assertServerTimeBootstrapEvent(t, event, newTestResponseSignerPublicKey(), "request-1", "trace-123", testCurrentTime.UnixMilli())
_, err = stream.Recv()
require.ErrorIs(t, err, io.EOF)
require.False(t, stream.Receive())
require.NoError(t, stream.Err())
err = subscribeEventsError(t, context.Background(), client, newValidSubscribeEventsRequestWithSessionAndRequestID("device-session-2", "request-2"))
require.Error(t, err)
assert.Equal(t, codes.ResourceExhausted, status.Code(err))
assert.Equal(t, "authenticated request rate limit exceeded", status.Convert(err).Message())
assert.Equal(t, connect.CodeResourceExhausted, connect.CodeOf(err))
assert.Equal(t, "authenticated request rate limit exceeded", connectErrorMessage(t, err))
assert.Equal(t, 1, delegate.subscribeCalls)
}
@@ -342,13 +305,8 @@ func TestAuthenticatedRateLimitsStayIsolatedFromPublicREST(t *testing.T) {
require.NoError(t, firstPublic.Body.Close())
require.NoError(t, secondPublic.Body.Close())
conn := dialGatewayClient(t, addr)
defer func() {
require.NoError(t, conn.Close())
}()
client := gatewayv1.NewEdgeGatewayClient(conn)
_, err := client.ExecuteCommand(context.Background(), newValidExecuteCommandRequest())
client := newEdgeClient(t, addr)
_, err := client.ExecuteCommand(context.Background(), connect.NewRequest(newValidExecuteCommandRequest()))
require.NoError(t, err)
}
