docs: reorder & testing
@@ -4,10 +4,27 @@ This repository hosts the Galaxy Game project.
 
 ## Sources of truth
 
-- `ARCHITECTURE.md` — global architecture, project-wide rules
-  and links to the implemented services.
-- `galaxy/<service>/README.md` - service conventions and agreements
-  for the implemented or planned to be implemented service.
+- `docs/ARCHITECTURE.md` — global architecture, security model,
+  cross-service contracts, and project-wide rules.
+- `docs/FUNCTIONAL.md` — per-domain user stories that describe what each
+  user-visible operation does, with the exact gateway and backend logic
+  for it. Starting point for any change request that touches behaviour.
+- `docs/FUNCTIONAL_ru.md` — Russian translation of `docs/FUNCTIONAL.md`,
+  maintained as a convenience for the project owner. **Not a source of
+  truth** — when the two files disagree, the English version wins.
+  Every point edit applied to `docs/FUNCTIONAL.md` must also be
+  mirrored into `docs/FUNCTIONAL_ru.md` in the same patch (translate
+  the changed paragraphs only, do not re-translate the whole file).
+  A full re-translation only happens on explicit owner request.
+- `docs/TESTING.md` — testing layers (unit / integration), the
+  integration runbook, and the principles every test must follow
+  (no-op observability for testcontainers, `t.Fatal` on
+  infrastructure breakages, label-driven preclean). Read before
+  adding tests or modifying the integration harness.
+- `galaxy/<service>/README.md` — service conventions, layout,
+  configuration, and operations for an implemented or planned service.
+- `galaxy/<service>/openapi.yaml` and `*.proto` files — exact wire
+  contracts for REST and gRPC surfaces.
 
 ## Planning of service implementation and Implementing Plan
 
@@ -20,7 +37,7 @@ This repository hosts the Galaxy Game project.
 ## Decision records when implementing stages from PLAN.md
 
 - Stage-related discussion and decisions do NOT live in `README.md` or
-  `ARCHITECTURE.md`. Those files describe the current state, not the history.
+  `docs/ARCHITECTURE.md`. Those files describe the current state, not the history.
 - Each non-trivial decision gets its own `.md` under the module's `docs/`,
   referenced from the relevant `README.md`.
 - Any agreement reached during interactive planning that is not obvious from
@@ -33,6 +50,19 @@ The existing codebase of `galaxy/<service>` may be modified or extended when a
 plan stage requires it. All such changes must be covered by new or updated tests
 and reflected in documentation when they affect documented behavior.
 
+## Pre-production migration rule
+
+The platform is not yet in production. Schema changes for `backend` go
+into the existing `backend/internal/postgres/migrations/00001_init.sql`
+file rather than into new `00002_*`-prefixed files. Local databases and
+integration test harnesses are recreated from scratch on every pull.
+
+**This rule is removed before the first production deployment.** From
+that point on every schema change becomes a new migration file with a
+monotonically increasing prefix, and `00001_init.sql` becomes immutable
+history. See `backend/internal/postgres/migrations/README.md` for
+details.
+
 ## Documentation discipline
 
 - Code and docs are kept in sync. If an implementation changes behavior
@@ -45,7 +75,33 @@ and reflected in documentation when they affect documented behavior.
   doc with a reference kept.
 - Cross-module impact: if a new agreement requires changes in
   already-implemented modules, make those changes — code, tests, docs — in
-  the same patch, and record the new rule in `ARCHITECTURE.md`.
+  the same patch, and record the new rule in `docs/ARCHITECTURE.md`.
+
+## Documentation synchronisation
+
+The same behaviour is described in several parallel sources: code,
+`docs/ARCHITECTURE.md`, `docs/FUNCTIONAL.md` (with its Russian mirror
+`docs/FUNCTIONAL_ru.md`), the affected service `README.md`, the
+relevant `openapi.yaml` or `*.proto`, and the per-stage decision
+records under `galaxy/<service>/docs/`. They must never disagree.
+
+- Any patch that changes user-visible behaviour, an API contract, or a
+  cross-service flow updates every affected source in the same change
+  set — never one source in this patch and another later.
+- Before declaring a change complete, read the relevant sections of
+  `docs/ARCHITECTURE.md`, `docs/FUNCTIONAL.md`, the affected service
+  README, the relevant `openapi.yaml` or `*.proto`, and the implementing
+  code; confirm they describe the same behaviour.
+- When two sources disagree about existing behaviour, do not pick one
+  silently. Decide which one is authoritative, fix the contradiction in
+  the same patch, and call out the change in the response. If the
+  resolution is non-obvious, escalate to the user before proceeding.
+- When touching code, also re-read inline package and Go Doc Comments in
+  the affected packages and update them when they no longer match the
+  code.
+- When `docs/FUNCTIONAL.md` changes, mirror the same change into
+  `docs/FUNCTIONAL_ru.md` (translate only the touched paragraphs).
+  Skipping the mirror is treated as an incomplete patch.
 
 ## Dependencies
 
-210
@@ -1,210 +0,0 @@
-# TESTING.md
-
-Test strategy for the [Galaxy Game](ARCHITECTURE.md) platform after the
-consolidation that moved every domain concern into `galaxy/backend`.
-The platform now ships three executables — `gateway`, `backend`,
-`game` (the engine container) — plus the shared `pkg/*` libraries.
-This document defines the layering of tests, the responsibilities of
-each layer, and the mandatory minimum coverage per executable.
-
-## Three layers
-
-1. **Service tests** verify a single executable in isolation. They
-   live next to the implementation as `*_test.go` files and use only
-   in-process or testcontainers-managed dependencies.
-2. **Inter-service integration tests** verify one cross-process seam
-   between two real executables (most often `gateway ↔ backend`,
-   sometimes `backend ↔ game`). They live in
-   [`integration/`](integration/) and drive the platform from outside
-   the trust boundary.
-3. **Full system tests** are a small, focused subset of the
-   integration suite that walks an entire user-facing flow from the
-   client edge through every component the flow touches. They live in
-   the same `integration/` module and reuse the same fixtures.
-
-Service tests are the cheapest and the broadest; integration tests
-are slower and broader; full-system tests are the slowest and the
-narrowest. The pyramid stays in this order — never replace a service
-test with a system test.
-
-## Global rules
-
-- Every executable owns the service tests for its packages. Adding a
-  new package without `_test.go` files is a review block.
-- Every cross-process seam must have at least one passing
-  inter-service test before the seam is wired in production.
-- Async flows (mail outbox, notification routes, runtime workers,
-  push gRPC) get tests for both the success path and the retry /
-  dead-letter path, and a duplicate-event safety check.
-- Sync flows get happy path, validation failure, timeout
-  propagation, and dependency unavailable.
-- Every external or trusted-internal API must have contract tests
-  alongside behaviour tests. `backend/internal/server/contract_test.go`
-  is the reference; gateway runs the same shape against
-  `gateway/openapi.yaml`.
-- The integration suite must keep running on a developer machine
-  with Docker available; tests skip cleanly with a clear message
-  when the daemon is unreachable.
-
-## Service-specific coverage
-
-### `galaxy/gateway`
-
-Service tests live under `gateway/internal/`:
-
-- Public REST routing, error projection, and OpenAPI contract
-  validation.
-- Authenticated gRPC envelope verification (`grpcapi.Server`):
-  signature, payload hash, freshness window, anti-replay reservation,
-  unknown / revoked sessions.
-- Session cache (`session.BackendCache`) — the only implementation
-  in the codebase, a thin wrapper around the `backendclient.RESTClient`
-  per-request lookup.
-- Response signing for unary responses and stream events
-  (`authn.ResponseSigner`).
-- Push hub (`push.Hub`) and push fan-out (`push_fanout.go`).
-- Replay store (`replay.RedisStore`) reservation semantics.
-- Anti-abuse rate limits per IP / session / user / message class.
-
-### `galaxy/backend`
-
-Service tests live under `backend/internal/`:
-
-- Startup wiring: `app.App` lifecycle, telemetry runtime, Postgres
-  pool, embedded migrations.
-- OpenAPI contract test (`internal/server/contract_test.go`):
-  validates every documented operation against the live gin engine.
-- Domain unit + e2e tests per package (`auth`, `user`, `admin`,
-  `lobby`, `runtime`, `mail`, `notification`, `geo`, `push`).
-  E2E tests (`*_e2e_test.go`) spin up a Postgres testcontainer.
-- Mail outbox: pickup with `SELECT FOR UPDATE SKIP LOCKED`, retry
-  with backoff plus jitter, dead-letter past `MAX_ATTEMPTS`,
-  resend semantics (`pending|retrying|dead_lettered` → re-armed,
-  `sent` → 409).
-- Notification: idempotent `Submit`, route materialisation, push +
-  email fan-out, `OnUserDeleted` cascade.
-- Lobby: state-machine transitions, RND canonicalisation, sweeper.
-- Runtime: per-game mutex serialisation, worker pool, scheduler,
-  reconciler, force-next-turn skip flag.
-- Admin: bcrypt cost 12, idempotent bootstrap, write-through cache,
-  409 Conflict on duplicate username, last-used timestamp.
-- Geo: counter increment on every authenticated request,
-  declared-country write at registration, fail-open semantics.
-
-### `galaxy/game`
-
-The engine has its own service tests under `game/`:
-
-- OpenAPI contract test (`game/openapi_contract_test.go`).
-- Engine lifecycle (init, status, turn, banish, command, order,
-  report) implemented by the engine package suites.
-
-## Integration test coverage (`integration/`)
-
-The integration module is the single home for inter-service and
-full-system tests. Every scenario calls `testenv.Bootstrap(t)` which
-brings up Postgres, Redis, mailpit, the backend image, the gateway
-image, and (when needed) the engine image.
-
-Mandatory inter-service coverage:
-
-- **Gateway ↔ Backend (public auth)**:
-  `auth_flow_test.go` — register + confirm with mailpit-captured
-  code; declared_country populated; idempotent re-confirm.
-- **Gateway ↔ Backend (authenticated user surface)**:
-  `user_account_test.go`, `user_profile_update_test.go`,
-  `user_settings_update_test.go` — signed envelope, FlatBuffers
-  payload, response signature verification, BCP 47 / IANA validation.
-- **Gateway ↔ Backend (anti-replay, signature, freshness)**:
-  `gateway_edge_test.go` — body-too-large, bad signature,
-  payload_hash mismatch, stale timestamp, unknown session,
-  unsupported `protocol_version`.
-- **Gateway ↔ Backend (push)**:
-  `notification_flow_test.go`, `session_revoke_test.go` — push
-  delivery to a SubscribeEvents stream and immediate stream close
-  on revoke.
-- **Gateway ↔ Backend (anti-replay)**:
-  `anti_replay_test.go` — duplicate `request_id` rejected.
-- **Backend ↔ Postgres** is exercised by every backend e2e test
-  through testcontainers; integration tests do not duplicate it.
-- **Backend ↔ SMTP**:
-  `mail_flow_test.go` — login-code email captured by mailpit; admin
-  list reaches `sent`; resend on `sent` returns 409.
-- **Backend ↔ Game engine**:
-  `runtime_lifecycle_test.go`, `engine_command_proxy_test.go` —
-  start container, healthz green, command, force-next-turn, finish,
-  race name promotion.
-- **Admin surface (REST)**:
-  `admin_flow_test.go`, `admin_global_games_view_test.go`,
-  `admin_engine_versions_test.go`, `admin_user_sanction_test.go` —
-  bootstrap + CRUD; visibility split between user and admin queries;
-  engine-version registry CRUD; permanent block cascade.
-- **Lobby flow without engine**:
-  `lobby_flow_test.go` — owner-creates-private-game →
-  open-enrollment → invite → redeem → memberships listing.
-- **Soft delete cascade**:
-  `soft_delete_test.go` — `POST /api/v1/user/account/delete`
-  cascades through auth/lobby/notification/geo, gateway rejects
-  subsequent calls.
-- **Geo counters**:
-  `geo_counter_increments_test.go` — multiple authenticated
-  requests with different `X-Forwarded-For` values increment the
-  user's per-country counter rows.
-
-Full-system flows beyond the inter-service set are intentionally
-limited; pick scenarios that exercise the longest vertical slice
-the platform supports today.
-
-## Out-of-scope (legacy architecture)
-
-The previous nine-service architecture defined components that no
-longer exist as distinct services. Their behaviour either lives
-inside `backend` (and is therefore covered by backend service or
-integration tests) or has been removed:
-
-- *Auth/Session Service*, *User Service*, *Notification Service*,
-  *Mail Service*, *Game Lobby Service*, *Runtime Manager*,
-  *Game Master*, *Admin Service* — consolidated into
-  `backend/internal/*`. Inter-service seams between these former
-  services are now in-process function calls; they are exercised by
-  backend service tests, not by integration tests.
-- *Geo Profile Service* (suspicious-multi-country detection,
-  review-recommended state, session blocking through geo) — not
-  implemented. The geo concern is intentionally minimal (see
-  `ARCHITECTURE.md §10`) and the test plan does not assert on
-  features we do not ship.
-- *Billing Service* — not implemented; no tests required until it
-  appears.
-
-## Practical execution
-
-During day-to-day development:
-
-- Run `go test ./<service>/...` for the service you are touching;
-  this is fast (Postgres testcontainers add ~3–5 s per package that
-  uses them).
-- Run `go test ./integration/...` before opening a PR that touches a
-  cross-process seam. Cold runs build three Docker images
-  (`galaxy/backend:integration`, `galaxy/gateway:integration`,
-  `galaxy/game:integration`) — budget ~3 min for the cold path,
-  ~75 s for the warm path.
-- CI runs every layer on every push. Integration tests skip with a
-  clear message if Docker is not available.
-
-## Adding a new test
-
-1. Decide the layer: service, inter-service, or system. A backend
-   change usually lands as service tests plus an integration test
-   for any new cross-process behaviour.
-2. Reuse `testenv` fixtures rather than rolling your own
-   container orchestration.
-3. Follow the bootstrap-per-test pattern; do not share a global
-   stack across tests.
-4. Make the test deterministic: explicit timeouts (no
-   `time.Sleep`), `t.Logf` instead of `fmt.Println`, no
-   `t.Parallel()` in `integration/`.
-5. Adding a new service-test file is fine; adding an
-   integration-test file requires that the seam be reachable
-   through gateway's REST or gRPC surface (or through backend HTTP
-   directly with `X-User-ID` for routes that gateway does not yet
-   register).
+8 -2
@@ -2,8 +2,8 @@
 
 # Build context is the workspace root (galaxy/), not the backend/
 # subdirectory, because the backend module pulls galaxy/{cronutil,error,
-# geoip,model,postgres,util} through the go.work replace directives.
-# Build with:
+# geoip,model,postgres,schema,transcoder,util} through the go.work
+# replace directives. Build with:
 #
 # docker build -t galaxy/backend:integration -f backend/Dockerfile .
 
@@ -16,6 +16,8 @@ COPY pkg/error/ ./pkg/error/
 COPY pkg/geoip/ ./pkg/geoip/
 COPY pkg/model/ ./pkg/model/
 COPY pkg/postgres/ ./pkg/postgres/
+COPY pkg/schema/ ./pkg/schema/
+COPY pkg/transcoder/ ./pkg/transcoder/
 COPY pkg/util/ ./pkg/util/
 COPY backend/ ./backend/
 
@@ -32,6 +34,8 @@ use (
 	./pkg/geoip
 	./pkg/model
 	./pkg/postgres
+	./pkg/schema
+	./pkg/transcoder
 	./pkg/util
 )
 
@@ -41,6 +45,8 @@ replace (
 	galaxy/geoip v0.0.0 => ./pkg/geoip
 	galaxy/model v0.0.0 => ./pkg/model
 	galaxy/postgres v0.0.0 => ./pkg/postgres
+	galaxy/schema v0.0.0 => ./pkg/schema
+	galaxy/transcoder v0.0.0 => ./pkg/transcoder
 	galaxy/util v0.0.0 => ./pkg/util
 )
 EOF
+1 -1
@@ -10,7 +10,7 @@ It should NOT be threated as source of truth for service functionality.
 
 This plan is the technical specification for implementing the
 consolidated Galaxy `backend` service. It is read together with
-`../ARCHITECTURE.md` (architecture and security model) and
+`../docs/ARCHITECTURE.md` (architecture and security model) and
 `README.md` (module layout, configuration, operations).
 
 After reading those two documents and this plan, an implementing
+23 -9
@@ -3,7 +3,7 @@
 `backend` is the consolidated business service of the Galaxy platform. It
 owns identity, sessions, lobby, game runtime, mail, notifications, geo
 signals, and administration. It is reachable only from `gateway` over
-the trusted network. See `../ARCHITECTURE.md` for the platform-level
+the trusted network. See `../docs/ARCHITECTURE.md` for the platform-level
 context, security model, and decision rationale.
 
 ## 1. Purpose
@@ -205,12 +205,21 @@ message PushEvent {
 
 - `ClientEvent` carries an opaque payload addressed to a `(user_id [,
   device_session_id])`. Gateway signs and forwards it to active client
-  subscriptions. The frame also carries `event_id`, `request_id`, and
-  `trace_id` correlation strings populated by backend producers
-  (notification dispatcher fills `event_id` from `route_id`,
-  `request_id` from the originating intent's `idempotency_key`, and
-  `trace_id` from the active span); gateway re-emits the values inside
-  the signed client envelope without re-interpreting them.
+  subscriptions. Producers do not pass raw bytes to `push.Service`;
+  instead they pass a typed `push.Event` (`Kind() string`,
+  `Marshal() ([]byte, error)`) and `push.Service` invokes Marshal at
+  publish time. Every notification catalog kind (§10) has a 1:1
+  FlatBuffers schema in `pkg/schema/fbs/notification.fbs`; the
+  notification dispatcher routes `(kind, payload)` to a typed event
+  through `notification.buildClientPushEvent`, so client decoders can
+  rely on a stable wire shape per kind. `push.JSONEvent` remains as a
+  safety net for kinds that arrive without a catalog schema. The frame
+  also carries `event_id`, `request_id`, and `trace_id` correlation
+  strings populated by backend producers (notification dispatcher
+  fills `event_id` from `route_id`, `request_id` from the originating
+  intent's `idempotency_key`, and `trace_id` from the active span);
+  gateway re-emits the values inside the signed client envelope
+  without re-interpreting them.
 - `SessionInvalidation` instructs gateway to close active subscriptions
   and reject in-flight requests for the affected sessions.
 - `cursor` is a monotonically increasing string. Gateway stores the last
@@ -275,7 +284,12 @@ Lifecycle:
    and either marks `sent` or schedules `next_attempt_at` with
    exponential backoff and jitter.
 3. After `BACKEND_MAIL_MAX_ATTEMPTS` the delivery moves to
-   `mail_dead_letters`. An admin notification intent is emitted.
+   `mail_dead_letters` and the worker writes an operator log line.
+   The `mail.dead_lettered` notification kind is reserved in the
+   catalog (see §10) but has no producer wired up yet, so no admin
+   email or push event is emitted today; admin observability for
+   dead letters relies on the log line and the
+   `/api/v1/admin/mail/dead-letters` listing.
 4. Operators can resend a `pending`, `retrying`, or `dead_lettered`
    delivery via `POST /api/v1/admin/mail/{delivery_id}/resend`. Resend
    on a `sent` delivery returns `409 Conflict` so operators cannot
@@ -469,4 +483,4 @@ Primary references:
 
 - [`PLAN.md`](PLAN.md) — historical staged build-up of the service.
 - [`openapi.yaml`](openapi.yaml) — REST contract.
-- [`../ARCHITECTURE.md`](../ARCHITECTURE.md) — workspace-level architecture.
+- [`../docs/ARCHITECTURE.md`](../docs/ARCHITECTURE.md) — workspace-level architecture.
@@ -278,6 +278,7 @@ func run(ctx context.Context) (err error) {
 
 	publicAuthHandlers := backendserver.NewPublicAuthHandlers(authSvc, logger)
 	internalSessionsHandlers := backendserver.NewInternalSessionsHandlers(authSvc, logger)
+	userSessionsHandlers := backendserver.NewUserSessionsHandlers(authSvc, logger)
 	userAccountHandlers := backendserver.NewUserAccountHandlers(userSvc, logger)
 	adminUsersHandlers := backendserver.NewAdminUsersHandlers(userSvc, logger)
 	adminAdminAccountsHandlers := backendserver.NewAdminAdminAccountsHandlers(adminSvc, logger)
@@ -309,6 +310,7 @@ func run(ctx context.Context) (err error) {
 		GeoCounter: geoSvc,
 		PublicAuth: publicAuthHandlers,
 		InternalSessions: internalSessionsHandlers,
+		UserSessions: userSessionsHandlers,
 		UserAccount: userAccountHandlers,
 		AdminUsers: adminUsersHandlers,
 		AdminAdminAccounts: adminAdminAccountsHandlers,
@@ -370,11 +372,15 @@ type authSessionRevoker struct {
 	svc *auth.Service
 }
 
-func (r *authSessionRevoker) RevokeAllForUser(ctx context.Context, userID uuid.UUID) error {
+func (r *authSessionRevoker) RevokeAllForUser(ctx context.Context, userID uuid.UUID, actor user.SessionRevokeActor) error {
 	if r == nil || r.svc == nil {
 		return nil
 	}
-	_, err := r.svc.RevokeAllForUser(ctx, userID)
+	_, err := r.svc.RevokeAllForUser(ctx, userID, auth.RevokeContext{
+		ActorKind: auth.ActorKind(actor.Kind),
+		ActorID: actor.ID,
+		Reason: actor.Reason,
+	})
 	return err
 }
 
@@ -18,5 +18,5 @@ Primary references:
 - [`../openapi.yaml`](../openapi.yaml) — REST contract.
 - [`../PLAN.md`](../PLAN.md) — historical staged build-up; kept for
   archaeology, not as a source of truth.
-- [`../../ARCHITECTURE.md`](../../ARCHITECTURE.md) — workspace-level
+- [`../../docs/ARCHITECTURE.md`](../../docs/ARCHITECTURE.md) — workspace-level
   architecture.
+24 -6
@@ -2,7 +2,7 @@
 
 This document collects the multi-step interactions inside `backend`
 that span domain modules. Each section assumes the reader is familiar
-with `../README.md` and `../../ARCHITECTURE.md`.
+with `../README.md` and `../../docs/ARCHITECTURE.md`.
 
 ## Registration (send + confirm)
 
@@ -39,11 +39,29 @@ sequenceDiagram
     Gateway-->>Client: 200 {device_session_id}
 ```
 
-Re-confirming the same `challenge_id` returns the existing session and
-clears the throttle window (the throttle reuses the latest un-consumed
-challenge rather than dropping the request). `accounts.user_name` is
-synthesised once and never overwritten on subsequent sign-ins; the same
-account always lands the same handle.
+A `challenge_id` is single-use: confirm consumes the row in the same
+transaction that inserts the device session, so a second confirm-email-code
+on the same id returns `400 invalid_request` (`auth.ErrChallengeNotFound`)
+together with unknown and expired ids. The opaque error code is
+deliberate — the API never differentiates "consumed", "expired", and
+"never existed" so an attacker cannot mine challenge_id state.
+
+Throttle reuses the latest un-consumed challenge rather than dropping
+the request: send-email-code returns the existing `challenge_id` to a
+caller hitting the throttle, leaving the wire shape identical to a
+fresh issue.
+
+`accounts.permanent_block` is checked twice on the registration path:
+once in send-email-code (no fresh challenge for an already-blocked
+address) and once in confirm-email-code after the verification code has
+matched (catches the case where an admin applied the block in the
+window between the two calls). Both paths surface
+`auth.ErrEmailPermanentlyBlocked` and the handler maps it to `400
+invalid_request` with message `email is not allowed`.
+
+`accounts.user_name` is synthesised once at first sign-in and never
+overwritten on subsequent sign-ins; the same account always lands the
+same handle.
 
 ## Authenticated request lifecycle
 
@@ -71,7 +71,7 @@ func startPostgres(t *testing.T) *sql.DB {
|
|||||||
cfg.PrimaryDSN = scopedDSN
|
cfg.PrimaryDSN = scopedDSN
|
||||||
cfg.OperationTimeout = pgOpTO
|
cfg.OperationTimeout = pgOpTO
|
||||||
|
|
||||||
db, err := pgshared.OpenPrimary(ctx, cfg)
|
db, err := pgshared.OpenPrimary(ctx, cfg, backendpg.NoObservabilityOptions()...)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
t.Fatalf("open primary: %v", err)
|
t.Fatalf("open primary: %v", err)
|
||||||
}
|
}
|
||||||
|
@@ -72,7 +72,7 @@ func startPostgres(t *testing.T) *sql.DB {
 	cfg.PrimaryDSN = scopedDSN
 	cfg.OperationTimeout = pgOpTO
 
-	db, err := pgshared.OpenPrimary(ctx, cfg)
+	db, err := pgshared.OpenPrimary(ctx, cfg, backendpg.NoObservabilityOptions()...)
 	if err != nil {
 		t.Fatalf("open primary: %v", err)
 	}
@@ -155,8 +155,7 @@ func (p *recordingPush) snapshot() []recordedPush {
 }
 
 // stubGeo implements auth.GeoService with no real lookups. The country
-// it returns is configurable per call via CountryForIP; LanguageForIP
-// returns "" so the auth flow exercises the "en" fallback path.
+// it returns is configurable per call via countryByIP.
 type stubGeo struct {
 	countryByIP map[string]string
 }
@@ -169,8 +168,6 @@ func (g *stubGeo) LookupCountry(sourceIP string) string {
 	return g.countryByIP[sourceIP]
 }
 
-func (g *stubGeo) LanguageForIP(_ string) string { return "" }
-
 func (g *stubGeo) SetDeclaredCountryAtRegistration(_ context.Context, _ uuid.UUID, _ string) error {
 	return nil
 }
@@ -279,7 +276,10 @@ func TestAuthEndToEnd(t *testing.T) {
 		t.Fatalf("GetSession user_id = %s, want %s", got.UserID, session.UserID)
 	}
 
-	revoked, err := svc.RevokeSession(ctx, session.DeviceSessionID)
+	revoked, err := svc.RevokeSession(ctx, session.DeviceSessionID, auth.RevokeContext{
+		ActorKind: auth.ActorKindUserSelf,
+		ActorID:   session.UserID.String(),
+	})
 	if err != nil {
 		t.Fatalf("RevokeSession: %v", err)
 	}
@@ -294,7 +294,10 @@ func TestAuthEndToEnd(t *testing.T) {
 		t.Fatalf("GetSession after revoke = %v, want ErrSessionNotFound", err)
 	}
 
-	again, err := svc.RevokeSession(ctx, session.DeviceSessionID)
+	again, err := svc.RevokeSession(ctx, session.DeviceSessionID, auth.RevokeContext{
+		ActorKind: auth.ActorKindUserSelf,
+		ActorID:   session.UserID.String(),
+	})
 	if err != nil {
 		t.Fatalf("idempotent RevokeSession: %v", err)
 	}
@@ -330,6 +333,49 @@ func TestSendEmailCodePermanentlyBlocked(t *testing.T) {
 	}
 }
 
+// TestConfirmEmailCodePermanentlyBlockedAfterSend covers the case where
+// an admin applies permanent_block in the window between send and
+// confirm. The send-time guard let the challenge through because the
+// account was unblocked at that moment; the confirm-time guard must
+// catch the late block and reject the registration.
+func TestConfirmEmailCodePermanentlyBlockedAfterSend(t *testing.T) {
+	db := startPostgres(t)
+	svc, mailer, _, _ := buildService(t, db)
+	ctx := context.Background()
+
+	const email = "blockedlater@example.test"
+
+	if _, err := db.Exec(`
+		INSERT INTO backend.accounts (
+			user_id, email, user_name, preferred_language, time_zone
+		) VALUES ($1, $2, $3, $4, $5)
+	`, uuid.New(), email, "Player-XXBLATER", "en", "UTC"); err != nil {
+		t.Fatalf("seed account: %v", err)
+	}
+
+	id, err := svc.SendEmailCode(ctx, email, "en", "", "")
+	if err != nil {
+		t.Fatalf("SendEmailCode: %v", err)
+	}
+	_, code, _ := mailer.snapshot()
+
+	if _, err := db.Exec(`
+		UPDATE backend.accounts SET permanent_block = true WHERE email = $1
+	`, email); err != nil {
+		t.Fatalf("apply permanent_block: %v", err)
+	}
+
+	_, err = svc.ConfirmEmailCode(ctx, auth.ConfirmInputs{
+		ChallengeID:     id,
+		Code:            code,
+		ClientPublicKey: randomKey(t),
+		TimeZone:        "UTC",
+	})
+	if !errors.Is(err, auth.ErrEmailPermanentlyBlocked) {
+		t.Fatalf("ConfirmEmailCode after block = %v, want ErrEmailPermanentlyBlocked", err)
+	}
+}
+
 func TestSendEmailCodeThrottleReusesChallenge(t *testing.T) {
 	db := startPostgres(t)
 	svc, mailer, _, _ := buildService(t, db)
@@ -468,7 +514,10 @@ func TestRevokeAllForUser(t *testing.T) {
 		deviceSessionIDs = append(deviceSessionIDs, sess.DeviceSessionID)
 	}
 
-	revoked, err := svc.RevokeAllForUser(ctx, userID)
+	revoked, err := svc.RevokeAllForUser(ctx, userID, auth.RevokeContext{
+		ActorKind: auth.ActorKindUserSelf,
+		ActorID:   userID.String(),
+	})
 	if err != nil {
 		t.Fatalf("RevokeAllForUser: %v", err)
 	}
@@ -485,7 +534,10 @@ func TestRevokeAllForUser(t *testing.T) {
 	}
 
 	// Idempotent: revoking again returns an empty slice.
-	again, err := svc.RevokeAllForUser(ctx, userID)
+	again, err := svc.RevokeAllForUser(ctx, userID, auth.RevokeContext{
+		ActorKind: auth.ActorKindUserSelf,
+		ActorID:   userID.String(),
+	})
 	if err != nil {
 		t.Fatalf("idempotent RevokeAllForUser: %v", err)
 	}
@@ -136,6 +136,29 @@ func (c *Cache) Remove(deviceSessionID uuid.UUID) {
 	}
 }
 
+// ListByUser returns a freshly-allocated snapshot of every cached
+// session belonging to userID. The user-surface "list my sessions"
+// handler consumes this. An empty slice is returned for an unknown
+// userID.
+func (c *Cache) ListByUser(userID uuid.UUID) []Session {
+	if c == nil {
+		return nil
+	}
+	c.mu.RLock()
+	defer c.mu.RUnlock()
+	set, ok := c.byUser[userID]
+	if !ok {
+		return nil
+	}
+	out := make([]Session, 0, len(set))
+	for id := range set {
+		if sess, ok := c.byID[id]; ok {
+			out = append(out, sess)
+		}
+	}
+	return out
+}
+
 // RemoveByUser evicts every cached entry belonging to userID and returns
 // the device_session_ids it removed. The returned slice is safe for the
 // caller to hold past the call — it is freshly allocated.
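The cache change above keeps two indexes (sessions by id, and per-user id sets) and copies the result under a read lock so callers can hold the slice after the lock is released. A standalone miniature of that pattern, with assumed names (`session`, `cache`, string ids instead of UUIDs), not the project's actual types:

```go
package main

import (
	"fmt"
	"sync"
)

type session struct{ ID, UserID string }

// cache keeps the two indexes: byID for direct lookups, byUser for
// "list my sessions". All reads copy so no caller aliases internals.
type cache struct {
	mu     sync.RWMutex
	byID   map[string]session
	byUser map[string]map[string]struct{}
}

func (c *cache) add(s session) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.byID[s.ID] = s
	if c.byUser[s.UserID] == nil {
		c.byUser[s.UserID] = map[string]struct{}{}
	}
	c.byUser[s.UserID][s.ID] = struct{}{}
}

// listByUser snapshots the user's sessions under RLock; the returned
// slice is freshly allocated, so holding it past the call is safe.
func (c *cache) listByUser(userID string) []session {
	c.mu.RLock()
	defer c.mu.RUnlock()
	set, ok := c.byUser[userID]
	if !ok {
		return nil
	}
	out := make([]session, 0, len(set))
	for id := range set {
		if s, ok := c.byID[id]; ok {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	c := &cache{byID: map[string]session{}, byUser: map[string]map[string]struct{}{}}
	c.add(session{ID: "s1", UserID: "u1"})
	c.add(session{ID: "s2", UserID: "u1"})
	fmt.Println(len(c.listByUser("u1")), len(c.listByUser("u2"))) // 2 0
}
```

Copying inside the lock is the design choice that matters: returning the internal map's values directly would let a caller race with concurrent `add`/`remove` calls after the RUnlock.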
@@ -28,10 +28,11 @@ import (
 //
 // locale (request body, BCP 47) takes precedence over acceptLanguage
 // (the standard HTTP header forwarded by gateway) when both are
-// supplied. The captured value is persisted on the challenge row as
-// `preferred_language`, replayed at confirm-email-code, and used only
-// for newly-registered accounts; existing accounts keep their stored
-// language.
+// supplied. When neither is supplied SendEmailCode falls back to the
+// platform default ("en"). The resolved value is persisted on the
+// challenge row as `preferred_language` and used by confirm-email-code
+// only for newly-registered accounts; existing accounts keep their
+// stored language.
 func (s *Service) SendEmailCode(
 	ctx context.Context,
 	email, locale, acceptLanguage, sourceIP string,
@@ -50,6 +51,9 @@ func (s *Service) SendEmailCode(
 	}
 
 	captured := pickCapturedLocale(locale, acceptLanguage)
+	if captured == "" {
+		captured = defaultLanguage
+	}
 
 	now := s.deps.Now()
 	windowStart := now.Add(-s.deps.Config.ChallengeThrottle.Window)
@@ -178,11 +182,23 @@ func (s *Service) ConfirmEmailCode(ctx context.Context, in ConfirmInputs) (Sessi
 		return Session{}, err
 	}
 
+	// Re-check permanent_block after verifying the code. SendEmailCode
+	// guards against fresh challenges for already-blocked addresses;
+	// this guard catches the case where an admin applied
+	// permanent_block in the window between send and confirm.
+	permanent, err := s.deps.Store.IsEmailPermanentlyBlocked(ctx, loaded.Email)
+	if err != nil {
+		return Session{}, fmt.Errorf("auth: check permanent block at confirm: %w", err)
+	}
+	if permanent {
+		return Session{}, ErrEmailPermanentlyBlocked
+	}
+
 	preferredLang := loaded.PreferredLanguage
 	if preferredLang == "" {
-		preferredLang = s.deps.Geo.LanguageForIP(in.SourceIP)
-	}
-	if preferredLang == "" {
+		// Defensive fallback: SendEmailCode now always persists a
+		// non-empty preferred_language, but a row written by an older
+		// build could still be empty.
 		preferredLang = defaultLanguage
 	}
@@ -33,12 +33,12 @@ type UserEnsurer interface {
 }
 
 // GeoService provides the geo helpers auth needs at confirm-email-code:
-// a country lookup for the `preferred_language` fallback and a
-// post-commit write of `accounts.declared_country`. Both methods are
-// best-effort — auth never blocks the registration flow on geo failures.
+// a country lookup that backfills `accounts.declared_country` for newly
+// registered accounts and a post-commit write of the same column. Both
+// methods are best-effort — auth never blocks the registration flow on
+// geo failures.
 type GeoService interface {
 	LookupCountry(sourceIP string) string
-	LanguageForIP(sourceIP string) string
 	SetDeclaredCountryAtRegistration(ctx context.Context, userID uuid.UUID, sourceIP string) error
 }
@@ -8,12 +8,48 @@ import (
 	"go.uber.org/zap"
 )
 
+// ActorKind enumerates the principals that can drive a session revoke.
+// The values are persisted into `session_revocations.actor_kind` and
+// must stay aligned with `user.SessionRevokeActor*` constants and any
+// admin/operator tooling that joins on the audit table.
+type ActorKind string
+
+const (
+	// ActorKindUserSelf indicates the session's owner initiated the
+	// revoke (logout self / logout-all-self through the user surface).
+	ActorKindUserSelf ActorKind = "user_self"
+
+	// ActorKindAdminSanction indicates an admin-applied sanction (most
+	// notably permanent_block) caused the revoke.
+	ActorKindAdminSanction ActorKind = "admin_sanction"
+
+	// ActorKindSoftDeleteUser indicates the session's owner triggered
+	// account soft-delete on themselves.
+	ActorKindSoftDeleteUser ActorKind = "soft_delete_user"
+
+	// ActorKindSoftDeleteAdmin indicates an admin soft-deleted the
+	// account and the cascade revoked the sessions.
+	ActorKindSoftDeleteAdmin ActorKind = "soft_delete_admin"
+)
+
+// RevokeContext records the audit metadata persisted alongside every
+// session revoke. ActorID is the stable identifier of the principal (a
+// user UUID for self-driven flows, an admin username for admin-driven
+// flows). Reason is a free-form note kept verbatim.
+type RevokeContext struct {
+	ActorKind ActorKind
+	ActorID   string
+	Reason    string
+}
+
 // GetSession returns the active session keyed by deviceSessionID. The
-// lookup is cache-only: the cache is the write-through projection of
-// `device_sessions WHERE status='active'`, so a miss means the session
-// is either revoked or absent. Either way the gateway sees
-// ErrSessionNotFound and treats the calling client as unauthenticated.
-func (s *Service) GetSession(_ context.Context, deviceSessionID uuid.UUID) (Session, error) {
+// lookup hits the cache; on a miss the session is either revoked or
+// absent. After a hit the call refreshes `last_seen_at` against
+// Postgres so admin observers see when each cached session was last
+// resolved by gateway. The refresh runs after the cache read and
+// updates the cached row in-place; failures are logged but never block
+// the lookup.
+func (s *Service) GetSession(ctx context.Context, deviceSessionID uuid.UUID) (Session, error) {
 	if deviceSessionID == uuid.Nil {
 		return Session{}, ErrSessionNotFound
 	}
@@ -21,31 +57,73 @@ func (s *Service) GetSession(_ context.Context, deviceSessionID uuid.UUID) (Sess
 	if !ok {
 		return Session{}, ErrSessionNotFound
 	}
+	now := s.deps.Now()
+	if updated, err := s.deps.Store.TouchSessionLastSeen(ctx, deviceSessionID, now); err == nil {
+		s.deps.Cache.Add(updated)
+		return updated, nil
+	} else if errors.Is(err, ErrSessionNotFound) {
+		// The row vanished between Cache.Get and the touch — treat as
+		// revoked from the caller's perspective.
+		s.deps.Cache.Remove(deviceSessionID)
+		return Session{}, ErrSessionNotFound
+	} else {
+		s.deps.Logger.Warn("auth: touch last_seen_at failed",
+			zap.String("device_session_id", deviceSessionID.String()),
+			zap.Error(err),
+		)
 		return sess, nil
 	}
+}
 
-// RevokeSession marks deviceSessionID revoked, evicts it from the cache,
-// and emits a session_invalidation push event. The call is idempotent:
-// a second revoke on an already-revoked session returns the existing
-// row with status='revoked' (HTTP 200), not ErrSessionNotFound. An
+// ListActiveByUser returns the cached active sessions for userID. The
+// user-surface "list my sessions" handler consumes this. The slice is
+// safe for the caller to retain — it is freshly allocated.
+func (s *Service) ListActiveByUser(_ context.Context, userID uuid.UUID) []Session {
+	if userID == uuid.Nil {
+		return nil
+	}
+	return s.deps.Cache.ListByUser(userID)
+}
+
+// LookupSessionInCache returns the cached session for deviceSessionID
+// without touching last_seen_at. The user-surface revoke handler
+// consumes this to verify ownership before issuing a revoke. A miss
+// means the session is either revoked or absent — handlers must treat
+// the two cases identically so a caller cannot probe whether a foreign
+// device_session_id exists.
+func (s *Service) LookupSessionInCache(deviceSessionID uuid.UUID) (Session, bool) {
+	if deviceSessionID == uuid.Nil {
+		return Session{}, false
+	}
+	return s.deps.Cache.Get(deviceSessionID)
+}
+
+// RevokeSession marks deviceSessionID revoked atomically with an
+// audit row in `session_revocations`, evicts it from the cache, and
+// emits a session_invalidation push event. The call is idempotent: a
+// second revoke on an already-revoked session returns the existing
+// row with status='revoked' (HTTP 200) and writes no audit row. An
 // unknown device_session_id yields ErrSessionNotFound.
 //
 // Cache eviction and the push emission run after the database UPDATE
-// commits so a failed UPDATE leaves both cache and gateway view intact.
-func (s *Service) RevokeSession(ctx context.Context, deviceSessionID uuid.UUID) (Session, error) {
+// commits so a failed UPDATE leaves both cache and gateway view
+// intact.
+func (s *Service) RevokeSession(ctx context.Context, deviceSessionID uuid.UUID, rc RevokeContext) (Session, error) {
 	if deviceSessionID == uuid.Nil {
 		return Session{}, ErrSessionNotFound
 	}
-	revoked, ok, err := s.deps.Store.RevokeSession(ctx, deviceSessionID)
+	revoked, ok, err := s.deps.Store.RevokeSession(ctx, deviceSessionID, rc, s.deps.Now())
 	if err != nil {
 		return Session{}, err
 	}
 	if ok {
 		s.deps.Cache.Remove(deviceSessionID)
-		s.deps.Push.PublishSessionInvalidation(ctx, deviceSessionID, revoked.UserID, "auth.revoke_session")
+		s.deps.Push.PublishSessionInvalidation(ctx, deviceSessionID, revoked.UserID, string(rc.ActorKind))
 		s.deps.Logger.Info("auth session revoked",
 			zap.String("device_session_id", deviceSessionID.String()),
 			zap.String("user_id", revoked.UserID.String()),
+			zap.String("actor_kind", string(rc.ActorKind)),
+			zap.String("actor_id", rc.ActorID),
 		)
 		return revoked, nil
 	}
|
|||||||
return existing, nil
|
return existing, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// RevokeAllForUser marks every active session for userID revoked,
|
// RevokeAllForUser marks every active session for userID revoked
|
||||||
// evicts each from the cache, and emits one session_invalidation push
|
// atomically with one audit row per revoked session, evicts each from
|
||||||
// event per revoked row. Returns the list of revoked sessions in the
|
// the cache, and emits one session_invalidation push event per
|
||||||
// order Postgres returned them. An empty result is a successful
|
// revoked row. Returns the list of revoked sessions in the order
|
||||||
// idempotent call (handler reports revoked_count=0).
|
// Postgres returned them. An empty result is a successful idempotent
|
||||||
func (s *Service) RevokeAllForUser(ctx context.Context, userID uuid.UUID) ([]Session, error) {
|
// call (handler reports revoked_count=0).
|
||||||
|
func (s *Service) RevokeAllForUser(ctx context.Context, userID uuid.UUID, rc RevokeContext) ([]Session, error) {
|
||||||
if userID == uuid.Nil {
|
if userID == uuid.Nil {
|
||||||
return nil, nil
|
return nil, nil
|
||||||
}
|
}
|
||||||
revoked, err := s.deps.Store.RevokeAllForUser(ctx, userID)
|
revoked, err := s.deps.Store.RevokeAllForUser(ctx, userID, rc, s.deps.Now())
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
for _, sess := range revoked {
|
for _, sess := range revoked {
|
||||||
s.deps.Cache.Remove(sess.DeviceSessionID)
|
s.deps.Cache.Remove(sess.DeviceSessionID)
|
||||||
s.deps.Push.PublishSessionInvalidation(ctx, sess.DeviceSessionID, sess.UserID, "auth.revoke_all_for_user")
|
s.deps.Push.PublishSessionInvalidation(ctx, sess.DeviceSessionID, sess.UserID, string(rc.ActorKind))
|
||||||
}
|
}
|
||||||
if len(revoked) > 0 {
|
if len(revoked) > 0 {
|
||||||
s.deps.Logger.Info("auth sessions revoked (bulk)",
|
s.deps.Logger.Info("auth sessions revoked (bulk)",
|
||||||
zap.String("user_id", userID.String()),
|
zap.String("user_id", userID.String()),
|
||||||
zap.Int("count", len(revoked)),
|
zap.Int("count", len(revoked)),
|
||||||
|
zap.String("actor_kind", string(rc.ActorKind)),
|
||||||
|
zap.String("actor_id", rc.ActorID),
|
||||||
)
|
)
|
||||||
}
|
}
|
||||||
return revoked, nil
|
return revoked, nil
|
||||||
+121 −22
@@ -332,15 +332,14 @@ func (s *Store) LoadSession(ctx context.Context, deviceSessionID uuid.UUID) (Ses
 	return modelToSession(row), nil
 }
 
-// RevokeSession transitions an active row to status='revoked' and
-// returns the row as it stands after the update. The boolean reports
-// whether the UPDATE actually changed a row — false means the row was
-// already revoked or did not exist; the auth Service then falls back to
-// LoadSession for idempotent-revoke responses.
-func (s *Store) RevokeSession(ctx context.Context, deviceSessionID uuid.UUID) (Session, bool, error) {
+// TouchSessionLastSeen sets `last_seen_at` to at on the row keyed by
+// deviceSessionID. The UPDATE is gated by `status='active'` so a
+// revoked or absent row reports ErrSessionNotFound. Returns the post-
+// update row so the cache can be refreshed without a second read.
+func (s *Store) TouchSessionLastSeen(ctx context.Context, deviceSessionID uuid.UUID, at time.Time) (Session, error) {
 	stmt := table.DeviceSessions.
-		UPDATE(table.DeviceSessions.Status, table.DeviceSessions.RevokedAt).
-		SET(postgres.String(SessionStatusRevoked), postgres.NOW()).
+		UPDATE(table.DeviceSessions.LastSeenAt).
+		SET(postgres.TimestampzT(at)).
 		WHERE(
 			table.DeviceSessions.DeviceSessionID.EQ(postgres.UUID(deviceSessionID)).
 				AND(table.DeviceSessions.Status.EQ(postgres.String(SessionStatusActive))),
@@ -350,22 +349,65 @@ func (s *Store) RevokeSession(ctx context.Context, deviceSessionID uuid.UUID) (S
 	var row model.DeviceSessions
 	if err := stmt.QueryContext(ctx, s.db, &row); err != nil {
 		if errors.Is(err, qrm.ErrNoRows) {
-			return Session{}, false, nil
+			return Session{}, ErrSessionNotFound
 		}
+		return Session{}, fmt.Errorf("auth store: touch last_seen %s: %w", deviceSessionID, err)
+	}
+	return modelToSession(row), nil
+}
+
+// RevokeSession transitions an active row to status='revoked' and
+// inserts the matching audit row into session_revocations atomically
+// inside one transaction. The boolean reports whether the UPDATE
+// actually changed a row — false means the row was already revoked or
+// did not exist, in which case no audit row is written and the auth
+// Service falls back to LoadSession for the idempotent-revoke
+// response.
+func (s *Store) RevokeSession(ctx context.Context, deviceSessionID uuid.UUID, rc RevokeContext, at time.Time) (Session, bool, error) {
+	var (
+		revoked Session
+		ok      bool
+	)
+	err := withTx(ctx, s.db, func(tx *sql.Tx) error {
+		updateStmt := table.DeviceSessions.
+			UPDATE(table.DeviceSessions.Status, table.DeviceSessions.RevokedAt).
+			SET(postgres.String(SessionStatusRevoked), postgres.TimestampzT(at)).
+			WHERE(
+				table.DeviceSessions.DeviceSessionID.EQ(postgres.UUID(deviceSessionID)).
+					AND(table.DeviceSessions.Status.EQ(postgres.String(SessionStatusActive))),
+			).
+			RETURNING(sessionColumns())
+
+		var row model.DeviceSessions
+		if err := updateStmt.QueryContext(ctx, tx, &row); err != nil {
+			if errors.Is(err, qrm.ErrNoRows) {
+				return nil
+			}
+			return err
+		}
+		revoked = modelToSession(row)
+		ok = true
+		return insertRevocationTx(ctx, tx, deviceSessionID, revoked.UserID, rc, at)
+	})
+	if err != nil {
 		return Session{}, false, fmt.Errorf("auth store: revoke session %s: %w", deviceSessionID, err)
 	}
-	return modelToSession(row), true, nil
+	return revoked, ok, nil
 }
 
 // RevokeAllForUser transitions every active row for userID to
-// status='revoked' and returns the rows as they stand after the update.
-// An empty slice with a nil error is returned when the user owned no
-// active sessions; the caller must treat that as a successful idempotent
-// revoke (the API surface returns revoked_count=0 in that case).
-func (s *Store) RevokeAllForUser(ctx context.Context, userID uuid.UUID) ([]Session, error) {
-	stmt := table.DeviceSessions.
+// status='revoked', writes one session_revocations row per revoked
+// session, and returns the rows as they stand after the update. The
+// UPDATE and the audit inserts run inside one transaction. An empty
+// slice with a nil error is returned when the user owned no active
+// sessions; the caller treats that as a successful idempotent revoke
+// (the API surface returns revoked_count=0).
+func (s *Store) RevokeAllForUser(ctx context.Context, userID uuid.UUID, rc RevokeContext, at time.Time) ([]Session, error) {
+	var out []Session
+	err := withTx(ctx, s.db, func(tx *sql.Tx) error {
+		updateStmt := table.DeviceSessions.
 			UPDATE(table.DeviceSessions.Status, table.DeviceSessions.RevokedAt).
-			SET(postgres.String(SessionStatusRevoked), postgres.NOW()).
+			SET(postgres.String(SessionStatusRevoked), postgres.TimestampzT(at)).
 			WHERE(
 				table.DeviceSessions.UserID.EQ(postgres.UUID(userID)).
 					AND(table.DeviceSessions.Status.EQ(postgres.String(SessionStatusActive))),
@@ -373,16 +415,73 @@ func (s *Store) RevokeAllForUser(ctx context.Context, userID uuid.UUID) ([]Sessi
 			RETURNING(sessionColumns())
 
 		var rows []model.DeviceSessions
-	if err := stmt.QueryContext(ctx, s.db, &rows); err != nil {
+		if err := updateStmt.QueryContext(ctx, tx, &rows); err != nil {
+			return err
+		}
+		out = make([]Session, 0, len(rows))
+		for _, row := range rows {
+			sess := modelToSession(row)
+			out = append(out, sess)
+			if err := insertRevocationTx(ctx, tx, sess.DeviceSessionID, sess.UserID, rc, at); err != nil {
+				return err
+			}
+		}
+		return nil
+	})
+	if err != nil {
 		return nil, fmt.Errorf("auth store: revoke all for user %s: %w", userID, err)
 	}
-	out := make([]Session, 0, len(rows))
-	for _, row := range rows {
-		out = append(out, modelToSession(row))
-	}
 	return out, nil
 }
 
+// insertRevocationTx writes a single audit row inside an existing
+// transaction. Callers are expected to mint a fresh revocation_id per
+// row; collisions are not retried because revocation_id is minted with
+// uuid.New at the only call sites.
+func insertRevocationTx(ctx context.Context, tx *sql.Tx, deviceSessionID, userID uuid.UUID, rc RevokeContext, at time.Time) error {
+	actorUserID, actorUsername, err := revokeContextToColumns(rc)
+	if err != nil {
+		return err
+	}
+	stmt := table.SessionRevocations.INSERT(
+		table.SessionRevocations.RevocationID,
+		table.SessionRevocations.DeviceSessionID,
+		table.SessionRevocations.UserID,
+		table.SessionRevocations.ActorKind,
+		table.SessionRevocations.ActorUserID,
+		table.SessionRevocations.ActorUsername,
+		table.SessionRevocations.Reason,
+		table.SessionRevocations.RevokedAt,
+	).VALUES(uuid.New(), deviceSessionID, userID, string(rc.ActorKind), actorUserID, actorUsername, rc.Reason, at)
+
+	if _, err := stmt.ExecContext(ctx, tx); err != nil {
+		return fmt.Errorf("insert session_revocations: %w", err)
+	}
+	return nil
+}
+
+// revokeContextToColumns splits RevokeContext.ActorID into the
+// (actor_user_id, actor_username) pair persisted by session_revocations.
+// User-driven kinds parse ActorID as a UUID; admin-driven kinds keep it
+// as the operator username. Empty ActorID lands as NULL/NULL.
+func revokeContextToColumns(rc RevokeContext) (any, any, error) {
+	if rc.ActorID == "" {
+		return nil, nil, nil
+	}
+	switch rc.ActorKind {
+	case ActorKindUserSelf, ActorKindSoftDeleteUser:
+		uid, err := uuid.Parse(rc.ActorID)
+		if err != nil {
+			return nil, nil, fmt.Errorf("auth store: actor_id %q is not a uuid: %w", rc.ActorID, err)
+		}
+		return uid, nil, nil
+	case ActorKindAdminSanction, ActorKindSoftDeleteAdmin:
+		return nil, rc.ActorID, nil
+	default:
+		return nil, nil, nil
+	}
+}
+
 // modelToChallenge projects a generated model row into the public
 // Challenge struct. Pointer fields are copied so callers cannot mutate
 // the underlying scan buffer.
|||||||
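The actor-column split performed by revokeContextToColumns can be sketched standalone. The ActorKind constants and the RevokeContext shape below are simplified stand-ins for the store's unexported types, and a regexp replaces uuid.Parse so the sketch stays stdlib-only; it is an illustration of the pattern, not the repository's exact code:

```go
package main

import (
	"fmt"
	"regexp"
)

// Simplified stand-ins for the store's types (assumed shapes).
type ActorKind string

const (
	ActorKindUserSelf      ActorKind = "user_self"
	ActorKindAdminSanction ActorKind = "admin_sanction"
)

type RevokeContext struct {
	ActorKind ActorKind
	ActorID   string
}

// uuidRE stands in for uuid.Parse to keep the sketch dependency-free.
var uuidRE = regexp.MustCompile(`^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$`)

// actorColumns mirrors the split persisted by session_revocations:
// user-driven kinds carry a user id, admin-driven kinds carry a
// username, and an empty ActorID lands as NULL/NULL.
func actorColumns(rc RevokeContext) (userID, username any, err error) {
	if rc.ActorID == "" {
		return nil, nil, nil
	}
	switch rc.ActorKind {
	case ActorKindUserSelf:
		if !uuidRE.MatchString(rc.ActorID) {
			return nil, nil, fmt.Errorf("actor_id %q is not a uuid", rc.ActorID)
		}
		return rc.ActorID, nil, nil
	case ActorKindAdminSanction:
		return nil, rc.ActorID, nil
	default:
		return nil, nil, nil
	}
}

func main() {
	uid, _, err := actorColumns(RevokeContext{ActorKind: ActorKindUserSelf, ActorID: "33333333-3333-3333-3333-333333333333"})
	fmt.Println(uid, err) // user kind fills the id column
	_, name, _ := actorColumns(RevokeContext{ActorKind: ActorKindAdminSanction, ActorID: "ops-admin"})
	fmt.Println(name) // admin kind fills the username column
}
```

The key property is that exactly one of the two columns is non-NULL for a known actor kind, which keeps the audit table queryable by either user id or operator name.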
@@ -65,7 +65,7 @@ func startPostgres(t *testing.T) *sql.DB {
 	cfg := pgshared.DefaultConfig()
 	cfg.PrimaryDSN = scoped
 	cfg.OperationTimeout = pgOpTO
-	db, err := pgshared.OpenPrimary(ctx, cfg)
+	db, err := pgshared.OpenPrimary(ctx, cfg, backendpg.NoObservabilityOptions()...)
 	if err != nil {
 		t.Fatalf("open primary: %v", err)
 	}
@@ -1,63 +0,0 @@
-package geo
-
-import "strings"
-
-// countryToLanguage maps an uppercase ISO 3166-1 alpha-2 country code to
-// an ISO 639-1 lowercase language code. The set is intentionally minimal,
-// covering the top-traffic Galaxy locales, and is consulted as a
-// fallback when neither the request body nor the Accept-Language header
-// supplied a locale at send-email-code. Unknown countries map to the
-// empty string so the auth flow can default to "en".
-//
-// The mapping is intentionally hard-coded rather than derived from the
-// GeoLite2 database: countries with multiple official languages collapse
-// to the single most common UI locale to keep the registration path
-// deterministic. The implementation may revise this table without
-// changing the surface auth depends on.
-var countryToLanguage = map[string]string{
-	// English-default territories and the platform fallback.
-	"US": "en", "GB": "en", "AU": "en", "NZ": "en", "IE": "en", "CA": "en",
-	// Western Europe.
-	"DE": "de", "AT": "de", "CH": "de",
-	"FR": "fr", "BE": "fr", "LU": "fr",
-	"ES": "es", "MX": "es", "AR": "es", "CL": "es", "CO": "es",
-	"IT": "it",
-	"PT": "pt", "BR": "pt",
-	"NL": "nl",
-	// Central / Eastern Europe.
-	"PL": "pl",
-	"RU": "ru", "BY": "ru", "KZ": "ru",
-	"UA": "uk",
-	"CZ": "cs",
-	"SK": "sk",
-	"HU": "hu",
-	"RO": "ro",
-	"BG": "bg",
-	// Northern Europe.
-	"SE": "sv",
-	"NO": "no",
-	"DK": "da",
-	"FI": "fi",
-	// Asia.
-	"JP": "ja",
-	"KR": "ko",
-	"CN": "zh", "TW": "zh", "HK": "zh", "SG": "zh",
-	"VN": "vi",
-	"TH": "th",
-	"ID": "id",
-	"IN": "en",
-	"IL": "he",
-	"TR": "tr",
-	// Middle East and North Africa.
-	"SA": "ar", "AE": "ar", "EG": "ar",
-}
-
-// languageForCountry returns the ISO 639-1 language code mapped to
-// country, or "" when no mapping is known. country is normalised to
-// uppercase before lookup.
-func languageForCountry(country string) string {
-	if country == "" {
-		return ""
-	}
-	return countryToLanguage[strings.ToUpper(strings.TrimSpace(country))]
-}
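For reference, the normalise-then-lookup behaviour being deleted here can be sketched in isolation; this stdlib-only sketch uses a trimmed-down stand-in table, not the full mapping:

```go
package main

import (
	"fmt"
	"strings"
)

// A trimmed stand-in for the deleted countryToLanguage table.
var countryToLanguage = map[string]string{
	"DE": "de",
	"BR": "pt",
	"UA": "uk",
}

// languageForCountry mirrors the removed helper: trim and uppercase the
// input, then look it up; unknown countries yield "" so callers can
// fall back to the platform default "en".
func languageForCountry(country string) string {
	if country == "" {
		return ""
	}
	return countryToLanguage[strings.ToUpper(strings.TrimSpace(country))]
}

func main() {
	fmt.Printf("%q\n", languageForCountry(" de ")) // normalised before lookup
	fmt.Printf("%q\n", languageForCountry("ZZ"))   // unknown -> empty string
}
```

Returning "" rather than "en" keeps the default-locale decision with the caller, which is why the removal below also drops the `LanguageForIP` wrapper in one patch.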
@@ -3,12 +3,12 @@
 // registration time and by the user-surface middleware on every
 // authenticated request.
 //
-// The implementation shipped `LookupCountry`, `LanguageForIP` and
+// The implementation shipped `LookupCountry` and
 // `SetDeclaredCountryAtRegistration`. The implementation added the
-// `OnUserDeleted` cascade leg. The implementation layers `IncrementCounterAsync`
-// and `ListUserCounters` on top of the same Service plus the
-// background-goroutine machinery (cancellable context and WaitGroup)
-// needed to drain pending counter upserts on shutdown.
+// `OnUserDeleted` cascade leg. The implementation layers
+// `IncrementCounterAsync` and `ListUserCounters` on top of the same
+// Service plus the background-goroutine machinery (cancellable context
+// and WaitGroup) needed to drain pending counter upserts on shutdown.
 package geo
 
 import (
@@ -8,22 +8,6 @@ import (
 	"go.uber.org/zap"
 )
 
-func TestLanguageForCountry(t *testing.T) {
-	cases := map[string]string{
-		"DE": "de",
-		"de": "de", // case-insensitive input
-		"RU": "ru",
-		"BR": "pt",
-		"":   "",
-		"ZZ": "",
-	}
-	for input, want := range cases {
-		if got := languageForCountry(input); got != want {
-			t.Errorf("languageForCountry(%q) = %q, want %q", input, got, want)
-		}
-	}
-}
-
 func TestLookupCountryNilSafety(t *testing.T) {
 	var s *Service
 	if got := s.LookupCountry("8.8.8.8"); got != "" {
@@ -31,13 +15,6 @@ func TestLookupCountryNilSafety(t *testing.T) {
 	}
 }
-
-func TestLanguageForIPNilSafety(t *testing.T) {
-	var s *Service
-	if got := s.LanguageForIP("8.8.8.8"); got != "" {
-		t.Errorf("nil Service LanguageForIP = %q, want empty", got)
-	}
-}
 
 func TestSetLoggerNilSafety(t *testing.T) {
 	var s *Service
 	s.SetLogger(zap.NewNop())
@@ -1,14 +0,0 @@
-package geo
-
-// LanguageForIP returns an ISO 639-1 language code derived from
-// sourceIP. The function looks up the country via LookupCountry and then
-// consults the static country->language table. Returns "" when the
-// country lookup fails or no language mapping exists for the country.
-//
-// Auth uses LanguageForIP as a fallback after the client-supplied locale
-// (request body or Accept-Language header). The empty string signals
-// "fall through to the platform default 'en'".
-func (s *Service) LanguageForIP(sourceIP string) string {
-	country := s.LookupCountry(sourceIP)
-	return languageForCountry(country)
-}
@@ -63,7 +63,7 @@ func startPostgres(t *testing.T) *sql.DB {
 	cfg := pgshared.DefaultConfig()
 	cfg.PrimaryDSN = scopedDSN
 	cfg.OperationTimeout = testOpTimeout
-	db, err := pgshared.OpenPrimary(ctx, cfg)
+	db, err := pgshared.OpenPrimary(ctx, cfg, backendpg.NoObservabilityOptions()...)
 	if err != nil {
 		t.Fatalf("open primary: %v", err)
 	}
@@ -67,7 +67,7 @@ func startPostgres(t *testing.T) *sql.DB {
 	cfg.PrimaryDSN = scopedDSN
 	cfg.OperationTimeout = pgOpTO
 
-	db, err := pgshared.OpenPrimary(ctx, cfg)
+	db, err := pgshared.OpenPrimary(ctx, cfg, backendpg.NoObservabilityOptions()...)
 	if err != nil {
 		t.Fatalf("open primary: %v", err)
 	}
@@ -6,6 +6,7 @@ import (
 
 	"galaxy/backend/internal/config"
 	"galaxy/backend/internal/user"
+	"galaxy/backend/push"
 
 	"github.com/google/uuid"
 	"go.uber.org/zap"
@@ -13,9 +14,17 @@ import (
 
 // PushPublisher is the publisher contract notification uses to emit a
 // `client_event` push frame to gateway. The real implementation lives
-// in `backend/internal/push`; NewNoopPushPublisher satisfies
+// in `backend/push` (`*push.Service`); NewNoopPushPublisher satisfies
 // the interface for tests that do not exercise push behaviour.
 //
+// `event` is a typed `push.Event`: the publisher invokes Marshal on
+// the event at publish time, so producers stay decoupled from the
+// wire encoding. Every catalog kind has a FlatBuffers schema in
+// `pkg/schema/fbs/notification.fbs` and is built by
+// `buildClientPushEvent`; an unknown kind falls back to
+// `push.JSONEvent` so a misconfigured producer keeps the pipeline
+// flowing.
+//
 // Implementations must be concurrency-safe. The deviceSessionID pointer
 // narrows the event to a single device session when non-nil; nil means
 // fan out to every active session of userID. eventID, requestID and
@@ -23,7 +32,7 @@
 // into the signed client envelope; empty strings are forwarded
 // unchanged.
 type PushPublisher interface {
-	PublishClientEvent(ctx context.Context, userID uuid.UUID, deviceSessionID *uuid.UUID, kind string, payload map[string]any, eventID, requestID, traceID string) error
+	PublishClientEvent(ctx context.Context, userID uuid.UUID, deviceSessionID *uuid.UUID, event push.Event, eventID, requestID, traceID string) error
 }
 
 // Mailer is the email surface notification uses for outbound mail. The
@@ -76,11 +85,14 @@ type noopPushPublisher struct {
 	logger *zap.Logger
 }
 
-func (p *noopPushPublisher) PublishClientEvent(_ context.Context, userID uuid.UUID, deviceSessionID *uuid.UUID, kind string, payload map[string]any, eventID, requestID, traceID string) error {
+func (p *noopPushPublisher) PublishClientEvent(_ context.Context, userID uuid.UUID, deviceSessionID *uuid.UUID, event push.Event, eventID, requestID, traceID string) error {
+	kind := ""
+	if event != nil {
+		kind = event.Kind()
+	}
 	fields := []zap.Field{
 		zap.String("user_id", userID.String()),
 		zap.String("kind", kind),
-		zap.Int("payload_keys", len(payload)),
 	}
 	if deviceSessionID != nil {
 		fields = append(fields, zap.String("device_session_id", deviceSessionID.String()))
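The interface change above can be illustrated with a self-contained sketch of the assumed push.Event shape: a typed event carries bytes pre-encoded at construction time, and a JSON event serves as the fallback encoding. The names here (Event, preMarshaledEvent, jsonEvent, publish) are illustrative stand-ins, not the package's exact API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Event is the assumed contract: a kind plus a payload the publisher
// marshals at publish time, keeping producers off the wire encoding.
type Event interface {
	Kind() string
	Marshal() ([]byte, error)
}

// preMarshaledEvent carries bytes encoded at construction time, so a
// kind-specific build error surfaces in the producer, not the publisher.
type preMarshaledEvent struct {
	kind    string
	payload []byte
}

func (e preMarshaledEvent) Kind() string             { return e.kind }
func (e preMarshaledEvent) Marshal() ([]byte, error) { return e.payload, nil }

// jsonEvent is the safety net for kinds without a typed schema.
type jsonEvent struct {
	EventKind string
	Payload   map[string]any
}

func (e jsonEvent) Kind() string             { return e.EventKind }
func (e jsonEvent) Marshal() ([]byte, error) { return json.Marshal(e.Payload) }

// publish shows the publisher's side: it only sees the interface.
func publish(e Event) {
	b, err := e.Marshal()
	if err != nil {
		fmt.Println("drop:", err)
		return
	}
	fmt.Printf("%s: %d bytes\n", e.Kind(), len(b))
}

func main() {
	publish(preMarshaledEvent{kind: "lobby.invite_received", payload: []byte{1, 2, 3}})
	publish(jsonEvent{EventKind: "unknown.kind", Payload: map[string]any{"x": 1}})
}
```

Both concrete types satisfy the same interface, which is what lets the dispatcher swap the FlatBuffers path for the JSON fallback without touching publisher call sites.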
@@ -121,7 +121,11 @@ func (s *Service) performDispatch(ctx context.Context, claim ClaimedRoute) error
 		eventID := claim.Route.RouteID.String()
 		requestID := claim.Notification.IdempotencyKey
 		traceID := traceIDFromContext(ctx)
-		return s.deps.Push.PublishClientEvent(ctx, *claim.Route.UserID, claim.Route.DeviceSessionID, claim.Notification.Kind, claim.Notification.Payload, eventID, requestID, traceID)
+		event, err := buildClientPushEvent(claim.Notification.Kind, claim.Notification.Payload)
+		if err != nil {
+			return fmt.Errorf("build push event %q: %w", claim.Notification.Kind, err)
+		}
+		return s.deps.Push.PublishClientEvent(ctx, *claim.Route.UserID, claim.Route.DeviceSessionID, event, eventID, requestID, traceID)
 	case ChannelEmail:
 		entry, ok := LookupCatalog(claim.Notification.Kind)
 		if !ok {
@@ -0,0 +1,247 @@
+package notification
+
+import (
+	"fmt"
+
+	"galaxy/backend/push"
+	"galaxy/transcoder"
+
+	"github.com/google/uuid"
+)
+
+// preMarshaledEvent adapts a pre-encoded FlatBuffers payload to the
+// push.Event interface. The factory below pre-encodes the payload at
+// construction time so the kind-specific build error surfaces inside
+// the dispatcher (where it can drive retry / dead-letter logic) rather
+// than inside push.Service.PublishClientEvent.
+type preMarshaledEvent struct {
+	kind    string
+	payload []byte
+}
+
+func (e preMarshaledEvent) Kind() string             { return e.kind }
+func (e preMarshaledEvent) Marshal() ([]byte, error) { return e.payload, nil }
+
+// buildClientPushEvent maps a catalog kind together with the producer
+// payload map onto a typed push.Event. Every catalog kind has a
+// FlatBuffers schema in `pkg/schema/fbs/notification.fbs`; an unknown
+// kind falls back to push.JSONEvent so a misconfigured producer keeps
+// the pipeline flowing while the catalog catches up.
+func buildClientPushEvent(kind string, payload map[string]any) (push.Event, error) {
+	switch kind {
+	case KindLobbyInviteReceived:
+		gameID, err := mapUUID(payload, "game_id")
+		if err != nil {
+			return nil, err
+		}
+		inviter, err := mapUUID(payload, "inviter_user_id")
+		if err != nil {
+			return nil, err
+		}
+		bytes, err := transcoder.LobbyInviteReceivedEventToPayload(&transcoder.LobbyInviteReceivedEvent{
+			GameID:        gameID,
+			InviterUserID: inviter,
+		})
+		if err != nil {
+			return nil, err
+		}
+		return preMarshaledEvent{kind: kind, payload: bytes}, nil
+
+	case KindLobbyInviteRevoked:
+		gameID, err := mapUUID(payload, "game_id")
+		if err != nil {
+			return nil, err
+		}
+		bytes, err := transcoder.LobbyInviteRevokedEventToPayload(&transcoder.LobbyInviteRevokedEvent{GameID: gameID})
+		if err != nil {
+			return nil, err
+		}
+		return preMarshaledEvent{kind: kind, payload: bytes}, nil
+
+	case KindLobbyApplicationSubmitted:
+		gameID, err := mapUUID(payload, "game_id")
+		if err != nil {
+			return nil, err
+		}
+		appID, err := mapUUID(payload, "application_id")
+		if err != nil {
+			return nil, err
+		}
+		bytes, err := transcoder.LobbyApplicationSubmittedEventToPayload(&transcoder.LobbyApplicationSubmittedEvent{
+			GameID:        gameID,
+			ApplicationID: appID,
+		})
+		if err != nil {
+			return nil, err
+		}
+		return preMarshaledEvent{kind: kind, payload: bytes}, nil
+
+	case KindLobbyApplicationApproved:
+		gameID, err := mapUUID(payload, "game_id")
+		if err != nil {
+			return nil, err
+		}
+		bytes, err := transcoder.LobbyApplicationApprovedEventToPayload(&transcoder.LobbyApplicationApprovedEvent{GameID: gameID})
+		if err != nil {
+			return nil, err
+		}
+		return preMarshaledEvent{kind: kind, payload: bytes}, nil
+
+	case KindLobbyApplicationRejected:
+		gameID, err := mapUUID(payload, "game_id")
+		if err != nil {
+			return nil, err
+		}
+		bytes, err := transcoder.LobbyApplicationRejectedEventToPayload(&transcoder.LobbyApplicationRejectedEvent{GameID: gameID})
+		if err != nil {
+			return nil, err
+		}
+		return preMarshaledEvent{kind: kind, payload: bytes}, nil
+
+	case KindLobbyMembershipRemoved:
+		bytes, err := transcoder.LobbyMembershipRemovedEventToPayload(&transcoder.LobbyMembershipRemovedEvent{
+			Reason: mapStringOpt(payload, "reason"),
+		})
+		if err != nil {
+			return nil, err
+		}
+		return preMarshaledEvent{kind: kind, payload: bytes}, nil
+
+	case KindLobbyMembershipBlocked:
+		gameID, err := mapUUID(payload, "game_id")
+		if err != nil {
+			return nil, err
+		}
+		bytes, err := transcoder.LobbyMembershipBlockedEventToPayload(&transcoder.LobbyMembershipBlockedEvent{
+			GameID: gameID,
+			Reason: mapStringOpt(payload, "reason"),
+		})
+		if err != nil {
+			return nil, err
+		}
+		return preMarshaledEvent{kind: kind, payload: bytes}, nil
+
+	case KindLobbyRaceNameRegistered:
+		raceName, err := mapString(payload, "race_name")
+		if err != nil {
+			return nil, err
+		}
+		bytes, err := transcoder.LobbyRaceNameRegisteredEventToPayload(&transcoder.LobbyRaceNameRegisteredEvent{RaceName: raceName})
+		if err != nil {
+			return nil, err
+		}
+		return preMarshaledEvent{kind: kind, payload: bytes}, nil
+
+	case KindLobbyRaceNamePending:
+		raceName, err := mapString(payload, "race_name")
+		if err != nil {
+			return nil, err
+		}
+		bytes, err := transcoder.LobbyRaceNamePendingEventToPayload(&transcoder.LobbyRaceNamePendingEvent{
+			RaceName:  raceName,
+			ExpiresAt: mapStringOpt(payload, "expires_at"),
+		})
+		if err != nil {
+			return nil, err
+		}
+		return preMarshaledEvent{kind: kind, payload: bytes}, nil
+
+	case KindLobbyRaceNameExpired:
+		raceName, err := mapString(payload, "race_name")
+		if err != nil {
+			return nil, err
+		}
+		bytes, err := transcoder.LobbyRaceNameExpiredEventToPayload(&transcoder.LobbyRaceNameExpiredEvent{RaceName: raceName})
+		if err != nil {
+			return nil, err
+		}
+		return preMarshaledEvent{kind: kind, payload: bytes}, nil
+
+	case KindRuntimeImagePullFailed:
+		gameID, err := mapUUID(payload, "game_id")
+		if err != nil {
+			return nil, err
+		}
+		bytes, err := transcoder.RuntimeImagePullFailedEventToPayload(&transcoder.RuntimeImagePullFailedEvent{
+			GameID:   gameID,
+			ImageRef: mapStringOpt(payload, "image_ref"),
+		})
+		if err != nil {
+			return nil, err
+		}
+		return preMarshaledEvent{kind: kind, payload: bytes}, nil
+
+	case KindRuntimeContainerStartFailed:
+		gameID, err := mapUUID(payload, "game_id")
+		if err != nil {
+			return nil, err
+		}
+		bytes, err := transcoder.RuntimeContainerStartFailedEventToPayload(&transcoder.RuntimeContainerStartFailedEvent{GameID: gameID})
+		if err != nil {
+			return nil, err
+		}
+		return preMarshaledEvent{kind: kind, payload: bytes}, nil
+
+	case KindRuntimeStartConfigInvalid:
+		gameID, err := mapUUID(payload, "game_id")
+		if err != nil {
+			return nil, err
+		}
+		bytes, err := transcoder.RuntimeStartConfigInvalidEventToPayload(&transcoder.RuntimeStartConfigInvalidEvent{
+			GameID: gameID,
+			Reason: mapStringOpt(payload, "reason"),
+		})
+		if err != nil {
+			return nil, err
+		}
+		return preMarshaledEvent{kind: kind, payload: bytes}, nil
+	}
+
+	return push.JSONEvent{EventKind: kind, Payload: payload}, nil
+}
+
+// mapUUID extracts a required UUID-shaped field from the producer
+// payload. Producers stringify uuid values before assembling Intent
+// payloads, so the JSON-roundtripped form is `string`.
+func mapUUID(payload map[string]any, key string) (uuid.UUID, error) {
+	raw, ok := payload[key]
+	if !ok {
+		return uuid.Nil, fmt.Errorf("notification payload: %s is missing", key)
+	}
+	str, ok := raw.(string)
+	if !ok {
+		return uuid.Nil, fmt.Errorf("notification payload: %s must be a string, got %T", key, raw)
+	}
+	parsed, err := uuid.Parse(str)
+	if err != nil {
+		return uuid.Nil, fmt.Errorf("notification payload: %s is not a uuid: %w", key, err)
+	}
+	return parsed, nil
+}
+
+// mapString extracts a required string field from the producer payload.
+func mapString(payload map[string]any, key string) (string, error) {
+	raw, ok := payload[key]
+	if !ok {
+		return "", fmt.Errorf("notification payload: %s is missing", key)
+	}
+	str, ok := raw.(string)
+	if !ok {
+		return "", fmt.Errorf("notification payload: %s must be a string, got %T", key, raw)
+	}
+	if str == "" {
+		return "", fmt.Errorf("notification payload: %s is empty", key)
+	}
+	return str, nil
+}
+
+// mapStringOpt returns the string value for key, or "" when the key is
+// missing or carries a non-string value.
+func mapStringOpt(payload map[string]any, key string) string {
+	raw, ok := payload[key]
+	if !ok {
+		return ""
+	}
+	str, _ := raw.(string)
+	return str
+}
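The required-field helpers in the new file follow a common pattern for validating JSON-decoded payload maps: presence check, type assertion with `%T` in the error message, then an emptiness check. A minimal standalone sketch of that pattern (the helper name is illustrative):

```go
package main

import "fmt"

// requiredString extracts a required, non-empty string field from a
// decoded JSON payload map, mirroring the mapString pattern above.
func requiredString(payload map[string]any, key string) (string, error) {
	raw, ok := payload[key]
	if !ok {
		// Distinguish "absent" from "wrong type" for actionable errors.
		return "", fmt.Errorf("payload: %s is missing", key)
	}
	str, ok := raw.(string)
	if !ok {
		// %T names the offending dynamic type in the error.
		return "", fmt.Errorf("payload: %s must be a string, got %T", key, raw)
	}
	if str == "" {
		return "", fmt.Errorf("payload: %s is empty", key)
	}
	return str, nil
}

func main() {
	payload := map[string]any{"race_name": "Skylancer", "count": 42}
	name, err := requiredString(payload, "race_name")
	fmt.Println(name, err)
	_, err = requiredString(payload, "count")
	fmt.Println(err) // type mismatch names the dynamic type (int)
}
```

Splitting the three failure modes into distinct messages is what makes the dispatcher-side error (`build push event %q: ...`) immediately diagnosable from logs.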
@@ -0,0 +1,157 @@
+package notification
+
+import (
+	"strings"
+	"testing"
+
+	"galaxy/backend/push"
+
+	"github.com/google/uuid"
+)
+
+// TestBuildClientPushEventCoversCatalog asserts that every catalog kind
+// returns a typed FB event (preMarshaledEvent) and that an unknown kind
+// falls through to the JSON safety net.
+func TestBuildClientPushEventCoversCatalog(t *testing.T) {
+	t.Parallel()
+
+	gameID := uuid.MustParse("11111111-1111-1111-1111-111111111111")
+	applicationID := uuid.MustParse("22222222-2222-2222-2222-222222222222")
+	inviterID := uuid.MustParse("33333333-3333-3333-3333-333333333333")
+
+	tests := []struct {
+		name    string
+		kind    string
+		payload map[string]any
+	}{
+		{"invite received", KindLobbyInviteReceived, map[string]any{
+			"game_id":         gameID.String(),
+			"inviter_user_id": inviterID.String(),
+		}},
+		{"invite revoked", KindLobbyInviteRevoked, map[string]any{
+			"game_id": gameID.String(),
+		}},
+		{"application submitted", KindLobbyApplicationSubmitted, map[string]any{
+			"game_id":        gameID.String(),
+			"application_id": applicationID.String(),
+		}},
+		{"application approved", KindLobbyApplicationApproved, map[string]any{"game_id": gameID.String()}},
+		{"application rejected", KindLobbyApplicationRejected, map[string]any{"game_id": gameID.String()}},
+		{"membership removed", KindLobbyMembershipRemoved, map[string]any{"reason": "deleted"}},
+		{"membership blocked", KindLobbyMembershipBlocked, map[string]any{
+			"game_id": gameID.String(),
+			"reason":  "permanent_blocked",
+		}},
+		{"race name registered", KindLobbyRaceNameRegistered, map[string]any{"race_name": "Skylancer"}},
+		{"race name pending", KindLobbyRaceNamePending, map[string]any{
+			"race_name":  "Skylancer",
+			"expires_at": "2026-05-06T12:00:00Z",
+		}},
+		{"race name expired", KindLobbyRaceNameExpired, map[string]any{"race_name": "Skylancer"}},
+		{"runtime image pull failed", KindRuntimeImagePullFailed, map[string]any{
+			"game_id":   gameID.String(),
+			"image_ref": "gcr.io/example:1.0.0",
+		}},
+		{"runtime container start failed", KindRuntimeContainerStartFailed, map[string]any{"game_id": gameID.String()}},
+		{"runtime start config invalid", KindRuntimeStartConfigInvalid, map[string]any{
+			"game_id": gameID.String(),
+			"reason":  "missing engine version",
+		}},
+	}
+
+	seenKinds := map[string]bool{}
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+			event, err := buildClientPushEvent(tt.kind, tt.payload)
+			if err != nil {
+				t.Fatalf("build %s: %v", tt.kind, err)
+			}
+			if event.Kind() != tt.kind {
+				t.Fatalf("Kind() = %q, want %q", event.Kind(), tt.kind)
+			}
+			bytes, err := event.Marshal()
+			if err != nil {
+				t.Fatalf("Marshal: %v", err)
+			}
+			if len(bytes) == 0 {
+				t.Fatalf("Marshal returned empty bytes")
+			}
+			if _, isJSON := event.(push.JSONEvent); isJSON {
+				t.Fatalf("expected typed FB event for %s, got JSONEvent", tt.kind)
+			}
+		})
+		seenKinds[tt.kind] = true
+	}
+	for _, kind := range SupportedKinds() {
+		if !seenKinds[kind] {
+			t.Errorf("catalog kind %q is not covered by this test", kind)
+		}
+	}
+}
+
+func TestBuildClientPushEventUnknownKindFallsBackToJSON(t *testing.T) {
+	t.Parallel()
+
+	event, err := buildClientPushEvent("unknown.kind", map[string]any{"x": 1})
+	if err != nil {
+		t.Fatalf("unexpected error: %v", err)
+	}
+	if _, ok := event.(push.JSONEvent); !ok {
+		t.Fatalf("expected JSONEvent fallback, got %T", event)
+	}
+	if event.Kind() != "unknown.kind" {
+		t.Fatalf("Kind() = %q", event.Kind())
+	}
+}
+
+func TestBuildClientPushEventRejectsBrokenPayloads(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		name    string
+		kind    string
+		payload map[string]any
+		want    string
+	}{
+		{
+			name:    "missing required uuid",
+			kind:    KindLobbyApplicationSubmitted,
+			payload: map[string]any{"game_id": uuid.NewString()},
+			want:    "application_id is missing",
+		},
+		{
+			name:    "non-uuid string",
+			kind:    KindLobbyInviteRevoked,
+			payload: map[string]any{"game_id": "not-a-uuid"},
+			want:    "is not a uuid",
+		},
+		{
+			name:    "uuid not a string",
+			kind:    KindLobbyInviteRevoked,
+			payload: map[string]any{"game_id": 42},
+			want:    "must be a string",
+		},
+		{
+			name:    "missing required string",
+			kind:    KindLobbyRaceNameRegistered,
+			payload: map[string]any{},
+			want:    "race_name is missing",
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+			_, err := buildClientPushEvent(tt.kind, tt.payload)
+			if err == nil {
+				t.Fatal("expected error")
+			}
+			if !strings.Contains(err.Error(), tt.want) {
+				t.Fatalf("unexpected error: %v", err)
+			}
+		})
+	}
+}
@@ -13,6 +13,7 @@ import (
 	"galaxy/backend/internal/notification"
 	backendpg "galaxy/backend/internal/postgres"
 	"galaxy/backend/internal/user"
+	"galaxy/backend/push"
 	pgshared "galaxy/postgres"
 
 	"github.com/google/uuid"
@@ -69,7 +70,7 @@ func startPostgres(t *testing.T) *sql.DB {
 	cfg := pgshared.DefaultConfig()
 	cfg.PrimaryDSN = scoped
 	cfg.OperationTimeout = pgOpTO
-	db, err := pgshared.OpenPrimary(ctx, cfg)
+	db, err := pgshared.OpenPrimary(ctx, cfg, backendpg.NoObservabilityOptions()...)
 	if err != nil {
 		t.Fatalf("open primary: %v", err)
 	}
@@ -149,9 +150,17 @@ type recordedPushEvent struct {
 	TraceID string
 }
 
-func (r *recordingPush) PublishClientEvent(_ context.Context, userID uuid.UUID, _ *uuid.UUID, kind string, payload map[string]any, eventID, requestID, traceID string) error {
+func (r *recordingPush) PublishClientEvent(_ context.Context, userID uuid.UUID, _ *uuid.UUID, event push.Event, eventID, requestID, traceID string) error {
 	r.mu.Lock()
 	defer r.mu.Unlock()
+	kind := ""
+	var payload map[string]any
+	if event != nil {
+		kind = event.Kind()
+		if jsonEvent, ok := event.(push.JSONEvent); ok {
+			payload = jsonEvent.Payload
+		}
+	}
 	r.calls = append(r.calls, recordedPushEvent{
 		UserID: userID,
 		Kind:   kind,
@@ -22,7 +22,8 @@ type Accounts struct {
|
|||||||
DeclaredCountry *string
|
DeclaredCountry *string
|
||||||
PermanentBlock bool
|
PermanentBlock bool
|
||||||
DeletedActorType *string
|
DeletedActorType *string
|
||||||
DeletedActorID *string
|
DeletedActorUserID *uuid.UUID
|
||||||
|
DeletedActorUsername *string
|
||||||
CreatedAt time.Time
|
CreatedAt time.Time
|
||||||
UpdatedAt time.Time
|
UpdatedAt time.Time
|
||||||
DeletedAt *time.Time
|
DeletedAt *time.Time
|
||||||
|
@@ -19,7 +19,8 @@ type EntitlementRecords struct {
 	IsPaid bool
 	Source string
 	ActorType string
-	ActorID *string
+	ActorUserID *uuid.UUID
+	ActorUsername *string
 	ReasonCode string
 	StartsAt time.Time
 	EndsAt *time.Time
@@ -18,7 +18,8 @@ type EntitlementSnapshots struct {
 	IsPaid bool
 	Source string
 	ActorType string
-	ActorID *string
+	ActorUserID *uuid.UUID
+	ActorUsername *string
 	ReasonCode string
 	StartsAt time.Time
 	EndsAt *time.Time
@@ -19,11 +19,13 @@ type LimitRecords struct {
 	Value int32
 	ReasonCode string
 	ActorType string
-	ActorID *string
+	ActorUserID *uuid.UUID
+	ActorUsername *string
 	AppliedAt time.Time
 	ExpiresAt *time.Time
 	RemovedAt *time.Time
 	RemovedByType *string
-	RemovedByID *string
+	RemovedByUserID *uuid.UUID
+	RemovedByUsername *string
 	RemovedReasonCode *string
 }
@@ -19,11 +19,13 @@ type SanctionRecords struct {
 	Scope string
 	ReasonCode string
 	ActorType string
-	ActorID *string
+	ActorUserID *uuid.UUID
+	ActorUsername *string
 	AppliedAt time.Time
 	ExpiresAt *time.Time
 	RemovedAt *time.Time
 	RemovedByType *string
-	RemovedByID *string
+	RemovedByUserID *uuid.UUID
+	RemovedByUsername *string
 	RemovedReasonCode *string
 }
@@ -0,0 +1,24 @@
+//
+// Code generated by go-jet DO NOT EDIT.
+//
+// WARNING: Changes to this file may cause incorrect behavior
+// and will be lost if the code is regenerated
+//
+
+package model
+
+import (
+	"github.com/google/uuid"
+	"time"
+)
+
+type SessionRevocations struct {
+	RevocationID uuid.UUID `sql:"primary_key"`
+	DeviceSessionID uuid.UUID
+	UserID uuid.UUID
+	ActorKind string
+	ActorUserID *uuid.UUID
+	ActorUsername *string
+	Reason string
+	RevokedAt time.Time
+}
@@ -26,7 +26,8 @@ type accountsTable struct {
 	DeclaredCountry postgres.ColumnString
 	PermanentBlock postgres.ColumnBool
 	DeletedActorType postgres.ColumnString
-	DeletedActorID postgres.ColumnString
+	DeletedActorUserID postgres.ColumnString
+	DeletedActorUsername postgres.ColumnString
 	CreatedAt postgres.ColumnTimestampz
 	UpdatedAt postgres.ColumnTimestampz
 	DeletedAt postgres.ColumnTimestampz
@@ -80,12 +81,13 @@ func newAccountsTableImpl(schemaName, tableName, alias string) accountsTable {
 		DeclaredCountryColumn = postgres.StringColumn("declared_country")
 		PermanentBlockColumn = postgres.BoolColumn("permanent_block")
 		DeletedActorTypeColumn = postgres.StringColumn("deleted_actor_type")
-		DeletedActorIDColumn = postgres.StringColumn("deleted_actor_id")
+		DeletedActorUserIDColumn = postgres.StringColumn("deleted_actor_user_id")
+		DeletedActorUsernameColumn = postgres.StringColumn("deleted_actor_username")
 		CreatedAtColumn = postgres.TimestampzColumn("created_at")
 		UpdatedAtColumn = postgres.TimestampzColumn("updated_at")
 		DeletedAtColumn = postgres.TimestampzColumn("deleted_at")
-		allColumns = postgres.ColumnList{UserIDColumn, EmailColumn, UserNameColumn, DisplayNameColumn, PreferredLanguageColumn, TimeZoneColumn, DeclaredCountryColumn, PermanentBlockColumn, DeletedActorTypeColumn, DeletedActorIDColumn, CreatedAtColumn, UpdatedAtColumn, DeletedAtColumn}
-		mutableColumns = postgres.ColumnList{EmailColumn, UserNameColumn, DisplayNameColumn, PreferredLanguageColumn, TimeZoneColumn, DeclaredCountryColumn, PermanentBlockColumn, DeletedActorTypeColumn, DeletedActorIDColumn, CreatedAtColumn, UpdatedAtColumn, DeletedAtColumn}
+		allColumns = postgres.ColumnList{UserIDColumn, EmailColumn, UserNameColumn, DisplayNameColumn, PreferredLanguageColumn, TimeZoneColumn, DeclaredCountryColumn, PermanentBlockColumn, DeletedActorTypeColumn, DeletedActorUserIDColumn, DeletedActorUsernameColumn, CreatedAtColumn, UpdatedAtColumn, DeletedAtColumn}
+		mutableColumns = postgres.ColumnList{EmailColumn, UserNameColumn, DisplayNameColumn, PreferredLanguageColumn, TimeZoneColumn, DeclaredCountryColumn, PermanentBlockColumn, DeletedActorTypeColumn, DeletedActorUserIDColumn, DeletedActorUsernameColumn, CreatedAtColumn, UpdatedAtColumn, DeletedAtColumn}
 		defaultColumns = postgres.ColumnList{DisplayNameColumn, PermanentBlockColumn, CreatedAtColumn, UpdatedAtColumn}
 	)
 
@@ -102,7 +104,8 @@ func newAccountsTableImpl(schemaName, tableName, alias string) accountsTable {
 		DeclaredCountry: DeclaredCountryColumn,
 		PermanentBlock: PermanentBlockColumn,
 		DeletedActorType: DeletedActorTypeColumn,
-		DeletedActorID: DeletedActorIDColumn,
+		DeletedActorUserID: DeletedActorUserIDColumn,
+		DeletedActorUsername: DeletedActorUsernameColumn,
 		CreatedAt: CreatedAtColumn,
 		UpdatedAt: UpdatedAtColumn,
 		DeletedAt: DeletedAtColumn,
@@ -23,7 +23,8 @@ type entitlementRecordsTable struct {
 	IsPaid postgres.ColumnBool
 	Source postgres.ColumnString
 	ActorType postgres.ColumnString
-	ActorID postgres.ColumnString
+	ActorUserID postgres.ColumnString
+	ActorUsername postgres.ColumnString
 	ReasonCode postgres.ColumnString
 	StartsAt postgres.ColumnTimestampz
 	EndsAt postgres.ColumnTimestampz
@@ -75,13 +76,14 @@ func newEntitlementRecordsTableImpl(schemaName, tableName, alias string) entitle
 		IsPaidColumn = postgres.BoolColumn("is_paid")
 		SourceColumn = postgres.StringColumn("source")
 		ActorTypeColumn = postgres.StringColumn("actor_type")
-		ActorIDColumn = postgres.StringColumn("actor_id")
+		ActorUserIDColumn = postgres.StringColumn("actor_user_id")
+		ActorUsernameColumn = postgres.StringColumn("actor_username")
 		ReasonCodeColumn = postgres.StringColumn("reason_code")
 		StartsAtColumn = postgres.TimestampzColumn("starts_at")
 		EndsAtColumn = postgres.TimestampzColumn("ends_at")
 		CreatedAtColumn = postgres.TimestampzColumn("created_at")
-		allColumns = postgres.ColumnList{RecordIDColumn, UserIDColumn, TierColumn, IsPaidColumn, SourceColumn, ActorTypeColumn, ActorIDColumn, ReasonCodeColumn, StartsAtColumn, EndsAtColumn, CreatedAtColumn}
-		mutableColumns = postgres.ColumnList{UserIDColumn, TierColumn, IsPaidColumn, SourceColumn, ActorTypeColumn, ActorIDColumn, ReasonCodeColumn, StartsAtColumn, EndsAtColumn, CreatedAtColumn}
+		allColumns = postgres.ColumnList{RecordIDColumn, UserIDColumn, TierColumn, IsPaidColumn, SourceColumn, ActorTypeColumn, ActorUserIDColumn, ActorUsernameColumn, ReasonCodeColumn, StartsAtColumn, EndsAtColumn, CreatedAtColumn}
+		mutableColumns = postgres.ColumnList{UserIDColumn, TierColumn, IsPaidColumn, SourceColumn, ActorTypeColumn, ActorUserIDColumn, ActorUsernameColumn, ReasonCodeColumn, StartsAtColumn, EndsAtColumn, CreatedAtColumn}
 		defaultColumns = postgres.ColumnList{ReasonCodeColumn, StartsAtColumn, CreatedAtColumn}
 	)
 
@@ -95,7 +97,8 @@ func newEntitlementRecordsTableImpl(schemaName, tableName, alias string) entitle
 		IsPaid: IsPaidColumn,
 		Source: SourceColumn,
 		ActorType: ActorTypeColumn,
-		ActorID: ActorIDColumn,
+		ActorUserID: ActorUserIDColumn,
+		ActorUsername: ActorUsernameColumn,
 		ReasonCode: ReasonCodeColumn,
 		StartsAt: StartsAtColumn,
 		EndsAt: EndsAtColumn,
@@ -22,7 +22,8 @@ type entitlementSnapshotsTable struct {
 	IsPaid postgres.ColumnBool
 	Source postgres.ColumnString
 	ActorType postgres.ColumnString
-	ActorID postgres.ColumnString
+	ActorUserID postgres.ColumnString
+	ActorUsername postgres.ColumnString
 	ReasonCode postgres.ColumnString
 	StartsAt postgres.ColumnTimestampz
 	EndsAt postgres.ColumnTimestampz
@@ -74,14 +75,15 @@ func newEntitlementSnapshotsTableImpl(schemaName, tableName, alias string) entit
 		IsPaidColumn = postgres.BoolColumn("is_paid")
 		SourceColumn = postgres.StringColumn("source")
 		ActorTypeColumn = postgres.StringColumn("actor_type")
-		ActorIDColumn = postgres.StringColumn("actor_id")
+		ActorUserIDColumn = postgres.StringColumn("actor_user_id")
+		ActorUsernameColumn = postgres.StringColumn("actor_username")
 		ReasonCodeColumn = postgres.StringColumn("reason_code")
 		StartsAtColumn = postgres.TimestampzColumn("starts_at")
 		EndsAtColumn = postgres.TimestampzColumn("ends_at")
 		MaxRegisteredRaceNamesColumn = postgres.IntegerColumn("max_registered_race_names")
 		UpdatedAtColumn = postgres.TimestampzColumn("updated_at")
-		allColumns = postgres.ColumnList{UserIDColumn, TierColumn, IsPaidColumn, SourceColumn, ActorTypeColumn, ActorIDColumn, ReasonCodeColumn, StartsAtColumn, EndsAtColumn, MaxRegisteredRaceNamesColumn, UpdatedAtColumn}
-		mutableColumns = postgres.ColumnList{TierColumn, IsPaidColumn, SourceColumn, ActorTypeColumn, ActorIDColumn, ReasonCodeColumn, StartsAtColumn, EndsAtColumn, MaxRegisteredRaceNamesColumn, UpdatedAtColumn}
+		allColumns = postgres.ColumnList{UserIDColumn, TierColumn, IsPaidColumn, SourceColumn, ActorTypeColumn, ActorUserIDColumn, ActorUsernameColumn, ReasonCodeColumn, StartsAtColumn, EndsAtColumn, MaxRegisteredRaceNamesColumn, UpdatedAtColumn}
+		mutableColumns = postgres.ColumnList{TierColumn, IsPaidColumn, SourceColumn, ActorTypeColumn, ActorUserIDColumn, ActorUsernameColumn, ReasonCodeColumn, StartsAtColumn, EndsAtColumn, MaxRegisteredRaceNamesColumn, UpdatedAtColumn}
 		defaultColumns = postgres.ColumnList{ReasonCodeColumn, UpdatedAtColumn}
 	)
 
@@ -94,7 +96,8 @@ func newEntitlementSnapshotsTableImpl(schemaName, tableName, alias string) entit
 		IsPaid: IsPaidColumn,
 		Source: SourceColumn,
 		ActorType: ActorTypeColumn,
-		ActorID: ActorIDColumn,
+		ActorUserID: ActorUserIDColumn,
+		ActorUsername: ActorUsernameColumn,
 		ReasonCode: ReasonCodeColumn,
 		StartsAt: StartsAtColumn,
 		EndsAt: EndsAtColumn,
@@ -23,12 +23,14 @@ type limitRecordsTable struct {
 	Value postgres.ColumnInteger
 	ReasonCode postgres.ColumnString
 	ActorType postgres.ColumnString
-	ActorID postgres.ColumnString
+	ActorUserID postgres.ColumnString
+	ActorUsername postgres.ColumnString
 	AppliedAt postgres.ColumnTimestampz
 	ExpiresAt postgres.ColumnTimestampz
 	RemovedAt postgres.ColumnTimestampz
 	RemovedByType postgres.ColumnString
-	RemovedByID postgres.ColumnString
+	RemovedByUserID postgres.ColumnString
+	RemovedByUsername postgres.ColumnString
 	RemovedReasonCode postgres.ColumnString
 
 	AllColumns postgres.ColumnList
@@ -77,15 +79,17 @@ func newLimitRecordsTableImpl(schemaName, tableName, alias string) limitRecordsT
 		ValueColumn = postgres.IntegerColumn("value")
 		ReasonCodeColumn = postgres.StringColumn("reason_code")
 		ActorTypeColumn = postgres.StringColumn("actor_type")
-		ActorIDColumn = postgres.StringColumn("actor_id")
+		ActorUserIDColumn = postgres.StringColumn("actor_user_id")
+		ActorUsernameColumn = postgres.StringColumn("actor_username")
 		AppliedAtColumn = postgres.TimestampzColumn("applied_at")
 		ExpiresAtColumn = postgres.TimestampzColumn("expires_at")
 		RemovedAtColumn = postgres.TimestampzColumn("removed_at")
 		RemovedByTypeColumn = postgres.StringColumn("removed_by_type")
-		RemovedByIDColumn = postgres.StringColumn("removed_by_id")
+		RemovedByUserIDColumn = postgres.StringColumn("removed_by_user_id")
+		RemovedByUsernameColumn = postgres.StringColumn("removed_by_username")
 		RemovedReasonCodeColumn = postgres.StringColumn("removed_reason_code")
-		allColumns = postgres.ColumnList{RecordIDColumn, UserIDColumn, LimitCodeColumn, ValueColumn, ReasonCodeColumn, ActorTypeColumn, ActorIDColumn, AppliedAtColumn, ExpiresAtColumn, RemovedAtColumn, RemovedByTypeColumn, RemovedByIDColumn, RemovedReasonCodeColumn}
-		mutableColumns = postgres.ColumnList{UserIDColumn, LimitCodeColumn, ValueColumn, ReasonCodeColumn, ActorTypeColumn, ActorIDColumn, AppliedAtColumn, ExpiresAtColumn, RemovedAtColumn, RemovedByTypeColumn, RemovedByIDColumn, RemovedReasonCodeColumn}
+		allColumns = postgres.ColumnList{RecordIDColumn, UserIDColumn, LimitCodeColumn, ValueColumn, ReasonCodeColumn, ActorTypeColumn, ActorUserIDColumn, ActorUsernameColumn, AppliedAtColumn, ExpiresAtColumn, RemovedAtColumn, RemovedByTypeColumn, RemovedByUserIDColumn, RemovedByUsernameColumn, RemovedReasonCodeColumn}
+		mutableColumns = postgres.ColumnList{UserIDColumn, LimitCodeColumn, ValueColumn, ReasonCodeColumn, ActorTypeColumn, ActorUserIDColumn, ActorUsernameColumn, AppliedAtColumn, ExpiresAtColumn, RemovedAtColumn, RemovedByTypeColumn, RemovedByUserIDColumn, RemovedByUsernameColumn, RemovedReasonCodeColumn}
 		defaultColumns = postgres.ColumnList{AppliedAtColumn}
 	)
 
@@ -99,12 +103,14 @@ func newLimitRecordsTableImpl(schemaName, tableName, alias string) limitRecordsT
 		Value: ValueColumn,
 		ReasonCode: ReasonCodeColumn,
 		ActorType: ActorTypeColumn,
-		ActorID: ActorIDColumn,
+		ActorUserID: ActorUserIDColumn,
+		ActorUsername: ActorUsernameColumn,
 		AppliedAt: AppliedAtColumn,
 		ExpiresAt: ExpiresAtColumn,
 		RemovedAt: RemovedAtColumn,
 		RemovedByType: RemovedByTypeColumn,
-		RemovedByID: RemovedByIDColumn,
+		RemovedByUserID: RemovedByUserIDColumn,
+		RemovedByUsername: RemovedByUsernameColumn,
 		RemovedReasonCode: RemovedReasonCodeColumn,
 
 		AllColumns: allColumns,
@@ -23,12 +23,14 @@ type sanctionRecordsTable struct {
 	Scope postgres.ColumnString
 	ReasonCode postgres.ColumnString
 	ActorType postgres.ColumnString
-	ActorID postgres.ColumnString
+	ActorUserID postgres.ColumnString
+	ActorUsername postgres.ColumnString
 	AppliedAt postgres.ColumnTimestampz
 	ExpiresAt postgres.ColumnTimestampz
 	RemovedAt postgres.ColumnTimestampz
 	RemovedByType postgres.ColumnString
-	RemovedByID postgres.ColumnString
+	RemovedByUserID postgres.ColumnString
+	RemovedByUsername postgres.ColumnString
 	RemovedReasonCode postgres.ColumnString
 
 	AllColumns postgres.ColumnList
@@ -77,15 +79,17 @@ func newSanctionRecordsTableImpl(schemaName, tableName, alias string) sanctionRe
 		ScopeColumn = postgres.StringColumn("scope")
 		ReasonCodeColumn = postgres.StringColumn("reason_code")
 		ActorTypeColumn = postgres.StringColumn("actor_type")
-		ActorIDColumn = postgres.StringColumn("actor_id")
+		ActorUserIDColumn = postgres.StringColumn("actor_user_id")
+		ActorUsernameColumn = postgres.StringColumn("actor_username")
 		AppliedAtColumn = postgres.TimestampzColumn("applied_at")
 		ExpiresAtColumn = postgres.TimestampzColumn("expires_at")
 		RemovedAtColumn = postgres.TimestampzColumn("removed_at")
 		RemovedByTypeColumn = postgres.StringColumn("removed_by_type")
-		RemovedByIDColumn = postgres.StringColumn("removed_by_id")
+		RemovedByUserIDColumn = postgres.StringColumn("removed_by_user_id")
+		RemovedByUsernameColumn = postgres.StringColumn("removed_by_username")
 		RemovedReasonCodeColumn = postgres.StringColumn("removed_reason_code")
-		allColumns = postgres.ColumnList{RecordIDColumn, UserIDColumn, SanctionCodeColumn, ScopeColumn, ReasonCodeColumn, ActorTypeColumn, ActorIDColumn, AppliedAtColumn, ExpiresAtColumn, RemovedAtColumn, RemovedByTypeColumn, RemovedByIDColumn, RemovedReasonCodeColumn}
-		mutableColumns = postgres.ColumnList{UserIDColumn, SanctionCodeColumn, ScopeColumn, ReasonCodeColumn, ActorTypeColumn, ActorIDColumn, AppliedAtColumn, ExpiresAtColumn, RemovedAtColumn, RemovedByTypeColumn, RemovedByIDColumn, RemovedReasonCodeColumn}
+		allColumns = postgres.ColumnList{RecordIDColumn, UserIDColumn, SanctionCodeColumn, ScopeColumn, ReasonCodeColumn, ActorTypeColumn, ActorUserIDColumn, ActorUsernameColumn, AppliedAtColumn, ExpiresAtColumn, RemovedAtColumn, RemovedByTypeColumn, RemovedByUserIDColumn, RemovedByUsernameColumn, RemovedReasonCodeColumn}
+		mutableColumns = postgres.ColumnList{UserIDColumn, SanctionCodeColumn, ScopeColumn, ReasonCodeColumn, ActorTypeColumn, ActorUserIDColumn, ActorUsernameColumn, AppliedAtColumn, ExpiresAtColumn, RemovedAtColumn, RemovedByTypeColumn, RemovedByUserIDColumn, RemovedByUsernameColumn, RemovedReasonCodeColumn}
 		defaultColumns = postgres.ColumnList{AppliedAtColumn}
 	)
 
@@ -99,12 +103,14 @@ func newSanctionRecordsTableImpl(schemaName, tableName, alias string) sanctionRe
 		Scope: ScopeColumn,
 		ReasonCode: ReasonCodeColumn,
 		ActorType: ActorTypeColumn,
-		ActorID: ActorIDColumn,
+		ActorUserID: ActorUserIDColumn,
+		ActorUsername: ActorUsernameColumn,
 		AppliedAt: AppliedAtColumn,
 		ExpiresAt: ExpiresAtColumn,
 		RemovedAt: RemovedAtColumn,
 		RemovedByType: RemovedByTypeColumn,
-		RemovedByID: RemovedByIDColumn,
+		RemovedByUserID: RemovedByUserIDColumn,
+		RemovedByUsername: RemovedByUsernameColumn,
 		RemovedReasonCode: RemovedReasonCodeColumn,
 
 		AllColumns: allColumns,
@@ -0,0 +1,99 @@
+//
+// Code generated by go-jet DO NOT EDIT.
+//
+// WARNING: Changes to this file may cause incorrect behavior
+// and will be lost if the code is regenerated
+//
+
+package table
+
+import (
+	"github.com/go-jet/jet/v2/postgres"
+)
+
+var SessionRevocations = newSessionRevocationsTable("backend", "session_revocations", "")
+
+type sessionRevocationsTable struct {
+	postgres.Table
+
+	// Columns
+	RevocationID postgres.ColumnString
+	DeviceSessionID postgres.ColumnString
+	UserID postgres.ColumnString
+	ActorKind postgres.ColumnString
+	ActorUserID postgres.ColumnString
+	ActorUsername postgres.ColumnString
+	Reason postgres.ColumnString
+	RevokedAt postgres.ColumnTimestampz
+
+	AllColumns postgres.ColumnList
+	MutableColumns postgres.ColumnList
+	DefaultColumns postgres.ColumnList
+}
+
+type SessionRevocationsTable struct {
+	sessionRevocationsTable
+
+	EXCLUDED sessionRevocationsTable
+}
+
+// AS creates new SessionRevocationsTable with assigned alias
+func (a SessionRevocationsTable) AS(alias string) *SessionRevocationsTable {
+	return newSessionRevocationsTable(a.SchemaName(), a.TableName(), alias)
+}
+
+// Schema creates new SessionRevocationsTable with assigned schema name
+func (a SessionRevocationsTable) FromSchema(schemaName string) *SessionRevocationsTable {
+	return newSessionRevocationsTable(schemaName, a.TableName(), a.Alias())
+}
+
+// WithPrefix creates new SessionRevocationsTable with assigned table prefix
+func (a SessionRevocationsTable) WithPrefix(prefix string) *SessionRevocationsTable {
+	return newSessionRevocationsTable(a.SchemaName(), prefix+a.TableName(), a.TableName())
+}
+
+// WithSuffix creates new SessionRevocationsTable with assigned table suffix
+func (a SessionRevocationsTable) WithSuffix(suffix string) *SessionRevocationsTable {
+	return newSessionRevocationsTable(a.SchemaName(), a.TableName()+suffix, a.TableName())
+}
+
+func newSessionRevocationsTable(schemaName, tableName, alias string) *SessionRevocationsTable {
+	return &SessionRevocationsTable{
+		sessionRevocationsTable: newSessionRevocationsTableImpl(schemaName, tableName, alias),
+		EXCLUDED: newSessionRevocationsTableImpl("", "excluded", ""),
+	}
+}
+
+func newSessionRevocationsTableImpl(schemaName, tableName, alias string) sessionRevocationsTable {
+	var (
+		RevocationIDColumn = postgres.StringColumn("revocation_id")
+		DeviceSessionIDColumn = postgres.StringColumn("device_session_id")
+		UserIDColumn = postgres.StringColumn("user_id")
+		ActorKindColumn = postgres.StringColumn("actor_kind")
+		ActorUserIDColumn = postgres.StringColumn("actor_user_id")
+		ActorUsernameColumn = postgres.StringColumn("actor_username")
+		ReasonColumn = postgres.StringColumn("reason")
+		RevokedAtColumn = postgres.TimestampzColumn("revoked_at")
+		allColumns = postgres.ColumnList{RevocationIDColumn, DeviceSessionIDColumn, UserIDColumn, ActorKindColumn, ActorUserIDColumn, ActorUsernameColumn, ReasonColumn, RevokedAtColumn}
+		mutableColumns = postgres.ColumnList{DeviceSessionIDColumn, UserIDColumn, ActorKindColumn, ActorUserIDColumn, ActorUsernameColumn, ReasonColumn, RevokedAtColumn}
+		defaultColumns = postgres.ColumnList{ReasonColumn, RevokedAtColumn}
+	)
+
+	return sessionRevocationsTable{
+		Table: postgres.NewTable(schemaName, tableName, alias, allColumns...),
+
+		//Columns
+		RevocationID: RevocationIDColumn,
+		DeviceSessionID: DeviceSessionIDColumn,
+		UserID: UserIDColumn,
+		ActorKind: ActorKindColumn,
+		ActorUserID: ActorUserIDColumn,
+		ActorUsername: ActorUsernameColumn,
+		Reason: ReasonColumn,
+		RevokedAt: RevokedAtColumn,
+
+		AllColumns: allColumns,
+		MutableColumns: mutableColumns,
+		DefaultColumns: defaultColumns,
+	}
+}
@@ -40,5 +40,6 @@ func UseSchema(schema string) {
 	RuntimeRecords = RuntimeRecords.FromSchema(schema)
 	SanctionActive = SanctionActive.FromSchema(schema)
 	SanctionRecords = SanctionRecords.FromSchema(schema)
+	SessionRevocations = SessionRevocations.FromSchema(schema)
 	UserCountryCounters = UserCountryCounters.FromSchema(schema)
 }
@@ -37,7 +37,8 @@ CREATE TABLE auth_challenges (
     attempts integer NOT NULL DEFAULT 0,
     created_at timestamptz NOT NULL DEFAULT now(),
     expires_at timestamptz NOT NULL,
-    consumed_at timestamptz
+    consumed_at timestamptz,
+    preferred_language text NOT NULL DEFAULT ''
 );
 
 CREATE INDEX auth_challenges_email_idx ON auth_challenges (email);
@@ -48,6 +49,30 @@ CREATE TABLE blocked_emails (
     blocked_at timestamptz NOT NULL DEFAULT now()
 );
 
+-- session_revocations is the durable audit trail of every device-session
+-- revocation. Each revoke writes one row carrying the actor kind, actor
+-- id, and free-form reason. The table is append-only; reading it is the
+-- only way to answer "who and why revoked this session". The
+-- device_session_id column is not a foreign key because device_sessions
+-- rows survive after revoke (status='revoked'), and dropping a session
+-- through a future cleanup must not implicitly drop its audit history.
+CREATE TABLE session_revocations (
+    revocation_id uuid PRIMARY KEY,
+    device_session_id uuid NOT NULL,
+    user_id uuid NOT NULL,
+    actor_kind text NOT NULL,
+    actor_user_id uuid,
+    actor_username text,
+    reason text NOT NULL DEFAULT '',
+    revoked_at timestamptz NOT NULL DEFAULT now(),
+    CONSTRAINT session_revocations_actor_chk
+        CHECK (actor_user_id IS NULL OR actor_username IS NULL)
+);
+
+CREATE INDEX session_revocations_user_idx ON session_revocations (user_id, revoked_at DESC);
+CREATE INDEX session_revocations_device_idx ON session_revocations (device_session_id, revoked_at DESC);
+CREATE INDEX session_revocations_actor_kind_idx ON session_revocations (actor_kind, revoked_at DESC);
+
 -- =====================================================================
 -- User domain
 -- =====================================================================
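The comment block above notes that reading this table is the only way to answer "who and why revoked this session". A sketch of that lookup under the schema just added — `$1` stands for the device session id, and the `'system'` fallback label is illustrative, not part of the schema:

```sql
-- Latest revocation for one device session, served by
-- session_revocations_device_idx (device_session_id, revoked_at DESC).
SELECT actor_kind,
       COALESCE(actor_username, actor_user_id::text, 'system') AS actor,
       reason,
       revoked_at
FROM session_revocations
WHERE device_session_id = $1
ORDER BY revoked_at DESC
LIMIT 1;
```

Because the table is append-only, dropping `LIMIT 1` returns the full revocation history for that session in reverse chronological order.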
@@ -66,12 +91,15 @@ CREATE TABLE accounts (
     declared_country text,
     permanent_block boolean NOT NULL DEFAULT false,
     deleted_actor_type text,
-    deleted_actor_id text,
+    deleted_actor_user_id uuid,
+    deleted_actor_username text,
     created_at timestamptz NOT NULL DEFAULT now(),
     updated_at timestamptz NOT NULL DEFAULT now(),
     deleted_at timestamptz,
     CONSTRAINT accounts_email_unique UNIQUE (email),
-    CONSTRAINT accounts_user_name_unique UNIQUE (user_name)
+    CONSTRAINT accounts_user_name_unique UNIQUE (user_name),
+    CONSTRAINT accounts_deleted_actor_chk
+        CHECK (deleted_actor_user_id IS NULL OR deleted_actor_username IS NULL)
 );
 
 CREATE INDEX accounts_listing_idx
@@ -94,13 +122,16 @@ CREATE TABLE entitlement_records (
     is_paid boolean NOT NULL,
     source text NOT NULL,
     actor_type text NOT NULL,
-    actor_id text,
+    actor_user_id uuid,
+    actor_username text,
     reason_code text NOT NULL DEFAULT '',
     starts_at timestamptz NOT NULL DEFAULT now(),
     ends_at timestamptz,
     created_at timestamptz NOT NULL DEFAULT now(),
     CONSTRAINT entitlement_records_tier_chk
-        CHECK (tier IN ('free', 'monthly', 'yearly', 'permanent'))
+        CHECK (tier IN ('free', 'monthly', 'yearly', 'permanent')),
+    CONSTRAINT entitlement_records_actor_chk
+        CHECK (actor_user_id IS NULL OR actor_username IS NULL)
 );
 
 CREATE INDEX entitlement_records_user_idx
@@ -117,14 +148,17 @@ CREATE TABLE entitlement_snapshots (
     is_paid boolean NOT NULL,
     source text NOT NULL,
     actor_type text NOT NULL,
-    actor_id text,
+    actor_user_id uuid,
+    actor_username text,
     reason_code text NOT NULL DEFAULT '',
     starts_at timestamptz NOT NULL,
     ends_at timestamptz,
     max_registered_race_names integer NOT NULL,
     updated_at timestamptz NOT NULL DEFAULT now(),
     CONSTRAINT entitlement_snapshots_tier_chk
-        CHECK (tier IN ('free', 'monthly', 'yearly', 'permanent'))
+        CHECK (tier IN ('free', 'monthly', 'yearly', 'permanent')),
+    CONSTRAINT entitlement_snapshots_actor_chk
+        CHECK (actor_user_id IS NULL OR actor_username IS NULL)
 );
 
 CREATE TABLE sanction_records (
@@ -134,15 +168,21 @@ CREATE TABLE sanction_records (
     scope text NOT NULL,
     reason_code text NOT NULL,
     actor_type text NOT NULL,
-    actor_id text,
+    actor_user_id uuid,
+    actor_username text,
     applied_at timestamptz NOT NULL DEFAULT now(),
     expires_at timestamptz,
     removed_at timestamptz,
     removed_by_type text,
-    removed_by_id text,
+    removed_by_user_id uuid,
+    removed_by_username text,
     removed_reason_code text,
     CONSTRAINT sanction_records_code_chk
-        CHECK (sanction_code IN ('permanent_block'))
+        CHECK (sanction_code IN ('permanent_block')),
+    CONSTRAINT sanction_records_actor_chk
+        CHECK (actor_user_id IS NULL OR actor_username IS NULL),
+    CONSTRAINT sanction_records_removed_by_chk
+        CHECK (removed_by_user_id IS NULL OR removed_by_username IS NULL)
 );
 
 CREATE INDEX sanction_records_user_idx
@@ -167,13 +207,19 @@ CREATE TABLE limit_records (
     value integer NOT NULL,
     reason_code text NOT NULL,
     actor_type text NOT NULL,
-    actor_id text,
+    actor_user_id uuid,
+    actor_username text,
     applied_at timestamptz NOT NULL DEFAULT now(),
     expires_at timestamptz,
     removed_at timestamptz,
     removed_by_type text,
-    removed_by_id text,
-    removed_reason_code text
+    removed_by_user_id uuid,
+    removed_by_username text,
+    removed_reason_code text,
+    CONSTRAINT limit_records_actor_chk
+        CHECK (actor_user_id IS NULL OR actor_username IS NULL),
+    CONSTRAINT limit_records_removed_by_chk
+        CHECK (removed_by_user_id IS NULL OR removed_by_username IS NULL)
 );
 
 CREATE INDEX limit_records_user_idx
@@ -1,13 +0,0 @@
--- +goose Up
--- Persist the locale captured at send-email-code so it can be replayed at
--- confirm-email-code when the auth flow needs `preferred_language` to seed
--- a freshly-created `accounts` row. Existing rows default to '' and are
--- treated by the auth service as "no captured locale", in which case the
--- service falls back to the geoip-derived language and finally to "en".
-
-ALTER TABLE backend.auth_challenges
-    ADD COLUMN preferred_language text NOT NULL DEFAULT '';
-
--- +goose Down
-ALTER TABLE backend.auth_challenges
-    DROP COLUMN preferred_language;
@@ -0,0 +1,26 @@
+# Backend migrations
+
+Goose migrations embedded into the backend binary by `embed.go`. Applied
+at startup before any listener opens (see `internal/postgres`).
+
+## Pre-production single-file rule
+
+**While the platform is not yet in production, every schema change goes
+into the existing `00001_init.sql` file** rather than a new
+`00002_*`-prefixed file. The intent is to keep the schema in one
+canonical place so reviewers and developers do not have to reconstruct
+the latest shape from a chain of incremental migrations.
+
+Operationally this means that pulling a branch with schema changes
+requires a fresh database — the only consumer today is local development
+and integration tests, both of which spin up disposable Postgres
+instances.
+
+> **Remove this rule before the first production deployment.** From
+> that point on every schema change must be a new migration file with a
+> monotonically increasing prefix, and `00001_init.sql` becomes
+> immutable history.
+
+If you need to make a change, edit `00001_init.sql` directly. Down
+migrations should still be kept in sync (they live at the bottom of the
+file — currently a single `DROP SCHEMA backend CASCADE`).
@@ -34,6 +34,7 @@ var expectedBackendTables = []string{
	"auth_challenges",
	"blocked_emails",
	"device_sessions",
+	"session_revocations",
	// User domain.
	"accounts",
	"entitlement_records",
@@ -110,7 +111,7 @@ func TestMigrationsApplyToFreshSchema(t *testing.T) {
 	cfg.PrimaryDSN = scopedDSN
 	cfg.OperationTimeout = migrationsTestOpTimeout
 
-	db, err := pgshared.OpenPrimary(ctx, cfg)
+	db, err := pgshared.OpenPrimary(ctx, cfg, backendpg.NoObservabilityOptions()...)
 	if err != nil {
 		t.Fatalf("open primary: %v", err)
 	}
@@ -0,0 +1,23 @@
+package postgres
+
+import (
+	pgshared "galaxy/postgres"
+
+	metricnoop "go.opentelemetry.io/otel/metric/noop"
+	tracenoop "go.opentelemetry.io/otel/trace/noop"
+)
+
+// NoObservabilityOptions returns the pgshared options that pin a fresh
+// `*sql.DB` to no-op tracer and meter providers. Tests that bring up a
+// real Postgres testcontainer use it so the otelsql instrumentation
+// never falls back to the global tracer/meter — leaving an OTLP
+// endpoint accidentally configured in the developer environment cannot
+// stall the test on a background exporter handshake. Production code
+// passes the runtime's real providers through galaxy/postgres directly
+// and does not touch this helper.
+func NoObservabilityOptions() []pgshared.Option {
+	return []pgshared.Option{
+		pgshared.WithTracerProvider(tracenoop.NewTracerProvider()),
+		pgshared.WithMeterProvider(metricnoop.NewMeterProvider()),
+	}
+}
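The helper above hands `pgshared.OpenPrimary` a slice of functional options. As a dependency-free sketch of that pattern — every name here (`option`, `openPrimary`, the provider strings) is an illustrative stand-in, not the real `pgshared` API:

```go
package main

import "fmt"

// config stands in for the internal pgshared configuration: without
// options, instrumentation falls back to the global providers.
type config struct {
	tracerProvider string
	meterProvider  string
}

type option func(*config)

func withTracerProvider(p string) option { return func(c *config) { c.tracerProvider = p } }
func withMeterProvider(p string) option  { return func(c *config) { c.meterProvider = p } }

// noObservabilityOptions mirrors the shape of
// backendpg.NoObservabilityOptions from the diff: pin both providers
// to no-ops so nothing reaches a real exporter.
func noObservabilityOptions() []option {
	return []option{withTracerProvider("noop"), withMeterProvider("noop")}
}

// openPrimary stands in for pgshared.OpenPrimary.
func openPrimary(opts ...option) config {
	c := config{tracerProvider: "global", meterProvider: "global"}
	for _, o := range opts {
		o(&c)
	}
	return c
}

func main() {
	fmt.Println(openPrimary(noObservabilityOptions()...)) // {noop noop}
}
```

The point of the pattern is that tests opt out explicitly while production wiring passes real providers through the same variadic slot.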
@@ -82,7 +82,7 @@ func startPostgres(t *testing.T) *sql.DB {
 	cfg := pgshared.DefaultConfig()
 	cfg.PrimaryDSN = scopedDSN
 	cfg.OperationTimeout = pgOpTO
-	db, err := pgshared.OpenPrimary(ctx, cfg)
+	db, err := pgshared.OpenPrimary(ctx, cfg, backendpg.NoObservabilityOptions()...)
 	if err != nil {
 		t.Fatalf("open primary: %v", err)
 	}
@@ -15,10 +15,13 @@ import (
 )
 
 // InternalSessionsHandlers groups the gateway-only session handlers
-// under `/api/v1/internal/sessions/*`. The current implementation ships real
-// implementations; nil *auth.Service falls back to the Stage-3
-// placeholder so the contract test continues to validate the OpenAPI
-// envelope without booting a database.
+// under `/api/v1/internal/sessions/*`. The internal surface only
+// carries the per-request session lookup gateway needs to verify
+// signed envelopes; revocation is driven through the user surface
+// (self-driven) or through admin operations that call auth in-process,
+// not through this listener. nil *auth.Service falls back to the
+// Stage-3 placeholder so the contract test continues to validate the
+// OpenAPI envelope without booting a database.
 type InternalSessionsHandlers struct {
 	svc *auth.Service
 	logger *zap.Logger
@@ -62,58 +65,3 @@ func (h *InternalSessionsHandlers) Get() gin.HandlerFunc {
 		c.JSON(http.StatusOK, deviceSessionToWire(sess))
 	}
 }
-
-// Revoke handles POST /api/v1/internal/sessions/{device_session_id}/revoke.
-func (h *InternalSessionsHandlers) Revoke() gin.HandlerFunc {
-	if h.svc == nil {
-		return handlers.NotImplemented("internalSessionsRevoke")
-	}
-	return func(c *gin.Context) {
-		deviceSessionID, err := uuid.Parse(c.Param("device_session_id"))
-		if err != nil {
-			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "device_session_id must be a valid UUID")
-			return
-		}
-		ctx := c.Request.Context()
-		sess, err := h.svc.RevokeSession(ctx, deviceSessionID)
-		if err != nil {
-			if errors.Is(err, auth.ErrSessionNotFound) {
-				httperr.Abort(c, http.StatusNotFound, httperr.CodeNotFound, "device session not found")
-				return
-			}
-			h.logger.Error("internal sessions revoke failed",
-				append(telemetry.TraceFieldsFromContext(ctx), zap.Error(err))...,
-			)
-			httperr.Abort(c, http.StatusInternalServerError, httperr.CodeInternalError, "service error")
-			return
-		}
-		c.JSON(http.StatusOK, deviceSessionToWire(sess))
-	}
-}
-
-// RevokeAllForUser handles POST /api/v1/internal/sessions/users/{user_id}/revoke-all.
-func (h *InternalSessionsHandlers) RevokeAllForUser() gin.HandlerFunc {
-	if h.svc == nil {
-		return handlers.NotImplemented("internalSessionsRevokeAllForUser")
-	}
-	return func(c *gin.Context) {
-		userID, err := uuid.Parse(c.Param("user_id"))
-		if err != nil {
-			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "user_id must be a valid UUID")
-			return
-		}
-		ctx := c.Request.Context()
-		revoked, err := h.svc.RevokeAllForUser(ctx, userID)
-		if err != nil {
-			h.logger.Error("internal sessions revoke-all failed",
-				append(telemetry.TraceFieldsFromContext(ctx), zap.Error(err))...,
-			)
-			httperr.Abort(c, http.StatusInternalServerError, httperr.CodeInternalError, "service error")
-			return
-		}
-		c.JSON(http.StatusOK, gin.H{
-			"user_id": userID.String(),
-			"revoked_count": len(revoked),
-		})
-	}
-}
@@ -126,6 +126,8 @@ func (h *PublicAuthHandlers) ConfirmEmailCode() gin.HandlerFunc {
 			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "code is incorrect")
 		case errors.Is(err, auth.ErrTooManyAttempts):
 			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "too many attempts")
+		case errors.Is(err, auth.ErrEmailPermanentlyBlocked):
+			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "email is not allowed")
 		default:
 			h.logger.Error("confirm-email-code failed",
 				append(telemetry.TraceFieldsFromContext(ctx), zap.Error(err))...,
@@ -116,15 +116,20 @@ func (h *UserGamesHandlers) Orders() gin.HandlerFunc {
 			respondGameProxyError(c, h.logger, "user games orders", ctx, err)
 			return
 		}
-		// Orders payload uses an updatedAt + commands shape; we don't
-		// rewrite it here because the engine derives the actor from
-		// the route, not the order body. We pass the body through
-		// verbatim (per ARCHITECTURE.md §9: backend is the only
-		// caller, so rewriting is unnecessary). Unused mapping is
-		// kept in the lookup so 404 returns when no mapping exists.
-		_ = mapping
+		// Engine binds the order body into `gamerest.Command{Actor,
+		// Commands}` and rejects an empty actor with `notblank`, so
+		// backend rebinds the actor from the runtime player mapping
+		// before forwarding — the same rule as for the command
+		// handler. Per ARCHITECTURE.md §9 backend is the only caller
+		// of the engine, so the body never carries a client-supplied
+		// actor.
 		_ = order.Order{}
-		resp, err := h.engine.PutOrders(ctx, endpoint, body)
+		payload, err := rebindActor(body, mapping.RaceName)
+		if err != nil {
+			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "request body must be a JSON object")
+			return
+		}
+		resp, err := h.engine.PutOrders(ctx, endpoint, payload)
 		if err != nil {
 			respondEngineProxyError(c, h.logger, "user games orders", ctx, resp, err)
 			return
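The hunk above calls `rebindActor(body, mapping.RaceName)` without showing its body. A hypothetical reconstruction of what such a helper might do — decode, overwrite the actor, re-encode; the top-level `"actor"` key is an assumption, not confirmed by the diff:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rebindActor decodes the order body, overwrites the actor with the
// runtime race name, and re-encodes. A non-object body is rejected,
// matching the 400 "request body must be a JSON object" branch above.
func rebindActor(body []byte, raceName string) ([]byte, error) {
	var payload map[string]any
	if err := json.Unmarshal(body, &payload); err != nil {
		return nil, fmt.Errorf("request body must be a JSON object: %w", err)
	}
	payload["actor"] = raceName // never forward a client-supplied actor
	return json.Marshal(payload)
}

func main() {
	out, _ := rebindActor([]byte(`{"actor":"spoofed","commands":[]}`), "gorn")
	fmt.Println(string(out)) // {"actor":"gorn","commands":[]}
}
```

Whatever the real shape, the design point stands: the engine trusts the actor field, so the backend must overwrite it from its own mapping rather than validate the client's value.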
@@ -0,0 +1,143 @@
+package server
+
+import (
+	"errors"
+	"net/http"
+
+	"galaxy/backend/internal/auth"
+	"galaxy/backend/internal/server/handlers"
+	"galaxy/backend/internal/server/httperr"
+	"galaxy/backend/internal/server/middleware/userid"
+	"galaxy/backend/internal/telemetry"
+
+	"github.com/gin-gonic/gin"
+	"github.com/google/uuid"
+	"go.uber.org/zap"
+)
+
+// UserSessionsHandlers groups the user-facing session handlers under
+// `/api/v1/user/sessions/*`. Authenticated callers can list their own
+// active device sessions, revoke a specific one (logout from one
+// device), or revoke all sessions at once (logout everywhere). Every
+// mutation lands an audit row in `session_revocations` through the
+// auth service. nil *auth.Service falls back to the standard 501
+// placeholder.
+type UserSessionsHandlers struct {
+	svc *auth.Service
+	logger *zap.Logger
+}
+
+// NewUserSessionsHandlers constructs the handler set. svc may be nil
+// — in that case every handler returns 501 not_implemented.
+func NewUserSessionsHandlers(svc *auth.Service, logger *zap.Logger) *UserSessionsHandlers {
+	if logger == nil {
+		logger = zap.NewNop()
+	}
+	return &UserSessionsHandlers{svc: svc, logger: logger.Named("http.user.sessions")}
+}
+
+type userSessionsListResponse struct {
+	Items []deviceSessionPayload `json:"items"`
+}
+
+type userSessionsRevocationSummary struct {
+	UserID string `json:"user_id"`
+	RevokedCount int `json:"revoked_count"`
+}
+
+// List handles GET /api/v1/user/sessions.
+func (h *UserSessionsHandlers) List() gin.HandlerFunc {
+	if h.svc == nil {
+		return handlers.NotImplemented("userSessionsList")
+	}
+	return func(c *gin.Context) {
+		callerID, ok := userid.FromContext(c.Request.Context())
+		if !ok {
+			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "X-User-ID header is required")
+			return
+		}
+		sessions := h.svc.ListActiveByUser(c.Request.Context(), callerID)
+		items := make([]deviceSessionPayload, 0, len(sessions))
+		for _, s := range sessions {
+			items = append(items, deviceSessionToWire(s))
+		}
+		c.JSON(http.StatusOK, userSessionsListResponse{Items: items})
+	}
+}
+
+// Revoke handles POST /api/v1/user/sessions/{device_session_id}/revoke.
+// The target session must belong to the caller; otherwise the handler
+// returns 404 (using the same shape as a missing session) so callers
+// cannot probe foreign device_session_ids.
+func (h *UserSessionsHandlers) Revoke() gin.HandlerFunc {
+	if h.svc == nil {
+		return handlers.NotImplemented("userSessionsRevoke")
+	}
+	return func(c *gin.Context) {
+		callerID, ok := userid.FromContext(c.Request.Context())
+		if !ok {
+			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "X-User-ID header is required")
+			return
+		}
+		deviceSessionID, err := uuid.Parse(c.Param("device_session_id"))
+		if err != nil {
+			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "device_session_id must be a valid UUID")
+			return
+		}
+		// Ownership check via the cache — if the target session is not
+		// active and owned by the caller, surface a 404 in both
+		// branches so foreign sessions are not probeable.
+		cached, ok := h.svc.LookupSessionInCache(deviceSessionID)
+		if !ok || cached.UserID != callerID {
+			httperr.Abort(c, http.StatusNotFound, httperr.CodeNotFound, "device session not found")
+			return
+		}
+		ctx := c.Request.Context()
+		sess, err := h.svc.RevokeSession(ctx, deviceSessionID, auth.RevokeContext{
+			ActorKind: auth.ActorKindUserSelf,
+			ActorID: callerID.String(),
+		})
+		if err != nil {
+			if errors.Is(err, auth.ErrSessionNotFound) {
+				httperr.Abort(c, http.StatusNotFound, httperr.CodeNotFound, "device session not found")
+				return
+			}
+			h.logger.Error("user sessions revoke failed",
+				append(telemetry.TraceFieldsFromContext(ctx), zap.Error(err))...,
+			)
+			httperr.Abort(c, http.StatusInternalServerError, httperr.CodeInternalError, "service error")
+			return
+		}
+		c.JSON(http.StatusOK, deviceSessionToWire(sess))
+	}
+}
+
+// RevokeAll handles POST /api/v1/user/sessions/revoke-all.
+func (h *UserSessionsHandlers) RevokeAll() gin.HandlerFunc {
+	if h.svc == nil {
+		return handlers.NotImplemented("userSessionsRevokeAll")
+	}
+	return func(c *gin.Context) {
+		callerID, ok := userid.FromContext(c.Request.Context())
+		if !ok {
+			httperr.Abort(c, http.StatusBadRequest, httperr.CodeInvalidRequest, "X-User-ID header is required")
+			return
+		}
+		ctx := c.Request.Context()
+		revoked, err := h.svc.RevokeAllForUser(ctx, callerID, auth.RevokeContext{
+			ActorKind: auth.ActorKindUserSelf,
+			ActorID: callerID.String(),
+		})
+		if err != nil {
+			h.logger.Error("user sessions revoke-all failed",
+				append(telemetry.TraceFieldsFromContext(ctx), zap.Error(err))...,
+			)
+			httperr.Abort(c, http.StatusInternalServerError, httperr.CodeInternalError, "service error")
+			return
+		}
+		c.JSON(http.StatusOK, userSessionsRevocationSummary{
+			UserID: callerID.String(),
+			RevokedCount: len(revoked),
+		})
+	}
+}
@@ -68,6 +68,7 @@ type RouterDependencies struct {
	UserLobbyMy *UserLobbyMyHandlers
	UserLobbyRaceNames *UserLobbyRaceNamesHandlers
	UserGames *UserGamesHandlers
+	UserSessions *UserSessionsHandlers
	AdminAdminAccounts *AdminAdminAccountsHandlers
	AdminUsers *AdminUsersHandlers
	AdminGames *AdminGamesHandlers
@@ -162,6 +163,9 @@ func withDefaultHandlers(deps RouterDependencies) RouterDependencies {
 	if deps.UserGames == nil {
 		deps.UserGames = NewUserGamesHandlers(nil, nil, deps.Logger)
 	}
+	if deps.UserSessions == nil {
+		deps.UserSessions = NewUserSessionsHandlers(nil, deps.Logger)
+	}
 	if deps.AdminAdminAccounts == nil {
 		deps.AdminAdminAccounts = NewAdminAdminAccountsHandlers(nil, deps.Logger)
 	}
@@ -258,6 +262,11 @@ func registerUserRoutes(router *gin.Engine, instruments *metrics.Instruments, de
 	userGames.POST("/:game_id/commands", deps.UserGames.Commands())
 	userGames.POST("/:game_id/orders", deps.UserGames.Orders())
 	userGames.GET("/:game_id/reports/:turn", deps.UserGames.Report())
+
+	userSessions := group.Group("/sessions")
+	userSessions.GET("", deps.UserSessions.List())
+	userSessions.POST("/revoke-all", deps.UserSessions.RevokeAll())
+	userSessions.POST("/:device_session_id/revoke", deps.UserSessions.Revoke())
 }
 
 func registerAdminRoutes(router *gin.Engine, instruments *metrics.Instruments, deps RouterDependencies) {
@@ -323,9 +332,7 @@ func registerInternalRoutes(router *gin.Engine, instruments *metrics.Instruments
 	group.Use(metrics.Middleware(instruments, metrics.GroupInternal))
 
 	sessions := group.Group("/sessions")
-	sessions.POST("/users/:user_id/revoke-all", deps.InternalSessions.RevokeAllForUser())
 	sessions.GET("/:device_session_id", deps.InternalSessions.Get())
-	sessions.POST("/:device_session_id/revoke", deps.InternalSessions.Revoke())
 
 	users := group.Group("/users")
 	users.GET("/:user_id/account-internal", deps.InternalUsers.GetAccountInternal())
@@ -12,19 +12,35 @@ import (
 
 // ActorRef identifies the principal that produced an audit-bearing
 // mutation. The wire shape mirrors the OpenAPI ActorRef schema. Type is
-// a free-form string ("user", "admin", "system" in MVP); ID is opaque
-// (a user UUID, an admin username, or empty for system).
+// one of "user", "admin", "system" in MVP. ID carries a user UUID for
+// Type=="user", an admin username for Type=="admin", and is empty for
+// Type=="system".
 type ActorRef struct {
 	Type string
 	ID string
 }
 
-// Validate rejects empty actor types. Admin handlers always populate
-// Type; user-side mutations supply Type internally.
+// Validate rejects empty actor types and enforces the per-type shape
+// of ID: a user actor requires a UUID id, a system actor must have an
+// empty id. Other types pass through with no further check.
 func (a ActorRef) Validate() error {
-	if strings.TrimSpace(a.Type) == "" {
+	t := strings.TrimSpace(a.Type)
+	if t == "" {
 		return ErrInvalidActor
 	}
+	switch t {
+	case "user":
+		if strings.TrimSpace(a.ID) == "" {
+			return fmt.Errorf("%w: user actor requires id", ErrInvalidActor)
+		}
+		if _, err := uuid.Parse(a.ID); err != nil {
+			return fmt.Errorf("%w: user actor id must be a uuid: %v", ErrInvalidActor, err)
+		}
+	case "system":
+		if strings.TrimSpace(a.ID) != "" {
+			return fmt.Errorf("%w: system actor must have an empty id", ErrInvalidActor)
+		}
+	}
 	return nil
 }
 
@@ -34,10 +34,34 @@ type GeoCascade interface {
 // canonical implementation wraps `*auth.Service.RevokeAllForUser`. The
 // adapter lives in `cmd/backend/main.go` so `auth` does not export an
 // extra method shape.
+//
+// The actor argument carries audit context: who initiated the revoke
+// and why. The auth side persists it into `session_revocations`; user
+// callers populate it with a fixed kind matching the trigger.
 type SessionRevoker interface {
-	RevokeAllForUser(ctx context.Context, userID uuid.UUID) error
+	RevokeAllForUser(ctx context.Context, userID uuid.UUID, actor SessionRevokeActor) error
 }
+
+// SessionRevokeActor describes the principal behind a session revoke.
+// Kind is a closed vocabulary mirrored by `auth.ActorKind`; ID is the
+// stable identifier of the principal (a user UUID for self-driven
+// flows, an admin username for admin-driven flows). Reason is a
+// free-form note recorded in the audit row.
+type SessionRevokeActor struct {
+	Kind string
+	ID string
+	Reason string
+}
+
+// Closed Kind vocabulary. Mirror constants live in
+// `auth.ActorKind*`; the values must stay in sync because the auth
+// adapter forwards them verbatim.
+const (
+	SessionRevokeActorSoftDeleteUser = "soft_delete_user"
+	SessionRevokeActorSoftDeleteAdmin = "soft_delete_admin"
+	SessionRevokeActorAdminSanction = "admin_sanction"
+)
 
 // NewNoopLobbyCascade returns a LobbyCascade that logs every invocation
 // at info level and returns nil. The canonical lobby is wired in `cmd/backend/main.go`.
 // implementation; until then the no-op keeps the cascade orchestration
@@ -63,8 +63,7 @@ func (s *Service) ApplyLimit(ctx context.Context, input ApplyLimitInput) (Accoun
 		LimitCode: input.LimitCode,
 		Value: input.Value,
 		ReasonCode: input.ReasonCode,
-		ActorType: input.Actor.Type,
-		ActorID: input.Actor.ID,
+		Actor: input.Actor,
 		AppliedAt: now,
 		ExpiresAt: expiresAt,
 	}); err != nil {
@@ -81,8 +81,7 @@ func (s *Service) ApplySanction(ctx context.Context, input ApplySanctionInput) (
 		SanctionCode: input.SanctionCode,
 		Scope: input.Scope,
 		ReasonCode: input.ReasonCode,
-		ActorType: input.Actor.Type,
-		ActorID: input.Actor.ID,
+		Actor: input.Actor,
 		AppliedAt: now,
 		ExpiresAt: expiresAt,
 		FlipPermanent: flipPermanent,
@@ -94,7 +93,7 @@ func (s *Service) ApplySanction(ctx context.Context, input ApplySanctionInput) (
 	}
 
 	if flipPermanent {
-		if err := s.cascadePermanentBlock(ctx, input.UserID); err != nil {
+		if err := s.cascadePermanentBlock(ctx, input.UserID, input.Actor, input.ReasonCode); err != nil {
 			s.deps.Logger.Warn("permanent-block cascade returned error",
 				zap.String("user_id", input.UserID.String()),
 				zap.Error(err),
@@ -117,10 +116,15 @@ func validateSanctionCode(code string) error {
 // lobby on-user-blocked hook. Both calls are best-effort — they run
 // after the database commit and only join errors for the caller to
 // log.
-func (s *Service) cascadePermanentBlock(ctx context.Context, userID uuid.UUID) error {
+func (s *Service) cascadePermanentBlock(ctx context.Context, userID uuid.UUID, actor ActorRef, reasonCode string) error {
 	var joined error
 	if s.deps.SessionRevoker != nil {
-		if err := s.deps.SessionRevoker.RevokeAllForUser(ctx, userID); err != nil {
+		revokeActor := SessionRevokeActor{
+			Kind:   SessionRevokeActorAdminSanction,
+			ID:     actor.ID,
+			Reason: SanctionCodePermanentBlock + ":" + reasonCode,
+		}
+		if err := s.deps.SessionRevoker.RevokeAllForUser(ctx, userID, revokeActor); err != nil {
 			joined = errors.Join(joined, fmt.Errorf("session revoke: %w", err))
 		}
 	}
@@ -45,17 +45,26 @@ func (s *Service) SoftDelete(ctx context.Context, userID uuid.UUID, actor ActorR
 		zap.String("user_id", userID.String()),
 		zap.String("actor_type", actor.Type),
 	)
-	return s.runSoftDeleteCascade(ctx, userID)
+	return s.runSoftDeleteCascade(ctx, userID, actor)
 }

 // runSoftDeleteCascade fans the soft-delete signal out to dependent
 // modules in the documented order: auth → lobby → notification → geo.
 // Each call's error is joined; the loop continues even after a
 // failure so the remaining modules still get notified.
-func (s *Service) runSoftDeleteCascade(ctx context.Context, userID uuid.UUID) error {
+func (s *Service) runSoftDeleteCascade(ctx context.Context, userID uuid.UUID, actor ActorRef) error {
 	var joined error
 	if s.deps.SessionRevoker != nil {
-		if err := s.deps.SessionRevoker.RevokeAllForUser(ctx, userID); err != nil {
+		kind := SessionRevokeActorSoftDeleteAdmin
+		if actor.Type == "user" {
+			kind = SessionRevokeActorSoftDeleteUser
+		}
+		revokeActor := SessionRevokeActor{
+			Kind:   kind,
+			ID:     actor.ID,
+			Reason: "soft delete",
+		}
+		if err := s.deps.SessionRevoker.RevokeAllForUser(ctx, userID, revokeActor); err != nil {
 			joined = errors.Join(joined, fmt.Errorf("session revoke: %w", err))
 		}
 	}
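The actor-kind selection in `runSoftDeleteCascade` is a two-way branch: a self-service deletion is attributed to the user, everything else to an admin actor. Sketched in isolation (constant values copied from the diff; the helper name is illustrative):

```go
package main

import "fmt"

const (
	SessionRevokeActorSoftDeleteUser  = "soft_delete_user"
	SessionRevokeActorSoftDeleteAdmin = "soft_delete_admin"
)

// softDeleteRevokeKind mirrors the branch in runSoftDeleteCascade:
// only actor type "user" maps to the self-service kind; admin tooling
// and anything unrecognised fall back to the admin kind.
func softDeleteRevokeKind(actorType string) string {
	if actorType == "user" {
		return SessionRevokeActorSoftDeleteUser
	}
	return SessionRevokeActorSoftDeleteAdmin
}

func main() {
	fmt.Println(softDeleteRevokeKind("user"))  // soft_delete_user
	fmt.Println(softDeleteRevokeKind("admin")) // soft_delete_admin
}
```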
@@ -122,12 +122,14 @@ type orderTracker struct {
 	name      string
 	calls     int
 	lastUser  uuid.UUID
+	lastActor user.SessionRevokeActor
 	appendTo  func(string)
 }

-func (r *orderTracker) RevokeAllForUser(_ context.Context, userID uuid.UUID) error {
+func (r *orderTracker) RevokeAllForUser(_ context.Context, userID uuid.UUID, actor user.SessionRevokeActor) error {
 	r.calls++
 	r.lastUser = userID
+	r.lastActor = actor
 	if r.appendTo != nil && r.name != "" {
 		r.appendTo(r.name)
 	}
+108
-49
@@ -5,6 +5,7 @@ import (
 	"database/sql"
 	"errors"
 	"fmt"
+	"strings"
 	"time"

 	"galaxy/backend/internal/postgres/jet/backend/model"
@@ -72,8 +73,7 @@ type sanctionInsert struct {
 	SanctionCode  string
 	Scope         string
 	ReasonCode    string
-	ActorType     string
-	ActorID       string
+	Actor         ActorRef
 	AppliedAt     time.Time
 	ExpiresAt     *time.Time
 	FlipPermanent bool
@@ -85,8 +85,7 @@ type limitInsert struct {
 	LimitCode  string
 	Value      int32
 	ReasonCode string
-	ActorType  string
-	ActorID    string
+	Actor      ActorRef
 	AppliedAt  time.Time
 	ExpiresAt  *time.Time
 }
@@ -113,7 +112,8 @@ func accountColumns() postgres.ColumnList {
 func snapshotColumns() postgres.ColumnList {
 	s := table.EntitlementSnapshots
 	return postgres.ColumnList{
-		s.UserID, s.Tier, s.IsPaid, s.Source, s.ActorType, s.ActorID,
+		s.UserID, s.Tier, s.IsPaid, s.Source,
+		s.ActorType, s.ActorUserID, s.ActorUsername,
 		s.ReasonCode, s.StartsAt, s.EndsAt, s.MaxRegisteredRaceNames, s.UpdatedAt,
 	}
 }
@@ -275,7 +275,7 @@ func (s *Store) ListActiveSanctions(ctx context.Context, userID uuid.UUID) ([]Ac
 	r := table.SanctionRecords
 	stmt := postgres.SELECT(
 		r.SanctionCode, r.Scope, r.ReasonCode,
-		r.ActorType, r.ActorID,
+		r.ActorType, r.ActorUserID, r.ActorUsername,
 		r.AppliedAt, r.ExpiresAt,
 	).
 		FROM(a.INNER_JOIN(r, r.RecordID.EQ(a.RecordID))).
@@ -292,7 +292,7 @@ func (s *Store) ListActiveSanctions(ctx context.Context, userID uuid.UUID) ([]Ac
 			SanctionCode: row.SanctionCode,
 			Scope:        row.Scope,
 			ReasonCode:   row.ReasonCode,
-			Actor:        ActorRef{Type: row.ActorType, ID: derefString(row.ActorID)},
+			Actor:        actorFromColumns(row.ActorType, row.ActorUserID, row.ActorUsername),
 			AppliedAt:    row.AppliedAt,
 		}
 		if row.ExpiresAt != nil {
@@ -311,7 +311,7 @@ func (s *Store) ListActiveLimits(ctx context.Context, userID uuid.UUID) ([]Activ
 	r := table.LimitRecords
 	stmt := postgres.SELECT(
 		r.LimitCode, a.Value, r.ReasonCode,
-		r.ActorType, r.ActorID,
+		r.ActorType, r.ActorUserID, r.ActorUsername,
 		r.AppliedAt, r.ExpiresAt,
 	).
 		FROM(a.INNER_JOIN(r, r.RecordID.EQ(a.RecordID))).
@@ -331,7 +331,7 @@ func (s *Store) ListActiveLimits(ctx context.Context, userID uuid.UUID) ([]Activ
 			LimitCode:  row.LimitRecords.LimitCode,
 			Value:      row.LimitActive.Value,
 			ReasonCode: row.LimitRecords.ReasonCode,
-			Actor:      ActorRef{Type: row.LimitRecords.ActorType, ID: derefString(row.LimitRecords.ActorID)},
+			Actor:      actorFromColumns(row.LimitRecords.ActorType, row.LimitRecords.ActorUserID, row.LimitRecords.ActorUsername),
 			AppliedAt:  row.LimitRecords.AppliedAt,
 		}
 		if row.LimitRecords.ExpiresAt != nil {
@@ -395,9 +395,12 @@ func (s *Store) ApplyEntitlementTx(ctx context.Context, snap EntitlementSnapshot
 	if err := s.assertAccountLive(ctx, snap.UserID); err != nil {
 		return EntitlementSnapshot{}, err
 	}
-	err := withTx(ctx, s.db, func(tx *sql.Tx) error {
+	actorUserID, actorUsername, err := actorToColumnArgs(snap.Actor)
+	if err != nil {
+		return EntitlementSnapshot{}, err
+	}
+	err = withTx(ctx, s.db, func(tx *sql.Tx) error {
 		recordID := uuid.New()
-		actorID := nullableString(snap.Actor.ID)
 		var endsAt any
 		if snap.EndsAt != nil {
 			endsAt = *snap.EndsAt
@@ -409,20 +412,21 @@ func (s *Store) ApplyEntitlementTx(ctx context.Context, snap EntitlementSnapshot
 			table.EntitlementRecords.IsPaid,
 			table.EntitlementRecords.Source,
 			table.EntitlementRecords.ActorType,
-			table.EntitlementRecords.ActorID,
+			table.EntitlementRecords.ActorUserID,
+			table.EntitlementRecords.ActorUsername,
 			table.EntitlementRecords.ReasonCode,
 			table.EntitlementRecords.StartsAt,
 			table.EntitlementRecords.EndsAt,
 			table.EntitlementRecords.CreatedAt,
 		).VALUES(
 			recordID, snap.UserID, snap.Tier, snap.IsPaid, snap.Source,
-			snap.Actor.Type, actorID, snap.ReasonCode,
+			snap.Actor.Type, actorUserID, actorUsername, snap.ReasonCode,
 			snap.StartsAt, endsAt, snap.UpdatedAt,
 		)
 		if _, err := recordStmt.ExecContext(ctx, tx); err != nil {
 			return fmt.Errorf("insert entitlement record: %w", err)
 		}
-		return upsertSnapshotTx(ctx, tx, snap)
+		return upsertSnapshotTx(ctx, tx, snap, actorUserID, actorUsername)
 	})
 	if err != nil {
 		return EntitlementSnapshot{}, err
@@ -437,9 +441,12 @@ func (s *Store) ApplySanctionTx(ctx context.Context, input sanctionInsert) error
 	if err := s.assertAccountLive(ctx, input.UserID); err != nil {
 		return err
 	}
+	actorUserID, actorUsername, err := actorToColumnArgs(input.Actor)
+	if err != nil {
+		return err
+	}
 	return withTx(ctx, s.db, func(tx *sql.Tx) error {
 		recordID := uuid.New()
-		actorID := nullableString(input.ActorID)
 		var expiresAt any
 		if input.ExpiresAt != nil {
 			expiresAt = *input.ExpiresAt
@@ -451,12 +458,13 @@ func (s *Store) ApplySanctionTx(ctx context.Context, input sanctionInsert) error
 			table.SanctionRecords.Scope,
 			table.SanctionRecords.ReasonCode,
 			table.SanctionRecords.ActorType,
-			table.SanctionRecords.ActorID,
+			table.SanctionRecords.ActorUserID,
+			table.SanctionRecords.ActorUsername,
 			table.SanctionRecords.AppliedAt,
 			table.SanctionRecords.ExpiresAt,
 		).VALUES(
 			recordID, input.UserID, input.SanctionCode, input.Scope, input.ReasonCode,
-			input.ActorType, actorID, input.AppliedAt, expiresAt,
+			input.Actor.Type, actorUserID, actorUsername, input.AppliedAt, expiresAt,
 		)
 		if _, err := recordStmt.ExecContext(ctx, tx); err != nil {
 			return fmt.Errorf("insert sanction record: %w", err)
@@ -498,9 +506,12 @@ func (s *Store) ApplyLimitTx(ctx context.Context, input limitInsert) error {
 	if err := s.assertAccountLive(ctx, input.UserID); err != nil {
 		return err
 	}
+	actorUserID, actorUsername, err := actorToColumnArgs(input.Actor)
+	if err != nil {
+		return err
+	}
 	return withTx(ctx, s.db, func(tx *sql.Tx) error {
 		recordID := uuid.New()
-		actorID := nullableString(input.ActorID)
 		var expiresAt any
 		if input.ExpiresAt != nil {
 			expiresAt = *input.ExpiresAt
@@ -512,12 +523,13 @@ func (s *Store) ApplyLimitTx(ctx context.Context, input limitInsert) error {
 			table.LimitRecords.Value,
 			table.LimitRecords.ReasonCode,
 			table.LimitRecords.ActorType,
-			table.LimitRecords.ActorID,
+			table.LimitRecords.ActorUserID,
+			table.LimitRecords.ActorUsername,
 			table.LimitRecords.AppliedAt,
 			table.LimitRecords.ExpiresAt,
 		).VALUES(
 			recordID, input.UserID, input.LimitCode, input.Value, input.ReasonCode,
-			input.ActorType, actorID, input.AppliedAt, expiresAt,
+			input.Actor.Type, actorUserID, actorUsername, input.AppliedAt, expiresAt,
 		)
 		if _, err := recordStmt.ExecContext(ctx, tx); err != nil {
 			return fmt.Errorf("insert limit record: %w", err)
@@ -547,12 +559,16 @@
 // successful idempotent operation.
 func (s *Store) SoftDeleteAccount(ctx context.Context, userID uuid.UUID, actor ActorRef, now time.Time) (bool, error) {
 	a := table.Accounts
-	actorIDExpr := nullableStringExpr(actor.ID)
+	actorUserIDExpr, actorUsernameExpr, err := actorToColumnExprs(actor)
+	if err != nil {
+		return false, err
+	}
 	stmt := a.UPDATE().
 		SET(
 			a.DeletedAt.SET(postgres.TimestampzT(now)),
 			a.DeletedActorType.SET(postgres.String(actor.Type)),
-			a.DeletedActorID.SET(actorIDExpr),
+			a.DeletedActorUserID.SET(actorUserIDExpr),
+			a.DeletedActorUsername.SET(actorUsernameExpr),
 			a.UpdatedAt.SET(postgres.TimestampzT(now)),
 		).
 		WHERE(
@@ -593,18 +609,23 @@ func (s *Store) assertAccountLive(ctx context.Context, userID uuid.UUID) error {
 }

 func insertSnapshotTx(ctx context.Context, tx *sql.Tx, snap EntitlementSnapshot) error {
+	actorUserID, actorUsername, err := actorToColumnArgs(snap.Actor)
+	if err != nil {
+		return err
+	}
 	es := table.EntitlementSnapshots
-	actorID := nullableString(snap.Actor.ID)
 	var endsAt any
 	if snap.EndsAt != nil {
 		endsAt = *snap.EndsAt
 	}
 	stmt := es.INSERT(
-		es.UserID, es.Tier, es.IsPaid, es.Source, es.ActorType, es.ActorID,
+		es.UserID, es.Tier, es.IsPaid, es.Source,
+		es.ActorType, es.ActorUserID, es.ActorUsername,
 		es.ReasonCode, es.StartsAt, es.EndsAt,
 		es.MaxRegisteredRaceNames, es.UpdatedAt,
 	).VALUES(
-		snap.UserID, snap.Tier, snap.IsPaid, snap.Source, snap.Actor.Type, actorID,
+		snap.UserID, snap.Tier, snap.IsPaid, snap.Source,
+		snap.Actor.Type, actorUserID, actorUsername,
 		snap.ReasonCode, snap.StartsAt, endsAt, snap.MaxRegisteredRaceNames, snap.UpdatedAt,
 	)
 	if _, err := stmt.ExecContext(ctx, tx); err != nil {
@@ -613,19 +634,20 @@ func insertSnapshotTx(ctx context.Context, tx *sql.Tx, snap EntitlementSnapshot)
 	return nil
 }

-func upsertSnapshotTx(ctx context.Context, tx *sql.Tx, snap EntitlementSnapshot) error {
+func upsertSnapshotTx(ctx context.Context, tx *sql.Tx, snap EntitlementSnapshot, actorUserID, actorUsername any) error {
 	es := table.EntitlementSnapshots
-	actorID := nullableString(snap.Actor.ID)
 	var endsAt any
 	if snap.EndsAt != nil {
 		endsAt = *snap.EndsAt
 	}
 	stmt := es.INSERT(
-		es.UserID, es.Tier, es.IsPaid, es.Source, es.ActorType, es.ActorID,
+		es.UserID, es.Tier, es.IsPaid, es.Source,
+		es.ActorType, es.ActorUserID, es.ActorUsername,
 		es.ReasonCode, es.StartsAt, es.EndsAt,
 		es.MaxRegisteredRaceNames, es.UpdatedAt,
 	).VALUES(
-		snap.UserID, snap.Tier, snap.IsPaid, snap.Source, snap.Actor.Type, actorID,
+		snap.UserID, snap.Tier, snap.IsPaid, snap.Source,
+		snap.Actor.Type, actorUserID, actorUsername,
 		snap.ReasonCode, snap.StartsAt, endsAt, snap.MaxRegisteredRaceNames, snap.UpdatedAt,
 	).
 		ON_CONFLICT(es.UserID).
@@ -634,7 +656,8 @@ func upsertSnapshotTx(ctx context.Context, tx *sql.Tx, snap EntitlementSnapshot)
 			es.IsPaid.SET(es.EXCLUDED.IsPaid),
 			es.Source.SET(es.EXCLUDED.Source),
 			es.ActorType.SET(es.EXCLUDED.ActorType),
-			es.ActorID.SET(es.EXCLUDED.ActorID),
+			es.ActorUserID.SET(es.EXCLUDED.ActorUserID),
+			es.ActorUsername.SET(es.EXCLUDED.ActorUsername),
 			es.ReasonCode.SET(es.EXCLUDED.ReasonCode),
 			es.StartsAt.SET(es.EXCLUDED.StartsAt),
 			es.EndsAt.SET(es.EXCLUDED.EndsAt),
@@ -680,7 +703,7 @@ func modelToSnapshot(row model.EntitlementSnapshots) EntitlementSnapshot {
 		Tier:       row.Tier,
 		IsPaid:     row.IsPaid,
 		Source:     row.Source,
-		Actor:      ActorRef{Type: row.ActorType, ID: derefString(row.ActorID)},
+		Actor:      actorFromColumns(row.ActorType, row.ActorUserID, row.ActorUsername),
 		ReasonCode: row.ReasonCode,
 		StartsAt:   row.StartsAt,
 		MaxRegisteredRaceNames: row.MaxRegisteredRaceNames,
@@ -693,31 +716,67 @@ func modelToSnapshot(row model.EntitlementSnapshots) EntitlementSnapshot {
 	return out
 }

-// nullableString converts a Go string to the `any` form expected by jet
-// VALUES: an empty string becomes nil so the column receives NULL.
-func nullableString(v string) any {
-	if v == "" {
-		return nil
+// actorToColumnArgs converts an ActorRef into the (actor_user_id,
+// actor_username) values for jet INSERT VALUES. A nil-typed `any` lands
+// as SQL NULL through the database/sql driver. Type=="user" parses ID
+// as a UUID; Type=="admin" stores ID verbatim as the username;
+// everything else (system, unknown) writes both columns as NULL. An
+// empty ID is allowed for "user" so synthetic system events that label
+// themselves as "user" do not fail.
+func actorToColumnArgs(actor ActorRef) (any, any, error) {
+	switch strings.TrimSpace(actor.Type) {
+	case "user":
+		id := strings.TrimSpace(actor.ID)
+		if id == "" {
+			return nil, nil, nil
+		}
+		uid, err := uuid.Parse(id)
+		if err != nil {
+			return nil, nil, fmt.Errorf("user store: actor id %q is not a uuid: %w", actor.ID, err)
+		}
+		return uid, nil, nil
+	case "admin":
+		if strings.TrimSpace(actor.ID) == "" {
+			return nil, nil, nil
+		}
+		return nil, actor.ID, nil
+	default:
+		return nil, nil, nil
 	}
-	return v
 }

-// nullableStringExpr returns a typed jet expression: the empty string
-// produces NULL, otherwise a String literal. Used by UPDATE SET paths
-// where jet's SET wants a typed Expression rather than `any`.
-func nullableStringExpr(v string) postgres.StringExpression {
-	if v == "" {
-		return postgres.StringExp(postgres.NULL)
+// actorToColumnExprs is the typed-expression analogue of
+// actorToColumnArgs for the UPDATE SET sites. jet's generated bindings
+// type uuid columns as ColumnString (the dialect emits an explicit
+// CAST), so both returned expressions are StringExpression.
+func actorToColumnExprs(actor ActorRef) (postgres.StringExpression, postgres.StringExpression, error) {
+	uidArg, nameArg, err := actorToColumnArgs(actor)
+	if err != nil {
+		return nil, nil, err
 	}
-	return postgres.String(v)
+	uidExpr := postgres.StringExp(postgres.NULL)
+	if uid, ok := uidArg.(uuid.UUID); ok {
+		uidExpr = postgres.UUID(uid)
+	}
+	nameExpr := postgres.StringExp(postgres.NULL)
+	if name, ok := nameArg.(string); ok {
+		nameExpr = postgres.String(name)
+	}
+	return uidExpr, nameExpr, nil
 }

-// derefString returns the empty string when p is nil, otherwise *p.
-func derefString(p *string) string {
-	if p == nil {
-		return ""
+// actorFromColumns reconstructs an ActorRef from the (actor_type,
+// actor_user_id, actor_username) triple read from an audit row. The
+// non-nil column wins; both nil yields an empty ID.
+func actorFromColumns(actorType string, userID *uuid.UUID, username *string) ActorRef {
+	out := ActorRef{Type: actorType}
+	switch {
+	case userID != nil:
+		out.ID = userID.String()
+	case username != nil:
+		out.ID = *username
 	}
-	return *p
+	return out
 }

 // rowsAffectedOrNotFound returns ErrAccountNotFound when the UPDATE
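The mapping rules behind `actorToColumnArgs`/`actorFromColumns` are easy to exercise standalone. The sketch below mirrors the same decision table ("user" → `actor_user_id`, "admin" → `actor_username`, anything else → NULL in both columns); `ActorRef` is redeclared so the snippet compiles on its own, and `isUUID` is a simplified stand-in for `uuid.Parse` so the example has no external dependency:

```go
package main

import (
	"fmt"
	"strings"
)

type ActorRef struct {
	Type string
	ID   string
}

// isUUID is a simplified placeholder for uuid.Parse: it only checks
// the canonical 8-4-4-4-12 shape, which is enough to demonstrate the
// column-mapping rules.
func isUUID(s string) bool {
	parts := strings.Split(s, "-")
	want := []int{8, 4, 4, 4, 12}
	if len(parts) != len(want) {
		return false
	}
	for i, p := range parts {
		if len(p) != want[i] {
			return false
		}
	}
	return true
}

// actorColumns mirrors the decision table in actorToColumnArgs:
// "user" actors populate actor_user_id, "admin" actors populate
// actor_username, everything else (system, unknown) yields NULL/NULL.
func actorColumns(actor ActorRef) (userID, username any, err error) {
	switch strings.TrimSpace(actor.Type) {
	case "user":
		id := strings.TrimSpace(actor.ID)
		if id == "" {
			return nil, nil, nil
		}
		if !isUUID(id) {
			return nil, nil, fmt.Errorf("actor id %q is not a uuid", actor.ID)
		}
		return id, nil, nil
	case "admin":
		if strings.TrimSpace(actor.ID) == "" {
			return nil, nil, nil
		}
		return nil, actor.ID, nil
	default:
		return nil, nil, nil
	}
}

func main() {
	uid, name, _ := actorColumns(ActorRef{Type: "user", ID: "123e4567-e89b-12d3-a456-426614174000"})
	fmt.Println(uid, name) // 123e4567-e89b-12d3-a456-426614174000 <nil>
	_, _, err := actorColumns(ActorRef{Type: "user", ID: "not-a-uuid"})
	fmt.Println(err != nil) // true
}
```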
@@ -68,7 +68,7 @@ func startPostgres(t *testing.T) *sql.DB {
 	cfg.PrimaryDSN = scopedDSN
 	cfg.OperationTimeout = testOpTimeout

-	db, err := pgshared.OpenPrimary(ctx, cfg)
+	db, err := pgshared.OpenPrimary(ctx, cfg, backendpg.NoObservabilityOptions()...)
 	if err != nil {
 		t.Fatalf("open primary: %v", err)
 	}
@@ -510,11 +510,13 @@ func TestListAccountsExcludesSoftDeleted(t *testing.T) {
 type recordingRevoker struct {
 	calls     int
 	lastUser  uuid.UUID
+	lastActor user.SessionRevokeActor
 }

-func (r *recordingRevoker) RevokeAllForUser(_ context.Context, userID uuid.UUID) error {
+func (r *recordingRevoker) RevokeAllForUser(_ context.Context, userID uuid.UUID, actor user.SessionRevokeActor) error {
 	r.calls++
 	r.lastUser = userID
+	r.lastActor = actor
 	return nil
 }
+89
-42
@@ -1062,6 +1062,86 @@ paths:
           $ref: "#/components/responses/NotImplementedError"
         "500":
           $ref: "#/components/responses/InternalError"
+  /api/v1/user/sessions:
+    get:
+      tags: [User]
+      operationId: userSessionsList
+      summary: List the caller's active device sessions
+      security:
+        - UserHeader: []
+      parameters:
+        - $ref: "#/components/parameters/XUserID"
+      responses:
+        "200":
+          description: Caller's active device sessions.
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/UserSessionList"
+        "400":
+          $ref: "#/components/responses/InvalidRequestError"
+        "501":
+          $ref: "#/components/responses/NotImplementedError"
+        "500":
+          $ref: "#/components/responses/InternalError"
+  /api/v1/user/sessions/revoke-all:
+    post:
+      tags: [User]
+      operationId: userSessionsRevokeAll
+      summary: Revoke every device session belonging to the caller
+      description: |
+        Logout from every device. Subsequent authenticated requests on
+        any of the caller's sessions are rejected. Each revocation is
+        recorded in `session_revocations` with `actor_kind=user_self`.
+      security:
+        - UserHeader: []
+      parameters:
+        - $ref: "#/components/parameters/XUserID"
+      responses:
+        "200":
+          description: Caller's sessions revoked.
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/DeviceSessionRevocationSummary"
+        "400":
+          $ref: "#/components/responses/InvalidRequestError"
+        "501":
+          $ref: "#/components/responses/NotImplementedError"
+        "500":
+          $ref: "#/components/responses/InternalError"
+  /api/v1/user/sessions/{device_session_id}/revoke:
+    post:
+      tags: [User]
+      operationId: userSessionsRevoke
+      summary: Revoke one of the caller's device sessions
+      description: |
+        Logout from a single device. The target `device_session_id`
+        must belong to the caller; otherwise the endpoint returns
+        `404 not_found` (the same shape as a missing session) so the
+        endpoint cannot be used to probe foreign session ids. The
+        revocation is recorded in `session_revocations` with
+        `actor_kind=user_self`.
+      security:
+        - UserHeader: []
+      parameters:
+        - $ref: "#/components/parameters/XUserID"
+        - $ref: "#/components/parameters/DeviceSessionID"
+      responses:
+        "200":
+          description: Device session revoked.
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/DeviceSession"
+        "400":
+          $ref: "#/components/responses/InvalidRequestError"
+        "404":
+          $ref: "#/components/responses/NotFoundError"
+        "501":
+          $ref: "#/components/responses/NotImplementedError"
+        "500":
+          $ref: "#/components/responses/InternalError"
   /api/v1/admin/admin-accounts:
     get:
       tags: [Admin]
@@ -2013,48 +2093,6 @@ paths:
           $ref: "#/components/responses/NotImplementedError"
         "500":
           $ref: "#/components/responses/InternalError"
-  /api/v1/internal/sessions/{device_session_id}/revoke:
-    post:
-      tags: [Internal]
-      operationId: internalSessionsRevoke
-      summary: Revoke a device session (gateway-only)
-      security: []
-      parameters:
-        - $ref: "#/components/parameters/DeviceSessionID"
-      responses:
-        "200":
-          description: Session revoked.
-          content:
-            application/json:
-              schema:
-                $ref: "#/components/schemas/DeviceSession"
-        "404":
-          $ref: "#/components/responses/NotFoundError"
-        "501":
-          $ref: "#/components/responses/NotImplementedError"
-        "500":
-          $ref: "#/components/responses/InternalError"
-  /api/v1/internal/sessions/users/{user_id}/revoke-all:
-    post:
-      tags: [Internal]
-      operationId: internalSessionsRevokeAllForUser
-      summary: Revoke every device session belonging to a user
-      security: []
-      parameters:
-        - $ref: "#/components/parameters/UserID"
-      responses:
-        "200":
-          description: Sessions revoked.
-          content:
-            application/json:
-              schema:
-                $ref: "#/components/schemas/DeviceSessionRevocationSummary"
-        "404":
-          $ref: "#/components/responses/NotFoundError"
-        "501":
-          $ref: "#/components/responses/NotImplementedError"
-        "500":
-          $ref: "#/components/responses/InternalError"
   /api/v1/internal/users/{user_id}/account-internal:
     get:
       tags: [Internal]
@@ -3456,6 +3494,15 @@ components:
|
|||||||
format: uuid
|
format: uuid
|
||||||
revoked_count:
|
revoked_count:
|
||||||
type: integer
|
type: integer
|
||||||
|
UserSessionList:
|
||||||
|
type: object
|
||||||
|
additionalProperties: false
|
||||||
|
required: [items]
|
||||||
|
properties:
|
||||||
|
items:
|
||||||
|
type: array
|
||||||
|
items:
|
||||||
|
$ref: "#/components/schemas/DeviceSession"
|
||||||
responses:
|
responses:
|
||||||
NotImplementedError:
|
NotImplementedError:
|
||||||
description: Endpoint is documented but not implemented yet.
|
description: Endpoint is documented but not implemented yet.
|
||||||
@@ -0,0 +1,54 @@
+package push
+
+import "encoding/json"
+
+// Event is the typed contract for client events emitted onto the gRPC
+// push stream. Implementations carry their own serialiser; push.Service
+// invokes Marshal at publish time to obtain the bytes that go into
+// `pushv1.ClientEvent.Payload`.
+//
+// Notification dispatcher builds a typed FlatBuffers Event for every
+// catalog kind through `notification.buildClientPushEvent`, backed by
+// the per-kind helpers in `pkg/transcoder/notification.go`. JSONEvent
+// (below) remains the safety net for kinds that arrive without a
+// catalog schema.
+type Event interface {
+	// Kind returns the catalog kind of this event (`backend/README.md`
+	// §10). Empty kind is rejected at publish time.
+	Kind() string
+
+	// Marshal returns the bytes that travel inside
+	// `pushv1.ClientEvent.Payload`. Implementations are expected to use
+	// FlatBuffers (preferred) or any deterministic encoding the client
+	// can decode; the push transport treats the result as opaque
+	// payload bytes.
+	Marshal() ([]byte, error)
+}
+
+// JSONEvent is the safety-net Event implementation for kinds that
+// arrive without a catalog FlatBuffers schema. It serialises Payload
+// via encoding/json so a misconfigured producer cannot silently drop
+// events while a new kind is being added.
+//
+// New kinds must ship with a typed FlatBuffers schema in
+// `pkg/schema/fbs/notification.fbs` and a matching case in
+// `notification.buildClientPushEvent`; JSONEvent is not a canonical
+// shape, only a fallback.
+type JSONEvent struct {
+	// EventKind is the catalog kind returned by Kind().
+	EventKind string
+
+	// Payload is the JSON-serialisable map written by the producer.
+	Payload map[string]any
+}
+
+// Kind returns EventKind verbatim.
+func (e JSONEvent) Kind() string { return e.EventKind }
+
+// Marshal returns Payload encoded as JSON. The result is treated as
+// opaque bytes by the push transport.
+func (e JSONEvent) Marshal() ([]byte, error) {
+	return json.Marshal(e.Payload)
+}
+
+var _ Event = JSONEvent{}
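The new contract is small enough to exercise standalone. The sketch below mirrors the `Event`/`JSONEvent` shapes from the file above; `publish` is a hypothetical stand-in for `push.Service` (which is not reproduced here), and nothing beyond `encoding/json` is assumed:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Event mirrors the push.Event contract added above.
type Event interface {
	Kind() string
	Marshal() ([]byte, error)
}

// JSONEvent mirrors the safety-net implementation.
type JSONEvent struct {
	EventKind string
	Payload   map[string]any
}

func (e JSONEvent) Kind() string             { return e.EventKind }
func (e JSONEvent) Marshal() ([]byte, error) { return json.Marshal(e.Payload) }

// publish stands in for push.Service: it asks the event for its kind
// and its payload bytes instead of taking (kind, map) separately.
func publish(ev Event) (kind string, payload []byte, err error) {
	payload, err = ev.Marshal()
	return ev.Kind(), payload, err
}

func main() {
	kind, payload, err := publish(JSONEvent{
		EventKind: "lobby.invite.received",
		Payload:   map[string]any{"game_id": "g1"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(kind, string(payload)) // lobby.invite.received {"game_id":"g1"}
}
```

The point of the indirection is visible here: `publish` never inspects the payload shape, so swapping `JSONEvent` for a FlatBuffers-backed type changes nothing on the transport side.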
@@ -33,7 +33,7 @@ func TestPublishClientEventStampsCursorAndPayload(t *testing.T) {
 	userID := uuid.New()
 	devID := uuid.New()
 	payload := map[string]any{"game_id": "g1", "n": 7.0}
-	require.NoError(t, svc.PublishClientEvent(context.Background(), userID, &devID, "lobby.invite.received", payload, "route-1", "req-1", "trace-1"))
+	require.NoError(t, svc.PublishClientEvent(context.Background(), userID, &devID, JSONEvent{EventKind: "lobby.invite.received", Payload: payload}, "route-1", "req-1", "trace-1"))

 	events, stale := svc.ring.since(0, time.Now())
 	require.False(t, stale)
@@ -63,7 +63,7 @@ func TestPublishClientEventOmitsDeviceSessionWhenNil(t *testing.T) {
 	t.Cleanup(svc.Close)

 	userID := uuid.New()
-	require.NoError(t, svc.PublishClientEvent(context.Background(), userID, nil, "x", nil, "", "", ""))
+	require.NoError(t, svc.PublishClientEvent(context.Background(), userID, nil, JSONEvent{EventKind: "x"}, "", "", ""))

 	events, _ := svc.ring.since(0, time.Now())
 	require.Len(t, events, 1)
@@ -76,8 +76,8 @@ func TestPublishClientEventRequiresUserAndKind(t *testing.T) {
 	svc := newTestService(t)
 	t.Cleanup(svc.Close)

-	require.Error(t, svc.PublishClientEvent(context.Background(), uuid.Nil, nil, "k", nil, "", "", ""))
-	require.Error(t, svc.PublishClientEvent(context.Background(), uuid.New(), nil, " ", nil, "", "", ""))
+	require.Error(t, svc.PublishClientEvent(context.Background(), uuid.Nil, nil, JSONEvent{EventKind: "k"}, "", "", ""))
+	require.Error(t, svc.PublishClientEvent(context.Background(), uuid.New(), nil, JSONEvent{EventKind: " "}, "", "", ""))
 }

 func TestPublishSessionInvalidationStampsCursor(t *testing.T) {
@@ -123,7 +123,7 @@ func TestPublishCursorMonotonic(t *testing.T) {

 	userID := uuid.New()
 	for range 5 {
-		require.NoError(t, svc.PublishClientEvent(context.Background(), userID, nil, "k", nil, "", "", ""))
+		require.NoError(t, svc.PublishClientEvent(context.Background(), userID, nil, JSONEvent{EventKind: "k"}, "", "", ""))
 	}
 	events, _ := svc.ring.since(0, time.Now())
 	require.Len(t, events, 5)
@@ -137,7 +137,7 @@ func TestPublishOnClosedServiceIsNoop(t *testing.T) {

 	svc := newTestService(t)
 	svc.Close()
-	require.NoError(t, svc.PublishClientEvent(context.Background(), uuid.New(), nil, "k", nil, "", "", ""))
+	require.NoError(t, svc.PublishClientEvent(context.Background(), uuid.New(), nil, JSONEvent{EventKind: "k"}, "", "", ""))
 	events, _ := svc.ring.since(0, time.Now())
 	assert.Empty(t, events)
 }
@@ -150,7 +150,7 @@ var (
 )

 type pushClientEventPublisher interface {
-	PublishClientEvent(ctx context.Context, userID uuid.UUID, deviceSessionID *uuid.UUID, kind string, payload map[string]any, eventID, requestID, traceID string) error
+	PublishClientEvent(ctx context.Context, userID uuid.UUID, deviceSessionID *uuid.UUID, event Event, eventID, requestID, traceID string) error
 }

 type pushSessionInvalidationEmitter interface {
@@ -19,7 +19,6 @@ package push

 import (
 	"context"
-	"encoding/json"
 	"errors"
 	"fmt"
 	"strings"
@@ -131,23 +130,30 @@ func (s *Service) Close() {
 	}
 }

-// PublishClientEvent enqueues a ClientEvent for delivery. payload is
-// marshalled to JSON; deviceSessionID is optional. eventID, requestID
-// and traceID are correlation identifiers that gateway forwards
-// verbatim into the signed client envelope (typically the producing
-// route id, the originating client request id, and the trace id of the
-// span that produced the event); empty strings are forwarded
-// unchanged. The method satisfies notification.PushPublisher.
-func (s *Service) PublishClientEvent(_ context.Context, userID uuid.UUID, deviceSessionID *uuid.UUID, kind string, payload map[string]any, eventID, requestID, traceID string) error {
+// PublishClientEvent enqueues a ClientEvent for delivery. The typed
+// `event` carries both the catalog kind and the payload bytes;
+// push.Service invokes event.Marshal() at publish time so producers
+// stay decoupled from the wire encoding. deviceSessionID is optional.
+// eventID, requestID and traceID are correlation identifiers that
+// gateway forwards verbatim into the signed client envelope (typically
+// the producing route id, the originating client request id, and the
+// trace id of the span that produced the event); empty strings are
+// forwarded unchanged. The method satisfies
+// notification.PushPublisher.
+func (s *Service) PublishClientEvent(_ context.Context, userID uuid.UUID, deviceSessionID *uuid.UUID, event Event, eventID, requestID, traceID string) error {
+	if event == nil {
+		return errors.New("push.PublishClientEvent: event is required")
+	}
 	if userID == uuid.Nil {
 		return errors.New("push.PublishClientEvent: userID is required")
 	}
+	kind := event.Kind()
 	if strings.TrimSpace(kind) == "" {
-		return errors.New("push.PublishClientEvent: kind is required")
+		return errors.New("push.PublishClientEvent: event kind is required")
 	}
-	encoded, err := json.Marshal(payload)
+	encoded, err := event.Marshal()
 	if err != nil {
-		return fmt.Errorf("push.PublishClientEvent: marshal payload: %w", err)
+		return fmt.Errorf("push.PublishClientEvent: marshal event: %w", err)
 	}
 	ev := &pushv1.PushEvent{
 		Kind: &pushv1.PushEvent_ClientEvent{
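Because `PublishClientEvent` now only calls `Kind()` and `Marshal()`, a producer may choose any deterministic encoding the client can decode. The sketch below shows a hypothetical typed event; `turnReadyEvent` and its fixed 8-byte frame are invented for illustration (the project's real typed events are FlatBuffers-backed):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// turnReadyEvent is a hypothetical typed event: it owns its wire
// encoding, so the push service never inspects the payload shape.
type turnReadyEvent struct {
	GameSeq uint32
	Turn    uint32
}

// Kind returns a made-up catalog kind for the sketch.
func (turnReadyEvent) Kind() string { return "lobby.turn.ready" }

// Marshal emits a fixed 8-byte little-endian frame; any deterministic
// encoding is acceptable to the transport, which treats it as opaque.
func (e turnReadyEvent) Marshal() ([]byte, error) {
	buf := make([]byte, 8)
	binary.LittleEndian.PutUint32(buf[0:4], e.GameSeq)
	binary.LittleEndian.PutUint32(buf[4:8], e.Turn)
	return buf, nil
}

func main() {
	b, _ := turnReadyEvent{GameSeq: 7, Turn: 3}.Marshal()
	fmt.Println(len(b), b[0], b[4]) // 8 7 3
}
```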
@@ -87,7 +87,7 @@ func TestSubscribePushDeliversLiveEvents(t *testing.T) {
 	require.Eventually(t, func() bool { return svc.SubscriberCount() == 1 }, time.Second, 5*time.Millisecond)

 	userID := uuid.New()
-	require.NoError(t, svc.PublishClientEvent(context.Background(), userID, nil, "k", nil, "", "", ""))
+	require.NoError(t, svc.PublishClientEvent(context.Background(), userID, nil, JSONEvent{EventKind: "k"}, "", "", ""))

 	ev, err := recvOne(t, stream, time.Second)
 	require.NoError(t, err)
@@ -104,7 +104,7 @@ func TestSubscribePushReplaysPastEventsOnReconnect(t *testing.T) {

 	userID := uuid.New()
 	for range 3 {
-		require.NoError(t, svc.PublishClientEvent(context.Background(), userID, nil, "k", nil, "", "", ""))
+		require.NoError(t, svc.PublishClientEvent(context.Background(), userID, nil, JSONEvent{EventKind: "k"}, "", "", ""))
 	}

 	client, cleanup := startBufconnServer(t, svc)
@@ -129,7 +129,7 @@ func TestSubscribePushSkipsReplayWhenCursorStale(t *testing.T) {

 	userID := uuid.New()
 	for range 4 {
-		require.NoError(t, svc.PublishClientEvent(context.Background(), userID, nil, "k", nil, "", "", ""))
+		require.NoError(t, svc.PublishClientEvent(context.Background(), userID, nil, JSONEvent{EventKind: "k"}, "", "", ""))
 	}
 	// Ring capacity 2 means cursors 1 and 2 are evicted.

@@ -141,7 +141,7 @@ func TestSubscribePushSkipsReplayWhenCursorStale(t *testing.T) {
 	require.Eventually(t, func() bool { return svc.SubscriberCount() == 1 }, time.Second, 5*time.Millisecond)

 	// Stale cursor → no replay; live publish must arrive.
-	require.NoError(t, svc.PublishClientEvent(context.Background(), userID, nil, "k", nil, "", "", ""))
+	require.NoError(t, svc.PublishClientEvent(context.Background(), userID, nil, JSONEvent{EventKind: "k"}, "", "", ""))
 	ev, err := recvOne(t, stream, time.Second)
 	require.NoError(t, err)
 	assert.Equal(t, formatCursor(5), ev.Cursor)
@@ -173,7 +173,7 @@ func TestSubscribePushReplacesExistingClientID(t *testing.T) {
 	require.Eventually(t, func() bool { return svc.SubscriberCount() == 1 }, time.Second, 5*time.Millisecond)

 	// Live publish reaches the replacement.
-	require.NoError(t, svc.PublishClientEvent(context.Background(), uuid.New(), nil, "k", nil, "", "", ""))
+	require.NoError(t, svc.PublishClientEvent(context.Background(), uuid.New(), nil, JSONEvent{EventKind: "k"}, "", "", ""))
 	ev, err := recvOne(t, stream2, time.Second)
 	require.NoError(t, err)
 	assert.NotEmpty(t, ev.Cursor)
@@ -96,9 +96,14 @@ the user surface. Request bodies are never trusted to convey identity.

 The admin surface is on the same listener as the user surface; isolation
 between admin and the public is provided by Basic Auth and by the trust
-boundary described in §15. The internal surface is part of that same trust
-boundary: it is network-locked rather than auth-locked, and only `gateway`
-is expected to call it.
+boundary described in [§15](#15-transport-security-model-gateway-boundary).
+The internal surface is part of that same trust boundary: it is
+network-locked rather than auth-locked, and only `gateway` is expected
+to call it. The internal surface is read-only with respect to device
+sessions — it carries the per-request lookup gateway needs to verify a
+signed envelope, and nothing else. Revocations are user-driven (through
+the user surface) or admin-driven (through in-process calls inside
+backend); see [`FUNCTIONAL.md` §1.5](FUNCTIONAL.md#15-revocation).

 JSON bodies use `snake_case` field names everywhere on the wire. Backend,
 gateway, and the shared `pkg/model` schemas are aligned on this convention;
@@ -126,10 +131,14 @@ because they cross domain boundaries:
   fresh email always lands a unique account without a client-supplied
   name. The column is never overwritten on subsequent sign-ins.
 - **`accounts.permanent_block`** is the canonical permanent-block flag.
-  When set, `auth.SendEmailCode` rejects with `400 invalid_request`; every
-  other path — including a `blocked_emails` row, a throttled email, a
-  fresh email — returns the opaque `{challenge_id}` shape so the endpoint
-  cannot be used to enumerate accounts.
+  When set, both `auth.SendEmailCode` and `auth.ConfirmEmailCode` reject
+  with `400 invalid_request`. The send-time check stops fresh challenges
+  for already-blocked addresses; the confirm-time check (re-run after
+  the verification code matches) catches admin blocks applied in the
+  window between send and confirm. Every other branch on send — including
+  a `blocked_emails` row, a throttled email, a fresh email — returns the
+  opaque `{challenge_id}` shape so the endpoint cannot be used to
+  enumerate accounts.
 - **Public lobby games are admin-created** through
   `POST /api/v1/admin/games`. The user-facing
   `POST /api/v1/user/lobby/games` always emits `private` games owned by
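The send-time versus confirm-time ordering described above can be sketched as follows; the function and field names here are hypothetical illustrations, not the real `auth` package API:

```go
package main

import (
	"errors"
	"fmt"
)

var errInvalidRequest = errors.New("400 invalid_request")

// account is a minimal stand-in for the accounts row.
type account struct{ permanentBlock bool }

// confirmEmailCode sketches the confirm-time ordering: the code is
// verified first, and the permanent-block flag is re-read afterwards,
// so a block applied between send and confirm still rejects.
func confirmEmailCode(codeOK bool, lookup func() account) error {
	if !codeOK {
		return errInvalidRequest
	}
	if lookup().permanentBlock { // re-run after the code matches
		return errInvalidRequest
	}
	return nil
}

func main() {
	blockedMeanwhile := func() account { return account{permanentBlock: true} }
	fmt.Println(confirmEmailCode(true, blockedMeanwhile)) // 400 invalid_request
}
```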
@@ -141,7 +150,7 @@ because they cross domain boundaries:
 | -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | `backend/internal/config` | Environment-variable loader and validator. |
 | `backend/internal/server` | gin engine, listeners, route groups, shared middleware (request id, panic recovery, metrics, tracing). |
-| `backend/internal/auth` | Email-code challenges, device sessions, Ed25519 client public keys, send/confirm flows, revoke. Internal session lookup endpoint for gateway. |
+| `backend/internal/auth` | Email-code challenges, device sessions, Ed25519 client public keys, send/confirm, user-driven revoke (single + revoke-all), admin-driven revoke (sanctions, soft-delete, in-process), durable revocation audit in `session_revocations`, internal session lookup endpoint for gateway. |
 | `backend/internal/user` | User accounts, settings (`preferred_language`, `time_zone`, `declared_country`), entitlements, sanctions, limits, soft delete with in-process cascade. |
 | `backend/internal/lobby` | Games, applications, invites, memberships, enrollment state machine, turn schedule, Race Name Directory. |
 | `backend/internal/runtime` | Engine version registry, container lifecycle, turn scheduler, `(user_id ↔ race_name ↔ engine_player_uuid)` mapping per game, runtime snapshot publication into `lobby`. |
@@ -180,7 +189,7 @@ because they cross domain boundaries:
   `notification_dead_letters`. Cross-domain references
   (`memberships.user_id`, `games.owner_user_id`, etc.) are kept as
   opaque `uuid` columns because each domain runs its own cleanup
-  through the in-process cascade described in §7. Adding a database
+  through the in-process cascade described in [§7](#7-in-process-async-patterns). Adding a database
   cascade would either duplicate that work or hide it behind opaque
   triggers.
 - `created_at`, `updated_at`, `deleted_at` are always `timestamptz`. UTC
@@ -192,6 +201,27 @@ because they cross domain boundaries:
 - Worker pickup uses `SELECT ... FOR UPDATE SKIP LOCKED` ordered by
   `next_attempt_at`. This pattern serves the mail outbox, retry-able
   runtime jobs, and any future deferred work.
+- `session_revocations` is the append-only audit trail of every device
+  session revocation, keyed by `revocation_id` (uuid) with
+  `device_session_id`, `user_id`, `actor_kind`, the actor pair
+  `actor_user_id uuid` + `actor_username text` (exactly one is
+  non-NULL per row, enforced by a CHECK constraint), `reason`, and
+  `revoked_at`. The row is inserted in the same transaction that
+  flips `device_sessions.status` to `'revoked'`, so a successful
+  revoke always leaves a matching audit row.
+
+  The two-column actor pair is the canonical shape used by every
+  audit-bearing table — `accounts.deleted_actor_*`,
+  `entitlement_records`, `entitlement_snapshots`,
+  `sanction_records.actor_*` + `removed_by_*`, and
+  `limit_records.actor_*` + `removed_by_*` follow the same convention.
+  `actor_kind` (or `actor_type` on the user-domain tables) values are
+  `user`, `admin`, `system`. The Go layer hides the split behind
+  `user.ActorRef{Type, ID string}`: `Type=="user"` requires `ID` to
+  be a UUID, `Type=="admin"` stores `ID` as the operator username
+  (passed to `actor_username`), and `Type=="system"` requires an
+  empty `ID`. See `backend/internal/user/store.go`
+  (`actorToColumnArgs`/`actorFromColumns`) for the SQL boundary.
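As a rough illustration of that convention, the sketch below maps an `ActorRef` onto the two-column pair. `actorToColumns` is a hypothetical stand-in for the real `actorToColumnArgs`, and the NULL/CHECK handling (including how `system` rows satisfy the constraint) is simplified:

```go
package main

import "fmt"

// ActorRef mirrors the shape described above.
type ActorRef struct {
	Type string // "user", "admin", or "system"
	ID   string
}

// actorToColumns returns the (actor_user_id, actor_username) pair:
// at most one is non-nil, matching the audit tables' convention.
func actorToColumns(a ActorRef) (userID, username *string, err error) {
	switch a.Type {
	case "user":
		if a.ID == "" {
			return nil, nil, fmt.Errorf("user actor requires a UUID id")
		}
		return &a.ID, nil, nil // ID goes to actor_user_id
	case "admin":
		if a.ID == "" {
			return nil, nil, fmt.Errorf("admin actor requires a username")
		}
		return nil, &a.ID, nil // ID goes to actor_username
	case "system":
		if a.ID != "" {
			return nil, nil, fmt.Errorf("system actor must have empty id")
		}
		return nil, nil, nil
	}
	return nil, nil, fmt.Errorf("unknown actor type %q", a.Type)
}

func main() {
	uid, uname, err := actorToColumns(ActorRef{Type: "admin", ID: "ops-anna"})
	fmt.Println(uid == nil, *uname, err) // true ops-anna <nil>
}
```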
 ## 6. In-Memory Cache

@@ -222,6 +252,19 @@ read finishes; the `/readyz` probe waits on every cache being ready
 before reporting ready, so the listener never serves a request that
 would spuriously miss because of a cold cache.

+`gateway` carries a separate, smaller cache: the in-memory session
+cache fronting every authenticated request. It is a bounded LRU
+(default 50 000 entries) with a safety-net TTL (default 10 minutes).
+Misses trigger a single synchronous REST call to backend's
+`/api/v1/internal/sessions/{id}` lookup; hits answer the hot path
+directly. The cache is kept consistent through the
+`session_invalidation` push events backend emits over `Push.SubscribePush`:
+each event flips the cached entry to `revoked` so subsequent
+authenticated requests bound to that session are rejected at the
+edge without another backend round-trip. The TTL covers the case of a
+missed event (cursor aged out, gateway restart) by forcing a refresh
+at most once per window.
 ## 7. In-Process Async Patterns

 Async work is implemented with goroutines and channels. There is no Redis
@@ -269,7 +312,11 @@ There are two channels between `gateway` and `backend`.
 **Sync REST (gateway → backend).** Every authenticated user request and
 every public auth request goes over plain HTTP/JSON. The gateway sends
 `X-User-ID` (when authenticated) and forwards the verified payload. The
-backend never re-derives user identity from the body.
+backend never re-derives user identity from the body. The session
+lookup hits backend's `/api/v1/internal/sessions/{id}` only on a
+cache miss in the gateway-side LRU described in [§6](#6-in-memory-cache); backend updates
+`device_sessions.last_seen_at` on every successful lookup so admin
+operators can observe when each session was last resolved at the edge.

 **gRPC stream (gateway ⇄ backend).** Backend exposes a single RPC
 `SubscribePush(GatewaySubscribeRequest) returns (stream PushEvent)`. The
@@ -311,6 +358,16 @@ containers. The contract is the engine OpenAPI document; backend uses the
 existing typed DTOs in `pkg/model/{order,report,rest}` and a hand-written
 `net/http` client in `backend/internal/engineclient`.

+Authenticated client traffic for in-game operations crosses three
+serialisation boundaries: signed-gRPC FlatBuffers (client ↔ gateway),
+JSON over REST (gateway ↔ backend), and JSON over REST again
+(backend ↔ engine). Gateway owns the FB ↔ JSON transcoding for the
+three message types `user.games.command`, `user.games.order`,
+`user.games.report` (FB schemas in `pkg/schema/fbs/{order,report}`,
+encoders in `pkg/transcoder`). Backend never touches FlatBuffers and
+never re-interprets the JSON beyond rebinding the actor field from
+the runtime player mapping (clients never carry a trusted actor).
+
 Container state is owned by `backend/internal/runtime`:

 - `runtime_records` is the persistent map from `game_id` to current
@@ -350,7 +407,7 @@ The geo concern is intentionally minimal.
 - Source IP for both flows is read from the leftmost `X-Forwarded-For`
   entry, falling back to `RemoteAddr` when the header is absent.
   Backend trusts the value because the network segment between gateway
-  and backend is the trust boundary (§15–§16); duplicating the edge
+  and backend is the trust boundary ([§15](#15-transport-security-model-gateway-boundary)–[§16](#16-security-boundaries-summary)); duplicating the edge
   rate-limit / spoof checks here would be double work.
 - Email addresses are never written to logs verbatim. Backend modules
   emit a per-process HMAC-SHA256-truncated `email_hash` instead, so
@@ -370,7 +427,10 @@ Email is delivered through a Postgres-backed outbox.
   marks the delivery sent or schedules `next_attempt_at` for retry with
   exponential backoff and jitter.
 - After the configured maximum retry budget the delivery moves to
-  `mail_dead_letters` and emits an admin-facing notification intent.
+  `mail_dead_letters`. The `mail.dead_lettered` notification kind is
+  reserved in the catalog but has no producer wired up yet, so no
+  admin notification is emitted today — operator visibility comes
+  from a log line and the `/api/v1/admin/mail/dead-letters` listing.
 - On startup the worker drains everything pending. There is no separate
   recovery procedure: starting backend is sufficient.
 - Operators can re-enqueue from `mail_dead_letters` through the admin
@@ -381,12 +441,14 @@ committed; SMTP completion is asynchronous to the auth request.

 ## 12. Notification Pipeline

-Notifications are an in-process pipeline. The catalog of intent types
-(turn ready, generation failed, finished, lobby invite/application/
-membership state changes, race name registered/expired, runtime image
-pull failed, runtime container start failed, runtime start config invalid,
-geo review recommended) is documented in `backend/README.md` and may be
-trimmed if a type is unused.
+Notifications are an in-process pipeline. The closed catalog is
+defined in `backend/internal/notification/catalog.go` and currently
+covers 13 kinds: 10 lobby kinds (invite received/revoked, application
+submitted/approved/rejected, membership removed/blocked, race name
+registered/pending/expired) and 3 admin-recipient runtime kinds
+(image pull failed, container start failed, start config invalid).
+Per-kind delivery channels (push, email, or both) and the admin-vs-
+per-user recipient routing live in the same file.

 For every intent, `notification.Submit` performs:

@@ -394,8 +456,18 @@ For every intent, `notification.Submit` performs:
 2. Recipient resolution against `user`.
 3. Per-recipient route materialisation in `notification_routes` —
    `push`, `email`, or both — based on the type-specific policy table.
-4. Push routes are emitted onto the gRPC `client_event` channel for the
-   recipient.
+4. Push routes are emitted onto the gRPC `client_event` channel for
+   the recipient. The dispatcher passes the producer's payload map
+   through `notification.buildClientPushEvent(kind, payload)`, which
+   maps the kind to the matching FlatBuffers schema in
+   `pkg/schema/fbs/notification.fbs` (one table per catalog kind, 1:1
+   with the camel-case form of the kind plus the `Event` suffix) and
+   returns a typed `push.Event`. `push.Service` invokes `Marshal` and
+   places the bytes into `pushv1.ClientEvent.Payload`. An unknown
+   kind falls back to `push.JSONEvent` so a misconfigured producer
+   does not silently drop frames; new kinds must ship with a typed
+   FB schema and a matching `buildClientPushEvent` case rather than
+   relying on the fallback.
 5. Email routes are inserted into `mail_deliveries` with the matching
    template id.
 6. Malformed intents go to `notification_malformed_intents` and never
@@ -615,9 +687,9 @@ business validation and authorisation.
| Concern | Enforced by | Notes |
| -------------------------------------------------------- | ------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| Public TLS termination, pinning | gateway | Native clients pin SPKI. |
| Request signature, payload hash, freshness, anti-replay | gateway | See [§15](#15-transport-security-model-gateway-boundary). |
| Session lookup | backend (sync REST) + gateway in-memory LRU | gateway-side LRU with TTL safety net ([§6](#6-in-memory-cache)) hits backend's `/api/v1/internal/sessions/{id}` only on miss; no Redis projection. |
| Session revocation propagation | backend → gateway | `session_invalidation` over the gRPC push stream flips the gateway-side cache entry to revoked and closes any active push stream. |
| Authorisation, ownership, state transitions | backend | `X-User-ID` is the sole identity input on the user surface. |
| Edge rate limiting | gateway | Backend has no rate-limit responsibility in MVP. |
| Admin authentication | backend | Basic Auth against `admin_accounts`. |
+1036
File diff suppressed because it is too large
+333
@@ -0,0 +1,333 @@
# Testing

Test strategy and runbook for the [Galaxy Game](ARCHITECTURE.md)
platform. The platform ships three executables — `gateway`,
`backend`, `game` (the engine container) — plus the shared `pkg/*`
libraries. This document defines the layering of tests, the
mandatory minimum coverage per executable, the integration runbook,
and the principles every test must follow.
## Layers

1. **Service tests** verify a single executable in isolation. They
   live next to the implementation as `*_test.go` files and use only
   in-process or testcontainers-managed dependencies. The package
   either runs entirely in process or boots a single Postgres
   testcontainer per test.
2. **Inter-service integration tests** verify one cross-process seam
   between two real executables (most often `gateway ↔ backend`,
   sometimes `backend ↔ game`). They live in
   [`galaxy/integration/`](../integration/) and drive the platform
   from outside the trust boundary.
3. **Full system tests** are a small, focused subset of the
   integration suite that walks an entire user-facing flow from the
   client edge through every component the flow touches. They live
   in the same `integration/` module and reuse the same fixtures.

Service tests are the cheapest and the broadest; integration tests
are slower and narrower; full-system tests are the slowest and the
narrowest. The pyramid stays in this order — never replace a service
test with a system test.
## Global rules

- Every executable owns the service tests for its packages. Adding a
  new package without `_test.go` files is a review block.
- Every cross-process seam must have at least one passing
  inter-service test before the seam is wired in production.
- Async flows (mail outbox, notification routes, runtime workers,
  push gRPC) get tests for both the success path and the retry /
  dead-letter path, plus a duplicate-event safety check.
- Sync flows get tests for the happy path, validation failure,
  timeout propagation, and dependency unavailability.
- Every external or trusted-internal API must have contract tests
  alongside behaviour tests. `backend/internal/server/contract_test.go`
  is the reference; gateway runs the same shape against
  `gateway/openapi.yaml`.
- The integration suite must keep running on a developer machine
  with Docker available. The only acceptable `t.Skip` is
  `testenv.RequireDocker` (no daemon at all). Any failure deeper
  than that — `tcpostgres.Run`, network create, image build, schema
  migration — fails the test loudly with `t.Fatal`. The historical
  bug we fixed (silent skips on reaper failures masking 27
  integration tests as "ok") came from treating an environment
  break as a skip.
## Service-specific coverage

### `galaxy/gateway`

Service tests live under `gateway/internal/`:

- Public REST routing, error projection, and OpenAPI contract
  validation.
- Authenticated gRPC envelope verification (`grpcapi.Server`):
  signature, payload hash, freshness window, anti-replay reservation,
  unknown / revoked sessions.
- Session cache (`session.BackendCache`) — the only implementation
  in the codebase, a thin wrapper around the `backendclient.RESTClient`
  per-request lookup.
- Response signing for unary responses and stream events
  (`authn.ResponseSigner`).
- Push hub (`push.Hub`) and push fan-out (`push_fanout.go`).
- Replay store (`replay.RedisStore`) reservation semantics.
- Anti-abuse rate limits per IP / session / user / message class.
### `galaxy/backend`

Service tests live under `backend/internal/`:

- Startup wiring: `app.App` lifecycle, telemetry runtime, Postgres
  pool, embedded migrations.
- OpenAPI contract test (`internal/server/contract_test.go`):
  validates every documented operation against the live gin engine.
- Domain unit + e2e tests per package (`auth`, `user`, `admin`,
  `lobby`, `runtime`, `mail`, `notification`, `geo`, `push`).
  E2E tests (`*_e2e_test.go`) spin up a Postgres testcontainer.
- Mail outbox: pickup with `SELECT FOR UPDATE SKIP LOCKED`, retry
  with backoff plus jitter, dead-letter past `MAX_ATTEMPTS`,
  resend semantics (`pending|retrying|dead_lettered` → re-armed,
  `sent` → 409).
- Notification: idempotent `Submit`, route materialisation, push +
  email fan-out, `OnUserDeleted` cascade. Coverage of every catalog
  kind in `buildClientPushEvent` lives in
  `internal/notification/events_test.go`.
- Lobby: state-machine transitions, RND canonicalisation, sweeper.
- Runtime: per-game mutex serialisation, worker pool, scheduler,
  reconciler, force-next-turn skip flag.
- Admin: bcrypt cost 12, idempotent bootstrap, write-through cache,
  409 Conflict on duplicate username, last-used timestamp.
- Geo: counter increment on every authenticated request,
  declared-country write at registration, fail-open semantics.
### `galaxy/game`

The engine has its own service tests under `game/`:

- OpenAPI contract test (`game/openapi_contract_test.go`).
- Engine lifecycle (init, status, turn, banish, command, order,
  report) implemented by the engine package suites.
## Integration runbook

### Entry points

```bash
make -C integration preclean         # idempotent leftover cleanup
make -C integration integration      # preclean + serial test run
make -C integration integration-step # preclean + one-test-at-a-time
```

`integration` runs every test in the module sequentially
(`-p=1 -parallel=1`) — the recommended default on a slow or shared
Docker daemon. `integration-step` runs them one at a time with a
fresh preclean before each test and stops on the first failure;
useful for isolating a flake, or for building up to a full pass
without later tests burying the failing one's output.
### Why preclean matters

`preclean` keys off labels and removes:

- Containers labelled `org.testcontainers=true` (every container the
  testcontainers-go library brings up — backend, gateway, game,
  postgres, redis, mailpit, ryuk).
- Containers labelled `galaxy.backend=1` — engine instances spawned
  by backend's runtime adapter directly on the host Docker daemon
  (see `backend/internal/dockerclient/types.go`).
- Networks labelled `org.testcontainers=true`.
- Locally-built images labelled `galaxy.test.kind=integration-image`
  — the `galaxy/{backend,gateway,game}:integration` builds produced
  by `integration/testenv/images.go`. Pulled service images
  (`postgres:16-alpine`, `redis:7-alpine`, `axllent/mailpit`,
  `testcontainers/ryuk`) are **not** touched, so the cache stays
  warm.
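Mechanically, the label-driven cleanup amounts to a few `docker` filter invocations. This is an illustrative sketch only; the real target in `integration/Makefile` is the source of truth:

```makefile
# Illustrative preclean shape: remove by label, ignore "nothing to
# remove" errors (leading "-") so the target stays idempotent.
preclean:
	-docker rm -f $$(docker ps -aq --filter "label=org.testcontainers=true") 2>/dev/null
	-docker rm -f $$(docker ps -aq --filter "label=galaxy.backend=1") 2>/dev/null
	-docker network prune -f --filter "label=org.testcontainers=true"
	-docker rmi -f $$(docker images -q --filter "label=galaxy.test.kind=integration-image") 2>/dev/null
```

Note that only labelled resources are matched; pulled base images carry none of these labels and therefore survive every preclean.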
### Ryuk reaper

The integration runners disable the testcontainers Ryuk reaper:

```makefile
export TESTCONTAINERS_RYUK_DISABLED = true
```

This is an environment workaround, not a principled choice — Ryuk
does not start cleanly on the colima setup we use locally, and
`preclean` covers the same job via labels. Re-enable Ryuk by
exporting `TESTCONTAINERS_RYUK_DISABLED=false` (or unsetting the
variable) before invoking the make target if your environment runs
Ryuk fine.
### Cold runs

The first run after a clean checkout (or after `preclean`) rebuilds
three images: `galaxy/backend:integration`,
`galaxy/gateway:integration`, `galaxy/game:integration`. Cold cost
is ~30 s per image. Subsequent runs reuse the build cache; `preclean`
removes the tagged images themselves, but BuildKit cache mounts
survive, so re-builds are fast.
## Integration test coverage

Mandatory inter-service coverage in `integration/`:

- **Gateway ↔ Backend (public auth)**:
  `auth_flow_test.go` — register + confirm with a mailpit-captured
  code; `declared_country` populated; idempotent re-confirm.
- **Gateway ↔ Backend (authenticated user surface)**:
  `user_account_test.go`, `user_profile_update_test.go`,
  `user_settings_update_test.go` — signed envelope, FlatBuffers
  payload, response signature verification, BCP 47 / IANA validation.
- **Gateway ↔ Backend (signature, payload hash, freshness)**:
  `gateway_edge_test.go` — body-too-large, bad signature,
  `payload_hash` mismatch, stale timestamp, unknown session,
  unsupported `protocol_version`.
- **Gateway ↔ Backend (push)**:
  `notification_flow_test.go`, `session_revoke_test.go` — push
  delivery to a SubscribeEvents stream and immediate stream close
  on revoke.
- **Gateway ↔ Backend (anti-replay)**:
  `anti_replay_test.go` — duplicate `request_id` rejected.
- **Backend ↔ Postgres** is exercised by every backend e2e test
  through testcontainers; integration tests do not duplicate it.
- **Backend ↔ SMTP**:
  `mail_flow_test.go` — login-code email captured by mailpit; admin
  list reaches `sent`; resend on `sent` returns 409.
- **Backend ↔ Game engine**:
  `runtime_lifecycle_test.go`, `engine_command_proxy_test.go` —
  start container, healthz green, command, force-next-turn, finish,
  race name promotion.
- **Admin surface (REST)**:
  `admin_flow_test.go`, `admin_global_games_view_test.go`,
  `admin_engine_versions_test.go`, `admin_user_sanction_test.go` —
  bootstrap + CRUD; visibility split between user and admin queries;
  engine-version registry CRUD; permanent block cascade.
- **Lobby flow without engine**:
  `lobby_flow_test.go` — owner-creates-private-game →
  open-enrollment → invite → redeem → memberships listing.
- **Soft delete cascade**:
  `soft_delete_test.go` — `POST /api/v1/user/account/delete`
  cascades through auth/lobby/notification/geo; gateway rejects
  subsequent calls.
- **Geo counters**:
  `geo_counter_increments_test.go` — multiple authenticated
  requests with different `X-Forwarded-For` values increment the
  user's per-country counter rows.

Full-system flows beyond the inter-service set are intentionally
limited; pick scenarios that exercise the longest vertical slice
the platform supports today.
## Principles

### Service tests

- **Postgres testcontainers must pin no-op observability providers.**
  Tests that call `pgshared.OpenPrimary(ctx, cfg)` from
  `galaxy/postgres` pass `backendpg.NoObservabilityOptions()...` so
  `otelsql` cannot fall through to the global tracer/meter providers.
  Without this, an unset OTEL endpoint in the developer environment
  can stall the test on a background exporter handshake.

  See `backend/internal/postgres/testopts.go` for the helper and
  `backend/internal/{auth,user,admin,lobby,mail,notification,runtime,geo,postgres}/`
  test files for the established call sites.

- **A bootstrap failure is fatal, not a skip.** A test that needs a
  testcontainer must fail loudly when the container fails to come
  up. `t.Skipf` is reserved for `testenv.RequireDocker` (no daemon
  at all); anything past that — `tcpostgres.Run`, `db.Ping`, schema
  migration — uses `t.Fatalf`.
### Integration tests

- **Bootstrap is per-test.** Each test calls `testenv.Bootstrap(t)`
  to spin up a dedicated Postgres, Redis, mailpit, backend, and
  gateway. Cross-test contamination is impossible.

- **Tests do not call `t.Parallel`.** Docker resource pressure makes
  parallel bootstraps flaky on commodity hardware.

- **Anti-abuse limits are loosened by `testenv/gateway.go`.** The
  bulk-scenario default lifts every gateway rate-limit class
  (`public_auth`, identity-bucket per-email, IP/session/user/
  message-class) to 10 000 req/window with a 1 000 burst. Negative-
  path edge tests in `gateway_edge_test.go` tighten specific limits
  per test to observe the protection firing.

- **Image labels are intentional.** `integration/testenv/images.go`
  stamps every locally-built image with
  `galaxy.test.kind=integration-image`; `preclean` keys off this
  label. Do not strip it from new image builds added to the test
  harness.
## Test file ownership matrix

| Suite | Where | Boots | Runs how |
|--------------------------------------------|-------------------|----------------------------------------------------------------------|-------------------------------------------|
| `backend/internal/<pkg>/...` unit | per package | one Postgres testcontainer per test | `go test ./internal/<pkg>/` |
| `backend/push` | `backend/push/` | nothing | `go test ./push/` |
| `gateway/internal/<pkg>/...` unit | per package | mostly nothing; a few use a Redis testcontainer | `go test ./internal/<pkg>/` |
| `pkg/transcoder`, `pkg/postgres` unit | per package | nothing / one testcontainer per test | `go test ./...` from the package |
| `integration/` | `integration/` | postgres + redis + mailpit + backend + gateway (+ optional game) | `make -C integration integration` |
## Adding a new test

1. Decide the layer: service, inter-service, or system. A backend
   change usually lands as service tests plus an integration test
   for any new cross-process behaviour.
2. Reuse `testenv` fixtures rather than rolling your own container
   orchestration.
3. Follow the bootstrap-per-test pattern; do not share a global
   stack across tests.
4. Make the test deterministic: explicit timeouts (no
   `time.Sleep`), `t.Logf` instead of `fmt.Println`, no
   `t.Parallel()` in `integration/`.
5. Service test that hits Postgres: copy the `startPostgres(t)`
   helper from one of the existing packages (e.g.
   `backend/internal/auth/auth_e2e_test.go`) and pass
   `backendpg.NoObservabilityOptions()...` to `pgshared.OpenPrimary`.
6. Integration test: add the file under `integration/`, call
   `testenv.Bootstrap(t)`, and use the typed clients exposed by
   `testenv` rather than reaching for raw HTTP. New scenarios that
   need bespoke gateway env should pass `Extra` through
   `BootstrapOptions` so the loosened defaults stay shared.
7. Any test that brings up its own Docker container (rare — most go
   through `testenv`) must label the container so `preclean` can
   find it on the next run.
## Day-to-day execution

- Run `go test ./<service>/...` for the service you are touching;
  this is fast (Postgres testcontainers add ~3–5 s per package that
  uses them).
- Run `make -C integration integration` before opening a PR that
  touches a cross-process seam. Cold runs build three Docker images
  (`galaxy/backend:integration`, `galaxy/gateway:integration`,
  `galaxy/game:integration`) — budget ~3 min for the cold path,
  ~75 s for the warm path.
- Use `make -C integration integration-step` when a flake or a real
  regression needs a per-test isolation pass.
- CI runs every layer on every push. Integration tests rely on a
  reachable Docker daemon; a missing daemon yields a clear skip from
  `testenv.RequireDocker`, and anything past that is a hard failure.
## Out-of-scope (legacy architecture)

The previous nine-service architecture defined components that no
longer exist as distinct services. Their behaviour either lives
inside `backend` (and is therefore covered by backend service or
integration tests) or has been removed:

- *Auth/Session Service*, *User Service*, *Notification Service*,
  *Mail Service*, *Game Lobby Service*, *Runtime Manager*,
  *Game Master*, *Admin Service* — consolidated into
  `backend/internal/*`. Inter-service seams between these former
  services are now in-process function calls; they are exercised by
  backend service tests, not by integration tests.
- *Geo Profile Service* (suspicious-multi-country detection,
  review-recommended state, session blocking through geo) — not
  implemented. The geo concern is intentionally minimal (see
  `ARCHITECTURE.md §10`) and the test plan does not assert on
  features we do not ship.
- *Billing Service* — not implemented; no tests required until it
  appears.
+1
-1
@@ -8,7 +8,7 @@ batched player command execution.
## References

- [`openapi.yaml`](openapi.yaml) — REST contract.
- [`../docs/ARCHITECTURE.md`](../docs/ARCHITECTURE.md) — system architecture.
- [`../rtmanager/README.md`](../rtmanager/README.md) — Runtime Manager owns
  container lifecycle for this binary.
+50
-62
@@ -346,6 +346,12 @@ The current direct `Gateway -> User` self-service boundary uses that pattern:
- `user.account.get`
- `user.profile.update`
- `user.settings.update`
- `user.sessions.list`
- `user.sessions.revoke`
- `user.sessions.revoke_all`
- `user.games.command`
- `user.games.order`
- `user.games.report`
- external payloads and responses:
  - FlatBuffers
- internal downstream transport:
@@ -479,20 +485,25 @@ payload only: `user_id`, optional `device_session_id`, `event_type`,
gateway derives `timestamp_ms`, recomputes `payload_hash`, signs the event,
and only then forwards it to the matching `SubscribeEvents` streams.

Notification-owned user-facing payloads use
`pkg/schema/fbs/notification.fbs`. Each catalog kind has a 1:1
FlatBuffers table named with the camel-case form of the kind plus the
`Event` suffix. The closed v1 vocabulary is exactly the 13 kinds
defined in `backend/internal/notification/catalog.go`:

- `lobby.invite.received`
- `lobby.invite.revoked`
- `lobby.application.submitted`
- `lobby.application.approved`
- `lobby.application.rejected`
- `lobby.membership.removed`
- `lobby.membership.blocked`
- `lobby.race_name.registered`
- `lobby.race_name.pending`
- `lobby.race_name.expired`
- `runtime.image_pull_failed` (admin recipient)
- `runtime.container_start_failed` (admin recipient)
- `runtime.start_config_invalid` (admin recipient)

`lobby.application.submitted` is published toward `Gateway` only for the
private-game owner flow. The public-game variant is email-only.
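The camel-case naming rule can be illustrated with a small sketch. This is only the naming convention; the real mapping is an explicit switch in backend's `buildClientPushEvent`, and `eventTableName` is a hypothetical helper:

```go
package main

import (
	"fmt"
	"strings"
)

// eventTableName derives the FlatBuffers table name for a catalog
// kind: capitalise every dot/underscore-separated word and append
// the "Event" suffix.
func eventTableName(kind string) string {
	var b strings.Builder
	words := strings.FieldsFunc(kind, func(r rune) bool {
		return r == '.' || r == '_'
	})
	for _, w := range words {
		b.WriteString(strings.ToUpper(w[:1]))
		b.WriteString(w[1:])
	}
	b.WriteString("Event")
	return b.String()
}

func main() {
	fmt.Println(eventTableName("lobby.invite.received"))      // LobbyInviteReceivedEvent
	fmt.Println(eventTableName("lobby.race_name.registered")) // LobbyRaceNameRegisteredEvent
	fmt.Println(eventTableName("runtime.image_pull_failed"))  // RuntimeImagePullFailedEvent
}
```

Because the vocabulary is closed, a generated or hand-written switch over the 13 kinds (with the JSON fallback for anything unknown) is safer in production code than string manipulation like this.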
@@ -589,68 +600,45 @@ Expected session fields available to the gateway:
### Session Cache

`SessionCache` is the in-memory LRU + TTL store fronting every
authenticated request. It serves the hot path for:

- session existence checks;
- `device_session_id → user_id`;
- access to the base64-encoded raw Ed25519 client public key used for
  signature verification;
- active vs revoked status checks.

Implementation: a bounded LRU map (default 50 000 entries) wrapped by a
safety-net TTL (default 10 minutes). On miss the cache calls
`/api/v1/internal/sessions/{id}` against backend and seeds the entry.
`session_invalidation` push frames flip the cached entry's status to
`revoked` so subsequent authenticated requests are rejected at the edge
without another backend round-trip. The TTL covers the case of a missed
event (cursor aged out, gateway restart) by forcing a fresh backend
lookup at most once per window.

The cache is process-local and unsynchronised across gateway instances.
The MVP ships a single gateway instance (see
`docs/ARCHITECTURE.md §18`); multi-instance scale-out is a later step
that may revisit the topology.

Configuration:

- `GATEWAY_SESSION_CACHE_MAX_ENTRIES` with default `50000`
- `GATEWAY_SESSION_CACHE_TTL` with default `10m`

Redis is used by the gateway only for the authenticated Replay Store
(see below). The shared client is opened via `pkg/redisconn` against
`GATEWAY_REDIS_MASTER_ADDR` and `GATEWAY_REDIS_PASSWORD`; optional
tuning lives under `GATEWAY_REDIS_REPLICA_ADDRS`, `GATEWAY_REDIS_DB`,
and `GATEWAY_REDIS_OPERATION_TIMEOUT` (all documented in
`docs/redis-config.md`).

> Removed: the previous Redis-backed session-cache projection and its
> environment variables (`GATEWAY_SESSION_CACHE_REDIS_*`,
> `GATEWAY_REDIS_TLS_ENABLED`, `GATEWAY_REDIS_USERNAME`).
> `pkg/redisconn.LoadFromEnv` rejects the deprecated names at startup.
### Backend Client
@@ -4,7 +4,7 @@
// `galaxy/integration/testenv`) can reuse the canonical signing
// input builders and the response/event verifiers without having to
// duplicate the wire contract documented in
// `../../docs/ARCHITECTURE.md` §15.
package authn

import (
@@ -153,7 +153,11 @@ func newAuthenticatedGRPCDependencies(ctx context.Context, cfg config.Config, lo
	)
}

sessionCache, err := session.NewMemoryCache(backend.REST(), session.MemoryCacheOptions{
	MaxEntries: cfg.SessionCache.MaxEntries,
	TTL:        cfg.SessionCache.TTL,
	Logger:     logger,
})
if err != nil {
	return grpcapi.ServerDependencies{}, nil, nil, errors.Join(
		fmt.Errorf("build authenticated grpc dependencies: %w", err),
@@ -171,20 +175,27 @@ func newAuthenticatedGRPCDependencies(ctx context.Context, cfg config.Config, lo
pushHub := push.NewHubWithObserver(0, telemetry.NewPushObserver(telemetryRuntime))

// Composite invalidator: every session_invalidation event flips the
// cached record to revoked AND closes any active push subscription.
invalidator := &cacheAndHubInvalidator{cache: sessionCache, hub: pushHub}
dispatcher := events.NewDispatcher(pushHub, invalidator, logger, telemetryRuntime)
pushClient := backend.Push().
	WithLogger(logger).
	WithHandler(dispatcher)

userRoutes := backendclient.UserRoutes(backend.REST())
lobbyRoutes := backendclient.LobbyRoutes(backend.REST())
gameRoutes := backendclient.GameRoutes(backend.REST())
allRoutes := make(map[string]downstream.Client, len(userRoutes)+len(lobbyRoutes)+len(gameRoutes))
for k, v := range userRoutes {
	allRoutes[k] = v
}
for k, v := range lobbyRoutes {
	allRoutes[k] = v
}
for k, v := range gameRoutes {
	allRoutes[k] = v
}

cleanup := func() error {
	return closeRedisClient()
@@ -202,6 +213,40 @@ func newAuthenticatedGRPCDependencies(ctx context.Context, cfg config.Config, lo
 	}, []app.Component{pushClient}, cleanup, nil
 }
 
+// cacheAndHubInvalidator fans every session-invalidation push frame
+// out to both the session cache (so subsequent Lookups see the
+// session as revoked without a backend round-trip) and the push hub
+// (so any active SubscribeEvents stream bound to the session is
+// closed immediately). The shape matches `events.SessionInvalidator`.
+type cacheAndHubInvalidator struct {
+	cache session.Cache
+	hub *push.Hub
+}
+
+func (c *cacheAndHubInvalidator) RevokeDeviceSession(deviceSessionID string) {
+	if c == nil {
+		return
+	}
+	if c.cache != nil {
+		c.cache.MarkRevoked(deviceSessionID)
+	}
+	if c.hub != nil {
+		c.hub.RevokeDeviceSession(deviceSessionID)
+	}
+}
+
+func (c *cacheAndHubInvalidator) RevokeAllForUser(userID string) {
+	if c == nil {
+		return
+	}
+	if c.cache != nil {
+		c.cache.MarkAllRevokedForUser(userID)
+	}
+	if c.hub != nil {
+		c.hub.RevokeAllForUser(userID)
+	}
+}
+
 // authServiceAdapter adapts backendclient.RESTClient to the
 // restapi.AuthServiceClient interface so the public REST handlers can stay
 // unchanged. The two surfaces share the same JSON wire shape; only the Go
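The composite invalidator added above fans one revocation event out to two sinks. A standalone sketch of the same pattern, with hypothetical `fakeCache`/`fakeHub` stand-ins for the real `session.Cache` and `push.Hub`:

```go
package main

import "fmt"

// Toy stand-ins recording only the calls the invalidator fans out to.
type fakeCache struct{ revoked []string }

func (f *fakeCache) MarkRevoked(id string) { f.revoked = append(f.revoked, id) }

type fakeHub struct{ closed []string }

func (f *fakeHub) RevokeDeviceSession(id string) { f.closed = append(f.closed, id) }

// compositeInvalidator mirrors the fan-out shape in the hunk above:
// one session_invalidation event updates the cache AND closes streams.
type compositeInvalidator struct {
	cache *fakeCache
	hub   *fakeHub
}

func (c *compositeInvalidator) RevokeDeviceSession(id string) {
	if c == nil {
		return // nil receiver is a no-op, like the original
	}
	if c.cache != nil {
		c.cache.MarkRevoked(id)
	}
	if c.hub != nil {
		c.hub.RevokeDeviceSession(id)
	}
}

func main() {
	cache := &fakeCache{}
	hub := &fakeHub{}
	inv := &compositeInvalidator{cache: cache, hub: hub}
	inv.RevokeDeviceSession("sess-1")
	fmt.Println(len(cache.revoked), len(hub.closed))
}
```

The point of the nil checks is that either sink can be absent in tests without changing the dispatcher wiring.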
@@ -0,0 +1,170 @@
+package backendclient
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/url"
+	"strings"
+
+	"galaxy/gateway/internal/downstream"
+	ordermodel "galaxy/model/order"
+	reportmodel "galaxy/model/report"
+	gamerest "galaxy/model/rest"
+	"galaxy/transcoder"
+
+	"github.com/google/uuid"
+)
+
+// ExecuteGameCommand routes one authenticated `user.games.*` command
+// into backend's `/api/v1/user/games/{game_id}/*` endpoints. Command
+// and order requests transcode the typed FB-payload into the JSON
+// shape the engine expects (a `gamerest.Command` with empty actor —
+// backend rebinds the actor from the runtime player mapping). Report
+// requests transcode the response Report from JSON back to FB.
+func (c *RESTClient) ExecuteGameCommand(ctx context.Context, command downstream.AuthenticatedCommand) (downstream.UnaryResult, error) {
+	if c == nil || c.httpClient == nil {
+		return downstream.UnaryResult{}, errors.New("backendclient: execute game command: nil client")
+	}
+	if ctx == nil {
+		return downstream.UnaryResult{}, errors.New("backendclient: execute game command: nil context")
+	}
+	if err := ctx.Err(); err != nil {
+		return downstream.UnaryResult{}, err
+	}
+	if strings.TrimSpace(command.UserID) == "" {
+		return downstream.UnaryResult{}, errors.New("backendclient: execute game command: user_id must not be empty")
+	}
+
+	switch command.MessageType {
+	case ordermodel.MessageTypeUserGamesCommand:
+		req, err := transcoder.PayloadToUserGamesCommand(command.PayloadBytes)
+		if err != nil {
+			return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute game command %q: %w", command.MessageType, err)
+		}
+		return c.executeUserGamesCommand(ctx, command.UserID, req)
+	case ordermodel.MessageTypeUserGamesOrder:
+		req, err := transcoder.PayloadToUserGamesOrder(command.PayloadBytes)
+		if err != nil {
+			return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute game command %q: %w", command.MessageType, err)
+		}
+		return c.executeUserGamesOrder(ctx, command.UserID, req)
+	case reportmodel.MessageTypeUserGamesReport:
+		req, err := transcoder.PayloadToGameReportRequest(command.PayloadBytes)
+		if err != nil {
+			return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute game command %q: %w", command.MessageType, err)
+		}
+		return c.executeUserGamesReport(ctx, command.UserID, req)
+	default:
+		return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute game command: unsupported message type %q", command.MessageType)
+	}
+}
+
+func (c *RESTClient) executeUserGamesCommand(ctx context.Context, userID string, req *ordermodel.UserGamesCommand) (downstream.UnaryResult, error) {
+	if req.GameID == uuid.Nil {
+		return downstream.UnaryResult{}, errors.New("execute user.games.command: game_id must not be empty")
+	}
+	body, err := buildEngineCommandBody(req.Commands)
+	if err != nil {
+		return downstream.UnaryResult{}, fmt.Errorf("execute user.games.command: %w", err)
+	}
+	target := c.baseURL + "/api/v1/user/games/" + url.PathEscape(req.GameID.String()) + "/commands"
+	respBody, status, err := c.do(ctx, http.MethodPost, target, userID, body)
+	if err != nil {
+		return downstream.UnaryResult{}, fmt.Errorf("execute user.games.command: %w", err)
+	}
+	return projectUserGamesAckResponse(status, respBody, transcoder.EmptyUserGamesCommandResponsePayload)
+}
+
+func (c *RESTClient) executeUserGamesOrder(ctx context.Context, userID string, req *ordermodel.UserGamesOrder) (downstream.UnaryResult, error) {
+	if req.GameID == uuid.Nil {
+		return downstream.UnaryResult{}, errors.New("execute user.games.order: game_id must not be empty")
+	}
+	body, err := buildEngineCommandBody(req.Commands)
+	if err != nil {
+		return downstream.UnaryResult{}, fmt.Errorf("execute user.games.order: %w", err)
+	}
+	target := c.baseURL + "/api/v1/user/games/" + url.PathEscape(req.GameID.String()) + "/orders"
+	respBody, status, err := c.do(ctx, http.MethodPost, target, userID, body)
+	if err != nil {
+		return downstream.UnaryResult{}, fmt.Errorf("execute user.games.order: %w", err)
+	}
+	return projectUserGamesAckResponse(status, respBody, transcoder.EmptyUserGamesOrderResponsePayload)
+}
+
+func (c *RESTClient) executeUserGamesReport(ctx context.Context, userID string, req *reportmodel.GameReportRequest) (downstream.UnaryResult, error) {
+	if req.GameID == uuid.Nil {
+		return downstream.UnaryResult{}, errors.New("execute user.games.report: game_id must not be empty")
+	}
+	target := fmt.Sprintf("%s/api/v1/user/games/%s/reports/%d", c.baseURL, url.PathEscape(req.GameID.String()), req.Turn)
+	respBody, status, err := c.do(ctx, http.MethodGet, target, userID, nil)
+	if err != nil {
+		return downstream.UnaryResult{}, fmt.Errorf("execute user.games.report: %w", err)
+	}
+	return projectUserGamesReportResponse(status, respBody)
+}
+
+// buildEngineCommandBody serialises a slice of typed commands into the
+// JSON shape expected by backend's command/order handlers (a
+// `gamerest.Command` with the actor field left empty — backend rebinds
+// it from the runtime player mapping before forwarding to the engine).
+func buildEngineCommandBody(commands []ordermodel.DecodableCommand) (gamerest.Command, error) {
+	raw := make([]json.RawMessage, len(commands))
+	for i, cmd := range commands {
+		encoded, err := json.Marshal(cmd)
+		if err != nil {
+			return gamerest.Command{}, fmt.Errorf("encode command %d: %w", i, err)
+		}
+		raw[i] = encoded
+	}
+	return gamerest.Command{Actor: "", Commands: raw}, nil
+}
+
+// projectUserGamesAckResponse turns a backend response for command /
+// order routes into a UnaryResult. Engine returns 204 on success, so
+// any 2xx status is treated as ok and answered with the empty typed
+// FB envelope produced by ackBuilder.
+func projectUserGamesAckResponse(statusCode int, payload []byte, ackBuilder func() []byte) (downstream.UnaryResult, error) {
+	switch {
+	case statusCode >= 200 && statusCode < 300:
+		return downstream.UnaryResult{
+			ResultCode: userCommandResultCodeOK,
+			PayloadBytes: ackBuilder(),
+		}, nil
+	case statusCode == http.StatusServiceUnavailable:
+		return downstream.UnaryResult{}, downstream.ErrDownstreamUnavailable
+	case statusCode >= 400 && statusCode <= 599:
+		return projectUserBackendError(statusCode, payload)
+	default:
+		return downstream.UnaryResult{}, fmt.Errorf("unexpected HTTP status %d", statusCode)
+	}
+}
+
+// projectUserGamesReportResponse decodes the engine's Report JSON
+// payload (forwarded verbatim by backend) and re-encodes it as a
+// FlatBuffers Report for the signed-gRPC client.
+func projectUserGamesReportResponse(statusCode int, payload []byte) (downstream.UnaryResult, error) {
+	switch {
+	case statusCode == http.StatusOK:
+		var report reportmodel.Report
+		if err := json.Unmarshal(payload, &report); err != nil {
+			return downstream.UnaryResult{}, fmt.Errorf("decode engine report: %w", err)
+		}
+		encoded, err := transcoder.ReportToPayload(&report)
+		if err != nil {
+			return downstream.UnaryResult{}, fmt.Errorf("encode report payload: %w", err)
+		}
+		return downstream.UnaryResult{
+			ResultCode: userCommandResultCodeOK,
+			PayloadBytes: encoded,
+		}, nil
+	case statusCode == http.StatusServiceUnavailable:
+		return downstream.UnaryResult{}, downstream.ErrDownstreamUnavailable
+	case statusCode >= 400 && statusCode <= 599:
+		return projectUserBackendError(statusCode, payload)
+	default:
+		return downstream.UnaryResult{}, fmt.Errorf("unexpected HTTP status %d", statusCode)
+	}
+}
@@ -106,7 +106,10 @@ func TestPushClientDeliversClientEventsAndAdvancesCursor(t *testing.T) {
 	require.Eventually(t, func() bool { return svc.Service.SubscriberCount() == 1 }, time.Second, 10*time.Millisecond)
 
 	userID := uuid.New()
-	require.NoError(t, svc.Service.PublishClientEvent(context.Background(), userID, nil, "lobby.invite.received", map[string]any{"x": 1.0}, "evt-1", "req-1", "trace-1"))
+	require.NoError(t, svc.Service.PublishClientEvent(context.Background(), userID, nil, backendpush.JSONEvent{
+		EventKind: "lobby.invite.received",
+		Payload: map[string]any{"x": 1.0},
+	}, "evt-1", "req-1", "trace-1"))
 
 	select {
 	case got := <-out:
@@ -98,45 +98,6 @@ func (c *RESTClient) LookupSession(ctx context.Context, deviceSessionID string)
 	}
 }
 
-// RevokeSession asks backend to revoke a single device session by id.
-func (c *RESTClient) RevokeSession(ctx context.Context, deviceSessionID string) error {
-	if strings.TrimSpace(deviceSessionID) == "" {
-		return errors.New("backendclient: revoke session: device_session_id must not be empty")
-	}
-	target := c.baseURL + "/api/v1/internal/sessions/" + url.PathEscape(deviceSessionID) + "/revoke"
-	_, status, err := c.do(ctx, http.MethodPost, target, "", nil)
-	if err != nil {
-		return fmt.Errorf("backendclient: revoke session: %w", err)
-	}
-	if status == http.StatusOK || status == http.StatusNoContent {
-		return nil
-	}
-	if status == http.StatusNotFound {
-		return errSessionNotFound()
-	}
-	return fmt.Errorf("backendclient: revoke session: unexpected HTTP status %d", status)
-}
-
-// RevokeAllSessionsForUser asks backend to revoke every active device
-// session belonging to userID.
-func (c *RESTClient) RevokeAllSessionsForUser(ctx context.Context, userID string) error {
-	if strings.TrimSpace(userID) == "" {
-		return errors.New("backendclient: revoke-all sessions: user_id must not be empty")
-	}
-	target := c.baseURL + "/api/v1/internal/sessions/users/" + url.PathEscape(userID) + "/revoke-all"
-	_, status, err := c.do(ctx, http.MethodPost, target, "", nil)
-	if err != nil {
-		return fmt.Errorf("backendclient: revoke-all sessions: %w", err)
-	}
-	if status == http.StatusOK || status == http.StatusNoContent {
-		return nil
-	}
-	if status == http.StatusNotFound {
-		return errSessionNotFound()
-	}
-	return fmt.Errorf("backendclient: revoke-all sessions: unexpected HTTP status %d", status)
-}
-
 // do executes a JSON request and reads the response body. userID, when
 // non-empty, is sent as the X-User-Id header (required for `/api/v1/user/*`).
 func (c *RESTClient) do(ctx context.Context, method, target, userID string, body any) ([]byte, int, error) {
@@ -5,6 +5,8 @@ import (
 
 	"galaxy/gateway/internal/downstream"
 	lobbymodel "galaxy/model/lobby"
+	ordermodel "galaxy/model/order"
+	reportmodel "galaxy/model/report"
 	usermodel "galaxy/model/user"
 )
 
@@ -21,6 +23,9 @@ func UserRoutes(client *RESTClient) map[string]downstream.Client {
 		usermodel.MessageTypeGetMyAccount: target,
 		usermodel.MessageTypeUpdateMyProfile: target,
 		usermodel.MessageTypeUpdateMySettings: target,
+		usermodel.MessageTypeListMySessions: target,
+		usermodel.MessageTypeRevokeMySession: target,
+		usermodel.MessageTypeRevokeAllMySessions: target,
 	}
 }
 
@@ -38,6 +43,22 @@ func LobbyRoutes(client *RESTClient) map[string]downstream.Client {
 	}
 }
 
+// GameRoutes returns the authenticated `user.games.*` downstream
+// routes served by backend (which in turn forwards to the running
+// game engine container). When client is nil every route resolves to
+// a dependency-unavailable client.
+func GameRoutes(client *RESTClient) map[string]downstream.Client {
+	target := downstream.Client(unavailableClient{})
+	if client != nil {
+		target = gameCommandClient{rest: client}
+	}
+	return map[string]downstream.Client{
+		ordermodel.MessageTypeUserGamesCommand: target,
+		ordermodel.MessageTypeUserGamesOrder: target,
+		reportmodel.MessageTypeUserGamesReport: target,
+	}
+}
+
 type unavailableClient struct{}
 
 func (unavailableClient) ExecuteCommand(context.Context, downstream.AuthenticatedCommand) (downstream.UnaryResult, error) {
@@ -60,8 +81,17 @@ func (c lobbyCommandClient) ExecuteCommand(ctx context.Context, command downstre
 	return c.rest.ExecuteLobbyCommand(ctx, command)
 }
 
+type gameCommandClient struct {
+	rest *RESTClient
+}
+
+func (c gameCommandClient) ExecuteCommand(ctx context.Context, command downstream.AuthenticatedCommand) (downstream.UnaryResult, error) {
+	return c.rest.ExecuteGameCommand(ctx, command)
+}
+
 var (
 	_ downstream.Client = unavailableClient{}
 	_ downstream.Client = userCommandClient{}
 	_ downstream.Client = lobbyCommandClient{}
+	_ downstream.Client = gameCommandClient{}
 )
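The routing hunks above build per-domain message-type maps and fold them into one table. That merge-with-capacity-hint idiom can be sketched in isolation; the `client` interface and `mergeRoutes` helper below are toy names, not the real `downstream.Client` API:

```go
package main

import "fmt"

// client is a toy stand-in for a downstream command client.
type client interface{ Name() string }

type named string

func (n named) Name() string { return string(n) }

// mergeRoutes folds several message-type-to-client maps into one
// routing table, pre-sizing the result the way the gateway sizes
// allRoutes from len(userRoutes)+len(lobbyRoutes)+len(gameRoutes).
func mergeRoutes(maps ...map[string]client) map[string]client {
	total := 0
	for _, m := range maps {
		total += len(m)
	}
	all := make(map[string]client, total)
	for _, m := range maps {
		for k, v := range m {
			all[k] = v
		}
	}
	return all
}

func main() {
	user := map[string]client{"user.account.get": named("user")}
	game := map[string]client{
		"user.games.command": named("game"),
		"user.games.report":  named("game"),
	}
	all := mergeRoutes(user, game)
	fmt.Println(len(all), all["user.games.command"].Name())
}
```

Later maps win on key collisions, which is why the domains must keep their message-type namespaces disjoint.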
@@ -5,6 +5,7 @@ import (
 	"errors"
 	"fmt"
 	"net/http"
+	"net/url"
 	"strings"
 
 	"galaxy/gateway/internal/downstream"
@@ -59,6 +60,22 @@ func (c *RESTClient) ExecuteUserCommand(ctx context.Context, command downstream.
 			return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute user command %q: %w", command.MessageType, err)
 		}
 		return c.executeUserAccountUpdateSettings(ctx, command.UserID, req)
+	case usermodel.MessageTypeListMySessions:
+		if _, err := transcoder.PayloadToListMySessionsRequest(command.PayloadBytes); err != nil {
+			return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute user command %q: %w", command.MessageType, err)
+		}
+		return c.executeUserSessionsList(ctx, command.UserID)
+	case usermodel.MessageTypeRevokeMySession:
+		req, err := transcoder.PayloadToRevokeMySessionRequest(command.PayloadBytes)
+		if err != nil {
+			return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute user command %q: %w", command.MessageType, err)
+		}
+		return c.executeUserSessionsRevoke(ctx, command.UserID, req)
+	case usermodel.MessageTypeRevokeAllMySessions:
+		if _, err := transcoder.PayloadToRevokeAllMySessionsRequest(command.PayloadBytes); err != nil {
+			return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute user command %q: %w", command.MessageType, err)
+		}
+		return c.executeUserSessionsRevokeAll(ctx, command.UserID)
 	default:
 		return downstream.UnaryResult{}, fmt.Errorf("backendclient: execute user command: unsupported message type %q", command.MessageType)
 	}
@@ -88,6 +105,124 @@ func (c *RESTClient) executeUserAccountUpdateSettings(ctx context.Context, userI
 	return projectUserResponse(status, body)
 }
 
+func (c *RESTClient) executeUserSessionsList(ctx context.Context, userID string) (downstream.UnaryResult, error) {
+	body, status, err := c.do(ctx, http.MethodGet, c.baseURL+"/api/v1/user/sessions", userID, nil)
+	if err != nil {
+		return downstream.UnaryResult{}, fmt.Errorf("execute user.sessions.list: %w", err)
+	}
+	return projectUserSessionsListResponse(status, body)
+}
+
+func (c *RESTClient) executeUserSessionsRevoke(ctx context.Context, userID string, req *usermodel.RevokeMySessionRequest) (downstream.UnaryResult, error) {
+	if strings.TrimSpace(req.DeviceSessionID) == "" {
+		return downstream.UnaryResult{}, errors.New("execute user.sessions.revoke: device_session_id must not be empty")
+	}
+	target := c.baseURL + "/api/v1/user/sessions/" + url.PathEscape(req.DeviceSessionID) + "/revoke"
+	body, status, err := c.do(ctx, http.MethodPost, target, userID, nil)
+	if err != nil {
+		return downstream.UnaryResult{}, fmt.Errorf("execute user.sessions.revoke: %w", err)
+	}
+	return projectUserSessionRevokeResponse(status, body)
+}
+
+func (c *RESTClient) executeUserSessionsRevokeAll(ctx context.Context, userID string) (downstream.UnaryResult, error) {
+	body, status, err := c.do(ctx, http.MethodPost, c.baseURL+"/api/v1/user/sessions/revoke-all", userID, nil)
+	if err != nil {
+		return downstream.UnaryResult{}, fmt.Errorf("execute user.sessions.revoke_all: %w", err)
+	}
+	return projectUserSessionsRevokeAllResponse(status, body)
+}
+
+func projectUserSessionsListResponse(statusCode int, payload []byte) (downstream.UnaryResult, error) {
+	switch {
+	case statusCode == http.StatusOK:
+		var response usermodel.ListMySessionsResponse
+		if err := decodeStrictJSON(payload, &response); err != nil {
+			return downstream.UnaryResult{}, fmt.Errorf("decode success response: %w", err)
+		}
+		payloadBytes, err := transcoder.ListMySessionsResponseToPayload(&response)
+		if err != nil {
+			return downstream.UnaryResult{}, fmt.Errorf("encode success response payload: %w", err)
+		}
+		return downstream.UnaryResult{
+			ResultCode: userCommandResultCodeOK,
+			PayloadBytes: payloadBytes,
+		}, nil
+	case statusCode == http.StatusServiceUnavailable:
+		return downstream.UnaryResult{}, downstream.ErrDownstreamUnavailable
+	case statusCode >= 400 && statusCode <= 599:
+		return projectUserBackendError(statusCode, payload)
+	default:
+		return downstream.UnaryResult{}, fmt.Errorf("unexpected HTTP status %d", statusCode)
+	}
+}
+
+func projectUserSessionRevokeResponse(statusCode int, payload []byte) (downstream.UnaryResult, error) {
+	switch {
+	case statusCode == http.StatusOK:
+		var session usermodel.DeviceSession
+		if err := decodeStrictJSON(payload, &session); err != nil {
+			return downstream.UnaryResult{}, fmt.Errorf("decode success response: %w", err)
+		}
+		payloadBytes, err := transcoder.RevokeMySessionResponseToPayload(&usermodel.RevokeMySessionResponse{Session: session})
+		if err != nil {
+			return downstream.UnaryResult{}, fmt.Errorf("encode success response payload: %w", err)
+		}
+		return downstream.UnaryResult{
+			ResultCode: userCommandResultCodeOK,
+			PayloadBytes: payloadBytes,
+		}, nil
+	case statusCode == http.StatusServiceUnavailable:
+		return downstream.UnaryResult{}, downstream.ErrDownstreamUnavailable
+	case statusCode >= 400 && statusCode <= 599:
+		return projectUserBackendError(statusCode, payload)
+	default:
+		return downstream.UnaryResult{}, fmt.Errorf("unexpected HTTP status %d", statusCode)
+	}
+}
+
+func projectUserSessionsRevokeAllResponse(statusCode int, payload []byte) (downstream.UnaryResult, error) {
+	switch {
+	case statusCode == http.StatusOK:
+		var summary usermodel.DeviceSessionRevocationSummary
+		if err := decodeStrictJSON(payload, &summary); err != nil {
+			return downstream.UnaryResult{}, fmt.Errorf("decode success response: %w", err)
+		}
+		payloadBytes, err := transcoder.RevokeAllMySessionsResponseToPayload(&usermodel.RevokeAllMySessionsResponse{Summary: summary})
+		if err != nil {
+			return downstream.UnaryResult{}, fmt.Errorf("encode success response payload: %w", err)
+		}
+		return downstream.UnaryResult{
+			ResultCode: userCommandResultCodeOK,
+			PayloadBytes: payloadBytes,
+		}, nil
+	case statusCode == http.StatusServiceUnavailable:
+		return downstream.UnaryResult{}, downstream.ErrDownstreamUnavailable
+	case statusCode >= 400 && statusCode <= 599:
+		return projectUserBackendError(statusCode, payload)
+	default:
+		return downstream.UnaryResult{}, fmt.Errorf("unexpected HTTP status %d", statusCode)
+	}
+}
+
+// projectUserBackendError shares the error-projection path between every
+// user-command projector. The error envelope is identical regardless of
+// the success-path payload shape.
+func projectUserBackendError(statusCode int, payload []byte) (downstream.UnaryResult, error) {
+	errResp, err := decodeUserError(statusCode, payload)
+	if err != nil {
+		return downstream.UnaryResult{}, fmt.Errorf("decode error response: %w", err)
+	}
+	payloadBytes, err := transcoder.ErrorResponseToPayload(errResp)
+	if err != nil {
+		return downstream.UnaryResult{}, fmt.Errorf("encode error response payload: %w", err)
+	}
+	return downstream.UnaryResult{
+		ResultCode: errResp.Error.Code,
+		PayloadBytes: payloadBytes,
+	}, nil
+}
+
 func projectUserResponse(statusCode int, payload []byte) (downstream.UnaryResult, error) {
 	switch {
 	case statusCode == http.StatusOK:
@@ -166,6 +166,14 @@
 	// rate-limit burst.
 	authenticatedGRPCMessageClassRateLimitBurstEnvVar = "GATEWAY_AUTHENTICATED_GRPC_ANTI_ABUSE_MESSAGE_CLASS_RATE_LIMIT_BURST"
 
+	// sessionCacheMaxEntriesEnvVar names the environment variable that configures
+	// the in-memory session cache LRU bound (entries).
+	sessionCacheMaxEntriesEnvVar = "GATEWAY_SESSION_CACHE_MAX_ENTRIES"
+
+	// sessionCacheTTLEnvVar names the environment variable that configures the
+	// in-memory session cache safety-net TTL applied to every cached entry.
+	sessionCacheTTLEnvVar = "GATEWAY_SESSION_CACHE_TTL"
+
 	// replayRedisKeyPrefixEnvVar names the environment variable that configures
 	// the Redis key prefix used for authenticated replay reservations.
 	replayRedisKeyPrefixEnvVar = "GATEWAY_REPLAY_REDIS_KEY_PREFIX"
@@ -309,6 +317,9 @@
 	defaultAuthenticatedGRPCMessageClassRateLimitRequests = 60
 	defaultAuthenticatedGRPCMessageClassRateLimitBurst = 20
 
+	defaultSessionCacheMaxEntries = 50_000
+	defaultSessionCacheTTL = 10 * time.Minute
+
 	defaultReplayRedisKeyPrefix = "gateway:replay:"
 	defaultReplayRedisReserveTimeout = 250 * time.Millisecond
 
@@ -521,6 +532,21 @@ type AuthenticatedGRPCConfig struct {
 	AntiAbuse AuthenticatedGRPCAntiAbuseConfig
 }
 
+// SessionCacheConfig describes the bounds of the gateway's in-memory
+// session cache. The cache fronts every authenticated request and
+// falls back to a synchronous backend lookup on miss; push-event
+// driven invalidations flip cached records to revoked status without
+// a backend roundtrip.
+type SessionCacheConfig struct {
+	// MaxEntries bounds the LRU. Zero or negative values fall back to
+	// the package default at construction time.
+	MaxEntries int
+
+	// TTL is the safety-net freshness window applied to every cached
+	// entry. Zero or negative values fall back to the package default.
+	TTL time.Duration
+}
+
 // ReplayRedisConfig describes the Redis namespace and timeout used for
 // authenticated replay reservations.
 type ReplayRedisConfig struct {
@@ -577,6 +603,10 @@ type Config struct {
 	// Streams; Redis is now used only for replay reservations.
 	Redis redisconn.Config
 
+	// SessionCache configures the in-memory session cache fronting
+	// every authenticated request.
+	SessionCache SessionCacheConfig
+
 	// ReplayRedis configures the Redis-backed authenticated ReplayStore.
 	ReplayRedis ReplayRedisConfig
 
@@ -699,6 +729,15 @@ func DefaultReplayRedisConfig() ReplayRedisConfig {
 	}
 }
+
+// DefaultSessionCacheConfig returns the default LRU bound and safety-net TTL
+// used by the in-memory session cache.
+func DefaultSessionCacheConfig() SessionCacheConfig {
+	return SessionCacheConfig{
+		MaxEntries: defaultSessionCacheMaxEntries,
+		TTL:        defaultSessionCacheTTL,
+	}
+}
 
 // DefaultBackendConfig returns the default backend settings used for the
 // gateway → backend HTTP and gRPC conversation. URL fields stay empty and
 // must be supplied explicitly via env vars.
@@ -727,6 +766,7 @@ func LoadFromEnv() (Config, error) {
 		AdminHTTP:         DefaultAdminHTTPConfig(),
 		AuthenticatedGRPC: DefaultAuthenticatedGRPCConfig(),
 		Redis:             redisconn.DefaultConfig(),
+		SessionCache:      DefaultSessionCacheConfig(),
 		ReplayRedis:       DefaultReplayRedisConfig(),
 		ResponseSigner:    DefaultResponseSignerConfig(),
 	}
@@ -895,6 +935,18 @@ func LoadFromEnv() (Config, error) {
 	}
 	cfg.Redis = redisConn
+
+	sessionCacheMaxEntries, err := loadIntEnvWithDefault(sessionCacheMaxEntriesEnvVar, cfg.SessionCache.MaxEntries)
+	if err != nil {
+		return Config{}, err
+	}
+	cfg.SessionCache.MaxEntries = sessionCacheMaxEntries
+
+	sessionCacheTTL, err := loadDurationEnvWithDefault(sessionCacheTTLEnvVar, cfg.SessionCache.TTL)
+	if err != nil {
+		return Config{}, err
+	}
+	cfg.SessionCache.TTL = sessionCacheTTL
 
 	rawReplayRedisKeyPrefix, ok := os.LookupEnv(replayRedisKeyPrefixEnvVar)
 	if ok {
 		cfg.ReplayRedis.KeyPrefix = rawReplayRedisKeyPrefix
@@ -123,4 +123,7 @@ func (unavailableSessionCache) Lookup(context.Context, string) (session.Record,
 	return session.Record{}, errors.New("session cache is unavailable")
 }
+
+func (unavailableSessionCache) MarkRevoked(string)           {}
+func (unavailableSessionCache) MarkAllRevokedForUser(string) {}
 
 var _ gatewayv1.EdgeGatewayServer = sessionLookupService{}
@@ -292,3 +292,6 @@ type staticSessionCache struct {
 func (c staticSessionCache) Lookup(ctx context.Context, deviceSessionID string) (session.Record, error) {
 	return c.lookupFunc(ctx, deviceSessionID)
 }
+
+func (staticSessionCache) MarkRevoked(string)           {}
+func (staticSessionCache) MarkAllRevokedForUser(string) {}
@@ -1,50 +1,12 @@
 package session
 
-import (
-	"context"
-	"errors"
-	"fmt"
-)
+import "context"
 
-// BackendLookup describes the slice of `backendclient.RESTClient`
-// SessionCache depends on. The narrow interface keeps this package free
-// of any backendclient import.
+// BackendLookup is the slice of backend's REST surface that the
+// session-cache layer depends on. The narrow interface keeps this
+// package free of any backendclient import. The canonical
+// implementation is `*backendclient.RESTClient`; tests can supply a
+// fake.
 type BackendLookup interface {
 	LookupSession(ctx context.Context, deviceSessionID string) (Record, error)
 }
-
-// BackendCache resolves authenticated device sessions by issuing one
-// synchronous REST call to backend per request. The canonical implementation replaces the
-// previous Redis-backed projection with this thin wrapper; gateway no
-// longer keeps a process-local snapshot. See ARCHITECTURE.md §11
-// «backend (sync REST), no Redis projection».
-type BackendCache struct {
-	backend BackendLookup
-}
-
-// NewBackendCache constructs a Cache that delegates every Lookup to
-// backend over REST. backend must not be nil.
-func NewBackendCache(backend BackendLookup) (*BackendCache, error) {
-	if backend == nil {
-		return nil, errors.New("session.NewBackendCache: backend lookup must not be nil")
-	}
-	return &BackendCache{backend: backend}, nil
-}
-
-// Lookup resolves deviceSessionID via backend. ErrNotFound is forwarded
-// unchanged so callers can keep using the existing equality check.
-func (c *BackendCache) Lookup(ctx context.Context, deviceSessionID string) (Record, error) {
-	if c == nil {
-		return Record{}, errors.New("session backend cache: nil cache")
-	}
-	if c.backend == nil {
-		return Record{}, errors.New("session backend cache: nil backend lookup")
-	}
-	rec, err := c.backend.LookupSession(ctx, deviceSessionID)
-	if err != nil {
-		return Record{}, fmt.Errorf("session backend cache: %w", err)
-	}
-	return rec, nil
-}
-
-var _ Cache = (*BackendCache)(nil)
@@ -0,0 +1,238 @@
+package session
+
+import (
+	"container/list"
+	"context"
+	"errors"
+	"fmt"
+	"sync"
+	"time"
+
+	"go.uber.org/zap"
+)
+
+// DefaultMaxEntries is the LRU bound applied when MemoryCacheOptions
+// does not supply a positive MaxEntries. Holds well below the per-process
+// memory budget for the documented MVP scale (≤10K active accounts,
+// ≤100K device sessions).
+const DefaultMaxEntries = 50_000
+
+// DefaultTTL is the safety-net freshness window applied when
+// MemoryCacheOptions does not supply a positive TTL. Push events drive
+// invalidation in the steady state; the TTL guards against missed
+// events (cursor aged out, gateway restart) by forcing a fresh backend
+// lookup at most once per window.
+const DefaultTTL = 10 * time.Minute
+
+// MemoryCache is the canonical Cache implementation. Hot-path Lookup
+// reads serve from a process-local LRU + TTL map; misses delegate to
+// BackendLookup and seed the cache. session_invalidation push events
+// flip cached records to a revoked status without a backend
+// roundtrip, after which Lookup returns the revoked record straight
+// from memory and gateway rejects the request.
+//
+// MemoryCache is safe for concurrent use.
+type MemoryCache struct {
+	mu      sync.Mutex
+	entries map[string]*list.Element
+	byUser  map[string]map[string]struct{}
+	order   *list.List
+	max     int
+	ttl     time.Duration
+	backend BackendLookup
+	now     func() time.Time
+	logger  *zap.Logger
+}
+
+// memoryEntry is the value stored inside the LRU list. The key
+// duplication keeps Element.Value self-describing for eviction.
+type memoryEntry struct {
+	key       string
+	record    Record
+	expiresAt time.Time
+}
+
+// MemoryCacheOptions tunes the cache.
+type MemoryCacheOptions struct {
+	// MaxEntries bounds the number of cached records. Zero or
+	// negative values default to DefaultMaxEntries.
+	MaxEntries int
+	// TTL bounds how long a cached entry serves the hot path before
+	// a fresh backend lookup. Zero or negative values default to
+	// DefaultTTL.
+	TTL time.Duration
+	// Now overrides time.Now for tests.
+	Now func() time.Time
+	// Logger is named "session.cache". A nil value uses zap.NewNop.
+	Logger *zap.Logger
+}
+
+// NewMemoryCache constructs a MemoryCache. backend must not be nil.
+func NewMemoryCache(backend BackendLookup, opts MemoryCacheOptions) (*MemoryCache, error) {
+	if backend == nil {
+		return nil, errors.New("session.NewMemoryCache: backend lookup must not be nil")
+	}
+	max := opts.MaxEntries
+	if max <= 0 {
+		max = DefaultMaxEntries
+	}
+	ttl := opts.TTL
+	if ttl <= 0 {
+		ttl = DefaultTTL
+	}
+	now := opts.Now
+	if now == nil {
+		now = time.Now
+	}
+	logger := opts.Logger
+	if logger == nil {
+		logger = zap.NewNop()
+	}
+	return &MemoryCache{
+		entries: make(map[string]*list.Element, max),
+		byUser:  make(map[string]map[string]struct{}),
+		order:   list.New(),
+		max:     max,
+		ttl:     ttl,
+		backend: backend,
+		now:     now,
+		logger:  logger.Named("session.cache"),
+	}, nil
+}
+
+// Lookup serves deviceSessionID from the cache. A miss (or an entry
+// past its TTL) triggers a backend lookup and seeds the cache before
+// returning. Concurrent Lookups for the same key are not coalesced —
+// that level of optimisation is not needed at the documented MVP
+// scale.
+func (c *MemoryCache) Lookup(ctx context.Context, deviceSessionID string) (Record, error) {
+	if c == nil {
+		return Record{}, errors.New("session memory cache: nil cache")
+	}
+	if deviceSessionID == "" {
+		return Record{}, ErrNotFound
+	}
+	now := c.now()
+	c.mu.Lock()
+	if elem, ok := c.entries[deviceSessionID]; ok {
+		entry := elem.Value.(*memoryEntry)
+		if entry.expiresAt.After(now) {
+			c.order.MoveToFront(elem)
+			rec := entry.record
+			c.mu.Unlock()
+			return rec, nil
+		}
+		// Expired — evict and fall through to backend.
+		c.evictLocked(elem)
+	}
+	c.mu.Unlock()
+
+	rec, err := c.backend.LookupSession(ctx, deviceSessionID)
+	if err != nil {
+		return Record{}, fmt.Errorf("session memory cache: %w", err)
+	}
+	c.mu.Lock()
+	c.insertLocked(deviceSessionID, rec, now.Add(c.ttl))
+	c.mu.Unlock()
+	return rec, nil
+}
+
+// MarkRevoked flips the cached record for deviceSessionID to a
+// revoked status. Calling on a missing entry is a no-op.
+func (c *MemoryCache) MarkRevoked(deviceSessionID string) {
+	if c == nil || deviceSessionID == "" {
+		return
+	}
+	c.mu.Lock()
+	defer c.mu.Unlock()
+	elem, ok := c.entries[deviceSessionID]
+	if !ok {
+		return
+	}
+	entry := elem.Value.(*memoryEntry)
+	entry.record.Status = StatusRevoked
+}
+
+// MarkAllRevokedForUser flips every cached record whose UserID is
+// userID to revoked. The user index is updated in O(n) over the
+// user's session set, not the whole cache.
+func (c *MemoryCache) MarkAllRevokedForUser(userID string) {
+	if c == nil || userID == "" {
+		return
+	}
+	c.mu.Lock()
+	defer c.mu.Unlock()
+	set, ok := c.byUser[userID]
+	if !ok {
+		return
+	}
+	for id := range set {
+		if elem, ok := c.entries[id]; ok {
+			elem.Value.(*memoryEntry).record.Status = StatusRevoked
+		}
+	}
+}
+
+// Len returns the current number of cached entries. Useful for
+// metrics and tests.
+func (c *MemoryCache) Len() int {
+	if c == nil {
+		return 0
+	}
+	c.mu.Lock()
+	defer c.mu.Unlock()
+	return c.order.Len()
+}
+
+// insertLocked stores rec under deviceSessionID. The caller holds c.mu.
+func (c *MemoryCache) insertLocked(deviceSessionID string, rec Record, expiresAt time.Time) {
+	if existing, ok := c.entries[deviceSessionID]; ok {
+		existing.Value.(*memoryEntry).record = rec
+		existing.Value.(*memoryEntry).expiresAt = expiresAt
+		c.order.MoveToFront(existing)
+		c.indexUserLocked(deviceSessionID, rec.UserID)
+		return
+	}
+	elem := c.order.PushFront(&memoryEntry{
+		key:       deviceSessionID,
+		record:    rec,
+		expiresAt: expiresAt,
+	})
+	c.entries[deviceSessionID] = elem
+	c.indexUserLocked(deviceSessionID, rec.UserID)
+	if c.order.Len() > c.max {
+		oldest := c.order.Back()
+		if oldest != nil {
+			c.evictLocked(oldest)
+		}
+	}
+}
+
+// evictLocked removes elem from every internal index. The caller holds c.mu.
+func (c *MemoryCache) evictLocked(elem *list.Element) {
+	entry := elem.Value.(*memoryEntry)
+	delete(c.entries, entry.key)
+	if set := c.byUser[entry.record.UserID]; set != nil {
+		delete(set, entry.key)
+		if len(set) == 0 {
+			delete(c.byUser, entry.record.UserID)
+		}
+	}
+	c.order.Remove(elem)
+}
+
+// indexUserLocked associates deviceSessionID with userID in byUser.
+// The caller holds c.mu.
+func (c *MemoryCache) indexUserLocked(deviceSessionID, userID string) {
+	if userID == "" {
+		return
+	}
+	set, ok := c.byUser[userID]
+	if !ok {
+		set = make(map[string]struct{})
+		c.byUser[userID] = set
+	}
+	set[deviceSessionID] = struct{}{}
+}
+
+var _ Cache = (*MemoryCache)(nil)
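The implementation above leans on the standard `container/list` LRU idiom: new and freshly-used keys move to the front, and the back element is the eviction victim. A stripped-down, self-contained illustration of just that idiom (no TTL, no user index; the `lru` type here is an invented teaching aid, not part of the gateway):

```go
package main

import (
	"container/list"
	"fmt"
)

// lru is a minimal string→string LRU showing only the ordering
// mechanics MemoryCache uses: front = most recently used,
// back = eviction candidate.
type lru struct {
	entries map[string]*list.Element
	order   *list.List
	max     int
}

type pair struct{ key, val string }

func newLRU(max int) *lru {
	return &lru{entries: make(map[string]*list.Element), order: list.New(), max: max}
}

func (l *lru) put(key, val string) {
	if elem, ok := l.entries[key]; ok {
		elem.Value.(*pair).val = val
		l.order.MoveToFront(elem)
		return
	}
	l.entries[key] = l.order.PushFront(&pair{key, val})
	if l.order.Len() > l.max {
		// Over budget: drop the least recently used entry.
		oldest := l.order.Back()
		delete(l.entries, oldest.Value.(*pair).key)
		l.order.Remove(oldest)
	}
}

func (l *lru) get(key string) (string, bool) {
	elem, ok := l.entries[key]
	if !ok {
		return "", false
	}
	l.order.MoveToFront(elem)
	return elem.Value.(*pair).val, true
}

func main() {
	c := newLRU(2)
	c.put("a", "1")
	c.put("b", "2")
	c.get("a")      // touch "a" so "b" becomes the oldest
	c.put("c", "3") // evicts "b"
	_, okA := c.get("a")
	_, okB := c.get("b")
	fmt.Println(okA, okB) // true false
}
```

Both maps plus the list give O(1) lookup, promotion, and eviction, which is why `MemoryCache` can sit on the per-request hot path.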
@@ -0,0 +1,204 @@
+package session_test
+
+import (
+	"context"
+	"errors"
+	"sync"
+	"sync/atomic"
+	"testing"
+	"time"
+
+	"galaxy/gateway/internal/session"
+)
+
+// stubLookup is the BackendLookup test fake. lookups counts hits;
+// records is the canonical source of truth keyed by device_session_id.
+type stubLookup struct {
+	mu       sync.Mutex
+	records  map[string]session.Record
+	hits     atomic.Int64
+	notFound bool
+}
+
+func newStubLookup() *stubLookup {
+	return &stubLookup{records: make(map[string]session.Record)}
+}
+
+func (s *stubLookup) put(rec session.Record) {
+	s.mu.Lock()
+	s.records[rec.DeviceSessionID] = rec
+	s.mu.Unlock()
+}
+
+func (s *stubLookup) LookupSession(_ context.Context, deviceSessionID string) (session.Record, error) {
+	s.hits.Add(1)
+	s.mu.Lock()
+	defer s.mu.Unlock()
+	if s.notFound {
+		return session.Record{}, session.ErrNotFound
+	}
+	rec, ok := s.records[deviceSessionID]
+	if !ok {
+		return session.Record{}, session.ErrNotFound
+	}
+	return rec, nil
+}
+
+func TestMemoryCacheLookupHitsCacheAfterFirstFetch(t *testing.T) {
+	stub := newStubLookup()
+	stub.put(session.Record{DeviceSessionID: "a", UserID: "u1", Status: session.StatusActive})
+
+	cache, err := session.NewMemoryCache(stub, session.MemoryCacheOptions{
+		MaxEntries: 10,
+		TTL:        time.Hour,
+	})
+	if err != nil {
+		t.Fatalf("NewMemoryCache: %v", err)
+	}
+
+	if _, err := cache.Lookup(context.Background(), "a"); err != nil {
+		t.Fatalf("first lookup: %v", err)
+	}
+	if _, err := cache.Lookup(context.Background(), "a"); err != nil {
+		t.Fatalf("second lookup: %v", err)
+	}
+	if got := stub.hits.Load(); got != 1 {
+		t.Fatalf("backend hits = %d, want 1 (cache should serve the second call)", got)
+	}
+}
+
+func TestMemoryCacheLookupRefreshesOnTTLExpiry(t *testing.T) {
+	stub := newStubLookup()
+	stub.put(session.Record{DeviceSessionID: "a", UserID: "u1", Status: session.StatusActive})
+
+	clock := time.Unix(1_000_000, 0)
+	now := func() time.Time { return clock }
+
+	cache, err := session.NewMemoryCache(stub, session.MemoryCacheOptions{
+		MaxEntries: 10,
+		TTL:        100 * time.Millisecond,
+		Now:        now,
+	})
+	if err != nil {
+		t.Fatalf("NewMemoryCache: %v", err)
+	}
+
+	if _, err := cache.Lookup(context.Background(), "a"); err != nil {
+		t.Fatalf("first lookup: %v", err)
+	}
+	clock = clock.Add(200 * time.Millisecond)
+	if _, err := cache.Lookup(context.Background(), "a"); err != nil {
+		t.Fatalf("post-TTL lookup: %v", err)
+	}
+	if got := stub.hits.Load(); got != 2 {
+		t.Fatalf("backend hits = %d, want 2 (TTL expiry should refetch)", got)
+	}
+}
+
+func TestMemoryCacheMarkRevokedFlipsCachedRecord(t *testing.T) {
+	stub := newStubLookup()
+	stub.put(session.Record{DeviceSessionID: "a", UserID: "u1", Status: session.StatusActive})
+
+	cache, err := session.NewMemoryCache(stub, session.MemoryCacheOptions{MaxEntries: 10, TTL: time.Hour})
+	if err != nil {
+		t.Fatalf("NewMemoryCache: %v", err)
+	}
+
+	if _, err := cache.Lookup(context.Background(), "a"); err != nil {
+		t.Fatalf("first lookup: %v", err)
+	}
+	cache.MarkRevoked("a")
+	rec, err := cache.Lookup(context.Background(), "a")
+	if err != nil {
+		t.Fatalf("post-revoke lookup: %v", err)
+	}
+	if rec.Status != session.StatusRevoked {
+		t.Fatalf("status = %q, want %q", rec.Status, session.StatusRevoked)
+	}
+	if got := stub.hits.Load(); got != 1 {
+		t.Fatalf("backend hits = %d, want 1 (MarkRevoked must not refetch)", got)
+	}
+}
+
+func TestMemoryCacheMarkAllRevokedForUserFlipsAllSessions(t *testing.T) {
+	stub := newStubLookup()
+	stub.put(session.Record{DeviceSessionID: "a", UserID: "u1", Status: session.StatusActive})
+	stub.put(session.Record{DeviceSessionID: "b", UserID: "u1", Status: session.StatusActive})
+	stub.put(session.Record{DeviceSessionID: "c", UserID: "u2", Status: session.StatusActive})
+
+	cache, err := session.NewMemoryCache(stub, session.MemoryCacheOptions{MaxEntries: 10, TTL: time.Hour})
+	if err != nil {
+		t.Fatalf("NewMemoryCache: %v", err)
+	}
+	for _, id := range []string{"a", "b", "c"} {
+		if _, err := cache.Lookup(context.Background(), id); err != nil {
+			t.Fatalf("seed %s: %v", id, err)
+		}
+	}
+
+	cache.MarkAllRevokedForUser("u1")
+
+	for _, id := range []string{"a", "b"} {
+		rec, err := cache.Lookup(context.Background(), id)
+		if err != nil {
+			t.Fatalf("post-revoke lookup %s: %v", id, err)
+		}
+		if rec.Status != session.StatusRevoked {
+			t.Fatalf("session %s status = %q, want revoked", id, rec.Status)
+		}
+	}
+	rec, err := cache.Lookup(context.Background(), "c")
+	if err != nil {
+		t.Fatalf("post-revoke lookup c: %v", err)
+	}
+	if rec.Status != session.StatusActive {
+		t.Fatalf("session c status = %q, want active (other user)", rec.Status)
+	}
+}
+
+func TestMemoryCacheLRUEvictsLeastRecentlyUsed(t *testing.T) {
+	stub := newStubLookup()
+	stub.put(session.Record{DeviceSessionID: "a", UserID: "u1", Status: session.StatusActive})
+	stub.put(session.Record{DeviceSessionID: "b", UserID: "u2", Status: session.StatusActive})
+	stub.put(session.Record{DeviceSessionID: "c", UserID: "u3", Status: session.StatusActive})
+
+	cache, err := session.NewMemoryCache(stub, session.MemoryCacheOptions{MaxEntries: 2, TTL: time.Hour})
+	if err != nil {
+		t.Fatalf("NewMemoryCache: %v", err)
+	}
+
+	if _, err := cache.Lookup(context.Background(), "a"); err != nil {
+		t.Fatalf("seed a: %v", err)
+	}
+	if _, err := cache.Lookup(context.Background(), "b"); err != nil {
+		t.Fatalf("seed b: %v", err)
+	}
+	if _, err := cache.Lookup(context.Background(), "c"); err != nil {
+		t.Fatalf("seed c: %v", err)
+	}
+	if got := cache.Len(); got != 2 {
+		t.Fatalf("Len = %d, want 2", got)
+	}
+
+	hitsBefore := stub.hits.Load()
+	if _, err := cache.Lookup(context.Background(), "a"); err != nil {
+		t.Fatalf("re-lookup a: %v", err)
+	}
+	if got := stub.hits.Load(); got != hitsBefore+1 {
+		t.Fatalf("backend hits = %d, want +1 (a was evicted)", got-hitsBefore)
+	}
+}
+
+func TestMemoryCachePropagatesBackendNotFound(t *testing.T) {
+	stub := newStubLookup()
+	stub.notFound = true
+
+	cache, err := session.NewMemoryCache(stub, session.MemoryCacheOptions{MaxEntries: 4, TTL: time.Hour})
+	if err != nil {
+		t.Fatalf("NewMemoryCache: %v", err)
+	}
+	_, err = cache.Lookup(context.Background(), "missing")
+	if !errors.Is(err, session.ErrNotFound) {
+		t.Fatalf("Lookup error = %v, want ErrNotFound", err)
+	}
+}
@@ -14,13 +14,29 @@ var (
 )
 
 // Cache resolves authenticated device-session state from the gateway
-// hot path. The implementation dropped the previous Redis projection: the only
-// implementation is *BackendCache, which calls backend's
-// `/api/v1/internal/sessions/{id}` synchronously per request.
+// hot path. The canonical implementation is *MemoryCache: a
+// process-local LRU + TTL store that falls back to backend's
+// `/api/v1/internal/sessions/{id}` on miss and listens for
+// `session_invalidation` push events from backend so revoked sessions
+// are reflected immediately without a fresh backend lookup.
+//
+// The Mark* methods are called by the push dispatcher. They flip
+// cached entries to revoked status; subsequent Lookups serve the
+// revoked record directly so authenticated traffic on those sessions
+// is rejected at the edge before reaching backend.
 type Cache interface {
 	// Lookup returns the cached record for deviceSessionID. Implementations must
 	// wrap ErrNotFound when the cache does not contain the requested record.
 	Lookup(ctx context.Context, deviceSessionID string) (Record, error)
+
+	// MarkRevoked flips the cached record for deviceSessionID to a
+	// revoked status. Calling on a missing entry is a no-op.
+	MarkRevoked(deviceSessionID string)
+
+	// MarkAllRevokedForUser flips every cached record belonging to
+	// userID to a revoked status. Calling on a user with no cached
+	// sessions is a no-op.
+	MarkAllRevokedForUser(userID string)
 }
 
 // Status identifies the cached lifecycle state of a device session.
@@ -0,0 +1,41 @@
+# galaxy/integration test entry points.
+#
+# Targets:
+#   preclean         — wipe leftover containers/networks/images from
+#                      earlier runs (idempotent).
+#   integration      — preclean, then run every test in the module
+#                      sequentially (`-p=1 -parallel=1`). Recommended
+#                      default for a slow / shared Docker.
+#   integration-step — preclean before each test and run them one at
+#                      a time, stopping on the first failure. Use to
+#                      isolate a flake or build up to a full pass.
+#
+# Override knobs:
+#   INTEGRATION_TIMEOUT  per-test timeout for `make integration`
+#                        (default 15m).
+#   STEP_TIMEOUT         per-test timeout for `make integration-step`
+#                        (default 5m, exported to runstep.sh).
+#
+# Both runners disable parallelism so concurrent docker-compose
+# bootstraps cannot overload Docker. They also disable the
+# testcontainers Ryuk reaper because it does not start cleanly on the
+# colima/docker setup we use locally — the `preclean` target removes
+# leftover state by label instead, which Ryuk would otherwise handle.
+
+INTEGRATION_TIMEOUT ?= 15m
+STEP_TIMEOUT ?= 5m
+
+GO_TEST_FLAGS = -count=1 -timeout=$(INTEGRATION_TIMEOUT) -p=1 -parallel=1
+
+export TESTCONTAINERS_RYUK_DISABLED = true
+
+.PHONY: preclean integration integration-step
+
+preclean:
+	@bash scripts/preclean.sh
+
+integration: preclean
+	go test $(GO_TEST_FLAGS) ./...
+
+integration-step:
+	@STEP_TIMEOUT=$(STEP_TIMEOUT) bash scripts/runstep.sh
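The knobs above combine at the command line; a typical session on a slow shared Docker daemon might look like this (a usage sketch of the targets and variables defined in the Makefile, not output from a real run):

```shell
# Full serial pass with the stock 15-minute per-test budget:
make -C integration integration

# Step through one test at a time, giving each test ten minutes
# (STEP_TIMEOUT overrides the 5m default and is exported to runstep.sh):
make -C integration integration-step STEP_TIMEOUT=10m

# Just clear leftovers without running anything:
make -C integration preclean
```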
@@ -5,6 +5,13 @@ from outside and verifies behaviour at the public boundary while
 `backend` and `galaxy/game` run as Docker containers managed by the
 test process via `testcontainers-go`.
 
+For cross-cutting testing principles (unit vs integration boundaries,
+why testcontainers tests pin no-op observability providers, why
+infrastructure failures in this suite fail loudly instead of skipping)
+see [`docs/TESTING.md`](../docs/TESTING.md). This README focuses on
+the integration-specific runbook: prerequisites, entry points,
+labels, and per-test fixtures.
+
 ## Prerequisites
 
 - A reachable Docker daemon (`DOCKER_HOST` or the local socket).
@@ -15,10 +22,40 @@ test process via `testcontainers-go`.
 
 ## Run
 
+The recommended entry points are the Makefile targets:
+
 ```bash
-go test ./integration/...
+make -C integration preclean          # idempotent leftover cleanup
+make -C integration integration       # preclean + serial test run
+make -C integration integration-step  # preclean + one-test-at-a-time
 ```
+
+`preclean` removes stale containers and locally-built images from
+earlier runs; it never touches testcontainers-pulled service images
+(`postgres:16-alpine`, `axllent/mailpit`, `redis:7-alpine`,
+`testcontainers/ryuk`), so the cache stays warm. The cleanup keys
+off labels:
+
+- `org.testcontainers=true` — every container/network created by
+  `testcontainers-go` (our backend/gateway/game and the postgres /
+  redis / mailpit / ryuk service containers).
+- `galaxy.backend=1` — engine instances spawned by backend's runtime
+  adapter directly on the host Docker daemon (see
+  `backend/internal/dockerclient/types.go`).
+- `galaxy.test.kind=integration-image` — local builds of
+  `galaxy/{backend,gateway,game}:integration` produced by
+  `testenv/images.go`.
+
+`integration` runs every test in the module sequentially
+(`-p=1 -parallel=1`) — recommended default on a slow / shared Docker.
+`integration-step` runs them one at a time with a fresh preclean
+before each test and stops on the first failure; useful to isolate a
+flake or build up to a full pass without losing context to subsequent
+tests.
+
+Direct `go test ./integration/...` still works but does not pre-clean
+or serialise the suite; use it only on a hand-cleaned Docker.
+
 The suite builds three Docker images on demand from the workspace
 sources:
 
@@ -27,8 +64,10 @@ sources:
 - `galaxy/game:integration` (`game/Dockerfile`).
 
 Each image is built once per `go test` invocation, guarded by a
-`sync.Once` inside `testenv`. The first cold run is slow (~2–3 min on
-a developer machine); subsequent runs reuse the layer cache.
+`sync.Once` inside `testenv`, and stamped with the
+`galaxy.test.kind=integration-image` label so `preclean` can wipe it
+on the next run. The first cold run is slow (~2–3 min on a
+developer machine); subsequent runs reuse the layer cache.
 
 ## Skipping
 
@@ -70,7 +70,11 @@ func TestAdminUserSanctionPermanentBlock(t *testing.T) {
 	if lastErr == nil {
 		t.Fatalf("authenticated call succeeded after permanent_block")
 	}
-	if !testenv.IsUnauthenticated(lastErr) {
+	// Gateway maps a revoked session to FailedPrecondition ("device
+	// session is revoked"); a session that vanished from the cache
+	// before the call lands as Unauthenticated. Either is a correct
+	// rejection.
+	if !testenv.IsFailedPrecondition(lastErr) && !testenv.IsUnauthenticated(lastErr) {
 		t.Fatalf("post-sanction status: %v", lastErr)
 	}
 
Executable
+88
@@ -0,0 +1,88 @@
#!/usr/bin/env bash
# Pre-run cleanup for galaxy/integration. Idempotent and safe to call
# repeatedly; runs before each integration test session to wipe state
# left over from earlier runs.
#
# What we touch:
# 1. Containers labelled `org.testcontainers=true` — every container
#    brought up by testcontainers-go (our backend/gateway/game plus
#    postgres/redis/mailpit/ryuk service containers).
# 2. Containers labelled `galaxy.backend=1` — engine instances spawned
#    by backend's runtime adapter on the host Docker daemon (see
#    `backend/internal/dockerclient/types.go`). These do not carry
#    the testcontainers label because backend, not testcontainers,
#    creates them.
# 3. Networks labelled `org.testcontainers=true` — networks created
#    by testcontainers-go for cross-container wiring.
# 4. Images labelled `galaxy.test.kind=integration-image` — local
#    builds of galaxy/{backend,gateway,game}:integration. Pulled
#    service images (postgres, redis, ryuk, mailpit) are NOT touched
#    so the cache stays warm between runs.
#
# What we never touch:
# - Containers / images without one of the labels above.
# - User-managed images and volumes.

set -euo pipefail

remove_containers_with_label() {
  local label="$1"
  local description="$2"
  local ids
  ids=$(docker ps -aq --filter "label=$label" 2>/dev/null || true)
  if [ -z "$ids" ]; then
    return
  fi
  local count
  count=$(printf '%s\n' "$ids" | wc -l | tr -d ' ')
  echo "preclean: removing $count $description"
  # shellcheck disable=SC2086
  docker rm -f $ids >/dev/null 2>&1 || true
}

remove_networks_with_label() {
  local label="$1"
  local description="$2"
  local ids
  ids=$(docker network ls -q --filter "label=$label" 2>/dev/null || true)
  if [ -z "$ids" ]; then
    return
  fi
  local count
  count=$(printf '%s\n' "$ids" | wc -l | tr -d ' ')
  echo "preclean: removing $count $description"
  # shellcheck disable=SC2086
  docker network rm $ids >/dev/null 2>&1 || true
}

remove_images_with_label() {
  local label="$1"
  local description="$2"
  local ids
  ids=$(docker images -q --filter "label=$label" 2>/dev/null || true)
  if [ -z "$ids" ]; then
    return
  fi
  local count
  count=$(printf '%s\n' "$ids" | sort -u | wc -l | tr -d ' ')
  echo "preclean: removing $count $description"
  # shellcheck disable=SC2086
  docker rmi -f $ids >/dev/null 2>&1 || true
}

if ! command -v docker >/dev/null 2>&1; then
  echo "preclean: docker CLI not found, nothing to do" >&2
  exit 0
fi

if ! docker info >/dev/null 2>&1; then
  echo "preclean: docker daemon unreachable, nothing to do" >&2
  exit 0
fi

remove_containers_with_label "org.testcontainers=true" "testcontainers-managed containers"
remove_containers_with_label "galaxy.backend=1" "backend-managed engine containers"
remove_networks_with_label "org.testcontainers=true" "testcontainers-managed networks"
remove_images_with_label "galaxy.test.kind=integration-image" "integration-built images"

echo "preclean: done"
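The `printf | wc -l | tr` idiom the helpers above use to count ids can be checked on its own; the `ids` value here is a made-up stand-in for `docker ps -q` output:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Two fake container ids, one per line, as `docker ps -q` prints them.
ids=$'abc123\ndef456'

# printf guarantees a trailing newline so wc -l counts the last id too;
# tr strips the leading padding some wc implementations emit.
count=$(printf '%s\n' "$ids" | wc -l | tr -d ' ')
echo "$count"
```

The `tr -d ' '` matters only for the log message: BSD `wc` pads its output with spaces, which would otherwise leak into "preclean: removing  2 ...".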
Executable
+81
@@ -0,0 +1,81 @@
#!/usr/bin/env bash
# Sequential one-test-at-a-time integration run.
#
# Runs every Test* function under `galaxy/integration` in a fresh
# Docker state — preclean + single-test `go test -run` invocation —
# stopping on the first failure. Use this to:
#
# - Diagnose which test brings the suite down on a slow or
#   overloaded Docker.
# - Build confidence on a host that cannot run the full suite in
#   one shot.
#
# Slower than `make integration` (every test pays the bootstrap cost
# of its own backend/gateway/postgres) but each iteration is
# self-contained, so a flaky test cannot silently poison its
# successors.
#
# Environment:
#   STEP_TIMEOUT   per-test timeout (default 5m).
#   STEP_PRECLEAN  set to 0 to skip the preclean step before each
#                  test. Default is 1; only disable on a hand-cleaned
#                  Docker that you are sure has no leftover state.
#   STEP_VERBOSE   set to 0 to suppress `-v`. Default 1.
#
# Ryuk: this runner exports TESTCONTAINERS_RYUK_DISABLED=true. Ryuk
# does not start cleanly on the local colima setup; the per-step
# preclean handles leftover state by label. Override by setting
# TESTCONTAINERS_RYUK_DISABLED=false in the calling shell.

set -euo pipefail
export TESTCONTAINERS_RYUK_DISABLED="${TESTCONTAINERS_RYUK_DISABLED:-true}"

cd "$(dirname "$0")/.."

readonly STEP_TIMEOUT="${STEP_TIMEOUT:-5m}"
readonly STEP_PRECLEAN="${STEP_PRECLEAN:-1}"
readonly STEP_VERBOSE="${STEP_VERBOSE:-1}"

go_test_flags=(-count=1 -timeout="$STEP_TIMEOUT" -p=1 -parallel=1)
if [ "$STEP_VERBOSE" = "1" ]; then
  go_test_flags+=(-v)
fi

# Discover every top-level Test in the integration module. `go test
# -list` honours build tags and filters; `^Test` picks up the standard
# Go test convention.
mapfile -t tests < <(go test -list '^Test' ./... 2>/dev/null | grep -E '^Test' | sort -u)
if [ "${#tests[@]}" -eq 0 ]; then
  echo "runstep: no tests found under ./..." >&2
  exit 1
fi

echo "runstep: discovered ${#tests[@]} tests; per-test timeout $STEP_TIMEOUT"

passed=0
failed=""
for name in "${tests[@]}"; do
  if [ "$STEP_PRECLEAN" = "1" ]; then
    bash scripts/preclean.sh
  fi
  echo
  echo "============================================================"
  echo "runstep: $name"
  echo "============================================================"
  if go test "${go_test_flags[@]}" -run "^${name}$" ./...; then
    passed=$((passed + 1))
    continue
  fi
  failed="$name"
  break
done

if [ -n "$failed" ]; then
  echo
  echo "runstep: FAILED at $failed (after $passed passes)"
  echo "  drill down with: go test -run '^${failed}$' -v ./..."
  exit 1
fi

echo
echo "runstep: all ${#tests[@]} tests passed"
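The discovery pipeline above (list, filter, dedupe into a bash array) can be exercised in isolation; here `printf` stands in for `go test -list` output, which also emits non-test lines such as `ok ...` package summaries:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fake `go test -list` output: two tests, a duplicate, and a summary line.
list_output=$'TestAlpha\nTestBeta\nTestAlpha\nok  galaxy/integration 0.01s'

# grep keeps only Test* names, sort -u drops the duplicate, and
# mapfile loads the survivors into an array without word splitting.
mapfile -t tests < <(printf '%s\n' "$list_output" | grep -E '^Test' | sort -u)
echo "${#tests[@]}"
```

`mapfile` needs bash 4+, which is why the script pins `#!/usr/bin/env bash` rather than `sh`.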
@@ -2,7 +2,6 @@ package integration_test

import (
	"context"
	"testing"
	"time"

@@ -11,10 +10,10 @@ import (
	"galaxy/transcoder"
)

// TestSessionRevoke_SubsequentRequestsRejected revokes the caller's
// session through the user surface (signed gRPC end-to-end) and
// asserts that subsequent authenticated calls bound to that session
// are rejected by gateway.
func TestSessionRevoke_SubsequentRequestsRejected(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
@@ -28,31 +27,36 @@ func TestSessionRevoke_SubsequentRequestsRejected(t *testing.T) {
	defer gw.Close()

	// Sanity: the authenticated path works before revoke.
	getPayload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{})
	if err != nil {
		t.Fatalf("encode get-account payload: %v", err)
	}
	if _, err := gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, getPayload, testenv.ExecuteOptions{}); err != nil {
		t.Fatalf("pre-revoke call failed: %v", err)
	}

	// Revoke own session through signed gRPC.
	revokePayload, err := transcoder.RevokeMySessionRequestToPayload(&usermodel.RevokeMySessionRequest{
		DeviceSessionID: sess.DeviceSessionID,
	})
	if err != nil {
		t.Fatalf("encode revoke payload: %v", err)
	}
	revokeResult, err := gw.Execute(ctx, usermodel.MessageTypeRevokeMySession, revokePayload, testenv.ExecuteOptions{})
	if err != nil {
		t.Fatalf("revoke: %v", err)
	}
	if revokeResult.ResultCode != "ok" {
		t.Fatalf("revoke result_code = %q, want ok", revokeResult.ResultCode)
	}

	// Authenticated requests must now be rejected. Allow up to 2s
	// for the session-invalidation push frame to propagate to gateway
	// and close any cached state.
	deadline := time.Now().Add(2 * time.Second)
	var lastErr error
	for time.Now().Before(deadline) {
		_, lastErr = gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, getPayload, testenv.ExecuteOptions{})
		if lastErr != nil {
			break
		}
@@ -61,7 +65,98 @@ func TestSessionRevoke_SubsequentRequestsRejected(t *testing.T) {
	if lastErr == nil {
		t.Fatalf("post-revoke call still succeeded; expected rejection")
	}
	// Gateway maps a revoked session to FailedPrecondition ("device
	// session is revoked"); a session that vanished from the cache
	// before the call lands as Unauthenticated. Either is a correct
	// rejection.
	if !testenv.IsFailedPrecondition(lastErr) && !testenv.IsUnauthenticated(lastErr) {
		t.Fatalf("post-revoke status: %v", lastErr)
	}
}

// TestSessionRevoke_RejectsForeignSession checks that a caller cannot
// revoke a session that belongs to a different user. Backend returns
// the same shape as a missing session (no foreign-id probing).
func TestSessionRevoke_RejectsForeignSession(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	owner := testenv.RegisterSession(t, plat, "owner+foreign@example.com")
	attacker := testenv.RegisterSession(t, plat, "attacker+foreign@example.com")

	attackerGW, err := attacker.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial attacker: %v", err)
	}
	defer attackerGW.Close()

	revokePayload, err := transcoder.RevokeMySessionRequestToPayload(&usermodel.RevokeMySessionRequest{
		DeviceSessionID: owner.DeviceSessionID,
	})
	if err != nil {
		t.Fatalf("encode revoke payload: %v", err)
	}
	result, err := attackerGW.Execute(ctx, usermodel.MessageTypeRevokeMySession, revokePayload, testenv.ExecuteOptions{})
	if err != nil {
		t.Fatalf("attacker revoke: %v", err)
	}
	if result.ResultCode == "ok" {
		t.Fatalf("attacker revoke result_code = ok, want a not-found error")
	}
	// Decoded error envelope must carry the not-found code so attackers
	// see the same shape as a genuinely missing session.
	errResp, err := transcoder.PayloadToErrorResponse(result.PayloadBytes)
	if err != nil {
		t.Fatalf("decode error: %v", err)
	}
	// Backend's user-side handlers stamp 404 responses with
	// `httperr.CodeNotFound = "not_found"`; the gateway forwards a
	// non-empty code as-is and only synthesises `subject_not_found`
	// when the upstream payload omits the code field. Both shapes
	// satisfy the "no foreign-id probing" contract — the attacker
	// learns the same thing for a missing session and a session that
	// belongs to someone else.
	if code := errResp.Error.Code; code != "not_found" && code != "subject_not_found" {
		t.Fatalf("error.code = %q, want not_found or subject_not_found", code)
	}
}

// TestSessionRevoke_RevokeAll covers the bulk logout path. Two
// sessions for the same user, then revoke-all, then both sessions
// must reject authenticated traffic.
func TestSessionRevoke_RevokeAll(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
	defer cancel()

	const email = "pilot+revoke-all@example.com"
	first := testenv.RegisterSession(t, plat, email)
	second := testenv.RegisterSession(t, plat, email)

	firstGW, err := first.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial first: %v", err)
	}
	defer firstGW.Close()

	revokeAllPayload, err := transcoder.RevokeAllMySessionsRequestToPayload(&usermodel.RevokeAllMySessionsRequest{})
	if err != nil {
		t.Fatalf("encode revoke-all payload: %v", err)
	}
	result, err := firstGW.Execute(ctx, usermodel.MessageTypeRevokeAllMySessions, revokeAllPayload, testenv.ExecuteOptions{})
	if err != nil {
		t.Fatalf("revoke-all: %v", err)
	}
	if result.ResultCode != "ok" {
		t.Fatalf("revoke-all result_code = %q, want ok", result.ResultCode)
	}

	resp, err := transcoder.PayloadToRevokeAllMySessionsResponse(result.PayloadBytes)
	if err != nil {
		t.Fatalf("decode revoke-all payload: %v", err)
	}
	if resp.Summary.RevokedCount != 2 {
		t.Fatalf("summary.revoked_count = %d, want 2 (sessions: %s, %s)", resp.Summary.RevokedCount, first.DeviceSessionID, second.DeviceSessionID)
	}
}
@@ -70,8 +70,12 @@ func TestSoftDelete_Cascade(t *testing.T) {
	if lastErr == nil {
		t.Fatalf("gateway accepted authenticated call after soft delete; expected rejection")
	}
	// Gateway maps a revoked session to FailedPrecondition ("device
	// session is revoked"); a session that vanished from the cache
	// before the call lands as Unauthenticated. Either is a correct
	// rejection.
	if !testenv.IsFailedPrecondition(lastErr) && !testenv.IsUnauthenticated(lastErr) {
		t.Fatalf("post-delete status: %v", lastErr)
	}

	// Geo cascade: counters for this user should be gone.
@@ -86,6 +86,16 @@ func StartGateway(t *testing.T, opts GatewayOptions) *GatewayContainer {
		// Negative-path edge tests tighten these per-test.
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_REQUESTS": "10000",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_BURST": "1000",
		// Identity-bucket limits sit on top of the class limits and are
		// keyed by the request identity (email for send-email-code,
		// challenge_id for confirm-email-code). The defaults are
		// purposely tight in production (3 sends per email per window);
		// happy-path scenarios that re-issue codes for the same email
		// would otherwise trip the limiter mid-test.
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS": "10000",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST": "1000",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS": "10000",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST": "1000",
		"GATEWAY_AUTHENTICATED_GRPC_ANTI_ABUSE_IP_RATE_LIMIT_REQUESTS": "10000",
		"GATEWAY_AUTHENTICATED_GRPC_ANTI_ABUSE_IP_RATE_LIMIT_BURST": "1000",
		"GATEWAY_AUTHENTICATED_GRPC_ANTI_ABUSE_SESSION_RATE_LIMIT_REQUESTS": "10000",
@@ -61,6 +61,13 @@ func EnsureGameImage(t *testing.T) {
	}
}

// integrationImageLabel is the docker label stamped onto every image
// built from `integration/testenv/images.go`. The pre-clean script
// (`integration/scripts/preclean.sh`) keys off this label to wipe
// stale builds without touching testcontainers-pulled service images
// (postgres, redis, ryuk, mailpit) which we want to keep cached.
const integrationImageLabel = "galaxy.test.kind=integration-image"

func buildImage(tag, dockerfile string) error {
	root, err := workspaceRoot()
	if err != nil {
@@ -72,6 +79,7 @@ func buildImage(tag, dockerfile string) error {
	cmd := exec.CommandContext(ctx, "docker", "build",
		"-t", tag,
		"-f", filepath.Join(root, dockerfile),
		"--label", integrationImageLabel,
		root,
	)
	out, err := cmd.CombinedOutput()
@@ -11,12 +11,19 @@ import (
// StartNetwork creates a user-defined Docker bridge network and
// registers a t.Cleanup to remove it. All platform containers attach
// to the same network so they can resolve each other by alias.
//
// A failure here is fatal, not a skip: the network create path runs
// long after `RequireDocker` has confirmed the daemon is reachable, so
// any error here is a real environment break (subnet exhaustion, a
// half-dead Ryuk reaper, a daemon-side network plugin issue) and
// silently skipping it would mask the rest of the suite as
// "passing" when nothing in fact ran.
func StartNetwork(t *testing.T) *testcontainers.DockerNetwork {
	t.Helper()
	ctx := context.Background()
	net, err := tcnetwork.New(ctx)
	if err != nil {
		t.Fatalf("create docker network: %v", err)
	}
	t.Cleanup(func() {
		if err := net.Remove(ctx); err != nil {
@@ -2,8 +2,50 @@ package order

import (
	"encoding/json"

	"github.com/google/uuid"
)

// MessageTypeUserGamesCommand is the authenticated gateway message type
// used to send a batch of in-game commands to the engine through
// `POST /api/v1/user/games/{game_id}/commands`. The signed payload is
// a FlatBuffers `order.UserGamesCommand`.
const MessageTypeUserGamesCommand = "user.games.command"

// MessageTypeUserGamesOrder is the authenticated gateway message type
// used to validate / store a batch of in-game orders through
// `POST /api/v1/user/games/{game_id}/orders`. The signed payload is a
// FlatBuffers `order.UserGamesOrder`.
const MessageTypeUserGamesOrder = "user.games.order"

// UserGamesCommand is the typed payload of MessageTypeUserGamesCommand.
// `GameID` selects the running engine container; `Commands` is the
// player command batch executed atomically by the engine. The `Actor`
// field present in the engine's JSON shape is rebuilt by backend from
// the runtime player mapping — clients never carry it.
type UserGamesCommand struct {
	// GameID identifies the running game for this batch.
	GameID uuid.UUID `json:"game_id"`

	// Commands is the player command batch.
	Commands []DecodableCommand `json:"cmd"`
}

// UserGamesOrder is the typed payload of MessageTypeUserGamesOrder.
// Mirrors `UserGamesCommand` plus an `UpdatedAt` field that lets the
// engine reject stale order submissions.
type UserGamesOrder struct {
	// GameID identifies the running game for this batch.
	GameID uuid.UUID `json:"game_id"`

	// UpdatedAt is the client-side timestamp used for stale-order
	// detection on the engine side.
	UpdatedAt int `json:"updatedAt"`

	// Commands is the player order batch.
	Commands []DecodableCommand `json:"cmd"`
}

type Order struct {
	// TODO: check with already stored order, if any, and generate an error, if newer order exists
	UpdatedAt int `json:"updatedAt"`
@@ -0,0 +1,22 @@
package report

import "github.com/google/uuid"

// MessageTypeUserGamesReport is the authenticated gateway message type
// used to fetch a per-player turn report through
// `GET /api/v1/user/games/{game_id}/reports/{turn}`. The signed payload
// is a FlatBuffers `GameReportRequest`; the response is a FlatBuffers
// `Report`.
const MessageTypeUserGamesReport = "user.games.report"

// GameReportRequest is the typed payload of MessageTypeUserGamesReport.
// `GameID` selects the target game (the message_type alone is not
// enough; this scope is per-game) and `Turn` selects the requested
// turn number. Both fields are required.
type GameReportRequest struct {
	// GameID identifies the game whose report is fetched.
	GameID uuid.UUID `json:"game_id"`

	// Turn is the zero-based turn number whose report is requested.
	Turn uint `json:"turn"`
}
@@ -16,6 +16,19 @@ const (
	// MessageTypeUpdateMySettings is the authenticated gateway message type used
	// to mutate self-service settings fields.
	MessageTypeUpdateMySettings = "user.settings.update"

	// MessageTypeListMySessions is the authenticated gateway message type used
	// to read the caller's active device sessions.
	MessageTypeListMySessions = "user.sessions.list"

	// MessageTypeRevokeMySession is the authenticated gateway message type used
	// to revoke one of the caller's device sessions.
	MessageTypeRevokeMySession = "user.sessions.revoke"

	// MessageTypeRevokeAllMySessions is the authenticated gateway message type
	// used to revoke every device session belonging to the caller (logout
	// everywhere).
	MessageTypeRevokeAllMySessions = "user.sessions.revoke_all"
)

// GetMyAccountRequest stores the authenticated self-service read request for
@@ -198,3 +211,78 @@ type ErrorResponse struct {
	// Error stores the mirrored error envelope body.
	Error ErrorBody `json:"error"`
}

// DeviceSession stores the transport-ready snapshot of one device session
// served by the authenticated user-surface session endpoints.
type DeviceSession struct {
	// DeviceSessionID stores the durable device-session identifier.
	DeviceSessionID string `json:"device_session_id"`

	// UserID stores the authenticated user identity bound to the session.
	UserID string `json:"user_id"`

	// Status stores the lifecycle state of the session
	// (`active` or `revoked`).
	Status string `json:"status"`

	// ClientPublicKey stores the standard base64-encoded raw 32-byte
	// Ed25519 client public key, when populated.
	ClientPublicKey string `json:"client_public_key,omitempty"`

	// CreatedAt stores when the session was created.
	CreatedAt time.Time `json:"created_at"`

	// RevokedAt stores when the session was revoked, if revoked.
	RevokedAt *time.Time `json:"revoked_at,omitempty"`

	// LastSeenAt stores when gateway last resolved this session.
	LastSeenAt *time.Time `json:"last_seen_at,omitempty"`
}

// ListMySessionsRequest stores the authenticated self-service "list my
// active sessions" command. The body is intentionally empty.
type ListMySessionsRequest struct{}

// ListMySessionsResponse stores the success payload of MessageTypeListMySessions.
type ListMySessionsResponse struct {
	// Items stores the caller's currently active device sessions.
	Items []DeviceSession `json:"items"`
}

// RevokeMySessionRequest stores the authenticated self-service single
// session revocation request.
type RevokeMySessionRequest struct {
	// DeviceSessionID identifies the device session to revoke. The
	// session must belong to the caller; otherwise the response carries
	// the same error shape as a missing session so foreign session ids
	// cannot be probed.
	DeviceSessionID string `json:"device_session_id"`
}

// RevokeMySessionResponse stores the success payload of
// MessageTypeRevokeMySession.
type RevokeMySessionResponse struct {
	// Session stores the post-revoke snapshot of the affected session.
	Session DeviceSession `json:"session"`
}

// RevokeAllMySessionsRequest stores the authenticated self-service
// "logout everywhere" command. The body is intentionally empty.
type RevokeAllMySessionsRequest struct{}

// DeviceSessionRevocationSummary stores the count of sessions revoked by a
// bulk operation.
type DeviceSessionRevocationSummary struct {
	// UserID identifies the user whose sessions were affected.
	UserID string `json:"user_id"`

	// RevokedCount stores how many sessions transitioned to revoked.
	RevokedCount int `json:"revoked_count"`
}

// RevokeAllMySessionsResponse stores the success payload of
// MessageTypeRevokeAllMySessions.
type RevokeAllMySessionsResponse struct {
	// Summary stores the user_id and revoked_count snapshot.
	Summary DeviceSessionRevocationSummary `json:"summary"`
}
@@ -0,0 +1,14 @@
// common contains FlatBuffers types shared across multiple schemas
// (order, report, …). Files that need these types include this one
// via `include "common.fbs";` and reference them through the `common.`
// namespace.
namespace common;

// UUID is a 128-bit RFC 4122 identifier encoded as two big-endian
// uint64 halves (`hi` carries bytes 0..7, `lo` carries bytes 8..15).
// Transcoders use the helpers in `pkg/transcoder/uuid.go` to convert
// between this layout and `github.com/google/uuid.UUID`.
struct UUID {
  hi:uint64;
  lo:uint64;
}
|
||||||
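The hi/lo encoding the schema comment describes can be sketched with the standard library alone. This is an illustration of the byte layout, not the actual helpers in `pkg/transcoder/uuid.go`; it uses a plain `[16]byte` in place of `github.com/google/uuid.UUID`.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// splitUUID maps bytes 0..7 to hi and bytes 8..15 to lo, both
// big-endian, matching the struct layout in common.fbs.
func splitUUID(u [16]byte) (hi, lo uint64) {
	hi = binary.BigEndian.Uint64(u[0:8])
	lo = binary.BigEndian.Uint64(u[8:16])
	return
}

// joinUUID is the inverse: it writes hi into bytes 0..7 and lo into
// bytes 8..15.
func joinUUID(hi, lo uint64) (u [16]byte) {
	binary.BigEndian.PutUint64(u[0:8], hi)
	binary.BigEndian.PutUint64(u[8:16], lo)
	return
}

func main() {
	var u [16]byte
	for i := range u {
		u[i] = byte(i) // 00 01 02 … 0f
	}
	hi, lo := splitUUID(u)
	fmt.Printf("%016x %016x\n", hi, lo) // 0001020304050607 08090a0b0c0d0e0f
	fmt.Println(joinUUID(hi, lo) == u)  // true: the mapping round-trips
}
```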
@@ -1,6 +1,6 @@
 // Code generated by the FlatBuffers compiler. DO NOT EDIT.
 
-package report
+package common
 
 import (
 	flatbuffers "github.com/google/flatbuffers/go"
@@ -1,54 +1,67 @@
|
|||||||
// notification contains shared FlatBuffers payloads published by
|
// notification contains shared FlatBuffers payloads published by
|
||||||
// Notification Service toward the gateway client event stream.
|
// Notification Service toward the gateway client event stream. Each
|
||||||
|
// table mirrors one catalog kind defined in
|
||||||
|
// `backend/internal/notification/catalog.go`; the table name is the
|
||||||
|
// camel-case form of the kind with the `Event` suffix.
|
||||||
|
|
||||||
|
include "common.fbs";
|
||||||
|
|
||||||
namespace notification;
|
namespace notification;
|
||||||
|
|
||||||
table GameTurnReadyEvent {
|
table LobbyInviteReceivedEvent {
|
||||||
game_id:string;
|
game_id:common.UUID (required);
|
||||||
turn_number:int64;
|
inviter_user_id:common.UUID (required);
|
||||||
}
|
}
|
||||||
|
|
||||||
table GameFinishedEvent {
|
table LobbyInviteRevokedEvent {
|
||||||
game_id:string;
|
game_id:common.UUID (required);
|
||||||
final_turn_number:int64;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
table LobbyApplicationSubmittedEvent {
|
table LobbyApplicationSubmittedEvent {
|
||||||
game_id:string;
|
game_id:common.UUID (required);
|
||||||
applicant_user_id:string;
|
application_id:common.UUID (required);
|
||||||
}
|
}
|
||||||
|
|
||||||
table LobbyMembershipApprovedEvent {
|
table LobbyApplicationApprovedEvent {
|
||||||
game_id:string;
|
game_id:common.UUID (required);
|
||||||
}
|
}
|
||||||
|
|
||||||
table LobbyMembershipRejectedEvent {
|
table LobbyApplicationRejectedEvent {
|
||||||
game_id:string;
|
game_id:common.UUID (required);
|
||||||
}
|
}
|
||||||
|
|
||||||
table LobbyMembershipBlockedEvent {
|
table LobbyMembershipRemovedEvent {
|
||||||
game_id:string;
|
|
||||||
membership_user_id:string;
|
|
||||||
reason:string;
|
reason:string;
|
||||||
}
|
}
|
||||||
|
|
||||||
table LobbyInviteCreatedEvent {
|
table LobbyMembershipBlockedEvent {
|
||||||
game_id:string;
|
game_id:common.UUID (required);
|
||||||
inviter_user_id:string;
|
reason:string;
|
||||||
}
|
|
||||||
|
|
||||||
table LobbyInviteRedeemedEvent {
|
|
||||||
game_id:string;
|
|
||||||
invitee_user_id:string;
|
|
||||||
}
|
|
||||||
|
|
||||||
table LobbyRaceNameRegistrationEligibleEvent {
|
|
||||||
game_id:string;
|
|
||||||
race_name:string;
|
|
||||||
eligible_until_ms:int64;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
table LobbyRaceNameRegisteredEvent {
|
table LobbyRaceNameRegisteredEvent {
|
||||||
race_name:string;
|
race_name:string;
|
||||||
}
|
}
|
||||||
|
|
||||||
root_type GameTurnReadyEvent;
|
table LobbyRaceNamePendingEvent {
|
||||||
|
race_name:string;
|
||||||
|
expires_at:string;
|
||||||
|
}
|
||||||
|
|
||||||
|
table LobbyRaceNameExpiredEvent {
|
||||||
|
race_name:string;
|
||||||
|
}
|
||||||
|
|
||||||
|
table RuntimeImagePullFailedEvent {
|
||||||
|
game_id:common.UUID (required);
|
||||||
|
image_ref:string;
|
||||||
|
}
|
||||||
|
|
||||||
|
table RuntimeContainerStartFailedEvent {
|
||||||
|
game_id:common.UUID (required);
|
||||||
|
}
|
||||||
|
|
||||||
|
table RuntimeStartConfigInvalidEvent {
|
||||||
|
game_id:common.UUID (required);
|
||||||
|
reason:string;
|
||||||
|
}
|
||||||
|
Some files were not shown because too many files have changed in this diff.