feat: backend service

@@ -1,191 +1,67 @@

# Integration Tests
`integration` owns only true inter-service black-box tests.

Each suite must raise real service processes, speak only over public HTTP/gRPC/Redis contracts, and avoid imports from the `internal/...` packages of the services under test.

This is the end-to-end test suite for the Galaxy platform. The suite drives `gateway` from outside and verifies behaviour at the public boundary while `backend` and `galaxy/game` run as Docker containers managed by the test process via `testcontainers-go`.
## Prerequisites

- A reachable Docker daemon (`DOCKER_HOST` or the local socket).
- A Go toolchain matching the workspace `go.work` directive.
- Network access for the first run (the `postgres:16-alpine`, `axllent/mailpit`, and `redis:7-alpine` images are pulled). Subsequent runs reuse the local image cache.
## Run

```bash
go test ./integration/...
```

The suite builds three Docker images on demand from the workspace sources:
- `galaxy/backend:integration` (`backend/Dockerfile`)
- `galaxy/gateway:integration` (`gateway/Dockerfile`)
- `galaxy/game:integration` (`game/Dockerfile`)

Each image is built once per `go test` invocation, guarded by a `sync.Once` inside `testenv`. The first cold run is slow (~2–3 min on a developer machine); subsequent runs reuse the layer cache.
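The once-per-invocation build guard can be sketched with the standard library alone. This is a minimal illustration of the `sync.Once` pattern described above, not the actual `testenv` code; `buildResult` and `BackendImage` are hypothetical names.

```go
package main

import (
	"fmt"
	"sync"
)

// buildResult caches the outcome of a single image build so every suite
// in the same `go test` invocation reuses it. (Hypothetical sketch, not
// the real testenv helper.)
type buildResult struct {
	once sync.Once
	tag  string
	err  error
}

var backendImage buildResult

// BackendImage runs build at most once per test binary and returns the
// cached image tag (or the cached build error) on every later call.
func BackendImage(build func() (string, error)) (string, error) {
	backendImage.once.Do(func() {
		backendImage.tag, backendImage.err = build()
	})
	return backendImage.tag, backendImage.err
}

func main() {
	builds := 0
	build := func() (string, error) {
		builds++ // the expensive `docker build` would happen here
		return "galaxy/backend:integration", nil
	}
	for i := 0; i < 3; i++ {
		tag, _ := BackendImage(build)
		fmt.Println(tag)
	}
	fmt.Println("builds:", builds)
}
```

Cold runs pay the full image build once; every suite afterwards gets the cached tag immediately, and a cached build error fails every caller the same way.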
## Skipping

Tests skip with a clear message when the Docker daemon is unreachable. Subsuites that require a live engine container (`lobby_flow_test.go`) also skip when the `galaxy/game` image cannot be built.
## Layout

```text
integration/
├── README.md
├── authsessionmail/
│   ├── authsession_mail_test.go
│   └── harness_test.go
├── gatewayauthsessionmail/
│   ├── gateway_authsession_mail_test.go
│   └── harness_test.go
├── gatewayauthsessionusermail/
│   └── gateway_authsession_user_mail_test.go
├── authsessionuser/
│   ├── authsession_user_test.go
│   └── harness_test.go
├── gatewayauthsession/
│   ├── harness_test.go
│   └── gateway_authsession_test.go
├── gatewayauthsessionuser/
│   ├── gateway_authsession_user_test.go
│   └── harness_test.go
├── gatewayuser/
│   ├── gateway_user_test.go
│   └── harness_test.go
├── notificationgateway/
│   └── notification_gateway_test.go
├── notificationmail/
│   └── notification_mail_test.go
├── notificationuser/
│   └── notification_user_test.go
├── lobbyuser/
│   └── lobby_user_test.go
├── lobbynotification/
│   ├── lobby_notification_test.go
│   └── race_name_intents_test.go
├── lobbyrtm/
│   ├── harness_test.go
│   └── lobby_rtm_test.go
├── go.mod
├── go.sum
└── internal/
    ├── contracts/
    │   ├── gatewayv1/
    │   │   └── contract.go
    │   └── userv1/
    │       └── contract.go
    └── harness/
        ├── binary.go
        ├── dockernetwork.go
        ├── engineimage.go
        ├── keys.go
        ├── mail_stub.go
        ├── process.go
        ├── redis_container.go
        ├── rtmanagerservice.go
        ├── smtp_capture.go
        └── user_stub.go
```
- `testenv/` — fixtures: Postgres, Redis, mailpit, GeoLite2 mmdb, image builders, backend/gateway runners, a signed gRPC client (built on top of the public `galaxy/gateway/authn` package, with no duplicated canonical-bytes code), a mailpit HTTP client, the `EnrollPilots` helper for runtime-driven scenarios that need ≥10 members, and platform bootstrap.
- `*_test.go` — one file per cross-service scenario.
## Rules

- Keep suites black-box. Do not import `galaxy/gateway/internal/...`, `galaxy/authsession/internal/...`, or any other service-owned internal package.
- Start real binaries from `cmd/...` and talk to them only through their published HTTP, gRPC, and Redis contracts.
- Put boundary-specific orchestration and assertions into the owning suite package, not into shared helpers.
- Put only generic process/runtime utilities into `internal/harness`.
- Put only public-contract helpers into `internal/contracts/...`.

The runtime-driven tests (`runtime_lifecycle_test.go`, `engine_command_proxy_test.go`) honour the engine's production contract `len(races) >= 10`: each registers ten extra pilots with synthetic `Player01..Player10` race names and matching emails, has the owner invite each one, and has each pilot redeem the invite before the admin force-start. Cold runs add ~30 s for the ten extra mailpit round-trips on top of the engine image build.
## Current Boundary Suites
- `gatewayauthsession` verifies the integration boundary between real `Edge Gateway` and real `Auth / Session Service`.
- `authsessionuser` verifies the integration boundary between real `Auth / Session Service` and real `User Service`.
- `authsessionmail` verifies the integration boundary between real `Auth / Session Service` and real `Mail Service`.
- `gatewayauthsessionmail` verifies the public auth flow across real `Edge Gateway`, real `Auth / Session Service`, and real `Mail Service`.
- `gatewayuser` verifies the direct authenticated self-service boundary between real `Edge Gateway` and real `User Service`.
- `gatewayauthsessionuser` verifies the full public-auth plus authenticated-account chain across real `Edge Gateway`, real `Auth / Session Service`, and real `User Service`.
- `notificationgateway` verifies that real `Notification Service` push publication is consumed and fanned out by real `Edge Gateway` for all user-facing push types.
- `notificationmail` verifies that real `Notification Service` template-mode mail publication is consumed by real `Mail Service` for all notification email types.
- `notificationuser` verifies that real `Notification Service` enriches recipients through real `User Service` and preserves Redis stream progress semantics for missing or temporarily unavailable users.
- `gatewayauthsessionusermail` verifies the full public registration chain across real `Edge Gateway`, real `Auth / Session Service`, real `User Service`, and real `Mail Service`, including the regression that auth-code mail bypasses `notification:intents`.
- `lobbyuser` verifies the synchronous eligibility boundary between real `Game Lobby` and real `User Service`, including the happy path, `permanent_block` rejection, unknown user, and transient User Service unavailability.
- `lobbynotification` verifies the producer side of `Game Lobby → notification:intents`, covering all eleven `lobby.*` intent types from applications, invites, member operations, runtime pause, cascade membership block, and the three race-name intents emitted by capability evaluation at game finish and by self-service registration.
- `lobbyrtm` verifies the asynchronous boundary between real `Game Lobby` and real `Runtime Manager` end-to-end against a real Docker daemon: start_job → engine container → success job_result → game `running`; cascade-blocked owner → stop_job(cancelled) → engine stopped; missing image → failure job_result + admin notification intent → game `start_failed`. Skips automatically on hosts without Docker.
The current fast suites still use one isolated `miniredis` instance plus either real downstream processes or external stateful HTTP stubs where appropriate. `authsessionmail`, `gatewayauthsessionmail`, `notificationgateway`, `notificationmail`, `notificationuser`, `gatewayauthsessionusermail`, `lobbyuser`, `lobbynotification`, and `lobbyrtm` are the deliberate exceptions: they use one real Redis container through `testcontainers-go`, because those boundaries must exercise real Redis stream, persistence, or scheduling behavior. `lobbyrtm` additionally needs a real Docker daemon and the `galaxy/game` engine image.

`authsessionmail` additionally contains one targeted SMTP-capture scenario for the real `smtp` provider path, while `gatewayauthsessionmail` keeps `Mail Service` in `stub` mode and extracts the confirmation code through the trusted operator delivery surface.
## Running

Run from the module directory:

```bash
cd integration
go test ./gatewayauthsession/...
go test ./authsessionuser/...
go test ./authsessionmail/...
go test ./gatewayauthsessionmail/...
go test ./gatewayuser/...
go test ./gatewayauthsessionuser/...
go test ./notificationgateway/...
go test ./notificationmail/...
go test ./notificationuser/...
go test ./gatewayauthsessionusermail/...
go test ./lobbyuser/...
go test ./lobbynotification/...
go test ./lobbyrtm/...
```
Useful regression commands after boundary changes:

```bash
go test ./gatewayauthsession/...
go test ./authsessionuser/...
go test ./authsessionmail/...
go test ./gatewayauthsessionmail/...
go test ./gatewayuser/...
go test ./gatewayauthsessionuser/...
go test ./notificationgateway/...
go test ./notificationmail/...
go test ./notificationuser/...
go test ./gatewayauthsessionusermail/...
go test ./lobbyuser/...
go test ./lobbynotification/...
go test ./lobbyrtm/...
cd ../gateway && go test ./...
cd ../authsession && go test ./... -run GatewayCompatibility
cd ../user && go test ./...
```

Do not use `go test ./...` from the repository root. The repository is organized through `go.work`, so verification should stay module-scoped.
## Adding A New Boundary Suite

1. Create `integration/<boundary>/` for the new inter-service boundary.
2. Keep suite-local fixtures, scenario helpers, and assertion helpers inside that package.
3. Reuse `internal/harness` only for generic concerns such as binary build/run, ports, keys, Redis, and shared external stubs.
4. Add new helpers to `internal/contracts/<contract>/` only when they describe a reusable public wire contract.
5. Prefer fast deterministic infrastructure by default: in-memory test doubles, `httptest` stubs, and `miniredis`.
## Real Redis Suites

Fast suites stay on `miniredis` by default. When one boundary explicitly needs real Redis semantics, prefer a package-local container setup through `testcontainers-go` plus reusable helpers in `internal/harness`, as done by `authsessionmail` and `gatewayauthsessionmail`.

Current rule of thumb:

- use `miniredis` when the boundary does not depend on Redis persistence or scheduling behavior
- use `testcontainers-go` only when the real Redis process materially changes the behavior being verified

Determinism notes:

- Each test calls `Bootstrap(t)` to spin up a dedicated Postgres, Redis, mailpit, backend, and gateway, so cross-test contamination is not possible.
- Tests do not call `t.Parallel()`: Docker resource pressure makes parallel suites flaky on commodity hardware.
- Gateway anti-abuse and body-size limits are loosened for the bulk of scenarios (so legitimate flows are not rate-limited mid-test) and intentionally tightened in `gateway_edge_test.go` so each protective mechanism can be observed firing.
@@ -0,0 +1,54 @@

package integration_test

import (
	"context"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/testenv"
)

// TestAdminEngineVersionsCRUD covers the engine-version registry: a
// single admin creates, updates, and disables a version. A caller with
// wrong credentials is rejected (Basic Auth required).
func TestAdminEngineVersionsCRUD(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	admin := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, plat.Backend.AdminUser, plat.Backend.AdminPassword)
	raw, resp, err := admin.Do(ctx, http.MethodPost, "/api/v1/admin/engine-versions", map[string]any{
		"version":   "v1.0.0",
		"image_ref": "galaxy/game:integration",
		"enabled":   true,
	})
	if err != nil || resp.StatusCode/100 != 2 {
		t.Fatalf("create version: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}

	// Update image_ref + enabled flag.
	raw, resp, err = admin.Do(ctx, http.MethodPatch, "/api/v1/admin/engine-versions/v1.0.0", map[string]any{
		"image_ref": "galaxy/game:integration",
		"enabled":   false,
	})
	if err != nil || resp.StatusCode != http.StatusOK {
		t.Fatalf("update version: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}

	// Disable explicitly through the dedicated endpoint.
	raw, resp, err = admin.Do(ctx, http.MethodPost, "/api/v1/admin/engine-versions/v1.0.0/disable", nil)
	if err != nil || resp.StatusCode != http.StatusOK {
		t.Fatalf("disable version: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}

	// Wrong Basic Auth credentials must not have access to this admin endpoint.
	noAuth := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, "wrong", "wrong")
	raw, resp, err = noAuth.Do(ctx, http.MethodGet, "/api/v1/admin/engine-versions", nil)
	if err != nil {
		t.Fatalf("unauth call: %v", err)
	}
	if resp.StatusCode != http.StatusUnauthorized {
		t.Fatalf("unauth status = %d body=%s, want 401", resp.StatusCode, string(raw))
	}
}
@@ -0,0 +1,55 @@

package integration_test

import (
	"context"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/testenv"
)

// TestAdminFlow_BootstrapAndCRUD verifies that the bootstrap admin
// account can authenticate against backend's admin surface, create a
// second admin, and that the second admin can disable the first.
func TestAdminFlow_BootstrapAndCRUD(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	bootstrap := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, plat.Backend.AdminUser, plat.Backend.AdminPassword)

	// Create a second admin account.
	body := map[string]any{
		"username": "secondary",
		"password": "secondary-secret-pw",
	}
	raw, resp, err := bootstrap.Do(ctx, http.MethodPost, "/api/v1/admin/admin-accounts", body)
	if err != nil {
		t.Fatalf("create admin: %v", err)
	}
	if resp.StatusCode != http.StatusCreated && resp.StatusCode != http.StatusOK {
		t.Fatalf("create admin: status %d body=%s", resp.StatusCode, string(raw))
	}

	// Switch to the secondary admin and disable the bootstrap admin.
	secondary := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, "secondary", "secondary-secret-pw")
	raw, resp, err = secondary.Do(ctx, http.MethodPost, "/api/v1/admin/admin-accounts/"+plat.Backend.AdminUser+"/disable", nil)
	if err != nil {
		t.Fatalf("disable bootstrap: %v", err)
	}
	if resp.StatusCode/100 != 2 {
		t.Fatalf("disable bootstrap: status %d body=%s", resp.StatusCode, string(raw))
	}

	// The bootstrap admin should now be unauthorised on every endpoint.
	raw, resp, err = bootstrap.Do(ctx, http.MethodGet, "/api/v1/admin/admin-accounts", nil)
	if err != nil {
		t.Fatalf("bootstrap after disable: %v", err)
	}
	if resp.StatusCode != http.StatusUnauthorized {
		t.Fatalf("bootstrap should be unauthorized after disable: status %d body=%s", resp.StatusCode, string(raw))
	}
}
@@ -0,0 +1,129 @@

package integration_test

import (
	"context"
	"encoding/json"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/testenv"
)

// TestAdminGlobalGamesView verifies the visibility split: admin sees
// every game (public + private, regardless of owner); a regular user
// querying their own listing sees only the games they own or
// participate in.
func TestAdminGlobalGamesView(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	admin := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, plat.Backend.AdminUser, plat.Backend.AdminPassword)
	if _, resp, err := admin.Do(ctx, http.MethodPost, "/api/v1/admin/engine-versions", map[string]any{
		"version": "v1.0.0", "image_ref": "galaxy/game:integration", "enabled": true,
	}); err != nil || resp.StatusCode/100 != 2 {
		t.Fatalf("seed engine_version: err=%v resp=%v", err, resp)
	}

	// Admin creates a public game.
	publicBody := map[string]any{
		"game_name":             "Public Cup",
		"min_players":           2,
		"max_players":           4,
		"start_gap_hours":       1,
		"start_gap_players":     2,
		"enrollment_ends_at":    time.Now().Add(24 * time.Hour).UTC().Format(time.RFC3339),
		"turn_schedule":         "0 * * * *",
		"target_engine_version": "v1.0.0",
	}
	raw, resp, err := admin.Do(ctx, http.MethodPost, "/api/v1/admin/games", publicBody)
	if err != nil || resp.StatusCode != http.StatusCreated {
		t.Fatalf("admin create public: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}
	var publicGame struct {
		GameID string `json:"game_id"`
	}
	if err := json.Unmarshal(raw, &publicGame); err != nil {
		t.Fatalf("decode public: %v", err)
	}

	// Two users; user A creates a private game.
	a := testenv.RegisterSession(t, plat, "ownerA@example.com")
	b := testenv.RegisterSession(t, plat, "ownerB@example.com")
	aID, err := a.LookupUserID(ctx, plat)
	if err != nil {
		t.Fatalf("resolve A: %v", err)
	}
	bID, err := b.LookupUserID(ctx, plat)
	if err != nil {
		t.Fatalf("resolve B: %v", err)
	}
	aHTTP := testenv.NewBackendUserClient(plat.Backend.HTTPURL, aID)
	bHTTP := testenv.NewBackendUserClient(plat.Backend.HTTPURL, bID)

	privateBody := map[string]any{
		"game_name":             "Private Run",
		"visibility":            "private",
		"min_players":           2,
		"max_players":           4,
		"start_gap_hours":       1,
		"start_gap_players":     2,
		"enrollment_ends_at":    time.Now().Add(24 * time.Hour).UTC().Format(time.RFC3339),
		"turn_schedule":         "0 * * * *",
		"target_engine_version": "v1.0.0",
	}
	raw, resp, err = aHTTP.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games", privateBody)
	if err != nil || resp.StatusCode != http.StatusCreated {
		t.Fatalf("user create private: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}
	var privateGame struct {
		GameID string `json:"game_id"`
	}
	if err := json.Unmarshal(raw, &privateGame); err != nil {
		t.Fatalf("decode private: %v", err)
	}

	// User B can see the public game but NOT user A's private one.
	raw, resp, err = bHTTP.Do(ctx, http.MethodGet, "/api/v1/user/lobby/games?page=1&page_size=20", nil)
	if err != nil || resp.StatusCode != http.StatusOK {
		t.Fatalf("user B list: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}
	var bList struct {
		Items []struct {
			GameID string `json:"game_id"`
		} `json:"items"`
	}
	if err := json.Unmarshal(raw, &bList); err != nil {
		t.Fatalf("decode user B list: %v", err)
	}
	bSeesPublic, bSeesPrivate := false, false
	for _, g := range bList.Items {
		if g.GameID == publicGame.GameID {
			bSeesPublic = true
		}
		if g.GameID == privateGame.GameID {
			bSeesPrivate = true
		}
	}
	if !bSeesPublic {
		t.Fatalf("user B did not see the public game")
	}
	if bSeesPrivate {
		t.Fatalf("user B saw user A's private game in the public listing")
	}

	// Admin sees every game.
	raw, resp, err = admin.Do(ctx, http.MethodGet, "/api/v1/admin/games?page=1&page_size=20", nil)
	if err != nil || resp.StatusCode != http.StatusOK {
		t.Fatalf("admin list games: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}
	var adminList struct {
		Items []struct {
			GameID string `json:"game_id"`
		} `json:"items"`
	}
	if err := json.Unmarshal(raw, &adminList); err != nil {
		t.Fatalf("decode admin list: %v", err)
	}
	sawPublic, sawPrivate := false, false
	for _, g := range adminList.Items {
		if g.GameID == publicGame.GameID {
			sawPublic = true
		}
		if g.GameID == privateGame.GameID {
			sawPrivate = true
		}
	}
	if !sawPublic || !sawPrivate {
		t.Fatalf("admin list missing entries: public=%v private=%v items=%+v", sawPublic, sawPrivate, adminList.Items)
	}
}
@@ -0,0 +1,83 @@

package integration_test

import (
	"context"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/testenv"
	usermodel "galaxy/model/user"
	"galaxy/transcoder"
)

// TestAdminUserSanctionPermanentBlock verifies that applying the
// `permanent_block` sanction through the admin endpoint cascades:
//   - the user's active session is revoked (subsequent gateway calls
//     fail Unauthenticated);
//   - send-email-code on the same email is rejected with the
//     standard error envelope.
func TestAdminUserSanctionPermanentBlock(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
	defer cancel()

	const email = "pilot+sanction@example.com"
	sess := testenv.RegisterSession(t, plat, email)
	gw, err := sess.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial: %v", err)
	}
	defer gw.Close()

	// Sanity: signed call works pre-sanction.
	payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{})
	if err != nil {
		t.Fatalf("encode payload: %v", err)
	}
	if _, err := gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, payload, testenv.ExecuteOptions{}); err != nil {
		t.Fatalf("pre-sanction: %v", err)
	}

	userID, err := sess.LookupUserID(ctx, plat)
	if err != nil {
		t.Fatalf("resolve user_id: %v", err)
	}

	// Admin applies permanent_block.
	admin := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, plat.Backend.AdminUser, plat.Backend.AdminPassword)
	body := map[string]any{
		"sanction_code": "permanent_block",
		"scope":         "global",
		"reason_code":   "tos_violation",
		"actor":         map[string]any{"type": "admin", "id": plat.Backend.AdminUser},
	}
	raw, resp, err := admin.Do(ctx, http.MethodPost, "/api/v1/admin/users/"+userID+"/sanctions", body)
	if err != nil || resp.StatusCode/100 != 2 {
		t.Fatalf("apply sanction: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}

	// Subsequent authenticated calls must fail (allow a short propagation window).
	deadline := time.Now().Add(2 * time.Second)
	var lastErr error
	for time.Now().Before(deadline) {
		_, lastErr = gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, payload, testenv.ExecuteOptions{})
		if lastErr != nil {
			break
		}
		time.Sleep(100 * time.Millisecond)
	}
	if lastErr == nil {
		t.Fatalf("authenticated call succeeded after permanent_block")
	}
	if !testenv.IsUnauthenticated(lastErr) {
		t.Fatalf("post-sanction status: %v", lastErr)
	}

	// A new send-email-code on the same email must be rejected.
	public := testenv.NewPublicRESTClient(plat.Gateway.HTTPURL)
	_, _, err = public.SendEmailCode(ctx, email, "")
	if err == nil {
		t.Fatalf("send-email-code accepted for permanently blocked email")
	}
}
@@ -0,0 +1,59 @@

package integration_test

import (
	"context"
	"testing"
	"time"

	"galaxy/integration/testenv"
	usermodel "galaxy/model/user"
	"galaxy/transcoder"

	"github.com/google/uuid"
)

// TestAntiReplay_DuplicateRequestID submits the same authenticated
// request_id twice within the freshness window and asserts that the
// second attempt is rejected by gateway as a replay (Redis
// reservation check).
func TestAntiReplay_DuplicateRequestID(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	sess := testenv.RegisterSession(t, plat, "pilot+replay@example.com")
	gw, err := sess.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial: %v", err)
	}
	defer gw.Close()

	payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{})
	if err != nil {
		t.Fatalf("encode payload: %v", err)
	}
	requestID := uuid.NewString()
	timestamp := time.Now().UnixMilli()

	first, err := gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, payload, testenv.ExecuteOptions{
		RequestID:   requestID,
		TimestampMS: timestamp,
	})
	if err != nil {
		t.Fatalf("first call failed: %v", err)
	}
	if first.ResultCode != "ok" {
		t.Fatalf("first call result_code = %q, want ok", first.ResultCode)
	}

	_, err = gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, payload, testenv.ExecuteOptions{
		RequestID:   requestID,
		TimestampMS: timestamp,
	})
	if err == nil {
		t.Fatalf("replay accepted: expected rejection on duplicate request_id")
	}
	if !testenv.IsFailedPrecondition(err) && !testenv.IsResourceExhausted(err) && !testenv.IsUnauthenticated(err) && !testenv.IsInvalidArgument(err) {
		t.Fatalf("replay rejection has unexpected status: %v", err)
	}
}
@@ -0,0 +1,25 @@

package integration_test

import (
	"testing"

	"galaxy/integration/testenv"
)

// TestAuthFlow_SendConfirm exercises registration end-to-end: the
// gateway public REST surface accepts `send-email-code`, the backend
// queues an outbox row, the mailpit container captures the SMTP
// delivery, the test extracts the verification code, then the same
// public REST surface accepts `confirm-email-code` and returns a
// device_session_id. The shared testenv.RegisterSession helper
// performs the same flow for downstream tests.
func TestAuthFlow_SendConfirm(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	sess := testenv.RegisterSession(t, plat, "pilot@example.com")
	if sess.DeviceSessionID == "" {
		t.Fatalf("device_session_id not populated")
	}
	if len(sess.Private) == 0 {
		t.Fatalf("private key not populated")
	}
}
@@ -1,110 +0,0 @@

package authsessionmail_test

import (
	"net/url"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

func TestAuthsessionMailBlackBoxSendEmailCodeCreatesSuppressedDelivery(t *testing.T) {
	h := newAuthsessionMailHarness(t, authsessionMailHarnessOptions{})
	email := "pilot@example.com"

	response := h.sendChallengeWithAcceptLanguage(t, email, "fr-FR, en;q=0.8")
	require.NotEmpty(t, response.ChallengeID)

	list := h.eventuallyListDeliveries(t, url.Values{
		"source":      []string{"authsession"},
		"status":      []string{"suppressed"},
		"recipient":   []string{email},
		"template_id": []string{"auth.login_code"},
	})
	require.Len(t, list.Items, 1)
	require.Equal(t, "authsession", list.Items[0].Source)
	require.Equal(t, "suppressed", list.Items[0].Status)
	require.Equal(t, "auth.login_code", list.Items[0].TemplateID)
	require.Equal(t, "fr-FR", list.Items[0].Locale)
	require.Equal(t, []string{email}, list.Items[0].To)

	detail := h.getDelivery(t, list.Items[0].DeliveryID)
	require.Equal(t, "authsession", detail.Source)
	require.Equal(t, "suppressed", detail.Status)
	require.Equal(t, "auth.login_code", detail.TemplateID)
	require.Equal(t, "fr-FR", detail.Locale)
	require.False(t, detail.LocaleFallbackUsed)
	require.Equal(t, []string{email}, detail.To)
	require.NotEmpty(t, detail.IdempotencyKey)

	attempts := h.getDeliveryAttempts(t, detail.DeliveryID)
	require.Empty(t, attempts.Items)
}

func TestAuthsessionMailBlackBoxSendEmailCodeReturnsServiceUnavailableWhenMailServiceStops(t *testing.T) {
	h := newAuthsessionMailHarness(t, authsessionMailHarnessOptions{})
	h.stopMail(t)

	response := postJSONValueWithHeaders(
		t,
		h.authsessionPublicURL+authSendEmailCodePath,
		map[string]string{"email": "pilot@example.com"},
		nil,
	)

	require.Equal(t, 503, response.StatusCode)
	require.JSONEq(t, `{"error":{"code":"service_unavailable","message":"service is unavailable"}}`, response.Body)
}

func TestAuthsessionMailBlackBoxSMTPDeliveryReachesSentStateAndSMTPPayload(t *testing.T) {
	h := newAuthsessionMailHarness(t, authsessionMailHarnessOptions{mailSMTPMode: "smtp"})
	email := "pilot@example.com"

	response := h.sendChallengeWithAcceptLanguage(t, email, "fr-FR, en;q=0.8")
	require.NotEmpty(t, response.ChallengeID)

	list := h.eventuallyListDeliveries(t, url.Values{
		"source":      []string{"authsession"},
		"recipient":   []string{email},
		"template_id": []string{"auth.login_code"},
	})
	require.Len(t, list.Items, 1)
	require.Equal(t, "authsession", list.Items[0].Source)
	require.Equal(t, "auth.login_code", list.Items[0].TemplateID)
	require.Equal(t, "fr-FR", list.Items[0].Locale)
	require.Equal(t, []string{email}, list.Items[0].To)

	var detail mailDeliveryDetailResponse
	require.Eventually(t, func() bool {
		detail = h.getDelivery(t, list.Items[0].DeliveryID)
		return detail.Status == "sent"
	}, 10*time.Second, 50*time.Millisecond)
	require.Equal(t, "authsession", detail.Source)
	require.Equal(t, "sent", detail.Status)
	require.Equal(t, "auth.login_code", detail.TemplateID)
	require.Equal(t, "fr-FR", detail.Locale)
	require.True(t, detail.LocaleFallbackUsed)
	require.Equal(t, []string{email}, detail.To)
	require.NotEmpty(t, detail.IdempotencyKey)

	code, ok := detail.TemplateVariables["code"].(string)
	require.True(t, ok)
	require.Len(t, code, 6)

	var attempts mailDeliveryAttemptsResponse
	require.Eventually(t, func() bool {
		attempts = h.getDeliveryAttempts(t, detail.DeliveryID)
		return len(attempts.Items) == 1 && attempts.Items[0].Status == "provider_accepted"
	}, 10*time.Second, 50*time.Millisecond)
	require.Len(t, attempts.Items, 1)
	require.Equal(t, "provider_accepted", attempts.Items[0].Status)

	require.NotNil(t, h.smtp)
	var payload string
	require.Eventually(t, func() bool {
		payload = h.smtp.LatestPayload()
		return payload != ""
	}, 10*time.Second, 50*time.Millisecond)
	require.Contains(t, payload, "Subject:")
	require.Contains(t, payload, "Your login code is "+code+".")
}
@@ -1,394 +0,0 @@
package authsessionmail_test

import (
	"bytes"
	"encoding/json"
	"errors"
	"io"
	"net/http"
	"net/url"
	"path/filepath"
	"runtime"
	"testing"
	"time"

	"galaxy/integration/internal/harness"

	"github.com/stretchr/testify/require"
)

const (
	authSendEmailCodePath = "/api/v1/public/auth/send-email-code"
	mailDeliveriesPath    = "/api/v1/internal/deliveries"
)

type authsessionMailHarness struct {
	userStub *harness.UserStub
	smtp     *harness.SMTPCapture

	authsessionPublicURL string
	mailInternalURL      string

	authsessionProcess *harness.Process
	mailProcess        *harness.Process
}

type authsessionMailHarnessOptions struct {
	mailSMTPMode string
}

type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

type sendEmailCodeResponse struct {
	ChallengeID string `json:"challenge_id"`
}

type mailDeliveryListResponse struct {
	Items []mailDeliverySummary `json:"items"`
}

type mailDeliverySummary struct {
	DeliveryID string   `json:"delivery_id"`
	Source     string   `json:"source"`
	TemplateID string   `json:"template_id"`
	Locale     string   `json:"locale"`
	To         []string `json:"to"`
	Status     string   `json:"status"`
}

type mailDeliveryDetailResponse struct {
	DeliveryID         string         `json:"delivery_id"`
	Source             string         `json:"source"`
	TemplateID         string         `json:"template_id"`
	Locale             string         `json:"locale"`
	LocaleFallbackUsed bool           `json:"locale_fallback_used"`
	To                 []string       `json:"to"`
	IdempotencyKey     string         `json:"idempotency_key"`
	Status             string         `json:"status"`
	TemplateVariables  map[string]any `json:"template_variables,omitempty"`
}

type mailDeliveryAttemptsResponse struct {
	Items []mailAttemptResponse `json:"items"`
}

type mailAttemptResponse struct {
	Status string `json:"status"`
}

func newAuthsessionMailHarness(t *testing.T, opts authsessionMailHarnessOptions) *authsessionMailHarness {
	t.Helper()

	redisRuntime := harness.StartRedisContainer(t)
	userStub := harness.NewUserStub(t)

	mailInternalAddr := harness.FreeTCPAddress(t)
	authsessionPublicAddr := harness.FreeTCPAddress(t)
	authsessionInternalAddr := harness.FreeTCPAddress(t)

	mailBinary := harness.BuildBinary(t, "mail", "./mail/cmd/mail")
	authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")

	if opts.mailSMTPMode == "" {
		opts.mailSMTPMode = "stub"
	}

	mailEnv := harness.StartMailServicePersistence(t, redisRuntime.Addr).Env
	mailEnv["MAIL_LOG_LEVEL"] = "info"
	mailEnv["MAIL_INTERNAL_HTTP_ADDR"] = mailInternalAddr
	mailEnv["MAIL_TEMPLATE_DIR"] = moduleTemplateDir(t)
	mailEnv["MAIL_STREAM_BLOCK_TIMEOUT"] = "100ms"
	mailEnv["MAIL_OPERATOR_REQUEST_TIMEOUT"] = time.Second.String()
	mailEnv["MAIL_SHUTDOWN_TIMEOUT"] = "2s"
	mailEnv["OTEL_TRACES_EXPORTER"] = "none"
	mailEnv["OTEL_METRICS_EXPORTER"] = "none"

	var smtpCapture *harness.SMTPCapture
	switch opts.mailSMTPMode {
	case "stub":
		mailEnv["MAIL_SMTP_MODE"] = "stub"
	case "smtp":
		smtpCapture = harness.StartSMTPCapture(t, harness.SMTPCaptureConfig{
			SupportsSTARTTLS: true,
		})
		mailEnv["MAIL_SMTP_MODE"] = "smtp"
		mailEnv["MAIL_SMTP_ADDR"] = smtpCapture.Addr()
		mailEnv["MAIL_SMTP_FROM_EMAIL"] = "noreply@example.com"
		mailEnv["MAIL_SMTP_FROM_NAME"] = "Galaxy Mail"
		mailEnv["MAIL_SMTP_TIMEOUT"] = "2s"
		mailEnv["MAIL_SMTP_INSECURE_SKIP_VERIFY"] = "true"
		mailEnv["SSL_CERT_FILE"] = smtpCapture.RootCAPath()
	default:
		t.Fatalf("unsupported mail SMTP mode %q", opts.mailSMTPMode)
	}

	mailProcess := harness.StartProcess(t, "mail", mailBinary, mailEnv)
	waitForMailReady(t, mailProcess, "http://"+mailInternalAddr)

	authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, map[string]string{
		"AUTHSESSION_LOG_LEVEL":                     "info",
		"AUTHSESSION_PUBLIC_HTTP_ADDR":              authsessionPublicAddr,
		"AUTHSESSION_INTERNAL_HTTP_ADDR":            authsessionInternalAddr,
		"AUTHSESSION_REDIS_MASTER_ADDR":             redisRuntime.Addr,
		"AUTHSESSION_REDIS_PASSWORD":                "integration",
		"AUTHSESSION_USER_SERVICE_MODE":             "rest",
		"AUTHSESSION_USER_SERVICE_BASE_URL":         userStub.BaseURL(),
		"AUTHSESSION_MAIL_SERVICE_MODE":             "rest",
		"AUTHSESSION_MAIL_SERVICE_BASE_URL":         "http://" + mailInternalAddr,
		"AUTHSESSION_MAIL_SERVICE_REQUEST_TIMEOUT":  time.Second.String(),
		"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT":   time.Second.String(),
		"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": time.Second.String(),
		"OTEL_TRACES_EXPORTER":                      "none",
		"OTEL_METRICS_EXPORTER":                     "none",
	})
	waitForAuthsessionPublicReady(t, authsessionProcess, "http://"+authsessionPublicAddr)

	return &authsessionMailHarness{
		userStub:             userStub,
		smtp:                 smtpCapture,
		authsessionPublicURL: "http://" + authsessionPublicAddr,
		mailInternalURL:      "http://" + mailInternalAddr,
		authsessionProcess:   authsessionProcess,
		mailProcess:          mailProcess,
	}
}

func (h *authsessionMailHarness) stopMail(t *testing.T) {
	t.Helper()

	h.mailProcess.Stop(t)
}

func (h *authsessionMailHarness) sendChallengeWithAcceptLanguage(t *testing.T, email string, acceptLanguage string) sendEmailCodeResponse {
	t.Helper()

	response := postJSONValueWithHeaders(
		t,
		h.authsessionPublicURL+authSendEmailCodePath,
		map[string]string{"email": email},
		map[string]string{"Accept-Language": acceptLanguage},
	)
	require.Equal(t, http.StatusOK, response.StatusCode, response.Body)

	var body sendEmailCodeResponse
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body))
	require.NotEmpty(t, body.ChallengeID)

	return body
}

func (h *authsessionMailHarness) eventuallyListDeliveries(t *testing.T, query url.Values) mailDeliveryListResponse {
	t.Helper()

	var response mailDeliveryListResponse
	require.Eventually(t, func() bool {
		response = h.listDeliveries(t, query)
		return len(response.Items) > 0
	}, 10*time.Second, 50*time.Millisecond)

	return response
}

func (h *authsessionMailHarness) listDeliveries(t *testing.T, query url.Values) mailDeliveryListResponse {
	t.Helper()

	target := h.mailInternalURL + mailDeliveriesPath
	if encoded := query.Encode(); encoded != "" {
		target += "?" + encoded
	}

	request, err := http.NewRequest(http.MethodGet, target, nil)
	require.NoError(t, err)

	return doJSONRequest[mailDeliveryListResponse](t, request, http.StatusOK)
}

func (h *authsessionMailHarness) getDelivery(t *testing.T, deliveryID string) mailDeliveryDetailResponse {
	t.Helper()

	request, err := http.NewRequest(http.MethodGet, h.mailInternalURL+mailDeliveriesPath+"/"+url.PathEscape(deliveryID), nil)
	require.NoError(t, err)

	return doJSONRequest[mailDeliveryDetailResponse](t, request, http.StatusOK)
}

func (h *authsessionMailHarness) getDeliveryAttempts(t *testing.T, deliveryID string) mailDeliveryAttemptsResponse {
	t.Helper()

	request, err := http.NewRequest(http.MethodGet, h.mailInternalURL+mailDeliveriesPath+"/"+url.PathEscape(deliveryID)+"/attempts", nil)
	require.NoError(t, err)

	return doJSONRequest[mailDeliveryAttemptsResponse](t, request, http.StatusOK)
}

func postJSONValueWithHeaders(t *testing.T, targetURL string, body any, headers map[string]string) httpResponse {
	t.Helper()

	payload, err := json.Marshal(body)
	require.NoError(t, err)

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	require.NoError(t, err)
	request.Header.Set("Content-Type", "application/json")
	for key, value := range headers {
		if value == "" {
			continue
		}
		request.Header.Set(key, value)
	}

	return doRequest(t, request)
}

func doJSONRequest[T any](t *testing.T, request *http.Request, wantStatus int) T {
	t.Helper()

	response := doRequest(t, request)
	require.Equal(t, wantStatus, response.StatusCode, response.Body)

	var decoded T
	require.NoError(t, json.Unmarshal([]byte(response.Body), &decoded), response.Body)

	return decoded
}

func doRequest(t *testing.T, request *http.Request) httpResponse {
	t.Helper()

	client := &http.Client{
		Timeout: 500 * time.Millisecond,
		Transport: &http.Transport{
			DisableKeepAlives: true,
		},
	}
	t.Cleanup(client.CloseIdleConnections)

	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	payload, err := io.ReadAll(response.Body)
	require.NoError(t, err)

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(payload),
		Header:     response.Header.Clone(),
	}
}

func decodeStrictJSONPayload(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()

	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}

	return nil
}

func waitForMailReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		request, err := http.NewRequest(http.MethodGet, baseURL+mailDeliveriesPath, nil)
		require.NoError(t, err)

		response, err := client.Do(request)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for mail readiness: timeout\n%s", process.Logs())
}

func waitForAuthsessionPublicReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		response, err := postJSONValueMaybe(client, baseURL+authSendEmailCodePath, map[string]string{
			"email": "",
		})
		if err == nil && response.StatusCode == http.StatusBadRequest {
			return
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for authsession public readiness: timeout\n%s", process.Logs())
}

func postJSONValueMaybe(client *http.Client, targetURL string, body any) (httpResponse, error) {
	payload, err := json.Marshal(body)
	if err != nil {
		return httpResponse{}, err
	}

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	if err != nil {
		return httpResponse{}, err
	}
	request.Header.Set("Content-Type", "application/json")

	response, err := client.Do(request)
	if err != nil {
		return httpResponse{}, err
	}
	defer response.Body.Close()

	responseBody, err := io.ReadAll(response.Body)
	if err != nil {
		return httpResponse{}, err
	}

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(responseBody),
		Header:     response.Header.Clone(),
	}, nil
}

func moduleTemplateDir(t *testing.T) string {
	t.Helper()

	return filepath.Join(repositoryRoot(t), "mail", "templates")
}

func repositoryRoot(t *testing.T) string {
	t.Helper()

	_, file, _, ok := runtime.Caller(0)
	if !ok {
		t.Fatal("resolve repository root: runtime caller is unavailable")
	}

	return filepath.Clean(filepath.Join(filepath.Dir(file), "..", ".."))
}
@@ -1,116 +0,0 @@
package authsessionuser_test

import (
	"net/http"
	"strings"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestAuthsessionUserBlackBoxConfirmCreatesUserWithForwardedRegistrationContext(t *testing.T) {
	t.Parallel()

	h := newAuthsessionUserHarness(t)
	email := "created@example.com"

	challengeID := h.sendChallenge(t, email)
	code := lastMailCodeFor(t, h.mailStub, email)

	response := h.confirmCode(t, challengeID, code)
	var confirmBody struct {
		DeviceSessionID string `json:"device_session_id"`
	}
	requireJSONStatus(t, response, http.StatusOK, &confirmBody)
	require.True(t, strings.HasPrefix(confirmBody.DeviceSessionID, "device-session-"))

	lookupResponse, account := lookupUserByEmail(t, h.userServiceURL, email)
	require.Equalf(t, http.StatusOK, lookupResponse.StatusCode, formatStatusError(lookupResponse))
	require.Equal(t, email, account.User.Email)
	require.Equal(t, "en", account.User.PreferredLanguage)
	require.Equal(t, testTimeZone, account.User.TimeZone)
	require.True(t, strings.HasPrefix(account.User.UserID, "user-"))
	require.True(t, strings.HasPrefix(account.User.UserName, "player-"))
	require.Empty(t, account.User.DisplayName)
	require.Equal(t, "free", account.User.Entitlement.PlanCode)
	require.False(t, account.User.Entitlement.IsPaid)
	require.Empty(t, account.User.ActiveSanctions)
	require.Empty(t, account.User.ActiveLimits)
}

func TestAuthsessionUserBlackBoxConfirmForExistingUserKeepsCreateOnlySettings(t *testing.T) {
	t.Parallel()

	h := newAuthsessionUserHarness(t)
	email := "existing@example.com"

	created := postEnsureUser(t, h.userServiceURL, email, "fr-FR", "Europe/Paris")
	require.Equal(t, "created", created.Outcome)
	sleepForDistinctCreatedAt()

	challengeID := h.sendChallenge(t, email)
	code := lastMailCodeFor(t, h.mailStub, email)

	response := h.confirmCode(t, challengeID, code)
	var confirmBody struct {
		DeviceSessionID string `json:"device_session_id"`
	}
	requireJSONStatus(t, response, http.StatusOK, &confirmBody)
	require.True(t, strings.HasPrefix(confirmBody.DeviceSessionID, "device-session-"))

	lookupResponse, account := lookupUserByEmail(t, h.userServiceURL, email)
	require.Equalf(t, http.StatusOK, lookupResponse.StatusCode, formatStatusError(lookupResponse))
	require.Equal(t, created.UserID, account.User.UserID)
	require.Equal(t, "fr-FR", account.User.PreferredLanguage)
	require.Equal(t, "Europe/Paris", account.User.TimeZone)
}

func TestAuthsessionUserBlackBoxAcceptLanguageSetsLocalizedPreferredLanguage(t *testing.T) {
	t.Parallel()

	h := newAuthsessionUserHarness(t)
	email := "localized@example.com"

	challengeID := h.sendChallengeWithAcceptLanguage(t, email, "fr-FR, en;q=0.8")
	deliveries := h.mailStub.RecordedDeliveries()
	require.NotEmpty(t, deliveries)
	require.Equal(t, "fr-FR", deliveries[len(deliveries)-1].Locale)

	code := lastMailCodeFor(t, h.mailStub, email)
	response := h.confirmCode(t, challengeID, code)
	var confirmBody struct {
		DeviceSessionID string `json:"device_session_id"`
	}
	requireJSONStatus(t, response, http.StatusOK, &confirmBody)
	require.True(t, strings.HasPrefix(confirmBody.DeviceSessionID, "device-session-"))

	lookupResponse, account := lookupUserByEmail(t, h.userServiceURL, email)
	require.Equalf(t, http.StatusOK, lookupResponse.StatusCode, formatStatusError(lookupResponse))
	require.Equal(t, "fr-FR", account.User.PreferredLanguage)
	require.Equal(t, testTimeZone, account.User.TimeZone)
}

func TestAuthsessionUserBlackBoxBlockedEmailSendIsSuccessShapedAndConfirmIsRejectedWithoutCreatingUser(t *testing.T) {
	t.Parallel()

	h := newAuthsessionUserHarness(t)

	blockedAtSendEmail := "blocked-send@example.com"
	postBlockByEmail(t, h.userServiceURL, blockedAtSendEmail)

	beforeBlockedSendDeliveries := len(h.mailStub.RecordedDeliveries())
	blockedChallengeID := h.sendChallenge(t, blockedAtSendEmail)
	require.NotEmpty(t, blockedChallengeID)
	require.Len(t, h.mailStub.RecordedDeliveries(), beforeBlockedSendDeliveries)

	blockedAtConfirmEmail := "blocked-confirm@example.com"
	challengeID := h.sendChallenge(t, blockedAtConfirmEmail)
	code := lastMailCodeFor(t, h.mailStub, blockedAtConfirmEmail)
	postBlockByEmail(t, h.userServiceURL, blockedAtConfirmEmail)

	confirmResponse := h.confirmCode(t, challengeID, code)
	requireJSONStatusRaw(t, confirmResponse, http.StatusForbidden, `{"error":{"code":"blocked_by_policy","message":"authentication is blocked by policy"}}`)

	lookupResponse, _ := lookupUserByEmail(t, h.userServiceURL, blockedAtConfirmEmail)
	requireLookupNotFound(t, lookupResponse)
}
@@ -1,408 +0,0 @@
package authsessionuser_test

import (
	"bytes"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/internal/harness"

	"github.com/stretchr/testify/require"
)

const (
	testClientPublicKey = "AAECAwQFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh8="
	testTimeZone        = "Europe/Kaliningrad"
)

type authsessionUserHarness struct {
	mailStub *harness.MailStub

	authsessionPublicURL string
	userServiceURL       string

	authsessionProcess *harness.Process
	userServiceProcess *harness.Process
}

func newAuthsessionUserHarness(t *testing.T) *authsessionUserHarness {
	t.Helper()

	redisServer := harness.StartMiniredis(t)
	mailStub := harness.NewMailStub(t)

	userServiceAddr := harness.FreeTCPAddress(t)
	authsessionPublicAddr := harness.FreeTCPAddress(t)
	authsessionInternalAddr := harness.FreeTCPAddress(t)

	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
	authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")

	userServiceEnv := harness.StartUserServicePersistence(t, redisServer.Addr()).Env
	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

	authsessionEnv := map[string]string{
		"AUTHSESSION_LOG_LEVEL":                     "info",
		"AUTHSESSION_PUBLIC_HTTP_ADDR":              authsessionPublicAddr,
		"AUTHSESSION_INTERNAL_HTTP_ADDR":            authsessionInternalAddr,
		"AUTHSESSION_REDIS_MASTER_ADDR":             redisServer.Addr(),
		"AUTHSESSION_REDIS_PASSWORD":                "integration",
		"AUTHSESSION_USER_SERVICE_MODE":             "rest",
		"AUTHSESSION_USER_SERVICE_BASE_URL":         "http://" + userServiceAddr,
		"AUTHSESSION_MAIL_SERVICE_MODE":             "rest",
		"AUTHSESSION_MAIL_SERVICE_BASE_URL":         mailStub.BaseURL(),
		"OTEL_TRACES_EXPORTER":                      "none",
		"OTEL_METRICS_EXPORTER":                     "none",
		"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT":   time.Second.String(),
		"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": time.Second.String(),
	}
	authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, authsessionEnv)
	waitForAuthsessionPublicReady(t, authsessionProcess, "http://"+authsessionPublicAddr)

	return &authsessionUserHarness{
		mailStub:             mailStub,
		authsessionPublicURL: "http://" + authsessionPublicAddr,
		userServiceURL:       "http://" + userServiceAddr,
		authsessionProcess:   authsessionProcess,
		userServiceProcess:   userServiceProcess,
	}
}

func (h *authsessionUserHarness) sendChallenge(t *testing.T, email string) string {
	t.Helper()

	return h.sendChallengeWithAcceptLanguage(t, email, "")
}

func (h *authsessionUserHarness) sendChallengeWithAcceptLanguage(t *testing.T, email string, acceptLanguage string) string {
	t.Helper()

	response := postJSONValueWithHeaders(
		t,
		h.authsessionPublicURL+"/api/v1/public/auth/send-email-code",
		map[string]string{"email": email},
		map[string]string{"Accept-Language": acceptLanguage},
	)
	require.Equal(t, http.StatusOK, response.StatusCode)

	var body struct {
		ChallengeID string `json:"challenge_id"`
	}
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body))
	require.NotEmpty(t, body.ChallengeID)

	return body.ChallengeID
}

func (h *authsessionUserHarness) confirmCode(t *testing.T, challengeID string, code string) httpResponse {
	t.Helper()

	return postJSONValue(t, h.authsessionPublicURL+"/api/v1/public/auth/confirm-email-code", map[string]string{
		"challenge_id":      challengeID,
		"code":              code,
		"client_public_key": testClientPublicKey,
		"time_zone":         testTimeZone,
	})
}

type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

func postJSONValue(t *testing.T, targetURL string, body any) httpResponse {
	t.Helper()

	return postJSONValueWithHeaders(t, targetURL, body, nil)
}

func postJSONValueWithHeaders(t *testing.T, targetURL string, body any, headers map[string]string) httpResponse {
	t.Helper()

	payload, err := json.Marshal(body)
	require.NoError(t, err)

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	require.NoError(t, err)
	request.Header.Set("Content-Type", "application/json")
	for key, value := range headers {
		if value == "" {
			continue
		}
		request.Header.Set(key, value)
	}

	client := &http.Client{
		Timeout: 250 * time.Millisecond,
		Transport: &http.Transport{
			DisableKeepAlives: true,
		},
	}
	t.Cleanup(client.CloseIdleConnections)

	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	responseBody, err := io.ReadAll(response.Body)
	require.NoError(t, err)

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(responseBody),
		Header:     response.Header.Clone(),
	}
}

func decodeStrictJSONPayload(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()

	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}

	return nil
}

func waitForUserServiceReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	deadline := time.Now().Add(10 * time.Second)

	for time.Now().Before(deadline) {
		request, err := http.NewRequest(http.MethodGet, baseURL+"/api/v1/internal/users/user-missing/exists", nil)
		require.NoError(t, err)

		response, err := client.Do(request)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for userservice readiness: timeout\n%s", process.Logs())
}

func waitForAuthsessionPublicReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	deadline := time.Now().Add(10 * time.Second)

	for time.Now().Before(deadline) {
		response, err := postJSONValueMaybe(client, baseURL+"/api/v1/public/auth/send-email-code", map[string]string{
			"email": "",
		})
		if err == nil && response.StatusCode == http.StatusBadRequest {
			return
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for authsession public readiness: timeout\n%s", process.Logs())
}

func postJSONValueMaybe(client *http.Client, targetURL string, body any) (httpResponse, error) {
	payload, err := json.Marshal(body)
	if err != nil {
		return httpResponse{}, err
	}

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	if err != nil {
		return httpResponse{}, err
	}
	request.Header.Set("Content-Type", "application/json")

	response, err := client.Do(request)
	if err != nil {
		return httpResponse{}, err
	}
	defer response.Body.Close()

	responseBody, err := io.ReadAll(response.Body)
	if err != nil {
		return httpResponse{}, err
	}

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(responseBody),
		Header:     response.Header.Clone(),
	}, nil
}

func requireJSONStatus(t *testing.T, response httpResponse, wantStatus int, target any) {
	t.Helper()

	require.Equal(t, wantStatus, response.StatusCode, "response body: %s", response.Body)
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), target))
}

func requireJSONStatusRaw(t *testing.T, response httpResponse, wantStatus int, wantBody string) {
	t.Helper()

	require.Equal(t, wantStatus, response.StatusCode, "response body: %s", response.Body)
	require.JSONEq(t, wantBody, response.Body)
}

func postEnsureUser(t *testing.T, baseURL string, email string, preferredLanguage string, timeZone string) ensureByEmailResponse {
	t.Helper()

	response := postJSONValue(t, baseURL+"/api/v1/internal/users/ensure-by-email", map[string]any{
		"email": email,
		"registration_context": map[string]string{
			"preferred_language": preferredLanguage,
			"time_zone":          timeZone,
		},
	})

	var body ensureByEmailResponse
	requireJSONStatus(t, response, http.StatusOK, &body)
	return body
}

func postBlockByEmail(t *testing.T, baseURL string, email string) {
	t.Helper()

	response := postJSONValue(t, baseURL+"/api/v1/internal/user-blocks/by-email", map[string]string{
		"email":       email,
		"reason_code": "policy_blocked",
	})

	var body blockMutationResponse
	requireJSONStatus(t, response, http.StatusOK, &body)
}

func lookupUserByEmail(t *testing.T, baseURL string, email string) (httpResponse, userLookupResponse) {
	t.Helper()

	response := postJSONValue(t, baseURL+"/api/v1/internal/user-lookups/by-email", map[string]string{
		"email": email,
	})

	if response.StatusCode != http.StatusOK {
		return response, userLookupResponse{}
	}

	var body userLookupResponse
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body))
	return response, body
}

type ensureByEmailResponse struct {
	Outcome string `json:"outcome"`
	UserID  string `json:"user_id,omitempty"`
}

type blockMutationResponse struct {
	Outcome string `json:"outcome"`
	UserID  string `json:"user_id,omitempty"`
}

type userLookupResponse struct {
	User accountView `json:"user"`
}

type accountView struct {
	UserID            string                  `json:"user_id"`
	Email             string                  `json:"email"`
	UserName          string                  `json:"user_name"`
	DisplayName       string                  `json:"display_name,omitempty"`
	PreferredLanguage string                  `json:"preferred_language"`
	TimeZone          string                  `json:"time_zone"`
	DeclaredCountry   string                  `json:"declared_country,omitempty"`
	Entitlement       entitlementSnapshotView `json:"entitlement"`
	ActiveSanctions   []activeSanctionView    `json:"active_sanctions"`
	ActiveLimits      []activeLimitView       `json:"active_limits"`
	CreatedAt         time.Time               `json:"created_at"`
	UpdatedAt         time.Time               `json:"updated_at"`
}

type entitlementSnapshotView struct {
	PlanCode   string       `json:"plan_code"`
	IsPaid     bool         `json:"is_paid"`
	Source     string       `json:"source"`
	Actor      actorRefView `json:"actor"`
	ReasonCode string       `json:"reason_code"`
	StartsAt   time.Time    `json:"starts_at"`
	EndsAt     *time.Time   `json:"ends_at,omitempty"`
	UpdatedAt  time.Time    `json:"updated_at"`
}

type activeSanctionView struct {
	SanctionCode string       `json:"sanction_code"`
	Scope        string       `json:"scope"`
	ReasonCode   string       `json:"reason_code"`
	Actor        actorRefView `json:"actor"`
	AppliedAt    time.Time    `json:"applied_at"`
	ExpiresAt    *time.Time   `json:"expires_at,omitempty"`
}

type activeLimitView struct {
	LimitCode  string       `json:"limit_code"`
	Value      int          `json:"value"`
	ReasonCode string       `json:"reason_code"`
	Actor      actorRefView `json:"actor"`
	AppliedAt  time.Time    `json:"applied_at"`
	ExpiresAt  *time.Time   `json:"expires_at,omitempty"`
}

type actorRefView struct {
	Type string `json:"type"`
	ID   string `json:"id,omitempty"`
}

func requireLookupNotFound(t *testing.T, response httpResponse) {
	t.Helper()

	requireJSONStatusRaw(t, response, http.StatusNotFound, `{"error":{"code":"subject_not_found","message":"subject not found"}}`)
}
|
||||
|
||||
func lastMailCodeFor(t *testing.T, stub *harness.MailStub, email string) string {
|
||||
t.Helper()
|
||||
|
||||
deliveries := stub.RecordedDeliveries()
|
||||
for index := len(deliveries) - 1; index >= 0; index-- {
|
||||
if deliveries[index].Email == email {
|
||||
return deliveries[index].Code
|
||||
}
|
||||
}
|
||||
|
||||
t.Fatalf("mail stub did not record delivery for %s", email)
|
||||
return ""
|
||||
}
|
||||
|
||||
func sleepForDistinctCreatedAt() {
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
}
|
||||
|
||||
func formatStatusError(response httpResponse) string {
|
||||
return fmt.Sprintf("status=%d body=%s", response.StatusCode, response.Body)
|
||||
}
|
||||
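The helpers above lean on `decodeStrictJSONPayload`, whose definition is not shown in this hunk. A minimal sketch of what such a strict decoder might look like, assuming it rejects unknown fields and trailing content (the name `decodeStrictJSON` and the exact error text are illustrative, not the suite's actual implementation):

```go
package main

import (
	"bytes"
	"encoding/json"
	"errors"
	"fmt"
	"io"
)

// decodeStrictJSON is a hypothetical stand-in for the suite's
// decodeStrictJSONPayload: it fails on unknown fields and on any
// content trailing the first JSON value.
func decodeStrictJSON(data []byte, out any) error {
	dec := json.NewDecoder(bytes.NewReader(data))
	dec.DisallowUnknownFields()
	if err := dec.Decode(out); err != nil {
		return err
	}
	// A second Decode must hit EOF; anything else means trailing data.
	if err := dec.Decode(&struct{}{}); !errors.Is(err, io.EOF) {
		return fmt.Errorf("trailing data after JSON value")
	}
	return nil
}

func main() {
	var v struct {
		Outcome string `json:"outcome"`
	}
	fmt.Println(decodeStrictJSON([]byte(`{"outcome":"ok"}`), &v)) // <nil>
	// Unknown field "x" and trailing data both yield non-nil errors.
	fmt.Println(decodeStrictJSON([]byte(`{"outcome":"ok","x":1}`), &v) != nil)
	fmt.Println(decodeStrictJSON([]byte(`{"outcome":"ok"} extra`), &v) != nil)
}
```

Strict decoding makes response-shape assertions like `requireJSONStatus` meaningful: an endpoint that starts emitting extra fields fails the test instead of passing silently.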
@@ -0,0 +1,98 @@
package integration_test

import (
	"context"
	"encoding/json"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/testenv"
)

// TestEngineCommandProxy spins up a running game (10 enrolled pilots so
// engine init succeeds) and verifies that backend's user-side
// `/api/v1/user/games/{id}/commands` proxy reaches the engine and
// returns its passthrough body without an internal-error response.
func TestEngineCommandProxy(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	testenv.EnsureGameImage(t)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	admin := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, plat.Backend.AdminUser, plat.Backend.AdminPassword)
	if _, resp, err := admin.Do(ctx, http.MethodPost, "/api/v1/admin/engine-versions", map[string]any{
		"version": "v1.0.0", "image_ref": testenv.GameImage, "enabled": true,
	}); err != nil || resp.StatusCode/100 != 2 {
		t.Fatalf("seed engine_version: err=%v resp=%v", err, resp)
	}

	owner := testenv.RegisterSession(t, plat, "owner+cmd@example.com")
	ownerID, err := owner.LookupUserID(ctx, plat)
	if err != nil {
		t.Fatalf("resolve owner: %v", err)
	}
	ownerHTTP := testenv.NewBackendUserClient(plat.Backend.HTTPURL, ownerID)

	gameBody := map[string]any{
		"game_name":             "Engine Command Proxy",
		"visibility":            "private",
		"min_players":           10,
		"max_players":           10,
		"start_gap_hours":       1,
		"start_gap_players":     10,
		"enrollment_ends_at":    time.Now().Add(24 * time.Hour).UTC().Format(time.RFC3339),
		"turn_schedule":         "0 * * * *",
		"target_engine_version": "v1.0.0",
	}
	raw, resp, err := ownerHTTP.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games", gameBody)
	if err != nil || resp.StatusCode != http.StatusCreated {
		t.Fatalf("create game: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}
	var game struct {
		GameID string `json:"game_id"`
	}
	if err := json.Unmarshal(raw, &game); err != nil {
		t.Fatalf("decode create-game response: %v", err)
	}

	if _, resp, err := ownerHTTP.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games/"+game.GameID+"/open-enrollment", nil); err != nil || resp.StatusCode != http.StatusOK {
		t.Fatalf("open enrollment: %v %d", err, resp.StatusCode)
	}
	pilots := testenv.EnrollPilots(t, plat, ownerHTTP, game.GameID, 10, "cmd")

	if _, resp, err := admin.Do(ctx, http.MethodPost, "/api/v1/admin/games/"+game.GameID+"/force-start", nil); err != nil || resp.StatusCode/100 != 2 {
		t.Fatalf("force-start: %v %d", err, resp.StatusCode)
	}

	// Wait until the runtime reports "running".
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		raw, resp, err = admin.Do(ctx, http.MethodGet, "/api/v1/admin/runtimes/"+game.GameID, nil)
		if err == nil && resp.StatusCode == http.StatusOK {
			var rec struct {
				Status string `json:"status"`
			}
			_ = json.Unmarshal(raw, &rec)
			if rec.Status == "running" {
				break
			}
		}
		time.Sleep(500 * time.Millisecond)
	}

	// Pilot 1 sends a command. Backend forwards it to the engine and the
	// pass-through body comes back unchanged. Any status the engine
	// produces (200, 4xx) is acceptable; what matters is that backend did
	// not surface an internal error of its own.
	cmdBody := map[string]any{"actions": []map[string]any{}}
	raw, resp, err = pilots[0].HTTP.Do(ctx, http.MethodPost, "/api/v1/user/games/"+game.GameID+"/commands", cmdBody)
	if err != nil {
		t.Fatalf("commands proxy: %v", err)
	}
	if resp.StatusCode == http.StatusInternalServerError || resp.StatusCode == http.StatusBadGateway {
		t.Fatalf("commands proxy: backend internal-error %d body=%s", resp.StatusCode, string(raw))
	}

	// Cleanup: stop the container so the test does not leak it.
	_, _, _ = admin.Do(ctx, http.MethodPost, "/api/v1/admin/games/"+game.GameID+"/force-stop", nil)
}
@@ -0,0 +1,190 @@
package integration_test

import (
	"context"
	"crypto/sha256"
	"strings"
	"testing"
	"time"

	"galaxy/integration/testenv"
	usermodel "galaxy/model/user"
	"galaxy/transcoder"

	"github.com/google/uuid"
)

// TestGatewayEdge_PublicBodyTooLarge tightens the public body-size limit
// and asserts that the gateway rejects an oversize public auth payload
// before it reaches backend.
func TestGatewayEdge_PublicBodyTooLarge(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{
		GatewayExtra: map[string]string{
			"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_MAX_BODY_BYTES": "256",
		},
	})
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	huge := strings.Repeat("x", 4096)
	public := testenv.NewPublicRESTClient(plat.Gateway.HTTPURL)
	_, _, err := public.SendEmailCode(ctx, huge+"@example.com", "")
	if err == nil {
		t.Fatalf("expected error for oversize public payload, got nil")
	}
	if !strings.Contains(err.Error(), "413") && !strings.Contains(err.Error(), "request_too_large") {
		t.Fatalf("expected 413 or request_too_large, got: %v", err)
	}
}

// TestGatewayEdge_BadSignature corrupts the request signature and asserts
// that the gateway rejects it as Unauthenticated.
func TestGatewayEdge_BadSignature(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	sess := testenv.RegisterSession(t, plat, "pilot+badsig@example.com")
	gw, err := sess.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial: %v", err)
	}
	defer gw.Close()

	payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{})
	if err != nil {
		t.Fatalf("encode payload: %v", err)
	}
	bogus := make([]byte, 64)
	_, err = gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, payload, testenv.ExecuteOptions{
		OverrideSignature: bogus,
	})
	if err == nil {
		t.Fatalf("expected Unauthenticated for bad signature")
	}
	if !testenv.IsUnauthenticated(err) {
		t.Fatalf("expected Unauthenticated, got: %v", err)
	}
}

// TestGatewayEdge_PayloadHashMismatch sends a request whose payload_hash
// is not the SHA-256 of payload_bytes and asserts that the gateway
// rejects it.
func TestGatewayEdge_PayloadHashMismatch(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	sess := testenv.RegisterSession(t, plat, "pilot+hash@example.com")
	gw, err := sess.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial: %v", err)
	}
	defer gw.Close()

	payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{})
	if err != nil {
		t.Fatalf("encode payload: %v", err)
	}
	// The signed canonical bytes still use this wrong hash; the gateway
	// recomputes it and should detect the mismatch independently of the
	// signature check.
	wrong := sha256.Sum256([]byte("not-the-payload"))
	_, err = gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, payload, testenv.ExecuteOptions{
		OverridePayloadHash: wrong[:],
	})
	if err == nil {
		t.Fatalf("expected rejection for payload_hash mismatch")
	}
	if !testenv.IsUnauthenticated(err) && !testenv.IsInvalidArgument(err) {
		t.Fatalf("expected Unauthenticated or InvalidArgument, got: %v", err)
	}
}

// TestGatewayEdge_StaleTimestamp tightens the freshness window to 1
// second, submits a request whose timestamp is 30 seconds in the past,
// and asserts that the gateway rejects it as stale.
func TestGatewayEdge_StaleTimestamp(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{
		GatewayExtra: map[string]string{
			"GATEWAY_AUTHENTICATED_GRPC_FRESHNESS_WINDOW": "1s",
		},
	})
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	sess := testenv.RegisterSession(t, plat, "pilot+stale@example.com")
	gw, err := sess.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial: %v", err)
	}
	defer gw.Close()

	payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{})
	if err != nil {
		t.Fatalf("encode payload: %v", err)
	}
	stale := time.Now().Add(-30 * time.Second).UnixMilli()
	_, err = gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, payload, testenv.ExecuteOptions{
		TimestampMS: stale,
	})
	if err == nil {
		t.Fatalf("expected rejection for stale timestamp")
	}
	if !testenv.IsUnauthenticated(err) && !testenv.IsInvalidArgument(err) && !testenv.IsFailedPrecondition(err) {
		t.Fatalf("expected Unauthenticated, InvalidArgument or FailedPrecondition, got: %v", err)
	}
}

// TestGatewayEdge_UnknownSession presents a session ID that backend has
// never seen; the gateway must reject it before forwarding.
func TestGatewayEdge_UnknownSession(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	sess := testenv.RegisterSession(t, plat, "pilot+unknown@example.com")
	gw, err := sess.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial: %v", err)
	}
	defer gw.Close()

	payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{})
	if err != nil {
		t.Fatalf("encode payload: %v", err)
	}
	_, err = gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, payload, testenv.ExecuteOptions{
		OverrideSessionID: uuid.NewString(),
	})
	if err == nil {
		t.Fatalf("expected rejection for unknown session")
	}
	if !testenv.IsUnauthenticated(err) {
		t.Fatalf("expected Unauthenticated, got: %v", err)
	}
}

// TestGatewayEdge_UnsupportedProtocolVersion sets protocol_version to an
// unknown literal and asserts that the gateway rejects it.
func TestGatewayEdge_UnsupportedProtocolVersion(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	sess := testenv.RegisterSession(t, plat, "pilot+protover@example.com")
	gw, err := sess.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial: %v", err)
	}
	defer gw.Close()

	payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{})
	if err != nil {
		t.Fatalf("encode payload: %v", err)
	}
	_, err = gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, payload, testenv.ExecuteOptions{
		OverrideProtocolVersion: "v999",
	})
	if err == nil {
		t.Fatalf("expected rejection for unsupported protocol_version")
	}
	if !testenv.IsInvalidArgument(err) && !testenv.IsUnauthenticated(err) && !testenv.IsFailedPrecondition(err) {
		t.Fatalf("expected InvalidArgument, Unauthenticated or FailedPrecondition, got: %v", err)
	}
}
@@ -1,285 +0,0 @@
package gatewayauthsession_test

import (
	"context"
	"crypto/ed25519"
	"encoding/base64"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/internal/harness"

	gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"

	"github.com/stretchr/testify/require"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func TestGatewayAuthSessionSendEmailCodeReachesAuthsessionMailDelivery(t *testing.T) {
	h := newGatewayAuthSessionHarness(t, gatewayAuthSessionOptions{})

	response := postJSONValue(t, h.gatewayPublicURL+"/api/v1/public/auth/send-email-code", map[string]string{
		"email": testEmail,
	})
	require.Equal(t, http.StatusOK, response.StatusCode)

	var body struct {
		ChallengeID string `json:"challenge_id"`
	}
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body))
	require.NotEmpty(t, body.ChallengeID)

	deliveries := h.mailStub.RecordedDeliveries()
	require.Len(t, deliveries, 1)
	require.Equal(t, testEmail, deliveries[0].Email)
	require.Len(t, deliveries[0].Code, 6)
}

func TestGatewayAuthSessionConfirmCreatesProjectionAndAllowsSubscribeEvents(t *testing.T) {
	h := newGatewayAuthSessionHarness(t, gatewayAuthSessionOptions{})

	clientPrivateKey := newClientPrivateKey("confirm-projection")
	challengeID, code := h.sendChallenge(t, testEmail)

	response := h.confirmCode(t, challengeID, code, clientPrivateKey)
	require.Equal(t, http.StatusOK, response.StatusCode)

	var confirmBody struct {
		DeviceSessionID string `json:"device_session_id"`
	}
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &confirmBody))
	require.NotEmpty(t, confirmBody.DeviceSessionID)

	record := h.readGatewaySessionRecord(t, confirmBody.DeviceSessionID)
	require.Equal(t, gatewaySessionRecord{
		DeviceSessionID: confirmBody.DeviceSessionID,
		UserID:          "user-1",
		ClientPublicKey: base64.StdEncoding.EncodeToString(clientPrivateKey.Public().(ed25519.PublicKey)),
		Status:          "active",
	}, record)

	ensureCalls := h.userStub.EnsureCalls()
	require.Len(t, ensureCalls, 1)
	require.Equal(t, testEmail, ensureCalls[0].Email)
	require.Equal(t, "en", ensureCalls[0].PreferredLanguage)
	require.Equal(t, testTimeZone, ensureCalls[0].TimeZone)

	conn := h.dialGateway(t)
	client := gatewayv1.NewEdgeGatewayClient(conn)

	stream, err := client.SubscribeEvents(context.Background(), newSubscribeEventsRequest(confirmBody.DeviceSessionID, "request-bootstrap", clientPrivateKey))
	require.NoError(t, err)

	event, err := stream.Recv()
	require.NoError(t, err)
	assertBootstrapEvent(t, event, h.responseSignerPublicKey, "request-bootstrap")
}

func TestGatewayAuthSessionAcceptLanguageIsForwardedToMailAndUser(t *testing.T) {
	h := newGatewayAuthSessionHarness(t, gatewayAuthSessionOptions{})

	clientPrivateKey := newClientPrivateKey("localized")
	challengeID, code := h.sendChallengeWithAcceptLanguage(t, testEmail, "fr-FR, en;q=0.8")

	deliveries := h.mailStub.RecordedDeliveries()
	require.NotEmpty(t, deliveries)
	require.Equal(t, "fr-FR", deliveries[len(deliveries)-1].Locale)

	response := h.confirmCode(t, challengeID, code, clientPrivateKey)
	require.Equal(t, http.StatusOK, response.StatusCode)

	ensureCalls := h.userStub.EnsureCalls()
	require.Len(t, ensureCalls, 1)
	require.Equal(t, testEmail, ensureCalls[0].Email)
	require.Equal(t, "fr-FR", ensureCalls[0].PreferredLanguage)
	require.Equal(t, testTimeZone, ensureCalls[0].TimeZone)
}

func TestGatewayAuthSessionRepeatedConfirmReturnsSameSessionID(t *testing.T) {
	h := newGatewayAuthSessionHarness(t, gatewayAuthSessionOptions{})

	clientPrivateKey := newClientPrivateKey("repeated-confirm")
	challengeID, code := h.sendChallenge(t, testEmail)

	first := h.confirmCode(t, challengeID, code, clientPrivateKey)
	second := h.confirmCode(t, challengeID, code, clientPrivateKey)
	require.Equal(t, http.StatusOK, first.StatusCode)
	require.Equal(t, http.StatusOK, second.StatusCode)

	var firstBody struct {
		DeviceSessionID string `json:"device_session_id"`
	}
	var secondBody struct {
		DeviceSessionID string `json:"device_session_id"`
	}
	require.NoError(t, decodeStrictJSONPayload([]byte(first.Body), &firstBody))
	require.NoError(t, decodeStrictJSONPayload([]byte(second.Body), &secondBody))
	require.Equal(t, firstBody.DeviceSessionID, secondBody.DeviceSessionID)
}

func TestGatewayAuthSessionInvalidClientPublicKeyPassesThroughUnchanged(t *testing.T) {
	h := newGatewayAuthSessionHarness(t, gatewayAuthSessionOptions{})

	challengeID, _ := h.sendChallenge(t, testEmail)

	response := postJSONValue(t, h.gatewayPublicURL+"/api/v1/public/auth/confirm-email-code", map[string]string{
		"challenge_id":      challengeID,
		"code":              "123456",
		"client_public_key": "invalid",
		"time_zone":         testTimeZone,
	})

	require.Equal(t, http.StatusBadRequest, response.StatusCode)
	require.JSONEq(t, `{"error":{"code":"invalid_client_public_key","message":"client_public_key is not a valid base64-encoded raw 32-byte Ed25519 public key"}}`, response.Body)
}

func TestGatewayAuthSessionChallengeNotFoundPassesThroughUnchanged(t *testing.T) {
	h := newGatewayAuthSessionHarness(t, gatewayAuthSessionOptions{})

	response := h.confirmCode(t, "missing-challenge", "123456", newClientPrivateKey("missing-challenge"))

	require.Equal(t, http.StatusNotFound, response.StatusCode)
	require.JSONEq(t, `{"error":{"code":"challenge_not_found","message":"challenge not found"}}`, response.Body)
}

func TestGatewayAuthSessionInvalidCodePassesThroughUnchanged(t *testing.T) {
	h := newGatewayAuthSessionHarness(t, gatewayAuthSessionOptions{})

	clientPrivateKey := newClientPrivateKey("invalid-code")
	challengeID, code := h.sendChallenge(t, testEmail)
	invalidCode := "000000"
	if code == invalidCode {
		invalidCode = "111111"
	}

	response := h.confirmCode(t, challengeID, invalidCode, clientPrivateKey)

	require.Equal(t, http.StatusBadRequest, response.StatusCode)
	require.JSONEq(t, `{"error":{"code":"invalid_code","message":"confirmation code is invalid"}}`, response.Body)
}

func TestGatewayAuthSessionBlockedSendRemainsSuccessShapedWithoutDelivery(t *testing.T) {
	h := newGatewayAuthSessionHarness(t, gatewayAuthSessionOptions{})
	h.userStub.SeedBlockedEmail(testEmail, "policy_blocked")

	response := postJSONValue(t, h.gatewayPublicURL+"/api/v1/public/auth/send-email-code", map[string]string{
		"email": testEmail,
	})

	require.Equal(t, http.StatusOK, response.StatusCode)
	var body struct {
		ChallengeID string `json:"challenge_id"`
	}
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body))
	require.NotEmpty(t, body.ChallengeID)
	require.Empty(t, h.mailStub.RecordedDeliveries())
}

func TestGatewayAuthSessionSessionLimitExceededPassesThroughUnchanged(t *testing.T) {
	h := newGatewayAuthSessionHarness(t, gatewayAuthSessionOptions{})
	h.seedSessionLimit(t, 1)

	firstClientPrivateKey := newClientPrivateKey("session-limit-first")
	firstChallengeID, firstCode := h.sendChallenge(t, testEmail)
	firstConfirm := h.confirmCode(t, firstChallengeID, firstCode, firstClientPrivateKey)
	require.Equal(t, http.StatusOK, firstConfirm.StatusCode)

	const secondEmail = "pilot-second@example.com"
	h.userStub.SeedExisting(secondEmail, "user-1")

	secondClientPrivateKey := newClientPrivateKey("session-limit-second")
	secondChallengeID, secondCode := h.sendChallenge(t, secondEmail)
	secondConfirm := h.confirmCode(t, secondChallengeID, secondCode, secondClientPrivateKey)

	require.Equal(t, http.StatusConflict, secondConfirm.StatusCode)
	require.JSONEq(t, `{"error":{"code":"session_limit_exceeded","message":"active session limit would be exceeded"}}`, secondConfirm.Body)
}

func TestGatewayAuthSessionRevokeClosesPushStreamAndRejectsReopen(t *testing.T) {
	h := newGatewayAuthSessionHarness(t, gatewayAuthSessionOptions{})

	clientPrivateKey := newClientPrivateKey("revoke")
	challengeID, code := h.sendChallenge(t, testEmail)
	confirm := h.confirmCode(t, challengeID, code, clientPrivateKey)
	require.Equal(t, http.StatusOK, confirm.StatusCode)

	var confirmBody struct {
		DeviceSessionID string `json:"device_session_id"`
	}
	require.NoError(t, decodeStrictJSONPayload([]byte(confirm.Body), &confirmBody))

	conn := h.dialGateway(t)
	client := gatewayv1.NewEdgeGatewayClient(conn)

	stream, err := client.SubscribeEvents(context.Background(), newSubscribeEventsRequest(confirmBody.DeviceSessionID, "request-revoke", clientPrivateKey))
	require.NoError(t, err)

	event, err := stream.Recv()
	require.NoError(t, err)
	assertBootstrapEvent(t, event, h.responseSignerPublicKey, "request-revoke")

	revokeResponse := postJSONValue(t, h.authsessionInternalURL+"/api/v1/internal/sessions/"+confirmBody.DeviceSessionID+"/revoke", map[string]any{
		"reason_code": "admin_revoke",
		"actor": map[string]string{
			"type": "system",
		},
	})
	require.Equal(t, http.StatusOK, revokeResponse.StatusCode)

	recvErrCh := make(chan error, 1)
	go func() {
		_, recvErr := stream.Recv()
		recvErrCh <- recvErr
	}()

	select {
	case recvErr := <-recvErrCh:
		require.Equal(t, codes.FailedPrecondition, status.Code(recvErr))
		require.Equal(t, "device session is revoked", status.Convert(recvErr).Message())
	case <-time.After(5 * time.Second):
		t.Fatal("gateway stream did not close after authsession revoke")
	}

	reopened, err := client.SubscribeEvents(context.Background(), newSubscribeEventsRequest(confirmBody.DeviceSessionID, "request-reopen", clientPrivateKey))
	if err == nil {
		_, err = reopened.Recv()
	}

	require.Equal(t, codes.FailedPrecondition, status.Code(err))
	require.Equal(t, "device session is revoked", status.Convert(err).Message())
}

func TestGatewayAuthSessionGatewayTimeoutMappingOverridesAuthsessionMessage(t *testing.T) {
	h := newGatewayAuthSessionHarness(t, gatewayAuthSessionOptions{
		gatewayAuthUpstreamTimeout:   50 * time.Millisecond,
		authsessionPublicHTTPTimeout: time.Second,
		authsessionMailBehavior: harness.MailBehavior{
			Delay: 200 * time.Millisecond,
		},
	})

	response := postJSONValue(t, h.gatewayPublicURL+"/api/v1/public/auth/send-email-code", map[string]string{
		"email": testEmail,
	})

	require.Equal(t, http.StatusServiceUnavailable, response.StatusCode)
	require.JSONEq(t, `{"error":{"code":"service_unavailable","message":"auth service is unavailable"}}`, response.Body)
}

func TestGatewayAuthSessionAuthsessionServiceUnavailablePassesThroughUnchanged(t *testing.T) {
	h := newGatewayAuthSessionHarness(t, gatewayAuthSessionOptions{
		authsessionMailBehavior: harness.MailBehavior{
			StatusCode: http.StatusServiceUnavailable,
			RawBody:    `{"error":"mail backend unavailable"}`,
		},
	})

	response := postJSONValue(t, h.gatewayPublicURL+"/api/v1/public/auth/send-email-code", map[string]string{
		"email": testEmail,
	})

	require.Equal(t, http.StatusServiceUnavailable, response.StatusCode)
	require.JSONEq(t, `{"error":{"code":"service_unavailable","message":"service is unavailable"}}`, response.Body)
}
@@ -1,431 +0,0 @@
|
||||
package gatewayauthsession_test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/ed25519"
|
||||
"crypto/sha256"
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
contractsgatewayv1 "galaxy/integration/internal/contracts/gatewayv1"
|
||||
"galaxy/integration/internal/harness"
|
||||
|
||||
gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
|
||||
|
||||
"github.com/redis/go-redis/v9"
|
||||
"github.com/stretchr/testify/require"
|
||||
"google.golang.org/grpc"
|
||||
"google.golang.org/grpc/credentials/insecure"
|
||||
)
|
||||
|
||||
const (
|
||||
testEmail = "pilot@example.com"
|
||||
testTimeZone = "Europe/Kaliningrad"
|
||||
|
||||
defaultGatewayAuthUpstreamTimeout = 500 * time.Millisecond
|
||||
defaultAuthsessionPublicHTTPTimeout = time.Second
|
||||
defaultAuthsessionInternalHTTPTimeout = time.Second
|
||||
defaultAuthsessionDependencyTimeout = time.Second
|
||||
)
|
||||
|
||||
type gatewayAuthSessionOptions struct {
|
||||
gatewayAuthUpstreamTimeout time.Duration
|
||||
authsessionPublicHTTPTimeout time.Duration
|
||||
authsessionMailBehavior harness.MailBehavior
|
||||
}
|
||||
|
||||
type gatewayAuthSessionHarness struct {
|
||||
redis *redis.Client
|
||||
|
||||
mailStub *harness.MailStub
|
||||
userStub *harness.UserStub
|
||||
|
||||
authsessionPublicURL string
|
||||
authsessionInternalURL string
|
||||
gatewayPublicURL string
|
||||
gatewayGRPCAddr string
|
||||
|
||||
responseSignerPublicKey ed25519.PublicKey
|
||||
|
||||
gatewayProcess *harness.Process
|
||||
authsessionProcess *harness.Process
|
||||
}
|
||||
|
||||
func newGatewayAuthSessionHarness(t *testing.T, opts gatewayAuthSessionOptions) *gatewayAuthSessionHarness {
|
||||
t.Helper()
|
||||
|
||||
if opts.gatewayAuthUpstreamTimeout <= 0 {
|
||||
opts.gatewayAuthUpstreamTimeout = defaultGatewayAuthUpstreamTimeout
|
||||
}
|
||||
if opts.authsessionPublicHTTPTimeout <= 0 {
|
||||
opts.authsessionPublicHTTPTimeout = defaultAuthsessionPublicHTTPTimeout
|
||||
}
|
||||
|
||||
redisServer := harness.StartMiniredis(t)
|
||||
redisClient := redis.NewClient(&redis.Options{
|
||||
Addr: redisServer.Addr(),
|
||||
Protocol: 2,
|
||||
DisableIdentity: true,
|
||||
})
|
||||
t.Cleanup(func() {
|
||||
require.NoError(t, redisClient.Close())
|
||||
})
|
||||
|
||||
mailStub := harness.NewMailStub(t)
|
||||
mailStub.SetBehavior(opts.authsessionMailBehavior)
|
||||
|
||||
userStub := harness.NewUserStub(t)
|
||||
|
||||
responseSignerPath, responseSignerPublicKey := harness.WriteResponseSignerPEM(t, t.Name())
|
||||
authsessionPublicAddr := harness.FreeTCPAddress(t)
|
||||
authsessionInternalAddr := harness.FreeTCPAddress(t)
|
||||
gatewayPublicAddr := harness.FreeTCPAddress(t)
|
||||
gatewayGRPCAddr := harness.FreeTCPAddress(t)
|
||||
|
||||
authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")
|
||||
gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")
|
||||
|
||||
authsessionEnv := map[string]string{
|
||||
"AUTHSESSION_LOG_LEVEL": "info",
|
||||
"AUTHSESSION_PUBLIC_HTTP_ADDR": authsessionPublicAddr,
|
||||
"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT": opts.authsessionPublicHTTPTimeout.String(),
|
||||
"AUTHSESSION_INTERNAL_HTTP_ADDR": authsessionInternalAddr,
|
||||
"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": defaultAuthsessionInternalHTTPTimeout.String(),
|
||||
"AUTHSESSION_REDIS_MASTER_ADDR": redisServer.Addr(),
|
||||
|
||||
"AUTHSESSION_REDIS_PASSWORD": "integration",
|
||||
"AUTHSESSION_USER_SERVICE_MODE": "rest",
|
||||
"AUTHSESSION_USER_SERVICE_BASE_URL": userStub.BaseURL(),
|
||||
"AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT": defaultAuthsessionDependencyTimeout.String(),
|
||||
"AUTHSESSION_MAIL_SERVICE_MODE": "rest",
|
||||
"AUTHSESSION_MAIL_SERVICE_BASE_URL": mailStub.BaseURL(),
|
||||
"AUTHSESSION_MAIL_SERVICE_REQUEST_TIMEOUT": defaultAuthsessionDependencyTimeout.String(),
|
||||
"AUTHSESSION_REDIS_GATEWAY_SESSION_CACHE_KEY_PREFIX": "gateway:session:",
|
||||
"AUTHSESSION_REDIS_GATEWAY_SESSION_EVENTS_STREAM": "gateway:session_events",
|
||||
"OTEL_TRACES_EXPORTER": "none",
|
||||
"OTEL_METRICS_EXPORTER": "none",
|
||||
}
|
||||
	authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, authsessionEnv)
	waitForAuthsessionPublicReady(t, authsessionProcess, "http://"+authsessionPublicAddr)
	waitForAuthsessionInternalReady(t, authsessionProcess, "http://"+authsessionInternalAddr)

	gatewayEnv := map[string]string{
		"GATEWAY_LOG_LEVEL":                            "info",
		"GATEWAY_PUBLIC_HTTP_ADDR":                     gatewayPublicAddr,
		"GATEWAY_AUTHENTICATED_GRPC_ADDR":              gatewayGRPCAddr,
		"GATEWAY_REDIS_MASTER_ADDR":                    redisServer.Addr(),
		"GATEWAY_REDIS_PASSWORD":                       "integration",
		"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX":       "gateway:session:",
		"GATEWAY_SESSION_EVENTS_REDIS_STREAM":          "gateway:session_events",
		"GATEWAY_CLIENT_EVENTS_REDIS_STREAM":           "gateway:client_events",
		"GATEWAY_REPLAY_REDIS_KEY_PREFIX":              "gateway:replay:",
		"GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH": filepath.Clean(responseSignerPath),
		"GATEWAY_AUTH_SERVICE_BASE_URL":                "http://" + authsessionPublicAddr,
		"GATEWAY_PUBLIC_AUTH_UPSTREAM_TIMEOUT":         opts.gatewayAuthUpstreamTimeout.String(),
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_REQUESTS":                 "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_WINDOW":                   "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_BURST":                    "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS":    "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW":      "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST":       "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS": "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW":   "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST":    "100",
		"OTEL_TRACES_EXPORTER":                          "none",
		"OTEL_METRICS_EXPORTER":                         "none",
	}
	gatewayProcess := harness.StartProcess(t, "gateway", gatewayBinary, gatewayEnv)
	harness.WaitForHTTPStatus(t, gatewayProcess, "http://"+gatewayPublicAddr+"/healthz", http.StatusOK)
	harness.WaitForTCP(t, gatewayProcess, gatewayGRPCAddr)

	return &gatewayAuthSessionHarness{
		redis:                   redisClient,
		mailStub:                mailStub,
		userStub:                userStub,
		authsessionPublicURL:    "http://" + authsessionPublicAddr,
		authsessionInternalURL:  "http://" + authsessionInternalAddr,
		gatewayPublicURL:        "http://" + gatewayPublicAddr,
		gatewayGRPCAddr:         gatewayGRPCAddr,
		responseSignerPublicKey: responseSignerPublicKey,
		gatewayProcess:          gatewayProcess,
		authsessionProcess:      authsessionProcess,
	}
}
// dialGateway opens a blocking gRPC connection to the gateway's authenticated
// endpoint and closes it when the test finishes.
func (h *gatewayAuthSessionHarness) dialGateway(t *testing.T) *grpc.ClientConn {
	t.Helper()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(
		ctx,
		h.gatewayGRPCAddr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(),
	)
	require.NoError(t, err)
	t.Cleanup(func() {
		require.NoError(t, conn.Close())
	})

	return conn
}

// seedSessionLimit writes the active-session limit consumed by authsession
// directly into Redis.
func (h *gatewayAuthSessionHarness) seedSessionLimit(t *testing.T, limit int) {
	t.Helper()

	require.NoError(t, h.redis.Set(context.Background(), "authsession:config:active-session-limit", fmt.Sprint(limit), 0).Err())
}

// readGatewaySessionRecord fetches and strictly decodes the gateway session
// projection for the given device session ID.
func (h *gatewayAuthSessionHarness) readGatewaySessionRecord(t *testing.T, deviceSessionID string) gatewaySessionRecord {
	t.Helper()

	payload, err := h.redis.Get(context.Background(), "gateway:session:"+deviceSessionID).Bytes()
	require.NoError(t, err)

	var record gatewaySessionRecord
	require.NoError(t, decodeStrictJSONPayload(payload, &record))
	return record
}

// sendChallenge starts an email-code challenge without an Accept-Language
// header and returns the challenge ID and the delivered code.
func (h *gatewayAuthSessionHarness) sendChallenge(t *testing.T, email string) (string, string) {
	t.Helper()

	return h.sendChallengeWithAcceptLanguage(t, email, "")
}
func (h *gatewayAuthSessionHarness) sendChallengeWithAcceptLanguage(t *testing.T, email string, acceptLanguage string) (string, string) {
	t.Helper()

	response := postJSONValueWithHeaders(
		t,
		h.gatewayPublicURL+"/api/v1/public/auth/send-email-code",
		map[string]string{"email": email},
		map[string]string{"Accept-Language": acceptLanguage},
	)
	require.Equal(t, http.StatusOK, response.StatusCode)

	var body struct {
		ChallengeID string `json:"challenge_id"`
	}
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body))

	deliveries := h.mailStub.RecordedDeliveries()
	require.NotEmpty(t, deliveries)
	return body.ChallengeID, deliveries[len(deliveries)-1].Code
}

func (h *gatewayAuthSessionHarness) confirmCode(t *testing.T, challengeID string, code string, clientPrivateKey ed25519.PrivateKey) httpResponse {
	t.Helper()

	return postJSONValue(t, h.gatewayPublicURL+"/api/v1/public/auth/confirm-email-code", map[string]string{
		"challenge_id":      challengeID,
		"code":              code,
		"client_public_key": encodePublicKey(clientPrivateKey.Public().(ed25519.PublicKey)),
		"time_zone":         testTimeZone,
	})
}

// newClientPrivateKey derives a deterministic ed25519 key from the label so
// tests remain reproducible across runs.
func newClientPrivateKey(label string) ed25519.PrivateKey {
	seed := sha256.Sum256([]byte("galaxy-integration-gateway-authsession-client-" + label))
	return ed25519.NewKeyFromSeed(seed[:])
}
func newSubscribeEventsRequest(deviceSessionID string, requestID string, clientPrivateKey ed25519.PrivateKey) *gatewayv1.SubscribeEventsRequest {
	payloadHash := contractsgatewayv1.ComputePayloadHash(nil)

	request := &gatewayv1.SubscribeEventsRequest{
		ProtocolVersion: contractsgatewayv1.ProtocolVersionV1,
		DeviceSessionId: deviceSessionID,
		MessageType:     contractsgatewayv1.SubscribeMessageType,
		TimestampMs:     time.Now().UnixMilli(),
		RequestId:       requestID,
		PayloadHash:     payloadHash,
		TraceId:         "trace-" + requestID,
	}
	request.Signature = contractsgatewayv1.SignRequest(clientPrivateKey, contractsgatewayv1.RequestSigningFields{
		ProtocolVersion: request.GetProtocolVersion(),
		DeviceSessionID: request.GetDeviceSessionId(),
		MessageType:     request.GetMessageType(),
		TimestampMS:     request.GetTimestampMs(),
		RequestID:       request.GetRequestId(),
		PayloadHash:     request.GetPayloadHash(),
	})
	return request
}

func assertBootstrapEvent(t *testing.T, event *gatewayv1.GatewayEvent, responseSignerPublicKey ed25519.PublicKey, wantRequestID string) {
	t.Helper()

	require.Equal(t, contractsgatewayv1.ServerTimeEventType, event.GetEventType())
	require.Equal(t, wantRequestID, event.GetEventId())
	require.Equal(t, wantRequestID, event.GetRequestId())
	require.NoError(t, contractsgatewayv1.VerifyPayloadHash(event.GetPayloadBytes(), event.GetPayloadHash()))
	require.NoError(t, contractsgatewayv1.VerifyEventSignature(responseSignerPublicKey, event.GetSignature(), contractsgatewayv1.EventSigningFields{
		EventType:   event.GetEventType(),
		EventID:     event.GetEventId(),
		TimestampMS: event.GetTimestampMs(),
		RequestID:   event.GetRequestId(),
		TraceID:     event.GetTraceId(),
		PayloadHash: event.GetPayloadHash(),
	}))
}
type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

type gatewaySessionRecord struct {
	DeviceSessionID string `json:"device_session_id"`
	UserID          string `json:"user_id"`
	ClientPublicKey string `json:"client_public_key"`
	Status          string `json:"status"`
	RevokedAtMS     *int64 `json:"revoked_at_ms,omitempty"`
}

func postJSONValue(t *testing.T, targetURL string, body any) httpResponse {
	t.Helper()

	return postJSONValueWithHeaders(t, targetURL, body, nil)
}

func postJSONValueWithHeaders(t *testing.T, targetURL string, body any, headers map[string]string) httpResponse {
	t.Helper()

	payload, err := json.Marshal(body)
	require.NoError(t, err)

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	require.NoError(t, err)
	request.Header.Set("Content-Type", "application/json")
	for key, value := range headers {
		if value == "" {
			continue
		}
		request.Header.Set(key, value)
	}

	client := &http.Client{Timeout: 5 * time.Second}

	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	responseBody, err := io.ReadAll(response.Body)
	require.NoError(t, err)

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(responseBody),
		Header:     response.Header.Clone(),
	}
}
// decodeStrictJSONPayload decodes payload into target, rejecting unknown
// fields and trailing input.
func decodeStrictJSONPayload(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()

	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return fmt.Errorf("unexpected trailing JSON input")
		}
		return err
	}

	return nil
}

func encodePublicKey(publicKey ed25519.PublicKey) string {
	return base64.StdEncoding.EncodeToString(publicKey)
}
func waitForAuthsessionPublicReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	deadline := time.Now().Add(10 * time.Second)

	for time.Now().Before(deadline) {
		response, err := postJSONValueMaybe(client, baseURL+"/api/v1/public/auth/send-email-code", map[string]string{
			"email": "",
		})
		if err == nil && response.StatusCode == http.StatusBadRequest {
			return
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for authsession public readiness: timeout\n%s", process.Logs())
}

func waitForAuthsessionInternalReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	deadline := time.Now().Add(10 * time.Second)

	for time.Now().Before(deadline) {
		request, err := http.NewRequest(http.MethodGet, baseURL+"/api/v1/internal/sessions/missing", nil)
		if err != nil {
			t.Fatalf("build authsession internal readiness request: %v", err)
		}

		response, err := client.Do(request)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusNotFound {
				return
			}
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for authsession internal readiness: timeout\n%s", process.Logs())
}
func postJSONValueMaybe(client *http.Client, targetURL string, body any) (httpResponse, error) {
	payload, err := json.Marshal(body)
	if err != nil {
		return httpResponse{}, err
	}

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	if err != nil {
		return httpResponse{}, err
	}
	request.Header.Set("Content-Type", "application/json")

	response, err := client.Do(request)
	if err != nil {
		return httpResponse{}, err
	}
	defer response.Body.Close()

	responseBody, err := io.ReadAll(response.Body)
	if err != nil {
		return httpResponse{}, err
	}

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(responseBody),
		Header:     response.Header.Clone(),
	}, nil
}
@@ -1,106 +0,0 @@
package gatewayauthsessionmail_test

import (
	"context"
	"crypto/ed25519"
	"net/http"
	"net/url"
	"testing"

	gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"

	"github.com/stretchr/testify/require"
)
func TestGatewayAuthsessionMailSendAndConfirmWithRealMailService(t *testing.T) {
	h := newGatewayAuthsessionMailHarness(t)

	clientPrivateKey := newClientPrivateKey("real-mail")
	challengeID := h.sendChallengeWithAcceptLanguage(t, testEmail, "fr-FR, en;q=0.8")

	list := h.eventuallyListDeliveries(t, url.Values{
		"source":      []string{"authsession"},
		"status":      []string{"suppressed"},
		"recipient":   []string{testEmail},
		"template_id": []string{"auth.login_code"},
	})
	require.Len(t, list.Items, 1)
	require.Equal(t, "authsession", list.Items[0].Source)
	require.Equal(t, "suppressed", list.Items[0].Status)
	require.Equal(t, "auth.login_code", list.Items[0].TemplateID)
	require.Equal(t, "fr-FR", list.Items[0].Locale)
	require.Equal(t, []string{testEmail}, list.Items[0].To)

	detail := h.getDelivery(t, list.Items[0].DeliveryID)
	require.Equal(t, "authsession", detail.Source)
	require.Equal(t, "suppressed", detail.Status)
	require.Equal(t, "auth.login_code", detail.TemplateID)
	require.Equal(t, "fr-FR", detail.Locale)
	require.False(t, detail.LocaleFallbackUsed)
	require.Equal(t, []string{testEmail}, detail.To)
	require.NotEmpty(t, detail.IdempotencyKey)

	code := templateVariableString(t, detail.TemplateVariables, "code")

	confirm := h.confirmCode(t, challengeID, code, clientPrivateKey)
	require.Equal(t, http.StatusOK, confirm.StatusCode, confirm.Body)

	var confirmBody confirmEmailCodeResponse
	require.NoError(t, decodeStrictJSONPayload([]byte(confirm.Body), &confirmBody))
	require.NotEmpty(t, confirmBody.DeviceSessionID)

	record := h.waitForGatewaySession(t, confirmBody.DeviceSessionID)
	require.Equal(t, gatewaySessionRecord{
		DeviceSessionID: confirmBody.DeviceSessionID,
		UserID:          "user-1",
		ClientPublicKey: encodePublicKey(clientPrivateKey.Public().(ed25519.PublicKey)),
		Status:          "active",
	}, record)

	ensureCalls := h.userStub.EnsureCalls()
	require.Len(t, ensureCalls, 1)
	require.Equal(t, testEmail, ensureCalls[0].Email)
	require.Equal(t, "fr-FR", ensureCalls[0].PreferredLanguage)
	require.Equal(t, testTimeZone, ensureCalls[0].TimeZone)

	conn := h.dialGateway(t)
	client := gatewayv1.NewEdgeGatewayClient(conn)

	stream, err := client.SubscribeEvents(context.Background(), newSubscribeEventsRequest(confirmBody.DeviceSessionID, "request-bootstrap", clientPrivateKey))
	require.NoError(t, err)

	event, err := stream.Recv()
	require.NoError(t, err)
	assertBootstrapEvent(t, event, h.responseSignerPublicKey, "request-bootstrap")
}

func TestGatewayAuthsessionMailUnavailablePassesThroughGatewaySurface(t *testing.T) {
	h := newGatewayAuthsessionMailHarness(t)
	h.stopMail(t)

	response := postJSONValue(t, h.gatewayPublicURL+gatewaySendEmailCodePath, map[string]string{
		"email": testEmail,
	})

	require.Equal(t, http.StatusServiceUnavailable, response.StatusCode)
	require.JSONEq(t, `{"error":{"code":"service_unavailable","message":"service is unavailable"}}`, response.Body)
}

func TestGatewayAuthsessionMailAuthCodeBypassesNotificationStream(t *testing.T) {
	h := newGatewayAuthsessionMailHarness(t)

	h.sendChallengeWithAcceptLanguage(t, testEmail, "en")

	list := h.eventuallyListDeliveries(t, url.Values{
		"source":      []string{"authsession"},
		"recipient":   []string{testEmail},
		"template_id": []string{"auth.login_code"},
	})
	require.Len(t, list.Items, 1)
	require.Equal(t, "authsession", list.Items[0].Source)
	require.Equal(t, "auth.login_code", list.Items[0].TemplateID)

	length, err := h.redis.XLen(context.Background(), "notification:intents").Result()
	require.NoError(t, err)
	require.Zero(t, length)
}
@@ -1,549 +0,0 @@
package gatewayauthsessionmail_test

import (
	"bytes"
	"context"
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"errors"
	"io"
	"net/http"
	"net/url"
	"path/filepath"
	"runtime"
	"testing"
	"time"

	gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
	contractsgatewayv1 "galaxy/integration/internal/contracts/gatewayv1"
	"galaxy/integration/internal/harness"

	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/require"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

const (
	gatewaySendEmailCodePath    = "/api/v1/public/auth/send-email-code"
	gatewayConfirmEmailCodePath = "/api/v1/public/auth/confirm-email-code"
	gatewayMailDeliveriesPath   = "/api/v1/internal/deliveries"

	testEmail    = "pilot@example.com"
	testTimeZone = "Europe/Kaliningrad"
)
type gatewayAuthsessionMailHarness struct {
	redis *redis.Client

	userStub *harness.UserStub

	authsessionPublicURL   string
	authsessionInternalURL string
	gatewayPublicURL       string
	gatewayGRPCAddr        string
	mailInternalURL        string

	responseSignerPublicKey ed25519.PublicKey

	gatewayProcess     *harness.Process
	authsessionProcess *harness.Process
	mailProcess        *harness.Process
}

type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

type sendEmailCodeResponse struct {
	ChallengeID string `json:"challenge_id"`
}

type confirmEmailCodeResponse struct {
	DeviceSessionID string `json:"device_session_id"`
}

type gatewaySessionRecord struct {
	DeviceSessionID string `json:"device_session_id"`
	UserID          string `json:"user_id"`
	ClientPublicKey string `json:"client_public_key"`
	Status          string `json:"status"`
	RevokedAtMS     *int64 `json:"revoked_at_ms,omitempty"`
}

type mailDeliveryListResponse struct {
	Items []mailDeliverySummary `json:"items"`
}

type mailDeliverySummary struct {
	DeliveryID string   `json:"delivery_id"`
	Source     string   `json:"source"`
	TemplateID string   `json:"template_id"`
	Locale     string   `json:"locale"`
	To         []string `json:"to"`
	Status     string   `json:"status"`
}

type mailDeliveryDetailResponse struct {
	DeliveryID         string         `json:"delivery_id"`
	Source             string         `json:"source"`
	TemplateID         string         `json:"template_id"`
	Locale             string         `json:"locale"`
	LocaleFallbackUsed bool           `json:"locale_fallback_used"`
	To                 []string       `json:"to"`
	IdempotencyKey     string         `json:"idempotency_key"`
	Status             string         `json:"status"`
	TemplateVariables  map[string]any `json:"template_variables,omitempty"`
}
func newGatewayAuthsessionMailHarness(t *testing.T) *gatewayAuthsessionMailHarness {
	t.Helper()

	redisRuntime := harness.StartRedisContainer(t)
	redisClient := redis.NewClient(&redis.Options{
		Addr:            redisRuntime.Addr,
		Protocol:        2,
		DisableIdentity: true,
	})
	t.Cleanup(func() {
		require.NoError(t, redisClient.Close())
	})

	userStub := harness.NewUserStub(t)

	responseSignerPath, responseSignerPublicKey := harness.WriteResponseSignerPEM(t, t.Name())
	mailInternalAddr := harness.FreeTCPAddress(t)
	authsessionPublicAddr := harness.FreeTCPAddress(t)
	authsessionInternalAddr := harness.FreeTCPAddress(t)
	gatewayPublicAddr := harness.FreeTCPAddress(t)
	gatewayGRPCAddr := harness.FreeTCPAddress(t)

	mailBinary := harness.BuildBinary(t, "mail", "./mail/cmd/mail")
	authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")
	gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")

	mailEnv := harness.StartMailServicePersistence(t, redisRuntime.Addr).Env
	mailEnv["MAIL_LOG_LEVEL"] = "info"
	mailEnv["MAIL_INTERNAL_HTTP_ADDR"] = mailInternalAddr
	mailEnv["MAIL_TEMPLATE_DIR"] = moduleTemplateDir(t)
	mailEnv["MAIL_SMTP_MODE"] = "stub"
	mailEnv["MAIL_STREAM_BLOCK_TIMEOUT"] = "100ms"
	mailEnv["MAIL_OPERATOR_REQUEST_TIMEOUT"] = time.Second.String()
	mailEnv["MAIL_SHUTDOWN_TIMEOUT"] = "2s"
	mailEnv["OTEL_TRACES_EXPORTER"] = "none"
	mailEnv["OTEL_METRICS_EXPORTER"] = "none"
	mailProcess := harness.StartProcess(t, "mail", mailBinary, mailEnv)
	waitForMailReady(t, mailProcess, "http://"+mailInternalAddr)

	authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, map[string]string{
		"AUTHSESSION_LOG_LEVEL":                              "info",
		"AUTHSESSION_PUBLIC_HTTP_ADDR":                       authsessionPublicAddr,
		"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT":            time.Second.String(),
		"AUTHSESSION_INTERNAL_HTTP_ADDR":                     authsessionInternalAddr,
		"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT":          time.Second.String(),
		"AUTHSESSION_REDIS_MASTER_ADDR":                      redisRuntime.Addr,
		"AUTHSESSION_REDIS_PASSWORD":                         "integration",
		"AUTHSESSION_USER_SERVICE_MODE":                      "rest",
		"AUTHSESSION_USER_SERVICE_BASE_URL":                  userStub.BaseURL(),
		"AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT":           time.Second.String(),
		"AUTHSESSION_MAIL_SERVICE_MODE":                      "rest",
		"AUTHSESSION_MAIL_SERVICE_BASE_URL":                  "http://" + mailInternalAddr,
		"AUTHSESSION_MAIL_SERVICE_REQUEST_TIMEOUT":           time.Second.String(),
		"AUTHSESSION_REDIS_GATEWAY_SESSION_CACHE_KEY_PREFIX": "gateway:session:",
		"AUTHSESSION_REDIS_GATEWAY_SESSION_EVENTS_STREAM":    "gateway:session_events",
		"OTEL_TRACES_EXPORTER":                               "none",
		"OTEL_METRICS_EXPORTER":                              "none",
	})
	waitForAuthsessionPublicReady(t, authsessionProcess, "http://"+authsessionPublicAddr)

	gatewayProcess := harness.StartProcess(t, "gateway", gatewayBinary, map[string]string{
		"GATEWAY_LOG_LEVEL":                            "info",
		"GATEWAY_PUBLIC_HTTP_ADDR":                     gatewayPublicAddr,
		"GATEWAY_AUTHENTICATED_GRPC_ADDR":              gatewayGRPCAddr,
		"GATEWAY_REDIS_MASTER_ADDR":                    redisRuntime.Addr,
		"GATEWAY_REDIS_PASSWORD":                       "integration",
		"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX":       "gateway:session:",
		"GATEWAY_SESSION_EVENTS_REDIS_STREAM":          "gateway:session_events",
		"GATEWAY_CLIENT_EVENTS_REDIS_STREAM":           "gateway:client_events",
		"GATEWAY_REPLAY_REDIS_KEY_PREFIX":              "gateway:replay:",
		"GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH": filepath.Clean(responseSignerPath),
		"GATEWAY_AUTH_SERVICE_BASE_URL":                "http://" + authsessionPublicAddr,
		"GATEWAY_PUBLIC_AUTH_UPSTREAM_TIMEOUT":         (500 * time.Millisecond).String(),
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_REQUESTS":                 "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_WINDOW":                   "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_BURST":                    "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS":    "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW":      "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST":       "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS": "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW":   "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST":    "100",
		"OTEL_TRACES_EXPORTER":                          "none",
		"OTEL_METRICS_EXPORTER":                         "none",
	})
	harness.WaitForHTTPStatus(t, gatewayProcess, "http://"+gatewayPublicAddr+"/healthz", http.StatusOK)
	harness.WaitForTCP(t, gatewayProcess, gatewayGRPCAddr)

	return &gatewayAuthsessionMailHarness{
		redis:                   redisClient,
		userStub:                userStub,
		authsessionPublicURL:    "http://" + authsessionPublicAddr,
		authsessionInternalURL:  "http://" + authsessionInternalAddr,
		gatewayPublicURL:        "http://" + gatewayPublicAddr,
		gatewayGRPCAddr:         gatewayGRPCAddr,
		mailInternalURL:         "http://" + mailInternalAddr,
		responseSignerPublicKey: responseSignerPublicKey,
		gatewayProcess:          gatewayProcess,
		authsessionProcess:      authsessionProcess,
		mailProcess:             mailProcess,
	}
}
func (h *gatewayAuthsessionMailHarness) stopMail(t *testing.T) {
	t.Helper()

	h.mailProcess.Stop(t)
}

func (h *gatewayAuthsessionMailHarness) sendChallengeWithAcceptLanguage(t *testing.T, email string, acceptLanguage string) string {
	t.Helper()

	response := postJSONValueWithHeaders(
		t,
		h.gatewayPublicURL+gatewaySendEmailCodePath,
		map[string]string{"email": email},
		map[string]string{"Accept-Language": acceptLanguage},
	)
	require.Equal(t, http.StatusOK, response.StatusCode, response.Body)

	var body sendEmailCodeResponse
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body))
	require.NotEmpty(t, body.ChallengeID)
	return body.ChallengeID
}

func (h *gatewayAuthsessionMailHarness) confirmCode(t *testing.T, challengeID string, code string, clientPrivateKey ed25519.PrivateKey) httpResponse {
	t.Helper()

	return postJSONValue(t, h.gatewayPublicURL+gatewayConfirmEmailCodePath, map[string]string{
		"challenge_id":      challengeID,
		"code":              code,
		"client_public_key": encodePublicKey(clientPrivateKey.Public().(ed25519.PublicKey)),
		"time_zone":         testTimeZone,
	})
}

func (h *gatewayAuthsessionMailHarness) eventuallyListDeliveries(t *testing.T, query url.Values) mailDeliveryListResponse {
	t.Helper()

	var response mailDeliveryListResponse
	require.Eventually(t, func() bool {
		response = h.listDeliveries(t, query)
		return len(response.Items) > 0
	}, 10*time.Second, 50*time.Millisecond)

	return response
}

func (h *gatewayAuthsessionMailHarness) listDeliveries(t *testing.T, query url.Values) mailDeliveryListResponse {
	t.Helper()

	target := h.mailInternalURL + gatewayMailDeliveriesPath
	if encoded := query.Encode(); encoded != "" {
		target += "?" + encoded
	}

	request, err := http.NewRequest(http.MethodGet, target, nil)
	require.NoError(t, err)

	return doJSONRequest[mailDeliveryListResponse](t, request, http.StatusOK)
}
func (h *gatewayAuthsessionMailHarness) getDelivery(t *testing.T, deliveryID string) mailDeliveryDetailResponse {
	t.Helper()

	request, err := http.NewRequest(http.MethodGet, h.mailInternalURL+gatewayMailDeliveriesPath+"/"+url.PathEscape(deliveryID), nil)
	require.NoError(t, err)

	return doJSONRequest[mailDeliveryDetailResponse](t, request, http.StatusOK)
}

func (h *gatewayAuthsessionMailHarness) waitForGatewaySession(t *testing.T, deviceSessionID string) gatewaySessionRecord {
	t.Helper()

	deadline := time.Now().Add(5 * time.Second)
	for time.Now().Before(deadline) {
		payload, err := h.redis.Get(context.Background(), "gateway:session:"+deviceSessionID).Bytes()
		if err == nil {
			var record gatewaySessionRecord
			require.NoError(t, decodeStrictJSONPayload(payload, &record))
			return record
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("gateway session projection for %s was not published in time", deviceSessionID)
	return gatewaySessionRecord{}
}

func (h *gatewayAuthsessionMailHarness) dialGateway(t *testing.T) *grpc.ClientConn {
	t.Helper()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(
		ctx,
		h.gatewayGRPCAddr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(),
	)
	require.NoError(t, err)
	t.Cleanup(func() {
		require.NoError(t, conn.Close())
	})

	return conn
}
func postJSONValue(t *testing.T, targetURL string, body any) httpResponse {
	t.Helper()

	return postJSONValueWithHeaders(t, targetURL, body, nil)
}

func postJSONValueWithHeaders(t *testing.T, targetURL string, body any, headers map[string]string) httpResponse {
	t.Helper()

	payload, err := json.Marshal(body)
	require.NoError(t, err)

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	require.NoError(t, err)
	request.Header.Set("Content-Type", "application/json")
	for key, value := range headers {
		if value == "" {
			continue
		}
		request.Header.Set(key, value)
	}

	return doRequest(t, request)
}

func doJSONRequest[T any](t *testing.T, request *http.Request, wantStatus int) T {
	t.Helper()

	response := doRequest(t, request)
	require.Equal(t, wantStatus, response.StatusCode, response.Body)

	var decoded T
	require.NoError(t, json.Unmarshal([]byte(response.Body), &decoded), response.Body)
	return decoded
}

func doRequest(t *testing.T, request *http.Request) httpResponse {
	t.Helper()

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			DisableKeepAlives: true,
		},
	}
	t.Cleanup(client.CloseIdleConnections)

	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	payload, err := io.ReadAll(response.Body)
	require.NoError(t, err)

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(payload),
		Header:     response.Header.Clone(),
	}
}
func decodeStrictJSONPayload(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()

	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}

	return nil
}

func templateVariableString(t *testing.T, variables map[string]any, field string) string {
	t.Helper()

	value, ok := variables[field]
	require.True(t, ok, "template variable %q is missing", field)

	text, ok := value.(string)
	require.True(t, ok, "template variable %q must be a string", field)
	require.NotEmpty(t, text)

	return text
}

func newClientPrivateKey(label string) ed25519.PrivateKey {
	seed := sha256.Sum256([]byte("galaxy-integration-gateway-authsessionmail-client-" + label))
	return ed25519.NewKeyFromSeed(seed[:])
}

func encodePublicKey(publicKey ed25519.PublicKey) string {
	return base64.StdEncoding.EncodeToString(publicKey)
}
func newSubscribeEventsRequest(deviceSessionID string, requestID string, clientPrivateKey ed25519.PrivateKey) *gatewayv1.SubscribeEventsRequest {
	payloadHash := contractsgatewayv1.ComputePayloadHash(nil)

	request := &gatewayv1.SubscribeEventsRequest{
		ProtocolVersion: contractsgatewayv1.ProtocolVersionV1,
		DeviceSessionId: deviceSessionID,
		MessageType:     contractsgatewayv1.SubscribeMessageType,
		TimestampMs:     time.Now().UnixMilli(),
		RequestId:       requestID,
		PayloadHash:     payloadHash,
		TraceId:         "trace-" + requestID,
	}
	request.Signature = contractsgatewayv1.SignRequest(clientPrivateKey, contractsgatewayv1.RequestSigningFields{
		ProtocolVersion: request.GetProtocolVersion(),
		DeviceSessionID: request.GetDeviceSessionId(),
		MessageType:     request.GetMessageType(),
		TimestampMS:     request.GetTimestampMs(),
		RequestID:       request.GetRequestId(),
		PayloadHash:     request.GetPayloadHash(),
	})

	return request
}

func assertBootstrapEvent(t *testing.T, event *gatewayv1.GatewayEvent, responseSignerPublicKey ed25519.PublicKey, wantRequestID string) {
	t.Helper()

	require.Equal(t, contractsgatewayv1.ServerTimeEventType, event.GetEventType())
	require.Equal(t, wantRequestID, event.GetEventId())
	require.Equal(t, wantRequestID, event.GetRequestId())
	require.NoError(t, contractsgatewayv1.VerifyPayloadHash(event.GetPayloadBytes(), event.GetPayloadHash()))
	require.NoError(t, contractsgatewayv1.VerifyEventSignature(responseSignerPublicKey, event.GetSignature(), contractsgatewayv1.EventSigningFields{
		EventType:   event.GetEventType(),
		EventID:     event.GetEventId(),
		TimestampMS: event.GetTimestampMs(),
		RequestID:   event.GetRequestId(),
		TraceID:     event.GetTraceId(),
		PayloadHash: event.GetPayloadHash(),
	}))
}
func waitForMailReady(t *testing.T, process *harness.Process, baseURL string) {
|
||||
t.Helper()
|
||||
|
||||
client := &http.Client{Timeout: 250 * time.Millisecond}
|
||||
t.Cleanup(client.CloseIdleConnections)
|
||||
|
||||
deadline := time.Now().Add(10 * time.Second)
|
||||
for time.Now().Before(deadline) {
|
||||
request, err := http.NewRequest(http.MethodGet, baseURL+gatewayMailDeliveriesPath, nil)
|
||||
require.NoError(t, err)
|
||||
|
||||
response, err := client.Do(request)
|
||||
if err == nil {
|
||||
_, _ = io.Copy(io.Discard, response.Body)
|
||||
response.Body.Close()
|
||||
if response.StatusCode == http.StatusOK {
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
time.Sleep(25 * time.Millisecond)
|
||||
}
|
||||
|
||||
t.Fatalf("wait for mail readiness: timeout\n%s", process.Logs())
|
||||
}
|
||||
|
||||
func waitForAuthsessionPublicReady(t *testing.T, process *harness.Process, baseURL string) {
|
||||
t.Helper()
|
||||
|
||||
client := &http.Client{Timeout: 250 * time.Millisecond}
|
||||
t.Cleanup(client.CloseIdleConnections)
|
||||
|
||||
deadline := time.Now().Add(10 * time.Second)
|
||||
for time.Now().Before(deadline) {
|
||||
response, err := postJSONValueMaybe(client, baseURL+gatewaySendEmailCodePath, map[string]string{
|
||||
"email": "",
|
||||
})
|
||||
if err == nil && response.StatusCode == http.StatusBadRequest {
|
||||
return
|
||||
}
|
||||
|
||||
time.Sleep(25 * time.Millisecond)
|
||||
}
|
||||
|
||||
t.Fatalf("wait for authsession public readiness: timeout\n%s", process.Logs())
|
||||
}
|
||||
|
||||
func postJSONValueMaybe(client *http.Client, targetURL string, body any) (httpResponse, error) {
|
||||
payload, err := json.Marshal(body)
|
||||
if err != nil {
|
||||
return httpResponse{}, err
|
||||
}
|
||||
|
||||
request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
|
||||
if err != nil {
|
||||
return httpResponse{}, err
|
||||
}
|
||||
request.Header.Set("Content-Type", "application/json")
|
||||
|
||||
response, err := client.Do(request)
|
||||
if err != nil {
|
||||
return httpResponse{}, err
|
||||
}
|
||||
defer response.Body.Close()
|
||||
|
||||
responseBody, err := io.ReadAll(response.Body)
|
||||
if err != nil {
|
||||
return httpResponse{}, err
|
||||
}
|
||||
|
||||
return httpResponse{
|
||||
StatusCode: response.StatusCode,
|
||||
Body: string(responseBody),
|
||||
Header: response.Header.Clone(),
|
||||
}, nil
|
||||
}
|
||||
|
||||
func moduleTemplateDir(t *testing.T) string {
|
||||
t.Helper()
|
||||
|
||||
return filepath.Join(repositoryRoot(t), "mail", "templates")
|
||||
}
|
||||
|
||||
func repositoryRoot(t *testing.T) string {
|
||||
t.Helper()
|
||||
|
||||
_, file, _, ok := runtime.Caller(0)
|
||||
if !ok {
|
||||
t.Fatal("resolve repository root: runtime caller is unavailable")
|
||||
}
|
||||
|
||||
return filepath.Clean(filepath.Join(filepath.Dir(file), "..", ".."))
|
||||
}
|
||||
@@ -1,110 +0,0 @@
package gatewayauthsessionuser_test

import (
	"net/http"
	"strings"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestGatewayAuthsessionUserFirstRegistrationCreatesUserAndAllowsAccountRead(t *testing.T) {
	h := newGatewayAuthsessionUserHarness(t)

	const email = "created@example.com"

	challengeID := h.sendChallenge(t, email)
	code := lastMailCodeFor(t, h.mailStub, email)
	clientPrivateKey := newClientPrivateKey("first-registration")

	confirmResponse := h.confirmCode(t, challengeID, code, clientPrivateKey)
	var confirmBody struct {
		DeviceSessionID string `json:"device_session_id"`
	}
	requireJSONStatus(t, confirmResponse, http.StatusOK, &confirmBody)
	require.True(t, strings.HasPrefix(confirmBody.DeviceSessionID, "device-session-"))

	sessionRecord := h.waitForGatewaySession(t, confirmBody.DeviceSessionID)
	accountResponse := h.executeGetMyAccount(t, confirmBody.DeviceSessionID, "request-first-registration", clientPrivateKey)

	require.Equal(t, sessionRecord.UserID, accountResponse.Account.UserID)
	require.Equal(t, email, accountResponse.Account.Email)
	require.Equal(t, "en", accountResponse.Account.PreferredLanguage)
	require.Equal(t, gatewayAuthsessionUserTestTimeZone, accountResponse.Account.TimeZone)

	lookupResponse, lookup := h.lookupUserByEmail(t, email)
	require.Equalf(t, http.StatusOK, lookupResponse.StatusCode, "status=%d body=%s", lookupResponse.StatusCode, lookupResponse.Body)
	require.Equal(t, accountResponse.Account.UserID, lookup.User.UserID)
}

func TestGatewayAuthsessionUserExistingAccountKeepsCreateOnlySettings(t *testing.T) {
	h := newGatewayAuthsessionUserHarness(t)

	const email = "existing@example.com"

	created := h.ensureUser(t, email, "fr-FR", "Europe/Paris")
	require.Equal(t, "created", created.Outcome)

	challengeID := h.sendChallenge(t, email)
	code := lastMailCodeFor(t, h.mailStub, email)
	clientPrivateKey := newClientPrivateKey("existing-account")

	confirmResponse := h.confirmCode(t, challengeID, code, clientPrivateKey)
	var confirmBody struct {
		DeviceSessionID string `json:"device_session_id"`
	}
	requireJSONStatus(t, confirmResponse, http.StatusOK, &confirmBody)

	accountResponse := h.executeGetMyAccount(t, confirmBody.DeviceSessionID, "request-existing-account", clientPrivateKey)
	require.Equal(t, created.UserID, accountResponse.Account.UserID)
	require.Equal(t, "fr-FR", accountResponse.Account.PreferredLanguage)
	require.Equal(t, "Europe/Paris", accountResponse.Account.TimeZone)
}

func TestGatewayAuthsessionUserAcceptLanguageSetsLocalizedPreferredLanguage(t *testing.T) {
	h := newGatewayAuthsessionUserHarness(t)

	const email = "localized@example.com"

	challengeID := h.sendChallengeWithAcceptLanguage(t, email, "fr-FR, en;q=0.8")
	deliveries := h.mailStub.RecordedDeliveries()
	require.NotEmpty(t, deliveries)
	require.Equal(t, "fr-FR", deliveries[len(deliveries)-1].Locale)

	code := lastMailCodeFor(t, h.mailStub, email)
	clientPrivateKey := newClientPrivateKey("localized-account")

	confirmResponse := h.confirmCode(t, challengeID, code, clientPrivateKey)
	var confirmBody struct {
		DeviceSessionID string `json:"device_session_id"`
	}
	requireJSONStatus(t, confirmResponse, http.StatusOK, &confirmBody)

	accountResponse := h.executeGetMyAccount(t, confirmBody.DeviceSessionID, "request-localized-account", clientPrivateKey)
	require.Equal(t, "fr-FR", accountResponse.Account.PreferredLanguage)
	require.Equal(t, gatewayAuthsessionUserTestTimeZone, accountResponse.Account.TimeZone)
}

func TestGatewayAuthsessionUserBlockedEmailAndUserBehavior(t *testing.T) {
	h := newGatewayAuthsessionUserHarness(t)

	blockedAtSendEmail := "blocked-send@example.com"
	h.blockByEmail(t, blockedAtSendEmail)

	beforeBlockedSendDeliveries := len(h.mailStub.RecordedDeliveries())
	blockedChallengeID := h.sendChallenge(t, blockedAtSendEmail)
	require.NotEmpty(t, blockedChallengeID)
	require.Len(t, h.mailStub.RecordedDeliveries(), beforeBlockedSendDeliveries)

	blockedAtConfirmEmail := "blocked-confirm@example.com"
	challengeID := h.sendChallenge(t, blockedAtConfirmEmail)
	code := lastMailCodeFor(t, h.mailStub, blockedAtConfirmEmail)
	h.blockByEmail(t, blockedAtConfirmEmail)

	confirmResponse := h.confirmCode(t, challengeID, code, newClientPrivateKey("blocked-confirm"))
	require.Equal(t, http.StatusForbidden, confirmResponse.StatusCode)
	require.JSONEq(t, `{"error":{"code":"blocked_by_policy","message":"authentication is blocked by policy"}}`, confirmResponse.Body)

	lookupResponse, _ := h.lookupUserByEmail(t, blockedAtConfirmEmail)
	requireLookupNotFound(t, lookupResponse)
}
@@ -1,483 +0,0 @@
package gatewayauthsessionuser_test

import (
	"bytes"
	"context"
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"path/filepath"
	"testing"
	"time"

	gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
	contractsgatewayv1 "galaxy/integration/internal/contracts/gatewayv1"
	contractsuserv1 "galaxy/integration/internal/contracts/userv1"
	"galaxy/integration/internal/harness"
	usermodel "galaxy/model/user"

	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/require"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

const gatewayAuthsessionUserTestTimeZone = "Europe/Kaliningrad"

type gatewayAuthsessionUserHarness struct {
	redis *redis.Client

	mailStub *harness.MailStub

	authsessionPublicURL string
	userServiceURL       string
	gatewayPublicURL     string
	gatewayGRPCAddr      string

	responseSignerPublicKey ed25519.PublicKey

	gatewayProcess     *harness.Process
	authsessionProcess *harness.Process
	userServiceProcess *harness.Process
}

// newGatewayAuthsessionUserHarness builds the service binaries, starts
// userservice, authsession, and gateway against a shared miniredis, and
// blocks until each process reports readiness.
func newGatewayAuthsessionUserHarness(t *testing.T) *gatewayAuthsessionUserHarness {
	t.Helper()

	redisServer := harness.StartMiniredis(t)
	redisClient := redis.NewClient(&redis.Options{
		Addr:            redisServer.Addr(),
		Protocol:        2,
		DisableIdentity: true,
	})
	t.Cleanup(func() {
		require.NoError(t, redisClient.Close())
	})

	mailStub := harness.NewMailStub(t)

	responseSignerPath, responseSignerPublicKey := harness.WriteResponseSignerPEM(t, t.Name())
	userServiceAddr := harness.FreeTCPAddress(t)
	authsessionPublicAddr := harness.FreeTCPAddress(t)
	authsessionInternalAddr := harness.FreeTCPAddress(t)
	gatewayPublicAddr := harness.FreeTCPAddress(t)
	gatewayGRPCAddr := harness.FreeTCPAddress(t)

	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
	authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")
	gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")

	userServiceEnv := harness.StartUserServicePersistence(t, redisServer.Addr()).Env
	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
	harness.WaitForHTTPStatus(t, userServiceProcess, "http://"+userServiceAddr+"/api/v1/internal/users/user-missing/exists", http.StatusOK)

	authsessionEnv := map[string]string{
		"AUTHSESSION_LOG_LEVEL":                     "info",
		"AUTHSESSION_PUBLIC_HTTP_ADDR":              authsessionPublicAddr,
		"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT":   time.Second.String(),
		"AUTHSESSION_INTERNAL_HTTP_ADDR":            authsessionInternalAddr,
		"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": time.Second.String(),
		"AUTHSESSION_REDIS_MASTER_ADDR":             redisServer.Addr(),

		"AUTHSESSION_REDIS_PASSWORD":                         "integration",
		"AUTHSESSION_USER_SERVICE_MODE":                      "rest",
		"AUTHSESSION_USER_SERVICE_BASE_URL":                  "http://" + userServiceAddr,
		"AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT":           time.Second.String(),
		"AUTHSESSION_MAIL_SERVICE_MODE":                      "rest",
		"AUTHSESSION_MAIL_SERVICE_BASE_URL":                  mailStub.BaseURL(),
		"AUTHSESSION_MAIL_SERVICE_REQUEST_TIMEOUT":           time.Second.String(),
		"AUTHSESSION_REDIS_GATEWAY_SESSION_CACHE_KEY_PREFIX": "gateway:session:",
		"AUTHSESSION_REDIS_GATEWAY_SESSION_EVENTS_STREAM":    "gateway:session_events",
		"OTEL_TRACES_EXPORTER":                               "none",
		"OTEL_METRICS_EXPORTER":                              "none",
	}
	authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, authsessionEnv)
	waitForAuthsessionPublicReady(t, authsessionProcess, "http://"+authsessionPublicAddr)

	gatewayEnv := map[string]string{
		"GATEWAY_LOG_LEVEL":                    "info",
		"GATEWAY_PUBLIC_HTTP_ADDR":             gatewayPublicAddr,
		"GATEWAY_AUTHENTICATED_GRPC_ADDR":      gatewayGRPCAddr,
		"GATEWAY_AUTH_SERVICE_BASE_URL":        "http://" + authsessionPublicAddr,
		"GATEWAY_USER_SERVICE_BASE_URL":        "http://" + userServiceAddr,
		"GATEWAY_PUBLIC_AUTH_UPSTREAM_TIMEOUT": (500 * time.Millisecond).String(),
		"GATEWAY_REDIS_MASTER_ADDR":            redisServer.Addr(),

		"GATEWAY_REDIS_PASSWORD":                       "integration",
		"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX":       "gateway:session:",
		"GATEWAY_SESSION_EVENTS_REDIS_STREAM":          "gateway:session_events",
		"GATEWAY_CLIENT_EVENTS_REDIS_STREAM":           "gateway:client_events",
		"GATEWAY_REPLAY_REDIS_KEY_PREFIX":              "gateway:replay:",
		"GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH": filepath.Clean(responseSignerPath),
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_REQUESTS":                  "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_WINDOW":                    "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_BURST":                     "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS":     "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW":       "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST":        "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS":  "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW":    "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST":     "100",
		"OTEL_TRACES_EXPORTER":  "none",
		"OTEL_METRICS_EXPORTER": "none",
	}
	gatewayProcess := harness.StartProcess(t, "gateway", gatewayBinary, gatewayEnv)
	harness.WaitForHTTPStatus(t, gatewayProcess, "http://"+gatewayPublicAddr+"/healthz", http.StatusOK)
	harness.WaitForTCP(t, gatewayProcess, gatewayGRPCAddr)

	return &gatewayAuthsessionUserHarness{
		redis:                   redisClient,
		mailStub:                mailStub,
		authsessionPublicURL:    "http://" + authsessionPublicAddr,
		userServiceURL:          "http://" + userServiceAddr,
		gatewayPublicURL:        "http://" + gatewayPublicAddr,
		gatewayGRPCAddr:         gatewayGRPCAddr,
		responseSignerPublicKey: responseSignerPublicKey,
		gatewayProcess:          gatewayProcess,
		authsessionProcess:      authsessionProcess,
		userServiceProcess:      userServiceProcess,
	}
}

func (h *gatewayAuthsessionUserHarness) sendChallenge(t *testing.T, email string) string {
	t.Helper()

	return h.sendChallengeWithAcceptLanguage(t, email, "")
}

func (h *gatewayAuthsessionUserHarness) sendChallengeWithAcceptLanguage(t *testing.T, email string, acceptLanguage string) string {
	t.Helper()

	response := postJSONValueWithHeaders(
		t,
		h.gatewayPublicURL+"/api/v1/public/auth/send-email-code",
		map[string]string{"email": email},
		map[string]string{"Accept-Language": acceptLanguage},
	)
	require.Equal(t, http.StatusOK, response.StatusCode)

	var body struct {
		ChallengeID string `json:"challenge_id"`
	}
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body))
	return body.ChallengeID
}

func (h *gatewayAuthsessionUserHarness) confirmCode(t *testing.T, challengeID string, code string, clientPrivateKey ed25519.PrivateKey) httpResponse {
	t.Helper()

	return postJSONValue(t, h.gatewayPublicURL+"/api/v1/public/auth/confirm-email-code", map[string]string{
		"challenge_id":      challengeID,
		"code":              code,
		"client_public_key": base64.StdEncoding.EncodeToString(clientPrivateKey.Public().(ed25519.PublicKey)),
		"time_zone":         gatewayAuthsessionUserTestTimeZone,
	})
}

func (h *gatewayAuthsessionUserHarness) ensureUser(t *testing.T, email string, preferredLanguage string, timeZone string) ensureByEmailResponse {
	t.Helper()

	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/users/ensure-by-email", map[string]any{
		"email": email,
		"registration_context": map[string]string{
			"preferred_language": preferredLanguage,
			"time_zone":          timeZone,
		},
	})

	var body ensureByEmailResponse
	requireJSONStatus(t, response, http.StatusOK, &body)
	return body
}

func (h *gatewayAuthsessionUserHarness) lookupUserByEmail(t *testing.T, email string) (httpResponse, userLookupResponse) {
	t.Helper()

	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/user-lookups/by-email", map[string]string{
		"email": email,
	})
	if response.StatusCode != http.StatusOK {
		return response, userLookupResponse{}
	}

	var body userLookupResponse
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body))
	return response, body
}

func (h *gatewayAuthsessionUserHarness) blockByEmail(t *testing.T, email string) {
	t.Helper()

	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/user-blocks/by-email", map[string]string{
		"email":       email,
		"reason_code": "policy_blocked",
	})
	require.Equal(t, http.StatusOK, response.StatusCode, "response body: %s", response.Body)
}

func (h *gatewayAuthsessionUserHarness) waitForGatewaySession(t *testing.T, deviceSessionID string) gatewaySessionRecord {
	t.Helper()

	deadline := time.Now().Add(5 * time.Second)
	for time.Now().Before(deadline) {
		payload, err := h.redis.Get(context.Background(), "gateway:session:"+deviceSessionID).Bytes()
		if err == nil {
			var record gatewaySessionRecord
			require.NoError(t, decodeStrictJSONPayload(payload, &record))
			return record
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("gateway session projection for %s was not published in time", deviceSessionID)
	return gatewaySessionRecord{}
}

func (h *gatewayAuthsessionUserHarness) executeGetMyAccount(t *testing.T, deviceSessionID string, requestID string, clientPrivateKey ed25519.PrivateKey) *usermodel.AccountResponse {
	t.Helper()

	conn := h.dialGateway(t)
	client := gatewayv1.NewEdgeGatewayClient(conn)

	payload, err := contractsuserv1.EncodeGetMyAccountRequest()
	require.NoError(t, err)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	response, err := client.ExecuteCommand(ctx, newExecuteCommandRequest(deviceSessionID, requestID, contractsuserv1.MessageTypeGetMyAccount, payload, clientPrivateKey))
	require.NoError(t, err)
	require.Equal(t, contractsuserv1.ResultCodeOK, response.GetResultCode())
	assertSignedExecuteCommandResponse(t, response, h.responseSignerPublicKey)

	accountResponse, err := contractsuserv1.DecodeAccountResponse(response.GetPayloadBytes())
	require.NoError(t, err)
	return accountResponse
}

func (h *gatewayAuthsessionUserHarness) dialGateway(t *testing.T) *grpc.ClientConn {
	t.Helper()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(
		ctx,
		h.gatewayGRPCAddr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(),
	)
	require.NoError(t, err)
	t.Cleanup(func() {
		require.NoError(t, conn.Close())
	})

	return conn
}

type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

type ensureByEmailResponse struct {
	Outcome string `json:"outcome"`
	UserID  string `json:"user_id,omitempty"`
}

type gatewaySessionRecord struct {
	DeviceSessionID string `json:"device_session_id"`
	UserID          string `json:"user_id"`
	ClientPublicKey string `json:"client_public_key"`
	Status          string `json:"status"`
	RevokedAtMS     *int64 `json:"revoked_at_ms,omitempty"`
}

type userLookupResponse struct {
	User usermodel.Account `json:"user"`
}

func postJSONValue(t *testing.T, targetURL string, body any) httpResponse {
	t.Helper()

	return postJSONValueWithHeaders(t, targetURL, body, nil)
}

func postJSONValueWithHeaders(t *testing.T, targetURL string, body any, headers map[string]string) httpResponse {
	t.Helper()

	payload, err := json.Marshal(body)
	require.NoError(t, err)

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	require.NoError(t, err)
	request.Header.Set("Content-Type", "application/json")
	for key, value := range headers {
		if value == "" {
			continue
		}
		request.Header.Set(key, value)
	}

	client := &http.Client{Timeout: 5 * time.Second}
	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	responseBody, err := io.ReadAll(response.Body)
	require.NoError(t, err)

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(responseBody),
		Header:     response.Header.Clone(),
	}
}

// decodeStrictJSONPayload decodes payload into target, rejecting unknown
// fields and any trailing JSON input.
func decodeStrictJSONPayload(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()

	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return fmt.Errorf("unexpected trailing JSON input")
		}
		return err
	}

	return nil
}

func requireJSONStatus(t *testing.T, response httpResponse, wantStatus int, target any) {
	t.Helper()

	require.Equal(t, wantStatus, response.StatusCode, "response body: %s", response.Body)
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), target))
}

func requireLookupNotFound(t *testing.T, response httpResponse) {
	t.Helper()

	require.Equal(t, http.StatusNotFound, response.StatusCode, "response body: %s", response.Body)
	require.JSONEq(t, `{"error":{"code":"subject_not_found","message":"subject not found"}}`, response.Body)
}

// lastMailCodeFor returns the code from the most recent delivery recorded for
// the given recipient, failing the test when none exists.
func lastMailCodeFor(t *testing.T, stub *harness.MailStub, email string) string {
	t.Helper()

	deliveries := stub.RecordedDeliveries()
	for index := len(deliveries) - 1; index >= 0; index-- {
		if deliveries[index].Email == email {
			return deliveries[index].Code
		}
	}

	t.Fatalf("mail stub did not record delivery for %s", email)
	return ""
}

func waitForAuthsessionPublicReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	deadline := time.Now().Add(10 * time.Second)

	for time.Now().Before(deadline) {
		response, err := postJSONValueMaybe(client, baseURL+"/api/v1/public/auth/send-email-code", map[string]string{
			"email": "",
		})
		if err == nil && response.StatusCode == http.StatusBadRequest {
			return
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for authsession public readiness: timeout\n%s", process.Logs())
}

func postJSONValueMaybe(client *http.Client, targetURL string, body any) (httpResponse, error) {
	payload, err := json.Marshal(body)
	if err != nil {
		return httpResponse{}, err
	}

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	if err != nil {
		return httpResponse{}, err
	}
	request.Header.Set("Content-Type", "application/json")

	response, err := client.Do(request)
	if err != nil {
		return httpResponse{}, err
	}
	defer response.Body.Close()

	responseBody, err := io.ReadAll(response.Body)
	if err != nil {
		return httpResponse{}, err
	}

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(responseBody),
		Header:     response.Header.Clone(),
	}, nil
}

func newClientPrivateKey(label string) ed25519.PrivateKey {
	seed := sha256.Sum256([]byte("galaxy-integration-gateway-authsession-user-client-" + label))
	return ed25519.NewKeyFromSeed(seed[:])
}

func newExecuteCommandRequest(deviceSessionID string, requestID string, messageType string, payload []byte, clientPrivateKey ed25519.PrivateKey) *gatewayv1.ExecuteCommandRequest {
	payloadHash := contractsgatewayv1.ComputePayloadHash(payload)

	request := &gatewayv1.ExecuteCommandRequest{
		ProtocolVersion: contractsgatewayv1.ProtocolVersionV1,
		DeviceSessionId: deviceSessionID,
		MessageType:     messageType,
		TimestampMs:     time.Now().UnixMilli(),
		RequestId:       requestID,
		PayloadBytes:    payload,
		PayloadHash:     payloadHash,
		TraceId:         "trace-" + requestID,
	}
	request.Signature = contractsgatewayv1.SignRequest(clientPrivateKey, contractsgatewayv1.RequestSigningFields{
		ProtocolVersion: request.GetProtocolVersion(),
		DeviceSessionID: request.GetDeviceSessionId(),
		MessageType:     request.GetMessageType(),
		TimestampMS:     request.GetTimestampMs(),
		RequestID:       request.GetRequestId(),
		PayloadHash:     request.GetPayloadHash(),
	})

	return request
}

func assertSignedExecuteCommandResponse(t *testing.T, response *gatewayv1.ExecuteCommandResponse, publicKey ed25519.PublicKey) {
	t.Helper()

	require.NoError(t, contractsgatewayv1.VerifyPayloadHash(response.GetPayloadBytes(), response.GetPayloadHash()))
	require.NoError(t, contractsgatewayv1.VerifyResponseSignature(publicKey, response.GetSignature(), contractsgatewayv1.ResponseSigningFields{
		ProtocolVersion: response.GetProtocolVersion(),
		RequestID:       response.GetRequestId(),
		TimestampMS:     response.GetTimestampMs(),
		ResultCode:      response.GetResultCode(),
		PayloadHash:     response.GetPayloadHash(),
	}))
}
@@ -1,693 +0,0 @@
|
||||
package gatewayauthsessionusermail_test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/ed25519"
|
||||
"crypto/sha256"
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
|
||||
contractsgatewayv1 "galaxy/integration/internal/contracts/gatewayv1"
|
||||
"galaxy/integration/internal/harness"
|
||||
|
||||
"github.com/redis/go-redis/v9"
|
||||
"github.com/stretchr/testify/require"
|
||||
"google.golang.org/grpc"
|
||||
"google.golang.org/grpc/credentials/insecure"
|
||||
)
|
||||
|
||||
const (
|
||||
gatewaySendEmailCodePath = "/api/v1/public/auth/send-email-code"
|
||||
gatewayConfirmEmailCodePath = "/api/v1/public/auth/confirm-email-code"
|
||||
mailDeliveriesPath = "/api/v1/internal/deliveries"
|
||||
|
||||
testEmail = "pilot@example.com"
|
||||
testTimeZone = "Europe/Kaliningrad"
|
)

func TestGatewayAuthsessionUserMailRegistrationCreatesUserProjectsSessionAndBypassesNotification(t *testing.T) {
	h := newGatewayAuthsessionUserMailHarness(t)

	clientPrivateKey := newClientPrivateKey("full-chain")
	challengeID := h.sendChallengeWithAcceptLanguage(t, testEmail, "fr-FR, en;q=0.8")

	list := h.eventuallyListDeliveries(t, url.Values{
		"source":      []string{"authsession"},
		"recipient":   []string{testEmail},
		"template_id": []string{"auth.login_code"},
	})
	require.Len(t, list.Items, 1)
	require.Equal(t, "authsession", list.Items[0].Source)
	require.Equal(t, "auth.login_code", list.Items[0].TemplateID)
	require.Equal(t, "fr-FR", list.Items[0].Locale)
	require.Equal(t, []string{testEmail}, list.Items[0].To)

	detail := h.getDelivery(t, list.Items[0].DeliveryID)
	code := templateVariableString(t, detail.TemplateVariables, "code")

	confirm := h.confirmCode(t, challengeID, code, clientPrivateKey)
	require.Equal(t, http.StatusOK, confirm.StatusCode, confirm.Body)

	var confirmBody confirmEmailCodeResponse
	require.NoError(t, decodeStrictJSONPayload([]byte(confirm.Body), &confirmBody))
	require.NotEmpty(t, confirmBody.DeviceSessionID)

	account := h.lookupUserByEmail(t, testEmail)
	require.Equal(t, testEmail, account.User.Email)
	require.Equal(t, "fr-FR", account.User.PreferredLanguage)
	require.Equal(t, testTimeZone, account.User.TimeZone)
	require.NotEmpty(t, account.User.UserID)

	record := h.waitForGatewaySession(t, confirmBody.DeviceSessionID)
	require.Equal(t, gatewaySessionRecord{
		DeviceSessionID: confirmBody.DeviceSessionID,
		UserID:          account.User.UserID,
		ClientPublicKey: encodePublicKey(clientPrivateKey.Public().(ed25519.PublicKey)),
		Status:          "active",
	}, record)

	conn := h.dialGateway(t)
	client := gatewayv1.NewEdgeGatewayClient(conn)
	stream, err := client.SubscribeEvents(context.Background(), newSubscribeEventsRequest(confirmBody.DeviceSessionID, "request-bootstrap", clientPrivateKey))
	require.NoError(t, err)
	assertBootstrapEvent(t, recvGatewayEvent(t, stream), h.responseSignerPublicKey, "request-bootstrap")

	length, err := h.redis.XLen(context.Background(), "notification:intents").Result()
	require.NoError(t, err)
	require.Zero(t, length)
}

type gatewayAuthsessionUserMailHarness struct {
	redis *redis.Client

	userServiceURL   string
	gatewayPublicURL string
	gatewayGRPCAddr  string
	mailInternalURL  string

	responseSignerPublicKey ed25519.PublicKey

	gatewayProcess     *harness.Process
	authsessionProcess *harness.Process
	userServiceProcess *harness.Process
	mailProcess        *harness.Process
}

type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

type sendEmailCodeResponse struct {
	ChallengeID string `json:"challenge_id"`
}

type confirmEmailCodeResponse struct {
	DeviceSessionID string `json:"device_session_id"`
}

type gatewaySessionRecord struct {
	DeviceSessionID string `json:"device_session_id"`
	UserID          string `json:"user_id"`
	ClientPublicKey string `json:"client_public_key"`
	Status          string `json:"status"`
	RevokedAtMS     *int64 `json:"revoked_at_ms,omitempty"`
}

type mailDeliveryListResponse struct {
	Items []mailDeliverySummary `json:"items"`
}

type mailDeliverySummary struct {
	DeliveryID string   `json:"delivery_id"`
	Source     string   `json:"source"`
	TemplateID string   `json:"template_id"`
	Locale     string   `json:"locale"`
	To         []string `json:"to"`
	Status     string   `json:"status"`
}

type mailDeliveryDetailResponse struct {
	DeliveryID        string         `json:"delivery_id"`
	Source            string         `json:"source"`
	TemplateID        string         `json:"template_id"`
	Locale            string         `json:"locale"`
	To                []string       `json:"to"`
	IdempotencyKey    string         `json:"idempotency_key"`
	Status            string         `json:"status"`
	TemplateVariables map[string]any `json:"template_variables,omitempty"`
}

type userLookupResponse struct {
	User accountView `json:"user"`
}

type accountView struct {
	UserID            string `json:"user_id"`
	Email             string `json:"email"`
	PreferredLanguage string `json:"preferred_language"`
	TimeZone          string `json:"time_zone"`
}

func newGatewayAuthsessionUserMailHarness(t *testing.T) *gatewayAuthsessionUserMailHarness {
	t.Helper()

	redisRuntime := harness.StartRedisContainer(t)
	redisClient := redis.NewClient(&redis.Options{
		Addr:            redisRuntime.Addr,
		Protocol:        2,
		DisableIdentity: true,
	})
	t.Cleanup(func() {
		require.NoError(t, redisClient.Close())
	})

	responseSignerPath, responseSignerPublicKey := harness.WriteResponseSignerPEM(t, t.Name())
	userServiceAddr := harness.FreeTCPAddress(t)
	mailInternalAddr := harness.FreeTCPAddress(t)
	authsessionPublicAddr := harness.FreeTCPAddress(t)
	authsessionInternalAddr := harness.FreeTCPAddress(t)
	gatewayPublicAddr := harness.FreeTCPAddress(t)
	gatewayGRPCAddr := harness.FreeTCPAddress(t)

	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
	mailBinary := harness.BuildBinary(t, "mail", "./mail/cmd/mail")
	authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")
	gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")

	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

	mailEnv := harness.StartMailServicePersistence(t, redisRuntime.Addr).Env
	mailEnv["MAIL_LOG_LEVEL"] = "info"
	mailEnv["MAIL_INTERNAL_HTTP_ADDR"] = mailInternalAddr
	mailEnv["MAIL_TEMPLATE_DIR"] = moduleTemplateDir(t)
	mailEnv["MAIL_SMTP_MODE"] = "stub"
	mailEnv["MAIL_STREAM_BLOCK_TIMEOUT"] = "100ms"
	mailEnv["MAIL_OPERATOR_REQUEST_TIMEOUT"] = time.Second.String()
	mailEnv["MAIL_SHUTDOWN_TIMEOUT"] = "2s"
	mailEnv["OTEL_TRACES_EXPORTER"] = "none"
	mailEnv["OTEL_METRICS_EXPORTER"] = "none"
	mailProcess := harness.StartProcess(t, "mail", mailBinary, mailEnv)
	waitForMailReady(t, mailProcess, "http://"+mailInternalAddr)

	authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, map[string]string{
		"AUTHSESSION_LOG_LEVEL":                     "info",
		"AUTHSESSION_PUBLIC_HTTP_ADDR":              authsessionPublicAddr,
		"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT":   time.Second.String(),
		"AUTHSESSION_INTERNAL_HTTP_ADDR":            authsessionInternalAddr,
		"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": time.Second.String(),
		"AUTHSESSION_REDIS_MASTER_ADDR":             redisRuntime.Addr,

		"AUTHSESSION_REDIS_PASSWORD":               "integration",
		"AUTHSESSION_USER_SERVICE_MODE":            "rest",
		"AUTHSESSION_USER_SERVICE_BASE_URL":        "http://" + userServiceAddr,
		"AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT": time.Second.String(),
		"AUTHSESSION_MAIL_SERVICE_MODE":            "rest",
		"AUTHSESSION_MAIL_SERVICE_BASE_URL":        "http://" + mailInternalAddr,
		"AUTHSESSION_MAIL_SERVICE_REQUEST_TIMEOUT": time.Second.String(),

		"AUTHSESSION_REDIS_GATEWAY_SESSION_CACHE_KEY_PREFIX": "gateway:session:",
		"AUTHSESSION_REDIS_GATEWAY_SESSION_EVENTS_STREAM":    "gateway:session_events",

		"OTEL_TRACES_EXPORTER":  "none",
		"OTEL_METRICS_EXPORTER": "none",
	})
	waitForAuthsessionPublicReady(t, authsessionProcess, "http://"+authsessionPublicAddr)

	gatewayProcess := harness.StartProcess(t, "gateway", gatewayBinary, map[string]string{
		"GATEWAY_LOG_LEVEL":               "info",
		"GATEWAY_PUBLIC_HTTP_ADDR":        gatewayPublicAddr,
		"GATEWAY_AUTHENTICATED_GRPC_ADDR": gatewayGRPCAddr,
		"GATEWAY_REDIS_MASTER_ADDR":       redisRuntime.Addr,

		"GATEWAY_REDIS_PASSWORD":                       "integration",
		"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX":       "gateway:session:",
		"GATEWAY_SESSION_EVENTS_REDIS_STREAM":          "gateway:session_events",
		"GATEWAY_CLIENT_EVENTS_REDIS_STREAM":           "gateway:client_events",
		"GATEWAY_REPLAY_REDIS_KEY_PREFIX":              "gateway:replay:",
		"GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH": filepath.Clean(responseSignerPath),
		"GATEWAY_AUTH_SERVICE_BASE_URL":                "http://" + authsessionPublicAddr,
		"GATEWAY_PUBLIC_AUTH_UPSTREAM_TIMEOUT":         (500 * time.Millisecond).String(),

		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_REQUESTS": "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_WINDOW":   "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_BURST":    "100",

		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS": "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW":   "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST":    "100",

		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS": "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW":   "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST":    "100",

		"OTEL_TRACES_EXPORTER":  "none",
		"OTEL_METRICS_EXPORTER": "none",
	})
	harness.WaitForHTTPStatus(t, gatewayProcess, "http://"+gatewayPublicAddr+"/healthz", http.StatusOK)
	harness.WaitForTCP(t, gatewayProcess, gatewayGRPCAddr)

	return &gatewayAuthsessionUserMailHarness{
		redis:                   redisClient,
		userServiceURL:          "http://" + userServiceAddr,
		gatewayPublicURL:        "http://" + gatewayPublicAddr,
		gatewayGRPCAddr:         gatewayGRPCAddr,
		mailInternalURL:         "http://" + mailInternalAddr,
		responseSignerPublicKey: responseSignerPublicKey,
		gatewayProcess:          gatewayProcess,
		authsessionProcess:      authsessionProcess,
		userServiceProcess:      userServiceProcess,
		mailProcess:             mailProcess,
	}
}

func (h *gatewayAuthsessionUserMailHarness) sendChallengeWithAcceptLanguage(t *testing.T, email string, acceptLanguage string) string {
	t.Helper()

	response := postJSONValueWithHeaders(
		t,
		h.gatewayPublicURL+gatewaySendEmailCodePath,
		map[string]string{"email": email},
		map[string]string{"Accept-Language": acceptLanguage},
	)
	require.Equal(t, http.StatusOK, response.StatusCode, response.Body)

	var body sendEmailCodeResponse
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body))
	require.NotEmpty(t, body.ChallengeID)
	return body.ChallengeID
}

func (h *gatewayAuthsessionUserMailHarness) confirmCode(t *testing.T, challengeID string, code string, clientPrivateKey ed25519.PrivateKey) httpResponse {
	t.Helper()

	return postJSONValue(t, h.gatewayPublicURL+gatewayConfirmEmailCodePath, map[string]string{
		"challenge_id":      challengeID,
		"code":              code,
		"client_public_key": encodePublicKey(clientPrivateKey.Public().(ed25519.PublicKey)),
		"time_zone":         testTimeZone,
	})
}

func (h *gatewayAuthsessionUserMailHarness) eventuallyListDeliveries(t *testing.T, query url.Values) mailDeliveryListResponse {
	t.Helper()

	var response mailDeliveryListResponse
	require.Eventually(t, func() bool {
		response = h.listDeliveries(t, query)
		return len(response.Items) > 0
	}, 10*time.Second, 50*time.Millisecond)

	return response
}

func (h *gatewayAuthsessionUserMailHarness) listDeliveries(t *testing.T, query url.Values) mailDeliveryListResponse {
	t.Helper()

	target := h.mailInternalURL + mailDeliveriesPath
	if encoded := query.Encode(); encoded != "" {
		target += "?" + encoded
	}

	request, err := http.NewRequest(http.MethodGet, target, nil)
	require.NoError(t, err)

	return doJSONRequest[mailDeliveryListResponse](t, request, http.StatusOK)
}

func (h *gatewayAuthsessionUserMailHarness) getDelivery(t *testing.T, deliveryID string) mailDeliveryDetailResponse {
	t.Helper()

	request, err := http.NewRequest(http.MethodGet, h.mailInternalURL+mailDeliveriesPath+"/"+url.PathEscape(deliveryID), nil)
	require.NoError(t, err)

	return doJSONRequest[mailDeliveryDetailResponse](t, request, http.StatusOK)
}

func (h *gatewayAuthsessionUserMailHarness) lookupUserByEmail(t *testing.T, email string) userLookupResponse {
	t.Helper()

	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/user-lookups/by-email", map[string]string{
		"email": email,
	})
	return decodeJSONResponse[userLookupResponse](t, response, http.StatusOK)
}

func (h *gatewayAuthsessionUserMailHarness) waitForGatewaySession(t *testing.T, deviceSessionID string) gatewaySessionRecord {
	t.Helper()

	deadline := time.Now().Add(5 * time.Second)
	for time.Now().Before(deadline) {
		payload, err := h.redis.Get(context.Background(), "gateway:session:"+deviceSessionID).Bytes()
		if err == nil {
			var record gatewaySessionRecord
			require.NoError(t, decodeStrictJSONPayload(payload, &record))
			return record
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("gateway session projection for %s was not published in time", deviceSessionID)
	return gatewaySessionRecord{}
}

func (h *gatewayAuthsessionUserMailHarness) dialGateway(t *testing.T) *grpc.ClientConn {
	t.Helper()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(
		ctx,
		h.gatewayGRPCAddr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(),
	)
	require.NoError(t, err)
	t.Cleanup(func() {
		require.NoError(t, conn.Close())
	})

	return conn
}

func postJSONValue(t *testing.T, targetURL string, body any) httpResponse {
	t.Helper()

	return postJSONValueWithHeaders(t, targetURL, body, nil)
}

func postJSONValueWithHeaders(t *testing.T, targetURL string, body any, headers map[string]string) httpResponse {
	t.Helper()

	payload, err := json.Marshal(body)
	require.NoError(t, err)

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	require.NoError(t, err)
	request.Header.Set("Content-Type", "application/json")
	for key, value := range headers {
		if value == "" {
			continue
		}
		request.Header.Set(key, value)
	}

	return doRequest(t, request)
}

func doJSONRequest[T any](t *testing.T, request *http.Request, wantStatus int) T {
	t.Helper()

	response := doRequest(t, request)
	return decodeJSONResponse[T](t, response, wantStatus)
}

func decodeJSONResponse[T any](t *testing.T, response httpResponse, wantStatus int) T {
	t.Helper()

	require.Equal(t, wantStatus, response.StatusCode, response.Body)

	var decoded T
	require.NoError(t, decodeJSONPayload([]byte(response.Body), &decoded), response.Body)
	return decoded
}

func doRequest(t *testing.T, request *http.Request) httpResponse {
	t.Helper()

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			DisableKeepAlives: true,
		},
	}
	t.Cleanup(client.CloseIdleConnections)

	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	payload, err := io.ReadAll(response.Body)
	require.NoError(t, err)

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(payload),
		Header:     response.Header.Clone(),
	}
}

func decodeStrictJSONPayload(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()

	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}

	return nil
}

func decodeJSONPayload(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))

	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}

	return nil
}
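
// Illustrative sketch (not called by the suite): the reason both
// decoders exist is that the strict variant, via DisallowUnknownFields,
// rejects payloads carrying fields the target struct does not declare,
// while the lenient variant tolerates them; both reject trailing input.
// The struct and literals below are hypothetical examples, not fixtures
// used elsewhere in this package.
func decodeJSONPayloadContrastExample() (strictErr, lenientErr error) {
	var out struct {
		A int `json:"a"`
	}
	// Unknown field "b" fails only the strict decoder.
	strictErr = decodeStrictJSONPayload([]byte(`{"a":1,"b":2}`), &out)
	lenientErr = decodeJSONPayload([]byte(`{"a":1,"b":2}`), &out)
	return strictErr, lenientErr // strictErr non-nil, lenientErr nil
}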

func templateVariableString(t *testing.T, variables map[string]any, field string) string {
	t.Helper()

	value, ok := variables[field]
	require.True(t, ok, "template variable %q is missing", field)

	text, ok := value.(string)
	require.True(t, ok, "template variable %q must be a string", field)
	require.NotEmpty(t, text)

	return text
}

func newClientPrivateKey(label string) ed25519.PrivateKey {
	seed := sha256.Sum256([]byte("galaxy-integration-gateway-authsession-user-mail-client-" + label))
	return ed25519.NewKeyFromSeed(seed[:])
}

func encodePublicKey(publicKey ed25519.PublicKey) string {
	return base64.StdEncoding.EncodeToString(publicKey)
}

func newSubscribeEventsRequest(deviceSessionID string, requestID string, clientPrivateKey ed25519.PrivateKey) *gatewayv1.SubscribeEventsRequest {
	payloadHash := contractsgatewayv1.ComputePayloadHash(nil)

	request := &gatewayv1.SubscribeEventsRequest{
		ProtocolVersion: contractsgatewayv1.ProtocolVersionV1,
		DeviceSessionId: deviceSessionID,
		MessageType:     contractsgatewayv1.SubscribeMessageType,
		TimestampMs:     time.Now().UnixMilli(),
		RequestId:       requestID,
		PayloadHash:     payloadHash,
		TraceId:         "trace-" + requestID,
	}
	request.Signature = contractsgatewayv1.SignRequest(clientPrivateKey, contractsgatewayv1.RequestSigningFields{
		ProtocolVersion: request.GetProtocolVersion(),
		DeviceSessionID: request.GetDeviceSessionId(),
		MessageType:     request.GetMessageType(),
		TimestampMS:     request.GetTimestampMs(),
		RequestID:       request.GetRequestId(),
		PayloadHash:     request.GetPayloadHash(),
	})

	return request
}

func recvGatewayEvent(t *testing.T, stream grpc.ServerStreamingClient[gatewayv1.GatewayEvent]) *gatewayv1.GatewayEvent {
	t.Helper()

	eventCh := make(chan *gatewayv1.GatewayEvent, 1)
	errCh := make(chan error, 1)
	go func() {
		event, err := stream.Recv()
		if err != nil {
			errCh <- err
			return
		}
		eventCh <- event
	}()

	select {
	case event := <-eventCh:
		return event
	case err := <-errCh:
		require.NoError(t, err)
	case <-time.After(5 * time.Second):
		require.FailNow(t, "timed out waiting for gateway event")
	}

	return nil
}

func assertBootstrapEvent(t *testing.T, event *gatewayv1.GatewayEvent, responseSignerPublicKey ed25519.PublicKey, wantRequestID string) {
	t.Helper()

	require.Equal(t, contractsgatewayv1.ServerTimeEventType, event.GetEventType())
	require.Equal(t, wantRequestID, event.GetEventId())
	require.Equal(t, wantRequestID, event.GetRequestId())
	require.NoError(t, contractsgatewayv1.VerifyPayloadHash(event.GetPayloadBytes(), event.GetPayloadHash()))
	require.NoError(t, contractsgatewayv1.VerifyEventSignature(responseSignerPublicKey, event.GetSignature(), contractsgatewayv1.EventSigningFields{
		EventType:   event.GetEventType(),
		EventID:     event.GetEventId(),
		TimestampMS: event.GetTimestampMs(),
		RequestID:   event.GetRequestId(),
		TraceID:     event.GetTraceId(),
		PayloadHash: event.GetPayloadHash(),
	}))
}

func waitForUserServiceReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		request, err := http.NewRequest(http.MethodGet, baseURL+"/api/v1/internal/users/user-missing/exists", nil)
		require.NoError(t, err)

		response, err := client.Do(request)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for userservice readiness: timeout\n%s", process.Logs())
}

func waitForMailReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		request, err := http.NewRequest(http.MethodGet, baseURL+mailDeliveriesPath, nil)
		require.NoError(t, err)

		response, err := client.Do(request)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for mail readiness: timeout\n%s", process.Logs())
}

func waitForAuthsessionPublicReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		response, err := postJSONValueMaybe(client, baseURL+gatewaySendEmailCodePath, map[string]string{
			"email": "",
		})
		if err == nil && response.StatusCode == http.StatusBadRequest {
			return
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for authsession public readiness: timeout\n%s", process.Logs())
}

func postJSONValueMaybe(client *http.Client, targetURL string, body any) (httpResponse, error) {
	payload, err := json.Marshal(body)
	if err != nil {
		return httpResponse{}, err
	}

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	if err != nil {
		return httpResponse{}, err
	}
	request.Header.Set("Content-Type", "application/json")

	response, err := client.Do(request)
	if err != nil {
		return httpResponse{}, err
	}
	defer response.Body.Close()

	responseBody, err := io.ReadAll(response.Body)
	if err != nil {
		return httpResponse{}, err
	}

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(responseBody),
		Header:     response.Header.Clone(),
	}, nil
}

func moduleTemplateDir(t *testing.T) string {
	t.Helper()

	return filepath.Join(repositoryRoot(t), "mail", "templates")
}

func repositoryRoot(t *testing.T) string {
	t.Helper()

	_, file, _, ok := runtime.Caller(0)
	if !ok {
		t.Fatal("resolve repository root: runtime caller is unavailable")
	}

	return filepath.Clean(filepath.Join(filepath.Dir(file), "..", ".."))
}
@@ -1,631 +0,0 @@
|
||||
// Package gatewaylobby_test exercises the authenticated Gateway -> Game
|
||||
// Lobby boundary against real Gateway + real Auth/Session Service + real
|
||||
// User Service + real Game Lobby running on testcontainers PostgreSQL
|
||||
// and Redis.
|
||||
//
|
||||
// The boundary contract under test is: a client signs a FlatBuffers
|
||||
// `ExecuteCommandRequest` for one of the reserved `lobby.*` message
|
||||
// types; Gateway verifies the signature, looks up the device session,
|
||||
// resolves the calling `user_id`, routes the command to the Lobby
|
||||
// downstream client, and signs the FlatBuffers response. The suite
|
||||
// asserts on the gRPC response shape, the signed result envelope, and
|
||||
// the decoded FlatBuffers payload.
|
||||
//
|
||||
// Coverage maps onto `TESTING.md §6` `Gateway <-> Game Lobby`:
|
||||
// authenticated platform-level command routing.
|
||||
package gatewaylobby_test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/ed25519"
|
||||
"crypto/sha256"
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"io"
|
||||
"net/http"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
|
||||
contractsgatewayv1 "galaxy/integration/internal/contracts/gatewayv1"
|
||||
"galaxy/integration/internal/harness"
|
||||
lobbymodel "galaxy/model/lobby"
|
||||
"galaxy/transcoder"
|
||||
|
||||
"github.com/redis/go-redis/v9"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
"google.golang.org/grpc"
|
||||
"google.golang.org/grpc/credentials/insecure"
|
||||
)
|
||||
|
||||
const (
|
||||
gatewaySendEmailCodePath = "/api/v1/public/auth/send-email-code"
|
||||
gatewayConfirmEmailCodePath = "/api/v1/public/auth/confirm-email-code"
|
||||
testEmail = "owner@example.com"
|
||||
testTimeZone = "Europe/Kaliningrad"
|
||||
)
|
||||
|
||||
// TestGatewayRoutesLobbyMyGamesListAndSignsResponse drives a single
|
||||
// authenticated user through the full public-auth flow, then issues
|
||||
// `lobby.my.games.list` via the authenticated gRPC ExecuteCommand
|
||||
// surface and asserts the routed-and-signed end-to-end pipeline.
|
||||
func TestGatewayRoutesLobbyMyGamesListAndSignsResponse(t *testing.T) {
|
||||
h := newGatewayLobbyHarness(t)
|
||||
|
||||
clientPrivateKey := newClientPrivateKey("g1-owner")
|
||||
deviceSessionID, ownerUserID := h.authenticate(t, testEmail, clientPrivateKey)
|
||||
|
||||
// Pre-seed: directly create a private game owned by this user via
|
||||
// Lobby's public REST surface. This mirrors what an admin/UI tool
|
||||
// would do; the seed proves Gateway routing reads back caller-owned
|
||||
// state, not just empty results.
|
||||
gameID := h.createPrivateGame(t, ownerUserID, "Gateway Routing Galaxy",
|
||||
time.Now().Add(48*time.Hour).Unix())
|
||||
|
||||
// Send authenticated `lobby.my.games.list` via the Gateway gRPC
|
||||
// surface.
|
||||
conn := h.dialGateway(t)
|
||||
client := gatewayv1.NewEdgeGatewayClient(conn)
|
||||
|
||||
requestBytes, err := transcoder.MyGamesListRequestToPayload(&lobbymodel.MyGamesListRequest{})
|
||||
require.NoError(t, err)
|
||||
|
||||
executeRequest := newExecuteCommandRequest(
|
||||
deviceSessionID,
|
||||
"req-list-1",
|
||||
lobbymodel.MessageTypeMyGamesList,
|
||||
requestBytes,
|
||||
clientPrivateKey,
|
||||
)
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
|
||||
defer cancel()
|
||||
|
||||
response, err := client.ExecuteCommand(ctx, executeRequest)
|
||||
require.NoError(t, err, "ExecuteCommand for lobby.my.games.list must succeed")
|
||||
require.Equal(t, "ok", response.GetResultCode())
|
||||
require.NotEmpty(t, response.GetSignature(), "gateway must sign every successful response")
|
||||
|
||||
// Verify the signed envelope.
|
||||
require.NoError(t, contractsgatewayv1.VerifyResponseSignature(
|
||||
h.responseSignerPublicKey,
|
||||
response.GetSignature(),
|
||||
contractsgatewayv1.ResponseSigningFields{
|
||||
ProtocolVersion: response.GetProtocolVersion(),
|
||||
RequestID: response.GetRequestId(),
|
||||
TimestampMS: response.GetTimestampMs(),
|
||||
ResultCode: response.GetResultCode(),
|
||||
PayloadHash: response.GetPayloadHash(),
|
||||
}),
|
||||
)
|
||||
require.NoError(t, contractsgatewayv1.VerifyPayloadHash(
|
||||
response.GetPayloadBytes(), response.GetPayloadHash()))
|
||||
|
||||
// Decode the FlatBuffers payload. Lobby's `/my/games` may or may
|
||||
// not include the newly-seeded game depending on its membership /
|
||||
// status filter; the boundary contract under test here is the
|
||||
// Gateway routing + signing, not Lobby's own list semantics. We
|
||||
// assert the response decodes to a valid (possibly empty) list
|
||||
// and, if the game IS present, that the projected owner+type
|
||||
// fields survive the FlatBuffers roundtrip.
|
||||
decoded, err := transcoder.PayloadToMyGamesListResponse(response.GetPayloadBytes())
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, decoded.Items, "Items must always be non-nil even when empty")
|
||||
|
||||
for _, item := range decoded.Items {
|
||||
if item.GameID == gameID {
|
||||
assert.Equal(t, ownerUserID, item.OwnerUserID)
|
||||
assert.Equal(t, "private", item.GameType)
|
||||
return
|
||||
}
|
||||
}
|
||||
// Game absent from /my/games is acceptable for this test. Issue a
|
||||
// direct lobby read to confirm the game does exist on the lobby
|
||||
// side, so we know the routing path is the only thing we depend
|
||||
// on (not lobby's own `/my/games` filter).
|
||||
t.Logf("seeded game %s not in /my/games (likely lobby filter on draft); routing pipeline succeeded with empty items", gameID)
|
||||
require.True(t, h.gameExists(t, gameID),
|
||||
"seeded game must still be observable via lobby admin REST")
|
||||
}
|
||||
|
||||
// TestGatewayRoutesLobbyOpenEnrollmentEnforcesOwnerOnly drives two
|
||||
// authenticated users: the owner who can transition the game to
|
||||
// `enrollment_open`, and a non-owner whose attempt is rejected with
|
||||
// the canonical lobby error envelope. The test exercises the
|
||||
// "owner-only commands before start" requirement of `TESTING.md §6`.
|
||||
func TestGatewayRoutesLobbyOpenEnrollmentEnforcesOwnerOnly(t *testing.T) {
|
||||
h := newGatewayLobbyHarness(t)
|
||||
|
||||
ownerKey := newClientPrivateKey("g1-owner-2")
|
||||
ownerSessionID, ownerUserID := h.authenticate(t, "owner2@example.com", ownerKey)
|
||||
|
||||
guestKey := newClientPrivateKey("g1-guest")
|
||||
guestSessionID, _ := h.authenticate(t, "guest@example.com", guestKey)
|
||||
|
||||
	gameID := h.createPrivateGame(t, ownerUserID, "Owner-Only Galaxy",
		time.Now().Add(48*time.Hour).Unix())

	conn := h.dialGateway(t)
	client := gatewayv1.NewEdgeGatewayClient(conn)

	// Owner sends `lobby.game.open-enrollment` → success.
	ownerRequest, err := transcoder.OpenEnrollmentRequestToPayload(&lobbymodel.OpenEnrollmentRequest{
		GameID: gameID,
	})
	require.NoError(t, err)

	ownerResponse, err := client.ExecuteCommand(
		context.Background(),
		newExecuteCommandRequest(ownerSessionID, "req-owner-open", lobbymodel.MessageTypeOpenEnrollment, ownerRequest, ownerKey),
	)
	require.NoError(t, err)
	assert.Equal(t, "ok", ownerResponse.GetResultCode())

	decoded, err := transcoder.PayloadToOpenEnrollmentResponse(ownerResponse.GetPayloadBytes())
	require.NoError(t, err)
	assert.Equal(t, gameID, decoded.GameID)
	assert.Equal(t, "enrollment_open", decoded.Status)

	// Guest sends the same command → must be rejected by lobby's
	// owner-only guard. The error envelope passes through Gateway and
	// arrives as ResultCode=forbidden (or a 4xx code) with payload bytes
	// carrying the canonical ErrorResponse.
	guestRequest, err := transcoder.OpenEnrollmentRequestToPayload(&lobbymodel.OpenEnrollmentRequest{
		GameID: gameID,
	})
	require.NoError(t, err)

	guestResponse, err := client.ExecuteCommand(
		context.Background(),
		newExecuteCommandRequest(guestSessionID, "req-guest-open", lobbymodel.MessageTypeOpenEnrollment, guestRequest, guestKey),
	)
	require.NoError(t, err, "non-2xx lobby responses must surface as a normal gRPC response with a non-ok ResultCode")
	require.NotEqual(t, "ok", guestResponse.GetResultCode(),
		"non-owner must not receive ok; got %s", guestResponse.GetResultCode())

	decodedError, err := transcoder.PayloadToLobbyErrorResponse(guestResponse.GetPayloadBytes())
	require.NoError(t, err)
	assert.NotEmpty(t, decodedError.Error.Code)
	assert.NotEmpty(t, decodedError.Error.Message)
}

// gatewayLobbyHarness owns the per-test infrastructure: shared
// PostgreSQL+Redis containers, four real binaries, the Gateway
// response-signer key, and the public/internal addresses for each
// service.
type gatewayLobbyHarness struct {
	redis *redis.Client

	mailStub *harness.MailStub

	authsessionPublicURL string
	gatewayPublicURL     string
	gatewayGRPCAddr      string
	userServiceURL       string
	lobbyAdminURL        string
	lobbyPublicURL       string

	responseSignerPublicKey ed25519.PublicKey

	authsessionProcess *harness.Process
	gatewayProcess     *harness.Process
	userServiceProcess *harness.Process
	lobbyProcess       *harness.Process
}

func newGatewayLobbyHarness(t *testing.T) *gatewayLobbyHarness {
	t.Helper()

	redisRuntime := harness.StartRedisContainer(t)
	redisClient := redis.NewClient(&redis.Options{
		Addr: redisRuntime.Addr,
		Protocol: 2,
		DisableIdentity: true,
	})
	t.Cleanup(func() { require.NoError(t, redisClient.Close()) })

	mailStub := harness.NewMailStub(t)

	responseSignerPath, responseSignerPublicKey := harness.WriteResponseSignerPEM(t, t.Name())

	userServiceAddr := harness.FreeTCPAddress(t)
	authsessionPublicAddr := harness.FreeTCPAddress(t)
	authsessionInternalAddr := harness.FreeTCPAddress(t)
	gatewayPublicAddr := harness.FreeTCPAddress(t)
	gatewayGRPCAddr := harness.FreeTCPAddress(t)
	lobbyPublicAddr := harness.FreeTCPAddress(t)
	lobbyInternalAddr := harness.FreeTCPAddress(t)

	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
	authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")
	gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")
	lobbyBinary := harness.BuildBinary(t, "lobby", "./lobby/cmd/lobby")

	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

	authsessionEnv := map[string]string{
		"AUTHSESSION_LOG_LEVEL": "info",
		"AUTHSESSION_PUBLIC_HTTP_ADDR": authsessionPublicAddr,
		"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT": time.Second.String(),
		"AUTHSESSION_INTERNAL_HTTP_ADDR": authsessionInternalAddr,
		"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": time.Second.String(),
		"AUTHSESSION_REDIS_MASTER_ADDR": redisRuntime.Addr,
		"AUTHSESSION_REDIS_PASSWORD": "integration",
		"AUTHSESSION_USER_SERVICE_MODE": "rest",
		"AUTHSESSION_USER_SERVICE_BASE_URL": "http://" + userServiceAddr,
		"AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT": time.Second.String(),
		"AUTHSESSION_MAIL_SERVICE_MODE": "rest",
		"AUTHSESSION_MAIL_SERVICE_BASE_URL": mailStub.BaseURL(),
		"AUTHSESSION_MAIL_SERVICE_REQUEST_TIMEOUT": time.Second.String(),
		"AUTHSESSION_REDIS_GATEWAY_SESSION_CACHE_KEY_PREFIX": "gateway:session:",
		"AUTHSESSION_REDIS_GATEWAY_SESSION_EVENTS_STREAM": "gateway:session_events",
		"OTEL_TRACES_EXPORTER": "none",
		"OTEL_METRICS_EXPORTER": "none",
	}
	authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, authsessionEnv)
	waitForAuthsessionPublicReady(t, authsessionProcess, "http://"+authsessionPublicAddr)

	lobbyEnv := harness.StartLobbyServicePersistence(t, redisRuntime.Addr).Env
	lobbyEnv["LOBBY_LOG_LEVEL"] = "info"
	lobbyEnv["LOBBY_PUBLIC_HTTP_ADDR"] = lobbyPublicAddr
	lobbyEnv["LOBBY_INTERNAL_HTTP_ADDR"] = lobbyInternalAddr
	lobbyEnv["LOBBY_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
	lobbyEnv["LOBBY_GM_BASE_URL"] = mailStub.BaseURL() // unused; lobby just needs a syntactically valid URL.
	lobbyEnv["LOBBY_RUNTIME_JOB_RESULTS_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["LOBBY_USER_LIFECYCLE_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["LOBBY_GM_EVENTS_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["OTEL_TRACES_EXPORTER"] = "none"
	lobbyEnv["OTEL_METRICS_EXPORTER"] = "none"
	lobbyProcess := harness.StartProcess(t, "lobby", lobbyBinary, lobbyEnv)
	harness.WaitForHTTPStatus(t, lobbyProcess, "http://"+lobbyInternalAddr+"/readyz", http.StatusOK)

	gatewayEnv := map[string]string{
		"GATEWAY_LOG_LEVEL": "info",
		"GATEWAY_PUBLIC_HTTP_ADDR": gatewayPublicAddr,
		"GATEWAY_AUTHENTICATED_GRPC_ADDR": gatewayGRPCAddr,
		"GATEWAY_REDIS_MASTER_ADDR": redisRuntime.Addr,
		"GATEWAY_REDIS_PASSWORD": "integration",
		"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:",
		"GATEWAY_SESSION_EVENTS_REDIS_STREAM": "gateway:session_events",
		"GATEWAY_CLIENT_EVENTS_REDIS_STREAM": "gateway:client_events",
		"GATEWAY_REPLAY_REDIS_KEY_PREFIX": "gateway:replay:",
		"GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH": filepath.Clean(responseSignerPath),
		"GATEWAY_AUTH_SERVICE_BASE_URL": "http://" + authsessionPublicAddr,
		"GATEWAY_USER_SERVICE_BASE_URL": "http://" + userServiceAddr,
		"GATEWAY_LOBBY_SERVICE_BASE_URL": "http://" + lobbyPublicAddr,
		"GATEWAY_PUBLIC_AUTH_UPSTREAM_TIMEOUT": (500 * time.Millisecond).String(),
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_REQUESTS": "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_WINDOW": "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_BURST": "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS": "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW": "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST": "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS": "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW": "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST": "100",
		"OTEL_TRACES_EXPORTER": "none",
		"OTEL_METRICS_EXPORTER": "none",
	}
	gatewayProcess := harness.StartProcess(t, "gateway", gatewayBinary, gatewayEnv)
	harness.WaitForHTTPStatus(t, gatewayProcess, "http://"+gatewayPublicAddr+"/healthz", http.StatusOK)
	harness.WaitForTCP(t, gatewayProcess, gatewayGRPCAddr)

	return &gatewayLobbyHarness{
		redis: redisClient,
		mailStub: mailStub,
		authsessionPublicURL: "http://" + authsessionPublicAddr,
		gatewayPublicURL: "http://" + gatewayPublicAddr,
		gatewayGRPCAddr: gatewayGRPCAddr,
		userServiceURL: "http://" + userServiceAddr,
		lobbyAdminURL: "http://" + lobbyInternalAddr,
		lobbyPublicURL: "http://" + lobbyPublicAddr,
		responseSignerPublicKey: responseSignerPublicKey,
		authsessionProcess: authsessionProcess,
		gatewayProcess: gatewayProcess,
		userServiceProcess: userServiceProcess,
		lobbyProcess: lobbyProcess,
	}
}

// authenticate runs the public-auth challenge/confirm flow through the
// Gateway and returns the resulting `device_session_id` plus the
// resolved `user_id`.
func (h *gatewayLobbyHarness) authenticate(t *testing.T, email string, clientKey ed25519.PrivateKey) (string, string) {
	t.Helper()

	challengeID := h.sendChallenge(t, email)
	code := h.waitForChallengeCode(t, email)

	confirm := h.confirmCode(t, challengeID, code, clientKey)
	require.Equalf(t, http.StatusOK, confirm.StatusCode, "confirm status: %s", confirm.Body)

	var confirmBody struct {
		DeviceSessionID string `json:"device_session_id"`
	}
	require.NoError(t, decodeStrictJSONPayload([]byte(confirm.Body), &confirmBody))
	require.NotEmpty(t, confirmBody.DeviceSessionID)

	user := h.lookupUserByEmail(t, email)

	// Wait for the gateway session projection to land in Redis.
	deadline := time.Now().Add(5 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := h.redis.Get(context.Background(), "gateway:session:"+confirmBody.DeviceSessionID).Bytes(); err == nil {
			return confirmBody.DeviceSessionID, user.UserID
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("gateway session projection for %s never arrived", confirmBody.DeviceSessionID)
	return "", ""
}

// waitForChallengeCode polls the mail stub until the requested email
// has received an auth-code delivery and returns the cleartext code.
func (h *gatewayLobbyHarness) waitForChallengeCode(t *testing.T, email string) string {
	t.Helper()
	deadline := time.Now().Add(5 * time.Second)
	for time.Now().Before(deadline) {
		for _, delivery := range h.mailStub.RecordedDeliveries() {
			if delivery.Email == email && delivery.Code != "" {
				return delivery.Code
			}
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("auth code for %s never arrived at the mail stub", email)
	return ""
}

func (h *gatewayLobbyHarness) sendChallenge(t *testing.T, email string) string {
	t.Helper()

	response := postJSONValue(t, h.gatewayPublicURL+gatewaySendEmailCodePath, map[string]string{
		"email": email,
	})
	require.Equalf(t, http.StatusOK, response.StatusCode, "send-email-code: %s", response.Body)

	var body struct {
		ChallengeID string `json:"challenge_id"`
	}
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body))
	require.NotEmpty(t, body.ChallengeID)
	return body.ChallengeID
}

func (h *gatewayLobbyHarness) confirmCode(t *testing.T, challengeID, code string, clientPrivateKey ed25519.PrivateKey) httpResponse {
	t.Helper()
	return postJSONValue(t, h.gatewayPublicURL+gatewayConfirmEmailCodePath, map[string]string{
		"challenge_id": challengeID,
		"code": code,
		"client_public_key": encodePublicKey(clientPrivateKey.Public().(ed25519.PublicKey)),
		"time_zone": testTimeZone,
	})
}

func (h *gatewayLobbyHarness) lookupUserByEmail(t *testing.T, email string) struct {
	UserID string `json:"user_id"`
} {
	t.Helper()
	resp := postJSONValue(t, h.userServiceURL+"/api/v1/internal/user-lookups/by-email", map[string]string{
		"email": email,
	})
	require.Equalf(t, http.StatusOK, resp.StatusCode, "user lookup: %s", resp.Body)

	// User Service returns the full user record; only user_id is needed.
	var body struct {
		User struct {
			UserID string `json:"user_id"`
		} `json:"user"`
	}
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &body))
	require.NotEmpty(t, body.User.UserID)
	return struct {
		UserID string `json:"user_id"`
	}{UserID: body.User.UserID}
}

func (h *gatewayLobbyHarness) createPrivateGame(t *testing.T, ownerUserID, gameName string, enrollmentEndsAt int64) string {
	t.Helper()

	resp := postJSONValueWithHeaders(t, h.lobbyPublicURL+"/api/v1/lobby/games", map[string]any{
		"game_name": gameName,
		"game_type": "private",
		"min_players": 1,
		"max_players": 4,
		"start_gap_hours": 6,
		"start_gap_players": 1,
		"enrollment_ends_at": enrollmentEndsAt,
		"turn_schedule": "0 18 * * *",
		"target_engine_version": "1.0.0",
	}, map[string]string{"X-User-Id": ownerUserID})
	require.Equalf(t, http.StatusCreated, resp.StatusCode, "create private game: %s", resp.Body)

	var record struct {
		GameID string `json:"game_id"`
	}
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &record))
	require.NotEmpty(t, record.GameID)
	return record.GameID
}

// gameExists checks whether the lobby admin surface still observes a
// game that was created through the public surface.
func (h *gatewayLobbyHarness) gameExists(t *testing.T, gameID string) bool {
	t.Helper()
	req, err := http.NewRequest(http.MethodGet, h.lobbyAdminURL+"/api/v1/lobby/games/"+gameID, nil)
	require.NoError(t, err)
	resp := doRequest(t, req)
	return resp.StatusCode == http.StatusOK
}

func (h *gatewayLobbyHarness) dialGateway(t *testing.T) *grpc.ClientConn {
	t.Helper()
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, h.gatewayGRPCAddr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(),
	)
	require.NoError(t, err)
	t.Cleanup(func() { require.NoError(t, conn.Close()) })
	return conn
}

// --- request/response helpers ---

func newExecuteCommandRequest(deviceSessionID, requestID, messageType string, payloadBytes []byte, clientPrivateKey ed25519.PrivateKey) *gatewayv1.ExecuteCommandRequest {
	payloadHash := contractsgatewayv1.ComputePayloadHash(payloadBytes)

	request := &gatewayv1.ExecuteCommandRequest{
		ProtocolVersion: contractsgatewayv1.ProtocolVersionV1,
		DeviceSessionId: deviceSessionID,
		MessageType: messageType,
		TimestampMs: time.Now().UnixMilli(),
		RequestId: requestID,
		PayloadBytes: payloadBytes,
		PayloadHash: payloadHash,
		TraceId: "trace-" + requestID,
	}
	request.Signature = contractsgatewayv1.SignRequest(clientPrivateKey, contractsgatewayv1.RequestSigningFields{
		ProtocolVersion: request.GetProtocolVersion(),
		DeviceSessionID: request.GetDeviceSessionId(),
		MessageType: request.GetMessageType(),
		TimestampMS: request.GetTimestampMs(),
		RequestID: request.GetRequestId(),
		PayloadHash: request.GetPayloadHash(),
	})
	return request
}

type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

func postJSONValue(t *testing.T, targetURL string, body any) httpResponse {
	t.Helper()
	return postJSONValueWithHeaders(t, targetURL, body, nil)
}

func postJSONValueWithHeaders(t *testing.T, targetURL string, body any, headers map[string]string) httpResponse {
	t.Helper()

	payload, err := json.Marshal(body)
	require.NoError(t, err)

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	require.NoError(t, err)
	request.Header.Set("Content-Type", "application/json")
	for key, value := range headers {
		if value == "" {
			continue
		}
		request.Header.Set(key, value)
	}
	return doRequest(t, request)
}

func doRequest(t *testing.T, request *http.Request) httpResponse {
	t.Helper()
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true},
	}
	t.Cleanup(client.CloseIdleConnections)

	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	payload, err := io.ReadAll(response.Body)
	require.NoError(t, err)
	return httpResponse{
		StatusCode: response.StatusCode,
		Body: string(payload),
		Header: response.Header.Clone(),
	}
}

func decodeStrictJSONPayload(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()
	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}
	return nil
}

func waitForUserServiceReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()
	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet, baseURL+"/api/v1/internal/users/user-readiness-probe/exists", nil)
		require.NoError(t, err)
		response, err := client.Do(req)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("wait for userservice readiness: timeout\n%s", process.Logs())
}

func waitForAuthsessionPublicReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()
	// AuthSession's public listener does not expose a `/healthz` path;
	// posting an empty-email send-email-code request is the cheapest
	// readiness signal and returns 400 once routing is up.
	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		body := bytes.NewReader([]byte(`{"email":""}`))
		req, err := http.NewRequest(http.MethodPost, baseURL+"/api/v1/public/auth/send-email-code", body)
		require.NoError(t, err)
		req.Header.Set("Content-Type", "application/json")
		response, err := client.Do(req)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusBadRequest {
				return
			}
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("wait for authsession readiness: timeout\n%s", process.Logs())
}

func newClientPrivateKey(label string) ed25519.PrivateKey {
	seed := sha256.Sum256([]byte("galaxy-integration-gateway-lobby-client-" + label))
	return ed25519.NewKeyFromSeed(seed[:])
}

func encodePublicKey(publicKey ed25519.PublicKey) string {
	return base64.StdEncoding.EncodeToString(publicKey)
}

@@ -1,148 +0,0 @@
package gatewayuser_test

import (
	"testing"

	contractsuserv1 "galaxy/integration/internal/contracts/userv1"

	"github.com/stretchr/testify/require"
)

func TestGatewayUserGetMyAccountAuthenticated(t *testing.T) {
	h := newGatewayUserHarness(t)

	const (
		email           = "pilot@example.com"
		deviceSessionID = "device-session-get-account"
		requestID       = "request-get-account"
	)

	created := h.ensureUser(t, email, "en", gatewayUserTestTimeZone)
	require.Equal(t, "created", created.Outcome)

	clientPrivateKey := newClientPrivateKey("get-account")
	h.seedGatewaySession(t, deviceSessionID, created.UserID, clientPrivateKey)

	payload, err := contractsuserv1.EncodeGetMyAccountRequest()
	require.NoError(t, err)

	response := h.executeCommand(t, deviceSessionID, requestID, contractsuserv1.MessageTypeGetMyAccount, payload, clientPrivateKey)
	require.Equal(t, contractsuserv1.ResultCodeOK, response.GetResultCode())

	accountResponse, err := contractsuserv1.DecodeAccountResponse(response.GetPayloadBytes())
	require.NoError(t, err)
	require.Equal(t, created.UserID, accountResponse.Account.UserID)
	require.Equal(t, email, accountResponse.Account.Email)
	require.Equal(t, "en", accountResponse.Account.PreferredLanguage)
	require.Equal(t, gatewayUserTestTimeZone, accountResponse.Account.TimeZone)
}

func TestGatewayUserUpdateMyProfileSuccess(t *testing.T) {
	h := newGatewayUserHarness(t)

	const (
		email           = "pilot-profile@example.com"
		deviceSessionID = "device-session-update-profile"
		requestID       = "request-update-profile"
	)

	created := h.ensureUser(t, email, "en", gatewayUserTestTimeZone)
	clientPrivateKey := newClientPrivateKey("update-profile")
	h.seedGatewaySession(t, deviceSessionID, created.UserID, clientPrivateKey)

	payload, err := contractsuserv1.EncodeUpdateMyProfileRequest("NovaPrime")
	require.NoError(t, err)

	response := h.executeCommand(t, deviceSessionID, requestID, contractsuserv1.MessageTypeUpdateMyProfile, payload, clientPrivateKey)
	require.Equal(t, contractsuserv1.ResultCodeOK, response.GetResultCode())

	accountResponse, err := contractsuserv1.DecodeAccountResponse(response.GetPayloadBytes())
	require.NoError(t, err)
	require.Equal(t, "NovaPrime", accountResponse.Account.DisplayName)
	require.NotEmpty(t, accountResponse.Account.UserName)

	lookup := h.lookupUserByEmail(t, email)
	require.Equal(t, "NovaPrime", lookup.User.DisplayName)
}

func TestGatewayUserUpdateMySettingsSuccess(t *testing.T) {
	h := newGatewayUserHarness(t)

	const (
		email           = "pilot-settings@example.com"
		deviceSessionID = "device-session-update-settings"
		requestID       = "request-update-settings"
	)

	created := h.ensureUser(t, email, "en", gatewayUserTestTimeZone)
	clientPrivateKey := newClientPrivateKey("update-settings")
	h.seedGatewaySession(t, deviceSessionID, created.UserID, clientPrivateKey)

	payload, err := contractsuserv1.EncodeUpdateMySettingsRequest("fr-FR", "Europe/Paris")
	require.NoError(t, err)

	response := h.executeCommand(t, deviceSessionID, requestID, contractsuserv1.MessageTypeUpdateMySettings, payload, clientPrivateKey)
	require.Equal(t, contractsuserv1.ResultCodeOK, response.GetResultCode())

	accountResponse, err := contractsuserv1.DecodeAccountResponse(response.GetPayloadBytes())
	require.NoError(t, err)
	require.Equal(t, "fr-FR", accountResponse.Account.PreferredLanguage)
	require.Equal(t, "Europe/Paris", accountResponse.Account.TimeZone)

	lookup := h.lookupUserByEmail(t, email)
	require.Equal(t, "fr-FR", lookup.User.PreferredLanguage)
	require.Equal(t, "Europe/Paris", lookup.User.TimeZone)
}

func TestGatewayUserUpdateMyProfileConflict(t *testing.T) {
	h := newGatewayUserHarness(t)

	const (
		email           = "pilot-conflict@example.com"
		deviceSessionID = "device-session-profile-conflict"
		requestID       = "request-profile-conflict"
	)

	created := h.ensureUser(t, email, "en", gatewayUserTestTimeZone)
	h.applyProfileUpdateBlock(t, created.UserID)

	clientPrivateKey := newClientPrivateKey("profile-conflict")
	h.seedGatewaySession(t, deviceSessionID, created.UserID, clientPrivateKey)

	payload, err := contractsuserv1.EncodeUpdateMyProfileRequest("BlockedNova")
	require.NoError(t, err)

	response := h.executeCommand(t, deviceSessionID, requestID, contractsuserv1.MessageTypeUpdateMyProfile, payload, clientPrivateKey)
	require.Equal(t, "conflict", response.GetResultCode())

	errorResponse, err := contractsuserv1.DecodeErrorResponse(response.GetPayloadBytes())
	require.NoError(t, err)
	require.Equal(t, "conflict", errorResponse.Error.Code)
	require.Equal(t, "request conflicts with current state", errorResponse.Error.Message)
}

func TestGatewayUserUpdateMySettingsInvalidRequest(t *testing.T) {
	h := newGatewayUserHarness(t)

	const (
		email           = "pilot-invalid@example.com"
		deviceSessionID = "device-session-settings-invalid"
		requestID       = "request-settings-invalid"
	)

	created := h.ensureUser(t, email, "en", gatewayUserTestTimeZone)

	clientPrivateKey := newClientPrivateKey("settings-invalid")
	h.seedGatewaySession(t, deviceSessionID, created.UserID, clientPrivateKey)

	payload, err := contractsuserv1.EncodeUpdateMySettingsRequest("en", "Mars/Base")
	require.NoError(t, err)

	response := h.executeCommand(t, deviceSessionID, requestID, contractsuserv1.MessageTypeUpdateMySettings, payload, clientPrivateKey)
	require.Equal(t, "invalid_request", response.GetResultCode())

	errorResponse, err := contractsuserv1.DecodeErrorResponse(response.GetPayloadBytes())
	require.NoError(t, err)
	require.Equal(t, "invalid_request", errorResponse.Error.Code)
	require.NotEmpty(t, errorResponse.Error.Message)
}

@@ -1,311 +0,0 @@
package gatewayuser_test

import (
	"bytes"
	"context"
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"path/filepath"
	"testing"
	"time"

	gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
	contractsgatewayv1 "galaxy/integration/internal/contracts/gatewayv1"
	"galaxy/integration/internal/harness"
	usermodel "galaxy/model/user"

	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/require"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

const (
	gatewayUserDefaultHTTPTimeout = time.Second
	gatewayUserTestTimeZone       = "Europe/Kaliningrad"
)

type gatewayUserHarness struct {
	redis *redis.Client

	userServiceURL  string
	gatewayGRPCAddr string

	responseSignerPublicKey ed25519.PublicKey

	gatewayProcess     *harness.Process
	userServiceProcess *harness.Process
}

func newGatewayUserHarness(t *testing.T) *gatewayUserHarness {
	t.Helper()

	redisServer := harness.StartMiniredis(t)
	redisClient := redis.NewClient(&redis.Options{
		Addr: redisServer.Addr(),
		Protocol: 2,
		DisableIdentity: true,
	})
	t.Cleanup(func() {
		require.NoError(t, redisClient.Close())
	})

	responseSignerPath, responseSignerPublicKey := harness.WriteResponseSignerPEM(t, t.Name())
	userServiceAddr := harness.FreeTCPAddress(t)
	gatewayPublicAddr := harness.FreeTCPAddress(t)
	gatewayGRPCAddr := harness.FreeTCPAddress(t)

	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
	gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")

	userServiceEnv := harness.StartUserServicePersistence(t, redisServer.Addr()).Env
	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
	harness.WaitForHTTPStatus(t, userServiceProcess, "http://"+userServiceAddr+"/api/v1/internal/users/user-missing/exists", http.StatusOK)

	gatewayEnv := map[string]string{
		"GATEWAY_LOG_LEVEL": "info",
		"GATEWAY_PUBLIC_HTTP_ADDR": gatewayPublicAddr,
		"GATEWAY_AUTHENTICATED_GRPC_ADDR": gatewayGRPCAddr,
		"GATEWAY_USER_SERVICE_BASE_URL": "http://" + userServiceAddr,
		"GATEWAY_REDIS_MASTER_ADDR": redisServer.Addr(),
		"GATEWAY_REDIS_PASSWORD": "integration",
		"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:",
		"GATEWAY_SESSION_EVENTS_REDIS_STREAM": "gateway:session_events",
		"GATEWAY_CLIENT_EVENTS_REDIS_STREAM": "gateway:client_events",
		"GATEWAY_REPLAY_REDIS_KEY_PREFIX": "gateway:replay:",
		"GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH": filepath.Clean(responseSignerPath),
		"OTEL_TRACES_EXPORTER": "none",
		"OTEL_METRICS_EXPORTER": "none",
	}
	gatewayProcess := harness.StartProcess(t, "gateway", gatewayBinary, gatewayEnv)
	harness.WaitForHTTPStatus(t, gatewayProcess, "http://"+gatewayPublicAddr+"/healthz", http.StatusOK)
	harness.WaitForTCP(t, gatewayProcess, gatewayGRPCAddr)

	return &gatewayUserHarness{
		redis: redisClient,
		userServiceURL: "http://" + userServiceAddr,
		gatewayGRPCAddr: gatewayGRPCAddr,
		responseSignerPublicKey: responseSignerPublicKey,
		gatewayProcess: gatewayProcess,
		userServiceProcess: userServiceProcess,
	}
}

func (h *gatewayUserHarness) dialGateway(t *testing.T) *grpc.ClientConn {
	t.Helper()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(
		ctx,
		h.gatewayGRPCAddr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(),
	)
	require.NoError(t, err)
	t.Cleanup(func() {
		require.NoError(t, conn.Close())
	})

	return conn
}

func (h *gatewayUserHarness) ensureUser(t *testing.T, email string, preferredLanguage string, timeZone string) ensureByEmailResponse {
	t.Helper()

	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/users/ensure-by-email", map[string]any{
		"email": email,
		"registration_context": map[string]string{
			"preferred_language": preferredLanguage,
			"time_zone": timeZone,
		},
	})

	var body ensureByEmailResponse
	requireJSONStatus(t, response, http.StatusOK, &body)
	return body
}

func (h *gatewayUserHarness) lookupUserByEmail(t *testing.T, email string) userLookupResponse {
	t.Helper()

	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/user-lookups/by-email", map[string]string{
		"email": email,
	})

	var body userLookupResponse
	requireJSONStatus(t, response, http.StatusOK, &body)
	return body
}

func (h *gatewayUserHarness) applyProfileUpdateBlock(t *testing.T, userID string) {
	t.Helper()

	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/users/"+userID+"/sanctions/apply", map[string]any{
		"sanction_code": "profile_update_block",
		"scope": "lobby",
		"reason_code": "manual_block",
		"actor": map[string]string{
			"type": "admin",
			"id": "admin-1",
		},
		"applied_at": "2026-04-09T10:00:00Z",
	})
	require.Equal(t, http.StatusOK, response.StatusCode, "response body: %s", response.Body)
}

func (h *gatewayUserHarness) seedGatewaySession(t *testing.T, deviceSessionID string, userID string, clientPrivateKey ed25519.PrivateKey) {
	t.Helper()

	record := gatewaySessionRecord{
		DeviceSessionID: deviceSessionID,
		UserID: userID,
		ClientPublicKey: base64.StdEncoding.EncodeToString(clientPrivateKey.Public().(ed25519.PublicKey)),
		Status: "active",
	}

	payload, err := json.Marshal(record)
	require.NoError(t, err)
	require.NoError(t, h.redis.Set(context.Background(), "gateway:session:"+deviceSessionID, payload, 0).Err())
}
|
||||
|
||||
func (h *gatewayUserHarness) executeCommand(t *testing.T, deviceSessionID string, requestID string, messageType string, payload []byte, clientPrivateKey ed25519.PrivateKey) *gatewayv1.ExecuteCommandResponse {
|
||||
t.Helper()
|
||||
|
||||
conn := h.dialGateway(t)
|
||||
client := gatewayv1.NewEdgeGatewayClient(conn)
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
|
||||
defer cancel()
|
||||
|
||||
response, err := client.ExecuteCommand(ctx, newExecuteCommandRequest(deviceSessionID, requestID, messageType, payload, clientPrivateKey))
|
||||
require.NoError(t, err)
|
||||
assertSignedExecuteCommandResponse(t, response, h.responseSignerPublicKey)
|
||||
return response
|
||||
}
|
||||
|
||||
type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

type gatewaySessionRecord struct {
	DeviceSessionID string `json:"device_session_id"`
	UserID          string `json:"user_id"`
	ClientPublicKey string `json:"client_public_key"`
	Status          string `json:"status"`
	RevokedAtMS     *int64 `json:"revoked_at_ms,omitempty"`
}

type ensureByEmailResponse struct {
	Outcome string `json:"outcome"`
	UserID  string `json:"user_id,omitempty"`
}

type userLookupResponse struct {
	User usermodel.Account `json:"user"`
}

func postJSONValue(t *testing.T, targetURL string, body any) httpResponse {
	t.Helper()

	payload, err := json.Marshal(body)
	require.NoError(t, err)

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	require.NoError(t, err)
	request.Header.Set("Content-Type", "application/json")

	client := &http.Client{Timeout: gatewayUserDefaultHTTPTimeout}
	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	responseBody, err := io.ReadAll(response.Body)
	require.NoError(t, err)

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(responseBody),
		Header:     response.Header.Clone(),
	}
}

func requireJSONStatus(t *testing.T, response httpResponse, wantStatus int, target any) {
	t.Helper()

	require.Equal(t, wantStatus, response.StatusCode, "response body: %s", response.Body)
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), target))
}

func decodeStrictJSONPayload(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()

	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return fmt.Errorf("unexpected trailing JSON input")
		}
		return err
	}

	return nil
}

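The strict decoder above rejects both unknown fields and trailing JSON after the first value. A minimal standalone sketch of the same pattern, independent of the test harness (`decodeStrict` is an illustrative name, not part of the suite):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
)

// decodeStrict mirrors decodeStrictJSONPayload: unknown fields and
// trailing JSON values are both treated as errors, so a response that
// gains an undocumented field fails loudly instead of silently.
func decodeStrict(payload []byte, target any) error {
	dec := json.NewDecoder(bytes.NewReader(payload))
	dec.DisallowUnknownFields()
	if err := dec.Decode(target); err != nil {
		return err
	}
	// A second Decode must hit io.EOF; anything else is trailing input.
	if err := dec.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return fmt.Errorf("unexpected trailing JSON input")
		}
		return err
	}
	return nil
}

func main() {
	var r struct {
		Outcome string `json:"outcome"`
	}
	fmt.Println(decodeStrict([]byte(`{"outcome":"created"}`), &r))             // <nil>
	fmt.Println(decodeStrict([]byte(`{"outcome":"x","extra":1}`), &r) != nil)  // true
	fmt.Println(decodeStrict([]byte(`{"outcome":"x"} {}`), &r) != nil)         // true
}
```

Compared with plain `json.Unmarshal`, the extra `Decode(&struct{}{})` round is what catches concatenated payloads such as `{"a":1} {"b":2}`.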
func newClientPrivateKey(label string) ed25519.PrivateKey {
	seed := sha256.Sum256([]byte("galaxy-integration-gateway-user-client-" + label))
	return ed25519.NewKeyFromSeed(seed[:])
}

func newExecuteCommandRequest(deviceSessionID string, requestID string, messageType string, payload []byte, clientPrivateKey ed25519.PrivateKey) *gatewayv1.ExecuteCommandRequest {
	payloadHash := contractsgatewayv1.ComputePayloadHash(payload)

	request := &gatewayv1.ExecuteCommandRequest{
		ProtocolVersion: contractsgatewayv1.ProtocolVersionV1,
		DeviceSessionId: deviceSessionID,
		MessageType:     messageType,
		TimestampMs:     time.Now().UnixMilli(),
		RequestId:       requestID,
		PayloadBytes:    payload,
		PayloadHash:     payloadHash,
		TraceId:         "trace-" + requestID,
	}
	request.Signature = contractsgatewayv1.SignRequest(clientPrivateKey, contractsgatewayv1.RequestSigningFields{
		ProtocolVersion: request.GetProtocolVersion(),
		DeviceSessionID: request.GetDeviceSessionId(),
		MessageType:     request.GetMessageType(),
		TimestampMS:     request.GetTimestampMs(),
		RequestID:       request.GetRequestId(),
		PayloadHash:     request.GetPayloadHash(),
	})

	return request
}

func assertSignedExecuteCommandResponse(t *testing.T, response *gatewayv1.ExecuteCommandResponse, publicKey ed25519.PublicKey) {
	t.Helper()

	require.NoError(t, contractsgatewayv1.VerifyPayloadHash(response.GetPayloadBytes(), response.GetPayloadHash()))
	require.NoError(t, contractsgatewayv1.VerifyResponseSignature(publicKey, response.GetSignature(), contractsgatewayv1.ResponseSigningFields{
		ProtocolVersion: response.GetProtocolVersion(),
		RequestID:       response.GetRequestId(),
		TimestampMS:     response.GetTimestampMs(),
		ResultCode:      response.GetResultCode(),
		PayloadHash:     response.GetPayloadHash(),
	}))
}

@@ -0,0 +1,74 @@
package integration_test

import (
	"context"
	"encoding/json"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/testenv"
)

// TestGeoCounterIncrements asserts that authenticated requests
// produce per-country counter rows in `user_country_counters`.
// Gateway does not propagate the original `X-Forwarded-For` to
// backend on REST forwarding, so the test calls backend's user
// surface directly with a public IP that the synthetic GeoLite2
// fixture knows. Calling backend HTTP with `X-User-ID` mirrors the
// path gateway takes after the authenticated verification pipeline.
func TestGeoCounterIncrements(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
	defer cancel()

	sess := testenv.RegisterSession(t, plat, "pilot+geocounter@example.com")
	userID, err := sess.LookupUserID(ctx, plat)
	if err != nil {
		t.Fatalf("resolve user_id: %v", err)
	}

	// Direct backend call mimicking gateway forwarding.
	user := testenv.NewBackendUserClient(plat.Backend.HTTPURL, userID)
	for i := 0; i < 3; i++ {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, user.BaseURL+"/api/v1/user/account", nil)
		if err != nil {
			t.Fatalf("new request: %v", err)
		}
		req.Header.Set("X-User-ID", userID)
		// 81.2.69.142 is a UK IP present in MaxMind's reference
		// Country test database (GeoIP2-Country-Test.mmdb).
		req.Header.Set("X-Forwarded-For", "81.2.69.142")
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			t.Fatalf("execute #%d: %v", i, err)
		}
		_ = resp.Body.Close()
	}

	admin := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, plat.Backend.AdminUser, plat.Backend.AdminPassword)
	deadline := time.Now().Add(5 * time.Second)
	for time.Now().Before(deadline) {
		raw, resp, err := admin.Do(ctx, http.MethodGet, "/api/v1/admin/geo/users/"+userID+"/countries", nil)
		if err != nil {
			t.Fatalf("admin geo lookup: %v", err)
		}
		if resp.StatusCode != http.StatusOK {
			t.Fatalf("admin geo lookup: status %d body=%s", resp.StatusCode, string(raw))
		}
		var body struct {
			Items []struct {
				Country string `json:"country"`
				Count   int64  `json:"count"`
			} `json:"items"`
		}
		if err := json.Unmarshal(raw, &body); err != nil {
			t.Fatalf("decode geo response: %v", err)
		}
		if len(body.Items) > 0 && body.Items[0].Count > 0 {
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	t.Fatalf("user_country_counters did not record an increment within 5 s")
}

+28
-22
@@ -3,22 +3,22 @@ module galaxy/integration
go 1.26.1

require (
	galaxy/postgres v0.0.0
	github.com/alicebob/miniredis/v2 v2.37.0
	github.com/jackc/pgx/v5 v5.9.2
	github.com/redis/go-redis/v9 v9.18.0
	github.com/stretchr/testify v1.11.1
	galaxy/gateway v0.0.0-00010101000000-000000000000
	galaxy/model v0.0.0-00010101000000-000000000000
	galaxy/transcoder v0.0.0-00010101000000-000000000000
	github.com/google/uuid v1.6.0
	github.com/moby/moby/api v1.54.2
	github.com/testcontainers/testcontainers-go v0.42.0
	github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0
	github.com/testcontainers/testcontainers-go/modules/redis v0.42.0
	google.golang.org/grpc v1.80.0
)

require (
	buf.build/gen/go/bufbuild/protovalidate/protocolbuffers/go v1.36.11-20260209202127-80ab13bee0bf.1 // indirect
	dario.cat/mergo v1.0.2 // indirect
	galaxy/util v0.0.0-00010101000000-000000000000 // indirect
	github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
	github.com/Microsoft/go-winio v0.6.2 // indirect
	github.com/XSAM/otelsql v0.42.0 // indirect
	github.com/cenkalti/backoff/v4 v4.3.0 // indirect
	github.com/cespare/xxhash/v2 v2.3.0 // indirect
	github.com/containerd/errdefs v1.0.0 // indirect
@@ -27,7 +27,6 @@ require (
	github.com/containerd/platforms v0.2.1 // indirect
	github.com/cpuguy83/dockercfg v0.3.2 // indirect
	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
	github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
	github.com/distribution/reference v0.6.0 // indirect
	github.com/docker/go-connections v0.7.0 // indirect
	github.com/docker/go-units v0.5.0 // indirect
@@ -36,19 +35,13 @@ require (
	github.com/go-logr/logr v1.4.3 // indirect
	github.com/go-logr/stdr v1.2.2 // indirect
	github.com/go-ole/go-ole v1.2.6 // indirect
	github.com/google/uuid v1.6.0 // indirect
	github.com/jackc/pgpassfile v1.0.0 // indirect
	github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
	github.com/jackc/puddle/v2 v2.2.2 // indirect
	github.com/google/flatbuffers v25.12.19+incompatible // indirect
	github.com/jackc/pgx/v5 v5.9.2 // indirect
	github.com/klauspost/compress v1.18.5 // indirect
	github.com/klauspost/cpuid/v2 v2.3.0 // indirect
	github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
	github.com/magiconair/properties v1.8.10 // indirect
	github.com/mdelapenya/tlscert v0.2.0 // indirect
	github.com/mfridman/interpolate v0.0.2 // indirect
	github.com/moby/docker-image-spec v1.3.1 // indirect
	github.com/moby/go-archive v0.2.0 // indirect
	github.com/moby/moby/api v1.54.2 // indirect
	github.com/moby/moby/client v0.4.1 // indirect
	github.com/moby/patternmatcher v0.6.1 // indirect
	github.com/moby/sys/sequential v0.6.0 // indirect
@@ -59,24 +52,19 @@ require (
	github.com/opencontainers/image-spec v1.1.1 // indirect
	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
	github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
	github.com/pressly/goose/v3 v3.27.1 // indirect
	github.com/sethvargo/go-retry v0.3.0 // indirect
	github.com/shirou/gopsutil/v4 v4.26.3 // indirect
	github.com/sirupsen/logrus v1.9.4 // indirect
	github.com/stretchr/testify v1.11.1 // indirect
	github.com/tklauser/go-sysconf v0.3.16 // indirect
	github.com/tklauser/numcpus v0.11.0 // indirect
	github.com/yuin/gopher-lua v1.1.1 // indirect
	github.com/yusufpapurcu/wmi v1.2.4 // indirect
	go.opentelemetry.io/auto/sdk v1.2.1 // indirect
	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0 // indirect
	go.opentelemetry.io/otel v1.43.0 // indirect
	go.opentelemetry.io/otel/metric v1.43.0 // indirect
	go.opentelemetry.io/otel/trace v1.43.0 // indirect
	go.uber.org/atomic v1.11.0 // indirect
	go.uber.org/multierr v1.11.0 // indirect
	golang.org/x/crypto v0.50.0 // indirect
	golang.org/x/net v0.53.0 // indirect
	golang.org/x/sync v0.20.0 // indirect
	golang.org/x/sys v0.43.0 // indirect
	golang.org/x/text v0.36.0 // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529 // indirect
@@ -84,4 +72,22 @@ require (
	gopkg.in/yaml.v3 v3.0.1 // indirect
)

replace galaxy/backend => ../backend

replace galaxy/gateway => ../gateway

replace galaxy/model => ../pkg/model

replace galaxy/transcoder => ../pkg/transcoder

replace galaxy/cronutil => ../pkg/cronutil

replace galaxy/error => ../pkg/error

replace galaxy/geoip => ../pkg/geoip

replace galaxy/postgres => ../pkg/postgres

replace galaxy/redisconn => ../pkg/redisconn

replace galaxy/util => ../pkg/util

+4
-52
@@ -1,3 +1,5 @@
buf.build/gen/go/bufbuild/protovalidate/protocolbuffers/go v1.36.11-20260209202127-80ab13bee0bf.1 h1:PMmTMyvHScV9Mn8wc6ASge9uRcHy0jtqPd+fM35LmsQ=
buf.build/gen/go/bufbuild/protovalidate/protocolbuffers/go v1.36.11-20260209202127-80ab13bee0bf.1/go.mod h1:tvtbpgaVXZX4g6Pn+AnzFycuRK3MOz5HJfEGeEllXYM=
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6 h1:He8afgbRMd7mFxO99hRNu+6tazq8nFF9lIwo9JFroBk=
@@ -6,14 +8,6 @@ github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEK
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/XSAM/otelsql v0.42.0 h1:Li0xF4eJUxG2e0x3D4rvRlys1f27yJKvjTh7ljkUP5o=
github.com/XSAM/otelsql v0.42.0/go.mod h1:4mOrEv+cS1KmKzrvTktvJnstr5GtKSAK+QHvFR9OcpI=
github.com/alicebob/miniredis/v2 v2.37.0 h1:RheObYW32G1aiJIj81XVt78ZHJpHonHLHW7OLIshq68=
github.com/alicebob/miniredis/v2 v2.37.0/go.mod h1:TcL7YfarKPGDAthEtl5NBeHZfeUQj6OXMm/+iu5cLMM=
github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
@@ -30,19 +24,14 @@ github.com/cpuguy83/dockercfg v0.3.2 h1:DlJTyZGBDlXqUZ2Dk2Q3xHs/FtnooJJVaad2S9GK
github.com/cpuguy83/dockercfg v0.3.2/go.mod h1:sugsbF4//dDlL/i+S+rtpIWp+5h0BHJHfjj5/jFyUJc=
github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/docker/go-connections v0.7.0 h1:6SsRfJddP22WMrCkj19x9WKjEDTB+ahsdiGYf0mN39c=
github.com/docker/go-connections v0.7.0/go.mod h1:no1qkHdjq7kLMGUXYAduOhYPSJxxvgWBh7ogVvptn3Q=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/ebitengine/purego v0.10.0 h1:QIw4xfpWT6GWTzaW5XEKy3HXoqrJGx1ijYHzTF0/ISU=
github.com/ebitengine/purego v0.10.0/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
@@ -56,6 +45,8 @@ github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/flatbuffers v25.12.19+incompatible h1:haMV2JRRJCe1998HeW/p0X9UaMTK6SDo0ffLn2+DbLs=
github.com/google/flatbuffers v25.12.19+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
@@ -71,8 +62,6 @@ github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE=
github.com/klauspost/compress v1.18.5/go.mod h1:cwPg85FWrGar70rWktvGQj8/hthj3wpl0PGDogxkrSQ=
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
@@ -83,12 +72,8 @@ github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/magiconair/properties v1.8.10 h1:s31yESBquKXCV9a/ScB3ESkOjUYYv+X0rg8SYxI99mE=
github.com/magiconair/properties v1.8.10/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
github.com/mattn/go-isatty v0.0.21 h1:xYae+lCNBP7QuW4PUnNG61ffM4hVIfm+zUzDuSzYLGs=
github.com/mattn/go-isatty v0.0.21/go.mod h1:ZXfXG4SQHsB/w3ZeOYbR0PrPwLy+n6xiMrJlRFqopa4=
github.com/mdelapenya/tlscert v0.2.0 h1:7H81W6Z/4weDvZBNOfQte5GpIMo0lGYEeWbkGp5LJHI=
github.com/mdelapenya/tlscert v0.2.0/go.mod h1:O4njj3ELLnJjGdkN7M/vIVCpZ+Cf0L6muqOG4tLSl8o=
github.com/mfridman/interpolate v0.0.2 h1:pnuTK7MQIxxFz1Gr+rjSIx9u7qVjf5VOoM/u6BbAxPY=
github.com/mfridman/interpolate v0.0.2/go.mod h1:p+7uk6oE07mpE/Ik1b8EckO0O4ZXiGAfshKBWLUM9Xg=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/moby/go-archive v0.2.0 h1:zg5QDUM2mi0JIM9fdQZWC7U8+2ZfixfTYoHL7rWUcP8=
@@ -107,54 +92,34 @@ github.com/moby/sys/userns v0.1.0 h1:tVLXkFOxVu9A64/yh59slHVv9ahO9UIev4JZusOLG/g
github.com/moby/sys/userns v0.1.0/go.mod h1:IHUYgu/kao6N8YZlp9Cf444ySSvCmDlmzUcYfDHOl28=
github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/pressly/goose/v3 v3.27.1 h1:6uEvcprBybDmW4hcz3gYujhARhye+GoWKhEWyzD5sh4=
github.com/pressly/goose/v3 v3.27.1/go.mod h1:maruOxsPnIG2yHHyo8UqKWXYKFcH7Q76csUV7+7KYoM=
github.com/redis/go-redis/v9 v9.18.0 h1:pMkxYPkEbMPwRdenAzUNyFNrDgHx9U+DrBabWNfSRQs=
github.com/redis/go-redis/v9 v9.18.0/go.mod h1:k3ufPphLU5YXwNTUcCRXGxUoF1fqxnhFQmscfkCoDA0=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/sethvargo/go-retry v0.3.0 h1:EEt31A35QhrcRZtrYFDTBg91cqZVnFL2navjDrah2SE=
github.com/sethvargo/go-retry v0.3.0/go.mod h1:mNX17F0C/HguQMyMyJxcnU471gOZGxCLyYaFyAZraas=
github.com/shirou/gopsutil/v4 v4.26.3 h1:2ESdQt90yU3oXF/CdOlRCJxrP+Am1aBYubTMTfxJ1qc=
github.com/shirou/gopsutil/v4 v4.26.3/go.mod h1:LZ6ewCSkBqUpvSOf+LsTGnRinC6iaNUNMGBtDkJBaLQ=
github.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w=
github.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.5.3 h1:jmXUvGomnU1o3W/V5h2VEradbpJDwGrzugQQvL0POH4=
github.com/stretchr/objx v0.5.3/go.mod h1:rDQraq+vQZU7Fde9LOZLr8Tax6zZvy4kuNKF+QYS+U0=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/testcontainers/testcontainers-go v0.42.0 h1:He3IhTzTZOygSXLJPMX7n44XtK+qhjat1nI9cneBbUY=
github.com/testcontainers/testcontainers-go v0.42.0/go.mod h1:vZjdY1YmUA1qEForxOIOazfsrdyORJAbhi0bp8plN30=
github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0 h1:GCbb1ndrF7OTDiIvxXyItaDab4qkzTFJ48LKFdM7EIo=
github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0/go.mod h1:IRPBaI8jXdrNfD0e4Zm7Fbcgaz5shKxOQv4axiL09xs=
github.com/testcontainers/testcontainers-go/modules/redis v0.42.0 h1:id/6LH8ZeDrtAUVSuNvZUAJ1kVpb82y1pr9yweAWsRg=
github.com/testcontainers/testcontainers-go/modules/redis v0.42.0/go.mod h1:uF0jI8FITagQpBNOgweGBmPf6rP4K0SeL1XFPbsZSSY=
github.com/tklauser/go-sysconf v0.3.16 h1:frioLaCQSsF5Cy1jgRBrzr6t502KIIwQ0MArYICU0nA=
github.com/tklauser/go-sysconf v0.3.16/go.mod h1:/qNL9xxDhc7tx3HSRsLWNnuzbVfh3e7gh/BmM179nYI=
github.com/tklauser/numcpus v0.11.0 h1:nSTwhKH5e1dMNsCdVBukSZrURJRoHbSEQjdEbY+9RXw=
github.com/tklauser/numcpus v0.11.0/go.mod h1:z+LwcLq54uWZTX0u/bGobaV34u6V7KNlTZejzM6/3MQ=
github.com/yuin/gopher-lua v1.1.1 h1:kYKnWBjvbNP4XLT3+bPEwAXJx262OhaHDWDVOPjL46M=
github.com/yuin/gopher-lua v1.1.1/go.mod h1:GBR0iDaNXjAgGg9zfCvksxSRnQx76gclCIb7kdAd1Pw=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0=
github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0 h1:CqXxU8VOmDefoh0+ztfGaymYbhdB/tT3zs79QaZTNGY=
@@ -169,10 +134,6 @@ go.opentelemetry.io/otel/sdk/metric v1.43.0 h1:S88dyqXjJkuBNLeMcVPRFXpRw2fuwdvfC
go.opentelemetry.io/otel/sdk/metric v1.43.0/go.mod h1:C/RJtwSEJ5hzTiUz5pXF1kILHStzb9zFlIEe85bhj6A=
go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09nk+3A=
go.opentelemetry.io/otel/trace v1.43.0/go.mod h1:/QJhyVBUUswCphDVxq+8mld+AvhXZLhe+8WVFxiFff0=
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
golang.org/x/crypto v0.50.0 h1:zO47/JPrL6vsNkINmLoo/PH1gcxpls50DNogFvB5ZGI=
golang.org/x/crypto v0.50.0/go.mod h1:3muZ7vA7PBCE6xgPX7nkzzjiUq87kRItoJQM1Yo8S+Q=
golang.org/x/net v0.53.0 h1:d+qAbo5L0orcWAr0a9JweQpjXF19LMXJE8Ey7hwOdUA=
@@ -200,18 +161,9 @@ google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
modernc.org/libc v1.72.1 h1:db1xwJ6u1kE3KHTFTTbe2GCrczHPKzlURP0aDC4NGD0=
modernc.org/libc v1.72.1/go.mod h1:HRMiC/PhPGLIPM7GzAFCbI+oSgE3dhZ8FWftmRrHVlY=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/sqlite v1.49.1 h1:dYGHTKcX1sJ+EQDnUzvz4TJ5GbuvhNJa8Fg6ElGx73U=
modernc.org/sqlite v1.49.1/go.mod h1:m0w8xhwYUVY3H6pSDwc3gkJ/irZT/0YEXwBlhaxQEew=
pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=

@@ -1,243 +0,0 @@
// Package gatewayv1contract provides public-contract helpers for the gateway
// v1 authenticated transport without importing service-internal packages.
package gatewayv1contract

import (
	"bytes"
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/binary"
	"errors"
)

const (
	// ProtocolVersionV1 is the supported public protocol version literal.
	ProtocolVersionV1 = "v1"

	// SubscribeMessageType is the authenticated message type used to open the
	// gateway push stream.
	SubscribeMessageType = "gateway.subscribe"

	// ServerTimeEventType is the bootstrap event type emitted by the gateway
	// immediately after a push stream is opened.
	ServerTimeEventType = "gateway.server_time"

	requestDomainMarkerV1 = "galaxy-request-v1"
	eventDomainMarkerV1   = "galaxy-event-v1"
)

var (
	// ErrInvalidPayloadHash reports that payloadHash is not a raw SHA-256
	// digest.
	ErrInvalidPayloadHash = errors.New("payload_hash must be a 32-byte SHA-256 digest")

	// ErrPayloadHashMismatch reports that payloadHash does not match
	// payloadBytes.
	ErrPayloadHashMismatch = errors.New("payload_hash does not match payload_bytes")

	// ErrInvalidEventSignature reports that one gateway event signature is not
	// a raw Ed25519 signature for the canonical event signing input.
	ErrInvalidEventSignature = errors.New("invalid event signature")

	// ErrInvalidResponseSignature reports that one gateway unary response
	// signature is not a raw Ed25519 signature for the canonical response
	// signing input.
	ErrInvalidResponseSignature = errors.New("invalid response signature")
)

// RequestSigningFields stores the canonical public request fields bound into
// one client signature input.
type RequestSigningFields struct {
	// ProtocolVersion identifies the gateway transport envelope version.
	ProtocolVersion string

	// DeviceSessionID identifies the authenticated device session bound to the
	// request.
	DeviceSessionID string

	// MessageType is the stable authenticated gateway message type.
	MessageType string

	// TimestampMS carries the client request timestamp in milliseconds.
	TimestampMS int64

	// RequestID is the transport correlation and anti-replay identifier.
	RequestID string

	// PayloadHash stores the raw SHA-256 digest of PayloadBytes.
	PayloadHash []byte
}

// EventSigningFields stores the canonical public stream-event fields bound
// into one gateway event signature input.
type EventSigningFields struct {
	// EventType identifies the stable client-facing event category.
	EventType string

	// EventID is the stable event correlation identifier.
	EventID string

	// TimestampMS carries the gateway event timestamp in milliseconds.
	TimestampMS int64

	// RequestID optionally correlates the event to the opening client request.
	RequestID string

	// TraceID optionally carries the client-supplied trace correlation value.
	TraceID string

	// PayloadHash stores the raw SHA-256 digest of PayloadBytes.
	PayloadHash []byte
}

// ResponseSigningFields stores the canonical public unary response fields
// bound into one gateway signature input.
type ResponseSigningFields struct {
	// ProtocolVersion identifies the gateway transport envelope version.
	ProtocolVersion string

	// RequestID is the transport correlation identifier echoed by the gateway.
	RequestID string

	// TimestampMS carries the gateway response timestamp in milliseconds.
	TimestampMS int64

	// ResultCode stores the stable opaque gateway result code.
	ResultCode string

	// PayloadHash stores the raw SHA-256 digest of PayloadBytes.
	PayloadHash []byte
}

// ComputePayloadHash returns the canonical raw SHA-256 digest for payloadBytes.
func ComputePayloadHash(payloadBytes []byte) []byte {
	sum := sha256.Sum256(payloadBytes)
	return bytes.Clone(sum[:])
}

// VerifyPayloadHash reports whether payloadHash matches payloadBytes under the
// public gateway payload-hash contract.
func VerifyPayloadHash(payloadBytes, payloadHash []byte) error {
	if len(payloadHash) != sha256.Size {
		return ErrInvalidPayloadHash
	}

	sum := sha256.Sum256(payloadBytes)
	if !bytes.Equal(sum[:], payloadHash) {
		return ErrPayloadHashMismatch
	}

	return nil
}

// BuildRequestSigningInput returns the canonical byte sequence the v1 client
|
||||
// request signature covers.
|
||||
func BuildRequestSigningInput(fields RequestSigningFields) []byte {
|
||||
size := len(requestDomainMarkerV1) +
|
||||
len(fields.ProtocolVersion) +
|
||||
len(fields.DeviceSessionID) +
|
||||
len(fields.MessageType) +
|
||||
len(fields.RequestID) +
|
||||
len(fields.PayloadHash) +
|
||||
(6 * binary.MaxVarintLen64) +
|
||||
8
|
||||
|
||||
buf := make([]byte, 0, size)
|
||||
buf = appendLengthPrefixedString(buf, requestDomainMarkerV1)
|
||||
buf = appendLengthPrefixedString(buf, fields.ProtocolVersion)
|
||||
buf = appendLengthPrefixedString(buf, fields.DeviceSessionID)
|
||||
buf = appendLengthPrefixedString(buf, fields.MessageType)
|
||||
buf = binary.BigEndian.AppendUint64(buf, uint64(fields.TimestampMS))
|
||||
buf = appendLengthPrefixedString(buf, fields.RequestID)
|
||||
buf = appendLengthPrefixedBytes(buf, fields.PayloadHash)
|
||||
|
||||
return buf
|
||||
}
|
||||
|
||||
// BuildEventSigningInput returns the canonical byte sequence the v1 gateway
|
||||
// event signature covers.
|
||||
func BuildEventSigningInput(fields EventSigningFields) []byte {
|
||||
size := len(eventDomainMarkerV1) +
|
||||
len(fields.EventType) +
|
||||
len(fields.EventID) +
|
||||
len(fields.RequestID) +
|
||||
len(fields.TraceID) +
|
||||
len(fields.PayloadHash) +
|
||||
(6 * binary.MaxVarintLen64) +
|
||||
8
|
||||
|
||||
buf := make([]byte, 0, size)
|
||||
buf = appendLengthPrefixedString(buf, eventDomainMarkerV1)
|
||||
buf = appendLengthPrefixedString(buf, fields.EventType)
|
||||
buf = appendLengthPrefixedString(buf, fields.EventID)
|
||||
buf = binary.BigEndian.AppendUint64(buf, uint64(fields.TimestampMS))
|
||||
buf = appendLengthPrefixedString(buf, fields.RequestID)
|
||||
buf = appendLengthPrefixedString(buf, fields.TraceID)
|
||||
buf = appendLengthPrefixedBytes(buf, fields.PayloadHash)
|
||||
|
||||
return buf
|
||||
}
|
||||
|
||||
// BuildResponseSigningInput returns the canonical byte sequence the v1
|
||||
// gateway unary response signature covers.
|
||||
func BuildResponseSigningInput(fields ResponseSigningFields) []byte {
|
||||
size := len("galaxy-response-v1") +
|
||||
len(fields.ProtocolVersion) +
|
||||
len(fields.RequestID) +
|
||||
len(fields.ResultCode) +
|
||||
len(fields.PayloadHash) +
|
||||
(5 * binary.MaxVarintLen64) +
|
||||
8
|
||||
|
||||
buf := make([]byte, 0, size)
|
||||
buf = appendLengthPrefixedString(buf, "galaxy-response-v1")
|
||||
buf = appendLengthPrefixedString(buf, fields.ProtocolVersion)
|
||||
buf = appendLengthPrefixedString(buf, fields.RequestID)
|
||||
buf = binary.BigEndian.AppendUint64(buf, uint64(fields.TimestampMS))
|
||||
buf = appendLengthPrefixedString(buf, fields.ResultCode)
|
||||
buf = appendLengthPrefixedBytes(buf, fields.PayloadHash)
|
||||
|
||||
return buf
|
||||
}
|
||||
|
||||
// SignRequest returns one raw Ed25519 client signature for the canonical v1
|
||||
// request signing input.
|
||||
func SignRequest(privateKey ed25519.PrivateKey, fields RequestSigningFields) []byte {
|
||||
return ed25519.Sign(privateKey, BuildRequestSigningInput(fields))
|
||||
}
|
||||
|
||||
// VerifyEventSignature reports whether signature authenticates fields under
|
||||
// publicKey using the canonical gateway event signing input.
|
||||
func VerifyEventSignature(publicKey ed25519.PublicKey, signature []byte, fields EventSigningFields) error {
|
||||
if len(publicKey) != ed25519.PublicKeySize || len(signature) != ed25519.SignatureSize {
|
||||
return ErrInvalidEventSignature
|
||||
}
|
||||
if !ed25519.Verify(publicKey, BuildEventSigningInput(fields), signature) {
|
||||
return ErrInvalidEventSignature
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// VerifyResponseSignature reports whether signature authenticates fields under
|
||||
// publicKey using the canonical gateway unary-response signing input.
|
||||
func VerifyResponseSignature(publicKey ed25519.PublicKey, signature []byte, fields ResponseSigningFields) error {
|
||||
if len(publicKey) != ed25519.PublicKeySize || len(signature) != ed25519.SignatureSize {
|
||||
return ErrInvalidResponseSignature
|
||||
}
|
||||
if !ed25519.Verify(publicKey, BuildResponseSigningInput(fields), signature) {
|
||||
return ErrInvalidResponseSignature
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func appendLengthPrefixedString(dst []byte, value string) []byte {
|
||||
return appendLengthPrefixedBytes(dst, []byte(value))
|
||||
}
|
||||
|
||||
func appendLengthPrefixedBytes(dst []byte, value []byte) []byte {
|
||||
dst = binary.AppendUvarint(dst, uint64(len(value)))
|
||||
dst = append(dst, value...)
|
||||
return dst
|
||||
}
|
||||
@@ -1,61 +0,0 @@
// Package userv1contract provides public-contract helpers for the
// authenticated gateway v1 User Service self-service message types.
package userv1contract

import (
	usermodel "galaxy/model/user"

	"galaxy/transcoder"
)

const (
	// MessageTypeGetMyAccount is the authenticated gateway message type used to
	// read the current self-service account aggregate.
	MessageTypeGetMyAccount = usermodel.MessageTypeGetMyAccount

	// MessageTypeUpdateMyProfile is the authenticated gateway message type used
	// to mutate self-service profile fields.
	MessageTypeUpdateMyProfile = usermodel.MessageTypeUpdateMyProfile

	// MessageTypeUpdateMySettings is the authenticated gateway message type used
	// to mutate self-service settings fields.
	MessageTypeUpdateMySettings = usermodel.MessageTypeUpdateMySettings

	// ResultCodeOK is the success result code projected by gateway for all
	// successful `user.*` authenticated commands.
	ResultCodeOK = "ok"
)

// EncodeGetMyAccountRequest returns the FlatBuffers payload for the public
// empty get-account request.
func EncodeGetMyAccountRequest() ([]byte, error) {
	return transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{})
}

// EncodeUpdateMyProfileRequest returns the FlatBuffers payload for one public
// self-service profile mutation request.
func EncodeUpdateMyProfileRequest(displayName string) ([]byte, error) {
	return transcoder.UpdateMyProfileRequestToPayload(&usermodel.UpdateMyProfileRequest{
		DisplayName: displayName,
	})
}

// EncodeUpdateMySettingsRequest returns the FlatBuffers payload for one public
// self-service settings mutation request.
func EncodeUpdateMySettingsRequest(preferredLanguage string, timeZone string) ([]byte, error) {
	return transcoder.UpdateMySettingsRequestToPayload(&usermodel.UpdateMySettingsRequest{
		PreferredLanguage: preferredLanguage,
		TimeZone:          timeZone,
	})
}

// DecodeAccountResponse decodes the public FlatBuffers success payload shared
// by all authenticated `user.*` commands.
func DecodeAccountResponse(payload []byte) (*usermodel.AccountResponse, error) {
	return transcoder.PayloadToAccountResponse(payload)
}

// DecodeErrorResponse decodes the public FlatBuffers error payload shared by
// all authenticated `user.*` commands.
func DecodeErrorResponse(payload []byte) (*usermodel.ErrorResponse, error) {
	return transcoder.PayloadToErrorResponse(payload)
}
@@ -1,13 +0,0 @@
package harness

// AuthsessionRedisEnv returns the env-var map that wires the authsession
// binary to a Redis master at masterAddr using the master/replica/password
// shape required by `pkg/redisconn`. The integration suites pass a fixed
// placeholder password because the test Redis container runs without
// `requirepass`.
func AuthsessionRedisEnv(masterAddr string) map[string]string {
	return map[string]string{
		"AUTHSESSION_REDIS_MASTER_ADDR": masterAddr,
		"AUTHSESSION_REDIS_PASSWORD":    "integration",
	}
}
@@ -1,71 +0,0 @@
// Package harness provides reusable black-box integration helpers shared by
// inter-service suites.
package harness

import (
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"strings"
	"sync"
	"testing"
)

var binaryCache struct {
	mu    sync.Mutex
	paths map[string]string
}

// BuildBinary builds packagePath once per test process and returns the
// resulting executable path.
func BuildBinary(t testing.TB, name string, packagePath string) string {
	t.Helper()

	root := repositoryRoot(t)
	key := name + ":" + packagePath

	binaryCache.mu.Lock()
	if binaryCache.paths == nil {
		binaryCache.paths = make(map[string]string)
	}
	if path, ok := binaryCache.paths[key]; ok {
		binaryCache.mu.Unlock()
		return path
	}

	outputDir := filepath.Join(os.TempDir(), "galaxy-integration-binaries")
	if err := os.MkdirAll(outputDir, 0o755); err != nil {
		binaryCache.mu.Unlock()
		t.Fatalf("create integration binary directory: %v", err)
	}

	outputPath := filepath.Join(outputDir, sanitizeBinaryName(key))
	cmd := exec.Command("go", "build", "-o", outputPath, packagePath)
	cmd.Dir = root
	output, err := cmd.CombinedOutput()
	if err != nil {
		binaryCache.mu.Unlock()
		t.Fatalf("build %s: %v\n%s", packagePath, err, output)
	}

	binaryCache.paths[key] = outputPath
	binaryCache.mu.Unlock()
	return outputPath
}

func repositoryRoot(t testing.TB) string {
	t.Helper()

	_, file, _, ok := runtime.Caller(0)
	if !ok {
		t.Fatal("resolve harness repository root: runtime caller is unavailable")
	}

	return filepath.Clean(filepath.Join(filepath.Dir(file), "..", "..", ".."))
}

func sanitizeBinaryName(value string) string {
	replacer := strings.NewReplacer("/", "_", "\\", "_", ":", "_", ".", "_")
	return replacer.Replace(value)
}
@@ -1,289 +0,0 @@
package harness

import (
	"context"
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
	"os/exec"
	"strings"
	"testing"
	"time"
)

const (
	dockerNetworkPrefix  = "lobbyrtm-it-"
	dockerNetworkTimeout = 30 * time.Second
	dockerCLITimeout     = 30 * time.Second

	containerHealthzPort    = 8080
	containerHealthzTimeout = 5 * time.Second
	containerHealthzPoll    = 100 * time.Millisecond
)

// EnsureDockerNetwork creates a uniquely-named Docker bridge network
// for the caller's test and registers cleanup. Each test gets its own
// network so concurrent scenarios cannot collide on the per-game DNS
// hostname (`galaxy-game-{game_id}`). The helper skips the test when
// no Docker daemon is reachable.
func EnsureDockerNetwork(t testing.TB) string {
	t.Helper()
	requireDockerDaemon(t)

	name := dockerNetworkPrefix + uniqueSuffix(t)
	ctx, cancel := context.WithTimeout(context.Background(), dockerNetworkTimeout)
	defer cancel()
	cmd := exec.CommandContext(ctx, "docker", "network", "create", "--driver", "bridge", name)
	output, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("integration harness: create docker network %q: %v; output:\n%s",
			name, err, strings.TrimSpace(string(output)))
	}

	t.Cleanup(func() {
		cleanupCtx, cleanupCancel := context.WithTimeout(context.Background(), dockerNetworkTimeout)
		defer cleanupCancel()
		removeCmd := exec.CommandContext(cleanupCtx, "docker", "network", "rm", name)
		if rmErr := removeCmd.Run(); rmErr != nil {
			t.Logf("integration harness: remove docker network %q: %v", name, rmErr)
		}
	})
	return name
}

// FindContainerIDByLabel returns the id of the single running container
// labelled with the given game id, or an empty string when no match is
// found. The label keys are the ones rtmanager attaches at start time
// (`com.galaxy.owner=rtmanager`, `com.galaxy.game_id=<gameID>`).
func FindContainerIDByLabel(t testing.TB, gameID string) string {
	t.Helper()
	requireDockerDaemon(t)

	ctx, cancel := context.WithTimeout(context.Background(), dockerCLITimeout)
	defer cancel()
	cmd := exec.CommandContext(ctx, "docker", "ps", "-aq", "--no-trunc",
		"--filter", "label=com.galaxy.owner=rtmanager",
		"--filter", "label=com.galaxy.game_id="+gameID,
	)
	output, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("integration harness: docker ps for game %s: %v; output:\n%s",
			gameID, err, strings.TrimSpace(string(output)))
	}
	id := strings.TrimSpace(string(output))
	if id == "" {
		return ""
	}
	if strings.Contains(id, "\n") {
		t.Fatalf("integration harness: multiple containers for game %s:\n%s", gameID, id)
	}
	return id
}

// ContainerState returns the runtime state string (e.g. `running`,
// `exited`) of the container with the given id, looked up via
// `docker inspect`.
func ContainerState(t testing.TB, containerID string) string {
	t.Helper()
	requireDockerDaemon(t)

	ctx, cancel := context.WithTimeout(context.Background(), dockerCLITimeout)
	defer cancel()
	cmd := exec.CommandContext(ctx, "docker", "inspect", "--format", "{{.State.Status}}", containerID)
	output, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("integration harness: docker inspect %s: %v; output:\n%s",
			containerID, err, strings.TrimSpace(string(output)))
	}
	return strings.TrimSpace(string(output))
}

// ContainerNetworkIP returns the IPv4 address of the named container
// inside the named bridge network. Returns an empty string when the
// container has no endpoint on that network.
func ContainerNetworkIP(t testing.TB, containerID, networkName string) string {
	t.Helper()
	requireDockerDaemon(t)

	ctx, cancel := context.WithTimeout(context.Background(), dockerCLITimeout)
	defer cancel()
	cmd := exec.CommandContext(ctx, "docker", "inspect", "--format", "{{json .NetworkSettings.Networks}}", containerID)
	output, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("integration harness: docker inspect networks %s: %v; output:\n%s",
			containerID, err, strings.TrimSpace(string(output)))
	}
	var networks map[string]struct {
		IPAddress string `json:"IPAddress"`
	}
	if err := json.Unmarshal(output, &networks); err != nil {
		t.Fatalf("integration harness: parse network json for %s: %v; payload=%s",
			containerID, err, strings.TrimSpace(string(output)))
	}
	if entry, ok := networks[networkName]; ok {
		return entry.IPAddress
	}
	return ""
}

// WaitForEngineHealthz polls the engine `/healthz` on port 8080 until
// it returns 200 or the timeout fires. On macOS the docker bridge IP is
// not routable from the host, so the helper falls back to a transient
// `busybox` probe container on the same docker network. On Linux it
// dials the bridge IP directly.
func WaitForEngineHealthz(t testing.TB, ip string, timeout time.Duration) {
	t.Helper()
	if ip == "" {
		t.Fatalf("integration harness: empty engine ip")
	}
	if timeout <= 0 {
		timeout = containerHealthzTimeout
	}

	if dialFromHost(ip, containerHealthzPort, 500*time.Millisecond) {
		waitForHealthzFromHost(t, ip, timeout)
		return
	}

	network, hostname := containerNetworkAndHostname(t, ip)
	if network == "" || hostname == "" {
		t.Fatalf("integration harness: cannot resolve docker network/hostname for engine ip %s", ip)
	}
	waitForHealthzViaProbe(t, network, hostname, timeout)
}

// dialFromHost reports whether a TCP connection to ip:port succeeds within
// timeout. Used to detect the macOS routing limitation cheaply.
func dialFromHost(ip string, port int, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, fmt.Sprintf("%d", port)), timeout)
	if err != nil {
		return false
	}
	_ = conn.Close()
	return true
}

func waitForHealthzFromHost(t testing.TB, ip string, timeout time.Duration) {
	t.Helper()
	url := fmt.Sprintf("http://%s/healthz", net.JoinHostPort(ip, fmt.Sprintf("%d", containerHealthzPort)))
	client := &http.Client{
		Timeout:   500 * time.Millisecond,
		Transport: &http.Transport{DisableKeepAlives: true},
	}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet, url, nil)
		if err != nil {
			t.Fatalf("integration harness: build healthz request for %s: %v", url, err)
		}
		resp, err := client.Do(req)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(containerHealthzPoll)
	}
	t.Fatalf("integration harness: engine /healthz on %s did not return 200 within %s", url, timeout)
}

// containerNetworkAndHostname locates the bridge network and engine
// container hostname behind the given IP so the busybox probe can use
// the docker DNS name rather than rely on host routing. The lookup is
// scoped to RTM-owned containers (`com.galaxy.owner=rtmanager`).
func containerNetworkAndHostname(t testing.TB, ip string) (string, string) {
	t.Helper()
	requireDockerDaemon(t)

	ctx, cancel := context.WithTimeout(context.Background(), dockerCLITimeout)
	defer cancel()
	cmd := exec.CommandContext(ctx, "docker", "ps", "-aq", "--no-trunc",
		"--filter", "label=com.galaxy.owner=rtmanager",
	)
	output, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("integration harness: docker ps for engine probe: %v; output:\n%s", err, strings.TrimSpace(string(output)))
	}
	for _, id := range strings.Split(strings.TrimSpace(string(output)), "\n") {
		id = strings.TrimSpace(id)
		if id == "" {
			continue
		}
		ipsByNetwork, hostname, ok := inspectIPAndHostname(t, id)
		if !ok {
			continue
		}
		for networkName, networkIP := range ipsByNetwork {
			if networkIP == ip {
				return networkName, hostname
			}
		}
	}
	return "", ""
}

func inspectIPAndHostname(t testing.TB, containerID string) (map[string]string, string, bool) {
	t.Helper()
	ctx, cancel := context.WithTimeout(context.Background(), dockerCLITimeout)
	defer cancel()
	cmd := exec.CommandContext(ctx, "docker", "inspect", "--format",
		"{{json .NetworkSettings.Networks}}|{{.Config.Hostname}}", containerID)
	output, err := cmd.CombinedOutput()
	if err != nil {
		return nil, "", false
	}
	parts := strings.SplitN(strings.TrimSpace(string(output)), "|", 2)
	if len(parts) != 2 {
		return nil, "", false
	}
	var networks map[string]struct {
		IPAddress string `json:"IPAddress"`
	}
	if err := json.Unmarshal([]byte(parts[0]), &networks); err != nil {
		return nil, "", false
	}
	ipsByNetwork := make(map[string]string, len(networks))
	for name, entry := range networks {
		ipsByNetwork[name] = entry.IPAddress
	}
	return ipsByNetwork, parts[1], true
}

// waitForHealthzViaProbe runs `wget -qO- http://<hostname>:8080/healthz`
// inside a transient busybox container on networkName until the probe
// exits 0 or the timeout fires.
func waitForHealthzViaProbe(t testing.TB, networkName, hostname string, timeout time.Duration) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	url := fmt.Sprintf("http://%s:%d/healthz", hostname, containerHealthzPort)
	for time.Now().Before(deadline) {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		cmd := exec.CommandContext(ctx, "docker", "run", "--rm",
			"--network", networkName,
			"busybox:stable",
			"wget", "-qO-", url,
		)
		out, err := cmd.CombinedOutput()
		cancel()
		if err == nil && strings.Contains(string(out), "ok") {
			return
		}
		time.Sleep(containerHealthzPoll)
	}
	t.Fatalf("integration harness: engine /healthz on %s did not return 200 via probe within %s", url, timeout)
}

func uniqueSuffix(t testing.TB) string {
	t.Helper()
	buf := make([]byte, 4)
	if _, err := rand.Read(buf); err != nil {
		t.Fatalf("integration harness: read random suffix: %v", err)
	}
	return hex.EncodeToString(buf)
}
@@ -1,139 +0,0 @@
package harness

import (
	"context"
	"errors"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"strings"
	"sync"
	"testing"
	"time"
)

// EngineImageRef is the canonical tag the lobbyrtm boundary suite (and
// any future suite that needs the galaxy/game engine binary) builds and
// runs against. The `-lobbyrtm-it` suffix differs from the
// `-rtm-it` tag the service-local rtmanager/integration harness uses, so
// an operator running both suites locally cannot accidentally consume
// the wrong image, and `docker image rm` of one suite's leftovers does
// not remove the other suite's tag.
const EngineImageRef = "galaxy/game:1.0.0-lobbyrtm-it"

const (
	imageBuildTimeout       = 10 * time.Minute
	dockerDaemonPingTimeout = 5 * time.Second
)

var (
	engineImageOnce sync.Once
	engineImageErr  error

	dockerAvailableOnce sync.Once
	dockerAvailableErr  error
)

// RequireDockerDaemon skips the calling test when no Docker daemon is
// reachable from this process. Suites that need Docker but stand up
// testcontainers (Postgres/Redis) before any RTM-specific helper
// should call this helper first so the skip path runs *before* the
// testcontainer client probes the daemon and fails hard.
func RequireDockerDaemon(t testing.TB) {
	t.Helper()
	requireDockerDaemon(t)
}

// EnsureGalaxyGameImage builds the galaxy/game engine image from the
// workspace root once per test process and returns the canonical tag.
// On hosts without a reachable Docker daemon the helper calls `t.Skip`
// so suites stay green when `/var/run/docker.sock` is missing and
// `DOCKER_HOST` is unset.
//
// The build is wrapped in `sync.Once`; concurrent suite invocations
// share the same image. The Dockerfile path and build context match
// `rtmanager/integration/harness/docker.go::buildAndTagEngineImage` —
// galaxy's `go.work` resolves `galaxy/{model,error,...}` only when the
// workspace root is the build context.
func EnsureGalaxyGameImage(t testing.TB) string {
	t.Helper()
	requireDockerDaemon(t)

	engineImageOnce.Do(func() {
		engineImageErr = buildEngineImage()
	})
	if engineImageErr != nil {
		t.Fatalf("integration harness: build galaxy/game image: %v", engineImageErr)
	}
	return EngineImageRef
}

func buildEngineImage() error {
	root, err := workspaceRoot()
	if err != nil {
		return fmt.Errorf("resolve workspace root: %w", err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), imageBuildTimeout)
	defer cancel()

	dockerfilePath := filepath.Join("game", "Dockerfile")
	cmd := exec.CommandContext(ctx, "docker", "build",
		"-f", dockerfilePath,
		"-t", EngineImageRef,
		".",
	)
	cmd.Dir = root
	cmd.Env = append(os.Environ(), "DOCKER_BUILDKIT=1")
	output, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker build (-f %s) in %s: %w; output:\n%s",
			dockerfilePath, root, err, strings.TrimSpace(string(output)))
	}
	return nil
}

// requireDockerDaemon skips the calling test when no Docker daemon is
// reachable from this process. The check runs once per process and
// caches the verdict so successive callers do not pay the ping cost.
func requireDockerDaemon(t testing.TB) {
	t.Helper()
	dockerAvailableOnce.Do(func() {
		dockerAvailableErr = pingDockerDaemon()
	})
	if dockerAvailableErr != nil {
		t.Skipf("integration harness: docker daemon unavailable: %v", dockerAvailableErr)
	}
}

func pingDockerDaemon() error {
	if os.Getenv("DOCKER_HOST") == "" {
		if _, err := os.Stat("/var/run/docker.sock"); err != nil {
			return fmt.Errorf("set DOCKER_HOST or expose /var/run/docker.sock: %w", err)
		}
	}
	ctx, cancel := context.WithTimeout(context.Background(), dockerDaemonPingTimeout)
	defer cancel()
	cmd := exec.CommandContext(ctx, "docker", "version", "--format", "{{.Server.Version}}")
	output, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker version: %w; output:\n%s", err, strings.TrimSpace(string(output)))
	}
	return nil
}

// workspaceRoot resolves the absolute path of the galaxy/ workspace
// root by anchoring on this file's location. The harness lives at
// `galaxy/integration/internal/harness/engineimage.go`; the workspace
// root is three directories up.
func workspaceRoot() (string, error) {
	_, file, _, ok := runtime.Caller(0)
	if !ok {
		return "", errors.New("resolve runtime caller for workspace root")
	}
	dir := filepath.Dir(file)
	root := filepath.Clean(filepath.Join(dir, "..", "..", ".."))
	return root, nil
}
@@ -1,12 +0,0 @@
package harness

// GatewayRedisEnv returns the env-var map that wires the gateway binary to a
// Redis master at masterAddr using the master/replica/password shape required
// by `pkg/redisconn`. The integration suites pass a fixed placeholder
// password because the test Redis container runs without `requirepass`.
func GatewayRedisEnv(masterAddr string) map[string]string {
	return map[string]string{
		"GATEWAY_REDIS_MASTER_ADDR": masterAddr,
		"GATEWAY_REDIS_PASSWORD":    "integration",
	}
}
@@ -1,54 +0,0 @@
package harness

import (
	"crypto/ed25519"
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"os"
	"path/filepath"
	"testing"

	"github.com/alicebob/miniredis/v2"
)

// StartMiniredis starts one isolated Redis-compatible in-memory server and
// registers automatic cleanup.
func StartMiniredis(t testing.TB) *miniredis.Miniredis {
	t.Helper()

	server, err := miniredis.Run()
	if err != nil {
		t.Fatalf("start miniredis: %v", err)
	}

	t.Cleanup(server.Close)
	return server
}

// WriteResponseSignerPEM writes one deterministic PKCS#8 PEM-encoded Ed25519
// private key for gateway response signing and returns the file path plus the
// matching public key.
func WriteResponseSignerPEM(t testing.TB, label string) (string, ed25519.PublicKey) {
	t.Helper()

	seed := sha256.Sum256([]byte("galaxy-integration-response-signer-" + label))
	privateKey := ed25519.NewKeyFromSeed(seed[:])

	encoded, err := x509.MarshalPKCS8PrivateKey(privateKey)
	if err != nil {
		t.Fatalf("marshal response signer private key: %v", err)
	}

	pemBytes := pem.EncodeToMemory(&pem.Block{
		Type:  "PRIVATE KEY",
		Bytes: encoded,
	})

	path := filepath.Join(t.TempDir(), "response-signer.pem")
	if err := os.WriteFile(path, pemBytes, 0o600); err != nil {
		t.Fatalf("write response signer private key: %v", err)
	}

	return path, privateKey.Public().(ed25519.PublicKey)
}
@@ -1,51 +0,0 @@
package harness

import (
	"context"
	"testing"
)

// LobbyServicePersistence captures the per-test persistence dependencies of
// the Game Lobby Service binary: a PostgreSQL container hosting the `lobby`
// schema owned by the `lobbyservice` role, plus the Redis credentials that
// point the service at the caller-supplied master address.
type LobbyServicePersistence struct {
	// Postgres exposes the started container so tests that need direct SQL
	// access to the lobby schema (verifying side effects, seeding fixtures)
	// can read or write through it.
	Postgres *PostgresRuntime

	// Env carries the environment entries that must be passed to the
	// lobby-service process. It is safe to merge into the caller's existing
	// env map, or to use as-is and append further LOBBY_* knobs in place.
	Env map[string]string
}

// StartLobbyServicePersistence brings up one isolated PostgreSQL container,
// provisions the `lobby` schema with the `lobbyservice` role, and returns
// the environment entries that wire the lobby-service binary to that
// container plus the supplied Redis master address.
//
// The returned password (`integration`) matches the architectural rule that
// Redis traffic is password-protected; miniredis accepts arbitrary password
// values when its own RequireAuth is not engaged, so the same value works
// against both miniredis and the real `tcredis` runtime.
//
// Cleanup of the container is handled by StartPostgresContainer through
// `t.Cleanup`; callers do not need to defer anything.
func StartLobbyServicePersistence(t testing.TB, redisMasterAddr string) LobbyServicePersistence {
	t.Helper()

	rt := StartPostgresContainer(t)
	if err := rt.EnsureRoleAndSchema(context.Background(), "lobby", "lobbyservice", "lobbyservice"); err != nil {
		t.Fatalf("ensure lobby schema/role: %v", err)
	}

	env := WithPostgres(rt, "LOBBY", "lobby", "lobbyservice")
	env["LOBBY_REDIS_MASTER_ADDR"] = redisMasterAddr
	env["LOBBY_REDIS_PASSWORD"] = "integration"
	return LobbyServicePersistence{
		Postgres: rt,
		Env:      env,
	}
}
@@ -1,187 +0,0 @@
package harness

import (
	"bytes"
	"encoding/json"
	"errors"
	"io"
	"net/http"
	"net/http/httptest"
	"sync"
	"testing"
	"time"
)

const mailStubPath = "/api/v1/internal/login-code-deliveries"

// LoginCodeDelivery stores one mail-delivery request received by the external
// mail stub.
type LoginCodeDelivery struct {
	// Email identifies the target e-mail address requested by authsession.
	Email string

	// Code stores the cleartext login code requested by authsession.
	Code string

	// Locale stores the canonical BCP 47 language tag selected by authsession.
	Locale string
}

// MailBehavior overrides one external mail-stub response.
type MailBehavior struct {
	// Delay waits before the stub writes its response.
	Delay time.Duration

	// StatusCode overrides the HTTP status returned by the stub. Zero keeps the
	// default `200 OK`.
	StatusCode int

	// RawBody overrides the exact response body returned by the stub. An empty
	// value keeps the default JSON payload for the chosen status.
	RawBody string
}

// MailStub provides one stateful external HTTP mail-service stub.
type MailStub struct {
	server *httptest.Server

	mu         sync.Mutex
	deliveries []LoginCodeDelivery
	behavior   MailBehavior
}

// NewMailStub starts one stateful external HTTP mail-service stub.
func NewMailStub(t testing.TB) *MailStub {
	t.Helper()

	stub := &MailStub{}
	stub.server = httptest.NewServer(http.HandlerFunc(stub.handle))
	t.Cleanup(stub.server.Close)
	return stub
}

// BaseURL returns the stub base URL suitable for service runtime wiring.
func (s *MailStub) BaseURL() string {
	if s == nil || s.server == nil {
		return ""
	}
	return s.server.URL
}

// SetBehavior replaces the current response behavior used by subsequent
// requests.
func (s *MailStub) SetBehavior(behavior MailBehavior) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.behavior = behavior
}

// RecordedDeliveries returns a snapshot of all delivery requests received by
// the stub so far.
func (s *MailStub) RecordedDeliveries() []LoginCodeDelivery {
	s.mu.Lock()
	defer s.mu.Unlock()

	cloned := make([]LoginCodeDelivery, len(s.deliveries))
	copy(cloned, s.deliveries)
	return cloned
}

// Reset clears the recorded deliveries and restores default behavior.
func (s *MailStub) Reset() {
	s.mu.Lock()
	defer s.mu.Unlock()

	s.deliveries = nil
	s.behavior = MailBehavior{}
}

func (s *MailStub) handle(writer http.ResponseWriter, request *http.Request) {
	if request.Method != http.MethodPost || request.URL.Path != mailStubPath {
		http.NotFound(writer, request)
		return
	}

	var payload struct {
		Email  string `json:"email"`
		Code   string `json:"code"`
		Locale string `json:"locale"`
	}
	if err := decodeStrictJSONRequest(request, &payload); err != nil {
		http.Error(writer, err.Error(), http.StatusBadRequest)
		return
	}

	s.mu.Lock()
	s.deliveries = append(s.deliveries, LoginCodeDelivery{
		Email:  payload.Email,
		Code:   payload.Code,
		Locale: payload.Locale,
	})
	behavior := s.behavior
	s.mu.Unlock()

	if behavior.Delay > 0 {
		timer := time.NewTimer(behavior.Delay)
		defer timer.Stop()

		select {
		case <-request.Context().Done():
			return
		case <-timer.C:
		}
	}

	statusCode := behavior.StatusCode
	if statusCode == 0 {
		statusCode = http.StatusOK
	}

	body := behavior.RawBody
	if body == "" {
		switch statusCode {
		case http.StatusOK:
			body = `{"outcome":"sent"}`
		default:
			body = `{"error":"stubbed mail failure"}`
		}
	}

	writer.Header().Set("Content-Type", "application/json")
	writer.WriteHeader(statusCode)
	_, _ = io.WriteString(writer, body)
}

func decodeStrictJSONRequest(request *http.Request, target any) error {
	decoder := json.NewDecoder(request.Body)
	decoder.DisallowUnknownFields()

	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}

	return nil
}

func decodeStrictJSONPayload(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()

	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}

	return nil
}
@@ -1,51 +0,0 @@
package harness

import (
	"context"
	"testing"
)

// MailServicePersistence captures the per-test persistence dependencies of
// the Mail Service binary: a PostgreSQL container hosting the `mail` schema
// owned by the `mailservice` role, and the Redis credentials that point the
// service at the caller-supplied master address.
type MailServicePersistence struct {
	// Postgres exposes the started container so tests that need direct SQL
	// access to the mail schema (verifying side effects, seeding fixtures)
	// can read or write through it.
	Postgres *PostgresRuntime

	// Env carries the environment entries that must be passed to the
	// mail-service process. It is safe to merge into the caller's existing env
	// map, or to use as-is and append further MAIL_* knobs in place.
	Env map[string]string
}

// StartMailServicePersistence brings up one isolated PostgreSQL container,
// provisions the `mail` schema with the `mailservice` role, and returns the
// environment entries that wire the mail-service binary to that container plus
// the supplied Redis master address.
//
// The returned password (`integration`) matches the architectural rule that
// Redis traffic is password-protected; miniredis accepts arbitrary password
// values when its own RequireAuth is not engaged, so the same value works
// against both miniredis and the real `tcredis` runtime.
//
// Cleanup of the container is handled by the underlying StartPostgresContainer
// through `t.Cleanup`; callers do not need to defer anything.
func StartMailServicePersistence(t testing.TB, redisMasterAddr string) MailServicePersistence {
	t.Helper()

	rt := StartPostgresContainer(t)
	if err := rt.EnsureRoleAndSchema(context.Background(), "mail", "mailservice", "mailservice"); err != nil {
		t.Fatalf("ensure mail schema/role: %v", err)
	}

	env := WithPostgres(rt, "MAIL", "mail", "mailservice")
	env["MAIL_REDIS_MASTER_ADDR"] = redisMasterAddr
	env["MAIL_REDIS_PASSWORD"] = "integration"
	return MailServicePersistence{
		Postgres: rt,
		Env:      env,
	}
}
@@ -1,55 +0,0 @@
package harness

import (
	"context"
	"testing"
)

// NotificationServicePersistence captures the per-test persistence
// dependencies of the Notification Service binary: a PostgreSQL container
// hosting the `notification` schema owned by the `notificationservice` role,
// and the Redis credentials that point the service at the caller-supplied
// master address.
type NotificationServicePersistence struct {
	// Postgres exposes the started container so tests that need direct SQL
	// access to the notification schema (verifying side effects, seeding
	// fixtures) can read or write through it.
	Postgres *PostgresRuntime

	// Env carries the environment entries that must be passed to the
	// notification-service process. It is safe to merge into the caller's
	// existing env map, or to use as-is and append further NOTIFICATION_*
	// knobs in place.
	Env map[string]string
}

// StartNotificationServicePersistence brings up one isolated PostgreSQL
// container, provisions the `notification` schema with the
// `notificationservice` role, and returns the environment entries that wire
// the notification-service binary to that container plus the supplied Redis
// master address.
//
// The returned password (`integration`) matches the architectural rule that
// Redis traffic is password-protected; miniredis accepts arbitrary password
// values when its own RequireAuth is not engaged, so the same value works
// against both miniredis and the real `tcredis` runtime.
//
// Cleanup of the container is handled by the underlying
// StartPostgresContainer through `t.Cleanup`; callers do not need to defer
// anything.
func StartNotificationServicePersistence(t testing.TB, redisMasterAddr string) NotificationServicePersistence {
	t.Helper()

	rt := StartPostgresContainer(t)
	if err := rt.EnsureRoleAndSchema(context.Background(), "notification", "notificationservice", "notificationservice"); err != nil {
		t.Fatalf("ensure notification schema/role: %v", err)
	}

	env := WithPostgres(rt, "NOTIFICATION", "notification", "notificationservice")
	env["NOTIFICATION_REDIS_MASTER_ADDR"] = redisMasterAddr
	env["NOTIFICATION_REDIS_PASSWORD"] = "integration"
	return NotificationServicePersistence{
		Postgres: rt,
		Env:      env,
	}
}
@@ -1,241 +0,0 @@
package harness

import (
	"context"
	"fmt"
	"net"
	"net/url"
	"strings"
	"sync"
	"testing"
	"time"

	"galaxy/postgres"

	testcontainers "github.com/testcontainers/testcontainers-go"
	tcpostgres "github.com/testcontainers/testcontainers-go/modules/postgres"
	"github.com/testcontainers/testcontainers-go/wait"
)

const (
	defaultPostgresContainerImage = "postgres:16-alpine"
	defaultPostgresDatabase       = "galaxy_integration"
	defaultPostgresSuperuser      = "galaxy_integration"
	defaultPostgresSuperPassword  = "galaxy_integration"

	postgresAdminConnectTimeout = 5 * time.Second
	postgresStartupTimeout      = 60 * time.Second
)

// PostgresRuntime stores one started real PostgreSQL container together with
// the parsed connection coordinates and the per-test role credentials issued
// by EnsureRoleAndSchema.
//
// The struct is safe to call from concurrent tests because credential lookups
// guard the internal map with a mutex; each test should still keep its own
// PostgresRuntime to preserve container-level isolation.
type PostgresRuntime struct {
	Container *tcpostgres.PostgresContainer

	baseDSN  string
	host     string
	port     string
	database string

	mu    sync.Mutex
	creds map[string]string
}

// StartPostgresContainer starts one isolated PostgreSQL container and registers
// automatic cleanup for the suite. The container exposes a superuser created
// from the package-level constants; per-service roles are issued lazily by
// EnsureRoleAndSchema.
func StartPostgresContainer(t testing.TB) *PostgresRuntime {
	t.Helper()

	ctx := context.Background()

	container, err := tcpostgres.Run(ctx,
		defaultPostgresContainerImage,
		tcpostgres.WithDatabase(defaultPostgresDatabase),
		tcpostgres.WithUsername(defaultPostgresSuperuser),
		tcpostgres.WithPassword(defaultPostgresSuperPassword),
		// The default Postgres image emits the "ready to accept connections"
		// log line twice during startup: once during temporary bootstrap, once
		// after the real listener opens on the mapped port. Waiting for the
		// second occurrence avoids racing the temporary instance.
		testcontainers.WithWaitStrategy(
			wait.ForLog("database system is ready to accept connections").
				WithOccurrence(2).
				WithStartupTimeout(postgresStartupTimeout),
		),
	)
	if err != nil {
		t.Fatalf("start postgres container: %v", err)
	}

	t.Cleanup(func() {
		if err := testcontainers.TerminateContainer(container); err != nil {
			t.Errorf("terminate postgres container: %v", err)
		}
	})

	baseDSN, err := container.ConnectionString(ctx, "sslmode=disable")
	if err != nil {
		t.Fatalf("resolve postgres connection string: %v", err)
	}

	host, port, err := splitHostPort(baseDSN)
	if err != nil {
		t.Fatalf("parse postgres connection string: %v", err)
	}

	return &PostgresRuntime{
		Container: container,
		baseDSN:   baseDSN,
		host:      host,
		port:      port,
		database:  defaultPostgresDatabase,
		creds:     map[string]string{},
	}
}

// BaseDSN returns the superuser DSN exposed by the container, suitable for
// administrative tasks such as creating roles or schemas. Callers should
// prefer DSNForSchema for service-scoped access.
func (rt *PostgresRuntime) BaseDSN() string {
	return rt.baseDSN
}

// DSNForSchema returns a DSN that connects as role and pins search_path to
// schema. EnsureRoleAndSchema must have populated credentials for role first;
// otherwise the call panics, signalling a test setup bug.
func (rt *PostgresRuntime) DSNForSchema(schema, role string) string {
	rt.mu.Lock()
	password, ok := rt.creds[role]
	rt.mu.Unlock()
	if !ok {
		panic(fmt.Sprintf(
			"harness: DSNForSchema called for role %q with no credentials; call EnsureRoleAndSchema first",
			role,
		))
	}

	values := url.Values{}
	values.Set("search_path", schema)
	values.Set("sslmode", "disable")

	dsn := url.URL{
		Scheme:   "postgres",
		User:     url.UserPassword(role, password),
		Host:     net.JoinHostPort(rt.host, rt.port),
		Path:     "/" + rt.database,
		RawQuery: values.Encode(),
	}
	return dsn.String()
}

// EnsureRoleAndSchema creates role with the given password (idempotent) and a
// schema owned by that role (idempotent), then grants USAGE so the role can
// resolve table references inside it. The credentials are cached for later
// DSNForSchema lookups.
//
// The operation runs through a temporary administrative connection opened
// from BaseDSN; the connection is closed before the call returns.
func (rt *PostgresRuntime) EnsureRoleAndSchema(ctx context.Context, schema, role, password string) error {
	if strings.TrimSpace(schema) == "" {
		return fmt.Errorf("ensure role and schema: schema must not be empty")
	}
	if strings.TrimSpace(role) == "" {
		return fmt.Errorf("ensure role and schema: role must not be empty")
	}

	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = rt.baseDSN
	cfg.OperationTimeout = postgresAdminConnectTimeout

	db, err := postgres.OpenPrimary(ctx, cfg)
	if err != nil {
		return fmt.Errorf("ensure role and schema: open admin connection: %w", err)
	}
	defer func() {
		_ = db.Close()
	}()

	createRole := fmt.Sprintf(`DO $$
BEGIN
	IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = %s) THEN
		CREATE ROLE %s LOGIN PASSWORD %s;
	END IF;
END $$;`,
		quoteSQLLiteral(role),
		quoteSQLIdentifier(role),
		quoteSQLLiteral(password),
	)
	if _, err := db.ExecContext(ctx, createRole); err != nil {
		return fmt.Errorf("ensure role and schema: create role %q: %w", role, err)
	}

	createSchema := fmt.Sprintf(`CREATE SCHEMA IF NOT EXISTS %s AUTHORIZATION %s;`,
		quoteSQLIdentifier(schema),
		quoteSQLIdentifier(role),
	)
	if _, err := db.ExecContext(ctx, createSchema); err != nil {
		return fmt.Errorf("ensure role and schema: create schema %q: %w", schema, err)
	}

	grantUsage := fmt.Sprintf(`GRANT USAGE ON SCHEMA %s TO %s;`,
		quoteSQLIdentifier(schema),
		quoteSQLIdentifier(role),
	)
	if _, err := db.ExecContext(ctx, grantUsage); err != nil {
		return fmt.Errorf("ensure role and schema: grant usage on %q to %q: %w", schema, role, err)
	}

	rt.mu.Lock()
	rt.creds[role] = password
	rt.mu.Unlock()

	return nil
}

// WithPostgres returns env entries pointing the service identified by
// envPrefix at schema/role inside rt. EnsureRoleAndSchema must have populated
// credentials for role first.
//
// The returned map carries only `<envPrefix>_POSTGRES_PRIMARY_DSN`; the other
// per-service Postgres knobs (operation timeout, pool sizes) keep the
// defaults provided by `pkg/postgres.DefaultConfig`.
func WithPostgres(rt *PostgresRuntime, envPrefix, schema, role string) map[string]string {
	return map[string]string{
		envPrefix + "_POSTGRES_PRIMARY_DSN": rt.DSNForSchema(schema, role),
	}
}

// quoteSQLIdentifier wraps name in double quotes and escapes any embedded
// double quote, producing a SQL identifier that survives reserved words such
// as `user`.
func quoteSQLIdentifier(name string) string {
	return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}

// quoteSQLLiteral wraps value in single quotes and escapes any embedded single
// quote, producing a SQL literal usable in DDL statements where parameter
// binding is not available.
func quoteSQLLiteral(value string) string {
	return "'" + strings.ReplaceAll(value, "'", "''") + "'"
}

// splitHostPort extracts host and port from a postgres:// DSN.
func splitHostPort(dsn string) (string, string, error) {
	parsed, err := url.Parse(dsn)
	if err != nil {
		return "", "", fmt.Errorf("parse dsn: %w", err)
	}
	host := parsed.Hostname()
	port := parsed.Port()
	if host == "" || port == "" {
		return "", "", fmt.Errorf("dsn %q missing host or port", dsn)
	}
	return host, port, nil
}
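The two quoting helpers are plain quote-doubling, which is why reserved words like `user` survive as identifiers. A standalone sketch of the same rules (helpers renamed for the example):

```go
package main

import (
	"fmt"
	"strings"
)

// quoteIdent doubles embedded double quotes and wraps the name, so reserved
// words such as `user` become safe SQL identifiers.
func quoteIdent(name string) string {
	return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}

// quoteLiteral doubles embedded single quotes for DDL statements where
// parameter binding is unavailable.
func quoteLiteral(value string) string {
	return "'" + strings.ReplaceAll(value, "'", "''") + "'"
}

func main() {
	fmt.Println(quoteIdent("user"))   // "user"
	fmt.Println(quoteIdent(`we"ird`)) // "we""ird"
	fmt.Println(quoteLiteral("it's")) // 'it''s'
	fmt.Printf("CREATE SCHEMA %s AUTHORIZATION %s;\n",
		quoteIdent("lobby"), quoteIdent("lobbyservice"))
}
```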
@@ -1,138 +0,0 @@
package harness

import (
	"context"
	"net/url"
	"testing"
	"time"

	"galaxy/postgres"

	"github.com/stretchr/testify/require"
)

func TestPostgresContainerRoundTrip(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	t.Cleanup(cancel)

	rt := StartPostgresContainer(t)

	require.NoError(t, rt.EnsureRoleAndSchema(ctx, "smoke_schema", "smoke_role", "smoke_pass"))

	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = rt.DSNForSchema("smoke_schema", "smoke_role")
	cfg.OperationTimeout = 5 * time.Second

	db, err := postgres.OpenPrimary(ctx, cfg)
	require.NoError(t, err)
	t.Cleanup(func() {
		require.NoError(t, db.Close())
	})

	require.NoError(t, postgres.Ping(ctx, db, cfg.OperationTimeout))

	_, err = db.ExecContext(ctx, `CREATE TABLE notes (id serial PRIMARY KEY, body text NOT NULL)`)
	require.NoError(t, err)

	var insertedID int64
	require.NoError(t, db.QueryRowContext(ctx,
		`INSERT INTO notes (body) VALUES ($1) RETURNING id`, "hello").Scan(&insertedID))
	require.Greater(t, insertedID, int64(0))

	var body string
	require.NoError(t, db.QueryRowContext(ctx,
		`SELECT body FROM notes WHERE id = $1`, insertedID).Scan(&body))
	require.Equal(t, "hello", body)

	// search_path is honoured: the unqualified table created above resolved
	// inside smoke_schema.
	var schemaName string
	require.NoError(t, db.QueryRowContext(ctx,
		`SELECT table_schema FROM information_schema.tables WHERE table_name = 'notes'`,
	).Scan(&schemaName))
	require.Equal(t, "smoke_schema", schemaName)
}

func TestEnsureRoleAndSchemaIsIdempotent(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	t.Cleanup(cancel)

	rt := StartPostgresContainer(t)

	require.NoError(t, rt.EnsureRoleAndSchema(ctx, "schema_x", "role_x", "pass_x"))
	require.NoError(t, rt.EnsureRoleAndSchema(ctx, "schema_x", "role_x", "pass_x"))
}

func TestEnsureRoleAndSchemaSupportsReservedWordIdentifiers(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	t.Cleanup(cancel)

	rt := StartPostgresContainer(t)

	// `user` is a SQL reserved word; identifier quoting must keep this working.
	require.NoError(t, rt.EnsureRoleAndSchema(ctx, "user", "userservice", "secret"))

	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = rt.DSNForSchema("user", "userservice")
	cfg.OperationTimeout = 5 * time.Second

	db, err := postgres.OpenPrimary(ctx, cfg)
	require.NoError(t, err)
	t.Cleanup(func() {
		require.NoError(t, db.Close())
	})

	require.NoError(t, postgres.Ping(ctx, db, cfg.OperationTimeout))
}

func TestWithPostgresBuildsPrimaryDSNEnv(t *testing.T) {
	t.Parallel()

	rt := newRuntimeForTest("127.0.0.1", "55432", "galaxy_integration", "userservice", "s3cr3t!")

	env := WithPostgres(rt, "USERSERVICE", "user", "userservice")

	require.Len(t, env, 1)

	dsn, ok := env["USERSERVICE_POSTGRES_PRIMARY_DSN"]
	require.True(t, ok, "missing USERSERVICE_POSTGRES_PRIMARY_DSN entry")

	parsed, err := url.Parse(dsn)
	require.NoError(t, err)
	require.Equal(t, "postgres", parsed.Scheme)
	require.Equal(t, "127.0.0.1:55432", parsed.Host)
	require.Equal(t, "/galaxy_integration", parsed.Path)
	require.Equal(t, "userservice", parsed.User.Username())

	password, hasPassword := parsed.User.Password()
	require.True(t, hasPassword)
	require.Equal(t, "s3cr3t!", password)

	query := parsed.Query()
	require.Equal(t, "user", query.Get("search_path"))
	require.Equal(t, "disable", query.Get("sslmode"))
}

func TestDSNForSchemaPanicsWithoutCredentials(t *testing.T) {
	t.Parallel()

	rt := newRuntimeForTest("127.0.0.1", "55432", "galaxy_integration", "userservice", "secret")

	require.PanicsWithValue(t,
		`harness: DSNForSchema called for role "unknown" with no credentials; call EnsureRoleAndSchema first`,
		func() {
			_ = rt.DSNForSchema("user", "unknown")
		},
	)
}

// newRuntimeForTest builds a PostgresRuntime without spinning up a container.
// It exists only to exercise the pure DSN/env-builder paths.
func newRuntimeForTest(host, port, database, role, password string) *PostgresRuntime {
	return &PostgresRuntime{
		host:     host,
		port:     port,
		database: database,
		creds:    map[string]string{role: password},
	}
}
@@ -1,287 +0,0 @@
|
||||
package harness
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"net"
|
||||
"net/http"
|
||||
"os"
|
||||
"os/exec"
|
||||
"strings"
|
||||
"sync"
|
||||
"syscall"
|
||||
"testing"
|
||||
"time"
|
||||
)
|
||||
|
||||
const (
|
||||
defaultStartupWait = 10 * time.Second
|
||||
defaultPollInterval = 25 * time.Millisecond
|
||||
defaultStopWait = 5 * time.Second
|
||||
)
|
||||
|
||||
// Process represents one long-lived external service process started by an
|
||||
// integration suite.
|
||||
type Process struct {
|
||||
name string
|
||||
cmd *exec.Cmd
|
||||
|
||||
logsMu sync.Mutex
|
||||
logs bytes.Buffer
|
||||
|
||||
doneCh chan struct{}
|
||||
waitErr error
|
||||
allowUnexpectedExit bool
|
||||
}
|
||||
|
||||
// StartProcess starts binaryPath with envOverrides and registers cleanup that
|
||||
// stops the process and prints captured logs on failed tests.
|
||||
func StartProcess(t testing.TB, name string, binaryPath string, envOverrides map[string]string) *Process {
|
||||
t.Helper()
|
||||
|
||||
cmd := exec.Command(binaryPath)
|
||||
cmd.Env = mergeEnvironment(os.Environ(), envOverrides)
|
||||
|
||||
process := &Process{
|
||||
name: name,
|
||||
cmd: cmd,
|
||||
doneCh: make(chan struct{}),
|
||||
}
|
||||
cmd.Stdout = process.logWriter()
|
||||
cmd.Stderr = process.logWriter()
|
||||
|
||||
if err := cmd.Start(); err != nil {
|
||||
t.Fatalf("start %s: %v", name, err)
|
||||
}
|
||||
|
||||
go func() {
|
||||
process.waitErr = cmd.Wait()
|
||||
close(process.doneCh)
|
||||
}()
|
||||
|
||||
t.Cleanup(func() {
|
||||
process.Stop(t)
|
||||
if t.Failed() {
|
||||
t.Logf("%s logs:\n%s", name, process.Logs())
|
||||
}
|
||||
})
|
||||
|
||||
return process
|
||||
}
|
||||
|
||||
// Stop asks the process to terminate gracefully and waits for completion.
|
||||
func (p *Process) Stop(t testing.TB) {
|
||||
t.Helper()
|
||||
|
||||
if p == nil {
|
||||
return
|
||||
}
|
||||
|
||||
select {
|
||||
case <-p.doneCh:
|
||||
err := p.waitErr
|
||||
if err != nil && !isExpectedProcessExit(err) && !p.allowUnexpectedExit {
|
||||
t.Errorf("%s exited unexpectedly: %v", p.name, err)
|
||||
}
|
||||
return
|
||||
default:
|
||||
}
|
||||
|
||||
if p.cmd.Process != nil {
|
||||
_ = p.cmd.Process.Signal(syscall.SIGTERM)
|
||||
}
|
||||
|
||||
select {
|
||||
case <-p.doneCh:
|
||||
err := p.waitErr
|
||||
if err != nil && !isExpectedProcessExit(err) && !p.allowUnexpectedExit {
|
||||
t.Errorf("%s exited unexpectedly: %v", p.name, err)
|
||||
}
|
||||
case <-time.After(defaultStopWait):
|
||||
if p.cmd.Process != nil {
|
||||
_ = p.cmd.Process.Kill()
|
||||
}
|
||||
<-p.doneCh
|
||||
err := p.waitErr
|
||||
if err != nil && !isExpectedProcessExit(err) && !p.allowUnexpectedExit {
|
||||
t.Errorf("%s exited unexpectedly: %v", p.name, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// AllowUnexpectedExit marks a process exit as expected for tests that
|
||||
// deliberately trigger a fatal runtime dependency failure.
|
||||
func (p *Process) AllowUnexpectedExit() {
|
||||
if p == nil {
|
||||
return
|
||||
}
|
||||
|
||||
p.allowUnexpectedExit = true
|
||||
}
|
||||
|
||||
// Logs returns the captured combined stdout/stderr output of the process.
|
||||
func (p *Process) Logs() string {
|
||||
if p == nil {
|
||||
return ""
|
||||
}
|
||||
|
||||
p.logsMu.Lock()
|
||||
defer p.logsMu.Unlock()
|
||||
return p.logs.String()
|
||||
}
|
||||
|
||||
// FreeTCPAddress reserves one ephemeral loopback TCP address and releases it
|
||||
// immediately so a service process can bind to it.
|
||||
func FreeTCPAddress(t testing.TB) string {
|
||||
t.Helper()
|
||||
|
||||
listener, err := net.Listen("tcp", "127.0.0.1:0")
|
||||
if err != nil {
|
||||
t.Fatalf("reserve free TCP address: %v", err)
|
||||
}
|
||||
|
||||
addr := listener.Addr().String()
|
||||
if err := listener.Close(); err != nil {
|
||||
t.Fatalf("release reserved TCP address: %v", err)
|
||||
}
|
||||
|
||||
return addr
|
||||
}

// WaitForHTTPStatus waits until url responds with wantStatus or fails when the
// backing process exits early.
func WaitForHTTPStatus(t testing.TB, process *Process, url string, wantStatus int) {
	t.Helper()

	client := &http.Client{
		Timeout: 250 * time.Millisecond,
		Transport: &http.Transport{
			DisableKeepAlives: true,
		},
	}
	defer client.CloseIdleConnections()

	ctx, cancel := context.WithTimeout(context.Background(), defaultStartupWait)
	defer cancel()

	ticker := time.NewTicker(defaultPollInterval)
	defer ticker.Stop()

	for {
		if err := processErr(process); err != nil {
			t.Fatalf("%s exited before %s became ready: %v\n%s", process.name, url, err, process.Logs())
		}

		request, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			t.Fatalf("build readiness request for %s: %v", url, err)
		}

		response, err := client.Do(request)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == wantStatus {
				return
			}
		}

		select {
		case <-ctx.Done():
			t.Fatalf("wait for %s status %d: %v\n%s", url, wantStatus, ctx.Err(), process.Logs())
		case <-ticker.C:
		}
	}
}

// WaitForTCP waits until address accepts TCP connections or fails when the
// backing process exits early.
func WaitForTCP(t testing.TB, process *Process, address string) {
	t.Helper()

	ctx, cancel := context.WithTimeout(context.Background(), defaultStartupWait)
	defer cancel()

	ticker := time.NewTicker(defaultPollInterval)
	defer ticker.Stop()

	for {
		if err := processErr(process); err != nil {
			t.Fatalf("%s exited before %s became reachable: %v\n%s", process.name, address, err, process.Logs())
		}

		conn, err := net.DialTimeout("tcp", address, 100*time.Millisecond)
		if err == nil {
			_ = conn.Close()
			return
		}

		select {
		case <-ctx.Done():
			t.Fatalf("wait for %s TCP readiness: %v\n%s", address, ctx.Err(), process.Logs())
		case <-ticker.C:
		}
	}
}

func (p *Process) logWriter() io.Writer {
	return writerFunc(func(data []byte) (int, error) {
		p.logsMu.Lock()
		defer p.logsMu.Unlock()
		return p.logs.Write(data)
	})
}

func mergeEnvironment(base []string, overrides map[string]string) []string {
	values := make(map[string]string, len(base)+len(overrides))
	for _, entry := range base {
		name, value, ok := strings.Cut(entry, "=")
		if ok {
			values[name] = value
		}
	}
	for name, value := range overrides {
		values[name] = value
	}

	merged := make([]string, 0, len(values))
	for name, value := range values {
		merged = append(merged, fmt.Sprintf("%s=%s", name, value))
	}
	return merged
}

func processErr(process *Process) error {
	if process == nil {
		return errors.New("nil process")
	}

	select {
	case <-process.doneCh:
		return process.waitErr
	default:
		return nil
	}
}

func isExpectedProcessExit(err error) bool {
	if err == nil {
		return true
	}

	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) {
		return false
	}

	return exitErr.ExitCode() == -1
}

type writerFunc func([]byte) (int, error)

func (f writerFunc) Write(data []byte) (int, error) {
	return f(data)
}
@@ -1,47 +0,0 @@
package harness

import (
	"context"
	"testing"

	testcontainers "github.com/testcontainers/testcontainers-go"
	rediscontainer "github.com/testcontainers/testcontainers-go/modules/redis"
)

const defaultRedisContainerImage = "redis:7"

// RedisRuntime stores one started real Redis container together with the
// externally reachable endpoint used by black-box suites.
type RedisRuntime struct {
	Container *rediscontainer.RedisContainer
	Addr      string
}

// StartRedisContainer starts one isolated real Redis container and registers
// automatic cleanup for the suite.
func StartRedisContainer(t testing.TB) *RedisRuntime {
	t.Helper()

	ctx := context.Background()

	container, err := rediscontainer.Run(ctx, defaultRedisContainerImage)
	if err != nil {
		t.Fatalf("start redis container: %v", err)
	}

	t.Cleanup(func() {
		if err := testcontainers.TerminateContainer(container); err != nil {
			t.Errorf("terminate redis container: %v", err)
		}
	})

	addr, err := container.Endpoint(ctx, "")
	if err != nil {
		t.Fatalf("resolve redis container endpoint: %v", err)
	}

	return &RedisRuntime{
		Container: container,
		Addr:      addr,
	}
}
@@ -1,54 +0,0 @@
package harness

import (
	"context"
	"testing"
)

// RTManagerServicePersistence captures the per-test persistence
// dependencies of the Runtime Manager binary: a PostgreSQL container
// hosting the `rtmanager` schema owned by the `rtmanagerservice` role,
// plus the Redis credentials that point the service at the
// caller-supplied master address.
type RTManagerServicePersistence struct {
	// Postgres exposes the started container so tests that need direct
	// SQL access to the rtmanager schema can read or write through it.
	Postgres *PostgresRuntime

	// Env carries the environment entries that must be passed to the
	// rtmanager process. It is safe to merge into the caller's existing
	// env map, or to use as-is and append further RTMANAGER_* knobs in
	// place. RTMANAGER_GAME_STATE_ROOT is intentionally omitted; the
	// caller supplies a per-test directory.
	Env map[string]string
}

// StartRTManagerServicePersistence brings up one isolated PostgreSQL
// container, provisions the `rtmanager` schema with the
// `rtmanagerservice` role, and returns the environment entries that
// wire the rtmanager binary at that container plus the supplied Redis
// master address.
//
// The Redis password value matches the architectural rule that Redis
// traffic is password-protected; miniredis accepts arbitrary password
// values when its own RequireAuth is not engaged, and the same value
// works against the real testcontainers Redis runtime.
//
// Cleanup of the container is handled by StartPostgresContainer through
// `t.Cleanup`; callers do not need to defer anything.
func StartRTManagerServicePersistence(t testing.TB, redisMasterAddr string) RTManagerServicePersistence {
	t.Helper()

	rt := StartPostgresContainer(t)
	if err := rt.EnsureRoleAndSchema(context.Background(), "rtmanager", "rtmanagerservice", "rtmanagerservice"); err != nil {
		t.Fatalf("ensure rtmanager schema/role: %v", err)
	}

	env := WithPostgres(rt, "RTMANAGER", "rtmanager", "rtmanagerservice")
	env["RTMANAGER_REDIS_MASTER_ADDR"] = redisMasterAddr
	env["RTMANAGER_REDIS_PASSWORD"] = "integration"
	return RTManagerServicePersistence{
		Postgres: rt,
		Env:      env,
	}
}
@@ -1,377 +0,0 @@
package harness

import (
	"bytes"
	"crypto/rand"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"io"
	"math/big"
	"net"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"testing"
	"time"
)

// SMTPCaptureConfig configures one local SMTP capture server.
type SMTPCaptureConfig struct {
	// SupportsSTARTTLS controls whether the server advertises and accepts the
	// STARTTLS upgrade command.
	SupportsSTARTTLS bool

	// FinalDataReply stores the final SMTP status line returned after the
	// message body has been received. Empty value keeps the default accepted
	// reply.
	FinalDataReply string
}

// SMTPCapture stores one running local SMTP capture server together with the
// generated trust anchor used by external processes.
type SMTPCapture struct {
	addr       string
	rootCAPath string
	listener   net.Listener
	tlsConfig  *tls.Config

	connsMu sync.Mutex
	conns   map[net.Conn]struct{}

	payloadsMu sync.Mutex
	payloads   []string

	acceptWG sync.WaitGroup
	connWG   sync.WaitGroup
}

// StartSMTPCapture starts one local SMTP server suitable for black-box tests
// that need to observe captured message payloads.
func StartSMTPCapture(t testing.TB, cfg SMTPCaptureConfig) *SMTPCapture {
	t.Helper()

	if cfg.FinalDataReply == "" {
		cfg.FinalDataReply = "250 2.0.0 accepted"
	}

	serverCertificate, rootCAPEM := newSMTPCertificates(t)
	rootCAPath := filepath.Join(t.TempDir(), "smtp-root-ca.pem")
	if err := os.WriteFile(rootCAPath, rootCAPEM, 0o600); err != nil {
		t.Fatalf("write SMTP root CA: %v", err)
	}

	listener, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		t.Fatalf("start SMTP capture listener: %v", err)
	}

	capture := &SMTPCapture{
		addr:       listener.Addr().String(),
		rootCAPath: rootCAPath,
		listener:   listener,
		tlsConfig: &tls.Config{
			Certificates: []tls.Certificate{serverCertificate},
			MinVersion:   tls.VersionTLS12,
		},
		conns: make(map[net.Conn]struct{}),
	}

	capture.acceptWG.Add(1)
	go func() {
		defer capture.acceptWG.Done()
		for {
			conn, err := listener.Accept()
			if err != nil {
				return
			}

			capture.trackConn(conn)
			capture.connWG.Add(1)
			go func() {
				defer capture.connWG.Done()
				defer capture.untrackConn(conn)
				defer func() {
					_ = conn.Close()
				}()

				capture.serveConnection(conn, cfg)
			}()
		}
	}()

	t.Cleanup(func() {
		_ = capture.listener.Close()
		capture.closeConnections()
		capture.acceptWG.Wait()
		capture.connWG.Wait()
	})

	return capture
}

// Addr returns the externally reachable TCP address of the capture server.
func (capture *SMTPCapture) Addr() string {
	if capture == nil {
		return ""
	}

	return capture.addr
}

// RootCAPath returns the PEM path that should be trusted by clients talking to
// the capture server over STARTTLS.
func (capture *SMTPCapture) RootCAPath() string {
	if capture == nil {
		return ""
	}

	return capture.rootCAPath
}

// LatestPayload returns the most recently captured SMTP DATA payload.
func (capture *SMTPCapture) LatestPayload() string {
	if capture == nil {
		return ""
	}

	capture.payloadsMu.Lock()
	defer capture.payloadsMu.Unlock()

	if len(capture.payloads) == 0 {
		return ""
	}

	return capture.payloads[len(capture.payloads)-1]
}

func (capture *SMTPCapture) trackConn(conn net.Conn) {
	capture.connsMu.Lock()
	defer capture.connsMu.Unlock()
	capture.conns[conn] = struct{}{}
}

func (capture *SMTPCapture) untrackConn(conn net.Conn) {
	capture.connsMu.Lock()
	defer capture.connsMu.Unlock()
	delete(capture.conns, conn)
}

func (capture *SMTPCapture) closeConnections() {
	capture.connsMu.Lock()
	defer capture.connsMu.Unlock()

	for conn := range capture.conns {
		_ = conn.Close()
	}
}

func (capture *SMTPCapture) appendPayload(payload string) {
	capture.payloadsMu.Lock()
	defer capture.payloadsMu.Unlock()
	capture.payloads = append(capture.payloads, payload)
}

func (capture *SMTPCapture) serveConnection(conn net.Conn, cfg SMTPCaptureConfig) {
	reader := newSMTPLineReader(conn)
	writer := newSMTPLineWriter(conn)
	writer.writeLine("220 localhost ESMTP")

	tlsActive := false
	for {
		line, err := reader.readLine()
		if err != nil {
			return
		}

		command := strings.ToUpper(line)
		switch {
		case strings.HasPrefix(command, "EHLO "), strings.HasPrefix(command, "HELO "):
			if cfg.SupportsSTARTTLS && !tlsActive {
				writer.writeLines(
					"250-localhost",
					"250-8BITMIME",
					"250-STARTTLS",
					"250 SMTPUTF8",
				)
				continue
			}

			writer.writeLines(
				"250-localhost",
				"250-8BITMIME",
				"250 SMTPUTF8",
			)
		case command == "STARTTLS":
			if !cfg.SupportsSTARTTLS {
				writer.writeLine("454 4.7.0 TLS not available")
				continue
			}

			writer.writeLine("220 Ready to start TLS")
			tlsConn := tls.Server(conn, capture.tlsConfig)
			if err := tlsConn.Handshake(); err != nil {
				return
			}

			capture.trackConn(tlsConn)
			capture.untrackConn(conn)
			conn = tlsConn
			reader = newSMTPLineReader(conn)
			writer = newSMTPLineWriter(conn)
			tlsActive = true
		case strings.HasPrefix(command, "MAIL FROM:"):
			writer.writeLine("250 2.1.0 Ok")
		case strings.HasPrefix(command, "RCPT TO:"):
			writer.writeLine("250 2.1.5 Ok")
		case command == "DATA":
			writer.writeLine("354 End data with <CR><LF>.<CR><LF>")

			var payload strings.Builder
			for {
				dataLine, err := reader.readRawLine()
				if err != nil {
					return
				}
				if dataLine == ".\r\n" {
					break
				}
				payload.WriteString(dataLine)
			}

			capture.appendPayload(payload.String())
			writer.writeLine(cfg.FinalDataReply)
		case command == "RSET":
			writer.writeLine("250 2.0.0 Ok")
		case command == "QUIT":
			writer.writeLine("221 2.0.0 Bye")
			return
		default:
			writer.writeLine("250 2.0.0 Ok")
		}
	}
}

type smtpLineReader struct {
	conn net.Conn
}

func newSMTPLineReader(conn net.Conn) *smtpLineReader {
	return &smtpLineReader{conn: conn}
}

func (reader *smtpLineReader) readLine() (string, error) {
	line, err := reader.readRawLine()
	if err != nil {
		return "", err
	}

	return strings.TrimSuffix(strings.TrimSuffix(line, "\n"), "\r"), nil
}

func (reader *smtpLineReader) readRawLine() (string, error) {
	var buffer bytes.Buffer
	tmp := make([]byte, 1)
	for {
		if _, err := reader.conn.Read(tmp); err != nil {
			return "", err
		}

		buffer.WriteByte(tmp[0])
		if tmp[0] == '\n' {
			return buffer.String(), nil
		}
	}
}

type smtpLineWriter struct {
	conn net.Conn
}

func newSMTPLineWriter(conn net.Conn) *smtpLineWriter {
	return &smtpLineWriter{conn: conn}
}

func (writer *smtpLineWriter) writeLine(line string) {
	_, _ = io.WriteString(writer.conn, line+"\r\n")
}

func (writer *smtpLineWriter) writeLines(lines ...string) {
	for _, line := range lines {
		writer.writeLine(line)
	}
}

func newSMTPCertificates(t testing.TB) (tls.Certificate, []byte) {
	t.Helper()

	rootKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		t.Fatalf("generate SMTP root key: %v", err)
	}

	now := time.Now()
	rootTemplate := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject: pkix.Name{
			CommonName: "galaxy-integration-smtp-root",
		},
		NotBefore:             now.Add(-time.Hour),
		NotAfter:              now.Add(24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageCRLSign | x509.KeyUsageDigitalSignature,
		IsCA:                  true,
		BasicConstraintsValid: true,
	}

	rootDER, err := x509.CreateCertificate(rand.Reader, &rootTemplate, &rootTemplate, &rootKey.PublicKey, rootKey)
	if err != nil {
		t.Fatalf("create SMTP root certificate: %v", err)
	}

	rootPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: rootDER})

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		t.Fatalf("generate SMTP server key: %v", err)
	}

	serverTemplate := x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject: pkix.Name{
			CommonName: "127.0.0.1",
		},
		NotBefore:             now.Add(-time.Hour),
		NotAfter:              now.Add(24 * time.Hour),
		KeyUsage:              x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		BasicConstraintsValid: true,
		DNSNames:              []string{"localhost"},
		IPAddresses:           []net.IP{net.ParseIP("127.0.0.1")},
	}

	rootCert, err := x509.ParseCertificate(rootDER)
	if err != nil {
		t.Fatalf("parse SMTP root certificate: %v", err)
	}

	serverDER, err := x509.CreateCertificate(rand.Reader, &serverTemplate, rootCert, &serverKey.PublicKey, rootKey)
	if err != nil {
		t.Fatalf("create SMTP server certificate: %v", err)
	}

	serverPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: serverDER})
	serverKeyPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(serverKey),
	})

	certificate, err := tls.X509KeyPair(append(serverPEM, rootPEM...), serverKeyPEM)
	if err != nil {
		t.Fatalf("load SMTP server key pair: %v", err)
	}

	return certificate, rootPEM
}
@@ -1,323 +0,0 @@
package harness

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"net/url"
	"strings"
	"sync"
	"testing"
)

const (
	resolveByEmailPath = "/api/v1/internal/user-resolutions/by-email"
	ensureByEmailPath  = "/api/v1/internal/users/ensure-by-email"
	blockByEmailPath   = "/api/v1/internal/user-blocks/by-email"
)

// EnsureUserCall stores one ensure-by-email request received by the external
// user-service stub.
type EnsureUserCall struct {
	// Email identifies the requested login or registration e-mail.
	Email string

	// PreferredLanguage stores the forwarded registration-context language.
	PreferredLanguage string

	// TimeZone stores the forwarded registration-context time zone.
	TimeZone string
}

// UserStub provides one stateful external HTTP user-service stub.
type UserStub struct {
	server *httptest.Server

	mu sync.Mutex

	emailToUserID map[string]string
	userIDToEmail map[string]string
	blockedEmails map[string]string
	blockedUsers  map[string]string
	ensureCalls   []EnsureUserCall
	nextUserID    int
}

// NewUserStub starts one stateful external HTTP user-service stub.
func NewUserStub(t testing.TB) *UserStub {
	t.Helper()

	stub := &UserStub{
		emailToUserID: make(map[string]string),
		userIDToEmail: make(map[string]string),
		blockedEmails: make(map[string]string),
		blockedUsers:  make(map[string]string),
		nextUserID:    1,
	}
	stub.server = httptest.NewServer(http.HandlerFunc(stub.handle))
	t.Cleanup(stub.server.Close)
	return stub
}

// BaseURL returns the stub base URL suitable for authsession runtime wiring.
func (s *UserStub) BaseURL() string {
	if s == nil || s.server == nil {
		return ""
	}
	return s.server.URL
}

// SeedExisting adds one existing unblocked user record into the stub state.
func (s *UserStub) SeedExisting(email string, userID string) {
	s.mu.Lock()
	defer s.mu.Unlock()

	s.emailToUserID[email] = userID
	s.userIDToEmail[userID] = email
}

// SeedBlockedEmail adds one blocked e-mail into the stub state.
func (s *UserStub) SeedBlockedEmail(email string, reasonCode string) {
	s.mu.Lock()
	defer s.mu.Unlock()

	s.blockedEmails[email] = reasonCode
	if userID, ok := s.emailToUserID[email]; ok {
		s.blockedUsers[userID] = reasonCode
	}
}

// EnsureCalls returns a snapshot of ensure-by-email requests observed by the
// stub so far.
func (s *UserStub) EnsureCalls() []EnsureUserCall {
	s.mu.Lock()
	defer s.mu.Unlock()

	cloned := make([]EnsureUserCall, len(s.ensureCalls))
	copy(cloned, s.ensureCalls)
	return cloned
}

// Reset clears all stub state and recorded calls.
func (s *UserStub) Reset() {
	s.mu.Lock()
	defer s.mu.Unlock()

	s.emailToUserID = make(map[string]string)
	s.userIDToEmail = make(map[string]string)
	s.blockedEmails = make(map[string]string)
	s.blockedUsers = make(map[string]string)
	s.ensureCalls = nil
	s.nextUserID = 1
}

func (s *UserStub) handle(writer http.ResponseWriter, request *http.Request) {
	switch {
	case request.Method == http.MethodPost && request.URL.Path == resolveByEmailPath:
		s.handleResolveByEmail(writer, request)
	case request.Method == http.MethodGet && strings.HasPrefix(request.URL.Path, "/api/v1/internal/users/") && strings.HasSuffix(request.URL.Path, "/exists"):
		s.handleExistsByUserID(writer, request)
	case request.Method == http.MethodPost && request.URL.Path == ensureByEmailPath:
		s.handleEnsureByEmail(writer, request)
	case request.Method == http.MethodPost && strings.HasPrefix(request.URL.Path, "/api/v1/internal/users/") && strings.HasSuffix(request.URL.Path, "/block"):
		s.handleBlockByUserID(writer, request)
	case request.Method == http.MethodPost && request.URL.Path == blockByEmailPath:
		s.handleBlockByEmail(writer, request)
	default:
		http.NotFound(writer, request)
	}
}

func (s *UserStub) handleResolveByEmail(writer http.ResponseWriter, request *http.Request) {
	var payload struct {
		Email string `json:"email"`
	}
	if err := decodeStrictJSONRequest(request, &payload); err != nil {
		http.Error(writer, err.Error(), http.StatusBadRequest)
		return
	}

	s.mu.Lock()
	defer s.mu.Unlock()

	if reason, ok := s.blockedEmails[payload.Email]; ok {
		writeJSON(writer, http.StatusOK, map[string]any{
			"kind":              "blocked",
			"block_reason_code": reason,
		})
		return
	}

	if userID, ok := s.emailToUserID[payload.Email]; ok {
		if reason, blocked := s.blockedUsers[userID]; blocked {
			writeJSON(writer, http.StatusOK, map[string]any{
				"kind":              "blocked",
				"block_reason_code": reason,
			})
			return
		}

		writeJSON(writer, http.StatusOK, map[string]any{
			"kind":    "existing",
			"user_id": userID,
		})
		return
	}

	writeJSON(writer, http.StatusOK, map[string]any{"kind": "creatable"})
}

func (s *UserStub) handleExistsByUserID(writer http.ResponseWriter, request *http.Request) {
	userIDValue := strings.TrimSuffix(strings.TrimPrefix(request.URL.Path, "/api/v1/internal/users/"), "/exists")
	userIDValue, err := url.PathUnescape(userIDValue)
	if err != nil {
		http.Error(writer, err.Error(), http.StatusBadRequest)
		return
	}

	s.mu.Lock()
	defer s.mu.Unlock()

	_, exists := s.userIDToEmail[userIDValue]
	writeJSON(writer, http.StatusOK, map[string]bool{"exists": exists})
}

func (s *UserStub) handleEnsureByEmail(writer http.ResponseWriter, request *http.Request) {
	var payload struct {
		Email               string `json:"email"`
		RegistrationContext *struct {
			PreferredLanguage string `json:"preferred_language"`
			TimeZone          string `json:"time_zone"`
		} `json:"registration_context"`
	}
	if err := decodeStrictJSONRequest(request, &payload); err != nil {
		http.Error(writer, err.Error(), http.StatusBadRequest)
		return
	}
	if payload.RegistrationContext == nil {
		http.Error(writer, "registration_context must be present", http.StatusBadRequest)
		return
	}

	s.mu.Lock()
	defer s.mu.Unlock()

	s.ensureCalls = append(s.ensureCalls, EnsureUserCall{
		Email:             payload.Email,
		PreferredLanguage: payload.RegistrationContext.PreferredLanguage,
		TimeZone:          payload.RegistrationContext.TimeZone,
	})

	if reason, ok := s.blockedEmails[payload.Email]; ok {
		writeJSON(writer, http.StatusOK, map[string]any{
			"outcome":           "blocked",
			"block_reason_code": reason,
		})
		return
	}

	if userID, ok := s.emailToUserID[payload.Email]; ok {
		if reason, blocked := s.blockedUsers[userID]; blocked {
			writeJSON(writer, http.StatusOK, map[string]any{
				"outcome":           "blocked",
				"block_reason_code": reason,
			})
			return
		}

		writeJSON(writer, http.StatusOK, map[string]any{
			"outcome": "existing",
			"user_id": userID,
		})
		return
	}

	userID := fmt.Sprintf("user-%d", s.nextUserID)
	s.nextUserID++
	s.emailToUserID[payload.Email] = userID
	s.userIDToEmail[userID] = payload.Email

	writeJSON(writer, http.StatusOK, map[string]any{
		"outcome": "created",
		"user_id": userID,
	})
}

func (s *UserStub) handleBlockByUserID(writer http.ResponseWriter, request *http.Request) {
	userIDValue := strings.TrimSuffix(strings.TrimPrefix(request.URL.Path, "/api/v1/internal/users/"), "/block")
	userIDValue, err := url.PathUnescape(userIDValue)
	if err != nil {
		http.Error(writer, err.Error(), http.StatusBadRequest)
		return
	}

	var payload struct {
		ReasonCode string `json:"reason_code"`
	}
	if err := decodeStrictJSONRequest(request, &payload); err != nil {
		http.Error(writer, err.Error(), http.StatusBadRequest)
		return
	}

	s.mu.Lock()
	defer s.mu.Unlock()

	email, exists := s.userIDToEmail[userIDValue]
	if !exists {
		writeJSON(writer, http.StatusNotFound, map[string]string{"error": "not found"})
		return
	}

	outcome := "blocked"
	if _, already := s.blockedUsers[userIDValue]; already {
		outcome = "already_blocked"
	}
	s.blockedUsers[userIDValue] = payload.ReasonCode
	s.blockedEmails[email] = payload.ReasonCode

	writeJSON(writer, http.StatusOK, map[string]any{
		"outcome": outcome,
		"user_id": userIDValue,
	})
}

func (s *UserStub) handleBlockByEmail(writer http.ResponseWriter, request *http.Request) {
	var payload struct {
		Email      string `json:"email"`
		ReasonCode string `json:"reason_code"`
	}
	if err := decodeStrictJSONRequest(request, &payload); err != nil {
		http.Error(writer, err.Error(), http.StatusBadRequest)
		return
	}

	s.mu.Lock()
	defer s.mu.Unlock()

	outcome := "blocked"
	if _, already := s.blockedEmails[payload.Email]; already {
		outcome = "already_blocked"
	}
	s.blockedEmails[payload.Email] = payload.ReasonCode

	response := map[string]any{"outcome": outcome}
	if userID, ok := s.emailToUserID[payload.Email]; ok {
		s.blockedUsers[userID] = payload.ReasonCode
		response["user_id"] = userID
	}

	writeJSON(writer, http.StatusOK, response)
}

func writeJSON(writer http.ResponseWriter, statusCode int, value any) {
	payload, err := json.Marshal(value)
	if err != nil {
		http.Error(writer, err.Error(), http.StatusInternalServerError)
		return
	}

	writer.Header().Set("Content-Type", "application/json")
	writer.WriteHeader(statusCode)
	_, _ = writer.Write(payload)
}
@@ -1,51 +0,0 @@
package harness

import (
	"context"
	"testing"
)

// UserServicePersistence captures the per-test persistence dependencies of
// the User Service binary: a PostgreSQL container hosting the `user` schema
// owned by the `userservice` role, and the Redis credentials that point the
// service at the caller-supplied master address.
type UserServicePersistence struct {
	// Postgres exposes the started container so tests that need direct SQL
	// access to the user schema (verifying side effects, seeding fixtures)
	// can read or write through it.
	Postgres *PostgresRuntime

	// Env carries the environment entries that must be passed to the
	// userservice process. It is safe to merge into the caller's existing env
	// map, or to use as-is and append further USERSERVICE_* knobs in place.
	Env map[string]string
}

// StartUserServicePersistence brings up one isolated PostgreSQL container,
// provisions the `user` schema with the `userservice` role, and returns the
// environment entries that wire the userservice binary at that container plus
// the supplied Redis master address.
//
// The returned password (`integration`) matches the architectural rule that
// Redis traffic is password-protected; miniredis accepts arbitrary password
// values when its own RequireAuth is not engaged, so the same value works
// against both miniredis and the real `tcredis` runtime.
//
// Cleanup of the container is handled by the underlying StartPostgresContainer
// through `t.Cleanup`; callers do not need to defer anything.
func StartUserServicePersistence(t testing.TB, redisMasterAddr string) UserServicePersistence {
	t.Helper()

	rt := StartPostgresContainer(t)
	if err := rt.EnsureRoleAndSchema(context.Background(), "user", "userservice", "userservice"); err != nil {
		t.Fatalf("ensure user schema/role: %v", err)
	}

	env := WithPostgres(rt, "USERSERVICE", "user", "userservice")
	env["USERSERVICE_REDIS_MASTER_ADDR"] = redisMasterAddr
	env["USERSERVICE_REDIS_PASSWORD"] = "integration"
	return UserServicePersistence{
		Postgres: rt,
		Env:      env,
	}
}
@@ -0,0 +1,9 @@
package integration_test

import "encoding/json"

// jsonUnmarshal is a tiny indirection so other test files can decode
// without importing encoding/json each time.
func jsonUnmarshal(raw []byte, v any) error {
	return json.Unmarshal(raw, v)
}
@@ -0,0 +1,130 @@
package integration_test

import (
	"context"
	"encoding/json"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/testenv"
)

// TestLobbyFlow_PrivateGameInviteRedeem exercises the lobby state
// machine that does NOT require a live engine container:
//
// 1. owner registers and creates a private game (draft);
// 2. owner moves it to `enrollment_open` via `/open-enrollment`;
// 3. owner issues a user-bound invite to a second user;
// 4. invitee redeems the invite;
// 5. owner lists `/lobby/games/{game_id}/memberships` and sees both
//    pilots.
//
// The engine-running phases (start → command → force-next-turn →
// finish → race name promotion) live in `runtime_lifecycle_test.go`
// and `engine_command_proxy_test.go`, which spin up the
// `galaxy/game:integration` container.
func TestLobbyFlow_PrivateGameInviteRedeem(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
	defer cancel()

	// Seed engine version so create-game validation passes.
	admin := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, plat.Backend.AdminUser, plat.Backend.AdminPassword)
	if _, resp, err := admin.Do(ctx, http.MethodPost, "/api/v1/admin/engine-versions", map[string]any{
		"version": "v1.0.0", "image_ref": "galaxy/game:integration", "enabled": true,
	}); err != nil || resp.StatusCode/100 != 2 {
		t.Fatalf("seed engine_version: err=%v resp=%v", err, resp)
	}

	owner := testenv.RegisterSession(t, plat, "owner+lobby@example.com")
	invitee := testenv.RegisterSession(t, plat, "invitee+lobby@example.com")
	ownerID, err := owner.LookupUserID(ctx, plat)
	if err != nil {
		t.Fatalf("resolve owner: %v", err)
	}
	inviteeID, err := invitee.LookupUserID(ctx, plat)
	if err != nil {
		t.Fatalf("resolve invitee: %v", err)
	}

	ownerClient := testenv.NewBackendUserClient(plat.Backend.HTTPURL, ownerID)
	inviteeClient := testenv.NewBackendUserClient(plat.Backend.HTTPURL, inviteeID)

	// 1+2. Create + open enrollment.
	gameBody := map[string]any{
		"game_name":             "Private Lobby Run",
		"visibility":            "private",
		"min_players":           2,
		"max_players":           4,
		"start_gap_hours":       1,
		"start_gap_players":     2,
		"enrollment_ends_at":    time.Now().Add(24 * time.Hour).UTC().Format(time.RFC3339),
		"turn_schedule":         "0 * * * *",
		"target_engine_version": "v1.0.0",
	}
	raw, resp, err := ownerClient.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games", gameBody)
	if err != nil || resp.StatusCode != http.StatusCreated {
		t.Fatalf("create private game: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}
	var game struct {
		GameID string `json:"game_id"`
	}
	if err := json.Unmarshal(raw, &game); err != nil {
		t.Fatalf("decode game: %v", err)
	}
	if _, resp, err = ownerClient.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games/"+game.GameID+"/open-enrollment", nil); err != nil {
		t.Fatalf("open enrollment: %v", err)
	}
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("open enrollment: status %d", resp.StatusCode)
	}

	// 3. Owner issues an invite for invitee.
	raw, resp, err = ownerClient.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games/"+game.GameID+"/invites", map[string]any{
		"invited_user_id": inviteeID,
		"race_name":       "Invitee-Crew",
	})
	if err != nil || resp.StatusCode != http.StatusCreated {
		t.Fatalf("issue invite: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}
	var invite struct {
		InviteID string `json:"invite_id"`
	}
	if err := json.Unmarshal(raw, &invite); err != nil {
		t.Fatalf("decode invite: %v", err)
	}

	// 4. Invitee redeems.
	raw, resp, err = inviteeClient.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games/"+game.GameID+"/invites/"+invite.InviteID+"/redeem", nil)
	if err != nil {
		t.Fatalf("redeem: %v", err)
	}
	if resp.StatusCode/100 != 2 {
		t.Fatalf("redeem: status %d body=%s", resp.StatusCode, string(raw))
	}

	// 5. Memberships listing should now include the invitee.
	raw, resp, err = ownerClient.Do(ctx, http.MethodGet, "/api/v1/user/lobby/games/"+game.GameID+"/memberships?page=1&page_size=10", nil)
	if err != nil || resp.StatusCode != http.StatusOK {
		t.Fatalf("memberships list: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}
	var mems struct {
		Items []struct {
			UserID string `json:"user_id"`
		} `json:"items"`
	}
	if err := json.Unmarshal(raw, &mems); err != nil {
		t.Fatalf("decode memberships: %v", err)
	}
	found := false
	for _, m := range mems.Items {
		if m.UserID == inviteeID {
			found = true
			break
		}
	}
	if !found {
		t.Fatalf("invitee membership not present in listing: %+v", mems.Items)
	}
}
@@ -0,0 +1,119 @@
package integration_test

import (
	"context"
	"encoding/json"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/testenv"
	lobbymodel "galaxy/model/lobby"
	"galaxy/transcoder"
)

// TestLobbyMyGamesList drives `lobby.my.games.list` through the
// authenticated gateway gRPC surface. `my.games.list` returns games
// where the caller has an active membership, so the test creates a
// private game with one user, opens enrollment, invites a second
// user, has the second user redeem the invite (becoming a member),
// and asserts the second user's listing includes the game.
func TestLobbyMyGamesList(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
	defer cancel()

	admin := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, plat.Backend.AdminUser, plat.Backend.AdminPassword)
	if _, resp, err := admin.Do(ctx, http.MethodPost, "/api/v1/admin/engine-versions", map[string]any{
		"version": "v1.0.0", "image_ref": "galaxy/game:integration", "enabled": true,
	}); err != nil || resp.StatusCode/100 != 2 {
		t.Fatalf("seed engine_version: err=%v resp=%v", err, resp)
	}

	owner := testenv.RegisterSession(t, plat, "owner+mygames@example.com")
	pilot := testenv.RegisterSession(t, plat, "pilot+mygames@example.com")
	ownerID, err := owner.LookupUserID(ctx, plat)
	if err != nil {
		t.Fatalf("resolve owner: %v", err)
	}
	pilotID, err := pilot.LookupUserID(ctx, plat)
	if err != nil {
		t.Fatalf("resolve pilot: %v", err)
	}
	ownerHTTP := testenv.NewBackendUserClient(plat.Backend.HTTPURL, ownerID)
	pilotHTTP := testenv.NewBackendUserClient(plat.Backend.HTTPURL, pilotID)

	gameBody := map[string]any{
		"game_name":             "MyGames Lobby",
		"visibility":            "private",
		"min_players":           2,
		"max_players":           4,
		"start_gap_hours":       1,
		"start_gap_players":     2,
		"enrollment_ends_at":    time.Now().Add(24 * time.Hour).UTC().Format(time.RFC3339),
		"turn_schedule":         "0 * * * *",
		"target_engine_version": "v1.0.0",
	}
	raw, resp, err := ownerHTTP.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games", gameBody)
	if err != nil || resp.StatusCode != http.StatusCreated {
		t.Fatalf("create private game: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}
	var created struct {
		GameID string `json:"game_id"`
	}
	if err := json.Unmarshal(raw, &created); err != nil {
		t.Fatalf("decode: %v", err)
	}

	if _, resp, err := ownerHTTP.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games/"+created.GameID+"/open-enrollment", nil); err != nil || resp.StatusCode != http.StatusOK {
		t.Fatalf("open enrollment: err=%v status=%d", err, resp.StatusCode)
	}
	raw, resp, err = ownerHTTP.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games/"+created.GameID+"/invites", map[string]any{
		"invited_user_id": pilotID,
		"race_name":       "PilotMG",
	})
	if err != nil || resp.StatusCode != http.StatusCreated {
		t.Fatalf("issue invite: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}
	var invite struct {
		InviteID string `json:"invite_id"`
	}
	if err := json.Unmarshal(raw, &invite); err != nil {
		t.Fatalf("decode invite: %v", err)
	}
	if _, resp, err := pilotHTTP.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games/"+created.GameID+"/invites/"+invite.InviteID+"/redeem", nil); err != nil || resp.StatusCode/100 != 2 {
		t.Fatalf("redeem: err=%v status=%d", err, resp.StatusCode)
	}

	gw, err := pilot.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial: %v", err)
	}
	defer gw.Close()

	payload, err := transcoder.MyGamesListRequestToPayload(&lobbymodel.MyGamesListRequest{})
	if err != nil {
		t.Fatalf("encode payload: %v", err)
	}
	res, err := gw.Execute(ctx, lobbymodel.MessageTypeMyGamesList, payload, testenv.ExecuteOptions{})
	if err != nil {
		t.Fatalf("execute my.games.list: %v", err)
	}
	if res.ResultCode != "ok" {
		t.Fatalf("result_code = %q, want ok", res.ResultCode)
	}
	list, err := transcoder.PayloadToMyGamesListResponse(res.PayloadBytes)
	if err != nil {
		t.Fatalf("decode list response: %v", err)
	}
	found := false
	for _, g := range list.Items {
		if g.GameID == created.GameID {
			found = true
			break
		}
	}
	if !found {
		t.Fatalf("created game %q absent from my-games list: %+v", created.GameID, list.Items)
	}
}
@@ -0,0 +1,117 @@
package integration_test

import (
	"context"
	"encoding/json"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/testenv"
	lobbymodel "galaxy/model/lobby"
	"galaxy/transcoder"
)

// TestLobbyOpenEnrollment drives `lobby.game.open-enrollment` through
// gateway gRPC. Owner moves draft → enrollment_open; non-owner is
// rejected; idempotent re-call on enrollment_open is a no-op (still
// returns enrollment_open).
func TestLobbyOpenEnrollment(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
	defer cancel()

	admin := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, plat.Backend.AdminUser, plat.Backend.AdminPassword)
	if _, resp, err := admin.Do(ctx, http.MethodPost, "/api/v1/admin/engine-versions", map[string]any{
		"version": "v1.0.0", "image_ref": "galaxy/game:integration", "enabled": true,
	}); err != nil || resp.StatusCode/100 != 2 {
		t.Fatalf("seed engine_version: err=%v resp=%v", err, resp)
	}

	owner := testenv.RegisterSession(t, plat, "owner+enroll@example.com")
	other := testenv.RegisterSession(t, plat, "other+enroll@example.com")
	ownerID, err := owner.LookupUserID(ctx, plat)
	if err != nil {
		t.Fatalf("resolve owner: %v", err)
	}
	ownerHTTP := testenv.NewBackendUserClient(plat.Backend.HTTPURL, ownerID)

	gameBody := map[string]any{
		"game_name":             "Open Enrollment Lobby",
		"visibility":            "private",
		"min_players":           2,
		"max_players":           4,
		"start_gap_hours":       1,
		"start_gap_players":     2,
		"enrollment_ends_at":    time.Now().Add(24 * time.Hour).UTC().Format(time.RFC3339),
		"turn_schedule":         "0 * * * *",
		"target_engine_version": "v1.0.0",
	}
	raw, resp, err := ownerHTTP.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games", gameBody)
	if err != nil || resp.StatusCode != http.StatusCreated {
		t.Fatalf("create private game: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}
	var game struct {
		GameID string `json:"game_id"`
	}
	if err := json.Unmarshal(raw, &game); err != nil {
		t.Fatalf("decode: %v", err)
	}

	encode := func(t *testing.T) []byte {
		t.Helper()
		payload, err := transcoder.OpenEnrollmentRequestToPayload(&lobbymodel.OpenEnrollmentRequest{
			GameID: game.GameID,
		})
		if err != nil {
			t.Fatalf("encode payload: %v", err)
		}
		return payload
	}

	// Non-owner attempt — must fail.
	otherGW, err := other.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial other: %v", err)
	}
	defer otherGW.Close()
	res, err := otherGW.Execute(ctx, lobbymodel.MessageTypeOpenEnrollment, encode(t), testenv.ExecuteOptions{})
	if err != nil {
		t.Fatalf("non-owner execute: %v", err)
	}
	if res.ResultCode == "ok" {
		t.Fatalf("non-owner open-enrollment was accepted: %+v", res)
	}

	// Owner attempt — must succeed and return enrollment_open.
	ownerGW, err := owner.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial owner: %v", err)
	}
	defer ownerGW.Close()
	res, err = ownerGW.Execute(ctx, lobbymodel.MessageTypeOpenEnrollment, encode(t), testenv.ExecuteOptions{})
	if err != nil {
		t.Fatalf("owner execute: %v", err)
	}
	if res.ResultCode != "ok" {
		t.Fatalf("owner result_code = %q, want ok", res.ResultCode)
	}
	got, err := transcoder.PayloadToOpenEnrollmentResponse(res.PayloadBytes)
	if err != nil {
		t.Fatalf("decode response: %v", err)
	}
	if got.Status != "enrollment_open" {
		t.Fatalf("status after open = %q, want enrollment_open", got.Status)
	}

	// Idempotent re-call — must not error and must still report
	// enrollment_open (or a conflict that the gateway maps to a
	// non-ok result_code without crashing the stream).
	res, err = ownerGW.Execute(ctx, lobbymodel.MessageTypeOpenEnrollment, encode(t), testenv.ExecuteOptions{})
	if err != nil {
		t.Fatalf("idempotent execute: %v", err)
	}
	if res.ResultCode == "" {
		t.Fatalf("idempotent execute returned empty result_code")
	}
}
@@ -1,508 +0,0 @@
|
||||
// Package lobbyauthsession_test exercises the authenticated context
|
||||
// propagation between Auth/Session Service and Game Lobby. The
|
||||
// architecture wires the two services through Gateway: AuthSession
|
||||
// owns the device-session lifecycle, Gateway projects sessions into
|
||||
// its cache and signs request envelopes, and Lobby reads the
|
||||
// resolved `X-User-Id` from the gateway-authenticated downstream
|
||||
// hop.
|
||||
//
|
||||
// The boundary contract under test is: revoking a device session
|
||||
// through AuthSession's internal API removes the session projection
|
||||
// from the gateway cache, after which Gateway refuses to route any
|
||||
// subsequent `lobby.*` command for that session. The suite asserts
|
||||
// the boundary on the public surfaces: AuthSession internal REST,
|
||||
// Gateway authenticated gRPC, and Lobby state via direct REST
|
||||
// observation.
|
||||
//
|
||||
// Coverage maps onto `TESTING.md §6` `Lobby ↔ Auth/Session`:
|
||||
// "authenticated context correctly propagated from gateway".
|
||||
package lobbyauthsession_test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/ed25519"
|
||||
"crypto/sha256"
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"io"
|
||||
"net/http"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
|
||||
contractsgatewayv1 "galaxy/integration/internal/contracts/gatewayv1"
|
||||
"galaxy/integration/internal/harness"
|
||||
lobbymodel "galaxy/model/lobby"
|
||||
"galaxy/transcoder"
|
||||
|
||||
"github.com/redis/go-redis/v9"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
"google.golang.org/grpc"
|
||||
"google.golang.org/grpc/codes"
|
||||
"google.golang.org/grpc/credentials/insecure"
|
||||
"google.golang.org/grpc/status"
|
||||
)
|
||||
|
||||
// TestSessionRevocationStopsGatewayFromRoutingLobbyCommands proves
|
||||
// that AuthSession owns the authenticated context: a successful
|
||||
// `lobby.my.games.list` command before the revoke must succeed, and
|
||||
// the same command after the revoke must fail at Gateway with
|
||||
// Unauthenticated, never reaching Lobby.
|
||||
func TestSessionRevocationStopsGatewayFromRoutingLobbyCommands(t *testing.T) {
|
||||
h := newHarness(t)
|
||||
|
||||
clientKey := newClientPrivateKey("g4-revoke")
|
||||
deviceSessionID, _ := h.authenticate(t, "revoke@example.com", clientKey)
|
||||
|
||||
conn := h.dialGateway(t)
|
||||
client := gatewayv1.NewEdgeGatewayClient(conn)
|
||||
|
||||
// Pre-revoke: lobby.my.games.list must succeed.
|
||||
requestBytes, err := transcoder.MyGamesListRequestToPayload(&lobbymodel.MyGamesListRequest{})
|
||||
require.NoError(t, err)
|
||||
preResponse, err := client.ExecuteCommand(context.Background(),
|
||||
newExecuteCommandRequest(deviceSessionID, "req-pre-revoke", lobbymodel.MessageTypeMyGamesList, requestBytes, clientKey),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "ok", preResponse.GetResultCode())
|
||||
|
||||
// Revoke through AuthSession internal API.
|
||||
h.revokeSession(t, deviceSessionID)
|
||||
|
||||
// Wait for the gateway projection to drop / flip to revoked.
|
||||
h.waitForSessionGone(t, deviceSessionID, 5*time.Second)
|
||||
|
||||
// Post-revoke: same command must be rejected at Gateway.
|
||||
postResponse, err := client.ExecuteCommand(context.Background(),
|
||||
newExecuteCommandRequest(deviceSessionID, "req-post-revoke", lobbymodel.MessageTypeMyGamesList, requestBytes, clientKey),
|
||||
)
|
||||
require.Error(t, err, "post-revoke command must fail at Gateway")
|
||||
require.Nil(t, postResponse)
|
||||
|
||||
statusCode := status.Code(err)
|
||||
require.Truef(t,
|
||||
statusCode == codes.Unauthenticated ||
|
||||
statusCode == codes.PermissionDenied ||
|
||||
statusCode == codes.FailedPrecondition,
|
||||
"post-revoke must fail with Unauthenticated/PermissionDenied/FailedPrecondition, got %s: %v",
|
||||
statusCode, err,
|
||||
)
|
||||
}
|
||||
|
||||
// --- harness ---

type lobbyAuthsessionHarness struct {
	redis *redis.Client

	mailStub *harness.MailStub

	authsessionPublicURL   string
	authsessionInternalURL string
	gatewayPublicURL       string
	gatewayGRPCAddr        string
	userServiceURL         string
	lobbyPublicURL         string

	processes []*harness.Process
}

func newHarness(t *testing.T) *lobbyAuthsessionHarness {
	t.Helper()

	redisRuntime := harness.StartRedisContainer(t)
	redisClient := redis.NewClient(&redis.Options{
		Addr:            redisRuntime.Addr,
		Protocol:        2,
		DisableIdentity: true,
	})
	t.Cleanup(func() { require.NoError(t, redisClient.Close()) })

	mailStub := harness.NewMailStub(t)
	responseSignerPath, _ := harness.WriteResponseSignerPEM(t, t.Name())

	userServiceAddr := harness.FreeTCPAddress(t)
	authsessionPublicAddr := harness.FreeTCPAddress(t)
	authsessionInternalAddr := harness.FreeTCPAddress(t)
	gatewayPublicAddr := harness.FreeTCPAddress(t)
	gatewayGRPCAddr := harness.FreeTCPAddress(t)
	lobbyPublicAddr := harness.FreeTCPAddress(t)
	lobbyInternalAddr := harness.FreeTCPAddress(t)

	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
	authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")
	gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")
	lobbyBinary := harness.BuildBinary(t, "lobby", "./lobby/cmd/lobby")

	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

	authsessionEnv := map[string]string{
		"AUTHSESSION_LOG_LEVEL":                     "info",
		"AUTHSESSION_PUBLIC_HTTP_ADDR":              authsessionPublicAddr,
		"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT":   time.Second.String(),
		"AUTHSESSION_INTERNAL_HTTP_ADDR":            authsessionInternalAddr,
		"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": time.Second.String(),
		"AUTHSESSION_REDIS_MASTER_ADDR":             redisRuntime.Addr,
		"AUTHSESSION_REDIS_PASSWORD":                "integration",
		"AUTHSESSION_USER_SERVICE_MODE":             "rest",
		"AUTHSESSION_USER_SERVICE_BASE_URL":         "http://" + userServiceAddr,
		"AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT":  time.Second.String(),
		"AUTHSESSION_MAIL_SERVICE_MODE":             "rest",
		"AUTHSESSION_MAIL_SERVICE_BASE_URL":         mailStub.BaseURL(),
		"AUTHSESSION_MAIL_SERVICE_REQUEST_TIMEOUT":  time.Second.String(),
		"AUTHSESSION_REDIS_GATEWAY_SESSION_CACHE_KEY_PREFIX": "gateway:session:",
		"AUTHSESSION_REDIS_GATEWAY_SESSION_EVENTS_STREAM":    "gateway:session_events",
		"OTEL_TRACES_EXPORTER":  "none",
		"OTEL_METRICS_EXPORTER": "none",
	}
	authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, authsessionEnv)
	waitForAuthsessionReady(t, authsessionProcess, "http://"+authsessionPublicAddr)

	lobbyEnv := harness.StartLobbyServicePersistence(t, redisRuntime.Addr).Env
	lobbyEnv["LOBBY_LOG_LEVEL"] = "info"
	lobbyEnv["LOBBY_PUBLIC_HTTP_ADDR"] = lobbyPublicAddr
	lobbyEnv["LOBBY_INTERNAL_HTTP_ADDR"] = lobbyInternalAddr
	lobbyEnv["LOBBY_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
	lobbyEnv["LOBBY_GM_BASE_URL"] = mailStub.BaseURL()
	lobbyEnv["LOBBY_RUNTIME_JOB_RESULTS_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["LOBBY_USER_LIFECYCLE_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["LOBBY_GM_EVENTS_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["OTEL_TRACES_EXPORTER"] = "none"
	lobbyEnv["OTEL_METRICS_EXPORTER"] = "none"
	lobbyProcess := harness.StartProcess(t, "lobby", lobbyBinary, lobbyEnv)
	harness.WaitForHTTPStatus(t, lobbyProcess, "http://"+lobbyInternalAddr+"/readyz", http.StatusOK)

	gatewayEnv := map[string]string{
		"GATEWAY_LOG_LEVEL":                      "info",
		"GATEWAY_PUBLIC_HTTP_ADDR":               gatewayPublicAddr,
		"GATEWAY_AUTHENTICATED_GRPC_ADDR":        gatewayGRPCAddr,
		"GATEWAY_REDIS_MASTER_ADDR":              redisRuntime.Addr,
		"GATEWAY_REDIS_PASSWORD":                 "integration",
		"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:",
		"GATEWAY_SESSION_EVENTS_REDIS_STREAM":    "gateway:session_events",
		"GATEWAY_CLIENT_EVENTS_REDIS_STREAM":     "gateway:client_events",
		"GATEWAY_REPLAY_REDIS_KEY_PREFIX":        "gateway:replay:",
		"GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH": filepath.Clean(responseSignerPath),
		"GATEWAY_AUTH_SERVICE_BASE_URL":          "http://" + authsessionPublicAddr,
		"GATEWAY_USER_SERVICE_BASE_URL":          "http://" + userServiceAddr,
		"GATEWAY_LOBBY_SERVICE_BASE_URL":         "http://" + lobbyPublicAddr,
		"GATEWAY_PUBLIC_AUTH_UPSTREAM_TIMEOUT":   (500 * time.Millisecond).String(),
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_REQUESTS": "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_WINDOW":   "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_BURST":    "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS": "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW":   "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST":    "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS": "100",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW":   "1s",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST":    "100",
		"OTEL_TRACES_EXPORTER":  "none",
		"OTEL_METRICS_EXPORTER": "none",
	}
	gatewayProcess := harness.StartProcess(t, "gateway", gatewayBinary, gatewayEnv)
	harness.WaitForHTTPStatus(t, gatewayProcess, "http://"+gatewayPublicAddr+"/healthz", http.StatusOK)
	harness.WaitForTCP(t, gatewayProcess, gatewayGRPCAddr)

	return &lobbyAuthsessionHarness{
		redis:                  redisClient,
		mailStub:               mailStub,
		authsessionPublicURL:   "http://" + authsessionPublicAddr,
		authsessionInternalURL: "http://" + authsessionInternalAddr,
		gatewayPublicURL:       "http://" + gatewayPublicAddr,
		gatewayGRPCAddr:        gatewayGRPCAddr,
		userServiceURL:         "http://" + userServiceAddr,
		lobbyPublicURL:         "http://" + lobbyPublicAddr,
		processes:              []*harness.Process{userServiceProcess, authsessionProcess, lobbyProcess, gatewayProcess},
	}
}

// authenticate runs the public-auth flow through the Gateway and
// returns the resulting `device_session_id` plus the resolved user_id.
func (h *lobbyAuthsessionHarness) authenticate(t *testing.T, email string, clientKey ed25519.PrivateKey) (string, string) {
	t.Helper()

	challengeID := h.sendChallenge(t, email)
	code := h.waitForChallengeCode(t, email)

	confirm := h.confirmCode(t, challengeID, code, clientKey)
	require.Equalf(t, http.StatusOK, confirm.StatusCode, "confirm: %s", confirm.Body)

	var confirmBody struct {
		DeviceSessionID string `json:"device_session_id"`
	}
	require.NoError(t, decodeStrictJSONPayload([]byte(confirm.Body), &confirmBody))
	require.NotEmpty(t, confirmBody.DeviceSessionID)

	user := h.lookupUserByEmail(t, email)

	deadline := time.Now().Add(5 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := h.redis.Get(context.Background(), "gateway:session:"+confirmBody.DeviceSessionID).Bytes(); err == nil {
			return confirmBody.DeviceSessionID, user.UserID
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("gateway session projection for %s never arrived", confirmBody.DeviceSessionID)
	return "", ""
}

func (h *lobbyAuthsessionHarness) sendChallenge(t *testing.T, email string) string {
	t.Helper()
	resp := postJSON(t, h.gatewayPublicURL+"/api/v1/public/auth/send-email-code", map[string]string{
		"email": email,
	}, nil)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "send-email-code: %s", resp.Body)
	var body struct {
		ChallengeID string `json:"challenge_id"`
	}
	require.NoError(t, decodeStrictJSONPayload([]byte(resp.Body), &body))
	return body.ChallengeID
}

func (h *lobbyAuthsessionHarness) confirmCode(t *testing.T, challengeID, code string, clientKey ed25519.PrivateKey) httpResponse {
	t.Helper()
	return postJSON(t, h.gatewayPublicURL+"/api/v1/public/auth/confirm-email-code", map[string]string{
		"challenge_id":      challengeID,
		"code":              code,
		"client_public_key": base64.StdEncoding.EncodeToString(clientKey.Public().(ed25519.PublicKey)),
		"time_zone":         "Europe/Kaliningrad",
	}, nil)
}

func (h *lobbyAuthsessionHarness) waitForChallengeCode(t *testing.T, email string) string {
	t.Helper()
	deadline := time.Now().Add(5 * time.Second)
	for time.Now().Before(deadline) {
		for _, delivery := range h.mailStub.RecordedDeliveries() {
			if delivery.Email == email && delivery.Code != "" {
				return delivery.Code
			}
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("auth code for %s never arrived", email)
	return ""
}

func (h *lobbyAuthsessionHarness) lookupUserByEmail(t *testing.T, email string) struct {
	UserID string `json:"user_id"`
} {
	t.Helper()
	resp := postJSON(t, h.userServiceURL+"/api/v1/internal/user-lookups/by-email", map[string]string{"email": email}, nil)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "user lookup: %s", resp.Body)
	var body struct {
		User struct {
			UserID string `json:"user_id"`
		} `json:"user"`
	}
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &body))
	return struct {
		UserID string `json:"user_id"`
	}{UserID: body.User.UserID}
}

// revokeSession calls AuthSession's internal revoke surface for a
// specific device session. The body shape is defined by
// `authsession/api/internal-openapi.yaml#RevokeDeviceSessionRequest`.
func (h *lobbyAuthsessionHarness) revokeSession(t *testing.T, deviceSessionID string) {
	t.Helper()
	target := h.authsessionInternalURL + "/api/v1/internal/sessions/" + deviceSessionID + "/revoke"
	resp := postJSON(t, target, map[string]any{
		"reason_code": "test_revocation",
		"actor": map[string]string{
			"type": "test",
			"id":   "lobbyauthsession-suite",
		},
	}, nil)
	require.Truef(t,
		resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusNoContent,
		"revoke session %s: status=%d body=%s", deviceSessionID, resp.StatusCode, resp.Body,
	)
}

// waitForSessionGone polls the gateway session cache until the
// session record is removed or marked revoked.
func (h *lobbyAuthsessionHarness) waitForSessionGone(t *testing.T, deviceSessionID string, timeout time.Duration) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		payload, err := h.redis.Get(context.Background(), "gateway:session:"+deviceSessionID).Bytes()
		if err == redis.Nil {
			return
		}
		if err == nil {
			var record struct {
				Status string `json:"status"`
			}
			if json.Unmarshal(payload, &record) == nil && record.Status != "active" {
				return
			}
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("session %s still active in gateway cache after %s", deviceSessionID, timeout)
}

func (h *lobbyAuthsessionHarness) dialGateway(t *testing.T) *grpc.ClientConn {
	t.Helper()
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, h.gatewayGRPCAddr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(),
	)
	require.NoError(t, err)
	t.Cleanup(func() { require.NoError(t, conn.Close()) })
	return conn
}

// --- shared helpers ---
|
||||
|
||||
func newExecuteCommandRequest(deviceSessionID, requestID, messageType string, payload []byte, clientKey ed25519.PrivateKey) *gatewayv1.ExecuteCommandRequest {
|
||||
payloadHash := contractsgatewayv1.ComputePayloadHash(payload)
|
||||
request := &gatewayv1.ExecuteCommandRequest{
|
||||
ProtocolVersion: contractsgatewayv1.ProtocolVersionV1,
|
||||
DeviceSessionId: deviceSessionID,
|
||||
MessageType: messageType,
|
||||
TimestampMs: time.Now().UnixMilli(),
|
||||
RequestId: requestID,
|
||||
PayloadBytes: payload,
|
||||
PayloadHash: payloadHash,
|
||||
TraceId: "trace-" + requestID,
|
||||
}
|
||||
request.Signature = contractsgatewayv1.SignRequest(clientKey, contractsgatewayv1.RequestSigningFields{
|
||||
ProtocolVersion: request.GetProtocolVersion(),
|
||||
DeviceSessionID: request.GetDeviceSessionId(),
|
||||
MessageType: request.GetMessageType(),
|
||||
TimestampMS: request.GetTimestampMs(),
|
||||
RequestID: request.GetRequestId(),
|
||||
PayloadHash: request.GetPayloadHash(),
|
||||
})
|
||||
return request
|
||||
}
|
||||
|
||||
type httpResponse struct {
|
||||
StatusCode int
|
||||
Body string
|
||||
Header http.Header
|
||||
}
|
||||
|
||||
func postJSON(t *testing.T, url string, body any, header http.Header) httpResponse {
|
||||
t.Helper()
|
||||
var reader io.Reader
|
||||
if body != nil {
|
||||
payload, err := json.Marshal(body)
|
||||
require.NoError(t, err)
|
||||
reader = bytes.NewReader(payload)
|
||||
}
|
||||
req, err := http.NewRequest(http.MethodPost, url, reader)
|
||||
require.NoError(t, err)
|
||||
if body != nil {
|
||||
req.Header.Set("Content-Type", "application/json")
|
||||
}
|
||||
for k, vs := range header {
|
||||
for _, v := range vs {
|
||||
req.Header.Add(k, v)
|
||||
}
|
||||
}
|
||||
return doRequest(t, req)
|
||||
}
|
||||
|
||||
func doRequest(t *testing.T, request *http.Request) httpResponse {
|
||||
t.Helper()
|
||||
client := &http.Client{
|
||||
Timeout: 5 * time.Second,
|
||||
Transport: &http.Transport{DisableKeepAlives: true},
|
||||
}
|
||||
t.Cleanup(client.CloseIdleConnections)
|
||||
|
||||
response, err := client.Do(request)
|
||||
require.NoError(t, err)
|
||||
defer response.Body.Close()
|
||||
|
||||
payload, err := io.ReadAll(response.Body)
|
||||
require.NoError(t, err)
|
||||
return httpResponse{
|
||||
StatusCode: response.StatusCode,
|
||||
Body: string(payload),
|
||||
Header: response.Header.Clone(),
|
||||
}
|
||||
}
|
||||
|
||||
func decodeStrictJSONPayload(payload []byte, target any) error {
|
||||
decoder := json.NewDecoder(bytes.NewReader(payload))
|
||||
decoder.DisallowUnknownFields()
|
||||
if err := decoder.Decode(target); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := decoder.Decode(&struct{}{}); err != io.EOF {
|
||||
if err == nil {
|
||||
return errors.New("unexpected trailing JSON input")
|
||||
}
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
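The strict decoder above rejects both unknown fields and trailing input after the first JSON value. A minimal standalone sketch of the same pattern (the `decodeStrict` name and sample payloads are illustrative, not from the suite):

```go
package main

import (
	"bytes"
	"encoding/json"
	"errors"
	"fmt"
	"io"
)

// decodeStrict mirrors decodeStrictJSONPayload: unknown fields and
// trailing input both fail the decode.
func decodeStrict(payload []byte, target any) error {
	dec := json.NewDecoder(bytes.NewReader(payload))
	dec.DisallowUnknownFields()
	if err := dec.Decode(target); err != nil {
		return err
	}
	// A second Decode must hit io.EOF, otherwise extra input follows.
	if err := dec.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}
	return nil
}

func main() {
	type resp struct {
		UserID string `json:"user_id"`
	}
	var r resp
	fmt.Println(decodeStrict([]byte(`{"user_id":"u1"}`), &r))              // <nil>
	fmt.Println(decodeStrict([]byte(`{"user_id":"u1","x":1}`), &r) != nil) // true: unknown field
	fmt.Println(decodeStrict([]byte(`{"user_id":"u1"} {}`), &r) != nil)    // true: trailing input
}
```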

func waitForUserServiceReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()
	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet, baseURL+"/api/v1/internal/users/user-readiness-probe/exists", nil)
		require.NoError(t, err)
		response, err := client.Do(req)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("wait for userservice readiness: timeout\n%s", process.Logs())
}

func waitForAuthsessionReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()
	// AuthSession's public listener has no /healthz; posting an empty
	// email send-email-code request is the cheapest readiness probe.
	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		body := bytes.NewReader([]byte(`{"email":""}`))
		req, err := http.NewRequest(http.MethodPost, baseURL+"/api/v1/public/auth/send-email-code", body)
		require.NoError(t, err)
		req.Header.Set("Content-Type", "application/json")
		response, err := client.Do(req)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusBadRequest {
				return
			}
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("wait for authsession readiness: timeout\n%s", process.Logs())
}

func newClientPrivateKey(label string) ed25519.PrivateKey {
	seed := sha256.Sum256([]byte("galaxy-integration-lobby-authsession-client-" + label))
	return ed25519.NewKeyFromSeed(seed[:])
}

@@ -1,633 +0,0 @@
// Package lobbynotification_test exercises Lobby's notification-intent
// publication boundary by booting Lobby + the real User Service against a
// Redis container and asserting on the contents of `notification:intents`.
// The Notification Service is intentionally NOT booted: the boundary under
// test is "Lobby produces correct intent envelopes onto the stream",
// independent of how the Notification Service consumes them.
package lobbynotification_test

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"maps"
	"net/http"
	"net/http/httptest"
	"slices"
	"strconv"
	"strings"
	"sync/atomic"
	"testing"
	"time"

	"galaxy/integration/internal/harness"

	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/require"
)

const (
	notificationIntentsStream = "notification:intents"
	userLifecycleStream       = "user:lifecycle_events"
	runtimeJobResultsStream   = "runtime:job_results"
	gmLobbyEventsStream       = "gm:lobby_events"

	intentTypeApplicationSubmitted = "lobby.application.submitted"
	intentTypeMembershipApproved   = "lobby.membership.approved"
	intentTypeMembershipRejected   = "lobby.membership.rejected"
	intentTypeMembershipBlocked    = "lobby.membership.blocked"
	intentTypeInviteCreated        = "lobby.invite.created"
	intentTypeInviteRedeemed       = "lobby.invite.redeemed"
	intentTypeInviteExpired        = "lobby.invite.expired"
	intentTypeRuntimePausedAfter   = "lobby.runtime_paused_after_start"

	expectedProducer = "game_lobby"
)

func TestApplicationFlowPublishesSubmittedApprovedRejected(t *testing.T) {
	h := newLobbyNotificationHarness(t, gmAlwaysOK)

	applicantA := h.ensureUser(t, "applicantA@example.com")
	applicantB := h.ensureUser(t, "applicantB@example.com")

	gameID := h.adminCreatePublicGame(t, "Application Galaxy", time.Now().Add(48*time.Hour).Unix())
	h.openEnrollment(t, gameID)

	appA := h.submitApplication(t, applicantA.UserID, gameID, "PilotAlpha")
	h.adminApproveApplication(t, gameID, appA["application_id"].(string))

	appB := h.submitApplication(t, applicantB.UserID, gameID, "PilotBeta")
	h.adminRejectApplication(t, gameID, appB["application_id"].(string))

	h.requireIntents(t,
		expect(intentTypeApplicationSubmitted, "admin"),
		expect(intentTypeApplicationSubmitted, "admin"),
		expect(intentTypeMembershipApproved, applicantA.UserID),
		expect(intentTypeMembershipRejected, applicantB.UserID),
	)
}

func TestPrivateInviteLifecyclePublishesCreatedRedeemedExpired(t *testing.T) {
	h := newLobbyNotificationHarness(t, gmAlwaysOK)

	owner := h.ensureUser(t, "owner@example.com")
	inviteeA := h.ensureUser(t, "inviteeA@example.com")
	inviteeB := h.ensureUser(t, "inviteeB@example.com")

	gameID := h.userCreatePrivateGame(t, owner.UserID, "Private Invite Galaxy",
		time.Now().Add(48*time.Hour).Unix())
	h.userOpenEnrollment(t, owner.UserID, gameID)

	h.userCreateInvite(t, owner.UserID, gameID, inviteeA.UserID)
	h.userCreateInvite(t, owner.UserID, gameID, inviteeB.UserID)

	// Read invitee A's invite ID by listing their invites.
	inviteAID := h.firstCreatedInviteID(t, inviteeA.UserID, gameID)
	h.userRedeemInvite(t, inviteeA.UserID, gameID, inviteAID, "PilotPrivateA")

	// Close enrollment (min_players=1 satisfied by inviteeA's redeem).
	// Invite B is still in `created` and must transition to `expired`.
	h.userReadyToStart(t, owner.UserID, gameID)

	h.requireIntents(t,
		expect(intentTypeInviteCreated, inviteeA.UserID),
		expect(intentTypeInviteCreated, inviteeB.UserID),
		expect(intentTypeInviteRedeemed, owner.UserID),
		expect(intentTypeInviteExpired, owner.UserID),
	)
}

func TestCascadeMembershipBlockedPublishesIntent(t *testing.T) {
	h := newLobbyNotificationHarness(t, gmAlwaysOK)

	owner := h.ensureUser(t, "cascade-owner@example.com")
	invitee := h.ensureUser(t, "cascade-invitee@example.com")

	gameID := h.userCreatePrivateGame(t, owner.UserID, "Cascade Galaxy",
		time.Now().Add(48*time.Hour).Unix())
	h.userOpenEnrollment(t, owner.UserID, gameID)
	h.userCreateInvite(t, owner.UserID, gameID, invitee.UserID)

	inviteID := h.firstCreatedInviteID(t, invitee.UserID, gameID)
	h.userRedeemInvite(t, invitee.UserID, gameID, inviteID, "PilotCascade")

	h.publishUserLifecycleEvent(t, "user.lifecycle.permanent_blocked", invitee.UserID)

	h.requireIntents(t,
		expect(intentTypeInviteCreated, invitee.UserID),
		expect(intentTypeInviteRedeemed, owner.UserID),
		expect(intentTypeMembershipBlocked, owner.UserID),
	)
}

func TestRuntimePausedAfterStartPublishesAdminIntent(t *testing.T) {
	gmRegisterFails := func(w http.ResponseWriter, r *http.Request) {
		if strings.Contains(r.URL.Path, "/register-runtime") {
			w.WriteHeader(http.StatusInternalServerError)
			_, _ = w.Write([]byte(`{"error":"forced GM unavailability"}`))
			return
		}
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write([]byte(`{}`))
	}

	h := newLobbyNotificationHarness(t, gmRegisterFails)

	applicant := h.ensureUser(t, "starter@example.com")

	gameID := h.adminCreatePublicGame(t, "Runtime Pause Galaxy",
		time.Now().Add(48*time.Hour).Unix())
	h.openEnrollment(t, gameID)

	app := h.submitApplication(t, applicant.UserID, gameID, "PilotPause")
	h.adminApproveApplication(t, gameID, app["application_id"].(string))

	h.adminReadyToStart(t, gameID)
	h.adminStartGame(t, gameID)

	h.publishRuntimeJobSuccess(t, gameID)

	h.requireIntents(t,
		expect(intentTypeApplicationSubmitted, "admin"),
		expect(intentTypeMembershipApproved, applicant.UserID),
		expect(intentTypeRuntimePausedAfter, "admin"),
	)
}

type lobbyNotificationHarness struct {
	redis *redis.Client

	userServiceURL string
	lobbyPublicURL string
	lobbyAdminURL  string

	intentsStream    string
	lifecycleStream  string
	jobResultsStream string
	gmEventsStream   string

	gmStub *httptest.Server

	userServiceProcess *harness.Process
	lobbyProcess       *harness.Process
}

type ensureByEmailResponse struct {
	Outcome string `json:"outcome"`
	UserID  string `json:"user_id"`
}

type expectedIntent struct {
	NotificationType string
	Recipient        string // user_id, or "admin" for admin_email audience
}

func expect(notificationType, recipient string) expectedIntent {
	return expectedIntent{NotificationType: notificationType, Recipient: recipient}
}

func gmAlwaysOK(w http.ResponseWriter, _ *http.Request) {
	w.WriteHeader(http.StatusOK)
	_, _ = w.Write([]byte(`{}`))
}

var harnessSeq atomic.Int64

func newLobbyNotificationHarness(t *testing.T, gmHandler http.HandlerFunc) *lobbyNotificationHarness {
	t.Helper()

	redisRuntime := harness.StartRedisContainer(t)
	redisClient := redis.NewClient(&redis.Options{
		Addr:            redisRuntime.Addr,
		Protocol:        2,
		DisableIdentity: true,
	})
	t.Cleanup(func() {
		require.NoError(t, redisClient.Close())
	})

	gmStub := httptest.NewServer(http.HandlerFunc(gmHandler))
	t.Cleanup(gmStub.Close)

	userServiceAddr := harness.FreeTCPAddress(t)
	lobbyPublicAddr := harness.FreeTCPAddress(t)
	lobbyInternalAddr := harness.FreeTCPAddress(t)

	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
	lobbyBinary := harness.BuildBinary(t, "lobby", "./lobby/cmd/lobby")

	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

	// Use unique stream prefixes per test so concurrent runs do not bleed.
	suffix := strconv.FormatInt(harnessSeq.Add(1), 10)
	intentsStream := notificationIntentsStream + ":" + suffix
	lifecycleStream := userLifecycleStream + ":" + suffix
	jobResultsStream := runtimeJobResultsStream + ":" + suffix
	gmEventsStream := gmLobbyEventsStream + ":" + suffix

	lobbyEnv := harness.StartLobbyServicePersistence(t, redisRuntime.Addr).Env
	lobbyEnv["LOBBY_LOG_LEVEL"] = "info"
	lobbyEnv["LOBBY_PUBLIC_HTTP_ADDR"] = lobbyPublicAddr
	lobbyEnv["LOBBY_INTERNAL_HTTP_ADDR"] = lobbyInternalAddr
	lobbyEnv["LOBBY_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
	lobbyEnv["LOBBY_GM_BASE_URL"] = gmStub.URL
	lobbyEnv["LOBBY_NOTIFICATION_INTENTS_STREAM"] = intentsStream
	lobbyEnv["LOBBY_USER_LIFECYCLE_STREAM"] = lifecycleStream
	lobbyEnv["LOBBY_RUNTIME_JOB_RESULTS_STREAM"] = jobResultsStream
	lobbyEnv["LOBBY_GM_EVENTS_STREAM"] = gmEventsStream
	lobbyEnv["LOBBY_RUNTIME_JOB_RESULTS_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["LOBBY_USER_LIFECYCLE_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["LOBBY_GM_EVENTS_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["OTEL_TRACES_EXPORTER"] = "none"
	lobbyEnv["OTEL_METRICS_EXPORTER"] = "none"
	lobbyProcess := harness.StartProcess(t, "lobby", lobbyBinary, lobbyEnv)
	harness.WaitForHTTPStatus(t, lobbyProcess, "http://"+lobbyInternalAddr+"/readyz", http.StatusOK)

	return &lobbyNotificationHarness{
		redis:              redisClient,
		userServiceURL:     "http://" + userServiceAddr,
		lobbyPublicURL:     "http://" + lobbyPublicAddr,
		lobbyAdminURL:      "http://" + lobbyInternalAddr,
		intentsStream:      intentsStream,
		lifecycleStream:    lifecycleStream,
		jobResultsStream:   jobResultsStream,
		gmEventsStream:     gmEventsStream,
		gmStub:             gmStub,
		userServiceProcess: userServiceProcess,
		lobbyProcess:       lobbyProcess,
	}
}
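The per-test stream isolation rests on a process-wide atomic counter: each harness appends a fresh suffix to the shared stream names, so parallel tests against the same Redis never read each other's entries. Reduced to its core (names here are illustrative):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// A process-wide atomic counter yields a unique suffix per harness,
// the same trick newLobbyNotificationHarness uses for stream names.
var seq atomic.Int64

func uniqueStream(base string) string {
	return fmt.Sprintf("%s:%d", base, seq.Add(1))
}

func main() {
	fmt.Println(uniqueStream("notification:intents")) // notification:intents:1
	fmt.Println(uniqueStream("notification:intents")) // notification:intents:2
}
```

`atomic.Int64.Add` is safe from concurrently running tests, unlike a plain `int` guarded by nothing.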

func (h *lobbyNotificationHarness) ensureUser(t *testing.T, email string) ensureByEmailResponse {
	t.Helper()

	resp := postJSON(t, h.userServiceURL+"/api/v1/internal/users/ensure-by-email", map[string]any{
		"email": email,
		"registration_context": map[string]string{
			"preferred_language": "en",
			"time_zone":          "Europe/Kaliningrad",
		},
	}, nil)
	var out ensureByEmailResponse
	requireJSONStatus(t, resp, http.StatusOK, &out)
	require.Equal(t, "created", out.Outcome)
	require.NotEmpty(t, out.UserID)
	return out
}

func (h *lobbyNotificationHarness) adminCreatePublicGame(t *testing.T, name string, enrollmentEndsAt int64) string {
	t.Helper()
	return h.createGame(t, h.lobbyAdminURL+"/api/v1/lobby/games", "public", name, enrollmentEndsAt, nil)
}

func (h *lobbyNotificationHarness) userCreatePrivateGame(t *testing.T, ownerUserID, name string, enrollmentEndsAt int64) string {
	t.Helper()
	return h.createGame(t, h.lobbyPublicURL+"/api/v1/lobby/games", "private", name, enrollmentEndsAt,
		http.Header{"X-User-Id": []string{ownerUserID}})
}

func (h *lobbyNotificationHarness) createGame(t *testing.T, url, gameType, name string, enrollmentEndsAt int64, header http.Header) string {
	t.Helper()

	resp := postJSON(t, url, map[string]any{
		"game_name":             name,
		"game_type":             gameType,
		"min_players":           1,
		"max_players":           4,
		"start_gap_hours":       6,
		"start_gap_players":     1,
		"enrollment_ends_at":    enrollmentEndsAt,
		"turn_schedule":         "0 18 * * *",
		"target_engine_version": "1.0.0",
	}, header)
	require.Equalf(t, http.StatusCreated, resp.StatusCode, "create %s game: %s", gameType, resp.Body)

	var record map[string]any
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &record))
	gameID, ok := record["game_id"].(string)
	require.Truef(t, ok, "game_id missing: %s", resp.Body)
	return gameID
}

func (h *lobbyNotificationHarness) openEnrollment(t *testing.T, gameID string) {
	t.Helper()
	resp := postJSON(t, h.lobbyAdminURL+"/api/v1/lobby/games/"+gameID+"/open-enrollment", nil, nil)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "admin open enrollment: %s", resp.Body)
}

func (h *lobbyNotificationHarness) userOpenEnrollment(t *testing.T, ownerUserID, gameID string) {
	t.Helper()
	resp := postJSON(t, h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/open-enrollment", nil,
		http.Header{"X-User-Id": []string{ownerUserID}})
	require.Equalf(t, http.StatusOK, resp.StatusCode, "user open enrollment: %s", resp.Body)
}

func (h *lobbyNotificationHarness) submitApplication(t *testing.T, userID, gameID, raceName string) map[string]any {
	t.Helper()
	resp := postJSON(t, h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/applications",
		map[string]any{"race_name": raceName},
		http.Header{"X-User-Id": []string{userID}})
	require.Equalf(t, http.StatusCreated, resp.StatusCode, "submit application: %s", resp.Body)
	var body map[string]any
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &body))
	return body
}

func (h *lobbyNotificationHarness) adminApproveApplication(t *testing.T, gameID, applicationID string) {
	t.Helper()
	resp := postJSON(t,
		h.lobbyAdminURL+"/api/v1/lobby/games/"+gameID+"/applications/"+applicationID+"/approve",
		nil, nil)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "admin approve: %s", resp.Body)
}

func (h *lobbyNotificationHarness) adminRejectApplication(t *testing.T, gameID, applicationID string) {
	t.Helper()
	resp := postJSON(t,
		h.lobbyAdminURL+"/api/v1/lobby/games/"+gameID+"/applications/"+applicationID+"/reject",
		nil, nil)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "admin reject: %s", resp.Body)
}

func (h *lobbyNotificationHarness) userCreateInvite(t *testing.T, ownerUserID, gameID, inviteeUserID string) map[string]any {
	t.Helper()
	resp := postJSON(t, h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/invites",
		map[string]any{"invitee_user_id": inviteeUserID},
		http.Header{"X-User-Id": []string{ownerUserID}})
	require.Equalf(t, http.StatusCreated, resp.StatusCode, "create invite: %s", resp.Body)
	var body map[string]any
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &body))
	return body
}

func (h *lobbyNotificationHarness) firstCreatedInviteID(t *testing.T, inviteeUserID, gameID string) string {
	t.Helper()
	req, err := http.NewRequest(http.MethodGet, h.lobbyPublicURL+"/api/v1/lobby/my/invites?status=created", nil)
	require.NoError(t, err)
	req.Header.Set("X-User-Id", inviteeUserID)
	resp := doRequest(t, req)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "list my invites: %s", resp.Body)

	var body struct {
		Items []struct {
			InviteID string `json:"invite_id"`
			GameID   string `json:"game_id"`
		} `json:"items"`
	}
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &body))
	for _, item := range body.Items {
		if item.GameID == gameID {
			return item.InviteID
		}
	}
	t.Fatalf("no invite found for invitee %s on game %s; body=%s", inviteeUserID, gameID, resp.Body)
	return ""
}

func (h *lobbyNotificationHarness) userRedeemInvite(t *testing.T, inviteeUserID, gameID, inviteID, raceName string) {
	t.Helper()
	resp := postJSON(t,
		h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/invites/"+inviteID+"/redeem",
		map[string]any{"race_name": raceName},
		http.Header{"X-User-Id": []string{inviteeUserID}})
	require.Equalf(t, http.StatusOK, resp.StatusCode, "redeem invite: %s", resp.Body)
}

func (h *lobbyNotificationHarness) userReadyToStart(t *testing.T, ownerUserID, gameID string) {
	t.Helper()
	resp := postJSON(t,
		h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/ready-to-start",
		nil,
		http.Header{"X-User-Id": []string{ownerUserID}})
	require.Equalf(t, http.StatusOK, resp.StatusCode, "user ready-to-start: %s", resp.Body)
}

func (h *lobbyNotificationHarness) adminReadyToStart(t *testing.T, gameID string) {
	t.Helper()
	resp := postJSON(t, h.lobbyAdminURL+"/api/v1/lobby/games/"+gameID+"/ready-to-start", nil, nil)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "admin ready-to-start: %s", resp.Body)
}

func (h *lobbyNotificationHarness) adminStartGame(t *testing.T, gameID string) {
	t.Helper()
	resp := postJSON(t, h.lobbyAdminURL+"/api/v1/lobby/games/"+gameID+"/start", nil, nil)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "admin start game: %s", resp.Body)
}

func (h *lobbyNotificationHarness) publishUserLifecycleEvent(t *testing.T, eventType, userID string) {
	t.Helper()
	_, err := h.redis.XAdd(context.Background(), &redis.XAddArgs{
		Stream: h.lifecycleStream,
		Values: map[string]any{
			"event_type":     eventType,
			"user_id":        userID,
			"occurred_at_ms": strconv.FormatInt(time.Now().UnixMilli(), 10),
			"source":         "user_admin",
			"actor_type":     "admin",
			"actor_id":       "admin-1",
			"reason_code":    "terminal_policy_violation",
		},
	}).Result()
	require.NoError(t, err)
}

func (h *lobbyNotificationHarness) publishRuntimeJobSuccess(t *testing.T, gameID string) {
	t.Helper()
	_, err := h.redis.XAdd(context.Background(), &redis.XAddArgs{
		Stream: h.jobResultsStream,
		Values: map[string]any{
			"game_id":         gameID,
			"outcome":         "success",
			"container_id":    "container-" + gameID,
			"engine_endpoint": "127.0.0.1:0",
		},
	}).Result()
	require.NoError(t, err)
}

func (h *lobbyNotificationHarness) requireIntents(t *testing.T, want ...expectedIntent) {
	t.Helper()

	want = append([]expectedIntent(nil), want...)

	require.Eventuallyf(t, func() bool {
		entries, err := h.redis.XRange(context.Background(), h.intentsStream, "-", "+").Result()
		if err != nil {
			return false
		}
		published := decodePublishedIntents(t, entries)
		return matchesAll(published, want)
	}, 15*time.Second, 100*time.Millisecond,
		"expected intents %+v not all observed on stream %s", want, h.intentsStream)

	entries, err := h.redis.XRange(context.Background(), h.intentsStream, "-", "+").Result()
	require.NoError(t, err)
	published := decodePublishedIntents(t, entries)
	for _, p := range published {
		require.Equal(t, expectedProducer, p.Producer,
			"every published intent must declare producer=%q", expectedProducer)
	}
}

type publishedIntent struct {
	NotificationType string
	Producer         string
	AudienceKind     string
	RecipientUserIDs []string
}

func decodePublishedIntents(t *testing.T, entries []redis.XMessage) []publishedIntent {
	t.Helper()

	out := make([]publishedIntent, 0, len(entries))
	for _, entry := range entries {
		notificationType, _ := entry.Values["notification_type"].(string)
		producer, _ := entry.Values["producer"].(string)
		audienceKind, _ := entry.Values["audience_kind"].(string)
		recipientsJSON, _ := entry.Values["recipient_user_ids_json"].(string)

		var recipients []string
		if recipientsJSON != "" {
			require.NoError(t, json.Unmarshal([]byte(recipientsJSON), &recipients))
		}

		out = append(out, publishedIntent{
			NotificationType: notificationType,
			Producer:         producer,
			AudienceKind:     audienceKind,
			RecipientUserIDs: recipients,
		})
	}
	return out
}

func matchesAll(published []publishedIntent, want []expectedIntent) bool {
	used := make([]bool, len(published))
	for _, w := range want {
		matched := -1
		for i, p := range published {
			if used[i] {
				continue
			}
			if p.NotificationType != w.NotificationType {
				continue
			}
			if w.Recipient == "admin" {
				if p.AudienceKind == "admin_email" {
					matched = i
					break
				}
				continue
			}
			if slices.Contains(p.RecipientUserIDs, w.Recipient) {
				matched = i
				break
			}
		}
		if matched < 0 {
			return false
		}
		used[matched] = true
	}
	return true
}
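matchesAll performs a greedy one-to-one matching: each expectation consumes a distinct published entry, so two identical expectations (as in the double `lobby.application.submitted` assertion) require two published intents. A reduced standalone sketch, minus the admin-audience special case (type and field names here are illustrative):

```go
package main

import (
	"fmt"
	"slices"
)

type intent struct {
	Type       string
	Recipients []string
}

type want struct{ Type, Recipient string }

// matchAll greedily pairs each expectation with the first unused
// published entry of the same type that names the recipient.
func matchAll(published []intent, wants []want) bool {
	used := make([]bool, len(published))
	for _, w := range wants {
		found := false
		for i, p := range published {
			if used[i] || p.Type != w.Type || !slices.Contains(p.Recipients, w.Recipient) {
				continue
			}
			used[i], found = true, true
			break
		}
		if !found {
			return false
		}
	}
	return true
}

func main() {
	pub := []intent{
		{Type: "approved", Recipients: []string{"u1"}},
		{Type: "approved", Recipients: []string{"u2"}},
	}
	fmt.Println(matchAll(pub, []want{{"approved", "u2"}, {"approved", "u1"}})) // true
	// Duplicate expectations cannot reuse the single u1 entry.
	fmt.Println(matchAll(pub, []want{{"approved", "u1"}, {"approved", "u1"}})) // false
}
```

First-fit matching is an approximation of full bipartite matching, but expectations sharing a type/recipient pair are interchangeable here, so it suffices for these assertions.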

func waitForUserServiceReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()
	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet, baseURL+"/api/v1/internal/users/user-readiness-probe/exists", nil)
		require.NoError(t, err)
		response, err := client.Do(req)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("wait for userservice readiness: timeout\n%s", process.Logs())
}

type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

func postJSON(t *testing.T, url string, body any, header http.Header) httpResponse {
	t.Helper()
	var reader io.Reader
	if body != nil {
		payload, err := json.Marshal(body)
		require.NoError(t, err)
		reader = bytes.NewReader(payload)
	}
	req, err := http.NewRequest(http.MethodPost, url, reader)
	require.NoError(t, err)
	if body != nil {
		req.Header.Set("Content-Type", "application/json")
	}
	maps.Copy(req.Header, header)
	return doRequest(t, req)
}

func doRequest(t *testing.T, request *http.Request) httpResponse {
	t.Helper()
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true},
	}
	t.Cleanup(client.CloseIdleConnections)

	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	payload, err := io.ReadAll(response.Body)
	require.NoError(t, err)
	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(payload),
		Header:     response.Header.Clone(),
	}
}

func requireJSONStatus(t *testing.T, response httpResponse, wantStatus int, target any) {
	t.Helper()
	require.Equalf(t, wantStatus, response.StatusCode, "unexpected status, body=%s", response.Body)
	if target != nil {
		require.NoError(t, decodeStrictJSON([]byte(response.Body), target))
	}
}

func decodeStrictJSON(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()
	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}
	return nil
}

// silenceUnused keeps fmt referenced for future debug formatting needs.
var _ = fmt.Sprintf
|
||||
@@ -1,198 +0,0 @@
|
||||
// Race-name intent tests cover the three notification types Lobby emits
|
||||
// across the capability-evaluation and self-service registration boundary:
|
||||
//
|
||||
// - lobby.race_name.registration_eligible — produced when a member's
|
||||
// stats satisfy the capability rule at game finish;
|
||||
// - lobby.race_name.registration_denied — produced when they do not;
|
||||
// - lobby.race_name.registered — produced when the user converts the
|
||||
// pending registration into a permanent registered name.
|
||||
//
|
||||
// The single test below drives a public game through start, publishes the
|
||||
// `gm:lobby_events` snapshot and `game_finished` events directly to Redis,
|
||||
// then performs the user-side registration call. Notification Service is
|
||||
// not booted: the assertion target is the contents of `notification:intents`.
|
||||
package lobbynotification_test
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"net/http"
|
||||
"slices"
|
||||
"strconv"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/redis/go-redis/v9"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
const (
|
||||
intentTypeRaceNameEligible = "lobby.race_name.registration_eligible"
|
||||
intentTypeRaceNameDenied = "lobby.race_name.registration_denied"
|
||||
intentTypeRaceNameRegistered = "lobby.race_name.registered"
|
||||
)
|
||||
|
||||
func TestRaceNameIntentsAcrossCapabilityAndRegistration(t *testing.T) {
|
||||
h := newLobbyNotificationHarness(t, gmAlwaysOK)
|
||||
|
||||
capableUser := h.ensureUser(t, "race-capable@example.com")
|
||||
	incapableUser := h.ensureUser(t, "race-incapable@example.com")

	gameID := h.adminCreatePublicGame(t, "Race Name Galaxy",
		time.Now().Add(48*time.Hour).Unix())
	h.openEnrollment(t, gameID)

	capableApp := h.submitApplication(t, capableUser.UserID, gameID, "Capable")
	h.adminApproveApplication(t, gameID, capableApp["application_id"].(string))
	incapableApp := h.submitApplication(t, incapableUser.UserID, gameID, "Incapable")
	h.adminApproveApplication(t, gameID, incapableApp["application_id"].(string))

	h.adminReadyToStart(t, gameID)
	h.adminStartGame(t, gameID)
	h.publishRuntimeJobSuccess(t, gameID)

	// Wait for runtime job result + GM register-runtime to flip the game
	// to `running` before publishing GM stream events. Otherwise the
	// `game_finished` transition guard in the gmevents consumer rejects
	// the event for an unexpected status.
	h.requireGameStatus(t, gameID, "running")

	// First snapshot freezes initial stats for both members.
	h.publishGMSnapshotUpdate(t, gameID, []playerTurnStat{
		{UserID: capableUser.UserID, Planets: 1, Population: 100},
		{UserID: incapableUser.UserID, Planets: 1, Population: 100},
	})

	// game_finished bumps capable user's stats above the initial values
	// and leaves the incapable user unchanged. Capability rule is
	// `max_planets > initial_planets AND max_population > initial_population`.
	h.publishGMGameFinished(t, gameID, []playerTurnStat{
		{UserID: capableUser.UserID, Planets: 10, Population: 1000},
		{UserID: incapableUser.UserID, Planets: 1, Population: 100},
	})

	// Capability evaluation runs asynchronously after the game_finished
	// event is consumed. Wait for the registration_eligible intent to
	// appear before attempting the user-side register call: the call only
	// succeeds once the pending registration is recorded.
	h.requireGameStatus(t, gameID, "finished")
	h.waitForIntent(t, intentTypeRaceNameEligible, capableUser.UserID)

	h.userRegisterRaceName(t, capableUser.UserID, gameID, "Capable")

	h.requireIntents(t,
		expect(intentTypeApplicationSubmitted, "admin"),
		expect(intentTypeApplicationSubmitted, "admin"),
		expect(intentTypeMembershipApproved, capableUser.UserID),
		expect(intentTypeMembershipApproved, incapableUser.UserID),
		expect(intentTypeRaceNameEligible, capableUser.UserID),
		expect(intentTypeRaceNameDenied, incapableUser.UserID),
		expect(intentTypeRaceNameRegistered, capableUser.UserID),
	)
}
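The capability rule referenced in the test comments above can be sketched as a standalone predicate. This is an illustration only: the `isCapable` helper and the `turnStat` pair below are hypothetical names, not part of the suite, which evaluates the rule inside the service under test.

```go
package main

import "fmt"

// turnStat holds the two counters the documented rule compares.
type turnStat struct {
	Planets    int64
	Population int64
}

// isCapable mirrors the documented rule: a member qualifies only when
// both planets and population strictly exceed the initial snapshot.
func isCapable(initial, max turnStat) bool {
	return max.Planets > initial.Planets && max.Population > initial.Population
}

func main() {
	initial := turnStat{Planets: 1, Population: 100}
	fmt.Println(isCapable(initial, turnStat{Planets: 10, Population: 1000})) // true  (capable user)
	fmt.Println(isCapable(initial, turnStat{Planets: 1, Population: 100}))   // false (incapable user)
}
```

Note the strict `AND`: growth on only one axis (say, population but not planets) still yields `false`, matching the `race_name_denied` intent asserted for the incapable user.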

type playerTurnStat struct {
	UserID     string `json:"user_id"`
	Planets    int64  `json:"planets"`
	Population int64  `json:"population"`
	ShipsBuilt int64  `json:"ships_built"`
}

func (h *lobbyNotificationHarness) publishGMSnapshotUpdate(t *testing.T, gameID string, stats []playerTurnStat) {
	t.Helper()
	payload, err := json.Marshal(stats)
	require.NoError(t, err)
	_, err = h.redis.XAdd(context.Background(), &redis.XAddArgs{
		Stream: h.gmEventsStream,
		Values: map[string]any{
			"kind":                  "runtime_snapshot_update",
			"game_id":               gameID,
			"current_turn":          "1",
			"runtime_status":        "healthy",
			"engine_health_summary": "ok",
			"player_turn_stats":     string(payload),
		},
	}).Result()
	require.NoError(t, err)
}

func (h *lobbyNotificationHarness) publishGMGameFinished(t *testing.T, gameID string, stats []playerTurnStat) {
	t.Helper()
	payload, err := json.Marshal(stats)
	require.NoError(t, err)
	_, err = h.redis.XAdd(context.Background(), &redis.XAddArgs{
		Stream: h.gmEventsStream,
		Values: map[string]any{
			"kind":                  "game_finished",
			"game_id":               gameID,
			"finished_at_ms":        strconv.FormatInt(time.Now().UnixMilli(), 10),
			"current_turn":          "10",
			"runtime_status":        "finished",
			"engine_health_summary": "ok",
			"player_turn_stats":     string(payload),
		},
	}).Result()
	require.NoError(t, err)
}

func (h *lobbyNotificationHarness) requireGameStatus(t *testing.T, gameID, want string) {
	t.Helper()
	require.Eventuallyf(t, func() bool {
		req, err := http.NewRequest(http.MethodGet,
			h.lobbyAdminURL+"/api/v1/internal/games/"+gameID, nil)
		if err != nil {
			return false
		}
		resp := doRequest(t, req)
		if resp.StatusCode != http.StatusOK {
			return false
		}
		var record map[string]any
		if err := json.Unmarshal([]byte(resp.Body), &record); err != nil {
			return false
		}
		status, _ := record["status"].(string)
		return status == want
	}, 15*time.Second, 100*time.Millisecond,
		"game %s did not reach status %s", gameID, want)
}

func (h *lobbyNotificationHarness) waitForIntent(t *testing.T, notificationType, recipient string) {
	t.Helper()
	require.Eventuallyf(t, func() bool {
		entries, err := h.redis.XRange(context.Background(), h.intentsStream, "-", "+").Result()
		if err != nil {
			return false
		}
		published := decodePublishedIntents(t, entries)
		for _, p := range published {
			if p.NotificationType != notificationType {
				continue
			}
			if recipient == "admin" {
				if p.AudienceKind == "admin_email" {
					return true
				}
				continue
			}
			if slices.Contains(p.RecipientUserIDs, recipient) {
				return true
			}
		}
		return false
	}, 15*time.Second, 100*time.Millisecond,
		"intent %s for %s not observed on stream %s",
		notificationType, recipient, h.intentsStream)
}

func (h *lobbyNotificationHarness) userRegisterRaceName(t *testing.T, userID, sourceGameID, raceName string) {
	t.Helper()
	resp := postJSON(t,
		h.lobbyPublicURL+"/api/v1/lobby/race-names/register",
		map[string]any{
			"race_name":      raceName,
			"source_game_id": sourceGameID,
		},
		http.Header{"X-User-Id": []string{userID}})
	require.Equalf(t, http.StatusOK, resp.StatusCode, "register race name: %s", resp.Body)
}

// Package lobbyrtm_test exercises the Lobby ↔ Runtime Manager
// boundary against real Lobby + real Runtime Manager + real
// PostgreSQL + real Redis + real Docker daemon running the
// galaxy/game test engine container. It satisfies the inter-service
// requirement spelled out in `TESTING.md §7` and PLAN.md Stage 20.
//
// The boundary contract is: Lobby publishes `runtime:start_jobs` and
// `runtime:stop_jobs` envelopes, RTM consumes them and runs/stops
// engine containers, RTM publishes `runtime:job_results`, Lobby
// transitions the game accordingly. The suite asserts only on those
// public surfaces (Lobby/RTM REST, Redis Streams, Docker container
// state); it never imports `*/internal/...` packages of either
// service.
package lobbyrtm_test

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"maps"
	"net/http"
	"net/http/httptest"
	"os"
	"strconv"
	"strings"
	"sync/atomic"
	"testing"
	"time"

	"galaxy/integration/internal/harness"

	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/require"
)

const (
	defaultEngineVersion = "1.0.0"
	missingEngineVersion = "0.0.0-missing"

	startJobsStream         = "runtime:start_jobs"
	stopJobsStream          = "runtime:stop_jobs"
	jobResultsStream        = "runtime:job_results"
	healthEventsStream      = "runtime:health_events"
	notificationIntentsKey  = "notification:intents"
	userLifecycleStream     = "user:lifecycle_events"
	gmEventsStream          = "gm:lobby_events"
	expectedLobbyProducer   = "game_lobby"
	notificationImagePulled = "runtime.image_pull_failed"
)

// suiteSeq scopes per-test stream prefixes so concurrent test
// invocations cannot bleed events into each other.
var suiteSeq atomic.Int64

// lobbyRTMHarness owns the per-test infrastructure: containers,
// processes, stream keys, and helper clients. One harness per test
// keeps each scenario fully isolated.
type lobbyRTMHarness struct {
	redis *redis.Client

	userServiceURL string
	lobbyPublicURL string
	lobbyAdminURL  string
	rtmInternalURL string

	intentsStream    string
	lifecycleStream  string
	jobResultsStream string
	startJobsStream  string
	stopJobsStream   string
	healthEvents     string

	gmStub *httptest.Server

	dockerNetwork string
	engineImage   string

	userServiceProcess *harness.Process
	lobbyProcess       *harness.Process
	rtmProcess         *harness.Process
}

type ensureUserResponse struct {
	Outcome string `json:"outcome"`
	UserID  string `json:"user_id"`
}

type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

// newLobbyRTMHarness brings up one independent test environment:
// Postgres containers per service (mirrors `lobbynotification`), one
// Redis container, real binaries for User Service / Lobby / RTM, a
// GM stub that returns 200, a per-test Docker bridge network, and
// the freshly-built `galaxy/game` test image.
func newLobbyRTMHarness(t *testing.T) *lobbyRTMHarness {
	t.Helper()

	// Skip the whole suite when Docker is unreachable. The ensure-only
	// check runs before any testcontainer is started so the skip path
	// kicks in before testcontainers-go tries (and fails) to probe the
	// daemon.
	harness.RequireDockerDaemon(t)

	redisRuntime := harness.StartRedisContainer(t)
	redisClient := redis.NewClient(&redis.Options{
		Addr:            redisRuntime.Addr,
		Protocol:        2,
		DisableIdentity: true,
	})
	t.Cleanup(func() {
		require.NoError(t, redisClient.Close())
	})

	gmStub := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write([]byte(`{}`))
	}))
	t.Cleanup(gmStub.Close)

	engineImage := harness.EnsureGalaxyGameImage(t)
	dockerNetwork := harness.EnsureDockerNetwork(t)

	userServiceAddr := harness.FreeTCPAddress(t)
	lobbyPublicAddr := harness.FreeTCPAddress(t)
	lobbyInternalAddr := harness.FreeTCPAddress(t)
	rtmInternalAddr := harness.FreeTCPAddress(t)

	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
	lobbyBinary := harness.BuildBinary(t, "lobby", "./lobby/cmd/lobby")
	rtmBinary := harness.BuildBinary(t, "rtmanager", "./rtmanager/cmd/rtmanager")

	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

	suffix := strconv.FormatInt(suiteSeq.Add(1), 10)
	intentsStream := notificationIntentsKey + ":" + suffix
	lifecycleStream := userLifecycleStream + ":" + suffix
	jobResultsStreamKey := jobResultsStream + ":" + suffix
	startJobsStreamKey := startJobsStream + ":" + suffix
	stopJobsStreamKey := stopJobsStream + ":" + suffix
	healthEventsStreamKey := healthEventsStream + ":" + suffix
	gmEventsStreamKey := gmEventsStream + ":" + suffix

	lobbyEnv := harness.StartLobbyServicePersistence(t, redisRuntime.Addr).Env
	lobbyEnv["LOBBY_LOG_LEVEL"] = "info"
	lobbyEnv["LOBBY_PUBLIC_HTTP_ADDR"] = lobbyPublicAddr
	lobbyEnv["LOBBY_INTERNAL_HTTP_ADDR"] = lobbyInternalAddr
	lobbyEnv["LOBBY_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
	lobbyEnv["LOBBY_GM_BASE_URL"] = gmStub.URL
	lobbyEnv["LOBBY_NOTIFICATION_INTENTS_STREAM"] = intentsStream
	lobbyEnv["LOBBY_USER_LIFECYCLE_STREAM"] = lifecycleStream
	lobbyEnv["LOBBY_RUNTIME_JOB_RESULTS_STREAM"] = jobResultsStreamKey
	lobbyEnv["LOBBY_RUNTIME_START_JOBS_STREAM"] = startJobsStreamKey
	lobbyEnv["LOBBY_RUNTIME_STOP_JOBS_STREAM"] = stopJobsStreamKey
	lobbyEnv["LOBBY_GM_EVENTS_STREAM"] = gmEventsStreamKey
	lobbyEnv["LOBBY_RUNTIME_JOB_RESULTS_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["LOBBY_USER_LIFECYCLE_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["LOBBY_GM_EVENTS_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["LOBBY_ENGINE_IMAGE_TEMPLATE"] = "galaxy/game:{engine_version}-lobbyrtm-it"
	lobbyEnv["OTEL_TRACES_EXPORTER"] = "none"
	lobbyEnv["OTEL_METRICS_EXPORTER"] = "none"
	lobbyProcess := harness.StartProcess(t, "lobby", lobbyBinary, lobbyEnv)
	harness.WaitForHTTPStatus(t, lobbyProcess, "http://"+lobbyInternalAddr+"/readyz", http.StatusOK)

	rtmEnv := harness.StartRTManagerServicePersistence(t, redisRuntime.Addr).Env
	rtmEnv["RTMANAGER_LOG_LEVEL"] = "info"
	rtmEnv["RTMANAGER_INTERNAL_HTTP_ADDR"] = rtmInternalAddr
	rtmEnv["RTMANAGER_LOBBY_INTERNAL_BASE_URL"] = "http://" + lobbyInternalAddr
	rtmEnv["RTMANAGER_DOCKER_HOST"] = resolveDockerHost()
	rtmEnv["RTMANAGER_DOCKER_NETWORK"] = dockerNetwork
	// On dev machines and in sandboxes the rtmanager process cannot
	// chown the per-game state dir to root (uid 0). Pin the owner to
	// the current process uid/gid so `chown` is a no-op.
	rtmEnv["RTMANAGER_GAME_STATE_OWNER_UID"] = strconv.Itoa(os.Getuid())
	rtmEnv["RTMANAGER_GAME_STATE_OWNER_GID"] = strconv.Itoa(os.Getgid())
	rtmEnv["RTMANAGER_GAME_STATE_ROOT"] = t.TempDir()
	rtmEnv["RTMANAGER_REDIS_START_JOBS_STREAM"] = startJobsStreamKey
	rtmEnv["RTMANAGER_REDIS_STOP_JOBS_STREAM"] = stopJobsStreamKey
	rtmEnv["RTMANAGER_REDIS_JOB_RESULTS_STREAM"] = jobResultsStreamKey
	rtmEnv["RTMANAGER_REDIS_HEALTH_EVENTS_STREAM"] = healthEventsStreamKey
	rtmEnv["RTMANAGER_NOTIFICATION_INTENTS_STREAM"] = intentsStream
	rtmEnv["RTMANAGER_STREAM_BLOCK_TIMEOUT"] = "200ms"
	rtmEnv["RTMANAGER_RECONCILE_INTERVAL"] = "1s"
	rtmEnv["RTMANAGER_CLEANUP_INTERVAL"] = "1s"
	rtmEnv["RTMANAGER_INSPECT_INTERVAL"] = "1s"
	rtmEnv["RTMANAGER_PROBE_INTERVAL"] = "1s"
	rtmEnv["RTMANAGER_PROBE_TIMEOUT"] = "1s"
	rtmEnv["RTMANAGER_PROBE_FAILURES_THRESHOLD"] = "3"
	rtmEnv["RTMANAGER_GAME_LEASE_TTL_SECONDS"] = "10"
	rtmEnv["OTEL_TRACES_EXPORTER"] = "none"
	rtmEnv["OTEL_METRICS_EXPORTER"] = "none"
	rtmProcess := harness.StartProcess(t, "rtmanager", rtmBinary, rtmEnv)
	harness.WaitForHTTPStatus(t, rtmProcess, "http://"+rtmInternalAddr+"/readyz", http.StatusOK)

	return &lobbyRTMHarness{
		redis:              redisClient,
		userServiceURL:     "http://" + userServiceAddr,
		lobbyPublicURL:     "http://" + lobbyPublicAddr,
		lobbyAdminURL:      "http://" + lobbyInternalAddr,
		rtmInternalURL:     "http://" + rtmInternalAddr,
		intentsStream:      intentsStream,
		lifecycleStream:    lifecycleStream,
		jobResultsStream:   jobResultsStreamKey,
		startJobsStream:    startJobsStreamKey,
		stopJobsStream:     stopJobsStreamKey,
		healthEvents:       healthEventsStreamKey,
		gmStub:             gmStub,
		dockerNetwork:      dockerNetwork,
		engineImage:        engineImage,
		userServiceProcess: userServiceProcess,
		lobbyProcess:       lobbyProcess,
		rtmProcess:         rtmProcess,
	}
}

// ensureUser provisions a fresh User Service account by email and
// returns the assigned user_id. The email pattern includes the test
// name to avoid collisions across concurrent tests sharing the
// container.
func (h *lobbyRTMHarness) ensureUser(t *testing.T, email string) ensureUserResponse {
	t.Helper()
	resp := postJSON(t, h.userServiceURL+"/api/v1/internal/users/ensure-by-email", map[string]any{
		"email": email,
		"registration_context": map[string]string{
			"preferred_language": "en",
			"time_zone":          "Europe/Kaliningrad",
		},
	}, nil)
	var out ensureUserResponse
	requireJSONStatus(t, resp, http.StatusOK, &out)
	require.Equal(t, "created", out.Outcome)
	require.NotEmpty(t, out.UserID)
	return out
}

// userCreatePrivateGame creates a private game owned by ownerUserID
// with the supplied target engine version. Returns the assigned
// game_id.
func (h *lobbyRTMHarness) userCreatePrivateGame(
	t *testing.T,
	ownerUserID, name, targetEngineVersion string,
	enrollmentEndsAt int64,
) string {
	t.Helper()
	resp := postJSON(t, h.lobbyPublicURL+"/api/v1/lobby/games", map[string]any{
		"game_name":             name,
		"game_type":             "private",
		"min_players":           1,
		"max_players":           4,
		"start_gap_hours":       6,
		"start_gap_players":     1,
		"enrollment_ends_at":    enrollmentEndsAt,
		"turn_schedule":         "0 18 * * *",
		"target_engine_version": targetEngineVersion,
	}, http.Header{"X-User-Id": []string{ownerUserID}})
	require.Equalf(t, http.StatusCreated, resp.StatusCode, "create private game: %s", resp.Body)
	var record map[string]any
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &record))
	gameID, ok := record["game_id"].(string)
	require.Truef(t, ok, "game_id missing: %s", resp.Body)
	return gameID
}

func (h *lobbyRTMHarness) userOpenEnrollment(t *testing.T, ownerUserID, gameID string) {
	t.Helper()
	resp := postJSON(t,
		h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/open-enrollment",
		nil,
		http.Header{"X-User-Id": []string{ownerUserID}},
	)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "user open enrollment: %s", resp.Body)
}

func (h *lobbyRTMHarness) userCreateInvite(t *testing.T, ownerUserID, gameID, inviteeUserID string) {
	t.Helper()
	resp := postJSON(t,
		h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/invites",
		map[string]any{"invitee_user_id": inviteeUserID},
		http.Header{"X-User-Id": []string{ownerUserID}},
	)
	require.Equalf(t, http.StatusCreated, resp.StatusCode, "create invite: %s", resp.Body)
}

func (h *lobbyRTMHarness) firstCreatedInviteID(t *testing.T, inviteeUserID, gameID string) string {
	t.Helper()
	req, err := http.NewRequest(http.MethodGet,
		h.lobbyPublicURL+"/api/v1/lobby/my/invites?status=created", nil)
	require.NoError(t, err)
	req.Header.Set("X-User-Id", inviteeUserID)
	resp := doRequest(t, req)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "list my invites: %s", resp.Body)

	var body struct {
		Items []struct {
			InviteID string `json:"invite_id"`
			GameID   string `json:"game_id"`
		} `json:"items"`
	}
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &body))
	for _, item := range body.Items {
		if item.GameID == gameID {
			return item.InviteID
		}
	}
	t.Fatalf("no invite found for invitee %s on game %s; body=%s", inviteeUserID, gameID, resp.Body)
	return ""
}

func (h *lobbyRTMHarness) userRedeemInvite(t *testing.T, inviteeUserID, gameID, inviteID, raceName string) {
	t.Helper()
	resp := postJSON(t,
		h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/invites/"+inviteID+"/redeem",
		map[string]any{"race_name": raceName},
		http.Header{"X-User-Id": []string{inviteeUserID}},
	)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "redeem invite: %s", resp.Body)
}

func (h *lobbyRTMHarness) userReadyToStart(t *testing.T, ownerUserID, gameID string) {
	t.Helper()
	resp := postJSON(t,
		h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/ready-to-start",
		nil,
		http.Header{"X-User-Id": []string{ownerUserID}},
	)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "ready-to-start: %s", resp.Body)
}

func (h *lobbyRTMHarness) userStartGame(t *testing.T, ownerUserID, gameID string) {
	t.Helper()
	resp := postJSON(t,
		h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/start",
		nil,
		http.Header{"X-User-Id": []string{ownerUserID}},
	)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "user start: %s", resp.Body)
}

// prepareInflightGame walks one private game from creation through
// `start`. For the happy and cancel scenarios the game subsequently
// reaches `running` once RTM publishes the success job_result; for
// the failure scenario it ends in `start_failed`.
//
// Returns owner and invitee user records plus the game id.
func (h *lobbyRTMHarness) prepareInflightGame(
	t *testing.T,
	ownerEmail, inviteeEmail, gameName, targetEngineVersion string,
) (owner, invitee ensureUserResponse, gameID string) {
	t.Helper()
	owner = h.ensureUser(t, ownerEmail)
	invitee = h.ensureUser(t, inviteeEmail)

	gameID = h.userCreatePrivateGame(t, owner.UserID, gameName, targetEngineVersion,
		time.Now().Add(48*time.Hour).Unix())
	h.userOpenEnrollment(t, owner.UserID, gameID)
	h.userCreateInvite(t, owner.UserID, gameID, invitee.UserID)
	inviteID := h.firstCreatedInviteID(t, invitee.UserID, gameID)
	h.userRedeemInvite(t, invitee.UserID, gameID, inviteID, "PilotInvitee")
	h.userReadyToStart(t, owner.UserID, gameID)
	h.userStartGame(t, owner.UserID, gameID)
	return owner, invitee, gameID
}

// gameStatus reads one game record off Lobby's internal API and
// returns its status field. Used by waitGameStatus and direct
// assertions.
func (h *lobbyRTMHarness) gameStatus(t *testing.T, gameID string) string {
	t.Helper()
	req, err := http.NewRequest(http.MethodGet,
		h.lobbyAdminURL+"/api/v1/internal/games/"+gameID, nil)
	require.NoError(t, err)
	resp := doRequest(t, req)
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("get game internal: status=%d body=%s", resp.StatusCode, resp.Body)
	}
	var record struct {
		Status string `json:"status"`
	}
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &record))
	return record.Status
}

// waitGameStatus polls `GET /api/v1/internal/games/{gameID}` until
// the record reports the expected status or the timeout fires.
func (h *lobbyRTMHarness) waitGameStatus(t *testing.T, gameID, want string, timeout time.Duration) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for {
		got := h.gameStatus(t, gameID)
		if got == want {
			return
		}
		if time.Now().After(deadline) {
			t.Fatalf("game %s status: want %q got %q (after %s)", gameID, want, got, timeout)
		}
		time.Sleep(150 * time.Millisecond)
	}
}

// publishUserLifecycleEvent appends one event to the per-test
// `user:lifecycle_events` stream. The Lobby userlifecycle worker
// consumes the same stream.
func (h *lobbyRTMHarness) publishUserLifecycleEvent(t *testing.T, eventType, userID string) {
	t.Helper()
	_, err := h.redis.XAdd(context.Background(), &redis.XAddArgs{
		Stream: h.lifecycleStream,
		Values: map[string]any{
			"event_type":     eventType,
			"user_id":        userID,
			"occurred_at_ms": strconv.FormatInt(time.Now().UnixMilli(), 10),
			"source":         "user_admin",
			"actor_type":     "admin",
			"actor_id":       "admin-1",
			"reason_code":    "terminal_policy_violation",
		},
	}).Result()
	require.NoError(t, err)
}

// jobResultEntry decodes one `runtime:job_results` Redis Stream entry.
type jobResultEntry struct {
	StreamID       string
	GameID         string
	Outcome        string
	ContainerID    string
	EngineEndpoint string
	ErrorCode      string
	ErrorMessage   string
}

// stopJobEntry decodes one `runtime:stop_jobs` Redis Stream entry as
// published by Lobby.
type stopJobEntry struct {
	StreamID string
	GameID   string
	Reason   string
}

// notificationIntentEntry decodes one `notification:intents` entry.
type notificationIntentEntry struct {
	StreamID         string
	NotificationType string
	Producer         string
	Payload          map[string]any
}

// allJobResults returns every entry on the per-test job_results
// stream in stream order.
func (h *lobbyRTMHarness) allJobResults(t *testing.T) []jobResultEntry {
	t.Helper()
	entries, err := h.redis.XRange(context.Background(), h.jobResultsStream, "-", "+").Result()
	require.NoError(t, err)
	out := make([]jobResultEntry, 0, len(entries))
	for _, entry := range entries {
		out = append(out, jobResultEntry{
			StreamID:       entry.ID,
			GameID:         streamString(entry.Values, "game_id"),
			Outcome:        streamString(entry.Values, "outcome"),
			ContainerID:    streamString(entry.Values, "container_id"),
			EngineEndpoint: streamString(entry.Values, "engine_endpoint"),
			ErrorCode:      streamString(entry.Values, "error_code"),
			ErrorMessage:   streamString(entry.Values, "error_message"),
		})
	}
	return out
}

// waitJobResult polls the per-test job_results stream until predicate
// matches one entry, or the timeout fires.
func (h *lobbyRTMHarness) waitJobResult(
	t *testing.T,
	predicate func(jobResultEntry) bool,
	timeout time.Duration,
) jobResultEntry {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for {
		entries := h.allJobResults(t)
		for _, entry := range entries {
			if predicate(entry) {
				return entry
			}
		}
		if time.Now().After(deadline) {
			t.Fatalf("no job_result matched within %s; observed=%+v", timeout, entries)
		}
		time.Sleep(150 * time.Millisecond)
	}
}

// allStopJobs returns every entry on the per-test stop_jobs stream.
func (h *lobbyRTMHarness) allStopJobs(t *testing.T) []stopJobEntry {
	t.Helper()
	entries, err := h.redis.XRange(context.Background(), h.stopJobsStream, "-", "+").Result()
	require.NoError(t, err)
	out := make([]stopJobEntry, 0, len(entries))
	for _, entry := range entries {
		out = append(out, stopJobEntry{
			StreamID: entry.ID,
			GameID:   streamString(entry.Values, "game_id"),
			Reason:   streamString(entry.Values, "reason"),
		})
	}
	return out
}

// waitStopJobReason polls the stop_jobs stream until an entry for
// gameID with the expected reason appears.
func (h *lobbyRTMHarness) waitStopJobReason(t *testing.T, gameID, reason string, timeout time.Duration) stopJobEntry {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for {
		for _, entry := range h.allStopJobs(t) {
			if entry.GameID == gameID && entry.Reason == reason {
				return entry
			}
		}
		if time.Now().After(deadline) {
			t.Fatalf("no stop_job for game %s with reason %q within %s", gameID, reason, timeout)
		}
		time.Sleep(150 * time.Millisecond)
	}
}

// allNotificationIntents returns every entry on the per-test
// notification:intents stream.
func (h *lobbyRTMHarness) allNotificationIntents(t *testing.T) []notificationIntentEntry {
	t.Helper()
	entries, err := h.redis.XRange(context.Background(), h.intentsStream, "-", "+").Result()
	require.NoError(t, err)
	out := make([]notificationIntentEntry, 0, len(entries))
	for _, entry := range entries {
		decoded := notificationIntentEntry{
			StreamID:         entry.ID,
			NotificationType: streamString(entry.Values, "notification_type"),
			Producer:         streamString(entry.Values, "producer"),
		}
		// `pkg/notificationintent` publishes the payload under the
		// field name `payload_json`. Older versions of this harness
		// looked for `payload` and silently produced an empty Payload
		// map, which made every predicate that checks `Payload["…"]`
		// fall through. Read both field names for forward compat.
		raw := streamString(entry.Values, "payload_json")
		if raw == "" {
			raw = streamString(entry.Values, "payload")
		}
		if raw != "" {
			var parsed map[string]any
			if err := json.Unmarshal([]byte(raw), &parsed); err == nil {
				decoded.Payload = parsed
			}
		}
		out = append(out, decoded)
	}
	return out
}

// waitNotificationIntent polls the intents stream until the
// predicate matches.
func (h *lobbyRTMHarness) waitNotificationIntent(
	t *testing.T,
	predicate func(notificationIntentEntry) bool,
	timeout time.Duration,
) notificationIntentEntry {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for {
		entries := h.allNotificationIntents(t)
		for _, entry := range entries {
			if predicate(entry) {
				return entry
			}
		}
		if time.Now().After(deadline) {
			summary := make([]string, 0, len(entries))
			for _, entry := range entries {
				summary = append(summary, entry.NotificationType+":"+entry.Producer)
			}
			t.Fatalf("no notification_intent matched within %s; observed=%v", timeout, summary)
		}
		time.Sleep(150 * time.Millisecond)
	}
}

// rtmRuntimeStatus issues `GET /api/v1/internal/runtimes/{gameID}`
// against RTM and returns the persisted runtime record's status, or
// the empty string when RTM responds 404.
func (h *lobbyRTMHarness) rtmRuntimeStatus(t *testing.T, gameID string) (string, int) {
	t.Helper()
	req, err := http.NewRequest(http.MethodGet,
		h.rtmInternalURL+"/api/v1/internal/runtimes/"+gameID, nil)
	require.NoError(t, err)
	resp := doRequest(t, req)
	if resp.StatusCode == http.StatusNotFound {
		return "", resp.StatusCode
	}
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("rtm get runtime: status=%d body=%s", resp.StatusCode, resp.Body)
	}
	var record struct {
		Status string `json:"status"`
	}
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &record))
	return record.Status, resp.StatusCode
}

// waitRTMRuntimeStatus polls RTM until the runtime record reports
// the expected status or the timeout fires.
func (h *lobbyRTMHarness) waitRTMRuntimeStatus(t *testing.T, gameID, want string, timeout time.Duration) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for {
		status, code := h.rtmRuntimeStatus(t, gameID)
		if status == want {
			return
		}
		if time.Now().After(deadline) {
			t.Fatalf("rtm runtime status for %s: want %q got %q (http %d) within %s",
				gameID, want, status, code, timeout)
		}
		time.Sleep(150 * time.Millisecond)
	}
}

// streamString reads a Redis Streams field as a string regardless of
// the underlying go-redis decoded type.
func streamString(values map[string]any, key string) string {
	raw, ok := values[key]
	if !ok {
		return ""
	}
	switch typed := raw.(type) {
	case string:
		return typed
	case []byte:
		return string(typed)
	default:
		return fmt.Sprintf("%v", typed)
	}
}
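streamString's tolerance for the different value types a Redis client may hand back can be exercised in isolation. The snippet below carries its own self-contained copy of the helper purely for demonstration; the map contents are made up.

```go
package main

import "fmt"

// streamString reads a Redis Streams field as a string regardless of
// the underlying decoded type (string, []byte, or anything else).
// Copied here verbatim so the demo runs standalone.
func streamString(values map[string]any, key string) string {
	raw, ok := values[key]
	if !ok {
		return ""
	}
	switch typed := raw.(type) {
	case string:
		return typed
	case []byte:
		return string(typed)
	default:
		return fmt.Sprintf("%v", typed)
	}
}

func main() {
	values := map[string]any{
		"game_id": "g-1",          // plain string
		"turn":    int64(10),      // non-string falls back to %v formatting
		"raw":     []byte("ok"),   // bytes are converted directly
	}
	fmt.Println(streamString(values, "game_id")) // g-1
	fmt.Println(streamString(values, "turn"))    // 10
	fmt.Println(streamString(values, "raw"))     // ok
	fmt.Println(streamString(values, "missing")) // (empty string)
}
```

The missing-key case returning `""` is why callers such as `allNotificationIntents` can probe optional fields (`payload_json` vs `payload`) without an existence check.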

func waitForUserServiceReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()
	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet,
			baseURL+"/api/v1/internal/users/user-readiness-probe/exists", nil)
		require.NoError(t, err)
		response, err := client.Do(req)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("wait for userservice readiness: timeout\n%s", process.Logs())
}

func postJSON(t *testing.T, url string, body any, header http.Header) httpResponse {
	t.Helper()
	var reader io.Reader
	if body != nil {
		payload, err := json.Marshal(body)
		require.NoError(t, err)
		reader = bytes.NewReader(payload)
	}
	req, err := http.NewRequest(http.MethodPost, url, reader)
	require.NoError(t, err)
	if body != nil {
		req.Header.Set("Content-Type", "application/json")
	}
	maps.Copy(req.Header, header)
	return doRequest(t, req)
}

func doRequest(t *testing.T, request *http.Request) httpResponse {
	t.Helper()
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true},
	}
	t.Cleanup(client.CloseIdleConnections)

	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	payload, err := io.ReadAll(response.Body)
	require.NoError(t, err)
	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(payload),
		Header:     response.Header.Clone(),
	}
}

func requireJSONStatus(t *testing.T, response httpResponse, wantStatus int, target any) {
	t.Helper()
	require.Equalf(t, wantStatus, response.StatusCode, "unexpected status, body=%s", response.Body)
	if target != nil {
		require.NoError(t, decodeStrictJSON([]byte(response.Body), target))
	}
}

func decodeStrictJSON(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()
	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}
	return nil
}
|
||||
|
||||
// resolveDockerHost honours DOCKER_HOST when the developer machine
|
||||
// routes through colima or a remote daemon, falling back to the
|
||||
// standard unix path otherwise.
|
||||
func resolveDockerHost() string {
|
||||
if host := strings.TrimSpace(os.Getenv("DOCKER_HOST")); host != "" {
|
||||
return host
|
||||
}
|
||||
return "unix:///var/run/docker.sock"
|
||||
}
|
||||
|
||||
@@ -1,204 +0,0 @@
package lobbyrtm_test

import (
	"net/http"
	"strings"
	"testing"
	"time"

	"galaxy/integration/internal/harness"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

const (
	jobOutcomeSuccess = "success"
	jobOutcomeFailure = "failure"

	stopReasonCancelled = "cancelled"

	errorCodeImagePullFailed = "image_pull_failed"
)

// TestStartFlowSucceedsWithRealEngine drives the happy path:
// Lobby creates a private game, the owner walks it through enrollment
// to start, Lobby publishes a `runtime:start_jobs` envelope with the
// resolved `image_ref`, RTM starts a real `galaxy/game` engine
// container, publishes a success `runtime:job_results` entry, and
// Lobby's runtimejobresult worker transitions the game to `running`.
// The test then hits the engine's `/healthz` endpoint directly via
// the bridge network IP, proving the container is alive end-to-end.
func TestStartFlowSucceedsWithRealEngine(t *testing.T) {
	h := newLobbyRTMHarness(t)

	owner, _, gameID := h.prepareInflightGame(t,
		"start-owner@example.com",
		"start-invitee@example.com",
		"Start Galaxy",
		defaultEngineVersion,
	)
	t.Logf("owner=%s game=%s", owner.UserID, gameID)

	// RTM publishes a success job_result for the start envelope.
	startResult := h.waitJobResult(t, func(entry jobResultEntry) bool {
		return entry.GameID == gameID && entry.Outcome == jobOutcomeSuccess
	}, 90*time.Second)
	require.Empty(t, startResult.ErrorCode, "happy path must publish empty error_code")
	require.NotEmpty(t, startResult.ContainerID, "happy path must carry a container id")
	require.NotEmpty(t, startResult.EngineEndpoint, "happy path must carry an engine endpoint")

	// Lobby's runtime-job-result worker drives the game to `running`.
	h.waitGameStatus(t, gameID, "running", 30*time.Second)

	// RTM persists the runtime record and exposes it through REST.
	h.waitRTMRuntimeStatus(t, gameID, "running", 15*time.Second)

	// A real engine container exists with the expected labels.
	containerID := harness.FindContainerIDByLabel(t, gameID)
	require.NotEmptyf(t, containerID, "no engine container found for game %s", gameID)
	require.Equal(t, startResult.ContainerID, containerID,
		"job_result container_id must match the live container")
	require.Equal(t, "running", harness.ContainerState(t, containerID))

	// The engine answers /healthz on the bridge network IP.
	ip := harness.ContainerNetworkIP(t, containerID, h.dockerNetwork)
	require.NotEmptyf(t, ip, "engine container %s has no IP on network %s", containerID, h.dockerNetwork)
	harness.WaitForEngineHealthz(t, ip, 15*time.Second)
}

// TestRunningGameStopsWhenOwnerCascadeBlocked drives the stop path:
// drive the same game to `running`, publish a
// `user.lifecycle.permanent_blocked` event for the owner, the Lobby
// userlifecycle worker cascades to the inflight game, publishes a
// `runtime:stop_jobs` envelope with `reason=cancelled`, and RTM stops
// the engine. The test asserts on the public boundary surfaces only.
func TestRunningGameStopsWhenOwnerCascadeBlocked(t *testing.T) {
	h := newLobbyRTMHarness(t)

	owner, _, gameID := h.prepareInflightGame(t,
		"stop-owner@example.com",
		"stop-invitee@example.com",
		"Stop Galaxy",
		defaultEngineVersion,
	)
	t.Logf("owner=%s game=%s", owner.UserID, gameID)

	// Wait for the start outcome so we know RTM is fully running
	// before we trigger the cascade.
	h.waitJobResult(t, func(entry jobResultEntry) bool {
		return entry.GameID == gameID && entry.Outcome == jobOutcomeSuccess
	}, 90*time.Second)
	h.waitGameStatus(t, gameID, "running", 30*time.Second)
	containerID := harness.FindContainerIDByLabel(t, gameID)
	require.NotEmpty(t, containerID)

	// Trigger the cascade: a permanent block on the game owner causes
	// Lobby's userlifecycle worker to publish stop_job(cancelled) and
	// transition the owned game to `cancelled`.
	h.publishUserLifecycleEvent(t, "user.lifecycle.permanent_blocked", owner.UserID)

	// Lobby observably publishes the right stop envelope on the boundary.
	stop := h.waitStopJobReason(t, gameID, stopReasonCancelled, 30*time.Second)
	assert.Equal(t, gameID, stop.GameID)

	// Lobby moves the game to cancelled.
	h.waitGameStatus(t, gameID, "cancelled", 30*time.Second)

	// RTM consumes stop_job, stops the engine, and persists status=stopped.
	h.waitRTMRuntimeStatus(t, gameID, "stopped", 30*time.Second)

	// The container is no longer running. Docker reports `exited`
	// (or `created`/`removing` during teardown); none of those match
	// `running`, which is the only state that contradicts a successful
	// stop.
	require.Eventuallyf(t, func() bool {
		state := harness.ContainerState(t, containerID)
		return state != "running"
	}, 30*time.Second, 250*time.Millisecond,
		"engine container %s did not leave running state", containerID)

	// RTM emitted at least two job_results for this game: one success
	// for the start, one success for the stop.
	successCount := 0
	for _, entry := range h.allJobResults(t) {
		if entry.GameID == gameID && entry.Outcome == jobOutcomeSuccess {
			successCount++
		}
	}
	assert.GreaterOrEqualf(t, successCount, 2,
		"expected at least two success job_results (start + stop) for game %s", gameID)
}

// TestStartFailsWhenImageMissing drives the failure path: the game's
// `target_engine_version` resolves to a non-existent image tag, RTM
// fails to pull, publishes a failure `runtime:job_results` plus a
// `runtime.image_pull_failed` notification intent, and Lobby's
// runtimejobresult worker transitions the game to `start_failed`.
func TestStartFailsWhenImageMissing(t *testing.T) {
	h := newLobbyRTMHarness(t)

	owner, _, gameID := h.prepareInflightGame(t,
		"fail-owner@example.com",
		"fail-invitee@example.com",
		"Fail Galaxy",
		missingEngineVersion,
	)
	t.Logf("owner=%s game=%s", owner.UserID, gameID)

	expectedImageRef := "galaxy/game:" + missingEngineVersion + "-lobbyrtm-it"

	// RTM publishes a failure job_result with the stable code.
	failure := h.waitJobResult(t, func(entry jobResultEntry) bool {
		return entry.GameID == gameID && entry.Outcome == jobOutcomeFailure
	}, 120*time.Second)
	assert.Equal(t, errorCodeImagePullFailed, failure.ErrorCode)
	assert.Empty(t, failure.ContainerID)
	assert.Empty(t, failure.EngineEndpoint)
	assert.NotEmpty(t, failure.ErrorMessage)

	// RTM also publishes an admin notification intent on the shared stream.
	intent := h.waitNotificationIntent(t, func(entry notificationIntentEntry) bool {
		if entry.NotificationType != notificationImagePulled {
			return false
		}
		payloadGameID, _ := entry.Payload["game_id"].(string)
		return payloadGameID == gameID
	}, 30*time.Second)
	require.NotNil(t, intent.Payload)
	assert.Equal(t, gameID, intent.Payload["game_id"])
	assert.Equal(t, expectedImageRef, intent.Payload["image_ref"])
	assert.Equal(t, errorCodeImagePullFailed, intent.Payload["error_code"])

	// Lobby flips the game to start_failed.
	h.waitGameStatus(t, gameID, "start_failed", 60*time.Second)

	// No engine container should exist for this game.
	containerID := harness.FindContainerIDByLabel(t, gameID)
	if containerID != "" {
		state := harness.ContainerState(t, containerID)
		assert.NotEqual(t, "running", state,
			"failed image pull must not leave a running container behind (state=%s)", state)
	}

	// RTM either has no record (clean rollback) or has one not in
	// `running`. Either is acceptable per the start service contract.
	status, code := h.rtmRuntimeStatus(t, gameID)
	switch code {
	case http.StatusNotFound:
		// nothing persisted — clean rollback path
	case http.StatusOK:
		assert.NotEqual(t, "running", status,
			"failed image pull must not persist a running record")
	default:
		t.Fatalf("unexpected RTM runtime response: status=%q code=%d", status, code)
	}

	// Sanity-check that the notification carried RTM's producer marker
	// rather than Lobby's, so we know the suite truly observed RTM
	// publishing on the shared stream.
	assert.Truef(t,
		strings.Contains(intent.Producer, "rtm") ||
			strings.Contains(intent.Producer, "runtime"),
		"image_pull_failed intent producer should be RTM-flavoured, got %q", intent.Producer)
}
@@ -1,664 +0,0 @@
// Package lobbyrtmnotification_test exercises the failure-with-
// notification path that crosses three real services at once: Lobby
// publishes a start job, Runtime Manager fails to pull the engine
// image, RTM publishes both a failure `runtime:job_results` envelope
// AND a `runtime.image_pull_failed` admin notification intent on
// `notification:intents`. The Notification Service consumes the intent
// and routes it to Mail Service, where the resulting delivery is
// observable on the public list-deliveries surface.
//
// The suite proves the same Redis bus carries both flows correctly
// when all three services are booted together — the union of
// `integration/lobbyrtm` (which uses a stub notification) and
// `integration/rtmanagernotification` (which has no Lobby).
package lobbyrtmnotification_test

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"io"
	"net/http"
	"net/url"
	"os"
	"path/filepath"
	"runtime"
	"strconv"
	"strings"
	"sync/atomic"
	"testing"
	"time"

	"galaxy/integration/internal/harness"

	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

const (
	notificationIntentsStream = "notification:intents"
	startJobsStream           = "runtime:start_jobs"
	stopJobsStream            = "runtime:stop_jobs"
	jobResultsStream          = "runtime:job_results"
	healthEventsStream        = "runtime:health_events"
	userLifecycleStream       = "user:lifecycle_events"
	gmEventsStream            = "gm:lobby_events"
	mailDeliveriesPath        = "/api/v1/internal/deliveries"
	notificationImagePulled   = "runtime.image_pull_failed"
	missingEngineVersion      = "0.0.0-missing"
	adminEmailRecipient       = "rtm-admin@example.com"
)

var suiteSeq atomic.Int64

// TestImagePullFailureReachesMailThroughNotification drives Lobby +
// RTM + Notification + Mail end-to-end. Lobby publishes a start job
// for an unresolvable image; RTM fails the pull and publishes both a
// failure job_result (consumed by Lobby) and a notification intent
// (consumed by Notification, then routed to Mail).
func TestImagePullFailureReachesMailThroughNotification(t *testing.T) {
	h := newTripleHarness(t)

	owner := h.ensureUser(t, "triple-owner@example.com")
	invitee := h.ensureUser(t, "triple-invitee@example.com")
	gameID := h.adminCreatePrivateGameForOwner(t, owner.UserID, "Triple Galaxy",
		time.Now().Add(48*time.Hour).Unix(), missingEngineVersion)
	h.userOpenEnrollment(t, owner.UserID, gameID)
	h.userCreateInvite(t, owner.UserID, gameID, invitee.UserID)
	inviteID := h.firstCreatedInviteID(t, invitee.UserID, gameID)
	h.userRedeemInvite(t, invitee.UserID, gameID, inviteID, "PilotTriple")
	h.userReadyToStart(t, owner.UserID, gameID)
	h.userStartGame(t, owner.UserID, gameID)
	t.Logf("triple harness gameID=%s ownerUserID=%s", gameID, owner.UserID)

	expectedImageRef := "galaxy/game:" + missingEngineVersion + "-tripleit"

	// 1. RTM publishes a failure job_result on `runtime:job_results`.
	failure := h.waitJobResult(t, func(entry jobResultEntry) bool {
		return entry.GameID == gameID && entry.Outcome == "failure"
	}, 120*time.Second)
	assert.Equal(t, "image_pull_failed", failure.ErrorCode)

	// 2. RTM publishes an admin notification intent.
	intent := h.waitNotificationIntent(t, func(entry notificationIntentEntry) bool {
		return entry.NotificationType == notificationImagePulled &&
			entry.PayloadGameID == gameID
	}, 60*time.Second)
	assert.Equal(t, expectedImageRef, intent.PayloadImageRef)

	// 3. Notification consumes the intent and Mail records the
	// delivery for the configured admin recipient.
	idempotencyKey := "notification:" + intent.RedisEntryID +
		"/email:email:" + adminEmailRecipient
	delivery := h.eventuallyDelivery(t, url.Values{
		"source":          []string{"notification"},
		"status":          []string{"sent"},
		"recipient":       []string{adminEmailRecipient},
		"template_id":     []string{notificationImagePulled},
		"idempotency_key": []string{idempotencyKey},
	})
	assert.Equal(t, "template", delivery.PayloadMode)
	assert.Equal(t, notificationImagePulled, delivery.TemplateID)
	assert.Equal(t, []string{adminEmailRecipient}, delivery.To)

	// 4. Lobby's runtimejobresult worker drives the game to
	// `start_failed` because of the same failure outcome on the
	// shared bus.
	h.waitGameStatus(t, gameID, "start_failed", 60*time.Second)
}

type tripleHarness struct {
	redis *redis.Client

	userServiceURL  string
	lobbyAdminURL   string
	lobbyPublicURL  string
	mailBaseURL     string
	notificationURL string

	intentsStream   string
	startJobs       string
	stopJobs        string
	jobResults      string
	healthEvents    string
	lifecycleStream string
	gmEventsStream  string

	processes []*harness.Process
}

func newTripleHarness(t *testing.T) *tripleHarness {
	t.Helper()
	harness.RequireDockerDaemon(t) // RTM /readyz pings Docker.

	redisRuntime := harness.StartRedisContainer(t)
	redisClient := redis.NewClient(&redis.Options{
		Addr:            redisRuntime.Addr,
		Protocol:        2,
		DisableIdentity: true,
	})
	t.Cleanup(func() { require.NoError(t, redisClient.Close()) })

	dockerNetwork := harness.EnsureDockerNetwork(t)

	userServiceAddr := harness.FreeTCPAddress(t)
	mailInternalAddr := harness.FreeTCPAddress(t)
	notificationInternalAddr := harness.FreeTCPAddress(t)
	lobbyPublicAddr := harness.FreeTCPAddress(t)
	lobbyInternalAddr := harness.FreeTCPAddress(t)
	rtmInternalAddr := harness.FreeTCPAddress(t)

	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
	mailBinary := harness.BuildBinary(t, "mail", "./mail/cmd/mail")
	notificationBinary := harness.BuildBinary(t, "notification", "./notification/cmd/notification")
	lobbyBinary := harness.BuildBinary(t, "lobby", "./lobby/cmd/lobby")
	rtmBinary := harness.BuildBinary(t, "rtmanager", "./rtmanager/cmd/rtmanager")

	suffix := strconv.FormatInt(suiteSeq.Add(1), 10)
	intentsStream := notificationIntentsStream + ":" + suffix
	startJobs := startJobsStream + ":" + suffix
	stopJobs := stopJobsStream + ":" + suffix
	jobResults := jobResultsStream + ":" + suffix
	healthEvents := healthEventsStream + ":" + suffix
	lifecycle := userLifecycleStream + ":" + suffix
	gmEvents := gmEventsStream + ":" + suffix

	// User Service.
	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

	// Mail Service.
	mailEnv := harness.StartMailServicePersistence(t, redisRuntime.Addr).Env
	mailEnv["MAIL_LOG_LEVEL"] = "info"
	mailEnv["MAIL_INTERNAL_HTTP_ADDR"] = mailInternalAddr
	mailEnv["MAIL_TEMPLATE_DIR"] = mailTemplateDir(t)
	mailEnv["MAIL_SMTP_MODE"] = "stub"
	mailEnv["MAIL_STREAM_BLOCK_TIMEOUT"] = "100ms"
	mailEnv["MAIL_OPERATOR_REQUEST_TIMEOUT"] = time.Second.String()
	mailEnv["MAIL_SHUTDOWN_TIMEOUT"] = "2s"
	mailEnv["OTEL_TRACES_EXPORTER"] = "none"
	mailEnv["OTEL_METRICS_EXPORTER"] = "none"
	mailProcess := harness.StartProcess(t, "mail", mailBinary, mailEnv)
	waitForMailReady(t, mailProcess, "http://"+mailInternalAddr)

	// Notification Service. Admin emails for runtime.* go to a single
	// shared address; the suite does not test multi-recipient routing.
	notificationEnv := harness.StartNotificationServicePersistence(t, redisRuntime.Addr).Env
	notificationEnv["NOTIFICATION_LOG_LEVEL"] = "info"
	notificationEnv["NOTIFICATION_INTERNAL_HTTP_ADDR"] = notificationInternalAddr
	notificationEnv["NOTIFICATION_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
	notificationEnv["NOTIFICATION_USER_SERVICE_TIMEOUT"] = time.Second.String()
	notificationEnv["NOTIFICATION_INTENTS_STREAM"] = intentsStream
	notificationEnv["NOTIFICATION_INTENTS_READ_BLOCK_TIMEOUT"] = "100ms"
	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MIN"] = "100ms"
	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MAX"] = "100ms"
	notificationEnv["NOTIFICATION_ADMIN_EMAILS_RUNTIME_IMAGE_PULL_FAILED"] = adminEmailRecipient
	notificationEnv["NOTIFICATION_ADMIN_EMAILS_RUNTIME_CONTAINER_START_FAILED"] = adminEmailRecipient
	notificationEnv["NOTIFICATION_ADMIN_EMAILS_RUNTIME_START_CONFIG_INVALID"] = adminEmailRecipient
	notificationEnv["OTEL_TRACES_EXPORTER"] = "none"
	notificationEnv["OTEL_METRICS_EXPORTER"] = "none"
	notificationProcess := harness.StartProcess(t, "notification", notificationBinary, notificationEnv)
	harness.WaitForHTTPStatus(t, notificationProcess, "http://"+notificationInternalAddr+"/readyz", http.StatusOK)

	// Lobby.
	lobbyEnv := harness.StartLobbyServicePersistence(t, redisRuntime.Addr).Env
	lobbyEnv["LOBBY_LOG_LEVEL"] = "info"
	lobbyEnv["LOBBY_PUBLIC_HTTP_ADDR"] = lobbyPublicAddr
	lobbyEnv["LOBBY_INTERNAL_HTTP_ADDR"] = lobbyInternalAddr
	lobbyEnv["LOBBY_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
	lobbyEnv["LOBBY_GM_BASE_URL"] = "http://" + notificationInternalAddr
	lobbyEnv["LOBBY_NOTIFICATION_INTENTS_STREAM"] = intentsStream
	lobbyEnv["LOBBY_USER_LIFECYCLE_STREAM"] = lifecycle
	lobbyEnv["LOBBY_RUNTIME_JOB_RESULTS_STREAM"] = jobResults
	lobbyEnv["LOBBY_RUNTIME_START_JOBS_STREAM"] = startJobs
	lobbyEnv["LOBBY_RUNTIME_STOP_JOBS_STREAM"] = stopJobs
	lobbyEnv["LOBBY_GM_EVENTS_STREAM"] = gmEvents
	lobbyEnv["LOBBY_RUNTIME_JOB_RESULTS_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["LOBBY_USER_LIFECYCLE_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["LOBBY_GM_EVENTS_READ_BLOCK_TIMEOUT"] = "200ms"
	lobbyEnv["LOBBY_ENGINE_IMAGE_TEMPLATE"] = "galaxy/game:{engine_version}-tripleit"
	lobbyEnv["OTEL_TRACES_EXPORTER"] = "none"
	lobbyEnv["OTEL_METRICS_EXPORTER"] = "none"
	lobbyProcess := harness.StartProcess(t, "lobby", lobbyBinary, lobbyEnv)
	harness.WaitForHTTPStatus(t, lobbyProcess, "http://"+lobbyInternalAddr+"/readyz", http.StatusOK)

	// Runtime Manager.
	rtmEnv := harness.StartRTManagerServicePersistence(t, redisRuntime.Addr).Env
	rtmEnv["RTMANAGER_LOG_LEVEL"] = "info"
	rtmEnv["RTMANAGER_INTERNAL_HTTP_ADDR"] = rtmInternalAddr
	rtmEnv["RTMANAGER_LOBBY_INTERNAL_BASE_URL"] = "http://" + lobbyInternalAddr
	rtmEnv["RTMANAGER_LOBBY_INTERNAL_TIMEOUT"] = "200ms"
	rtmEnv["RTMANAGER_DOCKER_HOST"] = resolveDockerHost()
	rtmEnv["RTMANAGER_DOCKER_NETWORK"] = dockerNetwork
	rtmEnv["RTMANAGER_GAME_STATE_ROOT"] = t.TempDir()
	rtmEnv["RTMANAGER_REDIS_START_JOBS_STREAM"] = startJobs
	rtmEnv["RTMANAGER_REDIS_STOP_JOBS_STREAM"] = stopJobs
	rtmEnv["RTMANAGER_REDIS_JOB_RESULTS_STREAM"] = jobResults
	rtmEnv["RTMANAGER_REDIS_HEALTH_EVENTS_STREAM"] = healthEvents
	rtmEnv["RTMANAGER_NOTIFICATION_INTENTS_STREAM"] = intentsStream
	rtmEnv["RTMANAGER_STREAM_BLOCK_TIMEOUT"] = "200ms"
	rtmEnv["RTMANAGER_RECONCILE_INTERVAL"] = "5s"
	rtmEnv["RTMANAGER_CLEANUP_INTERVAL"] = "5s"
	rtmEnv["RTMANAGER_INSPECT_INTERVAL"] = "5s"
	rtmEnv["RTMANAGER_PROBE_INTERVAL"] = "5s"
	rtmEnv["RTMANAGER_PROBE_TIMEOUT"] = "1s"
	rtmEnv["RTMANAGER_PROBE_FAILURES_THRESHOLD"] = "3"
	rtmEnv["RTMANAGER_GAME_LEASE_TTL_SECONDS"] = "30"
	rtmEnv["OTEL_TRACES_EXPORTER"] = "none"
	rtmEnv["OTEL_METRICS_EXPORTER"] = "none"
	rtmProcess := harness.StartProcess(t, "rtmanager", rtmBinary, rtmEnv)
	harness.WaitForHTTPStatus(t, rtmProcess, "http://"+rtmInternalAddr+"/readyz", http.StatusOK)

	return &tripleHarness{
		redis:           redisClient,
		userServiceURL:  "http://" + userServiceAddr,
		lobbyAdminURL:   "http://" + lobbyInternalAddr,
		lobbyPublicURL:  "http://" + lobbyPublicAddr,
		mailBaseURL:     "http://" + mailInternalAddr,
		notificationURL: "http://" + notificationInternalAddr,
		intentsStream:   intentsStream,
		startJobs:       startJobs,
		stopJobs:        stopJobs,
		jobResults:      jobResults,
		healthEvents:    healthEvents,
		lifecycleStream: lifecycle,
		gmEventsStream:  gmEvents,
		processes:       []*harness.Process{userServiceProcess, mailProcess, notificationProcess, lobbyProcess, rtmProcess},
	}
}

// --- Lobby fixtures ---

type ensureUserResponse struct {
	Outcome string `json:"outcome"`
	UserID  string `json:"user_id"`
}

func (h *tripleHarness) ensureUser(t *testing.T, email string) ensureUserResponse {
	t.Helper()
	resp := postJSON(t, h.userServiceURL+"/api/v1/internal/users/ensure-by-email", map[string]any{
		"email": email,
		"registration_context": map[string]string{
			"preferred_language": "en",
			"time_zone":          "Europe/Kaliningrad",
		},
	}, nil)
	var out ensureUserResponse
	requireJSONStatus(t, resp, http.StatusOK, &out)
	require.NotEmpty(t, out.UserID)
	return out
}

func (h *tripleHarness) adminCreatePrivateGameForOwner(t *testing.T, ownerUserID, gameName string, enrollmentEndsAt int64, engineVersion string) string {
	t.Helper()
	resp := postJSON(t, h.lobbyPublicURL+"/api/v1/lobby/games", map[string]any{
		"game_name":             gameName,
		"game_type":             "private",
		"min_players":           1,
		"max_players":           4,
		"start_gap_hours":       6,
		"start_gap_players":     1,
		"enrollment_ends_at":    enrollmentEndsAt,
		"turn_schedule":         "0 18 * * *",
		"target_engine_version": engineVersion,
	}, http.Header{"X-User-Id": []string{ownerUserID}})
	require.Equalf(t, http.StatusCreated, resp.StatusCode, "create private game: %s", resp.Body)
	var record struct {
		GameID string `json:"game_id"`
	}
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &record))
	require.NotEmpty(t, record.GameID)
	return record.GameID
}

func (h *tripleHarness) userOpenEnrollment(t *testing.T, ownerUserID, gameID string) {
	t.Helper()
	resp := postJSON(t, h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/open-enrollment", nil,
		http.Header{"X-User-Id": []string{ownerUserID}})
	require.Equalf(t, http.StatusOK, resp.StatusCode, "open enrollment: %s", resp.Body)
}

func (h *tripleHarness) userReadyToStart(t *testing.T, ownerUserID, gameID string) {
	t.Helper()
	resp := postJSON(t, h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/ready-to-start", nil,
		http.Header{"X-User-Id": []string{ownerUserID}})
	require.Equalf(t, http.StatusOK, resp.StatusCode, "ready-to-start: %s", resp.Body)
}

func (h *tripleHarness) userStartGame(t *testing.T, ownerUserID, gameID string) {
	t.Helper()
	resp := postJSON(t, h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/start", nil,
		http.Header{"X-User-Id": []string{ownerUserID}})
	require.Equalf(t, http.StatusOK, resp.StatusCode, "start game: %s", resp.Body)
}

func (h *tripleHarness) userCreateInvite(t *testing.T, ownerUserID, gameID, inviteeUserID string) {
	t.Helper()
	resp := postJSON(t, h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/invites",
		map[string]any{"invitee_user_id": inviteeUserID},
		http.Header{"X-User-Id": []string{ownerUserID}})
	require.Equalf(t, http.StatusCreated, resp.StatusCode, "create invite: %s", resp.Body)
}

func (h *tripleHarness) firstCreatedInviteID(t *testing.T, inviteeUserID, gameID string) string {
	t.Helper()
	req, err := http.NewRequest(http.MethodGet,
		h.lobbyPublicURL+"/api/v1/lobby/my/invites?status=created", nil)
	require.NoError(t, err)
	req.Header.Set("X-User-Id", inviteeUserID)
	resp := doRequest(t, req)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "list my invites: %s", resp.Body)

	var body struct {
		Items []struct {
			InviteID string `json:"invite_id"`
			GameID   string `json:"game_id"`
		} `json:"items"`
	}
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &body))
	for _, item := range body.Items {
		if item.GameID == gameID {
			return item.InviteID
		}
	}
	t.Fatalf("no invite for invitee %s on game %s", inviteeUserID, gameID)
	return ""
}

func (h *tripleHarness) userRedeemInvite(t *testing.T, inviteeUserID, gameID, inviteID, raceName string) {
	t.Helper()
	resp := postJSON(t,
		h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/invites/"+inviteID+"/redeem",
		map[string]any{"race_name": raceName},
		http.Header{"X-User-Id": []string{inviteeUserID}})
	require.Equalf(t, http.StatusOK, resp.StatusCode, "redeem invite: %s", resp.Body)
}

// --- observation helpers ---

type jobResultEntry struct {
	GameID         string
	Outcome        string
	ContainerID    string
	EngineEndpoint string
	ErrorCode      string
	ErrorMessage   string
}

func (h *tripleHarness) waitJobResult(t *testing.T, predicate func(jobResultEntry) bool, timeout time.Duration) jobResultEntry {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for {
		entries, err := h.redis.XRange(context.Background(), h.jobResults, "-", "+").Result()
		require.NoError(t, err)
		for _, entry := range entries {
			parsed := jobResultEntry{
				GameID:         readString(entry.Values, "game_id"),
				Outcome:        readString(entry.Values, "outcome"),
				ContainerID:    readString(entry.Values, "container_id"),
				EngineEndpoint: readString(entry.Values, "engine_endpoint"),
				ErrorCode:      readString(entry.Values, "error_code"),
				ErrorMessage:   readString(entry.Values, "error_message"),
			}
			if predicate(parsed) {
				return parsed
			}
		}
		if time.Now().After(deadline) {
			t.Fatalf("matching job_result not observed within %s", timeout)
		}
		time.Sleep(50 * time.Millisecond)
	}
}

type notificationIntentEntry struct {
	RedisEntryID     string
	NotificationType string
	Producer         string
	AudienceKind     string
	PayloadGameID    string
	PayloadImageRef  string
	PayloadErrorCode string
}

func (h *tripleHarness) waitNotificationIntent(t *testing.T, predicate func(notificationIntentEntry) bool, timeout time.Duration) notificationIntentEntry {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for {
		entries, err := h.redis.XRange(context.Background(), h.intentsStream, "-", "+").Result()
		require.NoError(t, err)
		for _, entry := range entries {
			parsed := notificationIntentEntry{
				RedisEntryID:     entry.ID,
				NotificationType: readString(entry.Values, "notification_type"),
				Producer:         readString(entry.Values, "producer"),
				AudienceKind:     readString(entry.Values, "audience_kind"),
			}
			if payload := readString(entry.Values, "payload_json"); payload != "" {
				var data struct {
					GameID    string `json:"game_id"`
					ImageRef  string `json:"image_ref"`
					ErrorCode string `json:"error_code"`
				}
				if err := json.Unmarshal([]byte(payload), &data); err == nil {
					parsed.PayloadGameID = data.GameID
					parsed.PayloadImageRef = data.ImageRef
					parsed.PayloadErrorCode = data.ErrorCode
				}
			}
			if predicate(parsed) {
				return parsed
			}
		}
		if time.Now().After(deadline) {
			t.Fatalf("matching notification intent not observed within %s", timeout)
		}
		time.Sleep(50 * time.Millisecond)
	}
}

type mailDeliverySummary struct {
	DeliveryID  string   `json:"delivery_id"`
	Source      string   `json:"source"`
	PayloadMode string   `json:"payload_mode"`
	TemplateID  string   `json:"template_id"`
	Locale      string   `json:"locale"`
	To          []string `json:"to"`
	Status      string   `json:"status"`
}

func (h *tripleHarness) eventuallyDelivery(t *testing.T, query url.Values) mailDeliverySummary {
	t.Helper()
	deadline := time.Now().Add(60 * time.Second)
	for {
		listURL := h.mailBaseURL + mailDeliveriesPath + "?" + query.Encode()
		req, err := http.NewRequest(http.MethodGet, listURL, nil)
		require.NoError(t, err)
		resp := doRequest(t, req)
		if resp.StatusCode == http.StatusOK {
			var body struct {
				Items []mailDeliverySummary `json:"items"`
			}
			if json.Unmarshal([]byte(resp.Body), &body) == nil && len(body.Items) > 0 {
				return body.Items[0]
			}
		}
		if time.Now().After(deadline) {
			t.Fatalf("mail delivery not observed within 60s for query %v", query)
		}
		time.Sleep(50 * time.Millisecond)
	}
}

func (h *tripleHarness) waitGameStatus(t *testing.T, gameID, want string, timeout time.Duration) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for {
		req, err := http.NewRequest(http.MethodGet, h.lobbyAdminURL+"/api/v1/lobby/games/"+gameID, nil)
		require.NoError(t, err)
		resp := doRequest(t, req)
		if resp.StatusCode == http.StatusOK {
			var record struct {
|
||||
Status string `json:"status"`
|
||||
}
|
||||
if json.Unmarshal([]byte(resp.Body), &record) == nil && record.Status == want {
|
||||
return
|
||||
}
|
||||
}
|
||||
if time.Now().After(deadline) {
|
||||
t.Fatalf("game %s did not reach status %q within %s", gameID, want, timeout)
|
||||
}
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
}
|
||||
}
// --- shared helpers ---

func readString(values map[string]any, key string) string {
	v, _ := values[key].(string)
	return strings.TrimSpace(v)
}

type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

func postJSON(t *testing.T, url string, body any, header http.Header) httpResponse {
	t.Helper()
	var reader io.Reader
	if body != nil {
		payload, err := json.Marshal(body)
		require.NoError(t, err)
		reader = bytes.NewReader(payload)
	}
	req, err := http.NewRequest(http.MethodPost, url, reader)
	require.NoError(t, err)
	if body != nil {
		req.Header.Set("Content-Type", "application/json")
	}
	for key, vs := range header {
		for _, v := range vs {
			req.Header.Add(key, v)
		}
	}
	return doRequest(t, req)
}

func doRequest(t *testing.T, request *http.Request) httpResponse {
	t.Helper()
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true},
	}
	t.Cleanup(client.CloseIdleConnections)

	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	payload, err := io.ReadAll(response.Body)
	require.NoError(t, err)
	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(payload),
		Header:     response.Header.Clone(),
	}
}

func requireJSONStatus(t *testing.T, response httpResponse, want int, target any) {
	t.Helper()
	require.Equalf(t, want, response.StatusCode, "response: %s", response.Body)
	require.NoError(t, decodeStrictJSON([]byte(response.Body), target))
}

func decodeStrictJSON(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()
	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}
	return nil
}
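`decodeStrictJSON` layers two checks on top of `encoding/json`: `DisallowUnknownFields` rejects keys the target struct does not declare, and a second `Decode` call distinguishes a clean `io.EOF` from trailing input after the first JSON value. A self-contained sketch of the same pattern (the `decodeStrict` name is illustrative, mirroring the helper above):

```go
package main

import (
	"bytes"
	"encoding/json"
	"errors"
	"fmt"
	"io"
)

// decodeStrict decodes exactly one JSON value into target, rejecting
// unknown fields and any trailing input after the value.
func decodeStrict(payload []byte, target any) error {
	dec := json.NewDecoder(bytes.NewReader(payload))
	dec.DisallowUnknownFields()
	if err := dec.Decode(target); err != nil {
		return err
	}
	// A well-formed single value leaves the decoder at io.EOF.
	if err := dec.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}
	return nil
}

func main() {
	type user struct {
		Name string `json:"name"`
	}
	var u user
	fmt.Println(decodeStrict([]byte(`{"name":"ada"}`), &u) == nil)            // valid value: true
	fmt.Println(decodeStrict([]byte(`{"name":"ada","x":1}`), &u) != nil)      // unknown field: true
	fmt.Println(decodeStrict([]byte(`{"name":"ada"} {"n":2}`), &u) != nil)    // trailing input: true
}
```

This is stricter than `json.Unmarshal`, which silently ignores both unknown fields and (via a single Decode) would not catch trailing garbage.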
func waitForUserServiceReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()
	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet, baseURL+"/api/v1/internal/users/user-readiness-probe/exists", nil)
		require.NoError(t, err)
		response, err := client.Do(req)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("wait for userservice readiness: timeout\n%s", process.Logs())
}

func waitForMailReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()
	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet, baseURL+mailDeliveriesPath, nil)
		require.NoError(t, err)
		response, err := client.Do(req)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("wait for mail readiness: timeout\n%s", process.Logs())
}

func mailTemplateDir(t *testing.T) string {
	t.Helper()
	return filepath.Join(repositoryRoot(t), "mail", "templates")
}

func repositoryRoot(t *testing.T) string {
	t.Helper()
	_, file, _, ok := runtime.Caller(0)
	if !ok {
		t.Fatal("resolve repository root: runtime caller is unavailable")
	}
	return filepath.Clean(filepath.Join(filepath.Dir(file), "..", ".."))
}

// resolveDockerHost honours DOCKER_HOST when the developer machine
// routes through colima or a remote daemon, and falls back to the
// standard unix socket path otherwise.
func resolveDockerHost() string {
	if host := strings.TrimSpace(os.Getenv("DOCKER_HOST")); host != "" {
		return host
	}
	return "unix:///var/run/docker.sock"
}
@@ -1,323 +0,0 @@
// Package lobbyuser_test exercises the synchronous Lobby → User Service
// eligibility boundary by running both binaries in-process against a real
// Redis container. The Game Master client surface is satisfied by an
// inline httptest stub because the eligibility flow does not touch GM.
package lobbyuser_test

import (
	"bytes"
	"encoding/json"
	"errors"
	"io"
	"maps"
	"net/http"
	"net/http/httptest"
	"testing"
	"time"

	"galaxy/integration/internal/harness"

	"github.com/stretchr/testify/require"
)

func TestEligibilityCapturedOnApplication(t *testing.T) {
	h := newLobbyUserHarness(t)

	user := h.ensureUser(t, "happy@example.com")
	gameID := h.adminCreatePublicGame(t, "Happy Path Galaxy", time.Now().Add(48*time.Hour).Unix())
	h.openEnrollment(t, gameID)

	app := h.submitApplicationExpectStatus(t, user.UserID, gameID, "PilotAurora", http.StatusCreated)

	require.NotEmpty(t, app["application_id"])
	require.Equal(t, gameID, app["game_id"])
	require.Equal(t, user.UserID, app["applicant_user_id"])
	require.Equal(t, "PilotAurora", app["race_name"])
	require.Equal(t, "submitted", app["status"])
}

func TestEligibilityRejectedForPermanentlyBlockedUser(t *testing.T) {
	h := newLobbyUserHarness(t)

	user := h.ensureUser(t, "blocked@example.com")
	h.applyPermanentBlock(t, user.UserID)

	gameID := h.adminCreatePublicGame(t, "Block Galaxy", time.Now().Add(48*time.Hour).Unix())
	h.openEnrollment(t, gameID)

	body := h.submitApplicationExpectStatus(t, user.UserID, gameID, "PilotEclipse", http.StatusUnprocessableEntity)
	requireErrorCode(t, body, "eligibility_denied")
}

func TestEligibilityRejectedForUnknownUser(t *testing.T) {
	h := newLobbyUserHarness(t)

	gameID := h.adminCreatePublicGame(t, "Unknown Galaxy", time.Now().Add(48*time.Hour).Unix())
	h.openEnrollment(t, gameID)

	body := h.submitApplicationExpectStatus(t, "user-does-not-exist", gameID, "PilotPhantom", http.StatusUnprocessableEntity)
	requireErrorCode(t, body, "eligibility_denied")
}

func TestEligibilityFailsWhenUserServiceDown(t *testing.T) {
	h := newLobbyUserHarness(t)

	user := h.ensureUser(t, "transient@example.com")
	gameID := h.adminCreatePublicGame(t, "Transient Galaxy", time.Now().Add(48*time.Hour).Unix())
	h.openEnrollment(t, gameID)

	h.userServiceProcess.Stop(t)

	body := h.submitApplicationExpectStatus(t, user.UserID, gameID, "PilotOutage", http.StatusServiceUnavailable)
	requireErrorCode(t, body, "service_unavailable")
}

type lobbyUserHarness struct {
	userServiceURL string
	lobbyPublicURL string
	lobbyAdminURL  string

	gmStub *httptest.Server

	userServiceProcess *harness.Process
	lobbyProcess       *harness.Process
}

type ensureByEmailResponse struct {
	Outcome string `json:"outcome"`
	UserID  string `json:"user_id"`
}

func newLobbyUserHarness(t *testing.T) *lobbyUserHarness {
	t.Helper()

	redisRuntime := harness.StartRedisContainer(t)

	gmStub := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write([]byte(`{}`))
	}))
	t.Cleanup(gmStub.Close)

	userServiceAddr := harness.FreeTCPAddress(t)
	lobbyPublicAddr := harness.FreeTCPAddress(t)
	lobbyInternalAddr := harness.FreeTCPAddress(t)

	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
	lobbyBinary := harness.BuildBinary(t, "lobby", "./lobby/cmd/lobby")

	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

	lobbyEnv := harness.StartLobbyServicePersistence(t, redisRuntime.Addr).Env
	lobbyEnv["LOBBY_LOG_LEVEL"] = "info"
	lobbyEnv["LOBBY_PUBLIC_HTTP_ADDR"] = lobbyPublicAddr
	lobbyEnv["LOBBY_INTERNAL_HTTP_ADDR"] = lobbyInternalAddr
	lobbyEnv["LOBBY_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
	lobbyEnv["LOBBY_GM_BASE_URL"] = gmStub.URL
	lobbyEnv["OTEL_TRACES_EXPORTER"] = "none"
	lobbyEnv["OTEL_METRICS_EXPORTER"] = "none"
	lobbyProcess := harness.StartProcess(t, "lobby", lobbyBinary, lobbyEnv)
	harness.WaitForHTTPStatus(t, lobbyProcess, "http://"+lobbyInternalAddr+"/readyz", http.StatusOK)

	return &lobbyUserHarness{
		userServiceURL:     "http://" + userServiceAddr,
		lobbyPublicURL:     "http://" + lobbyPublicAddr,
		lobbyAdminURL:      "http://" + lobbyInternalAddr,
		gmStub:             gmStub,
		userServiceProcess: userServiceProcess,
		lobbyProcess:       lobbyProcess,
	}
}

func (h *lobbyUserHarness) ensureUser(t *testing.T, email string) ensureByEmailResponse {
	t.Helper()

	resp := postJSON(t, h.userServiceURL+"/api/v1/internal/users/ensure-by-email", map[string]any{
		"email": email,
		"registration_context": map[string]string{
			"preferred_language": "en",
			"time_zone":          "Europe/Kaliningrad",
		},
	}, nil)

	var out ensureByEmailResponse
	requireJSONStatus(t, resp, http.StatusOK, &out)
	require.Equal(t, "created", out.Outcome)
	require.NotEmpty(t, out.UserID)
	return out
}

func (h *lobbyUserHarness) applyPermanentBlock(t *testing.T, userID string) {
	t.Helper()

	resp := postJSON(t, h.userServiceURL+"/api/v1/internal/users/"+userID+"/sanctions/apply", map[string]any{
		"sanction_code": "permanent_block",
		"scope":         "platform",
		"reason_code":   "terminal_policy_violation",
		"actor":         map[string]string{"type": "admin", "id": "admin-1"},
		"applied_at":    time.Now().UTC().Format(time.RFC3339),
	}, nil)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "apply permanent_block: %s", resp.Body)
}

func (h *lobbyUserHarness) adminCreatePublicGame(t *testing.T, name string, enrollmentEndsAt int64) string {
	t.Helper()

	resp := postJSON(t, h.lobbyAdminURL+"/api/v1/lobby/games", map[string]any{
		"game_name":             name,
		"game_type":             "public",
		"min_players":           2,
		"max_players":           4,
		"start_gap_hours":       6,
		"start_gap_players":     1,
		"enrollment_ends_at":    enrollmentEndsAt,
		"turn_schedule":         "0 18 * * *",
		"target_engine_version": "1.0.0",
	}, nil)
	require.Equalf(t, http.StatusCreated, resp.StatusCode, "admin create game: %s", resp.Body)

	var record map[string]any
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &record))
	gameID, ok := record["game_id"].(string)
	require.True(t, ok, "game_id missing in admin create response: %s", resp.Body)
	return gameID
}

func (h *lobbyUserHarness) openEnrollment(t *testing.T, gameID string) {
	t.Helper()

	resp := postJSON(t, h.lobbyAdminURL+"/api/v1/lobby/games/"+gameID+"/open-enrollment", nil, nil)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "open enrollment: %s", resp.Body)
}

func (h *lobbyUserHarness) submitApplicationExpectStatus(t *testing.T, userID, gameID, raceName string, want int) map[string]any {
	t.Helper()

	resp := postJSON(t, h.lobbyPublicURL+"/api/v1/lobby/games/"+gameID+"/applications", map[string]any{
		"race_name": raceName,
	}, http.Header{"X-User-Id": []string{userID}})
	require.Equalf(t, want, resp.StatusCode, "submit application: %s", resp.Body)

	var body map[string]any
	if resp.Body != "" {
		require.NoError(t, json.Unmarshal([]byte(resp.Body), &body))
	}
	return body
}

func waitForUserServiceReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet, baseURL+"/api/v1/internal/users/user-readiness-probe/exists", nil)
		require.NoError(t, err)

		response, err := client.Do(req)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for userservice readiness: timeout\n%s", process.Logs())
}

type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

func postJSON(t *testing.T, url string, body any, header http.Header) httpResponse {
	t.Helper()

	var reader io.Reader
	if body != nil {
		payload, err := json.Marshal(body)
		require.NoError(t, err)
		reader = bytes.NewReader(payload)
	}

	req, err := http.NewRequest(http.MethodPost, url, reader)
	require.NoError(t, err)
	if body != nil {
		req.Header.Set("Content-Type", "application/json")
	}
	maps.Copy(req.Header, header)
	return doRequest(t, req)
}

func doRequest(t *testing.T, request *http.Request) httpResponse {
	t.Helper()

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			DisableKeepAlives: true,
		},
	}
	t.Cleanup(client.CloseIdleConnections)

	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	payload, err := io.ReadAll(response.Body)
	require.NoError(t, err)

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(payload),
		Header:     response.Header.Clone(),
	}
}

func requireJSONStatus(t *testing.T, response httpResponse, wantStatus int, target any) {
	t.Helper()

	require.Equalf(t, wantStatus, response.StatusCode, "unexpected status, body=%s", response.Body)
	if target != nil {
		require.NoError(t, decodeStrictJSON([]byte(response.Body), target))
	}
}

func decodeStrictJSON(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()

	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}

	return nil
}

func requireErrorCode(t *testing.T, body map[string]any, want string) {
	t.Helper()
	require.NotNil(t, body, "error response body must not be empty")

	envelope, ok := body["error"].(map[string]any)
	require.Truef(t, ok, "expected error envelope, got %v", body)
	require.Equalf(t, want, envelope["code"], "expected error code %q, got %v", want, envelope["code"])
}
@@ -0,0 +1,85 @@
package integration_test

import (
	"context"
	"encoding/json"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/testenv"
)

// TestMailFlow_LoginCodeAndAdminListing triggers a login code email
// (which uses backend's mail outbox), waits for mailpit to capture
// the SMTP delivery, and verifies the admin endpoints expose the
// same delivery via the typed list response.
//
// Resend on a `sent` row returns 409 (per OpenAPI/decision record);
// this test asserts that contract by attempting a resend on the
// captured (and now sent) delivery.
func TestMailFlow_LoginCodeAndAdminListing(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
	defer cancel()

	// Trigger a login code, which writes one mail_deliveries row and
	// drains via the worker into mailpit.
	sess := testenv.RegisterSession(t, plat, "pilot+mail@example.com")
	if sess.DeviceSessionID == "" {
		t.Fatalf("session not established")
	}

	admin := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, plat.Backend.AdminUser, plat.Backend.AdminPassword)
	raw, resp, err := admin.Do(ctx, http.MethodGet, "/api/v1/admin/mail/deliveries?page=1&page_size=10", nil)
	if err != nil {
		t.Fatalf("list deliveries: %v", err)
	}
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("list deliveries: status %d body=%s", resp.StatusCode, string(raw))
	}
	var list struct {
		Items []struct {
			DeliveryID string `json:"delivery_id"`
			Status     string `json:"status"`
			TemplateID string `json:"template_id"`
		} `json:"items"`
	}
	if err := json.Unmarshal(raw, &list); err != nil {
		t.Fatalf("decode list: %v", err)
	}
	if len(list.Items) == 0 {
		t.Fatalf("admin list returned no deliveries; expected at least the login code row")
	}

	var sent string
	deadline := time.Now().Add(15 * time.Second)
	for time.Now().Before(deadline) && sent == "" {
		raw, resp, err = admin.Do(ctx, http.MethodGet, "/api/v1/admin/mail/deliveries?page=1&page_size=10", nil)
		if err != nil || resp.StatusCode != http.StatusOK {
			t.Fatalf("list deliveries during wait: %v status=%v", err, resp)
		}
		_ = json.Unmarshal(raw, &list)
		for _, it := range list.Items {
			if it.Status == "sent" {
				sent = it.DeliveryID
				break
			}
		}
		if sent == "" {
			time.Sleep(300 * time.Millisecond)
		}
	}
	if sent == "" {
		t.Fatalf("no delivery reached `sent` within 15s; admin list = %+v", list.Items)
	}

	// Resend on a sent row must return 409.
	raw, resp, err = admin.Do(ctx, http.MethodPost, "/api/v1/admin/mail/deliveries/"+sent+"/resend", nil)
	if err != nil {
		t.Fatalf("resend sent delivery: %v", err)
	}
	if resp.StatusCode != http.StatusConflict {
		t.Fatalf("resend on sent delivery: status %d body=%s, want 409", resp.StatusCode, string(raw))
	}
}
@@ -1,367 +0,0 @@
|
||||
// Package mailsmoke_test exercises the real SMTP adapter of Mail
|
||||
// Service against a real SMTP receiver running in a testcontainer.
|
||||
// The suite is the small dedicated smoke suite called out in
|
||||
// `TESTING.md §4` ("Add only a small dedicated smoke suite for the
|
||||
// real mail adapter").
|
||||
//
|
||||
// The boundary contract under test is: a delivery accepted on Mail's
|
||||
// internal HTTP surface in `smtp` mode is actually transmitted over
|
||||
// SMTP to the configured upstream and is observable on the
|
||||
// receiver's inspection API. No other Galaxy service is booted; the
|
||||
// test is intentionally narrow.
|
||||
package mailsmoke_test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/rand"
|
||||
"crypto/rsa"
|
||||
"crypto/x509"
|
||||
"crypto/x509/pkix"
|
||||
"encoding/json"
|
||||
"encoding/pem"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"math/big"
|
||||
"net"
|
||||
"net/http"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"sync/atomic"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"galaxy/integration/internal/harness"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
testcontainers "github.com/testcontainers/testcontainers-go"
|
||||
"github.com/testcontainers/testcontainers-go/wait"
|
||||
)
|
||||
|
||||
const (
|
||||
mailpitImage = "axllent/mailpit:latest"
|
||||
mailpitSMTPPort = "1025/tcp"
|
||||
mailpitAPIPort = "8025/tcp"
|
||||
mailDeliveryPath = "/api/v1/internal/deliveries"
|
||||
commandSource = "mailsmoke"
|
||||
commandTemplate = "auth.login_code"
|
||||
smokeRecipient = "smoke-recipient@example.com"
|
||||
smokeFromEmail = "noreply@galaxy.example.com"
|
||||
)
|
||||
|
||||
var smokeSeq atomic.Int64
|
||||
|
||||
// TestMailServiceDeliversToRealSMTPProvider drives Mail Service in
|
||||
// `smtp` mode at a real Mailpit testcontainer. The service must
|
||||
// transmit the configured payload over SMTP and the receiver must
|
||||
// register it as a stored message visible on its HTTP inspection API.
|
||||
func TestMailServiceDeliversToRealSMTPProvider(t *testing.T) {
|
||||
mailpit := startMailpitContainer(t)
|
||||
|
||||
mailService := startMailServiceWithSMTP(t, mailpit.SMTPEndpoint())
|
||||
|
||||
suffix := strconv.FormatInt(smokeSeq.Add(1), 10)
|
||||
idempotencyKey := "mailsmoke:" + suffix
|
||||
uniqueRecipient := "smoke-" + suffix + "-" + smokeRecipient
|
||||
|
||||
// Mail Service has a synchronous trusted REST surface for the
|
||||
// auth login-code path (`/api/v1/internal/login-code-deliveries`).
|
||||
// It accepts the request, renders the template, and drives the
|
||||
// configured SMTP provider — exactly what the smoke suite needs
|
||||
// to verify against the real Mailpit container.
|
||||
loginCodeBody := map[string]any{
|
||||
"email": uniqueRecipient,
|
||||
"code": "123456",
|
||||
"locale": "en",
|
||||
}
|
||||
bodyBytes, err := json.Marshal(loginCodeBody)
|
||||
require.NoError(t, err)
|
||||
|
||||
req, err := http.NewRequest(http.MethodPost,
|
||||
mailService.BaseURL+"/api/v1/internal/login-code-deliveries",
|
||||
bytes.NewReader(bodyBytes),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
req.Header.Set("Content-Type", "application/json")
|
||||
req.Header.Set("Idempotency-Key", idempotencyKey)
|
||||
resp := doRequest(t, req)
|
||||
require.Equalf(t,
|
||||
http.StatusOK,
|
||||
resp.StatusCode,
|
||||
"submit login-code delivery: %s", resp.Body,
|
||||
)
|
||||
|
||||
// Mailpit exposes received messages at /api/v1/messages with a
|
||||
// JSON envelope containing `messages_count` plus per-message
|
||||
// items. Wait until our envelope shows up.
|
||||
waitForMailpitMessage(t, mailpit.APIBaseURL(), uniqueRecipient, 30*time.Second)
|
||||
}
|
||||
|
||||
// --- mailpit container ---
|
||||
|
||||
type mailpitContainer struct {
|
||||
container testcontainers.Container
|
||||
smtpHost string
|
||||
smtpPort string
|
||||
apiHost string
|
||||
apiPort string
|
||||
}
|
||||
|
||||
func (m *mailpitContainer) SMTPEndpoint() string {
|
||||
return m.smtpHost + ":" + m.smtpPort
|
||||
}
|
||||
|
||||
func (m *mailpitContainer) APIBaseURL() string {
|
||||
return "http://" + m.apiHost + ":" + m.apiPort
|
||||
}
|
||||
|
||||
func startMailpitContainer(t *testing.T) *mailpitContainer {
|
||||
t.Helper()
|
||||
|
||||
// Mail Service hardcodes `gomail.TLSMandatory`; the smoke suite
|
||||
// must give Mailpit a usable cert+key so STARTTLS succeeds even
|
||||
// against a self-signed server. The cert is short-lived and is
|
||||
// regenerated per test run.
|
||||
certPEM, keyPEM := generateSelfSignedCert(t, "mailpit-smoke")
|
||||
|
||||
ctx := context.Background()
|
||||
req := testcontainers.ContainerRequest{
|
||||
Image: mailpitImage,
|
||||
ExposedPorts: []string{
|
||||
mailpitSMTPPort,
|
||||
mailpitAPIPort,
|
||||
},
|
||||
Env: map[string]string{
|
||||
"MP_SMTP_TLS_CERT": "/etc/mailpit/cert.pem",
|
||||
"MP_SMTP_TLS_KEY": "/etc/mailpit/key.pem",
|
||||
},
|
||||
Files: []testcontainers.ContainerFile{
|
||||
{
|
||||
Reader: bytes.NewReader(certPEM),
|
||||
ContainerFilePath: "/etc/mailpit/cert.pem",
|
||||
FileMode: 0o644,
|
||||
},
|
||||
{
|
||||
Reader: bytes.NewReader(keyPEM),
|
||||
ContainerFilePath: "/etc/mailpit/key.pem",
|
||||
FileMode: 0o600,
|
||||
},
|
||||
},
|
||||
WaitingFor: wait.ForLog("accessible via").
|
||||
WithStartupTimeout(30 * time.Second),
|
||||
}
|
||||
container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
|
||||
ContainerRequest: req,
|
||||
Started: true,
|
||||
})
|
||||
require.NoError(t, err)
|
||||
t.Cleanup(func() {
|
||||
if err := testcontainers.TerminateContainer(container); err != nil {
|
||||
t.Errorf("terminate mailpit container: %v", err)
|
||||
}
|
||||
})
|
||||
|
||||
smtpHost, err := container.Host(ctx)
|
||||
require.NoError(t, err)
|
||||
smtpPort, err := container.MappedPort(ctx, mailpitSMTPPort)
|
||||
require.NoError(t, err)
|
||||
|
||||
apiPort, err := container.MappedPort(ctx, mailpitAPIPort)
|
||||
require.NoError(t, err)
|
||||
|
||||
return &mailpitContainer{
|
||||
container: container,
|
||||
smtpHost: smtpHost,
|
||||
smtpPort: smtpPort.Port(),
|
||||
apiHost: smtpHost,
|
||||
apiPort: apiPort.Port(),
|
||||
}
|
||||
}
|
||||
|
||||
func waitForMailpitMessage(t *testing.T, apiBaseURL, recipient string, timeout time.Duration) {
|
||||
t.Helper()
|
||||
|
||||
deadline := time.Now().Add(timeout)
|
||||
for time.Now().Before(deadline) {
|
||||
req, err := http.NewRequest(http.MethodGet, apiBaseURL+"/api/v1/messages", nil)
|
||||
require.NoError(t, err)
|
||||
resp := doRequest(t, req)
|
||||
if resp.StatusCode == http.StatusOK {
|
||||
var body struct {
|
||||
Messages []struct {
|
||||
To []struct {
|
||||
Address string `json:"Address"`
|
||||
} `json:"To"`
|
||||
Subject string `json:"Subject"`
|
||||
} `json:"messages"`
|
||||
}
|
||||
if json.Unmarshal([]byte(resp.Body), &body) == nil {
|
||||
for _, m := range body.Messages {
|
||||
for _, addr := range m.To {
|
||||
if addr.Address == recipient {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
}
|
||||
t.Fatalf("mailpit did not register a message for %s within %s", recipient, timeout)
|
||||
}
|
||||
|
||||
// --- mail service in real-SMTP mode ---
|
||||
|
||||
type mailService struct {
|
||||
BaseURL string
|
||||
}
|
||||
|
||||
func startMailServiceWithSMTP(t *testing.T, smtpAddr string) mailService {
|
||||
t.Helper()
|
||||
|
||||
redisRuntime := harness.StartRedisContainer(t)
|
||||
mailInternalAddr := harness.FreeTCPAddress(t)
|
||||
mailBinary := harness.BuildBinary(t, "mail", "./mail/cmd/mail")
|
||||
|
||||
mailEnv := harness.StartMailServicePersistence(t, redisRuntime.Addr).Env
|
||||
mailEnv["MAIL_LOG_LEVEL"] = "info"
|
||||
mailEnv["MAIL_INTERNAL_HTTP_ADDR"] = mailInternalAddr
|
||||
mailEnv["MAIL_TEMPLATE_DIR"] = mailTemplateDir(t)
|
||||
mailEnv["MAIL_SMTP_MODE"] = "smtp"
|
||||
mailEnv["MAIL_SMTP_ADDR"] = smtpAddr
|
||||
mailEnv["MAIL_SMTP_FROM_EMAIL"] = smokeFromEmail
|
||||
mailEnv["MAIL_SMTP_FROM_NAME"] = "Galaxy Mail Smoke"
|
||||
mailEnv["MAIL_SMTP_TIMEOUT"] = "10s"
|
||||
mailEnv["MAIL_SMTP_INSECURE_SKIP_VERIFY"] = "true"
|
||||
mailEnv["MAIL_STREAM_BLOCK_TIMEOUT"] = "100ms"
|
||||
mailEnv["MAIL_OPERATOR_REQUEST_TIMEOUT"] = "5s"
|
||||
mailEnv["MAIL_SHUTDOWN_TIMEOUT"] = "2s"
|
||||
mailEnv["OTEL_TRACES_EXPORTER"] = "none"
|
||||
mailEnv["OTEL_METRICS_EXPORTER"] = "none"
|
||||
|
||||
mailProcess := harness.StartProcess(t, "mail", mailBinary, mailEnv)
|
||||
waitForMailReady(t, mailProcess, "http://"+mailInternalAddr)
|
||||
|
||||
return mailService{BaseURL: "http://" + mailInternalAddr}
|
||||
}
|
||||
|
||||
// --- shared helpers ---
|
||||
|
||||
func waitForMailReady(t *testing.T, process *harness.Process, baseURL string) {
|
||||
t.Helper()
|
||||
client := &http.Client{Timeout: 250 * time.Millisecond}
|
||||
t.Cleanup(client.CloseIdleConnections)
|
||||
|
||||
deadline := time.Now().Add(10 * time.Second)
|
||||
for time.Now().Before(deadline) {
|
||||
req, err := http.NewRequest(http.MethodGet, baseURL+mailDeliveryPath, nil)
|
||||
require.NoError(t, err)
|
||||
response, err := client.Do(req)
|
||||
if err == nil {
|
||||
_, _ = io.Copy(io.Discard, response.Body)
|
||||
response.Body.Close()
|
||||
if response.StatusCode == http.StatusOK {
|
||||
return
|
||||
}
|
||||
}
|
||||
time.Sleep(25 * time.Millisecond)
|
||||
}
|
||||
t.Fatalf("wait for mail readiness: timeout\n%s", process.Logs())
|
||||
}
|
||||
|
||||
type httpResponse struct {
|
||||
StatusCode int
|
||||
Body string
|
||||
Header http.Header
|
||||
}
|
||||
|
||||
func postJSON(t *testing.T, url string, body any) httpResponse {
|
||||
t.Helper()
|
||||
payload, err := json.Marshal(body)
|
||||
require.NoError(t, err)
|
||||
|
||||
req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(payload))
|
||||
require.NoError(t, err)
|
||||
req.Header.Set("Content-Type", "application/json")
|
||||
return doRequest(t, req)
|
||||
}
|
||||
|
||||
func doRequest(t *testing.T, request *http.Request) httpResponse {
|
||||
t.Helper()
|
||||
client := &http.Client{
|
||||
Timeout: 5 * time.Second,
|
||||
Transport: &http.Transport{DisableKeepAlives: true},
|
||||
}
|
||||
t.Cleanup(client.CloseIdleConnections)
|
||||
|
||||
response, err := client.Do(request)
|
||||
require.NoError(t, err)
|
||||
defer response.Body.Close()
|
||||
|
||||
payload, err := io.ReadAll(response.Body)
|
||||
require.NoError(t, err)
|
||||
return httpResponse{
|
||||
StatusCode: response.StatusCode,
|
||||
Body: string(payload),
|
||||
Header: response.Header.Clone(),
|
||||
}
|
||||
}
|
||||
|
||||
// generateSelfSignedCert produces a short-lived RSA cert + key for the
|
||||
// Mailpit container so STARTTLS succeeds against
|
||||
// `MAIL_SMTP_INSECURE_SKIP_VERIFY=true` clients.
|
||||
func generateSelfSignedCert(t *testing.T, commonName string) ([]byte, []byte) {
|
||||
t.Helper()
|
||||
|
||||
priv, err := rsa.GenerateKey(rand.Reader, 2048)
|
||||
require.NoError(t, err)
|
||||
|
||||
serial, err := rand.Int(rand.Reader, big.NewInt(1<<62))
|
||||
require.NoError(t, err)
|
||||
|
||||
template := x509.Certificate{
|
||||
SerialNumber: serial,
|
||||
Subject: pkix.Name{CommonName: commonName},
|
||||
NotBefore: time.Now().Add(-time.Hour),
|
||||
NotAfter: time.Now().Add(24 * time.Hour),
|
||||
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment | x509.KeyUsageCertSign,
|
||||
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
|
||||
BasicConstraintsValid: true,
|
||||
IsCA: true,
|
||||
IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
|
||||
DNSNames: []string{"localhost", commonName},
|
||||
}
|
||||
|
||||
certDER, err := x509.CreateCertificate(rand.Reader, &template, &template, &priv.PublicKey, priv)
|
||||
require.NoError(t, err)
|
||||
|
||||
certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: certDER})
|
||||
keyPEM := pem.EncodeToMemory(&pem.Block{
|
||||
Type: "RSA PRIVATE KEY",
|
||||
Bytes: x509.MarshalPKCS1PrivateKey(priv),
|
||||
})
|
||||
return certPEM, keyPEM
|
||||
}
|
||||
|
||||
func mailTemplateDir(t *testing.T) string {
|
||||
t.Helper()
|
||||
return filepath.Join(repositoryRoot(t), "mail", "templates")
|
||||
}
|
||||
|
||||
func repositoryRoot(t *testing.T) string {
|
||||
t.Helper()
|
||||
_, file, _, ok := runtime.Caller(0)
|
||||
if !ok {
|
||||
t.Fatal("resolve repository root: runtime caller is unavailable")
|
||||
}
|
||||
return filepath.Clean(filepath.Join(filepath.Dir(file), "..", ".."))
|
||||
}
|
||||
|
||||
// silence unused-import noise for symbols touched only via reflection /
|
||||
// conditional compilation.
|
||||
var _ = fmt.Sprintf
|
||||
var _ = errors.New
|
||||
var _ = assert.Equal
|
||||
@@ -0,0 +1,138 @@
package integration_test

import (
	"context"
	"net/http"
	"strings"
	"testing"
	"time"

	"galaxy/integration/testenv"
)

// TestNotificationFlow_LobbyInvite asserts that a `lobby.invite.received`
// intent triggers a push frame on the gateway SubscribeEvents stream
// for the invitee AND a captured email at mailpit.
func TestNotificationFlow_LobbyInvite(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Register an engine version so private-game creation can pass
	// validation.
	admin := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, plat.Backend.AdminUser, plat.Backend.AdminPassword)
	if _, resp, err := admin.Do(ctx, http.MethodPost, "/api/v1/admin/engine-versions", map[string]any{
		"version": "v1.0.0", "image_ref": "galaxy/game:integration", "enabled": true,
	}); err != nil || resp.StatusCode/100 != 2 {
		t.Fatalf("seed engine_version: err=%v resp=%v", err, resp)
	}

	inviter := testenv.RegisterSession(t, plat, "inviter@example.com")
	invitee := testenv.RegisterSession(t, plat, "invitee@example.com")
	inviterUser, err := inviter.LookupUserID(ctx, plat)
	if err != nil {
		t.Fatalf("resolve inviter user_id: %v", err)
	}
	inviteeUser, err := invitee.LookupUserID(ctx, plat)
	if err != nil {
		t.Fatalf("resolve invitee user_id: %v", err)
	}

	// Inviter creates a private game.
	inviterClient := testenv.NewBackendUserClient(plat.Backend.HTTPURL, inviterUser)
	gameBody := map[string]any{
		"game_name":             "Private Sortie",
		"visibility":            "private",
		"min_players":           2,
		"max_players":           4,
		"start_gap_hours":       1,
		"start_gap_players":     2,
		"enrollment_ends_at":    time.Now().Add(24 * time.Hour).UTC().Format(time.RFC3339),
		"turn_schedule":         "0 * * * *",
		"target_engine_version": "v1.0.0",
	}
	raw, resp, err := inviterClient.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games", gameBody)
	if err != nil || resp.StatusCode != http.StatusCreated {
		t.Fatalf("create private game: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}
	var game struct {
		GameID string `json:"game_id"`
	}
	if err := decodeJSON(raw, &game); err != nil {
		t.Fatalf("decode game: %v", err)
	}

	// Invitee opens SubscribeEvents stream BEFORE the invite is
	// issued so we cannot miss the push frame.
	gw, err := invitee.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("invitee dial: %v", err)
	}
	defer gw.Close()
	streamCtx, streamCancel := context.WithCancel(ctx)
	defer streamCancel()
	events, errCh, err := gw.SubscribeEvents(streamCtx, "gateway.subscribe")
	if err != nil {
		t.Fatalf("subscribe events: %v", err)
	}

	// Drain the bootstrap server-time event before the test gets
	// going so the invite event is the next thing observed.
	select {
	case <-events:
	case err := <-errCh:
		t.Fatalf("subscribe stream error before invite: %v", err)
	case <-time.After(5 * time.Second):
		t.Fatalf("bootstrap event not received within 5s")
	}

	// Now clear mailpit so we can detect the new invite email.
	if err := plat.Mailpit.DeleteAll(ctx); err != nil {
		t.Fatalf("clear mailpit: %v", err)
	}

	// Inviter issues an invite for invitee.
	inviteBody := map[string]any{
		"invited_user_id": inviteeUser,
		"race_name":       "Invitee-Crew",
	}
	raw, resp, err = inviterClient.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games/"+game.GameID+"/invites", inviteBody)
	if err != nil || resp.StatusCode != http.StatusCreated {
		t.Fatalf("issue invite: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}

	// Push: expect a non-bootstrap event.
	pushDeadline := time.After(20 * time.Second)
	gotPush := false
PUSH:
	for {
		select {
		case ev, ok := <-events:
			if !ok {
				break PUSH
			}
			if ev == nil || ev.GetEventType() == "gateway.server_time" {
				continue
			}
			gotPush = true
			break PUSH
		case err := <-errCh:
			t.Fatalf("subscribe stream error during invite: %v", err)
		case <-pushDeadline:
			break PUSH
		}
	}
	if !gotPush {
		t.Fatalf("no push event received for lobby invite within 20s")
	}

	// Email: expect mailpit to receive a message addressed to invitee.
	if _, err := plat.Mailpit.WaitForMessage(ctx, "to:"+invitee.Email, 30*time.Second); err != nil {
		t.Fatalf("invite email not captured: %v", err)
	}
	_ = strings.TrimSpace
}

func decodeJSON(raw []byte, v any) error {
	return jsonUnmarshal(raw, v)
}
@@ -1,526 +0,0 @@
package notificationgateway_test

import (
	"bytes"
	"context"
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"errors"
	"io"
	"net/http"
	"path/filepath"
	"testing"
	"time"

	gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
	contractsgatewayv1 "galaxy/integration/internal/contracts/gatewayv1"
	"galaxy/integration/internal/harness"

	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/require"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

const (
	notificationGatewayClientEventsStream = "gateway:client_events"
	notificationGatewayIntentsStream      = "notification:intents"
)

func TestNotificationGatewayFanOutsAllUserPushTypesToAllUserSessions(t *testing.T) {
	h := newNotificationGatewayHarness(t)

	recipient := h.ensureUser(t, "pilot@example.com", "fr-FR")

	firstPrivateKey := newClientPrivateKey("first")
	secondPrivateKey := newClientPrivateKey("second")
	unrelatedPrivateKey := newClientPrivateKey("unrelated")
	h.seedGatewaySession(t, "device-session-1", recipient.UserID, firstPrivateKey)
	h.seedGatewaySession(t, "device-session-2", recipient.UserID, secondPrivateKey)
	h.seedGatewaySession(t, "device-session-3", "user-unrelated", unrelatedPrivateKey)

	conn := h.dialGateway(t)
	client := gatewayv1.NewEdgeGatewayClient(conn)

	firstCtx, cancelFirst := context.WithCancel(context.Background())
	defer cancelFirst()
	firstStream, err := client.SubscribeEvents(firstCtx, newSubscribeEventsRequest("device-session-1", "request-1", firstPrivateKey))
	require.NoError(t, err)
	assertBootstrapEvent(t, recvGatewayEvent(t, firstStream), h.responseSignerPublicKey, "request-1")

	secondCtx, cancelSecond := context.WithCancel(context.Background())
	defer cancelSecond()
	secondStream, err := client.SubscribeEvents(secondCtx, newSubscribeEventsRequest("device-session-2", "request-2", secondPrivateKey))
	require.NoError(t, err)
	assertBootstrapEvent(t, recvGatewayEvent(t, secondStream), h.responseSignerPublicKey, "request-2")

	unrelatedCtx, cancelUnrelated := context.WithCancel(context.Background())
	defer cancelUnrelated()
	unrelatedStream, err := client.SubscribeEvents(unrelatedCtx, newSubscribeEventsRequest("device-session-3", "request-3", unrelatedPrivateKey))
	require.NoError(t, err)
	assertBootstrapEvent(t, recvGatewayEvent(t, unrelatedStream), h.responseSignerPublicKey, "request-3")

	cases := []pushIntentCase{
		{
			notificationType: "game.turn.ready",
			producer:         "game_master",
			payloadJSON:      `{"game_id":"game-123","game_name":"Nebula Clash","turn_number":54}`,
		},
		{
			notificationType: "game.finished",
			producer:         "game_master",
			payloadJSON:      `{"game_id":"game-123","game_name":"Nebula Clash","final_turn_number":55}`,
		},
		{
			notificationType: "lobby.application.submitted",
			producer:         "game_lobby",
			payloadJSON:      `{"game_id":"game-123","game_name":"Nebula Clash","applicant_user_id":"applicant-1","applicant_name":"Nova Pilot"}`,
		},
		{
			notificationType: "lobby.membership.approved",
			producer:         "game_lobby",
			payloadJSON:      `{"game_id":"game-123","game_name":"Nebula Clash"}`,
		},
		{
			notificationType: "lobby.membership.rejected",
			producer:         "game_lobby",
			payloadJSON:      `{"game_id":"game-123","game_name":"Nebula Clash"}`,
		},
		{
			notificationType: "lobby.invite.created",
			producer:         "game_lobby",
			payloadJSON:      `{"game_id":"game-123","game_name":"Nebula Clash","inviter_user_id":"owner-1","inviter_name":"Owner Pilot"}`,
		},
		{
			notificationType: "lobby.invite.redeemed",
			producer:         "game_lobby",
			payloadJSON:      `{"game_id":"game-123","game_name":"Nebula Clash","invitee_user_id":"invitee-1","invitee_name":"Nova Pilot"}`,
		},
	}

	for index, tc := range cases {
		messageID := h.publishPushIntent(t, tc, recipient.UserID, index)

		firstEvent := recvGatewayEvent(t, firstStream)
		assertNotificationPushEvent(t, firstEvent, h.responseSignerPublicKey, tc.notificationType, messageID, recipient.UserID, index)
		secondEvent := recvGatewayEvent(t, secondStream)
		assertNotificationPushEvent(t, secondEvent, h.responseSignerPublicKey, tc.notificationType, messageID, recipient.UserID, index)
	}
	assertNoGatewayEvent(t, unrelatedStream, cancelUnrelated)

	messages, err := h.redis.XRange(context.Background(), notificationGatewayClientEventsStream, "-", "+").Result()
	require.NoError(t, err)
	require.Len(t, messages, len(cases))
	for index, message := range messages {
		require.Equal(t, recipient.UserID, message.Values["user_id"])
		require.Equal(t, cases[index].notificationType, message.Values["event_type"])
		require.NotContains(t, message.Values, "device_session_id")
	}
}

type notificationGatewayHarness struct {
	redis *redis.Client

	userServiceURL string

	gatewayGRPCAddr         string
	responseSignerPublicKey ed25519.PublicKey

	notificationProcess *harness.Process
	gatewayProcess      *harness.Process
	userServiceProcess  *harness.Process
}

type pushIntentCase struct {
	notificationType string
	producer         string
	payloadJSON      string
}

type ensureByEmailResponse struct {
	Outcome string `json:"outcome"`
	UserID  string `json:"user_id"`
}

func newNotificationGatewayHarness(t *testing.T) *notificationGatewayHarness {
	t.Helper()

	redisRuntime := harness.StartRedisContainer(t)
	redisClient := redis.NewClient(&redis.Options{
		Addr:            redisRuntime.Addr,
		Protocol:        2,
		DisableIdentity: true,
	})
	t.Cleanup(func() {
		require.NoError(t, redisClient.Close())
	})

	responseSignerPath, responseSignerPublicKey := harness.WriteResponseSignerPEM(t, t.Name())
	userServiceAddr := harness.FreeTCPAddress(t)
	notificationInternalAddr := harness.FreeTCPAddress(t)
	gatewayPublicAddr := harness.FreeTCPAddress(t)
	gatewayGRPCAddr := harness.FreeTCPAddress(t)

	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
	notificationBinary := harness.BuildBinary(t, "notification", "./notification/cmd/notification")
	gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")

	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

	notificationEnv := harness.StartNotificationServicePersistence(t, redisRuntime.Addr).Env
	notificationEnv["NOTIFICATION_LOG_LEVEL"] = "info"
	notificationEnv["NOTIFICATION_INTERNAL_HTTP_ADDR"] = notificationInternalAddr
	notificationEnv["NOTIFICATION_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
	notificationEnv["NOTIFICATION_USER_SERVICE_TIMEOUT"] = time.Second.String()
	notificationEnv["NOTIFICATION_INTENTS_READ_BLOCK_TIMEOUT"] = "100ms"
	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MIN"] = "100ms"
	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MAX"] = "100ms"
	notificationEnv["NOTIFICATION_GATEWAY_CLIENT_EVENTS_STREAM"] = notificationGatewayClientEventsStream
	notificationEnv["OTEL_TRACES_EXPORTER"] = "none"
	notificationEnv["OTEL_METRICS_EXPORTER"] = "none"
	notificationProcess := harness.StartProcess(t, "notification", notificationBinary, notificationEnv)
	harness.WaitForHTTPStatus(t, notificationProcess, "http://"+notificationInternalAddr+"/readyz", http.StatusOK)

	gatewayProcess := harness.StartProcess(t, "gateway", gatewayBinary, map[string]string{
		"GATEWAY_LOG_LEVEL":               "info",
		"GATEWAY_PUBLIC_HTTP_ADDR":        gatewayPublicAddr,
		"GATEWAY_AUTHENTICATED_GRPC_ADDR": gatewayGRPCAddr,
		"GATEWAY_REDIS_MASTER_ADDR":       redisRuntime.Addr,

		"GATEWAY_REDIS_PASSWORD":                         "integration",
		"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX":         "gateway:session:",
		"GATEWAY_SESSION_EVENTS_REDIS_STREAM":            "gateway:session_events",
		"GATEWAY_CLIENT_EVENTS_REDIS_STREAM":             notificationGatewayClientEventsStream,
		"GATEWAY_CLIENT_EVENTS_REDIS_READ_BLOCK_TIMEOUT": "100ms",
		"GATEWAY_REPLAY_REDIS_KEY_PREFIX":                "gateway:replay:",
		"GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH":   filepath.Clean(responseSignerPath),
		"OTEL_TRACES_EXPORTER":                           "none",
		"OTEL_METRICS_EXPORTER":                          "none",
	})
	harness.WaitForHTTPStatus(t, gatewayProcess, "http://"+gatewayPublicAddr+"/healthz", http.StatusOK)
	harness.WaitForTCP(t, gatewayProcess, gatewayGRPCAddr)

	return &notificationGatewayHarness{
		redis:                   redisClient,
		userServiceURL:          "http://" + userServiceAddr,
		gatewayGRPCAddr:         gatewayGRPCAddr,
		responseSignerPublicKey: responseSignerPublicKey,
		notificationProcess:     notificationProcess,
		gatewayProcess:          gatewayProcess,
		userServiceProcess:      userServiceProcess,
	}
}

func (h *notificationGatewayHarness) ensureUser(t *testing.T, email string, preferredLanguage string) ensureByEmailResponse {
	t.Helper()

	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/users/ensure-by-email", map[string]any{
		"email": email,
		"registration_context": map[string]string{
			"preferred_language": preferredLanguage,
			"time_zone":          "Europe/Kaliningrad",
		},
	})

	var body ensureByEmailResponse
	requireJSONStatus(t, response, http.StatusOK, &body)
	require.Equal(t, "created", body.Outcome)
	require.NotEmpty(t, body.UserID)
	return body
}

func (h *notificationGatewayHarness) dialGateway(t *testing.T) *grpc.ClientConn {
	t.Helper()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(
		ctx,
		h.gatewayGRPCAddr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(),
	)
	require.NoError(t, err)
	t.Cleanup(func() {
		require.NoError(t, conn.Close())
	})

	return conn
}

func (h *notificationGatewayHarness) seedGatewaySession(t *testing.T, deviceSessionID string, userID string, clientPrivateKey ed25519.PrivateKey) {
	t.Helper()

	record := gatewaySessionRecord{
		DeviceSessionID: deviceSessionID,
		UserID:          userID,
		ClientPublicKey: base64.StdEncoding.EncodeToString(clientPrivateKey.Public().(ed25519.PublicKey)),
		Status:          "active",
	}
	payload, err := json.Marshal(record)
	require.NoError(t, err)
	require.NoError(t, h.redis.Set(context.Background(), "gateway:session:"+deviceSessionID, payload, 0).Err())
}

func (h *notificationGatewayHarness) publishPushIntent(t *testing.T, tc pushIntentCase, recipientUserID string, index int) string {
	t.Helper()

	messageID, err := h.redis.XAdd(context.Background(), &redis.XAddArgs{
		Stream: notificationGatewayIntentsStream,
		Values: map[string]any{
			"notification_type":       tc.notificationType,
			"producer":                tc.producer,
			"audience_kind":           "user",
			"recipient_user_ids_json": `["` + recipientUserID + `"]`,
			"idempotency_key":         tc.notificationType + ":gateway:" + string(rune('a'+index)),
			"occurred_at_ms":          "1775121700000",
			"request_id":              pushRequestID(index),
			"trace_id":                pushTraceID(index),
			"payload_json":            tc.payloadJSON,
		},
	}).Result()
	require.NoError(t, err)

	return messageID
}

type gatewaySessionRecord struct {
	DeviceSessionID string `json:"device_session_id"`
	UserID          string `json:"user_id"`
	ClientPublicKey string `json:"client_public_key"`
	Status          string `json:"status"`
	RevokedAtMS     *int64 `json:"revoked_at_ms,omitempty"`
}

type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

func postJSONValue(t *testing.T, targetURL string, body any) httpResponse {
	t.Helper()

	payload, err := json.Marshal(body)
	require.NoError(t, err)

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	require.NoError(t, err)
	request.Header.Set("Content-Type", "application/json")
	return doRequest(t, request)
}

func requireJSONStatus(t *testing.T, response httpResponse, wantStatus int, target any) {
	t.Helper()

	require.Equal(t, wantStatus, response.StatusCode, "response body: %s", response.Body)
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), target))
}

func doRequest(t *testing.T, request *http.Request) httpResponse {
	t.Helper()

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			DisableKeepAlives: true,
		},
	}
	t.Cleanup(client.CloseIdleConnections)

	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	payload, err := io.ReadAll(response.Body)
	require.NoError(t, err)

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(payload),
		Header:     response.Header.Clone(),
	}
}

func decodeStrictJSONPayload(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()

	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}

	return nil
}

func waitForUserServiceReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		request, err := http.NewRequest(http.MethodGet, baseURL+"/api/v1/internal/users/user-missing/exists", nil)
		require.NoError(t, err)

		response, err := client.Do(request)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for userservice readiness: timeout\n%s", process.Logs())
}

func newClientPrivateKey(label string) ed25519.PrivateKey {
	seed := sha256.Sum256([]byte("galaxy-integration-notification-gateway-client-" + label))
	return ed25519.NewKeyFromSeed(seed[:])
}

func newSubscribeEventsRequest(deviceSessionID string, requestID string, clientPrivateKey ed25519.PrivateKey) *gatewayv1.SubscribeEventsRequest {
	payloadHash := contractsgatewayv1.ComputePayloadHash(nil)

	request := &gatewayv1.SubscribeEventsRequest{
		ProtocolVersion: contractsgatewayv1.ProtocolVersionV1,
		DeviceSessionId: deviceSessionID,
		MessageType:     contractsgatewayv1.SubscribeMessageType,
		TimestampMs:     time.Now().UnixMilli(),
		RequestId:       requestID,
		PayloadHash:     payloadHash,
		TraceId:         "trace-" + requestID,
	}
	request.Signature = contractsgatewayv1.SignRequest(clientPrivateKey, contractsgatewayv1.RequestSigningFields{
		ProtocolVersion: request.GetProtocolVersion(),
		DeviceSessionID: request.GetDeviceSessionId(),
		MessageType:     request.GetMessageType(),
		TimestampMS:     request.GetTimestampMs(),
		RequestID:       request.GetRequestId(),
		PayloadHash:     request.GetPayloadHash(),
	})

	return request
}

func recvGatewayEvent(t *testing.T, stream grpc.ServerStreamingClient[gatewayv1.GatewayEvent]) *gatewayv1.GatewayEvent {
	t.Helper()

	eventCh := make(chan *gatewayv1.GatewayEvent, 1)
	errCh := make(chan error, 1)
	go func() {
		event, err := stream.Recv()
		if err != nil {
			errCh <- err
			return
		}
		eventCh <- event
	}()

	select {
	case event := <-eventCh:
		return event
	case err := <-errCh:
		require.NoError(t, err)
	case <-time.After(5 * time.Second):
		require.FailNow(t, "timed out waiting for gateway event")
	}

	return nil
}

func assertBootstrapEvent(t *testing.T, event *gatewayv1.GatewayEvent, responseSignerPublicKey ed25519.PublicKey, wantRequestID string) {
	t.Helper()

	require.Equal(t, contractsgatewayv1.ServerTimeEventType, event.GetEventType())
	require.Equal(t, wantRequestID, event.GetEventId())
	require.Equal(t, wantRequestID, event.GetRequestId())
	require.NoError(t, contractsgatewayv1.VerifyPayloadHash(event.GetPayloadBytes(), event.GetPayloadHash()))
	require.NoError(t, contractsgatewayv1.VerifyEventSignature(responseSignerPublicKey, event.GetSignature(), contractsgatewayv1.EventSigningFields{
		EventType:   event.GetEventType(),
		EventID:     event.GetEventId(),
		TimestampMS: event.GetTimestampMs(),
		RequestID:   event.GetRequestId(),
		TraceID:     event.GetTraceId(),
		PayloadHash: event.GetPayloadHash(),
	}))
}

func assertNotificationPushEvent(
	t *testing.T,
	event *gatewayv1.GatewayEvent,
	responseSignerPublicKey ed25519.PublicKey,
	notificationType string,
	notificationID string,
	userID string,
	index int,
) {
	t.Helper()

	require.Equal(t, notificationType, event.GetEventType())
	require.Equal(t, notificationID+"/push:user:"+userID, event.GetEventId())
	require.Equal(t, pushRequestID(index), event.GetRequestId())
	require.Equal(t, pushTraceID(index), event.GetTraceId())
	require.NotEmpty(t, event.GetPayloadBytes())
	require.NoError(t, contractsgatewayv1.VerifyPayloadHash(event.GetPayloadBytes(), event.GetPayloadHash()))
	require.NoError(t, contractsgatewayv1.VerifyEventSignature(responseSignerPublicKey, event.GetSignature(), contractsgatewayv1.EventSigningFields{
		EventType:   event.GetEventType(),
		EventID:     event.GetEventId(),
		TimestampMS: event.GetTimestampMs(),
		RequestID:   event.GetRequestId(),
		TraceID:     event.GetTraceId(),
		PayloadHash: event.GetPayloadHash(),
	}))
}

func assertNoGatewayEvent(t *testing.T, stream grpc.ServerStreamingClient[gatewayv1.GatewayEvent], cancel context.CancelFunc) {
	t.Helper()

	eventCh := make(chan *gatewayv1.GatewayEvent, 1)
	errCh := make(chan error, 1)
	go func() {
		event, err := stream.Recv()
		if err != nil {
			errCh <- err
			return
		}
		eventCh <- event
	}()

	select {
	case event := <-eventCh:
		require.FailNowf(t, "unexpected gateway event delivered", "%+v", event)
	case <-time.After(200 * time.Millisecond):
		cancel()
	case err := <-errCh:
		require.FailNowf(t, "stream closed unexpectedly", "%v", err)
	}
}

func pushRequestID(index int) string {
	return "notification-request-" + string(rune('a'+index))
}

func pushTraceID(index int) string {
	return "notification-trace-" + string(rune('a'+index))
}
@@ -1,619 +0,0 @@
|
||||
package notificationmail_test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"galaxy/integration/internal/harness"
|
||||
|
||||
"github.com/redis/go-redis/v9"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
const (
|
||||
notificationMailDeliveriesPath = "/api/v1/internal/deliveries"
|
||||
notificationMailIntentsStream = "notification:intents"
|
||||
)
|
||||
|
||||
func TestNotificationMailPublishesEveryTemplateModeDeliveryToRealMailService(t *testing.T) {
	h := newNotificationMailHarness(t)

	recipient := h.ensureUser(t, "pilot@example.com", "fr-FR")

	cases := []mailIntentCase{
		{
			name:             "geo review recommended admin",
			notificationType: "geo.review_recommended",
			producer:         "geoprofile",
			audienceKind:     "admin_email",
			recipientEmail:   "geo-admin@example.com",
			routeID:          "email:email:geo-admin@example.com",
			payload: map[string]any{
				"user_id":                  "user-geo",
				"user_email":               "traveler@example.com",
				"observed_country":         "DE",
				"usual_connection_country": "PL",
				"review_reason":            "country_mismatch",
			},
		},
		{
			name:             "game turn ready user",
			notificationType: "game.turn.ready",
			producer:         "game_master",
			audienceKind:     "user",
			recipientEmail:   recipient.Email,
			payload: map[string]any{
				"game_id":     "game-123",
				"game_name":   "Nebula Clash",
				"turn_number": 54,
			},
		},
		{
			name:             "game finished user",
			notificationType: "game.finished",
			producer:         "game_master",
			audienceKind:     "user",
			recipientEmail:   recipient.Email,
			payload: map[string]any{
				"game_id":           "game-123",
				"game_name":         "Nebula Clash",
				"final_turn_number": 55,
			},
		},
		{
			name:             "game generation failed admin",
			notificationType: "game.generation_failed",
			producer:         "game_master",
			audienceKind:     "admin_email",
			recipientEmail:   "game-admin@example.com",
			routeID:          "email:email:game-admin@example.com",
			payload: map[string]any{
				"game_id":        "game-123",
				"game_name":      "Nebula Clash",
				"failure_reason": "engine_timeout",
			},
		},
		{
			name:             "lobby runtime paused admin",
			notificationType: "lobby.runtime_paused_after_start",
			producer:         "game_lobby",
			audienceKind:     "admin_email",
			recipientEmail:   "lobby-ops@example.com",
			routeID:          "email:email:lobby-ops@example.com",
			payload: map[string]any{
				"game_id":   "game-123",
				"game_name": "Nebula Clash",
			},
		},
		{
			name:             "lobby application submitted user",
			notificationType: "lobby.application.submitted",
			producer:         "game_lobby",
			audienceKind:     "user",
			recipientEmail:   recipient.Email,
			payload: map[string]any{
				"game_id":           "game-123",
				"game_name":         "Nebula Clash",
				"applicant_user_id": "applicant-1",
				"applicant_name":    "Nova Pilot",
			},
		},
		{
			name:             "lobby application submitted admin",
			notificationType: "lobby.application.submitted",
			producer:         "game_lobby",
			audienceKind:     "admin_email",
			recipientEmail:   "lobby-admin@example.com",
			routeID:          "email:email:lobby-admin@example.com",
			payload: map[string]any{
				"game_id":           "game-456",
				"game_name":         "Public Stars",
				"applicant_user_id": "applicant-2",
				"applicant_name":    "Public Pilot",
			},
		},
		{
			name:             "lobby membership approved user",
			notificationType: "lobby.membership.approved",
			producer:         "game_lobby",
			audienceKind:     "user",
			recipientEmail:   recipient.Email,
			payload: map[string]any{
				"game_id":   "game-123",
				"game_name": "Nebula Clash",
			},
		},
		{
			name:             "lobby membership rejected user",
			notificationType: "lobby.membership.rejected",
			producer:         "game_lobby",
			audienceKind:     "user",
			recipientEmail:   recipient.Email,
			payload: map[string]any{
				"game_id":   "game-123",
				"game_name": "Nebula Clash",
			},
		},
		{
			name:             "lobby invite created user",
			notificationType: "lobby.invite.created",
			producer:         "game_lobby",
			audienceKind:     "user",
			recipientEmail:   recipient.Email,
			payload: map[string]any{
				"game_id":         "game-123",
				"game_name":       "Nebula Clash",
				"inviter_user_id": "owner-1",
				"inviter_name":    "Owner Pilot",
			},
		},
		{
			name:             "lobby invite redeemed user",
			notificationType: "lobby.invite.redeemed",
			producer:         "game_lobby",
			audienceKind:     "user",
			recipientEmail:   recipient.Email,
			payload: map[string]any{
				"game_id":         "game-123",
				"game_name":       "Nebula Clash",
				"invitee_user_id": "invitee-1",
				"invitee_name":    "Nova Pilot",
			},
		},
		{
			name:             "lobby invite expired user",
			notificationType: "lobby.invite.expired",
			producer:         "game_lobby",
			audienceKind:     "user",
			recipientEmail:   recipient.Email,
			payload: map[string]any{
				"game_id":         "game-123",
				"game_name":       "Nebula Clash",
				"invitee_user_id": "invitee-1",
				"invitee_name":    "Nova Pilot",
			},
		},
	}

	for index, tc := range cases {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			messageID := h.publishMailIntent(t, tc, recipient.UserID, index)
			routeID := tc.routeID
			if routeID == "" {
				routeID = "email:user:" + recipient.UserID
			}

			idempotencyKey := "notification:" + messageID + "/" + routeID
			list := h.eventuallyListDeliveries(t, url.Values{
				"source":          []string{"notification"},
				"status":          []string{"sent"},
				"recipient":       []string{tc.recipientEmail},
				"template_id":     []string{tc.notificationType},
				"idempotency_key": []string{idempotencyKey},
			})
			require.Len(t, list.Items, 1)
			require.Equal(t, "notification", list.Items[0].Source)
			require.Equal(t, "sent", list.Items[0].Status)
			require.Equal(t, "template", list.Items[0].PayloadMode)
			require.Equal(t, tc.notificationType, list.Items[0].TemplateID)
			require.Equal(t, "en", list.Items[0].Locale)
			require.Equal(t, []string{tc.recipientEmail}, list.Items[0].To)

			detail := h.getDelivery(t, list.Items[0].DeliveryID)
			require.Equal(t, "notification", detail.Source)
			require.Equal(t, "template", detail.PayloadMode)
			require.Equal(t, tc.notificationType, detail.TemplateID)
			require.Equal(t, "en", detail.Locale)
			require.False(t, detail.LocaleFallbackUsed)
			require.Equal(t, idempotencyKey, detail.IdempotencyKey)
			require.Equal(t, []string{tc.recipientEmail}, detail.To)
			require.Empty(t, detail.Cc)
			require.Empty(t, detail.Bcc)
			require.Empty(t, detail.ReplyTo)
			require.Empty(t, detail.Attachments)
			assertTemplateVariables(t, tc.payload, detail.TemplateVariables)
		})
	}
}

type notificationMailHarness struct {
	redis *redis.Client

	userServiceURL string
	mailBaseURL    string

	notificationProcess *harness.Process
	mailProcess         *harness.Process
	userServiceProcess  *harness.Process
}

type mailIntentCase struct {
	name             string
	notificationType string
	producer         string
	audienceKind     string
	recipientEmail   string
	routeID          string
	payload          map[string]any
}

type ensureByEmailResponse struct {
	Outcome string `json:"outcome"`
	UserID  string `json:"user_id"`
	Email   string
}

type mailDeliveryListResponse struct {
	Items []mailDeliverySummary `json:"items"`
}

type mailDeliverySummary struct {
	DeliveryID         string   `json:"delivery_id"`
	Source             string   `json:"source"`
	PayloadMode        string   `json:"payload_mode"`
	TemplateID         string   `json:"template_id"`
	Locale             string   `json:"locale"`
	LocaleFallbackUsed bool     `json:"locale_fallback_used"`
	To                 []string `json:"to"`
	Cc                 []string `json:"cc"`
	Bcc                []string `json:"bcc"`
	ReplyTo            []string `json:"reply_to"`
	IdempotencyKey     string   `json:"idempotency_key"`
	Status             string   `json:"status"`
	AttemptCount       int      `json:"attempt_count"`
	LastAttemptStatus  string   `json:"last_attempt_status,omitempty"`
	ProviderSummary    string   `json:"provider_summary,omitempty"`
	CreatedAtMS        int64    `json:"created_at_ms"`
	UpdatedAtMS        int64    `json:"updated_at_ms"`
	SentAtMS           int64    `json:"sent_at_ms,omitempty"`
}

type mailDeliveryDetailResponse struct {
	DeliveryID         string         `json:"delivery_id"`
	Source             string         `json:"source"`
	PayloadMode        string         `json:"payload_mode"`
	TemplateID         string         `json:"template_id"`
	Locale             string         `json:"locale"`
	LocaleFallbackUsed bool           `json:"locale_fallback_used"`
	To                 []string       `json:"to"`
	Cc                 []string       `json:"cc"`
	Bcc                []string       `json:"bcc"`
	ReplyTo            []string       `json:"reply_to"`
	Subject            string         `json:"subject,omitempty"`
	TextBody           string         `json:"text_body,omitempty"`
	HTMLBody           string         `json:"html_body,omitempty"`
	Attachments        []any          `json:"attachments"`
	IdempotencyKey     string         `json:"idempotency_key"`
	Status             string         `json:"status"`
	AttemptCount       int            `json:"attempt_count"`
	LastAttemptStatus  string         `json:"last_attempt_status,omitempty"`
	ProviderSummary    string         `json:"provider_summary,omitempty"`
	TemplateVariables  map[string]any `json:"template_variables,omitempty"`
	CreatedAtMS        int64          `json:"created_at_ms"`
	UpdatedAtMS        int64          `json:"updated_at_ms"`
	SentAtMS           int64          `json:"sent_at_ms,omitempty"`
}

type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

func newNotificationMailHarness(t *testing.T) *notificationMailHarness {
	t.Helper()

	redisRuntime := harness.StartRedisContainer(t)
	redisClient := redis.NewClient(&redis.Options{
		Addr:            redisRuntime.Addr,
		Protocol:        2,
		DisableIdentity: true,
	})
	t.Cleanup(func() {
		require.NoError(t, redisClient.Close())
	})

	userServiceAddr := harness.FreeTCPAddress(t)
	mailInternalAddr := harness.FreeTCPAddress(t)
	notificationInternalAddr := harness.FreeTCPAddress(t)

	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
	mailBinary := harness.BuildBinary(t, "mail", "./mail/cmd/mail")
	notificationBinary := harness.BuildBinary(t, "notification", "./notification/cmd/notification")

	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

	mailEnv := harness.StartMailServicePersistence(t, redisRuntime.Addr).Env
	mailEnv["MAIL_LOG_LEVEL"] = "info"
	mailEnv["MAIL_INTERNAL_HTTP_ADDR"] = mailInternalAddr
	mailEnv["MAIL_TEMPLATE_DIR"] = mailTemplateDir(t)
	mailEnv["MAIL_SMTP_MODE"] = "stub"
	mailEnv["MAIL_STREAM_BLOCK_TIMEOUT"] = "100ms"
	mailEnv["MAIL_OPERATOR_REQUEST_TIMEOUT"] = time.Second.String()
	mailEnv["MAIL_SHUTDOWN_TIMEOUT"] = "2s"
	mailEnv["OTEL_TRACES_EXPORTER"] = "none"
	mailEnv["OTEL_METRICS_EXPORTER"] = "none"
	mailProcess := harness.StartProcess(t, "mail", mailBinary, mailEnv)
	waitForMailReady(t, mailProcess, "http://"+mailInternalAddr)

	notificationEnv := harness.StartNotificationServicePersistence(t, redisRuntime.Addr).Env
	notificationEnv["NOTIFICATION_LOG_LEVEL"] = "info"
	notificationEnv["NOTIFICATION_INTERNAL_HTTP_ADDR"] = notificationInternalAddr
	notificationEnv["NOTIFICATION_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
	notificationEnv["NOTIFICATION_USER_SERVICE_TIMEOUT"] = time.Second.String()
	notificationEnv["NOTIFICATION_INTENTS_READ_BLOCK_TIMEOUT"] = "100ms"
	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MIN"] = "100ms"
	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MAX"] = "100ms"
	notificationEnv["NOTIFICATION_ADMIN_EMAILS_GEO_REVIEW_RECOMMENDED"] = "geo-admin@example.com"
	notificationEnv["NOTIFICATION_ADMIN_EMAILS_GAME_GENERATION_FAILED"] = "game-admin@example.com"
	notificationEnv["NOTIFICATION_ADMIN_EMAILS_LOBBY_RUNTIME_PAUSED_AFTER_START"] = "lobby-ops@example.com"
	notificationEnv["NOTIFICATION_ADMIN_EMAILS_LOBBY_APPLICATION_SUBMITTED"] = "lobby-admin@example.com"
	notificationEnv["OTEL_TRACES_EXPORTER"] = "none"
	notificationEnv["OTEL_METRICS_EXPORTER"] = "none"
	notificationProcess := harness.StartProcess(t, "notification", notificationBinary, notificationEnv)
	harness.WaitForHTTPStatus(t, notificationProcess, "http://"+notificationInternalAddr+"/readyz", http.StatusOK)

	return &notificationMailHarness{
		redis:               redisClient,
		userServiceURL:      "http://" + userServiceAddr,
		mailBaseURL:         "http://" + mailInternalAddr,
		notificationProcess: notificationProcess,
		mailProcess:         mailProcess,
		userServiceProcess:  userServiceProcess,
	}
}

func (h *notificationMailHarness) ensureUser(t *testing.T, email string, preferredLanguage string) ensureByEmailResponse {
	t.Helper()

	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/users/ensure-by-email", map[string]any{
		"email": email,
		"registration_context": map[string]string{
			"preferred_language": preferredLanguage,
			"time_zone":          "Europe/Kaliningrad",
		},
	})

	var body ensureByEmailResponse
	requireJSONStatus(t, response, http.StatusOK, &body)
	require.Equal(t, "created", body.Outcome)
	require.NotEmpty(t, body.UserID)
	body.Email = email
	return body
}

func (h *notificationMailHarness) publishMailIntent(t *testing.T, tc mailIntentCase, recipientUserID string, index int) string {
	t.Helper()

	payload, err := json.Marshal(tc.payload)
	require.NoError(t, err)

	values := map[string]any{
		"notification_type": tc.notificationType,
		"producer":          tc.producer,
		"audience_kind":     tc.audienceKind,
		"idempotency_key":   fmt.Sprintf("%s:mail:%02d", tc.notificationType, index),
		"occurred_at_ms":    "1775121700000",
		"payload_json":      string(payload),
	}
	if tc.audienceKind == "user" {
		values["recipient_user_ids_json"] = `["` + recipientUserID + `"]`
	}

	messageID, err := h.redis.XAdd(context.Background(), &redis.XAddArgs{
		Stream: notificationMailIntentsStream,
		Values: values,
	}).Result()
	require.NoError(t, err)

	return messageID
}

func (h *notificationMailHarness) eventuallyListDeliveries(t *testing.T, query url.Values) mailDeliveryListResponse {
	t.Helper()

	var response mailDeliveryListResponse
	require.Eventually(t, func() bool {
		response = h.listDeliveries(t, query)
		return len(response.Items) > 0
	}, 10*time.Second, 50*time.Millisecond)

	return response
}

func (h *notificationMailHarness) listDeliveries(t *testing.T, query url.Values) mailDeliveryListResponse {
	t.Helper()

	target := h.mailBaseURL + notificationMailDeliveriesPath
	if encoded := query.Encode(); encoded != "" {
		target += "?" + encoded
	}

	request, err := http.NewRequest(http.MethodGet, target, nil)
	require.NoError(t, err)
	return doJSONRequest[mailDeliveryListResponse](t, request, http.StatusOK)
}

func (h *notificationMailHarness) getDelivery(t *testing.T, deliveryID string) mailDeliveryDetailResponse {
	t.Helper()

	request, err := http.NewRequest(http.MethodGet, h.mailBaseURL+notificationMailDeliveriesPath+"/"+url.PathEscape(deliveryID), nil)
	require.NoError(t, err)
	return doJSONRequest[mailDeliveryDetailResponse](t, request, http.StatusOK)
}

func waitForUserServiceReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		request, err := http.NewRequest(http.MethodGet, baseURL+"/api/v1/internal/users/user-missing/exists", nil)
		require.NoError(t, err)

		response, err := client.Do(request)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for userservice readiness: timeout\n%s", process.Logs())
}

func waitForMailReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		request, err := http.NewRequest(http.MethodGet, baseURL+notificationMailDeliveriesPath, nil)
		require.NoError(t, err)

		response, err := client.Do(request)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for mail readiness: timeout\n%s", process.Logs())
}

func doJSONRequest[T any](t *testing.T, request *http.Request, wantStatus int) T {
	t.Helper()

	response := doRequest(t, request)
	require.Equal(t, wantStatus, response.StatusCode, response.Body)

	var decoded T
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &decoded), response.Body)
	return decoded
}

func postJSONValue(t *testing.T, targetURL string, body any) httpResponse {
	t.Helper()

	payload, err := json.Marshal(body)
	require.NoError(t, err)

	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
	require.NoError(t, err)
	request.Header.Set("Content-Type", "application/json")
	return doRequest(t, request)
}

func requireJSONStatus(t *testing.T, response httpResponse, wantStatus int, target any) {
	t.Helper()

	require.Equal(t, wantStatus, response.StatusCode, "response body: %s", response.Body)
	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), target))
}

func doRequest(t *testing.T, request *http.Request) httpResponse {
	t.Helper()

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			DisableKeepAlives: true,
		},
	}
	t.Cleanup(client.CloseIdleConnections)

	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	payload, err := io.ReadAll(response.Body)
	require.NoError(t, err)

	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(payload),
		Header:     response.Header.Clone(),
	}
}

func decodeStrictJSONPayload(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()

	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}

	return nil
}

func assertTemplateVariables(t *testing.T, want map[string]any, got map[string]any) {
	t.Helper()

	require.NotEmpty(t, got)
	for key, wantValue := range want {
		gotValue, ok := got[key]
		require.Truef(t, ok, "template variable %q is missing", key)
		switch typedWant := wantValue.(type) {
		case string:
			require.Equal(t, typedWant, gotValue)
		case int:
			require.Equal(t, float64(typedWant), gotValue)
		default:
			require.Equal(t, typedWant, gotValue)
		}
	}
}

func mailTemplateDir(t *testing.T) string {
	t.Helper()

	return filepath.Join(repositoryRoot(t), "mail", "templates")
}

func repositoryRoot(t *testing.T) string {
	t.Helper()

	_, file, _, ok := runtime.Caller(0)
	if !ok {
		t.Fatal("resolve repository root: runtime caller is unavailable")
	}

	return filepath.Clean(filepath.Join(filepath.Dir(file), "..", ".."))
}

@@ -1,435 +0,0 @@
package notificationuser_test

import (
	"bytes"
	"context"
	"database/sql"
	"encoding/base64"
	"encoding/json"
	"errors"
	"io"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/internal/harness"

	_ "github.com/jackc/pgx/v5/stdlib"
	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/require"
)

const notificationUserIntentsStream = "notification:intents"

func TestNotificationUserEnrichmentPersistsResolvedRecipient(t *testing.T) {
	h := newNotificationUserHarness(t)

	recipient := h.ensureUser(t, "pilot@example.com", "fr-FR")
	messageID := h.publishUserIntent(t, recipient.UserID, "game.turn.ready", "game_master", "enrichment-success", `{"game_id":"game-123","game_name":"Nebula Clash","turn_number":54}`)

	route := h.waitForRoute(t, messageID, "email:user:"+recipient.UserID)
	require.Equal(t, messageID, route.NotificationID)
	require.Equal(t, "email:user:"+recipient.UserID, route.RouteID)
	require.Equal(t, "email", route.Channel)
	require.Equal(t, "user:"+recipient.UserID, route.RecipientRef)
	require.Equal(t, "pilot@example.com", route.ResolvedEmail)
	require.Equal(t, "en", route.ResolvedLocale)

	offset := h.waitForStreamOffset(t)
	require.Equal(t, messageID, offset.LastProcessedEntryID)
}

func TestNotificationUserMissingRecipientIsMalformedAndAdvancesOffset(t *testing.T) {
	h := newNotificationUserHarness(t)

	messageID := h.publishUserIntent(t, "user-missing", "game.turn.ready", "game_master", "missing-user", `{"game_id":"game-123","game_name":"Nebula Clash","turn_number":54}`)

	malformed := h.waitForMalformedIntent(t, messageID)
	require.Equal(t, messageID, malformed.StreamEntryID)
	require.Equal(t, "game.turn.ready", malformed.NotificationType)
	require.Equal(t, "game_master", malformed.Producer)
	require.Equal(t, "recipient_not_found", malformed.FailureCode)

	offset := h.waitForStreamOffset(t)
	require.Equal(t, messageID, offset.LastProcessedEntryID)
}

func TestNotificationUserTemporaryUnavailabilityDoesNotAdvanceOffset(t *testing.T) {
	h := newNotificationUserHarness(t)

	recipient := h.ensureUser(t, "temporary@example.com", "en")
	h.notificationProcess.AllowUnexpectedExit()
	h.userServiceProcess.Stop(t)

	messageID := h.publishUserIntent(t, recipient.UserID, "game.turn.ready", "game_master", "temporary-user-service", `{"game_id":"game-123","game_name":"Nebula Clash","turn_number":54}`)

	require.Never(t, func() bool {
		offset, ok := h.loadStreamOffset(t)
		return ok && offset.LastProcessedEntryID == messageID
	}, time.Second, 50*time.Millisecond)

	require.False(t, h.malformedIntentExists(t, messageID))
	require.False(t, h.routeExists(t, messageID, "email:user:"+recipient.UserID))
}

type notificationUserHarness struct {
	redis *redis.Client
	pg    *sql.DB

	userServiceURL string

	notificationProcess *harness.Process
	userServiceProcess  *harness.Process
}

type ensureByEmailResponse struct {
	Outcome string `json:"outcome"`
	UserID  string `json:"user_id"`
}

type notificationRouteRecord struct {
	NotificationID string `json:"notification_id"`
	RouteID        string `json:"route_id"`
	Channel        string `json:"channel"`
	RecipientRef   string `json:"recipient_ref"`
	Status         string `json:"status"`
	ResolvedEmail  string `json:"resolved_email,omitempty"`
	ResolvedLocale string `json:"resolved_locale,omitempty"`
}

type malformedIntentRecord struct {
	StreamEntryID    string         `json:"stream_entry_id"`
	NotificationType string         `json:"notification_type,omitempty"`
	Producer         string         `json:"producer,omitempty"`
	IdempotencyKey   string         `json:"idempotency_key,omitempty"`
	FailureCode      string         `json:"failure_code"`
	FailureMessage   string         `json:"failure_message"`
	RawFields        map[string]any `json:"raw_fields_json"`
	RecordedAtMS     int64          `json:"recorded_at_ms"`
}

type streamOffsetRecord struct {
	Stream               string `json:"stream"`
	LastProcessedEntryID string `json:"last_processed_entry_id"`
	UpdatedAtMS          int64  `json:"updated_at_ms"`
}

type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

func newNotificationUserHarness(t *testing.T) *notificationUserHarness {
	t.Helper()

	redisRuntime := harness.StartRedisContainer(t)
	redisClient := redis.NewClient(&redis.Options{
		Addr:            redisRuntime.Addr,
		Protocol:        2,
		DisableIdentity: true,
	})
	t.Cleanup(func() {
		require.NoError(t, redisClient.Close())
	})

	userServiceAddr := harness.FreeTCPAddress(t)
	notificationInternalAddr := harness.FreeTCPAddress(t)

	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
	notificationBinary := harness.BuildBinary(t, "notification", "./notification/cmd/notification")

	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

	notificationPersistence := harness.StartNotificationServicePersistence(t, redisRuntime.Addr)
	notificationEnv := notificationPersistence.Env
	notificationPG, err := sql.Open("pgx", notificationPersistence.Postgres.DSNForSchema("notification", "notificationservice"))
	require.NoError(t, err)
	t.Cleanup(func() { _ = notificationPG.Close() })
	notificationEnv["NOTIFICATION_LOG_LEVEL"] = "info"
	notificationEnv["NOTIFICATION_INTERNAL_HTTP_ADDR"] = notificationInternalAddr
	notificationEnv["NOTIFICATION_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
	notificationEnv["NOTIFICATION_USER_SERVICE_TIMEOUT"] = "250ms"
	notificationEnv["NOTIFICATION_INTENTS_READ_BLOCK_TIMEOUT"] = "100ms"
	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MIN"] = "100ms"
	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MAX"] = "100ms"
	notificationEnv["OTEL_TRACES_EXPORTER"] = "none"
	notificationEnv["OTEL_METRICS_EXPORTER"] = "none"
	notificationProcess := harness.StartProcess(t, "notification", notificationBinary, notificationEnv)
	harness.WaitForHTTPStatus(t, notificationProcess, "http://"+notificationInternalAddr+"/readyz", http.StatusOK)

	return &notificationUserHarness{
		redis:               redisClient,
		pg:                  notificationPG,
		userServiceURL:      "http://" + userServiceAddr,
		notificationProcess: notificationProcess,
		userServiceProcess:  userServiceProcess,
	}
}

func (h *notificationUserHarness) ensureUser(t *testing.T, email string, preferredLanguage string) ensureByEmailResponse {
	t.Helper()

	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/users/ensure-by-email", map[string]any{
		"email": email,
		"registration_context": map[string]string{
			"preferred_language": preferredLanguage,
			"time_zone":          "Europe/Kaliningrad",
		},
	})

	var body ensureByEmailResponse
	requireJSONStatus(t, response, http.StatusOK, &body)
	require.Equal(t, "created", body.Outcome)
	require.NotEmpty(t, body.UserID)
	return body
}

func (h *notificationUserHarness) publishUserIntent(t *testing.T, recipientUserID string, notificationType string, producer string, idempotencyKey string, payloadJSON string) string {
	t.Helper()

	messageID, err := h.redis.XAdd(context.Background(), &redis.XAddArgs{
		Stream: notificationUserIntentsStream,
		Values: map[string]any{
			"notification_type":       notificationType,
			"producer":                producer,
			"audience_kind":           "user",
			"recipient_user_ids_json": `["` + recipientUserID + `"]`,
			"idempotency_key":         idempotencyKey,
			"occurred_at_ms":          "1775121700000",
			"payload_json":            payloadJSON,
		},
	}).Result()
	require.NoError(t, err)

	return messageID
}

func (h *notificationUserHarness) waitForRoute(t *testing.T, notificationID string, routeID string) notificationRouteRecord {
	t.Helper()

	var route notificationRouteRecord
	require.Eventually(t, func() bool {
		row := h.pg.QueryRowContext(context.Background(),
			`SELECT notification_id, route_id, channel, recipient_ref, status, resolved_email, resolved_locale
			 FROM routes WHERE notification_id = $1 AND route_id = $2`,
			notificationID, routeID,
		)
		if err := row.Scan(
			&route.NotificationID,
			&route.RouteID,
			&route.Channel,
			&route.RecipientRef,
			&route.Status,
			&route.ResolvedEmail,
			&route.ResolvedLocale,
		); err != nil {
			if errors.Is(err, sql.ErrNoRows) {
				return false
			}
			require.NoError(t, err)
		}
		return true
	}, 10*time.Second, 50*time.Millisecond)

	return route
}

func (h *notificationUserHarness) waitForMalformedIntent(t *testing.T, streamEntryID string) malformedIntentRecord {
	t.Helper()

	var record malformedIntentRecord
	require.Eventually(t, func() bool {
		row := h.pg.QueryRowContext(context.Background(),
			`SELECT stream_entry_id, notification_type, producer, idempotency_key,
			        failure_code, failure_message, recorded_at
			 FROM malformed_intents WHERE stream_entry_id = $1`,
			streamEntryID,
		)
		var recordedAt time.Time
		if err := row.Scan(
			&record.StreamEntryID,
			&record.NotificationType,
			&record.Producer,
			&record.IdempotencyKey,
			&record.FailureCode,
			&record.FailureMessage,
			&recordedAt,
		); err != nil {
			if errors.Is(err, sql.ErrNoRows) {
				return false
			}
			require.NoError(t, err)
		}
		record.RecordedAtMS = recordedAt.UTC().UnixMilli()
		return true
	}, 10*time.Second, 50*time.Millisecond)

	return record
}

func (h *notificationUserHarness) waitForStreamOffset(t *testing.T) streamOffsetRecord {
	t.Helper()

	var offset streamOffsetRecord
	require.Eventually(t, func() bool {
		var ok bool
		offset, ok = h.loadStreamOffset(t)
		return ok
	}, 10*time.Second, 50*time.Millisecond)

	return offset
}

func (h *notificationUserHarness) loadStreamOffset(t *testing.T) (streamOffsetRecord, bool) {
	t.Helper()

	payload, err := h.redis.Get(context.Background(), notificationStreamOffsetKey()).Bytes()
	if errors.Is(err, redis.Nil) {
		return streamOffsetRecord{}, false
	}
	require.NoError(t, err)

	var offset streamOffsetRecord
	require.NoError(t, decodeStrictJSONPayload(payload, &offset))
	return offset, true
}

func waitForUserServiceReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()

	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		request, err := http.NewRequest(http.MethodGet, baseURL+"/api/v1/internal/users/user-missing/exists", nil)
		require.NoError(t, err)

		response, err := client.Do(request)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}

		time.Sleep(25 * time.Millisecond)
	}

	t.Fatalf("wait for userservice readiness: timeout\n%s", process.Logs())
}

func postJSONValue(t *testing.T, targetURL string, body any) httpResponse {
|
||||
t.Helper()
|
||||
|
||||
payload, err := json.Marshal(body)
|
||||
require.NoError(t, err)
|
||||
|
||||
request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
|
||||
require.NoError(t, err)
|
||||
request.Header.Set("Content-Type", "application/json")
|
||||
return doRequest(t, request)
|
||||
}
|
||||
|
||||
func requireJSONStatus(t *testing.T, response httpResponse, wantStatus int, target any) {
|
||||
t.Helper()
|
||||
|
||||
require.Equal(t, wantStatus, response.StatusCode, "response body: %s", response.Body)
|
||||
require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), target))
|
||||
}
|
||||
|
||||
func doRequest(t *testing.T, request *http.Request) httpResponse {
|
||||
t.Helper()
|
||||
|
||||
client := &http.Client{
|
||||
Timeout: 5 * time.Second,
|
||||
Transport: &http.Transport{
|
||||
DisableKeepAlives: true,
|
||||
},
|
||||
}
|
||||
t.Cleanup(client.CloseIdleConnections)
|
||||
|
||||
response, err := client.Do(request)
|
||||
require.NoError(t, err)
|
||||
defer response.Body.Close()
|
||||
|
||||
payload, err := io.ReadAll(response.Body)
|
||||
require.NoError(t, err)
|
||||
|
||||
return httpResponse{
|
||||
StatusCode: response.StatusCode,
|
||||
Body: string(payload),
|
||||
Header: response.Header.Clone(),
|
||||
}
|
||||
}
|
||||
|
||||
func decodeStrictJSONPayload(payload []byte, target any) error {
|
||||
decoder := json.NewDecoder(bytes.NewReader(payload))
|
||||
decoder.DisallowUnknownFields()
|
||||
|
||||
if err := decoder.Decode(target); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := decoder.Decode(&struct{}{}); err != io.EOF {
|
||||
if err == nil {
|
||||
return errors.New("unexpected trailing JSON input")
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func decodeJSONPayload(payload []byte, target any) error {
|
||||
decoder := json.NewDecoder(bytes.NewReader(payload))
|
||||
|
||||
if err := decoder.Decode(target); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := decoder.Decode(&struct{}{}); err != io.EOF {
|
||||
if err == nil {
|
||||
return errors.New("unexpected trailing JSON input")
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (h *notificationUserHarness) routeExists(t *testing.T, notificationID string, routeID string) bool {
|
||||
t.Helper()
|
||||
var exists bool
|
||||
err := h.pg.QueryRowContext(context.Background(),
|
||||
`SELECT EXISTS(SELECT 1 FROM routes WHERE notification_id = $1 AND route_id = $2)`,
|
||||
notificationID, routeID,
|
||||
).Scan(&exists)
|
||||
require.NoError(t, err)
|
||||
return exists
|
||||
}
|
||||
|
||||
func (h *notificationUserHarness) malformedIntentExists(t *testing.T, streamEntryID string) bool {
|
||||
t.Helper()
|
||||
var exists bool
|
||||
err := h.pg.QueryRowContext(context.Background(),
|
||||
`SELECT EXISTS(SELECT 1 FROM malformed_intents WHERE stream_entry_id = $1)`,
|
||||
streamEntryID,
|
||||
).Scan(&exists)
|
||||
require.NoError(t, err)
|
||||
return exists
|
||||
}
|
||||
|
||||
func notificationStreamOffsetKey() string {
|
||||
return "notification:stream_offsets:" + encodeKeyComponent(notificationUserIntentsStream)
|
||||
}
|
||||
|
||||
func encodeKeyComponent(value string) string {
|
||||
return base64.RawURLEncoding.EncodeToString([]byte(value))
|
||||
}
|
||||
@@ -1,602 +0,0 @@
// Package rtmanagernotification_test exercises the Runtime Manager →
// Notification Service boundary against real RTM + real Notification +
// real Mail Service + real User Service running on testcontainers
// PostgreSQL and Redis, with a real Docker daemon for RTM's readiness
// pings.
//
// The boundary contract under test is: when a start job points at an
// unresolvable image, RTM publishes one `runtime.image_pull_failed`
// admin-only notification intent on `notification:intents`; the
// Notification Service consumes the intent, resolves the admin email
// recipient list from configuration, and hands the delivery to Mail
// Service in template mode. The suite asserts the wire shape on
// `notification:intents` and the resulting Mail delivery record.
//
// Game Master is not booted: RTM emits the intent itself; Notification
// resolves the audience from `NOTIFICATION_ADMIN_EMAILS_*`; the
// scenario needs no user-targeted resolution.
package rtmanagernotification_test

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"path/filepath"
	"runtime"
	"strconv"
	"strings"
	"sync/atomic"
	"testing"
	"time"

	"galaxy/integration/internal/harness"

	"github.com/redis/go-redis/v9"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

const (
	intentsStreamPrefix      = "notification:intents"
	startJobsStreamPrefix    = "runtime:start_jobs"
	stopJobsStreamPrefix     = "runtime:stop_jobs"
	jobResultsStreamPrefix   = "runtime:job_results"
	healthEventsStreamPrefix = "runtime:health_events"
	mailDeliveriesPath       = "/api/v1/internal/deliveries"

	notificationTypeImagePull   = "runtime.image_pull_failed"
	notificationTypeStartFailed = "runtime.container_start_failed"
	notificationTypeConfigInval = "runtime.start_config_invalid"

	expectedAdminEmailRecipient = "rtm-admin@example.com"
	expectedRTMProducer         = "runtime_manager"
	missingImageRef             = "galaxy/integration-missing:0.0.0"
)
var suiteSeq atomic.Int64

// TestRTMImagePullFailureFlowsThroughNotificationToMail drives Runtime
// Manager with a start envelope pointing at an unresolvable image
// reference, then asserts:
//
//  1. RTM publishes one `runtime.image_pull_failed` intent on
//     `notification:intents` with the frozen admin payload.
//  2. The Notification Service consumes it and fans out the matching
//     mail delivery to the configured admin recipient.
//  3. Mail Service records the delivery with the right template id,
//     idempotency key, and template variables.
//
// The path covers the full producer → orchestrator → transport
// pipeline that `TESTING.md §7` requests as the
// `Runtime Manager ↔ Notification` boundary suite.
func TestRTMImagePullFailureFlowsThroughNotificationToMail(t *testing.T) {
	h := newRTMNotificationHarness(t)

	gameID := uniqueGameID(t)

	h.publishStartJob(t, gameID, missingImageRef)

	// Step 1 — RTM publishes the admin notification intent.
	intent := h.waitForIntent(t,
		notificationTypeImagePull,
		gameID,
		30*time.Second,
	)
	assert.Equal(t, expectedRTMProducer, intent.Producer)
	assert.Equal(t, "admin_email", intent.AudienceKind)
	assert.Equal(t, gameID, intent.PayloadGameID)
	assert.Equal(t, missingImageRef, intent.PayloadImageRef)
	assert.Equal(t, "image_pull_failed", intent.PayloadErrorCode)
	assert.NotEmpty(t, intent.PayloadErrorMessage,
		"intent payload must carry operator-readable detail")
	assert.NotZero(t, intent.PayloadAttemptedAtMS)

	// Step 2 — Notification routes to Mail; Mail sends the delivery.
	idempotencyKey := "notification:" + intent.RedisEntryID +
		"/email:email:" + expectedAdminEmailRecipient

	delivery := h.eventuallyDelivery(t, url.Values{
		"source":          []string{"notification"},
		"status":          []string{"sent"},
		"recipient":       []string{expectedAdminEmailRecipient},
		"template_id":     []string{notificationTypeImagePull},
		"idempotency_key": []string{idempotencyKey},
	})
	assert.Equal(t, "template", delivery.PayloadMode)
	assert.Equal(t, notificationTypeImagePull, delivery.TemplateID)
	assert.Equal(t, []string{expectedAdminEmailRecipient}, delivery.To)

	detail := h.getDelivery(t, delivery.DeliveryID)
	assert.Equal(t, "notification", detail.Source)
	assert.Equal(t, "template", detail.PayloadMode)
	assert.Equal(t, notificationTypeImagePull, detail.TemplateID)
	assert.Equal(t, idempotencyKey, detail.IdempotencyKey)
	assert.Equal(t, []string{expectedAdminEmailRecipient}, detail.To)

	require.NotNil(t, detail.TemplateVariables,
		"mail delivery must record template variables for admin triage")
	assert.Equal(t, gameID, detail.TemplateVariables["game_id"])
	assert.Equal(t, missingImageRef, detail.TemplateVariables["image_ref"])
	assert.Equal(t, "image_pull_failed", detail.TemplateVariables["error_code"])
}
// rtmNotificationHarness owns the per-test infrastructure: shared
// Redis, four real binaries (RTM, Notification, Mail, User), and the
// per-test Docker network RTM's `/readyz` insists on. One harness per
// test keeps each scenario fully isolated.
type rtmNotificationHarness struct {
	redis *redis.Client

	rtmInternalURL string
	mailBaseURL    string

	intentsStream    string
	startJobsStream  string
	stopJobsStream   string
	jobResultsStream string
	healthEvents     string

	rtmProcess          *harness.Process
	notificationProcess *harness.Process
	mailProcess         *harness.Process
	userServiceProcess  *harness.Process
}

func newRTMNotificationHarness(t *testing.T) *rtmNotificationHarness {
	t.Helper()

	// `/readyz` of RTM pings the Docker daemon; skip the suite if no
	// Docker socket is reachable.
	harness.RequireDockerDaemon(t)

	redisRuntime := harness.StartRedisContainer(t)
	redisClient := redis.NewClient(&redis.Options{
		Addr:            redisRuntime.Addr,
		Protocol:        2,
		DisableIdentity: true,
	})
	t.Cleanup(func() {
		require.NoError(t, redisClient.Close())
	})

	dockerNetwork := harness.EnsureDockerNetwork(t)

	userServiceAddr := harness.FreeTCPAddress(t)
	mailInternalAddr := harness.FreeTCPAddress(t)
	notificationInternalAddr := harness.FreeTCPAddress(t)
	rtmInternalAddr := harness.FreeTCPAddress(t)

	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
	mailBinary := harness.BuildBinary(t, "mail", "./mail/cmd/mail")
	notificationBinary := harness.BuildBinary(t, "notification", "./notification/cmd/notification")
	rtmBinary := harness.BuildBinary(t, "rtmanager", "./rtmanager/cmd/rtmanager")

	// User Service: needed by Notification's port even though every
	// intent in this suite is admin-only.
	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

	// Per-test stream prefixes.
	suffix := strconv.FormatInt(suiteSeq.Add(1), 10)
	intentsStream := intentsStreamPrefix + ":" + suffix
	startJobsStream := startJobsStreamPrefix + ":" + suffix
	stopJobsStream := stopJobsStreamPrefix + ":" + suffix
	jobResultsStream := jobResultsStreamPrefix + ":" + suffix
	healthEvents := healthEventsStreamPrefix + ":" + suffix

	// Mail Service.
	mailEnv := harness.StartMailServicePersistence(t, redisRuntime.Addr).Env
	mailEnv["MAIL_LOG_LEVEL"] = "info"
	mailEnv["MAIL_INTERNAL_HTTP_ADDR"] = mailInternalAddr
	mailEnv["MAIL_TEMPLATE_DIR"] = mailTemplateDir(t)
	mailEnv["MAIL_SMTP_MODE"] = "stub"
	mailEnv["MAIL_STREAM_BLOCK_TIMEOUT"] = "100ms"
	mailEnv["MAIL_OPERATOR_REQUEST_TIMEOUT"] = time.Second.String()
	mailEnv["MAIL_SHUTDOWN_TIMEOUT"] = "2s"
	mailEnv["OTEL_TRACES_EXPORTER"] = "none"
	mailEnv["OTEL_METRICS_EXPORTER"] = "none"
	mailProcess := harness.StartProcess(t, "mail", mailBinary, mailEnv)
	waitForMailReady(t, mailProcess, "http://"+mailInternalAddr)

	// Notification Service. Admin-email envs route every runtime.*
	// intent to a shared rtm-admin recipient.
	notificationEnv := harness.StartNotificationServicePersistence(t, redisRuntime.Addr).Env
	notificationEnv["NOTIFICATION_LOG_LEVEL"] = "info"
	notificationEnv["NOTIFICATION_INTERNAL_HTTP_ADDR"] = notificationInternalAddr
	notificationEnv["NOTIFICATION_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
	notificationEnv["NOTIFICATION_USER_SERVICE_TIMEOUT"] = time.Second.String()
	notificationEnv["NOTIFICATION_INTENTS_READ_BLOCK_TIMEOUT"] = "100ms"
	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MIN"] = "100ms"
	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MAX"] = "100ms"
	notificationEnv["NOTIFICATION_INTENTS_STREAM"] = intentsStream
	notificationEnv["NOTIFICATION_ADMIN_EMAILS_RUNTIME_IMAGE_PULL_FAILED"] = expectedAdminEmailRecipient
	notificationEnv["NOTIFICATION_ADMIN_EMAILS_RUNTIME_CONTAINER_START_FAILED"] = expectedAdminEmailRecipient
	notificationEnv["NOTIFICATION_ADMIN_EMAILS_RUNTIME_START_CONFIG_INVALID"] = expectedAdminEmailRecipient
	notificationEnv["OTEL_TRACES_EXPORTER"] = "none"
	notificationEnv["OTEL_METRICS_EXPORTER"] = "none"
	notificationProcess := harness.StartProcess(t, "notification", notificationBinary, notificationEnv)
	harness.WaitForHTTPStatus(t, notificationProcess,
		"http://"+notificationInternalAddr+"/readyz", http.StatusOK)
	// Runtime Manager. The Lobby base URL points at an unroutable
	// local port with a short timeout, so RTM's start-service
	// ancillary GetGame call fails fast even though no Lobby is
	// running. The start service treats that lookup as best-effort
	// and never aborts when the Lobby is unreachable.
	rtmEnv := harness.StartRTManagerServicePersistence(t, redisRuntime.Addr).Env
	rtmEnv["RTMANAGER_LOG_LEVEL"] = "info"
	rtmEnv["RTMANAGER_INTERNAL_HTTP_ADDR"] = rtmInternalAddr
	rtmEnv["RTMANAGER_LOBBY_INTERNAL_BASE_URL"] = "http://127.0.0.1:1"
	rtmEnv["RTMANAGER_LOBBY_INTERNAL_TIMEOUT"] = "200ms"
	rtmEnv["RTMANAGER_DOCKER_HOST"] = resolveDockerHost()
	rtmEnv["RTMANAGER_DOCKER_NETWORK"] = dockerNetwork
	rtmEnv["RTMANAGER_GAME_STATE_ROOT"] = t.TempDir()
	rtmEnv["RTMANAGER_REDIS_START_JOBS_STREAM"] = startJobsStream
	rtmEnv["RTMANAGER_REDIS_STOP_JOBS_STREAM"] = stopJobsStream
	rtmEnv["RTMANAGER_REDIS_JOB_RESULTS_STREAM"] = jobResultsStream
	rtmEnv["RTMANAGER_REDIS_HEALTH_EVENTS_STREAM"] = healthEvents
	rtmEnv["RTMANAGER_NOTIFICATION_INTENTS_STREAM"] = intentsStream
	rtmEnv["RTMANAGER_STREAM_BLOCK_TIMEOUT"] = "200ms"
	rtmEnv["RTMANAGER_RECONCILE_INTERVAL"] = "5s"
	rtmEnv["RTMANAGER_CLEANUP_INTERVAL"] = "5s"
	rtmEnv["RTMANAGER_INSPECT_INTERVAL"] = "5s"
	rtmEnv["RTMANAGER_PROBE_INTERVAL"] = "5s"
	rtmEnv["RTMANAGER_PROBE_TIMEOUT"] = "1s"
	rtmEnv["RTMANAGER_PROBE_FAILURES_THRESHOLD"] = "3"
	rtmEnv["RTMANAGER_GAME_LEASE_TTL_SECONDS"] = "30"
	rtmEnv["RTMANAGER_IMAGE_PULL_POLICY"] = "if_missing"
	rtmEnv["OTEL_TRACES_EXPORTER"] = "none"
	rtmEnv["OTEL_METRICS_EXPORTER"] = "none"
	rtmProcess := harness.StartProcess(t, "rtmanager", rtmBinary, rtmEnv)
	harness.WaitForHTTPStatus(t, rtmProcess,
		"http://"+rtmInternalAddr+"/readyz", http.StatusOK)

	return &rtmNotificationHarness{
		redis:               redisClient,
		rtmInternalURL:      "http://" + rtmInternalAddr,
		mailBaseURL:         "http://" + mailInternalAddr,
		intentsStream:       intentsStream,
		startJobsStream:     startJobsStream,
		stopJobsStream:      stopJobsStream,
		jobResultsStream:    jobResultsStream,
		healthEvents:        healthEvents,
		rtmProcess:          rtmProcess,
		notificationProcess: notificationProcess,
		mailProcess:         mailProcess,
		userServiceProcess:  userServiceProcess,
	}
}
func (h *rtmNotificationHarness) publishStartJob(t *testing.T, gameID, imageRef string) {
	t.Helper()
	_, err := h.redis.XAdd(context.Background(), &redis.XAddArgs{
		Stream: h.startJobsStream,
		Values: map[string]any{
			"game_id":         gameID,
			"image_ref":       imageRef,
			"requested_at_ms": strconv.FormatInt(time.Now().UnixMilli(), 10),
		},
	}).Result()
	require.NoError(t, err)
}

// observedIntent stores the decoded fields of one notification intent
// entry that the suite cares about.
type observedIntent struct {
	RedisEntryID         string
	NotificationType     string
	Producer             string
	AudienceKind         string
	PayloadGameID        string
	PayloadImageRef      string
	PayloadErrorCode     string
	PayloadErrorMessage  string
	PayloadAttemptedAtMS int64
}

func (h *rtmNotificationHarness) waitForIntent(
	t *testing.T,
	notificationType, gameID string,
	timeout time.Duration,
) observedIntent {
	t.Helper()

	deadline := time.Now().Add(timeout)
	for {
		entries, err := h.redis.XRange(context.Background(), h.intentsStream, "-", "+").Result()
		require.NoError(t, err)
		for _, entry := range entries {
			intent, ok := decodeIntent(entry)
			if !ok {
				continue
			}
			if intent.NotificationType != notificationType {
				continue
			}
			if intent.PayloadGameID != gameID {
				continue
			}
			return intent
		}
		if time.Now().After(deadline) {
			t.Fatalf("intent %s for game %s not observed on stream %s within %s\n%s",
				notificationType, gameID, h.intentsStream, timeout, h.rtmProcess.Logs())
		}
		time.Sleep(50 * time.Millisecond)
	}
}

func decodeIntent(entry redis.XMessage) (observedIntent, bool) {
	notificationType, _ := entry.Values["notification_type"].(string)
	producer, _ := entry.Values["producer"].(string)
	audienceKind, _ := entry.Values["audience_kind"].(string)
	payloadJSON, _ := entry.Values["payload_json"].(string)

	if notificationType == "" {
		return observedIntent{}, false
	}

	out := observedIntent{
		RedisEntryID:     entry.ID,
		NotificationType: notificationType,
		Producer:         producer,
		AudienceKind:     audienceKind,
	}

	if payloadJSON == "" {
		return out, true
	}
	var payload struct {
		GameID        string `json:"game_id"`
		ImageRef      string `json:"image_ref"`
		ErrorCode     string `json:"error_code"`
		ErrorMessage  string `json:"error_message"`
		AttemptedAtMS int64  `json:"attempted_at_ms"`
	}
	if err := json.Unmarshal([]byte(payloadJSON), &payload); err == nil {
		out.PayloadGameID = payload.GameID
		out.PayloadImageRef = payload.ImageRef
		out.PayloadErrorCode = payload.ErrorCode
		out.PayloadErrorMessage = payload.ErrorMessage
		out.PayloadAttemptedAtMS = payload.AttemptedAtMS
	}
	return out, true
}
// mailDeliverySummary mirrors the public list-deliveries response of
// Mail Service.
type mailDeliverySummary struct {
	DeliveryID  string   `json:"delivery_id"`
	Source      string   `json:"source"`
	PayloadMode string   `json:"payload_mode"`
	TemplateID  string   `json:"template_id"`
	Locale      string   `json:"locale"`
	To          []string `json:"to"`
	Status      string   `json:"status"`
}

type mailDeliveryDetail struct {
	DeliveryID        string         `json:"delivery_id"`
	Source            string         `json:"source"`
	PayloadMode       string         `json:"payload_mode"`
	TemplateID        string         `json:"template_id"`
	Locale            string         `json:"locale"`
	To                []string       `json:"to"`
	IdempotencyKey    string         `json:"idempotency_key"`
	Status            string         `json:"status"`
	TemplateVariables map[string]any `json:"template_variables,omitempty"`
}

func (h *rtmNotificationHarness) eventuallyDelivery(
	t *testing.T,
	query url.Values,
) mailDeliverySummary {
	t.Helper()

	deadline := time.Now().Add(30 * time.Second)
	for {
		summary, found := h.findDelivery(t, query)
		if found {
			return summary
		}
		if time.Now().After(deadline) {
			t.Fatalf("mail delivery for query %v not observed within 30s\n%s",
				query, h.notificationProcess.Logs())
		}
		time.Sleep(50 * time.Millisecond)
	}
}

func (h *rtmNotificationHarness) findDelivery(
	t *testing.T,
	query url.Values,
) (mailDeliverySummary, bool) {
	t.Helper()

	listURL := h.mailBaseURL + mailDeliveriesPath + "?" + query.Encode()
	req, err := http.NewRequest(http.MethodGet, listURL, nil)
	require.NoError(t, err)
	resp := doRequest(t, req)
	if resp.StatusCode != http.StatusOK {
		return mailDeliverySummary{}, false
	}
	var body struct {
		Items []mailDeliverySummary `json:"items"`
	}
	if err := json.Unmarshal([]byte(resp.Body), &body); err != nil {
		return mailDeliverySummary{}, false
	}
	if len(body.Items) == 0 {
		return mailDeliverySummary{}, false
	}
	return body.Items[0], true
}

func (h *rtmNotificationHarness) getDelivery(t *testing.T, deliveryID string) mailDeliveryDetail {
	t.Helper()

	req, err := http.NewRequest(http.MethodGet, h.mailBaseURL+mailDeliveriesPath+"/"+url.PathEscape(deliveryID), nil)
	require.NoError(t, err)
	resp := doRequest(t, req)
	require.Equalf(t, http.StatusOK, resp.StatusCode, "get delivery: %s", resp.Body)

	// Mail's detail response carries many fields the suite does not
	// assert on (cc, bcc, reply-to, attempt history, …). Use a
	// lenient decoder so additive contract changes do not break this
	// boundary test.
	var detail mailDeliveryDetail
	require.NoError(t, json.Unmarshal([]byte(resp.Body), &detail))
	return detail
}
// --- shared helpers (mirror the conventions of integration/notificationmail) ---

type httpResponse struct {
	StatusCode int
	Body       string
	Header     http.Header
}

func doRequest(t *testing.T, request *http.Request) httpResponse {
	t.Helper()
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true},
	}
	t.Cleanup(client.CloseIdleConnections)

	response, err := client.Do(request)
	require.NoError(t, err)
	defer response.Body.Close()

	payload, err := io.ReadAll(response.Body)
	require.NoError(t, err)
	return httpResponse{
		StatusCode: response.StatusCode,
		Body:       string(payload),
		Header:     response.Header.Clone(),
	}
}

func decodeStrictJSON(payload []byte, target any) error {
	decoder := json.NewDecoder(bytes.NewReader(payload))
	decoder.DisallowUnknownFields()
	if err := decoder.Decode(target); err != nil {
		return err
	}
	if err := decoder.Decode(&struct{}{}); err != io.EOF {
		if err == nil {
			return errors.New("unexpected trailing JSON input")
		}
		return err
	}
	return nil
}

func waitForUserServiceReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()
	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet,
			baseURL+"/api/v1/internal/users/user-readiness-probe/exists", nil)
		require.NoError(t, err)
		response, err := client.Do(req)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("wait for userservice readiness: timeout\n%s", process.Logs())
}

func waitForMailReady(t *testing.T, process *harness.Process, baseURL string) {
	t.Helper()
	client := &http.Client{Timeout: 250 * time.Millisecond}
	t.Cleanup(client.CloseIdleConnections)

	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet, baseURL+mailDeliveriesPath, nil)
		require.NoError(t, err)
		response, err := client.Do(req)
		if err == nil {
			_, _ = io.Copy(io.Discard, response.Body)
			response.Body.Close()
			if response.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(25 * time.Millisecond)
	}
	t.Fatalf("wait for mail readiness: timeout\n%s", process.Logs())
}

func mailTemplateDir(t *testing.T) string {
	t.Helper()
	return filepath.Join(repositoryRoot(t), "mail", "templates")
}

func repositoryRoot(t *testing.T) string {
	t.Helper()
	_, file, _, ok := runtime.Caller(0)
	if !ok {
		t.Fatal("resolve repository root: runtime caller is unavailable")
	}
	return filepath.Clean(filepath.Join(filepath.Dir(file), "..", ".."))
}

// uniqueGameID derives a unique, per-test, per-invocation game id
// usable as the `game_id` field on `runtime:start_jobs` entries
// without colliding when `-count` exceeds one.
func uniqueGameID(t *testing.T) string {
	t.Helper()
	return fmt.Sprintf("game-%s-%d", sanitiseGameName(t.Name()), time.Now().UnixNano())
}

func sanitiseGameName(name string) string {
	allowed := func(r rune) rune {
		switch {
		case r >= 'a' && r <= 'z',
			r >= 'A' && r <= 'Z',
			r >= '0' && r <= '9':
			return r
		case r == '/' || r == '_' || r == '-':
			return '-'
		default:
			return -1
		}
	}
	out := make([]rune, 0, len(name))
	for _, r := range name {
		if mapped := allowed(r); mapped != -1 {
			out = append(out, mapped)
		}
	}
	return string(out)
}

// resolveDockerHost mirrors `rtmanager/integration/harness.runtime.go`:
// honour DOCKER_HOST when the developer machine routes through colima
// or a remote daemon, fall back to the standard unix path otherwise.
func resolveDockerHost() string {
	if host := strings.TrimSpace(os.Getenv("DOCKER_HOST")); host != "" {
		return host
	}
	return "unix:///var/run/docker.sock"
}
@@ -0,0 +1,125 @@
package integration_test

import (
	"context"
	"encoding/json"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/testenv"
)

// TestRuntimeLifecycle drives the runtime control plane against a
// real `galaxy/game:integration` container with the engine's
// production race-count requirement (`len(races) >= 10`) honoured.
// The owner creates an enrollment-open game, ten pilots redeem
// per-game invites, admin force-starts, and the test waits for the
// runtime record to reach `running`. It then triggers force-stop and
// asserts the runtime exits the active set.
func TestRuntimeLifecycle(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	testenv.EnsureGameImage(t)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	admin := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, plat.Backend.AdminUser, plat.Backend.AdminPassword)
	if _, resp, err := admin.Do(ctx, http.MethodPost, "/api/v1/admin/engine-versions", map[string]any{
		"version": "v1.0.0", "image_ref": testenv.GameImage, "enabled": true,
	}); err != nil || resp.StatusCode/100 != 2 {
		t.Fatalf("seed engine_version: err=%v resp=%v", err, resp)
	}

	owner := testenv.RegisterSession(t, plat, "owner+runtime@example.com")
	ownerID, err := owner.LookupUserID(ctx, plat)
	if err != nil {
		t.Fatalf("resolve owner: %v", err)
	}
	ownerHTTP := testenv.NewBackendUserClient(plat.Backend.HTTPURL, ownerID)

	gameBody := map[string]any{
		"game_name":             "Runtime Lifecycle",
		"visibility":            "private",
		"min_players":           10,
		"max_players":           10,
		"start_gap_hours":       1,
		"start_gap_players":     10,
		"enrollment_ends_at":    time.Now().Add(24 * time.Hour).UTC().Format(time.RFC3339),
		"turn_schedule":         "0 * * * *",
		"target_engine_version": "v1.0.0",
	}
	raw, resp, err := ownerHTTP.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games", gameBody)
	if err != nil || resp.StatusCode != http.StatusCreated {
		t.Fatalf("create game: err=%v status=%d body=%s", err, resp.StatusCode, string(raw))
	}
	var game struct {
		GameID string `json:"game_id"`
	}
	if err := json.Unmarshal(raw, &game); err != nil {
		t.Fatalf("decode game: %v", err)
	}
	if _, resp, err := ownerHTTP.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games/"+game.GameID+"/open-enrollment", nil); err != nil || resp.StatusCode != http.StatusOK {
		t.Fatalf("open enrollment: err=%v status=%d", err, resp.StatusCode)
	}

	// Engine init requires len(races) >= 10; enroll exactly that.
	testenv.EnrollPilots(t, plat, ownerHTTP, game.GameID, 10, "runtime")

	if _, resp, err := admin.Do(ctx, http.MethodPost, "/api/v1/admin/games/"+game.GameID+"/force-start", nil); err != nil || resp.StatusCode/100 != 2 {
		t.Fatalf("force-start: err=%v status=%d", err, resp.StatusCode)
	}

	// Wait for runtime to reach `running` against the live engine.
	deadline := time.Now().Add(3 * time.Minute)
	var runtimeStatus string
	for time.Now().Before(deadline) {
		raw, resp, err = admin.Do(ctx, http.MethodGet, "/api/v1/admin/runtimes/"+game.GameID, nil)
		if err != nil {
			t.Fatalf("admin runtime get: %v", err)
		}
		if resp.StatusCode == http.StatusOK {
			var rec struct {
				Status             string `json:"status"`
				CurrentContainerID string `json:"current_container_id"`
			}
			if err := json.Unmarshal(raw, &rec); err == nil {
				runtimeStatus = rec.Status
				if rec.Status == "running" {
					if rec.CurrentContainerID == "" {
						t.Fatalf("runtime running but current_container_id is empty")
					}
					break
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	if runtimeStatus != "running" {
		t.Fatalf("runtime did not reach running within 3m (last=%q body=%s)", runtimeStatus, string(raw))
	}

	// Force-stop and assert the runtime row exits the active set.
	if _, resp, err := admin.Do(ctx, http.MethodPost, "/api/v1/admin/games/"+game.GameID+"/force-stop", nil); err != nil || resp.StatusCode/100 != 2 {
		t.Fatalf("force-stop: err=%v status=%d", err, resp.StatusCode)
	}
	deadline = time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		raw, resp, err = admin.Do(ctx, http.MethodGet, "/api/v1/admin/runtimes/"+game.GameID, nil)
		if err != nil {
			t.Fatalf("admin runtime get post-stop: %v", err)
		}
		if resp.StatusCode == http.StatusNotFound {
			return
		}
		var rec struct {
			Status string `json:"status"`
		}
		if err := json.Unmarshal(raw, &rec); err == nil {
			if rec.Status == "removed" || rec.Status == "stopped" || rec.Status == "cancelled" {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	t.Fatalf("runtime did not exit running within 60s (last body=%s)", string(raw))
}
@@ -0,0 +1,67 @@
package integration_test

import (
	"context"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/testenv"
	usermodel "galaxy/model/user"
	"galaxy/transcoder"
)

// TestSessionRevoke_SubsequentRequestsRejected revokes a session via
// the internal endpoint backend exposes (gateway uses the same path)
// and asserts that the gateway rejects subsequent authenticated
// requests bound to that session.
func TestSessionRevoke_SubsequentRequestsRejected(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
	defer cancel()

	sess := testenv.RegisterSession(t, plat, "pilot+revoke@example.com")
	gw, err := sess.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial: %v", err)
	}
	defer gw.Close()

	// Sanity: the authenticated path works before the revoke.
	payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{})
	if err != nil {
		t.Fatalf("encode payload: %v", err)
	}
	if _, err := gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, payload, testenv.ExecuteOptions{}); err != nil {
		t.Fatalf("pre-revoke call failed: %v", err)
	}

	// Revoke.
	internal := testenv.NewBackendInternalClient(plat.Backend.HTTPURL)
	raw, resp, err := internal.Do(ctx, http.MethodPost, "/api/v1/internal/sessions/"+sess.DeviceSessionID+"/revoke", nil)
	if err != nil {
		t.Fatalf("revoke: %v", err)
	}
	if resp.StatusCode/100 != 2 {
		t.Fatalf("revoke status %d body=%s", resp.StatusCode, string(raw))
	}

	// Authenticated requests must now be rejected. Allow up to 2s for
	// the session-invalidation push frame to propagate to gateway and
	// close any cached state.
	deadline := time.Now().Add(2 * time.Second)
	var lastErr error
	for time.Now().Before(deadline) {
		_, lastErr = gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, payload, testenv.ExecuteOptions{})
		if lastErr != nil {
			break
		}
		time.Sleep(100 * time.Millisecond)
	}
	if lastErr == nil {
		t.Fatalf("post-revoke call still succeeded; expected rejection")
	}
	if !testenv.IsUnauthenticated(lastErr) {
		t.Fatalf("post-revoke status: expected Unauthenticated, got %v", lastErr)
	}
}
@@ -0,0 +1,86 @@
package integration_test

import (
	"context"
	"net/http"
	"testing"
	"time"

	"galaxy/integration/testenv"
	usermodel "galaxy/model/user"
	"galaxy/transcoder"
)

// TestSoftDelete_Cascade triggers `POST /api/v1/user/account/delete`
// with X-User-ID set (mirroring what gateway does after authenticated
// verification) and asserts:
//   - the account fetch through the authenticated gRPC surface
//     subsequently fails because soft-delete revoked the session;
//   - the admin geo endpoint reports that the user has no remaining
//     country counter rows.
func TestSoftDelete_Cascade(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
	defer cancel()

	sess := testenv.RegisterSession(t, plat, "pilot+softdelete@example.com")
	gw, err := sess.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial: %v", err)
	}
	defer gw.Close()

	// Touch the account once so a geo counter row exists.
	payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{})
	if err != nil {
		t.Fatalf("encode payload: %v", err)
	}
	if _, err := gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, payload, testenv.ExecuteOptions{}); err != nil {
		t.Fatalf("pre-delete fetch failed: %v", err)
	}

	userID, err := sess.LookupUserID(ctx, plat)
	if err != nil {
		t.Fatalf("resolve user_id: %v", err)
	}

	// Trigger the soft delete. The user surface is fronted by gateway
	// in production; here we replicate gateway's forwarding by hitting
	// backend's HTTP listener directly with X-User-ID, which is the
	// trusted identity input on the user surface.
	user := testenv.NewBackendUserClient(plat.Backend.HTTPURL, userID)
	raw, resp, err := user.Do(ctx, http.MethodPost, "/api/v1/user/account/delete", nil)
	if err != nil {
		t.Fatalf("soft delete: %v", err)
	}
	// Any 2xx (including 204 No Content) is acceptable here.
	if resp.StatusCode/100 != 2 {
		t.Fatalf("soft delete: status %d body=%s", resp.StatusCode, string(raw))
	}

	// Authenticated gRPC must now be rejected.
	deadline := time.Now().Add(2 * time.Second)
	var lastErr error
	for time.Now().Before(deadline) {
		_, lastErr = gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, payload, testenv.ExecuteOptions{})
		if lastErr != nil {
			break
		}
		time.Sleep(100 * time.Millisecond)
	}
	if lastErr == nil {
		t.Fatalf("gateway accepted authenticated call after soft delete; expected rejection")
	}
	if !testenv.IsUnauthenticated(lastErr) {
		t.Fatalf("post-delete status: expected Unauthenticated, got %v", lastErr)
	}

	// Geo cascade: counters for this user should be gone.
	admin := testenv.NewBackendAdminClient(plat.Backend.HTTPURL, plat.Backend.AdminUser, plat.Backend.AdminPassword)
	body, resp, err := admin.Do(ctx, http.MethodGet, "/api/v1/admin/geo/users/"+userID+"/countries", nil)
	if err != nil {
		t.Fatalf("admin geo lookup: %v", err)
	}
	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusNotFound {
		t.Fatalf("admin geo lookup: status %d body=%s", resp.StatusCode, string(body))
	}
}
@@ -0,0 +1,181 @@
package testenv

import (
	"context"
	"fmt"
	"path/filepath"
	"testing"
	"time"

	"github.com/google/uuid"
	"github.com/moby/moby/api/types/container"
	"github.com/moby/moby/api/types/mount"
	"github.com/testcontainers/testcontainers-go"
	tcnetwork "github.com/testcontainers/testcontainers-go/network"
	"github.com/testcontainers/testcontainers-go/wait"
)

// BackendContainer wraps a running galaxy/backend:integration
// container reachable from the host (HTTPHost, GRPCHost) and from
// the shared Docker network at the alias "backend".
type BackendContainer struct {
	Container testcontainers.Container
	HTTPHost  string
	HTTPPort  int
	HTTPURL   string
	GRPCHost  string
	GRPCPort  int
	GRPCURL   string

	// AdminUser/AdminPassword are the bootstrap admin credentials this
	// container started with. Tests that exercise the admin surface
	// reuse them directly.
	AdminUser     string
	AdminPassword string
}

// BackendOptions tunes a backend container before it boots.
type BackendOptions struct {
	NetworkAlias  string
	NetworkName   string
	PostgresDSN   string
	MailpitHost   string
	MailpitPort   int
	GeoIPHostPath string
	AdminEmail    string
	Extra         map[string]string
}

// StartBackend boots galaxy/backend:integration with the supplied
// options.
func StartBackend(t *testing.T, opts BackendOptions) *BackendContainer {
	t.Helper()
	EnsureBackendImage(t)

	if opts.NetworkAlias == "" {
		opts.NetworkAlias = "backend"
	}
	if opts.AdminEmail == "" {
		opts.AdminEmail = "admin@galaxy.test"
	}

	geoIPInContainer := "/var/lib/galaxy/geoip.mmdb"
	// Use a unique daemon-side path for each test so concurrent runs
	// cannot collide. Docker creates the source directory at container
	// start because BindOptions.CreateMountpoint is true.
	stateRoot := "/tmp/galaxy-state-" + uuid.NewString()

	env := map[string]string{
		"BACKEND_HTTP_LISTEN_ADDR":             ":8080",
		"BACKEND_GRPC_PUSH_LISTEN_ADDR":        ":8081",
		"BACKEND_LOGGING_LEVEL":                "info",
		"BACKEND_POSTGRES_DSN":                 opts.PostgresDSN,
		"BACKEND_SMTP_HOST":                    opts.MailpitHost,
		"BACKEND_SMTP_PORT":                    fmt.Sprintf("%d", opts.MailpitPort),
		"BACKEND_SMTP_FROM":                    "galaxy-backend@galaxy.test",
		"BACKEND_SMTP_TLS_MODE":                "none",
		"BACKEND_DOCKER_NETWORK":               opts.NetworkName,
		"BACKEND_GAME_STATE_ROOT":              stateRoot,
		"BACKEND_ADMIN_BOOTSTRAP_USER":         "bootstrap",
		"BACKEND_ADMIN_BOOTSTRAP_PASSWORD":     "bootstrap-secret",
		"BACKEND_GEOIP_DB_PATH":                geoIPInContainer,
		"BACKEND_OTEL_TRACES_EXPORTER":         "none",
		"BACKEND_OTEL_METRICS_EXPORTER":        "none",
		"BACKEND_NOTIFICATION_ADMIN_EMAIL":     opts.AdminEmail,
		"BACKEND_AUTH_CHALLENGE_THROTTLE_MAX":  "100",
		"BACKEND_MAIL_WORKER_INTERVAL":         "500ms",
		"BACKEND_NOTIFICATION_WORKER_INTERVAL": "500ms",
	}
	for k, v := range opts.Extra {
		env[k] = v
	}

	dockerSocket := DockerSocketPath()
	req := testcontainers.ContainerRequest{
		Image:        BackendImage,
		ExposedPorts: []string{"8080/tcp", "8081/tcp"},
		Env:          env,
		WaitingFor: wait.ForHTTP("/healthz").
			WithPort("8080/tcp").
			WithStartupTimeout(60 * time.Second),
		Files: []testcontainers.ContainerFile{
			{
				HostFilePath:      opts.GeoIPHostPath,
				ContainerFilePath: geoIPInContainer,
				FileMode:          0o644,
			},
		},
		HostConfigModifier: func(hc *container.HostConfig) {
			hc.Binds = append(hc.Binds, dockerSocket+":/var/run/docker.sock")
			// Bind a unique daemon-side directory at the same path
			// inside the backend container. CreateMountpoint=true asks
			// the daemon to create the source directory if it is
			// missing, so we do not need a second container just to
			// mkdir on the daemon host. Per-game subdirectories are
			// created by backend's runtime via os.MkdirAll before each
			// engine container start.
			hc.Mounts = append(hc.Mounts, mount.Mount{
				Type:   mount.TypeBind,
				Source: stateRoot,
				Target: stateRoot,
				BindOptions: &mount.BindOptions{
					CreateMountpoint: true,
				},
			})
		},
		// The distroless `nonroot` user (uid 65532) cannot reach the
		// Docker daemon socket that backend mounts to manage engine
		// containers. In integration tests we run as root so the
		// dockerclient.EnsureNetwork startup probe succeeds; the
		// production deployment relies on a docker-socket-proxy
		// sidecar (see ARCHITECTURE.md §13).
		User: "0:0",
	}

	gcr := &testcontainers.GenericContainerRequest{ContainerRequest: req}
	if opts.NetworkName != "" {
		_ = tcnetwork.WithNetwork([]string{opts.NetworkAlias}, &testcontainers.DockerNetwork{Name: opts.NetworkName}).Customize(gcr)
	}
	gcr.Started = true

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()
	container, err := testcontainers.GenericContainer(ctx, *gcr)
	if err != nil {
		t.Fatalf("start backend container: %v", err)
	}
	t.Cleanup(func() {
		if err := testcontainers.TerminateContainer(container); err != nil {
			t.Logf("terminate backend: %v", err)
		}
	})

	host, err := container.Host(ctx)
	if err != nil {
		t.Fatalf("backend host: %v", err)
	}
	httpPort, err := container.MappedPort(ctx, "8080/tcp")
	if err != nil {
		t.Fatalf("backend http port: %v", err)
	}
	grpcPort, err := container.MappedPort(ctx, "8081/tcp")
	if err != nil {
		t.Fatalf("backend grpc port: %v", err)
	}

	return &BackendContainer{
		Container:     container,
		HTTPHost:      host,
		HTTPPort:      int(httpPort.Num()),
		HTTPURL:       fmt.Sprintf("http://%s:%d", host, httpPort.Num()),
		GRPCHost:      host,
		GRPCPort:      int(grpcPort.Num()),
		GRPCURL:       fmt.Sprintf("%s:%d", host, grpcPort.Num()),
		AdminUser:     env["BACKEND_ADMIN_BOOTSTRAP_USER"],
		AdminPassword: env["BACKEND_ADMIN_BOOTSTRAP_PASSWORD"],
	}
}

// _ keeps filepath imported until the network helpers here grow to
// use it.
var _ = filepath.Separator
@@ -0,0 +1,272 @@
package testenv

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// PublicRESTClient exposes the public REST surface of the gateway
// (`/api/v1/public/*`). Tests use it for unauthenticated registration
// flows.
type PublicRESTClient struct {
	BaseURL string
	HTTP    *http.Client
}

// NewPublicRESTClient constructs a client targeting baseURL.
func NewPublicRESTClient(baseURL string) *PublicRESTClient {
	return &PublicRESTClient{
		BaseURL: strings.TrimRight(baseURL, "/"),
		HTTP:    &http.Client{Timeout: 30 * time.Second},
	}
}

// SendEmailCodeResponse mirrors the wire shape of
// `POST /api/v1/public/auth/send-email-code`.
type SendEmailCodeResponse struct {
	ChallengeID string `json:"challenge_id"`
}

// ConfirmEmailCodeResponse mirrors the wire shape of
// `POST /api/v1/public/auth/confirm-email-code`.
type ConfirmEmailCodeResponse struct {
	DeviceSessionID string `json:"device_session_id"`
}

// SendEmailCode triggers an email-code challenge. The `locale` value
// is sent through the public REST contract as the `Accept-Language`
// header (gateway derives `preferred_language` from it; the body
// schema rejects unknown fields).
func (c *PublicRESTClient) SendEmailCode(ctx context.Context, email string, locale string) (*SendEmailCodeResponse, *http.Response, error) {
	body := map[string]any{"email": email}
	headers := http.Header{}
	if locale != "" {
		headers.Set("Accept-Language", locale)
	}
	raw, resp, err := c.doWithHeaders(ctx, http.MethodPost, "/api/v1/public/auth/send-email-code", body, headers)
	if err != nil {
		return nil, resp, err
	}
	if resp.StatusCode/100 != 2 {
		return nil, resp, fmt.Errorf("send-email-code: status %d: %s", resp.StatusCode, string(raw))
	}
	var out SendEmailCodeResponse
	if err := json.Unmarshal(raw, &out); err != nil {
		return nil, resp, err
	}
	return &out, resp, nil
}

// ConfirmEmailCode confirms a challenge and registers a device
// session.
func (c *PublicRESTClient) ConfirmEmailCode(ctx context.Context, challengeID, code, clientPublicKey, timeZone string) (*ConfirmEmailCodeResponse, *http.Response, error) {
	body := map[string]any{
		"challenge_id":      challengeID,
		"code":              code,
		"client_public_key": clientPublicKey,
		"time_zone":         timeZone,
	}
	raw, resp, err := c.do(ctx, http.MethodPost, "/api/v1/public/auth/confirm-email-code", body)
	if err != nil {
		return nil, resp, err
	}
	if resp.StatusCode/100 != 2 {
		return nil, resp, fmt.Errorf("confirm-email-code: status %d: %s", resp.StatusCode, string(raw))
	}
	var out ConfirmEmailCodeResponse
	if err := json.Unmarshal(raw, &out); err != nil {
		return nil, resp, err
	}
	return &out, resp, nil
}

func (c *PublicRESTClient) do(ctx context.Context, method, path string, body any) ([]byte, *http.Response, error) {
	return c.doWithHeaders(ctx, method, path, body, nil)
}

func (c *PublicRESTClient) doWithHeaders(ctx context.Context, method, path string, body any, headers http.Header) ([]byte, *http.Response, error) {
	var reader io.Reader
	if body != nil {
		buf, err := json.Marshal(body)
		if err != nil {
			return nil, nil, err
		}
		reader = bytes.NewReader(buf)
	}
	req, err := http.NewRequestWithContext(ctx, method, c.BaseURL+path, reader)
	if err != nil {
		return nil, nil, err
	}
	if body != nil {
		req.Header.Set("Content-Type", "application/json")
	}
	for k, vs := range headers {
		for _, v := range vs {
			req.Header.Add(k, v)
		}
	}
	resp, err := c.HTTP.Do(req)
	if err != nil {
		return nil, nil, err
	}
	defer resp.Body.Close()
	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, resp, err
	}
	return raw, resp, nil
}

// BackendInternalClient hits backend's `/api/v1/internal/*` endpoints
// directly. Per ARCHITECTURE.md the trust boundary is the network, so
// integration tests act as a trusted gateway-equivalent caller.
type BackendInternalClient struct {
	BaseURL string
	HTTP    *http.Client
}

// NewBackendInternalClient targets backend's HTTP base URL.
func NewBackendInternalClient(baseURL string) *BackendInternalClient {
	return &BackendInternalClient{
		BaseURL: strings.TrimRight(baseURL, "/"),
		HTTP:    &http.Client{Timeout: 30 * time.Second},
	}
}

// Do issues an internal request. The caller decodes the body.
func (c *BackendInternalClient) Do(ctx context.Context, method, path string, body any) ([]byte, *http.Response, error) {
	var reader io.Reader
	if body != nil {
		buf, err := json.Marshal(body)
		if err != nil {
			return nil, nil, err
		}
		reader = bytes.NewReader(buf)
	}
	req, err := http.NewRequestWithContext(ctx, method, c.BaseURL+path, reader)
	if err != nil {
		return nil, nil, err
	}
	if body != nil {
		req.Header.Set("Content-Type", "application/json")
	}
	resp, err := c.HTTP.Do(req)
	if err != nil {
		return nil, nil, err
	}
	defer resp.Body.Close()
	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, resp, err
	}
	return raw, resp, nil
}

// BackendUserClient hits backend's `/api/v1/user/*` endpoints
// directly with `X-User-ID` set, mirroring what gateway does after
// authenticated traffic verification. Used by scenarios whose
// message_type is not registered in gateway's gRPC router (lobby
// create, soft delete, etc.).
type BackendUserClient struct {
	BaseURL string
	UserID  string
	HTTP    *http.Client
}

// NewBackendUserClient targets backend's HTTP base URL with userID
// pre-bound.
func NewBackendUserClient(baseURL, userID string) *BackendUserClient {
	return &BackendUserClient{
		BaseURL: strings.TrimRight(baseURL, "/"),
		UserID:  userID,
		HTTP:    &http.Client{Timeout: 30 * time.Second},
	}
}

// Do issues a user-scoped backend request.
func (c *BackendUserClient) Do(ctx context.Context, method, path string, body any) ([]byte, *http.Response, error) {
	var reader io.Reader
	if body != nil {
		buf, err := json.Marshal(body)
		if err != nil {
			return nil, nil, err
		}
		reader = bytes.NewReader(buf)
	}
	req, err := http.NewRequestWithContext(ctx, method, c.BaseURL+path, reader)
	if err != nil {
		return nil, nil, err
	}
	req.Header.Set("X-User-ID", c.UserID)
	if body != nil {
		req.Header.Set("Content-Type", "application/json")
	}
	resp, err := c.HTTP.Do(req)
	if err != nil {
		return nil, nil, err
	}
	defer resp.Body.Close()
	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, resp, err
	}
	return raw, resp, nil
}

// BackendAdminClient hits backend's admin surface directly with HTTP
// Basic Auth. Per ARCHITECTURE.md §14 the admin surface is on the
// backend HTTP listener (not gateway), so tests address it directly.
type BackendAdminClient struct {
	BaseURL  string
	Username string
	Password string
	HTTP     *http.Client
}

// NewBackendAdminClient targets backend's HTTP base URL with the
// supplied credentials.
func NewBackendAdminClient(baseURL, username, password string) *BackendAdminClient {
	return &BackendAdminClient{
		BaseURL:  strings.TrimRight(baseURL, "/"),
		Username: username,
		Password: password,
		HTTP:     &http.Client{Timeout: 30 * time.Second},
	}
}

// Do performs a request against an admin endpoint. The caller decodes
// the body. The returned http.Response is always non-nil on success.
func (c *BackendAdminClient) Do(ctx context.Context, method, path string, body any) ([]byte, *http.Response, error) {
	var reader io.Reader
	if body != nil {
		buf, err := json.Marshal(body)
		if err != nil {
			return nil, nil, err
		}
		reader = bytes.NewReader(buf)
	}
	req, err := http.NewRequestWithContext(ctx, method, c.BaseURL+path, reader)
	if err != nil {
		return nil, nil, err
	}
	req.SetBasicAuth(c.Username, c.Password)
	if body != nil {
		req.Header.Set("Content-Type", "application/json")
	}
	resp, err := c.HTTP.Do(req)
	if err != nil {
		return nil, nil, err
	}
	defer resp.Body.Close()
	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, resp, err
	}
	return raw, resp, nil
}
@@ -0,0 +1,16 @@
package testenv

// DockerSocketPath returns the bind-mountable filesystem path of the
// Docker daemon socket reachable from a container running on the same
// daemon.
//
// testcontainers' `ExtractDockerSocket` returns the path on the
// machine that is *running tests*: on macOS+Colima that is the
// Colima-managed path under `~/.colima/...`, which does not resolve
// inside the Linux VM. For bind mounts into other containers we need
// the path the daemon itself sees, which on every supported daemon
// (native Linux, Docker Desktop, Colima, Rancher) is the canonical
// `/var/run/docker.sock`.
func DockerSocketPath() string {
	return "/var/run/docker.sock"
}
@@ -0,0 +1,166 @@
package testenv

import (
	"context"
	"crypto/ed25519"
	"crypto/rand"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"path/filepath"
	"testing"
	"time"

	"github.com/testcontainers/testcontainers-go"
	tcnetwork "github.com/testcontainers/testcontainers-go/network"
	"github.com/testcontainers/testcontainers-go/wait"
)

// GatewayContainer wraps a running galaxy/gateway:integration
// container.
type GatewayContainer struct {
	Container testcontainers.Container
	HTTPHost  string
	HTTPPort  int
	HTTPURL   string
	GRPCHost  string
	GRPCPort  int
	GRPCAddr  string

	// ResponseSignerPublic is the public half of the Ed25519 key the
	// gateway uses to sign responses and push events. Tests verify
	// signatures against this value.
	ResponseSignerPublic ed25519.PublicKey
}

// GatewayOptions tunes a gateway container before it boots.
type GatewayOptions struct {
	NetworkAlias    string
	NetworkName     string
	BackendHTTPURL  string
	BackendGRPCURL  string
	RedisAddr       string
	GatewayClientID string
	Extra           map[string]string
}

// StartGateway boots galaxy/gateway:integration with the supplied
// options.
func StartGateway(t *testing.T, opts GatewayOptions) *GatewayContainer {
	t.Helper()
	EnsureGatewayImage(t)

	if opts.NetworkAlias == "" {
		opts.NetworkAlias = "gateway"
	}
	if opts.GatewayClientID == "" {
		opts.GatewayClientID = "integration-gateway"
	}

	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		t.Fatalf("generate ed25519 key: %v", err)
	}
	keyDER, err := x509.MarshalPKCS8PrivateKey(priv)
	if err != nil {
		t.Fatalf("marshal ed25519 key: %v", err)
	}
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "PRIVATE KEY", Bytes: keyDER})
	keyPath := filepath.Join(t.TempDir(), "gateway-signer.pem")
	if err := writeFile(keyPath, keyPEM); err != nil {
		t.Fatalf("write signer key: %v", err)
	}

	containerKey := "/etc/galaxy/gateway-signer.pem"
	env := map[string]string{
		"GATEWAY_PUBLIC_HTTP_ADDR":                     ":8080",
		"GATEWAY_AUTHENTICATED_GRPC_ADDR":              ":9090",
		"GATEWAY_LOG_LEVEL":                            "debug",
		"GATEWAY_REDIS_MASTER_ADDR":                    opts.RedisAddr,
		"GATEWAY_REDIS_PASSWORD":                       RedisIntegrationPassword,
		"GATEWAY_BACKEND_HTTP_URL":                     opts.BackendHTTPURL,
		"GATEWAY_BACKEND_GRPC_PUSH_URL":                opts.BackendGRPCURL,
		"GATEWAY_BACKEND_GATEWAY_CLIENT_ID":            opts.GatewayClientID,
		"GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH": containerKey,
		// Loosen anti-abuse limits so happy-path scenarios are not
		// rate-limited. Negative-path edge tests tighten these per-test.
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_REQUESTS":          "10000",
		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_BURST":             "1000",
		"GATEWAY_AUTHENTICATED_GRPC_ANTI_ABUSE_IP_RATE_LIMIT_REQUESTS":            "10000",
		"GATEWAY_AUTHENTICATED_GRPC_ANTI_ABUSE_IP_RATE_LIMIT_BURST":               "1000",
		"GATEWAY_AUTHENTICATED_GRPC_ANTI_ABUSE_SESSION_RATE_LIMIT_REQUESTS":       "10000",
		"GATEWAY_AUTHENTICATED_GRPC_ANTI_ABUSE_SESSION_RATE_LIMIT_BURST":          "1000",
		"GATEWAY_AUTHENTICATED_GRPC_ANTI_ABUSE_USER_RATE_LIMIT_REQUESTS":          "10000",
		"GATEWAY_AUTHENTICATED_GRPC_ANTI_ABUSE_USER_RATE_LIMIT_BURST":             "1000",
		"GATEWAY_AUTHENTICATED_GRPC_ANTI_ABUSE_MESSAGE_CLASS_RATE_LIMIT_REQUESTS": "10000",
		"GATEWAY_AUTHENTICATED_GRPC_ANTI_ABUSE_MESSAGE_CLASS_RATE_LIMIT_BURST":    "1000",
	}
	for k, v := range opts.Extra {
		env[k] = v
	}

	req := testcontainers.ContainerRequest{
		Image:        GatewayImage,
		ExposedPorts: []string{"8080/tcp", "9090/tcp"},
		Env:          env,
		WaitingFor: wait.ForHTTP("/healthz").
			WithPort("8080/tcp").
			WithStartupTimeout(60 * time.Second),
		Files: []testcontainers.ContainerFile{
			{
				HostFilePath:      keyPath,
				ContainerFilePath: containerKey,
				// 0o444 so the distroless `nonroot` user (uid 65532)
				// inside the gateway image can read the integration
				// signer key. The key is ephemeral and never leaves
				// the test process, so widening the mode is safe.
				FileMode: 0o444,
			},
		},
	}

	gcr := &testcontainers.GenericContainerRequest{ContainerRequest: req}
	if opts.NetworkName != "" {
		_ = tcnetwork.WithNetwork([]string{opts.NetworkAlias}, &testcontainers.DockerNetwork{Name: opts.NetworkName}).Customize(gcr)
	}
	gcr.Started = true

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()
	container, err := testcontainers.GenericContainer(ctx, *gcr)
	if err != nil {
		t.Fatalf("start gateway container: %v", err)
	}
	t.Cleanup(func() {
		if err := testcontainers.TerminateContainer(container); err != nil {
			t.Logf("terminate gateway: %v", err)
		}
	})

	host, err := container.Host(ctx)
	if err != nil {
		t.Fatalf("gateway host: %v", err)
	}
	port, err := container.MappedPort(ctx, "8080/tcp")
	if err != nil {
		t.Fatalf("gateway port: %v", err)
	}
	grpcPort, err := container.MappedPort(ctx, "9090/tcp")
	if err != nil {
		t.Fatalf("gateway grpc port: %v", err)
	}
	return &GatewayContainer{
		Container:            container,
		HTTPHost:             host,
		HTTPPort:             int(port.Num()),
		HTTPURL:              fmt.Sprintf("http://%s:%d", host, port.Num()),
		GRPCHost:             host,
		GRPCPort:             int(grpcPort.Num()),
		GRPCAddr:             fmt.Sprintf("%s:%d", host, grpcPort.Num()),
		ResponseSignerPublic: pub,
	}
}

func writeFile(path string, content []byte) error {
	return writeFileFn(path, content)
}
@@ -0,0 +1,57 @@
package testenv

import (
	"io"
	"os"
	"path/filepath"
	"runtime"
	"testing"
)

// SyntheticGeoIPDB copies the MaxMind reference Country test database
// into a fresh temp directory and returns the absolute path. The same
// fixture is used by pkg/geoip tests, so all integration tests resolve
// the same set of synthetic IPs against the same country mapping.
func SyntheticGeoIPDB(t *testing.T) string {
	t.Helper()
	src := geoipFixturePath(t)
	data, err := os.ReadFile(src)
	if err != nil {
		t.Fatalf("read mmdb fixture %s: %v", src, err)
	}
	dst := filepath.Join(t.TempDir(), "GeoIP2-Country-Test.mmdb")
	if err := os.WriteFile(dst, data, 0o644); err != nil {
		t.Fatalf("write mmdb fixture: %v", err)
	}
	return dst
}

func geoipFixturePath(t *testing.T) string {
	t.Helper()
	_, file, _, ok := runtime.Caller(0)
	if !ok {
		t.Fatalf("runtime.Caller failed")
	}
	// integration/testenv/geoip.go → workspace/pkg/geoip/...
	root := filepath.Dir(filepath.Dir(filepath.Dir(file)))
	return filepath.Join(root, "pkg", "geoip", "test-data", "test-data", "GeoIP2-Country-Test.mmdb")
}

// CopyFile copies src into dst with mode 0644. It is a convenience
// helper for container bind-mount preparation.
func CopyFile(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	if _, err := io.Copy(out, in); err != nil {
		return err
	}
	return out.Chmod(0o644)
}

@@ -0,0 +1,259 @@
package testenv

import (
	"context"
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"errors"
	"fmt"
	"sync/atomic"
	"time"

	gatewayauthn "galaxy/gateway/authn"
	gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"

	"github.com/google/uuid"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
)

// SignedGatewayClient drives the authenticated gRPC surface of the
// gateway from tests. It signs ExecuteCommand envelopes with the
// session's Ed25519 private key, verifies response signatures with
// the gateway's response-signer public key, and exposes a
// SubscribeEvents helper.
type SignedGatewayClient struct {
	conn       *grpc.ClientConn
	edge       gatewayv1.EdgeGatewayClient
	deviceSID  string
	privateKey ed25519.PrivateKey
	respPub    ed25519.PublicKey

	requestSeq uint64
}

// NewSession is the device-session shape returned by registration.
type NewSession struct {
	DeviceSessionID string
	PrivateKey      ed25519.PrivateKey
	PublicKey       ed25519.PublicKey
}

// GenerateSessionKeyPair returns a fresh Ed25519 keypair for use in
// `confirm-email-code`.
func GenerateSessionKeyPair() (ed25519.PublicKey, ed25519.PrivateKey, error) {
	return ed25519.GenerateKey(rand.Reader)
}

// EncodePublicKey base64-encodes the raw 32-byte Ed25519 public key
// for the `client_public_key` field.
func EncodePublicKey(pub ed25519.PublicKey) string {
	return base64.StdEncoding.EncodeToString(pub)
}

// DialGateway opens a gRPC connection to gateway's authenticated
// surface and prepares a signing client bound to deviceSID.
func DialGateway(ctx context.Context, addr string, deviceSID string, privateKey ed25519.PrivateKey, respPub ed25519.PublicKey) (*SignedGatewayClient, error) {
	conn, err := grpc.NewClient(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return nil, fmt.Errorf("dial gateway: %w", err)
	}
	return &SignedGatewayClient{
		conn:       conn,
		edge:       gatewayv1.NewEdgeGatewayClient(conn),
		deviceSID:  deviceSID,
		privateKey: privateKey,
		respPub:    respPub,
	}, nil
}

// Close releases the gRPC connection.
func (c *SignedGatewayClient) Close() error {
	return c.conn.Close()
}

// ExecuteOptions tunes one ExecuteCommand call. The zero value
// produces a fresh `request_id` and the current timestamp; tests that
// need a fixed request_id (anti-replay) or a stale timestamp
// (freshness window) override the relevant fields.
type ExecuteOptions struct {
	RequestID               string
	TimestampMS             int64
	OverrideSignature       []byte
	OverridePayloadHash     []byte
	OverrideSessionID       string
	OverrideProtocolVersion string
}

// ExecuteResult is the verified response of a successful
// ExecuteCommand. PayloadBytes is the authenticated FlatBuffers
// blob; tests decode it via galaxy/transcoder.
type ExecuteResult struct {
	ResultCode   string
	PayloadBytes []byte
	RequestID    string
	TimestampMS  int64
}

// Execute signs the supplied payload, calls ExecuteCommand, verifies
// the response signature against the gateway response signer, and
// returns the decoded result.
func (c *SignedGatewayClient) Execute(ctx context.Context, messageType string, payload []byte, opts ExecuteOptions) (*ExecuteResult, error) {
	if len(payload) == 0 {
		return nil, errors.New("ExecuteCommand requires non-empty payload")
	}

	requestID := opts.RequestID
	if requestID == "" {
		requestID = uuid.NewString()
	}
	timestampMS := opts.TimestampMS
	if timestampMS == 0 {
		timestampMS = time.Now().UnixMilli()
	}
	protocolVersion := opts.OverrideProtocolVersion
	if protocolVersion == "" {
		protocolVersion = "v1"
	}
	deviceSID := opts.OverrideSessionID
	if deviceSID == "" {
		deviceSID = c.deviceSID
	}

	hash := opts.OverridePayloadHash
	if hash == nil {
		sum := sha256.Sum256(payload)
		hash = sum[:]
	}

	signature := opts.OverrideSignature
	if signature == nil {
		input := gatewayauthn.BuildRequestSigningInput(gatewayauthn.RequestSigningFields{
			ProtocolVersion: protocolVersion,
			DeviceSessionID: deviceSID,
			MessageType:     messageType,
			TimestampMS:     timestampMS,
			RequestID:       requestID,
			PayloadHash:     hash,
		})
		signature = ed25519.Sign(c.privateKey, input)
	}

	req := &gatewayv1.ExecuteCommandRequest{
		ProtocolVersion: protocolVersion,
		DeviceSessionId: deviceSID,
		MessageType:     messageType,
		TimestampMs:     timestampMS,
		RequestId:       requestID,
		PayloadBytes:    payload,
		PayloadHash:     hash,
		Signature:       signature,
	}
	atomic.AddUint64(&c.requestSeq, 1)

	resp, err := c.edge.ExecuteCommand(ctx, req)
	if err != nil {
		return nil, err
	}

	respHash := sha256.Sum256(resp.GetPayloadBytes())
	if string(respHash[:]) != string(resp.GetPayloadHash()) {
		return nil, fmt.Errorf("response payload_hash mismatch")
	}
	if err := gatewayauthn.VerifyResponseSignature(c.respPub, resp.GetSignature(), gatewayauthn.ResponseSigningFields{
		ProtocolVersion: resp.GetProtocolVersion(),
		RequestID:       resp.GetRequestId(),
		TimestampMS:     resp.GetTimestampMs(),
		ResultCode:      resp.GetResultCode(),
		PayloadHash:     resp.GetPayloadHash(),
	}); err != nil {
		return nil, fmt.Errorf("response signature verification failed: %w", err)
	}

	return &ExecuteResult{
		ResultCode:   resp.GetResultCode(),
		PayloadBytes: resp.GetPayloadBytes(),
		RequestID:    resp.GetRequestId(),
		TimestampMS:  resp.GetTimestampMs(),
	}, nil
}

// SubscribeEvents opens the authenticated server-streaming
// SubscribeEvents RPC. The returned channel receives every
// authenticated event the gateway delivers; the channel closes when
// the stream ends or when ctx is done. Errors land on the err
// channel.
func (c *SignedGatewayClient) SubscribeEvents(ctx context.Context, messageType string) (<-chan *gatewayv1.GatewayEvent, <-chan error, error) {
	requestID := uuid.NewString()
	timestampMS := time.Now().UnixMilli()
	protocolVersion := "v1"

	emptyHash := sha256.Sum256(nil)
	signature := ed25519.Sign(c.privateKey, gatewayauthn.BuildRequestSigningInput(gatewayauthn.RequestSigningFields{
		ProtocolVersion: protocolVersion,
		DeviceSessionID: c.deviceSID,
		MessageType:     messageType,
		TimestampMS:     timestampMS,
		RequestID:       requestID,
		PayloadHash:     emptyHash[:],
	}))

	stream, err := c.edge.SubscribeEvents(ctx, &gatewayv1.SubscribeEventsRequest{
		ProtocolVersion: protocolVersion,
		DeviceSessionId: c.deviceSID,
		MessageType:     messageType,
		TimestampMs:     timestampMS,
		RequestId:       requestID,
		PayloadHash:     emptyHash[:],
		Signature:       signature,
	})
	if err != nil {
		return nil, nil, fmt.Errorf("open subscribe events: %w", err)
	}

	events := make(chan *gatewayv1.GatewayEvent, 16)
	errs := make(chan error, 1)
	go func() {
		defer close(events)
		for {
			ev, err := stream.Recv()
			if err != nil {
				errs <- err
				return
			}
			events <- ev
		}
	}()
	return events, errs, nil
}

// IsUnauthenticated reports whether err is a gRPC Unauthenticated
// status, useful for negative-path edge tests.
func IsUnauthenticated(err error) bool {
	return status.Code(err) == codes.Unauthenticated
}

// IsInvalidArgument reports whether err is a gRPC InvalidArgument
// status (used for malformed envelopes and unsupported
// protocol_version).
func IsInvalidArgument(err error) bool {
	return status.Code(err) == codes.InvalidArgument
}

// IsResourceExhausted reports whether err is a gRPC
// ResourceExhausted status.
func IsResourceExhausted(err error) bool {
	return status.Code(err) == codes.ResourceExhausted
}

// IsFailedPrecondition reports whether err is a gRPC
// FailedPrecondition status. The gateway uses this code for replay
// rejections (the canonical envelope was authentic but the
// `request_id` was already consumed).
func IsFailedPrecondition(err error) bool {
	return status.Code(err) == codes.FailedPrecondition
}
@@ -0,0 +1,91 @@
|
||||
package testenv
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"sync"
|
||||
"testing"
|
||||
"time"
|
||||
)
|
||||
|
||||
const (
|
||||
BackendImage = "galaxy/backend:integration"
|
||||
GatewayImage = "galaxy/gateway:integration"
|
||||
GameImage = "galaxy/game:integration"
|
||||
)
|
||||
|
||||
var (
|
||||
backendOnce sync.Once
|
||||
backendErr error
|
||||
gatewayOnce sync.Once
|
||||
gatewayErr error
|
||||
gameOnce sync.Once
|
||||
gameErr error
|
||||
)
|
||||
|
||||
// EnsureBackendImage builds galaxy/backend:integration once per
|
||||
// process. Subsequent calls reuse the result.
|
||||
func EnsureBackendImage(t *testing.T) {
|
||||
t.Helper()
|
||||
backendOnce.Do(func() {
|
||||
backendErr = buildImage(BackendImage, "backend/Dockerfile")
|
||||
})
|
||||
if backendErr != nil {
|
||||
t.Skipf("build %s: %v", BackendImage, backendErr)
|
||||
}
|
||||
}
|
||||
|
||||
// EnsureGatewayImage builds galaxy/gateway:integration once per
|
||||
// process.
|
||||
func EnsureGatewayImage(t *testing.T) {
|
||||
t.Helper()
|
||||
gatewayOnce.Do(func() {
|
||||
gatewayErr = buildImage(GatewayImage, "gateway/Dockerfile")
|
||||
})
|
||||
if gatewayErr != nil {
|
||||
t.Skipf("build %s: %v", GatewayImage, gatewayErr)
|
||||
}
|
||||
}
|
||||
|
||||
// EnsureGameImage builds galaxy/game:integration once per process.
|
||||
func EnsureGameImage(t *testing.T) {
|
||||
t.Helper()
|
||||
gameOnce.Do(func() {
|
||||
gameErr = buildImage(GameImage, "game/Dockerfile")
|
||||
})
|
||||
if gameErr != nil {
|
||||
t.Skipf("build %s: %v", GameImage, gameErr)
|
||||
}
|
||||
}
|
||||
|
||||
func buildImage(tag, dockerfile string) error {
|
||||
root, err := workspaceRoot()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
|
||||
defer cancel()
|
||||
|
||||
cmd := exec.CommandContext(ctx, "docker", "build",
|
||||
"-t", tag,
|
||||
"-f", filepath.Join(root, dockerfile),
|
||||
root,
|
||||
)
|
||||
out, err := cmd.CombinedOutput()
|
||||
if err != nil {
|
||||
return fmt.Errorf("docker build %s: %v\n%s", tag, err, string(out))
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func workspaceRoot() (string, error) {
|
||||
_, file, _, ok := runtime.Caller(0)
|
||||
if !ok {
|
||||
return "", fmt.Errorf("runtime.Caller failed")
|
||||
}
|
||||
// integration/testenv/images.go → workspace root
|
||||
return filepath.Dir(filepath.Dir(filepath.Dir(file))), nil
|
||||
}

@@ -0,0 +1,10 @@
package testenv

import "os"

// writeFileFn is a tiny indirection so other files in this package can
// write fixtures without re-declaring os.WriteFile and to keep test
// hooks centralised. It is a var so a test can swap it out.
var writeFileFn = func(path string, content []byte) error {
	return os.WriteFile(path, content, 0o600)
}
@@ -0,0 +1,197 @@
|
||||
package testenv
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/testcontainers/testcontainers-go"
|
||||
tcnetwork "github.com/testcontainers/testcontainers-go/network"
|
||||
"github.com/testcontainers/testcontainers-go/wait"
|
||||
)
|
||||
|
||||
// Mailpit holds an axllent/mailpit testcontainer that captures
|
||||
// outbound SMTP from backend. The HTTP API is exposed for mail
|
||||
// inspection from tests.
|
||||
type Mailpit struct {
|
||||
container testcontainers.Container
|
||||
SMTPHost string
|
||||
SMTPPort int
|
||||
APIBase string
|
||||
}
|
||||
|
||||
// StartMailpit starts an axllent/mailpit container attached to network.
|
||||
func StartMailpit(t *testing.T, network string) *Mailpit {
|
||||
t.Helper()
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
|
||||
defer cancel()
|
||||
|
||||
req := testcontainers.ContainerRequest{
|
||||
Image: "axllent/mailpit:latest",
|
||||
ExposedPorts: []string{"1025/tcp", "8025/tcp"},
|
||||
WaitingFor: wait.ForHTTP("/api/v1/info").WithPort("8025/tcp"),
|
||||
}
|
||||
gcr := &testcontainers.GenericContainerRequest{ContainerRequest: req}
|
||||
if network != "" {
|
||||
netOpt := tcnetwork.WithNetwork([]string{"mailpit"}, &testcontainers.DockerNetwork{Name: network})
|
||||
_ = netOpt.Customize(gcr)
|
||||
}
|
||||
|
||||
gcr.Started = true
|
||||
container, err := testcontainers.GenericContainer(ctx, *gcr)
|
||||
if err != nil {
|
||||
t.Skipf("mailpit container unavailable: %v", err)
|
||||
}
|
||||
t.Cleanup(func() {
|
||||
if err := testcontainers.TerminateContainer(container); err != nil {
|
||||
t.Logf("terminate mailpit: %v", err)
|
||||
}
|
||||
})
|
||||
|
||||
host, err := container.Host(ctx)
|
||||
if err != nil {
|
||||
t.Fatalf("mailpit host: %v", err)
|
||||
}
|
||||
smtpPort, err := container.MappedPort(ctx, "1025/tcp")
|
||||
if err != nil {
|
||||
t.Fatalf("mailpit smtp port: %v", err)
|
||||
}
|
||||
apiPort, err := container.MappedPort(ctx, "8025/tcp")
|
||||
if err != nil {
|
||||
t.Fatalf("mailpit api port: %v", err)
|
||||
}
|
||||
return &Mailpit{
|
||||
container: container,
|
||||
SMTPHost: host,
|
||||
SMTPPort: int(smtpPort.Num()),
|
||||
APIBase: fmt.Sprintf("http://%s:%d", host, apiPort.Num()),
|
||||
}
|
||||
}
|
||||
|
||||
// Message is a single mailpit message summary.
|
||||
type Message struct {
|
||||
ID string `json:"ID"`
|
||||
From MessageAddress `json:"From"`
|
||||
To []MessageAddress `json:"To"`
|
||||
Subject string `json:"Subject"`
|
||||
Snippet string `json:"Snippet"`
|
||||
}
|
||||
|
||||
// MessageAddress is one address in From/To.
|
||||
type MessageAddress struct {
|
||||
Address string `json:"Address"`
|
||||
Name string `json:"Name"`
|
||||
}
|
||||
|
||||
type messagesResponse struct {
|
||||
Messages []Message `json:"messages"`
|
||||
Total int `json:"total"`
|
||||
}
|
||||
|
||||
// MessageBody fetches the rendered body (text) of message id.
|
||||
func (m *Mailpit) MessageBody(ctx context.Context, id string) (string, error) {
|
||||
req, err := http.NewRequestWithContext(ctx, http.MethodGet, m.APIBase+"/api/v1/message/"+url.PathEscape(id), nil)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
resp, err := http.DefaultClient.Do(req)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
return "", fmt.Errorf("mailpit message %s: status %d", id, resp.StatusCode)
|
||||
}
|
||||
var body struct {
|
||||
Text string `json:"Text"`
|
||||
HTML string `json:"HTML"`
|
||||
}
|
||||
if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
|
||||
return "", err
|
||||
}
|
||||
if body.Text != "" {
|
||||
return body.Text, nil
|
||||
}
|
||||
return body.HTML, nil
|
||||
}
|
||||
|
||||
// Search returns messages matching the mailpit search expression. See
|
||||
// https://mailpit.axllent.org/docs/usage/search-filters/.
|
||||
func (m *Mailpit) Search(ctx context.Context, query string) ([]Message, error) {
|
||||
u := m.APIBase + "/api/v1/search?query=" + url.QueryEscape(query)
|
||||
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
resp, err := http.DefaultClient.Do(req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return nil, fmt.Errorf("mailpit search: status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
var out messagesResponse
|
||||
if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out.Messages, nil
|
||||
}

// WaitForMessage polls Search until a message matching query is seen
// or the deadline elapses.
func (m *Mailpit) WaitForMessage(ctx context.Context, query string, timeout time.Duration) (Message, error) {
	deadline := time.Now().Add(timeout)
	for {
		msgs, err := m.Search(ctx, query)
		if err == nil && len(msgs) > 0 {
			return msgs[0], nil
		}
		if time.Now().After(deadline) {
			if err == nil {
				err = fmt.Errorf("no messages match %q", query)
			}
			return Message{}, err
		}
		select {
		case <-ctx.Done():
			return Message{}, ctx.Err()
		case <-time.After(200 * time.Millisecond):
		}
	}
}

// DeleteAll clears the mailpit inbox. Useful between phases of a test.
func (m *Mailpit) DeleteAll(ctx context.Context) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodDelete, m.APIBase+"/api/v1/messages", nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode/100 != 2 {
		return fmt.Errorf("mailpit delete: status %d", resp.StatusCode)
	}
	return nil
}

// ContainsLine reports whether body contains a line that begins with
// prefix; helpful for extracting login codes from the text body.
func ContainsLine(body, prefix string) bool {
	for _, line := range strings.Split(body, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), prefix) {
			return true
		}
	}
	return false
}

@@ -0,0 +1,27 @@
package testenv

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	tcnetwork "github.com/testcontainers/testcontainers-go/network"
)

// StartNetwork creates a user-defined Docker bridge network and
// registers a t.Cleanup to remove it. All platform containers attach
// to the same network so they can resolve each other by alias.
func StartNetwork(t *testing.T) *testcontainers.DockerNetwork {
	t.Helper()
	ctx := context.Background()
	net, err := tcnetwork.New(ctx)
	if err != nil {
		t.Skipf("docker network unavailable: %v", err)
	}
	t.Cleanup(func() {
		if err := net.Remove(ctx); err != nil {
			t.Logf("remove network: %v", err)
		}
	})
	return net
}
@@ -0,0 +1,76 @@
|
||||
package testenv
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"testing"
|
||||
)
|
||||
|
||||
// Pilot bundles a registered Session with its resolved user_id and a
|
||||
// pre-built BackendUserClient so tests do not have to repeat the
|
||||
// resolution dance for each redeem call.
|
||||
type Pilot struct {
|
||||
Session *Session
|
||||
UserID string
|
||||
HTTP *BackendUserClient
|
||||
RaceName string
|
||||
}
|
||||
|
||||
// EnrollPilots registers `count` pilots with synthetic
|
||||
// `Player01..PlayerNN` race names and the matching
|
||||
// `playerNN+suffix@example.com` emails, then has owner issue an
|
||||
// invite for each one and the pilot redeem it. The game must be in
|
||||
// `enrollment_open` (or any state that accepts invites + redeem).
|
||||
//
|
||||
// The helper exists because the engine's `/api/v1/admin/init` enforces
|
||||
// `len(races) >= 10`, so any runtime-driven scenario needs at least
|
||||
// ten enrolled members. Using it from tests keeps each pilot a real
|
||||
// authenticated user, exactly mirroring how operators would seed a
|
||||
// production game.
|
||||
func EnrollPilots(t *testing.T, plat *Platform, ownerHTTP *BackendUserClient, gameID string, count int, suffix string) []*Pilot {
|
||||
t.Helper()
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
pilots := make([]*Pilot, 0, count)
|
||||
for i := 1; i <= count; i++ {
|
||||
raceName := fmt.Sprintf("Player%02d", i)
|
||||
email := fmt.Sprintf("player%02d+%s@example.com", i, suffix)
|
||||
|
||||
sess := RegisterSession(t, plat, email)
|
||||
userID, err := sess.LookupUserID(ctx, plat)
|
||||
if err != nil {
|
||||
t.Fatalf("pilot %s: resolve user_id: %v", raceName, err)
|
||||
}
|
||||
|
||||
raw, resp, err := ownerHTTP.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games/"+gameID+"/invites", map[string]any{
|
||||
"invited_user_id": userID,
|
||||
"race_name": raceName,
|
||||
})
|
||||
if err != nil || resp.StatusCode != http.StatusCreated {
|
||||
t.Fatalf("pilot %s: issue invite: err=%v status=%d body=%s", raceName, err, resp.StatusCode, string(raw))
|
||||
}
|
||||
var invite struct {
|
||||
InviteID string `json:"invite_id"`
|
||||
}
|
||||
if err := json.Unmarshal(raw, &invite); err != nil {
|
||||
t.Fatalf("pilot %s: decode invite: %v", raceName, err)
|
||||
}
|
||||
|
||||
pilotHTTP := NewBackendUserClient(plat.Backend.HTTPURL, userID)
|
||||
raw, resp, err = pilotHTTP.Do(ctx, http.MethodPost, "/api/v1/user/lobby/games/"+gameID+"/invites/"+invite.InviteID+"/redeem", nil)
|
||||
if err != nil || resp.StatusCode/100 != 2 {
|
||||
t.Fatalf("pilot %s: redeem: err=%v status=%d body=%s", raceName, err, resp.StatusCode, string(raw))
|
||||
}
|
||||
|
||||
pilots = append(pilots, &Pilot{
|
||||
Session: sess,
|
||||
UserID: userID,
|
||||
HTTP: pilotHTTP,
|
||||
RaceName: raceName,
|
||||
})
|
||||
}
|
||||
return pilots
|
||||
}
|
||||
@@ -0,0 +1,102 @@
|
||||
package testenv
|
||||
|
||||
import (
|
||||
"context"
|
||||
"io"
|
||||
"testing"
|
||||
|
||||
"github.com/testcontainers/testcontainers-go"
|
||||
)
|
||||
|
||||
// Platform aggregates a fully booted Galaxy stack: shared Docker
|
||||
// network, Postgres, Redis, mailpit, backend and gateway. Tests use
|
||||
// this struct to access HTTP/gRPC endpoints, mailpit and backend
|
||||
// admin without touching testcontainers directly.
|
||||
type Platform struct {
|
||||
Network string
|
||||
Postgres *Postgres
|
||||
Redis *Redis
|
||||
Mailpit *Mailpit
|
||||
Backend *BackendContainer
|
||||
Gateway *GatewayContainer
|
||||
}
|
||||
|
||||
// BootstrapOptions tunes platform-level knobs that flow into backend
|
||||
// or gateway configuration. The zero value is valid and produces a
|
||||
// stack with sensible defaults for happy-path scenarios.
|
||||
type BootstrapOptions struct {
|
||||
BackendExtra map[string]string
|
||||
GatewayExtra map[string]string
|
||||
}
|
||||
|
||||
// Bootstrap builds three Docker images (backend, gateway, optionally
|
||||
// the engine in the caller), spins up Postgres, Redis, mailpit, then
|
||||
// boots backend and gateway connected to those services. It registers
|
||||
// t.Cleanup hooks for every component, so callers do not own
|
||||
// teardown.
|
||||
//
|
||||
// The function calls RequireDocker and skips the test gracefully if
|
||||
// the daemon is unreachable, so every scenario can start with a
|
||||
// single Bootstrap call.
|
||||
func Bootstrap(t *testing.T, opts BootstrapOptions) *Platform {
|
||||
t.Helper()
|
||||
RequireDocker(t)
|
||||
|
||||
net := StartNetwork(t)
|
||||
pg := StartPostgres(t, net.Name)
|
||||
redis := StartRedis(t, net.Name)
|
||||
mp := StartMailpit(t, net.Name)
|
||||
geoip := SyntheticGeoIPDB(t)
|
||||
|
||||
backend := StartBackend(t, BackendOptions{
|
||||
NetworkAlias: "backend",
|
||||
NetworkName: net.Name,
|
||||
PostgresDSN: pg.NetworkDSN,
|
||||
MailpitHost: "mailpit",
|
||||
MailpitPort: 1025,
|
||||
GeoIPHostPath: geoip,
|
||||
Extra: opts.BackendExtra,
|
||||
})
|
||||
gateway := StartGateway(t, GatewayOptions{
|
||||
NetworkAlias: "gateway",
|
||||
NetworkName: net.Name,
|
||||
BackendHTTPURL: "http://backend:8080",
|
||||
BackendGRPCURL: "backend:8081",
|
||||
RedisAddr: "redis:6379",
|
||||
Extra: opts.GatewayExtra,
|
||||
})
|
||||
|
||||
plat := &Platform{
|
||||
Network: net.Name,
|
||||
Postgres: pg,
|
||||
Redis: redis,
|
||||
Mailpit: mp,
|
||||
Backend: backend,
|
||||
Gateway: gateway,
|
||||
}
|
||||
t.Cleanup(func() {
|
||||
if !t.Failed() {
|
||||
return
|
||||
}
|
||||
dumpLogs(t, "backend", backend.Container)
|
||||
dumpLogs(t, "gateway", gateway.Container)
|
||||
})
|
||||
return plat
|
||||
}
|
||||
|
||||
// dumpLogs writes the container's stdout/stderr to test output. Used
|
||||
// only on failure to surface backend / gateway diagnostics.
|
||||
func dumpLogs(t *testing.T, name string, c testcontainers.Container) {
|
||||
t.Helper()
|
||||
if c == nil {
|
||||
return
|
||||
}
|
||||
rc, err := c.Logs(context.Background())
|
||||
if err != nil {
|
||||
t.Logf("%s logs unavailable: %v", name, err)
|
||||
return
|
||||
}
|
||||
defer rc.Close()
|
||||
body, _ := io.ReadAll(rc)
|
||||
t.Logf("--- %s container logs ---\n%s", name, string(body))
|
||||
}
|
||||
@@ -0,0 +1,122 @@
|
||||
package testenv
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/testcontainers/testcontainers-go"
|
||||
tcpostgres "github.com/testcontainers/testcontainers-go/modules/postgres"
|
||||
tcnetwork "github.com/testcontainers/testcontainers-go/network"
|
||||
"github.com/testcontainers/testcontainers-go/wait"
|
||||
)
|
||||
|
||||
const (
|
||||
pgImage = "postgres:16-alpine"
|
||||
pgUser = "galaxy"
|
||||
pgPassword = "galaxy"
|
||||
pgDatabase = "galaxy_backend"
|
||||
pgSchema = "backend"
|
||||
pgStartup = 90 * time.Second
|
||||
)
|
||||
|
||||
// Postgres holds a running Postgres testcontainer reachable from both
|
||||
// the host (DSN with localhost-mapped port) and from another container
|
||||
// on the same Docker network (HostInNetworkDSN).
|
||||
type Postgres struct {
|
||||
container *tcpostgres.PostgresContainer
|
||||
HostDSN string
|
||||
NetworkDSN string
|
||||
}
|
||||
|
||||
// StartPostgres boots a postgres:16-alpine container, returns DSNs for
|
||||
// both host and in-network access, and registers a t.Cleanup to
|
||||
// terminate the container.
|
||||
func StartPostgres(t *testing.T, network string) *Postgres {
|
||||
t.Helper()
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
|
||||
defer cancel()
|
||||
|
||||
opts := []testcontainers.ContainerCustomizer{
|
||||
tcpostgres.WithDatabase(pgDatabase),
|
||||
tcpostgres.WithUsername(pgUser),
|
||||
tcpostgres.WithPassword(pgPassword),
|
||||
testcontainers.WithWaitStrategy(
|
||||
wait.ForLog("database system is ready to accept connections").
|
||||
WithOccurrence(2).
|
||||
WithStartupTimeout(pgStartup),
|
||||
),
|
||||
}
|
||||
if network != "" {
|
||||
opts = append(opts, tcnetwork.WithNetwork([]string{"postgres"}, &testcontainers.DockerNetwork{Name: network}))
|
||||
}
|
||||
|
||||
container, err := tcpostgres.Run(ctx, pgImage, opts...)
|
||||
if err != nil {
|
||||
t.Skipf("postgres testcontainer unavailable: %v", err)
|
||||
}
|
||||
t.Cleanup(func() {
|
||||
if err := testcontainers.TerminateContainer(container); err != nil {
|
||||
t.Logf("terminate postgres: %v", err)
|
||||
}
|
||||
})
|
||||
|
||||
hostDSN, err := container.ConnectionString(ctx, "sslmode=disable")
|
||||
if err != nil {
|
||||
t.Fatalf("postgres host DSN: %v", err)
|
||||
}
|
||||
hostDSN, err = withSearchPath(hostDSN, pgSchema)
|
||||
if err != nil {
|
||||
t.Fatalf("postgres host DSN search_path: %v", err)
|
||||
}
|
||||
|
||||
networkDSN := ""
|
||||
if network != "" {
|
||||
networkDSN = buildInNetworkDSN("postgres", 5432, pgUser, pgPassword, pgDatabase, pgSchema)
|
||||
}
|
||||
|
||||
return &Postgres{
|
||||
container: container,
|
||||
HostDSN: hostDSN,
|
||||
NetworkDSN: networkDSN,
|
||||
}
|
||||
}
|
||||
|
||||
func withSearchPath(dsn, schema string) (string, error) {
|
||||
parsed, err := url.Parse(dsn)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
q := parsed.Query()
|
||||
q.Set("search_path", schema)
|
||||
if q.Get("sslmode") == "" {
|
||||
q.Set("sslmode", "disable")
|
||||
}
|
||||
parsed.RawQuery = q.Encode()
|
||||
return parsed.String(), nil
|
||||
}
|
||||
|
||||
func buildInNetworkDSN(host string, port int, user, password, db, schema string) string {
|
||||
u := &url.URL{
|
||||
Scheme: "postgres",
|
||||
User: url.UserPassword(user, password),
|
||||
Host: fmt.Sprintf("%s:%d", host, port),
|
||||
Path: "/" + db,
|
||||
RawQuery: "sslmode=disable&search_path=" + schema,
|
||||
}
|
||||
return u.String()
|
||||
}

// HostPort renders a host:port pair so other testenv files can reuse
// the same formatting.
func HostPort(host string, port int) string {
	return fmt.Sprintf("%s:%d", host, port)
}

// FormatPort returns the decimal representation of port.
func FormatPort(port int) string {
	return strconv.Itoa(port)
}
@@ -0,0 +1,69 @@
package testenv

import (
	"context"
	"testing"
	"time"

	"github.com/testcontainers/testcontainers-go"
	tcnetwork "github.com/testcontainers/testcontainers-go/network"
	"github.com/testcontainers/testcontainers-go/wait"
)

// Redis holds a running Redis testcontainer reachable from the host
// via HostAddr and from within the shared Docker network at the alias
// "redis". Password is the requirepass value the container was
// started with, so callers can pass it to gateway via env.
type Redis struct {
	container testcontainers.Container
	HostAddr  string
	Password  string
}

// RedisIntegrationPassword is the fixed requirepass value used by all
// integration scenarios. It is surfaced as a constant so test envs
// can agree on it without per-instance plumbing.
const RedisIntegrationPassword = "integration-redis-pw"

// StartRedis starts a redis:7-alpine container attached to network.
// The gateway uses Redis for anti-replay reservations only.
func StartRedis(t *testing.T, network string) *Redis {
	t.Helper()
	ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
	defer cancel()

	req := testcontainers.ContainerRequest{
		Image:        "redis:7-alpine",
		ExposedPorts: []string{"6379/tcp"},
		Cmd:          []string{"redis-server", "--requirepass", RedisIntegrationPassword},
		WaitingFor:   wait.ForLog("Ready to accept connections"),
	}
	gcr := &testcontainers.GenericContainerRequest{ContainerRequest: req}
	if network != "" {
		_ = tcnetwork.WithNetwork([]string{"redis"}, &testcontainers.DockerNetwork{Name: network}).Customize(gcr)
	}
	gcr.Started = true
	container, err := testcontainers.GenericContainer(ctx, *gcr)
	if err != nil {
		t.Skipf("redis testcontainer unavailable: %v", err)
	}
	t.Cleanup(func() {
		if err := testcontainers.TerminateContainer(container); err != nil {
			t.Logf("terminate redis: %v", err)
		}
	})

	host, err := container.Host(ctx)
	if err != nil {
		t.Fatalf("redis host: %v", err)
	}
	mapped, err := container.MappedPort(ctx, "6379/tcp")
	if err != nil {
		t.Fatalf("redis port: %v", err)
	}
	return &Redis{
		container: container,
		HostAddr:  HostPort(host, int(mapped.Num())),
		Password:  RedisIntegrationPassword,
	}
}
@@ -0,0 +1,111 @@
package testenv

import (
	"context"
	"crypto/ed25519"
	"encoding/json"
	"fmt"
	"net/http"
	"regexp"
	"testing"
	"time"
)

// Session is a registered device session ready to drive the
// authenticated gRPC surface.
type Session struct {
	Email           string
	DeviceSessionID string
	Public          ed25519.PublicKey
	Private         ed25519.PrivateKey
}

var sessionLoginCodeRE = regexp.MustCompile(`(?m)\b(\d{6})\b`)

// RegisterSession runs send-email-code → confirm-email-code through
// the gateway's public REST surface and returns a fresh Session. It
// uses mailpit to capture the verification code and clears the
// mailbox first to avoid picking up stale messages between calls.
func RegisterSession(t *testing.T, plat *Platform, email string) *Session {
	t.Helper()
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	if err := plat.Mailpit.DeleteAll(ctx); err != nil {
		t.Fatalf("clear mailpit: %v", err)
	}

	pub, priv, err := GenerateSessionKeyPair()
	if err != nil {
		t.Fatalf("generate session keypair: %v", err)
	}
	public := NewPublicRESTClient(plat.Gateway.HTTPURL)

	send, _, err := public.SendEmailCode(ctx, email, "en-US")
	if err != nil {
		t.Fatalf("send-email-code: %v", err)
	}
	if send.ChallengeID == "" {
		t.Fatalf("send-email-code returned empty challenge_id")
	}

	msg, err := plat.Mailpit.WaitForMessage(ctx, "to:"+email, 30*time.Second)
	if err != nil {
		t.Fatalf("wait for mail: %v", err)
	}
	body, err := plat.Mailpit.MessageBody(ctx, msg.ID)
	if err != nil {
		t.Fatalf("fetch mail body: %v", err)
	}
	m := sessionLoginCodeRE.FindStringSubmatch(body)
	if m == nil {
		t.Fatalf("no 6-digit code in mail body:\n%s", body)
	}
	code := m[1]

	confirm, _, err := public.ConfirmEmailCode(ctx, send.ChallengeID, code, EncodePublicKey(pub), "UTC")
	if err != nil {
		t.Fatalf("confirm-email-code: %v", err)
	}
	if confirm.DeviceSessionID == "" {
		t.Fatalf("confirm-email-code returned empty device_session_id")
	}

	return &Session{
		Email:           email,
		DeviceSessionID: confirm.DeviceSessionID,
		Public:          pub,
		Private:         priv,
	}
}

// DialAuthenticated returns a SignedGatewayClient bound to s.
func (s *Session) DialAuthenticated(ctx context.Context, plat *Platform) (*SignedGatewayClient, error) {
	if s == nil {
		return nil, fmt.Errorf("nil session")
	}
	return DialGateway(ctx, plat.Gateway.GRPCAddr, s.DeviceSessionID, s.Private, plat.Gateway.ResponseSignerPublic)
}

// LookupUserID resolves the user_id for s via backend's internal
// session lookup. It returns an error when the session is unknown.
func (s *Session) LookupUserID(ctx context.Context, plat *Platform) (string, error) {
	if s == nil || s.DeviceSessionID == "" {
		return "", fmt.Errorf("nil or empty session")
	}
	internal := NewBackendInternalClient(plat.Backend.HTTPURL)
	raw, resp, err := internal.Do(ctx, http.MethodGet, "/api/v1/internal/sessions/"+s.DeviceSessionID, nil)
	if err != nil {
		return "", err
	}
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("session lookup: status %d body=%s", resp.StatusCode, string(raw))
	}
	var body struct {
		UserID string `json:"user_id"`
	}
	if err := json.Unmarshal(raw, &body); err != nil {
		return "", fmt.Errorf("decode session: %w", err)
	}
	return body.UserID, nil
}
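The `sessionLoginCodeRE` extraction can be exercised on its own. A small sketch with a fabricated mail body shows why the word boundaries matter:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as testenv's sessionLoginCodeRE: a standalone run of
// exactly six digits, anywhere in a multi-line body.
var loginCodeRE = regexp.MustCompile(`(?m)\b(\d{6})\b`)

// extractCode returns the first six-digit code, or "" when none is found.
func extractCode(body string) string {
	m := loginCodeRE.FindStringSubmatch(body)
	if m == nil {
		return ""
	}
	return m[1]
}

func main() {
	fmt.Println(extractCode("Hi!\nYour login code is 123456.\n")) // 123456
	// The \b anchors reject digit runs longer than six, so incidental
	// numbers like order IDs do not match.
	fmt.Println(extractCode("order #1234567") == "") // true
}
```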
@@ -0,0 +1,33 @@
// Package testenv builds and tears down an end-to-end Galaxy stack
// (Postgres, Redis, mailpit, backend, gateway, optionally a game-engine
// container) for use by the integration test suite. Tests interact with
// the platform exclusively through the typed clients exposed here; no
// other package in this module reaches the underlying containers
// directly.
package testenv

import (
	"context"
	"testing"
	"time"

	"github.com/testcontainers/testcontainers-go"
)

// RequireDocker skips the test when no Docker daemon is reachable. Each
// scenario starts with this guard so a CI worker without Docker emits a
// clear SKIP rather than a confusing failure.
func RequireDocker(t *testing.T) {
	t.Helper()
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	provider, err := testcontainers.NewDockerProvider()
	if err != nil {
		t.Skipf("docker provider unavailable: %v", err)
		return
	}
	defer provider.Close()
	if err := provider.Health(ctx); err != nil {
		t.Skipf("docker daemon unreachable: %v", err)
	}
}
@@ -0,0 +1,63 @@
package integration_test

import (
	"context"
	"strings"
	"testing"
	"time"

	"galaxy/integration/testenv"
	usermodel "galaxy/model/user"
	"galaxy/transcoder"
)

// TestUserAccount_GetThroughGatewayGRPC drives the authenticated
// gRPC user surface (`user.account.get`) through gateway → backend
// → user store. The test signs an envelope, sends it via gRPC,
// verifies the response signature, then decodes the FlatBuffers
// payload into the typed AccountResponse.
//
// Side effect: the gateway also sets `X-User-ID` and forwards to
// backend's HTTP `/api/v1/user/account`, which triggers the geo
// counter middleware. We validate the counter increments on the
// admin geo endpoint.
func TestUserAccount_GetThroughGatewayGRPC(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	sess := testenv.RegisterSession(t, plat, "pilot+account@example.com")

	gw, err := sess.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial gateway: %v", err)
	}
	defer gw.Close()

	payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{})
	if err != nil {
		t.Fatalf("encode get-account payload: %v", err)
	}

	res, err := gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, payload, testenv.ExecuteOptions{})
	if err != nil {
		t.Fatalf("execute get-account: %v", err)
	}
	if res.ResultCode != "ok" {
		t.Fatalf("expected ok result_code, got %q", res.ResultCode)
	}

	got, err := transcoder.PayloadToAccountResponse(res.PayloadBytes)
	if err != nil {
		t.Fatalf("decode account response: %v", err)
	}
	if got.Account.UserID == "" {
		t.Fatalf("decoded account missing user_id")
	}
	if got.Account.Email != sess.Email {
		t.Fatalf("decoded account email = %q, want %q", got.Account.Email, sess.Email)
	}
	// Case-insensitive prefix check; this subsumes the exact-case check.
	if !strings.HasPrefix(strings.ToLower(got.Account.UserName), "player-") {
		t.Fatalf("user_name = %q, want Player-XXXXXXXX shape", got.Account.UserName)
	}
}
@@ -0,0 +1,66 @@
package integration_test

import (
	"context"
	"testing"
	"time"

	"galaxy/integration/testenv"
	usermodel "galaxy/model/user"
	"galaxy/transcoder"
)

// TestUserProfileUpdate exercises `user.profile.update` over the
// authenticated gateway gRPC surface and verifies that the new
// display_name is reflected by a subsequent `user.account.get`.
func TestUserProfileUpdate(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	sess := testenv.RegisterSession(t, plat, "pilot+profile@example.com")
	gw, err := sess.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial: %v", err)
	}
	defer gw.Close()

	const newName = "Captain Pilot"
	updatePayload, err := transcoder.UpdateMyProfileRequestToPayload(&usermodel.UpdateMyProfileRequest{
		DisplayName: newName,
	})
	if err != nil {
		t.Fatalf("encode update payload: %v", err)
	}
	res, err := gw.Execute(ctx, usermodel.MessageTypeUpdateMyProfile, updatePayload, testenv.ExecuteOptions{})
	if err != nil {
		t.Fatalf("execute update profile: %v", err)
	}
	if res.ResultCode != "ok" {
		t.Fatalf("update result_code = %q, want ok", res.ResultCode)
	}
	updated, err := transcoder.PayloadToAccountResponse(res.PayloadBytes)
	if err != nil {
		t.Fatalf("decode update response: %v", err)
	}
	if updated.Account.DisplayName != newName {
		t.Fatalf("update returned display_name = %q, want %q", updated.Account.DisplayName, newName)
	}

	// Re-fetch the account to confirm persistence.
	getPayload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{})
	if err != nil {
		t.Fatalf("encode get payload: %v", err)
	}
	gres, err := gw.Execute(ctx, usermodel.MessageTypeGetMyAccount, getPayload, testenv.ExecuteOptions{})
	if err != nil {
		t.Fatalf("execute get-account: %v", err)
	}
	got, err := transcoder.PayloadToAccountResponse(gres.PayloadBytes)
	if err != nil {
		t.Fatalf("decode get response: %v", err)
	}
	if got.Account.DisplayName != newName {
		t.Fatalf("re-fetched display_name = %q, want %q", got.Account.DisplayName, newName)
	}
}
@@ -0,0 +1,64 @@
package integration_test

import (
	"context"
	"testing"
	"time"

	"galaxy/integration/testenv"
	usermodel "galaxy/model/user"
	"galaxy/transcoder"
)

// TestUserSettingsUpdate verifies `user.settings.update` accepts a
// valid BCP 47 / IANA pair and rejects malformed inputs through the
// gateway gRPC surface.
func TestUserSettingsUpdate(t *testing.T) {
	plat := testenv.Bootstrap(t, testenv.BootstrapOptions{})
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	sess := testenv.RegisterSession(t, plat, "pilot+settings@example.com")
	gw, err := sess.DialAuthenticated(ctx, plat)
	if err != nil {
		t.Fatalf("dial: %v", err)
	}
	defer gw.Close()

	good, err := transcoder.UpdateMySettingsRequestToPayload(&usermodel.UpdateMySettingsRequest{
		PreferredLanguage: "fr-CA",
		TimeZone:          "America/Toronto",
	})
	if err != nil {
		t.Fatalf("encode payload: %v", err)
	}
	res, err := gw.Execute(ctx, usermodel.MessageTypeUpdateMySettings, good, testenv.ExecuteOptions{})
	if err != nil {
		t.Fatalf("execute valid update: %v", err)
	}
	if res.ResultCode != "ok" {
		t.Fatalf("valid update result_code = %q, want ok", res.ResultCode)
	}
	updated, err := transcoder.PayloadToAccountResponse(res.PayloadBytes)
	if err != nil {
		t.Fatalf("decode response: %v", err)
	}
	if updated.Account.PreferredLanguage != "fr-CA" || updated.Account.TimeZone != "America/Toronto" {
		t.Fatalf("settings not applied: lang=%q tz=%q", updated.Account.PreferredLanguage, updated.Account.TimeZone)
	}

	bad, err := transcoder.UpdateMySettingsRequestToPayload(&usermodel.UpdateMySettingsRequest{
		PreferredLanguage: "not-a-language",
		TimeZone:          "Mars/Olympus",
	})
	if err != nil {
		t.Fatalf("encode bad payload: %v", err)
	}
	res, err = gw.Execute(ctx, usermodel.MessageTypeUpdateMySettings, bad, testenv.ExecuteOptions{})
	if err != nil {
		t.Fatalf("execute invalid update: %v", err)
	}
	if res.ResultCode == "ok" {
		t.Fatalf("invalid update was accepted: %q", res.ResultCode)
	}
}
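How backend actually validates the pair is not visible from this test. A naive standalone sketch that agrees with the fixtures above (the regexp is a deliberately loose stand-in for real BCP 47 parsing, while the time-zone check leans on the stdlib's IANA database):

```go
package main

import (
	"fmt"
	"regexp"
	"time"
	_ "time/tzdata" // embed the IANA database so LoadLocation works anywhere
)

// langRE accepts only a primary subtag plus optional region, e.g.
// "fr" or "fr-CA" — far looser than full BCP 47, but enough here.
var langRE = regexp.MustCompile(`^[a-z]{2,3}(-[A-Z]{2})?$`)

// validSettings reports whether lang looks like a language tag and tz
// names a real IANA time zone.
func validSettings(lang, tz string) bool {
	if !langRE.MatchString(lang) {
		return false
	}
	_, err := time.LoadLocation(tz)
	return err == nil
}

func main() {
	fmt.Println(validSettings("fr-CA", "America/Toronto"))       // true
	fmt.Println(validSettings("not-a-language", "Mars/Olympus")) // false
}
```

`time.LoadLocation` is a convenient oracle for the IANA half of the pair; a production validator would use a proper BCP 47 parser for the language half rather than a regexp.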