diff --git a/ARCHITECTURE.md b/ARCHITECTURE.md index 3679658..571b1ea 100644 --- a/ARCHITECTURE.md +++ b/ARCHITECTURE.md @@ -43,7 +43,9 @@ The current v1 platform uses Redis as the main data store and Redis Streams as t * The platform exposes a single external entry point: **Edge Gateway**. * Public unauthenticated flows use REST/JSON. -* Authenticated user traffic uses signed gRPC over HTTP/2 with protobuf control envelopes and FlatBuffers payload bytes. +* Authenticated user edge traffic uses signed gRPC over HTTP/2 with protobuf control envelopes and FlatBuffers payload bytes. +* Trusted synchronous inter-service traffic uses REST/JSON unless a service-specific contract states otherwise. +* For the direct `Gateway -> User` self-service boundary, gateway keeps the external authenticated gRPC + FlatBuffers contract and performs REST/JSON transcoding toward `User Service` internally. * The gateway handles only edge concerns: parsing, authentication, integrity checks, anti-replay, rate limiting, routing, and push delivery. Business authorization and domain rules remain in downstream services. * `Auth / Session Service` is the source of truth for `device_session`, but it is not on the hot path of every authenticated request. Gateway authenticates steady-state traffic from session cache and lifecycle updates. * `Game Lobby` owns platform-level metadata of game sessions. @@ -65,6 +67,10 @@ The gateway already distinguishes: * public REST/JSON for unauthenticated traffic such as health checks and public auth; * authenticated gRPC over HTTP/2 for verified commands and push delivery. +For downstream business services, the current default trusted transport is +strict REST/JSON. Gateway may therefore authenticate and verify one external +FlatBuffers command, then transcode it to one trusted downstream REST call. + The public auth contract is: * `send-email-code(email) -> challenge_id` @@ -230,17 +236,22 @@ Direct integrations: ## 3. 
[User Service](user/README.md) -`User Service` owns user identity and profile as platform-level business data. +`User Service` owns regular-user identity and profile as platform-level +business data. It is the source of truth for: -* `user_id`; -* profile fields and editable user settings; -* role model, including admin role; +* `user_id` of regular platform users; +* regular-user profile fields and editable user settings; * current tariff/entitlement state; * user-specific limits and platform sanctions; * latest effective `declared_country`. +System-administrator identity remains outside this service and belongs to the +later `Admin Service`. Trusted administrative reads and mutations against +regular-user state do not make `User Service` the owner of administrator +identity. + It is directly reachable through gateway for selected user-facing operations such as: * reading and editing allowed profile fields; @@ -253,6 +264,17 @@ Not every profile mutation goes directly here. For example: * email change must use a code-confirm flow; * `declared_country` change remains under admin approval flow via `Geo Profile Service`. +Architectural rules fixed for this service: + +* `User Service` owns regular-user identity only; system-admin identity is out + of scope. +* `User Service` stores only the current effective `declared_country`; review + workflow and history belong to `Geo Profile Service`. +* During the current auth-registration rollout, `Auth / Session Service` + passes temporary `preferred_language="en"` plus the confirmed `time_zone` + into `User Service`. Gateway-side geoip language derivation is a later + rollout step and is not part of the current source-of-truth contract. + Future billing does not become a direct dependency of other services. `Billing Service` will feed entitlement/payment outcomes into `User Service`, and the rest of the platform will continue to use `User Service` as the source of truth for current entitlements. ## 4. 
Mail Service @@ -533,7 +555,7 @@ flowchart TD N["Notification Service"] M["Mail Service"] - U -->|"users, roles, tariffs, limits, sanctions, current declared_country"| X1["Platform user identity"] + U -->|"regular users, profile/settings, tariffs, limits, sanctions, current declared_country"| X1["Platform user identity"] A -->|"challenges, device sessions, revoke/block state"| X2["Auth/session state"] L -->|"game metadata, invites, applications, membership, roster"| X3["Platform game records"] G -->|"runtime state, current turn, engine health, engine mapping, engine version registry"| X4["Running-game state"] @@ -918,8 +940,8 @@ Recommended order for implementation is: 2. **Auth / Session Service** (implemented) Public auth flow, `device_session`, revoke/block lifecycle, gateway session projection. -3. **User Service** (planned) - Platform user identity, roles, tariffs/entitlements, user limits, settings, sanctions, and current `declared_country`. +3. **User Service** (implemented) + Regular-user identity, profile/settings, tariffs/entitlements, user limits, sanctions, and current `declared_country`. 4. **Mail Service** Internal email delivery for auth codes first, later for platform notifications. 
diff --git a/TESTING.md b/TESTING.md index b1d0ed9..965de1a 100644 --- a/TESTING.md +++ b/TESTING.md @@ -368,10 +368,9 @@ The testing plan follows this service order: * create user * find by email - * normalized email uniqueness + * exact-after-trim e-mail storage and lookup semantics * generated default `race_name` for new users * `race_name` uniqueness and confusable-substitution policy - * role assignment * tariff/entitlement fields * Profile tests: @@ -400,7 +399,7 @@ The testing plan follows this service order: * resolve existing/creatable/blocked decision for auth * `ensure-by-email` create-only `registration_context` semantics * current `declared_country` read/write path - * exact lookup by `user_id`, normalized `email`, and `race_name` + * exact lookup by `user_id`, exact-after-trim `email`, and exact `race_name` * paginated filtered listing with deterministic ordering * Storage and API contract tests: @@ -417,13 +416,15 @@ The testing plan follows this service order: * blocked-by-policy outcome * `Gateway <-> User` - * authenticated profile read - * authenticated allowed profile update - * tariff and settings read paths + * authenticated `user.account.get` + * authenticated successful `user.profile.update` + * authenticated successful `user.settings.update` + * `profile_update_block` conflict projection + * invalid-request projection for malformed self-service payload values * `Gateway <-> Auth / Session <-> User` * first registration by email - * repeat login by same email + * repeat login by same email without overwriting create-only settings * blocked email/user behavior ### Regression tests to keep from this stage onward diff --git a/authsession/README.md b/authsession/README.md index a4e1319..0819313 100644 --- a/authsession/README.md +++ b/authsession/README.md @@ -146,6 +146,9 @@ key registered for the created device session. 
rollout phase, successful confirms forward create-only user registration context to `User Service` as `preferred_language="en"` and the supplied `time_zone` until gateway geoip-based language derivation is deployed. +`User Service` now validates `preferred_language` as BCP 47 and canonicalizes +the stored value on creation, so any future derived language must already be a +valid BCP 47 tag before auth forwards it. Public boundary rules: diff --git a/authsession/go.mod b/authsession/go.mod index 7a50199..074c8a4 100644 --- a/authsession/go.mod +++ b/authsession/go.mod @@ -4,45 +4,45 @@ go 1.26.0 require ( github.com/alicebob/miniredis/v2 v2.37.0 - github.com/getkin/kin-openapi v0.134.0 + github.com/getkin/kin-openapi v0.135.0 github.com/gin-gonic/gin v1.12.0 github.com/redis/go-redis/v9 v9.18.0 github.com/stretchr/testify v1.11.1 - go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.67.0 - go.opentelemetry.io/otel v1.42.0 - go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.42.0 - go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.42.0 - go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.42.0 - go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.42.0 - go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.42.0 - go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.42.0 - go.opentelemetry.io/otel/metric v1.42.0 - go.opentelemetry.io/otel/sdk v1.42.0 - go.opentelemetry.io/otel/sdk/metric v1.42.0 - go.opentelemetry.io/otel/trace v1.42.0 + go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.68.0 + go.opentelemetry.io/otel v1.43.0 + go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0 + go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.43.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0 + 
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.43.0 + go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0 + go.opentelemetry.io/otel/metric v1.43.0 + go.opentelemetry.io/otel/sdk v1.43.0 + go.opentelemetry.io/otel/sdk/metric v1.43.0 + go.opentelemetry.io/otel/trace v1.43.0 go.uber.org/zap v1.27.1 golang.org/x/crypto v0.49.0 ) require ( - github.com/bytedance/gopkg v0.1.3 // indirect + github.com/bytedance/gopkg v0.1.4 // indirect github.com/bytedance/sonic v1.15.0 // indirect - github.com/bytedance/sonic/loader v0.5.0 // indirect + github.com/bytedance/sonic/loader v0.5.1 // indirect github.com/cenkalti/backoff/v5 v5.0.3 // indirect github.com/cespare/xxhash/v2 v2.3.0 // indirect github.com/cloudwego/base64x v0.1.6 // indirect github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect github.com/gabriel-vasile/mimetype v1.4.13 // indirect - github.com/gin-contrib/sse v1.1.0 // indirect + github.com/gin-contrib/sse v1.1.1 // indirect github.com/go-logr/logr v1.4.3 // indirect github.com/go-logr/stdr v1.2.2 // indirect github.com/go-openapi/jsonpointer v0.21.0 // indirect github.com/go-openapi/swag v0.23.0 // indirect github.com/go-playground/locales v0.14.1 // indirect github.com/go-playground/universal-translator v0.18.1 // indirect - github.com/go-playground/validator/v10 v10.30.1 // indirect - github.com/goccy/go-json v0.10.5 // indirect + github.com/go-playground/validator/v10 v10.30.2 // indirect + github.com/goccy/go-json v0.10.6 // indirect github.com/goccy/go-yaml v1.19.2 // indirect github.com/google/uuid v1.6.0 // indirect github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 // indirect @@ -55,9 +55,9 @@ require ( github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect github.com/modern-go/reflect2 v1.0.2 // indirect github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect - github.com/oasdiff/yaml 
v0.0.0-20260313112342-a3ea61cb4d4c // indirect - github.com/oasdiff/yaml3 v0.0.0-20260224194419-61cd415a242b // indirect - github.com/pelletier/go-toml/v2 v2.2.4 // indirect + github.com/oasdiff/yaml v0.0.9 // indirect + github.com/oasdiff/yaml3 v0.0.9 // indirect + github.com/pelletier/go-toml/v2 v2.3.0 // indirect github.com/perimeterx/marshmallow v1.1.5 // indirect github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect github.com/quic-go/qpack v0.6.0 // indirect @@ -68,11 +68,11 @@ require ( github.com/yuin/gopher-lua v1.1.1 // indirect go.mongodb.org/mongo-driver/v2 v2.5.0 // indirect go.opentelemetry.io/auto/sdk v1.2.1 // indirect - go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.42.0 // indirect + go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 // indirect go.opentelemetry.io/proto/otlp v1.10.0 // indirect go.uber.org/atomic v1.11.0 // indirect go.uber.org/multierr v1.10.0 // indirect - golang.org/x/arch v0.24.0 // indirect + golang.org/x/arch v0.25.0 // indirect golang.org/x/net v0.52.0 // indirect golang.org/x/sys v0.42.0 // indirect golang.org/x/text v0.35.0 // indirect diff --git a/authsession/go.sum b/authsession/go.sum index fbeb233..b94415b 100644 --- a/authsession/go.sum +++ b/authsession/go.sum @@ -4,12 +4,10 @@ github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs= github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c= github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA= github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0= -github.com/bytedance/gopkg v0.1.3 h1:TPBSwH8RsouGCBcMBktLt1AymVo2TVsBVCY4b6TnZ/M= -github.com/bytedance/gopkg v0.1.3/go.mod h1:576VvJ+eJgyCzdjS+c4+77QF3p7ubbtiKARP3TxducM= +github.com/bytedance/gopkg v0.1.4 h1:oZnQwnX82KAIWb7033bEwtxvTqXcYMxDBaQxo5JJHWM= github.com/bytedance/sonic v1.15.0 h1:/PXeWFaR5ElNcVE84U0dOHjiMHQOwNIx3K4ymzh/uSE= github.com/bytedance/sonic 
v1.15.0/go.mod h1:tFkWrPz0/CUCLEF4ri4UkHekCIcdnkqXw9VduqpJh0k= -github.com/bytedance/sonic/loader v0.5.0 h1:gXH3KVnatgY7loH5/TkeVyXPfESoqSBSBEiDd5VjlgE= -github.com/bytedance/sonic/loader v0.5.0/go.mod h1:AR4NYCk5DdzZizZ5djGqQ92eEhCCcdf5x77udYiSJRo= +github.com/bytedance/sonic/loader v0.5.1 h1:Ygpfa9zwRCCKSlrp5bBP/b/Xzc3VxsAW+5NIYXrOOpI= github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM= github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw= github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= @@ -23,10 +21,8 @@ github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/r github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc= github.com/gabriel-vasile/mimetype v1.4.13 h1:46nXokslUBsAJE/wMsp5gtO500a4F3Nkz9Ufpk2AcUM= github.com/gabriel-vasile/mimetype v1.4.13/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s= -github.com/getkin/kin-openapi v0.134.0 h1:/L5+1+kfe6dXh8Ot/wqiTgUkjOIEJiC0bbYVziHB8rU= -github.com/getkin/kin-openapi v0.134.0/go.mod h1:wK6ZLG/VgoETO9pcLJ/VmAtIcl/DNlMayNTb716EUxE= -github.com/gin-contrib/sse v1.1.0 h1:n0w2GMuUpWDVp7qSpvze6fAu9iRxJY4Hmj6AmBOU05w= -github.com/gin-contrib/sse v1.1.0/go.mod h1:hxRZ5gVpWMT7Z0B0gSNYqqsSCNIJMjzvm6fqCz9vjwM= +github.com/getkin/kin-openapi v0.135.0 h1:751SjYfbiwqukYuVjwYEIKNfrSwS5YpA7DZnKSwQgtg= +github.com/gin-contrib/sse v1.1.1 h1:uGYpNwTacv5R68bSGMapo62iLTRa9l5zxGCps4hK6ko= github.com/gin-gonic/gin v1.12.0 h1:b3YAbrZtnf8N//yjKeU2+MQsh2mY5htkZidOM7O0wG8= github.com/gin-gonic/gin v1.12.0/go.mod h1:VxccKfsSllpKshkBWgVgRniFFAzFb9csfngsqANjnLc= github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= @@ -44,12 +40,10 @@ github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/o github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY= 
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY= github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY= -github.com/go-playground/validator/v10 v10.30.1 h1:f3zDSN/zOma+w6+1Wswgd9fLkdwy06ntQJp0BBvFG0w= -github.com/go-playground/validator/v10 v10.30.1/go.mod h1:oSuBIQzuJxL//3MelwSLD5hc2Tu889bF0Idm9Dg26cM= +github.com/go-playground/validator/v10 v10.30.2 h1:JiFIMtSSHb2/XBUbWM4i/MpeQm9ZK2xqPNk8vgvu5JQ= github.com/go-test/deep v1.0.8 h1:TDsG77qcSprGbC6vTN8OuXp5g+J+b5Pcguhf7Zt61VM= github.com/go-test/deep v1.0.8/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE= -github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4= -github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M= +github.com/goccy/go-json v0.10.6 h1:p8HrPJzOakx/mn/bQtjgNjdTcN+/S6FcG2CTtQOrHVU= github.com/goccy/go-yaml v1.19.2 h1:PmFC1S6h8ljIz6gMRBopkjP1TVT7xuwrButHID66PoM= github.com/goccy/go-yaml v1.19.2/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA= github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= @@ -84,12 +78,9 @@ github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9G github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9M+97sNutRR1RKhG96O6jWumTTnw= github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8= -github.com/oasdiff/yaml v0.0.0-20260313112342-a3ea61cb4d4c h1:7ACFcSaQsrWtrH4WHHfUqE1C+f8r2uv8KGaW0jTNjus= -github.com/oasdiff/yaml v0.0.0-20260313112342-a3ea61cb4d4c/go.mod h1:JKox4Gszkxt57kj27u7rvi7IFoIULvCZHUsBTUmQM/s= -github.com/oasdiff/yaml3 v0.0.0-20260224194419-61cd415a242b h1:vivRhVUAa9t1q0Db4ZmezBP8pWQWnXHFokZj0AOea2g= -github.com/oasdiff/yaml3 v0.0.0-20260224194419-61cd415a242b/go.mod 
h1:y5+oSEHCPT/DGrS++Wc/479ERge0zTFxaF8PbGKcg2o= -github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4= -github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= +github.com/oasdiff/yaml v0.0.9 h1:zQOvd2UKoozsSsAknnWoDJlSK4lC0mpmjfDsfqNwX48= +github.com/oasdiff/yaml3 v0.0.9 h1:rWPrKccrdUm8J0F3sGuU+fuh9+1K/RdJlWF7O/9yw2g= +github.com/pelletier/go-toml/v2 v2.3.0 h1:k59bC/lIZREW0/iVaQR8nDHxVq8OVlIzYCOJf421CaM= github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s= github.com/perimeterx/marshmallow v1.1.5/go.mod h1:dsXbUu8CRzfYP5a87xpp0xq9S3u0Vchtcl8we9tYaXw= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= @@ -127,34 +118,20 @@ go.mongodb.org/mongo-driver/v2 v2.5.0 h1:yXUhImUjjAInNcpTcAlPHiT7bIXhshCTL3jVBkF go.mongodb.org/mongo-driver/v2 v2.5.0/go.mod h1:yOI9kBsufol30iFsl1slpdq1I0eHPzybRWdyYUs8K/0= go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64= go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y= -go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.67.0 h1:E7DmskpIO7ZR6QI6zKSEKIDNUYoKw9oHXP23gzbCdU0= -go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.67.0/go.mod h1:WB2cS9y+AwqqKhoo9gw6/ZxlSjFBUQGZ8BQOaD3FVXM= -go.opentelemetry.io/contrib/propagators/b3 v1.42.0 h1:B2Pew5ufEtgkjLF+tSkXjgYZXQr9m7aCm1wLKB0URbU= -go.opentelemetry.io/contrib/propagators/b3 v1.42.0/go.mod h1:iPgUcSEF5DORW6+yNbdw/YevUy+QqJ508ncjhrRSCjc= -go.opentelemetry.io/otel v1.42.0 h1:lSQGzTgVR3+sgJDAU/7/ZMjN9Z+vUip7leaqBKy4sho= -go.opentelemetry.io/otel v1.42.0/go.mod h1:lJNsdRMxCUIWuMlVJWzecSMuNjE7dOYyWlqOXWkdqCc= -go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.42.0 h1:MdKucPl/HbzckWWEisiNqMPhRrAOQX8r4jTuGr636gk= -go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.42.0/go.mod 
h1:RolT8tWtfHcjajEH5wFIZ4Dgh5jpPdFXYV9pTAk/qjc= -go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.42.0 h1:H7O6RlGOMTizyl3R08Kn5pdM06bnH8oscSj7o11tmLA= -go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.42.0/go.mod h1:mBFWu/WOVDkWWsR7Tx7h6EpQB8wsv7P0Yrh0Pb7othc= -go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.42.0 h1:THuZiwpQZuHPul65w4WcwEnkX2QIuMT+UFoOrygtoJw= -go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.42.0/go.mod h1:J2pvYM5NGHofZ2/Ru6zw/TNWnEQp5crgyDeSrYpXkAw= -go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.42.0 h1:zWWrB1U6nqhS/k6zYB74CjRpuiitRtLLi68VcgmOEto= -go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.42.0/go.mod h1:2qXPNBX1OVRC0IwOnfo1ljoid+RD0QK3443EaqVlsOU= -go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.42.0 h1:uLXP+3mghfMf7XmV4PkGfFhFKuNWoCvvx5wP/wOXo0o= -go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.42.0/go.mod h1:v0Tj04armyT59mnURNUJf7RCKcKzq+lgJs6QSjHjaTc= -go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.42.0 h1:lSZHgNHfbmQTPfuTmWVkEu8J8qXaQwuV30pjCcAUvP8= -go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.42.0/go.mod h1:so9ounLcuoRDu033MW/E0AD4hhUjVqswrMF5FoZlBcw= -go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.42.0 h1:s/1iRkCKDfhlh1JF26knRneorus8aOwVIDhvYx9WoDw= -go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.42.0/go.mod h1:UI3wi0FXg1Pofb8ZBiBLhtMzgoTm1TYkMvn71fAqDzs= -go.opentelemetry.io/otel/metric v1.42.0 h1:2jXG+3oZLNXEPfNmnpxKDeZsFI5o4J+nz6xUlaFdF/4= -go.opentelemetry.io/otel/metric v1.42.0/go.mod h1:RlUN/7vTU7Ao/diDkEpQpnz3/92J9ko05BIwxYa2SSI= -go.opentelemetry.io/otel/sdk v1.42.0 h1:LyC8+jqk6UJwdrI/8VydAq/hvkFKNHZVIWuslJXYsDo= -go.opentelemetry.io/otel/sdk v1.42.0/go.mod h1:rGHCAxd9DAph0joO4W6OPwxjNTYWghRWmkHuGbayMts= -go.opentelemetry.io/otel/sdk/metric v1.42.0 h1:D/1QR46Clz6ajyZ3G8SgNlTJKBdGp84q9RKCAZ3YGuA= -go.opentelemetry.io/otel/sdk/metric v1.42.0/go.mod 
h1:Ua6AAlDKdZ7tdvaQKfSmnFTdHx37+J4ba8MwVCYM5hc= -go.opentelemetry.io/otel/trace v1.42.0 h1:OUCgIPt+mzOnaUTpOQcBiM/PLQ/Op7oq6g4LenLmOYY= -go.opentelemetry.io/otel/trace v1.42.0/go.mod h1:f3K9S+IFqnumBkKhRJMeaZeNk9epyhnCmQh/EysQCdc= +go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.68.0 h1:5FXSL2s6afUC1bzNzl1iedZZ8yqR7GOhbCoEXtyeK6Q= +go.opentelemetry.io/contrib/propagators/b3 v1.43.0 h1:CETqV3QLLPTy5yNrqyMr41VnAOOD4lsRved7n4QG00A= +go.opentelemetry.io/otel v1.43.0 h1:mYIM03dnh5zfN7HautFE4ieIig9amkNANT+xcVxAj9I= +go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0 h1:8UQVDcZxOJLtX6gxtDt3vY2WTgvZqMQRzjsqiIHQdkc= +go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.43.0 h1:w1K+pCJoPpQifuVpsKamUdn9U0zM3xUziVOqsGksUrY= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 h1:88Y4s2C8oTui1LGM6bTWkw0ICGcOLCAI5l6zsD1j20k= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0 h1:RAE+JPfvEmvy+0LzyUA25/SGawPwIUbZ6u0Wug54sLc= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0 h1:3iZJKlCZufyRzPzlQhUIWVmfltrXuGyfjREgGP3UUjc= +go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.43.0 h1:TC+BewnDpeiAmcscXbGMfxkO+mwYUwE/VySwvw88PfA= +go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0 h1:mS47AX77OtFfKG4vtp+84kuGSFZHTyxtXIN269vChY0= +go.opentelemetry.io/otel/metric v1.43.0 h1:d7638QeInOnuwOONPp4JAOGfbCEpYb+K6DVWvdxGzgM= +go.opentelemetry.io/otel/sdk v1.43.0 h1:pi5mE86i5rTeLXqoF/hhiBtUNcrAGHLKQdhg4h4V9Dg= +go.opentelemetry.io/otel/sdk/metric v1.43.0 h1:S88dyqXjJkuBNLeMcVPRFXpRw2fuwdvfCGLEo89fDkw= +go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09nk+3A= go.opentelemetry.io/proto/otlp v1.10.0 h1:IQRWgT5srOCYfiWnpqUYz9CVmbO8bFmKcwYxpuCSL2g= go.opentelemetry.io/proto/otlp v1.10.0/go.mod h1:/CV4QoCR/S9yaPj8utp3lvQPoqMtxXdzn7ozvvozVqk= go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE= @@ -167,8 +144,7 @@ 
go.uber.org/multierr v1.10.0 h1:S0h4aNzvfcFsC3dRF1jLoaov7oRaKqRGC/pUEJ2yvPQ= go.uber.org/multierr v1.10.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc= go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= -golang.org/x/arch v0.24.0 h1:qlJ3M9upxvFfwRM51tTg3Yl+8CP9vCC1E7vlFpgv99Y= -golang.org/x/arch v0.24.0/go.mod h1:dNHoOeKiyja7GTvF9NJS1l3Z2yntpQNzgrjh1cU103A= +golang.org/x/arch v0.25.0 h1:qnk6Ksugpi5Bz32947rkUgDt9/s5qvqDPl/gBKdMJLE= golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4= golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA= golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0= diff --git a/authsession/user_service_real_runtime_compatibility_test.go b/authsession/user_service_real_runtime_compatibility_test.go new file mode 100644 index 0000000..78ad37d --- /dev/null +++ b/authsession/user_service_real_runtime_compatibility_test.go @@ -0,0 +1,273 @@ +package authsession + +import ( + "bytes" + "context" + "fmt" + "io" + "net" + "os" + "os/exec" + "path/filepath" + "runtime" + "strings" + "syscall" + "testing" + "time" + + "galaxy/authsession/internal/adapters/userservice" + "galaxy/authsession/internal/domain/common" + "galaxy/authsession/internal/domain/userresolution" + "galaxy/authsession/internal/ports" + + "github.com/alicebob/miniredis/v2" +) + +func TestUserServiceRESTClientWorksAgainstRealUserServiceRuntime(t *testing.T) { + redisServer := miniredis.RunT(t) + internalAddr := freeTCPAddress(t) + binaryPath := buildUserServiceBinary(t) + process := startUserServiceProcess(t, binaryPath, map[string]string{ + "USERSERVICE_INTERNAL_HTTP_ADDR": internalAddr, + "USERSERVICE_REDIS_ADDR": redisServer.Addr(), + }) + waitForTCP(t, process, internalAddr) + + client, err := userservice.NewRESTClient(userservice.Config{ + BaseURL: "http://" + internalAddr, + RequestTimeout: 500 * 
time.Millisecond, + }) + if err != nil { + t.Fatalf("NewRESTClient() error = %v, want nil", err) + } + t.Cleanup(func() { + _ = client.Close() + }) + + creatableEmail := common.Email("pilot@example.com") + + resolution, err := client.ResolveByEmail(context.Background(), creatableEmail) + if err != nil { + t.Fatalf("ResolveByEmail(creatable) error = %v, want nil", err) + } + if got, want := resolution.Kind, userresolution.KindCreatable; got != want { + t.Fatalf("ResolveByEmail(creatable).Kind = %q, want %q", got, want) + } + + created, err := client.EnsureUserByEmail(context.Background(), ports.EnsureUserInput{ + Email: creatableEmail, + RegistrationContext: &ports.RegistrationContext{ + PreferredLanguage: "en", + TimeZone: "Europe/Kaliningrad", + }, + }) + if err != nil { + t.Fatalf("EnsureUserByEmail(created) error = %v, want nil", err) + } + if got, want := created.Outcome, ports.EnsureUserOutcomeCreated; got != want { + t.Fatalf("EnsureUserByEmail(created).Outcome = %q, want %q", got, want) + } + if created.UserID.IsZero() { + t.Fatalf("EnsureUserByEmail(created).UserID = zero, want non-zero") + } + + existing, err := client.ResolveByEmail(context.Background(), creatableEmail) + if err != nil { + t.Fatalf("ResolveByEmail(existing) error = %v, want nil", err) + } + if got, want := existing.Kind, userresolution.KindExisting; got != want { + t.Fatalf("ResolveByEmail(existing).Kind = %q, want %q", got, want) + } + if got, want := existing.UserID, created.UserID; got != want { + t.Fatalf("ResolveByEmail(existing).UserID = %q, want %q", got, want) + } + + exists, err := client.ExistsByUserID(context.Background(), created.UserID) + if err != nil { + t.Fatalf("ExistsByUserID(existing) error = %v, want nil", err) + } + if !exists { + t.Fatalf("ExistsByUserID(existing) = false, want true") + } + + blocked, err := client.BlockByUserID(context.Background(), ports.BlockUserByIDInput{ + UserID: created.UserID, + ReasonCode: userresolution.BlockReasonCode("policy_blocked"), + 
}) + if err != nil { + t.Fatalf("BlockByUserID() error = %v, want nil", err) + } + if got, want := blocked.Outcome, ports.BlockUserOutcomeBlocked; got != want { + t.Fatalf("BlockByUserID().Outcome = %q, want %q", got, want) + } + if got, want := blocked.UserID, created.UserID; got != want { + t.Fatalf("BlockByUserID().UserID = %q, want %q", got, want) + } + + repeated, err := client.BlockByEmail(context.Background(), ports.BlockUserByEmailInput{ + Email: creatableEmail, + ReasonCode: userresolution.BlockReasonCode("policy_blocked"), + }) + if err != nil { + t.Fatalf("BlockByEmail(repeated) error = %v, want nil", err) + } + if got, want := repeated.Outcome, ports.BlockUserOutcomeAlreadyBlocked; got != want { + t.Fatalf("BlockByEmail(repeated).Outcome = %q, want %q", got, want) + } + if got, want := repeated.UserID, created.UserID; got != want { + t.Fatalf("BlockByEmail(repeated).UserID = %q, want %q", got, want) + } + + blockedResolution, err := client.ResolveByEmail(context.Background(), creatableEmail) + if err != nil { + t.Fatalf("ResolveByEmail(blocked) error = %v, want nil", err) + } + if got, want := blockedResolution.Kind, userresolution.KindBlocked; got != want { + t.Fatalf("ResolveByEmail(blocked).Kind = %q, want %q", got, want) + } + if got, want := blockedResolution.BlockReasonCode, userresolution.BlockReasonCode("policy_blocked"); got != want { + t.Fatalf("ResolveByEmail(blocked).BlockReasonCode = %q, want %q", got, want) + } +} + +type userServiceProcess struct { + cmd *exec.Cmd + doneCh chan struct{} + logs bytes.Buffer +} + +func startUserServiceProcess(t *testing.T, binaryPath string, env map[string]string) *userServiceProcess { + t.Helper() + + cmd := exec.Command(binaryPath) + cmd.Env = mergeEnvironment(os.Environ(), env) + + process := &userServiceProcess{ + cmd: cmd, + doneCh: make(chan struct{}), + } + cmd.Stdout = &process.logs + cmd.Stderr = &process.logs + + if err := cmd.Start(); err != nil { + t.Fatalf("start user service process: %v", err) 
+ } + + go func() { + _ = cmd.Wait() + close(process.doneCh) + }() + + t.Cleanup(func() { + stopUserServiceProcess(t, process) + if t.Failed() { + t.Logf("userservice logs:\n%s", process.logs.String()) + } + }) + + return process +} + +func stopUserServiceProcess(t *testing.T, process *userServiceProcess) { + t.Helper() + + if process == nil || process.cmd == nil || process.cmd.Process == nil { + return + } + + select { + case <-process.doneCh: + return + default: + } + + _ = process.cmd.Process.Signal(syscall.SIGTERM) + + select { + case <-process.doneCh: + case <-time.After(5 * time.Second): + _ = process.cmd.Process.Kill() + <-process.doneCh + } +} + +func waitForTCP(t *testing.T, process *userServiceProcess, address string) { + t.Helper() + + deadline := time.Now().Add(10 * time.Second) + for time.Now().Before(deadline) { + select { + case <-process.doneCh: + t.Fatalf("userservice exited before %s became reachable\n%s", address, process.logs.String()) + default: + } + + conn, err := net.DialTimeout("tcp", address, 100*time.Millisecond) + if err == nil { + _ = conn.Close() + return + } + + time.Sleep(25 * time.Millisecond) + } + + t.Fatalf("userservice did not become reachable at %s\n%s", address, process.logs.String()) +} + +func freeTCPAddress(t *testing.T) string { + t.Helper() + + listener, err := net.Listen("tcp", "127.0.0.1:0") + if err != nil { + t.Fatalf("reserve free TCP address: %v", err) + } + defer listener.Close() + + return listener.Addr().String() +} + +func buildUserServiceBinary(t *testing.T) string { + t.Helper() + + outputPath := filepath.Join(t.TempDir(), "userservice") + cmd := exec.Command("go", "build", "-o", outputPath, "./user/cmd/userservice") + cmd.Dir = repositoryRoot(t) + output, err := cmd.CombinedOutput() + if err != nil { + t.Fatalf("build userservice binary: %v\n%s", err, output) + } + + return outputPath +} + +func repositoryRoot(t *testing.T) string { + t.Helper() + + _, file, _, ok := runtime.Caller(0) + if !ok { + 
t.Fatal("resolve repository root: runtime caller unavailable") + } + + return filepath.Clean(filepath.Join(filepath.Dir(file), "..")) +} + +func mergeEnvironment(base []string, overrides map[string]string) []string { + values := make(map[string]string, len(base)+len(overrides)) + for _, entry := range base { + name, value, ok := strings.Cut(entry, "=") + if ok { + values[name] = value + } + } + for name, value := range overrides { + values[name] = value + } + + merged := make([]string, 0, len(values)) + for name, value := range values { + merged = append(merged, fmt.Sprintf("%s=%s", name, value)) + } + return merged +} + +var _ io.Writer = (*bytes.Buffer)(nil) diff --git a/game/go.mod b/game/go.mod index fbc7c1f..5369e82 100644 --- a/game/go.mod +++ b/game/go.mod @@ -4,22 +4,22 @@ go 1.26.0 require ( github.com/gin-gonic/gin v1.12.0 - github.com/go-playground/validator/v10 v10.30.1 + github.com/go-playground/validator/v10 v10.30.2 github.com/google/uuid v1.6.0 github.com/stretchr/testify v1.11.1 ) require ( - github.com/bytedance/gopkg v0.1.3 // indirect + github.com/bytedance/gopkg v0.1.4 // indirect github.com/bytedance/sonic v1.15.0 // indirect - github.com/bytedance/sonic/loader v0.5.0 // indirect + github.com/bytedance/sonic/loader v0.5.1 // indirect github.com/cloudwego/base64x v0.1.6 // indirect github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect github.com/gabriel-vasile/mimetype v1.4.13 // indirect - github.com/gin-contrib/sse v1.1.0 // indirect + github.com/gin-contrib/sse v1.1.1 // indirect github.com/go-playground/locales v0.14.1 // indirect github.com/go-playground/universal-translator v0.18.1 // indirect - github.com/goccy/go-json v0.10.5 // indirect + github.com/goccy/go-json v0.10.6 // indirect github.com/goccy/go-yaml v1.19.2 // indirect github.com/json-iterator/go v1.1.12 // indirect github.com/klauspost/cpuid/v2 v2.3.0 // indirect @@ -27,7 +27,7 @@ require ( github.com/mattn/go-isatty v0.0.20 // indirect 
 	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
 	github.com/modern-go/reflect2 v1.0.2 // indirect
-	github.com/pelletier/go-toml/v2 v2.2.4 // indirect
+	github.com/pelletier/go-toml/v2 v2.3.0 // indirect
 	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
 	github.com/quic-go/qpack v0.6.0 // indirect
 	github.com/quic-go/quic-go v0.59.0 // indirect
@@ -35,7 +35,7 @@ require (
 	github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
 	github.com/ugorji/go/codec v1.3.1 // indirect
 	go.mongodb.org/mongo-driver/v2 v2.5.0 // indirect
-	golang.org/x/arch v0.24.0 // indirect
+	golang.org/x/arch v0.25.0 // indirect
 	golang.org/x/crypto v0.49.0 // indirect
 	golang.org/x/net v0.52.0 // indirect
 	golang.org/x/sys v0.42.0 // indirect
diff --git a/game/go.sum b/game/go.sum
index 724011a..464a5d9 100644
--- a/game/go.sum
+++ b/game/go.sum
@@ -1,9 +1,7 @@
-github.com/bytedance/gopkg v0.1.3 h1:TPBSwH8RsouGCBcMBktLt1AymVo2TVsBVCY4b6TnZ/M=
-github.com/bytedance/gopkg v0.1.3/go.mod h1:576VvJ+eJgyCzdjS+c4+77QF3p7ubbtiKARP3TxducM=
+github.com/bytedance/gopkg v0.1.4 h1:oZnQwnX82KAIWb7033bEwtxvTqXcYMxDBaQxo5JJHWM=
 github.com/bytedance/sonic v1.15.0 h1:/PXeWFaR5ElNcVE84U0dOHjiMHQOwNIx3K4ymzh/uSE=
 github.com/bytedance/sonic v1.15.0/go.mod h1:tFkWrPz0/CUCLEF4ri4UkHekCIcdnkqXw9VduqpJh0k=
-github.com/bytedance/sonic/loader v0.5.0 h1:gXH3KVnatgY7loH5/TkeVyXPfESoqSBSBEiDd5VjlgE=
-github.com/bytedance/sonic/loader v0.5.0/go.mod h1:AR4NYCk5DdzZizZ5djGqQ92eEhCCcdf5x77udYiSJRo=
+github.com/bytedance/sonic/loader v0.5.1 h1:Ygpfa9zwRCCKSlrp5bBP/b/Xzc3VxsAW+5NIYXrOOpI=
 github.com/cloudwego/base64x v0.1.6 h1:t11wG9AECkCDk5fMSoxmufanudBtJ+/HemLstXDLI2M=
 github.com/cloudwego/base64x v0.1.6/go.mod h1:OFcloc187FXDaYHvrNIjxSe8ncn0OOM8gEHfghB2IPU=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -11,8 +9,7 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
 github.com/gabriel-vasile/mimetype v1.4.13 h1:46nXokslUBsAJE/wMsp5gtO500a4F3Nkz9Ufpk2AcUM=
 github.com/gabriel-vasile/mimetype v1.4.13/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
-github.com/gin-contrib/sse v1.1.0 h1:n0w2GMuUpWDVp7qSpvze6fAu9iRxJY4Hmj6AmBOU05w=
-github.com/gin-contrib/sse v1.1.0/go.mod h1:hxRZ5gVpWMT7Z0B0gSNYqqsSCNIJMjzvm6fqCz9vjwM=
+github.com/gin-contrib/sse v1.1.1 h1:uGYpNwTacv5R68bSGMapo62iLTRa9l5zxGCps4hK6ko=
 github.com/gin-gonic/gin v1.12.0 h1:b3YAbrZtnf8N//yjKeU2+MQsh2mY5htkZidOM7O0wG8=
 github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
 github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
@@ -20,10 +17,8 @@ github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/o
 github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
 github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
 github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
-github.com/go-playground/validator/v10 v10.30.1 h1:f3zDSN/zOma+w6+1Wswgd9fLkdwy06ntQJp0BBvFG0w=
-github.com/go-playground/validator/v10 v10.30.1/go.mod h1:oSuBIQzuJxL//3MelwSLD5hc2Tu889bF0Idm9Dg26cM=
-github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
-github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
+github.com/go-playground/validator/v10 v10.30.2 h1:JiFIMtSSHb2/XBUbWM4i/MpeQm9ZK2xqPNk8vgvu5JQ=
+github.com/goccy/go-json v0.10.6 h1:p8HrPJzOakx/mn/bQtjgNjdTcN+/S6FcG2CTtQOrHVU=
 github.com/goccy/go-yaml v1.19.2 h1:PmFC1S6h8ljIz6gMRBopkjP1TVT7xuwrButHID66PoM=
 github.com/goccy/go-yaml v1.19.2/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA=
 github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
@@ -48,8 +43,7 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w
 github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
 github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
 github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
-github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
-github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
+github.com/pelletier/go-toml/v2 v2.3.0 h1:k59bC/lIZREW0/iVaQR8nDHxVq8OVlIzYCOJf421CaM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
 github.com/quic-go/qpack v0.6.0 h1:g7W+BMYynC1LbYLSqRt8PBg5Tgwxn214ZZR34VIOjz8=
@@ -75,8 +69,7 @@ github.com/ugorji/go/codec v1.3.1/go.mod h1:pRBVtBSKl77K30Bv8R2P+cLSGaTtex6fsA2W
 go.mongodb.org/mongo-driver/v2 v2.5.0 h1:yXUhImUjjAInNcpTcAlPHiT7bIXhshCTL3jVBkF3xaE=
 go.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y=
 go.uber.org/mock v0.6.0/go.mod h1:KiVJ4BqZJaMj4svdfmHM0AUx4NJYO8ZNpPnZn1Z+BBU=
-golang.org/x/arch v0.24.0 h1:qlJ3M9upxvFfwRM51tTg3Yl+8CP9vCC1E7vlFpgv99Y=
-golang.org/x/arch v0.24.0/go.mod h1:dNHoOeKiyja7GTvF9NJS1l3Z2yntpQNzgrjh1cU103A=
+golang.org/x/arch v0.25.0 h1:qnk6Ksugpi5Bz32947rkUgDt9/s5qvqDPl/gBKdMJLE=
 golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
 golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0=
 golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
diff --git a/gateway/README.md b/gateway/README.md
index c9073b7..912b505 100644
--- a/gateway/README.md
+++ b/gateway/README.md
@@ -23,6 +23,8 @@ Optional integrations:
 - `GATEWAY_ADMIN_HTTP_ADDR` enables the private `/metrics` listener;
 - `GATEWAY_AUTH_SERVICE_BASE_URL` enables real public auth handling through
   Auth / Session Service public HTTP;
+- `GATEWAY_USER_SERVICE_BASE_URL` enables direct authenticated self-service
+  routing to User Service internal HTTP;
 - injected downstream routes are required for successful `ExecuteCommand`.
 
 Operational caveats:
@@ -118,6 +120,10 @@ The public auth JSON contract uses a challenge-token flow:
   key for the device session being created.
 - `time_zone` is the client-selected IANA time zone name forwarded unchanged
   to `Auth / Session Service`.
+The current create-path source of truth for `preferred_language` is still the
+temporary authsession-to-user rollout using `"en"`. Gateway-side language
+derivation is a later rollout. The public `confirm-email-code` DTO itself
+remains unchanged.
 
 These routes remain unauthenticated and delegate only through an injected
 `AuthServiceClient`.
@@ -322,10 +328,24 @@ The authenticated transport uses a split contract:
 - signatures are computed over canonical envelope fields and a hash of raw
   FlatBuffers bytes.
 
-The gateway treats authenticated request `payload_bytes` as opaque business
-data.
-It verifies integrity and forwards verified bytes downstream without rewriting
-them.
+The gateway verifies authenticated payload bytes before any downstream call.
+Most downstream routes may still treat those bytes as opaque, but the gateway
+is also allowed to transcode verified FlatBuffers payloads into trusted
+downstream REST/JSON calls when the concrete downstream contract requires it.
+
+The current direct `Gateway -> User` self-service boundary uses that pattern:
+
+- external message types:
+  - `user.account.get`
+  - `user.profile.update`
+  - `user.settings.update`
+- external payloads and responses:
+  - FlatBuffers
+- internal downstream transport:
+  - strict REST/JSON to User Service
+- business error projection:
+  - gateway `result_code`
+  - FlatBuffers error payload mirroring User Service `code` and `message`
 
 The request envelope version literal is `v1`.
 `payload_hash` is the raw 32-byte SHA-256 digest of `payload_bytes`.
@@ -965,6 +985,11 @@ failing process startup.
 Resolves the target downstream service or adapter by the full exact-match
 `message_type` literal.
 
+The default `cmd/gateway` wiring keeps the reserved `user.*` self-service
+message types mounted even when `GATEWAY_USER_SERVICE_BASE_URL` is unset. In
+that configuration they fail closed as dependency-unavailable instead of
+falling through to a generic route miss.
+
 ### DownstreamClient
 
 Executes a verified authenticated command against a downstream internal service
@@ -972,6 +997,11 @@ and returns response payload bytes plus a stable opaque result code.
 An empty or whitespace-only result code is treated as an internal downstream
 contract violation.
 
+Downstream clients may be pure pass-through adapters or gateway-owned
+transcoding adapters. The current User Service adapter decodes authenticated
+FlatBuffers payloads, calls the trusted internal REST API, and re-encodes the
+result into FlatBuffers before the signed gateway response is emitted.
+
 ### EventSubscriber
 
 Subscribes to internal pub/sub topics used for:
diff --git a/gateway/TODO.md b/gateway/TODO.md
index 93a9e31..f9e1e12 100644
--- a/gateway/TODO.md
+++ b/gateway/TODO.md
@@ -3,5 +3,8 @@
 ## 1. Suggest User's Preferred Language when registering a new User
 
 Upon user's device/session registration flow, `preferred_language` value
-must be obtained via existing [geoip](../pkg/geoip) package by returned Country.
-When geoip feils to return country by ip, fallback is `en` language.
+must be obtained via existing [geoip](../pkg/geoip) package by returned
+country.
+The derived value must be emitted as a valid BCP 47 language tag because
+`User Service` now validates that contract semantically on create.
+When geoip fails to return country by IP, fallback is `en`.
diff --git a/gateway/cmd/gateway/main.go b/gateway/cmd/gateway/main.go
index a099172..0398724 100644
--- a/gateway/cmd/gateway/main.go
+++ b/gateway/cmd/gateway/main.go
@@ -13,6 +13,7 @@ import (
 	"galaxy/gateway/internal/authn"
 	"galaxy/gateway/internal/config"
 	"galaxy/gateway/internal/downstream"
+	"galaxy/gateway/internal/downstream/userservice"
 	"galaxy/gateway/internal/events"
 	"galaxy/gateway/internal/grpcapi"
 	"galaxy/gateway/internal/logging"
@@ -184,12 +185,27 @@ func newAuthenticatedGRPCDependencies(ctx context.Context, cfg config.Config, lo
 		)
 	}
 
+	userRoutes, closeUserServiceRoutes, err := userservice.NewRoutes(cfg.UserService.BaseURL)
+	if err != nil {
+		closeErr := errors.Join(
+			fallbackSessionCache.Close(),
+			replayStore.Close(),
+			sessionSubscriber.Close(),
+			clientEventSubscriber.Close(),
+		)
+		return grpcapi.ServerDependencies{}, nil, nil, errors.Join(
+			fmt.Errorf("build authenticated grpc dependencies: user service routes: %w", err),
+			closeErr,
+		)
+	}
+
 	cleanup := func() error {
 		return errors.Join(
 			fallbackSessionCache.Close(),
 			replayStore.Close(),
 			sessionSubscriber.Close(),
 			clientEventSubscriber.Close(),
+			closeUserServiceRoutes(),
 		)
 	}
@@ -227,7 +243,7 @@ func newAuthenticatedGRPCDependencies(ctx context.Context, cfg config.Config, lo
 
 	return grpcapi.ServerDependencies{
 		Service:        grpcapi.NewFanOutPushStreamService(pushHub, responseSigner, nil, logger),
-		Router:         downstream.NewStaticRouter(nil),
+		Router:         downstream.NewStaticRouter(userRoutes),
 		ResponseSigner: responseSigner,
 		SessionCache:   sessionCache,
 		ReplayStore:    replayStore,
diff --git a/gateway/docs/examples.md b/gateway/docs/examples.md
index a23fc23..db80966 100644
--- a/gateway/docs/examples.md
+++ b/gateway/docs/examples.md
@@ -89,6 +89,26 @@ Example `ExecuteCommandResponse`:
 }
 ```
 
+Example authenticated self-service request metadata:
+
+```json
+{
+  "protocolVersion": "v1",
+  "deviceSessionId": "device-session-123",
+  "messageType": "user.account.get",
+  "timestampMs": "1775121600000",
+  "requestId": "request-account-123",
+  "payloadBytes": "RkxBVEJVRkZFUlNfVVNFUl9SRVFVRVNU",
+  "payloadHash": "5fY6Q8V9mK8x2B7v6v0V0m0i1rQ2QF0rQ8V1Yt1r8Ys=",
+  "signature": "3o4v8f3h0Y6I0x1bS7zY+8m0bV1Lk4D3yq8J2n8F1rD7yK9v8M1Q0w2s4a6f8d0Q0m3L6y8R1t5w7x9z0a2cA=="
+}
+```
+
+The external payload remains FlatBuffers. The current `Gateway -> User`
+self-service adapter decodes that payload, calls the trusted internal
+User Service REST API, then re-encodes the returned account aggregate or error
+envelope back into FlatBuffers before signing the response.
+
 Example bootstrap `GatewayEvent` sent after `SubscribeEvents` opens:
 
 ```json
diff --git a/gateway/docs/flows.md b/gateway/docs/flows.md
index 40118eb..c48af5f 100644
--- a/gateway/docs/flows.md
+++ b/gateway/docs/flows.md
@@ -52,6 +52,24 @@ sequenceDiagram
     Gateway-->>Client: ExecuteCommandResponse + signature
 ```
 
+## Direct Gateway -> User Self-Service Flow
+
+```mermaid
+sequenceDiagram
+    participant Client
+    participant Gateway
+    participant User as User Service
+
+    Client->>Gateway: ExecuteCommand(user.account.get | user.profile.update | user.settings.update)
+    Gateway->>Gateway: verify envelope + session + signature + replay
+    Gateway->>Gateway: decode FlatBuffers payload
+    Gateway->>User: trusted REST/JSON internal request
+    User-->>Gateway: JSON account aggregate or JSON error envelope
+    Gateway->>Gateway: encode FlatBuffers success or error payload
+    Gateway->>Gateway: sign response
+    Gateway-->>Client: ExecuteCommandResponse(result_code, payload_bytes, signature)
+```
+
 ## SubscribeEvents Lifecycle
 
 ```mermaid
diff --git a/gateway/docs/runtime.md b/gateway/docs/runtime.md
index 3400417..ec4a3f5 100644
--- a/gateway/docs/runtime.md
+++ b/gateway/docs/runtime.md
@@ -55,5 +55,7 @@ Notes:
 - The admin listener is optional and serves only Prometheus text metrics.
 - Public auth routing stays available without an upstream adapter, but returns
   `503 service_unavailable`.
-- Authenticated gRPC starts with an empty static router; `ExecuteCommand`
-  remains `UNIMPLEMENTED` until downstream routes are injected.
+- The default runtime reserves direct `user.*` authenticated self-service
+  routes. When `GATEWAY_USER_SERVICE_BASE_URL` is unset those routes stay
+  mounted but fail closed as dependency-unavailable instead of returning a
+  route miss.
diff --git a/gateway/go.mod b/gateway/go.mod
index a215cbb..70b6207 100644
--- a/gateway/go.mod
+++ b/gateway/go.mod
@@ -6,22 +6,22 @@ require (
 	buf.build/gen/go/bufbuild/protovalidate/protocolbuffers/go v1.36.11-20260209202127-80ab13bee0bf.1
 	buf.build/go/protovalidate v1.1.3
 	github.com/alicebob/miniredis/v2 v2.37.0
-	github.com/getkin/kin-openapi v0.134.0
+	github.com/getkin/kin-openapi v0.135.0
 	github.com/gin-gonic/gin v1.12.0
 	github.com/google/flatbuffers v25.12.19+incompatible
 	github.com/prometheus/client_golang v1.23.2
 	github.com/redis/go-redis/v9 v9.18.0
 	github.com/stretchr/testify v1.11.1
-	go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.67.0
+	go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.68.0
 	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.67.0
-	go.opentelemetry.io/otel v1.42.0
-	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.42.0
-	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.42.0
-	go.opentelemetry.io/otel/exporters/prometheus v0.64.0
-	go.opentelemetry.io/otel/metric v1.42.0
-	go.opentelemetry.io/otel/sdk v1.42.0
-	go.opentelemetry.io/otel/sdk/metric v1.42.0
-	go.opentelemetry.io/otel/trace v1.42.0
+	go.opentelemetry.io/otel v1.43.0
+	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0
+	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0
+	go.opentelemetry.io/otel/exporters/prometheus v0.65.0
+	go.opentelemetry.io/otel/metric v1.43.0
+	go.opentelemetry.io/otel/sdk v1.43.0
+	go.opentelemetry.io/otel/sdk/metric v1.43.0
+	go.opentelemetry.io/otel/trace v1.43.0
 	go.uber.org/zap v1.27.1
 	golang.org/x/time v0.15.0
 	google.golang.org/grpc v1.80.0
@@ -32,24 +32,24 @@ require (
 	cel.dev/expr v0.25.1 // indirect
 	github.com/antlr4-go/antlr/v4 v4.13.1 // indirect
 	github.com/beorn7/perks v1.0.1 // indirect
-	github.com/bytedance/gopkg v0.1.3 // indirect
+	github.com/bytedance/gopkg v0.1.4 // indirect
 	github.com/bytedance/sonic v1.15.0 // indirect
-	github.com/bytedance/sonic/loader v0.5.0 // indirect
+	github.com/bytedance/sonic/loader v0.5.1 // indirect
 	github.com/cenkalti/backoff/v5 v5.0.3 // indirect
 	github.com/cespare/xxhash/v2 v2.3.0 // indirect
 	github.com/cloudwego/base64x v0.1.6 // indirect
 	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
 	github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
 	github.com/gabriel-vasile/mimetype v1.4.13 // indirect
-	github.com/gin-contrib/sse v1.1.0 // indirect
+	github.com/gin-contrib/sse v1.1.1 // indirect
 	github.com/go-logr/logr v1.4.3 // indirect
 	github.com/go-logr/stdr v1.2.2 // indirect
 	github.com/go-openapi/jsonpointer v0.21.0 // indirect
 	github.com/go-openapi/swag v0.23.0 // indirect
 	github.com/go-playground/locales v0.14.1 // indirect
 	github.com/go-playground/universal-translator v0.18.1 // indirect
-	github.com/go-playground/validator/v10 v10.30.1 // indirect
-	github.com/goccy/go-json v0.10.5 // indirect
+	github.com/go-playground/validator/v10 v10.30.2 // indirect
+	github.com/goccy/go-json v0.10.6 // indirect
 	github.com/goccy/go-yaml v1.19.2 // indirect
 	github.com/google/cel-go v0.27.0 // indirect
 	github.com/google/uuid v1.6.0 // indirect
@@ -64,15 +64,15 @@ require (
 	github.com/modern-go/reflect2 v1.0.2 // indirect
 	github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect
 	github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
-	github.com/oasdiff/yaml v0.0.0-20260313112342-a3ea61cb4d4c // indirect
-	github.com/oasdiff/yaml3 v0.0.0-20260224194419-61cd415a242b // indirect
-	github.com/pelletier/go-toml/v2 v2.2.4 // indirect
+	github.com/oasdiff/yaml v0.0.9 // indirect
+	github.com/oasdiff/yaml3 v0.0.9 // indirect
+	github.com/pelletier/go-toml/v2 v2.3.0 // indirect
 	github.com/perimeterx/marshmallow v1.1.5 // indirect
 	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
 	github.com/prometheus/client_model v0.6.2 // indirect
 	github.com/prometheus/common v0.67.5 // indirect
 	github.com/prometheus/otlptranslator v1.0.0 // indirect
-	github.com/prometheus/procfs v0.19.2 // indirect
+	github.com/prometheus/procfs v0.20.1 // indirect
 	github.com/quic-go/qpack v0.6.0 // indirect
 	github.com/quic-go/quic-go v0.59.0 // indirect
 	github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
@@ -81,12 +81,12 @@ require (
 	github.com/yuin/gopher-lua v1.1.1 // indirect
 	go.mongodb.org/mongo-driver/v2 v2.5.0 // indirect
 	go.opentelemetry.io/auto/sdk v1.2.1 // indirect
-	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.42.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 // indirect
 	go.opentelemetry.io/proto/otlp v1.10.0 // indirect
 	go.uber.org/atomic v1.11.0 // indirect
 	go.uber.org/multierr v1.10.0 // indirect
-	go.yaml.in/yaml/v2 v2.4.3 // indirect
-	golang.org/x/arch v0.24.0 // indirect
+	go.yaml.in/yaml/v2 v2.4.4 // indirect
+	golang.org/x/arch v0.25.0 // indirect
 	golang.org/x/crypto v0.49.0 // indirect
 	golang.org/x/exp v0.0.0-20250813145105-42675adae3e6 // indirect
 	golang.org/x/net v0.52.0 // indirect
diff --git a/gateway/go.sum b/gateway/go.sum
index 4b3657e..0093f15 100644
--- a/gateway/go.sum
+++ b/gateway/go.sum
@@ -16,12 +16,10 @@
 github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
 github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
 github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
 github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
-github.com/bytedance/gopkg v0.1.3 h1:TPBSwH8RsouGCBcMBktLt1AymVo2TVsBVCY4b6TnZ/M=
-github.com/bytedance/gopkg v0.1.3/go.mod h1:576VvJ+eJgyCzdjS+c4+77QF3p7ubbtiKARP3TxducM=
+github.com/bytedance/gopkg v0.1.4 h1:oZnQwnX82KAIWb7033bEwtxvTqXcYMxDBaQxo5JJHWM=
 github.com/bytedance/sonic v1.15.0 h1:/PXeWFaR5ElNcVE84U0dOHjiMHQOwNIx3K4ymzh/uSE=
 github.com/bytedance/sonic v1.15.0/go.mod h1:tFkWrPz0/CUCLEF4ri4UkHekCIcdnkqXw9VduqpJh0k=
-github.com/bytedance/sonic/loader v0.5.0 h1:gXH3KVnatgY7loH5/TkeVyXPfESoqSBSBEiDd5VjlgE=
-github.com/bytedance/sonic/loader v0.5.0/go.mod h1:AR4NYCk5DdzZizZ5djGqQ92eEhCCcdf5x77udYiSJRo=
+github.com/bytedance/sonic/loader v0.5.1 h1:Ygpfa9zwRCCKSlrp5bBP/b/Xzc3VxsAW+5NIYXrOOpI=
 github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
 github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
 github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
@@ -35,10 +33,8 @@ github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/r
 github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
 github.com/gabriel-vasile/mimetype v1.4.13 h1:46nXokslUBsAJE/wMsp5gtO500a4F3Nkz9Ufpk2AcUM=
 github.com/gabriel-vasile/mimetype v1.4.13/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
-github.com/getkin/kin-openapi v0.134.0 h1:/L5+1+kfe6dXh8Ot/wqiTgUkjOIEJiC0bbYVziHB8rU=
-github.com/getkin/kin-openapi v0.134.0/go.mod h1:wK6ZLG/VgoETO9pcLJ/VmAtIcl/DNlMayNTb716EUxE=
-github.com/gin-contrib/sse v1.1.0 h1:n0w2GMuUpWDVp7qSpvze6fAu9iRxJY4Hmj6AmBOU05w=
-github.com/gin-contrib/sse v1.1.0/go.mod h1:hxRZ5gVpWMT7Z0B0gSNYqqsSCNIJMjzvm6fqCz9vjwM=
+github.com/getkin/kin-openapi v0.135.0 h1:751SjYfbiwqukYuVjwYEIKNfrSwS5YpA7DZnKSwQgtg=
+github.com/gin-contrib/sse v1.1.1 h1:uGYpNwTacv5R68bSGMapo62iLTRa9l5zxGCps4hK6ko=
 github.com/gin-gonic/gin v1.12.0 h1:b3YAbrZtnf8N//yjKeU2+MQsh2mY5htkZidOM7O0wG8=
 github.com/gin-gonic/gin v1.12.0/go.mod h1:VxccKfsSllpKshkBWgVgRniFFAzFb9csfngsqANjnLc=
 github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
@@ -56,12 +52,10 @@ github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/o
 github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
 github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
 github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
-github.com/go-playground/validator/v10 v10.30.1 h1:f3zDSN/zOma+w6+1Wswgd9fLkdwy06ntQJp0BBvFG0w=
-github.com/go-playground/validator/v10 v10.30.1/go.mod h1:oSuBIQzuJxL//3MelwSLD5hc2Tu889bF0Idm9Dg26cM=
+github.com/go-playground/validator/v10 v10.30.2 h1:JiFIMtSSHb2/XBUbWM4i/MpeQm9ZK2xqPNk8vgvu5JQ=
 github.com/go-test/deep v1.0.8 h1:TDsG77qcSprGbC6vTN8OuXp5g+J+b5Pcguhf7Zt61VM=
 github.com/go-test/deep v1.0.8/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE=
-github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
-github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
+github.com/goccy/go-json v0.10.6 h1:p8HrPJzOakx/mn/bQtjgNjdTcN+/S6FcG2CTtQOrHVU=
 github.com/goccy/go-yaml v1.19.2 h1:PmFC1S6h8ljIz6gMRBopkjP1TVT7xuwrButHID66PoM=
 github.com/goccy/go-yaml v1.19.2/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA=
 github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
@@ -106,12 +100,9 @@ github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9
 github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
 github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
 github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
-github.com/oasdiff/yaml v0.0.0-20260313112342-a3ea61cb4d4c h1:7ACFcSaQsrWtrH4WHHfUqE1C+f8r2uv8KGaW0jTNjus=
-github.com/oasdiff/yaml v0.0.0-20260313112342-a3ea61cb4d4c/go.mod h1:JKox4Gszkxt57kj27u7rvi7IFoIULvCZHUsBTUmQM/s=
-github.com/oasdiff/yaml3 v0.0.0-20260224194419-61cd415a242b h1:vivRhVUAa9t1q0Db4ZmezBP8pWQWnXHFokZj0AOea2g=
-github.com/oasdiff/yaml3 v0.0.0-20260224194419-61cd415a242b/go.mod h1:y5+oSEHCPT/DGrS++Wc/479ERge0zTFxaF8PbGKcg2o=
-github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
-github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
+github.com/oasdiff/yaml v0.0.9 h1:zQOvd2UKoozsSsAknnWoDJlSK4lC0mpmjfDsfqNwX48=
+github.com/oasdiff/yaml3 v0.0.9 h1:rWPrKccrdUm8J0F3sGuU+fuh9+1K/RdJlWF7O/9yw2g=
+github.com/pelletier/go-toml/v2 v2.3.0 h1:k59bC/lIZREW0/iVaQR8nDHxVq8OVlIzYCOJf421CaM=
 github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s=
 github.com/perimeterx/marshmallow v1.1.5/go.mod h1:dsXbUu8CRzfYP5a87xpp0xq9S3u0Vchtcl8we9tYaXw=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
@@ -124,8 +115,7 @@ github.com/prometheus/common v0.67.5 h1:pIgK94WWlQt1WLwAC5j2ynLaBRDiinoAb86HZHTU
 github.com/prometheus/common v0.67.5/go.mod h1:SjE/0MzDEEAyrdr5Gqc6G+sXI67maCxzaT3A2+HqjUw=
 github.com/prometheus/otlptranslator v1.0.0 h1:s0LJW/iN9dkIH+EnhiD3BlkkP5QVIUVEoIwkU+A6qos=
 github.com/prometheus/otlptranslator v1.0.0/go.mod h1:vRYWnXvI6aWGpsdY/mOT/cbeVRBlPWtBNDb7kGR3uKM=
-github.com/prometheus/procfs v0.19.2 h1:zUMhqEW66Ex7OXIiDkll3tl9a1ZdilUOd/F6ZXw4Vws=
-github.com/prometheus/procfs v0.19.2/go.mod h1:M0aotyiemPhBCM0z5w87kL22CxfcH05ZpYlu+b4J7mw=
+github.com/prometheus/procfs v0.20.1 h1:XwbrGOIplXW/AU3YhIhLODXMJYyC1isLFfYCsTEycfc=
 github.com/quic-go/qpack v0.6.0 h1:g7W+BMYynC1LbYLSqRt8PBg5Tgwxn214ZZR34VIOjz8=
 github.com/quic-go/qpack v0.6.0/go.mod h1:lUpLKChi8njB4ty2bFLX2x4gzDqXwUpaO1DP9qMDZII=
 github.com/quic-go/quic-go v0.59.0 h1:OLJkp1Mlm/aS7dpKgTc6cnpynnD2Xg7C1pwL6vy/SAw=
@@ -161,32 +151,20 @@ go.mongodb.org/mongo-driver/v2 v2.5.0 h1:yXUhImUjjAInNcpTcAlPHiT7bIXhshCTL3jVBkF
 go.mongodb.org/mongo-driver/v2 v2.5.0/go.mod h1:yOI9kBsufol30iFsl1slpdq1I0eHPzybRWdyYUs8K/0=
 go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
 go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
-go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.67.0 h1:E7DmskpIO7ZR6QI6zKSEKIDNUYoKw9oHXP23gzbCdU0=
-go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.67.0/go.mod h1:WB2cS9y+AwqqKhoo9gw6/ZxlSjFBUQGZ8BQOaD3FVXM=
+go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.68.0 h1:5FXSL2s6afUC1bzNzl1iedZZ8yqR7GOhbCoEXtyeK6Q=
 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.67.0 h1:yI1/OhfEPy7J9eoa6Sj051C7n5dvpj0QX8g4sRchg04=
 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.67.0/go.mod h1:NoUCKYWK+3ecatC4HjkRktREheMeEtrXoQxrqYFeHSc=
-go.opentelemetry.io/contrib/propagators/b3 v1.42.0 h1:B2Pew5ufEtgkjLF+tSkXjgYZXQr9m7aCm1wLKB0URbU=
-go.opentelemetry.io/contrib/propagators/b3 v1.42.0/go.mod h1:iPgUcSEF5DORW6+yNbdw/YevUy+QqJ508ncjhrRSCjc=
-go.opentelemetry.io/otel v1.42.0 h1:lSQGzTgVR3+sgJDAU/7/ZMjN9Z+vUip7leaqBKy4sho=
-go.opentelemetry.io/otel v1.42.0/go.mod h1:lJNsdRMxCUIWuMlVJWzecSMuNjE7dOYyWlqOXWkdqCc=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.42.0 h1:THuZiwpQZuHPul65w4WcwEnkX2QIuMT+UFoOrygtoJw=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.42.0/go.mod h1:J2pvYM5NGHofZ2/Ru6zw/TNWnEQp5crgyDeSrYpXkAw=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.42.0 h1:zWWrB1U6nqhS/k6zYB74CjRpuiitRtLLi68VcgmOEto=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.42.0/go.mod h1:2qXPNBX1OVRC0IwOnfo1ljoid+RD0QK3443EaqVlsOU=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.42.0 h1:uLXP+3mghfMf7XmV4PkGfFhFKuNWoCvvx5wP/wOXo0o=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.42.0/go.mod h1:v0Tj04armyT59mnURNUJf7RCKcKzq+lgJs6QSjHjaTc=
-go.opentelemetry.io/otel/exporters/prometheus v0.64.0 h1:g0LRDXMX/G1SEZtK8zl8Chm4K6GBwRkjPKE36LxiTYs=
-go.opentelemetry.io/otel/exporters/prometheus v0.64.0/go.mod h1:UrgcjnarfdlBDP3GjDIJWe6HTprwSazNjwsI+Ru6hro=
-go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.42.0 h1:s/1iRkCKDfhlh1JF26knRneorus8aOwVIDhvYx9WoDw=
-go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.42.0/go.mod h1:UI3wi0FXg1Pofb8ZBiBLhtMzgoTm1TYkMvn71fAqDzs=
-go.opentelemetry.io/otel/metric v1.42.0 h1:2jXG+3oZLNXEPfNmnpxKDeZsFI5o4J+nz6xUlaFdF/4=
-go.opentelemetry.io/otel/metric v1.42.0/go.mod h1:RlUN/7vTU7Ao/diDkEpQpnz3/92J9ko05BIwxYa2SSI=
-go.opentelemetry.io/otel/sdk v1.42.0 h1:LyC8+jqk6UJwdrI/8VydAq/hvkFKNHZVIWuslJXYsDo=
-go.opentelemetry.io/otel/sdk v1.42.0/go.mod h1:rGHCAxd9DAph0joO4W6OPwxjNTYWghRWmkHuGbayMts=
-go.opentelemetry.io/otel/sdk/metric v1.42.0 h1:D/1QR46Clz6ajyZ3G8SgNlTJKBdGp84q9RKCAZ3YGuA=
-go.opentelemetry.io/otel/sdk/metric v1.42.0/go.mod h1:Ua6AAlDKdZ7tdvaQKfSmnFTdHx37+J4ba8MwVCYM5hc=
-go.opentelemetry.io/otel/trace v1.42.0 h1:OUCgIPt+mzOnaUTpOQcBiM/PLQ/Op7oq6g4LenLmOYY=
-go.opentelemetry.io/otel/trace v1.42.0/go.mod h1:f3K9S+IFqnumBkKhRJMeaZeNk9epyhnCmQh/EysQCdc=
+go.opentelemetry.io/contrib/propagators/b3 v1.43.0 h1:CETqV3QLLPTy5yNrqyMr41VnAOOD4lsRved7n4QG00A=
+go.opentelemetry.io/otel v1.43.0 h1:mYIM03dnh5zfN7HautFE4ieIig9amkNANT+xcVxAj9I=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 h1:88Y4s2C8oTui1LGM6bTWkw0ICGcOLCAI5l6zsD1j20k=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0 h1:RAE+JPfvEmvy+0LzyUA25/SGawPwIUbZ6u0Wug54sLc=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0 h1:3iZJKlCZufyRzPzlQhUIWVmfltrXuGyfjREgGP3UUjc=
+go.opentelemetry.io/otel/exporters/prometheus v0.65.0 h1:jOveH/b4lU9HT7y+Gfamf18BqlOuz2PWEvs8yM7Q6XE=
+go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0 h1:mS47AX77OtFfKG4vtp+84kuGSFZHTyxtXIN269vChY0=
+go.opentelemetry.io/otel/metric v1.43.0 h1:d7638QeInOnuwOONPp4JAOGfbCEpYb+K6DVWvdxGzgM=
+go.opentelemetry.io/otel/sdk v1.43.0 h1:pi5mE86i5rTeLXqoF/hhiBtUNcrAGHLKQdhg4h4V9Dg=
+go.opentelemetry.io/otel/sdk/metric v1.43.0 h1:S88dyqXjJkuBNLeMcVPRFXpRw2fuwdvfCGLEo89fDkw=
+go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09nk+3A=
 go.opentelemetry.io/proto/otlp v1.10.0 h1:IQRWgT5srOCYfiWnpqUYz9CVmbO8bFmKcwYxpuCSL2g=
 go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
 go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
@@ -198,12 +176,10 @@ go.uber.org/multierr v1.10.0 h1:S0h4aNzvfcFsC3dRF1jLoaov7oRaKqRGC/pUEJ2yvPQ=
 go.uber.org/multierr v1.10.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
 go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=
 go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
-go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
-go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
+go.yaml.in/yaml/v2 v2.4.4 h1:tuyd0P+2Ont/d6e2rl3be67goVK4R6deVxCUX5vyPaQ=
 go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
 go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
-golang.org/x/arch v0.24.0 h1:qlJ3M9upxvFfwRM51tTg3Yl+8CP9vCC1E7vlFpgv99Y=
-golang.org/x/arch v0.24.0/go.mod h1:dNHoOeKiyja7GTvF9NJS1l3Z2yntpQNzgrjh1cU103A=
+golang.org/x/arch v0.25.0 h1:qnk6Ksugpi5Bz32947rkUgDt9/s5qvqDPl/gBKdMJLE=
 golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
 golang.org/x/exp v0.0.0-20250813145105-42675adae3e6 h1:SbTAbRFnd5kjQXbczszQ0hdk3ctwYf3qBNH9jIsGclE=
 golang.org/x/exp v0.0.0-20250813145105-42675adae3e6/go.mod h1:4QTo5u+SEIbbKW1RacMZq1YEfOBqeXa19JeshGi+zc4=
diff --git a/gateway/internal/config/config.go b/gateway/internal/config/config.go
index 77b34b6..ef4a062 100644
--- a/gateway/internal/config/config.go
+++ b/gateway/internal/config/config.go
@@ -45,6 +45,11 @@ const (
 	// public-auth delegation.
 	authServiceBaseURLEnvVar = "GATEWAY_AUTH_SERVICE_BASE_URL"
 
+	// userServiceBaseURLEnvVar names the environment variable that configures
+	// the optional User Service internal HTTP base URL used by authenticated
+	// gateway self-service delegation.
+	userServiceBaseURLEnvVar = "GATEWAY_USER_SERVICE_BASE_URL"
+
 	// adminHTTPAddrEnvVar names the environment variable that configures the
 	// private admin HTTP listener address. When it is empty, the admin listener
 	// remains disabled.
@@ -479,6 +484,15 @@ type AuthServiceConfig struct {
 	BaseURL string
 }
 
+// UserServiceConfig describes the optional authenticated self-service upstream
+// used by the gateway runtime.
+type UserServiceConfig struct {
+	// BaseURL is the absolute base URL of the User Service internal HTTP API.
+	// When BaseURL is empty, the gateway keeps using its built-in unavailable
+	// downstream adapter for the reserved `user.*` routes.
+	BaseURL string
+}
+
 // AdminHTTPConfig describes the private operational HTTP listener used for
 // metrics exposure. The listener remains disabled when Addr is empty.
 type AdminHTTPConfig struct {
@@ -610,6 +624,10 @@ type Config struct {
 	// Session Service.
 	AuthService AuthServiceConfig
 
+	// UserService configures the optional authenticated self-service
+	// delegation to User Service.
+ UserService UserServiceConfig + // AdminHTTP configures the optional private admin listener used for metrics // exposure. AdminHTTP AdminHTTPConfig @@ -791,6 +809,13 @@ func DefaultAuthServiceConfig() AuthServiceConfig { return AuthServiceConfig{} } +// DefaultUserServiceConfig returns the default authenticated self-service +// upstream settings. The zero value keeps the built-in unavailable adapter +// active for reserved `user.*` routes. +func DefaultUserServiceConfig() UserServiceConfig { + return UserServiceConfig{} +} + // LoadFromEnv loads Config from the process environment, applies defaults for // omitted settings, and validates the resulting values. func LoadFromEnv() (Config, error) { @@ -799,6 +824,7 @@ func LoadFromEnv() (Config, error) { Logging: DefaultLoggingConfig(), PublicHTTP: DefaultPublicHTTPConfig(), AuthService: DefaultAuthServiceConfig(), + UserService: DefaultUserServiceConfig(), AdminHTTP: DefaultAdminHTTPConfig(), AuthenticatedGRPC: DefaultAuthenticatedGRPCConfig(), SessionCacheRedis: DefaultSessionCacheRedisConfig(), @@ -856,6 +882,11 @@ func LoadFromEnv() (Config, error) { cfg.AuthService.BaseURL = rawAuthServiceBaseURL } + rawUserServiceBaseURL, ok := os.LookupEnv(userServiceBaseURLEnvVar) + if ok { + cfg.UserService.BaseURL = rawUserServiceBaseURL + } + rawAdminHTTPAddr, ok := os.LookupEnv(adminHTTPAddrEnvVar) if ok { cfg.AdminHTTP.Addr = rawAdminHTTPAddr @@ -1124,6 +1155,17 @@ func LoadFromEnv() (Config, error) { } cfg.AuthService.BaseURL = strings.TrimRight(parsedAuthServiceBaseURL.String(), "/") } + cfg.UserService.BaseURL = strings.TrimSpace(cfg.UserService.BaseURL) + if cfg.UserService.BaseURL != "" { + parsedUserServiceBaseURL, err := url.Parse(cfg.UserService.BaseURL) + if err != nil { + return Config{}, fmt.Errorf("load gateway config: parse %s: %w", userServiceBaseURLEnvVar, err) + } + if parsedUserServiceBaseURL.Scheme == "" || parsedUserServiceBaseURL.Host == "" { + return Config{}, fmt.Errorf("load gateway config: %s must 
be an absolute URL", userServiceBaseURLEnvVar) + } + cfg.UserService.BaseURL = strings.TrimRight(parsedUserServiceBaseURL.String(), "/") + } if addr := strings.TrimSpace(cfg.AdminHTTP.Addr); addr != "" { cfg.AdminHTTP.Addr = addr } diff --git a/gateway/internal/config/config_test.go b/gateway/internal/config/config_test.go index e97ba6b..949cabc 100644 --- a/gateway/internal/config/config_test.go +++ b/gateway/internal/config/config_test.go @@ -7,6 +7,7 @@ import ( "encoding/pem" "os" "path/filepath" + "sync" "testing" "time" @@ -14,6 +15,8 @@ import ( "github.com/stretchr/testify/require" ) +var configEnvMu sync.Mutex + func TestLoadFromEnv(t *testing.T) { customResponseSignerPrivateKeyPEMPath := new(string) *customResponseSignerPrivateKeyPEMPath = writeTestResponseSignerPEMFile(t) @@ -27,6 +30,9 @@ func TestLoadFromEnv(t *testing.T) { customAuthServiceBaseURL := new(string) *customAuthServiceBaseURL = " http://127.0.0.1:8082/ " + customUserServiceBaseURL := new(string) + *customUserServiceBaseURL = " http://127.0.0.1:8083/ " + customAuthenticatedGRPCAddr := new(string) *customAuthenticatedGRPCAddr = "127.0.0.1:9191" @@ -80,6 +86,7 @@ func TestLoadFromEnv(t *testing.T) { shutdownTimeout *string publicHTTPAddr *string authServiceBaseURL *string + userServiceBaseURL *string authenticatedGRPCAddr *string authenticatedGRPCFreshnessWindow *string sessionCacheRedisAddr *string @@ -217,6 +224,40 @@ func TestLoadFromEnv(t *testing.T) { }, }, }, + { + name: "custom user service base url", + userServiceBaseURL: customUserServiceBaseURL, + sessionCacheRedisAddr: customSessionCacheRedisAddr, + responseSignerPrivateKeyPEMPath: customResponseSignerPrivateKeyPEMPath, + want: Config{ + ShutdownTimeout: 5 * time.Second, + Logging: DefaultLoggingConfig(), + PublicHTTP: DefaultPublicHTTPConfig(), + UserService: UserServiceConfig{ + BaseURL: "http://127.0.0.1:8083", + }, + AdminHTTP: DefaultAdminHTTPConfig(), + AuthenticatedGRPC: DefaultAuthenticatedGRPCConfig(), + SessionCacheRedis: 
SessionCacheRedisConfig{ + Addr: "127.0.0.1:6379", + DB: defaultSessionCacheRedisDB, + KeyPrefix: defaultSessionCacheRedisKeyPrefix, + LookupTimeout: defaultSessionCacheRedisLookupTimeout, + }, + ReplayRedis: DefaultReplayRedisConfig(), + SessionEventsRedis: SessionEventsRedisConfig{ + Stream: "gateway:session_events", + ReadBlockTimeout: defaultSessionEventsRedisReadBlockTimeout, + }, + ClientEventsRedis: ClientEventsRedisConfig{ + Stream: "gateway:client_events", + ReadBlockTimeout: defaultClientEventsRedisReadBlockTimeout, + }, + ResponseSigner: ResponseSignerConfig{ + PrivateKeyPEMPath: *customResponseSignerPrivateKeyPEMPath, + }, + }, + }, { name: "custom authenticated grpc address", authenticatedGRPCAddr: customAuthenticatedGRPCAddr, @@ -368,6 +409,7 @@ func TestLoadFromEnv(t *testing.T) { shutdownTimeoutEnvVar, publicHTTPAddrEnvVar, authServiceBaseURLEnvVar, + userServiceBaseURLEnvVar, authenticatedGRPCAddrEnvVar, authenticatedGRPCFreshnessWindowEnvVar, sessionCacheRedisAddrEnvVar, @@ -379,6 +421,7 @@ func TestLoadFromEnv(t *testing.T) { setEnvValue(t, shutdownTimeoutEnvVar, tt.shutdownTimeout) setEnvValue(t, publicHTTPAddrEnvVar, tt.publicHTTPAddr) setEnvValue(t, authServiceBaseURLEnvVar, tt.authServiceBaseURL) + setEnvValue(t, userServiceBaseURLEnvVar, tt.userServiceBaseURL) setEnvValue(t, authenticatedGRPCAddrEnvVar, tt.authenticatedGRPCAddr) setEnvValue(t, authenticatedGRPCFreshnessWindowEnvVar, tt.authenticatedGRPCFreshnessWindow) setEnvValue(t, sessionCacheRedisAddrEnvVar, tt.sessionCacheRedisAddr) @@ -492,7 +535,7 @@ func TestLoadFromEnvOperationalSettings(t *testing.T) { restoreEnvs(t, append( append( append( - append(operationalEnvVars(), sessionCacheRedisEnvVars()...), + append(append(operationalEnvVars(), authServiceBaseURLEnvVar, userServiceBaseURLEnvVar), sessionCacheRedisEnvVars()...), sessionEventsRedisEnvVars()..., ), clientEventsRedisEnvVars()..., @@ -563,6 +606,8 @@ func TestLoadFromEnvAuthService(t *testing.T) { restoreEnvs(t, 
authServiceBaseURLEnvVar, + userServiceBaseURLEnvVar, + logLevelEnvVar, sessionCacheRedisAddrEnvVar, sessionEventsRedisStreamEnvVar, clientEventsRedisStreamEnvVar, @@ -581,6 +626,72 @@ func TestLoadFromEnvAuthService(t *testing.T) { } } +func TestLoadFromEnvUserService(t *testing.T) { + t.Parallel() + + customSessionCacheRedisAddr := new(string) + *customSessionCacheRedisAddr = "127.0.0.1:6379" + + customSessionEventsRedisStream := new(string) + *customSessionEventsRedisStream = "gateway:session_events" + + customClientEventsRedisStream := new(string) + *customClientEventsRedisStream = "gateway:client_events" + + customResponseSignerPrivateKeyPEMPath := new(string) + *customResponseSignerPrivateKeyPEMPath = writeTestResponseSignerPEMFile(t) + + invalidRelativeURL := new(string) + *invalidRelativeURL = "/user" + + invalidURL := new(string) + *invalidURL = "://bad" + + tests := []struct { + name string + value *string + wantErr string + }{ + { + name: "relative url rejected", + value: invalidRelativeURL, + wantErr: userServiceBaseURLEnvVar + " must be an absolute URL", + }, + { + name: "malformed url rejected", + value: invalidURL, + wantErr: "parse " + userServiceBaseURLEnvVar, + }, + } + + for _, tt := range tests { + tt := tt + + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + restoreEnvs(t, + authServiceBaseURLEnvVar, + userServiceBaseURLEnvVar, + logLevelEnvVar, + sessionCacheRedisAddrEnvVar, + sessionEventsRedisStreamEnvVar, + clientEventsRedisStreamEnvVar, + responseSignerPrivateKeyPEMPathEnvVar, + ) + setEnvValue(t, userServiceBaseURLEnvVar, tt.value) + setEnvValue(t, sessionCacheRedisAddrEnvVar, customSessionCacheRedisAddr) + setEnvValue(t, sessionEventsRedisStreamEnvVar, customSessionEventsRedisStream) + setEnvValue(t, clientEventsRedisStreamEnvVar, customClientEventsRedisStream) + setEnvValue(t, responseSignerPrivateKeyPEMPathEnvVar, customResponseSignerPrivateKeyPEMPath) + + _, err := LoadFromEnv() + require.Error(t, err) + require.ErrorContains(t, 
err, tt.wantErr) + }) + } +} + func TestLoadFromEnvAuthenticatedGRPCAntiAbuse(t *testing.T) { customSessionCacheRedisAddr := new(string) *customSessionCacheRedisAddr = "127.0.0.1:6379" @@ -1276,6 +1387,9 @@ func setEnvValue(t *testing.T, envVar string, value *string) { func restoreEnvs(t *testing.T, envVars ...string) { t.Helper() + configEnvMu.Lock() + t.Cleanup(configEnvMu.Unlock) + for _, envVar := range envVars { restoreEnv(t, envVar) } diff --git a/gateway/internal/downstream/userservice/client.go b/gateway/internal/downstream/userservice/client.go new file mode 100644 index 0000000..ab65f02 --- /dev/null +++ b/gateway/internal/downstream/userservice/client.go @@ -0,0 +1,311 @@ +// Package userservice implements the authenticated Gateway -> User Service +// self-service downstream adapter. +package userservice + +import ( + "bytes" + "context" + "encoding/json" + "errors" + "fmt" + "io" + "net/http" + "net/url" + "strings" + + "galaxy/gateway/internal/downstream" + usermodel "galaxy/model/user" + "galaxy/transcoder" +) + +const ( + getMyAccountResultCodeOK = "ok" + + userServiceAccountPathSuffix = "/account" + userServiceProfilePathSuffix = "/profile" + userServiceSettingsPathSuffix = "/settings" +) + +var stableErrorMessages = map[string]string{ + "invalid_request": "request is invalid", + "subject_not_found": "subject not found", + "conflict": "request conflicts with current state", + "internal_error": "internal server error", +} + +// HTTPClient implements downstream.Client against the trusted internal User +// Service REST API while preserving FlatBuffers at the external authenticated +// gateway boundary. +type HTTPClient struct { + baseURL string + httpClient *http.Client +} + +// NewHTTPClient constructs one User Service downstream client backed by the +// trusted internal REST API at baseURL. 
+func NewHTTPClient(baseURL string) (*HTTPClient, error) { + transport, ok := http.DefaultTransport.(*http.Transport) + if !ok { + return nil, errors.New("new user service HTTP client: default transport is not *http.Transport") + } + + return newHTTPClient(baseURL, &http.Client{ + Transport: transport.Clone(), + }) +} + +func newHTTPClient(baseURL string, httpClient *http.Client) (*HTTPClient, error) { + if httpClient == nil { + return nil, errors.New("new user service HTTP client: http client must not be nil") + } + + trimmedBaseURL := strings.TrimSpace(baseURL) + if trimmedBaseURL == "" { + return nil, errors.New("new user service HTTP client: base URL must not be empty") + } + + parsedBaseURL, err := url.Parse(strings.TrimRight(trimmedBaseURL, "/")) + if err != nil { + return nil, fmt.Errorf("new user service HTTP client: parse base URL: %w", err) + } + if parsedBaseURL.Scheme == "" || parsedBaseURL.Host == "" { + return nil, errors.New("new user service HTTP client: base URL must be absolute") + } + + return &HTTPClient{ + baseURL: parsedBaseURL.String(), + httpClient: httpClient, + }, nil +} + +// Close releases idle HTTP connections owned by the client transport. +func (c *HTTPClient) Close() error { + if c == nil || c.httpClient == nil { + return nil + } + + type idleCloser interface { + CloseIdleConnections() + } + + if transport, ok := c.httpClient.Transport.(idleCloser); ok { + transport.CloseIdleConnections() + } + + return nil +} + +// ExecuteCommand routes one authenticated gateway command to the matching +// trusted internal User Service self-service route. 
+func (c *HTTPClient) ExecuteCommand(ctx context.Context, command downstream.AuthenticatedCommand) (downstream.UnaryResult, error) { + if c == nil || c.httpClient == nil { + return downstream.UnaryResult{}, errors.New("execute user service command: nil client") + } + if ctx == nil { + return downstream.UnaryResult{}, errors.New("execute user service command: nil context") + } + if err := ctx.Err(); err != nil { + return downstream.UnaryResult{}, err + } + if strings.TrimSpace(command.UserID) == "" { + return downstream.UnaryResult{}, errors.New("execute user service command: user_id must not be empty") + } + + switch command.MessageType { + case usermodel.MessageTypeGetMyAccount: + if _, err := transcoder.PayloadToGetMyAccountRequest(command.PayloadBytes); err != nil { + return downstream.UnaryResult{}, fmt.Errorf("execute user service command %q: %w", command.MessageType, err) + } + return c.executeGetMyAccount(ctx, command.UserID) + case usermodel.MessageTypeUpdateMyProfile: + request, err := transcoder.PayloadToUpdateMyProfileRequest(command.PayloadBytes) + if err != nil { + return downstream.UnaryResult{}, fmt.Errorf("execute user service command %q: %w", command.MessageType, err) + } + return c.executeUpdateMyProfile(ctx, command.UserID, request) + case usermodel.MessageTypeUpdateMySettings: + request, err := transcoder.PayloadToUpdateMySettingsRequest(command.PayloadBytes) + if err != nil { + return downstream.UnaryResult{}, fmt.Errorf("execute user service command %q: %w", command.MessageType, err) + } + return c.executeUpdateMySettings(ctx, command.UserID, request) + default: + return downstream.UnaryResult{}, fmt.Errorf("execute user service command: unsupported message type %q", command.MessageType) + } +} + +func (c *HTTPClient) executeGetMyAccount(ctx context.Context, userID string) (downstream.UnaryResult, error) { + payload, statusCode, err := c.doRequest(ctx, http.MethodGet, c.userPath(userID, userServiceAccountPathSuffix), nil) + if err != nil { + 
return downstream.UnaryResult{}, fmt.Errorf("execute get my account: %w", err) + } + + return projectResponse(statusCode, payload) +} + +func (c *HTTPClient) executeUpdateMyProfile(ctx context.Context, userID string, request *usermodel.UpdateMyProfileRequest) (downstream.UnaryResult, error) { + payload, statusCode, err := c.doRequest(ctx, http.MethodPost, c.userPath(userID, userServiceProfilePathSuffix), request) + if err != nil { + return downstream.UnaryResult{}, fmt.Errorf("execute update my profile: %w", err) + } + + return projectResponse(statusCode, payload) +} + +func (c *HTTPClient) executeUpdateMySettings(ctx context.Context, userID string, request *usermodel.UpdateMySettingsRequest) (downstream.UnaryResult, error) { + payload, statusCode, err := c.doRequest(ctx, http.MethodPost, c.userPath(userID, userServiceSettingsPathSuffix), request) + if err != nil { + return downstream.UnaryResult{}, fmt.Errorf("execute update my settings: %w", err) + } + + return projectResponse(statusCode, payload) +} + +func (c *HTTPClient) doRequest(ctx context.Context, method string, targetURL string, requestBody any) ([]byte, int, error) { + if c == nil || c.httpClient == nil { + return nil, 0, errors.New("nil client") + } + + var bodyReader io.Reader + if requestBody != nil { + payload, err := json.Marshal(requestBody) + if err != nil { + return nil, 0, fmt.Errorf("marshal request body: %w", err) + } + bodyReader = bytes.NewReader(payload) + } + + request, err := http.NewRequestWithContext(ctx, method, targetURL, bodyReader) + if err != nil { + return nil, 0, fmt.Errorf("build request: %w", err) + } + if requestBody != nil { + request.Header.Set("Content-Type", "application/json") + } + + response, err := c.httpClient.Do(request) + if err != nil { + return nil, 0, err + } + defer response.Body.Close() + + payload, err := io.ReadAll(response.Body) + if err != nil { + return nil, 0, fmt.Errorf("read response body: %w", err) + } + + return payload, response.StatusCode, nil +} + 
+func (c *HTTPClient) userPath(userID string, suffix string) string { + return c.baseURL + "/api/v1/internal/users/" + url.PathEscape(userID) + suffix +} + +func projectResponse(statusCode int, payload []byte) (downstream.UnaryResult, error) { + switch { + case statusCode == http.StatusOK: + var response usermodel.AccountResponse + if err := decodeStrictJSONPayload(payload, &response); err != nil { + return downstream.UnaryResult{}, fmt.Errorf("decode success response: %w", err) + } + + payloadBytes, err := transcoder.AccountResponseToPayload(&response) + if err != nil { + return downstream.UnaryResult{}, fmt.Errorf("encode success response payload: %w", err) + } + + return downstream.UnaryResult{ + ResultCode: getMyAccountResultCodeOK, + PayloadBytes: payloadBytes, + }, nil + case statusCode == http.StatusServiceUnavailable: + return downstream.UnaryResult{}, downstream.ErrDownstreamUnavailable + case statusCode >= 400 && statusCode <= 599: + errorResponse, err := decodeUserServiceError(statusCode, payload) + if err != nil { + return downstream.UnaryResult{}, fmt.Errorf("decode error response: %w", err) + } + + payloadBytes, err := transcoder.ErrorResponseToPayload(errorResponse) + if err != nil { + return downstream.UnaryResult{}, fmt.Errorf("encode error response payload: %w", err) + } + + return downstream.UnaryResult{ + ResultCode: errorResponse.Error.Code, + PayloadBytes: payloadBytes, + }, nil + default: + return downstream.UnaryResult{}, fmt.Errorf("unexpected HTTP status %d", statusCode) + } +} + +func decodeUserServiceError(statusCode int, payload []byte) (*usermodel.ErrorResponse, error) { + var response usermodel.ErrorResponse + if err := decodeStrictJSONPayload(payload, &response); err != nil { + return nil, err + } + + response.Error.Code = normalizeErrorCode(statusCode, response.Error.Code) + response.Error.Message = normalizeErrorMessage(response.Error.Code, response.Error.Message) + + if strings.TrimSpace(response.Error.Code) == "" { + return nil, 
errors.New("missing error code") + } + if strings.TrimSpace(response.Error.Message) == "" { + return nil, errors.New("missing error message") + } + + return &response, nil +} + +func normalizeErrorCode(statusCode int, code string) string { + trimmed := strings.TrimSpace(code) + if trimmed != "" { + return trimmed + } + + switch statusCode { + case http.StatusBadRequest: + return "invalid_request" + case http.StatusNotFound: + return "subject_not_found" + case http.StatusConflict: + return "conflict" + default: + return "internal_error" + } +} + +func normalizeErrorMessage(code string, message string) string { + trimmed := strings.TrimSpace(message) + if trimmed != "" { + return trimmed + } + + if stable, ok := stableErrorMessages[code]; ok { + return stable + } + + return stableErrorMessages["internal_error"] +} + +func decodeStrictJSONPayload(payload []byte, target any) error { + decoder := json.NewDecoder(bytes.NewReader(payload)) + decoder.DisallowUnknownFields() + + if err := decoder.Decode(target); err != nil { + return err + } + if err := decoder.Decode(&struct{}{}); err != io.EOF { + if err == nil { + return errors.New("unexpected trailing JSON input") + } + + return err + } + + return nil +} + +var _ downstream.Client = (*HTTPClient)(nil) diff --git a/gateway/internal/downstream/userservice/client_test.go b/gateway/internal/downstream/userservice/client_test.go new file mode 100644 index 0000000..a6cf871 --- /dev/null +++ b/gateway/internal/downstream/userservice/client_test.go @@ -0,0 +1,399 @@ +package userservice + +import ( + "context" + "encoding/json" + "io" + "net/http" + "net/http/httptest" + "testing" + "time" + + "galaxy/gateway/internal/downstream" + usermodel "galaxy/model/user" + "galaxy/transcoder" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestNewHTTPClient(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + baseURL string + wantURL string + wantErr string + }{ + { + name: 
"absolute URL is normalized", + baseURL: " http://127.0.0.1:8081/ ", + wantURL: "http://127.0.0.1:8081", + }, + { + name: "empty base URL is rejected", + baseURL: " ", + wantErr: "base URL must not be empty", + }, + { + name: "relative base URL is rejected", + baseURL: "/relative", + wantErr: "base URL must be absolute", + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + client, err := NewHTTPClient(tt.baseURL) + if tt.wantErr != "" { + require.Error(t, err) + assert.Contains(t, err.Error(), tt.wantErr) + return + } + + require.NoError(t, err) + assert.Equal(t, tt.wantURL, client.baseURL) + }) + } +} + +func TestHTTPClientExecuteGetMyAccountSuccess(t *testing.T) { + t.Parallel() + + wantResponse := sampleAccountResponse() + server := httptest.NewServer(http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) { + require.Equal(t, http.MethodGet, request.Method) + require.Equal(t, "/api/v1/internal/users/user-123/account", request.URL.Path) + require.NoError(t, json.NewEncoder(writer).Encode(wantResponse)) + })) + defer server.Close() + + client := newTestHTTPClient(t, server) + payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{}) + require.NoError(t, err) + + result, err := client.ExecuteCommand(context.Background(), downstream.AuthenticatedCommand{ + UserID: "user-123", + MessageType: usermodel.MessageTypeGetMyAccount, + PayloadBytes: payload, + }) + require.NoError(t, err) + assert.Equal(t, getMyAccountResultCodeOK, result.ResultCode) + + decoded, err := transcoder.PayloadToAccountResponse(result.PayloadBytes) + require.NoError(t, err) + assert.Equal(t, wantResponse, decoded) +} + +func TestHTTPClientExecuteUpdateMyProfileProjectsConflict(t *testing.T) { + t.Parallel() + + server := httptest.NewServer(http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) { + require.Equal(t, http.MethodPost, request.Method) + require.Equal(t, 
"/api/v1/internal/users/user-123/profile", request.URL.Path) + + body, err := io.ReadAll(request.Body) + require.NoError(t, err) + require.JSONEq(t, `{"race_name":"Nova Prime"}`, string(body)) + + writer.WriteHeader(http.StatusConflict) + require.NoError(t, json.NewEncoder(writer).Encode(&usermodel.ErrorResponse{ + Error: usermodel.ErrorBody{ + Code: "conflict", + Message: "request conflicts with current state", + }, + })) + })) + defer server.Close() + + client := newTestHTTPClient(t, server) + payload, err := transcoder.UpdateMyProfileRequestToPayload(&usermodel.UpdateMyProfileRequest{RaceName: "Nova Prime"}) + require.NoError(t, err) + + result, err := client.ExecuteCommand(context.Background(), downstream.AuthenticatedCommand{ + UserID: "user-123", + MessageType: usermodel.MessageTypeUpdateMyProfile, + PayloadBytes: payload, + }) + require.NoError(t, err) + assert.Equal(t, "conflict", result.ResultCode) + + decoded, err := transcoder.PayloadToErrorResponse(result.PayloadBytes) + require.NoError(t, err) + assert.Equal(t, &usermodel.ErrorResponse{ + Error: usermodel.ErrorBody{ + Code: "conflict", + Message: "request conflicts with current state", + }, + }, decoded) +} + +func TestHTTPClientExecuteUpdateMySettingsProjectsInvalidRequest(t *testing.T) { + t.Parallel() + + server := httptest.NewServer(http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) { + require.Equal(t, http.MethodPost, request.Method) + require.Equal(t, "/api/v1/internal/users/user-123/settings", request.URL.Path) + + body, err := io.ReadAll(request.Body) + require.NoError(t, err) + require.JSONEq(t, `{"preferred_language":"bad","time_zone":"Mars/Base"}`, string(body)) + + writer.WriteHeader(http.StatusBadRequest) + require.NoError(t, json.NewEncoder(writer).Encode(&usermodel.ErrorResponse{ + Error: usermodel.ErrorBody{ + Code: "invalid_request", + Message: "request is invalid", + }, + })) + })) + defer server.Close() + + client := newTestHTTPClient(t, server) + payload, err 
:= transcoder.UpdateMySettingsRequestToPayload(&usermodel.UpdateMySettingsRequest{ + PreferredLanguage: "bad", + TimeZone: "Mars/Base", + }) + require.NoError(t, err) + + result, err := client.ExecuteCommand(context.Background(), downstream.AuthenticatedCommand{ + UserID: "user-123", + MessageType: usermodel.MessageTypeUpdateMySettings, + PayloadBytes: payload, + }) + require.NoError(t, err) + assert.Equal(t, "invalid_request", result.ResultCode) + + decoded, err := transcoder.PayloadToErrorResponse(result.PayloadBytes) + require.NoError(t, err) + assert.Equal(t, "invalid_request", decoded.Error.Code) +} + +func TestHTTPClientExecuteCommandProjectsSubjectNotFound(t *testing.T) { + t.Parallel() + + server := httptest.NewServer(http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) { + writer.WriteHeader(http.StatusNotFound) + require.NoError(t, json.NewEncoder(writer).Encode(&usermodel.ErrorResponse{ + Error: usermodel.ErrorBody{ + Code: "subject_not_found", + Message: "subject not found", + }, + })) + })) + defer server.Close() + + client := newTestHTTPClient(t, server) + payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{}) + require.NoError(t, err) + + result, err := client.ExecuteCommand(context.Background(), downstream.AuthenticatedCommand{ + UserID: "user-missing", + MessageType: usermodel.MessageTypeGetMyAccount, + PayloadBytes: payload, + }) + require.NoError(t, err) + assert.Equal(t, "subject_not_found", result.ResultCode) +} + +func TestHTTPClientExecuteCommandMaps503ToUnavailable(t *testing.T) { + t.Parallel() + + server := httptest.NewServer(http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) { + writer.WriteHeader(http.StatusServiceUnavailable) + require.NoError(t, json.NewEncoder(writer).Encode(&usermodel.ErrorResponse{ + Error: usermodel.ErrorBody{ + Code: "service_unavailable", + Message: "service is unavailable", + }, + })) + })) + defer server.Close() + + client := 
newTestHTTPClient(t, server) + payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{}) + require.NoError(t, err) + + _, err = client.ExecuteCommand(context.Background(), downstream.AuthenticatedCommand{ + UserID: "user-123", + MessageType: usermodel.MessageTypeGetMyAccount, + PayloadBytes: payload, + }) + require.Error(t, err) + assert.ErrorIs(t, err, downstream.ErrDownstreamUnavailable) +} + +func TestHTTPClientExecuteCommandUsesCallerContext(t *testing.T) { + t.Parallel() + + server := httptest.NewServer(http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) { + <-request.Context().Done() + })) + defer server.Close() + + client := newTestHTTPClient(t, server) + payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{}) + require.NoError(t, err) + + ctx, cancel := context.WithTimeout(context.Background(), 25*time.Millisecond) + defer cancel() + + _, err = client.ExecuteCommand(ctx, downstream.AuthenticatedCommand{ + UserID: "user-123", + MessageType: usermodel.MessageTypeGetMyAccount, + PayloadBytes: payload, + }) + require.Error(t, err) + assert.ErrorIs(t, err, context.DeadlineExceeded) +} + +func TestHTTPClientExecuteCommandRejectsMalformedSuccessPayload(t *testing.T) { + t.Parallel() + + server := httptest.NewServer(http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) { + _, _ = writer.Write([]byte(`{"account":{"user_id":"user-123","unexpected":true}}`)) + })) + defer server.Close() + + client := newTestHTTPClient(t, server) + payload, err := transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{}) + require.NoError(t, err) + + _, err = client.ExecuteCommand(context.Background(), downstream.AuthenticatedCommand{ + UserID: "user-123", + MessageType: usermodel.MessageTypeGetMyAccount, + PayloadBytes: payload, + }) + require.Error(t, err) + assert.Contains(t, err.Error(), "decode success response") +} + +func 
TestHTTPClientExecuteCommandRejectsUnsupportedMessageType(t *testing.T) { + t.Parallel() + + server := httptest.NewServer(http.NotFoundHandler()) + defer server.Close() + + client := newTestHTTPClient(t, server) + + _, err := client.ExecuteCommand(context.Background(), downstream.AuthenticatedCommand{ + UserID: "user-123", + MessageType: "user.unsupported", + PayloadBytes: []byte("payload"), + }) + require.Error(t, err) + assert.Contains(t, err.Error(), "unsupported message type") +} + +func TestNewRoutesReserveUserMessageTypesWhenUnconfigured(t *testing.T) { + t.Parallel() + + routes, closeFn, err := NewRoutes("") + require.NoError(t, err) + require.NoError(t, closeFn()) + + router := downstream.NewStaticRouter(routes) + for _, messageType := range []string{ + usermodel.MessageTypeGetMyAccount, + usermodel.MessageTypeUpdateMyProfile, + usermodel.MessageTypeUpdateMySettings, + } { + client, routeErr := router.Route(messageType) + require.NoError(t, routeErr) + + _, execErr := client.ExecuteCommand(context.Background(), downstream.AuthenticatedCommand{ + UserID: "user-123", + MessageType: messageType, + }) + require.Error(t, execErr) + assert.ErrorIs(t, execErr, downstream.ErrDownstreamUnavailable) + } +} + +func TestUnavailableClientReturnsDownstreamUnavailable(t *testing.T) { + t.Parallel() + + _, err := unavailableClient{}.ExecuteCommand(context.Background(), downstream.AuthenticatedCommand{}) + require.Error(t, err) + assert.ErrorIs(t, err, downstream.ErrDownstreamUnavailable) +} + +func newTestHTTPClient(t *testing.T, server *httptest.Server) *HTTPClient { + t.Helper() + + client, err := newHTTPClient(server.URL, server.Client()) + require.NoError(t, err) + return client +} + +func sampleAccountResponse() *usermodel.AccountResponse { + now := time.Date(2026, time.April, 9, 10, 0, 0, 0, time.UTC) + expiresAt := now.Add(30 * 24 * time.Hour) + + return &usermodel.AccountResponse{ + Account: usermodel.Account{ + UserID: "user-123", + Email: "pilot@example.com", + 
RaceName: "Pilot Nova", + PreferredLanguage: "en", + TimeZone: "Europe/Kaliningrad", + DeclaredCountry: "DE", + Entitlement: usermodel.EntitlementSnapshot{ + PlanCode: "free", + IsPaid: false, + Source: "auth_registration", + Actor: usermodel.ActorRef{Type: "service", ID: "user-service"}, + ReasonCode: "initial_free_entitlement", + StartsAt: now, + UpdatedAt: now, + }, + ActiveSanctions: []usermodel.ActiveSanction{ + { + SanctionCode: "profile_update_block", + Scope: "lobby", + ReasonCode: "manual_block", + Actor: usermodel.ActorRef{Type: "admin", ID: "admin-1"}, + AppliedAt: now, + ExpiresAt: &expiresAt, + }, + }, + ActiveLimits: []usermodel.ActiveLimit{ + { + LimitCode: "max_owned_private_games", + Value: 3, + ReasonCode: "manual_override", + Actor: usermodel.ActorRef{Type: "admin", ID: "admin-1"}, + AppliedAt: now, + }, + }, + CreatedAt: now, + UpdatedAt: now, + }, + } +} + +func TestDecodeUserServiceErrorNormalizesBlankFields(t *testing.T) { + t.Parallel() + + response, err := decodeUserServiceError(http.StatusBadRequest, []byte(`{"error":{"code":" ","message":" "}}`)) + require.NoError(t, err) + assert.Equal(t, "invalid_request", response.Error.Code) + assert.Equal(t, "request is invalid", response.Error.Message) +} + +func TestHTTPClientExecuteCommandRejectsNilContext(t *testing.T) { + t.Parallel() + + server := httptest.NewServer(http.NotFoundHandler()) + defer server.Close() + + client := newTestHTTPClient(t, server) + + _, err := client.ExecuteCommand(nil, downstream.AuthenticatedCommand{}) + require.Error(t, err) + assert.Contains(t, err.Error(), "nil context") +} diff --git a/gateway/internal/downstream/userservice/routes.go b/gateway/internal/downstream/userservice/routes.go new file mode 100644 index 0000000..dd76065 --- /dev/null +++ b/gateway/internal/downstream/userservice/routes.go @@ -0,0 +1,46 @@ +package userservice + +import ( + "context" + + "galaxy/gateway/internal/downstream" + usermodel "galaxy/model/user" +) + +var noOpClose = func() error 
{ return nil } + +// NewRoutes returns the reserved authenticated gateway routes owned by the +// Gateway -> User self-service boundary. +// +// When baseURL is empty, the returned routes still reserve the stable +// `user.*` message types but resolve them to a dependency-unavailable client +// so callers receive the transport-level unavailable outcome instead of a +// route-miss error. +func NewRoutes(baseURL string) (map[string]downstream.Client, func() error, error) { + client := downstream.Client(unavailableClient{}) + closeFn := noOpClose + + if baseURL != "" { + httpClient, err := NewHTTPClient(baseURL) + if err != nil { + return nil, nil, err + } + + client = httpClient + closeFn = httpClient.Close + } + + return map[string]downstream.Client{ + usermodel.MessageTypeGetMyAccount: client, + usermodel.MessageTypeUpdateMyProfile: client, + usermodel.MessageTypeUpdateMySettings: client, + }, closeFn, nil +} + +type unavailableClient struct{} + +func (unavailableClient) ExecuteCommand(context.Context, downstream.AuthenticatedCommand) (downstream.UnaryResult, error) { + return downstream.UnaryResult{}, downstream.ErrDownstreamUnavailable +} + +var _ downstream.Client = unavailableClient{} diff --git a/go.work.sum b/go.work.sum index d4bef3c..76fcfce 100644 --- a/go.work.sum +++ b/go.work.sum @@ -12,14 +12,11 @@ github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5P github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= github.com/cncf/xds/go v0.0.0-20251210132809-ee656c7534f5/go.mod h1:KdCmV+x/BuvyMxRnYBlmVaq4OLiKW6iRQfvC62cvdkI= github.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= -github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= -github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 
github.com/envoyproxy/go-control-plane v0.14.0/go.mod h1:NcS5X47pLl/hfqxU70yPwL9ZMkUlwlKxtAohpi2wBEU= github.com/envoyproxy/go-control-plane/envoy v1.36.0/go.mod h1:ty89S1YCCVruQAm9OtKeEkQLTb+Lkz0k8v9W0Oxsv98= github.com/envoyproxy/go-control-plane/ratelimit v0.1.0/go.mod h1:Wk+tMFAFbCXaJPzVVHnPgRKdUdwW/KdbRt94AzgRee4= github.com/envoyproxy/protoc-gen-validate v1.3.0/go.mod h1:HvYl7zwPa5mffgyeTUHA9zHIH36nmrm7oCbo4YKoSWA= github.com/francoispqt/gojay v1.2.13/go.mod h1:ehT5mTG4ua4581f1++1WLG0vPdaA9HaiDsoyrBGkyDY= -github.com/gabriel-vasile/mimetype v1.4.12/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s= github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08= github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0= github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE= @@ -44,8 +41,6 @@ github.com/natefinch/atomic v1.0.1/go.mod h1:N/D/ELrljoqDyT3rZrsUmtsuzvHkeB/wWjH github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno= github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA= github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8= -github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= -github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/prometheus/client_golang v1.19.1/go.mod h1:mP78NwGzrVks5S2H6ab8+ZZGJLZUq1hoULYBAYBw1Ho= github.com/prometheus/client_model v0.5.0/go.mod h1:dTiFglRmd66nLR9Pv9f0mZi7B7fk5Pm3gvsjB5tr+kI= github.com/prometheus/common v0.48.0/go.mod h1:0/KsvlIEfPQCQ5I2iNSAWKPZziNCvRs5EC6ILDTlAPc= @@ -53,6 +48,7 @@ github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3c github.com/rogpeppe/fastuuid 
v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ= github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs= github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog= +github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4= github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/spiffe/go-spiffe/v2 v2.6.0/go.mod h1:gm2SeUoMZEtpnzPNs2Csc0D/gX33k1xIx7lEzqblHEs= github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= @@ -67,19 +63,12 @@ github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfS github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= go.opentelemetry.io/contrib/detectors/gcp v1.39.0/go.mod h1:t/OGqzHBa5v6RHZwrDBJ2OirWc+4q/w2fTbLZwAKjTk= go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8= -go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.42.0 h1:MdKucPl/HbzckWWEisiNqMPhRrAOQX8r4jTuGr636gk= -go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.42.0/go.mod h1:RolT8tWtfHcjajEH5wFIZ4Dgh5jpPdFXYV9pTAk/qjc= -go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.42.0 h1:H7O6RlGOMTizyl3R08Kn5pdM06bnH8oscSj7o11tmLA= -go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.42.0/go.mod h1:mBFWu/WOVDkWWsR7Tx7h6EpQB8wsv7P0Yrh0Pb7othc= -go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.42.0 h1:lSZHgNHfbmQTPfuTmWVkEu8J8qXaQwuV30pjCcAUvP8= -go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.42.0/go.mod h1:so9ounLcuoRDu033MW/E0AD4hhUjVqswrMF5FoZlBcw= go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs= go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE= go.opentelemetry.io/otel/sdk/metric v1.39.0/go.mod 
h1:xq9HEVH7qeX69/JnwEfp6fVq5wosJsY1mt4lLfYdVew= go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA= go.uber.org/mock v0.5.2/go.mod h1:wLlUxC2vVTPTaE3UD51E0BGOAElKrILxhVSDYQLld5o= golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8= -golang.org/x/arch v0.22.0/go.mod h1:dNHoOeKiyja7GTvF9NJS1l3Z2yntpQNzgrjh1cU103A= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc= @@ -115,6 +104,7 @@ golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= +golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -134,6 +124,7 @@ golang.org/x/sys v0.34.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= +golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= golang.org/x/telemetry v0.0.0-20240521205824-bda55230c457/go.mod h1:pRgIJT+bRLFKnoM1ldnzKoxTIn14Yxz928LQRYYgIN0= golang.org/x/telemetry 
v0.0.0-20260109210033-bd525da824e2/go.mod h1:b7fPSJ0pKZ3ccUh8gnTONJxhn3c/PS6tyzQvyqw4iA8= golang.org/x/telemetry v0.0.0-20260209163413-e7419c687ee4/go.mod h1:g5NllXBEermZrmR51cJDQxmJUHUOfRAaNyWBM+R+548= @@ -183,5 +174,4 @@ google.golang.org/grpc v1.79.2/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhH google.golang.org/grpc v1.79.3/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ= google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= -google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco= rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4= diff --git a/integration/README.md b/integration/README.md index 55b4df0..504ff3e 100644 --- a/integration/README.md +++ b/integration/README.md @@ -8,13 +8,25 @@ Each suite must raise real service processes, speak only over public HTTP/gRPC/R ```text integration/ ├── README.md -├── go.mod +├── authsessionuser/ +│ ├── authsession_user_test.go +│ └── harness_test.go ├── gatewayauthsession/ │ ├── harness_test.go │ └── gateway_authsession_test.go +├── gatewayauthsessionuser/ +│ ├── gateway_authsession_user_test.go +│ └── harness_test.go +├── gatewayuser/ +│ ├── gateway_user_test.go +│ └── harness_test.go +├── go.mod +├── go.sum └── internal/ ├── contracts/ - │ └── gatewayv1/ + │ ├── gatewayv1/ + │ │ └── contract.go + │ └── userv1/ │ └── contract.go └── harness/ ├── binary.go @@ -35,8 +47,12 @@ integration/ ## Current Boundary Suites - `gatewayauthsession` verifies the integration boundary between real `Edge Gateway` and real `Auth / Session Service`. +- `authsessionuser` verifies the integration boundary between real `Auth / Session Service` and real `User Service`. +- `gatewayuser` verifies the direct authenticated self-service boundary between real `Edge Gateway` and real `User Service`. 
+- `gatewayauthsessionuser` verifies the full public-auth plus authenticated-account chain across real `Edge Gateway`, real `Auth / Session Service`, and real `User Service`. -The current fast suite uses one isolated `miniredis` instance plus external stateful HTTP stubs for mail and user services. +Each current fast suite uses one isolated `miniredis` instance; services +inside the boundary under test run as real processes, while dependencies +outside that boundary are replaced by external stateful HTTP stubs (such as +the mail stub). ## Running Run from the module directory: ```bash cd integration go test ./gatewayauthsession/... +go test ./authsessionuser/... +go test ./gatewayuser/... +go test ./gatewayauthsessionuser/... ``` Useful regression commands after boundary changes: ```bash go test ./gatewayauthsession/... +go test ./authsessionuser/... +go test ./gatewayuser/... +go test ./gatewayauthsessionuser/... cd ../gateway && go test ./... cd ../authsession && go test ./... -run GatewayCompatibility +cd ../user && go test ./... ``` Do not use `go test ./...` from the repository root. The repository is organized through `go.work`, so verification should stay module-scoped. 
diff --git a/integration/authsessionuser/authsession_user_test.go b/integration/authsessionuser/authsession_user_test.go new file mode 100644 index 0000000..9819785 --- /dev/null +++ b/integration/authsessionuser/authsession_user_test.go @@ -0,0 +1,90 @@ +package authsessionuser_test + +import ( + "net/http" + "strings" + "testing" + + "github.com/stretchr/testify/require" +) + +func TestAuthsessionUserBlackBoxConfirmCreatesUserWithForwardedRegistrationContext(t *testing.T) { + t.Parallel() + + h := newAuthsessionUserHarness(t) + email := "created@example.com" + + challengeID := h.sendChallenge(t, email) + code := lastMailCodeFor(t, h.mailStub, email) + + response := h.confirmCode(t, challengeID, code) + var confirmBody struct { + DeviceSessionID string `json:"device_session_id"` + } + requireJSONStatus(t, response, http.StatusOK, &confirmBody) + require.True(t, strings.HasPrefix(confirmBody.DeviceSessionID, "device-session-")) + + lookupResponse, account := lookupUserByEmail(t, h.userServiceURL, email) + require.Equalf(t, http.StatusOK, lookupResponse.StatusCode, formatStatusError(lookupResponse)) + require.Equal(t, email, account.User.Email) + require.Equal(t, "en", account.User.PreferredLanguage) + require.Equal(t, testTimeZone, account.User.TimeZone) + require.True(t, strings.HasPrefix(account.User.UserID, "user-")) + require.True(t, strings.HasPrefix(account.User.RaceName, "player-")) + require.Equal(t, "free", account.User.Entitlement.PlanCode) + require.False(t, account.User.Entitlement.IsPaid) + require.Empty(t, account.User.ActiveSanctions) + require.Empty(t, account.User.ActiveLimits) +} + +func TestAuthsessionUserBlackBoxConfirmForExistingUserKeepsCreateOnlySettings(t *testing.T) { + t.Parallel() + + h := newAuthsessionUserHarness(t) + email := "existing@example.com" + + created := postEnsureUser(t, h.userServiceURL, email, "fr-FR", "Europe/Paris") + require.Equal(t, "created", created.Outcome) + sleepForDistinctCreatedAt() + + challengeID := 
h.sendChallenge(t, email) + code := lastMailCodeFor(t, h.mailStub, email) + + response := h.confirmCode(t, challengeID, code) + var confirmBody struct { + DeviceSessionID string `json:"device_session_id"` + } + requireJSONStatus(t, response, http.StatusOK, &confirmBody) + require.True(t, strings.HasPrefix(confirmBody.DeviceSessionID, "device-session-")) + + lookupResponse, account := lookupUserByEmail(t, h.userServiceURL, email) + require.Equalf(t, http.StatusOK, lookupResponse.StatusCode, formatStatusError(lookupResponse)) + require.Equal(t, created.UserID, account.User.UserID) + require.Equal(t, "fr-FR", account.User.PreferredLanguage) + require.Equal(t, "Europe/Paris", account.User.TimeZone) +} + +func TestAuthsessionUserBlackBoxBlockedEmailSendIsSuccessShapedAndConfirmIsRejectedWithoutCreatingUser(t *testing.T) { + t.Parallel() + + h := newAuthsessionUserHarness(t) + + blockedAtSendEmail := "blocked-send@example.com" + postBlockByEmail(t, h.userServiceURL, blockedAtSendEmail) + + beforeBlockedSendDeliveries := len(h.mailStub.RecordedDeliveries()) + blockedChallengeID := h.sendChallenge(t, blockedAtSendEmail) + require.NotEmpty(t, blockedChallengeID) + require.Len(t, h.mailStub.RecordedDeliveries(), beforeBlockedSendDeliveries) + + blockedAtConfirmEmail := "blocked-confirm@example.com" + challengeID := h.sendChallenge(t, blockedAtConfirmEmail) + code := lastMailCodeFor(t, h.mailStub, blockedAtConfirmEmail) + postBlockByEmail(t, h.userServiceURL, blockedAtConfirmEmail) + + confirmResponse := h.confirmCode(t, challengeID, code) + requireJSONStatusRaw(t, confirmResponse, http.StatusForbidden, `{"error":{"code":"blocked_by_policy","message":"authentication is blocked by policy"}}`) + + lookupResponse, _ := lookupUserByEmail(t, h.userServiceURL, blockedAtConfirmEmail) + requireLookupNotFound(t, lookupResponse) +} diff --git a/integration/authsessionuser/harness_test.go b/integration/authsessionuser/harness_test.go new file mode 100644 index 0000000..0e95d15 --- 
/dev/null +++ b/integration/authsessionuser/harness_test.go @@ -0,0 +1,386 @@ +package authsessionuser_test + +import ( + "bytes" + "encoding/json" + "errors" + "fmt" + "io" + "net/http" + "testing" + "time" + + "galaxy/integration/internal/harness" + + "github.com/stretchr/testify/require" +) + +const ( + testClientPublicKey = "AAECAwQFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh8=" + testTimeZone = "Europe/Kaliningrad" +) + +type authsessionUserHarness struct { + mailStub *harness.MailStub + + authsessionPublicURL string + userServiceURL string + + authsessionProcess *harness.Process + userServiceProcess *harness.Process +} + +func newAuthsessionUserHarness(t *testing.T) *authsessionUserHarness { + t.Helper() + + redisServer := harness.StartMiniredis(t) + mailStub := harness.NewMailStub(t) + + userServiceAddr := harness.FreeTCPAddress(t) + authsessionPublicAddr := harness.FreeTCPAddress(t) + authsessionInternalAddr := harness.FreeTCPAddress(t) + + userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice") + authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession") + + userServiceEnv := map[string]string{ + "USERSERVICE_LOG_LEVEL": "info", + "USERSERVICE_INTERNAL_HTTP_ADDR": userServiceAddr, + "USERSERVICE_REDIS_ADDR": redisServer.Addr(), + "OTEL_TRACES_EXPORTER": "none", + "OTEL_METRICS_EXPORTER": "none", + } + userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv) + waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr) + + authsessionEnv := map[string]string{ + "AUTHSESSION_LOG_LEVEL": "info", + "AUTHSESSION_PUBLIC_HTTP_ADDR": authsessionPublicAddr, + "AUTHSESSION_INTERNAL_HTTP_ADDR": authsessionInternalAddr, + "AUTHSESSION_REDIS_ADDR": redisServer.Addr(), + "AUTHSESSION_USER_SERVICE_MODE": "rest", + "AUTHSESSION_USER_SERVICE_BASE_URL": "http://" + userServiceAddr, + "AUTHSESSION_MAIL_SERVICE_MODE": "rest", + "AUTHSESSION_MAIL_SERVICE_BASE_URL": 
mailStub.BaseURL(), + "OTEL_TRACES_EXPORTER": "none", + "OTEL_METRICS_EXPORTER": "none", + "AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT": time.Second.String(), + "AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": time.Second.String(), + } + authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, authsessionEnv) + waitForAuthsessionPublicReady(t, authsessionProcess, "http://"+authsessionPublicAddr) + + return &authsessionUserHarness{ + mailStub: mailStub, + authsessionPublicURL: "http://" + authsessionPublicAddr, + userServiceURL: "http://" + userServiceAddr, + authsessionProcess: authsessionProcess, + userServiceProcess: userServiceProcess, + } +} + +func (h *authsessionUserHarness) sendChallenge(t *testing.T, email string) string { + t.Helper() + + response := postJSONValue(t, h.authsessionPublicURL+"/api/v1/public/auth/send-email-code", map[string]string{ + "email": email, + }) + require.Equal(t, http.StatusOK, response.StatusCode) + + var body struct { + ChallengeID string `json:"challenge_id"` + } + require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body)) + require.NotEmpty(t, body.ChallengeID) + + return body.ChallengeID +} + +func (h *authsessionUserHarness) confirmCode(t *testing.T, challengeID string, code string) httpResponse { + t.Helper() + + return postJSONValue(t, h.authsessionPublicURL+"/api/v1/public/auth/confirm-email-code", map[string]string{ + "challenge_id": challengeID, + "code": code, + "client_public_key": testClientPublicKey, + "time_zone": testTimeZone, + }) +} + +type httpResponse struct { + StatusCode int + Body string + Header http.Header +} + +func postJSONValue(t *testing.T, targetURL string, body any) httpResponse { + t.Helper() + + payload, err := json.Marshal(body) + require.NoError(t, err) + + request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload)) + require.NoError(t, err) + request.Header.Set("Content-Type", "application/json") + + client := &http.Client{ + Timeout: 250 
* time.Millisecond, + Transport: &http.Transport{ + DisableKeepAlives: true, + }, + } + t.Cleanup(client.CloseIdleConnections) + + response, err := client.Do(request) + require.NoError(t, err) + defer response.Body.Close() + + responseBody, err := io.ReadAll(response.Body) + require.NoError(t, err) + + return httpResponse{ + StatusCode: response.StatusCode, + Body: string(responseBody), + Header: response.Header.Clone(), + } +} + +func decodeStrictJSONPayload(payload []byte, target any) error { + decoder := json.NewDecoder(bytes.NewReader(payload)) + decoder.DisallowUnknownFields() + + if err := decoder.Decode(target); err != nil { + return err + } + if err := decoder.Decode(&struct{}{}); err != io.EOF { + if err == nil { + return errors.New("unexpected trailing JSON input") + } + return err + } + + return nil +} + +func waitForUserServiceReady(t *testing.T, process *harness.Process, baseURL string) { + t.Helper() + + client := &http.Client{Timeout: 250 * time.Millisecond} + deadline := time.Now().Add(10 * time.Second) + + for time.Now().Before(deadline) { + request, err := http.NewRequest(http.MethodGet, baseURL+"/api/v1/internal/users/user-missing/exists", nil) + require.NoError(t, err) + + response, err := client.Do(request) + if err == nil { + _, _ = io.Copy(io.Discard, response.Body) + response.Body.Close() + if response.StatusCode == http.StatusOK { + return + } + } + + time.Sleep(25 * time.Millisecond) + } + + t.Fatalf("wait for userservice readiness: timeout\n%s", process.Logs()) +} + +func waitForAuthsessionPublicReady(t *testing.T, process *harness.Process, baseURL string) { + t.Helper() + + client := &http.Client{Timeout: 250 * time.Millisecond} + deadline := time.Now().Add(10 * time.Second) + + for time.Now().Before(deadline) { + response, err := postJSONValueMaybe(client, baseURL+"/api/v1/public/auth/send-email-code", map[string]string{ + "email": "", + }) + if err == nil && response.StatusCode == http.StatusBadRequest { + return + } + + time.Sleep(25 
* time.Millisecond) + } + + t.Fatalf("wait for authsession public readiness: timeout\n%s", process.Logs()) +} + +func postJSONValueMaybe(client *http.Client, targetURL string, body any) (httpResponse, error) { + payload, err := json.Marshal(body) + if err != nil { + return httpResponse{}, err + } + + request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload)) + if err != nil { + return httpResponse{}, err + } + request.Header.Set("Content-Type", "application/json") + + response, err := client.Do(request) + if err != nil { + return httpResponse{}, err + } + defer response.Body.Close() + + responseBody, err := io.ReadAll(response.Body) + if err != nil { + return httpResponse{}, err + } + + return httpResponse{ + StatusCode: response.StatusCode, + Body: string(responseBody), + Header: response.Header.Clone(), + }, nil +} + +func requireJSONStatus(t *testing.T, response httpResponse, wantStatus int, target any) { + t.Helper() + + require.Equal(t, wantStatus, response.StatusCode, "response body: %s", response.Body) + require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), target)) +} + +func requireJSONStatusRaw(t *testing.T, response httpResponse, wantStatus int, wantBody string) { + t.Helper() + + require.Equal(t, wantStatus, response.StatusCode, "response body: %s", response.Body) + require.JSONEq(t, wantBody, response.Body) +} + +func postEnsureUser(t *testing.T, baseURL string, email string, preferredLanguage string, timeZone string) ensureByEmailResponse { + t.Helper() + + response := postJSONValue(t, baseURL+"/api/v1/internal/users/ensure-by-email", map[string]any{ + "email": email, + "registration_context": map[string]string{ + "preferred_language": preferredLanguage, + "time_zone": timeZone, + }, + }) + + var body ensureByEmailResponse + requireJSONStatus(t, response, http.StatusOK, &body) + return body +} + +func postBlockByEmail(t *testing.T, baseURL string, email string) { + t.Helper() + + response := postJSONValue(t, 
baseURL+"/api/v1/internal/user-blocks/by-email", map[string]string{ + "email": email, + "reason_code": "policy_blocked", + }) + + var body blockMutationResponse + requireJSONStatus(t, response, http.StatusOK, &body) +} + +func lookupUserByEmail(t *testing.T, baseURL string, email string) (httpResponse, userLookupResponse) { + t.Helper() + + response := postJSONValue(t, baseURL+"/api/v1/internal/user-lookups/by-email", map[string]string{ + "email": email, + }) + + if response.StatusCode != http.StatusOK { + return response, userLookupResponse{} + } + + var body userLookupResponse + require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body)) + return response, body +} + +type ensureByEmailResponse struct { + Outcome string `json:"outcome"` + UserID string `json:"user_id,omitempty"` +} + +type blockMutationResponse struct { + Outcome string `json:"outcome"` + UserID string `json:"user_id,omitempty"` +} + +type userLookupResponse struct { + User accountView `json:"user"` +} + +type accountView struct { + UserID string `json:"user_id"` + Email string `json:"email"` + RaceName string `json:"race_name"` + PreferredLanguage string `json:"preferred_language"` + TimeZone string `json:"time_zone"` + DeclaredCountry string `json:"declared_country,omitempty"` + Entitlement entitlementSnapshotView `json:"entitlement"` + ActiveSanctions []activeSanctionView `json:"active_sanctions"` + ActiveLimits []activeLimitView `json:"active_limits"` + CreatedAt time.Time `json:"created_at"` + UpdatedAt time.Time `json:"updated_at"` +} + +type entitlementSnapshotView struct { + PlanCode string `json:"plan_code"` + IsPaid bool `json:"is_paid"` + Source string `json:"source"` + Actor actorRefView `json:"actor"` + ReasonCode string `json:"reason_code"` + StartsAt time.Time `json:"starts_at"` + EndsAt *time.Time `json:"ends_at,omitempty"` + UpdatedAt time.Time `json:"updated_at"` +} + +type activeSanctionView struct { + SanctionCode string `json:"sanction_code"` + Scope string 
`json:"scope"` + ReasonCode string `json:"reason_code"` + Actor actorRefView `json:"actor"` + AppliedAt time.Time `json:"applied_at"` + ExpiresAt *time.Time `json:"expires_at,omitempty"` +} + +type activeLimitView struct { + LimitCode string `json:"limit_code"` + Value int `json:"value"` + ReasonCode string `json:"reason_code"` + Actor actorRefView `json:"actor"` + AppliedAt time.Time `json:"applied_at"` + ExpiresAt *time.Time `json:"expires_at,omitempty"` +} + +type actorRefView struct { + Type string `json:"type"` + ID string `json:"id,omitempty"` +} + +func requireLookupNotFound(t *testing.T, response httpResponse) { + t.Helper() + + requireJSONStatusRaw(t, response, http.StatusNotFound, `{"error":{"code":"subject_not_found","message":"subject not found"}}`) +} + +func lastMailCodeFor(t *testing.T, stub *harness.MailStub, email string) string { + t.Helper() + + deliveries := stub.RecordedDeliveries() + for index := len(deliveries) - 1; index >= 0; index-- { + if deliveries[index].Email == email { + return deliveries[index].Code + } + } + + t.Fatalf("mail stub did not record delivery for %s", email) + return "" +} + +func sleepForDistinctCreatedAt() { + time.Sleep(10 * time.Millisecond) +} + +func formatStatusError(response httpResponse) string { + return fmt.Sprintf("status=%d body=%s", response.StatusCode, response.Body) +} diff --git a/integration/gatewayauthsessionuser/gateway_authsession_user_test.go b/integration/gatewayauthsessionuser/gateway_authsession_user_test.go new file mode 100644 index 0000000..bc0c076 --- /dev/null +++ b/integration/gatewayauthsessionuser/gateway_authsession_user_test.go @@ -0,0 +1,86 @@ +package gatewayauthsessionuser_test + +import ( + "net/http" + "strings" + "testing" + + "github.com/stretchr/testify/require" +) + +func TestGatewayAuthsessionUserFirstRegistrationCreatesUserAndAllowsAccountRead(t *testing.T) { + h := newGatewayAuthsessionUserHarness(t) + + const email = "created@example.com" + + challengeID := 
h.sendChallenge(t, email) + code := lastMailCodeFor(t, h.mailStub, email) + clientPrivateKey := newClientPrivateKey("first-registration") + + confirmResponse := h.confirmCode(t, challengeID, code, clientPrivateKey) + var confirmBody struct { + DeviceSessionID string `json:"device_session_id"` + } + requireJSONStatus(t, confirmResponse, http.StatusOK, &confirmBody) + require.True(t, strings.HasPrefix(confirmBody.DeviceSessionID, "device-session-")) + + sessionRecord := h.waitForGatewaySession(t, confirmBody.DeviceSessionID) + accountResponse := h.executeGetMyAccount(t, confirmBody.DeviceSessionID, "request-first-registration", clientPrivateKey) + + require.Equal(t, sessionRecord.UserID, accountResponse.Account.UserID) + require.Equal(t, email, accountResponse.Account.Email) + require.Equal(t, "en", accountResponse.Account.PreferredLanguage) + require.Equal(t, gatewayAuthsessionUserTestTimeZone, accountResponse.Account.TimeZone) + + lookupResponse, lookup := h.lookupUserByEmail(t, email) + require.Equalf(t, http.StatusOK, lookupResponse.StatusCode, "status=%d body=%s", lookupResponse.StatusCode, lookupResponse.Body) + require.Equal(t, accountResponse.Account.UserID, lookup.User.UserID) +} + +func TestGatewayAuthsessionUserExistingAccountKeepsCreateOnlySettings(t *testing.T) { + h := newGatewayAuthsessionUserHarness(t) + + const email = "existing@example.com" + + created := h.ensureUser(t, email, "fr-FR", "Europe/Paris") + require.Equal(t, "created", created.Outcome) + + challengeID := h.sendChallenge(t, email) + code := lastMailCodeFor(t, h.mailStub, email) + clientPrivateKey := newClientPrivateKey("existing-account") + + confirmResponse := h.confirmCode(t, challengeID, code, clientPrivateKey) + var confirmBody struct { + DeviceSessionID string `json:"device_session_id"` + } + requireJSONStatus(t, confirmResponse, http.StatusOK, &confirmBody) + + accountResponse := h.executeGetMyAccount(t, confirmBody.DeviceSessionID, "request-existing-account", clientPrivateKey) + 
require.Equal(t, created.UserID, accountResponse.Account.UserID) + require.Equal(t, "fr-FR", accountResponse.Account.PreferredLanguage) + require.Equal(t, "Europe/Paris", accountResponse.Account.TimeZone) +} + +func TestGatewayAuthsessionUserBlockedEmailAndUserBehavior(t *testing.T) { + h := newGatewayAuthsessionUserHarness(t) + + blockedAtSendEmail := "blocked-send@example.com" + h.blockByEmail(t, blockedAtSendEmail) + + beforeBlockedSendDeliveries := len(h.mailStub.RecordedDeliveries()) + blockedChallengeID := h.sendChallenge(t, blockedAtSendEmail) + require.NotEmpty(t, blockedChallengeID) + require.Len(t, h.mailStub.RecordedDeliveries(), beforeBlockedSendDeliveries) + + blockedAtConfirmEmail := "blocked-confirm@example.com" + challengeID := h.sendChallenge(t, blockedAtConfirmEmail) + code := lastMailCodeFor(t, h.mailStub, blockedAtConfirmEmail) + h.blockByEmail(t, blockedAtConfirmEmail) + + confirmResponse := h.confirmCode(t, challengeID, code, newClientPrivateKey("blocked-confirm")) + require.Equal(t, http.StatusForbidden, confirmResponse.StatusCode) + require.JSONEq(t, `{"error":{"code":"blocked_by_policy","message":"authentication is blocked by policy"}}`, confirmResponse.Body) + + lookupResponse, _ := h.lookupUserByEmail(t, blockedAtConfirmEmail) + requireLookupNotFound(t, lookupResponse) +} diff --git a/integration/gatewayauthsessionuser/harness_test.go b/integration/gatewayauthsessionuser/harness_test.go new file mode 100644 index 0000000..90310e0 --- /dev/null +++ b/integration/gatewayauthsessionuser/harness_test.go @@ -0,0 +1,460 @@ +package gatewayauthsessionuser_test + +import ( + "bytes" + "context" + "crypto/ed25519" + "crypto/sha256" + "encoding/base64" + "encoding/json" + "fmt" + "io" + "net/http" + "path/filepath" + "testing" + "time" + + gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1" + contractsgatewayv1 "galaxy/integration/internal/contracts/gatewayv1" + contractsuserv1 "galaxy/integration/internal/contracts/userv1" + 
	"galaxy/integration/internal/harness"
+	usermodel "galaxy/model/user"
+
+	"github.com/redis/go-redis/v9"
+	"github.com/stretchr/testify/require"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/credentials/insecure"
+)
+
+const gatewayAuthsessionUserTestTimeZone = "Europe/Kaliningrad"
+
+type gatewayAuthsessionUserHarness struct {
+	redis *redis.Client
+
+	mailStub *harness.MailStub
+
+	authsessionPublicURL string
+	userServiceURL       string
+	gatewayPublicURL     string
+	gatewayGRPCAddr      string
+
+	responseSignerPublicKey ed25519.PublicKey
+
+	gatewayProcess     *harness.Process
+	authsessionProcess *harness.Process
+	userServiceProcess *harness.Process
+}
+
+func newGatewayAuthsessionUserHarness(t *testing.T) *gatewayAuthsessionUserHarness {
+	t.Helper()
+
+	redisServer := harness.StartMiniredis(t)
+	redisClient := redis.NewClient(&redis.Options{
+		Addr:            redisServer.Addr(),
+		Protocol:        2,
+		DisableIdentity: true,
+	})
+	t.Cleanup(func() {
+		require.NoError(t, redisClient.Close())
+	})
+
+	mailStub := harness.NewMailStub(t)
+
+	responseSignerPath, responseSignerPublicKey := harness.WriteResponseSignerPEM(t, t.Name())
+	userServiceAddr := harness.FreeTCPAddress(t)
+	authsessionPublicAddr := harness.FreeTCPAddress(t)
+	authsessionInternalAddr := harness.FreeTCPAddress(t)
+	gatewayPublicAddr := harness.FreeTCPAddress(t)
+	gatewayGRPCAddr := harness.FreeTCPAddress(t)
+
+	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
+	authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")
+	gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")
+
+	userServiceEnv := map[string]string{
+		"USERSERVICE_LOG_LEVEL":          "info",
+		"USERSERVICE_INTERNAL_HTTP_ADDR": userServiceAddr,
+		"USERSERVICE_REDIS_ADDR":         redisServer.Addr(),
+		"OTEL_TRACES_EXPORTER":           "none",
+		"OTEL_METRICS_EXPORTER":          "none",
+	}
+	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
+	harness.WaitForHTTPStatus(t, userServiceProcess, "http://"+userServiceAddr+"/api/v1/internal/users/user-missing/exists", http.StatusOK)
+
+	authsessionEnv := map[string]string{
+		"AUTHSESSION_LOG_LEVEL":                              "info",
+		"AUTHSESSION_PUBLIC_HTTP_ADDR":                       authsessionPublicAddr,
+		"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT":            time.Second.String(),
+		"AUTHSESSION_INTERNAL_HTTP_ADDR":                     authsessionInternalAddr,
+		"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT":          time.Second.String(),
+		"AUTHSESSION_REDIS_ADDR":                             redisServer.Addr(),
+		"AUTHSESSION_USER_SERVICE_MODE":                      "rest",
+		"AUTHSESSION_USER_SERVICE_BASE_URL":                  "http://" + userServiceAddr,
+		"AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT":           time.Second.String(),
+		"AUTHSESSION_MAIL_SERVICE_MODE":                      "rest",
+		"AUTHSESSION_MAIL_SERVICE_BASE_URL":                  mailStub.BaseURL(),
+		"AUTHSESSION_MAIL_SERVICE_REQUEST_TIMEOUT":           time.Second.String(),
+		"AUTHSESSION_REDIS_GATEWAY_SESSION_CACHE_KEY_PREFIX": "gateway:session:",
+		"AUTHSESSION_REDIS_GATEWAY_SESSION_EVENTS_STREAM":    "gateway:session_events",
+		"OTEL_TRACES_EXPORTER":                               "none",
+		"OTEL_METRICS_EXPORTER":                              "none",
+	}
+	authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, authsessionEnv)
+	waitForAuthsessionPublicReady(t, authsessionProcess, "http://"+authsessionPublicAddr)
+
+	gatewayEnv := map[string]string{
+		"GATEWAY_LOG_LEVEL":                    "info",
+		"GATEWAY_PUBLIC_HTTP_ADDR":             gatewayPublicAddr,
+		"GATEWAY_AUTHENTICATED_GRPC_ADDR":      gatewayGRPCAddr,
+		"GATEWAY_AUTH_SERVICE_BASE_URL":        "http://" + authsessionPublicAddr,
+		"GATEWAY_USER_SERVICE_BASE_URL":        "http://" + userServiceAddr,
+		"GATEWAY_PUBLIC_AUTH_UPSTREAM_TIMEOUT": (500 * time.Millisecond).String(),
+		"GATEWAY_SESSION_CACHE_REDIS_ADDR":     redisServer.Addr(),
+		"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:",
+		"GATEWAY_SESSION_EVENTS_REDIS_STREAM":    "gateway:session_events",
+		"GATEWAY_CLIENT_EVENTS_REDIS_STREAM":     "gateway:client_events",
+		"GATEWAY_REPLAY_REDIS_KEY_PREFIX":        "gateway:replay:",
+		"GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH": filepath.Clean(responseSignerPath),
+		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_REQUESTS":                 "100",
+		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_WINDOW":                   "1s",
+		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_BURST":                    "100",
+		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS":    "100",
+		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW":      "1s",
+		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST":       "100",
+		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS": "100",
+		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW":   "1s",
+		"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST":    "100",
+		"OTEL_TRACES_EXPORTER":  "none",
+		"OTEL_METRICS_EXPORTER": "none",
+	}
+	gatewayProcess := harness.StartProcess(t, "gateway", gatewayBinary, gatewayEnv)
+	harness.WaitForHTTPStatus(t, gatewayProcess, "http://"+gatewayPublicAddr+"/healthz", http.StatusOK)
+	harness.WaitForTCP(t, gatewayProcess, gatewayGRPCAddr)
+
+	return &gatewayAuthsessionUserHarness{
+		redis:                   redisClient,
+		mailStub:                mailStub,
+		authsessionPublicURL:    "http://" + authsessionPublicAddr,
+		userServiceURL:          "http://" + userServiceAddr,
+		gatewayPublicURL:        "http://" + gatewayPublicAddr,
+		gatewayGRPCAddr:         gatewayGRPCAddr,
+		responseSignerPublicKey: responseSignerPublicKey,
+		gatewayProcess:          gatewayProcess,
+		authsessionProcess:      authsessionProcess,
+		userServiceProcess:      userServiceProcess,
+	}
+}
+
+func (h *gatewayAuthsessionUserHarness) sendChallenge(t *testing.T, email string) string {
+	t.Helper()
+
+	response := postJSONValue(t, h.gatewayPublicURL+"/api/v1/public/auth/send-email-code", map[string]string{
+		"email": email,
+	})
+	require.Equal(t, http.StatusOK, response.StatusCode)
+
+	var body struct {
+		ChallengeID string `json:"challenge_id"`
+	}
+	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body))
+	return body.ChallengeID
+}
+
+func (h *gatewayAuthsessionUserHarness) confirmCode(t *testing.T, challengeID string, code string, clientPrivateKey ed25519.PrivateKey) httpResponse {
+	t.Helper()
+
+	return postJSONValue(t, h.gatewayPublicURL+"/api/v1/public/auth/confirm-email-code", map[string]string{
+		"challenge_id":      challengeID,
+		"code":              code,
+		"client_public_key": base64.StdEncoding.EncodeToString(clientPrivateKey.Public().(ed25519.PublicKey)),
+		"time_zone":         gatewayAuthsessionUserTestTimeZone,
+	})
+}
+
+func (h *gatewayAuthsessionUserHarness) ensureUser(t *testing.T, email string, preferredLanguage string, timeZone string) ensureByEmailResponse {
+	t.Helper()
+
+	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/users/ensure-by-email", map[string]any{
+		"email": email,
+		"registration_context": map[string]string{
+			"preferred_language": preferredLanguage,
+			"time_zone":          timeZone,
+		},
+	})
+
+	var body ensureByEmailResponse
+	requireJSONStatus(t, response, http.StatusOK, &body)
+	return body
+}
+
+func (h *gatewayAuthsessionUserHarness) lookupUserByEmail(t *testing.T, email string) (httpResponse, userLookupResponse) {
+	t.Helper()
+
+	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/user-lookups/by-email", map[string]string{
+		"email": email,
+	})
+	if response.StatusCode != http.StatusOK {
+		return response, userLookupResponse{}
+	}
+
+	var body userLookupResponse
+	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body))
+	return response, body
+}
+
+func (h *gatewayAuthsessionUserHarness) blockByEmail(t *testing.T, email string) {
+	t.Helper()
+
+	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/user-blocks/by-email", map[string]string{
+		"email":       email,
+		"reason_code": "policy_blocked",
+	})
+	require.Equal(t, http.StatusOK, response.StatusCode, "response body: %s", response.Body)
+}
+
+func (h *gatewayAuthsessionUserHarness) waitForGatewaySession(t *testing.T, deviceSessionID string) gatewaySessionRecord {
+	t.Helper()
+
+	deadline := time.Now().Add(5 * time.Second)
+	for time.Now().Before(deadline) {
+		payload, err := h.redis.Get(context.Background(), "gateway:session:"+deviceSessionID).Bytes()
+		if err == nil {
+			var record gatewaySessionRecord
+			require.NoError(t, decodeStrictJSONPayload(payload, &record))
+			return record
+		}
+
+		time.Sleep(25 * time.Millisecond)
+	}
+
+	t.Fatalf("gateway session projection for %s was not published in time", deviceSessionID)
+	return gatewaySessionRecord{}
+}
+
+func (h *gatewayAuthsessionUserHarness) executeGetMyAccount(t *testing.T, deviceSessionID string, requestID string, clientPrivateKey ed25519.PrivateKey) *usermodel.AccountResponse {
+	t.Helper()
+
+	conn := h.dialGateway(t)
+	client := gatewayv1.NewEdgeGatewayClient(conn)
+
+	payload, err := contractsuserv1.EncodeGetMyAccountRequest()
+	require.NoError(t, err)
+
+	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+	defer cancel()
+
+	response, err := client.ExecuteCommand(ctx, newExecuteCommandRequest(deviceSessionID, requestID, contractsuserv1.MessageTypeGetMyAccount, payload, clientPrivateKey))
+	require.NoError(t, err)
+	require.Equal(t, contractsuserv1.ResultCodeOK, response.GetResultCode())
+	assertSignedExecuteCommandResponse(t, response, h.responseSignerPublicKey)
+
+	accountResponse, err := contractsuserv1.DecodeAccountResponse(response.GetPayloadBytes())
+	require.NoError(t, err)
+	return accountResponse
+}
+
+func (h *gatewayAuthsessionUserHarness) dialGateway(t *testing.T) *grpc.ClientConn {
+	t.Helper()
+
+	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+	defer cancel()
+
+	conn, err := grpc.DialContext(
+		ctx,
+		h.gatewayGRPCAddr,
+		grpc.WithTransportCredentials(insecure.NewCredentials()),
+		grpc.WithBlock(),
+	)
+	require.NoError(t, err)
+	t.Cleanup(func() {
+		require.NoError(t, conn.Close())
+	})
+
+	return conn
+}
+
+type httpResponse struct {
+	StatusCode int
+	Body       string
+	Header     http.Header
+}
+
+type ensureByEmailResponse struct {
+	Outcome string `json:"outcome"`
+	UserID  string `json:"user_id,omitempty"`
+}
+
+type gatewaySessionRecord struct {
+	DeviceSessionID string `json:"device_session_id"`
+	UserID          string `json:"user_id"`
+	ClientPublicKey string `json:"client_public_key"`
+	Status          string `json:"status"`
+	RevokedAtMS     *int64 `json:"revoked_at_ms,omitempty"`
+}
+
+type userLookupResponse struct {
+	User usermodel.Account `json:"user"`
+}
+
+func postJSONValue(t *testing.T, targetURL string, body any) httpResponse {
+	t.Helper()
+
+	payload, err := json.Marshal(body)
+	require.NoError(t, err)
+
+	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
+	require.NoError(t, err)
+	request.Header.Set("Content-Type", "application/json")
+
+	client := &http.Client{Timeout: 5 * time.Second}
+	response, err := client.Do(request)
+	require.NoError(t, err)
+	defer response.Body.Close()
+
+	responseBody, err := io.ReadAll(response.Body)
+	require.NoError(t, err)
+
+	return httpResponse{
+		StatusCode: response.StatusCode,
+		Body:       string(responseBody),
+		Header:     response.Header.Clone(),
+	}
+}
+
+func decodeStrictJSONPayload(payload []byte, target any) error {
+	decoder := json.NewDecoder(bytes.NewReader(payload))
+	decoder.DisallowUnknownFields()
+
+	if err := decoder.Decode(target); err != nil {
+		return err
+	}
+	if err := decoder.Decode(&struct{}{}); err != io.EOF {
+		if err == nil {
+			return fmt.Errorf("unexpected trailing JSON input")
+		}
+		return err
+	}
+
+	return nil
+}
+
+func requireJSONStatus(t *testing.T, response httpResponse, wantStatus int, target any) {
+	t.Helper()
+
+	require.Equal(t, wantStatus, response.StatusCode, "response body: %s", response.Body)
+	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), target))
+}
+
+func requireLookupNotFound(t *testing.T, response httpResponse) {
+	t.Helper()
+
+	require.Equal(t, http.StatusNotFound, response.StatusCode, "response body: %s", response.Body)
+	require.JSONEq(t, `{"error":{"code":"subject_not_found","message":"subject not found"}}`, response.Body)
+}
+
+func lastMailCodeFor(t *testing.T, stub *harness.MailStub, email string) string {
+	t.Helper()
+
+	deliveries := stub.RecordedDeliveries()
+	for index := len(deliveries) - 1; index >= 0; index-- {
+		if deliveries[index].Email == email {
+			return deliveries[index].Code
+		}
+	}
+
+	t.Fatalf("mail stub did not record delivery for %s", email)
+	return ""
+}
+
+func waitForAuthsessionPublicReady(t *testing.T, process *harness.Process, baseURL string) {
+	t.Helper()
+
+	client := &http.Client{Timeout: 250 * time.Millisecond}
+	deadline := time.Now().Add(10 * time.Second)
+
+	for time.Now().Before(deadline) {
+		response, err := postJSONValueMaybe(client, baseURL+"/api/v1/public/auth/send-email-code", map[string]string{
+			"email": "",
+		})
+		if err == nil && response.StatusCode == http.StatusBadRequest {
+			return
+		}
+
+		time.Sleep(25 * time.Millisecond)
+	}
+
+	t.Fatalf("wait for authsession public readiness: timeout\n%s", process.Logs())
+}
+
+func postJSONValueMaybe(client *http.Client, targetURL string, body any) (httpResponse, error) {
+	payload, err := json.Marshal(body)
+	if err != nil {
+		return httpResponse{}, err
+	}
+
+	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
+	if err != nil {
+		return httpResponse{}, err
+	}
+	request.Header.Set("Content-Type", "application/json")
+
+	response, err := client.Do(request)
+	if err != nil {
+		return httpResponse{}, err
+	}
+	defer response.Body.Close()
+
+	responseBody, err := io.ReadAll(response.Body)
+	if err != nil {
+		return httpResponse{}, err
+	}
+
+	return httpResponse{
+		StatusCode: response.StatusCode,
+		Body:       string(responseBody),
+		Header:     response.Header.Clone(),
+	}, nil
+}
+
+func newClientPrivateKey(label string) ed25519.PrivateKey {
+	seed := sha256.Sum256([]byte("galaxy-integration-gateway-authsession-user-client-" + label))
+	return ed25519.NewKeyFromSeed(seed[:])
+}
+
+func newExecuteCommandRequest(deviceSessionID string, requestID string, messageType string, payload []byte, clientPrivateKey ed25519.PrivateKey) *gatewayv1.ExecuteCommandRequest {
+	payloadHash := contractsgatewayv1.ComputePayloadHash(payload)
+
+	request := &gatewayv1.ExecuteCommandRequest{
+		ProtocolVersion: contractsgatewayv1.ProtocolVersionV1,
+		DeviceSessionId: deviceSessionID,
+		MessageType:     messageType,
+		TimestampMs:     time.Now().UnixMilli(),
+		RequestId:       requestID,
+		PayloadBytes:    payload,
+		PayloadHash:     payloadHash,
+		TraceId:         "trace-" + requestID,
+	}
+	request.Signature = contractsgatewayv1.SignRequest(clientPrivateKey, contractsgatewayv1.RequestSigningFields{
+		ProtocolVersion: request.GetProtocolVersion(),
+		DeviceSessionID: request.GetDeviceSessionId(),
+		MessageType:     request.GetMessageType(),
+		TimestampMS:     request.GetTimestampMs(),
+		RequestID:       request.GetRequestId(),
+		PayloadHash:     request.GetPayloadHash(),
+	})
+
+	return request
+}
+
+func assertSignedExecuteCommandResponse(t *testing.T, response *gatewayv1.ExecuteCommandResponse, publicKey ed25519.PublicKey) {
+	t.Helper()
+
+	require.NoError(t, contractsgatewayv1.VerifyPayloadHash(response.GetPayloadBytes(), response.GetPayloadHash()))
+	require.NoError(t, contractsgatewayv1.VerifyResponseSignature(publicKey, response.GetSignature(), contractsgatewayv1.ResponseSigningFields{
+		ProtocolVersion: response.GetProtocolVersion(),
+		RequestID:       response.GetRequestId(),
+		TimestampMS:     response.GetTimestampMs(),
+		ResultCode:      response.GetResultCode(),
+		PayloadHash:     response.GetPayloadHash(),
+	}))
+}
diff --git a/integration/gatewayuser/gateway_user_test.go b/integration/gatewayuser/gateway_user_test.go
new file mode 100644
index 0000000..715cbc0
--- /dev/null
+++ b/integration/gatewayuser/gateway_user_test.go
@@ -0,0 +1,147 @@
+package gatewayuser_test
+
+import (
+	"testing"
+
+	contractsuserv1 "galaxy/integration/internal/contracts/userv1"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestGatewayUserGetMyAccountAuthenticated(t *testing.T) {
+	h := newGatewayUserHarness(t)
+
+	const (
+		email           = "pilot@example.com"
+		deviceSessionID = "device-session-get-account"
+		requestID       = "request-get-account"
+	)
+
+	created := h.ensureUser(t, email, "en", gatewayUserTestTimeZone)
+	require.Equal(t, "created", created.Outcome)
+
+	clientPrivateKey := newClientPrivateKey("get-account")
+	h.seedGatewaySession(t, deviceSessionID, created.UserID, clientPrivateKey)
+
+	payload, err := contractsuserv1.EncodeGetMyAccountRequest()
+	require.NoError(t, err)
+
+	response := h.executeCommand(t, deviceSessionID, requestID, contractsuserv1.MessageTypeGetMyAccount, payload, clientPrivateKey)
+	require.Equal(t, contractsuserv1.ResultCodeOK, response.GetResultCode())
+
+	accountResponse, err := contractsuserv1.DecodeAccountResponse(response.GetPayloadBytes())
+	require.NoError(t, err)
+	require.Equal(t, created.UserID, accountResponse.Account.UserID)
+	require.Equal(t, email, accountResponse.Account.Email)
+	require.Equal(t, "en", accountResponse.Account.PreferredLanguage)
+	require.Equal(t, gatewayUserTestTimeZone, accountResponse.Account.TimeZone)
+}
+
+func TestGatewayUserUpdateMyProfileSuccess(t *testing.T) {
+	h := newGatewayUserHarness(t)
+
+	const (
+		email           = "pilot-profile@example.com"
+		deviceSessionID = "device-session-update-profile"
+		requestID       = "request-update-profile"
+	)
+
+	created := h.ensureUser(t, email, "en", gatewayUserTestTimeZone)
+	clientPrivateKey := newClientPrivateKey("update-profile")
+	h.seedGatewaySession(t, deviceSessionID, created.UserID, clientPrivateKey)
+
+	payload, err := contractsuserv1.EncodeUpdateMyProfileRequest("Nova Prime")
+	require.NoError(t, err)
+
+	response := h.executeCommand(t, deviceSessionID, requestID, contractsuserv1.MessageTypeUpdateMyProfile, payload, clientPrivateKey)
+	require.Equal(t, contractsuserv1.ResultCodeOK, response.GetResultCode())
+
+	accountResponse, err := contractsuserv1.DecodeAccountResponse(response.GetPayloadBytes())
+	require.NoError(t, err)
+	require.Equal(t, "Nova Prime", accountResponse.Account.RaceName)
+
+	lookup := h.lookupUserByEmail(t, email)
+	require.Equal(t, "Nova Prime", lookup.User.RaceName)
+}
+
+func TestGatewayUserUpdateMySettingsSuccess(t *testing.T) {
+	h := newGatewayUserHarness(t)
+
+	const (
+		email           = "pilot-settings@example.com"
+		deviceSessionID = "device-session-update-settings"
+		requestID       = "request-update-settings"
+	)
+
+	created := h.ensureUser(t, email, "en", gatewayUserTestTimeZone)
+	clientPrivateKey := newClientPrivateKey("update-settings")
+	h.seedGatewaySession(t, deviceSessionID, created.UserID, clientPrivateKey)
+
+	payload, err := contractsuserv1.EncodeUpdateMySettingsRequest("fr-FR", "Europe/Paris")
+	require.NoError(t, err)
+
+	response := h.executeCommand(t, deviceSessionID, requestID, contractsuserv1.MessageTypeUpdateMySettings, payload, clientPrivateKey)
+	require.Equal(t, contractsuserv1.ResultCodeOK, response.GetResultCode())
+
+	accountResponse, err := contractsuserv1.DecodeAccountResponse(response.GetPayloadBytes())
+	require.NoError(t, err)
+	require.Equal(t, "fr-FR", accountResponse.Account.PreferredLanguage)
+	require.Equal(t, "Europe/Paris", accountResponse.Account.TimeZone)
+
+	lookup := h.lookupUserByEmail(t, email)
+	require.Equal(t, "fr-FR", lookup.User.PreferredLanguage)
+	require.Equal(t, "Europe/Paris", lookup.User.TimeZone)
+}
+
+func TestGatewayUserUpdateMyProfileConflict(t *testing.T) {
+	h := newGatewayUserHarness(t)
+
+	const (
+		email           = "pilot-conflict@example.com"
+		deviceSessionID = "device-session-profile-conflict"
+		requestID       = "request-profile-conflict"
+	)
+
+	created := h.ensureUser(t, email, "en", gatewayUserTestTimeZone)
+	h.applyProfileUpdateBlock(t, created.UserID)
+
+	clientPrivateKey := newClientPrivateKey("profile-conflict")
+	h.seedGatewaySession(t, deviceSessionID, created.UserID, clientPrivateKey)
+
+	payload, err := contractsuserv1.EncodeUpdateMyProfileRequest("Blocked Nova")
+	require.NoError(t, err)
+
+	response := h.executeCommand(t, deviceSessionID, requestID, contractsuserv1.MessageTypeUpdateMyProfile, payload, clientPrivateKey)
+	require.Equal(t, "conflict", response.GetResultCode())
+
+	errorResponse, err := contractsuserv1.DecodeErrorResponse(response.GetPayloadBytes())
+	require.NoError(t, err)
+	require.Equal(t, "conflict", errorResponse.Error.Code)
+	require.Equal(t, "request conflicts with current state", errorResponse.Error.Message)
+}
+
+func TestGatewayUserUpdateMySettingsInvalidRequest(t *testing.T) {
+	h := newGatewayUserHarness(t)
+
+	const (
+		email           = "pilot-invalid@example.com"
+		deviceSessionID = "device-session-settings-invalid"
+		requestID       = "request-settings-invalid"
+	)
+
+	created := h.ensureUser(t, email, "en", gatewayUserTestTimeZone)
+
+	clientPrivateKey := newClientPrivateKey("settings-invalid")
+	h.seedGatewaySession(t, deviceSessionID, created.UserID, clientPrivateKey)
+
+	payload, err := contractsuserv1.EncodeUpdateMySettingsRequest("en", "Mars/Base")
+	require.NoError(t, err)
+
+	response := h.executeCommand(t, deviceSessionID, requestID, contractsuserv1.MessageTypeUpdateMySettings, payload, clientPrivateKey)
+	require.Equal(t, "invalid_request", response.GetResultCode())
+
+	errorResponse, err := contractsuserv1.DecodeErrorResponse(response.GetPayloadBytes())
+	require.NoError(t, err)
+	require.Equal(t, "invalid_request", errorResponse.Error.Code)
+	require.NotEmpty(t, errorResponse.Error.Message)
+}
diff --git a/integration/gatewayuser/harness_test.go b/integration/gatewayuser/harness_test.go
new file mode 100644
index 0000000..732b950
--- /dev/null
+++ b/integration/gatewayuser/harness_test.go
@@ -0,0 +1,311 @@
+package gatewayuser_test
+
+import (
+	"bytes"
+	"context"
+	"crypto/ed25519"
+	"crypto/sha256"
+	"encoding/base64"
+	"encoding/json"
+	"fmt"
+	"io"
+	"net/http"
+	"path/filepath"
+	"testing"
+	"time"
+
+	gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1"
+	contractsgatewayv1 "galaxy/integration/internal/contracts/gatewayv1"
+	"galaxy/integration/internal/harness"
+	usermodel "galaxy/model/user"
+
+	"github.com/redis/go-redis/v9"
+	"github.com/stretchr/testify/require"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/credentials/insecure"
+)
+
+const (
+	gatewayUserDefaultHTTPTimeout = time.Second
+	gatewayUserTestTimeZone       = "Europe/Kaliningrad"
+)
+
+type gatewayUserHarness struct {
+	redis *redis.Client
+
+	userServiceURL  string
+	gatewayGRPCAddr string
+
+	responseSignerPublicKey ed25519.PublicKey
+
+	gatewayProcess     *harness.Process
+	userServiceProcess *harness.Process
+}
+
+func newGatewayUserHarness(t *testing.T) *gatewayUserHarness {
+	t.Helper()
+
+	redisServer := harness.StartMiniredis(t)
+	redisClient := redis.NewClient(&redis.Options{
+		Addr:            redisServer.Addr(),
+		Protocol:        2,
+		DisableIdentity: true,
+	})
+	t.Cleanup(func() {
+		require.NoError(t, redisClient.Close())
+	})
+
+	responseSignerPath, responseSignerPublicKey := harness.WriteResponseSignerPEM(t, t.Name())
+	userServiceAddr := harness.FreeTCPAddress(t)
+	gatewayPublicAddr := harness.FreeTCPAddress(t)
+	gatewayGRPCAddr := harness.FreeTCPAddress(t)
+
+	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
+	gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")
+
+	userServiceEnv := map[string]string{
+		"USERSERVICE_LOG_LEVEL":          "info",
+		"USERSERVICE_INTERNAL_HTTP_ADDR": userServiceAddr,
+		"USERSERVICE_REDIS_ADDR":         redisServer.Addr(),
+		"OTEL_TRACES_EXPORTER":           "none",
+		"OTEL_METRICS_EXPORTER":          "none",
+	}
+	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
+	harness.WaitForHTTPStatus(t, userServiceProcess, "http://"+userServiceAddr+"/api/v1/internal/users/user-missing/exists", http.StatusOK)
+
+	gatewayEnv := map[string]string{
+		"GATEWAY_LOG_LEVEL":                      "info",
+		"GATEWAY_PUBLIC_HTTP_ADDR":               gatewayPublicAddr,
+		"GATEWAY_AUTHENTICATED_GRPC_ADDR":        gatewayGRPCAddr,
+		"GATEWAY_USER_SERVICE_BASE_URL":          "http://" + userServiceAddr,
+		"GATEWAY_SESSION_CACHE_REDIS_ADDR":       redisServer.Addr(),
+		"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:",
+		"GATEWAY_SESSION_EVENTS_REDIS_STREAM":    "gateway:session_events",
+		"GATEWAY_CLIENT_EVENTS_REDIS_STREAM":     "gateway:client_events",
+		"GATEWAY_REPLAY_REDIS_KEY_PREFIX":        "gateway:replay:",
+		"GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH": filepath.Clean(responseSignerPath),
+		"OTEL_TRACES_EXPORTER":  "none",
+		"OTEL_METRICS_EXPORTER": "none",
+	}
+	gatewayProcess := harness.StartProcess(t, "gateway", gatewayBinary, gatewayEnv)
+	harness.WaitForHTTPStatus(t, gatewayProcess, "http://"+gatewayPublicAddr+"/healthz", http.StatusOK)
+	harness.WaitForTCP(t, gatewayProcess, gatewayGRPCAddr)
+
+	return &gatewayUserHarness{
+		redis:                   redisClient,
+		userServiceURL:          "http://" + userServiceAddr,
+		gatewayGRPCAddr:         gatewayGRPCAddr,
+		responseSignerPublicKey: responseSignerPublicKey,
+		gatewayProcess:          gatewayProcess,
+		userServiceProcess:      userServiceProcess,
+	}
+}
+
+func (h *gatewayUserHarness) dialGateway(t *testing.T) *grpc.ClientConn {
+	t.Helper()
+
+	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+	defer cancel()
+
+	conn, err := grpc.DialContext(
+		ctx,
+		h.gatewayGRPCAddr,
+		grpc.WithTransportCredentials(insecure.NewCredentials()),
+		grpc.WithBlock(),
+	)
+	require.NoError(t, err)
+	t.Cleanup(func() {
+		require.NoError(t, conn.Close())
+	})
+
+	return conn
+}
+
+func (h *gatewayUserHarness) ensureUser(t *testing.T, email string, preferredLanguage string, timeZone string) ensureByEmailResponse {
+	t.Helper()
+
+	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/users/ensure-by-email", map[string]any{
+		"email": email,
+		"registration_context": map[string]string{
+			"preferred_language": preferredLanguage,
+			"time_zone":          timeZone,
+		},
+	})
+
+	var body ensureByEmailResponse
+	requireJSONStatus(t, response, http.StatusOK, &body)
+	return body
+}
+
+func (h *gatewayUserHarness) lookupUserByEmail(t *testing.T, email string) userLookupResponse {
+	t.Helper()
+
+	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/user-lookups/by-email", map[string]string{
+		"email": email,
+	})
+
+	var body userLookupResponse
+	requireJSONStatus(t, response, http.StatusOK, &body)
+	return body
+}
+
+func (h *gatewayUserHarness) applyProfileUpdateBlock(t *testing.T, userID string) {
+	t.Helper()
+
+	response := postJSONValue(t, h.userServiceURL+"/api/v1/internal/users/"+userID+"/sanctions/apply", map[string]any{
+		"sanction_code": "profile_update_block",
+		"scope":         "lobby",
+		"reason_code":   "manual_block",
+		"actor": map[string]string{
+			"type": "admin",
+			"id":   "admin-1",
+		},
+		"applied_at": "2026-04-09T10:00:00Z",
+	})
+	require.Equal(t, http.StatusOK, response.StatusCode, "response body: %s", response.Body)
+}
+
+func (h *gatewayUserHarness) seedGatewaySession(t *testing.T, deviceSessionID string, userID string, clientPrivateKey ed25519.PrivateKey) {
+	t.Helper()
+
+	record := gatewaySessionRecord{
+		DeviceSessionID: deviceSessionID,
+		UserID:          userID,
+		ClientPublicKey: base64.StdEncoding.EncodeToString(clientPrivateKey.Public().(ed25519.PublicKey)),
+		Status:          "active",
+	}
+
+	payload, err := json.Marshal(record)
+	require.NoError(t, err)
+	require.NoError(t, h.redis.Set(context.Background(), "gateway:session:"+deviceSessionID, payload, 0).Err())
+}
+
+func (h *gatewayUserHarness) executeCommand(t *testing.T, deviceSessionID string, requestID string, messageType string, payload []byte, clientPrivateKey ed25519.PrivateKey) *gatewayv1.ExecuteCommandResponse {
+	t.Helper()
+
+	conn := h.dialGateway(t)
+	client := gatewayv1.NewEdgeGatewayClient(conn)
+
+	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+	defer cancel()
+
+	response, err := client.ExecuteCommand(ctx, newExecuteCommandRequest(deviceSessionID, requestID, messageType, payload, clientPrivateKey))
+	require.NoError(t, err)
+	assertSignedExecuteCommandResponse(t, response, h.responseSignerPublicKey)
+	return response
+}
+
+type httpResponse struct {
+	StatusCode int
+	Body       string
+	Header     http.Header
+}
+
+type gatewaySessionRecord struct {
+	DeviceSessionID string `json:"device_session_id"`
+	UserID          string `json:"user_id"`
+	ClientPublicKey string `json:"client_public_key"`
+	Status          string `json:"status"`
+	RevokedAtMS     *int64 `json:"revoked_at_ms,omitempty"`
+}
+
+type ensureByEmailResponse struct {
+	Outcome string `json:"outcome"`
+	UserID  string `json:"user_id,omitempty"`
+}
+
+type userLookupResponse struct {
+	User usermodel.Account `json:"user"`
+}
+
+func postJSONValue(t *testing.T, targetURL string, body any) httpResponse {
+	t.Helper()
+
+	payload, err := json.Marshal(body)
+	require.NoError(t, err)
+
+	request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload))
+	require.NoError(t, err)
+	request.Header.Set("Content-Type", "application/json")
+
+	client := &http.Client{Timeout: gatewayUserDefaultHTTPTimeout}
+	response, err := client.Do(request)
+	require.NoError(t, err)
+	defer response.Body.Close()
+
+	responseBody, err := io.ReadAll(response.Body)
+	require.NoError(t, err)
+
+	return httpResponse{
+		StatusCode: response.StatusCode,
+		Body:       string(responseBody),
+		Header:     response.Header.Clone(),
+	}
+}
+
+func requireJSONStatus(t *testing.T, response httpResponse, wantStatus int, target any) {
+	t.Helper()
+
+	require.Equal(t, wantStatus, response.StatusCode, "response body: %s", response.Body)
+	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), target))
+}
+
+func decodeStrictJSONPayload(payload []byte, target any) error {
+	decoder := json.NewDecoder(bytes.NewReader(payload))
+	decoder.DisallowUnknownFields()
+
+	if err := decoder.Decode(target); err != nil {
+		return err
+	}
+	if err := decoder.Decode(&struct{}{}); err != io.EOF {
+		if err == nil {
+			return fmt.Errorf("unexpected trailing JSON input")
+		}
+		return err
+	}
+
+	return nil
+}
+
+func newClientPrivateKey(label string) ed25519.PrivateKey {
+	seed := sha256.Sum256([]byte("galaxy-integration-gateway-user-client-" + label))
+	return ed25519.NewKeyFromSeed(seed[:])
+}
+
+func newExecuteCommandRequest(deviceSessionID string, requestID string, messageType string, payload []byte, clientPrivateKey ed25519.PrivateKey) *gatewayv1.ExecuteCommandRequest {
+	payloadHash := contractsgatewayv1.ComputePayloadHash(payload)
+
+	request := &gatewayv1.ExecuteCommandRequest{
+		ProtocolVersion: contractsgatewayv1.ProtocolVersionV1,
+		DeviceSessionId: deviceSessionID,
+		MessageType:     messageType,
+		TimestampMs:     time.Now().UnixMilli(),
+		RequestId:       requestID,
+		PayloadBytes:    payload,
+		PayloadHash:     payloadHash,
+		TraceId:         "trace-" + requestID,
+	}
+	request.Signature = contractsgatewayv1.SignRequest(clientPrivateKey, contractsgatewayv1.RequestSigningFields{
+		ProtocolVersion: request.GetProtocolVersion(),
+		DeviceSessionID: request.GetDeviceSessionId(),
+		MessageType:     request.GetMessageType(),
+		TimestampMS:     request.GetTimestampMs(),
+		RequestID:       request.GetRequestId(),
+		PayloadHash:     request.GetPayloadHash(),
+	})
+
+	return request
+}
+
+func assertSignedExecuteCommandResponse(t *testing.T, response *gatewayv1.ExecuteCommandResponse, publicKey ed25519.PublicKey) {
+	t.Helper()
+
+	require.NoError(t, contractsgatewayv1.VerifyPayloadHash(response.GetPayloadBytes(), response.GetPayloadHash()))
+	require.NoError(t, contractsgatewayv1.VerifyResponseSignature(publicKey, response.GetSignature(), contractsgatewayv1.ResponseSigningFields{
+		ProtocolVersion: response.GetProtocolVersion(),
+		RequestID:       response.GetRequestId(),
+		TimestampMS:     response.GetTimestampMs(),
+		ResultCode:      response.GetResultCode(),
+		PayloadHash:     response.GetPayloadHash(),
+	}))
+}
diff --git a/integration/go.mod b/integration/go.mod
index 7a8d48b..e7fa428 100644
--- a/integration/go.mod
+++ b/integration/go.mod
@@ -18,8 +18,8 @@ require (
 	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
 	github.com/rogpeppe/go-internal v1.14.1 // indirect
 	github.com/yuin/gopher-lua v1.1.1 // indirect
-	go.opentelemetry.io/otel v1.42.0 // indirect
-	go.opentelemetry.io/otel/sdk/metric v1.42.0 // indirect
+	go.opentelemetry.io/otel v1.43.0 // indirect
+	go.opentelemetry.io/otel/sdk/metric v1.43.0 // indirect
 	go.uber.org/atomic v1.11.0 // indirect
 	golang.org/x/net v0.52.0 // indirect
 	golang.org/x/sys v0.42.0 // indirect
diff --git a/integration/go.sum b/integration/go.sum
index 23fcc34..82496b7 100644
--- a/integration/go.sum
+++ b/integration/go.sum
@@ -34,11 +34,11 @@ github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0=
 github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA=
 go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
 go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
-go.opentelemetry.io/otel v1.42.0 h1:lSQGzTgVR3+sgJDAU/7/ZMjN9Z+vUip7leaqBKy4sho=
-go.opentelemetry.io/otel/metric v1.42.0 h1:2jXG+3oZLNXEPfNmnpxKDeZsFI5o4J+nz6xUlaFdF/4=
-go.opentelemetry.io/otel/sdk v1.42.0 h1:LyC8+jqk6UJwdrI/8VydAq/hvkFKNHZVIWuslJXYsDo=
-go.opentelemetry.io/otel/sdk/metric v1.42.0 h1:D/1QR46Clz6ajyZ3G8SgNlTJKBdGp84q9RKCAZ3YGuA=
-go.opentelemetry.io/otel/trace v1.42.0 h1:OUCgIPt+mzOnaUTpOQcBiM/PLQ/Op7oq6g4LenLmOYY=
+go.opentelemetry.io/otel v1.43.0 h1:mYIM03dnh5zfN7HautFE4ieIig9amkNANT+xcVxAj9I=
+go.opentelemetry.io/otel/metric v1.43.0 h1:d7638QeInOnuwOONPp4JAOGfbCEpYb+K6DVWvdxGzgM=
+go.opentelemetry.io/otel/sdk v1.43.0 h1:pi5mE86i5rTeLXqoF/hhiBtUNcrAGHLKQdhg4h4V9Dg=
+go.opentelemetry.io/otel/sdk/metric v1.43.0 h1:S88dyqXjJkuBNLeMcVPRFXpRw2fuwdvfCGLEo89fDkw=
+go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09nk+3A=
 go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
 go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
 golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0=
diff --git a/integration/internal/contracts/gatewayv1/contract.go b/integration/internal/contracts/gatewayv1/contract.go
index a5e6a1b..25b3bb2 100644
--- a/integration/internal/contracts/gatewayv1/contract.go
+++ b/integration/internal/contracts/gatewayv1/contract.go
@@ -38,6 +38,11 @@ var (
 	// ErrInvalidEventSignature reports that one gateway event signature is not
 	// a raw Ed25519 signature for the canonical event signing input.
 	ErrInvalidEventSignature = errors.New("invalid event signature")
+
+	// ErrInvalidResponseSignature reports that one gateway unary response
+	// signature is not a raw Ed25519 signature for the canonical response
+	// signing input.
+	ErrInvalidResponseSignature = errors.New("invalid response signature")
 )
 
 // RequestSigningFields stores the canonical public request fields bound into
@@ -85,6 +90,25 @@ type EventSigningFields struct {
 	PayloadHash []byte
 }
 
+// ResponseSigningFields stores the canonical public unary response fields
+// bound into one gateway signature input.
+type ResponseSigningFields struct {
+	// ProtocolVersion identifies the gateway transport envelope version.
+	ProtocolVersion string
+
+	// RequestID is the transport correlation identifier echoed by the gateway.
+	RequestID string
+
+	// TimestampMS carries the gateway response timestamp in milliseconds.
+	TimestampMS int64
+
+	// ResultCode stores the stable opaque gateway result code.
+	ResultCode string
+
+	// PayloadHash stores the raw SHA-256 digest of PayloadBytes.
+	PayloadHash []byte
+}
+
 // ComputePayloadHash returns the canonical raw SHA-256 digest for payloadBytes.
 func ComputePayloadHash(payloadBytes []byte) []byte {
 	sum := sha256.Sum256(payloadBytes)
@@ -154,6 +178,28 @@ func BuildEventSigningInput(fields EventSigningFields) []byte {
 	return buf
 }
 
+// BuildResponseSigningInput returns the canonical byte sequence the v1
+// gateway unary response signature covers.
+func BuildResponseSigningInput(fields ResponseSigningFields) []byte {
+	size := len("galaxy-response-v1") +
+		len(fields.ProtocolVersion) +
+		len(fields.RequestID) +
+		len(fields.ResultCode) +
+		len(fields.PayloadHash) +
+		(5 * binary.MaxVarintLen64) +
+		8
+
+	buf := make([]byte, 0, size)
+	buf = appendLengthPrefixedString(buf, "galaxy-response-v1")
+	buf = appendLengthPrefixedString(buf, fields.ProtocolVersion)
+	buf = appendLengthPrefixedString(buf, fields.RequestID)
+	buf = binary.BigEndian.AppendUint64(buf, uint64(fields.TimestampMS))
+	buf = appendLengthPrefixedString(buf, fields.ResultCode)
+	buf = appendLengthPrefixedBytes(buf, fields.PayloadHash)
+
+	return buf
+}
+
 // SignRequest returns one raw Ed25519 client signature for the canonical v1
 // request signing input.
 func SignRequest(privateKey ed25519.PrivateKey, fields RequestSigningFields) []byte {
@@ -173,6 +219,19 @@ func VerifyEventSignature(publicKey ed25519.PublicKey, signature []byte, fields
 	return nil
 }
 
+// VerifyResponseSignature reports whether signature authenticates fields under
+// publicKey using the canonical gateway unary-response signing input.
+func VerifyResponseSignature(publicKey ed25519.PublicKey, signature []byte, fields ResponseSigningFields) error {
+	if len(publicKey) != ed25519.PublicKeySize || len(signature) != ed25519.SignatureSize {
+		return ErrInvalidResponseSignature
+	}
+	if !ed25519.Verify(publicKey, BuildResponseSigningInput(fields), signature) {
+		return ErrInvalidResponseSignature
+	}
+
+	return nil
+}
+
 func appendLengthPrefixedString(dst []byte, value string) []byte {
 	return appendLengthPrefixedBytes(dst, []byte(value))
 }
diff --git a/integration/internal/contracts/userv1/contract.go b/integration/internal/contracts/userv1/contract.go
new file mode 100644
index 0000000..7147ad2
--- /dev/null
+++ b/integration/internal/contracts/userv1/contract.go
@@ -0,0 +1,61 @@
+// Package userv1contract provides public-contract helpers for the
+// authenticated gateway v1 User Service self-service message types.
+package userv1contract
+
+import (
+	usermodel "galaxy/model/user"
+	"galaxy/transcoder"
+)
+
+const (
+	// MessageTypeGetMyAccount is the authenticated gateway message type used to
+	// read the current self-service account aggregate.
+	MessageTypeGetMyAccount = usermodel.MessageTypeGetMyAccount
+
+	// MessageTypeUpdateMyProfile is the authenticated gateway message type used
+	// to mutate self-service profile fields.
+	MessageTypeUpdateMyProfile = usermodel.MessageTypeUpdateMyProfile
+
+	// MessageTypeUpdateMySettings is the authenticated gateway message type used
+	// to mutate self-service settings fields.
+	MessageTypeUpdateMySettings = usermodel.MessageTypeUpdateMySettings
+
+	// ResultCodeOK is the success result code projected by gateway for all
+	// successful `user.*` authenticated commands.
+	ResultCodeOK = "ok"
+)
+
+// EncodeGetMyAccountRequest returns the FlatBuffers payload for the public
+// empty get-account request.
+func EncodeGetMyAccountRequest() ([]byte, error) {
+	return transcoder.GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{})
+}
+
+// EncodeUpdateMyProfileRequest returns the FlatBuffers payload for one public
+// self-service profile mutation request.
+func EncodeUpdateMyProfileRequest(raceName string) ([]byte, error) {
+	return transcoder.UpdateMyProfileRequestToPayload(&usermodel.UpdateMyProfileRequest{
+		RaceName: raceName,
+	})
+}
+
+// EncodeUpdateMySettingsRequest returns the FlatBuffers payload for one public
+// self-service settings mutation request.
+func EncodeUpdateMySettingsRequest(preferredLanguage string, timeZone string) ([]byte, error) {
+	return transcoder.UpdateMySettingsRequestToPayload(&usermodel.UpdateMySettingsRequest{
+		PreferredLanguage: preferredLanguage,
+		TimeZone:          timeZone,
+	})
+}
+
+// DecodeAccountResponse decodes the public FlatBuffers success payload shared
+// by all authenticated `user.*` commands.
+func DecodeAccountResponse(payload []byte) (*usermodel.AccountResponse, error) {
+	return transcoder.PayloadToAccountResponse(payload)
+}
+
+// DecodeErrorResponse decodes the public FlatBuffers error payload shared by
+// all authenticated `user.*` commands.
+func DecodeErrorResponse(payload []byte) (*usermodel.ErrorResponse, error) {
+	return transcoder.PayloadToErrorResponse(payload)
+}
diff --git a/pkg/model/user/user.go b/pkg/model/user/user.go
new file mode 100644
index 0000000..14143c2
--- /dev/null
+++ b/pkg/model/user/user.go
@@ -0,0 +1,188 @@
+// Package user defines the public typed command and response payloads exposed
+// at the authenticated Gateway -> User self-service boundary.
+package user
+
+import "time"
+
+const (
+	// MessageTypeGetMyAccount is the authenticated gateway message type used to
+	// read the current regular-user account aggregate.
+ MessageTypeGetMyAccount = "user.account.get" + + // MessageTypeUpdateMyProfile is the authenticated gateway message type used + // to mutate self-service profile fields. + MessageTypeUpdateMyProfile = "user.profile.update" + + // MessageTypeUpdateMySettings is the authenticated gateway message type used + // to mutate self-service settings fields. + MessageTypeUpdateMySettings = "user.settings.update" +) + +// GetMyAccountRequest stores the authenticated self-service read request for +// the current regular-user account aggregate. +// +// The request body is intentionally empty because gateway derives user +// identity from the authenticated device session rather than from client +// payload fields. +type GetMyAccountRequest struct{} + +// UpdateMyProfileRequest stores the authenticated self-service profile +// mutation request. +type UpdateMyProfileRequest struct { + // RaceName stores the requested exact replacement race name. + RaceName string `json:"race_name"` +} + +// UpdateMySettingsRequest stores the authenticated self-service settings +// mutation request. +type UpdateMySettingsRequest struct { + // PreferredLanguage stores the requested BCP 47 language tag. + PreferredLanguage string `json:"preferred_language"` + + // TimeZone stores the requested IANA time-zone name. + TimeZone string `json:"time_zone"` +} + +// ActorRef stores transport-ready audit actor metadata projected by User +// Service. +type ActorRef struct { + // Type stores the machine-readable actor type. + Type string `json:"type"` + + // ID stores the optional stable actor identifier. + ID string `json:"id,omitempty"` +} + +// EntitlementSnapshot stores the transport-ready current entitlement snapshot +// of one account. +type EntitlementSnapshot struct { + // PlanCode stores the effective entitlement plan code. + PlanCode string `json:"plan_code"` + + // IsPaid reports whether the effective entitlement is currently paid. 
+ IsPaid bool `json:"is_paid"` + + // Source stores the machine-readable source that produced the snapshot. + Source string `json:"source"` + + // Actor stores the audit actor metadata attached to the current snapshot. + Actor ActorRef `json:"actor"` + + // ReasonCode stores the machine-readable reason attached to the snapshot. + ReasonCode string `json:"reason_code"` + + // StartsAt stores when the effective state started. + StartsAt time.Time `json:"starts_at"` + + // EndsAt stores the optional finite entitlement expiry. + EndsAt *time.Time `json:"ends_at,omitempty"` + + // UpdatedAt stores when the snapshot was last recomputed. + UpdatedAt time.Time `json:"updated_at"` +} + +// ActiveSanction stores one transport-ready active sanction returned in the +// shared account aggregate. +type ActiveSanction struct { + // SanctionCode stores the active sanction code. + SanctionCode string `json:"sanction_code"` + + // Scope stores the machine-readable sanction scope. + Scope string `json:"scope"` + + // ReasonCode stores the machine-readable sanction reason. + ReasonCode string `json:"reason_code"` + + // Actor stores the audit actor metadata attached to the sanction. + Actor ActorRef `json:"actor"` + + // AppliedAt stores when the sanction became active. + AppliedAt time.Time `json:"applied_at"` + + // ExpiresAt stores the optional planned sanction expiry. + ExpiresAt *time.Time `json:"expires_at,omitempty"` +} + +// ActiveLimit stores one transport-ready active user-specific limit override +// returned in the shared account aggregate. +type ActiveLimit struct { + // LimitCode stores the active limit code. + LimitCode string `json:"limit_code"` + + // Value stores the current override value. + Value int `json:"value"` + + // ReasonCode stores the machine-readable limit reason. + ReasonCode string `json:"reason_code"` + + // Actor stores the audit actor metadata attached to the limit. + Actor ActorRef `json:"actor"` + + // AppliedAt stores when the limit became active. 
+ AppliedAt time.Time `json:"applied_at"` + + // ExpiresAt stores the optional planned limit expiry. + ExpiresAt *time.Time `json:"expires_at,omitempty"` +} + +// Account stores the transport-ready account aggregate shared by User Service +// self-service read and mutation responses. +type Account struct { + // UserID stores the durable regular-user identifier. + UserID string `json:"user_id"` + + // Email stores the exact-after-trim login e-mail address. + Email string `json:"email"` + + // RaceName stores the current user-facing race name. + RaceName string `json:"race_name"` + + // PreferredLanguage stores the current BCP 47 language tag. + PreferredLanguage string `json:"preferred_language"` + + // TimeZone stores the current IANA time-zone name. + TimeZone string `json:"time_zone"` + + // DeclaredCountry stores the optional current effective declared country. + DeclaredCountry string `json:"declared_country,omitempty"` + + // Entitlement stores the current entitlement snapshot. + Entitlement EntitlementSnapshot `json:"entitlement"` + + // ActiveSanctions stores the current active sanctions sorted by code. + ActiveSanctions []ActiveSanction `json:"active_sanctions"` + + // ActiveLimits stores the current active user-specific limits sorted by + // code. + ActiveLimits []ActiveLimit `json:"active_limits"` + + // CreatedAt stores when the account was created. + CreatedAt time.Time `json:"created_at"` + + // UpdatedAt stores when the account was last mutated. + UpdatedAt time.Time `json:"updated_at"` +} + +// AccountResponse stores the success payload shared by the authenticated +// GetMyAccount, UpdateMyProfile, and UpdateMySettings gateway message types. +type AccountResponse struct { + // Account stores the current account aggregate. + Account Account `json:"account"` +} + +// ErrorBody stores the machine-readable and human-readable failure payload +// mirrored from the User Service trusted internal error envelope. 
+type ErrorBody struct { + // Code stores the stable machine-readable failure code. + Code string `json:"code"` + + // Message stores the client-safe failure message. + Message string `json:"message"` +} + +// ErrorResponse stores the error payload returned by the authenticated +// Gateway -> User boundary when User Service rejects a request semantically. +type ErrorResponse struct { + // Error stores the mirrored error envelope body. + Error ErrorBody `json:"error"` +} diff --git a/pkg/schema/fbs/user.fbs b/pkg/schema/fbs/user.fbs new file mode 100644 index 0000000..1327223 --- /dev/null +++ b/pkg/schema/fbs/user.fbs @@ -0,0 +1,78 @@ +// user contains FlatBuffers payloads used by the authenticated gateway +// self-service boundary for User Service. +namespace user; + +table GetMyAccountRequest { +} + +table UpdateMyProfileRequest { + race_name:string; +} + +table UpdateMySettingsRequest { + preferred_language:string; + time_zone:string; +} + +table ActorRef { + type:string; + id:string; +} + +table EntitlementSnapshot { + plan_code:string; + is_paid:bool; + source:string; + actor:ActorRef; + reason_code:string; + starts_at_ms:int64; + ends_at_ms:int64; + updated_at_ms:int64; +} + +table ActiveSanction { + sanction_code:string; + scope:string; + reason_code:string; + actor:ActorRef; + applied_at_ms:int64; + expires_at_ms:int64; +} + +table ActiveLimit { + limit_code:string; + value:int64; + reason_code:string; + actor:ActorRef; + applied_at_ms:int64; + expires_at_ms:int64; +} + +table AccountView { + user_id:string; + email:string; + race_name:string; + preferred_language:string; + time_zone:string; + declared_country:string; + entitlement:EntitlementSnapshot; + active_sanctions:[ActiveSanction]; + active_limits:[ActiveLimit]; + created_at_ms:int64; + updated_at_ms:int64; +} + +table AccountResponse { + account:AccountView; +} + +table ErrorBody { + code:string; + message:string; +} + +table ErrorResponse { + error:ErrorBody; +} + +root_type AccountResponse; diff 
--git a/pkg/schema/fbs/user/AccountResponse.go b/pkg/schema/fbs/user/AccountResponse.go new file mode 100644 index 0000000..43c78db --- /dev/null +++ b/pkg/schema/fbs/user/AccountResponse.go @@ -0,0 +1,65 @@ +// Code generated by the FlatBuffers compiler. DO NOT EDIT. + +package user + +import ( + flatbuffers "github.com/google/flatbuffers/go" +) + +type AccountResponse struct { + _tab flatbuffers.Table +} + +func GetRootAsAccountResponse(buf []byte, offset flatbuffers.UOffsetT) *AccountResponse { + n := flatbuffers.GetUOffsetT(buf[offset:]) + x := &AccountResponse{} + x.Init(buf, n+offset) + return x +} + +func FinishAccountResponseBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.Finish(offset) +} + +func GetSizePrefixedRootAsAccountResponse(buf []byte, offset flatbuffers.UOffsetT) *AccountResponse { + n := flatbuffers.GetUOffsetT(buf[offset+flatbuffers.SizeUint32:]) + x := &AccountResponse{} + x.Init(buf, n+offset+flatbuffers.SizeUint32) + return x +} + +func FinishSizePrefixedAccountResponseBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.FinishSizePrefixed(offset) +} + +func (rcv *AccountResponse) Init(buf []byte, i flatbuffers.UOffsetT) { + rcv._tab.Bytes = buf + rcv._tab.Pos = i +} + +func (rcv *AccountResponse) Table() flatbuffers.Table { + return rcv._tab +} + +func (rcv *AccountResponse) Account(obj *AccountView) *AccountView { + o := flatbuffers.UOffsetT(rcv._tab.Offset(4)) + if o != 0 { + x := rcv._tab.Indirect(o + rcv._tab.Pos) + if obj == nil { + obj = new(AccountView) + } + obj.Init(rcv._tab.Bytes, x) + return obj + } + return nil +} + +func AccountResponseStart(builder *flatbuffers.Builder) { + builder.StartObject(1) +} +func AccountResponseAddAccount(builder *flatbuffers.Builder, account flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(0, flatbuffers.UOffsetT(account), 0) +} +func AccountResponseEnd(builder *flatbuffers.Builder) flatbuffers.UOffsetT { + return builder.EndObject() +} diff 
--git a/pkg/schema/fbs/user/AccountView.go b/pkg/schema/fbs/user/AccountView.go new file mode 100644 index 0000000..2437ed2 --- /dev/null +++ b/pkg/schema/fbs/user/AccountView.go @@ -0,0 +1,213 @@ +// Code generated by the FlatBuffers compiler. DO NOT EDIT. + +package user + +import ( + flatbuffers "github.com/google/flatbuffers/go" +) + +type AccountView struct { + _tab flatbuffers.Table +} + +func GetRootAsAccountView(buf []byte, offset flatbuffers.UOffsetT) *AccountView { + n := flatbuffers.GetUOffsetT(buf[offset:]) + x := &AccountView{} + x.Init(buf, n+offset) + return x +} + +func FinishAccountViewBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.Finish(offset) +} + +func GetSizePrefixedRootAsAccountView(buf []byte, offset flatbuffers.UOffsetT) *AccountView { + n := flatbuffers.GetUOffsetT(buf[offset+flatbuffers.SizeUint32:]) + x := &AccountView{} + x.Init(buf, n+offset+flatbuffers.SizeUint32) + return x +} + +func FinishSizePrefixedAccountViewBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.FinishSizePrefixed(offset) +} + +func (rcv *AccountView) Init(buf []byte, i flatbuffers.UOffsetT) { + rcv._tab.Bytes = buf + rcv._tab.Pos = i +} + +func (rcv *AccountView) Table() flatbuffers.Table { + return rcv._tab +} + +func (rcv *AccountView) UserId() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(4)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *AccountView) Email() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(6)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *AccountView) RaceName() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(8)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *AccountView) PreferredLanguage() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(10)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} 
+ +func (rcv *AccountView) TimeZone() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(12)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *AccountView) DeclaredCountry() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(14)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *AccountView) Entitlement(obj *EntitlementSnapshot) *EntitlementSnapshot { + o := flatbuffers.UOffsetT(rcv._tab.Offset(16)) + if o != 0 { + x := rcv._tab.Indirect(o + rcv._tab.Pos) + if obj == nil { + obj = new(EntitlementSnapshot) + } + obj.Init(rcv._tab.Bytes, x) + return obj + } + return nil +} + +func (rcv *AccountView) ActiveSanctions(obj *ActiveSanction, j int) bool { + o := flatbuffers.UOffsetT(rcv._tab.Offset(18)) + if o != 0 { + x := rcv._tab.Vector(o) + x += flatbuffers.UOffsetT(j) * 4 + x = rcv._tab.Indirect(x) + obj.Init(rcv._tab.Bytes, x) + return true + } + return false +} + +func (rcv *AccountView) ActiveSanctionsLength() int { + o := flatbuffers.UOffsetT(rcv._tab.Offset(18)) + if o != 0 { + return rcv._tab.VectorLen(o) + } + return 0 +} + +func (rcv *AccountView) ActiveLimits(obj *ActiveLimit, j int) bool { + o := flatbuffers.UOffsetT(rcv._tab.Offset(20)) + if o != 0 { + x := rcv._tab.Vector(o) + x += flatbuffers.UOffsetT(j) * 4 + x = rcv._tab.Indirect(x) + obj.Init(rcv._tab.Bytes, x) + return true + } + return false +} + +func (rcv *AccountView) ActiveLimitsLength() int { + o := flatbuffers.UOffsetT(rcv._tab.Offset(20)) + if o != 0 { + return rcv._tab.VectorLen(o) + } + return 0 +} + +func (rcv *AccountView) CreatedAtMs() int64 { + o := flatbuffers.UOffsetT(rcv._tab.Offset(22)) + if o != 0 { + return rcv._tab.GetInt64(o + rcv._tab.Pos) + } + return 0 +} + +func (rcv *AccountView) MutateCreatedAtMs(n int64) bool { + return rcv._tab.MutateInt64Slot(22, n) +} + +func (rcv *AccountView) UpdatedAtMs() int64 { + o := flatbuffers.UOffsetT(rcv._tab.Offset(24)) + if o != 0 { + return 
rcv._tab.GetInt64(o + rcv._tab.Pos) + } + return 0 +} + +func (rcv *AccountView) MutateUpdatedAtMs(n int64) bool { + return rcv._tab.MutateInt64Slot(24, n) +} + +func AccountViewStart(builder *flatbuffers.Builder) { + builder.StartObject(11) +} +func AccountViewAddUserId(builder *flatbuffers.Builder, userId flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(0, flatbuffers.UOffsetT(userId), 0) +} +func AccountViewAddEmail(builder *flatbuffers.Builder, email flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(1, flatbuffers.UOffsetT(email), 0) +} +func AccountViewAddRaceName(builder *flatbuffers.Builder, raceName flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(2, flatbuffers.UOffsetT(raceName), 0) +} +func AccountViewAddPreferredLanguage(builder *flatbuffers.Builder, preferredLanguage flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(3, flatbuffers.UOffsetT(preferredLanguage), 0) +} +func AccountViewAddTimeZone(builder *flatbuffers.Builder, timeZone flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(4, flatbuffers.UOffsetT(timeZone), 0) +} +func AccountViewAddDeclaredCountry(builder *flatbuffers.Builder, declaredCountry flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(5, flatbuffers.UOffsetT(declaredCountry), 0) +} +func AccountViewAddEntitlement(builder *flatbuffers.Builder, entitlement flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(6, flatbuffers.UOffsetT(entitlement), 0) +} +func AccountViewAddActiveSanctions(builder *flatbuffers.Builder, activeSanctions flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(7, flatbuffers.UOffsetT(activeSanctions), 0) +} +func AccountViewStartActiveSanctionsVector(builder *flatbuffers.Builder, numElems int) flatbuffers.UOffsetT { + return builder.StartVector(4, numElems, 4) +} +func AccountViewAddActiveLimits(builder *flatbuffers.Builder, activeLimits flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(8, flatbuffers.UOffsetT(activeLimits), 0) +} +func AccountViewStartActiveLimitsVector(builder 
*flatbuffers.Builder, numElems int) flatbuffers.UOffsetT { + return builder.StartVector(4, numElems, 4) +} +func AccountViewAddCreatedAtMs(builder *flatbuffers.Builder, createdAtMs int64) { + builder.PrependInt64Slot(9, createdAtMs, 0) +} +func AccountViewAddUpdatedAtMs(builder *flatbuffers.Builder, updatedAtMs int64) { + builder.PrependInt64Slot(10, updatedAtMs, 0) +} +func AccountViewEnd(builder *flatbuffers.Builder) flatbuffers.UOffsetT { + return builder.EndObject() +} diff --git a/pkg/schema/fbs/user/ActiveLimit.go b/pkg/schema/fbs/user/ActiveLimit.go new file mode 100644 index 0000000..2b91435 --- /dev/null +++ b/pkg/schema/fbs/user/ActiveLimit.go @@ -0,0 +1,132 @@ +// Code generated by the FlatBuffers compiler. DO NOT EDIT. + +package user + +import ( + flatbuffers "github.com/google/flatbuffers/go" +) + +type ActiveLimit struct { + _tab flatbuffers.Table +} + +func GetRootAsActiveLimit(buf []byte, offset flatbuffers.UOffsetT) *ActiveLimit { + n := flatbuffers.GetUOffsetT(buf[offset:]) + x := &ActiveLimit{} + x.Init(buf, n+offset) + return x +} + +func FinishActiveLimitBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.Finish(offset) +} + +func GetSizePrefixedRootAsActiveLimit(buf []byte, offset flatbuffers.UOffsetT) *ActiveLimit { + n := flatbuffers.GetUOffsetT(buf[offset+flatbuffers.SizeUint32:]) + x := &ActiveLimit{} + x.Init(buf, n+offset+flatbuffers.SizeUint32) + return x +} + +func FinishSizePrefixedActiveLimitBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.FinishSizePrefixed(offset) +} + +func (rcv *ActiveLimit) Init(buf []byte, i flatbuffers.UOffsetT) { + rcv._tab.Bytes = buf + rcv._tab.Pos = i +} + +func (rcv *ActiveLimit) Table() flatbuffers.Table { + return rcv._tab +} + +func (rcv *ActiveLimit) LimitCode() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(4)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *ActiveLimit) Value() int64 { + o := 
flatbuffers.UOffsetT(rcv._tab.Offset(6)) + if o != 0 { + return rcv._tab.GetInt64(o + rcv._tab.Pos) + } + return 0 +} + +func (rcv *ActiveLimit) MutateValue(n int64) bool { + return rcv._tab.MutateInt64Slot(6, n) +} + +func (rcv *ActiveLimit) ReasonCode() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(8)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *ActiveLimit) Actor(obj *ActorRef) *ActorRef { + o := flatbuffers.UOffsetT(rcv._tab.Offset(10)) + if o != 0 { + x := rcv._tab.Indirect(o + rcv._tab.Pos) + if obj == nil { + obj = new(ActorRef) + } + obj.Init(rcv._tab.Bytes, x) + return obj + } + return nil +} + +func (rcv *ActiveLimit) AppliedAtMs() int64 { + o := flatbuffers.UOffsetT(rcv._tab.Offset(12)) + if o != 0 { + return rcv._tab.GetInt64(o + rcv._tab.Pos) + } + return 0 +} + +func (rcv *ActiveLimit) MutateAppliedAtMs(n int64) bool { + return rcv._tab.MutateInt64Slot(12, n) +} + +func (rcv *ActiveLimit) ExpiresAtMs() int64 { + o := flatbuffers.UOffsetT(rcv._tab.Offset(14)) + if o != 0 { + return rcv._tab.GetInt64(o + rcv._tab.Pos) + } + return 0 +} + +func (rcv *ActiveLimit) MutateExpiresAtMs(n int64) bool { + return rcv._tab.MutateInt64Slot(14, n) +} + +func ActiveLimitStart(builder *flatbuffers.Builder) { + builder.StartObject(6) +} +func ActiveLimitAddLimitCode(builder *flatbuffers.Builder, limitCode flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(0, flatbuffers.UOffsetT(limitCode), 0) +} +func ActiveLimitAddValue(builder *flatbuffers.Builder, value int64) { + builder.PrependInt64Slot(1, value, 0) +} +func ActiveLimitAddReasonCode(builder *flatbuffers.Builder, reasonCode flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(2, flatbuffers.UOffsetT(reasonCode), 0) +} +func ActiveLimitAddActor(builder *flatbuffers.Builder, actor flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(3, flatbuffers.UOffsetT(actor), 0) +} +func ActiveLimitAddAppliedAtMs(builder *flatbuffers.Builder, appliedAtMs int64) { + 
builder.PrependInt64Slot(4, appliedAtMs, 0) +} +func ActiveLimitAddExpiresAtMs(builder *flatbuffers.Builder, expiresAtMs int64) { + builder.PrependInt64Slot(5, expiresAtMs, 0) +} +func ActiveLimitEnd(builder *flatbuffers.Builder) flatbuffers.UOffsetT { + return builder.EndObject() +} diff --git a/pkg/schema/fbs/user/ActiveSanction.go b/pkg/schema/fbs/user/ActiveSanction.go new file mode 100644 index 0000000..01b95e8 --- /dev/null +++ b/pkg/schema/fbs/user/ActiveSanction.go @@ -0,0 +1,128 @@ +// Code generated by the FlatBuffers compiler. DO NOT EDIT. + +package user + +import ( + flatbuffers "github.com/google/flatbuffers/go" +) + +type ActiveSanction struct { + _tab flatbuffers.Table +} + +func GetRootAsActiveSanction(buf []byte, offset flatbuffers.UOffsetT) *ActiveSanction { + n := flatbuffers.GetUOffsetT(buf[offset:]) + x := &ActiveSanction{} + x.Init(buf, n+offset) + return x +} + +func FinishActiveSanctionBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.Finish(offset) +} + +func GetSizePrefixedRootAsActiveSanction(buf []byte, offset flatbuffers.UOffsetT) *ActiveSanction { + n := flatbuffers.GetUOffsetT(buf[offset+flatbuffers.SizeUint32:]) + x := &ActiveSanction{} + x.Init(buf, n+offset+flatbuffers.SizeUint32) + return x +} + +func FinishSizePrefixedActiveSanctionBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.FinishSizePrefixed(offset) +} + +func (rcv *ActiveSanction) Init(buf []byte, i flatbuffers.UOffsetT) { + rcv._tab.Bytes = buf + rcv._tab.Pos = i +} + +func (rcv *ActiveSanction) Table() flatbuffers.Table { + return rcv._tab +} + +func (rcv *ActiveSanction) SanctionCode() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(4)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *ActiveSanction) Scope() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(6)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv 
*ActiveSanction) ReasonCode() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(8)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *ActiveSanction) Actor(obj *ActorRef) *ActorRef { + o := flatbuffers.UOffsetT(rcv._tab.Offset(10)) + if o != 0 { + x := rcv._tab.Indirect(o + rcv._tab.Pos) + if obj == nil { + obj = new(ActorRef) + } + obj.Init(rcv._tab.Bytes, x) + return obj + } + return nil +} + +func (rcv *ActiveSanction) AppliedAtMs() int64 { + o := flatbuffers.UOffsetT(rcv._tab.Offset(12)) + if o != 0 { + return rcv._tab.GetInt64(o + rcv._tab.Pos) + } + return 0 +} + +func (rcv *ActiveSanction) MutateAppliedAtMs(n int64) bool { + return rcv._tab.MutateInt64Slot(12, n) +} + +func (rcv *ActiveSanction) ExpiresAtMs() int64 { + o := flatbuffers.UOffsetT(rcv._tab.Offset(14)) + if o != 0 { + return rcv._tab.GetInt64(o + rcv._tab.Pos) + } + return 0 +} + +func (rcv *ActiveSanction) MutateExpiresAtMs(n int64) bool { + return rcv._tab.MutateInt64Slot(14, n) +} + +func ActiveSanctionStart(builder *flatbuffers.Builder) { + builder.StartObject(6) +} +func ActiveSanctionAddSanctionCode(builder *flatbuffers.Builder, sanctionCode flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(0, flatbuffers.UOffsetT(sanctionCode), 0) +} +func ActiveSanctionAddScope(builder *flatbuffers.Builder, scope flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(1, flatbuffers.UOffsetT(scope), 0) +} +func ActiveSanctionAddReasonCode(builder *flatbuffers.Builder, reasonCode flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(2, flatbuffers.UOffsetT(reasonCode), 0) +} +func ActiveSanctionAddActor(builder *flatbuffers.Builder, actor flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(3, flatbuffers.UOffsetT(actor), 0) +} +func ActiveSanctionAddAppliedAtMs(builder *flatbuffers.Builder, appliedAtMs int64) { + builder.PrependInt64Slot(4, appliedAtMs, 0) +} +func ActiveSanctionAddExpiresAtMs(builder *flatbuffers.Builder, expiresAtMs int64) { + 
builder.PrependInt64Slot(5, expiresAtMs, 0) +} +func ActiveSanctionEnd(builder *flatbuffers.Builder) flatbuffers.UOffsetT { + return builder.EndObject() +} diff --git a/pkg/schema/fbs/user/ActorRef.go b/pkg/schema/fbs/user/ActorRef.go new file mode 100644 index 0000000..db6b4fe --- /dev/null +++ b/pkg/schema/fbs/user/ActorRef.go @@ -0,0 +1,71 @@ +// Code generated by the FlatBuffers compiler. DO NOT EDIT. + +package user + +import ( + flatbuffers "github.com/google/flatbuffers/go" +) + +type ActorRef struct { + _tab flatbuffers.Table +} + +func GetRootAsActorRef(buf []byte, offset flatbuffers.UOffsetT) *ActorRef { + n := flatbuffers.GetUOffsetT(buf[offset:]) + x := &ActorRef{} + x.Init(buf, n+offset) + return x +} + +func FinishActorRefBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.Finish(offset) +} + +func GetSizePrefixedRootAsActorRef(buf []byte, offset flatbuffers.UOffsetT) *ActorRef { + n := flatbuffers.GetUOffsetT(buf[offset+flatbuffers.SizeUint32:]) + x := &ActorRef{} + x.Init(buf, n+offset+flatbuffers.SizeUint32) + return x +} + +func FinishSizePrefixedActorRefBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.FinishSizePrefixed(offset) +} + +func (rcv *ActorRef) Init(buf []byte, i flatbuffers.UOffsetT) { + rcv._tab.Bytes = buf + rcv._tab.Pos = i +} + +func (rcv *ActorRef) Table() flatbuffers.Table { + return rcv._tab +} + +func (rcv *ActorRef) Type() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(4)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *ActorRef) Id() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(6)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func ActorRefStart(builder *flatbuffers.Builder) { + builder.StartObject(2) +} +func ActorRefAddType(builder *flatbuffers.Builder, type_ flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(0, flatbuffers.UOffsetT(type_), 0) +} +func 
ActorRefAddId(builder *flatbuffers.Builder, id flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(1, flatbuffers.UOffsetT(id), 0) +} +func ActorRefEnd(builder *flatbuffers.Builder) flatbuffers.UOffsetT { + return builder.EndObject() +} diff --git a/pkg/schema/fbs/user/EntitlementSnapshot.go b/pkg/schema/fbs/user/EntitlementSnapshot.go new file mode 100644 index 0000000..eb87691 --- /dev/null +++ b/pkg/schema/fbs/user/EntitlementSnapshot.go @@ -0,0 +1,158 @@ +// Code generated by the FlatBuffers compiler. DO NOT EDIT. + +package user + +import ( + flatbuffers "github.com/google/flatbuffers/go" +) + +type EntitlementSnapshot struct { + _tab flatbuffers.Table +} + +func GetRootAsEntitlementSnapshot(buf []byte, offset flatbuffers.UOffsetT) *EntitlementSnapshot { + n := flatbuffers.GetUOffsetT(buf[offset:]) + x := &EntitlementSnapshot{} + x.Init(buf, n+offset) + return x +} + +func FinishEntitlementSnapshotBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.Finish(offset) +} + +func GetSizePrefixedRootAsEntitlementSnapshot(buf []byte, offset flatbuffers.UOffsetT) *EntitlementSnapshot { + n := flatbuffers.GetUOffsetT(buf[offset+flatbuffers.SizeUint32:]) + x := &EntitlementSnapshot{} + x.Init(buf, n+offset+flatbuffers.SizeUint32) + return x +} + +func FinishSizePrefixedEntitlementSnapshotBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.FinishSizePrefixed(offset) +} + +func (rcv *EntitlementSnapshot) Init(buf []byte, i flatbuffers.UOffsetT) { + rcv._tab.Bytes = buf + rcv._tab.Pos = i +} + +func (rcv *EntitlementSnapshot) Table() flatbuffers.Table { + return rcv._tab +} + +func (rcv *EntitlementSnapshot) PlanCode() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(4)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *EntitlementSnapshot) IsPaid() bool { + o := flatbuffers.UOffsetT(rcv._tab.Offset(6)) + if o != 0 { + return rcv._tab.GetBool(o + rcv._tab.Pos) + } + return 
false +} + +func (rcv *EntitlementSnapshot) MutateIsPaid(n bool) bool { + return rcv._tab.MutateBoolSlot(6, n) +} + +func (rcv *EntitlementSnapshot) Source() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(8)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *EntitlementSnapshot) Actor(obj *ActorRef) *ActorRef { + o := flatbuffers.UOffsetT(rcv._tab.Offset(10)) + if o != 0 { + x := rcv._tab.Indirect(o + rcv._tab.Pos) + if obj == nil { + obj = new(ActorRef) + } + obj.Init(rcv._tab.Bytes, x) + return obj + } + return nil +} + +func (rcv *EntitlementSnapshot) ReasonCode() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(12)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *EntitlementSnapshot) StartsAtMs() int64 { + o := flatbuffers.UOffsetT(rcv._tab.Offset(14)) + if o != 0 { + return rcv._tab.GetInt64(o + rcv._tab.Pos) + } + return 0 +} + +func (rcv *EntitlementSnapshot) MutateStartsAtMs(n int64) bool { + return rcv._tab.MutateInt64Slot(14, n) +} + +func (rcv *EntitlementSnapshot) EndsAtMs() int64 { + o := flatbuffers.UOffsetT(rcv._tab.Offset(16)) + if o != 0 { + return rcv._tab.GetInt64(o + rcv._tab.Pos) + } + return 0 +} + +func (rcv *EntitlementSnapshot) MutateEndsAtMs(n int64) bool { + return rcv._tab.MutateInt64Slot(16, n) +} + +func (rcv *EntitlementSnapshot) UpdatedAtMs() int64 { + o := flatbuffers.UOffsetT(rcv._tab.Offset(18)) + if o != 0 { + return rcv._tab.GetInt64(o + rcv._tab.Pos) + } + return 0 +} + +func (rcv *EntitlementSnapshot) MutateUpdatedAtMs(n int64) bool { + return rcv._tab.MutateInt64Slot(18, n) +} + +func EntitlementSnapshotStart(builder *flatbuffers.Builder) { + builder.StartObject(8) +} +func EntitlementSnapshotAddPlanCode(builder *flatbuffers.Builder, planCode flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(0, flatbuffers.UOffsetT(planCode), 0) +} +func EntitlementSnapshotAddIsPaid(builder *flatbuffers.Builder, isPaid bool) { + 
builder.PrependBoolSlot(1, isPaid, false) +} +func EntitlementSnapshotAddSource(builder *flatbuffers.Builder, source flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(2, flatbuffers.UOffsetT(source), 0) +} +func EntitlementSnapshotAddActor(builder *flatbuffers.Builder, actor flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(3, flatbuffers.UOffsetT(actor), 0) +} +func EntitlementSnapshotAddReasonCode(builder *flatbuffers.Builder, reasonCode flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(4, flatbuffers.UOffsetT(reasonCode), 0) +} +func EntitlementSnapshotAddStartsAtMs(builder *flatbuffers.Builder, startsAtMs int64) { + builder.PrependInt64Slot(5, startsAtMs, 0) +} +func EntitlementSnapshotAddEndsAtMs(builder *flatbuffers.Builder, endsAtMs int64) { + builder.PrependInt64Slot(6, endsAtMs, 0) +} +func EntitlementSnapshotAddUpdatedAtMs(builder *flatbuffers.Builder, updatedAtMs int64) { + builder.PrependInt64Slot(7, updatedAtMs, 0) +} +func EntitlementSnapshotEnd(builder *flatbuffers.Builder) flatbuffers.UOffsetT { + return builder.EndObject() +} diff --git a/pkg/schema/fbs/user/ErrorBody.go b/pkg/schema/fbs/user/ErrorBody.go new file mode 100644 index 0000000..e168277 --- /dev/null +++ b/pkg/schema/fbs/user/ErrorBody.go @@ -0,0 +1,71 @@ +// Code generated by the FlatBuffers compiler. DO NOT EDIT. 
+ +package user + +import ( + flatbuffers "github.com/google/flatbuffers/go" +) + +type ErrorBody struct { + _tab flatbuffers.Table +} + +func GetRootAsErrorBody(buf []byte, offset flatbuffers.UOffsetT) *ErrorBody { + n := flatbuffers.GetUOffsetT(buf[offset:]) + x := &ErrorBody{} + x.Init(buf, n+offset) + return x +} + +func FinishErrorBodyBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.Finish(offset) +} + +func GetSizePrefixedRootAsErrorBody(buf []byte, offset flatbuffers.UOffsetT) *ErrorBody { + n := flatbuffers.GetUOffsetT(buf[offset+flatbuffers.SizeUint32:]) + x := &ErrorBody{} + x.Init(buf, n+offset+flatbuffers.SizeUint32) + return x +} + +func FinishSizePrefixedErrorBodyBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.FinishSizePrefixed(offset) +} + +func (rcv *ErrorBody) Init(buf []byte, i flatbuffers.UOffsetT) { + rcv._tab.Bytes = buf + rcv._tab.Pos = i +} + +func (rcv *ErrorBody) Table() flatbuffers.Table { + return rcv._tab +} + +func (rcv *ErrorBody) Code() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(4)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *ErrorBody) Message() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(6)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func ErrorBodyStart(builder *flatbuffers.Builder) { + builder.StartObject(2) +} +func ErrorBodyAddCode(builder *flatbuffers.Builder, code flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(0, flatbuffers.UOffsetT(code), 0) +} +func ErrorBodyAddMessage(builder *flatbuffers.Builder, message flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(1, flatbuffers.UOffsetT(message), 0) +} +func ErrorBodyEnd(builder *flatbuffers.Builder) flatbuffers.UOffsetT { + return builder.EndObject() +} diff --git a/pkg/schema/fbs/user/ErrorResponse.go b/pkg/schema/fbs/user/ErrorResponse.go new file mode 100644 index 0000000..30972ba --- /dev/null +++ 
b/pkg/schema/fbs/user/ErrorResponse.go @@ -0,0 +1,65 @@ +// Code generated by the FlatBuffers compiler. DO NOT EDIT. + +package user + +import ( + flatbuffers "github.com/google/flatbuffers/go" +) + +type ErrorResponse struct { + _tab flatbuffers.Table +} + +func GetRootAsErrorResponse(buf []byte, offset flatbuffers.UOffsetT) *ErrorResponse { + n := flatbuffers.GetUOffsetT(buf[offset:]) + x := &ErrorResponse{} + x.Init(buf, n+offset) + return x +} + +func FinishErrorResponseBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.Finish(offset) +} + +func GetSizePrefixedRootAsErrorResponse(buf []byte, offset flatbuffers.UOffsetT) *ErrorResponse { + n := flatbuffers.GetUOffsetT(buf[offset+flatbuffers.SizeUint32:]) + x := &ErrorResponse{} + x.Init(buf, n+offset+flatbuffers.SizeUint32) + return x +} + +func FinishSizePrefixedErrorResponseBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.FinishSizePrefixed(offset) +} + +func (rcv *ErrorResponse) Init(buf []byte, i flatbuffers.UOffsetT) { + rcv._tab.Bytes = buf + rcv._tab.Pos = i +} + +func (rcv *ErrorResponse) Table() flatbuffers.Table { + return rcv._tab +} + +func (rcv *ErrorResponse) Error(obj *ErrorBody) *ErrorBody { + o := flatbuffers.UOffsetT(rcv._tab.Offset(4)) + if o != 0 { + x := rcv._tab.Indirect(o + rcv._tab.Pos) + if obj == nil { + obj = new(ErrorBody) + } + obj.Init(rcv._tab.Bytes, x) + return obj + } + return nil +} + +func ErrorResponseStart(builder *flatbuffers.Builder) { + builder.StartObject(1) +} +func ErrorResponseAddError(builder *flatbuffers.Builder, error flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(0, flatbuffers.UOffsetT(error), 0) +} +func ErrorResponseEnd(builder *flatbuffers.Builder) flatbuffers.UOffsetT { + return builder.EndObject() +} diff --git a/pkg/schema/fbs/user/GetMyAccountRequest.go b/pkg/schema/fbs/user/GetMyAccountRequest.go new file mode 100644 index 0000000..76fc669 --- /dev/null +++ 
b/pkg/schema/fbs/user/GetMyAccountRequest.go @@ -0,0 +1,49 @@ +// Code generated by the FlatBuffers compiler. DO NOT EDIT. + +package user + +import ( + flatbuffers "github.com/google/flatbuffers/go" +) + +type GetMyAccountRequest struct { + _tab flatbuffers.Table +} + +func GetRootAsGetMyAccountRequest(buf []byte, offset flatbuffers.UOffsetT) *GetMyAccountRequest { + n := flatbuffers.GetUOffsetT(buf[offset:]) + x := &GetMyAccountRequest{} + x.Init(buf, n+offset) + return x +} + +func FinishGetMyAccountRequestBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.Finish(offset) +} + +func GetSizePrefixedRootAsGetMyAccountRequest(buf []byte, offset flatbuffers.UOffsetT) *GetMyAccountRequest { + n := flatbuffers.GetUOffsetT(buf[offset+flatbuffers.SizeUint32:]) + x := &GetMyAccountRequest{} + x.Init(buf, n+offset+flatbuffers.SizeUint32) + return x +} + +func FinishSizePrefixedGetMyAccountRequestBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.FinishSizePrefixed(offset) +} + +func (rcv *GetMyAccountRequest) Init(buf []byte, i flatbuffers.UOffsetT) { + rcv._tab.Bytes = buf + rcv._tab.Pos = i +} + +func (rcv *GetMyAccountRequest) Table() flatbuffers.Table { + return rcv._tab +} + +func GetMyAccountRequestStart(builder *flatbuffers.Builder) { + builder.StartObject(0) +} +func GetMyAccountRequestEnd(builder *flatbuffers.Builder) flatbuffers.UOffsetT { + return builder.EndObject() +} diff --git a/pkg/schema/fbs/user/UpdateMyProfileRequest.go b/pkg/schema/fbs/user/UpdateMyProfileRequest.go new file mode 100644 index 0000000..2cf7d2f --- /dev/null +++ b/pkg/schema/fbs/user/UpdateMyProfileRequest.go @@ -0,0 +1,60 @@ +// Code generated by the FlatBuffers compiler. DO NOT EDIT. 
+ +package user + +import ( + flatbuffers "github.com/google/flatbuffers/go" +) + +type UpdateMyProfileRequest struct { + _tab flatbuffers.Table +} + +func GetRootAsUpdateMyProfileRequest(buf []byte, offset flatbuffers.UOffsetT) *UpdateMyProfileRequest { + n := flatbuffers.GetUOffsetT(buf[offset:]) + x := &UpdateMyProfileRequest{} + x.Init(buf, n+offset) + return x +} + +func FinishUpdateMyProfileRequestBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.Finish(offset) +} + +func GetSizePrefixedRootAsUpdateMyProfileRequest(buf []byte, offset flatbuffers.UOffsetT) *UpdateMyProfileRequest { + n := flatbuffers.GetUOffsetT(buf[offset+flatbuffers.SizeUint32:]) + x := &UpdateMyProfileRequest{} + x.Init(buf, n+offset+flatbuffers.SizeUint32) + return x +} + +func FinishSizePrefixedUpdateMyProfileRequestBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.FinishSizePrefixed(offset) +} + +func (rcv *UpdateMyProfileRequest) Init(buf []byte, i flatbuffers.UOffsetT) { + rcv._tab.Bytes = buf + rcv._tab.Pos = i +} + +func (rcv *UpdateMyProfileRequest) Table() flatbuffers.Table { + return rcv._tab +} + +func (rcv *UpdateMyProfileRequest) RaceName() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(4)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func UpdateMyProfileRequestStart(builder *flatbuffers.Builder) { + builder.StartObject(1) +} +func UpdateMyProfileRequestAddRaceName(builder *flatbuffers.Builder, raceName flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(0, flatbuffers.UOffsetT(raceName), 0) +} +func UpdateMyProfileRequestEnd(builder *flatbuffers.Builder) flatbuffers.UOffsetT { + return builder.EndObject() +} diff --git a/pkg/schema/fbs/user/UpdateMySettingsRequest.go b/pkg/schema/fbs/user/UpdateMySettingsRequest.go new file mode 100644 index 0000000..d1c846b --- /dev/null +++ b/pkg/schema/fbs/user/UpdateMySettingsRequest.go @@ -0,0 +1,71 @@ +// Code generated by the 
FlatBuffers compiler. DO NOT EDIT. + +package user + +import ( + flatbuffers "github.com/google/flatbuffers/go" +) + +type UpdateMySettingsRequest struct { + _tab flatbuffers.Table +} + +func GetRootAsUpdateMySettingsRequest(buf []byte, offset flatbuffers.UOffsetT) *UpdateMySettingsRequest { + n := flatbuffers.GetUOffsetT(buf[offset:]) + x := &UpdateMySettingsRequest{} + x.Init(buf, n+offset) + return x +} + +func FinishUpdateMySettingsRequestBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.Finish(offset) +} + +func GetSizePrefixedRootAsUpdateMySettingsRequest(buf []byte, offset flatbuffers.UOffsetT) *UpdateMySettingsRequest { + n := flatbuffers.GetUOffsetT(buf[offset+flatbuffers.SizeUint32:]) + x := &UpdateMySettingsRequest{} + x.Init(buf, n+offset+flatbuffers.SizeUint32) + return x +} + +func FinishSizePrefixedUpdateMySettingsRequestBuffer(builder *flatbuffers.Builder, offset flatbuffers.UOffsetT) { + builder.FinishSizePrefixed(offset) +} + +func (rcv *UpdateMySettingsRequest) Init(buf []byte, i flatbuffers.UOffsetT) { + rcv._tab.Bytes = buf + rcv._tab.Pos = i +} + +func (rcv *UpdateMySettingsRequest) Table() flatbuffers.Table { + return rcv._tab +} + +func (rcv *UpdateMySettingsRequest) PreferredLanguage() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(4)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func (rcv *UpdateMySettingsRequest) TimeZone() []byte { + o := flatbuffers.UOffsetT(rcv._tab.Offset(6)) + if o != 0 { + return rcv._tab.ByteVector(o + rcv._tab.Pos) + } + return nil +} + +func UpdateMySettingsRequestStart(builder *flatbuffers.Builder) { + builder.StartObject(2) +} +func UpdateMySettingsRequestAddPreferredLanguage(builder *flatbuffers.Builder, preferredLanguage flatbuffers.UOffsetT) { + builder.PrependUOffsetTSlot(0, flatbuffers.UOffsetT(preferredLanguage), 0) +} +func UpdateMySettingsRequestAddTimeZone(builder *flatbuffers.Builder, timeZone flatbuffers.UOffsetT) { + 
builder.PrependUOffsetTSlot(1, flatbuffers.UOffsetT(timeZone), 0) +} +func UpdateMySettingsRequestEnd(builder *flatbuffers.Builder) flatbuffers.UOffsetT { + return builder.EndObject() +} diff --git a/pkg/transcoder/user.go b/pkg/transcoder/user.go new file mode 100644 index 0000000..a6bb0c2 --- /dev/null +++ b/pkg/transcoder/user.go @@ -0,0 +1,504 @@ +package transcoder + +import ( + "errors" + "fmt" + "time" + + usermodel "galaxy/model/user" + userfbs "galaxy/schema/fbs/user" + + flatbuffers "github.com/google/flatbuffers/go" +) + +// GetMyAccountRequestToPayload converts usermodel.GetMyAccountRequest to +// FlatBuffers bytes suitable for the authenticated gateway transport. +func GetMyAccountRequestToPayload(request *usermodel.GetMyAccountRequest) ([]byte, error) { + if request == nil { + return nil, errors.New("encode get my account request payload: request is nil") + } + + builder := flatbuffers.NewBuilder(32) + userfbs.GetMyAccountRequestStart(builder) + offset := userfbs.GetMyAccountRequestEnd(builder) + userfbs.FinishGetMyAccountRequestBuffer(builder, offset) + + return builder.FinishedBytes(), nil +} + +// PayloadToGetMyAccountRequest converts FlatBuffers payload bytes into +// usermodel.GetMyAccountRequest. +func PayloadToGetMyAccountRequest(data []byte) (result *usermodel.GetMyAccountRequest, err error) { + if len(data) == 0 { + return nil, errors.New("decode get my account request payload: data is empty") + } + + defer recoverUserDecodePanic("decode get my account request payload", &result, &err) + + _ = userfbs.GetRootAsGetMyAccountRequest(data, 0) + return &usermodel.GetMyAccountRequest{}, nil +} + +// UpdateMyProfileRequestToPayload converts usermodel.UpdateMyProfileRequest to +// FlatBuffers bytes suitable for the authenticated gateway transport. 
+func UpdateMyProfileRequestToPayload(request *usermodel.UpdateMyProfileRequest) ([]byte, error) { + if request == nil { + return nil, errors.New("encode update my profile request payload: request is nil") + } + + builder := flatbuffers.NewBuilder(128) + raceName := builder.CreateString(request.RaceName) + + userfbs.UpdateMyProfileRequestStart(builder) + userfbs.UpdateMyProfileRequestAddRaceName(builder, raceName) + offset := userfbs.UpdateMyProfileRequestEnd(builder) + userfbs.FinishUpdateMyProfileRequestBuffer(builder, offset) + + return builder.FinishedBytes(), nil +} + +// PayloadToUpdateMyProfileRequest converts FlatBuffers payload bytes into +// usermodel.UpdateMyProfileRequest. +func PayloadToUpdateMyProfileRequest(data []byte) (result *usermodel.UpdateMyProfileRequest, err error) { + if len(data) == 0 { + return nil, errors.New("decode update my profile request payload: data is empty") + } + + defer recoverUserDecodePanic("decode update my profile request payload", &result, &err) + + request := userfbs.GetRootAsUpdateMyProfileRequest(data, 0) + return &usermodel.UpdateMyProfileRequest{ + RaceName: string(request.RaceName()), + }, nil +} + +// UpdateMySettingsRequestToPayload converts +// usermodel.UpdateMySettingsRequest to FlatBuffers bytes suitable for the +// authenticated gateway transport. 
+func UpdateMySettingsRequestToPayload(request *usermodel.UpdateMySettingsRequest) ([]byte, error) { + if request == nil { + return nil, errors.New("encode update my settings request payload: request is nil") + } + + builder := flatbuffers.NewBuilder(128) + preferredLanguage := builder.CreateString(request.PreferredLanguage) + timeZone := builder.CreateString(request.TimeZone) + + userfbs.UpdateMySettingsRequestStart(builder) + userfbs.UpdateMySettingsRequestAddPreferredLanguage(builder, preferredLanguage) + userfbs.UpdateMySettingsRequestAddTimeZone(builder, timeZone) + offset := userfbs.UpdateMySettingsRequestEnd(builder) + userfbs.FinishUpdateMySettingsRequestBuffer(builder, offset) + + return builder.FinishedBytes(), nil +} + +// PayloadToUpdateMySettingsRequest converts FlatBuffers payload bytes into +// usermodel.UpdateMySettingsRequest. +func PayloadToUpdateMySettingsRequest(data []byte) (result *usermodel.UpdateMySettingsRequest, err error) { + if len(data) == 0 { + return nil, errors.New("decode update my settings request payload: data is empty") + } + + defer recoverUserDecodePanic("decode update my settings request payload", &result, &err) + + request := userfbs.GetRootAsUpdateMySettingsRequest(data, 0) + return &usermodel.UpdateMySettingsRequest{ + PreferredLanguage: string(request.PreferredLanguage()), + TimeZone: string(request.TimeZone()), + }, nil +} + +// AccountResponseToPayload converts usermodel.AccountResponse to FlatBuffers +// bytes suitable for the authenticated gateway transport. 
+func AccountResponseToPayload(response *usermodel.AccountResponse) ([]byte, error) { + if response == nil { + return nil, errors.New("encode account response payload: response is nil") + } + + builder := flatbuffers.NewBuilder(512) + accountOffset, err := encodeAccount(builder, response.Account) + if err != nil { + return nil, fmt.Errorf("encode account response payload: %w", err) + } + + userfbs.AccountResponseStart(builder) + userfbs.AccountResponseAddAccount(builder, accountOffset) + offset := userfbs.AccountResponseEnd(builder) + userfbs.FinishAccountResponseBuffer(builder, offset) + + return builder.FinishedBytes(), nil +} + +// PayloadToAccountResponse converts FlatBuffers payload bytes into +// usermodel.AccountResponse. +func PayloadToAccountResponse(data []byte) (result *usermodel.AccountResponse, err error) { + if len(data) == 0 { + return nil, errors.New("decode account response payload: data is empty") + } + + defer recoverUserDecodePanic("decode account response payload", &result, &err) + + response := userfbs.GetRootAsAccountResponse(data, 0) + account := response.Account(nil) + if account == nil { + return nil, errors.New("decode account response payload: account is missing") + } + + decodedAccount, err := decodeAccount(account) + if err != nil { + return nil, fmt.Errorf("decode account response payload: %w", err) + } + + return &usermodel.AccountResponse{Account: decodedAccount}, nil +} + +// ErrorResponseToPayload converts usermodel.ErrorResponse to FlatBuffers bytes +// suitable for the authenticated gateway transport. 
+func ErrorResponseToPayload(response *usermodel.ErrorResponse) ([]byte, error) { + if response == nil { + return nil, errors.New("encode error response payload: response is nil") + } + + builder := flatbuffers.NewBuilder(128) + errorOffset := encodeErrorBody(builder, response.Error) + + userfbs.ErrorResponseStart(builder) + userfbs.ErrorResponseAddError(builder, errorOffset) + offset := userfbs.ErrorResponseEnd(builder) + userfbs.FinishErrorResponseBuffer(builder, offset) + + return builder.FinishedBytes(), nil +} + +// PayloadToErrorResponse converts FlatBuffers payload bytes into +// usermodel.ErrorResponse. +func PayloadToErrorResponse(data []byte) (result *usermodel.ErrorResponse, err error) { + if len(data) == 0 { + return nil, errors.New("decode error response payload: data is empty") + } + + defer recoverUserDecodePanic("decode error response payload", &result, &err) + + response := userfbs.GetRootAsErrorResponse(data, 0) + errorBody := response.Error(nil) + if errorBody == nil { + return nil, errors.New("decode error response payload: error is missing") + } + + return &usermodel.ErrorResponse{ + Error: usermodel.ErrorBody{ + Code: string(errorBody.Code()), + Message: string(errorBody.Message()), + }, + }, nil +} + +func encodeAccount(builder *flatbuffers.Builder, account usermodel.Account) (flatbuffers.UOffsetT, error) { + entitlementOffset, err := encodeEntitlementSnapshot(builder, account.Entitlement) + if err != nil { + return 0, fmt.Errorf("encode account: %w", err) + } + + activeSanctionOffsets := make([]flatbuffers.UOffsetT, len(account.ActiveSanctions)) + for index := range account.ActiveSanctions { + activeSanctionOffsets[index], err = encodeActiveSanction(builder, account.ActiveSanctions[index]) + if err != nil { + return 0, fmt.Errorf("encode account active sanction %d: %w", index, err) + } + } + + var activeSanctionsVector flatbuffers.UOffsetT + if len(activeSanctionOffsets) > 0 { + userfbs.AccountViewStartActiveSanctionsVector(builder, 
len(activeSanctionOffsets)) + for index := len(activeSanctionOffsets) - 1; index >= 0; index-- { + builder.PrependUOffsetT(activeSanctionOffsets[index]) + } + activeSanctionsVector = builder.EndVector(len(activeSanctionOffsets)) + } + + activeLimitOffsets := make([]flatbuffers.UOffsetT, len(account.ActiveLimits)) + for index := range account.ActiveLimits { + activeLimitOffsets[index], err = encodeActiveLimit(builder, account.ActiveLimits[index]) + if err != nil { + return 0, fmt.Errorf("encode account active limit %d: %w", index, err) + } + } + + var activeLimitsVector flatbuffers.UOffsetT + if len(activeLimitOffsets) > 0 { + userfbs.AccountViewStartActiveLimitsVector(builder, len(activeLimitOffsets)) + for index := len(activeLimitOffsets) - 1; index >= 0; index-- { + builder.PrependUOffsetT(activeLimitOffsets[index]) + } + activeLimitsVector = builder.EndVector(len(activeLimitOffsets)) + } + + userID := builder.CreateString(account.UserID) + email := builder.CreateString(account.Email) + raceName := builder.CreateString(account.RaceName) + preferredLanguage := builder.CreateString(account.PreferredLanguage) + timeZone := builder.CreateString(account.TimeZone) + + var declaredCountry flatbuffers.UOffsetT + if account.DeclaredCountry != "" { + declaredCountry = builder.CreateString(account.DeclaredCountry) + } + + userfbs.AccountViewStart(builder) + userfbs.AccountViewAddUserId(builder, userID) + userfbs.AccountViewAddEmail(builder, email) + userfbs.AccountViewAddRaceName(builder, raceName) + userfbs.AccountViewAddPreferredLanguage(builder, preferredLanguage) + userfbs.AccountViewAddTimeZone(builder, timeZone) + if declaredCountry != 0 { + userfbs.AccountViewAddDeclaredCountry(builder, declaredCountry) + } + userfbs.AccountViewAddEntitlement(builder, entitlementOffset) + if activeSanctionsVector != 0 { + userfbs.AccountViewAddActiveSanctions(builder, activeSanctionsVector) + } + if activeLimitsVector != 0 { + userfbs.AccountViewAddActiveLimits(builder, 
activeLimitsVector) + } + userfbs.AccountViewAddCreatedAtMs(builder, account.CreatedAt.UTC().UnixMilli()) + userfbs.AccountViewAddUpdatedAtMs(builder, account.UpdatedAt.UTC().UnixMilli()) + + return userfbs.AccountViewEnd(builder), nil +} + +func decodeAccount(account *userfbs.AccountView) (usermodel.Account, error) { + entitlement := account.Entitlement(nil) + if entitlement == nil { + return usermodel.Account{}, errors.New("account entitlement is missing") + } + + decodedEntitlement, err := decodeEntitlementSnapshot(entitlement) + if err != nil { + return usermodel.Account{}, fmt.Errorf("decode account entitlement: %w", err) + } + + createdAt := time.UnixMilli(account.CreatedAtMs()).UTC() + updatedAt := time.UnixMilli(account.UpdatedAtMs()).UTC() + + result := usermodel.Account{ + UserID: string(account.UserId()), + Email: string(account.Email()), + RaceName: string(account.RaceName()), + PreferredLanguage: string(account.PreferredLanguage()), + TimeZone: string(account.TimeZone()), + DeclaredCountry: string(account.DeclaredCountry()), + Entitlement: decodedEntitlement, + ActiveSanctions: make([]usermodel.ActiveSanction, 0, account.ActiveSanctionsLength()), + ActiveLimits: make([]usermodel.ActiveLimit, 0, account.ActiveLimitsLength()), + CreatedAt: createdAt, + UpdatedAt: updatedAt, + } + + activeSanction := new(userfbs.ActiveSanction) + for index := 0; index < account.ActiveSanctionsLength(); index++ { + if !account.ActiveSanctions(activeSanction, index) { + return usermodel.Account{}, fmt.Errorf("account active sanction %d is missing", index) + } + + decodedSanction, err := decodeActiveSanction(activeSanction) + if err != nil { + return usermodel.Account{}, fmt.Errorf("decode account active sanction %d: %w", index, err) + } + result.ActiveSanctions = append(result.ActiveSanctions, decodedSanction) + } + + activeLimit := new(userfbs.ActiveLimit) + for index := 0; index < account.ActiveLimitsLength(); index++ { + if !account.ActiveLimits(activeLimit, index) { + 
return usermodel.Account{}, fmt.Errorf("account active limit %d is missing", index) + } + + decodedLimit, err := decodeActiveLimit(activeLimit) + if err != nil { + return usermodel.Account{}, fmt.Errorf("decode account active limit %d: %w", index, err) + } + result.ActiveLimits = append(result.ActiveLimits, decodedLimit) + } + + return result, nil +} + +func encodeEntitlementSnapshot(builder *flatbuffers.Builder, snapshot usermodel.EntitlementSnapshot) (flatbuffers.UOffsetT, error) { + actorOffset := encodeActorRef(builder, snapshot.Actor) + planCode := builder.CreateString(snapshot.PlanCode) + source := builder.CreateString(snapshot.Source) + reasonCode := builder.CreateString(snapshot.ReasonCode) + + userfbs.EntitlementSnapshotStart(builder) + userfbs.EntitlementSnapshotAddPlanCode(builder, planCode) + userfbs.EntitlementSnapshotAddIsPaid(builder, snapshot.IsPaid) + userfbs.EntitlementSnapshotAddSource(builder, source) + userfbs.EntitlementSnapshotAddActor(builder, actorOffset) + userfbs.EntitlementSnapshotAddReasonCode(builder, reasonCode) + userfbs.EntitlementSnapshotAddStartsAtMs(builder, snapshot.StartsAt.UTC().UnixMilli()) + if snapshot.EndsAt != nil { + userfbs.EntitlementSnapshotAddEndsAtMs(builder, snapshot.EndsAt.UTC().UnixMilli()) + } + userfbs.EntitlementSnapshotAddUpdatedAtMs(builder, snapshot.UpdatedAt.UTC().UnixMilli()) + + return userfbs.EntitlementSnapshotEnd(builder), nil +} + +func decodeEntitlementSnapshot(snapshot *userfbs.EntitlementSnapshot) (usermodel.EntitlementSnapshot, error) { + actor := snapshot.Actor(nil) + if actor == nil { + return usermodel.EntitlementSnapshot{}, errors.New("entitlement actor is missing") + } + + decodedActor, err := decodeActorRef(actor) + if err != nil { + return usermodel.EntitlementSnapshot{}, fmt.Errorf("decode entitlement actor: %w", err) + } + + return usermodel.EntitlementSnapshot{ + PlanCode: string(snapshot.PlanCode()), + IsPaid: snapshot.IsPaid(), + Source: string(snapshot.Source()), + Actor: 
decodedActor, + ReasonCode: string(snapshot.ReasonCode()), + StartsAt: time.UnixMilli(snapshot.StartsAtMs()).UTC(), + EndsAt: optionalUnixMilli(snapshot.EndsAtMs()), + UpdatedAt: time.UnixMilli(snapshot.UpdatedAtMs()).UTC(), + }, nil +} + +func encodeActiveSanction(builder *flatbuffers.Builder, sanction usermodel.ActiveSanction) (flatbuffers.UOffsetT, error) { + actorOffset := encodeActorRef(builder, sanction.Actor) + sanctionCode := builder.CreateString(sanction.SanctionCode) + scope := builder.CreateString(sanction.Scope) + reasonCode := builder.CreateString(sanction.ReasonCode) + + userfbs.ActiveSanctionStart(builder) + userfbs.ActiveSanctionAddSanctionCode(builder, sanctionCode) + userfbs.ActiveSanctionAddScope(builder, scope) + userfbs.ActiveSanctionAddReasonCode(builder, reasonCode) + userfbs.ActiveSanctionAddActor(builder, actorOffset) + userfbs.ActiveSanctionAddAppliedAtMs(builder, sanction.AppliedAt.UTC().UnixMilli()) + if sanction.ExpiresAt != nil { + userfbs.ActiveSanctionAddExpiresAtMs(builder, sanction.ExpiresAt.UTC().UnixMilli()) + } + + return userfbs.ActiveSanctionEnd(builder), nil +} + +func decodeActiveSanction(sanction *userfbs.ActiveSanction) (usermodel.ActiveSanction, error) { + actor := sanction.Actor(nil) + if actor == nil { + return usermodel.ActiveSanction{}, errors.New("sanction actor is missing") + } + + decodedActor, err := decodeActorRef(actor) + if err != nil { + return usermodel.ActiveSanction{}, fmt.Errorf("decode sanction actor: %w", err) + } + + return usermodel.ActiveSanction{ + SanctionCode: string(sanction.SanctionCode()), + Scope: string(sanction.Scope()), + ReasonCode: string(sanction.ReasonCode()), + Actor: decodedActor, + AppliedAt: time.UnixMilli(sanction.AppliedAtMs()).UTC(), + ExpiresAt: optionalUnixMilli(sanction.ExpiresAtMs()), + }, nil +} + +func encodeActiveLimit(builder *flatbuffers.Builder, limit usermodel.ActiveLimit) (flatbuffers.UOffsetT, error) { + actorOffset := encodeActorRef(builder, limit.Actor) + limitCode 
:= builder.CreateString(limit.LimitCode) + reasonCode := builder.CreateString(limit.ReasonCode) + + userfbs.ActiveLimitStart(builder) + userfbs.ActiveLimitAddLimitCode(builder, limitCode) + userfbs.ActiveLimitAddValue(builder, int64(limit.Value)) + userfbs.ActiveLimitAddReasonCode(builder, reasonCode) + userfbs.ActiveLimitAddActor(builder, actorOffset) + userfbs.ActiveLimitAddAppliedAtMs(builder, limit.AppliedAt.UTC().UnixMilli()) + if limit.ExpiresAt != nil { + userfbs.ActiveLimitAddExpiresAtMs(builder, limit.ExpiresAt.UTC().UnixMilli()) + } + + return userfbs.ActiveLimitEnd(builder), nil +} + +func decodeActiveLimit(limit *userfbs.ActiveLimit) (usermodel.ActiveLimit, error) { + actor := limit.Actor(nil) + if actor == nil { + return usermodel.ActiveLimit{}, errors.New("limit actor is missing") + } + + decodedActor, err := decodeActorRef(actor) + if err != nil { + return usermodel.ActiveLimit{}, fmt.Errorf("decode limit actor: %w", err) + } + + value, err := int64ToInt(limit.Value(), "value") + if err != nil { + return usermodel.ActiveLimit{}, err + } + + return usermodel.ActiveLimit{ + LimitCode: string(limit.LimitCode()), + Value: value, + ReasonCode: string(limit.ReasonCode()), + Actor: decodedActor, + AppliedAt: time.UnixMilli(limit.AppliedAtMs()).UTC(), + ExpiresAt: optionalUnixMilli(limit.ExpiresAtMs()), + }, nil +} + +func encodeActorRef(builder *flatbuffers.Builder, actor usermodel.ActorRef) flatbuffers.UOffsetT { + actorType := builder.CreateString(actor.Type) + + var actorID flatbuffers.UOffsetT + if actor.ID != "" { + actorID = builder.CreateString(actor.ID) + } + + userfbs.ActorRefStart(builder) + userfbs.ActorRefAddType(builder, actorType) + if actorID != 0 { + userfbs.ActorRefAddId(builder, actorID) + } + + return userfbs.ActorRefEnd(builder) +} + +func decodeActorRef(actor *userfbs.ActorRef) (usermodel.ActorRef, error) { + return usermodel.ActorRef{ + Type: string(actor.Type()), + ID: string(actor.Id()), + }, nil +} + +func encodeErrorBody(builder 
*flatbuffers.Builder, errorBody usermodel.ErrorBody) flatbuffers.UOffsetT { + code := builder.CreateString(errorBody.Code) + message := builder.CreateString(errorBody.Message) + + userfbs.ErrorBodyStart(builder) + userfbs.ErrorBodyAddCode(builder, code) + userfbs.ErrorBodyAddMessage(builder, message) + + return userfbs.ErrorBodyEnd(builder) +} + +func optionalUnixMilli(value int64) *time.Time { + if value == 0 { + return nil + } + + decoded := time.UnixMilli(value).UTC() + return &decoded +} + +func recoverUserDecodePanic[T any](message string, result **T, err *error) { + if recovered := recover(); recovered != nil { + *result = nil + *err = fmt.Errorf("%s: panic recovered: %v", message, recovered) + } +} diff --git a/pkg/transcoder/user_test.go b/pkg/transcoder/user_test.go new file mode 100644 index 0000000..0fb5e62 --- /dev/null +++ b/pkg/transcoder/user_test.go @@ -0,0 +1,468 @@ +package transcoder + +import ( + "reflect" + "strconv" + "strings" + "testing" + "time" + + usermodel "galaxy/model/user" + userfbs "galaxy/schema/fbs/user" + + flatbuffers "github.com/google/flatbuffers/go" +) + +func TestUserRequestPayloadRoundTrips(t *testing.T) { + t.Parallel() + + getPayload, err := GetMyAccountRequestToPayload(&usermodel.GetMyAccountRequest{}) + if err != nil { + t.Fatalf("encode get my account request: %v", err) + } + + getDecoded, err := PayloadToGetMyAccountRequest(getPayload) + if err != nil { + t.Fatalf("decode get my account request: %v", err) + } + if !reflect.DeepEqual(&usermodel.GetMyAccountRequest{}, getDecoded) { + t.Fatalf("get my account request mismatch: %#v", getDecoded) + } + + profileSource := &usermodel.UpdateMyProfileRequest{RaceName: "Nova Prime"} + profilePayload, err := UpdateMyProfileRequestToPayload(profileSource) + if err != nil { + t.Fatalf("encode update my profile request: %v", err) + } + + profileDecoded, err := PayloadToUpdateMyProfileRequest(profilePayload) + if err != nil { + t.Fatalf("decode update my profile request: %v", err) + 
} + if !reflect.DeepEqual(profileSource, profileDecoded) { + t.Fatalf("update my profile request mismatch\nsource: %#v\ndecoded:%#v", profileSource, profileDecoded) + } + + settingsSource := &usermodel.UpdateMySettingsRequest{ + PreferredLanguage: "en-US", + TimeZone: "Europe/Kaliningrad", + } + settingsPayload, err := UpdateMySettingsRequestToPayload(settingsSource) + if err != nil { + t.Fatalf("encode update my settings request: %v", err) + } + + settingsDecoded, err := PayloadToUpdateMySettingsRequest(settingsPayload) + if err != nil { + t.Fatalf("decode update my settings request: %v", err) + } + if !reflect.DeepEqual(settingsSource, settingsDecoded) { + t.Fatalf("update my settings request mismatch\nsource: %#v\ndecoded:%#v", settingsSource, settingsDecoded) + } +} + +func TestAccountResponsePayloadRoundTrip(t *testing.T) { + t.Parallel() + + now := time.Date(2026, time.April, 9, 10, 0, 0, 0, time.UTC) + expiresAt := now.Add(30 * 24 * time.Hour) + limitExpiresAt := now.Add(90 * 24 * time.Hour) + + source := &usermodel.AccountResponse{ + Account: usermodel.Account{ + UserID: "user-123", + Email: "pilot@example.com", + RaceName: "Pilot Nova", + PreferredLanguage: "en", + TimeZone: "Europe/Kaliningrad", + DeclaredCountry: "DE", + Entitlement: usermodel.EntitlementSnapshot{ + PlanCode: "paid_monthly", + IsPaid: true, + Source: "billing", + Actor: usermodel.ActorRef{Type: "billing", ID: "invoice-1"}, + ReasonCode: "renewal", + StartsAt: now, + EndsAt: &expiresAt, + UpdatedAt: now, + }, + ActiveSanctions: []usermodel.ActiveSanction{ + { + SanctionCode: "profile_update_block", + Scope: "lobby", + ReasonCode: "manual_block", + Actor: usermodel.ActorRef{Type: "admin", ID: "admin-1"}, + AppliedAt: now, + ExpiresAt: &expiresAt, + }, + }, + ActiveLimits: []usermodel.ActiveLimit{ + { + LimitCode: "max_owned_private_games", + Value: 3, + ReasonCode: "manual_override", + Actor: usermodel.ActorRef{Type: "admin", ID: "admin-1"}, + AppliedAt: now, + ExpiresAt: &limitExpiresAt, 
+ }, + }, + CreatedAt: now, + UpdatedAt: now.Add(time.Hour), + }, + } + + payload, err := AccountResponseToPayload(source) + if err != nil { + t.Fatalf("encode account response: %v", err) + } + + decoded, err := PayloadToAccountResponse(payload) + if err != nil { + t.Fatalf("decode account response: %v", err) + } + + if !reflect.DeepEqual(source, decoded) { + t.Fatalf("account response mismatch\nsource: %#v\ndecoded:%#v", source, decoded) + } +} + +func TestErrorResponsePayloadRoundTrip(t *testing.T) { + t.Parallel() + + source := &usermodel.ErrorResponse{ + Error: usermodel.ErrorBody{ + Code: "conflict", + Message: "request conflicts with current state", + }, + } + + payload, err := ErrorResponseToPayload(source) + if err != nil { + t.Fatalf("encode error response: %v", err) + } + + decoded, err := PayloadToErrorResponse(payload) + if err != nil { + t.Fatalf("decode error response: %v", err) + } + + if !reflect.DeepEqual(source, decoded) { + t.Fatalf("error response mismatch\nsource: %#v\ndecoded:%#v", source, decoded) + } +} + +func TestUserPayloadEncodersRejectNilInputs(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + call func() error + }{ + { + name: "get my account request", + call: func() error { + _, err := GetMyAccountRequestToPayload(nil) + return err + }, + }, + { + name: "update my profile request", + call: func() error { + _, err := UpdateMyProfileRequestToPayload(nil) + return err + }, + }, + { + name: "update my settings request", + call: func() error { + _, err := UpdateMySettingsRequestToPayload(nil) + return err + }, + }, + { + name: "account response", + call: func() error { + _, err := AccountResponseToPayload(nil) + return err + }, + }, + { + name: "error response", + call: func() error { + _, err := ErrorResponseToPayload(nil) + return err + }, + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + if err := tt.call(); err == nil { + t.Fatal("expected error") + } + }) + 
} +} + +func TestUserPayloadDecodersRejectEmptyPayloads(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + call func() error + }{ + { + name: "get my account request", + call: func() error { + _, err := PayloadToGetMyAccountRequest(nil) + return err + }, + }, + { + name: "update my profile request", + call: func() error { + _, err := PayloadToUpdateMyProfileRequest(nil) + return err + }, + }, + { + name: "update my settings request", + call: func() error { + _, err := PayloadToUpdateMySettingsRequest(nil) + return err + }, + }, + { + name: "account response", + call: func() error { + _, err := PayloadToAccountResponse(nil) + return err + }, + }, + { + name: "error response", + call: func() error { + _, err := PayloadToErrorResponse(nil) + return err + }, + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + if err := tt.call(); err == nil { + t.Fatal("expected error") + } + }) + } +} + +func TestUserPayloadDecodersRecoverFromGarbagePayloads(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + call func() error + }{ + { + name: "get my account request", + call: func() error { + _, err := PayloadToGetMyAccountRequest([]byte{0x01, 0x02, 0x03}) + return err + }, + }, + { + name: "update my profile request", + call: func() error { + _, err := PayloadToUpdateMyProfileRequest([]byte{0x01, 0x02, 0x03}) + return err + }, + }, + { + name: "update my settings request", + call: func() error { + _, err := PayloadToUpdateMySettingsRequest([]byte{0x01, 0x02, 0x03}) + return err + }, + }, + { + name: "account response", + call: func() error { + _, err := PayloadToAccountResponse([]byte{0x01, 0x02, 0x03}) + return err + }, + }, + { + name: "error response", + call: func() error { + _, err := PayloadToErrorResponse([]byte{0x01, 0x02, 0x03}) + return err + }, + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + if err := tt.call(); err == 
nil { + t.Fatal("expected error") + } + }) + } +} + +func TestPayloadToAccountResponseRejectsMissingAccount(t *testing.T) { + t.Parallel() + + builder := flatbuffers.NewBuilder(64) + userfbs.AccountResponseStart(builder) + offset := userfbs.AccountResponseEnd(builder) + userfbs.FinishAccountResponseBuffer(builder, offset) + + _, err := PayloadToAccountResponse(builder.FinishedBytes()) + if err == nil { + t.Fatal("expected error for missing account") + } + if !strings.Contains(err.Error(), "account is missing") { + t.Fatalf("unexpected error: %v", err) + } +} + +func TestPayloadToAccountResponseRejectsMissingEntitlement(t *testing.T) { + t.Parallel() + + payload := buildAccountResponsePayload(func(builder *flatbuffers.Builder) flatbuffers.UOffsetT { + userID := builder.CreateString("user-123") + email := builder.CreateString("pilot@example.com") + raceName := builder.CreateString("Pilot Nova") + preferredLanguage := builder.CreateString("en") + timeZone := builder.CreateString("Europe/Kaliningrad") + + userfbs.AccountViewStart(builder) + userfbs.AccountViewAddUserId(builder, userID) + userfbs.AccountViewAddEmail(builder, email) + userfbs.AccountViewAddRaceName(builder, raceName) + userfbs.AccountViewAddPreferredLanguage(builder, preferredLanguage) + userfbs.AccountViewAddTimeZone(builder, timeZone) + userfbs.AccountViewAddCreatedAtMs(builder, 1) + userfbs.AccountViewAddUpdatedAtMs(builder, 2) + return userfbs.AccountViewEnd(builder) + }) + + _, err := PayloadToAccountResponse(payload) + if err == nil { + t.Fatal("expected error for missing entitlement") + } + if !strings.Contains(err.Error(), "entitlement is missing") { + t.Fatalf("unexpected error: %v", err) + } +} + +func TestPayloadToAccountResponseRejectsOverflowLimitValue(t *testing.T) { + t.Parallel() + + if strconv.IntSize == 64 { + t.Skip("int overflow from int64 is not possible on 64-bit runtime") + } + + maxInt := int64(int(^uint(0) >> 1)) + overflow := maxInt + 1 + nowMS := int64(1) + + payload := 
buildAccountResponsePayload(func(builder *flatbuffers.Builder) flatbuffers.UOffsetT { + actorType := builder.CreateString("admin") + userfbs.ActorRefStart(builder) + userfbs.ActorRefAddType(builder, actorType) + actorOffset := userfbs.ActorRefEnd(builder) + + planCode := builder.CreateString("free") + source := builder.CreateString("auth_registration") + reasonCode := builder.CreateString("initial_free_entitlement") + userfbs.EntitlementSnapshotStart(builder) + userfbs.EntitlementSnapshotAddPlanCode(builder, planCode) + userfbs.EntitlementSnapshotAddSource(builder, source) + userfbs.EntitlementSnapshotAddActor(builder, actorOffset) + userfbs.EntitlementSnapshotAddReasonCode(builder, reasonCode) + userfbs.EntitlementSnapshotAddStartsAtMs(builder, nowMS) + userfbs.EntitlementSnapshotAddUpdatedAtMs(builder, nowMS) + entitlementOffset := userfbs.EntitlementSnapshotEnd(builder) + + limitCode := builder.CreateString("max_owned_private_games") + limitReasonCode := builder.CreateString("manual_override") + userfbs.ActiveLimitStart(builder) + userfbs.ActiveLimitAddLimitCode(builder, limitCode) + userfbs.ActiveLimitAddValue(builder, overflow) + userfbs.ActiveLimitAddReasonCode(builder, limitReasonCode) + userfbs.ActiveLimitAddActor(builder, actorOffset) + userfbs.ActiveLimitAddAppliedAtMs(builder, nowMS) + limitOffset := userfbs.ActiveLimitEnd(builder) + + userfbs.AccountViewStartActiveLimitsVector(builder, 1) + builder.PrependUOffsetT(limitOffset) + limitsVector := builder.EndVector(1) + + userID := builder.CreateString("user-123") + email := builder.CreateString("pilot@example.com") + raceName := builder.CreateString("Pilot Nova") + preferredLanguage := builder.CreateString("en") + timeZone := builder.CreateString("Europe/Kaliningrad") + + userfbs.AccountViewStart(builder) + userfbs.AccountViewAddUserId(builder, userID) + userfbs.AccountViewAddEmail(builder, email) + userfbs.AccountViewAddRaceName(builder, raceName) + userfbs.AccountViewAddPreferredLanguage(builder, 
preferredLanguage) + userfbs.AccountViewAddTimeZone(builder, timeZone) + userfbs.AccountViewAddEntitlement(builder, entitlementOffset) + userfbs.AccountViewAddActiveLimits(builder, limitsVector) + userfbs.AccountViewAddCreatedAtMs(builder, nowMS) + userfbs.AccountViewAddUpdatedAtMs(builder, nowMS) + return userfbs.AccountViewEnd(builder) + }) + + _, err := PayloadToAccountResponse(payload) + if err == nil { + t.Fatal("expected overflow error") + } + if !strings.Contains(err.Error(), "overflows int") { + t.Fatalf("unexpected error: %v", err) + } +} + +func TestPayloadToErrorResponseRejectsMissingError(t *testing.T) { + t.Parallel() + + builder := flatbuffers.NewBuilder(64) + userfbs.ErrorResponseStart(builder) + offset := userfbs.ErrorResponseEnd(builder) + userfbs.FinishErrorResponseBuffer(builder, offset) + + _, err := PayloadToErrorResponse(builder.FinishedBytes()) + if err == nil { + t.Fatal("expected error for missing error body") + } + if !strings.Contains(err.Error(), "error is missing") { + t.Fatalf("unexpected error: %v", err) + } +} + +func buildAccountResponsePayload(accountBuilder func(*flatbuffers.Builder) flatbuffers.UOffsetT) []byte { + builder := flatbuffers.NewBuilder(256) + + accountOffset := accountBuilder(builder) + + userfbs.AccountResponseStart(builder) + userfbs.AccountResponseAddAccount(builder, accountOffset) + responseOffset := userfbs.AccountResponseEnd(builder) + userfbs.FinishAccountResponseBuffer(builder, responseOffset) + + return builder.FinishedBytes() +} diff --git a/user/PLAN.md b/user/PLAN.md index 96983bd..5d5f531 100644 --- a/user/PLAN.md +++ b/user/PLAN.md @@ -1,5 +1,9 @@ # User Service Implementation Plan +This plan has already been implemented and stays here for historical reasons. + +It should NOT be treated as a source of truth for service functionality. 
+ ## Planning Principles This plan is aligned with the current repository architecture and is written @@ -17,7 +21,9 @@ Execution priorities: - keep the first version storage-agnostic at the domain boundary even if Redis is the initial backend -## Stage 01 — Freeze Vocabulary, Contracts, and Cross-Service Ownership +## ~~Stage 01~~ — Freeze Vocabulary, Contracts, and Cross-Service Ownership + +Status: implemented. ### Goal @@ -38,8 +44,10 @@ Remove naming ambiguity and freeze the service boundary before implementation. - workflow and history in `Geo Profile Service` - Freeze the auth-facing internal REST endpoints already reserved by `Auth / Session Service`. -- Freeze the need for create-only registration context on - `EnsureUserByEmail`. +- Freeze the exact create-only registration context shape on + `EnsureUserByEmail`: + - `preferred_language` + - `time_zone` ### Deliverables @@ -58,7 +66,9 @@ Remove naming ambiguity and freeze the service boundary before implementation. - none yet beyond documentation review -## Stage 02 — Define Domain Entities and Redis-Backed Logical State +## ~~Stage 02~~ — Define Domain Entities and Redis-Backed Logical State + +Status: implemented. ### Goal @@ -97,7 +107,9 @@ without revisiting core semantics. - domain validation tests for required fields - tests for effective-state evaluation of active versus expired records -## Stage 03 — Implement Auth-Facing Resolution, Ensure, Existence, and E-Mail Blocking +## ~~Stage 03~~ — Implement Auth-Facing Resolution, Ensure, Existence, and E-Mail Blocking + +Status: implemented. ### Goal @@ -122,6 +134,12 @@ Provide the minimum trusted API needed by `Auth / Session Service`. 
- trusted internal REST handlers for auth-facing endpoints - domain services for resolution and block behavior - Redis-backed storage for user existence and blocked-email subjects +- runnable `cmd/userservice` process using `Gin` and `go-redis/v9` +- durable create path that already materializes: + - opaque `user_id` + - generated `player-` race name + - stored `preferred_language` and `time_zone` + - initial free entitlement snapshot ### Exit Criteria @@ -137,22 +155,26 @@ Provide the minimum trusted API needed by `Auth / Session Service`. - block by user id on unknown user returns not found - repeated block calls stay idempotent -## Stage 04 — Add New-User Creation Context from Auth +## ~~Stage 04~~ — Implement New-User Creation Context from Auth + +Status: implemented. ### Goal -Support first-login user creation with initial settings captured at confirm -time. +Tighten the already-implemented first-login create path with stricter semantic +validation. ### Tasks -- Extend `EnsureUserByEmail` contract with create-only registration context: +- Preserve the already-frozen create-only `EnsureUserByEmail` + registration context with: - `preferred_language` - `time_zone` -- Validate `preferred_language` as BCP 47. -- Validate `time_zone` as IANA TZ name. -- Generate initial `race_name` in `player-` form during creation. -- Initialize the newly created user with: +- Tighten `preferred_language` validation to BCP 47 semantics. +- Tighten `time_zone` validation to IANA TZ semantics. +- Preserve generated initial `race_name` in `player-` form during + creation. +- Preserve the newly created user initialization with: - free entitlement - no active sanctions - no custom limits @@ -161,9 +183,9 @@ time. 
### Deliverables -- extended ensure-by-email request model -- create-user domain service +- create-user domain service using the frozen ensure-by-email request model - generated-race-name helper +- create-path validation for `preferred_language` and `time_zone` ### Exit Criteria @@ -177,7 +199,9 @@ time. - existing user ensure ignores create-only registration context - invalid BCP 47 or IANA inputs are rejected on create path -## Stage 05 — Implement Self-Service Account Read and Split Profile/Settings Mutations +## ~~Stage 05~~ — Implement Self-Service Account Read and Split Profile/Settings Mutations + +Status: implemented. ### Goal @@ -220,7 +244,9 @@ Expose the minimal authenticated account surface routed by `Edge Gateway`. - `UpdateMySettings` validates BCP 47 and IANA values - active `profile_update_block` denies both update flows -## Stage 06 — Implement race_name Uniqueness Policy Behind a Dedicated Interface +## ~~Stage 06~~ — Implement race_name Uniqueness Policy Behind a Dedicated Interface + +Status: implemented. ### Goal @@ -256,7 +282,9 @@ Keep `race_name` uniqueness strict and replaceable. - rename releases the old reservation only after the new one is secured - failed reservation backend causes mutation to fail closed -## Stage 07 — Implement Entitlement History Plus Materialized Current Snapshot +## ~~Stage 07~~ — Implement Entitlement History Plus Materialized Current Snapshot + +Status: implemented. ### Goal @@ -298,7 +326,9 @@ Support both auditability and fast synchronous entitlement reads. - free default is created for new users - extending or revoking access preserves deterministic current-state behavior -## Stage 08 — Implement Sanctions and Limit Records with Active/Effective Evaluation +## ~~Stage 08~~ — Implement Sanctions and Limit Records with Active/Effective Evaluation + +Status: implemented. ### Goal @@ -317,11 +347,23 @@ consumers. 
- `profile_update_block` - Freeze v1 limit catalog: - `max_owned_private_games` - - `max_active_private_games` - `max_pending_public_applications` - - `max_pending_private_join_requests` - - `max_pending_private_invites_sent` - `max_active_game_memberships` +- Freeze supported v1 limit semantics: + - paid effective defaults: + - `max_owned_private_games=3` + - `max_pending_public_applications=10` + - `max_active_game_memberships=10` + - free effective defaults: + - `max_owned_private_games` is omitted + - `max_pending_public_applications=3` + - `max_active_game_memberships=3` + - `max_active_game_memberships` applies only to public games + - `max_pending_public_applications` is the total public-games budget and is + interpreted by `Game Lobby` together with current active public + memberships +- Keep legacy retired limit codes backward-compatible on reads, but reject + them for new trusted limit commands. - Implement active/effective evaluation with current time. - Implement trusted explicit commands to apply/remove sanctions and set/remove limits. @@ -343,9 +385,14 @@ consumers. - active sanctions appear in account reads - expired sanctions and limits stop affecting effective state +- retired legacy limit records are ignored during reads and effective + evaluation +- retired legacy limit codes are rejected by trusted limit commands - applying and removing sanctions/limits is idempotent where appropriate -## Stage 09 — Implement Lobby Eligibility Snapshot API +## ~~Stage 09~~ — Implement Lobby Eligibility Snapshot API + +Status: implemented. ### Goal @@ -361,6 +408,16 @@ user-level access decisions. 
- active lobby-relevant sanctions - effective lobby-relevant limits - derived booleans for lobby decisions +- Freeze the lobby-facing effective limit catalog: + - paid users receive `max_owned_private_games=3`, + `max_pending_public_applications=10`, and + `max_active_game_memberships=10` + - free users omit `max_owned_private_games` and receive + `max_pending_public_applications=3` and + `max_active_game_memberships=3` + - `max_pending_public_applications` remains the total public-games budget + consumed together with current active public memberships inside + `Game Lobby` - Keep the response read-optimized so lobby does not need multiple dependent calls back into `User Service`. - Define deterministic not-found behavior. @@ -381,8 +438,12 @@ user-level access decisions. - lobby eligibility snapshot reflects paid status, sanctions, and limits - unknown user returns stable not-found behavior - derived booleans remain consistent with raw effective state +- free and paid snapshots materialize the reduced three-code effective limit + catalog correctly -## Stage 10 — Implement Geo declared_country Sync Command +## ~~Stage 10~~ — Implement Geo declared_country Sync Command + +Status: implemented. ### Goal @@ -416,7 +477,9 @@ Support the current-country denormalization path owned by `Geo Profile Service`. - invalid country codes are rejected - country sync emits the correct auxiliary event after commit -## Stage 11 — Implement Admin Lookup, Filtered Listing, and Explicit Trusted Mutations +## ~~Stage 11~~ — Implement Admin Lookup, Filtered Listing, and Explicit Trusted Mutations + +Status: implemented. ### Goal @@ -462,7 +525,9 @@ operations. - exact lookups by `user_id`, email, and `race_name` resolve the correct user - every trusted mutation preserves actor and reason metadata -## Stage 12 — Add Per-Domain-Area Async Events and Observability +## ~~Stage 12~~ — Add Per-Domain-Area Async Events and Observability + +Status: implemented. ### Goal @@ -505,7 +570,9 @@ truth. 
- event payloads include minimum required metadata - observability hooks do not change business behavior -## Stage 13 — Add Contract Tests Against Auth, Lobby, and Geo Expectations +## ~~Stage 13~~ — Add Contract Tests Against Auth, Lobby, and Geo Expectations + +Status: implemented. ### Goal @@ -542,7 +609,9 @@ must satisfy for other services. - lobby eligibility snapshot reflects paid status, sanctions, and limits - geo country sync changes only current `declared_country` -## Stage 14 — Add Rollout Notes for Gateway/Auth/OpenAPI Updates and Shared geoip +## ~~Stage 14~~ — Add Rollout Notes for Gateway/Auth/OpenAPI Updates and Shared geoip + +Status: implemented. ### Goal @@ -551,11 +620,13 @@ its intended end-to-end form. ### Tasks -- Document the required `gateway` public `confirm-email-code` addition of +- Document the required `gateway` public `confirm-email-code` dependency on + `time_zone`. +- Document the required `authsession` public OpenAPI preservation of the same + `time_zone` requirement. +- Document that the frozen `authsession -> user` ensure contract requires + create-only `registration_context` with `preferred_language` and `time_zone`. -- Document the required `authsession` public OpenAPI mirror change. -- Document the required `authsession -> user` ensure contract extension for - create-only registration context. - Document the required shared `pkg/geoip` package for gateway and geo. - Document README follow-up updates needed in `gateway` and `geoprofile`. - Define rollout order so the cross-service contract changes do not land in an diff --git a/user/README.md b/user/README.md index b0fa087..7ceedfc 100644 --- a/user/README.md +++ b/user/README.md @@ -1,261 +1,216 @@ # User Service -## Context and Purpose +`galaxy/user` owns regular-user platform identity and account state. -`User Service` is the internal source of truth for regular Galaxy Plus platform -users. +The service is internal-only. Its source-of-truth transport is trusted +REST/JSON. 
`Edge Gateway` exposes selected self-service operations externally +through authenticated gRPC with FlatBuffers payloads and transcodes those +requests to this service's internal REST API. -The service exists to solve six closely related problems: +## Scope -- Own the durable platform user account identified by `user_id`. -- Store the current editable self-service profile and settings of a user. -- Materialize the current effective entitlement state used by the rest of the - platform. -- Store user-specific sanctions and limit overrides that affect access - decisions. -- Expose synchronous trusted APIs needed by `Auth / Session Service`, - `Game Lobby`, `Geo Profile Service`, and future administrative tooling. -- Publish auxiliary user-domain change events without turning events into the - source of truth. +`User Service` is the source of truth for: -The service is intentionally the owner of regular user identity only. -System-administrator identity is outside this service and belongs to the later -`Admin Service` architecture. - -## Explicit Non-Goals - -The following are intentionally out of scope for this service: - -- Authentication challenges, device sessions, or request-signing state. -- System-administrator identity or administrator role management. -- Ownership of game membership, invites, roster, or per-game moderation. -- Automatic billing computation or payment-provider integration in v1. -- History of `declared_country` changes. -- Geo-IP lookup or country-review workflow logic. -- Direct public unauthenticated exposure. -- Using async events as the authoritative representation of user state. 
- -## Place in the Existing Microservice System - -`User Service` operates inside the trusted internal platform and integrates -with: - -- `Edge Gateway` -- `Auth / Session Service` -- `Game Lobby` -- `Geo Profile Service` -- `Admin Service` later -- `Billing Service` later -- internal event bus - -`Edge Gateway` routes authenticated user-facing account operations to this -service. - -`Auth / Session Service` uses this service to resolve, create, and block users -during the public e-mail-code login flow. - -`Game Lobby` uses this service for synchronous eligibility checks that depend -on current entitlement, sanctions, and limit state. - -`Geo Profile Service` remains the owner of country-change workflow and history, -but synchronizes the latest effective `declared_country` into this service. - -`Admin Service` will later use the trusted internal read and mutation APIs -defined here. Administrator accounts themselves still do not belong to -`User Service`. - -`Billing Service` is future-only in v1 and will later feed entitlement -outcomes through the trusted entitlement mutation path defined here. - -The event bus is an auxiliary propagation channel and not the source of truth. 
- -## Responsibility Boundaries - -`User Service` owns: - -- regular platform `user_id` -- login/contact e-mail stored on the user account -- `race_name` -- `preferred_language` -- `time_zone` +- opaque regular-user identifiers in `user-*` form +- exact-after-trim login e-mail addresses +- current race name and editable self-service settings +- current entitlement snapshot +- active sanctions and active user-specific limits - current effective `declared_country` -- current effective entitlement snapshot -- entitlement history records -- active and historical user sanctions -- active and historical user-specific limit overrides -- blocked e-mail subjects that may exist before any user record exists -- synchronous trusted reads used for auth, lobby, geo, and admin workflows -- auxiliary user-domain events -`User Service` does not own: +`User Service` is not the source of truth for: -- system-admin accounts -- `device_session` or revoke state -- full payment history -- game ownership, game membership, or game-level bans -- `declared_country` history or approval workflow -- per-request country observations +- system-administrator identity +- device sessions, challenges, or client public keys +- declared-country review workflow or history +- edge authentication, request signing, or replay protection -## High-Level Architecture +Administrative reads and writes against regular-user state do not make this +service the owner of administrator identity. Admin identity belongs to the +future `Admin Service`. 
-```mermaid -flowchart LR - Gateway[Edge Gateway] - Auth[Auth / Session Service] - Lobby[Game Lobby] - Geo[Geo Profile Service] - Admin[Admin Service] - Billing[Billing Service] - User[User Service] - Redis[Redis] - Bus[Event Bus] +## Trusted Surfaces - Gateway --> User - Auth --> User - Lobby --> User - Geo --> User - Admin --> User - Billing --> User - User --> Redis - User --> Bus -``` +The internal REST surface is split into five stable groups: -## Semantic Model +- `AuthIntegration` + - resolve-by-email + - exists-by-user-id + - ensure-by-email + - block-by-user-id + - block-by-email +- `MyAccount` + - get account aggregate + - update profile + - update settings +- `LobbyIntegration` + - read synchronous eligibility snapshot +- `GeoIntegration` + - synchronize current effective `declared_country` +- `AdminUsers` + - lookups by `user_id`, exact-after-trim `email`, and exact `race_name` + - deterministic filtered listing + - explicit entitlement, sanction, and limit commands -The service works with several core concepts. +The public authenticated gateway boundary currently exposes exactly three +self-service message types: -### User Account +- `user.account.get` +- `user.profile.update` +- `user.settings.update` -The user account is the canonical regular-user aggregate. +Externally these commands use authenticated gRPC plus FlatBuffers payloads. +Internally gateway calls: -Required logical fields: +- `GET /api/v1/internal/users/{user_id}/account` +- `POST /api/v1/internal/users/{user_id}/profile` +- `POST /api/v1/internal/users/{user_id}/settings` -- `user_id` -- normalized `email` -- `race_name` -- `preferred_language` -- `time_zone` -- current effective `declared_country` -- creation timestamp -- last update timestamp +Gateway must derive `user_id` from authenticated session context only. The +client payload never carries user identity for this boundary. 
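The gateway transcoding contract above (three external self-service message types mapped onto three internal REST calls, with `user_id` taken only from the authenticated session) can be sketched as a small routing helper. This is an illustrative sketch: the function name, signature, and error shape are assumptions; only the message types and paths come from the contract.

```go
package main

import (
	"fmt"
	"strings"
)

// routeForMessage maps the three public self-service message types onto the
// internal REST calls the gateway transcodes them into. user_id is substituted
// from the authenticated session context only, never from the client payload.
func routeForMessage(messageType, sessionUserID string) (method, path string, err error) {
	routes := map[string]string{
		"user.account.get":     "GET /api/v1/internal/users/{user_id}/account",
		"user.profile.update":  "POST /api/v1/internal/users/{user_id}/profile",
		"user.settings.update": "POST /api/v1/internal/users/{user_id}/settings",
	}
	route, ok := routes[messageType]
	if !ok {
		return "", "", fmt.Errorf("unknown self-service message type: %q", messageType)
	}
	parts := strings.SplitN(route, " ", 2)
	return parts[0], strings.ReplaceAll(parts[1], "{user_id}", sessionUserID), nil
}

func main() {
	method, path, err := routeForMessage("user.profile.update", "user-123")
	if err != nil {
		panic(err)
	}
	fmt.Println(method, path) // POST /api/v1/internal/users/user-123/profile
}
```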
-Important rules: +## Identity And Lookup Rules -- `email` is the primary login/contact identifier for end users. -- `email` is not directly editable through self-service profile updates. -- future e-mail change is a separate confirm-based workflow and is not part of - v1 `User Service` mutations. -- `declared_country` is readable in account views but writable only through the - trusted geo sync path. +- User IDs are opaque stable identifiers generated by `User Service`. +- New users receive generated default race names in `player-*` form until the + user replaces them. +- E-mail semantics are exact-after-trim. + - The service trims surrounding whitespace. + - The service does not lowercase, canonicalize, or alias-normalize e-mail + values. + - Exact lookup by e-mail uses the trimmed stored value. +- Race-name lookup is exact by stored value. +- Race-name uniqueness is not exact-string-only. + - Stored casing is preserved. + - Uniqueness is enforced by a canonical reservation key. + - The canonical policy is case-insensitive and includes the frozen + anti-fraud confusable-pair rules used by the race-name policy adapter. -### race_name +## Auth-Facing Contract -`race_name` is the user-facing account name. +`Auth / Session Service` depends on the following synchronous user-owned +decisions: -Properties: +- `resolve-by-email` + - returns `creatable`, `existing`, or `blocked` +- `ensure-by-email` + - returns `created`, `existing`, or `blocked` +- `exists-by-user-id` + - supports trusted session revoke and block flows +- block operations + - support trusted auth-driven user or e-mail blocking flows -- It is globally unique. -- It is not an identity key. Internal identity remains `user_id`, and end-user - login identity remains `email`. -- It is stored and returned in the original user-provided casing. -- Uniqueness is enforced through a dedicated policy boundary rather than by - naive string equality. 
+`ensure-by-email` rules: -The uniqueness policy must at minimum: +- `registration_context` is required. +- Its frozen shape is: + - `preferred_language` + - `time_zone` +- The registration context is create-only. + - New users store the supplied values after semantic validation. + - Existing users ignore the registration context completely. + - Existing users must not have settings overwritten by a later auth flow. +- The current rollout source of truth is: + - `Auth / Session Service` sends temporary `preferred_language="en"` + - `Auth / Session Service` forwards the public confirm `time_zone` + - gateway-side geoip language derivation is not part of the current + contract yet -- compare case-insensitively -- reject common confusable substitutions used for impersonation, such as - `I` versus `1`, `O` versus `0`, and `B` versus `8` -- remain replaceable behind a dedicated interface because a future shared name - catalog service is expected +Auth-facing blocking semantics: -### preferred_language and time_zone +- `blocked` means the auth flow must not create or return a usable session for + that subject. +- `send-email-code` may still remain success-shaped at the auth edge, but + `User Service` remains the source of truth for the blocked decision. -`preferred_language` and `time_zone` are explicit user settings, not inferred -runtime facts. +## Self-Service Account Contract -Properties: +Self-service reads and writes operate on one shared account aggregate: -- `preferred_language` uses BCP 47 language tags. -- `time_zone` uses IANA time zone names. -- both values exist on every created user in v1 -- both values are editable later through self-service settings mutation +- profile: + - `race_name` +- settings: + - `preferred_language` + - `time_zone` +- derived current state: + - entitlement snapshot + - active sanctions + - active limits + - current `declared_country` -Initial creation rules: +Self-service writes return the refreshed full account aggregate. 
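The create-only `registration_context` semantics of `ensure-by-email` described above can be sketched as follows. The struct and function names, the in-memory map store, and the omission of the `blocked` outcome are illustrative assumptions, not the service's actual implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// RegistrationContext carries the frozen create-only shape.
type RegistrationContext struct {
	PreferredLanguage string
	TimeZone          string
}

// User is a minimal illustrative account model.
type User struct {
	Email             string
	PreferredLanguage string
	TimeZone          string
}

// ensureUserByEmail sketches the create-only rule: new users store the
// supplied context; existing users ignore it entirely.
func ensureUserByEmail(store map[string]*User, email string, ctx RegistrationContext) (*User, string) {
	key := strings.TrimSpace(email) // exact-after-trim lookup
	if u, ok := store[key]; ok {
		return u, "existing" // registration context is deliberately discarded
	}
	u := &User{Email: key, PreferredLanguage: ctx.PreferredLanguage, TimeZone: ctx.TimeZone}
	store[key] = u
	return u, "created"
}

func main() {
	store := map[string]*User{}
	first := RegistrationContext{PreferredLanguage: "en", TimeZone: "Europe/Kaliningrad"}
	_, outcome := ensureUserByEmail(store, " pilot@example.com ", first)
	fmt.Println(outcome) // created
	u, outcome := ensureUserByEmail(store, "pilot@example.com", RegistrationContext{PreferredLanguage: "de"})
	fmt.Println(outcome, u.PreferredLanguage) // existing en
}
```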
-- `preferred_language` is supplied to `User Service` through auth create-only - registration context -- the value is derived by `Edge Gateway` from local geoip country plus local - country-to-language mapping -- when that lookup cannot determine a language, gateway falls back to `en` -- `time_zone` is supplied by the client in public `confirm-email-code` +Forbidden self-service mutations: -`User Service` does not perform its own geo lookup for this purpose. +- e-mail change +- direct `declared_country` change +- direct entitlement mutation +- direct sanction mutation +- direct limit mutation -### declared_country +Current write rules: -`declared_country` is the latest effective user-declared country. +- `UpdateMyProfile` + - changes only `race_name` + - rejects unsupported or unknown fields + - returns the current aggregate unchanged on no-op rename +- `UpdateMySettings` + - changes only `preferred_language` and `time_zone` + - rejects unsupported or unknown fields +- active `profile_update_block` sanction blocks both profile and settings + writes with `409 conflict` -Properties: +## Validation Rules -- It uses ISO 3166-1 alpha-2. -- It is the read-optimized current value only. -- It is owned for storage by `User Service`. -- It is owned for mutation workflow and history by `Geo Profile Service`. +### E-mail -This split is intentional: +- trim surrounding whitespace +- validate as structurally valid e-mail +- keep the trimmed exact value +- do not lowercase or canonicalize -- reads of current account state go to `User Service` -- reads of review workflow and country history go to `Geo Profile Service` +### Race Name -### Entitlement +- validate non-empty user-facing name +- preserve accepted casing in storage and reads +- enforce uniqueness through canonical reservation +- reject conflicts as `409 conflict` -Entitlement describes the paid/free access state of a user account. 
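A minimal sketch of the e-mail and race-name rules above, assuming the I/1, O/0, B/8 confusable pairs are representative of the frozen policy (the exact set lives in the race-name policy adapter):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeEmail applies the exact-after-trim rule: trim surrounding
// whitespace, keep everything else (including casing) untouched.
func normalizeEmail(raw string) string {
	return strings.TrimSpace(raw)
}

// canonicalRaceNameKey sketches a canonical reservation key: case-insensitive,
// with the I/1, O/0, B/8 confusable pairs collapsed. Stored casing is never
// changed; only the uniqueness key is canonicalized.
func canonicalRaceNameKey(name string) string {
	key := strings.ToLower(name)
	return strings.NewReplacer("1", "i", "0", "o", "8", "b").Replace(key)
}

func main() {
	fmt.Println(normalizeEmail("  Pilot@Example.com "))                                   // Pilot@Example.com
	fmt.Println(canonicalRaceNameKey("Pilot Nova") == canonicalRaceNameKey("P1LOT N0VA")) // true
}
```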
+### preferred_language -The plan catalog fixed for v1 is: +- validate as BCP 47 language tag +- store canonical BCP 47 tag form +- current auth-driven create path temporarily uses `"en"` from authsession -- `free` -- `paid_monthly` -- `paid_yearly` -- `paid_lifetime` +### time_zone -The service stores both: +- validate as IANA time-zone name +- store trimmed value +- do not apply additional alias canonicalization -- immutable or append-only entitlement period history records -- a materialized current effective entitlement snapshot for synchronous reads +## Entitlements -The current effective snapshot is not computed on every request. It is updated -when trusted entitlement mutations succeed. +`User Service` owns the current effective entitlement snapshot. -Period history records store: +Rules: -- `user_id` -- `plan_code` -- `source` -- actor identity or actor type -- `reason_code` -- `starts_at` -- optional `ends_at` -- creation timestamp +- every new user starts with the frozen free entitlement baseline +- explicit admin or later billing commands may: + - grant + - extend + - revoke +- finite paid entitlements are repaired lazily on read when expiry has passed +- downstream services read current entitlement from `User Service`, not from + billing or any write-side source -Current effective snapshot stores at minimum: +The shared account aggregate and lobby eligibility snapshot always expose the +current effective entitlement after lazy expiry repair. -- `user_id` -- current `plan_code` -- effective paid/free state -- effective period bounds when applicable -- source metadata needed by operations and admin reads -- last recomputation timestamp +## Sanctions And Limits -In v1, entitlement mutations come from explicit trusted admin/internal -commands. Later, `Billing Service` uses the same mutation path. +Sanctions and user-specific limits are explicit command-driven state. 
-### Sanctions - -Sanctions are negative policy records that may deny or restrict access -regardless of entitlement state. - -The initial sanction set for v1 is: +Supported sanction codes: - `login_block` - `private_game_create_block` @@ -263,539 +218,162 @@ The initial sanction set for v1 is: - `game_join_block` - `profile_update_block` -Each sanction record stores: - -- `user_id` -- `sanction_code` -- scope -- `reason_code` -- actor identity or actor type -- `applied_at` -- optional `expires_at` -- optional removal metadata if later removed - -Sanctions are typed records rather than inline booleans so the service can keep -auditability and deterministic active-state evaluation. - -### User-Specific Limits - -User-specific limits are count-based override records that shape access and -eligibility decisions. - -The initial limit set for v1 is: +Supported user-specific limit codes: - `max_owned_private_games` -- `max_active_private_games` - `max_pending_public_applications` -- `max_pending_private_join_requests` -- `max_pending_private_invites_sent` - `max_active_game_memberships` -Each limit record stores: +Rules: -- `user_id` -- `limit_code` -- numeric value -- `reason_code` -- actor identity or actor type -- `applied_at` -- optional `expires_at` -- optional removal metadata if later removed +- active views expose only currently supported codes +- retired legacy limit codes may remain in stored history but are not part of + the active read or write contract +- sanctions and limits are projected into: + - the self-service account aggregate + - admin reads + - lobby eligibility snapshots -Limit rules: +## Lobby Eligibility Semantics -- limits are count-based only in v1 -- limits are user-specific overrides, not the global default catalog itself -- effective eligibility combines entitlement-derived defaults with active - user-specific overrides - -### Blocked E-Mail Subject - -`User Service` must support blocking an e-mail subject before any user account -exists. 
- -This requires a separate blocked-email-subject model. - -Required logical fields: - -- normalized `email` -- `reason_code` -- block timestamp -- optional actor metadata when available -- optional expiry or removal metadata if policy later requires it -- optional resolved `user_id` when the e-mail already belongs to an existing - user - -This model exists to support `Auth / Session Service` flows such as -`BlockByEmail` and `ResolveByEmail` before user creation. - -## Data Ownership Rules - -The ownership split is intentional and must remain stable. - -- `User Service` owns regular user identity and current effective account - state. -- `Auth / Session Service` owns login challenge and session lifecycle state. -- `Game Lobby` owns game membership and game-specific moderation. -- `Geo Profile Service` owns geo workflow and `declared_country` history. -- `Admin Service` later owns administrator identity and UI orchestration. - -In particular: - -- no service other than `Geo Profile Service` should mutate - `declared_country` -- no service other than `User Service` should create or edit regular user - profile/settings records -- no other service should maintain its own source of truth for current - entitlements - -## User-Facing Interface Model - -User-facing traffic reaches `User Service` only through authenticated gateway -routing. - -The v1 aggregate query is: - -- `GetMyAccount` - -The v1 self-service mutations are: - -- `UpdateMyProfile` -- `UpdateMySettings` - -### GetMyAccount - -`GetMyAccount` returns one read-optimized account aggregate for the currently -authenticated regular user. - -The aggregate should include at minimum: - -- `user_id` -- `email` -- `race_name` -- `preferred_language` -- `time_zone` -- current `declared_country` -- current entitlement snapshot -- active sanctions -- active effective limits -- account timestamps needed by clients - -`declared_country` is read-only in this aggregate. 
- -### UpdateMyProfile - -`UpdateMyProfile` updates self-service profile fields only. - -Editable fields in v1: - -- `race_name` +`Game Lobby` depends on a synchronous read-optimized eligibility snapshot. Rules: -- e-mail cannot be changed here -- `declared_country` cannot be changed here -- `race_name` must pass global uniqueness policy before commit -- active `profile_update_block` sanction rejects the mutation +- unknown users return `exists=false` rather than `404` +- entitlement state is current and expiry-repaired +- active sanctions are filtered to the lobby-relevant subset +- effective limits are derived from: + - the frozen free or paid default catalog + - plus any active user-specific override -### UpdateMySettings +Current markers: -`UpdateMySettings` updates self-service settings only. +- `can_login` +- `can_create_private_game` +- `can_manage_private_game` +- `can_join_game` +- `can_update_profile` -Editable fields in v1: +## declared_country Ownership Split -- `preferred_language` -- `time_zone` +Ownership is intentionally split: -Rules: +- `User Service` + - stores only the current effective `declared_country` value +- `Geo Profile Service` + - owns review workflow + - owns decision history + - owns version history and retry state -- values are validated as BCP 47 and IANA formats -- active `profile_update_block` sanction rejects the mutation +`User Service` accepts only trusted sync commands from `Geo Profile Service` +for the latest approved effective value. -## Trusted Internal API Model +Sync rules: -All service-to-service integration in v1 is documented as trusted JSON REST. +- accepted values are uppercase ISO 3166-1 alpha-2 country codes +- syncing the already stored value is a no-op +- a successful change updates the current account record and emits a domain + event -### Auth-Facing Contract +## Admin Read And List Semantics -The auth-facing contract is already reserved by `Auth / Session Service` and -must remain stable. 
+Trusted admin reads operate on regular-user state only. -Frozen endpoints: +Lookups: -- `POST /api/v1/internal/user-resolutions/by-email` -- `GET /api/v1/internal/users/{user_id}/exists` -- `POST /api/v1/internal/users/ensure-by-email` -- `POST /api/v1/internal/users/{user_id}/block` -- `POST /api/v1/internal/user-blocks/by-email` +- by `user_id` +- by exact-after-trim `email` +- by exact `race_name` -Auth-facing behavior: +Listing rules: -- resolve by e-mail returns `existing`, `creatable`, or `blocked` -- ensure by e-mail returns `existing`, `created`, or `blocked` -- block by user id and block by e-mail are idempotent -- blocked e-mail subjects are respected even when no user exists yet +- deterministic order: + - `created_at desc` + - then `user_id desc` +- all supplied filters combine with logical `AND` +- `page_token` is opaque and bound to the normalized filter set that produced + it +- malformed or filter-mismatched tokens return `400 invalid_request` -`EnsureUserByEmail` is extended for v1 user creation context. - -Recommended request shape: - -```json -{ - "email": "pilot@example.com", - "registration_context": { - "preferred_language": "en", - "time_zone": "Europe/Berlin" - } -} -``` - -Rules for `registration_context`: - -- it is used only when the user is created -- it is ignored for an existing user -- it must not overwrite settings of an existing user -- it is required by the future auth contract because first successful confirm - may create the user - -### Lobby-Facing Eligibility Snapshot - -`Game Lobby` needs one synchronous query by `user_id`. 
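The filter-bound `page_token` rule can be sketched as below. Only the behavior is the contract: tokens are opaque, bound to the normalized filter set that produced them, and rejected as `invalid_request` when malformed or mismatched. The fingerprint and encoding scheme here are illustrative.

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"errors"
	"fmt"
	"sort"
	"strings"
)

var errInvalidRequest = errors.New("invalid_request")

// filterFingerprint normalizes the filter set deterministically so a
// token can be bound to it.
func filterFingerprint(filters map[string]string) string {
	keys := make([]string, 0, len(filters))
	for k := range filters {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&b, "%s=%s;", k, filters[k])
	}
	sum := sha256.Sum256([]byte(b.String()))
	return hex.EncodeToString(sum[:8])
}

// encodePageToken binds an opaque cursor to the filters that produced it.
func encodePageToken(filters map[string]string, cursor string) string {
	return base64.RawURLEncoding.EncodeToString(
		[]byte(filterFingerprint(filters) + ":" + cursor))
}

// decodePageToken rejects malformed tokens and tokens minted for a
// different filter set, mapping both to 400 invalid_request.
func decodePageToken(filters map[string]string, token string) (string, error) {
	raw, err := base64.RawURLEncoding.DecodeString(token)
	if err != nil {
		return "", errInvalidRequest
	}
	parts := strings.SplitN(string(raw), ":", 2)
	if len(parts) != 2 || parts[0] != filterFingerprint(filters) {
		return "", errInvalidRequest
	}
	return parts[1], nil
}

func main() {
	filters := map[string]string{"declared_country": "DE", "paid": "true"}
	token := encodePageToken(filters, "user-900")

	cursor, err := decodePageToken(filters, token)
	fmt.Println(cursor, err) // user-900 <nil>

	_, err = decodePageToken(map[string]string{"paid": "false"}, token)
	fmt.Println(err) // invalid_request
}
```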
- -Purpose: - -- determine whether the user currently may create or join game flows -- obtain effective quotas relevant to lobby decisions - -The response should include at minimum: - -- whether the user exists -- current entitlement snapshot -- active sanctions relevant to lobby actions -- effective limit values relevant to lobby actions -- derived booleans such as whether private-game creation is currently allowed - -This query is intentionally one read-optimized snapshot rather than multiple -smaller cross-service round trips. - -### Geo-Facing Declared Country Sync - -`Geo Profile Service` needs one explicit trusted command to synchronize the -current effective `declared_country`. - -Required behavior: - -- update only the current `declared_country` value in `User Service` -- not create or manage country history here -- fail explicitly on unknown `user_id` -- remain synchronous so geo workflow can decide whether its own version record - becomes effective - -### Admin/Internal Reads - -The trusted admin/internal read surface must support: - -- exact lookup by `user_id` -- exact lookup by normalized `email` -- exact lookup by exact `race_name` -- paginated listing with filters - -The v1 listing filters must support at minimum: +Listing filters include: - paid/free state -- paid expiry window +- paid expiry bounds - current `declared_country` - active sanction code - active limit code -- relevant eligibility markers +- derived eligibility markers -Listing must use deterministic pagination and stable ordering. -Recommended default ordering is newest first by `created_at`, with `user_id` -used as the deterministic tiebreaker. +## Domain Events -### Admin/Internal Mutations +`User Service` publishes auxiliary post-commit domain events to the shared +Redis stream configured for domain events. -Trusted mutations remain explicit-command based. 
- -The minimum command vocabulary in v1 is: - -- grant paid access -- extend paid access -- revoke paid access -- apply sanction -- remove sanction -- set limit -- remove limit -- sync declared country - -These are intentionally explicit commands rather than one generic patch API. -The service should preserve reason and actor metadata on every trusted -administrative mutation. - -## New User Creation Flow - -```mermaid -sequenceDiagram - participant Client - participant Gateway - participant Auth as Auth / Session Service - participant User as User Service - - Client->>Gateway: confirm-email-code(code, client_public_key, time_zone) - Gateway->>Gateway: local geoip lookup and country-to-language mapping - Gateway->>Auth: confirm-email-code(..., time_zone) - Auth->>User: ensure-by-email(email, registration_context) - alt user already exists - User-->>Auth: existing user_id - else new user - User->>User: create user with generated race_name - User->>User: initialize language, time zone, free entitlement - User-->>Auth: created user_id - end - Auth-->>Gateway: device_session_id - Gateway-->>Client: device_session_id -``` - -New-user defaults: - -- generated `race_name` in `player-` form -- `preferred_language` from gateway-derived registration context -- `time_zone` from client-provided registration context -- `free` entitlement -- no active sanctions -- no custom limit overrides - -## Interface Between Entitlement, Sanction, and Limit Evaluation - -The service must keep these three layers separate. 
- -- entitlement provides the base paid/free access state -- sanctions can deny actions regardless of entitlement -- user-specific limits can narrow or override numeric quotas - -This means: - -- a paid user may still be denied private-game creation by sanction -- a non-blocked user may still be quota-limited by effective count limits -- lobby checks should consume one effective snapshot rather than reimplementing - this evaluation itself - -## Events - -Events are auxiliary notifications only. They are not the source of truth. - -The service should emit per-domain-area events for: - -- profile changes -- settings changes -- entitlement changes -- sanction changes -- limit changes -- declared-country changes - -Recommended event classes: +Frozen event types: - `user.profile.changed` - `user.settings.changed` - `user.entitlement.changed` - `user.sanction.changed` - `user.limit.changed` -- `user.declared_country.changed` -Each event should include at minimum: +The current effective declared-country sync remains externally observable as +`user.declared_country.changed`. -- `user_id` -- event timestamp -- mutation source -- correlation or request metadata when available -- enough event-specific detail to identify the changed domain area +Event rules: -Loss of an event must not lose the authoritative business state. +- events are post-commit only +- event envelopes carry `user_id`, mutation source, occurrence timestamp, and + optional trace correlation +- event payloads expose the latest committed state relevant to the operation +- profile and settings events use `initialized` for auth-driven creation and + `updated` for later self-service writes +- entitlement events use: + - `initialized` + - `granted` + - `extended` + - `revoked` + - `expired_repaired` +- sanction events use: + - `applied` + - `removed` +- limit events use: + - `set` + - `removed` -## Data Entities +## Error Model -This section defines the core logical entities. 
These are domain entities, not -mandatory final physical Redis key names. +The trusted internal REST contract uses strict JSON error envelopes: -### User Account Record +```json +{ + "error": { + "code": "invalid_request", + "message": "request is invalid" + } +} +``` -Required logical fields: +Stable error codes: -- `user_id` -- normalized `email` -- `race_name` -- `preferred_language` -- `time_zone` -- current `declared_country` -- `created_at` -- `updated_at` +- `invalid_request` +- `conflict` +- `subject_not_found` +- `internal_error` +- `service_unavailable` -### race_name Reservation +Gateway mirrors these business errors on the authenticated `user.*` boundary +as: -Required logical fields: +- gateway `result_code` +- FlatBuffers error payload carrying the same `code` and `message` -- canonical uniqueness key produced by the race-name policy -- referenced `user_id` -- original stored `race_name` -- reservation timestamp +Transport failures, timeouts, and upstream `503` remain transport-level +gateway `UNAVAILABLE`, not business results. -This entity exists so uniqueness policy stays replaceable and explicit. 
+## References -### Blocked E-Mail Subject Entity - -Required logical fields: - -- normalized `email` -- `reason_code` -- block status -- `blocked_at` -- optional resolved `user_id` - -### Entitlement Period Record - -Required logical fields: - -- `user_id` -- `plan_code` -- `source` -- actor metadata -- `reason_code` -- `starts_at` -- optional `ends_at` -- record creation timestamp - -### Current Entitlement Snapshot - -Required logical fields: - -- `user_id` -- effective `plan_code` -- paid/free state -- effective period bounds -- source metadata -- snapshot update timestamp - -### Sanction Record - -Required logical fields: - -- `user_id` -- `sanction_code` -- scope -- `reason_code` -- actor metadata -- `applied_at` -- optional `expires_at` -- current status metadata - -### Limit Record - -Required logical fields: - -- `user_id` -- `limit_code` -- numeric value -- `reason_code` -- actor metadata -- `applied_at` -- optional `expires_at` -- current status metadata - -## Failure and Degradation Model - -The service is synchronous for critical reads and mutations. 
- -### Auth Dependency Failure - -If `User Service` is unavailable during auth flows: - -- `Auth / Session Service` must fail the affected operation explicitly -- no user should be created partially without source-of-truth persistence - -### Event Publication Failure - -If event publication fails: - -- the source-of-truth mutation still remains committed -- failure is logged and metered -- downstream consumers recover from direct reads if needed - -### race_name Uniqueness Backend Failure - -If the dedicated race-name uniqueness policy backend fails: - -- self-service profile update must fail closed -- new-user creation must fail explicitly rather than create ambiguous names - -### Geo Sync Failure - -If `Geo Profile Service` cannot synchronize `declared_country` into -`User Service`: - -- geo must treat the change as not yet effective -- `User Service` must not create hidden partial country history - -## Minimal Initial API Surface - -The minimum useful v1 API surface is: - -- gateway-routed authenticated: - - `GetMyAccount` - - `UpdateMyProfile` - - `UpdateMySettings` -- trusted internal auth: - - resolve by e-mail - - ensure by e-mail - - exists by user id - - block by user id - - block by e-mail -- trusted internal lobby: - - get user eligibility snapshot -- trusted internal geo: - - sync current `declared_country` -- trusted internal admin: - - exact user reads - - filtered user listing - - entitlement mutations - - sanction mutations - - limit mutations - -## Cross-Service Follow-Up Dependencies - -The service design here depends on later follow-up work in other modules. - -Required follow-up items: - -- `Edge Gateway` public `confirm-email-code` contract must add required - `time_zone`. -- `Auth / Session Service` public OpenAPI must mirror the same `time_zone` - addition. -- `Auth / Session Service -> User Service` ensure-by-email contract must add - create-only registration context with `preferred_language` and `time_zone`. 
-- a shared `pkg/geoip` package must be introduced for `Edge Gateway` and - `Geo Profile Service` -- `gateway/README.md` should later document the local geoip dependency used for - initial language derivation -- `geoprofile/README.md` should later document the shared `pkg/geoip` - dependency explicitly alongside its own local geo lookup - -## Design Trade-Offs Accepted by This Architecture - -- Current entitlement is materialized for fast reads instead of computed from - history on each request. -- User-specific limits are count-based only in v1 to keep evaluation simple. -- `race_name` uniqueness is stricter than plain case-insensitive comparison to - reduce impersonation risk. -- `User Service` stores only the latest effective `declared_country` while geo - owns the workflow and version history. -- Explicit trusted commands are preferred over generic patch semantics so - administrative changes remain auditable and predictable. - -## Implementation Readiness Statement - -This service specification is intended to be implementation-ready for a first -production-capable internal version. - -The main remaining work is not product ambiguity inside `User Service`, but -follow-up cross-service contract changes in `gateway`, `authsession`, and the -future shared `pkg/geoip` package. 
+- [Internal REST contract](openapi.yaml) +- [Service docs index](docs/README.md) +- [System architecture](../ARCHITECTURE.md) diff --git a/user/cmd/userservice/main.go b/user/cmd/userservice/main.go new file mode 100644 index 0000000..5c0c6e2 --- /dev/null +++ b/user/cmd/userservice/main.go @@ -0,0 +1,45 @@ +package main + +import ( + "context" + "fmt" + "os" + "os/signal" + "syscall" + + "galaxy/user/internal/app" + "galaxy/user/internal/config" + "galaxy/user/internal/logging" +) + +func main() { + if err := run(); err != nil { + _, _ = fmt.Fprintf(os.Stderr, "userservice: %v\n", err) + os.Exit(1) + } +} + +func run() error { + cfg, err := config.LoadFromEnv() + if err != nil { + return err + } + + logger, err := logging.New(cfg.Logging.Level) + if err != nil { + return err + } + + rootCtx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM) + defer stop() + + runtime, err := app.NewRuntime(rootCtx, cfg, logger) + if err != nil { + return err + } + defer func() { + _ = runtime.Close() + }() + + return runtime.Run(rootCtx) +} diff --git a/user/docs/README.md b/user/docs/README.md new file mode 100644 index 0000000..47d017e --- /dev/null +++ b/user/docs/README.md @@ -0,0 +1,19 @@ +# User Service Docs + +This directory keeps service-local documentation that is more operational or +more example-heavy than [`../README.md`](../README.md). 
+ +Sections: + +- [Runtime and components](runtime.md) +- [Main flows and boundaries](flows.md) +- [Operator runbook](runbook.md) +- [Contract examples](examples.md) + +Primary references: + +- [`../README.md`](../README.md) for stable service scope and business rules +- [`../openapi.yaml`](../openapi.yaml) for the trusted internal REST contract +- [`../../ARCHITECTURE.md`](../../ARCHITECTURE.md) for system-level transport + and ownership rules +- [`../../TESTING.md`](../../TESTING.md) for the cross-service testing matrix diff --git a/user/docs/examples.md b/user/docs/examples.md new file mode 100644 index 0000000..10c2d7c --- /dev/null +++ b/user/docs/examples.md @@ -0,0 +1,206 @@ +# Contract Examples + +## ensure-by-email + +Request: + +```json +{ + "email": "pilot@example.com", + "registration_context": { + "preferred_language": "en", + "time_zone": "Europe/Kaliningrad" + } +} +``` + +Created response: + +```json +{ + "outcome": "created", + "user_id": "user-123" +} +``` + +Existing response: + +```json +{ + "outcome": "existing", + "user_id": "user-123" +} +``` + +Blocked response: + +```json +{ + "outcome": "blocked", + "block_reason_code": "policy_blocked" +} +``` + +## account aggregate + +```json +{ + "account": { + "user_id": "user-123", + "email": "pilot@example.com", + "race_name": "Pilot Nova", + "preferred_language": "en", + "time_zone": "Europe/Kaliningrad", + "declared_country": "DE", + "entitlement": { + "plan_code": "free", + "is_paid": false, + "source": "auth_registration", + "actor": { + "type": "service", + "id": "user-service" + }, + "reason_code": "initial_free_entitlement", + "starts_at": "2026-04-09T10:00:00Z", + "updated_at": "2026-04-09T10:00:00Z" + }, + "active_sanctions": [], + "active_limits": [], + "created_at": "2026-04-09T10:00:00Z", + "updated_at": "2026-04-09T10:00:00Z" + } +} +``` + +## update profile + +Request: + +```json +{ + "race_name": "Nova Prime" +} +``` + +Success: + +```json +{ + "account": { + "user_id": "user-123", + 
"email": "pilot@example.com", + "race_name": "Nova Prime", + "preferred_language": "en", + "time_zone": "Europe/Kaliningrad", + "entitlement": { + "plan_code": "free", + "is_paid": false, + "source": "auth_registration", + "actor": { + "type": "service", + "id": "user-service" + }, + "reason_code": "initial_free_entitlement", + "starts_at": "2026-04-09T10:00:00Z", + "updated_at": "2026-04-09T10:00:00Z" + }, + "active_sanctions": [], + "active_limits": [], + "created_at": "2026-04-09T10:00:00Z", + "updated_at": "2026-04-09T10:05:00Z" + } +} +``` + +Conflict: + +```json +{ + "error": { + "code": "conflict", + "message": "request conflicts with current state" + } +} +``` + +## update settings + +Request: + +```json +{ + "preferred_language": "fr-FR", + "time_zone": "Europe/Paris" +} +``` + +## admin lookup by e-mail + +Request: + +```json +{ + "email": "pilot@example.com" +} +``` + +Success: + +```json +{ + "user": { + "user_id": "user-123", + "email": "pilot@example.com", + "race_name": "Pilot Nova", + "preferred_language": "en", + "time_zone": "Europe/Kaliningrad", + "entitlement": { + "plan_code": "free", + "is_paid": false, + "source": "auth_registration", + "actor": { + "type": "service", + "id": "user-service" + }, + "reason_code": "initial_free_entitlement", + "starts_at": "2026-04-09T10:00:00Z", + "updated_at": "2026-04-09T10:00:00Z" + }, + "active_sanctions": [], + "active_limits": [], + "created_at": "2026-04-09T10:00:00Z", + "updated_at": "2026-04-09T10:00:00Z" + } +} +``` + +## declared-country sync + +Request: + +```json +{ + "declared_country": "DE" +} +``` + +Response: + +```json +{ + "user_id": "user-123", + "declared_country": "DE", + "updated_at": "2026-04-09T10:10:00Z" +} +``` + +## shared error envelope + +```json +{ + "error": { + "code": "invalid_request", + "message": "request is invalid" + } +} +``` diff --git a/user/docs/flows.md b/user/docs/flows.md new file mode 100644 index 0000000..f0ba36f --- /dev/null +++ b/user/docs/flows.md @@ -0,0 
+1,163 @@ +# Main Flows and Boundaries + +## Auth / Session -> User + +`Auth / Session Service` uses synchronous REST calls for user ownership +decisions during public auth. + +### Resolve by e-mail + +`POST /api/v1/internal/user-resolutions/by-email` + +Outcome vocabulary: + +- `creatable` +- `existing` +- `blocked` + +The decision is based on exact-after-trim e-mail matching plus the current +block state for that subject. + +### Ensure by e-mail + +`POST /api/v1/internal/users/ensure-by-email` + +Rules: + +- `registration_context` is required +- `registration_context` is create-only +- existing users ignore the supplied registration context +- blocked subjects return `blocked` rather than creating a user +- the current rollout sends temporary `preferred_language="en"` from + authsession and forwards the public confirm `time_zone` + +Create side effects: + +- generate opaque `user_id` +- generate default `player-*` race name +- store initial preferred language and time zone +- materialize the initial free entitlement snapshot +- publish initialization-style profile, settings, and entitlement events + +## Gateway -> User + +Gateway owns the external authenticated gRPC contract and transcodes to this +service's internal REST API. + +External authenticated message types: + +- `user.account.get` +- `user.profile.update` +- `user.settings.update` + +Internal REST routes: + +- `GET /api/v1/internal/users/{user_id}/account` +- `POST /api/v1/internal/users/{user_id}/profile` +- `POST /api/v1/internal/users/{user_id}/settings` + +Rules: + +- gateway derives `user_id` from authenticated session context only +- success returns the shared account aggregate +- business errors return stable `code` and `message` +- timeout or upstream `503` stay transport-level unavailable at gateway + +### Profile update + +`UpdateMyProfile` changes only `race_name`. 
+ +Rules: + +- preserve stored casing on success +- enforce canonical reservation uniqueness +- reject conflicts as `409 conflict` +- reject writes while `profile_update_block` is active +- return current aggregate on no-op rename + +### Settings update + +`UpdateMySettings` changes only: + +- `preferred_language` +- `time_zone` + +Rules: + +- validate BCP 47 and IANA semantics +- reject writes while `profile_update_block` is active +- return the refreshed account aggregate + +## Lobby -> User + +`Game Lobby Service` reads one synchronous eligibility snapshot through: + +- `GET /api/v1/internal/users/{user_id}/eligibility` + +Rules: + +- unknown users return `exists=false` +- current entitlement is expiry-repaired lazily +- active sanctions are filtered to the lobby-relevant set +- effective limits combine default catalog values plus active overrides +- markers are derived from sanctions, entitlement, and limits + +## Geo -> User + +`Geo Profile Service` synchronizes the latest approved effective declared +country through: + +- `POST /api/v1/internal/users/{user_id}/declared-country/sync` + +Rules: + +- input must be uppercase ISO 3166-1 alpha-2 +- syncing the stored value is a no-op +- `User Service` stores only the current effective value +- geo owns review workflow and history +- successful updates publish `user.declared_country.changed` + +## Admin Reads And Commands + +Trusted admin callers use: + +- exact reads by `user_id`, e-mail, and race name +- deterministic filtered listing +- explicit entitlement commands +- explicit sanction commands +- explicit limit commands + +Listing rules: + +- order by `created_at desc`, then `user_id desc` +- combine filters with `AND` +- `page_token` is opaque and filter-bound + +## Domain Events + +The shared auxiliary event stream contains post-commit state propagation for: + +- `user.profile.changed` +- `user.settings.changed` +- `user.entitlement.changed` +- `user.sanction.changed` +- `user.limit.changed` +- 
`user.declared_country.changed` + +Operation vocabularies: + +- profile and settings: + - `initialized` + - `updated` +- entitlement: + - `initialized` + - `granted` + - `extended` + - `revoked` + - `expired_repaired` +- sanction: + - `applied` + - `removed` +- limit: + - `set` + - `removed` diff --git a/user/docs/runbook.md b/user/docs/runbook.md new file mode 100644 index 0000000..9bf1e96 --- /dev/null +++ b/user/docs/runbook.md @@ -0,0 +1,106 @@ +# Runbook + +## Startup Checklist + +Before starting `userservice`, verify: + +- `USERSERVICE_REDIS_ADDR` points to the intended Redis instance +- internal HTTP bind address is free +- optional admin metrics listener does not collide with another process +- domain-events stream settings match the environment that consumes them + +Expected startup behavior: + +- configuration is loaded and validated first +- Redis-backed stores and publishers are constructed +- startup fails fast on Redis misconfiguration or connectivity failure + +## Health And Readiness + +`userservice` does not expose public health endpoints. + +Operational readiness is typically checked through one trusted internal route, +for example: + +- `GET /api/v1/internal/users/{user_id}/exists` + +with a guaranteed-missing `user_id`. A healthy process returns `200` with +`{"exists":false}`. + +If admin metrics are enabled, `/metrics` on the admin listener is the +additional process-level operational endpoint. 
+ +## Common Failure Modes + +### Redis unavailable + +Symptoms: + +- process fails during startup +- internal API returns `503 service_unavailable` +- domain events stop being published + +Checks: + +- connectivity to `USERSERVICE_REDIS_ADDR` +- Redis ACL credentials +- Redis DB number +- TLS setting mismatch + +### Invalid registration context + +Symptoms: + +- `ensure-by-email` returns `400 invalid_request` + +Checks: + +- `preferred_language` is a valid BCP 47 tag +- `time_zone` is a valid IANA time-zone name + +### race_name conflict + +Symptoms: + +- profile update returns `409 conflict` + +Checks: + +- desired race name is not already reserved under canonical uniqueness rules +- user is not currently blocked by `profile_update_block` + +### declared-country sync rejected + +Symptoms: + +- geo sync returns `400 invalid_request` + +Checks: + +- country code is uppercase ISO 3166-1 alpha-2 +- trusted caller is using the intended internal route + +## Safe Rollout Notes + +- Keep `Auth / Session Service` and `User Service` aligned on the current + `registration_context` shape. +- During the current rollout, treat authsession-provided + `preferred_language="en"` as the active create-path contract. +- Gateway direct `user.*` self-service routing depends on the internal REST + routes staying stable. +- Do not roll out billing-driven entitlement mutations assuming another + service owns current entitlement state. `User Service` remains the source of + truth for current entitlement. + +## Debugging Data Mismatches + +When a caller reports mismatched user state: + +1. Read the current account aggregate through the trusted internal route. +2. Confirm whether the discrepancy is in source-of-truth state or in a + downstream projection. +3. If the issue concerns declared-country workflow history, switch to `Geo + Profile Service`; `User Service` stores only the current effective value. +4. 
If the issue concerns authenticated edge transport, verify the same user + through gateway `user.account.get` to distinguish transport problems from + source-of-truth problems. diff --git a/user/docs/runtime.md b/user/docs/runtime.md new file mode 100644 index 0000000..d3240b7 --- /dev/null +++ b/user/docs/runtime.md @@ -0,0 +1,151 @@ +# Runtime and Components + +The diagram below focuses on the deployed `galaxy/user` process and its +runtime dependencies. + +```mermaid +flowchart LR + subgraph Callers + Auth["Auth / Session Service"] + Gateway["Edge Gateway"] + Lobby["Game Lobby Service"] + Geo["Geo Profile Service"] + Admin["Trusted admin callers"] + end + + subgraph User["User Service process"] + InternalHTTP["Trusted internal HTTP listener\n/api/v1/internal/*"] + AdminHTTP["Optional admin HTTP listener\n/metrics"] + Services["Application services"] + Telemetry["Logs, traces, metrics"] + end + + Redis["Redis\nkeyspace + domain-events stream"] + + Auth --> InternalHTTP + Gateway --> InternalHTTP + Lobby --> InternalHTTP + Geo --> InternalHTTP + Admin --> InternalHTTP + InternalHTTP --> Services + Services --> Redis + InternalHTTP --> Telemetry + AdminHTTP --> Telemetry +``` + +## Listeners + +`userservice` exposes two HTTP listeners: + +| Listener | Default addr | Purpose | +| --- | --- | --- | +| Internal HTTP | `:8091` | Trusted business API under `/api/v1/internal/*` | +| Admin HTTP | disabled | Optional Prometheus metrics on `/metrics` | + +Shared listener defaults: + +- read-header timeout: `2s` +- read timeout: `10s` +- idle timeout: `1m` + +The internal application timeout is configured separately through +`USERSERVICE_INTERNAL_HTTP_REQUEST_TIMEOUT`. + +Intentional omissions: + +- no public listener +- no authenticated edge gRPC listener +- no built-in `/healthz` +- no built-in `/readyz` + +## Startup Wiring + +`cmd/userservice` loads config, constructs logging and telemetry, and then +creates the runtime through `internal/app.NewRuntime`. 
+ +The runtime wires: + +- Redis-backed stores for accounts, entitlement snapshots, sanctions, limits, + and listing indexes +- the trusted internal HTTP router +- the optional admin metrics listener +- the optional Redis-backed domain-event publishers +- service-local helpers for clock, IDs, and validation/policy adapters + +Startup fails fast when Redis connectivity is unavailable or configuration is +invalid. + +## Redis Namespaces + +The service uses one Redis keyspace prefix plus one auxiliary domain-events +stream. + +Configuration: + +- `USERSERVICE_REDIS_KEYSPACE_PREFIX` +- `USERSERVICE_REDIS_DOMAIN_EVENTS_STREAM` +- `USERSERVICE_REDIS_DOMAIN_EVENTS_STREAM_MAX_LEN` + +The keyspace stores source-of-truth business state. The stream carries +post-commit auxiliary domain events and must not be treated as the source of +truth. + +## Configuration Groups + +Required for all process starts: + +- `USERSERVICE_REDIS_ADDR` + +Core process config: + +- `USERSERVICE_SHUTDOWN_TIMEOUT` +- `USERSERVICE_LOG_LEVEL` + +Internal HTTP config: + +- `USERSERVICE_INTERNAL_HTTP_ADDR` +- `USERSERVICE_INTERNAL_HTTP_READ_HEADER_TIMEOUT` +- `USERSERVICE_INTERNAL_HTTP_READ_TIMEOUT` +- `USERSERVICE_INTERNAL_HTTP_IDLE_TIMEOUT` +- `USERSERVICE_INTERNAL_HTTP_REQUEST_TIMEOUT` + +Admin HTTP config: + +- `USERSERVICE_ADMIN_HTTP_ADDR` +- `USERSERVICE_ADMIN_HTTP_READ_HEADER_TIMEOUT` +- `USERSERVICE_ADMIN_HTTP_READ_TIMEOUT` +- `USERSERVICE_ADMIN_HTTP_IDLE_TIMEOUT` + +Redis connectivity and namespace config: + +- `USERSERVICE_REDIS_USERNAME` +- `USERSERVICE_REDIS_PASSWORD` +- `USERSERVICE_REDIS_DB` +- `USERSERVICE_REDIS_TLS_ENABLED` +- `USERSERVICE_REDIS_OPERATION_TIMEOUT` +- `USERSERVICE_REDIS_KEYSPACE_PREFIX` +- `USERSERVICE_REDIS_DOMAIN_EVENTS_STREAM` +- `USERSERVICE_REDIS_DOMAIN_EVENTS_STREAM_MAX_LEN` + +Telemetry: + +- `OTEL_SERVICE_NAME` +- `OTEL_TRACES_EXPORTER` +- `OTEL_METRICS_EXPORTER` +- `OTEL_EXPORTER_OTLP_PROTOCOL` +- `OTEL_EXPORTER_OTLP_TRACES_PROTOCOL` +- 
`OTEL_EXPORTER_OTLP_METRICS_PROTOCOL` +- `USERSERVICE_OTEL_STDOUT_TRACES_ENABLED` +- `USERSERVICE_OTEL_STDOUT_METRICS_ENABLED` + +## Runtime Notes + +- The service remains internal REST only; gateway owns external authenticated + gRPC and FlatBuffers. +- Gateway self-service traffic reaches this service over REST/JSON after + gateway-side authentication and FlatBuffers transcoding. +- Current direct synchronous callers are `Auth / Session Service`, + `Edge Gateway`, `Game Lobby Service`, `Geo Profile Service`, and trusted + admin callers. +- Domain-event publication is auxiliary. A failed auxiliary consumer must not + become the source of truth for current account state. diff --git a/user/go.mod b/user/go.mod index b945ca8..51cf0e5 100644 --- a/user/go.mod +++ b/user/go.mod @@ -1,3 +1,92 @@ module galaxy/user go 1.26.1 + +require ( + github.com/alicebob/miniredis/v2 v2.37.0 + github.com/disciplinedware/go-confusables v0.1.1 + github.com/getkin/kin-openapi v0.135.0 + github.com/gin-gonic/gin v1.12.0 + github.com/prometheus/client_golang v1.23.2 + github.com/redis/go-redis/v9 v9.18.0 + github.com/stretchr/testify v1.11.1 + go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.68.0 + go.opentelemetry.io/otel v1.43.0 + go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0 + go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.43.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0 + go.opentelemetry.io/otel/exporters/prometheus v0.65.0 + go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.43.0 + go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0 + go.opentelemetry.io/otel/metric v1.43.0 + go.opentelemetry.io/otel/sdk v1.43.0 + go.opentelemetry.io/otel/sdk/metric v1.43.0 + go.opentelemetry.io/otel/trace v1.43.0 + golang.org/x/text v0.35.0 +) + +require ( + github.com/beorn7/perks v1.0.1 // 
indirect + github.com/bytedance/gopkg v0.1.4 // indirect + github.com/bytedance/sonic v1.15.0 // indirect + github.com/bytedance/sonic/loader v0.5.1 // indirect + github.com/cenkalti/backoff/v5 v5.0.3 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/cloudwego/base64x v0.1.6 // indirect + github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect + github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect + github.com/gabriel-vasile/mimetype v1.4.13 // indirect + github.com/gin-contrib/sse v1.1.1 // indirect + github.com/go-logr/logr v1.4.3 // indirect + github.com/go-logr/stdr v1.2.2 // indirect + github.com/go-openapi/jsonpointer v0.21.0 // indirect + github.com/go-openapi/swag v0.23.0 // indirect + github.com/go-playground/locales v0.14.1 // indirect + github.com/go-playground/universal-translator v0.18.1 // indirect + github.com/go-playground/validator/v10 v10.30.2 // indirect + github.com/goccy/go-json v0.10.6 // indirect + github.com/goccy/go-yaml v1.19.2 // indirect + github.com/google/uuid v1.6.0 // indirect + github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 // indirect + github.com/josharian/intern v1.0.0 // indirect + github.com/json-iterator/go v1.1.12 // indirect + github.com/klauspost/cpuid/v2 v2.3.0 // indirect + github.com/leodido/go-urn v1.4.0 // indirect + github.com/mailru/easyjson v0.7.7 // indirect + github.com/mattn/go-isatty v0.0.20 // indirect + github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect + github.com/modern-go/reflect2 v1.0.2 // indirect + github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect + github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect + github.com/oasdiff/yaml v0.0.9 // indirect + github.com/oasdiff/yaml3 v0.0.9 // indirect + github.com/pelletier/go-toml/v2 v2.3.0 // indirect + github.com/perimeterx/marshmallow v1.1.5 // indirect + github.com/pmezard/go-difflib 
v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect + github.com/prometheus/client_model v0.6.2 // indirect + github.com/prometheus/common v0.67.5 // indirect + github.com/prometheus/otlptranslator v1.0.0 // indirect + github.com/prometheus/procfs v0.20.1 // indirect + github.com/quic-go/qpack v0.6.0 // indirect + github.com/quic-go/quic-go v0.59.0 // indirect + github.com/twitchyliquid64/golang-asm v0.15.1 // indirect + github.com/ugorji/go/codec v1.3.1 // indirect + github.com/woodsbury/decimal128 v1.3.0 // indirect + github.com/yuin/gopher-lua v1.1.1 // indirect + go.mongodb.org/mongo-driver/v2 v2.5.0 // indirect + go.opentelemetry.io/auto/sdk v1.2.1 // indirect + go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 // indirect + go.opentelemetry.io/proto/otlp v1.10.0 // indirect + go.uber.org/atomic v1.11.0 // indirect + go.yaml.in/yaml/v2 v2.4.4 // indirect + golang.org/x/arch v0.25.0 // indirect + golang.org/x/crypto v0.49.0 // indirect + golang.org/x/net v0.52.0 // indirect + golang.org/x/sys v0.42.0 // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 // indirect + google.golang.org/grpc v1.80.0 // indirect + google.golang.org/protobuf v1.36.11 // indirect + gopkg.in/yaml.v3 v3.0.1 // indirect +) diff --git a/user/go.sum b/user/go.sum new file mode 100644 index 0000000..9c868ef --- /dev/null +++ b/user/go.sum @@ -0,0 +1,218 @@ +github.com/alicebob/miniredis/v2 v2.37.0 h1:RheObYW32G1aiJIj81XVt78ZHJpHonHLHW7OLIshq68= +github.com/alicebob/miniredis/v2 v2.37.0/go.mod h1:TcL7YfarKPGDAthEtl5NBeHZfeUQj6OXMm/+iu5cLMM= +github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= +github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs= +github.com/bsm/ginkgo/v2 v2.12.0/go.mod 
h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c= +github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA= +github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0= +github.com/bytedance/gopkg v0.1.4 h1:oZnQwnX82KAIWb7033bEwtxvTqXcYMxDBaQxo5JJHWM= +github.com/bytedance/gopkg v0.1.4/go.mod h1:v1zWfPm21Fb+OsyXN2VAHdL6TBb2L88anLQgdyje6R4= +github.com/bytedance/sonic v1.15.0 h1:/PXeWFaR5ElNcVE84U0dOHjiMHQOwNIx3K4ymzh/uSE= +github.com/bytedance/sonic v1.15.0/go.mod h1:tFkWrPz0/CUCLEF4ri4UkHekCIcdnkqXw9VduqpJh0k= +github.com/bytedance/sonic/loader v0.5.1 h1:Ygpfa9zwRCCKSlrp5bBP/b/Xzc3VxsAW+5NIYXrOOpI= +github.com/bytedance/sonic/loader v0.5.1/go.mod h1:AR4NYCk5DdzZizZ5djGqQ92eEhCCcdf5x77udYiSJRo= +github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM= +github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/cloudwego/base64x v0.1.6 h1:t11wG9AECkCDk5fMSoxmufanudBtJ+/HemLstXDLI2M= +github.com/cloudwego/base64x v0.1.6/go.mod h1:OFcloc187FXDaYHvrNIjxSe8ncn0OOM8gEHfghB2IPU= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78= +github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc= +github.com/disciplinedware/go-confusables v0.1.1 
h1:l/JVOsdrEDHo7nvL+tQfRO1F14UyuuDm1Uvv3Nqmq9Q= +github.com/disciplinedware/go-confusables v0.1.1/go.mod h1:2hAXIAtpSqx+tMKdCzgRNv4J/kmz/oGfSHTBGJjVgfc= +github.com/gabriel-vasile/mimetype v1.4.13 h1:46nXokslUBsAJE/wMsp5gtO500a4F3Nkz9Ufpk2AcUM= +github.com/gabriel-vasile/mimetype v1.4.13/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s= +github.com/getkin/kin-openapi v0.135.0 h1:751SjYfbiwqukYuVjwYEIKNfrSwS5YpA7DZnKSwQgtg= +github.com/getkin/kin-openapi v0.135.0/go.mod h1:6dd5FJl6RdX4usBtFBaQhk9q62Yb2J0Mk5IhUO/QqFI= +github.com/gin-contrib/sse v1.1.1 h1:uGYpNwTacv5R68bSGMapo62iLTRa9l5zxGCps4hK6ko= +github.com/gin-contrib/sse v1.1.1/go.mod h1:QXzuVkA0YO7o/gun03UI1Q+FTI8ZV/n5t03kIQAI89s= +github.com/gin-gonic/gin v1.12.0 h1:b3YAbrZtnf8N//yjKeU2+MQsh2mY5htkZidOM7O0wG8= +github.com/gin-gonic/gin v1.12.0/go.mod h1:VxccKfsSllpKshkBWgVgRniFFAzFb9csfngsqANjnLc= +github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= +github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= +github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= +github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= +github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ= +github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY= +github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE= +github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ= +github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s= +github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4= +github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA= +github.com/go-playground/locales v0.14.1/go.mod 
h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY= +github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY= +github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY= +github.com/go-playground/validator/v10 v10.30.2 h1:JiFIMtSSHb2/XBUbWM4i/MpeQm9ZK2xqPNk8vgvu5JQ= +github.com/go-playground/validator/v10 v10.30.2/go.mod h1:mAf2pIOVXjTEBrwUMGKkCWKKPs9NheYGabeB04txQSc= +github.com/go-test/deep v1.0.8 h1:TDsG77qcSprGbC6vTN8OuXp5g+J+b5Pcguhf7Zt61VM= +github.com/go-test/deep v1.0.8/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE= +github.com/goccy/go-json v0.10.6 h1:p8HrPJzOakx/mn/bQtjgNjdTcN+/S6FcG2CTtQOrHVU= +github.com/goccy/go-json v0.10.6/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M= +github.com/goccy/go-yaml v1.19.2 h1:PmFC1S6h8ljIz6gMRBopkjP1TVT7xuwrButHID66PoM= +github.com/goccy/go-yaml v1.19.2/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA= +github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= +github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= +github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= +github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= +github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= +github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 h1:HWRh5R2+9EifMyIHV7ZV+MIZqgz+PMpZ14Jynv3O2Zs= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0/go.mod h1:JfhWUomR1baixubs02l85lZYYOm7LV6om4ceouMv45c= +github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= +github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= +github.com/json-iterator/go v1.1.12 
h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= +github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= +github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo= +github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ= +github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y= +github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0= +github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= +github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= +github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= +github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ= +github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI= +github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0= +github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc= +github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= +github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= +github.com/modern-go/reflect2 
v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9M+97sNutRR1RKhG96O6jWumTTnw= +github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= +github.com/oasdiff/yaml v0.0.9 h1:zQOvd2UKoozsSsAknnWoDJlSK4lC0mpmjfDsfqNwX48= +github.com/oasdiff/yaml v0.0.9/go.mod h1:8lvhgJG4xiKPj3HN5lDow4jZHPlx1i7dIwzkdAo6oAM= +github.com/oasdiff/yaml3 v0.0.9 h1:rWPrKccrdUm8J0F3sGuU+fuh9+1K/RdJlWF7O/9yw2g= +github.com/oasdiff/yaml3 v0.0.9/go.mod h1:y5+oSEHCPT/DGrS++Wc/479ERge0zTFxaF8PbGKcg2o= +github.com/pelletier/go-toml/v2 v2.3.0 h1:k59bC/lIZREW0/iVaQR8nDHxVq8OVlIzYCOJf421CaM= +github.com/pelletier/go-toml/v2 v2.3.0/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= +github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s= +github.com/perimeterx/marshmallow v1.1.5/go.mod h1:dsXbUu8CRzfYP5a87xpp0xq9S3u0Vchtcl8we9tYaXw= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o= +github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg= +github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk= +github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE= +github.com/prometheus/common v0.67.5 
h1:pIgK94WWlQt1WLwAC5j2ynLaBRDiinoAb86HZHTUGI4= +github.com/prometheus/common v0.67.5/go.mod h1:SjE/0MzDEEAyrdr5Gqc6G+sXI67maCxzaT3A2+HqjUw= +github.com/prometheus/otlptranslator v1.0.0 h1:s0LJW/iN9dkIH+EnhiD3BlkkP5QVIUVEoIwkU+A6qos= +github.com/prometheus/otlptranslator v1.0.0/go.mod h1:vRYWnXvI6aWGpsdY/mOT/cbeVRBlPWtBNDb7kGR3uKM= +github.com/prometheus/procfs v0.20.1 h1:XwbrGOIplXW/AU3YhIhLODXMJYyC1isLFfYCsTEycfc= +github.com/prometheus/procfs v0.20.1/go.mod h1:o9EMBZGRyvDrSPH1RqdxhojkuXstoe4UlK79eF5TGGo= +github.com/quic-go/qpack v0.6.0 h1:g7W+BMYynC1LbYLSqRt8PBg5Tgwxn214ZZR34VIOjz8= +github.com/quic-go/qpack v0.6.0/go.mod h1:lUpLKChi8njB4ty2bFLX2x4gzDqXwUpaO1DP9qMDZII= +github.com/quic-go/quic-go v0.59.0 h1:OLJkp1Mlm/aS7dpKgTc6cnpynnD2Xg7C1pwL6vy/SAw= +github.com/quic-go/quic-go v0.59.0/go.mod h1:upnsH4Ju1YkqpLXC305eW3yDZ4NfnNbmQRCMWS58IKU= +github.com/redis/go-redis/v9 v9.18.0 h1:pMkxYPkEbMPwRdenAzUNyFNrDgHx9U+DrBabWNfSRQs= +github.com/redis/go-redis/v9 v9.18.0/go.mod h1:k3ufPphLU5YXwNTUcCRXGxUoF1fqxnhFQmscfkCoDA0= +github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= +github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= +github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= +github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.10.0/go.mod 
h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= +github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U= +github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U= +github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI= +github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08= +github.com/ugorji/go/codec v1.3.1 h1:waO7eEiFDwidsBN6agj1vJQ4AG7lh2yqXyOXqhgQuyY= +github.com/ugorji/go/codec v1.3.1/go.mod h1:pRBVtBSKl77K30Bv8R2P+cLSGaTtex6fsA2Wjqmfxj4= +github.com/woodsbury/decimal128 v1.3.0 h1:8pffMNWIlC0O5vbyHWFZAt5yWvWcrHA+3ovIIjVWss0= +github.com/woodsbury/decimal128 v1.3.0/go.mod h1:C5UTmyTjW3JftjUFzOVhC20BEQa2a4ZKOB5I6Zjb+ds= +github.com/yuin/gopher-lua v1.1.1 h1:kYKnWBjvbNP4XLT3+bPEwAXJx262OhaHDWDVOPjL46M= +github.com/yuin/gopher-lua v1.1.1/go.mod h1:GBR0iDaNXjAgGg9zfCvksxSRnQx76gclCIb7kdAd1Pw= +github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0= +github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA= +go.mongodb.org/mongo-driver/v2 v2.5.0 h1:yXUhImUjjAInNcpTcAlPHiT7bIXhshCTL3jVBkF3xaE= +go.mongodb.org/mongo-driver/v2 v2.5.0/go.mod h1:yOI9kBsufol30iFsl1slpdq1I0eHPzybRWdyYUs8K/0= +go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64= +go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y= +go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.68.0 h1:5FXSL2s6afUC1bzNzl1iedZZ8yqR7GOhbCoEXtyeK6Q= +go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.68.0/go.mod h1:MdHW7tLtkeGJnR4TyOrnd5D0zUGZQB1l84uHCe8hRpE= +go.opentelemetry.io/contrib/propagators/b3 v1.43.0 h1:CETqV3QLLPTy5yNrqyMr41VnAOOD4lsRved7n4QG00A= +go.opentelemetry.io/contrib/propagators/b3 v1.43.0/go.mod h1:Q4mCiCdziYzpNR0g+6UqVotAlCDZdzz6L8jwY4knOrw= +go.opentelemetry.io/otel v1.43.0 
h1:mYIM03dnh5zfN7HautFE4ieIig9amkNANT+xcVxAj9I= +go.opentelemetry.io/otel v1.43.0/go.mod h1:JuG+u74mvjvcm8vj8pI5XiHy1zDeoCS2LB1spIq7Ay0= +go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0 h1:8UQVDcZxOJLtX6gxtDt3vY2WTgvZqMQRzjsqiIHQdkc= +go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0/go.mod h1:2lmweYCiHYpEjQ/lSJBYhj9jP1zvCvQW4BqL9dnT7FQ= +go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.43.0 h1:w1K+pCJoPpQifuVpsKamUdn9U0zM3xUziVOqsGksUrY= +go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.43.0/go.mod h1:HBy4BjzgVE8139ieRI75oXm3EcDN+6GhD88JT1Kjvxg= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 h1:88Y4s2C8oTui1LGM6bTWkw0ICGcOLCAI5l6zsD1j20k= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0/go.mod h1:Vl1/iaggsuRlrHf/hfPJPvVag77kKyvrLeD10kpMl+A= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0 h1:RAE+JPfvEmvy+0LzyUA25/SGawPwIUbZ6u0Wug54sLc= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0/go.mod h1:AGmbycVGEsRx9mXMZ75CsOyhSP6MFIcj/6dnG+vhVjk= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0 h1:3iZJKlCZufyRzPzlQhUIWVmfltrXuGyfjREgGP3UUjc= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0/go.mod h1:/G+nUPfhq2e+qiXMGxMwumDrP5jtzU+mWN7/sjT2rak= +go.opentelemetry.io/otel/exporters/prometheus v0.65.0 h1:jOveH/b4lU9HT7y+Gfamf18BqlOuz2PWEvs8yM7Q6XE= +go.opentelemetry.io/otel/exporters/prometheus v0.65.0/go.mod h1:i1P8pcumauPtUI4YNopea1dhzEMuEqWP1xoUZDylLHo= +go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.43.0 h1:TC+BewnDpeiAmcscXbGMfxkO+mwYUwE/VySwvw88PfA= +go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.43.0/go.mod h1:J/ZyF4vfPwsSr9xJSPyQ4LqtcTPULFR64KwTikGLe+A= +go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0 h1:mS47AX77OtFfKG4vtp+84kuGSFZHTyxtXIN269vChY0= +go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0/go.mod 
h1:PJnsC41lAGncJlPUniSwM81gc80GkgWJWr3cu2nKEtU= +go.opentelemetry.io/otel/metric v1.43.0 h1:d7638QeInOnuwOONPp4JAOGfbCEpYb+K6DVWvdxGzgM= +go.opentelemetry.io/otel/metric v1.43.0/go.mod h1:RDnPtIxvqlgO8GRW18W6Z/4P462ldprJtfxHxyKd2PY= +go.opentelemetry.io/otel/sdk v1.43.0 h1:pi5mE86i5rTeLXqoF/hhiBtUNcrAGHLKQdhg4h4V9Dg= +go.opentelemetry.io/otel/sdk v1.43.0/go.mod h1:P+IkVU3iWukmiit/Yf9AWvpyRDlUeBaRg6Y+C58QHzg= +go.opentelemetry.io/otel/sdk/metric v1.43.0 h1:S88dyqXjJkuBNLeMcVPRFXpRw2fuwdvfCGLEo89fDkw= +go.opentelemetry.io/otel/sdk/metric v1.43.0/go.mod h1:C/RJtwSEJ5hzTiUz5pXF1kILHStzb9zFlIEe85bhj6A= +go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09nk+3A= +go.opentelemetry.io/otel/trace v1.43.0/go.mod h1:/QJhyVBUUswCphDVxq+8mld+AvhXZLhe+8WVFxiFff0= +go.opentelemetry.io/proto/otlp v1.10.0 h1:IQRWgT5srOCYfiWnpqUYz9CVmbO8bFmKcwYxpuCSL2g= +go.opentelemetry.io/proto/otlp v1.10.0/go.mod h1:/CV4QoCR/S9yaPj8utp3lvQPoqMtxXdzn7ozvvozVqk= +go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE= +go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= +go.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y= +go.uber.org/mock v0.6.0/go.mod h1:KiVJ4BqZJaMj4svdfmHM0AUx4NJYO8ZNpPnZn1Z+BBU= +go.yaml.in/yaml/v2 v2.4.4 h1:tuyd0P+2Ont/d6e2rl3be67goVK4R6deVxCUX5vyPaQ= +go.yaml.in/yaml/v2 v2.4.4/go.mod h1:gMZqIpDtDqOfM0uNfy0SkpRhvUryYH0Z6wdMYcacYXQ= +golang.org/x/arch v0.25.0 h1:qnk6Ksugpi5Bz32947rkUgDt9/s5qvqDPl/gBKdMJLE= +golang.org/x/arch v0.25.0/go.mod h1:0X+GdSIP+kL5wPmpK7sdkEVTt2XoYP0cSjQSbZBwOi8= +golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4= +golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA= +golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0= 
+golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo= +golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw= +golang.org/x/text v0.35.0 h1:JOVx6vVDFokkpaq1AEptVzLTpDe9KGpj5tR4/X+ybL8= +golang.org/x/text v0.35.0/go.mod h1:khi/HExzZJ2pGnjenulevKNX1W67CUy0AsXcNubPGCA= +gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4= +gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E= +google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 h1:VPWxll4HlMw1Vs/qXtN7BvhZqsS9cdAittCNvVENElA= +google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:7QBABkRtR8z+TEnmXTqIqwJLlzrZKVfAUm7tY3yGv0M= +google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 h1:m8qni9SQFH0tJc1X0vmnpw/0t+AImlSvp30sEupozUg= +google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8= +google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM= +google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4= +google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE= +google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= 
+gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git a/user/internal/adapters/local/clock.go b/user/internal/adapters/local/clock.go new file mode 100644 index 0000000..eb3f771 --- /dev/null +++ b/user/internal/adapters/local/clock.go @@ -0,0 +1,13 @@ +// Package local provides small in-process runtime adapters used by the user +// service process. +package local + +import "time" + +// Clock returns the current wall-clock time. +type Clock struct{} + +// Now returns the current time. +func (Clock) Now() time.Time { + return time.Now() +} diff --git a/user/internal/adapters/local/declared_country_changed_publisher.go b/user/internal/adapters/local/declared_country_changed_publisher.go new file mode 100644 index 0000000..b7e1538 --- /dev/null +++ b/user/internal/adapters/local/declared_country_changed_publisher.go @@ -0,0 +1,29 @@ +package local + +import ( + "context" + "fmt" + + "galaxy/user/internal/ports" +) + +// NoopDeclaredCountryChangedPublisher validates and discards auxiliary +// declared-country change events. +type NoopDeclaredCountryChangedPublisher struct{} + +// PublishDeclaredCountryChanged validates event and discards it. 
+func (NoopDeclaredCountryChangedPublisher) PublishDeclaredCountryChanged( + ctx context.Context, + event ports.DeclaredCountryChangedEvent, +) error { + if ctx == nil { + return fmt.Errorf("publish declared-country changed event: nil context") + } + if err := ctx.Err(); err != nil { + return err + } + + return event.Validate() +} + +var _ ports.DeclaredCountryChangedPublisher = NoopDeclaredCountryChangedPublisher{} diff --git a/user/internal/adapters/local/domain_event_publishers.go b/user/internal/adapters/local/domain_event_publishers.go new file mode 100644 index 0000000..7e4249d --- /dev/null +++ b/user/internal/adapters/local/domain_event_publishers.go @@ -0,0 +1,62 @@ +package local + +import ( + "context" + "fmt" + + "galaxy/user/internal/ports" +) + +// NoopDomainEventPublisher validates and discards auxiliary user-domain +// events. +type NoopDomainEventPublisher struct{} + +// PublishProfileChanged validates event and discards it. +func (NoopDomainEventPublisher) PublishProfileChanged(ctx context.Context, event ports.ProfileChangedEvent) error { + return validateNoopPublish(ctx, "publish profile changed event", event.Validate) +} + +// PublishSettingsChanged validates event and discards it. +func (NoopDomainEventPublisher) PublishSettingsChanged(ctx context.Context, event ports.SettingsChangedEvent) error { + return validateNoopPublish(ctx, "publish settings changed event", event.Validate) +} + +// PublishEntitlementChanged validates event and discards it. +func (NoopDomainEventPublisher) PublishEntitlementChanged(ctx context.Context, event ports.EntitlementChangedEvent) error { + return validateNoopPublish(ctx, "publish entitlement changed event", event.Validate) +} + +// PublishSanctionChanged validates event and discards it. 
+func (NoopDomainEventPublisher) PublishSanctionChanged(ctx context.Context, event ports.SanctionChangedEvent) error {
+	return validateNoopPublish(ctx, "publish sanction changed event", event.Validate)
+}
+
+// PublishLimitChanged validates event and discards it.
+func (NoopDomainEventPublisher) PublishLimitChanged(ctx context.Context, event ports.LimitChangedEvent) error {
+	return validateNoopPublish(ctx, "publish limit changed event", event.Validate)
+}
+
+// PublishDeclaredCountryChanged validates event and discards it.
+func (NoopDomainEventPublisher) PublishDeclaredCountryChanged(ctx context.Context, event ports.DeclaredCountryChangedEvent) error {
+	return validateNoopPublish(ctx, "publish declared-country changed event", event.Validate)
+}
+
+func validateNoopPublish(ctx context.Context, operation string, validate func() error) error {
+	if ctx == nil {
+		return fmt.Errorf("%s: nil context", operation)
+	}
+	if err := ctx.Err(); err != nil {
+		return err
+	}
+
+	return validate()
+}
+
+var (
+	_ ports.ProfileChangedPublisher         = NoopDomainEventPublisher{}
+	_ ports.SettingsChangedPublisher        = NoopDomainEventPublisher{}
+	_ ports.EntitlementChangedPublisher     = NoopDomainEventPublisher{}
+	_ ports.SanctionChangedPublisher        = NoopDomainEventPublisher{}
+	_ ports.LimitChangedPublisher           = NoopDomainEventPublisher{}
+	_ ports.DeclaredCountryChangedPublisher = NoopDomainEventPublisher{}
+)
diff --git a/user/internal/adapters/local/id_generator.go b/user/internal/adapters/local/id_generator.go
new file mode 100644
index 0000000..eda6ea9
--- /dev/null
+++ b/user/internal/adapters/local/id_generator.go
@@ -0,0 +1,105 @@
+package local
+
+import (
+	"crypto/rand"
+	"encoding/base32"
+	"fmt"
+	"strings"
+
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/domain/entitlement"
+	"galaxy/user/internal/domain/policy"
+)
+
+var base32NoPadding = base32.StdEncoding.WithPadding(base32.NoPadding)
+
+// IDGenerator creates opaque stable user identifiers and generated initial
+// race names.
+type IDGenerator struct{}
+
+// NewUserID returns one newly generated opaque user identifier.
+func (IDGenerator) NewUserID() (common.UserID, error) {
+	token, err := randomToken(10)
+	if err != nil {
+		return "", fmt.Errorf("generate user id: %w", err)
+	}
+
+	userID := common.UserID("user-" + token)
+	if err := userID.Validate(); err != nil {
+		return "", fmt.Errorf("generate user id: %w", err)
+	}
+
+	return userID, nil
+}
+
+// NewInitialRaceName returns one generated race name in the `player-`
+// form.
+func (IDGenerator) NewInitialRaceName() (common.RaceName, error) {
+	token, err := randomToken(5)
+	if err != nil {
+		return "", fmt.Errorf("generate initial race name: %w", err)
+	}
+
+	raceName := common.RaceName("player-" + token)
+	if err := raceName.Validate(); err != nil {
+		return "", fmt.Errorf("generate initial race name: %w", err)
+	}
+
+	return raceName, nil
+}
+
+// NewEntitlementRecordID returns one generated entitlement history record
+// identifier.
+func (IDGenerator) NewEntitlementRecordID() (entitlement.EntitlementRecordID, error) {
+	token, err := randomToken(10)
+	if err != nil {
+		return "", fmt.Errorf("generate entitlement record id: %w", err)
+	}
+
+	recordID := entitlement.EntitlementRecordID("entitlement-" + token)
+	if err := recordID.Validate(); err != nil {
+		return "", fmt.Errorf("generate entitlement record id: %w", err)
+	}
+
+	return recordID, nil
+}
+
+// NewSanctionRecordID returns one generated sanction history record
+// identifier.
+func (IDGenerator) NewSanctionRecordID() (policy.SanctionRecordID, error) {
+	token, err := randomToken(10)
+	if err != nil {
+		return "", fmt.Errorf("generate sanction record id: %w", err)
+	}
+
+	recordID := policy.SanctionRecordID("sanction-" + token)
+	if err := recordID.Validate(); err != nil {
+		return "", fmt.Errorf("generate sanction record id: %w", err)
+	}
+
+	return recordID, nil
+}
+
+// NewLimitRecordID returns one generated limit history record identifier.
+func (IDGenerator) NewLimitRecordID() (policy.LimitRecordID, error) {
+	token, err := randomToken(10)
+	if err != nil {
+		return "", fmt.Errorf("generate limit record id: %w", err)
+	}
+
+	recordID := policy.LimitRecordID("limit-" + token)
+	if err := recordID.Validate(); err != nil {
+		return "", fmt.Errorf("generate limit record id: %w", err)
+	}
+
+	return recordID, nil
+}
+
+func randomToken(size int) (string, error) {
+	buffer := make([]byte, size)
+	if _, err := rand.Read(buffer); err != nil {
+		return "", err
+	}
+
+	return strings.ToLower(base32NoPadding.EncodeToString(buffer)), nil
+}
diff --git a/user/internal/adapters/local/race_name_policy.go b/user/internal/adapters/local/race_name_policy.go
new file mode 100644
index 0000000..333e4e7
--- /dev/null
+++ b/user/internal/adapters/local/race_name_policy.go
@@ -0,0 +1,65 @@
+package local
+
+import (
+	"fmt"
+	"strings"
+
+	"galaxy/user/internal/domain/account"
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/ports"
+
+	confusables "github.com/disciplinedware/go-confusables"
+	"golang.org/x/text/cases"
+)
+
+type confusableSkeletoner interface {
+	Skeleton(string) string
+}
+
+type raceNamePolicy struct {
+	caseFolder cases.Caser
+	skeletoner confusableSkeletoner
+}
+
+var raceNameAntiFraudReplacer = strings.NewReplacer(
+	"1", "i",
+	"0", "o",
+	"8", "b",
+)
+
+// NewRaceNamePolicy returns the local Stage 06 race-name canonicalization
+// policy backed by Unicode case folding, explicit ASCII anti-fraud mappings,
+// and a TR39 confusable skeleton.
+func NewRaceNamePolicy() (ports.RaceNamePolicy, error) {
+	policy := &raceNamePolicy{
+		caseFolder: cases.Fold(),
+		skeletoner: confusables.Default(),
+	}
+	if policy.skeletoner == nil {
+		return nil, fmt.Errorf("new race-name policy: nil confusable skeletoner")
+	}
+
+	return policy, nil
+}
+
+// CanonicalKey returns the stable uniqueness key for raceName.
+func (policy *raceNamePolicy) CanonicalKey(raceName common.RaceName) (account.RaceNameCanonicalKey, error) {
+	switch {
+	case policy == nil:
+		return "", fmt.Errorf("canonicalize race name: nil policy")
+	case policy.skeletoner == nil:
+		return "", fmt.Errorf("canonicalize race name: nil confusable skeletoner")
+	}
+	if err := raceName.Validate(); err != nil {
+		return "", fmt.Errorf("canonicalize race name: %w", err)
+	}
+
+	folded := policy.caseFolder.String(raceName.String())
+	antiFraudMapped := raceNameAntiFraudReplacer.Replace(folded)
+	key := account.RaceNameCanonicalKey(policy.skeletoner.Skeleton(antiFraudMapped))
+	if err := key.Validate(); err != nil {
+		return "", fmt.Errorf("canonicalize race name: %w", err)
+	}
+
+	return key, nil
+}
diff --git a/user/internal/adapters/local/race_name_policy_test.go b/user/internal/adapters/local/race_name_policy_test.go
new file mode 100644
index 0000000..7d479cc
--- /dev/null
+++ b/user/internal/adapters/local/race_name_policy_test.go
@@ -0,0 +1,72 @@
+package local
+
+import (
+	"testing"
+	"time"
+
+	"galaxy/user/internal/domain/account"
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/service/shared"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestRaceNamePolicyCanonicalKey(t *testing.T) {
+	t.Parallel()
+
+	policy, err := NewRaceNamePolicy()
+	require.NoError(t, err)
+
+	tests := []struct {
+		name  string
+		left  common.RaceName
+		right common.RaceName
+	}{
+		{
+			name:  "case insensitive collision",
+			left:  common.RaceName("Pilot Nova"),
+			right: common.RaceName("pilot nova"),
+		},
+		{
+			name:  "ascii anti fraud collision",
+			left:  common.RaceName("Pilot Nova"),
+			right: common.RaceName("P1lot N0va"),
+		},
+		{
+			name:  "unicode confusable collision",
+			left:  common.RaceName("paypal"),
+			right: common.RaceName("раураl"),
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			leftKey, err := policy.CanonicalKey(tt.left)
+			require.NoError(t, err)
+			rightKey, err := policy.CanonicalKey(tt.right)
+			require.NoError(t, err)
+			require.Equal(t, rightKey, leftKey)
+		})
+	}
+}
+
+func TestBuildRaceNameReservationPreservesOriginalDisplayValue(t *testing.T) {
+	t.Parallel()
+
+	policy, err := NewRaceNamePolicy()
+	require.NoError(t, err)
+
+	record, err := shared.BuildRaceNameReservation(
+		policy,
+		common.UserID("user-123"),
+		common.RaceName("P1lot Nova"),
+		time.Unix(1_775_240_000, 0).UTC(),
+	)
+	require.NoError(t, err)
+
+	require.Equal(t, common.RaceName("P1lot Nova"), record.RaceName)
+	require.NotEqual(t, account.RaceNameCanonicalKey(""), record.CanonicalKey)
+}
diff --git a/user/internal/adapters/redis/domainevents/publisher.go b/user/internal/adapters/redis/domainevents/publisher.go
new file mode 100644
index 0000000..76393ff
--- /dev/null
+++ b/user/internal/adapters/redis/domainevents/publisher.go
@@ -0,0 +1,311 @@
+// Package domainevents implements Redis Stream-backed auxiliary user-domain
+// event publishers.
+package domainevents
+
+import (
+	"context"
+	"crypto/tls"
+	"errors"
+	"fmt"
+	"strconv"
+	"strings"
+	"time"
+
+	"galaxy/user/internal/ports"
+
+	"github.com/redis/go-redis/v9"
+	"go.opentelemetry.io/otel/trace"
+)
+
+// Config configures one Redis-backed user domain-event publisher.
+type Config struct {
+	// Addr is the Redis network address in host:port form.
+	Addr string
+
+	// Username is the optional Redis ACL username.
+	Username string
+
+	// Password is the optional Redis ACL password.
+	Password string
+
+	// DB is the Redis logical database index.
+	DB int
+
+	// TLSEnabled enables TLS with a conservative minimum protocol version.
+	TLSEnabled bool
+
+	// Stream identifies the Redis Stream key used for domain events.
+	Stream string
+
+	// StreamMaxLen bounds the stream with approximate trimming via
+	// `XADD MAXLEN ~`.
+	StreamMaxLen int64
+
+	// OperationTimeout bounds each Redis round trip performed by the adapter.
+	OperationTimeout time.Duration
+}
+
+// Publisher publishes auxiliary user-domain events into one Redis Stream.
+type Publisher struct {
+	client           *redis.Client
+	stream           string
+	streamMaxLen     int64
+	operationTimeout time.Duration
+}
+
+// New constructs a Redis-backed domain-event publisher from cfg.
+func New(cfg Config) (*Publisher, error) {
+	switch {
+	case strings.TrimSpace(cfg.Addr) == "":
+		return nil, errors.New("new redis domain-event publisher: redis addr must not be empty")
+	case cfg.DB < 0:
+		return nil, errors.New("new redis domain-event publisher: redis db must not be negative")
+	case strings.TrimSpace(cfg.Stream) == "":
+		return nil, errors.New("new redis domain-event publisher: stream must not be empty")
+	case cfg.StreamMaxLen <= 0:
+		return nil, errors.New("new redis domain-event publisher: stream max len must be positive")
+	case cfg.OperationTimeout <= 0:
+		return nil, errors.New("new redis domain-event publisher: operation timeout must be positive")
+	}
+
+	options := &redis.Options{
+		Addr:            cfg.Addr,
+		Username:        cfg.Username,
+		Password:        cfg.Password,
+		DB:              cfg.DB,
+		Protocol:        2,
+		DisableIdentity: true,
+	}
+	if cfg.TLSEnabled {
+		options.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}
+	}
+
+	return &Publisher{
+		client:           redis.NewClient(options),
+		stream:           cfg.Stream,
+		streamMaxLen:     cfg.StreamMaxLen,
+		operationTimeout: cfg.OperationTimeout,
+	}, nil
+}
+
+// Close releases the underlying Redis client resources.
+func (publisher *Publisher) Close() error {
+	if publisher == nil || publisher.client == nil {
+		return nil
+	}
+
+	return publisher.client.Close()
+}
+
+// Ping verifies that the configured Redis backend is reachable within the
+// adapter operation timeout budget.
+func (publisher *Publisher) Ping(ctx context.Context) error {
+	operationCtx, cancel, err := publisher.operationContext(ctx, "ping redis domain-event publisher")
+	if err != nil {
+		return err
+	}
+	defer cancel()
+
+	if err := publisher.client.Ping(operationCtx).Err(); err != nil {
+		return fmt.Errorf("ping redis domain-event publisher: %w", err)
+	}
+
+	return nil
+}
+
+// PublishProfileChanged publishes one committed profile-change event.
+func (publisher *Publisher) PublishProfileChanged(ctx context.Context, event ports.ProfileChangedEvent) error {
+	if err := event.Validate(); err != nil {
+		return fmt.Errorf("publish profile changed event: %w", err)
+	}
+
+	values := buildEnvelope(ports.ProfileChangedEventType, event.UserID.String(), event.OccurredAt, event.Source.String(), traceIDFromContext(ctx, event.TraceID))
+	values["operation"] = string(event.Operation)
+	values["race_name"] = event.RaceName.String()
+
+	return publisher.publish(ctx, "publish profile changed event", values)
+}
+
+// PublishSettingsChanged publishes one committed settings-change event.
+func (publisher *Publisher) PublishSettingsChanged(ctx context.Context, event ports.SettingsChangedEvent) error {
+	if err := event.Validate(); err != nil {
+		return fmt.Errorf("publish settings changed event: %w", err)
+	}
+
+	values := buildEnvelope(ports.SettingsChangedEventType, event.UserID.String(), event.OccurredAt, event.Source.String(), traceIDFromContext(ctx, event.TraceID))
+	values["operation"] = string(event.Operation)
+	values["preferred_language"] = event.PreferredLanguage.String()
+	values["time_zone"] = event.TimeZone.String()
+
+	return publisher.publish(ctx, "publish settings changed event", values)
+}
+
+// PublishEntitlementChanged publishes one committed entitlement-change event.
+func (publisher *Publisher) PublishEntitlementChanged(ctx context.Context, event ports.EntitlementChangedEvent) error {
+	if err := event.Validate(); err != nil {
+		return fmt.Errorf("publish entitlement changed event: %w", err)
+	}
+
+	values := buildEnvelope(ports.EntitlementChangedEventType, event.UserID.String(), event.OccurredAt, event.Source.String(), traceIDFromContext(ctx, event.TraceID))
+	values["operation"] = string(event.Operation)
+	values["plan_code"] = string(event.PlanCode)
+	values["is_paid"] = strconv.FormatBool(event.IsPaid)
+	values["starts_at_ms"] = strconv.FormatInt(event.StartsAt.UTC().UnixMilli(), 10)
+	values["reason_code"] = event.ReasonCode.String()
+	values["actor_type"] = event.Actor.Type.String()
+	values["updated_at_ms"] = strconv.FormatInt(event.UpdatedAt.UTC().UnixMilli(), 10)
+	if !event.Actor.ID.IsZero() {
+		values["actor_id"] = event.Actor.ID.String()
+	}
+	if event.EndsAt != nil {
+		values["ends_at_ms"] = strconv.FormatInt(event.EndsAt.UTC().UnixMilli(), 10)
+	}
+
+	return publisher.publish(ctx, "publish entitlement changed event", values)
+}
+
+// PublishSanctionChanged publishes one committed sanction-change event.
+func (publisher *Publisher) PublishSanctionChanged(ctx context.Context, event ports.SanctionChangedEvent) error {
+	if err := event.Validate(); err != nil {
+		return fmt.Errorf("publish sanction changed event: %w", err)
+	}
+
+	values := buildEnvelope(ports.SanctionChangedEventType, event.UserID.String(), event.OccurredAt, event.Source.String(), traceIDFromContext(ctx, event.TraceID))
+	values["operation"] = string(event.Operation)
+	values["sanction_code"] = string(event.SanctionCode)
+	values["scope"] = event.Scope.String()
+	values["reason_code"] = event.ReasonCode.String()
+	values["actor_type"] = event.Actor.Type.String()
+	values["applied_at_ms"] = strconv.FormatInt(event.AppliedAt.UTC().UnixMilli(), 10)
+	if !event.Actor.ID.IsZero() {
+		values["actor_id"] = event.Actor.ID.String()
+	}
+	if event.ExpiresAt != nil {
+		values["expires_at_ms"] = strconv.FormatInt(event.ExpiresAt.UTC().UnixMilli(), 10)
+	}
+	if event.RemovedAt != nil {
+		values["removed_at_ms"] = strconv.FormatInt(event.RemovedAt.UTC().UnixMilli(), 10)
+	}
+
+	return publisher.publish(ctx, "publish sanction changed event", values)
+}
+
+// PublishLimitChanged publishes one committed limit-change event.
+func (publisher *Publisher) PublishLimitChanged(ctx context.Context, event ports.LimitChangedEvent) error {
+	if err := event.Validate(); err != nil {
+		return fmt.Errorf("publish limit changed event: %w", err)
+	}
+
+	values := buildEnvelope(ports.LimitChangedEventType, event.UserID.String(), event.OccurredAt, event.Source.String(), traceIDFromContext(ctx, event.TraceID))
+	values["operation"] = string(event.Operation)
+	values["limit_code"] = string(event.LimitCode)
+	values["reason_code"] = event.ReasonCode.String()
+	values["actor_type"] = event.Actor.Type.String()
+	values["applied_at_ms"] = strconv.FormatInt(event.AppliedAt.UTC().UnixMilli(), 10)
+	if event.Value != nil {
+		values["value"] = strconv.Itoa(*event.Value)
+	}
+	if !event.Actor.ID.IsZero() {
+		values["actor_id"] = event.Actor.ID.String()
+	}
+	if event.ExpiresAt != nil {
+		values["expires_at_ms"] = strconv.FormatInt(event.ExpiresAt.UTC().UnixMilli(), 10)
+	}
+	if event.RemovedAt != nil {
+		values["removed_at_ms"] = strconv.FormatInt(event.RemovedAt.UTC().UnixMilli(), 10)
+	}
+
+	return publisher.publish(ctx, "publish limit changed event", values)
+}
+
+// PublishDeclaredCountryChanged publishes one committed declared-country change
+// event.
+func (publisher *Publisher) PublishDeclaredCountryChanged(ctx context.Context, event ports.DeclaredCountryChangedEvent) error {
+	if err := event.Validate(); err != nil {
+		return fmt.Errorf("publish declared-country changed event: %w", err)
+	}
+
+	values := buildEnvelope(
+		ports.DeclaredCountryChangedEventType,
+		event.UserID.String(),
+		event.UpdatedAt,
+		event.Source.String(),
+		traceIDFromContext(ctx, event.TraceID),
+	)
+	values["declared_country"] = event.DeclaredCountry.String()
+	values["updated_at_ms"] = strconv.FormatInt(event.UpdatedAt.UTC().UnixMilli(), 10)
+
+	return publisher.publish(ctx, "publish declared-country changed event", values)
+}
+
+func (publisher *Publisher) publish(ctx context.Context, operation string, values map[string]any) error {
+	operationCtx, cancel, err := publisher.operationContext(ctx, operation)
+	if err != nil {
+		return err
+	}
+	defer cancel()
+
+	if err := publisher.client.XAdd(operationCtx, &redis.XAddArgs{
+		Stream: publisher.stream,
+		MaxLen: publisher.streamMaxLen,
+		Approx: true,
+		Values: values,
+	}).Err(); err != nil {
+		return fmt.Errorf("%s: %w", operation, err)
+	}
+
+	return nil
+}
+
+func (publisher *Publisher) operationContext(ctx context.Context, operation string) (context.Context, context.CancelFunc, error) {
+	if publisher == nil || publisher.client == nil {
+		return nil, nil, fmt.Errorf("%s: nil publisher", operation)
+	}
+	if ctx == nil {
+		return nil, nil, fmt.Errorf("%s: nil context", operation)
+	}
+
+	operationCtx, cancel := context.WithTimeout(ctx, publisher.operationTimeout)
+	return operationCtx, cancel, nil
+}
+
+func buildEnvelope(eventType string, userID string, occurredAt time.Time, source string, traceID string) map[string]any {
+	values := map[string]any{
+		"event_type":     eventType,
+		"user_id":        userID,
+		"occurred_at_ms": strconv.FormatInt(occurredAt.UTC().UnixMilli(), 10),
+		"source":         source,
+	}
+	if traceID != "" {
+		values["trace_id"] = traceID
+	}
+
+	return values
+}
+
+func traceIDFromContext(ctx context.Context, fallback string) string {
+	if strings.TrimSpace(fallback) != "" {
+		return fallback
+	}
+	if ctx == nil {
+		return ""
+	}
+
+	spanContext := trace.SpanContextFromContext(ctx)
+	if !spanContext.IsValid() {
+		return ""
+	}
+
+	return spanContext.TraceID().String()
+}
+
+var (
+	_ interface{ Close() error }               = (*Publisher)(nil)
+	_ interface{ Ping(context.Context) error } = (*Publisher)(nil)
+	_ ports.ProfileChangedPublisher            = (*Publisher)(nil)
+	_ ports.SettingsChangedPublisher           = (*Publisher)(nil)
+	_ ports.EntitlementChangedPublisher        = (*Publisher)(nil)
+	_ ports.SanctionChangedPublisher           = (*Publisher)(nil)
+	_ ports.LimitChangedPublisher              = (*Publisher)(nil)
+	_ ports.DeclaredCountryChangedPublisher    = (*Publisher)(nil)
+)
diff --git a/user/internal/adapters/redis/domainevents/publisher_test.go b/user/internal/adapters/redis/domainevents/publisher_test.go
new file mode 100644
index 0000000..0c2a116
--- /dev/null
+++ b/user/internal/adapters/redis/domainevents/publisher_test.go
@@ -0,0 +1,90 @@
+package domainevents
+
+import (
+	"context"
+	"strconv"
+	"testing"
+	"time"
+
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/ports"
+
+	"github.com/alicebob/miniredis/v2"
+	"github.com/stretchr/testify/require"
+)
+
+func TestPublisherPublishesFlatRedisStreamEntry(t *testing.T) {
+	t.Parallel()
+
+	server := miniredis.RunT(t)
+	publisher, err := New(Config{
+		Addr:             server.Addr(),
+		Stream:           "user:test_events",
+		StreamMaxLen:     5,
+		OperationTimeout: time.Second,
+	})
+	require.NoError(t, err)
+
+	occurredAt := time.Unix(1_775_240_000, 0).UTC()
+	err = publisher.PublishProfileChanged(context.Background(), ports.ProfileChangedEvent{
+		UserID:     common.UserID("user-123"),
+		OccurredAt: occurredAt,
+		Source:     common.Source("gateway_self_service"),
+		TraceID:    "4bf92f3577b34da6a3ce929d0e0e4736",
+		Operation:  ports.ProfileChangedOperationUpdated,
+		RaceName:   common.RaceName("Nova Prime"),
+	})
+	require.NoError(t, err)
+
+	entries, err := publisher.client.XRange(context.Background(), publisher.stream, "-", "+").Result()
+	require.NoError(t, err)
+	require.Len(t, entries, 1)
+	require.Equal(t, ports.ProfileChangedEventType, entries[0].Values["event_type"])
+	require.Equal(t, "user-123", entries[0].Values["user_id"])
+	require.Equal(t, strconv.FormatInt(occurredAt.UnixMilli(), 10), entries[0].Values["occurred_at_ms"])
+	require.Equal(t, "gateway_self_service", entries[0].Values["source"])
+	require.Equal(t, "4bf92f3577b34da6a3ce929d0e0e4736", entries[0].Values["trace_id"])
+	require.Equal(t, string(ports.ProfileChangedOperationUpdated), entries[0].Values["operation"])
+	require.Equal(t, "Nova Prime", entries[0].Values["race_name"])
+
+	for index := 0; index < 20; index++ {
+		err = publisher.PublishSettingsChanged(context.Background(), ports.SettingsChangedEvent{
+			UserID:            common.UserID("user-123"),
+			OccurredAt:        occurredAt.Add(time.Duration(index+1) * time.Second),
+			Source:            common.Source("gateway_self_service"),
+			Operation:         ports.SettingsChangedOperationUpdated,
+			PreferredLanguage: common.LanguageTag("en-US"),
+			TimeZone:          common.TimeZoneName("UTC"),
+		})
+		require.NoError(t, err)
+	}
+
+	length, err := publisher.client.XLen(context.Background(), publisher.stream).Result()
+	require.NoError(t, err)
+	require.LessOrEqual(t, length, int64(20))
+}
+
+func TestPublisherRejectsInvalidEventBeforeXAdd(t *testing.T) {
+	t.Parallel()
+
+	server := miniredis.RunT(t)
+	publisher, err := New(Config{
+		Addr:             server.Addr(),
+		Stream:           "user:test_events",
+		StreamMaxLen:     5,
+		OperationTimeout: time.Second,
+	})
+	require.NoError(t, err)
+
+	err = publisher.PublishProfileChanged(context.Background(), ports.ProfileChangedEvent{
+		UserID:     common.UserID("user-123"),
+		OccurredAt: time.Unix(1_775_240_000, 0).UTC(),
+		Operation:  ports.ProfileChangedOperationUpdated,
+		RaceName:   common.RaceName("Nova Prime"),
+	})
+	require.Error(t, err)
+
+	length, xLenErr := publisher.client.XLen(context.Background(), publisher.stream).Result()
+	require.NoError(t, xLenErr)
+	require.Zero(t, length)
+}
diff --git a/user/internal/adapters/redis/userstore/admin_index.go b/user/internal/adapters/redis/userstore/admin_index.go
new file mode 100644
index 0000000..d505a5a
--- /dev/null
+++ b/user/internal/adapters/redis/userstore/admin_index.go
@@ -0,0 +1,215 @@
+package userstore
+
+import (
+	"context"
+	"errors"
+
+	"galaxy/user/internal/adapters/redisstate"
+	"galaxy/user/internal/domain/account"
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/domain/entitlement"
+	"galaxy/user/internal/domain/policy"
+	"galaxy/user/internal/ports"
+
+	"github.com/redis/go-redis/v9"
+)
+
+var knownSanctionCodes = []policy.SanctionCode{
+	policy.SanctionCodeLoginBlock,
+	policy.SanctionCodePrivateGameCreateBlock,
+	policy.SanctionCodePrivateGameManageBlock,
+	policy.SanctionCodeGameJoinBlock,
+	policy.SanctionCodeProfileUpdateBlock,
+}
+
+var knownLimitCodes = []policy.LimitCode{
+	policy.LimitCodeMaxOwnedPrivateGames,
+	policy.LimitCodeMaxPendingPublicApplications,
+	policy.LimitCodeMaxActiveGameMemberships,
+}
+
+var knownEligibilityMarkers = []policy.EligibilityMarker{
+	policy.EligibilityMarkerCanLogin,
+	policy.EligibilityMarkerCanCreatePrivateGame,
+	policy.EligibilityMarkerCanManagePrivateGame,
+	policy.EligibilityMarkerCanJoinGame,
+	policy.EligibilityMarkerCanUpdateProfile,
+}
+
+func (store *Store) addCreatedAtIndex(
+	pipe redis.Pipeliner,
+	ctx context.Context,
+	record account.UserAccount,
+) {
+	pipe.ZAdd(ctx, store.keyspace.CreatedAtIndex(), redis.Z{
+		Score:  redisstate.CreatedAtScore(record.CreatedAt),
+		Member: record.UserID.String(),
+	})
+}
+
+func (store *Store) syncDeclaredCountryIndex(
+	pipe redis.Pipeliner,
+	ctx context.Context,
+	previous account.UserAccount,
+	current account.UserAccount,
+) {
+	if !previous.DeclaredCountry.IsZero() {
+		pipe.SRem(ctx, store.keyspace.DeclaredCountryIndex(previous.DeclaredCountry), current.UserID.String())
+	}
+	if !current.DeclaredCountry.IsZero() {
+		pipe.SAdd(ctx, store.keyspace.DeclaredCountryIndex(current.DeclaredCountry), current.UserID.String())
+	}
+}
+
+func (store *Store) syncEntitlementIndexes(
+	pipe redis.Pipeliner,
+	ctx context.Context,
+	snapshot entitlement.CurrentSnapshot,
+) {
+	pipe.SRem(ctx, store.keyspace.PaidStateIndex(entitlement.PaidStateFree), snapshot.UserID.String())
+	pipe.SRem(ctx, store.keyspace.PaidStateIndex(entitlement.PaidStatePaid), snapshot.UserID.String())
+	pipe.SAdd(ctx, store.keyspace.PaidStateIndex(paidStateFromSnapshot(snapshot)), snapshot.UserID.String())
+
+	pipe.ZRem(ctx, store.keyspace.FinitePaidExpiryIndex(), snapshot.UserID.String())
+	if snapshot.HasFiniteExpiry() {
+		pipe.ZAdd(ctx, store.keyspace.FinitePaidExpiryIndex(), redis.Z{
+			Score:  redisstate.ExpiryScore(*snapshot.EndsAt),
+			Member: snapshot.UserID.String(),
+		})
+	}
+}
+
+func (store *Store) syncActiveSanctionCodeIndexes(
+	pipe redis.Pipeliner,
+	ctx context.Context,
+	userID common.UserID,
+	activeCodes map[policy.SanctionCode]struct{},
+) {
+	for _, code := range knownSanctionCodes {
+		pipe.SRem(ctx, store.keyspace.ActiveSanctionCodeIndex(code), userID.String())
+		if _, ok := activeCodes[code]; ok {
+			pipe.SAdd(ctx, store.keyspace.ActiveSanctionCodeIndex(code), userID.String())
+		}
+	}
+}
+
+func (store *Store) syncActiveLimitCodeIndexes(
+	pipe redis.Pipeliner,
+	ctx context.Context,
+	userID common.UserID,
+	activeCodes map[policy.LimitCode]struct{},
+) {
+	for _, code := range knownLimitCodes {
+		pipe.SRem(ctx, store.keyspace.ActiveLimitCodeIndex(code), userID.String())
+		if _, ok := activeCodes[code]; ok {
+			pipe.SAdd(ctx, store.keyspace.ActiveLimitCodeIndex(code), userID.String())
+		}
+	}
+}
+
+func (store *Store) syncEligibilityMarkerIndexes(
+	pipe redis.Pipeliner,
+	ctx context.Context,
+	userID common.UserID,
+	isPaid bool,
+	activeSanctionCodes map[policy.SanctionCode]struct{},
+) {
+	values := deriveEligibilityMarkerValues(isPaid, activeSanctionCodes)
+
+	for _, marker := range knownEligibilityMarkers {
+		pipe.SRem(ctx, store.keyspace.EligibilityMarkerIndex(marker, true), userID.String())
+		pipe.SRem(ctx, store.keyspace.EligibilityMarkerIndex(marker, false), userID.String())
+		pipe.SAdd(ctx, store.keyspace.EligibilityMarkerIndex(marker, values[marker]), userID.String())
+	}
+}
+
+func (store *Store) loadActiveSanctionCodeSet(
+	ctx context.Context,
+	getter bytesGetter,
+	userID common.UserID,
+) (map[policy.SanctionCode]struct{}, error) {
+	activeCodes := make(map[policy.SanctionCode]struct{}, len(knownSanctionCodes))
+
+	for _, code := range knownSanctionCodes {
+		_, err := store.loadActiveSanctionRecordID(ctx, getter, store.keyspace.ActiveSanction(userID, code))
+		switch {
+		case err == nil:
+			activeCodes[code] = struct{}{}
+		case errors.Is(err, ports.ErrNotFound):
+			continue
+		default:
+			return nil, err
+		}
+	}
+
+	return activeCodes, nil
+}
+
+func (store *Store) loadActiveLimitCodeSet(
+	ctx context.Context,
+	getter bytesGetter,
+	userID common.UserID,
+) (map[policy.LimitCode]struct{}, error) {
+	activeCodes := make(map[policy.LimitCode]struct{}, len(knownLimitCodes))
+
+	for _, code := range knownLimitCodes {
+		_, err := store.loadActiveLimitRecordID(ctx, getter, store.keyspace.ActiveLimit(userID, code))
+		switch {
+		case err == nil:
+			activeCodes[code] = struct{}{}
+		case errors.Is(err, ports.ErrNotFound):
+			continue
+		default:
+			return nil, err
+		}
+	}
+
+	return activeCodes, nil
+}
+
+func (store *Store) activeSanctionWatchKeys(userID common.UserID) []string {
+	keys := make([]string, 0, len(knownSanctionCodes))
+	for _, code := range knownSanctionCodes {
+		keys = append(keys, store.keyspace.ActiveSanction(userID, code))
+	}
+
+	return keys
+}
+
+func (store *Store) activeLimitWatchKeys(userID common.UserID) []string {
+	keys := make([]string, 0, len(knownLimitCodes))
+	for _, code := range knownLimitCodes {
+		keys = append(keys, store.keyspace.ActiveLimit(userID, code))
+	}
+
+	return keys
+}
+
+func deriveEligibilityMarkerValues(
+	isPaid bool,
+	activeSanctionCodes map[policy.SanctionCode]struct{},
+) map[policy.EligibilityMarker]bool {
+	_, loginBlocked := activeSanctionCodes[policy.SanctionCodeLoginBlock]
+	_, createBlocked := activeSanctionCodes[policy.SanctionCodePrivateGameCreateBlock]
+	_, manageBlocked := activeSanctionCodes[policy.SanctionCodePrivateGameManageBlock]
+	_, joinBlocked := activeSanctionCodes[policy.SanctionCodeGameJoinBlock]
+	_, profileBlocked := activeSanctionCodes[policy.SanctionCodeProfileUpdateBlock]
+
+	canLogin := !loginBlocked
+
+	return map[policy.EligibilityMarker]bool{
+		policy.EligibilityMarkerCanLogin:             canLogin,
+		policy.EligibilityMarkerCanCreatePrivateGame: canLogin && isPaid && !createBlocked,
+		policy.EligibilityMarkerCanManagePrivateGame: canLogin && isPaid && !manageBlocked,
+		policy.EligibilityMarkerCanJoinGame:          canLogin && !joinBlocked,
+		policy.EligibilityMarkerCanUpdateProfile:     canLogin && !profileBlocked,
+	}
+}
+
+func paidStateFromSnapshot(snapshot entitlement.CurrentSnapshot) entitlement.PaidState {
+	if snapshot.IsPaid {
+		return entitlement.PaidStatePaid
+	}
+
+	return entitlement.PaidStateFree
+}
diff --git a/user/internal/adapters/redis/userstore/admin_list_test.go b/user/internal/adapters/redis/userstore/admin_list_test.go
new file mode 100644
index 0000000..6156767
--- /dev/null
+++ b/user/internal/adapters/redis/userstore/admin_list_test.go
@@ -0,0 +1,449 @@
+package userstore
+
+import (
+	"context"
+	"testing"
+	"time"
+
+	"galaxy/user/internal/adapters/redisstate"
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/domain/entitlement"
+	"galaxy/user/internal/domain/policy"
+	"galaxy/user/internal/ports"
+	"galaxy/user/internal/service/adminusers"
+	"galaxy/user/internal/service/entitlementsvc"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestListUserIDsCreatedAtPagination(t *testing.T) {
+	t.Parallel()
+
+	store := newTestStore(t)
+	base := time.Unix(1_775_240_000, 0).UTC()
+
+	first := validAccountRecord()
+	first.UserID = common.UserID("user-100")
+	first.Email = common.Email("u100@example.com")
+	first.RaceName = common.RaceName("User 100")
+	first.CreatedAt = base.Add(-time.Hour)
+	first.UpdatedAt = first.CreatedAt
+
+	second := validAccountRecord()
+	second.UserID = common.UserID("user-200")
+	second.Email = common.Email("u200@example.com")
+	second.RaceName = common.RaceName("User 200")
+	second.CreatedAt = base
+	second.UpdatedAt = second.CreatedAt
+
+	third := validAccountRecord()
+	third.UserID = common.UserID("user-300")
+	third.Email = common.Email("u300@example.com")
+	third.RaceName = common.RaceName("User 300")
+	third.CreatedAt = base
+	third.UpdatedAt = third.CreatedAt
+
+	require.NoError(t, store.Create(context.Background(), createAccountInput(first)))
+	require.NoError(t, store.Create(context.Background(), createAccountInput(second)))
+	require.NoError(t, store.Create(context.Background(), createAccountInput(third)))
+
+	firstPage, err := store.ListUserIDs(context.Background(), ports.ListUsersInput{
+		PageSize: 2,
+		Filters:  ports.UserListFilters{},
+	})
+	require.NoError(t, err)
+	require.Equal(t, []common.UserID{third.UserID, second.UserID}, firstPage.UserIDs)
+	require.NotEmpty(t, firstPage.NextPageToken)
+
+	secondPage, err := store.ListUserIDs(context.Background(), ports.ListUsersInput{
+		PageSize:  2,
+		PageToken: firstPage.NextPageToken,
+		Filters:   ports.UserListFilters{},
+	})
+	require.NoError(t, err)
+	require.Equal(t, []common.UserID{first.UserID}, secondPage.UserIDs)
+	require.Empty(t, secondPage.NextPageToken)
+}
+
+func TestEnsureByEmailInitialAdminIndexes(t *testing.T) {
+	t.Parallel()
+
+	store := newTestStore(t)
+	now := time.Unix(1_775_240_000, 0).UTC()
+	record := validAccountRecord()
+	record.DeclaredCountry = common.CountryCode("DE")
+	record.CreatedAt = now
+	record.UpdatedAt = now
+
+	result, err := store.EnsureByEmail(context.Background(), ports.EnsureByEmailInput{
+		Email:             record.Email,
+		Account:           record,
+		Entitlement:       validEntitlementSnapshot(record.UserID, now),
+		EntitlementRecord: validEntitlementRecord(record.UserID, now),
+		Reservation:       raceNameReservation(record.UserID, record.RaceName, now),
+	})
+	require.NoError(t, err)
+	require.Equal(t, ports.EnsureByEmailOutcomeCreated, result.Outcome)
+
+	requireSortedSetScore(t, store, store.keyspace.CreatedAtIndex(), record.UserID.String(), redisstate.CreatedAtScore(record.CreatedAt))
+	requireSetContains(t, store, store.keyspace.PaidStateIndex(entitlement.PaidStateFree), record.UserID.String())
+	requireSetNotContains(t, store, store.keyspace.PaidStateIndex(entitlement.PaidStatePaid), record.UserID.String())
+	requireSetContains(t, store, store.keyspace.DeclaredCountryIndex(record.DeclaredCountry), record.UserID.String())
+	requireSetContains(t, store, store.keyspace.EligibilityMarkerIndex(policy.EligibilityMarkerCanLogin, true), record.UserID.String())
+	requireSetContains(t, store, store.keyspace.EligibilityMarkerIndex(policy.EligibilityMarkerCanCreatePrivateGame, false), record.UserID.String())
+	requireSetContains(t, store, store.keyspace.EligibilityMarkerIndex(policy.EligibilityMarkerCanJoinGame, true), record.UserID.String())
+}
+
+func TestAccountUpdateSyncsDeclaredCountryIndex(t *testing.T) {
+	t.Parallel()
+
+	store := newTestStore(t)
+	accountStore := store.Accounts()
+	record := validAccountRecord()
+	record.DeclaredCountry = common.CountryCode("DE")
+	require.NoError(t, accountStore.Create(context.Background(), createAccountInput(record)))
+
+	updated := record
+	updated.DeclaredCountry = common.CountryCode("FR")
+	updated.UpdatedAt = record.UpdatedAt.Add(time.Minute)
+	require.NoError(t, accountStore.Update(context.Background(), updated))
+
+	requireSetNotContains(t, store, store.keyspace.DeclaredCountryIndex(common.CountryCode("DE")), record.UserID.String())
+	requireSetContains(t, store, store.keyspace.DeclaredCountryIndex(common.CountryCode("FR")), record.UserID.String())
+}
+
+func TestEntitlementLifecycleSyncsAdminIndexes(t *testing.T) {
+	t.Parallel()
+
+	store := newTestStore(t)
+	now := time.Unix(1_775_240_000, 0).UTC()
+	record := validAccountRecord()
+	record.CreatedAt = now
+	record.UpdatedAt = now
+	_, err := store.EnsureByEmail(context.Background(), ports.EnsureByEmailInput{
+		Email:             record.Email,
+		Account:           record,
+		Entitlement:       validEntitlementSnapshot(record.UserID, now),
+		EntitlementRecord: validEntitlementRecord(record.UserID, now),
+		Reservation:       raceNameReservation(record.UserID, record.RaceName, now),
+	})
+	require.NoError(t, err)
+
+	lifecycleStore := store.EntitlementLifecycle()
+	freeRecord := validEntitlementRecord(record.UserID, now)
+	freeSnapshot := validEntitlementSnapshot(record.UserID, now)
+
+	grantStartsAt := now.Add(time.Hour)
+	grantEndsAt := grantStartsAt.Add(30 * 24 * time.Hour)
+	grantedRecord := paidEntitlementRecord(
+		entitlement.EntitlementRecordID("entitlement-paid-1"),
+		record.UserID,
+		entitlement.PlanCodePaidMonthly,
+		grantStartsAt,
+		grantEndsAt,
+		common.Source("admin"),
+		common.ReasonCode("manual_grant"),
+	)
+	grantedSnapshot := paidEntitlementSnapshot(
+		record.UserID,
+		entitlement.PlanCodePaidMonthly,
+		grantStartsAt,
+		grantEndsAt,
+		common.Source("admin"),
+		common.ReasonCode("manual_grant"),
+	)
+	closedFreeRecord := freeRecord
+	closedFreeRecord.ClosedAt = timePointer(grantStartsAt)
+	closedFreeRecord.ClosedBy = common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}
+	closedFreeRecord.ClosedReasonCode = common.ReasonCode("manual_grant")
+
+	require.NoError(t, lifecycleStore.Grant(context.Background(), ports.GrantEntitlementInput{
+		ExpectedCurrentSnapshot: freeSnapshot,
+		ExpectedCurrentRecord:   freeRecord,
+		UpdatedCurrentRecord:    closedFreeRecord,
+		NewRecord:               grantedRecord,
+		NewSnapshot:             grantedSnapshot,
+	}))
+
+	requireSetContains(t, store, store.keyspace.PaidStateIndex(entitlement.PaidStatePaid), record.UserID.String())
+	requireSetNotContains(t, store,
store.keyspace.PaidStateIndex(entitlement.PaidStateFree), record.UserID.String()) + requireSortedSetScore(t, store, store.keyspace.FinitePaidExpiryIndex(), record.UserID.String(), redisstate.ExpiryScore(grantEndsAt)) + requireSetContains(t, store, store.keyspace.EligibilityMarkerIndex(policy.EligibilityMarkerCanCreatePrivateGame, true), record.UserID.String()) + + extendedEndsAt := grantEndsAt.Add(30 * 24 * time.Hour) + extensionRecord := paidEntitlementRecord( + entitlement.EntitlementRecordID("entitlement-paid-2"), + record.UserID, + entitlement.PlanCodePaidMonthly, + grantEndsAt, + extendedEndsAt, + common.Source("admin"), + common.ReasonCode("manual_extend"), + ) + extendedSnapshot := paidEntitlementSnapshot( + record.UserID, + entitlement.PlanCodePaidMonthly, + grantStartsAt, + extendedEndsAt, + common.Source("admin"), + common.ReasonCode("manual_extend"), + ) + require.NoError(t, lifecycleStore.Extend(context.Background(), ports.ExtendEntitlementInput{ + ExpectedCurrentSnapshot: grantedSnapshot, + NewRecord: extensionRecord, + NewSnapshot: extendedSnapshot, + })) + + requireSortedSetScore(t, store, store.keyspace.FinitePaidExpiryIndex(), record.UserID.String(), redisstate.ExpiryScore(extendedEndsAt)) + + revokeAt := grantEndsAt.Add(12 * time.Hour) + revokedCurrentRecord := extensionRecord + revokedCurrentRecord.ClosedAt = timePointer(revokeAt) + revokedCurrentRecord.ClosedBy = common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")} + revokedCurrentRecord.ClosedReasonCode = common.ReasonCode("manual_revoke") + freeAfterRevokeRecord := entitlement.PeriodRecord{ + RecordID: entitlement.EntitlementRecordID("entitlement-free-2"), + UserID: record.UserID, + PlanCode: entitlement.PlanCodeFree, + Source: common.Source("admin"), + Actor: common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}, + ReasonCode: common.ReasonCode("manual_revoke"), + StartsAt: revokeAt, + CreatedAt: revokeAt, + } + freeAfterRevokeSnapshot := 
entitlement.CurrentSnapshot{ + UserID: record.UserID, + PlanCode: entitlement.PlanCodeFree, + IsPaid: false, + StartsAt: revokeAt, + Source: common.Source("admin"), + Actor: common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}, + ReasonCode: common.ReasonCode("manual_revoke"), + UpdatedAt: revokeAt, + } + require.NoError(t, lifecycleStore.Revoke(context.Background(), ports.RevokeEntitlementInput{ + ExpectedCurrentSnapshot: extendedSnapshot, + ExpectedCurrentRecord: extensionRecord, + UpdatedCurrentRecord: revokedCurrentRecord, + NewRecord: freeAfterRevokeRecord, + NewSnapshot: freeAfterRevokeSnapshot, + })) + + requireSetContains(t, store, store.keyspace.PaidStateIndex(entitlement.PaidStateFree), record.UserID.String()) + requireSetNotContains(t, store, store.keyspace.PaidStateIndex(entitlement.PaidStatePaid), record.UserID.String()) + requireSortedSetMissing(t, store, store.keyspace.FinitePaidExpiryIndex(), record.UserID.String()) + requireSetContains(t, store, store.keyspace.EligibilityMarkerIndex(policy.EligibilityMarkerCanCreatePrivateGame, false), record.UserID.String()) +} + +func TestPolicyLifecycleSyncsAdminIndexes(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + now := time.Unix(1_775_240_000, 0).UTC() + record := validAccountRecord() + record.CreatedAt = now + record.UpdatedAt = now + _, err := store.EnsureByEmail(context.Background(), ports.EnsureByEmailInput{ + Email: record.Email, + Account: record, + Entitlement: validEntitlementSnapshot(record.UserID, now), + EntitlementRecord: validEntitlementRecord(record.UserID, now), + Reservation: raceNameReservation(record.UserID, record.RaceName, now), + }) + require.NoError(t, err) + + lifecycleStore := store.PolicyLifecycle() + sanctionRecord := policy.SanctionRecord{ + RecordID: policy.SanctionRecordID("sanction-1"), + UserID: record.UserID, + SanctionCode: policy.SanctionCodeLoginBlock, + Scope: common.Scope("auth"), + ReasonCode: common.ReasonCode("manual_block"), + 
Actor:        common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+		AppliedAt:    now,
+	}
+	require.NoError(t, lifecycleStore.ApplySanction(context.Background(), ports.ApplySanctionInput{
+		NewRecord: sanctionRecord,
+	}))
+
+	requireSetContains(t, store, store.keyspace.ActiveSanctionCodeIndex(policy.SanctionCodeLoginBlock), record.UserID.String())
+	requireSetContains(t, store, store.keyspace.EligibilityMarkerIndex(policy.EligibilityMarkerCanLogin, false), record.UserID.String())
+	requireSetContains(t, store, store.keyspace.EligibilityMarkerIndex(policy.EligibilityMarkerCanJoinGame, false), record.UserID.String())
+
+	removedSanction := sanctionRecord
+	removedAt := now.Add(time.Minute)
+	removedSanction.RemovedAt = &removedAt
+	removedSanction.RemovedBy = common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-2")}
+	removedSanction.RemovedReasonCode = common.ReasonCode("manual_remove")
+	require.NoError(t, lifecycleStore.RemoveSanction(context.Background(), ports.RemoveSanctionInput{
+		ExpectedActiveRecord: sanctionRecord,
+		UpdatedRecord:        removedSanction,
+	}))
+
+	requireSetNotContains(t, store, store.keyspace.ActiveSanctionCodeIndex(policy.SanctionCodeLoginBlock), record.UserID.String())
+	requireSetContains(t, store, store.keyspace.EligibilityMarkerIndex(policy.EligibilityMarkerCanLogin, true), record.UserID.String())
+
+	limitRecord := policy.LimitRecord{
+		RecordID:   policy.LimitRecordID("limit-1"),
+		UserID:     record.UserID,
+		LimitCode:  policy.LimitCodeMaxOwnedPrivateGames,
+		Value:      5,
+		ReasonCode: common.ReasonCode("manual_override"),
+		Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+		AppliedAt:  now.Add(2 * time.Minute),
+	}
+	require.NoError(t, lifecycleStore.SetLimit(context.Background(), ports.SetLimitInput{
+		NewRecord: limitRecord,
+	}))
+
+	requireSetContains(t, store, store.keyspace.ActiveLimitCodeIndex(policy.LimitCodeMaxOwnedPrivateGames), record.UserID.String())
+
+	removedLimit := limitRecord
+	limitRemovedAt := now.Add(3 * time.Minute)
+	removedLimit.RemovedAt = &limitRemovedAt
+	removedLimit.RemovedBy = common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-2")}
+	removedLimit.RemovedReasonCode = common.ReasonCode("manual_remove")
+	require.NoError(t, lifecycleStore.RemoveLimit(context.Background(), ports.RemoveLimitInput{
+		ExpectedActiveRecord: limitRecord,
+		UpdatedRecord:        removedLimit,
+	}))
+
+	requireSetNotContains(t, store, store.keyspace.ActiveLimitCodeIndex(policy.LimitCodeMaxOwnedPrivateGames), record.UserID.String())
+}
+
+func TestAdminListerReevaluatesExpiredPaidSnapshots(t *testing.T) {
+	t.Parallel()
+
+	store := newTestStore(t)
+	userID := common.UserID("user-123")
+	now := time.Unix(1_775_240_000, 0).UTC()
+	record := validAccountRecord()
+	record.CreatedAt = now.Add(-2 * time.Hour)
+	record.UpdatedAt = record.CreatedAt
+	_, err := store.EnsureByEmail(context.Background(), ports.EnsureByEmailInput{
+		Email:             record.Email,
+		Account:           record,
+		Entitlement:       validEntitlementSnapshot(userID, record.CreatedAt),
+		EntitlementRecord: validEntitlementRecord(userID, record.CreatedAt),
+		Reservation:       raceNameReservation(userID, record.RaceName, record.CreatedAt),
+	})
+	require.NoError(t, err)
+
+	grantStartsAt := now.Add(-90 * time.Minute)
+	grantEndsAt := now.Add(-30 * time.Minute)
+	freeRecord := validEntitlementRecord(userID, record.CreatedAt)
+	freeSnapshot := validEntitlementSnapshot(userID, record.CreatedAt)
+	grantedRecord := paidEntitlementRecord(
+		entitlement.EntitlementRecordID("entitlement-paid-expired"),
+		userID,
+		entitlement.PlanCodePaidMonthly,
+		grantStartsAt,
+		grantEndsAt,
+		common.Source("admin"),
+		common.ReasonCode("manual_grant"),
+	)
+	grantedSnapshot := paidEntitlementSnapshot(
+		userID,
+		entitlement.PlanCodePaidMonthly,
+		grantStartsAt,
+		grantEndsAt,
+		common.Source("admin"),
+		common.ReasonCode("manual_grant"),
+	)
+	closedFreeRecord := freeRecord
+	closedFreeRecord.ClosedAt = timePointer(grantStartsAt)
+	closedFreeRecord.ClosedBy = common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}
+	closedFreeRecord.ClosedReasonCode = common.ReasonCode("manual_grant")
+	require.NoError(t, store.EntitlementLifecycle().Grant(context.Background(), ports.GrantEntitlementInput{
+		ExpectedCurrentSnapshot: freeSnapshot,
+		ExpectedCurrentRecord:   freeRecord,
+		UpdatedCurrentRecord:    closedFreeRecord,
+		NewRecord:               grantedRecord,
+		NewSnapshot:             grantedSnapshot,
+	}))
+
+	reader, err := entitlementsvc.NewReader(
+		store.EntitlementSnapshots(),
+		store.EntitlementLifecycle(),
+		adminStoreClock{now: now},
+		adminStoreIDGenerator{entitlementRecordID: entitlement.EntitlementRecordID("entitlement-free-after-expiry")},
+	)
+	require.NoError(t, err)
+	lister, err := adminusers.NewLister(store.Accounts(), reader, store.Sanctions(), store.Limits(), adminStoreClock{now: now}, store)
+	require.NoError(t, err)
+
+	result, err := lister.Execute(context.Background(), adminusers.ListUsersInput{PaidState: "free"})
+	require.NoError(t, err)
+	require.Len(t, result.Items, 1)
+	require.Equal(t, "user-123", result.Items[0].UserID)
+	require.Equal(t, "free", result.Items[0].Entitlement.PlanCode)
+	require.False(t, result.Items[0].Entitlement.IsPaid)
+
+	storedSnapshot, err := store.EntitlementSnapshots().GetByUserID(context.Background(), userID)
+	require.NoError(t, err)
+	require.Equal(t, entitlement.PlanCodeFree, storedSnapshot.PlanCode)
+	require.False(t, storedSnapshot.IsPaid)
+}
+
+type adminStoreClock struct {
+	now time.Time
+}
+
+func (clock adminStoreClock) Now() time.Time {
+	return clock.now
+}
+
+type adminStoreIDGenerator struct {
+	entitlementRecordID entitlement.EntitlementRecordID
+}
+
+func (generator adminStoreIDGenerator) NewUserID() (common.UserID, error) {
+	return "", nil
+}
+
+func (generator adminStoreIDGenerator) NewInitialRaceName() (common.RaceName, error) {
+	return "", nil
+}
+
+func (generator adminStoreIDGenerator) NewEntitlementRecordID() (entitlement.EntitlementRecordID, error) {
+	return generator.entitlementRecordID, nil
+}
+
+func (generator adminStoreIDGenerator) NewSanctionRecordID() (policy.SanctionRecordID, error) {
+	return "", nil
+}
+
+func (generator adminStoreIDGenerator) NewLimitRecordID() (policy.LimitRecordID, error) {
+	return "", nil
+}
+
+func requireSetContains(t *testing.T, store *Store, key string, member string) {
+	t.Helper()
+
+	exists, err := store.client.SIsMember(context.Background(), key, member).Result()
+	require.NoError(t, err)
+	require.True(t, exists, "expected %q to contain %q", key, member)
+}
+
+func requireSetNotContains(t *testing.T, store *Store, key string, member string) {
+	t.Helper()
+
+	exists, err := store.client.SIsMember(context.Background(), key, member).Result()
+	require.NoError(t, err)
+	require.False(t, exists, "expected %q not to contain %q", key, member)
+}
+
+func requireSortedSetScore(t *testing.T, store *Store, key string, member string, want float64) {
+	t.Helper()
+
+	got, err := store.client.ZScore(context.Background(), key, member).Result()
+	require.NoError(t, err)
+	require.Equal(t, want, got)
+}
+
+func requireSortedSetMissing(t *testing.T, store *Store, key string, member string) {
+	t.Helper()
+
+	_, err := store.client.ZScore(context.Background(), key, member).Result()
+	require.Error(t, err)
+}
diff --git a/user/internal/adapters/redis/userstore/entitlement_store.go b/user/internal/adapters/redis/userstore/entitlement_store.go
new file mode 100644
index 0000000..de31b39
--- /dev/null
+++ b/user/internal/adapters/redis/userstore/entitlement_store.go
@@ -0,0 +1,752 @@
+package userstore
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"time"
+
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/domain/entitlement"
+	"galaxy/user/internal/ports"
+
+	"github.com/redis/go-redis/v9"
+)
+
+type entitlementPeriodRecord struct {
+	RecordID         string  `json:"record_id"`
UserID           string  `json:"user_id"`
+	PlanCode         string  `json:"plan_code"`
+	Source           string  `json:"source"`
+	ActorType        string  `json:"actor_type"`
+	ActorID          *string `json:"actor_id,omitempty"`
+	ReasonCode       string  `json:"reason_code"`
+	StartsAt         string  `json:"starts_at"`
+	EndsAt           *string `json:"ends_at,omitempty"`
+	CreatedAt        string  `json:"created_at"`
+	ClosedAt         *string `json:"closed_at,omitempty"`
+	ClosedByType     *string `json:"closed_by_type,omitempty"`
+	ClosedByID       *string `json:"closed_by_id,omitempty"`
+	ClosedReasonCode *string `json:"closed_reason_code,omitempty"`
+}
+
+// CreateEntitlementRecord stores one new entitlement history record.
+func (store *Store) CreateEntitlementRecord(ctx context.Context, record entitlement.PeriodRecord) error {
+	if err := record.Validate(); err != nil {
+		return fmt.Errorf("create entitlement record in redis: %w", err)
+	}
+
+	payload, err := marshalEntitlementPeriodRecord(record)
+	if err != nil {
+		return fmt.Errorf("create entitlement record in redis: %w", err)
+	}
+
+	recordKey := store.keyspace.EntitlementRecord(record.RecordID)
+	historyKey := store.keyspace.EntitlementHistory(record.UserID)
+
+	operationCtx, cancel, err := store.operationContext(ctx, "create entitlement record in redis")
+	if err != nil {
+		return err
+	}
+	defer cancel()
+
+	watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error {
+		if err := ensureKeyAbsent(operationCtx, tx, recordKey); err != nil {
+			return fmt.Errorf("create entitlement record %q in redis: %w", record.RecordID, err)
+		}
+
+		_, err := tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error {
+			pipe.Set(operationCtx, recordKey, payload, 0)
+			pipe.ZAdd(operationCtx, historyKey, redis.Z{
+				Score:  float64(record.StartsAt.UTC().UnixMicro()),
+				Member: record.RecordID.String(),
+			})
+			return nil
+		})
+		if err != nil {
+			return fmt.Errorf("create entitlement record %q in redis: %w", record.RecordID, err)
+		}
+
+		return nil
+	}, recordKey, historyKey)
+
+	switch {
+	case errors.Is(watchErr, redis.TxFailedErr):
+		return fmt.Errorf("create entitlement record %q in redis: %w", record.RecordID, ports.ErrConflict)
+	case watchErr != nil:
+		return watchErr
+	default:
+		return nil
+	}
+}
+
+// GetEntitlementRecordByRecordID returns the entitlement history record
+// identified by recordID.
+func (store *Store) GetEntitlementRecordByRecordID(
+	ctx context.Context,
+	recordID entitlement.EntitlementRecordID,
+) (entitlement.PeriodRecord, error) {
+	if err := recordID.Validate(); err != nil {
+		return entitlement.PeriodRecord{}, fmt.Errorf("get entitlement record by record id from redis: %w", err)
+	}
+
+	operationCtx, cancel, err := store.operationContext(ctx, "get entitlement record by record id from redis")
+	if err != nil {
+		return entitlement.PeriodRecord{}, err
+	}
+	defer cancel()
+
+	record, err := store.loadEntitlementRecord(operationCtx, store.client, recordID)
+	if err != nil {
+		switch {
+		case errors.Is(err, ports.ErrNotFound):
+			return entitlement.PeriodRecord{}, fmt.Errorf("get entitlement record by record id %q from redis: %w", recordID, ports.ErrNotFound)
+		default:
+			return entitlement.PeriodRecord{}, fmt.Errorf("get entitlement record by record id %q from redis: %w", recordID, err)
+		}
+	}
+
+	return record, nil
+}
+
+// ListEntitlementRecordsByUserID returns every entitlement history record
+// owned by userID.
+func (store *Store) ListEntitlementRecordsByUserID(
+	ctx context.Context,
+	userID common.UserID,
+) ([]entitlement.PeriodRecord, error) {
+	if err := userID.Validate(); err != nil {
+		return nil, fmt.Errorf("list entitlement records by user id from redis: %w", err)
+	}
+
+	operationCtx, cancel, err := store.operationContext(ctx, "list entitlement records by user id from redis")
+	if err != nil {
+		return nil, err
+	}
+	defer cancel()
+
+	recordIDs, err := store.client.ZRange(operationCtx, store.keyspace.EntitlementHistory(userID), 0, -1).Result()
+	if err != nil {
+		return nil, fmt.Errorf("list entitlement records by user id %q from redis: %w", userID, err)
+	}
+
+	records := make([]entitlement.PeriodRecord, 0, len(recordIDs))
+	for _, rawRecordID := range recordIDs {
+		record, err := store.loadEntitlementRecord(operationCtx, store.client, entitlement.EntitlementRecordID(rawRecordID))
+		if err != nil {
+			return nil, fmt.Errorf("list entitlement records by user id %q from redis: %w", userID, err)
+		}
+		records = append(records, record)
+	}
+
+	return records, nil
+}
+
+// UpdateEntitlementRecord replaces one stored entitlement history record.
+func (store *Store) UpdateEntitlementRecord(ctx context.Context, record entitlement.PeriodRecord) error {
+	if err := record.Validate(); err != nil {
+		return fmt.Errorf("update entitlement record in redis: %w", err)
+	}
+
+	payload, err := marshalEntitlementPeriodRecord(record)
+	if err != nil {
+		return fmt.Errorf("update entitlement record in redis: %w", err)
+	}
+
+	recordKey := store.keyspace.EntitlementRecord(record.RecordID)
+
+	operationCtx, cancel, err := store.operationContext(ctx, "update entitlement record in redis")
+	if err != nil {
+		return err
+	}
+	defer cancel()
+
+	watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error {
+		if _, err := store.loadEntitlementRecord(operationCtx, tx, record.RecordID); err != nil {
+			return fmt.Errorf("update entitlement record %q in redis: %w", record.RecordID, err)
+		}
+
+		_, err := tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error {
+			pipe.Set(operationCtx, recordKey, payload, 0)
+			return nil
+		})
+		if err != nil {
+			return fmt.Errorf("update entitlement record %q in redis: %w", record.RecordID, err)
+		}
+
+		return nil
+	}, recordKey)
+
+	switch {
+	case errors.Is(watchErr, redis.TxFailedErr):
+		return fmt.Errorf("update entitlement record %q in redis: %w", record.RecordID, ports.ErrConflict)
+	case watchErr != nil:
+		return watchErr
+	default:
+		return nil
+	}
+}
+
+// GrantEntitlement atomically closes the current free history record, creates
+// one paid history record, and replaces the current snapshot.
+func (store *Store) GrantEntitlement(ctx context.Context, input ports.GrantEntitlementInput) error {
+	if err := input.Validate(); err != nil {
+		return fmt.Errorf("grant entitlement in redis: %w", err)
+	}
+
+	updatedCurrentRecordPayload, err := marshalEntitlementPeriodRecord(input.UpdatedCurrentRecord)
+	if err != nil {
+		return fmt.Errorf("grant entitlement in redis: %w", err)
+	}
+	newRecordPayload, err := marshalEntitlementPeriodRecord(input.NewRecord)
+	if err != nil {
+		return fmt.Errorf("grant entitlement in redis: %w", err)
+	}
+	newSnapshotPayload, err := marshalEntitlementSnapshotRecord(input.NewSnapshot)
+	if err != nil {
+		return fmt.Errorf("grant entitlement in redis: %w", err)
+	}
+
+	currentRecordKey := store.keyspace.EntitlementRecord(input.ExpectedCurrentRecord.RecordID)
+	newRecordKey := store.keyspace.EntitlementRecord(input.NewRecord.RecordID)
+	historyKey := store.keyspace.EntitlementHistory(input.NewRecord.UserID)
+	snapshotKey := store.keyspace.EntitlementSnapshot(input.NewSnapshot.UserID)
+	watchedKeys := append(
+		[]string{currentRecordKey, newRecordKey, historyKey, snapshotKey},
+		store.activeSanctionWatchKeys(input.NewSnapshot.UserID)...,
+	)
+
+	operationCtx, cancel, err := store.operationContext(ctx, "grant entitlement in redis")
+	if err != nil {
+		return err
+	}
+	defer cancel()
+
+	watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error {
+		storedSnapshot, err := store.loadEntitlementSnapshot(operationCtx, tx, input.ExpectedCurrentSnapshot.UserID)
+		if err != nil {
+			return fmt.Errorf("grant entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, err)
+		}
+		if !equalEntitlementSnapshots(storedSnapshot, input.ExpectedCurrentSnapshot) {
+			return fmt.Errorf("grant entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, ports.ErrConflict)
+		}
+
+		storedCurrentRecord, err := store.loadEntitlementRecord(operationCtx, tx, input.ExpectedCurrentRecord.RecordID)
+		if err != nil {
+			return fmt.Errorf("grant entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, err)
+		}
+		if !equalEntitlementPeriodRecords(storedCurrentRecord, input.ExpectedCurrentRecord) {
+			return fmt.Errorf("grant entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, ports.ErrConflict)
+		}
+		if err := ensureKeyAbsent(operationCtx, tx, newRecordKey); err != nil {
+			return fmt.Errorf("grant entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, err)
+		}
+		activeSanctionCodes, err := store.loadActiveSanctionCodeSet(operationCtx, tx, input.NewSnapshot.UserID)
+		if err != nil {
+			return fmt.Errorf("grant entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, err)
+		}
+
+		_, err = tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error {
+			pipe.Set(operationCtx, currentRecordKey, updatedCurrentRecordPayload, 0)
+			pipe.Set(operationCtx, newRecordKey, newRecordPayload, 0)
+			pipe.ZAdd(operationCtx, historyKey, redis.Z{
+				Score:  float64(input.NewRecord.StartsAt.UTC().UnixMicro()),
+				Member: input.NewRecord.RecordID.String(),
+			})
+			pipe.Set(operationCtx, snapshotKey, newSnapshotPayload, 0)
+			store.syncEntitlementIndexes(pipe, operationCtx, input.NewSnapshot)
+			store.syncEligibilityMarkerIndexes(pipe, operationCtx, input.NewSnapshot.UserID, input.NewSnapshot.IsPaid, activeSanctionCodes)
+			return nil
+		})
+		if err != nil {
+			return fmt.Errorf("grant entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, err)
+		}
+
+		return nil
+	}, watchedKeys...)
+
+	switch {
+	case errors.Is(watchErr, redis.TxFailedErr):
+		return fmt.Errorf("grant entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, ports.ErrConflict)
+	case watchErr != nil:
+		return watchErr
+	default:
+		return nil
+	}
+}
+
+// ExtendEntitlement atomically appends one paid history segment and replaces
+// the current paid snapshot.
+func (store *Store) ExtendEntitlement(ctx context.Context, input ports.ExtendEntitlementInput) error {
+	if err := input.Validate(); err != nil {
+		return fmt.Errorf("extend entitlement in redis: %w", err)
+	}
+
+	newRecordPayload, err := marshalEntitlementPeriodRecord(input.NewRecord)
+	if err != nil {
+		return fmt.Errorf("extend entitlement in redis: %w", err)
+	}
+	newSnapshotPayload, err := marshalEntitlementSnapshotRecord(input.NewSnapshot)
+	if err != nil {
+		return fmt.Errorf("extend entitlement in redis: %w", err)
+	}
+
+	newRecordKey := store.keyspace.EntitlementRecord(input.NewRecord.RecordID)
+	historyKey := store.keyspace.EntitlementHistory(input.NewRecord.UserID)
+	snapshotKey := store.keyspace.EntitlementSnapshot(input.NewSnapshot.UserID)
+	watchedKeys := append(
+		[]string{newRecordKey, historyKey, snapshotKey},
+		store.activeSanctionWatchKeys(input.NewSnapshot.UserID)...,
+	)
+
+	operationCtx, cancel, err := store.operationContext(ctx, "extend entitlement in redis")
+	if err != nil {
+		return err
+	}
+	defer cancel()
+
+	watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error {
+		storedSnapshot, err := store.loadEntitlementSnapshot(operationCtx, tx, input.ExpectedCurrentSnapshot.UserID)
+		if err != nil {
+			return fmt.Errorf("extend entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, err)
+		}
+		if !equalEntitlementSnapshots(storedSnapshot, input.ExpectedCurrentSnapshot) {
+			return fmt.Errorf("extend entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, ports.ErrConflict)
+		}
+		if err := ensureKeyAbsent(operationCtx, tx, newRecordKey); err != nil {
+			return fmt.Errorf("extend entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, err)
+		}
+		activeSanctionCodes, err := store.loadActiveSanctionCodeSet(operationCtx, tx, input.NewSnapshot.UserID)
+		if err != nil {
+			return fmt.Errorf("extend entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, err)
+		}
+
+		_, err = tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error {
+			pipe.Set(operationCtx, newRecordKey, newRecordPayload, 0)
+			pipe.ZAdd(operationCtx, historyKey, redis.Z{
+				Score:  float64(input.NewRecord.StartsAt.UTC().UnixMicro()),
+				Member: input.NewRecord.RecordID.String(),
+			})
+			pipe.Set(operationCtx, snapshotKey, newSnapshotPayload, 0)
+			store.syncEntitlementIndexes(pipe, operationCtx, input.NewSnapshot)
+			store.syncEligibilityMarkerIndexes(pipe, operationCtx, input.NewSnapshot.UserID, input.NewSnapshot.IsPaid, activeSanctionCodes)
+			return nil
+		})
+		if err != nil {
+			return fmt.Errorf("extend entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, err)
+		}
+
+		return nil
+	}, watchedKeys...)
+
+	switch {
+	case errors.Is(watchErr, redis.TxFailedErr):
+		return fmt.Errorf("extend entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, ports.ErrConflict)
+	case watchErr != nil:
+		return watchErr
+	default:
+		return nil
+	}
+}
+
+// RevokeEntitlement atomically closes the current paid history record,
+// creates one free history record, and replaces the current snapshot.
+func (store *Store) RevokeEntitlement(ctx context.Context, input ports.RevokeEntitlementInput) error { + if err := input.Validate(); err != nil { + return fmt.Errorf("revoke entitlement in redis: %w", err) + } + + updatedCurrentRecordPayload, err := marshalEntitlementPeriodRecord(input.UpdatedCurrentRecord) + if err != nil { + return fmt.Errorf("revoke entitlement in redis: %w", err) + } + newRecordPayload, err := marshalEntitlementPeriodRecord(input.NewRecord) + if err != nil { + return fmt.Errorf("revoke entitlement in redis: %w", err) + } + newSnapshotPayload, err := marshalEntitlementSnapshotRecord(input.NewSnapshot) + if err != nil { + return fmt.Errorf("revoke entitlement in redis: %w", err) + } + + currentRecordKey := store.keyspace.EntitlementRecord(input.ExpectedCurrentRecord.RecordID) + newRecordKey := store.keyspace.EntitlementRecord(input.NewRecord.RecordID) + historyKey := store.keyspace.EntitlementHistory(input.NewRecord.UserID) + snapshotKey := store.keyspace.EntitlementSnapshot(input.NewSnapshot.UserID) + watchedKeys := append( + []string{currentRecordKey, newRecordKey, historyKey, snapshotKey}, + store.activeSanctionWatchKeys(input.NewSnapshot.UserID)..., + ) + + operationCtx, cancel, err := store.operationContext(ctx, "revoke entitlement in redis") + if err != nil { + return err + } + defer cancel() + + watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error { + storedSnapshot, err := store.loadEntitlementSnapshot(operationCtx, tx, input.ExpectedCurrentSnapshot.UserID) + if err != nil { + return fmt.Errorf("revoke entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, err) + } + if !equalEntitlementSnapshots(storedSnapshot, input.ExpectedCurrentSnapshot) { + return fmt.Errorf("revoke entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, ports.ErrConflict) + } + + storedCurrentRecord, err := store.loadEntitlementRecord(operationCtx, tx, input.ExpectedCurrentRecord.RecordID) + if err != nil 
{ + return fmt.Errorf("revoke entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, err) + } + if !equalEntitlementPeriodRecords(storedCurrentRecord, input.ExpectedCurrentRecord) { + return fmt.Errorf("revoke entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, ports.ErrConflict) + } + if err := ensureKeyAbsent(operationCtx, tx, newRecordKey); err != nil { + return fmt.Errorf("revoke entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, err) + } + activeSanctionCodes, err := store.loadActiveSanctionCodeSet(operationCtx, tx, input.NewSnapshot.UserID) + if err != nil { + return fmt.Errorf("revoke entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, err) + } + + _, err = tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error { + pipe.Set(operationCtx, currentRecordKey, updatedCurrentRecordPayload, 0) + pipe.Set(operationCtx, newRecordKey, newRecordPayload, 0) + pipe.ZAdd(operationCtx, historyKey, redis.Z{ + Score: float64(input.NewRecord.StartsAt.UTC().UnixMicro()), + Member: input.NewRecord.RecordID.String(), + }) + pipe.Set(operationCtx, snapshotKey, newSnapshotPayload, 0) + store.syncEntitlementIndexes(pipe, operationCtx, input.NewSnapshot) + store.syncEligibilityMarkerIndexes(pipe, operationCtx, input.NewSnapshot.UserID, input.NewSnapshot.IsPaid, activeSanctionCodes) + return nil + }) + if err != nil { + return fmt.Errorf("revoke entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, err) + } + + return nil + }, watchedKeys...) + + switch { + case errors.Is(watchErr, redis.TxFailedErr): + return fmt.Errorf("revoke entitlement for user %q in redis: %w", input.ExpectedCurrentSnapshot.UserID, ports.ErrConflict) + case watchErr != nil: + return watchErr + default: + return nil + } +} + +// RepairExpiredEntitlement atomically replaces one expired finite paid +// snapshot with a materialized free state. 
+func (store *Store) RepairExpiredEntitlement(ctx context.Context, input ports.RepairExpiredEntitlementInput) error {
+	if err := input.Validate(); err != nil {
+		return fmt.Errorf("repair expired entitlement in redis: %w", err)
+	}
+
+	newRecordPayload, err := marshalEntitlementPeriodRecord(input.NewRecord)
+	if err != nil {
+		return fmt.Errorf("repair expired entitlement in redis: %w", err)
+	}
+	newSnapshotPayload, err := marshalEntitlementSnapshotRecord(input.NewSnapshot)
+	if err != nil {
+		return fmt.Errorf("repair expired entitlement in redis: %w", err)
+	}
+
+	newRecordKey := store.keyspace.EntitlementRecord(input.NewRecord.RecordID)
+	historyKey := store.keyspace.EntitlementHistory(input.NewRecord.UserID)
+	snapshotKey := store.keyspace.EntitlementSnapshot(input.NewSnapshot.UserID)
+	watchedKeys := append(
+		[]string{newRecordKey, historyKey, snapshotKey},
+		store.activeSanctionWatchKeys(input.NewSnapshot.UserID)...,
+	)
+
+	operationCtx, cancel, err := store.operationContext(ctx, "repair expired entitlement in redis")
+	if err != nil {
+		return err
+	}
+	defer cancel()
+
+	watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error {
+		storedSnapshot, err := store.loadEntitlementSnapshot(operationCtx, tx, input.ExpectedExpiredSnapshot.UserID)
+		if err != nil {
+			return fmt.Errorf("repair expired entitlement for user %q in redis: %w", input.ExpectedExpiredSnapshot.UserID, err)
+		}
+		if !equalEntitlementSnapshots(storedSnapshot, input.ExpectedExpiredSnapshot) {
+			return fmt.Errorf("repair expired entitlement for user %q in redis: %w", input.ExpectedExpiredSnapshot.UserID, ports.ErrConflict)
+		}
+		if err := ensureKeyAbsent(operationCtx, tx, newRecordKey); err != nil {
+			return fmt.Errorf("repair expired entitlement for user %q in redis: %w", input.ExpectedExpiredSnapshot.UserID, err)
+		}
+		activeSanctionCodes, err := store.loadActiveSanctionCodeSet(operationCtx, tx, input.NewSnapshot.UserID)
+		if err != nil {
+			return fmt.Errorf("repair expired entitlement for user %q in redis: %w", input.ExpectedExpiredSnapshot.UserID, err)
+		}
+
+		_, err = tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error {
+			pipe.Set(operationCtx, newRecordKey, newRecordPayload, 0)
+			pipe.ZAdd(operationCtx, historyKey, redis.Z{
+				Score:  float64(input.NewRecord.StartsAt.UTC().UnixMicro()),
+				Member: input.NewRecord.RecordID.String(),
+			})
+			pipe.Set(operationCtx, snapshotKey, newSnapshotPayload, 0)
+			store.syncEntitlementIndexes(pipe, operationCtx, input.NewSnapshot)
+			store.syncEligibilityMarkerIndexes(pipe, operationCtx, input.NewSnapshot.UserID, input.NewSnapshot.IsPaid, activeSanctionCodes)
+			return nil
+		})
+		if err != nil {
+			return fmt.Errorf("repair expired entitlement for user %q in redis: %w", input.ExpectedExpiredSnapshot.UserID, err)
+		}
+
+		return nil
+	}, watchedKeys...)
+
+	switch {
+	case errors.Is(watchErr, redis.TxFailedErr):
+		return fmt.Errorf("repair expired entitlement for user %q in redis: %w", input.ExpectedExpiredSnapshot.UserID, ports.ErrConflict)
+	case watchErr != nil:
+		return watchErr
+	default:
+		return nil
+	}
+}
+
+func (store *Store) loadEntitlementRecord(
+	ctx context.Context,
+	getter bytesGetter,
+	recordID entitlement.EntitlementRecordID,
+) (entitlement.PeriodRecord, error) {
+	payload, err := getter.Get(ctx, store.keyspace.EntitlementRecord(recordID)).Bytes()
+	switch {
+	case errors.Is(err, redis.Nil):
+		return entitlement.PeriodRecord{}, ports.ErrNotFound
+	case err != nil:
+		return entitlement.PeriodRecord{}, err
+	}
+
+	return decodeEntitlementPeriodRecord(payload)
+}
+
+func marshalEntitlementPeriodRecord(record entitlement.PeriodRecord) ([]byte, error) {
+	encoded := entitlementPeriodRecord{
+		RecordID:   record.RecordID.String(),
+		UserID:     record.UserID.String(),
+		PlanCode:   string(record.PlanCode),
+		Source:     record.Source.String(),
+		ActorType:  record.Actor.Type.String(),
+		ReasonCode: record.ReasonCode.String(),
+		StartsAt:   record.StartsAt.UTC().Format(time.RFC3339Nano),
+		CreatedAt:  record.CreatedAt.UTC().Format(time.RFC3339Nano),
+	}
+	if !record.Actor.ID.IsZero() {
+		value := record.Actor.ID.String()
+		encoded.ActorID = &value
+	}
+	if record.EndsAt != nil {
+		value := record.EndsAt.UTC().Format(time.RFC3339Nano)
+		encoded.EndsAt = &value
+	}
+	if record.ClosedAt != nil {
+		value := record.ClosedAt.UTC().Format(time.RFC3339Nano)
+		encoded.ClosedAt = &value
+	}
+	if !record.ClosedBy.Type.IsZero() {
+		value := record.ClosedBy.Type.String()
+		encoded.ClosedByType = &value
+	}
+	if !record.ClosedBy.ID.IsZero() {
+		value := record.ClosedBy.ID.String()
+		encoded.ClosedByID = &value
+	}
+	if !record.ClosedReasonCode.IsZero() {
+		value := record.ClosedReasonCode.String()
+		encoded.ClosedReasonCode = &value
+	}
+
+	return json.Marshal(encoded)
+}
+
+func decodeEntitlementPeriodRecord(payload []byte) (entitlement.PeriodRecord, error) {
+	var encoded entitlementPeriodRecord
+	if err := decodeJSONPayload(payload, &encoded); err != nil {
+		return entitlement.PeriodRecord{}, err
+	}
+
+	startsAt, err := time.Parse(time.RFC3339Nano, encoded.StartsAt)
+	if err != nil {
+		return entitlement.PeriodRecord{}, fmt.Errorf("decode entitlement period record starts_at: %w", err)
+	}
+	createdAt, err := time.Parse(time.RFC3339Nano, encoded.CreatedAt)
+	if err != nil {
+		return entitlement.PeriodRecord{}, fmt.Errorf("decode entitlement period record created_at: %w", err)
+	}
+
+	record := entitlement.PeriodRecord{
+		RecordID:   entitlement.EntitlementRecordID(encoded.RecordID),
+		UserID:     common.UserID(encoded.UserID),
+		PlanCode:   entitlement.PlanCode(encoded.PlanCode),
+		Source:     common.Source(encoded.Source),
+		Actor:      common.ActorRef{Type: common.ActorType(encoded.ActorType)},
+		ReasonCode: common.ReasonCode(encoded.ReasonCode),
+		StartsAt:   startsAt.UTC(),
+		CreatedAt:  createdAt.UTC(),
+	}
+	if encoded.ActorID != nil {
+		record.Actor.ID = common.ActorID(*encoded.ActorID)
+	}
+	if encoded.EndsAt != nil {
+		value, err := time.Parse(time.RFC3339Nano, *encoded.EndsAt)
+		if err != nil {
+			return entitlement.PeriodRecord{}, fmt.Errorf("decode entitlement period record ends_at: %w", err)
+		}
+		value = value.UTC()
+		record.EndsAt = &value
+	}
+	if encoded.ClosedAt != nil {
+		value, err := time.Parse(time.RFC3339Nano, *encoded.ClosedAt)
+		if err != nil {
+			return entitlement.PeriodRecord{}, fmt.Errorf("decode entitlement period record closed_at: %w", err)
+		}
+		value = value.UTC()
+		record.ClosedAt = &value
+	}
+	if encoded.ClosedByType != nil {
+		record.ClosedBy.Type = common.ActorType(*encoded.ClosedByType)
+	}
+	if encoded.ClosedByID != nil {
+		record.ClosedBy.ID = common.ActorID(*encoded.ClosedByID)
+	}
+	if encoded.ClosedReasonCode != nil {
+		record.ClosedReasonCode = common.ReasonCode(*encoded.ClosedReasonCode)
+	}
+	if err := record.Validate(); err != nil {
+		return entitlement.PeriodRecord{}, fmt.Errorf("decode entitlement period record: %w", err)
+	}
+
+	return record, nil
+}
+
+func equalEntitlementSnapshots(left entitlement.CurrentSnapshot, right entitlement.CurrentSnapshot) bool {
+	return left.UserID == right.UserID &&
+		left.PlanCode == right.PlanCode &&
+		left.IsPaid == right.IsPaid &&
+		left.StartsAt.Equal(right.StartsAt) &&
+		equalOptionalTime(left.EndsAt, right.EndsAt) &&
+		left.Source == right.Source &&
+		left.Actor == right.Actor &&
+		left.ReasonCode == right.ReasonCode &&
+		left.UpdatedAt.Equal(right.UpdatedAt)
+}
+
+func equalEntitlementPeriodRecords(left entitlement.PeriodRecord, right entitlement.PeriodRecord) bool {
+	return left.RecordID == right.RecordID &&
+		left.UserID == right.UserID &&
+		left.PlanCode == right.PlanCode &&
+		left.Source == right.Source &&
+		left.Actor == right.Actor &&
+		left.ReasonCode == right.ReasonCode &&
+		left.StartsAt.Equal(right.StartsAt) &&
+		equalOptionalTime(left.EndsAt, right.EndsAt) &&
+		left.CreatedAt.Equal(right.CreatedAt) &&
+		equalOptionalTime(left.ClosedAt, right.ClosedAt) &&
+		left.ClosedBy == right.ClosedBy &&
+		left.ClosedReasonCode == right.ClosedReasonCode
+}
+
+func equalOptionalTime(left *time.Time, right *time.Time) bool {
+	switch {
+	case left == nil && right == nil:
+		return true
+	case left == nil || right == nil:
+		return false
+	default:
+		return left.Equal(*right)
+	}
+}
+
+// EntitlementHistoryStore adapts Store to the existing
+// EntitlementHistoryStore port.
+type EntitlementHistoryStore struct {
+	store *Store
+}
+
+// EntitlementHistory returns one adapter that exposes the entitlement-history
+// store port over Store.
+func (store *Store) EntitlementHistory() *EntitlementHistoryStore {
+	if store == nil {
+		return nil
+	}
+
+	return &EntitlementHistoryStore{store: store}
+}
+
+// Create stores one new entitlement history record.
+func (adapter *EntitlementHistoryStore) Create(ctx context.Context, record entitlement.PeriodRecord) error {
+	return adapter.store.CreateEntitlementRecord(ctx, record)
+}
+
+// GetByRecordID returns the entitlement history record identified by recordID.
+func (adapter *EntitlementHistoryStore) GetByRecordID(
+	ctx context.Context,
+	recordID entitlement.EntitlementRecordID,
+) (entitlement.PeriodRecord, error) {
+	return adapter.store.GetEntitlementRecordByRecordID(ctx, recordID)
+}
+
+// ListByUserID returns every entitlement history record owned by userID.
+func (adapter *EntitlementHistoryStore) ListByUserID(
+	ctx context.Context,
+	userID common.UserID,
+) ([]entitlement.PeriodRecord, error) {
+	return adapter.store.ListEntitlementRecordsByUserID(ctx, userID)
+}
+
+// Update replaces one stored entitlement history record.
+func (adapter *EntitlementHistoryStore) Update(ctx context.Context, record entitlement.PeriodRecord) error {
+	return adapter.store.UpdateEntitlementRecord(ctx, record)
+}
+
+var _ ports.EntitlementHistoryStore = (*EntitlementHistoryStore)(nil)
+
+// EntitlementLifecycleStore adapts Store to the existing
+// EntitlementLifecycleStore port.
+type EntitlementLifecycleStore struct {
+	store *Store
+}
+
+// EntitlementLifecycle returns one adapter that exposes the atomic
+// entitlement-lifecycle store port over Store.
+func (store *Store) EntitlementLifecycle() *EntitlementLifecycleStore {
+	if store == nil {
+		return nil
+	}
+
+	return &EntitlementLifecycleStore{store: store}
+}
+
+// Grant atomically applies one free-to-paid transition.
+func (adapter *EntitlementLifecycleStore) Grant(ctx context.Context, input ports.GrantEntitlementInput) error {
+	return adapter.store.GrantEntitlement(ctx, input)
+}
+
+// Extend atomically appends one paid extension segment and updates the current
+// snapshot.
+func (adapter *EntitlementLifecycleStore) Extend(ctx context.Context, input ports.ExtendEntitlementInput) error {
+	return adapter.store.ExtendEntitlement(ctx, input)
+}
+
+// Revoke atomically applies one paid-to-free transition.
+func (adapter *EntitlementLifecycleStore) Revoke(ctx context.Context, input ports.RevokeEntitlementInput) error {
+	return adapter.store.RevokeEntitlement(ctx, input)
+}
+
+// RepairExpired atomically repairs one expired finite paid snapshot.
+func (adapter *EntitlementLifecycleStore) RepairExpired(
+	ctx context.Context,
+	input ports.RepairExpiredEntitlementInput,
+) error {
+	return adapter.store.RepairExpiredEntitlement(ctx, input)
+}
+
+var _ ports.EntitlementLifecycleStore = (*EntitlementLifecycleStore)(nil)
diff --git a/user/internal/adapters/redis/userstore/list_store.go b/user/internal/adapters/redis/userstore/list_store.go
new file mode 100644
index 0000000..e380c3d
--- /dev/null
+++ b/user/internal/adapters/redis/userstore/list_store.go
@@ -0,0 +1,137 @@
+package userstore
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"time"
+
+	"galaxy/user/internal/adapters/redisstate"
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/ports"
+
+	"github.com/redis/go-redis/v9"
+)
+
+// ListUserIDs returns one deterministic page of user identifiers ordered by
+// `created_at desc`, then `user_id desc`.
+func (store *Store) ListUserIDs(ctx context.Context, input ports.ListUsersInput) (ports.ListUsersResult, error) {
+	if err := input.Validate(); err != nil {
+		return ports.ListUsersResult{}, fmt.Errorf("list users in redis: %w", err)
+	}
+
+	operationCtx, cancel, err := store.operationContext(ctx, "list users in redis")
+	if err != nil {
+		return ports.ListUsersResult{}, err
+	}
+	defer cancel()
+
+	startIndex := int64(0)
+	filters := userListFiltersFromPorts(input.Filters)
+	if input.PageToken != "" {
+		cursor, err := redisstate.DecodePageToken(input.PageToken, filters)
+		if err != nil {
+			return ports.ListUsersResult{}, fmt.Errorf("list users in redis: %w", ports.ErrInvalidPageToken)
+		}
+
+		score, err := store.client.ZScore(operationCtx, store.keyspace.CreatedAtIndex(), cursor.UserID.String()).Result()
+		switch {
+		case errors.Is(err, redis.Nil):
+			return ports.ListUsersResult{}, fmt.Errorf("list users in redis: %w", ports.ErrInvalidPageToken)
+		case err != nil:
+			return ports.ListUsersResult{}, fmt.Errorf("list users in redis: %w", err)
+		}
+		if !time.UnixMicro(int64(score)).UTC().Equal(cursor.CreatedAt.UTC()) {
+			return ports.ListUsersResult{}, fmt.Errorf("list users in redis: %w", ports.ErrInvalidPageToken)
+		}
+
+		rank, err := store.client.ZRevRank(operationCtx, store.keyspace.CreatedAtIndex(), cursor.UserID.String()).Result()
+		switch {
+		case errors.Is(err, redis.Nil):
+			return ports.ListUsersResult{}, fmt.Errorf("list users in redis: %w", ports.ErrInvalidPageToken)
+		case err != nil:
+			return ports.ListUsersResult{}, fmt.Errorf("list users in redis: %w", err)
+		}
+
+		startIndex = rank + 1
+	}
+
+	rawPage, err := store.client.ZRevRangeWithScores(
+		operationCtx,
+		store.keyspace.CreatedAtIndex(),
+		startIndex,
+		startIndex+int64(input.PageSize),
+	).Result()
+	if err != nil {
+		return ports.ListUsersResult{}, fmt.Errorf("list users in redis: %w", err)
+	}
+
+	result := ports.ListUsersResult{
+		UserIDs: make([]common.UserID, 0, min(len(rawPage), input.PageSize)),
+	}
+
+	visibleCount := min(len(rawPage), input.PageSize)
+	for index := 0; index < visibleCount; index++ {
+		userID, err := memberUserID(rawPage[index].Member)
+		if err != nil {
+			return ports.ListUsersResult{}, fmt.Errorf("list users in redis: %w", err)
+		}
+		result.UserIDs = append(result.UserIDs, userID)
+	}
+
+	if len(rawPage) > input.PageSize {
+		lastVisible := rawPage[input.PageSize-1]
+		lastUserID, err := memberUserID(lastVisible.Member)
+		if err != nil {
+			return ports.ListUsersResult{}, fmt.Errorf("list users in redis: %w", err)
+		}
+		token, err := redisstate.EncodePageToken(redisstate.PageCursor{
+			CreatedAt: time.UnixMicro(int64(lastVisible.Score)).UTC(),
+			UserID:    lastUserID,
+		}, filters)
+		if err != nil {
+			return ports.ListUsersResult{}, fmt.Errorf("list users in redis: %w", err)
+		}
+		result.NextPageToken = token
+	}
+
+	return result, nil
+}
+
+func userListFiltersFromPorts(filters ports.UserListFilters) redisstate.UserListFilters {
+	return redisstate.UserListFilters{
+		PaidState:            filters.PaidState,
+		PaidExpiresBefore:    filters.PaidExpiresBefore,
+		PaidExpiresAfter:     filters.PaidExpiresAfter,
+		DeclaredCountry:      filters.DeclaredCountry,
+		SanctionCode:         filters.SanctionCode,
+		LimitCode:            filters.LimitCode,
+		CanLogin:             filters.CanLogin,
+		CanCreatePrivateGame: filters.CanCreatePrivateGame,
+		CanJoinGame:          filters.CanJoinGame,
+	}
+}
+
+func memberUserID(member any) (common.UserID, error) {
+	value, ok := member.(string)
+	if !ok {
+		return "", fmt.Errorf("unexpected created-at index member type %T", member)
+	}
+
+	userID := common.UserID(value)
+	if err := userID.Validate(); err != nil {
+		return "", fmt.Errorf("created-at index member user id: %w", err)
+	}
+
+	return userID, nil
+}
+
+func min(left int, right int) int {
+	if left < right {
+		return left
+	}
+
+	return right
+}
+
+var _ ports.UserListStore = (*Store)(nil)
diff --git a/user/internal/adapters/redis/userstore/policy_store.go b/user/internal/adapters/redis/userstore/policy_store.go
new file mode 100644
index 0000000..20fc250
--- /dev/null
+++ b/user/internal/adapters/redis/userstore/policy_store.go
@@ -0,0 +1,445 @@
+package userstore
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"time"
+
+	"galaxy/user/internal/domain/policy"
+	"galaxy/user/internal/ports"
+
+	"github.com/redis/go-redis/v9"
+)
+
+// ApplySanction atomically creates one new active sanction record.
+func (store *Store) ApplySanction(ctx context.Context, input ports.ApplySanctionInput) error {
+	if err := input.Validate(); err != nil {
+		return fmt.Errorf("apply sanction in redis: %w", err)
+	}
+
+	recordPayload, err := marshalSanctionRecord(input.NewRecord)
+	if err != nil {
+		return fmt.Errorf("apply sanction in redis: %w", err)
+	}
+
+	recordKey := store.keyspace.SanctionRecord(input.NewRecord.RecordID)
+	historyKey := store.keyspace.SanctionHistory(input.NewRecord.UserID)
+	activeKey := store.keyspace.ActiveSanction(input.NewRecord.UserID, input.NewRecord.SanctionCode)
+	snapshotKey := store.keyspace.EntitlementSnapshot(input.NewRecord.UserID)
+	watchedKeys := append(
+		[]string{recordKey, historyKey, activeKey, snapshotKey},
+		store.activeSanctionWatchKeys(input.NewRecord.UserID)...,
+	)
+
+	operationCtx, cancel, err := store.operationContext(ctx, "apply sanction in redis")
+	if err != nil {
+		return err
+	}
+	defer cancel()
+
+	watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error {
+		if err := ensureKeyAbsent(operationCtx, tx, recordKey); err != nil {
+			return fmt.Errorf("apply sanction for user %q in redis: %w", input.NewRecord.UserID, err)
+		}
+		if err := ensureKeyAbsent(operationCtx, tx, activeKey); err != nil {
+			return fmt.Errorf("apply sanction for user %q in redis: %w", input.NewRecord.UserID, err)
+		}
+		snapshot, err := store.loadEntitlementSnapshot(operationCtx, tx, input.NewRecord.UserID)
+		if err != nil {
+			return fmt.Errorf("apply sanction for user %q in redis: %w", input.NewRecord.UserID, err)
+		}
+		activeSanctionCodes, err := store.loadActiveSanctionCodeSet(operationCtx, tx, input.NewRecord.UserID)
+		if err != nil {
+			return fmt.Errorf("apply sanction for user %q in redis: %w", input.NewRecord.UserID, err)
+		}
+		activeSanctionCodes[input.NewRecord.SanctionCode] = struct{}{}
+
+		_, err = tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error {
+			pipe.Set(operationCtx, recordKey, recordPayload, 0)
+			pipe.ZAdd(operationCtx, historyKey, redis.Z{
+				Score:  float64(input.NewRecord.AppliedAt.UTC().UnixMicro()),
+				Member: input.NewRecord.RecordID.String(),
+			})
+			setActiveSlot(pipe, operationCtx, activeKey, input.NewRecord.RecordID.String(), input.NewRecord.ExpiresAt)
+			store.syncActiveSanctionCodeIndexes(pipe, operationCtx, input.NewRecord.UserID, activeSanctionCodes)
+			store.syncEligibilityMarkerIndexes(pipe, operationCtx, input.NewRecord.UserID, snapshot.IsPaid, activeSanctionCodes)
+			return nil
+		})
+		if err != nil {
+			return fmt.Errorf("apply sanction for user %q in redis: %w", input.NewRecord.UserID, err)
+		}
+
+		return nil
+	}, watchedKeys...)
+
+	switch {
+	case errors.Is(watchErr, redis.TxFailedErr):
+		return fmt.Errorf("apply sanction for user %q in redis: %w", input.NewRecord.UserID, ports.ErrConflict)
+	case watchErr != nil:
+		return watchErr
+	default:
+		return nil
+	}
+}
+
+// RemoveSanction atomically removes one active sanction record.
+func (store *Store) RemoveSanction(ctx context.Context, input ports.RemoveSanctionInput) error {
+	if err := input.Validate(); err != nil {
+		return fmt.Errorf("remove sanction in redis: %w", err)
+	}
+
+	updatedPayload, err := marshalSanctionRecord(input.UpdatedRecord)
+	if err != nil {
+		return fmt.Errorf("remove sanction in redis: %w", err)
+	}
+
+	recordKey := store.keyspace.SanctionRecord(input.ExpectedActiveRecord.RecordID)
+	activeKey := store.keyspace.ActiveSanction(input.ExpectedActiveRecord.UserID, input.ExpectedActiveRecord.SanctionCode)
+	snapshotKey := store.keyspace.EntitlementSnapshot(input.ExpectedActiveRecord.UserID)
+	watchedKeys := append(
+		[]string{recordKey, activeKey, snapshotKey},
+		store.activeSanctionWatchKeys(input.ExpectedActiveRecord.UserID)...,
+	)
+
+	operationCtx, cancel, err := store.operationContext(ctx, "remove sanction in redis")
+	if err != nil {
+		return err
+	}
+	defer cancel()
+
+	watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error {
+		activeRecordID, err := store.loadActiveSanctionRecordID(operationCtx, tx, activeKey)
+		if err != nil {
+			return fmt.Errorf("remove sanction for user %q in redis: %w", input.ExpectedActiveRecord.UserID, err)
+		}
+		if activeRecordID != input.ExpectedActiveRecord.RecordID {
+			return fmt.Errorf("remove sanction for user %q in redis: %w", input.ExpectedActiveRecord.UserID, ports.ErrConflict)
+		}
+
+		storedRecord, err := store.loadSanctionRecord(operationCtx, tx, input.ExpectedActiveRecord.RecordID)
+		if err != nil {
+			return fmt.Errorf("remove sanction for user %q in redis: %w", input.ExpectedActiveRecord.UserID, err)
+		}
+		if !equalSanctionRecords(storedRecord, input.ExpectedActiveRecord) {
+			return fmt.Errorf("remove sanction for user %q in redis: %w", input.ExpectedActiveRecord.UserID, ports.ErrConflict)
+		}
+		snapshot, err := store.loadEntitlementSnapshot(operationCtx, tx, input.ExpectedActiveRecord.UserID)
+		if err != nil {
+			return fmt.Errorf("remove sanction for user %q in redis: %w", input.ExpectedActiveRecord.UserID, err)
+		}
+		activeSanctionCodes, err := store.loadActiveSanctionCodeSet(operationCtx, tx, input.ExpectedActiveRecord.UserID)
+		if err != nil {
+			return fmt.Errorf("remove sanction for user %q in redis: %w", input.ExpectedActiveRecord.UserID, err)
+		}
+		delete(activeSanctionCodes, input.ExpectedActiveRecord.SanctionCode)
+
+		_, err = tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error {
+			pipe.Set(operationCtx, recordKey, updatedPayload, 0)
+			pipe.Del(operationCtx, activeKey)
+			store.syncActiveSanctionCodeIndexes(pipe, operationCtx, input.ExpectedActiveRecord.UserID, activeSanctionCodes)
+			store.syncEligibilityMarkerIndexes(pipe, operationCtx, input.ExpectedActiveRecord.UserID, snapshot.IsPaid, activeSanctionCodes)
+			return nil
+		})
+		if err != nil {
+			return fmt.Errorf("remove sanction for user %q in redis: %w", input.ExpectedActiveRecord.UserID, err)
+		}
+
+		return nil
+	}, watchedKeys...)
+
+	switch {
+	case errors.Is(watchErr, redis.TxFailedErr):
+		return fmt.Errorf("remove sanction for user %q in redis: %w", input.ExpectedActiveRecord.UserID, ports.ErrConflict)
+	case watchErr != nil:
+		return watchErr
+	default:
+		return nil
+	}
+}
+
+// SetLimit atomically creates or replaces one active limit record.
+func (store *Store) SetLimit(ctx context.Context, input ports.SetLimitInput) error {
+	if err := input.Validate(); err != nil {
+		return fmt.Errorf("set limit in redis: %w", err)
+	}
+
+	newRecordPayload, err := marshalLimitRecord(input.NewRecord)
+	if err != nil {
+		return fmt.Errorf("set limit in redis: %w", err)
+	}
+
+	newRecordKey := store.keyspace.LimitRecord(input.NewRecord.RecordID)
+	historyKey := store.keyspace.LimitHistory(input.NewRecord.UserID)
+	activeKey := store.keyspace.ActiveLimit(input.NewRecord.UserID, input.NewRecord.LimitCode)
+	watchedKeys := append(
+		[]string{newRecordKey, historyKey, activeKey},
+		store.activeLimitWatchKeys(input.NewRecord.UserID)...,
+	)
+	if input.ExpectedActiveRecord != nil {
+		watchedKeys = append(watchedKeys, store.keyspace.LimitRecord(input.ExpectedActiveRecord.RecordID))
+	}
+
+	operationCtx, cancel, err := store.operationContext(ctx, "set limit in redis")
+	if err != nil {
+		return err
+	}
+	defer cancel()
+
+	watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error {
+		if err := ensureKeyAbsent(operationCtx, tx, newRecordKey); err != nil {
+			return fmt.Errorf("set limit for user %q in redis: %w", input.NewRecord.UserID, err)
+		}
+
+		var updatedPayload []byte
+		if input.ExpectedActiveRecord == nil {
+			if err := ensureKeyAbsent(operationCtx, tx, activeKey); err != nil {
+				return fmt.Errorf("set limit for user %q in redis: %w", input.NewRecord.UserID, err)
+			}
+		} else {
+			activeRecordID, err := store.loadActiveLimitRecordID(operationCtx, tx, activeKey)
+			if err != nil {
+				return fmt.Errorf("set limit for user %q in redis: %w", input.NewRecord.UserID, err)
+			}
+			if activeRecordID != input.ExpectedActiveRecord.RecordID {
+				return fmt.Errorf("set limit for user %q in redis: %w", input.NewRecord.UserID, ports.ErrConflict)
+			}
+
+			storedRecord, err := store.loadLimitRecord(operationCtx, tx, input.ExpectedActiveRecord.RecordID)
+			if err != nil {
+				return fmt.Errorf("set limit for user %q in redis: %w", input.NewRecord.UserID, err)
+			}
+			if !equalLimitRecords(storedRecord, *input.ExpectedActiveRecord) {
+				return fmt.Errorf("set limit for user %q in redis: %w", input.NewRecord.UserID, ports.ErrConflict)
+			}
+
+			updatedPayload, err = marshalLimitRecord(*input.UpdatedActiveRecord)
+			if err != nil {
+				return fmt.Errorf("set limit for user %q in redis: %w", input.NewRecord.UserID, err)
+			}
+		}
+		activeLimitCodes, err := store.loadActiveLimitCodeSet(operationCtx, tx, input.NewRecord.UserID)
+		if err != nil {
+			return fmt.Errorf("set limit for user %q in redis: %w", input.NewRecord.UserID, err)
+		}
+		activeLimitCodes[input.NewRecord.LimitCode] = struct{}{}
+
+		_, err = tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error {
+			if input.ExpectedActiveRecord != nil {
+				pipe.Set(operationCtx, store.keyspace.LimitRecord(input.ExpectedActiveRecord.RecordID), updatedPayload, 0)
+			}
+			pipe.Set(operationCtx, newRecordKey, newRecordPayload, 0)
+			pipe.ZAdd(operationCtx, historyKey, redis.Z{
+				Score:  float64(input.NewRecord.AppliedAt.UTC().UnixMicro()),
+				Member: input.NewRecord.RecordID.String(),
+			})
+			setActiveSlot(pipe, operationCtx, activeKey, input.NewRecord.RecordID.String(), input.NewRecord.ExpiresAt)
+			store.syncActiveLimitCodeIndexes(pipe, operationCtx, input.NewRecord.UserID, activeLimitCodes)
+			return nil
+		})
+		if err != nil {
+			return fmt.Errorf("set limit for user %q in redis: %w", input.NewRecord.UserID, err)
+		}
+
+		return nil
+	}, watchedKeys...)
+
+	switch {
+	case errors.Is(watchErr, redis.TxFailedErr):
+		return fmt.Errorf("set limit for user %q in redis: %w", input.NewRecord.UserID, ports.ErrConflict)
+	case watchErr != nil:
+		return watchErr
+	default:
+		return nil
+	}
+}
+
+// RemoveLimit atomically removes one active limit record.
+func (store *Store) RemoveLimit(ctx context.Context, input ports.RemoveLimitInput) error {
+	if err := input.Validate(); err != nil {
+		return fmt.Errorf("remove limit in redis: %w", err)
+	}
+
+	updatedPayload, err := marshalLimitRecord(input.UpdatedRecord)
+	if err != nil {
+		return fmt.Errorf("remove limit in redis: %w", err)
+	}
+
+	recordKey := store.keyspace.LimitRecord(input.ExpectedActiveRecord.RecordID)
+	activeKey := store.keyspace.ActiveLimit(input.ExpectedActiveRecord.UserID, input.ExpectedActiveRecord.LimitCode)
+	watchedKeys := append(
+		[]string{recordKey, activeKey},
+		store.activeLimitWatchKeys(input.ExpectedActiveRecord.UserID)...,
+	)
+
+	operationCtx, cancel, err := store.operationContext(ctx, "remove limit in redis")
+	if err != nil {
+		return err
+	}
+	defer cancel()
+
+	watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error {
+		activeRecordID, err := store.loadActiveLimitRecordID(operationCtx, tx, activeKey)
+		if err != nil {
+			return fmt.Errorf("remove limit for user %q in redis: %w", input.ExpectedActiveRecord.UserID, err)
+		}
+		if activeRecordID != input.ExpectedActiveRecord.RecordID {
+			return fmt.Errorf("remove limit for user %q in redis: %w", input.ExpectedActiveRecord.UserID, ports.ErrConflict)
+		}
+
+		storedRecord, err := store.loadLimitRecord(operationCtx, tx, input.ExpectedActiveRecord.RecordID)
+		if err != nil {
+			return fmt.Errorf("remove limit for user %q in redis: %w", input.ExpectedActiveRecord.UserID, err)
+		}
+		if !equalLimitRecords(storedRecord, input.ExpectedActiveRecord) {
+			return fmt.Errorf("remove limit for user %q in redis: %w", input.ExpectedActiveRecord.UserID, ports.ErrConflict)
+		}
+		activeLimitCodes, err := store.loadActiveLimitCodeSet(operationCtx, tx, input.ExpectedActiveRecord.UserID)
+		if err != nil {
+			return fmt.Errorf("remove limit for user %q in redis: %w", input.ExpectedActiveRecord.UserID, err)
+		}
+		delete(activeLimitCodes, input.ExpectedActiveRecord.LimitCode)
+
+		_, err = tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error {
+			pipe.Set(operationCtx, recordKey, updatedPayload, 0)
+			pipe.Del(operationCtx, activeKey)
+			store.syncActiveLimitCodeIndexes(pipe, operationCtx, input.ExpectedActiveRecord.UserID, activeLimitCodes)
+			return nil
+		})
+		if err != nil {
+			return fmt.Errorf("remove limit for user %q in redis: %w", input.ExpectedActiveRecord.UserID, err)
+		}
+
+		return nil
+	}, watchedKeys...)
+
+	switch {
+	case errors.Is(watchErr, redis.TxFailedErr):
+		return fmt.Errorf("remove limit for user %q in redis: %w", input.ExpectedActiveRecord.UserID, ports.ErrConflict)
+	case watchErr != nil:
+		return watchErr
+	default:
+		return nil
+	}
+}
+
+func (store *Store) loadActiveSanctionRecordID(
+	ctx context.Context,
+	getter bytesGetter,
+	key string,
+) (policy.SanctionRecordID, error) {
+	value, err := getter.Get(ctx, key).Result()
+	switch {
+	case errors.Is(err, redis.Nil):
+		return "", ports.ErrNotFound
+	case err != nil:
+		return "", err
+	}
+
+	recordID := policy.SanctionRecordID(value)
+	if err := recordID.Validate(); err != nil {
+		return "", fmt.Errorf("active sanction record id: %w", err)
+	}
+
+	return recordID, nil
+}
+
+func (store *Store) loadActiveLimitRecordID(
+	ctx context.Context,
+	getter bytesGetter,
+	key string,
+) (policy.LimitRecordID, error) {
+	value, err := getter.Get(ctx, key).Result()
+	switch {
+	case errors.Is(err, redis.Nil):
+		return "", ports.ErrNotFound
+	case err != nil:
+		return "", err
+	}
+
+	recordID := policy.LimitRecordID(value)
+	if err := recordID.Validate(); err != nil {
+		return "", fmt.Errorf("active limit record id: %w", err)
+	}
+
+	return recordID, nil
+}
+
+func setActiveSlot(
+	pipe redis.Pipeliner,
+	ctx context.Context,
+	key string,
+	recordID string,
+	expiresAt *time.Time,
+) {
+	pipe.Set(ctx, key, recordID, 0)
+	if expiresAt != nil {
+		pipe.PExpireAt(ctx, key, expiresAt.UTC())
+	}
+}
+
+func equalSanctionRecords(left policy.SanctionRecord, right policy.SanctionRecord) bool {
+	return left.RecordID == right.RecordID &&
+		left.UserID == right.UserID &&
+		left.SanctionCode == right.SanctionCode &&
+		left.Scope == right.Scope &&
+		left.ReasonCode == right.ReasonCode &&
+		left.Actor == right.Actor &&
+		left.AppliedAt.Equal(right.AppliedAt) &&
+		equalOptionalTime(left.ExpiresAt, right.ExpiresAt) &&
+		equalOptionalTime(left.RemovedAt, right.RemovedAt) &&
+		left.RemovedBy == right.RemovedBy &&
+		left.RemovedReasonCode == right.RemovedReasonCode
+}
+
+func equalLimitRecords(left policy.LimitRecord, right policy.LimitRecord) bool {
+	return left.RecordID == right.RecordID &&
+		left.UserID == right.UserID &&
+		left.LimitCode == right.LimitCode &&
+		left.Value == right.Value &&
+		left.ReasonCode == right.ReasonCode &&
+		left.Actor == right.Actor &&
+		left.AppliedAt.Equal(right.AppliedAt) &&
+		equalOptionalTime(left.ExpiresAt, right.ExpiresAt) &&
+		equalOptionalTime(left.RemovedAt, right.RemovedAt) &&
+		left.RemovedBy == right.RemovedBy &&
+		left.RemovedReasonCode == right.RemovedReasonCode
+}
+
+// PolicyLifecycleStore adapts Store to the existing PolicyLifecycleStore
+// port.
+type PolicyLifecycleStore struct {
+	store *Store
+}
+
+// PolicyLifecycle returns one adapter that exposes the atomic policy-lifecycle
+// store port over Store.
+func (store *Store) PolicyLifecycle() *PolicyLifecycleStore {
+	if store == nil {
+		return nil
+	}
+
+	return &PolicyLifecycleStore{store: store}
+}
+
+// ApplySanction atomically creates one new active sanction record.
+func (adapter *PolicyLifecycleStore) ApplySanction(ctx context.Context, input ports.ApplySanctionInput) error {
+	return adapter.store.ApplySanction(ctx, input)
+}
+
+// RemoveSanction atomically removes one active sanction record.
+func (adapter *PolicyLifecycleStore) RemoveSanction(ctx context.Context, input ports.RemoveSanctionInput) error {
+	return adapter.store.RemoveSanction(ctx, input)
+}
+
+// SetLimit atomically creates or replaces one active limit record.
+func (adapter *PolicyLifecycleStore) SetLimit(ctx context.Context, input ports.SetLimitInput) error {
+	return adapter.store.SetLimit(ctx, input)
+}
+
+// RemoveLimit atomically removes one active limit record.
+func (adapter *PolicyLifecycleStore) RemoveLimit(ctx context.Context, input ports.RemoveLimitInput) error {
+	return adapter.store.RemoveLimit(ctx, input)
+}
+
+var _ ports.PolicyLifecycleStore = (*PolicyLifecycleStore)(nil)
diff --git a/user/internal/adapters/redis/userstore/store.go b/user/internal/adapters/redis/userstore/store.go
new file mode 100644
index 0000000..5422e41
--- /dev/null
+++ b/user/internal/adapters/redis/userstore/store.go
@@ -0,0 +1,2054 @@
+// Package userstore implements the Redis-backed source-of-truth persistence
+// used by the first runnable user-service slice.
+package userstore
+
+import (
+	"bytes"
+	"context"
+	"crypto/tls"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"io"
+	"strings"
+	"time"
+
+	"galaxy/user/internal/adapters/redisstate"
+	"galaxy/user/internal/domain/account"
+	"galaxy/user/internal/domain/authblock"
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/domain/entitlement"
+	"galaxy/user/internal/domain/policy"
+	"galaxy/user/internal/ports"
+
+	"github.com/redis/go-redis/v9"
+)
+
+const mutationRetryLimit = 3
+
+// Config configures one Redis-backed user store instance.
+type Config struct {
+	// Addr stores the Redis network address in host:port form.
+	Addr string
+
+	// Username stores the optional Redis ACL username.
+	Username string
+
+	// Password stores the optional Redis ACL password.
+	Password string
+
+	// DB stores the Redis logical database index.
+	DB int
+
+	// TLSEnabled enables TLS with a conservative minimum protocol version.
+	TLSEnabled bool
+
+	// KeyspacePrefix stores the root prefix of the service-owned Redis keyspace.
+	KeyspacePrefix string
+
+	// OperationTimeout bounds each Redis round trip performed by the store.
+	OperationTimeout time.Duration
+}
+
+// Store persists auth-facing user state in Redis and exposes the narrow atomic
+// auth-facing mutation boundary plus selected entity-store interfaces.
+type Store struct {
+	client           *redis.Client
+	keyspace         redisstate.Keyspace
+	operationTimeout time.Duration
+}
+
+type accountRecord struct {
+	UserID            string  `json:"user_id"`
+	Email             string  `json:"email"`
+	RaceName          string  `json:"race_name"`
+	PreferredLanguage string  `json:"preferred_language"`
+	TimeZone          string  `json:"time_zone"`
+	DeclaredCountry   *string `json:"declared_country,omitempty"`
+	CreatedAt         string  `json:"created_at"`
+	UpdatedAt         string  `json:"updated_at"`
+}
+
+type raceNameReservationRecord struct {
+	CanonicalKey string `json:"canonical_key"`
+	UserID       string `json:"user_id"`
+	RaceName     string `json:"race_name"`
+	ReservedAt   string `json:"reserved_at"`
+}
+
+type blockedEmailRecord struct {
+	Email          string  `json:"email"`
+	ReasonCode     string  `json:"reason_code"`
+	BlockedAt      string  `json:"blocked_at"`
+	ActorType      *string `json:"actor_type,omitempty"`
+	ActorID        *string `json:"actor_id,omitempty"`
+	ResolvedUserID *string `json:"resolved_user_id,omitempty"`
+}
+
+type entitlementSnapshotRecord struct {
+	UserID     string  `json:"user_id"`
+	PlanCode   string  `json:"plan_code"`
+	IsPaid     bool    `json:"is_paid"`
+	StartsAt   string  `json:"starts_at"`
+	EndsAt     *string `json:"ends_at,omitempty"`
+	Source     string  `json:"source"`
+	ActorType  string  `json:"actor_type"`
+	ActorID    *string `json:"actor_id,omitempty"`
+	ReasonCode string  `json:"reason_code"`
+
UpdatedAt string `json:"updated_at"` +} + +type sanctionRecord struct { + RecordID string `json:"record_id"` + UserID string `json:"user_id"` + SanctionCode string `json:"sanction_code"` + Scope string `json:"scope"` + ReasonCode string `json:"reason_code"` + ActorType string `json:"actor_type"` + ActorID *string `json:"actor_id,omitempty"` + AppliedAt string `json:"applied_at"` + ExpiresAt *string `json:"expires_at,omitempty"` + RemovedAt *string `json:"removed_at,omitempty"` + RemovedByType *string `json:"removed_by_type,omitempty"` + RemovedByID *string `json:"removed_by_id,omitempty"` + RemovedReasonCode *string `json:"removed_reason_code,omitempty"` +} + +type limitRecord struct { + RecordID string `json:"record_id"` + UserID string `json:"user_id"` + LimitCode string `json:"limit_code"` + Value int `json:"value"` + ReasonCode string `json:"reason_code"` + ActorType string `json:"actor_type"` + ActorID *string `json:"actor_id,omitempty"` + AppliedAt string `json:"applied_at"` + ExpiresAt *string `json:"expires_at,omitempty"` + RemovedAt *string `json:"removed_at,omitempty"` + RemovedByType *string `json:"removed_by_type,omitempty"` + RemovedByID *string `json:"removed_by_id,omitempty"` + RemovedReasonCode *string `json:"removed_reason_code,omitempty"` +} + +type bytesGetter interface { + Get(context.Context, string) *redis.StringCmd +} + +// New constructs one Redis-backed user store from cfg. 
+func New(cfg Config) (*Store, error) {
+	switch {
+	case strings.TrimSpace(cfg.Addr) == "":
+		return nil, errors.New("new redis user store: redis addr must not be empty")
+	case cfg.DB < 0:
+		return nil, errors.New("new redis user store: redis db must not be negative")
+	case strings.TrimSpace(cfg.KeyspacePrefix) == "":
+		return nil, errors.New("new redis user store: redis keyspace prefix must not be empty")
+	case cfg.OperationTimeout <= 0:
+		return nil, errors.New("new redis user store: operation timeout must be positive")
+	}
+
+	options := &redis.Options{
+		Addr:            cfg.Addr,
+		Username:        cfg.Username,
+		Password:        cfg.Password,
+		DB:              cfg.DB,
+		Protocol:        2,
+		DisableIdentity: true,
+	}
+	if cfg.TLSEnabled {
+		options.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}
+	}
+
+	return &Store{
+		client:           redis.NewClient(options),
+		keyspace:         redisstate.Keyspace{Prefix: cfg.KeyspacePrefix},
+		operationTimeout: cfg.OperationTimeout,
+	}, nil
+}
+
+// Close releases the underlying Redis client resources.
+func (store *Store) Close() error {
+	if store == nil || store.client == nil {
+		return nil
+	}
+
+	return store.client.Close()
+}
+
+// Ping verifies that the configured Redis backend is reachable.
+func (store *Store) Ping(ctx context.Context) error {
+	operationCtx, cancel, err := store.operationContext(ctx, "ping redis user store")
+	if err != nil {
+		return err
+	}
+	defer cancel()
+
+	if err := store.client.Ping(operationCtx).Err(); err != nil {
+		return fmt.Errorf("ping redis user store: %w", err)
+	}
+
+	return nil
+}
+
+// Create stores one new account record together with the exact and canonical
+// race-name lookup state.
+func (store *Store) Create(ctx context.Context, input ports.CreateAccountInput) error { + if err := input.Validate(); err != nil { + return fmt.Errorf("create account in redis: %w", err) + } + + accountPayload, err := marshalAccountRecord(input.Account) + if err != nil { + return fmt.Errorf("create account in redis: %w", err) + } + reservationPayload, err := marshalRaceNameReservationRecord(input.Reservation) + if err != nil { + return fmt.Errorf("create account in redis: %w", err) + } + + accountKey := store.keyspace.Account(input.Account.UserID) + emailLookupKey := store.keyspace.EmailLookup(input.Account.Email) + raceNameLookupKey := store.keyspace.RaceNameLookup(input.Account.RaceName) + reservationKey := store.keyspace.RaceNameReservation(input.Reservation.CanonicalKey) + + operationCtx, cancel, err := store.operationContext(ctx, "create account in redis") + if err != nil { + return err + } + defer cancel() + + watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error { + if err := ensureKeyAbsent(operationCtx, tx, accountKey); err != nil { + return fmt.Errorf("create account %q in redis: %w", input.Account.UserID, err) + } + if err := ensureKeyAbsent(operationCtx, tx, emailLookupKey); err != nil { + return fmt.Errorf("create account %q in redis: %w", input.Account.UserID, err) + } + if err := ensureKeyAbsent(operationCtx, tx, raceNameLookupKey); err != nil { + return fmt.Errorf("create account %q in redis: %w", input.Account.UserID, err) + } + if err := ensureKeyAbsent(operationCtx, tx, reservationKey); err != nil { + return fmt.Errorf("create account %q in redis: %w", input.Account.UserID, err) + } + + _, err := tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error { + pipe.Set(operationCtx, accountKey, accountPayload, 0) + pipe.Set(operationCtx, emailLookupKey, input.Account.UserID.String(), 0) + pipe.Set(operationCtx, raceNameLookupKey, input.Account.UserID.String(), 0) + pipe.Set(operationCtx, reservationKey, reservationPayload, 0) + 
store.addCreatedAtIndex(pipe, operationCtx, input.Account) + store.syncDeclaredCountryIndex(pipe, operationCtx, account.UserAccount{}, input.Account) + return nil + }) + if err != nil { + return fmt.Errorf("create account %q in redis: %w", input.Account.UserID, err) + } + + return nil + }, accountKey, emailLookupKey, raceNameLookupKey, reservationKey) + + switch { + case errors.Is(watchErr, redis.TxFailedErr): + return fmt.Errorf("create account %q in redis: %w", input.Account.UserID, ports.ErrConflict) + case watchErr != nil: + return watchErr + default: + return nil + } +} + +// GetByUserID returns the stored account identified by userID. +func (store *Store) GetByUserID(ctx context.Context, userID common.UserID) (account.UserAccount, error) { + if err := userID.Validate(); err != nil { + return account.UserAccount{}, fmt.Errorf("get account by user id from redis: %w", err) + } + + operationCtx, cancel, err := store.operationContext(ctx, "get account by user id from redis") + if err != nil { + return account.UserAccount{}, err + } + defer cancel() + + record, err := store.loadAccount(operationCtx, store.client, userID) + if err != nil { + switch { + case errors.Is(err, ports.ErrNotFound): + return account.UserAccount{}, fmt.Errorf("get account by user id %q from redis: %w", userID, ports.ErrNotFound) + default: + return account.UserAccount{}, fmt.Errorf("get account by user id %q from redis: %w", userID, err) + } + } + + return record, nil +} + +// GetByEmail returns the stored account identified by email. 
+func (store *Store) GetByEmail(ctx context.Context, email common.Email) (account.UserAccount, error) {
+	if err := email.Validate(); err != nil {
+		return account.UserAccount{}, fmt.Errorf("get account by email from redis: %w", err)
+	}
+
+	operationCtx, cancel, err := store.operationContext(ctx, "get account by email from redis")
+	if err != nil {
+		return account.UserAccount{}, err
+	}
+	defer cancel()
+
+	userID, err := store.loadLookupUserID(operationCtx, store.client, store.keyspace.EmailLookup(email))
+	if err != nil {
+		switch {
+		case errors.Is(err, ports.ErrNotFound):
+			return account.UserAccount{}, fmt.Errorf("get account by email %q from redis: %w", email, ports.ErrNotFound)
+		default:
+			return account.UserAccount{}, fmt.Errorf("get account by email %q from redis: %w", email, err)
+		}
+	}
+
+	record, err := store.loadAccount(operationCtx, store.client, userID)
+	if err != nil {
+		switch {
+		case errors.Is(err, ports.ErrNotFound):
+			return account.UserAccount{}, fmt.Errorf("get account by email %q from redis: lookup references missing user %q", email, userID)
+		default:
+			return account.UserAccount{}, fmt.Errorf("get account by email %q from redis: %w", email, err)
+		}
+	}
+
+	return record, nil
+}
+
+// GetByRaceName returns the stored account identified by the exact stored race
+// name.
+func (store *Store) GetByRaceName(ctx context.Context, raceName common.RaceName) (account.UserAccount, error) { + if err := raceName.Validate(); err != nil { + return account.UserAccount{}, fmt.Errorf("get account by race name from redis: %w", err) + } + + operationCtx, cancel, err := store.operationContext(ctx, "get account by race name from redis") + if err != nil { + return account.UserAccount{}, err + } + defer cancel() + + userID, err := store.loadLookupUserID(operationCtx, store.client, store.keyspace.RaceNameLookup(raceName)) + if err != nil { + switch { + case errors.Is(err, ports.ErrNotFound): + return account.UserAccount{}, fmt.Errorf("get account by race name %q from redis: %w", raceName, ports.ErrNotFound) + default: + return account.UserAccount{}, fmt.Errorf("get account by race name %q from redis: %w", raceName, err) + } + } + + record, err := store.loadAccount(operationCtx, store.client, userID) + if err != nil { + switch { + case errors.Is(err, ports.ErrNotFound): + return account.UserAccount{}, fmt.Errorf("get account by race name %q from redis: lookup references missing user %q", raceName, userID) + default: + return account.UserAccount{}, fmt.Errorf("get account by race name %q from redis: %w", raceName, err) + } + } + + return record, nil +} + +// ExistsByUserID reports whether userID identifies a stored account. 
+func (store *Store) ExistsByUserID(ctx context.Context, userID common.UserID) (bool, error) { + if err := userID.Validate(); err != nil { + return false, fmt.Errorf("exists by user id from redis: %w", err) + } + + operationCtx, cancel, err := store.operationContext(ctx, "exists by user id from redis") + if err != nil { + return false, err + } + defer cancel() + + exists, err := store.client.Exists(operationCtx, store.keyspace.Account(userID)).Result() + if err != nil { + return false, fmt.Errorf("exists by user id %q from redis: %w", userID, err) + } + + return exists == 1, nil +} + +// RenameRaceName replaces the stored race name of userID and swaps the exact +// and canonical race-name lookup state atomically. +func (store *Store) RenameRaceName(ctx context.Context, input ports.RenameRaceNameInput) error { + if err := input.Validate(); err != nil { + return fmt.Errorf("rename account race name in redis: %w", err) + } + + accountKey := store.keyspace.Account(input.UserID) + newRaceNameLookupKey := store.keyspace.RaceNameLookup(input.NewRaceName) + newReservationKey := store.keyspace.RaceNameReservation(input.NewReservation.CanonicalKey) + newReservationPayload, err := marshalRaceNameReservationRecord(input.NewReservation) + if err != nil { + return fmt.Errorf("rename account race name in redis: %w", err) + } + + operationCtx, cancel, err := store.operationContext(ctx, "rename account race name in redis") + if err != nil { + return err + } + defer cancel() + + watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error { + record, err := store.loadAccount(operationCtx, tx, input.UserID) + if err != nil { + return fmt.Errorf("rename account race name %q in redis: %w", input.UserID, err) + } + if record.RaceName == input.NewRaceName { + return nil + } + + currentRaceNameLookupKey := store.keyspace.RaceNameLookup(record.RaceName) + currentLookupUserID, err := store.loadLookupUserID(operationCtx, tx, currentRaceNameLookupKey) + if err != nil { + return 
fmt.Errorf("rename account race name %q in redis: %w", input.UserID, err) + } + if currentLookupUserID != input.UserID { + return fmt.Errorf("rename account race name %q in redis: %w", input.UserID, ports.ErrConflict) + } + + currentReservation, err := store.loadRaceNameReservation(operationCtx, tx, input.CurrentCanonicalKey) + if err != nil { + return fmt.Errorf("rename account race name %q in redis: %w", input.UserID, err) + } + if currentReservation.UserID != input.UserID || currentReservation.RaceName != record.RaceName { + return fmt.Errorf("rename account race name %q in redis: %w", input.UserID, ports.ErrConflict) + } + + if err := ensureLookupAvailableOrOwned(operationCtx, tx, newRaceNameLookupKey, input.UserID); err != nil { + if errors.Is(err, ports.ErrConflict) { + return fmt.Errorf("rename account race name %q in redis: %w", input.UserID, ports.ErrRaceNameConflict) + } + return fmt.Errorf("rename account race name %q in redis: %w", input.UserID, err) + } + + if input.CurrentCanonicalKey != input.NewReservation.CanonicalKey { + if err := store.ensureReservationAvailableOrOwned(operationCtx, tx, input.NewReservation.CanonicalKey, input.UserID); err != nil { + if errors.Is(err, ports.ErrConflict) { + return fmt.Errorf("rename account race name %q in redis: %w", input.UserID, ports.ErrRaceNameConflict) + } + return fmt.Errorf("rename account race name %q in redis: %w", input.UserID, err) + } + } + + record.RaceName = input.NewRaceName + record.UpdatedAt = input.UpdatedAt.UTC() + + payload, err := marshalAccountRecord(record) + if err != nil { + return fmt.Errorf("rename account race name %q in redis: %w", input.UserID, err) + } + + _, err = tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error { + pipe.Set(operationCtx, accountKey, payload, 0) + pipe.Set(operationCtx, newRaceNameLookupKey, input.UserID.String(), 0) + pipe.Set(operationCtx, newReservationKey, newReservationPayload, 0) + pipe.Del(operationCtx, currentRaceNameLookupKey) + if 
input.CurrentCanonicalKey != input.NewReservation.CanonicalKey { + pipe.Del(operationCtx, store.keyspace.RaceNameReservation(input.CurrentCanonicalKey)) + } + + return nil + }) + if err != nil { + return fmt.Errorf("rename account race name %q in redis: %w", input.UserID, err) + } + + return nil + }, accountKey, newRaceNameLookupKey, newReservationKey) + + switch { + case errors.Is(watchErr, redis.TxFailedErr): + return fmt.Errorf("rename account race name %q in redis: %w", input.UserID, ports.ErrConflict) + case watchErr != nil: + return watchErr + default: + return nil + } +} + +// Update replaces the stored account state for record.UserID. +func (store *Store) Update(ctx context.Context, record account.UserAccount) error { + if err := record.Validate(); err != nil { + return fmt.Errorf("update account in redis: %w", err) + } + + accountPayload, err := marshalAccountRecord(record) + if err != nil { + return fmt.Errorf("update account in redis: %w", err) + } + + accountKey := store.keyspace.Account(record.UserID) + emailLookupKey := store.keyspace.EmailLookup(record.Email) + raceNameLookupKey := store.keyspace.RaceNameLookup(record.RaceName) + + operationCtx, cancel, err := store.operationContext(ctx, "update account in redis") + if err != nil { + return err + } + defer cancel() + + watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error { + current, err := store.loadAccount(operationCtx, tx, record.UserID) + if err != nil { + return fmt.Errorf("update account %q in redis: %w", record.UserID, err) + } + if current.Email != record.Email || current.RaceName != record.RaceName { + return fmt.Errorf("update account %q in redis: %w", record.UserID, ports.ErrConflict) + } + + lookupUserID, err := store.loadLookupUserID(operationCtx, tx, emailLookupKey) + if err != nil { + return fmt.Errorf("update account %q in redis: %w", record.UserID, err) + } + if lookupUserID != record.UserID { + return fmt.Errorf("update account %q in redis: %w", record.UserID, 
ports.ErrConflict)
+		}
+
+		raceLookupUserID, err := store.loadLookupUserID(operationCtx, tx, raceNameLookupKey)
+		if err != nil {
+			return fmt.Errorf("update account %q in redis: %w", record.UserID, err)
+		}
+		if raceLookupUserID != record.UserID {
+			return fmt.Errorf("update account %q in redis: %w", record.UserID, ports.ErrConflict)
+		}
+
+		_, err = tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error {
+			pipe.Set(operationCtx, accountKey, accountPayload, 0)
+			store.syncDeclaredCountryIndex(pipe, operationCtx, current, record)
+			return nil
+		})
+		if err != nil {
+			return fmt.Errorf("update account %q in redis: %w", record.UserID, err)
+		}
+
+		return nil
+	}, accountKey, emailLookupKey, raceNameLookupKey)
+
+	switch {
+	case errors.Is(watchErr, redis.TxFailedErr):
+		return fmt.Errorf("update account %q in redis: %w", record.UserID, ports.ErrConflict)
+	case watchErr != nil:
+		return watchErr
+	default:
+		return nil
+	}
+}
+
+// GetBlockedEmail returns the blocked-email subject for email.
+func (store *Store) GetBlockedEmail(ctx context.Context, email common.Email) (authblock.BlockedEmailSubject, error) {
+	if err := email.Validate(); err != nil {
+		return authblock.BlockedEmailSubject{}, fmt.Errorf("get blocked email subject from redis: %w", err)
+	}
+
+	operationCtx, cancel, err := store.operationContext(ctx, "get blocked email subject from redis")
+	if err != nil {
+		return authblock.BlockedEmailSubject{}, err
+	}
+	defer cancel()
+
+	record, err := store.loadBlockedEmail(operationCtx, store.client, email)
+	if err != nil {
+		switch {
+		case errors.Is(err, ports.ErrNotFound):
+			return authblock.BlockedEmailSubject{}, fmt.Errorf("get blocked email subject %q from redis: %w", email, ports.ErrNotFound)
+		default:
+			return authblock.BlockedEmailSubject{}, fmt.Errorf("get blocked email subject %q from redis: %w", email, err)
+		}
+	}
+
+	return record, nil
+}
+
+// PutBlockedEmail stores or replaces the blocked-email subject for
+// record.Email.
+func (store *Store) PutBlockedEmail(ctx context.Context, record authblock.BlockedEmailSubject) error { + if err := record.Validate(); err != nil { + return fmt.Errorf("upsert blocked email subject in redis: %w", err) + } + + payload, err := marshalBlockedEmailRecord(record) + if err != nil { + return fmt.Errorf("upsert blocked email subject in redis: %w", err) + } + + operationCtx, cancel, err := store.operationContext(ctx, "upsert blocked email subject in redis") + if err != nil { + return err + } + defer cancel() + + if err := store.client.Set(operationCtx, store.keyspace.BlockedEmailSubject(record.Email), payload, 0).Err(); err != nil { + return fmt.Errorf("upsert blocked email subject %q in redis: %w", record.Email, err) + } + + return nil +} + +// GetEntitlementByUserID returns the current entitlement snapshot for userID. +func (store *Store) GetEntitlementByUserID(ctx context.Context, userID common.UserID) (entitlement.CurrentSnapshot, error) { + if err := userID.Validate(); err != nil { + return entitlement.CurrentSnapshot{}, fmt.Errorf("get entitlement snapshot from redis: %w", err) + } + + operationCtx, cancel, err := store.operationContext(ctx, "get entitlement snapshot from redis") + if err != nil { + return entitlement.CurrentSnapshot{}, err + } + defer cancel() + + record, err := store.loadEntitlementSnapshot(operationCtx, store.client, userID) + if err != nil { + switch { + case errors.Is(err, ports.ErrNotFound): + return entitlement.CurrentSnapshot{}, fmt.Errorf("get entitlement snapshot %q from redis: %w", userID, ports.ErrNotFound) + default: + return entitlement.CurrentSnapshot{}, fmt.Errorf("get entitlement snapshot %q from redis: %w", userID, err) + } + } + + return record, nil +} + +// PutEntitlement stores the current entitlement snapshot for record.UserID. 
+func (store *Store) PutEntitlement(ctx context.Context, record entitlement.CurrentSnapshot) error { + if err := record.Validate(); err != nil { + return fmt.Errorf("put entitlement snapshot in redis: %w", err) + } + + payload, err := marshalEntitlementSnapshotRecord(record) + if err != nil { + return fmt.Errorf("put entitlement snapshot in redis: %w", err) + } + + operationCtx, cancel, err := store.operationContext(ctx, "put entitlement snapshot in redis") + if err != nil { + return err + } + defer cancel() + + if err := store.client.Set(operationCtx, store.keyspace.EntitlementSnapshot(record.UserID), payload, 0).Err(); err != nil { + return fmt.Errorf("put entitlement snapshot %q in redis: %w", record.UserID, err) + } + + return nil +} + +// CreateSanction stores one new sanction history record. +func (store *Store) CreateSanction(ctx context.Context, record policy.SanctionRecord) error { + if err := record.Validate(); err != nil { + return fmt.Errorf("create sanction in redis: %w", err) + } + + payload, err := marshalSanctionRecord(record) + if err != nil { + return fmt.Errorf("create sanction in redis: %w", err) + } + + recordKey := store.keyspace.SanctionRecord(record.RecordID) + historyKey := store.keyspace.SanctionHistory(record.UserID) + + operationCtx, cancel, err := store.operationContext(ctx, "create sanction in redis") + if err != nil { + return err + } + defer cancel() + + watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error { + if err := ensureKeyAbsent(operationCtx, tx, recordKey); err != nil { + return fmt.Errorf("create sanction %q in redis: %w", record.RecordID, err) + } + + _, err := tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error { + pipe.Set(operationCtx, recordKey, payload, 0) + pipe.ZAdd(operationCtx, historyKey, redis.Z{ + Score: float64(record.AppliedAt.UTC().UnixMicro()), + Member: record.RecordID.String(), + }) + return nil + }) + if err != nil { + return fmt.Errorf("create sanction %q in redis: %w", 
record.RecordID, err) + } + + return nil + }, recordKey, historyKey) + + switch { + case errors.Is(watchErr, redis.TxFailedErr): + return fmt.Errorf("create sanction %q in redis: %w", record.RecordID, ports.ErrConflict) + case watchErr != nil: + return watchErr + default: + return nil + } +} + +// GetSanctionByRecordID returns the sanction history record identified by +// recordID. +func (store *Store) GetSanctionByRecordID(ctx context.Context, recordID policy.SanctionRecordID) (policy.SanctionRecord, error) { + if err := recordID.Validate(); err != nil { + return policy.SanctionRecord{}, fmt.Errorf("get sanction by record id from redis: %w", err) + } + + operationCtx, cancel, err := store.operationContext(ctx, "get sanction by record id from redis") + if err != nil { + return policy.SanctionRecord{}, err + } + defer cancel() + + record, err := store.loadSanctionRecord(operationCtx, store.client, recordID) + if err != nil { + switch { + case errors.Is(err, ports.ErrNotFound): + return policy.SanctionRecord{}, fmt.Errorf("get sanction by record id %q from redis: %w", recordID, ports.ErrNotFound) + default: + return policy.SanctionRecord{}, fmt.Errorf("get sanction by record id %q from redis: %w", recordID, err) + } + } + + return record, nil +} + +// ListSanctionsByUserID returns every sanction history record owned by userID. 
+func (store *Store) ListSanctionsByUserID(ctx context.Context, userID common.UserID) ([]policy.SanctionRecord, error) { + if err := userID.Validate(); err != nil { + return nil, fmt.Errorf("list sanctions by user id from redis: %w", err) + } + + operationCtx, cancel, err := store.operationContext(ctx, "list sanctions by user id from redis") + if err != nil { + return nil, err + } + defer cancel() + + recordIDs, err := store.client.ZRange(operationCtx, store.keyspace.SanctionHistory(userID), 0, -1).Result() + if err != nil { + return nil, fmt.Errorf("list sanctions by user id %q from redis: %w", userID, err) + } + + records := make([]policy.SanctionRecord, 0, len(recordIDs)) + for _, rawRecordID := range recordIDs { + record, err := store.loadSanctionRecord(operationCtx, store.client, policy.SanctionRecordID(rawRecordID)) + if err != nil { + return nil, fmt.Errorf("list sanctions by user id %q from redis: %w", userID, err) + } + records = append(records, record) + } + + return records, nil +} + +// UpdateSanction replaces one stored sanction history record. 
+func (store *Store) UpdateSanction(ctx context.Context, record policy.SanctionRecord) error { + if err := record.Validate(); err != nil { + return fmt.Errorf("update sanction in redis: %w", err) + } + + payload, err := marshalSanctionRecord(record) + if err != nil { + return fmt.Errorf("update sanction in redis: %w", err) + } + + recordKey := store.keyspace.SanctionRecord(record.RecordID) + + operationCtx, cancel, err := store.operationContext(ctx, "update sanction in redis") + if err != nil { + return err + } + defer cancel() + + watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error { + if _, err := store.loadSanctionRecord(operationCtx, tx, record.RecordID); err != nil { + return fmt.Errorf("update sanction %q in redis: %w", record.RecordID, err) + } + + _, err := tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error { + pipe.Set(operationCtx, recordKey, payload, 0) + return nil + }) + if err != nil { + return fmt.Errorf("update sanction %q in redis: %w", record.RecordID, err) + } + + return nil + }, recordKey) + + switch { + case errors.Is(watchErr, redis.TxFailedErr): + return fmt.Errorf("update sanction %q in redis: %w", record.RecordID, ports.ErrConflict) + case watchErr != nil: + return watchErr + default: + return nil + } +} + +// CreateLimit stores one new limit history record. 
+func (store *Store) CreateLimit(ctx context.Context, record policy.LimitRecord) error { + if err := record.Validate(); err != nil { + return fmt.Errorf("create limit in redis: %w", err) + } + + payload, err := marshalLimitRecord(record) + if err != nil { + return fmt.Errorf("create limit in redis: %w", err) + } + + recordKey := store.keyspace.LimitRecord(record.RecordID) + historyKey := store.keyspace.LimitHistory(record.UserID) + + operationCtx, cancel, err := store.operationContext(ctx, "create limit in redis") + if err != nil { + return err + } + defer cancel() + + watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error { + if err := ensureKeyAbsent(operationCtx, tx, recordKey); err != nil { + return fmt.Errorf("create limit %q in redis: %w", record.RecordID, err) + } + + _, err := tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error { + pipe.Set(operationCtx, recordKey, payload, 0) + pipe.ZAdd(operationCtx, historyKey, redis.Z{ + Score: float64(record.AppliedAt.UTC().UnixMicro()), + Member: record.RecordID.String(), + }) + return nil + }) + if err != nil { + return fmt.Errorf("create limit %q in redis: %w", record.RecordID, err) + } + + return nil + }, recordKey, historyKey) + + switch { + case errors.Is(watchErr, redis.TxFailedErr): + return fmt.Errorf("create limit %q in redis: %w", record.RecordID, ports.ErrConflict) + case watchErr != nil: + return watchErr + default: + return nil + } +} + +// GetLimitByRecordID returns the limit history record identified by recordID. 
+func (store *Store) GetLimitByRecordID(ctx context.Context, recordID policy.LimitRecordID) (policy.LimitRecord, error) { + if err := recordID.Validate(); err != nil { + return policy.LimitRecord{}, fmt.Errorf("get limit by record id from redis: %w", err) + } + + operationCtx, cancel, err := store.operationContext(ctx, "get limit by record id from redis") + if err != nil { + return policy.LimitRecord{}, err + } + defer cancel() + + record, err := store.loadLimitRecord(operationCtx, store.client, recordID) + if err != nil { + switch { + case errors.Is(err, ports.ErrNotFound): + return policy.LimitRecord{}, fmt.Errorf("get limit by record id %q from redis: %w", recordID, ports.ErrNotFound) + default: + return policy.LimitRecord{}, fmt.Errorf("get limit by record id %q from redis: %w", recordID, err) + } + } + + return record, nil +} + +// ListLimitsByUserID returns every limit history record owned by userID. +func (store *Store) ListLimitsByUserID(ctx context.Context, userID common.UserID) ([]policy.LimitRecord, error) { + if err := userID.Validate(); err != nil { + return nil, fmt.Errorf("list limits by user id from redis: %w", err) + } + + operationCtx, cancel, err := store.operationContext(ctx, "list limits by user id from redis") + if err != nil { + return nil, err + } + defer cancel() + + recordIDs, err := store.client.ZRange(operationCtx, store.keyspace.LimitHistory(userID), 0, -1).Result() + if err != nil { + return nil, fmt.Errorf("list limits by user id %q from redis: %w", userID, err) + } + + records := make([]policy.LimitRecord, 0, len(recordIDs)) + for _, rawRecordID := range recordIDs { + record, err := store.loadLimitRecord(operationCtx, store.client, policy.LimitRecordID(rawRecordID)) + if err != nil { + return nil, fmt.Errorf("list limits by user id %q from redis: %w", userID, err) + } + records = append(records, record) + } + + return records, nil +} + +// UpdateLimit replaces one stored limit history record. 
+func (store *Store) UpdateLimit(ctx context.Context, record policy.LimitRecord) error {
+	if err := record.Validate(); err != nil {
+		return fmt.Errorf("update limit in redis: %w", err)
+	}
+
+	payload, err := marshalLimitRecord(record)
+	if err != nil {
+		return fmt.Errorf("update limit in redis: %w", err)
+	}
+
+	recordKey := store.keyspace.LimitRecord(record.RecordID)
+
+	operationCtx, cancel, err := store.operationContext(ctx, "update limit in redis")
+	if err != nil {
+		return err
+	}
+	defer cancel()
+
+	watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error {
+		if _, err := store.loadLimitRecord(operationCtx, tx, record.RecordID); err != nil {
+			return fmt.Errorf("update limit %q in redis: %w", record.RecordID, err)
+		}
+
+		_, err := tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error {
+			pipe.Set(operationCtx, recordKey, payload, 0)
+			return nil
+		})
+		if err != nil {
+			return fmt.Errorf("update limit %q in redis: %w", record.RecordID, err)
+		}
+
+		return nil
+	}, recordKey)
+
+	switch {
+	case errors.Is(watchErr, redis.TxFailedErr):
+		return fmt.Errorf("update limit %q in redis: %w", record.RecordID, ports.ErrConflict)
+	case watchErr != nil:
+		return watchErr
+	default:
+		return nil
+	}
+}
+
+// ResolveByEmail returns the current coarse auth-facing resolution state for
+// email.
+func (store *Store) ResolveByEmail(ctx context.Context, email common.Email) (ports.ResolveByEmailResult, error) {
+	if err := email.Validate(); err != nil {
+		return ports.ResolveByEmailResult{}, fmt.Errorf("resolve by email in redis: %w", err)
+	}
+
+	operationCtx, cancel, err := store.operationContext(ctx, "resolve by email in redis")
+	if err != nil {
+		return ports.ResolveByEmailResult{}, err
+	}
+	defer cancel()
+
+	blocked, err := store.loadBlockedEmail(operationCtx, store.client, email)
+	switch {
+	case err == nil:
+		return ports.ResolveByEmailResult{
+			Kind:            ports.AuthResolutionKindBlocked,
+			BlockReasonCode: blocked.ReasonCode,
+		}, nil
+	case !errors.Is(err, ports.ErrNotFound):
+		return ports.ResolveByEmailResult{}, fmt.Errorf("resolve by email %q in redis: %w", email, err)
+	}
+
+	accountRecord, err := store.GetByEmail(operationCtx, email)
+	switch {
+	case err == nil:
+		return ports.ResolveByEmailResult{
+			Kind:   ports.AuthResolutionKindExisting,
+			UserID: accountRecord.UserID,
+		}, nil
+	case errors.Is(err, ports.ErrNotFound):
+		return ports.ResolveByEmailResult{Kind: ports.AuthResolutionKindCreatable}, nil
+	default:
+		return ports.ResolveByEmailResult{}, fmt.Errorf("resolve by email %q in redis: %w", email, err)
+	}
+}
+
+// EnsureByEmail atomically returns an existing user, creates a new one, or
+// reports a blocked outcome for one email subject.
+func (store *Store) EnsureByEmail(ctx context.Context, input ports.EnsureByEmailInput) (ports.EnsureByEmailResult, error) {
+	if err := input.Validate(); err != nil {
+		return ports.EnsureByEmailResult{}, fmt.Errorf("ensure by email in redis: %w", err)
+	}
+
+	accountPayload, err := marshalAccountRecord(input.Account)
+	if err != nil {
+		return ports.EnsureByEmailResult{}, fmt.Errorf("ensure by email in redis: %w", err)
+	}
+	entitlementPayload, err := marshalEntitlementSnapshotRecord(input.Entitlement)
+	if err != nil {
+		return ports.EnsureByEmailResult{}, fmt.Errorf("ensure by email in redis: %w", err)
+	}
+	entitlementRecordPayload, err := marshalEntitlementPeriodRecord(input.EntitlementRecord)
+	if err != nil {
+		return ports.EnsureByEmailResult{}, fmt.Errorf("ensure by email in redis: %w", err)
+	}
+	reservationPayload, err := marshalRaceNameReservationRecord(input.Reservation)
+	if err != nil {
+		return ports.EnsureByEmailResult{}, fmt.Errorf("ensure by email in redis: %w", err)
+	}
+
+	operationCtx, cancel, err := store.operationContext(ctx, "ensure by email in redis")
+	if err != nil {
+		return ports.EnsureByEmailResult{}, err
+	}
+	defer cancel()
+
+	var result ports.EnsureByEmailResult
+	var handled bool
+
+	accountKey := store.keyspace.Account(input.Account.UserID)
+	emailLookupKey := store.keyspace.EmailLookup(input.Email)
+	raceNameLookupKey := store.keyspace.RaceNameLookup(input.Account.RaceName)
+	reservationKey := store.keyspace.RaceNameReservation(input.Reservation.CanonicalKey)
+	blockedEmailKey := store.keyspace.BlockedEmailSubject(input.Email)
+	entitlementKey := store.keyspace.EntitlementSnapshot(input.Account.UserID)
+	entitlementRecordKey := store.keyspace.EntitlementRecord(input.EntitlementRecord.RecordID)
+	entitlementHistoryKey := store.keyspace.EntitlementHistory(input.Account.UserID)
+
+	watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error {
+		blocked, err := store.loadBlockedEmail(operationCtx, tx, input.Email)
+		switch {
+		case err == nil:
+			result = ports.EnsureByEmailResult{
+				Outcome:         ports.EnsureByEmailOutcomeBlocked,
+				BlockReasonCode: blocked.ReasonCode,
+			}
+			handled = true
+			return nil
+		case !errors.Is(err, ports.ErrNotFound):
+			return fmt.Errorf("ensure by email %q in redis: %w", input.Email, err)
+		}
+
+		userID, err := store.loadLookupUserID(operationCtx, tx, emailLookupKey)
+		switch {
+		case err == nil:
+			record, err := store.loadAccount(operationCtx, tx, userID)
+			if err != nil {
+				return fmt.Errorf("ensure by email %q in redis: %w", input.Email, err)
+			}
+			result = ports.EnsureByEmailResult{
+				Outcome: ports.EnsureByEmailOutcomeExisting,
+				UserID:  record.UserID,
+			}
+			handled = true
+			return nil
+		case !errors.Is(err, ports.ErrNotFound):
+			return fmt.Errorf("ensure by email %q in redis: %w", input.Email, err)
+		}
+
+		if err := ensureKeyAbsent(operationCtx, tx, accountKey); err != nil {
+			return fmt.Errorf("ensure by email %q in redis: %w", input.Email, err)
+		}
+		if err := ensureKeyAbsent(operationCtx, tx, raceNameLookupKey); err != nil {
+			if errors.Is(err, ports.ErrConflict) {
+				return fmt.Errorf("ensure by email %q in redis: %w", input.Email, ports.ErrRaceNameConflict)
+			}
+			return fmt.Errorf("ensure by email %q in redis: %w", input.Email, err)
+		}
+		if err := ensureKeyAbsent(operationCtx, tx, reservationKey); err != nil {
+			if errors.Is(err, ports.ErrConflict) {
+				return fmt.Errorf("ensure by email %q in redis: %w", input.Email, ports.ErrRaceNameConflict)
+			}
+			return fmt.Errorf("ensure by email %q in redis: %w", input.Email, err)
+		}
+		if err := ensureKeyAbsent(operationCtx, tx, entitlementKey); err != nil {
+			return fmt.Errorf("ensure by email %q in redis: %w", input.Email, err)
+		}
+		if err := ensureKeyAbsent(operationCtx, tx, entitlementRecordKey); err != nil {
+			return fmt.Errorf("ensure by email %q in redis: %w", input.Email, err)
+		}
+
+		_, err = tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error {
+			pipe.Set(operationCtx, accountKey, accountPayload, 0)
+			pipe.Set(operationCtx, emailLookupKey, input.Account.UserID.String(), 0)
+			pipe.Set(operationCtx, raceNameLookupKey, input.Account.UserID.String(), 0)
+			pipe.Set(operationCtx, reservationKey, reservationPayload, 0)
+			pipe.Set(operationCtx, entitlementKey, entitlementPayload, 0)
+			pipe.Set(operationCtx, entitlementRecordKey, entitlementRecordPayload, 0)
+			pipe.ZAdd(operationCtx, entitlementHistoryKey, redis.Z{
+				Score:  float64(input.EntitlementRecord.StartsAt.UTC().UnixMicro()),
+				Member: input.EntitlementRecord.RecordID.String(),
+			})
+			store.addCreatedAtIndex(pipe, operationCtx, input.Account)
+			store.syncDeclaredCountryIndex(pipe, operationCtx, account.UserAccount{}, input.Account)
+			store.syncEntitlementIndexes(pipe, operationCtx, input.Entitlement)
+			store.syncActiveSanctionCodeIndexes(pipe, operationCtx, input.Account.UserID, map[policy.SanctionCode]struct{}{})
+			store.syncActiveLimitCodeIndexes(pipe, operationCtx, input.Account.UserID, map[policy.LimitCode]struct{}{})
+			store.syncEligibilityMarkerIndexes(pipe, operationCtx, input.Account.UserID, input.Entitlement.IsPaid, map[policy.SanctionCode]struct{}{})
+			return nil
+		})
+		if err != nil {
+			return fmt.Errorf("ensure by email %q in redis: %w", input.Email, err)
+		}
+
+		result = ports.EnsureByEmailResult{
+			Outcome: ports.EnsureByEmailOutcomeCreated,
+			UserID:  input.Account.UserID,
+		}
+		handled = true
+		return nil
+	}, blockedEmailKey, emailLookupKey, accountKey, raceNameLookupKey, reservationKey, entitlementKey, entitlementRecordKey, entitlementHistoryKey)
+
+	switch {
+	case errors.Is(watchErr, redis.TxFailedErr):
+		return ports.EnsureByEmailResult{}, fmt.Errorf("ensure by email %q in redis: %w", input.Email, ports.ErrConflict)
+	case watchErr != nil:
+		return ports.EnsureByEmailResult{}, watchErr
+	case !handled:
+		return ports.EnsureByEmailResult{}, fmt.Errorf("ensure by email %q in redis: unhandled watch result", input.Email)
+	default:
+		return result, nil
+	}
+}
+
+// BlockByUserID applies a block state to the account identified by userID.
+func (store *Store) BlockByUserID(ctx context.Context, input ports.BlockByUserIDInput) (ports.BlockResult, error) {
+	if err := input.Validate(); err != nil {
+		return ports.BlockResult{}, fmt.Errorf("block by user id in redis: %w", err)
+	}
+
+	operationCtx, cancel, err := store.operationContext(ctx, "block by user id in redis")
+	if err != nil {
+		return ports.BlockResult{}, err
+	}
+	defer cancel()
+
+	var result ports.BlockResult
+	var handled bool
+
+	currentAccount, err := store.loadAccount(operationCtx, store.client, input.UserID)
+	if err != nil {
+		if errors.Is(err, ports.ErrNotFound) {
+			return ports.BlockResult{}, fmt.Errorf("block by user id %q in redis: %w", input.UserID, ports.ErrNotFound)
+		}
+		return ports.BlockResult{}, fmt.Errorf("block by user id %q in redis: %w", input.UserID, err)
+	}
+
+	accountKey := store.keyspace.Account(input.UserID)
+	blockedEmailKey := store.keyspace.BlockedEmailSubject(currentAccount.Email)
+
+	watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error {
+		accountRecord, err := store.loadAccount(operationCtx, tx, input.UserID)
+		if err != nil {
+			return fmt.Errorf("block by user id %q in redis: %w", input.UserID, err)
+		}
+
+		blocked, err := store.loadBlockedEmail(operationCtx, tx, accountRecord.Email)
+		switch {
+		case err == nil:
+			result = ports.BlockResult{
+				Outcome: ports.AuthBlockOutcomeAlreadyBlocked,
+				UserID:  input.UserID,
+			}
+			if !blocked.ResolvedUserID.IsZero() {
+				result.UserID = blocked.ResolvedUserID
+			}
+			handled = true
+			return nil
+		case !errors.Is(err, ports.ErrNotFound):
+			return fmt.Errorf("block by user id %q in redis: %w", input.UserID, err)
+		}
+
+		record := authblock.BlockedEmailSubject{
+			Email:          accountRecord.Email,
+			ReasonCode:     input.ReasonCode,
+			BlockedAt:      input.BlockedAt.UTC(),
+			ResolvedUserID: input.UserID,
+		}
+		payload, err := marshalBlockedEmailRecord(record)
+		if err != nil {
+			return fmt.Errorf("block by user id %q in redis: %w", input.UserID, err)
+		}
+
+		_, err = tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error {
+			pipe.Set(operationCtx, blockedEmailKey, payload, 0)
+			return nil
+		})
+		if err != nil {
+			return fmt.Errorf("block by user id %q in redis: %w", input.UserID, err)
+		}
+
+		result = ports.BlockResult{
+			Outcome: ports.AuthBlockOutcomeBlocked,
+			UserID:  input.UserID,
+		}
+		handled = true
+		return nil
+	}, accountKey, blockedEmailKey)
+
+	switch {
+	case errors.Is(watchErr, redis.TxFailedErr):
+		return ports.BlockResult{}, fmt.Errorf("block by user id %q in redis: %w", input.UserID, ports.ErrConflict)
+	case watchErr != nil:
+		if errors.Is(watchErr, ports.ErrNotFound) {
+			return ports.BlockResult{}, fmt.Errorf("block by user id %q in redis: %w", input.UserID, ports.ErrNotFound)
+		}
+		return ports.BlockResult{}, watchErr
+	case !handled:
+		return ports.BlockResult{}, fmt.Errorf("block by user id %q in redis: unhandled watch result", input.UserID)
+	default:
+		return result, nil
+	}
+}
+
+// BlockByEmail applies a block state to email even when no account exists yet.
+func (store *Store) BlockByEmail(ctx context.Context, input ports.BlockByEmailInput) (ports.BlockResult, error) {
+	if err := input.Validate(); err != nil {
+		return ports.BlockResult{}, fmt.Errorf("block by email in redis: %w", err)
+	}
+
+	operationCtx, cancel, err := store.operationContext(ctx, "block by email in redis")
+	if err != nil {
+		return ports.BlockResult{}, err
+	}
+	defer cancel()
+
+	var result ports.BlockResult
+	var handled bool
+
+	blockedEmailKey := store.keyspace.BlockedEmailSubject(input.Email)
+	emailLookupKey := store.keyspace.EmailLookup(input.Email)
+
+	watchErr := store.client.Watch(operationCtx, func(tx *redis.Tx) error {
+		blocked, err := store.loadBlockedEmail(operationCtx, tx, input.Email)
+		switch {
+		case err == nil:
+			result = ports.BlockResult{
+				Outcome: ports.AuthBlockOutcomeAlreadyBlocked,
+				UserID:  blocked.ResolvedUserID,
+			}
+			handled = true
+			return nil
+		case !errors.Is(err, ports.ErrNotFound):
+			return fmt.Errorf("block by email %q in redis: %w", input.Email, err)
+		}
+
+		resolvedUserID, err := store.loadLookupUserID(operationCtx, tx, emailLookupKey)
+		switch {
+		case err == nil:
+			if _, err := store.loadAccount(operationCtx, tx, resolvedUserID); err != nil {
+				return fmt.Errorf("block by email %q in redis: %w", input.Email, err)
+			}
+		case !errors.Is(err, ports.ErrNotFound):
+			return fmt.Errorf("block by email %q in redis: %w", input.Email, err)
+		default:
+			resolvedUserID = ""
+		}
+
+		record := authblock.BlockedEmailSubject{
+			Email:      input.Email,
+			ReasonCode: input.ReasonCode,
+			BlockedAt:  input.BlockedAt.UTC(),
+		}
+		if !resolvedUserID.IsZero() {
+			record.ResolvedUserID = resolvedUserID
+		}
+		payload, err := marshalBlockedEmailRecord(record)
+		if err != nil {
+			return fmt.Errorf("block by email %q in redis: %w", input.Email, err)
+		}
+
+		_, err = tx.TxPipelined(operationCtx, func(pipe redis.Pipeliner) error {
+			pipe.Set(operationCtx, blockedEmailKey, payload, 0)
+			return nil
+		})
+		if err != nil {
+			return fmt.Errorf("block by email %q in redis: %w", input.Email, err)
+		}
+
+		result = ports.BlockResult{
+			Outcome: ports.AuthBlockOutcomeBlocked,
+			UserID:  resolvedUserID,
+		}
+		handled = true
+		return nil
+	}, blockedEmailKey, emailLookupKey)
+
+	switch {
+	case errors.Is(watchErr, redis.TxFailedErr):
+		return ports.BlockResult{}, fmt.Errorf("block by email %q in redis: %w", input.Email, ports.ErrConflict)
+	case watchErr != nil:
+		return ports.BlockResult{}, watchErr
+	case !handled:
+		return ports.BlockResult{}, fmt.Errorf("block by email %q in redis: unhandled watch result", input.Email)
+	default:
+		return result, nil
+	}
+}
+
+func (store *Store) GetByEmailAccount(ctx context.Context, email common.Email) (account.UserAccount, error) {
+	userID, err := store.loadLookupUserID(ctx, store.client, store.keyspace.EmailLookup(email))
+	if err != nil {
+		return account.UserAccount{}, err
+	}
+
+	return store.loadAccount(ctx, store.client, userID)
+}
+
+func (store *Store) loadAccount(ctx context.Context, getter bytesGetter, userID common.UserID) (account.UserAccount, error) {
+	payload, err := getter.Get(ctx, store.keyspace.Account(userID)).Bytes()
+	switch {
+	case errors.Is(err, redis.Nil):
+		return account.UserAccount{}, ports.ErrNotFound
+	case err != nil:
+		return account.UserAccount{}, err
+	}
+
+	return decodeAccountRecord(payload)
+}
+
+func (store *Store) loadLookupUserID(ctx context.Context, getter bytesGetter, key string) (common.UserID, error) {
+	value, err := getter.Get(ctx, key).Result()
+	switch {
+	case errors.Is(err, redis.Nil):
+		return "", ports.ErrNotFound
+	case err != nil:
+		return "", err
+	}
+
+	userID := common.UserID(value)
+	if err := userID.Validate(); err != nil {
+		return "", fmt.Errorf("lookup user id: %w", err)
+	}
+
+	return userID, nil
+}
+
+func (store *Store) loadRaceNameReservation(
+	ctx context.Context,
+	getter bytesGetter,
+	key account.RaceNameCanonicalKey,
+) (account.RaceNameReservation, error) {
+	payload, err := getter.Get(ctx, store.keyspace.RaceNameReservation(key)).Bytes()
+	switch {
+	case errors.Is(err, redis.Nil):
+		return account.RaceNameReservation{}, ports.ErrNotFound
+	case err != nil:
+		return account.RaceNameReservation{}, err
+	}
+
+	return decodeRaceNameReservationRecord(payload)
+}
+
+func (store *Store) loadBlockedEmail(ctx context.Context, getter bytesGetter, email common.Email) (authblock.BlockedEmailSubject, error) {
+	payload, err := getter.Get(ctx, store.keyspace.BlockedEmailSubject(email)).Bytes()
+	switch {
+	case errors.Is(err, redis.Nil):
+		return authblock.BlockedEmailSubject{}, ports.ErrNotFound
+	case err != nil:
+		return authblock.BlockedEmailSubject{}, err
+	}
+
+	return decodeBlockedEmailRecord(payload)
+}
+
+func (store *Store) loadEntitlementSnapshot(ctx context.Context, getter bytesGetter, userID common.UserID) (entitlement.CurrentSnapshot, error) {
+	payload, err := getter.Get(ctx, store.keyspace.EntitlementSnapshot(userID)).Bytes()
+	switch {
+	case errors.Is(err, redis.Nil):
+		return entitlement.CurrentSnapshot{}, ports.ErrNotFound
+	case err != nil:
+		return entitlement.CurrentSnapshot{}, err
+	}
+
+	return decodeEntitlementSnapshotRecord(payload)
+}
+
+func (store *Store) loadSanctionRecord(ctx context.Context, getter bytesGetter, recordID policy.SanctionRecordID) (policy.SanctionRecord, error) {
+	payload, err := getter.Get(ctx, store.keyspace.SanctionRecord(recordID)).Bytes()
+	switch {
+	case errors.Is(err, redis.Nil):
+		return policy.SanctionRecord{}, ports.ErrNotFound
+	case err != nil:
+		return policy.SanctionRecord{}, err
+	}
+
+	return decodeSanctionRecord(payload)
+}
+
+func (store *Store) loadLimitRecord(ctx context.Context, getter bytesGetter, recordID policy.LimitRecordID) (policy.LimitRecord, error) {
+	payload, err := getter.Get(ctx, store.keyspace.LimitRecord(recordID)).Bytes()
+	switch {
+	case errors.Is(err, redis.Nil):
+		return policy.LimitRecord{}, ports.ErrNotFound
+	case err != nil:
+		return policy.LimitRecord{}, err
+	}
+
+	return decodeLimitRecord(payload)
+}
+
+func (store *Store) operationContext(ctx context.Context, operation string) (context.Context, context.CancelFunc, error) {
+	if store == nil || store.client == nil {
+		return nil, nil, fmt.Errorf("%s: nil store", operation)
+	}
+	if ctx == nil {
+		return nil, nil, fmt.Errorf("%s: nil context", operation)
+	}
+
+	operationCtx, cancel := context.WithTimeout(ctx, store.operationTimeout)
+	return operationCtx, cancel, nil
+}
+
+func ensureKeyAbsent(ctx context.Context, getter bytesGetter, key string) error {
+	_, err := getter.Get(ctx, key).Bytes()
+	switch {
+	case errors.Is(err, redis.Nil):
+		return nil
+	case err != nil:
+		return err
+	default:
+		return ports.ErrConflict
+	}
+}
+
+func ensureLookupAvailableOrOwned(
+	ctx context.Context,
+	getter bytesGetter,
+	key string,
+	userID common.UserID,
+) error {
+	currentUserID, err := getter.Get(ctx, key).Result()
+	switch {
+	case errors.Is(err, redis.Nil):
+		return nil
+	case err != nil:
+		return err
+	}
+
+	if currentUserID != userID.String() {
+		return ports.ErrConflict
+	}
+
+	return nil
+}
+
+func (store *Store) ensureReservationAvailableOrOwned(
+	ctx context.Context,
+	getter bytesGetter,
+	key account.RaceNameCanonicalKey,
+	userID common.UserID,
+) error {
+	record, err := store.loadRaceNameReservation(ctx, getter, key)
+	switch {
+	case errors.Is(err, ports.ErrNotFound):
+		return nil
+	case err != nil:
+		return err
+	}
+
+	if record.UserID != userID {
+		return ports.ErrConflict
+	}
+
+	return nil
+}
+
+func marshalAccountRecord(record account.UserAccount) ([]byte, error) {
+	encoded := accountRecord{
+		UserID:            record.UserID.String(),
+		Email:             record.Email.String(),
+		RaceName:          record.RaceName.String(),
+		PreferredLanguage: record.PreferredLanguage.String(),
+		TimeZone:          record.TimeZone.String(),
+		CreatedAt:         record.CreatedAt.UTC().Format(time.RFC3339Nano),
+		UpdatedAt:         record.UpdatedAt.UTC().Format(time.RFC3339Nano),
+	}
+	if !record.DeclaredCountry.IsZero() {
+		value := record.DeclaredCountry.String()
+		encoded.DeclaredCountry = &value
+	}
+
+	return json.Marshal(encoded)
+}
+
+func decodeAccountRecord(payload []byte) (account.UserAccount, error) {
+	var encoded accountRecord
+	if err := decodeJSONPayload(payload, &encoded); err != nil {
+		return account.UserAccount{}, err
+	}
+
+	createdAt, err := time.Parse(time.RFC3339Nano, encoded.CreatedAt)
+	if err != nil {
+		return account.UserAccount{}, fmt.Errorf("decode account record created_at: %w", err)
+	}
+	updatedAt, err := time.Parse(time.RFC3339Nano, encoded.UpdatedAt)
+	if err != nil {
+		return account.UserAccount{}, fmt.Errorf("decode account record updated_at: %w", err)
+	}
+
+	record := account.UserAccount{
+		UserID:            common.UserID(encoded.UserID),
+		Email:             common.Email(encoded.Email),
+		RaceName:          common.RaceName(encoded.RaceName),
+		PreferredLanguage: common.LanguageTag(encoded.PreferredLanguage),
+		TimeZone:          common.TimeZoneName(encoded.TimeZone),
+		CreatedAt:         createdAt.UTC(),
+		UpdatedAt:         updatedAt.UTC(),
+	}
+	if encoded.DeclaredCountry != nil {
+		record.DeclaredCountry = common.CountryCode(*encoded.DeclaredCountry)
+	}
+	if err := record.Validate(); err != nil {
+		return account.UserAccount{}, fmt.Errorf("decode account record: %w", err)
+	}
+
+	return record, nil
+}
+
+func marshalRaceNameReservationRecord(record account.RaceNameReservation) ([]byte, error) {
+	encoded := raceNameReservationRecord{
+		CanonicalKey: record.CanonicalKey.String(),
+		UserID:       record.UserID.String(),
+		RaceName:     record.RaceName.String(),
+		ReservedAt:   record.ReservedAt.UTC().Format(time.RFC3339Nano),
+	}
+
+	return json.Marshal(encoded)
+}
+
+func decodeRaceNameReservationRecord(payload []byte) (account.RaceNameReservation, error) {
+	var encoded raceNameReservationRecord
+	if err := decodeJSONPayload(payload, &encoded); err != nil {
+		return account.RaceNameReservation{}, err
+	}
+
+	reservedAt, err := time.Parse(time.RFC3339Nano, encoded.ReservedAt)
+	if err != nil {
+		return account.RaceNameReservation{}, fmt.Errorf("decode race-name reservation reserved_at: %w", err)
+	}
+
+	record := account.RaceNameReservation{
+		CanonicalKey: account.RaceNameCanonicalKey(encoded.CanonicalKey),
+		UserID:       common.UserID(encoded.UserID),
+		RaceName:     common.RaceName(encoded.RaceName),
+		ReservedAt:   reservedAt.UTC(),
+	}
+	if err := record.Validate(); err != nil {
+		return account.RaceNameReservation{}, fmt.Errorf("decode race-name reservation: %w", err)
+	}
+
+	return record, nil
+}
+
+func marshalBlockedEmailRecord(record authblock.BlockedEmailSubject) ([]byte, error) {
+	encoded := blockedEmailRecord{
+		Email:      record.Email.String(),
+		ReasonCode: record.ReasonCode.String(),
+		BlockedAt:  record.BlockedAt.UTC().Format(time.RFC3339Nano),
+	}
+	if !record.Actor.IsZero() {
+		actorType := record.Actor.Type.String()
+		encoded.ActorType = &actorType
+		if !record.Actor.ID.IsZero() {
+			actorID := record.Actor.ID.String()
+			encoded.ActorID = &actorID
+		}
+	}
+	if !record.ResolvedUserID.IsZero() {
+		resolvedUserID := record.ResolvedUserID.String()
+		encoded.ResolvedUserID = &resolvedUserID
+	}
+
+	return json.Marshal(encoded)
+}
+
+func decodeBlockedEmailRecord(payload []byte) (authblock.BlockedEmailSubject, error) {
+	var encoded blockedEmailRecord
+	if err := decodeJSONPayload(payload, &encoded); err != nil {
+		return authblock.BlockedEmailSubject{}, err
+	}
+
+	blockedAt, err := time.Parse(time.RFC3339Nano, encoded.BlockedAt)
+	if err != nil {
+		return authblock.BlockedEmailSubject{}, fmt.Errorf("decode blocked email record blocked_at: %w", err)
+	}
+
+	record := authblock.BlockedEmailSubject{
+		Email:      common.Email(encoded.Email),
+		ReasonCode: common.ReasonCode(encoded.ReasonCode),
+		BlockedAt:  blockedAt.UTC(),
+	}
+	if encoded.ActorType != nil {
+		record.Actor.Type = common.ActorType(*encoded.ActorType)
+	}
+	if encoded.ActorID != nil {
+		record.Actor.ID = common.ActorID(*encoded.ActorID)
+	}
+	if encoded.ResolvedUserID != nil {
+		record.ResolvedUserID = common.UserID(*encoded.ResolvedUserID)
+	}
+	if err := record.Validate(); err != nil {
+		return authblock.BlockedEmailSubject{}, fmt.Errorf("decode blocked email record: %w", err)
+	}
+
+	return record, nil
+}
+
+func marshalEntitlementSnapshotRecord(record entitlement.CurrentSnapshot) ([]byte, error) {
+	encoded := entitlementSnapshotRecord{
+		UserID:     record.UserID.String(),
+		PlanCode:   string(record.PlanCode),
+		IsPaid:     record.IsPaid,
+		StartsAt:   record.StartsAt.UTC().Format(time.RFC3339Nano),
+		Source:     record.Source.String(),
+		ActorType:  record.Actor.Type.String(),
+		ReasonCode: record.ReasonCode.String(),
+		UpdatedAt:  record.UpdatedAt.UTC().Format(time.RFC3339Nano),
+	}
+	if record.EndsAt != nil {
+		value := record.EndsAt.UTC().Format(time.RFC3339Nano)
+		encoded.EndsAt = &value
+	}
+	if !record.Actor.ID.IsZero() {
+		value := record.Actor.ID.String()
+		encoded.ActorID = &value
+	}
+
+	return json.Marshal(encoded)
+}
+
+func decodeEntitlementSnapshotRecord(payload []byte) (entitlement.CurrentSnapshot, error) {
+	var encoded entitlementSnapshotRecord
+	if err := decodeJSONPayload(payload, &encoded); err != nil {
+		return entitlement.CurrentSnapshot{}, err
+	}
+
+	startsAt, err := time.Parse(time.RFC3339Nano, encoded.StartsAt)
+	if err != nil {
+		return entitlement.CurrentSnapshot{}, fmt.Errorf("decode entitlement snapshot record starts_at: %w", err)
+	}
+	updatedAt, err := time.Parse(time.RFC3339Nano, encoded.UpdatedAt)
+	if err != nil {
+		return entitlement.CurrentSnapshot{}, fmt.Errorf("decode entitlement snapshot record updated_at: %w", err)
+	}
+
+	record := entitlement.CurrentSnapshot{
+		UserID:     common.UserID(encoded.UserID),
+		PlanCode:   entitlement.PlanCode(encoded.PlanCode),
+		IsPaid:     encoded.IsPaid,
+		StartsAt:   startsAt.UTC(),
+		Source:     common.Source(encoded.Source),
+		Actor:      common.ActorRef{Type: common.ActorType(encoded.ActorType)},
+		ReasonCode: common.ReasonCode(encoded.ReasonCode),
+		UpdatedAt:  updatedAt.UTC(),
+	}
+	if encoded.ActorID != nil {
+		record.Actor.ID = common.ActorID(*encoded.ActorID)
+	}
+	if encoded.EndsAt != nil {
+		value, err := time.Parse(time.RFC3339Nano, *encoded.EndsAt)
+		if err != nil {
+			return entitlement.CurrentSnapshot{}, fmt.Errorf("decode entitlement snapshot record ends_at: %w", err)
+		}
+		value = value.UTC()
+		record.EndsAt = &value
+	}
+	if err := record.Validate(); err != nil {
+		return entitlement.CurrentSnapshot{}, fmt.Errorf("decode entitlement snapshot record: %w", err)
+	}
+
+	return record, nil
+}
+
+func marshalSanctionRecord(record policy.SanctionRecord) ([]byte, error) {
+	encoded := sanctionRecord{
+		RecordID:     record.RecordID.String(),
+		UserID:       record.UserID.String(),
+		SanctionCode: string(record.SanctionCode),
+		Scope:        record.Scope.String(),
+		ReasonCode:   record.ReasonCode.String(),
+		ActorType:    record.Actor.Type.String(),
+		AppliedAt:    record.AppliedAt.UTC().Format(time.RFC3339Nano),
+	}
+	if !record.Actor.ID.IsZero() {
+		value := record.Actor.ID.String()
+		encoded.ActorID = &value
+	}
+	if record.ExpiresAt != nil {
+		value := record.ExpiresAt.UTC().Format(time.RFC3339Nano)
+		encoded.ExpiresAt = &value
+	}
+	if record.RemovedAt != nil {
+		value := record.RemovedAt.UTC().Format(time.RFC3339Nano)
+		encoded.RemovedAt = &value
+	}
+	if !record.RemovedBy.Type.IsZero() {
+		value := record.RemovedBy.Type.String()
+		encoded.RemovedByType = &value
+	}
+	if !record.RemovedBy.ID.IsZero() {
+		value := record.RemovedBy.ID.String()
+		encoded.RemovedByID = &value
+	}
+	if !record.RemovedReasonCode.IsZero() {
+		value := record.RemovedReasonCode.String()
+		encoded.RemovedReasonCode = &value
+	}
+
+	return json.Marshal(encoded)
+}
+
+func decodeSanctionRecord(payload []byte) (policy.SanctionRecord, error) {
+	var encoded sanctionRecord
+	if err := decodeJSONPayload(payload, &encoded); err != nil {
+		return policy.SanctionRecord{}, err
+	}
+
+	appliedAt, err := time.Parse(time.RFC3339Nano, encoded.AppliedAt)
+	if err != nil {
+		return policy.SanctionRecord{}, fmt.Errorf("decode sanction record applied_at: %w", err)
+	}
+
+	record := policy.SanctionRecord{
+		RecordID:     policy.SanctionRecordID(encoded.RecordID),
+		UserID:       common.UserID(encoded.UserID),
+		SanctionCode: policy.SanctionCode(encoded.SanctionCode),
+		Scope:        common.Scope(encoded.Scope),
+		ReasonCode:   common.ReasonCode(encoded.ReasonCode),
+		Actor:        common.ActorRef{Type: common.ActorType(encoded.ActorType)},
+		AppliedAt:    appliedAt.UTC(),
+	}
+	if encoded.ActorID != nil {
+		record.Actor.ID = common.ActorID(*encoded.ActorID)
+	}
+	if encoded.ExpiresAt != nil {
+		value, err := time.Parse(time.RFC3339Nano, *encoded.ExpiresAt)
+		if err != nil {
+			return policy.SanctionRecord{}, fmt.Errorf("decode sanction record expires_at: %w", err)
+		}
+		value = value.UTC()
+		record.ExpiresAt = &value
+	}
+	if encoded.RemovedAt != nil {
+		value, err := time.Parse(time.RFC3339Nano, *encoded.RemovedAt)
+		if err != nil {
+			return policy.SanctionRecord{}, fmt.Errorf("decode sanction record removed_at: %w", err)
+		}
+		value = value.UTC()
+		record.RemovedAt = &value
+	}
+	if encoded.RemovedByType != nil {
+		record.RemovedBy.Type = common.ActorType(*encoded.RemovedByType)
+	}
+	if encoded.RemovedByID != nil {
+		record.RemovedBy.ID = common.ActorID(*encoded.RemovedByID)
+	}
+	if encoded.RemovedReasonCode != nil {
+		record.RemovedReasonCode = common.ReasonCode(*encoded.RemovedReasonCode)
+	}
+	if err := record.Validate(); err != nil {
+		return policy.SanctionRecord{}, fmt.Errorf("decode sanction record: %w", err)
+	}
+
+	return record, nil
+}
+
+func marshalLimitRecord(record policy.LimitRecord) ([]byte, error) {
+	encoded := limitRecord{
+		RecordID:   record.RecordID.String(),
+		UserID:     record.UserID.String(),
+		LimitCode:  string(record.LimitCode),
+		Value:      record.Value,
+		ReasonCode: record.ReasonCode.String(),
+		ActorType:  record.Actor.Type.String(),
+		AppliedAt:  record.AppliedAt.UTC().Format(time.RFC3339Nano),
+	}
+	if !record.Actor.ID.IsZero() {
+		value := record.Actor.ID.String()
+		encoded.ActorID = &value
+	}
+	if record.ExpiresAt != nil {
+		value := record.ExpiresAt.UTC().Format(time.RFC3339Nano)
+		encoded.ExpiresAt = &value
+	}
+	if record.RemovedAt != nil {
+		value := record.RemovedAt.UTC().Format(time.RFC3339Nano)
+		encoded.RemovedAt = &value
+	}
+	if !record.RemovedBy.Type.IsZero() {
+		value := record.RemovedBy.Type.String()
+		encoded.RemovedByType = &value
+	}
+	if !record.RemovedBy.ID.IsZero() {
+		value := record.RemovedBy.ID.String()
+		encoded.RemovedByID = &value
+	}
+	if !record.RemovedReasonCode.IsZero() {
+		value := record.RemovedReasonCode.String()
+		encoded.RemovedReasonCode = &value
+	}
+
+	return json.Marshal(encoded)
+}
+
+func decodeLimitRecord(payload []byte) (policy.LimitRecord, error) {
+	var encoded limitRecord
+	if err := decodeJSONPayload(payload, &encoded); err != nil {
+		return policy.LimitRecord{}, err
+	}
+
+	appliedAt, err := time.Parse(time.RFC3339Nano, encoded.AppliedAt)
+	if err != nil {
+		return policy.LimitRecord{}, fmt.Errorf("decode limit record applied_at: %w", err)
+	}
+
+	record := policy.LimitRecord{
+		RecordID:   policy.LimitRecordID(encoded.RecordID),
+		UserID:     common.UserID(encoded.UserID),
+		LimitCode:  policy.LimitCode(encoded.LimitCode),
+		Value:      encoded.Value,
+		ReasonCode: common.ReasonCode(encoded.ReasonCode),
+		Actor:      common.ActorRef{Type: common.ActorType(encoded.ActorType)},
+		AppliedAt:  appliedAt.UTC(),
+	}
+	if encoded.ActorID != nil {
+		record.Actor.ID = common.ActorID(*encoded.ActorID)
+	}
+	if encoded.ExpiresAt != nil {
+		value, err := time.Parse(time.RFC3339Nano, *encoded.ExpiresAt)
+		if err != nil {
+			return policy.LimitRecord{}, fmt.Errorf("decode limit record expires_at: %w", err)
+		}
+		value = value.UTC()
+		record.ExpiresAt = &value
+	}
+	if encoded.RemovedAt != nil {
+		value, err := time.Parse(time.RFC3339Nano, *encoded.RemovedAt)
+		if err != nil {
+			return policy.LimitRecord{}, fmt.Errorf("decode limit record removed_at: %w", err)
+		}
+		value = value.UTC()
+		record.RemovedAt = &value
+	}
+	if encoded.RemovedByType != nil {
+		record.RemovedBy.Type = common.ActorType(*encoded.RemovedByType)
+	}
+	if encoded.RemovedByID != nil {
+		record.RemovedBy.ID = common.ActorID(*encoded.RemovedByID)
+	}
+	if encoded.RemovedReasonCode != nil {
+		record.RemovedReasonCode = common.ReasonCode(*encoded.RemovedReasonCode)
+	}
+	if err := record.Validate(); err != nil {
+		return policy.LimitRecord{}, fmt.Errorf("decode limit record: %w", err)
+	}
+
+	return record, nil
+}
+
+func decodeJSONPayload(payload []byte, target any) error {
+	decoder := json.NewDecoder(bytes.NewReader(payload))
+	decoder.DisallowUnknownFields()
+
+	if err := decoder.Decode(target); err != nil {
+		return fmt.Errorf("decode JSON payload: %w", err)
+	}
+	if err := decoder.Decode(&struct{}{}); err != io.EOF {
+		if err == nil {
+			return errors.New("decode JSON payload: unexpected trailing JSON input")
+		}
+
+		return fmt.Errorf("decode JSON payload: %w", err)
+	}
+
+	return nil
+}
+
+var (
+	_ ports.AuthDirectoryStore = (*Store)(nil)
+)
+
+// AccountStore adapts Store to the existing UserAccountStore port.
+type AccountStore struct {
+	store *Store
+}
+
+// Accounts returns one adapter that exposes the existing user-account store
+// port over Store.
+func (store *Store) Accounts() *AccountStore {
+	if store == nil {
+		return nil
+	}
+
+	return &AccountStore{store: store}
+}
+
+// Create stores one new account record.
+func (adapter *AccountStore) Create(ctx context.Context, input ports.CreateAccountInput) error {
+	return adapter.store.Create(ctx, input)
+}
+
+// GetByUserID returns the stored account identified by userID.
+func (adapter *AccountStore) GetByUserID(ctx context.Context, userID common.UserID) (account.UserAccount, error) {
+	return adapter.store.GetByUserID(ctx, userID)
+}
+
+// GetByEmail returns the stored account identified by email.
+func (adapter *AccountStore) GetByEmail(ctx context.Context, email common.Email) (account.UserAccount, error) {
+	return adapter.store.GetByEmail(ctx, email)
+}
+
+// GetByRaceName returns the stored account identified by raceName.
+func (adapter *AccountStore) GetByRaceName(ctx context.Context, raceName common.RaceName) (account.UserAccount, error) {
+	return adapter.store.GetByRaceName(ctx, raceName)
+}
+
+// ExistsByUserID reports whether userID currently identifies a stored
+// account.
+func (adapter *AccountStore) ExistsByUserID(ctx context.Context, userID common.UserID) (bool, error) {
+	return adapter.store.ExistsByUserID(ctx, userID)
+}
+
+// RenameRaceName replaces the stored race name of userID atomically.
+func (adapter *AccountStore) RenameRaceName(ctx context.Context, input ports.RenameRaceNameInput) error {
+	return adapter.store.RenameRaceName(ctx, input)
+}
+
+// Update replaces the stored account state for record.UserID.
+func (adapter *AccountStore) Update(ctx context.Context, record account.UserAccount) error {
+	return adapter.store.Update(ctx, record)
+}
+
+var _ ports.UserAccountStore = (*AccountStore)(nil)
+
+// BlockedEmailStore adapts Store to the existing BlockedEmailStore port.
+type BlockedEmailStore struct {
+	store *Store
+}
+
+// BlockedEmails returns one adapter that exposes the existing blocked-email
+// store port over Store.
+func (store *Store) BlockedEmails() *BlockedEmailStore {
+	if store == nil {
+		return nil
+	}
+
+	return &BlockedEmailStore{store: store}
+}
+
+// GetByEmail returns the blocked-email subject for email.
+func (adapter *BlockedEmailStore) GetByEmail(ctx context.Context, email common.Email) (authblock.BlockedEmailSubject, error) {
+	return adapter.store.GetBlockedEmail(ctx, email)
+}
+
+// Upsert stores or replaces the blocked-email subject for record.Email.
+func (adapter *BlockedEmailStore) Upsert(ctx context.Context, record authblock.BlockedEmailSubject) error { + return adapter.store.PutBlockedEmail(ctx, record) +} + +var _ ports.BlockedEmailStore = (*BlockedEmailStore)(nil) + +// EntitlementSnapshotStore adapts Store to the existing +// EntitlementSnapshotStore port. +type EntitlementSnapshotStore struct { + store *Store +} + +// EntitlementSnapshots returns one adapter that exposes the existing +// entitlement-snapshot store port over Store. +func (store *Store) EntitlementSnapshots() *EntitlementSnapshotStore { + if store == nil { + return nil + } + + return &EntitlementSnapshotStore{store: store} +} + +// GetByUserID returns the current entitlement snapshot for userID. +func (adapter *EntitlementSnapshotStore) GetByUserID(ctx context.Context, userID common.UserID) (entitlement.CurrentSnapshot, error) { + return adapter.store.GetEntitlementByUserID(ctx, userID) +} + +// Put stores the current entitlement snapshot for record.UserID. +func (adapter *EntitlementSnapshotStore) Put(ctx context.Context, record entitlement.CurrentSnapshot) error { + return adapter.store.PutEntitlement(ctx, record) +} + +var _ ports.EntitlementSnapshotStore = (*EntitlementSnapshotStore)(nil) + +// SanctionStore adapts Store to the existing SanctionStore port. +type SanctionStore struct { + store *Store +} + +// Sanctions returns one adapter that exposes the sanction store port over +// Store. +func (store *Store) Sanctions() *SanctionStore { + if store == nil { + return nil + } + + return &SanctionStore{store: store} +} + +// Create stores one new sanction history record. +func (adapter *SanctionStore) Create(ctx context.Context, record policy.SanctionRecord) error { + return adapter.store.CreateSanction(ctx, record) +} + +// GetByRecordID returns the sanction history record identified by recordID. 
+func (adapter *SanctionStore) GetByRecordID(ctx context.Context, recordID policy.SanctionRecordID) (policy.SanctionRecord, error) { + return adapter.store.GetSanctionByRecordID(ctx, recordID) +} + +// ListByUserID returns every sanction history record owned by userID. +func (adapter *SanctionStore) ListByUserID(ctx context.Context, userID common.UserID) ([]policy.SanctionRecord, error) { + return adapter.store.ListSanctionsByUserID(ctx, userID) +} + +// Update replaces one stored sanction history record. +func (adapter *SanctionStore) Update(ctx context.Context, record policy.SanctionRecord) error { + return adapter.store.UpdateSanction(ctx, record) +} + +var _ ports.SanctionStore = (*SanctionStore)(nil) + +// LimitStore adapts Store to the existing LimitStore port. +type LimitStore struct { + store *Store +} + +// Limits returns one adapter that exposes the limit store port over Store. +func (store *Store) Limits() *LimitStore { + if store == nil { + return nil + } + + return &LimitStore{store: store} +} + +// Create stores one new limit history record. +func (adapter *LimitStore) Create(ctx context.Context, record policy.LimitRecord) error { + return adapter.store.CreateLimit(ctx, record) +} + +// GetByRecordID returns the limit history record identified by recordID. +func (adapter *LimitStore) GetByRecordID(ctx context.Context, recordID policy.LimitRecordID) (policy.LimitRecord, error) { + return adapter.store.GetLimitByRecordID(ctx, recordID) +} + +// ListByUserID returns every limit history record owned by userID. +func (adapter *LimitStore) ListByUserID(ctx context.Context, userID common.UserID) ([]policy.LimitRecord, error) { + return adapter.store.ListLimitsByUserID(ctx, userID) +} + +// Update replaces one stored limit history record. 
+func (adapter *LimitStore) Update(ctx context.Context, record policy.LimitRecord) error { + return adapter.store.UpdateLimit(ctx, record) +} + +var _ ports.LimitStore = (*LimitStore)(nil) diff --git a/user/internal/adapters/redis/userstore/store_test.go b/user/internal/adapters/redis/userstore/store_test.go new file mode 100644 index 0000000..33ee5c2 --- /dev/null +++ b/user/internal/adapters/redis/userstore/store_test.go @@ -0,0 +1,930 @@ +package userstore + +import ( + "context" + "strings" + "testing" + "time" + + "galaxy/user/internal/domain/account" + "galaxy/user/internal/domain/authblock" + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + "galaxy/user/internal/domain/policy" + "galaxy/user/internal/ports" + + "github.com/alicebob/miniredis/v2" + "github.com/stretchr/testify/require" +) + +func TestAccountStoreCreateAndLookups(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + accountStore := store.Accounts() + + record := validAccountRecord() + require.NoError(t, accountStore.Create(context.Background(), createAccountInput(record))) + + byUserID, err := accountStore.GetByUserID(context.Background(), record.UserID) + require.NoError(t, err) + require.Equal(t, record, byUserID) + + byEmail, err := accountStore.GetByEmail(context.Background(), record.Email) + require.NoError(t, err) + require.Equal(t, record, byEmail) + + byRaceName, err := accountStore.GetByRaceName(context.Background(), record.RaceName) + require.NoError(t, err) + require.Equal(t, record, byRaceName) + + exists, err := accountStore.ExistsByUserID(context.Background(), record.UserID) + require.NoError(t, err) + require.True(t, exists) + + reservation, err := store.loadRaceNameReservation(context.Background(), store.client, canonicalKey(record.RaceName)) + require.NoError(t, err) + require.Equal(t, record.UserID, reservation.UserID) + require.Equal(t, record.RaceName, reservation.RaceName) +} + +func TestBlockedEmailStoreUpsertAndGet(t *testing.T) 
{ + t.Parallel() + + store := newTestStore(t) + blockedEmailStore := store.BlockedEmails() + + record := authblock.BlockedEmailSubject{ + Email: common.Email("blocked@example.com"), + ReasonCode: common.ReasonCode("policy_blocked"), + BlockedAt: time.Unix(1_775_240_100, 0).UTC(), + ResolvedUserID: common.UserID("user-123"), + } + require.NoError(t, blockedEmailStore.Upsert(context.Background(), record)) + + got, err := blockedEmailStore.GetByEmail(context.Background(), record.Email) + require.NoError(t, err) + require.Equal(t, record, got) +} + +func TestEnsureResolveAndBlockFlows(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + now := time.Unix(1_775_240_000, 0).UTC() + accountRecord := validAccountRecord() + entitlementSnapshot := validEntitlementSnapshot(accountRecord.UserID, now) + + created, err := store.EnsureByEmail(context.Background(), ports.EnsureByEmailInput{ + Email: accountRecord.Email, + Account: accountRecord, + Entitlement: entitlementSnapshot, + EntitlementRecord: validEntitlementRecord(accountRecord.UserID, now), + Reservation: raceNameReservation(accountRecord.UserID, accountRecord.RaceName, accountRecord.UpdatedAt), + }) + require.NoError(t, err) + require.Equal(t, ports.EnsureByEmailOutcomeCreated, created.Outcome) + + reservation, err := store.loadRaceNameReservation(context.Background(), store.client, canonicalKey(accountRecord.RaceName)) + require.NoError(t, err) + require.Equal(t, accountRecord.UserID, reservation.UserID) + + entitlementHistory, err := store.ListEntitlementRecordsByUserID(context.Background(), accountRecord.UserID) + require.NoError(t, err) + require.Len(t, entitlementHistory, 1) + require.Equal(t, validEntitlementRecord(accountRecord.UserID, now), entitlementHistory[0]) + + resolved, err := store.ResolveByEmail(context.Background(), accountRecord.Email) + require.NoError(t, err) + require.Equal(t, ports.AuthResolutionKindExisting, resolved.Kind) + + blockedByUserID, err := 
store.BlockByUserID(context.Background(), ports.BlockByUserIDInput{ + UserID: accountRecord.UserID, + ReasonCode: common.ReasonCode("policy_blocked"), + BlockedAt: now.Add(time.Minute), + }) + require.NoError(t, err) + require.Equal(t, ports.AuthBlockOutcomeBlocked, blockedByUserID.Outcome) + + repeatedBlock, err := store.BlockByEmail(context.Background(), ports.BlockByEmailInput{ + Email: accountRecord.Email, + ReasonCode: common.ReasonCode("policy_blocked"), + BlockedAt: now.Add(2 * time.Minute), + }) + require.NoError(t, err) + require.Equal(t, ports.AuthBlockOutcomeAlreadyBlocked, repeatedBlock.Outcome) + require.Equal(t, accountRecord.UserID, repeatedBlock.UserID) + + blockedResolution, err := store.ResolveByEmail(context.Background(), accountRecord.Email) + require.NoError(t, err) + require.Equal(t, ports.AuthResolutionKindBlocked, blockedResolution.Kind) + + ensureBlocked, err := store.EnsureByEmail(context.Background(), ports.EnsureByEmailInput{ + Email: accountRecord.Email, + Account: accountRecord, + Entitlement: entitlementSnapshot, + EntitlementRecord: validEntitlementRecord(accountRecord.UserID, now), + Reservation: raceNameReservation(accountRecord.UserID, accountRecord.RaceName, accountRecord.UpdatedAt), + }) + require.NoError(t, err) + require.Equal(t, ports.EnsureByEmailOutcomeBlocked, ensureBlocked.Outcome) +} + +func TestBlockedEmailWithoutUserPreventsEnsureCreate(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + now := time.Unix(1_775_240_000, 0).UTC() + accountRecord := validAccountRecord() + entitlementSnapshot := validEntitlementSnapshot(accountRecord.UserID, now) + + blocked, err := store.BlockByEmail(context.Background(), ports.BlockByEmailInput{ + Email: accountRecord.Email, + ReasonCode: common.ReasonCode("policy_blocked"), + BlockedAt: now, + }) + require.NoError(t, err) + require.Equal(t, ports.AuthBlockOutcomeBlocked, blocked.Outcome) + require.True(t, blocked.UserID.IsZero()) + + resolved, err := 
store.ResolveByEmail(context.Background(), accountRecord.Email) + require.NoError(t, err) + require.Equal(t, ports.AuthResolutionKindBlocked, resolved.Kind) + + ensured, err := store.EnsureByEmail(context.Background(), ports.EnsureByEmailInput{ + Email: accountRecord.Email, + Account: accountRecord, + Entitlement: entitlementSnapshot, + EntitlementRecord: validEntitlementRecord(accountRecord.UserID, now), + Reservation: raceNameReservation(accountRecord.UserID, accountRecord.RaceName, accountRecord.UpdatedAt), + }) + require.NoError(t, err) + require.Equal(t, ports.EnsureByEmailOutcomeBlocked, ensured.Outcome) + + exists, err := store.ExistsByUserID(context.Background(), accountRecord.UserID) + require.NoError(t, err) + require.False(t, exists) +} + +func TestEnsureByEmailExistingDoesNotOverwriteStoredSettings(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + createdAt := time.Unix(1_775_240_000, 0).UTC() + existingAccount := account.UserAccount{ + UserID: common.UserID("user-existing"), + Email: common.Email("pilot@example.com"), + RaceName: common.RaceName("Pilot Nova"), + PreferredLanguage: common.LanguageTag("en"), + TimeZone: common.TimeZoneName("Europe/Kaliningrad"), + CreatedAt: createdAt, + UpdatedAt: createdAt, + } + require.NoError(t, store.Create(context.Background(), createAccountInput(existingAccount))) + + result, err := store.EnsureByEmail(context.Background(), ports.EnsureByEmailInput{ + Email: existingAccount.Email, + Account: account.UserAccount{ + UserID: common.UserID("user-created"), + Email: existingAccount.Email, + RaceName: common.RaceName("player-new123"), + PreferredLanguage: common.LanguageTag("fr-FR"), + TimeZone: common.TimeZoneName("UTC"), + CreatedAt: createdAt.Add(time.Minute), + UpdatedAt: createdAt.Add(time.Minute), + }, + Entitlement: validEntitlementSnapshot(common.UserID("user-created"), createdAt.Add(time.Minute)), + EntitlementRecord: validEntitlementRecord(common.UserID("user-created"), 
createdAt.Add(time.Minute)), + Reservation: raceNameReservation(common.UserID("user-created"), common.RaceName("player-new123"), createdAt.Add(time.Minute)), + }) + require.NoError(t, err) + require.Equal(t, ports.EnsureByEmailOutcomeExisting, result.Outcome) + require.Equal(t, existingAccount.UserID, result.UserID) + + storedAccount, err := store.GetByEmail(context.Background(), existingAccount.Email) + require.NoError(t, err) + require.Equal(t, existingAccount, storedAccount) +} + +func TestAccountStoreRenameRaceNameSwapsLookupAtomically(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + accountStore := store.Accounts() + record := validAccountRecord() + require.NoError(t, accountStore.Create(context.Background(), createAccountInput(record))) + + updatedAt := record.UpdatedAt.Add(time.Minute) + require.NoError(t, accountStore.RenameRaceName(context.Background(), renameRaceNameInput(record, common.RaceName("Nova Prime"), updatedAt))) + + stored, err := accountStore.GetByUserID(context.Background(), record.UserID) + require.NoError(t, err) + require.Equal(t, common.RaceName("Nova Prime"), stored.RaceName) + require.True(t, stored.UpdatedAt.Equal(updatedAt)) + + _, err = accountStore.GetByRaceName(context.Background(), record.RaceName) + require.ErrorIs(t, err, ports.ErrNotFound) + + renamed, err := accountStore.GetByRaceName(context.Background(), common.RaceName("Nova Prime")) + require.NoError(t, err) + require.Equal(t, record.UserID, renamed.UserID) + + _, err = store.loadRaceNameReservation(context.Background(), store.client, canonicalKey(record.RaceName)) + require.ErrorIs(t, err, ports.ErrNotFound) + + reservation, err := store.loadRaceNameReservation(context.Background(), store.client, canonicalKey(common.RaceName("Nova Prime"))) + require.NoError(t, err) + require.Equal(t, common.RaceName("Nova Prime"), reservation.RaceName) +} + +func TestAccountStoreRenameRaceNameAllowsSameOwnerCanonicalSlot(t *testing.T) { + t.Parallel() + + store := 
newTestStore(t) + accountStore := store.Accounts() + + record := validAccountRecord() + record.RaceName = common.RaceName("Pilot Nova") + require.NoError(t, accountStore.Create(context.Background(), createAccountInput(record))) + + updatedAt := record.UpdatedAt.Add(time.Minute) + require.NoError(t, accountStore.RenameRaceName(context.Background(), renameRaceNameInput(record, common.RaceName("P1lot Nova"), updatedAt))) + + reservation, err := store.loadRaceNameReservation(context.Background(), store.client, canonicalKey(common.RaceName("P1lot Nova"))) + require.NoError(t, err) + require.Equal(t, common.RaceName("P1lot Nova"), reservation.RaceName) +} + +func TestAccountStoreRenameRaceNameReturnsConflictWhenTargetExists(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + accountStore := store.Accounts() + + first := validAccountRecord() + second := validAccountRecord() + second.UserID = common.UserID("user-456") + second.Email = common.Email("other@example.com") + second.RaceName = common.RaceName("Taken Name") + + require.NoError(t, accountStore.Create(context.Background(), createAccountInput(first))) + require.NoError(t, accountStore.Create(context.Background(), createAccountInput(second))) + + err := accountStore.RenameRaceName(context.Background(), renameRaceNameInput(first, second.RaceName, first.UpdatedAt.Add(time.Minute))) + require.ErrorIs(t, err, ports.ErrConflict) + + stored, err := accountStore.GetByUserID(context.Background(), first.UserID) + require.NoError(t, err) + require.Equal(t, first.RaceName, stored.RaceName) +} + +func TestAccountStoreUpdateDeclaredCountryPreservesLookups(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + accountStore := store.Accounts() + + record := validAccountRecord() + require.NoError(t, accountStore.Create(context.Background(), createAccountInput(record))) + + updated := record + updated.DeclaredCountry = common.CountryCode("FR") + updated.UpdatedAt = record.UpdatedAt.Add(time.Minute) + + 
require.NoError(t, accountStore.Update(context.Background(), updated)) + + byUserID, err := accountStore.GetByUserID(context.Background(), record.UserID) + require.NoError(t, err) + require.Equal(t, updated, byUserID) + + byEmail, err := accountStore.GetByEmail(context.Background(), record.Email) + require.NoError(t, err) + require.Equal(t, updated, byEmail) + + byRaceName, err := accountStore.GetByRaceName(context.Background(), record.RaceName) + require.NoError(t, err) + require.Equal(t, updated, byRaceName) +} + +func TestAccountStoreCreateReturnsConflictWhenCanonicalReservationExists(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + accountStore := store.Accounts() + + first := validAccountRecord() + second := validAccountRecord() + second.UserID = common.UserID("user-456") + second.Email = common.Email("other@example.com") + second.RaceName = common.RaceName("P1lot Nova") + + require.NoError(t, accountStore.Create(context.Background(), createAccountInput(first))) + + err := accountStore.Create(context.Background(), createAccountInput(second)) + require.ErrorIs(t, err, ports.ErrConflict) +} + +func TestBlockByUserIDRepeatedCallsStayIdempotent(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + now := time.Unix(1_775_240_000, 0).UTC() + accountRecord := validAccountRecord() + + require.NoError(t, store.Create(context.Background(), createAccountInput(accountRecord))) + + first, err := store.BlockByUserID(context.Background(), ports.BlockByUserIDInput{ + UserID: accountRecord.UserID, + ReasonCode: common.ReasonCode("policy_blocked"), + BlockedAt: now, + }) + require.NoError(t, err) + require.Equal(t, ports.AuthBlockOutcomeBlocked, first.Outcome) + + second, err := store.BlockByUserID(context.Background(), ports.BlockByUserIDInput{ + UserID: accountRecord.UserID, + ReasonCode: common.ReasonCode("policy_blocked"), + BlockedAt: now.Add(time.Minute), + }) + require.NoError(t, err) + require.Equal(t, ports.AuthBlockOutcomeAlreadyBlocked, 
second.Outcome) + require.Equal(t, accountRecord.UserID, second.UserID) +} + +func TestBlockByUserIDUnknownUserReturnsNotFound(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + + _, err := store.BlockByUserID(context.Background(), ports.BlockByUserIDInput{ + UserID: common.UserID("user-missing"), + ReasonCode: common.ReasonCode("policy_blocked"), + BlockedAt: time.Unix(1_775_240_000, 0).UTC(), + }) + require.ErrorIs(t, err, ports.ErrNotFound) +} + +func TestSanctionAndLimitStoresRoundTrip(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + sanctionStore := store.Sanctions() + limitStore := store.Limits() + now := time.Unix(1_775_240_000, 0).UTC() + + sanctionRecord := policy.SanctionRecord{ + RecordID: policy.SanctionRecordID("sanction-1"), + UserID: common.UserID("user-123"), + SanctionCode: policy.SanctionCodeLoginBlock, + Scope: common.Scope("self_service"), + ReasonCode: common.ReasonCode("policy_enforced"), + Actor: common.ActorRef{Type: common.ActorType("service"), ID: common.ActorID("user-service")}, + AppliedAt: now, + } + require.NoError(t, sanctionStore.Create(context.Background(), sanctionRecord)) + + gotSanction, err := sanctionStore.GetByRecordID(context.Background(), sanctionRecord.RecordID) + require.NoError(t, err) + require.Equal(t, sanctionRecord, gotSanction) + + sanctions, err := sanctionStore.ListByUserID(context.Background(), sanctionRecord.UserID) + require.NoError(t, err) + require.Len(t, sanctions, 1) + + expiresAt := now.Add(time.Hour) + sanctionRecord.ExpiresAt = &expiresAt + require.NoError(t, sanctionStore.Update(context.Background(), sanctionRecord)) + + gotSanction, err = sanctionStore.GetByRecordID(context.Background(), sanctionRecord.RecordID) + require.NoError(t, err) + require.Equal(t, sanctionRecord.RecordID, gotSanction.RecordID) + require.Equal(t, sanctionRecord.UserID, gotSanction.UserID) + require.Equal(t, sanctionRecord.SanctionCode, gotSanction.SanctionCode) + require.Equal(t, sanctionRecord.Scope, 
gotSanction.Scope) + require.Equal(t, sanctionRecord.ReasonCode, gotSanction.ReasonCode) + require.Equal(t, sanctionRecord.Actor, gotSanction.Actor) + require.True(t, gotSanction.AppliedAt.Equal(sanctionRecord.AppliedAt)) + require.NotNil(t, gotSanction.ExpiresAt) + require.True(t, gotSanction.ExpiresAt.Equal(*sanctionRecord.ExpiresAt)) + + limitRecord := policy.LimitRecord{ + RecordID: policy.LimitRecordID("limit-1"), + UserID: common.UserID("user-123"), + LimitCode: policy.LimitCodeMaxOwnedPrivateGames, + Value: 3, + ReasonCode: common.ReasonCode("policy_enforced"), + Actor: common.ActorRef{Type: common.ActorType("service"), ID: common.ActorID("user-service")}, + AppliedAt: now, + } + require.NoError(t, limitStore.Create(context.Background(), limitRecord)) + + gotLimit, err := limitStore.GetByRecordID(context.Background(), limitRecord.RecordID) + require.NoError(t, err) + require.Equal(t, limitRecord, gotLimit) + + limits, err := limitStore.ListByUserID(context.Background(), limitRecord.UserID) + require.NoError(t, err) + require.Len(t, limits, 1) + + limitRecord.Value = 5 + require.NoError(t, limitStore.Update(context.Background(), limitRecord)) + + gotLimit, err = limitStore.GetByRecordID(context.Background(), limitRecord.RecordID) + require.NoError(t, err) + require.Equal(t, limitRecord, gotLimit) +} + +func TestPolicyLifecycleApplyAndRemoveSanction(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + lifecycleStore := store.PolicyLifecycle() + sanctionStore := store.Sanctions() + snapshotStore := store.EntitlementSnapshots() + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + require.NoError(t, snapshotStore.Put(context.Background(), validEntitlementSnapshot(userID, now))) + + record := policy.SanctionRecord{ + RecordID: policy.SanctionRecordID("sanction-1"), + UserID: userID, + SanctionCode: policy.SanctionCodeLoginBlock, + Scope: common.Scope("auth"), + ReasonCode: common.ReasonCode("manual_block"), + Actor: 
common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}, + AppliedAt: now, + } + require.NoError(t, lifecycleStore.ApplySanction(context.Background(), ports.ApplySanctionInput{ + NewRecord: record, + })) + + activeRecordID, err := store.loadActiveSanctionRecordID( + context.Background(), + store.client, + store.keyspace.ActiveSanction(userID, policy.SanctionCodeLoginBlock), + ) + require.NoError(t, err) + require.Equal(t, record.RecordID, activeRecordID) + + err = lifecycleStore.ApplySanction(context.Background(), ports.ApplySanctionInput{ + NewRecord: policy.SanctionRecord{ + RecordID: policy.SanctionRecordID("sanction-2"), + UserID: userID, + SanctionCode: policy.SanctionCodeLoginBlock, + Scope: common.Scope("auth"), + ReasonCode: common.ReasonCode("manual_block"), + Actor: common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-2")}, + AppliedAt: now.Add(time.Minute), + }, + }) + require.ErrorIs(t, err, ports.ErrConflict) + + removed := record + removedAt := now.Add(30 * time.Minute) + removed.RemovedAt = &removedAt + removed.RemovedBy = common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-2")} + removed.RemovedReasonCode = common.ReasonCode("manual_remove") + require.NoError(t, lifecycleStore.RemoveSanction(context.Background(), ports.RemoveSanctionInput{ + ExpectedActiveRecord: record, + UpdatedRecord: removed, + })) + + stored, err := sanctionStore.GetByRecordID(context.Background(), record.RecordID) + require.NoError(t, err) + require.Equal(t, removed, stored) + + _, err = store.loadActiveSanctionRecordID( + context.Background(), + store.client, + store.keyspace.ActiveSanction(userID, policy.SanctionCodeLoginBlock), + ) + require.ErrorIs(t, err, ports.ErrNotFound) +} + +func TestPolicyLifecycleSetAndRemoveLimit(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + lifecycleStore := store.PolicyLifecycle() + limitStore := store.Limits() + now := time.Unix(1_775_240_000, 0).UTC() + userID 
:= common.UserID("user-123") + + first := policy.LimitRecord{ + RecordID: policy.LimitRecordID("limit-1"), + UserID: userID, + LimitCode: policy.LimitCodeMaxOwnedPrivateGames, + Value: 3, + ReasonCode: common.ReasonCode("manual_override"), + Actor: common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}, + AppliedAt: now, + } + require.NoError(t, lifecycleStore.SetLimit(context.Background(), ports.SetLimitInput{ + NewRecord: first, + })) + + activeRecordID, err := store.loadActiveLimitRecordID( + context.Background(), + store.client, + store.keyspace.ActiveLimit(userID, policy.LimitCodeMaxOwnedPrivateGames), + ) + require.NoError(t, err) + require.Equal(t, first.RecordID, activeRecordID) + + second := policy.LimitRecord{ + RecordID: policy.LimitRecordID("limit-2"), + UserID: userID, + LimitCode: policy.LimitCodeMaxOwnedPrivateGames, + Value: 5, + ReasonCode: common.ReasonCode("manual_override"), + Actor: common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-2")}, + AppliedAt: now.Add(time.Hour), + } + updatedFirst := first + removedAt := second.AppliedAt + updatedFirst.RemovedAt = &removedAt + updatedFirst.RemovedBy = second.Actor + updatedFirst.RemovedReasonCode = second.ReasonCode + require.NoError(t, lifecycleStore.SetLimit(context.Background(), ports.SetLimitInput{ + ExpectedActiveRecord: &first, + UpdatedActiveRecord: &updatedFirst, + NewRecord: second, + })) + + storedFirst, err := limitStore.GetByRecordID(context.Background(), first.RecordID) + require.NoError(t, err) + require.Equal(t, updatedFirst, storedFirst) + + activeRecordID, err = store.loadActiveLimitRecordID( + context.Background(), + store.client, + store.keyspace.ActiveLimit(userID, policy.LimitCodeMaxOwnedPrivateGames), + ) + require.NoError(t, err) + require.Equal(t, second.RecordID, activeRecordID) + + removedSecond := second + removeAt := now.Add(90 * time.Minute) + removedSecond.RemovedAt = &removeAt + removedSecond.RemovedBy = 
common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-3")} + removedSecond.RemovedReasonCode = common.ReasonCode("manual_remove") + require.NoError(t, lifecycleStore.RemoveLimit(context.Background(), ports.RemoveLimitInput{ + ExpectedActiveRecord: second, + UpdatedRecord: removedSecond, + })) + + storedSecond, err := limitStore.GetByRecordID(context.Background(), second.RecordID) + require.NoError(t, err) + require.Equal(t, removedSecond, storedSecond) + + _, err = store.loadActiveLimitRecordID( + context.Background(), + store.client, + store.keyspace.ActiveLimit(userID, policy.LimitCodeMaxOwnedPrivateGames), + ) + require.ErrorIs(t, err, ports.ErrNotFound) +} + +func TestEntitlementLifecycleTransitions(t *testing.T) { + t.Parallel() + + store := newTestStore(t) + historyStore := store.EntitlementHistory() + snapshotStore := store.EntitlementSnapshots() + lifecycleStore := store.EntitlementLifecycle() + userID := common.UserID("user-123") + startedFreeAt := time.Unix(1_775_240_000, 0).UTC() + + freeRecord := validEntitlementRecord(userID, startedFreeAt) + freeSnapshot := validEntitlementSnapshot(userID, startedFreeAt) + require.NoError(t, historyStore.Create(context.Background(), freeRecord)) + require.NoError(t, snapshotStore.Put(context.Background(), freeSnapshot)) + + grantStartsAt := startedFreeAt.Add(24 * time.Hour) + grantEndsAt := grantStartsAt.Add(30 * 24 * time.Hour) + grantedRecord := paidEntitlementRecord( + entitlement.EntitlementRecordID("entitlement-paid-1"), + userID, + entitlement.PlanCodePaidMonthly, + grantStartsAt, + grantEndsAt, + common.Source("admin"), + common.ReasonCode("manual_grant"), + ) + grantedSnapshot := paidEntitlementSnapshot( + userID, + entitlement.PlanCodePaidMonthly, + grantStartsAt, + grantEndsAt, + common.Source("admin"), + common.ReasonCode("manual_grant"), + ) + closedFreeRecord := freeRecord + closedFreeRecord.ClosedAt = timePointer(grantStartsAt) + closedFreeRecord.ClosedBy = common.ActorRef{Type: 
common.ActorType("admin"), ID: common.ActorID("admin-1")} + closedFreeRecord.ClosedReasonCode = common.ReasonCode("manual_grant") + + require.NoError(t, lifecycleStore.Grant(context.Background(), ports.GrantEntitlementInput{ + ExpectedCurrentSnapshot: freeSnapshot, + ExpectedCurrentRecord: freeRecord, + UpdatedCurrentRecord: closedFreeRecord, + NewRecord: grantedRecord, + NewSnapshot: grantedSnapshot, + })) + + storedSnapshot, err := snapshotStore.GetByUserID(context.Background(), userID) + require.NoError(t, err) + require.Equal(t, grantedSnapshot, storedSnapshot) + + storedFreeRecord, err := historyStore.GetByRecordID(context.Background(), freeRecord.RecordID) + require.NoError(t, err) + require.Equal(t, closedFreeRecord, storedFreeRecord) + + extendedEndsAt := grantEndsAt.Add(30 * 24 * time.Hour) + extensionRecord := paidEntitlementRecord( + entitlement.EntitlementRecordID("entitlement-paid-2"), + userID, + entitlement.PlanCodePaidMonthly, + grantEndsAt, + extendedEndsAt, + common.Source("admin"), + common.ReasonCode("manual_extend"), + ) + extendedSnapshot := paidEntitlementSnapshot( + userID, + entitlement.PlanCodePaidMonthly, + grantStartsAt, + extendedEndsAt, + common.Source("admin"), + common.ReasonCode("manual_extend"), + ) + + require.NoError(t, lifecycleStore.Extend(context.Background(), ports.ExtendEntitlementInput{ + ExpectedCurrentSnapshot: grantedSnapshot, + NewRecord: extensionRecord, + NewSnapshot: extendedSnapshot, + })) + + storedSnapshot, err = snapshotStore.GetByUserID(context.Background(), userID) + require.NoError(t, err) + require.Equal(t, extendedSnapshot, storedSnapshot) + + revokeAt := grantEndsAt.Add(12 * time.Hour) + revokedCurrentRecord := extensionRecord + revokedCurrentRecord.ClosedAt = timePointer(revokeAt) + revokedCurrentRecord.ClosedBy = common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")} + revokedCurrentRecord.ClosedReasonCode = common.ReasonCode("manual_revoke") + + freeAfterRevokeRecord := 
entitlement.PeriodRecord{
+		RecordID:   entitlement.EntitlementRecordID("entitlement-free-2"),
+		UserID:     userID,
+		PlanCode:   entitlement.PlanCodeFree,
+		Source:     common.Source("admin"),
+		Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+		ReasonCode: common.ReasonCode("manual_revoke"),
+		StartsAt:   revokeAt,
+		CreatedAt:  revokeAt,
+	}
+	freeAfterRevokeSnapshot := entitlement.CurrentSnapshot{
+		UserID:     userID,
+		PlanCode:   entitlement.PlanCodeFree,
+		IsPaid:     false,
+		StartsAt:   revokeAt,
+		Source:     common.Source("admin"),
+		Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+		ReasonCode: common.ReasonCode("manual_revoke"),
+		UpdatedAt:  revokeAt,
+	}
+
+	require.NoError(t, lifecycleStore.Revoke(context.Background(), ports.RevokeEntitlementInput{
+		ExpectedCurrentSnapshot: extendedSnapshot,
+		ExpectedCurrentRecord:   extensionRecord,
+		UpdatedCurrentRecord:    revokedCurrentRecord,
+		NewRecord:               freeAfterRevokeRecord,
+		NewSnapshot:             freeAfterRevokeSnapshot,
+	}))
+
+	storedSnapshot, err = snapshotStore.GetByUserID(context.Background(), userID)
+	require.NoError(t, err)
+	require.Equal(t, freeAfterRevokeSnapshot, storedSnapshot)
+
+	historyRecords, err := historyStore.ListByUserID(context.Background(), userID)
+	require.NoError(t, err)
+	require.Len(t, historyRecords, 4)
+}
+
+func TestRepairExpiredEntitlementMaterializesFreeSnapshot(t *testing.T) {
+	t.Parallel()
+
+	store := newTestStore(t)
+	historyStore := store.EntitlementHistory()
+	snapshotStore := store.EntitlementSnapshots()
+	lifecycleStore := store.EntitlementLifecycle()
+	userID := common.UserID("user-123")
+	startsAt := time.Unix(1_775_240_000, 0).UTC()
+	endsAt := startsAt.Add(24 * time.Hour)
+	expiredSnapshot := paidEntitlementSnapshot(
+		userID,
+		entitlement.PlanCodePaidMonthly,
+		startsAt,
+		endsAt,
+		common.Source("admin"),
+		common.ReasonCode("manual_grant"),
+	)
+	expiredSnapshot.UpdatedAt = endsAt.Add(24 * time.Hour)
+	expiredRecord := paidEntitlementRecord(
+		entitlement.EntitlementRecordID("entitlement-paid-1"),
+		userID,
+		entitlement.PlanCodePaidMonthly,
+		startsAt,
+		endsAt,
+		common.Source("admin"),
+		common.ReasonCode("manual_grant"),
+	)
+	require.NoError(t, historyStore.Create(context.Background(), expiredRecord))
+	require.NoError(t, snapshotStore.Put(context.Background(), expiredSnapshot))
+
+	repairedAt := endsAt.Add(2 * time.Hour)
+	freeRecord := entitlement.PeriodRecord{
+		RecordID:   entitlement.EntitlementRecordID("entitlement-free-after-expiry"),
+		UserID:     userID,
+		PlanCode:   entitlement.PlanCodeFree,
+		Source:     common.Source("entitlement_expiry_repair"),
+		Actor:      common.ActorRef{Type: common.ActorType("service"), ID: common.ActorID("user-service")},
+		ReasonCode: common.ReasonCode("paid_entitlement_expired"),
+		StartsAt:   endsAt,
+		CreatedAt:  repairedAt,
+	}
+	freeSnapshot := entitlement.CurrentSnapshot{
+		UserID:     userID,
+		PlanCode:   entitlement.PlanCodeFree,
+		IsPaid:     false,
+		StartsAt:   endsAt,
+		Source:     common.Source("entitlement_expiry_repair"),
+		Actor:      common.ActorRef{Type: common.ActorType("service"), ID: common.ActorID("user-service")},
+		ReasonCode: common.ReasonCode("paid_entitlement_expired"),
+		UpdatedAt:  repairedAt,
+	}
+
+	require.NoError(t, lifecycleStore.RepairExpired(context.Background(), ports.RepairExpiredEntitlementInput{
+		ExpectedExpiredSnapshot: expiredSnapshot,
+		NewRecord:               freeRecord,
+		NewSnapshot:             freeSnapshot,
+	}))
+
+	storedSnapshot, err := snapshotStore.GetByUserID(context.Background(), userID)
+	require.NoError(t, err)
+	require.Equal(t, freeSnapshot, storedSnapshot)
+
+	historyRecords, err := historyStore.ListByUserID(context.Background(), userID)
+	require.NoError(t, err)
+	require.Len(t, historyRecords, 2)
+	require.Equal(t, freeRecord, historyRecords[1])
+}
+
+func newTestStore(t *testing.T) *Store {
+	t.Helper()
+
+	server := miniredis.RunT(t)
+	store, err := New(Config{
+		Addr:             server.Addr(),
+		DB:               0,
+		KeyspacePrefix:   "user:test:",
+		OperationTimeout: 250 * time.Millisecond,
+	})
+	require.NoError(t, err)
+	t.Cleanup(func() {
+		_ = store.Close()
+	})
+
+	return store
+}
+
+func validAccountRecord() account.UserAccount {
+	createdAt := time.Unix(1_775_240_000, 0).UTC()
+	return account.UserAccount{
+		UserID:            common.UserID("user-123"),
+		Email:             common.Email("pilot@example.com"),
+		RaceName:          common.RaceName("Pilot Nova"),
+		PreferredLanguage: common.LanguageTag("en"),
+		TimeZone:          common.TimeZoneName("Europe/Kaliningrad"),
+		CreatedAt:         createdAt,
+		UpdatedAt:         createdAt,
+	}
+}
+
+func validEntitlementSnapshot(userID common.UserID, now time.Time) entitlement.CurrentSnapshot {
+	return entitlement.CurrentSnapshot{
+		UserID:     userID,
+		PlanCode:   entitlement.PlanCodeFree,
+		IsPaid:     false,
+		StartsAt:   now,
+		Source:     common.Source("auth_registration"),
+		Actor:      common.ActorRef{Type: common.ActorType("service"), ID: common.ActorID("user-service")},
+		ReasonCode: common.ReasonCode("initial_free_entitlement"),
+		UpdatedAt:  now,
+	}
+}
+
+func validEntitlementRecord(userID common.UserID, now time.Time) entitlement.PeriodRecord {
+	return entitlement.PeriodRecord{
+		RecordID:   entitlement.EntitlementRecordID("entitlement-" + userID.String()),
+		UserID:     userID,
+		PlanCode:   entitlement.PlanCodeFree,
+		Source:     common.Source("auth_registration"),
+		Actor:      common.ActorRef{Type: common.ActorType("service"), ID: common.ActorID("user-service")},
+		ReasonCode: common.ReasonCode("initial_free_entitlement"),
+		StartsAt:   now,
+		CreatedAt:  now,
+	}
+}
+
+func paidEntitlementRecord(
+	recordID entitlement.EntitlementRecordID,
+	userID common.UserID,
+	planCode entitlement.PlanCode,
+	startsAt time.Time,
+	endsAt time.Time,
+	source common.Source,
+	reasonCode common.ReasonCode,
+) entitlement.PeriodRecord {
+	return entitlement.PeriodRecord{
+		RecordID:   recordID,
+		UserID:     userID,
+		PlanCode:   planCode,
+		Source:     source,
+		Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+		ReasonCode: reasonCode,
+		StartsAt:   startsAt,
+		EndsAt:     timePointer(endsAt),
+		CreatedAt:  startsAt,
+	}
+}
+
+func paidEntitlementSnapshot(
+	userID common.UserID,
+	planCode entitlement.PlanCode,
+	startsAt time.Time,
+	endsAt time.Time,
+	source common.Source,
+	reasonCode common.ReasonCode,
+) entitlement.CurrentSnapshot {
+	return entitlement.CurrentSnapshot{
+		UserID:     userID,
+		PlanCode:   planCode,
+		IsPaid:     true,
+		StartsAt:   startsAt,
+		EndsAt:     timePointer(endsAt),
+		Source:     source,
+		Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+		ReasonCode: reasonCode,
+		UpdatedAt:  startsAt,
+	}
+}
+
+func timePointer(value time.Time) *time.Time {
+	utcValue := value.UTC()
+	return &utcValue
+}
+
+func createAccountInput(record account.UserAccount) ports.CreateAccountInput {
+	return ports.CreateAccountInput{
+		Account:     record,
+		Reservation: raceNameReservation(record.UserID, record.RaceName, record.UpdatedAt),
+	}
+}
+
+func renameRaceNameInput(
+	record account.UserAccount,
+	newRaceName common.RaceName,
+	updatedAt time.Time,
+) ports.RenameRaceNameInput {
+	return ports.RenameRaceNameInput{
+		UserID:              record.UserID,
+		CurrentCanonicalKey: canonicalKey(record.RaceName),
+		NewRaceName:         newRaceName,
+		NewReservation:      raceNameReservation(record.UserID, newRaceName, updatedAt),
+		UpdatedAt:           updatedAt,
+	}
+}
+
+func raceNameReservation(
+	userID common.UserID,
+	raceName common.RaceName,
+	reservedAt time.Time,
+) account.RaceNameReservation {
+	return account.RaceNameReservation{
+		CanonicalKey: canonicalKey(raceName),
+		UserID:       userID,
+		RaceName:     raceName,
+		ReservedAt:   reservedAt.UTC(),
+	}
+}
+
+func canonicalKey(raceName common.RaceName) account.RaceNameCanonicalKey {
+	return account.RaceNameCanonicalKey(strings.NewReplacer(
+		"1", "i",
+		"0", "o",
+		"8", "b",
+	).Replace(strings.ToLower(raceName.String())))
+}
diff --git a/user/internal/adapters/redisstate/keyspace.go b/user/internal/adapters/redisstate/keyspace.go
new file mode 100644
index 0000000..1426645
--- /dev/null
+++ b/user/internal/adapters/redisstate/keyspace.go
@@ -0,0 +1,200 @@
+// Package redisstate defines the frozen Redis logical keyspace and pagination
+// helpers used by future User Service storage adapters.
+package redisstate
+
+import (
+	"encoding/base64"
+	"fmt"
+	"strings"
+	"time"
+
+	"galaxy/user/internal/domain/account"
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/domain/entitlement"
+	"galaxy/user/internal/domain/policy"
+)
+
+const defaultPrefix = "user:"
+
+// Keyspace builds the frozen Redis logical keys used by future storage
+// adapters. The package intentionally exposes key construction only and does
+// not depend on any Redis client.
+type Keyspace struct {
+	// Prefix stores the namespace prefix applied to every key. The zero value
+	// uses `user:`.
+	Prefix string
+}
+
+// Account returns the primary user-account key for userID.
+func (k Keyspace) Account(userID common.UserID) string {
+	return k.prefix() + "account:" + encodeKeyComponent(userID.String())
+}
+
+// EmailLookup returns the exact normalized e-mail lookup key.
+func (k Keyspace) EmailLookup(email common.Email) string {
+	return k.prefix() + "lookup:email:" + encodeKeyComponent(email.String())
+}
+
+// RaceNameLookup returns the exact stored race-name lookup key.
+func (k Keyspace) RaceNameLookup(raceName common.RaceName) string {
+	return k.prefix() + "lookup:race-name:" + encodeKeyComponent(raceName.String())
+}
+
+// RaceNameReservation returns the replaceable canonical race-name reservation
+// key.
+func (k Keyspace) RaceNameReservation(key account.RaceNameCanonicalKey) string {
+	return k.prefix() + "reservation:race-name:" + encodeKeyComponent(key.String())
+}
+
+// BlockedEmailSubject returns the dedicated blocked-email-subject key.
+func (k Keyspace) BlockedEmailSubject(email common.Email) string {
+	return k.prefix() + "blocked-email:" + encodeKeyComponent(email.String())
+}
+
+// EntitlementRecord returns the primary entitlement history-record key.
+func (k Keyspace) EntitlementRecord(recordID entitlement.EntitlementRecordID) string {
+	return k.prefix() + "entitlement:record:" + encodeKeyComponent(recordID.String())
+}
+
+// EntitlementHistory returns the per-user entitlement-history index key.
+func (k Keyspace) EntitlementHistory(userID common.UserID) string {
+	return k.prefix() + "entitlement:history:" + encodeKeyComponent(userID.String())
+}
+
+// EntitlementSnapshot returns the current entitlement-snapshot key.
+func (k Keyspace) EntitlementSnapshot(userID common.UserID) string {
+	return k.prefix() + "entitlement:snapshot:" + encodeKeyComponent(userID.String())
+}
+
+// SanctionRecord returns the primary sanction history-record key.
+func (k Keyspace) SanctionRecord(recordID policy.SanctionRecordID) string {
+	return k.prefix() + "sanction:record:" + encodeKeyComponent(recordID.String())
+}
+
+// SanctionHistory returns the per-user sanction-history index key.
+func (k Keyspace) SanctionHistory(userID common.UserID) string {
+	return k.prefix() + "sanction:history:" + encodeKeyComponent(userID.String())
+}
+
+// ActiveSanction returns the per-user active-sanction slot for one sanction
+// code. The slot guarantees at most one active sanction per `user_id +
+// sanction_code`.
+func (k Keyspace) ActiveSanction(userID common.UserID, code policy.SanctionCode) string {
+	return k.prefix() + "sanction:active:" + encodeKeyComponent(userID.String()) + ":" + encodeKeyComponent(string(code))
+}
+
+// LimitRecord returns the primary limit history-record key.
+func (k Keyspace) LimitRecord(recordID policy.LimitRecordID) string {
+	return k.prefix() + "limit:record:" + encodeKeyComponent(recordID.String())
+}
+
+// LimitHistory returns the per-user limit-history index key.
+func (k Keyspace) LimitHistory(userID common.UserID) string {
+	return k.prefix() + "limit:history:" + encodeKeyComponent(userID.String())
+}
+
+// ActiveLimit returns the per-user active-limit slot for one limit code. The
+// slot guarantees at most one active limit per `user_id + limit_code`.
+func (k Keyspace) ActiveLimit(userID common.UserID, code policy.LimitCode) string {
+	return k.prefix() + "limit:active:" + encodeKeyComponent(userID.String()) + ":" + encodeKeyComponent(string(code))
+}
+
+// CreatedAtIndex returns the deterministic newest-first user-ordering index.
+func (k Keyspace) CreatedAtIndex() string {
+	return k.prefix() + "index:created-at"
+}
+
+// PaidStateIndex returns the coarse free-versus-paid index key.
+func (k Keyspace) PaidStateIndex(state entitlement.PaidState) string {
+	return k.prefix() + "index:paid-state:" + encodeKeyComponent(string(state))
+}
+
+// FinitePaidExpiryIndex returns the finite paid-expiry index key. Lifetime
+// plans intentionally do not participate in this index.
+func (k Keyspace) FinitePaidExpiryIndex() string {
+	return k.prefix() + "index:paid-expiry:finite"
+}
+
+// DeclaredCountryIndex returns the current declared-country reverse-lookup
+// index key.
+func (k Keyspace) DeclaredCountryIndex(code common.CountryCode) string {
+	return k.prefix() + "index:declared-country:" + encodeKeyComponent(code.String())
+}
+
+// ActiveSanctionCodeIndex returns the reverse-lookup index key for users with
+// an active sanction code.
+func (k Keyspace) ActiveSanctionCodeIndex(code policy.SanctionCode) string {
+	return k.prefix() + "index:active-sanction:" + encodeKeyComponent(string(code))
+}
+
+// ActiveLimitCodeIndex returns the reverse-lookup index key for users with an
+// active limit code.
+func (k Keyspace) ActiveLimitCodeIndex(code policy.LimitCode) string {
+	return k.prefix() + "index:active-limit:" + encodeKeyComponent(string(code))
+}
+
+// EligibilityMarkerIndex returns the reverse-lookup index key for one derived
+// eligibility marker boolean.
+func (k Keyspace) EligibilityMarkerIndex(marker policy.EligibilityMarker, value bool) string {
+	return fmt.Sprintf("%sindex:eligibility:%s:%t", k.prefix(), encodeKeyComponent(string(marker)), value)
+}
+
+// CreatedAtScore returns the frozen ZSET score representation for created-at
+// ordering and deterministic pagination.
+func CreatedAtScore(createdAt time.Time) float64 {
+	return float64(createdAt.UTC().UnixMicro())
+}
+
+// ExpiryScore returns the frozen ZSET score representation for finite paid
+// expiry ordering.
+func ExpiryScore(expiresAt time.Time) float64 {
+	return float64(expiresAt.UTC().UnixMicro())
+}
+
+// PageCursor identifies the last seen `(created_at, user_id)` tuple used by
+// deterministic newest-first pagination.
+type PageCursor struct {
+	// CreatedAt stores the created-at component of the last seen row.
+	CreatedAt time.Time
+
+	// UserID stores the user-id tiebreaker component of the last seen row.
+	UserID common.UserID
+}
+
+// Validate reports whether PageCursor contains a complete cursor tuple.
+func (cursor PageCursor) Validate() error {
+	if err := common.ValidateTimestamp("page cursor created at", cursor.CreatedAt); err != nil {
+		return err
+	}
+	if err := cursor.UserID.Validate(); err != nil {
+		return fmt.Errorf("page cursor user id: %w", err)
+	}
+
+	return nil
+}
+
+// ComparePageOrder compares two listing positions using the frozen ordering:
+// `created_at desc`, then `user_id desc`.
+func ComparePageOrder(left PageCursor, right PageCursor) int {
+	switch {
+	case left.CreatedAt.After(right.CreatedAt):
+		return -1
+	case left.CreatedAt.Before(right.CreatedAt):
+		return 1
+	default:
+		return -strings.Compare(left.UserID.String(), right.UserID.String())
+	}
+}
+
+func (k Keyspace) prefix() string {
+	prefix := strings.TrimSpace(k.Prefix)
+	if prefix == "" {
+		return defaultPrefix
+	}
+
+	return prefix
+}
+
+func encodeKeyComponent(value string) string {
+	return base64.RawURLEncoding.EncodeToString([]byte(value))
+}
diff --git a/user/internal/adapters/redisstate/keyspace_test.go b/user/internal/adapters/redisstate/keyspace_test.go
new file mode 100644
index 0000000..8c8fa2e
--- /dev/null
+++ b/user/internal/adapters/redisstate/keyspace_test.go
@@ -0,0 +1,59 @@
+package redisstate
+
+import (
+	"testing"
+	"time"
+
+	"galaxy/user/internal/domain/account"
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/domain/entitlement"
+	"galaxy/user/internal/domain/policy"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestKeyspaceBuildsStableKeys(t *testing.T) {
+	t.Parallel()
+
+	keyspace := Keyspace{Prefix: "custom:"}
+
+	require.Equal(t, "custom:account:dXNlci0xMjM", keyspace.Account(common.UserID("user-123")))
+	require.Equal(t, "custom:lookup:email:cGlsb3RAZXhhbXBsZS5jb20", keyspace.EmailLookup(common.Email("pilot@example.com")))
+	require.Equal(t, "custom:lookup:race-name:UGlsb3QgTm92YQ", keyspace.RaceNameLookup(common.RaceName("Pilot Nova")))
+	require.Equal(t, "custom:reservation:race-name:cGlsb3Qtbm92YQ", keyspace.RaceNameReservation(account.RaceNameCanonicalKey("pilot-nova")))
+	require.Equal(t, "custom:blocked-email:cGlsb3RAZXhhbXBsZS5jb20", keyspace.BlockedEmailSubject(common.Email("pilot@example.com")))
+	require.Equal(t, "custom:entitlement:record:ZW50aXRsZW1lbnQtMTIz", keyspace.EntitlementRecord(entitlement.EntitlementRecordID("entitlement-123")))
+	require.Equal(t, "custom:sanction:record:c2FuY3Rpb24tMQ",
+		keyspace.SanctionRecord(policy.SanctionRecordID("sanction-1")))
+	require.Equal(t, "custom:limit:record:bGltaXQtMQ", keyspace.LimitRecord(policy.LimitRecordID("limit-1")))
+	require.Equal(t, "custom:sanction:active:dXNlci0xMjM:bG9naW5fYmxvY2s", keyspace.ActiveSanction(common.UserID("user-123"), policy.SanctionCodeLoginBlock))
+	require.Equal(t, "custom:limit:active:dXNlci0xMjM:bWF4X293bmVkX3ByaXZhdGVfZ2FtZXM", keyspace.ActiveLimit(common.UserID("user-123"), policy.LimitCodeMaxOwnedPrivateGames))
+	require.Equal(t, "custom:index:created-at", keyspace.CreatedAtIndex())
+	require.Equal(t, "custom:index:paid-state:cGFpZA", keyspace.PaidStateIndex(entitlement.PaidStatePaid))
+	require.Equal(t, "custom:index:paid-expiry:finite", keyspace.FinitePaidExpiryIndex())
+	require.Equal(t, "custom:index:declared-country:REU", keyspace.DeclaredCountryIndex(common.CountryCode("DE")))
+	require.Equal(t, "custom:index:active-sanction:bG9naW5fYmxvY2s", keyspace.ActiveSanctionCodeIndex(policy.SanctionCodeLoginBlock))
+	require.Equal(t, "custom:index:active-limit:bWF4X293bmVkX3ByaXZhdGVfZ2FtZXM", keyspace.ActiveLimitCodeIndex(policy.LimitCodeMaxOwnedPrivateGames))
+	require.Equal(t, "custom:index:eligibility:Y2FuX2xvZ2lu:true", keyspace.EligibilityMarkerIndex(policy.EligibilityMarkerCanLogin, true))
+}
+
+func TestComparePageOrder(t *testing.T) {
+	t.Parallel()
+
+	newer := PageCursor{CreatedAt: time.Unix(20, 0).UTC(), UserID: common.UserID("user-200")}
+	older := PageCursor{CreatedAt: time.Unix(10, 0).UTC(), UserID: common.UserID("user-100")}
+	sameTimeHigherUserID := PageCursor{CreatedAt: time.Unix(20, 0).UTC(), UserID: common.UserID("user-300")}
+
+	require.Negative(t, ComparePageOrder(newer, older))
+	require.Positive(t, ComparePageOrder(older, newer))
+	require.Negative(t, ComparePageOrder(sameTimeHigherUserID, newer))
+}
+
+func TestScoresUseUnixMicro(t *testing.T) {
+	t.Parallel()
+
+	value := time.Unix(1_775_240_000, 123_000).UTC()
+	want := float64(value.UnixMicro())
+
+	require.Equal(t, want, CreatedAtScore(value))
+	require.Equal(t, want, ExpiryScore(value))
+}
diff --git a/user/internal/adapters/redisstate/page_token.go b/user/internal/adapters/redisstate/page_token.go
new file mode 100644
index 0000000..8a268a4
--- /dev/null
+++ b/user/internal/adapters/redisstate/page_token.go
@@ -0,0 +1,191 @@
+package redisstate
+
+import (
+	"encoding/base64"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"time"
+
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/domain/entitlement"
+	"galaxy/user/internal/domain/policy"
+)
+
+var (
+	// ErrPageTokenFiltersMismatch reports that a supplied page token was created
+	// for a different normalized filter set.
+	ErrPageTokenFiltersMismatch = errors.New("page token filters do not match current filters")
+)
+
+// UserListFilters stores the frozen admin-listing filter set that becomes part
+// of the opaque page token fingerprint.
+type UserListFilters struct {
+	// PaidState stores the coarse free-versus-paid filter.
+	PaidState entitlement.PaidState
+
+	// PaidExpiresBefore stores the optional finite-paid expiry upper bound.
+	PaidExpiresBefore *time.Time
+
+	// PaidExpiresAfter stores the optional finite-paid expiry lower bound.
+	PaidExpiresAfter *time.Time
+
+	// DeclaredCountry stores the optional declared-country filter.
+	DeclaredCountry common.CountryCode
+
+	// SanctionCode stores the optional active-sanction filter.
+	SanctionCode policy.SanctionCode
+
+	// LimitCode stores the optional active-limit filter.
+	LimitCode policy.LimitCode
+
+	// CanLogin stores the optional login-eligibility filter.
+	CanLogin *bool
+
+	// CanCreatePrivateGame stores the optional private-game-create eligibility
+	// filter.
+	CanCreatePrivateGame *bool
+
+	// CanJoinGame stores the optional join-game eligibility filter.
+	CanJoinGame *bool
+}
+
+// Validate reports whether UserListFilters is structurally valid.
+func (filters UserListFilters) Validate() error {
+	if !filters.PaidState.IsKnown() {
+		return fmt.Errorf("paid state %q is unsupported", filters.PaidState)
+	}
+	if filters.PaidExpiresBefore != nil && filters.PaidExpiresBefore.IsZero() {
+		return fmt.Errorf("paid expires before must not be zero")
+	}
+	if filters.PaidExpiresAfter != nil && filters.PaidExpiresAfter.IsZero() {
+		return fmt.Errorf("paid expires after must not be zero")
+	}
+	if !filters.DeclaredCountry.IsZero() {
+		if err := filters.DeclaredCountry.Validate(); err != nil {
+			return fmt.Errorf("declared country: %w", err)
+		}
+	}
+	if filters.SanctionCode != "" && !filters.SanctionCode.IsKnown() {
+		return fmt.Errorf("sanction code %q is unsupported", filters.SanctionCode)
+	}
+	if filters.LimitCode != "" && !filters.LimitCode.IsKnown() {
+		return fmt.Errorf("limit code %q is unsupported", filters.LimitCode)
+	}
+
+	return nil
+}
+
+// EncodePageToken encodes cursor and filters into the frozen opaque page token
+// format.
+func EncodePageToken(cursor PageCursor, filters UserListFilters) (string, error) {
+	if err := cursor.Validate(); err != nil {
+		return "", fmt.Errorf("encode page token: %w", err)
+	}
+	fingerprint, err := normalizeFilters(filters)
+	if err != nil {
+		return "", fmt.Errorf("encode page token: %w", err)
+	}
+
+	payload, err := json.Marshal(pageTokenPayload{
+		CreatedAt: cursor.CreatedAt.UTC().Format(time.RFC3339Nano),
+		UserID:    cursor.UserID.String(),
+		Filters:   fingerprint,
+	})
+	if err != nil {
+		return "", fmt.Errorf("encode page token: %w", err)
+	}
+
+	return base64.RawURLEncoding.EncodeToString(payload), nil
+}
+
+// DecodePageToken decodes raw into the frozen page cursor and verifies that
+// the embedded normalized filter set matches expectedFilters.
+func DecodePageToken(raw string, expectedFilters UserListFilters) (PageCursor, error) {
+	fingerprint, err := normalizeFilters(expectedFilters)
+	if err != nil {
+		return PageCursor{}, fmt.Errorf("decode page token: %w", err)
+	}
+
+	payload, err := base64.RawURLEncoding.DecodeString(raw)
+	if err != nil {
+		return PageCursor{}, fmt.Errorf("decode page token: %w", err)
+	}
+
+	var token pageTokenPayload
+	if err := json.Unmarshal(payload, &token); err != nil {
+		return PageCursor{}, fmt.Errorf("decode page token: %w", err)
+	}
+	if token.Filters != fingerprint {
+		return PageCursor{}, ErrPageTokenFiltersMismatch
+	}
+
+	createdAt, err := time.Parse(time.RFC3339Nano, token.CreatedAt)
+	if err != nil {
+		return PageCursor{}, fmt.Errorf("decode page token: parse created_at: %w", err)
+	}
+
+	cursor := PageCursor{
+		CreatedAt: createdAt.UTC(),
+		UserID:    common.UserID(token.UserID),
+	}
+	if err := cursor.Validate(); err != nil {
+		return PageCursor{}, fmt.Errorf("decode page token: %w", err)
+	}
+
+	return cursor, nil
+}
+
+type pageTokenPayload struct {
+	CreatedAt string                  `json:"created_at"`
+	UserID    string                  `json:"user_id"`
+	Filters   normalizedFilterPayload `json:"filters"`
+}
+
+type normalizedFilterPayload struct {
+	PaidState            string `json:"paid_state,omitempty"`
+	PaidExpiresBeforeUTC string `json:"paid_expires_before_utc,omitempty"`
+	PaidExpiresAfterUTC  string `json:"paid_expires_after_utc,omitempty"`
+	DeclaredCountry      string `json:"declared_country,omitempty"`
+	SanctionCode         string `json:"sanction_code,omitempty"`
+	LimitCode            string `json:"limit_code,omitempty"`
+	CanLogin             string `json:"can_login,omitempty"`
+	CanCreatePrivateGame string `json:"can_create_private_game,omitempty"`
+	CanJoinGame          string `json:"can_join_game,omitempty"`
+}
+
+func normalizeFilters(filters UserListFilters) (normalizedFilterPayload, error) {
+	if err := filters.Validate(); err != nil {
+		return normalizedFilterPayload{}, err
+	}
+
+	return normalizedFilterPayload{
+		PaidState:            string(filters.PaidState),
+		PaidExpiresBeforeUTC: formatOptionalTime(filters.PaidExpiresBefore),
+		PaidExpiresAfterUTC:  formatOptionalTime(filters.PaidExpiresAfter),
+		DeclaredCountry:      filters.DeclaredCountry.String(),
+		SanctionCode:         string(filters.SanctionCode),
+		LimitCode:            string(filters.LimitCode),
+		CanLogin:             formatOptionalBool(filters.CanLogin),
+		CanCreatePrivateGame: formatOptionalBool(filters.CanCreatePrivateGame),
+		CanJoinGame:          formatOptionalBool(filters.CanJoinGame),
+	}, nil
+}
+
+func formatOptionalTime(value *time.Time) string {
+	if value == nil {
+		return ""
+	}
+
+	return value.UTC().Format(time.RFC3339Nano)
+}
+
+func formatOptionalBool(value *bool) string {
+	if value == nil {
+		return ""
+	}
+	if *value {
+		return "true"
+	}
+	return "false"
+}
diff --git a/user/internal/adapters/redisstate/page_token_test.go b/user/internal/adapters/redisstate/page_token_test.go
new file mode 100644
index 0000000..8455b04
--- /dev/null
+++ b/user/internal/adapters/redisstate/page_token_test.go
@@ -0,0 +1,70 @@
+package redisstate
+
+import (
+	"testing"
+	"time"
+
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/domain/entitlement"
+	"galaxy/user/internal/domain/policy"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestEncodeDecodePageToken(t *testing.T) {
+	t.Parallel()
+
+	before := time.Unix(1_775_250_000, 0).UTC()
+	after := time.Unix(1_775_240_000, 0).UTC()
+	canLogin := true
+	canCreate := false
+	canJoin := true
+
+	filters := UserListFilters{
+		PaidState:            entitlement.PaidStatePaid,
+		PaidExpiresBefore:    &before,
+		PaidExpiresAfter:     &after,
+		DeclaredCountry:      common.CountryCode("DE"),
+		SanctionCode:         policy.SanctionCodeLoginBlock,
+		LimitCode:            policy.LimitCodeMaxOwnedPrivateGames,
+		CanLogin:             &canLogin,
+		CanCreatePrivateGame: &canCreate,
+		CanJoinGame:          &canJoin,
+	}
+	cursor := PageCursor{
+		CreatedAt: time.Unix(1_775_240_100, 987_000_000).UTC(),
+		UserID:    common.UserID("user-123"),
+	}
+
+	token, err := EncodePageToken(cursor,
+		filters)
+	require.NoError(t, err)
+
+	decoded, err := DecodePageToken(token, filters)
+	require.NoError(t, err)
+	require.Equal(t, cursor, decoded)
+}
+
+func TestDecodePageTokenFilterMismatch(t *testing.T) {
+	t.Parallel()
+
+	cursor := PageCursor{
+		CreatedAt: time.Unix(1_775_240_100, 0).UTC(),
+		UserID:    common.UserID("user-123"),
+	}
+	filters := UserListFilters{
+		PaidState: entitlement.PaidStatePaid,
+	}
+
+	token, err := EncodePageToken(cursor, filters)
+	require.NoError(t, err)
+
+	_, err = DecodePageToken(token, UserListFilters{PaidState: entitlement.PaidStateFree})
+	require.ErrorIs(t, err, ErrPageTokenFiltersMismatch)
+}
+
+func TestDecodePageTokenRejectsInvalidInput(t *testing.T) {
+	t.Parallel()
+
+	_, err := DecodePageToken("%%%not-base64%%%", UserListFilters{})
+	require.Error(t, err)
+}
diff --git a/user/internal/adminapi/server.go b/user/internal/adminapi/server.go
new file mode 100644
index 0000000..2b04ffa
--- /dev/null
+++ b/user/internal/adminapi/server.go
@@ -0,0 +1,133 @@
+// Package adminapi exposes the optional private admin HTTP listener used for
+// operational endpoints such as Prometheus metrics.
+package adminapi
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"log/slog"
+	"net"
+	"net/http"
+	"sync"
+
+	"galaxy/user/internal/config"
+)
+
+// Server owns the optional admin HTTP listener exposed by the user service.
+type Server struct {
+	cfg     config.AdminHTTPConfig
+	handler http.Handler
+	logger  *slog.Logger
+
+	stateMu  sync.RWMutex
+	server   *http.Server
+	listener net.Listener
+}
+
+// NewServer constructs an admin HTTP server for cfg and handler.
+func NewServer(cfg config.AdminHTTPConfig, handler http.Handler, logger *slog.Logger) *Server {
+	if handler == nil {
+		handler = http.NotFoundHandler()
+	}
+	if logger == nil {
+		logger = slog.Default()
+	}
+	mux := http.NewServeMux()
+	mux.Handle("GET /metrics", handler)
+
+	return &Server{
+		cfg:     cfg,
+		handler: mux,
+		logger:  logger.With("component", "admin_http"),
+	}
+}
+
+// Enabled reports whether the admin listener should run.
+func (server *Server) Enabled() bool {
+	return server != nil && server.cfg.Addr != ""
+}
+
+// Run binds the configured listener and serves the admin HTTP surface until
+// Shutdown closes the server. A disabled admin server blocks until ctx is
+// canceled and then returns ctx's error.
+func (server *Server) Run(ctx context.Context) error {
+	if ctx == nil {
+		return errors.New("run admin HTTP server: nil context")
+	}
+	if err := ctx.Err(); err != nil {
+		return err
+	}
+	if !server.Enabled() {
+		<-ctx.Done()
+		// Return the context error so callers observe why the disabled
+		// listener stopped instead of a silent nil.
+		return ctx.Err()
+	}
+
+	listener, err := net.Listen("tcp", server.cfg.Addr)
+	if err != nil {
+		return fmt.Errorf("run admin HTTP server: listen on %q: %w", server.cfg.Addr, err)
+	}
+
+	httpServer := &http.Server{
+		Handler:           server.handler,
+		ReadHeaderTimeout: server.cfg.ReadHeaderTimeout,
+		ReadTimeout:       server.cfg.ReadTimeout,
+		IdleTimeout:       server.cfg.IdleTimeout,
+	}
+
+	server.stateMu.Lock()
+	server.server = httpServer
+	server.listener = listener
+	server.stateMu.Unlock()
+
+	server.logger.Info("admin HTTP server started", "addr", listener.Addr().String())
+
+	shutdownDone := make(chan struct{})
+	go func() {
+		defer close(shutdownDone)
+		<-ctx.Done()
+		shutdownCtx, cancel := context.WithTimeout(context.Background(), server.cfg.ReadTimeout)
+		defer cancel()
+		_ = server.Shutdown(shutdownCtx)
+	}()
+
+	defer func() {
+		server.stateMu.Lock()
+		server.server = nil
+		server.listener = nil
+		server.stateMu.Unlock()
+		<-shutdownDone
+	}()
+
+	err = httpServer.Serve(listener)
+	switch {
+	case err == nil:
+		return nil
+	case errors.Is(err,
+		http.ErrServerClosed):
+		server.logger.Info("admin HTTP server stopped")
+		return nil
+	default:
+		return fmt.Errorf("run admin HTTP server: serve on %q: %w", server.cfg.Addr, err)
+	}
+}
+
+// Shutdown gracefully stops the admin HTTP server within ctx.
+func (server *Server) Shutdown(ctx context.Context) error {
+	if ctx == nil {
+		return errors.New("shutdown admin HTTP server: nil context")
+	}
+
+	server.stateMu.RLock()
+	httpServer := server.server
+	server.stateMu.RUnlock()
+
+	if httpServer == nil {
+		return nil
+	}
+
+	if err := httpServer.Shutdown(ctx); err != nil && !errors.Is(err, http.ErrServerClosed) {
+		return fmt.Errorf("shutdown admin HTTP server: %w", err)
+	}
+
+	return nil
+}
diff --git a/user/internal/adminapi/server_test.go b/user/internal/adminapi/server_test.go
new file mode 100644
index 0000000..44cfd8e
--- /dev/null
+++ b/user/internal/adminapi/server_test.go
@@ -0,0 +1,98 @@
+package adminapi
+
+import (
+	"context"
+	"net/http"
+	"testing"
+	"time"
+
+	"galaxy/user/internal/config"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestServerRunDisabledWaitsForContext(t *testing.T) {
+	t.Parallel()
+
+	server := NewServer(config.AdminHTTPConfig{}, http.HandlerFunc(func(http.ResponseWriter, *http.Request) {
+		t.Fatal("disabled admin server must not serve requests")
+	}), nil)
+
+	ctx, cancel := context.WithCancel(context.Background())
+	defer cancel()
+
+	errCh := make(chan error, 1)
+	go func() {
+		errCh <- server.Run(ctx)
+	}()
+
+	cancel()
+
+	select {
+	case err := <-errCh:
+		require.ErrorIs(t, err, context.Canceled)
+	case <-time.After(2 * time.Second):
+		t.Fatal("disabled admin server did not stop after context cancellation")
+	}
+}
+
+func TestServerRunServesMetricsOnly(t *testing.T) {
+	t.Parallel()
+
+	server := NewServer(config.AdminHTTPConfig{
+		Addr:              "127.0.0.1:0",
+		ReadHeaderTimeout: 2 * time.Second,
+		ReadTimeout:       10 * time.Second,
+		IdleTimeout:       time.Minute,
+	}, http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
+		_, _ = w.Write([]byte("sample_metric 1\n"))
+	}), nil)
+
+	ctx, cancel := context.WithCancel(context.Background())
+	defer cancel()
+
+	errCh := make(chan error, 1)
+	go func() {
+		errCh <- server.Run(ctx)
+	}()
+
+	addr := waitForListener(t, server)
+
+	metricsResponse, err := http.Get("http://" + addr + "/metrics")
+	require.NoError(t, err)
+	t.Cleanup(func() { _ = metricsResponse.Body.Close() })
+	require.Equal(t, http.StatusOK, metricsResponse.StatusCode)
+
+	rootResponse, err := http.Get("http://" + addr + "/")
+	require.NoError(t, err)
+	t.Cleanup(func() { _ = rootResponse.Body.Close() })
+	require.Equal(t, http.StatusNotFound, rootResponse.StatusCode)
+
+	cancel()
+
+	select {
+	case err := <-errCh:
+		require.NoError(t, err)
+	case <-time.After(2 * time.Second):
+		t.Fatal("admin server did not stop after context cancellation")
+	}
+}
+
+func waitForListener(t *testing.T, server *Server) string {
+	t.Helper()
+
+	deadline := time.Now().Add(2 * time.Second)
+	for time.Now().Before(deadline) {
+		server.stateMu.RLock()
+		listener := server.listener
+		server.stateMu.RUnlock()
+		if listener != nil {
+			return listener.Addr().String()
+		}
+
+		time.Sleep(10 * time.Millisecond)
+	}
+
+	t.Fatal("admin server listener did not start")
+	return ""
+}
diff --git a/user/internal/api/internalhttp/admin_handler.go b/user/internal/api/internalhttp/admin_handler.go
new file mode 100644
index 0000000..ad35f3b
--- /dev/null
+++ b/user/internal/api/internalhttp/admin_handler.go
@@ -0,0 +1,205 @@
+package internalhttp
+
+import (
+	"context"
+	"net/http"
+	"strconv"
+	"strings"
+	"time"
+
+	"galaxy/user/internal/service/adminusers"
+	"galaxy/user/internal/service/shared"
+
+	"github.com/gin-gonic/gin"
+)
+
+type getUserByEmailRequest struct {
+	Email string `json:"email"`
+}
+
+type getUserByRaceNameRequest struct {
+	RaceName string `json:"race_name"`
+}
+
+func handleGetUserByID(useCase GetUserByIDUseCase, timeout time.Duration) gin.HandlerFunc {
+	return func(c *gin.Context) {
+		callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout)
+		defer cancel()
+
+		result, err := useCase.Execute(callCtx, adminusers.GetUserByIDInput{
+			UserID: c.Param("user_id"),
+		})
+		if err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(err))
+			return
+		}
+
+		c.JSON(http.StatusOK, result)
+	}
+}
+
+func handleGetUserByEmail(useCase GetUserByEmailUseCase, timeout time.Duration) gin.HandlerFunc {
+	return func(c *gin.Context) {
+		var request getUserByEmailRequest
+		if err := decodeJSONRequest(c.Request, &request); err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error())))
+			return
+		}
+
+		callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout)
+		defer cancel()
+
+		result, err := useCase.Execute(callCtx, adminusers.GetUserByEmailInput{
+			Email: request.Email,
+		})
+		if err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(err))
+			return
+		}
+
+		c.JSON(http.StatusOK, result)
+	}
+}
+
+func handleGetUserByRaceName(useCase GetUserByRaceNameUseCase, timeout time.Duration) gin.HandlerFunc {
+	return func(c *gin.Context) {
+		var request getUserByRaceNameRequest
+		if err := decodeJSONRequest(c.Request, &request); err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error())))
+			return
+		}
+
+		callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout)
+		defer cancel()
+
+		result, err := useCase.Execute(callCtx, adminusers.GetUserByRaceNameInput{
+			RaceName: request.RaceName,
+		})
+		if err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(err))
+			return
+		}
+
+		c.JSON(http.StatusOK, result)
+	}
+}
+
+func handleListUsers(useCase ListUsersUseCase, timeout time.Duration) gin.HandlerFunc {
+	return func(c *gin.Context) {
+		input, err := buildListUsersInput(c)
+		if err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(err))
+			return
+		}
+
+		callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout)
+		defer cancel()
+
+		result, err := useCase.Execute(callCtx, input)
+		if err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(err))
+			return
+		}
+
+		c.JSON(http.StatusOK, result)
+	}
+}
+
+func buildListUsersInput(c *gin.Context) (adminusers.ListUsersInput, error) {
+	pageSize, err := parseOptionalPageSize(c, "page_size")
+	if err != nil {
+		return adminusers.ListUsersInput{}, err
+	}
+	pageToken, err := parseOptionalPageToken(c, "page_token")
+	if err != nil {
+		return adminusers.ListUsersInput{}, err
+	}
+	paidExpiresBefore, err := parseOptionalRFC3339Query(c, "paid_expires_before")
+	if err != nil {
+		return adminusers.ListUsersInput{}, err
+	}
+	paidExpiresAfter, err := parseOptionalRFC3339Query(c, "paid_expires_after")
+	if err != nil {
+		return adminusers.ListUsersInput{}, err
+	}
+	canLogin, err := parseOptionalBoolQuery(c, "can_login")
+	if err != nil {
+		return adminusers.ListUsersInput{}, err
+	}
+	canCreatePrivateGame, err := parseOptionalBoolQuery(c, "can_create_private_game")
+	if err != nil {
+		return adminusers.ListUsersInput{}, err
+	}
+	canJoinGame, err := parseOptionalBoolQuery(c, "can_join_game")
+	if err != nil {
+		return adminusers.ListUsersInput{}, err
+	}
+
+	return adminusers.ListUsersInput{
+		PageSize:             pageSize,
+		PageToken:            pageToken,
+		PaidState:            c.Query("paid_state"),
+		PaidExpiresBefore:    paidExpiresBefore,
+		PaidExpiresAfter:     paidExpiresAfter,
+		DeclaredCountry:      c.Query("declared_country"),
+		SanctionCode:         c.Query("sanction_code"),
+		LimitCode:            c.Query("limit_code"),
+		CanLogin:             canLogin,
+		CanCreatePrivateGame: canCreatePrivateGame,
+		CanJoinGame:          canJoinGame,
+	}, nil
+}
+
+func parseOptionalPageSize(c *gin.Context, name string) (int, error) {
+	raw, present := c.GetQuery(name)
+	if !present {
+		return 0, nil
+	}
+
+	value, err := strconv.Atoi(strings.TrimSpace(raw))
+	if err != nil || value < 1 || value > 200 {
+		return 0, shared.InvalidRequest("page_size must be between
1 and 200") + } + + return value, nil +} + +func parseOptionalPageToken(c *gin.Context, name string) (string, error) { + raw, present := c.GetQuery(name) + if !present { + return "", nil + } + if strings.TrimSpace(raw) != raw { + return "", shared.InvalidRequest("page_token must not contain surrounding whitespace") + } + + return raw, nil +} + +func parseOptionalRFC3339Query(c *gin.Context, name string) (*time.Time, error) { + raw, present := c.GetQuery(name) + if !present { + return nil, nil + } + + parsed, err := time.Parse(time.RFC3339, strings.TrimSpace(raw)) + if err != nil { + return nil, shared.InvalidRequest(name + " must be a valid RFC 3339 timestamp") + } + + return &parsed, nil +} + +func parseOptionalBoolQuery(c *gin.Context, name string) (*bool, error) { + raw, present := c.GetQuery(name) + if !present { + return nil, nil + } + + parsed, err := strconv.ParseBool(strings.TrimSpace(raw)) + if err != nil { + return nil, shared.InvalidRequest(name + " must be a valid boolean") + } + + return &parsed, nil +} diff --git a/user/internal/api/internalhttp/admin_handler_test.go b/user/internal/api/internalhttp/admin_handler_test.go new file mode 100644 index 0000000..8eb9207 --- /dev/null +++ b/user/internal/api/internalhttp/admin_handler_test.go @@ -0,0 +1,233 @@ +package internalhttp + +import ( + "bytes" + "context" + "net/http" + "net/http/httptest" + "testing" + "time" + + "galaxy/user/internal/service/accountview" + "galaxy/user/internal/service/adminusers" + "galaxy/user/internal/service/shared" + + "github.com/stretchr/testify/require" +) + +func TestAdminReadHandlersSuccessCases(t *testing.T) { + t.Parallel() + + handler := mustNewHandler(t, Dependencies{ + GetUserByID: getUserByIDFunc(func(_ context.Context, input adminusers.GetUserByIDInput) (adminusers.LookupResult, error) { + require.Equal(t, "user-123", input.UserID) + return adminusers.LookupResult{User: sampleAccountView()}, nil + }), + GetUserByEmail: getUserByEmailFunc(func(_ context.Context, 
input adminusers.GetUserByEmailInput) (adminusers.LookupResult, error) { + require.Equal(t, "pilot@example.com", input.Email) + return adminusers.LookupResult{User: sampleAccountView()}, nil + }), + GetUserByRaceName: getUserByRaceNameFunc(func(_ context.Context, input adminusers.GetUserByRaceNameInput) (adminusers.LookupResult, error) { + require.Equal(t, "Pilot Nova", input.RaceName) + return adminusers.LookupResult{User: sampleAccountView()}, nil + }), + ListUsers: listUsersFunc(func(_ context.Context, input adminusers.ListUsersInput) (adminusers.ListUsersResult, error) { + require.Equal(t, 2, input.PageSize) + require.Equal(t, "cursor-1", input.PageToken) + require.Equal(t, "paid", input.PaidState) + require.Equal(t, "DE", input.DeclaredCountry) + require.Equal(t, "login_block", input.SanctionCode) + require.Equal(t, "max_owned_private_games", input.LimitCode) + require.NotNil(t, input.PaidExpiresBefore) + require.NotNil(t, input.PaidExpiresAfter) + require.NotNil(t, input.CanLogin) + require.NotNil(t, input.CanCreatePrivateGame) + require.NotNil(t, input.CanJoinGame) + require.False(t, *input.CanLogin) + require.True(t, *input.CanCreatePrivateGame) + require.True(t, *input.CanJoinGame) + require.Equal(t, time.Date(2026, time.April, 10, 12, 0, 0, 0, time.UTC), input.PaidExpiresBefore.UTC()) + require.Equal(t, time.Date(2026, time.April, 1, 12, 0, 0, 0, time.UTC), input.PaidExpiresAfter.UTC()) + + other := sampleAccountView() + other.UserID = "user-234" + other.Email = "second@example.com" + other.RaceName = "Second Pilot" + + return adminusers.ListUsersResult{ + Items: []accountview.AccountView{sampleAccountView(), other}, + NextPageToken: "cursor-2", + }, nil + }), + }) + + tests := []struct { + name string + method string + path string + body string + wantStatus int + wantBody string + }{ + { + name: "get user by id", + method: http.MethodGet, + path: "/api/v1/internal/users/user-123", + wantStatus: http.StatusOK, + wantBody: 
`{"user":{"user_id":"user-123","email":"pilot@example.com","race_name":"Pilot Nova","preferred_language":"en","time_zone":"Europe/Kaliningrad","declared_country":"DE","entitlement":{"plan_code":"free","is_paid":false,"source":"auth_registration","actor":{"type":"service","id":"user-service"},"reason_code":"initial_free_entitlement","starts_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"},"active_sanctions":[],"active_limits":[],"created_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"}}`, + }, + { + name: "get user by email", + method: http.MethodPost, + path: "/api/v1/internal/user-lookups/by-email", + body: `{"email":"pilot@example.com"}`, + wantStatus: http.StatusOK, + wantBody: `{"user":{"user_id":"user-123","email":"pilot@example.com","race_name":"Pilot Nova","preferred_language":"en","time_zone":"Europe/Kaliningrad","declared_country":"DE","entitlement":{"plan_code":"free","is_paid":false,"source":"auth_registration","actor":{"type":"service","id":"user-service"},"reason_code":"initial_free_entitlement","starts_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"},"active_sanctions":[],"active_limits":[],"created_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"}}`, + }, + { + name: "get user by race name", + method: http.MethodPost, + path: "/api/v1/internal/user-lookups/by-race-name", + body: `{"race_name":"Pilot Nova"}`, + wantStatus: http.StatusOK, + wantBody: `{"user":{"user_id":"user-123","email":"pilot@example.com","race_name":"Pilot Nova","preferred_language":"en","time_zone":"Europe/Kaliningrad","declared_country":"DE","entitlement":{"plan_code":"free","is_paid":false,"source":"auth_registration","actor":{"type":"service","id":"user-service"},"reason_code":"initial_free_entitlement","starts_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"},"active_sanctions":[],"active_limits":[],"created_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"}}`, + }, + { + name: "list users", 
+ method: http.MethodGet, + path: "/api/v1/internal/users?page_size=2&page_token=cursor-1&paid_state=paid&paid_expires_before=2026-04-10T12:00:00Z&paid_expires_after=2026-04-01T12:00:00Z&declared_country=DE&sanction_code=login_block&limit_code=max_owned_private_games&can_login=false&can_create_private_game=true&can_join_game=true", + wantStatus: http.StatusOK, + wantBody: `{"items":[{"user_id":"user-123","email":"pilot@example.com","race_name":"Pilot Nova","preferred_language":"en","time_zone":"Europe/Kaliningrad","declared_country":"DE","entitlement":{"plan_code":"free","is_paid":false,"source":"auth_registration","actor":{"type":"service","id":"user-service"},"reason_code":"initial_free_entitlement","starts_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"},"active_sanctions":[],"active_limits":[],"created_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"},{"user_id":"user-234","email":"second@example.com","race_name":"Second Pilot","preferred_language":"en","time_zone":"Europe/Kaliningrad","declared_country":"DE","entitlement":{"plan_code":"free","is_paid":false,"source":"auth_registration","actor":{"type":"service","id":"user-service"},"reason_code":"initial_free_entitlement","starts_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"},"active_sanctions":[],"active_limits":[],"created_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"}],"next_page_token":"cursor-2"}`, + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + var body *bytes.Buffer + if tt.body != "" { + body = bytes.NewBufferString(tt.body) + } else { + body = &bytes.Buffer{} + } + + request := httptest.NewRequest(tt.method, tt.path, body) + if tt.body != "" { + request.Header.Set("Content-Type", "application/json") + } + recorder := httptest.NewRecorder() + + handler.ServeHTTP(recorder, request) + + require.Equal(t, tt.wantStatus, recorder.Code) + assertJSONEq(t, recorder.Body.String(), 
tt.wantBody) + }) + } +} + +func TestAdminReadHandlersErrorCases(t *testing.T) { + t.Parallel() + + handler := mustNewHandler(t, Dependencies{ + GetUserByID: getUserByIDFunc(func(context.Context, adminusers.GetUserByIDInput) (adminusers.LookupResult, error) { + return adminusers.LookupResult{}, shared.SubjectNotFound() + }), + GetUserByEmail: getUserByEmailFunc(func(context.Context, adminusers.GetUserByEmailInput) (adminusers.LookupResult, error) { + return adminusers.LookupResult{}, shared.SubjectNotFound() + }), + GetUserByRaceName: getUserByRaceNameFunc(func(context.Context, adminusers.GetUserByRaceNameInput) (adminusers.LookupResult, error) { + return adminusers.LookupResult{}, shared.SubjectNotFound() + }), + ListUsers: listUsersFunc(func(context.Context, adminusers.ListUsersInput) (adminusers.ListUsersResult, error) { + return adminusers.ListUsersResult{}, shared.InvalidRequest("page_token is invalid or does not match current filters") + }), + }) + + tests := []struct { + name string + method string + path string + body string + wantStatus int + wantBody string + }{ + { + name: "get user by id not found", + method: http.MethodGet, + path: "/api/v1/internal/users/user-missing", + wantStatus: http.StatusNotFound, + wantBody: `{"error":{"code":"subject_not_found","message":"subject not found"}}`, + }, + { + name: "get user by email unknown json field", + method: http.MethodPost, + path: "/api/v1/internal/user-lookups/by-email", + body: `{"email":"pilot@example.com","extra":true}`, + wantStatus: http.StatusBadRequest, + wantBody: `{"error":{"code":"invalid_request","message":"request body contains unknown field \"extra\""}}`, + }, + { + name: "get user by race name not found", + method: http.MethodPost, + path: "/api/v1/internal/user-lookups/by-race-name", + body: `{"race_name":"Missing Pilot"}`, + wantStatus: http.StatusNotFound, + wantBody: `{"error":{"code":"subject_not_found","message":"subject not found"}}`, + }, + { + name: "list users invalid page size", +
method: http.MethodGet, + path: "/api/v1/internal/users?page_size=201", + wantStatus: http.StatusBadRequest, + wantBody: `{"error":{"code":"invalid_request","message":"page_size must be between 1 and 200"}}`, + }, + { + name: "list users invalid timestamp", + method: http.MethodGet, + path: "/api/v1/internal/users?paid_expires_before=not-a-time", + wantStatus: http.StatusBadRequest, + wantBody: `{"error":{"code":"invalid_request","message":"paid_expires_before must be a valid RFC 3339 timestamp"}}`, + }, + { + name: "list users invalid boolean", + method: http.MethodGet, + path: "/api/v1/internal/users?can_login=maybe", + wantStatus: http.StatusBadRequest, + wantBody: `{"error":{"code":"invalid_request","message":"can_login must be a valid boolean"}}`, + }, + { + name: "list users invalid page token", + method: http.MethodGet, + path: "/api/v1/internal/users?page_token=cursor-1", + wantStatus: http.StatusBadRequest, + wantBody: `{"error":{"code":"invalid_request","message":"page_token is invalid or does not match current filters"}}`, + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + var body *bytes.Buffer + if tt.body != "" { + body = bytes.NewBufferString(tt.body) + } else { + body = &bytes.Buffer{} + } + + request := httptest.NewRequest(tt.method, tt.path, body) + if tt.body != "" { + request.Header.Set("Content-Type", "application/json") + } + recorder := httptest.NewRecorder() + + handler.ServeHTTP(recorder, request) + + require.Equal(t, tt.wantStatus, recorder.Code) + assertJSONEq(t, recorder.Body.String(), tt.wantBody) + }) + } +} diff --git a/user/internal/api/internalhttp/handler.go b/user/internal/api/internalhttp/handler.go new file mode 100644 index 0000000..7472455 --- /dev/null +++ b/user/internal/api/internalhttp/handler.go @@ -0,0 +1,841 @@ +package internalhttp + +import ( + "context" + "fmt" + "log/slog" + "net/http" + "time" + + "galaxy/user/internal/logging" + 
"galaxy/user/internal/service/authdirectory" + "galaxy/user/internal/service/entitlementsvc" + "galaxy/user/internal/service/geosync" + "galaxy/user/internal/service/lobbyeligibility" + "galaxy/user/internal/service/policysvc" + "galaxy/user/internal/service/selfservice" + "galaxy/user/internal/service/shared" + "galaxy/user/internal/telemetry" + + "github.com/gin-gonic/gin" + "go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin" + "go.opentelemetry.io/otel/attribute" +) + +const internalHTTPServiceName = "galaxy-user-internal" + +type errorResponse struct { + Error errorBody `json:"error"` +} + +type errorBody struct { + Code string `json:"code"` + Message string `json:"message"` +} + +type resolveByEmailRequest struct { + Email string `json:"email"` +} + +type resolveByEmailResponse struct { + Kind string `json:"kind"` + UserID string `json:"user_id,omitempty"` + BlockReasonCode string `json:"block_reason_code,omitempty"` +} + +type existsByUserIDResponse struct { + Exists bool `json:"exists"` +} + +type ensureByEmailRequest struct { + Email string `json:"email"` + RegistrationContext *ensureRegistrationContextDTO `json:"registration_context"` +} + +type ensureRegistrationContextDTO struct { + PreferredLanguage string `json:"preferred_language"` + TimeZone string `json:"time_zone"` +} + +type ensureByEmailResponse struct { + Outcome string `json:"outcome"` + UserID string `json:"user_id,omitempty"` + BlockReasonCode string `json:"block_reason_code,omitempty"` +} + +type blockByUserIDRequest struct { + ReasonCode string `json:"reason_code"` +} + +type blockByEmailRequest struct { + Email string `json:"email"` + ReasonCode string `json:"reason_code"` +} + +type blockResponse struct { + Outcome string `json:"outcome"` + UserID string `json:"user_id,omitempty"` +} + +type getMyAccountResponse struct { + Account selfservice.AccountView `json:"account"` +} + +type updateMyProfileRequest struct { + RaceName string `json:"race_name"` +} + +type 
updateMySettingsRequest struct { + PreferredLanguage string `json:"preferred_language"` + TimeZone string `json:"time_zone"` +} + +type syncDeclaredCountryRequest struct { + DeclaredCountry string `json:"declared_country"` +} + +type syncDeclaredCountryResponse struct { + UserID string `json:"user_id"` + DeclaredCountry string `json:"declared_country"` + UpdatedAt time.Time `json:"updated_at"` +} + +type actorDTO struct { + Type string `json:"type"` + ID string `json:"id,omitempty"` +} + +type grantEntitlementRequest struct { + PlanCode string `json:"plan_code"` + Source string `json:"source"` + ReasonCode string `json:"reason_code"` + Actor actorDTO `json:"actor"` + StartsAt string `json:"starts_at"` + EndsAt string `json:"ends_at,omitempty"` +} + +type extendEntitlementRequest struct { + Source string `json:"source"` + ReasonCode string `json:"reason_code"` + Actor actorDTO `json:"actor"` + EndsAt string `json:"ends_at"` +} + +type revokeEntitlementRequest struct { + Source string `json:"source"` + ReasonCode string `json:"reason_code"` + Actor actorDTO `json:"actor"` +} + +type applySanctionRequest struct { + SanctionCode string `json:"sanction_code"` + Scope string `json:"scope"` + ReasonCode string `json:"reason_code"` + Actor actorDTO `json:"actor"` + AppliedAt string `json:"applied_at"` + ExpiresAt string `json:"expires_at,omitempty"` +} + +type removeSanctionRequest struct { + SanctionCode string `json:"sanction_code"` + ReasonCode string `json:"reason_code"` + Actor actorDTO `json:"actor"` +} + +type setLimitRequest struct { + LimitCode string `json:"limit_code"` + Value int `json:"value"` + ReasonCode string `json:"reason_code"` + Actor actorDTO `json:"actor"` + AppliedAt string `json:"applied_at"` + ExpiresAt string `json:"expires_at,omitempty"` +} + +type removeLimitRequest struct { + LimitCode string `json:"limit_code"` + ReasonCode string `json:"reason_code"` + Actor actorDTO `json:"actor"` +} + +type entitlementSnapshotResponse struct { + PlanCode 
string `json:"plan_code"` + IsPaid bool `json:"is_paid"` + Source string `json:"source"` + Actor actorDTO `json:"actor"` + ReasonCode string `json:"reason_code"` + StartsAt time.Time `json:"starts_at"` + EndsAt *time.Time `json:"ends_at,omitempty"` + UpdatedAt time.Time `json:"updated_at"` +} + +type entitlementCommandResponse struct { + UserID string `json:"user_id"` + Entitlement entitlementSnapshotResponse `json:"entitlement"` +} + +func newHandlerWithConfig(cfg Config, deps Dependencies) (http.Handler, error) { + if err := cfg.Validate(); err != nil { + return nil, err + } + + normalizedDeps, err := normalizeDependencies(deps) + if err != nil { + return nil, err + } + + configureGinModeOnce.Do(func() { + gin.SetMode(gin.ReleaseMode) + }) + + engine := gin.New() + engine.Use(newOTelMiddleware(normalizedDeps.Telemetry)) + engine.Use(withObservability(normalizedDeps.Logger, normalizedDeps.Telemetry)) + engine.POST("/api/v1/internal/user-resolutions/by-email", handleResolveByEmail(normalizedDeps.ResolveByEmail, cfg.RequestTimeout)) + engine.GET("/api/v1/internal/users/:user_id/exists", handleExistsByUserID(normalizedDeps.ExistsByUserID, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/users/ensure-by-email", handleEnsureByEmail(normalizedDeps.EnsureByEmail, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/users/:user_id/block", handleBlockByUserID(normalizedDeps.BlockByUserID, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/user-blocks/by-email", handleBlockByEmail(normalizedDeps.BlockByEmail, cfg.RequestTimeout)) + engine.GET("/api/v1/internal/users/:user_id/account", handleGetMyAccount(normalizedDeps.GetMyAccount, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/users/:user_id/profile", handleUpdateMyProfile(normalizedDeps.UpdateMyProfile, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/users/:user_id/settings", handleUpdateMySettings(normalizedDeps.UpdateMySettings, cfg.RequestTimeout)) + engine.GET("/api/v1/internal/users/:user_id", 
handleGetUserByID(normalizedDeps.GetUserByID, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/user-lookups/by-email", handleGetUserByEmail(normalizedDeps.GetUserByEmail, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/user-lookups/by-race-name", handleGetUserByRaceName(normalizedDeps.GetUserByRaceName, cfg.RequestTimeout)) + engine.GET("/api/v1/internal/users", handleListUsers(normalizedDeps.ListUsers, cfg.RequestTimeout)) + engine.GET("/api/v1/internal/users/:user_id/eligibility", handleGetUserEligibility(normalizedDeps.GetUserEligibility, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/users/:user_id/declared-country/sync", handleSyncDeclaredCountry(normalizedDeps.SyncDeclaredCountry, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/users/:user_id/entitlements/grant", handleGrantEntitlement(normalizedDeps.GrantEntitlement, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/users/:user_id/entitlements/extend", handleExtendEntitlement(normalizedDeps.ExtendEntitlement, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/users/:user_id/entitlements/revoke", handleRevokeEntitlement(normalizedDeps.RevokeEntitlement, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/users/:user_id/sanctions/apply", handleApplySanction(normalizedDeps.ApplySanction, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/users/:user_id/sanctions/remove", handleRemoveSanction(normalizedDeps.RemoveSanction, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/users/:user_id/limits/set", handleSetLimit(normalizedDeps.SetLimit, cfg.RequestTimeout)) + engine.POST("/api/v1/internal/users/:user_id/limits/remove", handleRemoveLimit(normalizedDeps.RemoveLimit, cfg.RequestTimeout)) + + return engine, nil +} + +func handleResolveByEmail(useCase ResolveByEmailUseCase, timeout time.Duration) gin.HandlerFunc { + return func(c *gin.Context) { + var request resolveByEmailRequest + if err := decodeJSONRequest(c.Request, &request); err != nil { + abortWithProjection(c, 
shared.ProjectInternalError(shared.InvalidRequest(err.Error()))) + return + } + + callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout) + defer cancel() + + result, err := useCase.Execute(callCtx, authdirectory.ResolveByEmailInput{ + Email: request.Email, + }) + if err != nil { + abortWithProjection(c, shared.ProjectInternalError(err)) + return + } + + c.JSON(http.StatusOK, resolveByEmailResponse{ + Kind: result.Kind, + UserID: result.UserID, + BlockReasonCode: result.BlockReasonCode, + }) + } +} + +func handleExistsByUserID(useCase ExistsByUserIDUseCase, timeout time.Duration) gin.HandlerFunc { + return func(c *gin.Context) { + callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout) + defer cancel() + + result, err := useCase.Execute(callCtx, authdirectory.ExistsByUserIDInput{ + UserID: c.Param("user_id"), + }) + if err != nil { + abortWithProjection(c, shared.ProjectInternalError(err)) + return + } + + c.JSON(http.StatusOK, existsByUserIDResponse{Exists: result.Exists}) + } +} + +func handleEnsureByEmail(useCase EnsureByEmailUseCase, timeout time.Duration) gin.HandlerFunc { + return func(c *gin.Context) { + var request ensureByEmailRequest + if err := decodeJSONRequest(c.Request, &request); err != nil { + abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error()))) + return + } + if request.RegistrationContext == nil { + abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest("registration_context must be present"))) + return + } + + registrationContext := &authdirectory.RegistrationContext{ + PreferredLanguage: request.RegistrationContext.PreferredLanguage, + TimeZone: request.RegistrationContext.TimeZone, + } + + callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout) + defer cancel() + + result, err := useCase.Execute(callCtx, authdirectory.EnsureByEmailInput{ + Email: request.Email, + RegistrationContext:
registrationContext, + }) + if err != nil { + abortWithProjection(c, shared.ProjectInternalError(err)) + return + } + + c.JSON(http.StatusOK, ensureByEmailResponse{ + Outcome: result.Outcome, + UserID: result.UserID, + BlockReasonCode: result.BlockReasonCode, + }) + } +} + +func handleBlockByUserID(useCase BlockByUserIDUseCase, timeout time.Duration) gin.HandlerFunc { + return func(c *gin.Context) { + var request blockByUserIDRequest + if err := decodeJSONRequest(c.Request, &request); err != nil { + abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error()))) + return + } + + callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout) + defer cancel() + + result, err := useCase.Execute(callCtx, authdirectory.BlockByUserIDInput{ + UserID: c.Param("user_id"), + ReasonCode: request.ReasonCode, + }) + if err != nil { + abortWithProjection(c, shared.ProjectInternalError(err)) + return + } + + c.JSON(http.StatusOK, blockResponse{ + Outcome: result.Outcome, + UserID: result.UserID, + }) + } +} + +func handleBlockByEmail(useCase BlockByEmailUseCase, timeout time.Duration) gin.HandlerFunc { + return func(c *gin.Context) { + var request blockByEmailRequest + if err := decodeJSONRequest(c.Request, &request); err != nil { + abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error()))) + return + } + + callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout) + defer cancel() + + result, err := useCase.Execute(callCtx, authdirectory.BlockByEmailInput{ + Email: request.Email, + ReasonCode: request.ReasonCode, + }) + if err != nil { + abortWithProjection(c, shared.ProjectInternalError(err)) + return + } + + c.JSON(http.StatusOK, blockResponse{ + Outcome: result.Outcome, + UserID: result.UserID, + }) + } +} + +func handleGetMyAccount(useCase GetMyAccountUseCase, timeout time.Duration) gin.HandlerFunc { + return func(c *gin.Context) { + callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout) + 
defer cancel() + + result, err := useCase.Execute(callCtx, selfservice.GetMyAccountInput{ + UserID: c.Param("user_id"), + }) + if err != nil { + abortWithProjection(c, shared.ProjectInternalError(err)) + return + } + + c.JSON(http.StatusOK, getMyAccountResponse{ + Account: result.Account, + }) + } +} + +func handleUpdateMyProfile(useCase UpdateMyProfileUseCase, timeout time.Duration) gin.HandlerFunc { + return func(c *gin.Context) { + var request updateMyProfileRequest + if err := decodeJSONRequest(c.Request, &request); err != nil { + abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error()))) + return + } + + callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout) + defer cancel() + + result, err := useCase.Execute(callCtx, selfservice.UpdateMyProfileInput{ + UserID: c.Param("user_id"), + RaceName: request.RaceName, + }) + if err != nil { + abortWithProjection(c, shared.ProjectInternalError(err)) + return + } + + c.JSON(http.StatusOK, getMyAccountResponse{ + Account: result.Account, + }) + } +} + +func handleUpdateMySettings(useCase UpdateMySettingsUseCase, timeout time.Duration) gin.HandlerFunc { + return func(c *gin.Context) { + var request updateMySettingsRequest + if err := decodeJSONRequest(c.Request, &request); err != nil { + abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error()))) + return + } + + callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout) + defer cancel() + + result, err := useCase.Execute(callCtx, selfservice.UpdateMySettingsInput{ + UserID: c.Param("user_id"), + PreferredLanguage: request.PreferredLanguage, + TimeZone: request.TimeZone, + }) + if err != nil { + abortWithProjection(c, shared.ProjectInternalError(err)) + return + } + + c.JSON(http.StatusOK, getMyAccountResponse{ + Account: result.Account, + }) + } +} + +func handleGetUserEligibility(useCase GetUserEligibilityUseCase, timeout time.Duration) gin.HandlerFunc { + return func(c *gin.Context) { + 
callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout) + defer cancel() + + result, err := useCase.Execute(callCtx, lobbyeligibility.GetUserEligibilityInput{ + UserID: c.Param("user_id"), + }) + if err != nil { + abortWithProjection(c, shared.ProjectInternalError(err)) + return + } + + c.JSON(http.StatusOK, result) + } +} + +func handleSyncDeclaredCountry(useCase SyncDeclaredCountryUseCase, timeout time.Duration) gin.HandlerFunc { + return func(c *gin.Context) { + var request syncDeclaredCountryRequest + if err := decodeJSONRequest(c.Request, &request); err != nil { + abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error()))) + return + } + + callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout) + defer cancel() + + result, err := useCase.Execute(callCtx, geosync.SyncDeclaredCountryInput{ + UserID: c.Param("user_id"), + DeclaredCountry: request.DeclaredCountry, + }) + if err != nil { + abortWithProjection(c, shared.ProjectInternalError(err)) + return + } + + c.JSON(http.StatusOK, syncDeclaredCountryResponse{ + UserID: result.UserID, + DeclaredCountry: result.DeclaredCountry, + UpdatedAt: result.UpdatedAt.UTC(), + }) + } +} + +func handleGrantEntitlement(useCase GrantEntitlementUseCase, timeout time.Duration) gin.HandlerFunc { + return func(c *gin.Context) { + var request grantEntitlementRequest + if err := decodeJSONRequest(c.Request, &request); err != nil { + abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error()))) + return + } + + callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout) + defer cancel() + + result, err := useCase.Execute(callCtx, entitlementsvc.GrantInput{ + UserID: c.Param("user_id"), + PlanCode: request.PlanCode, + Source: request.Source, + ReasonCode: request.ReasonCode, + Actor: entitlementsvc.ActorInput{ + Type: request.Actor.Type, + ID: request.Actor.ID, + }, + StartsAt: request.StartsAt, + EndsAt: request.EndsAt, + }) + if err != nil { + 
abortWithProjection(c, shared.ProjectInternalError(err))
+			return
+		}
+
+		c.JSON(http.StatusOK, entitlementCommandResponseFromResult(result))
+	}
+}
+
+func handleExtendEntitlement(useCase ExtendEntitlementUseCase, timeout time.Duration) gin.HandlerFunc {
+	return func(c *gin.Context) {
+		var request extendEntitlementRequest
+		if err := decodeJSONRequest(c.Request, &request); err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error())))
+			return
+		}
+
+		callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout)
+		defer cancel()
+
+		result, err := useCase.Execute(callCtx, entitlementsvc.ExtendInput{
+			UserID:     c.Param("user_id"),
+			Source:     request.Source,
+			ReasonCode: request.ReasonCode,
+			Actor: entitlementsvc.ActorInput{
+				Type: request.Actor.Type,
+				ID:   request.Actor.ID,
+			},
+			EndsAt: request.EndsAt,
+		})
+		if err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(err))
+			return
+		}
+
+		c.JSON(http.StatusOK, entitlementCommandResponseFromResult(result))
+	}
+}
+
+func handleRevokeEntitlement(useCase RevokeEntitlementUseCase, timeout time.Duration) gin.HandlerFunc {
+	return func(c *gin.Context) {
+		var request revokeEntitlementRequest
+		if err := decodeJSONRequest(c.Request, &request); err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error())))
+			return
+		}
+
+		callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout)
+		defer cancel()
+
+		result, err := useCase.Execute(callCtx, entitlementsvc.RevokeInput{
+			UserID:     c.Param("user_id"),
+			Source:     request.Source,
+			ReasonCode: request.ReasonCode,
+			Actor: entitlementsvc.ActorInput{
+				Type: request.Actor.Type,
+				ID:   request.Actor.ID,
+			},
+		})
+		if err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(err))
+			return
+		}
+
+		c.JSON(http.StatusOK, entitlementCommandResponseFromResult(result))
+	}
+}
+
+func handleApplySanction(useCase ApplySanctionUseCase, timeout time.Duration) gin.HandlerFunc {
+	return func(c *gin.Context) {
+		var request applySanctionRequest
+		if err := decodeJSONRequest(c.Request, &request); err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error())))
+			return
+		}
+
+		callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout)
+		defer cancel()
+
+		result, err := useCase.Execute(callCtx, policysvc.ApplySanctionInput{
+			UserID:       c.Param("user_id"),
+			SanctionCode: request.SanctionCode,
+			Scope:        request.Scope,
+			ReasonCode:   request.ReasonCode,
+			Actor: policysvc.ActorInput{
+				Type: request.Actor.Type,
+				ID:   request.Actor.ID,
+			},
+			AppliedAt: request.AppliedAt,
+			ExpiresAt: request.ExpiresAt,
+		})
+		if err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(err))
+			return
+		}
+
+		c.JSON(http.StatusOK, result)
+	}
+}
+
+func handleRemoveSanction(useCase RemoveSanctionUseCase, timeout time.Duration) gin.HandlerFunc {
+	return func(c *gin.Context) {
+		var request removeSanctionRequest
+		if err := decodeJSONRequest(c.Request, &request); err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error())))
+			return
+		}
+
+		callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout)
+		defer cancel()
+
+		result, err := useCase.Execute(callCtx, policysvc.RemoveSanctionInput{
+			UserID:       c.Param("user_id"),
+			SanctionCode: request.SanctionCode,
+			ReasonCode:   request.ReasonCode,
+			Actor: policysvc.ActorInput{
+				Type: request.Actor.Type,
+				ID:   request.Actor.ID,
+			},
+		})
+		if err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(err))
+			return
+		}
+
+		c.JSON(http.StatusOK, result)
+	}
+}
+
+func handleSetLimit(useCase SetLimitUseCase, timeout time.Duration) gin.HandlerFunc {
+	return func(c *gin.Context) {
+		var request setLimitRequest
+		if err := decodeJSONRequest(c.Request, &request); err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error())))
+			return
+		}
+
+		callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout)
+		defer cancel()
+
+		result, err := useCase.Execute(callCtx, policysvc.SetLimitInput{
+			UserID:     c.Param("user_id"),
+			LimitCode:  request.LimitCode,
+			Value:      request.Value,
+			ReasonCode: request.ReasonCode,
+			Actor: policysvc.ActorInput{
+				Type: request.Actor.Type,
+				ID:   request.Actor.ID,
+			},
+			AppliedAt: request.AppliedAt,
+			ExpiresAt: request.ExpiresAt,
+		})
+		if err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(err))
+			return
+		}
+
+		c.JSON(http.StatusOK, result)
+	}
+}
+
+func handleRemoveLimit(useCase RemoveLimitUseCase, timeout time.Duration) gin.HandlerFunc {
+	return func(c *gin.Context) {
+		var request removeLimitRequest
+		if err := decodeJSONRequest(c.Request, &request); err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(shared.InvalidRequest(err.Error())))
+			return
+		}
+
+		callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout)
+		defer cancel()
+
+		result, err := useCase.Execute(callCtx, policysvc.RemoveLimitInput{
+			UserID:     c.Param("user_id"),
+			LimitCode:  request.LimitCode,
+			ReasonCode: request.ReasonCode,
+			Actor: policysvc.ActorInput{
+				Type: request.Actor.Type,
+				ID:   request.Actor.ID,
+			},
+		})
+		if err != nil {
+			abortWithProjection(c, shared.ProjectInternalError(err))
+			return
+		}
+
+		c.JSON(http.StatusOK, result)
+	}
+}
+
+func normalizeDependencies(deps Dependencies) (Dependencies, error) {
+	switch {
+	case deps.ResolveByEmail == nil:
+		return Dependencies{}, fmt.Errorf("resolve-by-email use case must not be nil")
+	case deps.EnsureByEmail == nil:
+		return Dependencies{}, fmt.Errorf("ensure-by-email use case must not be nil")
+	case deps.ExistsByUserID == nil:
+		return Dependencies{}, fmt.Errorf("exists-by-user-id use case must not be nil")
+	case deps.BlockByUserID == nil:
+		return Dependencies{}, fmt.Errorf("block-by-user-id use case must not be nil")
+	case deps.BlockByEmail == nil:
+		return Dependencies{}, fmt.Errorf("block-by-email use case must not be nil")
+	case deps.GetMyAccount == nil:
+		return Dependencies{}, fmt.Errorf("get-my-account use case must not be nil")
+	case deps.UpdateMyProfile == nil:
+		return Dependencies{}, fmt.Errorf("update-my-profile use case must not be nil")
+	case deps.UpdateMySettings == nil:
+		return Dependencies{}, fmt.Errorf("update-my-settings use case must not be nil")
+	case deps.GetUserByID == nil:
+		return Dependencies{}, fmt.Errorf("get-user-by-id use case must not be nil")
+	case deps.GetUserByEmail == nil:
+		return Dependencies{}, fmt.Errorf("get-user-by-email use case must not be nil")
+	case deps.GetUserByRaceName == nil:
+		return Dependencies{}, fmt.Errorf("get-user-by-race-name use case must not be nil")
+	case deps.ListUsers == nil:
+		return Dependencies{}, fmt.Errorf("list-users use case must not be nil")
+	case deps.GetUserEligibility == nil:
+		return Dependencies{}, fmt.Errorf("get-user-eligibility use case must not be nil")
+	case deps.SyncDeclaredCountry == nil:
+		return Dependencies{}, fmt.Errorf("sync-declared-country use case must not be nil")
+	case deps.GrantEntitlement == nil:
+		return Dependencies{}, fmt.Errorf("grant-entitlement use case must not be nil")
+	case deps.ExtendEntitlement == nil:
+		return Dependencies{}, fmt.Errorf("extend-entitlement use case must not be nil")
+	case deps.RevokeEntitlement == nil:
+		return Dependencies{}, fmt.Errorf("revoke-entitlement use case must not be nil")
+	case deps.ApplySanction == nil:
+		return Dependencies{}, fmt.Errorf("apply-sanction use case must not be nil")
+	case deps.RemoveSanction == nil:
+		return Dependencies{}, fmt.Errorf("remove-sanction use case must not be nil")
+	case deps.SetLimit == nil:
+		return Dependencies{}, fmt.Errorf("set-limit use case must not be nil")
+	case deps.RemoveLimit == nil:
+		return Dependencies{}, fmt.Errorf("remove-limit use case must not be nil")
+	default:
+		if deps.Logger == nil {
+			deps.Logger = slog.Default()
+		}
+		return deps, nil
+	}
+}
+
+func entitlementCommandResponseFromResult(result entitlementsvc.CommandResult) entitlementCommandResponse {
+	response := entitlementCommandResponse{
+		UserID: result.UserID,
+		Entitlement: entitlementSnapshotResponse{
+			PlanCode:   string(result.Entitlement.PlanCode),
+			IsPaid:     result.Entitlement.IsPaid,
+			Source:     result.Entitlement.Source.String(),
+			Actor:      actorDTO{Type: result.Entitlement.Actor.Type.String(), ID: result.Entitlement.Actor.ID.String()},
+			ReasonCode: result.Entitlement.ReasonCode.String(),
+			StartsAt:   result.Entitlement.StartsAt.UTC(),
+			UpdatedAt:  result.Entitlement.UpdatedAt.UTC(),
+		},
+	}
+	if result.Entitlement.EndsAt != nil {
+		value := result.Entitlement.EndsAt.UTC()
+		response.Entitlement.EndsAt = &value
+	}
+
+	return response
+}
+
+func newOTelMiddleware(runtime *telemetry.Runtime) gin.HandlerFunc {
+	options := []otelgin.Option{}
+	if runtime != nil {
+		options = append(
+			options,
+			otelgin.WithTracerProvider(runtime.TracerProvider()),
+			otelgin.WithMeterProvider(runtime.MeterProvider()),
+		)
+	}
+
+	return otelgin.Middleware(internalHTTPServiceName, options...)
+}
+
+func withObservability(logger *slog.Logger, metrics *telemetry.Runtime) gin.HandlerFunc {
+	if logger == nil {
+		logger = slog.Default()
+	}
+
+	return func(c *gin.Context) {
+		startedAt := time.Now()
+		c.Next()
+
+		statusCode := c.Writer.Status()
+		route := c.FullPath()
+		if route == "" {
+			route = "unmatched"
+		}
+
+		errorCode, _ := c.Get(internalErrorCodeContextKey)
+		errorCodeValue, _ := errorCode.(string)
+		outcome := outcomeFromStatusCode(statusCode)
+		duration := time.Since(startedAt)
+
+		attrs := []any{
+			"transport", "http",
+			"route", route,
+			"method", c.Request.Method,
+			"status_code", statusCode,
+			"duration_ms", float64(duration.Microseconds()) / 1000,
+			"edge_outcome", string(outcome),
+		}
+		if errorCodeValue != "" {
+			attrs = append(attrs, "error_code", errorCodeValue)
+		}
+		attrs = append(attrs, logging.TraceAttrsFromContext(c.Request.Context())...)
+
+		metricAttrs := []attribute.KeyValue{
+			attribute.String("route", route),
+			attribute.String("method", c.Request.Method),
+			attribute.String("edge_outcome", string(outcome)),
+		}
+		if errorCodeValue != "" {
+			metricAttrs = append(metricAttrs, attribute.String("error_code", errorCodeValue))
+		}
+		metrics.RecordInternalHTTPRequest(c.Request.Context(), metricAttrs, duration)
+
+		switch outcome {
+		case edgeOutcomeSuccess:
+			logger.InfoContext(c.Request.Context(), "internal request completed", attrs...)
+		case edgeOutcomeFailed:
+			logger.ErrorContext(c.Request.Context(), "internal request failed", attrs...)
+		default:
+			logger.WarnContext(c.Request.Context(), "internal request rejected", attrs...)
+		}
+	}
+}
+
+type edgeOutcome string
+
+const (
+	edgeOutcomeSuccess  edgeOutcome = "success"
+	edgeOutcomeRejected edgeOutcome = "rejected"
+	edgeOutcomeFailed   edgeOutcome = "failed"
+)
+
+func outcomeFromStatusCode(statusCode int) edgeOutcome {
+	switch {
+	case statusCode >= 500:
+		return edgeOutcomeFailed
+	case statusCode >= 400:
+		return edgeOutcomeRejected
+	default:
+		return edgeOutcomeSuccess
+	}
+}
diff --git a/user/internal/api/internalhttp/handler_test.go b/user/internal/api/internalhttp/handler_test.go
new file mode 100644
index 0000000..d368dd8
--- /dev/null
+++ b/user/internal/api/internalhttp/handler_test.go
@@ -0,0 +1,1264 @@
+package internalhttp
+
+import (
+	"bytes"
+	"context"
+	"net/http"
+	"net/http/httptest"
+	"testing"
+	"time"
+
+	"galaxy/user/internal/domain/account"
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/domain/entitlement"
+	"galaxy/user/internal/domain/policy"
+	"galaxy/user/internal/ports"
+	"galaxy/user/internal/service/adminusers"
+	"galaxy/user/internal/service/authdirectory"
+	"galaxy/user/internal/service/entitlementsvc"
+	"galaxy/user/internal/service/geosync"
+	"galaxy/user/internal/service/lobbyeligibility"
+	"galaxy/user/internal/service/policysvc"
+	"galaxy/user/internal/service/selfservice"
+	"galaxy/user/internal/service/shared"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestAuthFacingHandlersSuccessCases(t *testing.T) {
+	t.Parallel()
+
+	handler := mustNewHandler(t, Dependencies{
+		ResolveByEmail: resolveByEmailFunc(func(_ context.Context, input authdirectory.ResolveByEmailInput) (authdirectory.ResolveByEmailResult, error) {
+			require.Equal(t, "pilot@example.com", input.Email)
+			return authdirectory.ResolveByEmailResult{Kind: "existing", UserID: "user-123"}, nil
+		}),
+		EnsureByEmail: ensureByEmailFunc(func(_ context.Context, input authdirectory.EnsureByEmailInput) (authdirectory.EnsureByEmailResult, error) {
+			require.Equal(t, "created@example.com", input.Email)
+			require.NotNil(t, input.RegistrationContext)
+			return authdirectory.EnsureByEmailResult{Outcome: "created", UserID: "user-234"}, nil
+		}),
+		ExistsByUserID: existsByUserIDFunc(func(_ context.Context, input authdirectory.ExistsByUserIDInput) (authdirectory.ExistsByUserIDResult, error) {
+			require.Equal(t, "user-123", input.UserID)
+			return authdirectory.ExistsByUserIDResult{Exists: true}, nil
+		}),
+		BlockByUserID: blockByUserIDFunc(func(_ context.Context, input authdirectory.BlockByUserIDInput) (authdirectory.BlockResult, error) {
+			require.Equal(t, "user-123", input.UserID)
+			return authdirectory.BlockResult{Outcome: "blocked", UserID: "user-123"}, nil
+		}),
+		BlockByEmail: blockByEmailFunc(func(_ context.Context, input authdirectory.BlockByEmailInput) (authdirectory.BlockResult, error) {
+			require.Equal(t, "blocked@example.com", input.Email)
+			return authdirectory.BlockResult{Outcome: "already_blocked", UserID: "user-345"}, nil
+		}),
+		GetMyAccount: getMyAccountFunc(func(_ context.Context, input selfservice.GetMyAccountInput) (selfservice.GetMyAccountResult, error) {
+			require.Equal(t, "user-123", input.UserID)
+			return selfservice.GetMyAccountResult{Account: sampleAccountView()}, nil
+		}),
+		UpdateMyProfile: updateMyProfileFunc(func(_ context.Context, input selfservice.UpdateMyProfileInput) (selfservice.UpdateMyProfileResult, error) {
+			require.Equal(t, "user-123", input.UserID)
+			require.Equal(t, "Nova Prime", input.RaceName)
+			accountView := sampleAccountView()
+			accountView.RaceName = input.RaceName
+			return selfservice.UpdateMyProfileResult{Account: accountView}, nil
+		}),
+		UpdateMySettings: updateMySettingsFunc(func(_ context.Context, input selfservice.UpdateMySettingsInput) (selfservice.UpdateMySettingsResult, error) {
+			require.Equal(t, "user-123", input.UserID)
+			require.Equal(t, "en-US", input.PreferredLanguage)
+			require.Equal(t, "UTC", input.TimeZone)
+			accountView := sampleAccountView()
+			accountView.PreferredLanguage = input.PreferredLanguage
+			accountView.TimeZone = input.TimeZone
+			return selfservice.UpdateMySettingsResult{Account: accountView}, nil
+		}),
+		GetUserEligibility: getUserEligibilityFunc(func(_ context.Context, input lobbyeligibility.GetUserEligibilityInput) (lobbyeligibility.GetUserEligibilityResult, error) {
+			switch input.UserID {
+			case "user-123":
+				return sampleEligibilityView(true), nil
+			case "user-missing":
+				return sampleEligibilityView(false), nil
+			default:
+				return lobbyeligibility.GetUserEligibilityResult{}, shared.InvalidRequest("unexpected user id")
+			}
+		}),
+		SyncDeclaredCountry: syncDeclaredCountryFunc(func(_ context.Context, input geosync.SyncDeclaredCountryInput) (geosync.SyncDeclaredCountryResult, error) {
+			require.Equal(t, "user-123", input.UserID)
+			switch input.DeclaredCountry {
+			case "FR":
+				return geosync.SyncDeclaredCountryResult{
+					UserID:          "user-123",
+					DeclaredCountry: "FR",
+					UpdatedAt:       time.Date(2026, time.April, 9, 11, 0, 0, 0, time.UTC),
+				}, nil
+			case "DE":
+				return geosync.SyncDeclaredCountryResult{
+					UserID:          "user-123",
+					DeclaredCountry: "DE",
+					UpdatedAt:       time.Date(2026, time.April, 9, 10, 0, 0, 0, time.UTC),
+				}, nil
+			default:
+				return geosync.SyncDeclaredCountryResult{}, shared.InvalidRequest("unexpected declared country")
+			}
+		}),
+		GrantEntitlement: grantEntitlementFunc(func(_ context.Context, input entitlementsvc.GrantInput) (entitlementsvc.CommandResult, error) {
+			require.Equal(t, "user-123", input.UserID)
+			require.Equal(t, "paid_monthly", input.PlanCode)
+			require.Equal(t, "admin", input.Source)
+			require.Equal(t, "manual_grant", input.ReasonCode)
+			require.Equal(t, "admin", input.Actor.Type)
+			require.Equal(t, "admin-1", input.Actor.ID)
+			return entitlementsvc.CommandResult{
+				UserID: "user-123",
+				Entitlement: entitlement.CurrentSnapshot{
+					UserID:     common.UserID("user-123"),
+					PlanCode:   entitlement.PlanCodePaidMonthly,
+					IsPaid:     true,
+					StartsAt:   time.Date(2026, time.April, 9, 10, 0, 0, 0, time.UTC),
+					EndsAt:     timePointer(time.Date(2026, time.May, 9, 10, 0, 0, 0, time.UTC)),
+					Source:     common.Source("admin"),
+					Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+					ReasonCode: common.ReasonCode("manual_grant"),
+					UpdatedAt:  time.Date(2026, time.April, 9, 10, 0, 0, 0, time.UTC),
+				},
+			}, nil
+		}),
+		ExtendEntitlement: extendEntitlementFunc(func(_ context.Context, input entitlementsvc.ExtendInput) (entitlementsvc.CommandResult, error) {
+			require.Equal(t, "user-123", input.UserID)
+			require.Equal(t, "admin", input.Source)
+			require.Equal(t, "manual_extend", input.ReasonCode)
+			require.Equal(t, "2026-06-09T10:00:00Z", input.EndsAt)
+			return entitlementsvc.CommandResult{
+				UserID: "user-123",
+				Entitlement: entitlement.CurrentSnapshot{
+					UserID:     common.UserID("user-123"),
+					PlanCode:   entitlement.PlanCodePaidMonthly,
+					IsPaid:     true,
+					StartsAt:   time.Date(2026, time.April, 9, 10, 0, 0, 0, time.UTC),
+					EndsAt:     timePointer(time.Date(2026, time.June, 9, 10, 0, 0, 0, time.UTC)),
+					Source:     common.Source("admin"),
+					Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+					ReasonCode: common.ReasonCode("manual_extend"),
+					UpdatedAt:  time.Date(2026, time.April, 9, 10, 0, 0, 0, time.UTC),
+				},
+			}, nil
+		}),
+		RevokeEntitlement: revokeEntitlementFunc(func(_ context.Context, input entitlementsvc.RevokeInput) (entitlementsvc.CommandResult, error) {
+			require.Equal(t, "user-123", input.UserID)
+			require.Equal(t, "admin", input.Source)
+			require.Equal(t, "manual_revoke", input.ReasonCode)
+			return entitlementsvc.CommandResult{
+				UserID: "user-123",
+				Entitlement: entitlement.CurrentSnapshot{
+					UserID:     common.UserID("user-123"),
+					PlanCode:   entitlement.PlanCodeFree,
+					IsPaid:     false,
+					StartsAt:   time.Date(2026, time.April, 9, 10, 0, 0, 0, time.UTC),
+					Source:     common.Source("admin"),
+					Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+					ReasonCode: common.ReasonCode("manual_revoke"),
+					UpdatedAt:  time.Date(2026, time.April, 9, 10, 0, 0, 0, time.UTC),
+				},
+			}, nil
+		}),
+		ApplySanction: applySanctionFunc(func(_ context.Context, input policysvc.ApplySanctionInput) (policysvc.SanctionCommandResult, error) {
+			require.Equal(t, "user-123", input.UserID)
+			require.Equal(t, "login_block", input.SanctionCode)
+			require.Equal(t, "auth", input.Scope)
+			require.Equal(t, "manual_block", input.ReasonCode)
+			require.Equal(t, "admin", input.Actor.Type)
+			require.Equal(t, "admin-1", input.Actor.ID)
+			return policysvc.SanctionCommandResult{
+				UserID: "user-123",
+				ActiveSanctions: []policysvc.ActiveSanctionView{
+					{
+						SanctionCode: "login_block",
+						Scope:        "auth",
+						ReasonCode:   "manual_block",
+						Actor:        policysvc.ActorRefView{Type: "admin", ID: "admin-1"},
+						AppliedAt:    time.Date(2026, time.April, 9, 10, 0, 0, 0, time.UTC),
+						ExpiresAt:    timePointer(time.Date(2026, time.May, 9, 10, 0, 0, 0, time.UTC)),
+					},
+				},
+			}, nil
+		}),
+		RemoveSanction: removeSanctionFunc(func(_ context.Context, input policysvc.RemoveSanctionInput) (policysvc.SanctionCommandResult, error) {
+			require.Equal(t, "user-123", input.UserID)
+			require.Equal(t, "login_block", input.SanctionCode)
+			require.Equal(t, "manual_remove", input.ReasonCode)
+			return policysvc.SanctionCommandResult{UserID: "user-123", ActiveSanctions: []policysvc.ActiveSanctionView{}}, nil
+		}),
+		SetLimit: setLimitFunc(func(_ context.Context, input policysvc.SetLimitInput) (policysvc.LimitCommandResult, error) {
+			require.Equal(t, "user-123", input.UserID)
+			require.Equal(t, "max_owned_private_games", input.LimitCode)
+			require.Equal(t, 5, input.Value)
+			require.Equal(t, "manual_override", input.ReasonCode)
+			return policysvc.LimitCommandResult{
+				UserID: "user-123",
+				ActiveLimits: []policysvc.ActiveLimitView{
+					{
+						LimitCode:  "max_owned_private_games",
+						Value:      5,
+						ReasonCode: "manual_override",
+						Actor:      policysvc.ActorRefView{Type: "admin", ID: "admin-1"},
+						AppliedAt:  time.Date(2026, time.April, 9, 10, 0, 0, 0, time.UTC),
+						ExpiresAt:  timePointer(time.Date(2026, time.June, 9, 10, 0, 0, 0, time.UTC)),
+					},
+				},
+			}, nil
+		}),
+		RemoveLimit: removeLimitFunc(func(_ context.Context, input policysvc.RemoveLimitInput) (policysvc.LimitCommandResult, error) {
+			require.Equal(t, "user-123", input.UserID)
+			require.Equal(t, "max_owned_private_games", input.LimitCode)
+			require.Equal(t, "manual_remove", input.ReasonCode)
+			return policysvc.LimitCommandResult{UserID: "user-123", ActiveLimits: []policysvc.ActiveLimitView{}}, nil
+		}),
+	})
+
+	tests := []struct {
+		name       string
+		method     string
+		path       string
+		body       string
+		wantStatus int
+		wantBody   string
+	}{
+		{
+			name:       "resolve by email",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/user-resolutions/by-email",
+			body:       `{"email":"pilot@example.com"}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"kind":"existing","user_id":"user-123"}`,
+		},
+		{
+			name:       "exists by user id",
+			method:     http.MethodGet,
+			path:       "/api/v1/internal/users/user-123/exists",
+			wantStatus: http.StatusOK,
+			wantBody:   `{"exists":true}`,
+		},
+		{
+			name:       "ensure by email",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/users/ensure-by-email",
+			body:       `{"email":"created@example.com","registration_context":{"preferred_language":"en","time_zone":"Europe/Kaliningrad"}}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"outcome":"created","user_id":"user-234"}`,
+		},
+		{
+			name:       "block by user id",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/users/user-123/block",
+			body:       `{"reason_code":"policy_blocked"}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"outcome":"blocked","user_id":"user-123"}`,
+		},
+		{
+			name:       "block by email",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/user-blocks/by-email",
+			body:       `{"email":"blocked@example.com","reason_code":"policy_blocked"}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"outcome":"already_blocked","user_id":"user-345"}`,
+		},
+		{
+			name:       "get my account",
+			method:     http.MethodGet,
+			path:       "/api/v1/internal/users/user-123/account",
+			wantStatus: http.StatusOK,
+			wantBody:   `{"account":{"user_id":"user-123","email":"pilot@example.com","race_name":"Pilot Nova","preferred_language":"en","time_zone":"Europe/Kaliningrad","declared_country":"DE","entitlement":{"plan_code":"free","is_paid":false,"source":"auth_registration","actor":{"type":"service","id":"user-service"},"reason_code":"initial_free_entitlement","starts_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"},"active_sanctions":[],"active_limits":[],"created_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"}}`,
+		},
+		{
+			name:       "update my profile",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/users/user-123/profile",
+			body:       `{"race_name":"Nova Prime"}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"account":{"user_id":"user-123","email":"pilot@example.com","race_name":"Nova Prime","preferred_language":"en","time_zone":"Europe/Kaliningrad","declared_country":"DE","entitlement":{"plan_code":"free","is_paid":false,"source":"auth_registration","actor":{"type":"service","id":"user-service"},"reason_code":"initial_free_entitlement","starts_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"},"active_sanctions":[],"active_limits":[],"created_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"}}`,
+		},
+		{
+			name:       "update my settings",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/users/user-123/settings",
+			body:       `{"preferred_language":"en-US","time_zone":"UTC"}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"account":{"user_id":"user-123","email":"pilot@example.com","race_name":"Pilot Nova","preferred_language":"en-US","time_zone":"UTC","declared_country":"DE","entitlement":{"plan_code":"free","is_paid":false,"source":"auth_registration","actor":{"type":"service","id":"user-service"},"reason_code":"initial_free_entitlement","starts_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"},"active_sanctions":[],"active_limits":[],"created_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"}}`,
+		},
+		{
+			name:       "get user eligibility",
+			method:     http.MethodGet,
+			path:       "/api/v1/internal/users/user-123/eligibility",
+			wantStatus: http.StatusOK,
+			wantBody:   `{"exists":true,"user_id":"user-123","entitlement":{"plan_code":"paid_monthly","is_paid":true,"source":"billing","actor":{"type":"billing","id":"invoice-1"},"reason_code":"renewal","starts_at":"2026-04-09T10:00:00Z","ends_at":"2026-05-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"},"active_sanctions":[{"sanction_code":"private_game_create_block","scope":"lobby","reason_code":"manual_block","actor":{"type":"admin","id":"admin-1"},"applied_at":"2026-04-09T10:00:00Z","expires_at":"2026-05-09T10:00:00Z"}],"effective_limits":[{"limit_code":"max_owned_private_games","value":3},{"limit_code":"max_pending_public_applications","value":10},{"limit_code":"max_active_game_memberships","value":10}],"markers":{"can_login":true,"can_create_private_game":false,"can_manage_private_game":true,"can_join_game":true,"can_update_profile":true}}`,
+		},
+		{
+			name:       "get user eligibility not found snapshot",
+			method:     http.MethodGet,
+			path:       "/api/v1/internal/users/user-missing/eligibility",
+			wantStatus: http.StatusOK,
+			wantBody:   `{"exists":false,"user_id":"user-missing","active_sanctions":[],"effective_limits":[],"markers":{"can_login":false,"can_create_private_game":false,"can_manage_private_game":false,"can_join_game":false,"can_update_profile":false}}`,
+		},
+		{
+			name:       "sync declared country change",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/users/user-123/declared-country/sync",
+			body:       `{"declared_country":"FR"}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"user_id":"user-123","declared_country":"FR","updated_at":"2026-04-09T11:00:00Z"}`,
+		},
+		{
+			name:       "sync declared country same value no-op",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/users/user-123/declared-country/sync",
+			body:       `{"declared_country":"DE"}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"user_id":"user-123","declared_country":"DE","updated_at":"2026-04-09T10:00:00Z"}`,
+		},
+		{
+			name:       "grant entitlement",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/users/user-123/entitlements/grant",
+			body:       `{"plan_code":"paid_monthly","source":"admin","reason_code":"manual_grant","actor":{"type":"admin","id":"admin-1"},"starts_at":"2026-04-09T10:00:00Z","ends_at":"2026-05-09T10:00:00Z"}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"user_id":"user-123","entitlement":{"plan_code":"paid_monthly","is_paid":true,"source":"admin","actor":{"type":"admin","id":"admin-1"},"reason_code":"manual_grant","starts_at":"2026-04-09T10:00:00Z","ends_at":"2026-05-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"}}`,
+		},
+		{
+			name:       "extend entitlement",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/users/user-123/entitlements/extend",
+			body:       `{"source":"admin","reason_code":"manual_extend","actor":{"type":"admin","id":"admin-1"},"ends_at":"2026-06-09T10:00:00Z"}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"user_id":"user-123","entitlement":{"plan_code":"paid_monthly","is_paid":true,"source":"admin","actor":{"type":"admin","id":"admin-1"},"reason_code":"manual_extend","starts_at":"2026-04-09T10:00:00Z","ends_at":"2026-06-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"}}`,
+		},
+		{
+			name:       "revoke entitlement",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/users/user-123/entitlements/revoke",
+			body:       `{"source":"admin","reason_code":"manual_revoke","actor":{"type":"admin","id":"admin-1"}}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"user_id":"user-123","entitlement":{"plan_code":"free","is_paid":false,"source":"admin","actor":{"type":"admin","id":"admin-1"},"reason_code":"manual_revoke","starts_at":"2026-04-09T10:00:00Z","updated_at":"2026-04-09T10:00:00Z"}}`,
+		},
+		{
+			name:       "apply sanction",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/users/user-123/sanctions/apply",
+			body:       `{"sanction_code":"login_block","scope":"auth","reason_code":"manual_block","actor":{"type":"admin","id":"admin-1"},"applied_at":"2026-04-09T10:00:00Z","expires_at":"2026-05-09T10:00:00Z"}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"user_id":"user-123","active_sanctions":[{"sanction_code":"login_block","scope":"auth","reason_code":"manual_block","actor":{"type":"admin","id":"admin-1"},"applied_at":"2026-04-09T10:00:00Z","expires_at":"2026-05-09T10:00:00Z"}]}`,
+		},
+		{
+			name:       "remove sanction",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/users/user-123/sanctions/remove",
+			body:       `{"sanction_code":"login_block","reason_code":"manual_remove","actor":{"type":"admin","id":"admin-1"}}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"user_id":"user-123","active_sanctions":[]}`,
+		},
+		{
+			name:       "set limit",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/users/user-123/limits/set",
+			body:       `{"limit_code":"max_owned_private_games","value":5,"reason_code":"manual_override","actor":{"type":"admin","id":"admin-1"},"applied_at":"2026-04-09T10:00:00Z","expires_at":"2026-06-09T10:00:00Z"}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"user_id":"user-123","active_limits":[{"limit_code":"max_owned_private_games","value":5,"reason_code":"manual_override","actor":{"type":"admin","id":"admin-1"},"applied_at":"2026-04-09T10:00:00Z","expires_at":"2026-06-09T10:00:00Z"}]}`,
+		},
+		{
+			name:       "remove limit",
+			method:     http.MethodPost,
+			path:       "/api/v1/internal/users/user-123/limits/remove",
+			body:       `{"limit_code":"max_owned_private_games","reason_code":"manual_remove","actor":{"type":"admin","id":"admin-1"}}`,
+			wantStatus: http.StatusOK,
+			wantBody:   `{"user_id":"user-123","active_limits":[]}`,
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			recorder := httptest.NewRecorder()
+			request := httptest.NewRequest(tt.method, tt.path, bytes.NewBufferString(tt.body))
+			if tt.body != "" {
+				request.Header.Set("Content-Type", "application/json")
+			}
+
+			handler.ServeHTTP(recorder, request)
+
+			require.Equal(t, tt.wantStatus, recorder.Code)
+			require.Equal(t, jsonContentType, recorder.Header().Get("Content-Type"))
+			assertJSONEq(t, recorder.Body.String(), tt.wantBody)
+		})
+	}
+}
+
+func TestHandlersRejectInvalidJSONAndMissingRegistrationContext(t *testing.T) {
+	t.Parallel()
+
+	handler := mustNewHandler(t, Dependencies{
+		ResolveByEmail: resolveByEmailFunc(func(context.Context, authdirectory.ResolveByEmailInput) (authdirectory.ResolveByEmailResult, error) {
+			return authdirectory.ResolveByEmailResult{}, nil
+		}),
+		EnsureByEmail: ensureByEmailFunc(func(context.Context, authdirectory.EnsureByEmailInput) (authdirectory.EnsureByEmailResult, error) {
+			return authdirectory.EnsureByEmailResult{}, nil
+		}),
+		ExistsByUserID: existsByUserIDFunc(func(context.Context, authdirectory.ExistsByUserIDInput) (authdirectory.ExistsByUserIDResult, error) {
+			return authdirectory.ExistsByUserIDResult{}, nil
+		}),
+		BlockByUserID: blockByUserIDFunc(func(context.Context, authdirectory.BlockByUserIDInput) (authdirectory.BlockResult, error) {
+			return authdirectory.BlockResult{}, nil
+		}),
+		BlockByEmail: blockByEmailFunc(func(context.Context, authdirectory.BlockByEmailInput) (authdirectory.BlockResult, error) {
+			return authdirectory.BlockResult{}, nil
+		}),
+		GetMyAccount: getMyAccountFunc(func(context.Context, selfservice.GetMyAccountInput) (selfservice.GetMyAccountResult, error) {
+			return selfservice.GetMyAccountResult{}, nil
+		}),
+		UpdateMyProfile: updateMyProfileFunc(func(context.Context, selfservice.UpdateMyProfileInput) (selfservice.UpdateMyProfileResult, error) {
+			return selfservice.UpdateMyProfileResult{}, nil
+		}),
+		UpdateMySettings: updateMySettingsFunc(func(context.Context, selfservice.UpdateMySettingsInput) (selfservice.UpdateMySettingsResult, error) {
+			return selfservice.UpdateMySettingsResult{}, nil
+		}),
+		GetUserEligibility: getUserEligibilityFunc(func(context.Context, lobbyeligibility.GetUserEligibilityInput) (lobbyeligibility.GetUserEligibilityResult, error) {
+			return lobbyeligibility.GetUserEligibilityResult{}, nil
+		}),
+		SyncDeclaredCountry: syncDeclaredCountryFunc(func(context.Context, geosync.SyncDeclaredCountryInput) (geosync.SyncDeclaredCountryResult, error) {
+			return geosync.SyncDeclaredCountryResult{}, nil
+		}),
+		ApplySanction: applySanctionFunc(func(context.Context, policysvc.ApplySanctionInput) (policysvc.SanctionCommandResult, error) {
+			return policysvc.SanctionCommandResult{}, nil
+		}),
+		RemoveSanction: removeSanctionFunc(func(context.Context, policysvc.RemoveSanctionInput) (policysvc.SanctionCommandResult, error) {
+			return policysvc.SanctionCommandResult{}, nil
+		}),
+		SetLimit: setLimitFunc(func(_ context.Context, input policysvc.SetLimitInput) (policysvc.LimitCommandResult, error) {
+			if input.LimitCode == string(policy.LimitCodeMaxPendingPrivateJoinRequests) {
+				return policysvc.LimitCommandResult{}, shared.InvalidRequest("limit_code is unsupported")
+			}
+			return policysvc.LimitCommandResult{}, nil
+		}),
+		RemoveLimit: removeLimitFunc(func(_ context.Context, input policysvc.RemoveLimitInput) (policysvc.LimitCommandResult, error) {
+			if input.LimitCode == string(policy.LimitCodeMaxPendingPrivateInvitesSent) {
+				return policysvc.LimitCommandResult{}, shared.InvalidRequest("limit_code is unsupported")
+			}
+			return policysvc.LimitCommandResult{}, nil
+		}),
+	})
+
+	tests := []struct {
+		name     string
+		method   string
+		path     string
+		body     string
+		wantBody string
+	}{
+		{
+			name:     "resolve empty body",
+			method:   http.MethodPost,
+			path:     "/api/v1/internal/user-resolutions/by-email",
+			body:     ``,
+			wantBody: `{"error":{"code":"invalid_request","message":"request body must not be empty"}}`,
+		},
+		{
+			name:     "resolve malformed json",
+			method:   http.MethodPost,
+			path:     "/api/v1/internal/user-resolutions/by-email",
+			body:     `{"email":`,
+			wantBody: `{"error":{"code":"invalid_request","message":"request body contains malformed JSON"}}`,
+		},
+		{
+			name:     "ensure trailing json",
+			method:   http.MethodPost,
+			path:     "/api/v1/internal/users/ensure-by-email",
+			body:     `{"email":"pilot@example.com","registration_context":{"preferred_language":"en","time_zone":"UTC"}}{}`,
+			wantBody: `{"error":{"code":"invalid_request","message":"request body must contain a single JSON object"}}`,
+		},
+		{
+			name:     "block by email unknown field",
+			method:   http.MethodPost,
+			path:     "/api/v1/internal/user-blocks/by-email",
+			body:     `{"email":"pilot@example.com","reason_code":"policy_blocked","extra":true}`,
+			wantBody: `{"error":{"code":"invalid_request","message":"request body contains unknown field \"extra\""}}`,
+		},
+		{
+			name:     "ensure missing registration context",
+			method:   http.MethodPost,
+			path:     "/api/v1/internal/users/ensure-by-email",
+			body:     `{"email":"pilot@example.com"}`,
+			wantBody: `{"error":{"code":"invalid_request","message":"registration_context must be present"}}`,
+		},
+		{
+			name:     "sync declared country unknown field",
+			method:   http.MethodPost,
+			path:     "/api/v1/internal/users/user-123/declared-country/sync",
+			body:     `{"declared_country":"DE","extra":true}`,
+			wantBody: `{"error":{"code":"invalid_request","message":"request body contains unknown field \"extra\""}}`,
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			recorder := httptest.NewRecorder()
+			request := httptest.NewRequest(tt.method, tt.path, bytes.NewBufferString(tt.body))
+			if tt.body != "" {
+				request.Header.Set("Content-Type", "application/json")
+			}
+
+			handler.ServeHTTP(recorder, request)
+
+			require.Equal(t, http.StatusBadRequest, recorder.Code)
+			assertJSONEq(t, recorder.Body.String(), tt.wantBody)
+		})
+	}
+}
+
+func TestBlockByUserIDNotFound(t *testing.T) {
+	t.Parallel()
+
+	handler := mustNewHandler(t, Dependencies{
+		ResolveByEmail: resolveByEmailFunc(func(context.Context, authdirectory.ResolveByEmailInput) (authdirectory.ResolveByEmailResult, error) {
+			return authdirectory.ResolveByEmailResult{}, nil
+		}),
+		EnsureByEmail: ensureByEmailFunc(func(context.Context, authdirectory.EnsureByEmailInput) (authdirectory.EnsureByEmailResult, error) {
+			return authdirectory.EnsureByEmailResult{}, nil
+		}),
+		ExistsByUserID: existsByUserIDFunc(func(context.Context, authdirectory.ExistsByUserIDInput) (authdirectory.ExistsByUserIDResult, error) {
+			return authdirectory.ExistsByUserIDResult{}, nil
+		}),
+		BlockByUserID: blockByUserIDFunc(func(context.Context, authdirectory.BlockByUserIDInput) (authdirectory.BlockResult, error) {
+			return authdirectory.BlockResult{}, shared.SubjectNotFound()
+		}),
+		BlockByEmail: blockByEmailFunc(func(context.Context, authdirectory.BlockByEmailInput) (authdirectory.BlockResult, error) {
+			return authdirectory.BlockResult{}, nil
+		}),
+		GetMyAccount: getMyAccountFunc(func(context.Context, selfservice.GetMyAccountInput) (selfservice.GetMyAccountResult, error) {
+			return selfservice.GetMyAccountResult{}, nil
+		}),
+		UpdateMyProfile: updateMyProfileFunc(func(context.Context, selfservice.UpdateMyProfileInput) (selfservice.UpdateMyProfileResult, error) {
+			return selfservice.UpdateMyProfileResult{}, nil
+		}),
+		UpdateMySettings: updateMySettingsFunc(func(context.Context, selfservice.UpdateMySettingsInput) (selfservice.UpdateMySettingsResult, error) {
+			return selfservice.UpdateMySettingsResult{}, nil
+		}),
+		GetUserEligibility: getUserEligibilityFunc(func(context.Context, lobbyeligibility.GetUserEligibilityInput) (lobbyeligibility.GetUserEligibilityResult, error) {
+			return lobbyeligibility.GetUserEligibilityResult{}, nil
+		}),
+		ApplySanction: applySanctionFunc(func(context.Context, policysvc.ApplySanctionInput) (policysvc.SanctionCommandResult, error) {
+			return policysvc.SanctionCommandResult{}, nil
+		}),
+		RemoveSanction: removeSanctionFunc(func(context.Context, policysvc.RemoveSanctionInput) (policysvc.SanctionCommandResult, error) {
+			return policysvc.SanctionCommandResult{}, nil
+		}),
+		SetLimit: setLimitFunc(func(context.Context, policysvc.SetLimitInput) (policysvc.LimitCommandResult, error) {
+			return policysvc.LimitCommandResult{}, shared.InvalidRequest("limit_code is unsupported")
+		}),
+		RemoveLimit: removeLimitFunc(func(context.Context, policysvc.RemoveLimitInput) (policysvc.LimitCommandResult, error) {
+			return policysvc.LimitCommandResult{}, shared.InvalidRequest("limit_code is unsupported")
+		}),
+	})
+
+	recorder := httptest.NewRecorder()
+	request := httptest.NewRequest(
+		http.MethodPost,
+		"/api/v1/internal/users/user-missing/block",
+		bytes.NewBufferString(`{"reason_code":"policy_blocked"}`),
+	)
+	request.Header.Set("Content-Type", "application/json")
+
+	handler.ServeHTTP(recorder, request)
+
+	require.Equal(t, http.StatusNotFound, recorder.Code)
+	assertJSONEq(t, recorder.Body.String(), `{"error":{"code":"subject_not_found","message":"subject not found"}}`)
+}
+
+func TestSelfServiceHandlersRejectUnknownFieldsAndProjectErrors(t *testing.T) {
+	t.Parallel()
+
+	handler := mustNewHandler(t, Dependencies{
+		ResolveByEmail: resolveByEmailFunc(func(context.Context, authdirectory.ResolveByEmailInput) (authdirectory.ResolveByEmailResult, error) {
+			return authdirectory.ResolveByEmailResult{}, nil
+		}),
+		EnsureByEmail: ensureByEmailFunc(func(context.Context, authdirectory.EnsureByEmailInput) (authdirectory.EnsureByEmailResult, error) {
+			return authdirectory.EnsureByEmailResult{}, nil
+		}),
+		ExistsByUserID: existsByUserIDFunc(func(context.Context, authdirectory.ExistsByUserIDInput) (authdirectory.ExistsByUserIDResult, error) {
+			return authdirectory.ExistsByUserIDResult{}, nil
+		}),
+		BlockByUserID: blockByUserIDFunc(func(context.Context, authdirectory.BlockByUserIDInput) (authdirectory.BlockResult, error) {
+			return authdirectory.BlockResult{}, nil
+		}),
+		BlockByEmail: blockByEmailFunc(func(context.Context, authdirectory.BlockByEmailInput) (authdirectory.BlockResult, error) {
+			return authdirectory.BlockResult{}, nil
+		}),
+		GetMyAccount: getMyAccountFunc(func(context.Context, selfservice.GetMyAccountInput) (selfservice.GetMyAccountResult, error) {
+			return
selfservice.GetMyAccountResult{}, shared.SubjectNotFound() + }), + UpdateMyProfile: updateMyProfileFunc(func(context.Context, selfservice.UpdateMyProfileInput) (selfservice.UpdateMyProfileResult, error) { + return selfservice.UpdateMyProfileResult{}, shared.Conflict() + }), + UpdateMySettings: updateMySettingsFunc(func(context.Context, selfservice.UpdateMySettingsInput) (selfservice.UpdateMySettingsResult, error) { + return selfservice.UpdateMySettingsResult{}, nil + }), + GetUserEligibility: getUserEligibilityFunc(func(_ context.Context, input lobbyeligibility.GetUserEligibilityInput) (lobbyeligibility.GetUserEligibilityResult, error) { + if input.UserID == "bad-id" { + return lobbyeligibility.GetUserEligibilityResult{}, shared.InvalidRequest("user id must start with \"user-\"") + } + return lobbyeligibility.GetUserEligibilityResult{}, nil + }), + SyncDeclaredCountry: syncDeclaredCountryFunc(func(_ context.Context, input geosync.SyncDeclaredCountryInput) (geosync.SyncDeclaredCountryResult, error) { + if input.UserID == "user-missing" { + return geosync.SyncDeclaredCountryResult{}, shared.SubjectNotFound() + } + if input.DeclaredCountry == "ZZ" { + return geosync.SyncDeclaredCountryResult{}, shared.InvalidRequest("declared_country must be a valid ISO 3166-1 alpha-2 country code") + } + return geosync.SyncDeclaredCountryResult{}, nil + }), + GrantEntitlement: grantEntitlementFunc(func(context.Context, entitlementsvc.GrantInput) (entitlementsvc.CommandResult, error) { + return entitlementsvc.CommandResult{}, shared.Conflict() + }), + ExtendEntitlement: extendEntitlementFunc(func(context.Context, entitlementsvc.ExtendInput) (entitlementsvc.CommandResult, error) { + return entitlementsvc.CommandResult{}, nil + }), + RevokeEntitlement: revokeEntitlementFunc(func(context.Context, entitlementsvc.RevokeInput) (entitlementsvc.CommandResult, error) { + return entitlementsvc.CommandResult{}, nil + }), + ApplySanction: applySanctionFunc(func(context.Context, 
policysvc.ApplySanctionInput) (policysvc.SanctionCommandResult, error) { + return policysvc.SanctionCommandResult{}, shared.Conflict() + }), + RemoveSanction: removeSanctionFunc(func(context.Context, policysvc.RemoveSanctionInput) (policysvc.SanctionCommandResult, error) { + return policysvc.SanctionCommandResult{}, shared.SubjectNotFound() + }), + SetLimit: setLimitFunc(func(context.Context, policysvc.SetLimitInput) (policysvc.LimitCommandResult, error) { + return policysvc.LimitCommandResult{}, shared.InvalidRequest("limit_code is unsupported") + }), + RemoveLimit: removeLimitFunc(func(context.Context, policysvc.RemoveLimitInput) (policysvc.LimitCommandResult, error) { + return policysvc.LimitCommandResult{}, shared.InvalidRequest("limit_code is unsupported") + }), + }) + + tests := []struct { + name string + method string + path string + body string + wantStatus int + wantBody string + }{ + { + name: "get my account not found", + method: http.MethodGet, + path: "/api/v1/internal/users/user-missing/account", + wantStatus: http.StatusNotFound, + wantBody: `{"error":{"code":"subject_not_found","message":"subject not found"}}`, + }, + { + name: "update my profile conflict", + method: http.MethodPost, + path: "/api/v1/internal/users/user-123/profile", + body: `{"race_name":"Taken Name"}`, + wantStatus: http.StatusConflict, + wantBody: `{"error":{"code":"conflict","message":"request conflicts with current state"}}`, + }, + { + name: "update my profile rejects email field", + method: http.MethodPost, + path: "/api/v1/internal/users/user-123/profile", + body: `{"race_name":"Nova Prime","email":"pilot@example.com"}`, + wantStatus: http.StatusBadRequest, + wantBody: `{"error":{"code":"invalid_request","message":"request body contains unknown field \"email\""}}`, + }, + { + name: "update my settings rejects declared country field", + method: http.MethodPost, + path: "/api/v1/internal/users/user-123/settings", + body: 
`{"preferred_language":"en","time_zone":"UTC","declared_country":"DE"}`, + wantStatus: http.StatusBadRequest, + wantBody: `{"error":{"code":"invalid_request","message":"request body contains unknown field \"declared_country\""}}`, + }, + { + name: "grant entitlement conflict", + method: http.MethodPost, + path: "/api/v1/internal/users/user-123/entitlements/grant", + body: `{"plan_code":"paid_monthly","source":"admin","reason_code":"manual_grant","actor":{"type":"admin","id":"admin-1"},"starts_at":"2026-04-09T10:00:00Z","ends_at":"2026-05-09T10:00:00Z"}`, + wantStatus: http.StatusConflict, + wantBody: `{"error":{"code":"conflict","message":"request conflicts with current state"}}`, + }, + { + name: "apply sanction conflict", + method: http.MethodPost, + path: "/api/v1/internal/users/user-123/sanctions/apply", + body: `{"sanction_code":"login_block","scope":"auth","reason_code":"manual_block","actor":{"type":"admin","id":"admin-1"},"applied_at":"2026-04-09T10:00:00Z"}`, + wantStatus: http.StatusConflict, + wantBody: `{"error":{"code":"conflict","message":"request conflicts with current state"}}`, + }, + { + name: "eligibility invalid user id", + method: http.MethodGet, + path: "/api/v1/internal/users/bad-id/eligibility", + wantStatus: http.StatusBadRequest, + wantBody: `{"error":{"code":"invalid_request","message":"user id must start with \"user-\""}}`, + }, + { + name: "sync declared country invalid", + method: http.MethodPost, + path: "/api/v1/internal/users/user-123/declared-country/sync", + body: `{"declared_country":"ZZ"}`, + wantStatus: http.StatusBadRequest, + wantBody: `{"error":{"code":"invalid_request","message":"declared_country must be a valid ISO 3166-1 alpha-2 country code"}}`, + }, + { + name: "sync declared country not found", + method: http.MethodPost, + path: "/api/v1/internal/users/user-missing/declared-country/sync", + body: `{"declared_country":"DE"}`, + wantStatus: http.StatusNotFound, + wantBody: 
`{"error":{"code":"subject_not_found","message":"subject not found"}}`, + }, + { + name: "set limit retired code rejected", + method: http.MethodPost, + path: "/api/v1/internal/users/user-123/limits/set", + body: `{"limit_code":"max_pending_private_join_requests","value":1,"reason_code":"manual_override","actor":{"type":"admin","id":"admin-1"},"applied_at":"2026-04-09T10:00:00Z"}`, + wantStatus: http.StatusBadRequest, + wantBody: `{"error":{"code":"invalid_request","message":"limit_code is unsupported"}}`, + }, + { + name: "remove limit retired code rejected", + method: http.MethodPost, + path: "/api/v1/internal/users/user-123/limits/remove", + body: `{"limit_code":"max_pending_private_invites_sent","reason_code":"manual_remove","actor":{"type":"admin","id":"admin-1"}}`, + wantStatus: http.StatusBadRequest, + wantBody: `{"error":{"code":"invalid_request","message":"limit_code is unsupported"}}`, + }, + { + name: "apply sanction rejects unknown field", + method: http.MethodPost, + path: "/api/v1/internal/users/user-123/sanctions/apply", + body: `{"sanction_code":"login_block","scope":"auth","reason_code":"manual_block","actor":{"type":"admin","id":"admin-1"},"applied_at":"2026-04-09T10:00:00Z","extra":true}`, + wantStatus: http.StatusBadRequest, + wantBody: `{"error":{"code":"invalid_request","message":"request body contains unknown field \"extra\""}}`, + }, + { + name: "remove sanction not found", + method: http.MethodPost, + path: "/api/v1/internal/users/user-123/sanctions/remove", + body: `{"sanction_code":"login_block","reason_code":"manual_remove","actor":{"type":"admin","id":"admin-1"}}`, + wantStatus: http.StatusNotFound, + wantBody: `{"error":{"code":"subject_not_found","message":"subject not found"}}`, + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + recorder := httptest.NewRecorder() + request := httptest.NewRequest(tt.method, tt.path, bytes.NewBufferString(tt.body)) + if tt.body != "" { + 
request.Header.Set("Content-Type", "application/json") + } + + handler.ServeHTTP(recorder, request) + + require.Equal(t, tt.wantStatus, recorder.Code) + assertJSONEq(t, recorder.Body.String(), tt.wantBody) + }) + } +} + +func TestEnsureByEmailHandlerRejectsSemanticRegistrationContext(t *testing.T) { + t.Parallel() + + ensurer, err := authdirectory.NewEnsurer(handlerTestStore{}, handlerTestClock{now: time.Unix(1_775_240_000, 0).UTC()}, handlerTestIDGenerator{ + userID: common.UserID("user-created"), + raceName: common.RaceName("player-test123"), + entitlementRecordID: entitlement.EntitlementRecordID("entitlement-created"), + }, handlerTestRaceNamePolicy{}) + require.NoError(t, err) + + handler := mustNewHandler(t, Dependencies{ + ResolveByEmail: resolveByEmailFunc(func(context.Context, authdirectory.ResolveByEmailInput) (authdirectory.ResolveByEmailResult, error) { + return authdirectory.ResolveByEmailResult{}, nil + }), + EnsureByEmail: ensurer, + ExistsByUserID: existsByUserIDFunc(func(context.Context, authdirectory.ExistsByUserIDInput) (authdirectory.ExistsByUserIDResult, error) { + return authdirectory.ExistsByUserIDResult{}, nil + }), + BlockByUserID: blockByUserIDFunc(func(context.Context, authdirectory.BlockByUserIDInput) (authdirectory.BlockResult, error) { + return authdirectory.BlockResult{}, nil + }), + BlockByEmail: blockByEmailFunc(func(context.Context, authdirectory.BlockByEmailInput) (authdirectory.BlockResult, error) { + return authdirectory.BlockResult{}, nil + }), + GetMyAccount: getMyAccountFunc(func(context.Context, selfservice.GetMyAccountInput) (selfservice.GetMyAccountResult, error) { + return selfservice.GetMyAccountResult{}, nil + }), + UpdateMyProfile: updateMyProfileFunc(func(context.Context, selfservice.UpdateMyProfileInput) (selfservice.UpdateMyProfileResult, error) { + return selfservice.UpdateMyProfileResult{}, nil + }), + UpdateMySettings: updateMySettingsFunc(func(context.Context, selfservice.UpdateMySettingsInput) 
(selfservice.UpdateMySettingsResult, error) { + return selfservice.UpdateMySettingsResult{}, nil + }), + GetUserEligibility: getUserEligibilityFunc(func(context.Context, lobbyeligibility.GetUserEligibilityInput) (lobbyeligibility.GetUserEligibilityResult, error) { + return lobbyeligibility.GetUserEligibilityResult{}, nil + }), + ApplySanction: applySanctionFunc(func(context.Context, policysvc.ApplySanctionInput) (policysvc.SanctionCommandResult, error) { + return policysvc.SanctionCommandResult{}, nil + }), + RemoveSanction: removeSanctionFunc(func(context.Context, policysvc.RemoveSanctionInput) (policysvc.SanctionCommandResult, error) { + return policysvc.SanctionCommandResult{}, nil + }), + SetLimit: setLimitFunc(func(context.Context, policysvc.SetLimitInput) (policysvc.LimitCommandResult, error) { + return policysvc.LimitCommandResult{}, nil + }), + RemoveLimit: removeLimitFunc(func(context.Context, policysvc.RemoveLimitInput) (policysvc.LimitCommandResult, error) { + return policysvc.LimitCommandResult{}, nil + }), + }) + + tests := []struct { + name string + body string + wantBody string + }{ + { + name: "invalid preferred language", + body: `{"email":"pilot@example.com","registration_context":{"preferred_language":"bad@@tag","time_zone":"Europe/Kaliningrad"}}`, + wantBody: `{"error":{"code":"invalid_request","message":"registration_context.preferred_language must be a valid BCP 47 language tag"}}`, + }, + { + name: "invalid time zone", + body: `{"email":"pilot@example.com","registration_context":{"preferred_language":"en","time_zone":"Mars/Olympus"}}`, + wantBody: `{"error":{"code":"invalid_request","message":"registration_context.time_zone must be a valid IANA time zone name"}}`, + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + recorder := httptest.NewRecorder() + request := httptest.NewRequest( + http.MethodPost, + "/api/v1/internal/users/ensure-by-email", + bytes.NewBufferString(tt.body), + ) + 
request.Header.Set("Content-Type", "application/json") + + handler.ServeHTTP(recorder, request) + + require.Equal(t, http.StatusBadRequest, recorder.Code) + assertJSONEq(t, recorder.Body.String(), tt.wantBody) + }) + } +} + +func mustNewHandler(t *testing.T, deps Dependencies) http.Handler { + t.Helper() + + if deps.ResolveByEmail == nil { + deps.ResolveByEmail = resolveByEmailFunc(func(context.Context, authdirectory.ResolveByEmailInput) (authdirectory.ResolveByEmailResult, error) { + return authdirectory.ResolveByEmailResult{}, nil + }) + } + if deps.EnsureByEmail == nil { + deps.EnsureByEmail = ensureByEmailFunc(func(context.Context, authdirectory.EnsureByEmailInput) (authdirectory.EnsureByEmailResult, error) { + return authdirectory.EnsureByEmailResult{}, nil + }) + } + if deps.ExistsByUserID == nil { + deps.ExistsByUserID = existsByUserIDFunc(func(context.Context, authdirectory.ExistsByUserIDInput) (authdirectory.ExistsByUserIDResult, error) { + return authdirectory.ExistsByUserIDResult{}, nil + }) + } + if deps.BlockByUserID == nil { + deps.BlockByUserID = blockByUserIDFunc(func(context.Context, authdirectory.BlockByUserIDInput) (authdirectory.BlockResult, error) { + return authdirectory.BlockResult{}, nil + }) + } + if deps.BlockByEmail == nil { + deps.BlockByEmail = blockByEmailFunc(func(context.Context, authdirectory.BlockByEmailInput) (authdirectory.BlockResult, error) { + return authdirectory.BlockResult{}, nil + }) + } + if deps.GetMyAccount == nil { + deps.GetMyAccount = getMyAccountFunc(func(context.Context, selfservice.GetMyAccountInput) (selfservice.GetMyAccountResult, error) { + return selfservice.GetMyAccountResult{}, nil + }) + } + if deps.UpdateMyProfile == nil { + deps.UpdateMyProfile = updateMyProfileFunc(func(context.Context, selfservice.UpdateMyProfileInput) (selfservice.UpdateMyProfileResult, error) { + return selfservice.UpdateMyProfileResult{}, nil + }) + } + if deps.UpdateMySettings == nil { + deps.UpdateMySettings = 
updateMySettingsFunc(func(context.Context, selfservice.UpdateMySettingsInput) (selfservice.UpdateMySettingsResult, error) { + return selfservice.UpdateMySettingsResult{}, nil + }) + } + if deps.GrantEntitlement == nil { + deps.GrantEntitlement = grantEntitlementFunc(func(context.Context, entitlementsvc.GrantInput) (entitlementsvc.CommandResult, error) { + return entitlementsvc.CommandResult{}, nil + }) + } + if deps.ExtendEntitlement == nil { + deps.ExtendEntitlement = extendEntitlementFunc(func(context.Context, entitlementsvc.ExtendInput) (entitlementsvc.CommandResult, error) { + return entitlementsvc.CommandResult{}, nil + }) + } + if deps.RevokeEntitlement == nil { + deps.RevokeEntitlement = revokeEntitlementFunc(func(context.Context, entitlementsvc.RevokeInput) (entitlementsvc.CommandResult, error) { + return entitlementsvc.CommandResult{}, nil + }) + } + if deps.ApplySanction == nil { + deps.ApplySanction = applySanctionFunc(func(context.Context, policysvc.ApplySanctionInput) (policysvc.SanctionCommandResult, error) { + return policysvc.SanctionCommandResult{}, nil + }) + } + if deps.GetUserEligibility == nil { + deps.GetUserEligibility = getUserEligibilityFunc(func(context.Context, lobbyeligibility.GetUserEligibilityInput) (lobbyeligibility.GetUserEligibilityResult, error) { + return lobbyeligibility.GetUserEligibilityResult{}, nil + }) + } + if deps.GetUserByID == nil { + deps.GetUserByID = getUserByIDFunc(func(context.Context, adminusers.GetUserByIDInput) (adminusers.LookupResult, error) { + return adminusers.LookupResult{}, nil + }) + } + if deps.GetUserByEmail == nil { + deps.GetUserByEmail = getUserByEmailFunc(func(context.Context, adminusers.GetUserByEmailInput) (adminusers.LookupResult, error) { + return adminusers.LookupResult{}, nil + }) + } + if deps.GetUserByRaceName == nil { + deps.GetUserByRaceName = getUserByRaceNameFunc(func(context.Context, adminusers.GetUserByRaceNameInput) (adminusers.LookupResult, error) { + return 
adminusers.LookupResult{}, nil + }) + } + if deps.ListUsers == nil { + deps.ListUsers = listUsersFunc(func(context.Context, adminusers.ListUsersInput) (adminusers.ListUsersResult, error) { + return adminusers.ListUsersResult{}, nil + }) + } + if deps.SyncDeclaredCountry == nil { + deps.SyncDeclaredCountry = syncDeclaredCountryFunc(func(context.Context, geosync.SyncDeclaredCountryInput) (geosync.SyncDeclaredCountryResult, error) { + return geosync.SyncDeclaredCountryResult{}, nil + }) + } + if deps.RemoveSanction == nil { + deps.RemoveSanction = removeSanctionFunc(func(context.Context, policysvc.RemoveSanctionInput) (policysvc.SanctionCommandResult, error) { + return policysvc.SanctionCommandResult{}, nil + }) + } + if deps.SetLimit == nil { + deps.SetLimit = setLimitFunc(func(context.Context, policysvc.SetLimitInput) (policysvc.LimitCommandResult, error) { + return policysvc.LimitCommandResult{}, nil + }) + } + if deps.RemoveLimit == nil { + deps.RemoveLimit = removeLimitFunc(func(context.Context, policysvc.RemoveLimitInput) (policysvc.LimitCommandResult, error) { + return policysvc.LimitCommandResult{}, nil + }) + } + + handler, err := newHandlerWithConfig(Config{ + Addr: "127.0.0.1:0", + ReadHeaderTimeout: time.Second, + ReadTimeout: 2 * time.Second, + IdleTimeout: time.Minute, + RequestTimeout: time.Second, + }, deps) + require.NoError(t, err) + + return handler +} + +func assertJSONEq(t *testing.T, got string, want string) { + t.Helper() + + require.JSONEq(t, want, got) +} + +type resolveByEmailFunc func(context.Context, authdirectory.ResolveByEmailInput) (authdirectory.ResolveByEmailResult, error) + +func (fn resolveByEmailFunc) Execute(ctx context.Context, input authdirectory.ResolveByEmailInput) (authdirectory.ResolveByEmailResult, error) { + return fn(ctx, input) +} + +type ensureByEmailFunc func(context.Context, authdirectory.EnsureByEmailInput) (authdirectory.EnsureByEmailResult, error) + +func (fn ensureByEmailFunc) Execute(ctx context.Context, input 
authdirectory.EnsureByEmailInput) (authdirectory.EnsureByEmailResult, error) { + return fn(ctx, input) +} + +type existsByUserIDFunc func(context.Context, authdirectory.ExistsByUserIDInput) (authdirectory.ExistsByUserIDResult, error) + +func (fn existsByUserIDFunc) Execute(ctx context.Context, input authdirectory.ExistsByUserIDInput) (authdirectory.ExistsByUserIDResult, error) { + return fn(ctx, input) +} + +type blockByUserIDFunc func(context.Context, authdirectory.BlockByUserIDInput) (authdirectory.BlockResult, error) + +func (fn blockByUserIDFunc) Execute(ctx context.Context, input authdirectory.BlockByUserIDInput) (authdirectory.BlockResult, error) { + return fn(ctx, input) +} + +type blockByEmailFunc func(context.Context, authdirectory.BlockByEmailInput) (authdirectory.BlockResult, error) + +func (fn blockByEmailFunc) Execute(ctx context.Context, input authdirectory.BlockByEmailInput) (authdirectory.BlockResult, error) { + return fn(ctx, input) +} + +type getMyAccountFunc func(context.Context, selfservice.GetMyAccountInput) (selfservice.GetMyAccountResult, error) + +func (fn getMyAccountFunc) Execute(ctx context.Context, input selfservice.GetMyAccountInput) (selfservice.GetMyAccountResult, error) { + return fn(ctx, input) +} + +type updateMyProfileFunc func(context.Context, selfservice.UpdateMyProfileInput) (selfservice.UpdateMyProfileResult, error) + +func (fn updateMyProfileFunc) Execute(ctx context.Context, input selfservice.UpdateMyProfileInput) (selfservice.UpdateMyProfileResult, error) { + return fn(ctx, input) +} + +type updateMySettingsFunc func(context.Context, selfservice.UpdateMySettingsInput) (selfservice.UpdateMySettingsResult, error) + +func (fn updateMySettingsFunc) Execute(ctx context.Context, input selfservice.UpdateMySettingsInput) (selfservice.UpdateMySettingsResult, error) { + return fn(ctx, input) +} + +type getUserByIDFunc func(context.Context, adminusers.GetUserByIDInput) (adminusers.LookupResult, error) + +func (fn getUserByIDFunc) 
Execute(ctx context.Context, input adminusers.GetUserByIDInput) (adminusers.LookupResult, error) { + return fn(ctx, input) +} + +type getUserByEmailFunc func(context.Context, adminusers.GetUserByEmailInput) (adminusers.LookupResult, error) + +func (fn getUserByEmailFunc) Execute(ctx context.Context, input adminusers.GetUserByEmailInput) (adminusers.LookupResult, error) { + return fn(ctx, input) +} + +type getUserByRaceNameFunc func(context.Context, adminusers.GetUserByRaceNameInput) (adminusers.LookupResult, error) + +func (fn getUserByRaceNameFunc) Execute(ctx context.Context, input adminusers.GetUserByRaceNameInput) (adminusers.LookupResult, error) { + return fn(ctx, input) +} + +type listUsersFunc func(context.Context, adminusers.ListUsersInput) (adminusers.ListUsersResult, error) + +func (fn listUsersFunc) Execute(ctx context.Context, input adminusers.ListUsersInput) (adminusers.ListUsersResult, error) { + return fn(ctx, input) +} + +type getUserEligibilityFunc func(context.Context, lobbyeligibility.GetUserEligibilityInput) (lobbyeligibility.GetUserEligibilityResult, error) + +func (fn getUserEligibilityFunc) Execute(ctx context.Context, input lobbyeligibility.GetUserEligibilityInput) (lobbyeligibility.GetUserEligibilityResult, error) { + return fn(ctx, input) +} + +type syncDeclaredCountryFunc func(context.Context, geosync.SyncDeclaredCountryInput) (geosync.SyncDeclaredCountryResult, error) + +func (fn syncDeclaredCountryFunc) Execute(ctx context.Context, input geosync.SyncDeclaredCountryInput) (geosync.SyncDeclaredCountryResult, error) { + return fn(ctx, input) +} + +type grantEntitlementFunc func(context.Context, entitlementsvc.GrantInput) (entitlementsvc.CommandResult, error) + +func (fn grantEntitlementFunc) Execute(ctx context.Context, input entitlementsvc.GrantInput) (entitlementsvc.CommandResult, error) { + return fn(ctx, input) +} + +type extendEntitlementFunc func(context.Context, entitlementsvc.ExtendInput) (entitlementsvc.CommandResult, error) + 
+func (fn extendEntitlementFunc) Execute(ctx context.Context, input entitlementsvc.ExtendInput) (entitlementsvc.CommandResult, error) { + return fn(ctx, input) +} + +type revokeEntitlementFunc func(context.Context, entitlementsvc.RevokeInput) (entitlementsvc.CommandResult, error) + +func (fn revokeEntitlementFunc) Execute(ctx context.Context, input entitlementsvc.RevokeInput) (entitlementsvc.CommandResult, error) { + return fn(ctx, input) +} + +type applySanctionFunc func(context.Context, policysvc.ApplySanctionInput) (policysvc.SanctionCommandResult, error) + +func (fn applySanctionFunc) Execute(ctx context.Context, input policysvc.ApplySanctionInput) (policysvc.SanctionCommandResult, error) { + return fn(ctx, input) +} + +type removeSanctionFunc func(context.Context, policysvc.RemoveSanctionInput) (policysvc.SanctionCommandResult, error) + +func (fn removeSanctionFunc) Execute(ctx context.Context, input policysvc.RemoveSanctionInput) (policysvc.SanctionCommandResult, error) { + return fn(ctx, input) +} + +type setLimitFunc func(context.Context, policysvc.SetLimitInput) (policysvc.LimitCommandResult, error) + +func (fn setLimitFunc) Execute(ctx context.Context, input policysvc.SetLimitInput) (policysvc.LimitCommandResult, error) { + return fn(ctx, input) +} + +type removeLimitFunc func(context.Context, policysvc.RemoveLimitInput) (policysvc.LimitCommandResult, error) + +func (fn removeLimitFunc) Execute(ctx context.Context, input policysvc.RemoveLimitInput) (policysvc.LimitCommandResult, error) { + return fn(ctx, input) +} + +type handlerTestStore struct{} + +func (handlerTestStore) ResolveByEmail(context.Context, common.Email) (ports.ResolveByEmailResult, error) { + return ports.ResolveByEmailResult{}, nil +} + +func (handlerTestStore) ExistsByUserID(context.Context, common.UserID) (bool, error) { + return false, nil +} + +func (handlerTestStore) EnsureByEmail(context.Context, ports.EnsureByEmailInput) (ports.EnsureByEmailResult, error) { + return 
ports.EnsureByEmailResult{}, nil +} + +func (handlerTestStore) BlockByUserID(context.Context, ports.BlockByUserIDInput) (ports.BlockResult, error) { + return ports.BlockResult{}, nil +} + +func (handlerTestStore) BlockByEmail(context.Context, ports.BlockByEmailInput) (ports.BlockResult, error) { + return ports.BlockResult{}, nil +} + +type handlerTestClock struct { + now time.Time +} + +func (clock handlerTestClock) Now() time.Time { + return clock.now +} + +type handlerTestIDGenerator struct { + userID common.UserID + raceName common.RaceName + entitlementRecordID entitlement.EntitlementRecordID + sanctionRecordID policy.SanctionRecordID + limitRecordID policy.LimitRecordID +} + +func (generator handlerTestIDGenerator) NewUserID() (common.UserID, error) { + return generator.userID, nil +} + +func (generator handlerTestIDGenerator) NewInitialRaceName() (common.RaceName, error) { + return generator.raceName, nil +} + +func (generator handlerTestIDGenerator) NewEntitlementRecordID() (entitlement.EntitlementRecordID, error) { + return generator.entitlementRecordID, nil +} + +func (generator handlerTestIDGenerator) NewSanctionRecordID() (policy.SanctionRecordID, error) { + return generator.sanctionRecordID, nil +} + +func (generator handlerTestIDGenerator) NewLimitRecordID() (policy.LimitRecordID, error) { + return generator.limitRecordID, nil +} + +type handlerTestRaceNamePolicy struct{} + +func (handlerTestRaceNamePolicy) CanonicalKey(raceName common.RaceName) (account.RaceNameCanonicalKey, error) { + return account.RaceNameCanonicalKey("key:" + raceName.String()), nil +} + +func sampleAccountView() selfservice.AccountView { + timestamp := time.Date(2026, time.April, 9, 10, 0, 0, 0, time.UTC) + return selfservice.AccountView{ + UserID: "user-123", + Email: "pilot@example.com", + RaceName: "Pilot Nova", + PreferredLanguage: "en", + TimeZone: "Europe/Kaliningrad", + DeclaredCountry: "DE", + Entitlement: selfservice.EntitlementSnapshotView{ + PlanCode: "free", + IsPaid: 
false, + Source: "auth_registration", + Actor: selfservice.ActorRefView{Type: "service", ID: "user-service"}, + ReasonCode: "initial_free_entitlement", + StartsAt: timestamp, + UpdatedAt: timestamp, + }, + ActiveSanctions: []selfservice.ActiveSanctionView{}, + ActiveLimits: []selfservice.ActiveLimitView{}, + CreatedAt: timestamp, + UpdatedAt: timestamp, + } +} + +func sampleEligibilityView(exists bool) lobbyeligibility.GetUserEligibilityResult { + timestamp := time.Date(2026, time.April, 9, 10, 0, 0, 0, time.UTC) + if !exists { + return lobbyeligibility.GetUserEligibilityResult{ + Exists: false, + UserID: "user-missing", + ActiveSanctions: []lobbyeligibility.ActiveSanctionView{}, + EffectiveLimits: []lobbyeligibility.EffectiveLimitView{}, + Markers: lobbyeligibility.EligibilityMarkersView{}, + } + } + + return lobbyeligibility.GetUserEligibilityResult{ + Exists: true, + UserID: "user-123", + Entitlement: &lobbyeligibility.EntitlementSnapshotView{ + PlanCode: "paid_monthly", + IsPaid: true, + Source: "billing", + Actor: lobbyeligibility.ActorRefView{Type: "billing", ID: "invoice-1"}, + ReasonCode: "renewal", + StartsAt: timestamp, + EndsAt: timePointer(timestamp.Add(30 * 24 * time.Hour)), + UpdatedAt: timestamp, + }, + ActiveSanctions: []lobbyeligibility.ActiveSanctionView{ + { + SanctionCode: "private_game_create_block", + Scope: "lobby", + ReasonCode: "manual_block", + Actor: lobbyeligibility.ActorRefView{Type: "admin", ID: "admin-1"}, + AppliedAt: timestamp, + ExpiresAt: timePointer(timestamp.Add(30 * 24 * time.Hour)), + }, + }, + EffectiveLimits: []lobbyeligibility.EffectiveLimitView{ + {LimitCode: "max_owned_private_games", Value: 3}, + {LimitCode: "max_pending_public_applications", Value: 10}, + {LimitCode: "max_active_game_memberships", Value: 10}, + }, + Markers: lobbyeligibility.EligibilityMarkersView{ + CanLogin: true, + CanCreatePrivateGame: false, + CanManagePrivateGame: true, + CanJoinGame: true, + CanUpdateProfile: true, + }, + } +} + +func 
timePointer(value time.Time) *time.Time { + utcValue := value.UTC() + return &utcValue +} + +var ( + _ ports.AuthDirectoryStore = handlerTestStore{} + _ ports.Clock = handlerTestClock{} + _ ports.IDGenerator = handlerTestIDGenerator{} + _ ports.RaceNamePolicy = handlerTestRaceNamePolicy{} +) diff --git a/user/internal/api/internalhttp/json.go b/user/internal/api/internalhttp/json.go new file mode 100644 index 0000000..3f7ba92 --- /dev/null +++ b/user/internal/api/internalhttp/json.go @@ -0,0 +1,88 @@ +package internalhttp + +import ( + "encoding/json" + "errors" + "fmt" + "io" + "net/http" + "strings" + + "galaxy/user/internal/service/shared" + + "github.com/gin-gonic/gin" +) + +const internalErrorCodeContextKey = "internal_error_code" + +type malformedJSONRequestError struct { + message string +} + +func (err *malformedJSONRequestError) Error() string { + if err == nil { + return "" + } + + return err.message +} + +func decodeJSONRequest(request *http.Request, target any) error { + if request == nil || request.Body == nil { + return &malformedJSONRequestError{message: "request body must not be empty"} + } + + decoder := json.NewDecoder(request.Body) + decoder.DisallowUnknownFields() + + if err := decoder.Decode(target); err != nil { + return describeJSONDecodeError(err) + } + if err := decoder.Decode(&struct{}{}); err != nil { + if errors.Is(err, io.EOF) { + return nil + } + + return &malformedJSONRequestError{message: "request body must contain a single JSON object"} + } + + return &malformedJSONRequestError{message: "request body must contain a single JSON object"} +} + +func describeJSONDecodeError(err error) error { + var syntaxErr *json.SyntaxError + var typeErr *json.UnmarshalTypeError + + switch { + case errors.Is(err, io.EOF): + return &malformedJSONRequestError{message: "request body must not be empty"} + case errors.As(err, &syntaxErr): + return &malformedJSONRequestError{message: "request body contains malformed JSON"} + case errors.Is(err, 
io.ErrUnexpectedEOF): + return &malformedJSONRequestError{message: "request body contains malformed JSON"} + case errors.As(err, &typeErr): + if strings.TrimSpace(typeErr.Field) != "" { + return &malformedJSONRequestError{ + message: fmt.Sprintf("request body contains an invalid value for %q", typeErr.Field), + } + } + + return &malformedJSONRequestError{message: "request body contains an invalid JSON value"} + case strings.HasPrefix(err.Error(), "json: unknown field "): + return &malformedJSONRequestError{ + message: fmt.Sprintf("request body contains unknown field %s", strings.TrimPrefix(err.Error(), "json: unknown field ")), + } + default: + return &malformedJSONRequestError{message: "request body contains invalid JSON"} + } +} + +func abortWithProjection(c *gin.Context, projection shared.InternalErrorProjection) { + c.Set(internalErrorCodeContextKey, projection.Code) + c.AbortWithStatusJSON(projection.StatusCode, errorResponse{ + Error: errorBody{ + Code: projection.Code, + Message: projection.Message, + }, + }) +} diff --git a/user/internal/api/internalhttp/observability_test.go b/user/internal/api/internalhttp/observability_test.go new file mode 100644 index 0000000..c8c8eca --- /dev/null +++ b/user/internal/api/internalhttp/observability_test.go @@ -0,0 +1,112 @@ +package internalhttp + +import ( + "bytes" + "context" + "log/slog" + "net/http" + "net/http/httptest" + "testing" + + "galaxy/user/internal/service/authdirectory" + usertelemetry "galaxy/user/internal/telemetry" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + "go.opentelemetry.io/otel/attribute" + sdkmetric "go.opentelemetry.io/otel/sdk/metric" + "go.opentelemetry.io/otel/sdk/metric/metricdata" + sdktrace "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/sdk/trace/tracetest" +) + +func TestInternalHandlerEmitsTraceFieldsAndMetrics(t *testing.T) { + t.Parallel() + + logger, buffer := newObservedLogger() + telemetryRuntime, reader, recorder := 
newObservedInternalTelemetryRuntime(t) + handler := mustNewHandler(t, Dependencies{ + Logger: logger, + Telemetry: telemetryRuntime, + ExistsByUserID: existsByUserIDFunc(func(context.Context, authdirectory.ExistsByUserIDInput) (authdirectory.ExistsByUserIDResult, error) { + return authdirectory.ExistsByUserIDResult{Exists: true}, nil + }), + }) + + recorderHTTP := httptest.NewRecorder() + request := httptest.NewRequest(http.MethodGet, "/api/v1/internal/users/user-123/exists", nil) + request.Header.Set("traceparent", "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01") + + handler.ServeHTTP(recorderHTTP, request) + + require.Equal(t, http.StatusOK, recorderHTTP.Code) + require.NotEmpty(t, recorder.Ended()) + assert.Contains(t, buffer.String(), "otel_trace_id") + assert.Contains(t, buffer.String(), "otel_span_id") + + assertMetricCount(t, reader, "user.internal_http.requests", map[string]string{ + "route": "/api/v1/internal/users/:user_id/exists", + "method": http.MethodGet, + "edge_outcome": "success", + }, 1) +} + +func newObservedInternalTelemetryRuntime(t *testing.T) (*usertelemetry.Runtime, *sdkmetric.ManualReader, *tracetest.SpanRecorder) { + t.Helper() + + reader := sdkmetric.NewManualReader() + meterProvider := sdkmetric.NewMeterProvider(sdkmetric.WithReader(reader)) + recorder := tracetest.NewSpanRecorder() + tracerProvider := sdktrace.NewTracerProvider(sdktrace.WithSpanProcessor(recorder)) + + runtime, err := usertelemetry.NewWithProviders(meterProvider, tracerProvider) + require.NoError(t, err) + + return runtime, reader, recorder +} + +func newObservedLogger() (*slog.Logger, *bytes.Buffer) { + buffer := &bytes.Buffer{} + return slog.New(slog.NewJSONHandler(buffer, &slog.HandlerOptions{Level: slog.LevelDebug})), buffer +} + +func assertMetricCount(t *testing.T, reader *sdkmetric.ManualReader, metricName string, wantAttrs map[string]string, wantValue int64) { + t.Helper() + + var resourceMetrics metricdata.ResourceMetrics + require.NoError(t, 
reader.Collect(context.Background(), &resourceMetrics)) + + for _, scopeMetrics := range resourceMetrics.ScopeMetrics { + for _, metric := range scopeMetrics.Metrics { + if metric.Name != metricName { + continue + } + + sum, ok := metric.Data.(metricdata.Sum[int64]) + require.True(t, ok) + + for _, point := range sum.DataPoints { + if hasMetricAttributes(point.Attributes.ToSlice(), wantAttrs) { + assert.Equal(t, wantValue, point.Value) + return + } + } + } + } + + require.Failf(t, "test failed", "metric %q with attrs %v not found", metricName, wantAttrs) +} + +func hasMetricAttributes(values []attribute.KeyValue, want map[string]string) bool { + if len(values) != len(want) { + return false + } + + for _, value := range values { + if want[string(value.Key)] != value.Value.AsString() { + return false + } + } + + return true +} diff --git a/user/internal/api/internalhttp/server.go b/user/internal/api/internalhttp/server.go new file mode 100644 index 0000000..538935c --- /dev/null +++ b/user/internal/api/internalhttp/server.go @@ -0,0 +1,411 @@ +// Package internalhttp exposes the trusted internal HTTP API used by auth, +// gateway self-service, and internal administrative workflows. +package internalhttp + +import ( + "context" + "errors" + "fmt" + "log/slog" + "net" + "net/http" + "sync" + "time" + + "galaxy/user/internal/service/adminusers" + "galaxy/user/internal/service/authdirectory" + "galaxy/user/internal/service/entitlementsvc" + "galaxy/user/internal/service/geosync" + "galaxy/user/internal/service/lobbyeligibility" + "galaxy/user/internal/service/policysvc" + "galaxy/user/internal/service/selfservice" + "galaxy/user/internal/telemetry" +) + +const jsonContentType = "application/json; charset=utf-8" + +var configureGinModeOnce sync.Once + +// ResolveByEmailUseCase describes the auth-facing resolve-by-email service +// consumed by the HTTP transport layer. 
+type ResolveByEmailUseCase interface { + // Execute resolves one e-mail subject without creating any account. + Execute(ctx context.Context, input authdirectory.ResolveByEmailInput) (authdirectory.ResolveByEmailResult, error) +} + +// EnsureByEmailUseCase describes the auth-facing ensure-by-email service +// consumed by the HTTP transport layer. +type EnsureByEmailUseCase interface { + // Execute returns an existing user, creates a new one, or reports a blocked + // outcome for one e-mail subject. + Execute(ctx context.Context, input authdirectory.EnsureByEmailInput) (authdirectory.EnsureByEmailResult, error) +} + +// ExistsByUserIDUseCase describes the auth-facing exists-by-user-id service +// consumed by the HTTP transport layer. +type ExistsByUserIDUseCase interface { + // Execute reports whether one stable user identifier exists. + Execute(ctx context.Context, input authdirectory.ExistsByUserIDInput) (authdirectory.ExistsByUserIDResult, error) +} + +// BlockByUserIDUseCase describes the auth-facing block-by-user-id service +// consumed by the HTTP transport layer. +type BlockByUserIDUseCase interface { + // Execute blocks one account addressed by stable user identifier. + Execute(ctx context.Context, input authdirectory.BlockByUserIDInput) (authdirectory.BlockResult, error) +} + +// BlockByEmailUseCase describes the auth-facing block-by-email service +// consumed by the HTTP transport layer. +type BlockByEmailUseCase interface { + // Execute blocks one exact normalized e-mail subject. + Execute(ctx context.Context, input authdirectory.BlockByEmailInput) (authdirectory.BlockResult, error) +} + +// GetMyAccountUseCase describes the self-service account-read use case +// consumed by the HTTP transport layer. +type GetMyAccountUseCase interface { + // Execute returns the authenticated account aggregate for one user. 
+ Execute(ctx context.Context, input selfservice.GetMyAccountInput) (selfservice.GetMyAccountResult, error) +} + +// UpdateMyProfileUseCase describes the self-service profile-mutation use case +// consumed by the HTTP transport layer. +type UpdateMyProfileUseCase interface { + // Execute updates the allowed self-service profile fields for one user. + Execute(ctx context.Context, input selfservice.UpdateMyProfileInput) (selfservice.UpdateMyProfileResult, error) +} + +// UpdateMySettingsUseCase describes the self-service settings-mutation use +// case consumed by the HTTP transport layer. +type UpdateMySettingsUseCase interface { + // Execute updates the allowed self-service settings fields for one user. + Execute(ctx context.Context, input selfservice.UpdateMySettingsInput) (selfservice.UpdateMySettingsResult, error) +} + +// GetUserByIDUseCase describes the trusted admin exact-read by stable user id +// consumed by the HTTP transport layer. +type GetUserByIDUseCase interface { + // Execute returns the full current account aggregate for one user id. + Execute(ctx context.Context, input adminusers.GetUserByIDInput) (adminusers.LookupResult, error) +} + +// GetUserByEmailUseCase describes the trusted admin exact-read by normalized +// e-mail consumed by the HTTP transport layer. +type GetUserByEmailUseCase interface { + // Execute returns the full current account aggregate for one normalized + // e-mail address. + Execute(ctx context.Context, input adminusers.GetUserByEmailInput) (adminusers.LookupResult, error) +} + +// GetUserByRaceNameUseCase describes the trusted admin exact-read by exact +// stored race name consumed by the HTTP transport layer. +type GetUserByRaceNameUseCase interface { + // Execute returns the full current account aggregate for one exact race + // name. 
+ Execute(ctx context.Context, input adminusers.GetUserByRaceNameInput) (adminusers.LookupResult, error) +} + +// ListUsersUseCase describes the trusted admin paginated listing use case +// consumed by the HTTP transport layer. +type ListUsersUseCase interface { + // Execute returns one deterministic filtered page of full account + // aggregates. + Execute(ctx context.Context, input adminusers.ListUsersInput) (adminusers.ListUsersResult, error) +} + +// GetUserEligibilityUseCase describes the trusted lobby-facing eligibility +// snapshot use case consumed by the HTTP transport layer. +type GetUserEligibilityUseCase interface { + // Execute returns one read-optimized lobby eligibility snapshot for one + // user. + Execute(ctx context.Context, input lobbyeligibility.GetUserEligibilityInput) (lobbyeligibility.GetUserEligibilityResult, error) +} + +// SyncDeclaredCountryUseCase describes the trusted geo-facing declared-country +// sync use case consumed by the HTTP transport layer. +type SyncDeclaredCountryUseCase interface { + // Execute synchronizes the current effective declared country for one user. + Execute(ctx context.Context, input geosync.SyncDeclaredCountryInput) (geosync.SyncDeclaredCountryResult, error) +} + +// GrantEntitlementUseCase describes the trusted entitlement-grant use case +// consumed by the HTTP transport layer. +type GrantEntitlementUseCase interface { + // Execute grants a new current paid entitlement for one user. + Execute(ctx context.Context, input entitlementsvc.GrantInput) (entitlementsvc.CommandResult, error) +} + +// ExtendEntitlementUseCase describes the trusted entitlement-extend use case +// consumed by the HTTP transport layer. +type ExtendEntitlementUseCase interface { + // Execute extends the current finite paid entitlement for one user. 
+ Execute(ctx context.Context, input entitlementsvc.ExtendInput) (entitlementsvc.CommandResult, error) +} + +// RevokeEntitlementUseCase describes the trusted entitlement-revoke use case +// consumed by the HTTP transport layer. +type RevokeEntitlementUseCase interface { + // Execute revokes the current paid entitlement for one user. + Execute(ctx context.Context, input entitlementsvc.RevokeInput) (entitlementsvc.CommandResult, error) +} + +// ApplySanctionUseCase describes the trusted sanction-apply use case consumed +// by the HTTP transport layer. +type ApplySanctionUseCase interface { + // Execute applies one new active sanction record. + Execute(ctx context.Context, input policysvc.ApplySanctionInput) (policysvc.SanctionCommandResult, error) +} + +// RemoveSanctionUseCase describes the trusted sanction-remove use case +// consumed by the HTTP transport layer. +type RemoveSanctionUseCase interface { + // Execute removes one current active sanction record by code. + Execute(ctx context.Context, input policysvc.RemoveSanctionInput) (policysvc.SanctionCommandResult, error) +} + +// SetLimitUseCase describes the trusted limit-set use case consumed by the +// HTTP transport layer. +type SetLimitUseCase interface { + // Execute creates or replaces one current active limit record. + Execute(ctx context.Context, input policysvc.SetLimitInput) (policysvc.LimitCommandResult, error) +} + +// RemoveLimitUseCase describes the trusted limit-remove use case consumed by +// the HTTP transport layer. +type RemoveLimitUseCase interface { + // Execute removes one current active limit record by code. + Execute(ctx context.Context, input policysvc.RemoveLimitInput) (policysvc.LimitCommandResult, error) +} + +// Config describes the trusted internal HTTP listener owned by the user +// service. +type Config struct { + // Addr stores the TCP listen address. 
+ Addr string + + // ReadHeaderTimeout bounds how long the listener may spend reading request + // headers before rejecting the connection. + ReadHeaderTimeout time.Duration + + // ReadTimeout bounds how long the listener may spend reading one request. + ReadTimeout time.Duration + + // IdleTimeout bounds how long keep-alive connections stay open. + IdleTimeout time.Duration + + // RequestTimeout bounds one application-layer request execution. + RequestTimeout time.Duration +} + +// Validate reports whether cfg contains a usable internal HTTP listener +// configuration. +func (cfg Config) Validate() error { + switch { + case cfg.Addr == "": + return errors.New("internal HTTP addr must not be empty") + case cfg.ReadHeaderTimeout <= 0: + return errors.New("internal HTTP read header timeout must be positive") + case cfg.ReadTimeout <= 0: + return errors.New("internal HTTP read timeout must be positive") + case cfg.IdleTimeout <= 0: + return errors.New("internal HTTP idle timeout must be positive") + case cfg.RequestTimeout <= 0: + return errors.New("internal HTTP request timeout must be positive") + default: + return nil + } +} + +// Dependencies describes the collaborators used by the trusted internal HTTP +// transport layer. +type Dependencies struct { + // ResolveByEmail executes the auth-facing resolve-by-email use case. + ResolveByEmail ResolveByEmailUseCase + + // EnsureByEmail executes the auth-facing ensure-by-email use case. + EnsureByEmail EnsureByEmailUseCase + + // ExistsByUserID executes the auth-facing exists-by-user-id use case. + ExistsByUserID ExistsByUserIDUseCase + + // BlockByUserID executes the auth-facing block-by-user-id use case. + BlockByUserID BlockByUserIDUseCase + + // BlockByEmail executes the auth-facing block-by-email use case. + BlockByEmail BlockByEmailUseCase + + // GetMyAccount executes the self-service authenticated account-read use + // case. 
+ GetMyAccount GetMyAccountUseCase + + // UpdateMyProfile executes the self-service profile-mutation use case. + UpdateMyProfile UpdateMyProfileUseCase + + // UpdateMySettings executes the self-service settings-mutation use case. + UpdateMySettings UpdateMySettingsUseCase + + // GetUserByID executes the trusted admin exact-read by stable user id. + GetUserByID GetUserByIDUseCase + + // GetUserByEmail executes the trusted admin exact-read by normalized + // e-mail. + GetUserByEmail GetUserByEmailUseCase + + // GetUserByRaceName executes the trusted admin exact-read by exact stored + // race name. + GetUserByRaceName GetUserByRaceNameUseCase + + // ListUsers executes the trusted admin paginated filtered listing use case. + ListUsers ListUsersUseCase + + // GetUserEligibility executes the trusted lobby-facing eligibility snapshot + // read. + GetUserEligibility GetUserEligibilityUseCase + + // SyncDeclaredCountry executes the trusted geo-facing declared-country sync + // command. + SyncDeclaredCountry SyncDeclaredCountryUseCase + + // GrantEntitlement executes the trusted entitlement-grant use case. + GrantEntitlement GrantEntitlementUseCase + + // ExtendEntitlement executes the trusted entitlement-extend use case. + ExtendEntitlement ExtendEntitlementUseCase + + // RevokeEntitlement executes the trusted entitlement-revoke use case. + RevokeEntitlement RevokeEntitlementUseCase + + // ApplySanction executes the trusted sanction-apply use case. + ApplySanction ApplySanctionUseCase + + // RemoveSanction executes the trusted sanction-remove use case. + RemoveSanction RemoveSanctionUseCase + + // SetLimit executes the trusted limit-set use case. + SetLimit SetLimitUseCase + + // RemoveLimit executes the trusted limit-remove use case. + RemoveLimit RemoveLimitUseCase + + // Logger writes structured transport logs. When nil, the default logger is + // used. + Logger *slog.Logger + + // Telemetry records OpenTelemetry spans and low-cardinality HTTP metrics. 
+ Telemetry *telemetry.Runtime +} + +// Server owns the trusted internal HTTP listener exposed by the user service. +type Server struct { + cfg Config + + handler http.Handler + logger *slog.Logger + + stateMu sync.RWMutex + server *http.Server + listener net.Listener +} + +// NewServer constructs one trusted internal HTTP server for cfg and deps. +func NewServer(cfg Config, deps Dependencies) (*Server, error) { + if err := cfg.Validate(); err != nil { + return nil, fmt.Errorf("new internal HTTP server: %w", err) + } + + handler, err := newHandlerWithConfig(cfg, deps) + if err != nil { + return nil, fmt.Errorf("new internal HTTP server: %w", err) + } + + logger := deps.Logger + if logger == nil { + logger = slog.Default() + } + + return &Server{ + cfg: cfg, + handler: handler, + logger: logger, + }, nil +} + +// Run binds the configured listener and serves the trusted internal HTTP +// surface until ctx is cancelled or Shutdown closes the server. +func (server *Server) Run(ctx context.Context) error { + if ctx == nil { + return errors.New("run internal HTTP server: nil context") + } + if err := ctx.Err(); err != nil { + return err + } + + listener, err := net.Listen("tcp", server.cfg.Addr) + if err != nil { + return fmt.Errorf("run internal HTTP server: listen on %q: %w", server.cfg.Addr, err) + } + + httpServer := &http.Server{ + Handler: server.handler, + ReadHeaderTimeout: server.cfg.ReadHeaderTimeout, + ReadTimeout: server.cfg.ReadTimeout, + IdleTimeout: server.cfg.IdleTimeout, + } + + server.stateMu.Lock() + server.server = httpServer + server.listener = listener + server.stateMu.Unlock() + + server.logger.Info("internal HTTP server started", "addr", listener.Addr().String()) + + shutdownDone := make(chan struct{}) + go func() { + defer close(shutdownDone) + <-ctx.Done() + shutdownCtx, cancel := context.WithTimeout(context.Background(), server.cfg.RequestTimeout) + defer cancel() + _ = server.Shutdown(shutdownCtx) + }() + + defer func() { + 
server.stateMu.Lock() + server.server = nil + server.listener = nil + server.stateMu.Unlock() + <-shutdownDone + }() + + err = httpServer.Serve(listener) + switch { + case err == nil: + return nil + case errors.Is(err, http.ErrServerClosed): + server.logger.Info("internal HTTP server stopped") + return nil + default: + return fmt.Errorf("run internal HTTP server: serve on %q: %w", server.cfg.Addr, err) + } +} + +// Shutdown gracefully stops the internal HTTP server within ctx. +func (server *Server) Shutdown(ctx context.Context) error { + if ctx == nil { + return errors.New("shutdown internal HTTP server: nil context") + } + + server.stateMu.RLock() + httpServer := server.server + server.stateMu.RUnlock() + + if httpServer == nil { + return nil + } + + if err := httpServer.Shutdown(ctx); err != nil && !errors.Is(err, http.ErrServerClosed) { + return fmt.Errorf("shutdown internal HTTP server: %w", err) + } + + return nil +} diff --git a/user/internal/app/runtime.go b/user/internal/app/runtime.go new file mode 100644 index 0000000..10f7715 --- /dev/null +++ b/user/internal/app/runtime.go @@ -0,0 +1,493 @@ +// Package app wires the runnable user-service process. 
+package app + +import ( + "context" + "errors" + "fmt" + "log/slog" + "strings" + "sync" + + "galaxy/user/internal/adapters/local" + "galaxy/user/internal/adapters/redis/domainevents" + "galaxy/user/internal/adapters/redis/userstore" + "galaxy/user/internal/adminapi" + "galaxy/user/internal/api/internalhttp" + "galaxy/user/internal/config" + "galaxy/user/internal/service/adminusers" + "galaxy/user/internal/service/authdirectory" + "galaxy/user/internal/service/entitlementsvc" + "galaxy/user/internal/service/geosync" + "galaxy/user/internal/service/lobbyeligibility" + "galaxy/user/internal/service/policysvc" + "galaxy/user/internal/service/selfservice" + "galaxy/user/internal/telemetry" +) + +type pinger interface { + Ping(context.Context) error +} + +type closer interface { + Close() error +} + +// Runtime owns the runnable user-service process plus the cleanup functions +// that release runtime resources after shutdown. +type Runtime struct { + cfg config.Config + logger *slog.Logger + + // Server owns the internal HTTP listener exposed by the user service. + Server *internalhttp.Server + + // AdminServer owns the optional private admin HTTP listener. + AdminServer *adminapi.Server + + // Telemetry owns the process-wide OpenTelemetry providers and Prometheus + // handler. + Telemetry *telemetry.Runtime + + cleanupFns []func() error +} + +// NewRuntime constructs the runnable user-service process from cfg. 
+func NewRuntime(ctx context.Context, cfg config.Config, logger *slog.Logger) (*Runtime, error) {
+	if ctx == nil {
+		return nil, fmt.Errorf("new user-service runtime: nil context")
+	}
+	if err := cfg.Validate(); err != nil {
+		return nil, fmt.Errorf("new user-service runtime: %w", err)
+	}
+	if logger == nil {
+		logger = slog.Default()
+	}
+
+	runtime := &Runtime{
+		cfg:    cfg,
+		logger: logger,
+	}
+	cleanupOnError := func(err error) (*Runtime, error) {
+		// Wrapping a nil cleanup error with %w renders as %!w(<nil>), so only
+		// join the cleanup error when Close actually failed.
+		if cleanupErr := runtime.Close(); cleanupErr != nil {
+			return nil, fmt.Errorf("%w; cleanup: %w", err, cleanupErr)
+		}
+		return nil, err
+	}
+
+	telemetryRuntime, err := telemetry.NewProcess(ctx, telemetry.ProcessConfig{
+		ServiceName:          cfg.Telemetry.ServiceName,
+		TracesExporter:       cfg.Telemetry.TracesExporter,
+		MetricsExporter:      cfg.Telemetry.MetricsExporter,
+		TracesProtocol:       cfg.Telemetry.TracesProtocol,
+		MetricsProtocol:      cfg.Telemetry.MetricsProtocol,
+		StdoutTracesEnabled:  cfg.Telemetry.StdoutTracesEnabled,
+		StdoutMetricsEnabled: cfg.Telemetry.StdoutMetricsEnabled,
+	}, logger.With("component", "telemetry"))
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new user-service runtime: telemetry runtime: %w", err))
+	}
+	runtime.Telemetry = telemetryRuntime
+	runtime.cleanupFns = append(runtime.cleanupFns, func() error {
+		shutdownCtx, cancel := context.WithTimeout(context.Background(), cfg.ShutdownTimeout)
+		defer cancel()
+		return telemetryRuntime.Shutdown(shutdownCtx)
+	})
+
+	store, err := userstore.New(userstore.Config{
+		Addr:             cfg.Redis.Addr,
+		Username:         cfg.Redis.Username,
+		Password:         cfg.Redis.Password,
+		DB:               cfg.Redis.DB,
+		TLSEnabled:       cfg.Redis.TLSEnabled,
+		KeyspacePrefix:   cfg.Redis.KeyspacePrefix,
+		OperationTimeout: cfg.Redis.OperationTimeout,
+	})
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new user-service runtime: redis user store: %w", err))
+	}
+	runtime.cleanupFns = append(runtime.cleanupFns, store.Close)
+
+	if err := pingDependency(ctx, "redis user store", store); err != nil {
+		return cleanupOnError(fmt.Errorf("new user-service runtime: %w", err))
+ } + + domainEventPublisher, err := domainevents.New(domainevents.Config{ + Addr: cfg.Redis.Addr, + Username: cfg.Redis.Username, + Password: cfg.Redis.Password, + DB: cfg.Redis.DB, + TLSEnabled: cfg.Redis.TLSEnabled, + Stream: cfg.Redis.DomainEventsStream, + StreamMaxLen: cfg.Redis.DomainEventsStreamMaxLen, + OperationTimeout: cfg.Redis.OperationTimeout, + }) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: redis domain-event publisher: %w", err)) + } + runtime.cleanupFns = append(runtime.cleanupFns, domainEventPublisher.Close) + + if err := pingDependency(ctx, "redis domain-event publisher", domainEventPublisher); err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: %w", err)) + } + + clock := local.Clock{} + idGenerator := local.IDGenerator{} + raceNamePolicy, err := local.NewRaceNamePolicy() + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: race-name policy: %w", err)) + } + + componentLogger := func(component string) *slog.Logger { + return logger.With("component", component) + } + + resolver, err := authdirectory.NewResolverWithObservability(store, componentLogger("authdirectory"), telemetryRuntime) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: resolver: %w", err)) + } + ensurer, err := authdirectory.NewEnsurerWithObservability( + store, + clock, + idGenerator, + raceNamePolicy, + componentLogger("authdirectory"), + telemetryRuntime, + domainEventPublisher, + domainEventPublisher, + domainEventPublisher, + ) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: ensurer: %w", err)) + } + existenceChecker, err := authdirectory.NewExistenceChecker(store) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: existence checker: %w", err)) + } + blockByUserID, err := authdirectory.NewBlockByUserIDService(store, clock) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: 
block-by-user-id service: %w", err)) + } + blockByEmail, err := authdirectory.NewBlockByEmailService(store, clock) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: block-by-email service: %w", err)) + } + entitlementReader, err := entitlementsvc.NewReaderWithObservability( + store.EntitlementSnapshots(), + store.EntitlementLifecycle(), + clock, + idGenerator, + componentLogger("entitlementsvc"), + telemetryRuntime, + domainEventPublisher, + ) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: entitlement reader: %w", err)) + } + grantEntitlement, err := entitlementsvc.NewGrantServiceWithObservability( + store.Accounts(), + store.EntitlementHistory(), + entitlementReader, + store.EntitlementLifecycle(), + clock, + idGenerator, + componentLogger("entitlementsvc"), + telemetryRuntime, + domainEventPublisher, + ) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: grant entitlement service: %w", err)) + } + extendEntitlement, err := entitlementsvc.NewExtendServiceWithObservability( + store.Accounts(), + store.EntitlementHistory(), + entitlementReader, + store.EntitlementLifecycle(), + clock, + idGenerator, + componentLogger("entitlementsvc"), + telemetryRuntime, + domainEventPublisher, + ) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: extend entitlement service: %w", err)) + } + revokeEntitlement, err := entitlementsvc.NewRevokeServiceWithObservability( + store.Accounts(), + store.EntitlementHistory(), + entitlementReader, + store.EntitlementLifecycle(), + clock, + idGenerator, + componentLogger("entitlementsvc"), + telemetryRuntime, + domainEventPublisher, + ) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: revoke entitlement service: %w", err)) + } + accountGetter, err := selfservice.NewAccountGetter(store.Accounts(), entitlementReader, store.Sanctions(), store.Limits(), clock) + if err != nil { + return 
cleanupOnError(fmt.Errorf("new user-service runtime: account getter: %w", err)) + } + profileUpdater, err := selfservice.NewProfileUpdaterWithObservability( + store.Accounts(), + entitlementReader, + store.Sanctions(), + store.Limits(), + clock, + raceNamePolicy, + componentLogger("selfservice"), + telemetryRuntime, + domainEventPublisher, + ) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: profile updater: %w", err)) + } + settingsUpdater, err := selfservice.NewSettingsUpdaterWithObservability( + store.Accounts(), + entitlementReader, + store.Sanctions(), + store.Limits(), + clock, + componentLogger("selfservice"), + telemetryRuntime, + domainEventPublisher, + ) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: settings updater: %w", err)) + } + getUserByID, err := adminusers.NewByIDGetter(store.Accounts(), entitlementReader, store.Sanctions(), store.Limits(), clock) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: admin get-user-by-id: %w", err)) + } + getUserByEmail, err := adminusers.NewByEmailGetter(store.Accounts(), entitlementReader, store.Sanctions(), store.Limits(), clock) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: admin get-user-by-email: %w", err)) + } + getUserByRaceName, err := adminusers.NewByRaceNameGetter(store.Accounts(), entitlementReader, store.Sanctions(), store.Limits(), clock) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: admin get-user-by-race-name: %w", err)) + } + listUsers, err := adminusers.NewLister(store.Accounts(), entitlementReader, store.Sanctions(), store.Limits(), clock, store) + if err != nil { + return cleanupOnError(fmt.Errorf("new user-service runtime: admin list-users: %w", err)) + } + userEligibility, err := lobbyeligibility.NewSnapshotReader(store.Accounts(), entitlementReader, store.Sanctions(), store.Limits(), clock) + if err != nil { + return 
cleanupOnError(fmt.Errorf("new user-service runtime: lobby eligibility snapshot reader: %w", err))
+	}
+	syncDeclaredCountry, err := geosync.NewSyncServiceWithObservability(
+		store.Accounts(),
+		clock,
+		domainEventPublisher,
+		componentLogger("geosync"),
+		telemetryRuntime,
+	)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new user-service runtime: geo declared-country sync service: %w", err))
+	}
+	applySanction, err := policysvc.NewApplySanctionServiceWithObservability(
+		store.Accounts(),
+		store.Sanctions(),
+		store.Limits(),
+		store.PolicyLifecycle(),
+		clock,
+		idGenerator,
+		componentLogger("policysvc"),
+		telemetryRuntime,
+		domainEventPublisher,
+	)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new user-service runtime: apply sanction service: %w", err))
+	}
+	removeSanction, err := policysvc.NewRemoveSanctionServiceWithObservability(
+		store.Accounts(),
+		store.Sanctions(),
+		store.Limits(),
+		store.PolicyLifecycle(),
+		clock,
+		idGenerator,
+		componentLogger("policysvc"),
+		telemetryRuntime,
+		domainEventPublisher,
+	)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new user-service runtime: remove sanction service: %w", err))
+	}
+	setLimit, err := policysvc.NewSetLimitServiceWithObservability(
+		store.Accounts(),
+		store.Sanctions(),
+		store.Limits(),
+		store.PolicyLifecycle(),
+		clock,
+		idGenerator,
+		componentLogger("policysvc"),
+		telemetryRuntime,
+		domainEventPublisher,
+	)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new user-service runtime: set limit service: %w", err))
+	}
+	removeLimit, err := policysvc.NewRemoveLimitServiceWithObservability(
+		store.Accounts(),
+		store.Sanctions(),
+		store.Limits(),
+		store.PolicyLifecycle(),
+		clock,
+		idGenerator,
+		componentLogger("policysvc"),
+		telemetryRuntime,
+		domainEventPublisher,
+	)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new user-service runtime: remove limit service: %w", err))
+	}
+
+	server, err := internalhttp.NewServer(internalhttp.Config{
+		Addr:              cfg.InternalHTTP.Addr,
+		ReadHeaderTimeout: cfg.InternalHTTP.ReadHeaderTimeout,
+		ReadTimeout:       cfg.InternalHTTP.ReadTimeout,
+		IdleTimeout:       cfg.InternalHTTP.IdleTimeout,
+		RequestTimeout:    cfg.InternalHTTP.RequestTimeout,
+	}, internalhttp.Dependencies{
+		ResolveByEmail:      resolver,
+		EnsureByEmail:       ensurer,
+		ExistsByUserID:      existenceChecker,
+		BlockByUserID:       blockByUserID,
+		BlockByEmail:        blockByEmail,
+		GetMyAccount:        accountGetter,
+		UpdateMyProfile:     profileUpdater,
+		UpdateMySettings:    settingsUpdater,
+		GetUserByID:         getUserByID,
+		GetUserByEmail:      getUserByEmail,
+		GetUserByRaceName:   getUserByRaceName,
+		ListUsers:           listUsers,
+		GetUserEligibility:  userEligibility,
+		SyncDeclaredCountry: syncDeclaredCountry,
+		GrantEntitlement:    grantEntitlement,
+		ExtendEntitlement:   extendEntitlement,
+		RevokeEntitlement:   revokeEntitlement,
+		ApplySanction:       applySanction,
+		RemoveSanction:      removeSanction,
+		SetLimit:            setLimit,
+		RemoveLimit:         removeLimit,
+		Logger:              logger.With("component", "internal_http"),
+		Telemetry:           telemetryRuntime,
+	})
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new user-service runtime: internal HTTP server: %w", err))
+	}
+
+	adminServer := adminapi.NewServer(cfg.AdminHTTP, telemetryRuntime.Handler(), logger)
+
+	runtime.Server = server
+	runtime.AdminServer = adminServer
+	return runtime, nil
+}
+
+// Run serves the internal and admin HTTP listeners until ctx is canceled or a
+// listener fails.
+func (runtime *Runtime) Run(ctx context.Context) error {
+	if ctx == nil {
+		return errors.New("run user-service runtime: nil context")
+	}
+	if runtime == nil {
+		return errors.New("run user-service runtime: nil runtime")
+	}
+	if runtime.Server == nil {
+		return errors.New("run user-service runtime: nil internal HTTP server")
+	}
+	if runtime.AdminServer == nil {
+		return errors.New("run user-service runtime: nil admin HTTP server")
+	}
+
+	runCtx, cancel := context.WithCancel(ctx)
+	defer cancel()
+
+	var (
+		wg           sync.WaitGroup
+		shutdownMu   sync.Mutex
+		shutdownDone bool
+		shutdownErr  error
+	)
+	shutdownServers := func() {
+		shutdownMu.Lock()
+		defer shutdownMu.Unlock()
+		if shutdownDone {
+			return
+		}
+		shutdownDone = true
+
+		shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), runtime.cfg.ShutdownTimeout)
+		defer shutdownCancel()
+		shutdownErr = errors.Join(
+			runtime.Server.Shutdown(shutdownCtx),
+			runtime.AdminServer.Shutdown(shutdownCtx),
+		)
+	}
+
+	errCh := make(chan error, 2)
+	runServer := func(name string, serve func(context.Context) error) {
+		wg.Add(1)
+		go func() {
+			defer wg.Done()
+			if err := serve(runCtx); err != nil {
+				select {
+				case errCh <- fmt.Errorf("%s: %w", name, err):
+				default:
+				}
+				cancel()
+			}
+		}()
+	}
+
+	runServer("internal HTTP server", runtime.Server.Run)
+	runServer("admin HTTP server", runtime.AdminServer.Run)
+
+	done := make(chan struct{})
+	go func() {
+		defer close(done)
+		<-runCtx.Done()
+		shutdownServers()
+		wg.Wait()
+	}()
+
+	var runErr error
+	select {
+	case runErr = <-errCh:
+		cancel()
+	case <-ctx.Done():
+		cancel()
+	case <-done:
+	}
+
+	<-done
+	return errors.Join(runErr, shutdownErr)
+}
+
+// Close releases every runtime dependency in reverse construction order.
+func (runtime *Runtime) Close() error {
+	if runtime == nil {
+		return nil
+	}
+
+	var messages []string
+	for index := len(runtime.cleanupFns) - 1; index >= 0; index-- {
+		if err := runtime.cleanupFns[index](); err != nil {
+			messages = append(messages, err.Error())
+		}
+	}
+	if len(messages) == 0 {
+		return nil
+	}
+
+	return errors.New(strings.Join(messages, "; "))
+}
+
+func pingDependency(ctx context.Context, name string, dependency pinger) error {
+	if err := dependency.Ping(ctx); err != nil {
+		return fmt.Errorf("ping %s: %w", name, err)
+	}
+
+	return nil
+}
+
+var _ closer = (*userstore.Store)(nil)
diff --git a/user/internal/config/config.go b/user/internal/config/config.go
new file mode 100644
index 0000000..c95edca
--- /dev/null
+++ b/user/internal/config/config.go
@@ -0,0 +1,551 @@
+// Package config loads the user-service process configuration from environment
+// variables.
+package config
+
+import (
+	"crypto/tls"
+	"fmt"
+	"net"
+	"os"
+	"strconv"
+	"strings"
+	"time"
+)
+
+const (
+	shutdownTimeoutEnvVar = "USERSERVICE_SHUTDOWN_TIMEOUT"
+	logLevelEnvVar        = "USERSERVICE_LOG_LEVEL"
+
+	internalHTTPAddrEnvVar              = "USERSERVICE_INTERNAL_HTTP_ADDR"
+	internalHTTPReadHeaderTimeoutEnvVar = "USERSERVICE_INTERNAL_HTTP_READ_HEADER_TIMEOUT"
+	internalHTTPReadTimeoutEnvVar       = "USERSERVICE_INTERNAL_HTTP_READ_TIMEOUT"
+	internalHTTPIdleTimeoutEnvVar       = "USERSERVICE_INTERNAL_HTTP_IDLE_TIMEOUT"
+	internalHTTPRequestTimeoutEnvVar    = "USERSERVICE_INTERNAL_HTTP_REQUEST_TIMEOUT"
+
+	adminHTTPAddrEnvVar              = "USERSERVICE_ADMIN_HTTP_ADDR"
+	adminHTTPReadHeaderTimeoutEnvVar = "USERSERVICE_ADMIN_HTTP_READ_HEADER_TIMEOUT"
+	adminHTTPReadTimeoutEnvVar       = "USERSERVICE_ADMIN_HTTP_READ_TIMEOUT"
+	adminHTTPIdleTimeoutEnvVar       = "USERSERVICE_ADMIN_HTTP_IDLE_TIMEOUT"
+
+	redisAddrEnvVar                     = "USERSERVICE_REDIS_ADDR"
+	redisUsernameEnvVar                 = "USERSERVICE_REDIS_USERNAME"
+	redisPasswordEnvVar                 = "USERSERVICE_REDIS_PASSWORD"
+	redisDBEnvVar                       = "USERSERVICE_REDIS_DB"
+	redisTLSEnabledEnvVar               = "USERSERVICE_REDIS_TLS_ENABLED"
+	redisOperationTimeoutEnvVar         = "USERSERVICE_REDIS_OPERATION_TIMEOUT"
+	redisKeyspacePrefixEnvVar           = "USERSERVICE_REDIS_KEYSPACE_PREFIX"
+	redisDomainEventsStreamEnvVar       = "USERSERVICE_REDIS_DOMAIN_EVENTS_STREAM"
+	redisDomainEventsStreamMaxLenEnvVar = "USERSERVICE_REDIS_DOMAIN_EVENTS_STREAM_MAX_LEN"
+
+	otelServiceNameEnvVar                 = "OTEL_SERVICE_NAME"
+	otelTracesExporterEnvVar              = "OTEL_TRACES_EXPORTER"
+	otelMetricsExporterEnvVar             = "OTEL_METRICS_EXPORTER"
+	otelExporterOTLPProtocolEnvVar        = "OTEL_EXPORTER_OTLP_PROTOCOL"
+	otelExporterOTLPTracesProtocolEnvVar  = "OTEL_EXPORTER_OTLP_TRACES_PROTOCOL"
+	otelExporterOTLPMetricsProtocolEnvVar = "OTEL_EXPORTER_OTLP_METRICS_PROTOCOL"
+	otelStdoutTracesEnabledEnvVar         = "USERSERVICE_OTEL_STDOUT_TRACES_ENABLED"
+	otelStdoutMetricsEnabledEnvVar        = "USERSERVICE_OTEL_STDOUT_METRICS_ENABLED"
+
+	defaultShutdownTimeout          = 5 * time.Second
+	defaultLogLevel                 = "info"
+	defaultInternalHTTPAddr         = ":8091"
+	defaultAdminHTTPAddr            = ""
+	defaultReadHeaderTimeout        = 2 * time.Second
+	defaultReadTimeout              = 10 * time.Second
+	defaultIdleTimeout              = time.Minute
+	defaultRequestTimeout           = 3 * time.Second
+	defaultRedisDB                  = 0
+	defaultRedisOperationTimeout    = 250 * time.Millisecond
+	defaultRedisKeyspacePrefix      = "user:"
+	defaultDomainEventsStream       = "user:domain_events"
+	defaultDomainEventsStreamMaxLen = 1024
+	defaultOTelServiceName          = "galaxy-user"
+	otelExporterNone                = "none"
+	otelExporterOTLP                = "otlp"
+	otelProtocolHTTPProtobuf        = "http/protobuf"
+	otelProtocolGRPC                = "grpc"
+)
+
+// Config stores the full user-service process configuration.
+type Config struct {
+	// ShutdownTimeout bounds graceful shutdown of the long-lived listeners and
+	// runtime resources.
+	ShutdownTimeout time.Duration
+
+	// Logging configures the process-wide logger.
+	Logging LoggingConfig
+
+	// InternalHTTP configures the trusted internal HTTP listener.
+	InternalHTTP InternalHTTPConfig
+
+	// AdminHTTP configures the optional private admin HTTP listener.
+	AdminHTTP AdminHTTPConfig
+
+	// Redis configures the Redis-backed user store and domain-event publisher.
+	Redis RedisConfig
+
+	// Telemetry configures the process-wide OpenTelemetry runtime.
+	Telemetry TelemetryConfig
+}
+
+// LoggingConfig configures the process-wide logger.
+type LoggingConfig struct {
+	// Level stores the process log level.
+	Level string
+}
+
+// InternalHTTPConfig configures the internal HTTP listener.
+type InternalHTTPConfig struct {
+	// Addr stores the TCP listen address.
+	Addr string
+
+	// ReadHeaderTimeout bounds request-header reading.
+	ReadHeaderTimeout time.Duration
+
+	// ReadTimeout bounds reading one request.
+	ReadTimeout time.Duration
+
+	// IdleTimeout bounds how long keep-alive connections stay open.
+	IdleTimeout time.Duration
+
+	// RequestTimeout bounds one application-layer request execution.
+	RequestTimeout time.Duration
+}
+
+// Validate reports whether cfg stores a usable internal HTTP listener
+// configuration.
+func (cfg InternalHTTPConfig) Validate() error {
+	switch {
+	case strings.TrimSpace(cfg.Addr) == "":
+		return fmt.Errorf("internal HTTP addr must not be empty")
+	case cfg.ReadHeaderTimeout <= 0:
+		return fmt.Errorf("internal HTTP read header timeout must be positive")
+	case cfg.ReadTimeout <= 0:
+		return fmt.Errorf("internal HTTP read timeout must be positive")
+	case cfg.IdleTimeout <= 0:
+		return fmt.Errorf("internal HTTP idle timeout must be positive")
+	case cfg.RequestTimeout <= 0:
+		return fmt.Errorf("internal HTTP request timeout must be positive")
+	default:
+		return nil
+	}
+}
+
+// AdminHTTPConfig describes the private operational HTTP listener used for
+// Prometheus metrics exposure. The listener remains disabled when Addr is
+// empty.
+type AdminHTTPConfig struct {
+	// Addr stores the TCP listen address used by the admin HTTP server.
+	Addr string
+
+	// ReadHeaderTimeout bounds request-header reading.
+	ReadHeaderTimeout time.Duration
+
+	// ReadTimeout bounds reading one request.
+	ReadTimeout time.Duration
+
+	// IdleTimeout bounds how long keep-alive connections stay open.
+	IdleTimeout time.Duration
+}
+
+// Validate reports whether cfg stores a usable optional admin HTTP listener
+// configuration.
+func (cfg AdminHTTPConfig) Validate() error {
+	if strings.TrimSpace(cfg.Addr) == "" {
+		return nil
+	}
+
+	switch {
+	case cfg.ReadHeaderTimeout <= 0:
+		return fmt.Errorf("admin HTTP read header timeout must be positive")
+	case cfg.ReadTimeout <= 0:
+		return fmt.Errorf("admin HTTP read timeout must be positive")
+	case cfg.IdleTimeout <= 0:
+		return fmt.Errorf("admin HTTP idle timeout must be positive")
+	default:
+		return nil
+	}
+}
+
+// RedisConfig configures the Redis-backed store and domain-event publisher.
+type RedisConfig struct {
+	// Addr stores the Redis network address.
+	Addr string
+
+	// Username stores the optional Redis ACL username.
+	Username string
+
+	// Password stores the optional Redis ACL password.
+	Password string
+
+	// DB stores the Redis logical database index.
+	DB int
+
+	// TLSEnabled reports whether TLS must be used for Redis connections.
+	TLSEnabled bool
+
+	// OperationTimeout bounds one Redis round trip.
+	OperationTimeout time.Duration
+
+	// KeyspacePrefix stores the root prefix of the service-owned Redis keyspace.
+	KeyspacePrefix string
+
+	// DomainEventsStream stores the Redis Stream key used for auxiliary
+	// post-commit domain events.
+	DomainEventsStream string
+
+	// DomainEventsStreamMaxLen bounds the domain-events Redis Stream with
+	// approximate trimming.
+	DomainEventsStreamMaxLen int64
+}
+
+// TLSConfig returns the conservative TLS configuration used by Redis adapters
+// when TLSEnabled is true.
+func (cfg RedisConfig) TLSConfig() *tls.Config {
+	if !cfg.TLSEnabled {
+		return nil
+	}
+
+	return &tls.Config{MinVersion: tls.VersionTLS12}
+}
+
+// Validate reports whether cfg stores a usable Redis configuration.
+func (cfg RedisConfig) Validate() error {
+	switch {
+	case strings.TrimSpace(cfg.Addr) == "":
+		return fmt.Errorf("redis addr must not be empty")
+	case cfg.DB < 0:
+		return fmt.Errorf("redis db must not be negative")
+	case cfg.OperationTimeout <= 0:
+		return fmt.Errorf("redis operation timeout must be positive")
+	case strings.TrimSpace(cfg.KeyspacePrefix) == "":
+		return fmt.Errorf("redis keyspace prefix must not be empty")
+	case strings.TrimSpace(cfg.DomainEventsStream) == "":
+		return fmt.Errorf("redis domain events stream must not be empty")
+	case cfg.DomainEventsStreamMaxLen <= 0:
+		return fmt.Errorf("redis domain events stream max len must be positive")
+	default:
+		return nil
+	}
+}
+
+// TelemetryConfig configures the user-service OpenTelemetry runtime.
+type TelemetryConfig struct {
+	// ServiceName overrides the default OpenTelemetry service name.
+	ServiceName string
+
+	// TracesExporter selects the external traces exporter. Supported values are
+	// `none` and `otlp`.
+	TracesExporter string
+
+	// MetricsExporter selects the external metrics exporter. Supported values
+	// are `none` and `otlp`.
+	MetricsExporter string
+
+	// TracesProtocol selects the OTLP traces protocol when TracesExporter is
+	// `otlp`.
+	TracesProtocol string
+
+	// MetricsProtocol selects the OTLP metrics protocol when MetricsExporter is
+	// `otlp`.
+	MetricsProtocol string
+
+	// StdoutTracesEnabled enables the additional stdout trace exporter used for
+	// local development and debugging.
+	StdoutTracesEnabled bool
+
+	// StdoutMetricsEnabled enables the additional stdout metric exporter used
+	// for local development and debugging.
+	StdoutMetricsEnabled bool
+}
+
+// Validate reports whether cfg contains a supported OpenTelemetry exporter
+// configuration.
+func (cfg TelemetryConfig) Validate() error {
+	switch cfg.TracesExporter {
+	case otelExporterNone, otelExporterOTLP:
+	default:
+		return fmt.Errorf("%s %q is unsupported", otelTracesExporterEnvVar, cfg.TracesExporter)
+	}
+
+	switch cfg.MetricsExporter {
+	case otelExporterNone, otelExporterOTLP:
+	default:
+		return fmt.Errorf("%s %q is unsupported", otelMetricsExporterEnvVar, cfg.MetricsExporter)
+	}
+
+	if cfg.TracesProtocol != "" && cfg.TracesProtocol != otelProtocolHTTPProtobuf && cfg.TracesProtocol != otelProtocolGRPC {
+		return fmt.Errorf("%s %q is unsupported", otelExporterOTLPTracesProtocolEnvVar, cfg.TracesProtocol)
+	}
+	if cfg.MetricsProtocol != "" && cfg.MetricsProtocol != otelProtocolHTTPProtobuf && cfg.MetricsProtocol != otelProtocolGRPC {
+		return fmt.Errorf("%s %q is unsupported", otelExporterOTLPMetricsProtocolEnvVar, cfg.MetricsProtocol)
+	}
+
+	return nil
+}
+
+// DefaultAdminHTTPConfig returns the default settings for the optional private
+// admin HTTP listener.
+func DefaultAdminHTTPConfig() AdminHTTPConfig {
+	return AdminHTTPConfig{
+		Addr:              defaultAdminHTTPAddr,
+		ReadHeaderTimeout: defaultReadHeaderTimeout,
+		ReadTimeout:       defaultReadTimeout,
+		IdleTimeout:       defaultIdleTimeout,
+	}
+}
+
+// DefaultConfig returns the default process configuration with all optional
+// values filled.
+func DefaultConfig() Config {
+	return Config{
+		ShutdownTimeout: defaultShutdownTimeout,
+		Logging: LoggingConfig{
+			Level: defaultLogLevel,
+		},
+		InternalHTTP: InternalHTTPConfig{
+			Addr:              defaultInternalHTTPAddr,
+			ReadHeaderTimeout: defaultReadHeaderTimeout,
+			ReadTimeout:       defaultReadTimeout,
+			IdleTimeout:       defaultIdleTimeout,
+			RequestTimeout:    defaultRequestTimeout,
+		},
+		AdminHTTP: DefaultAdminHTTPConfig(),
+		Redis: RedisConfig{
+			DB:                       defaultRedisDB,
+			OperationTimeout:         defaultRedisOperationTimeout,
+			KeyspacePrefix:           defaultRedisKeyspacePrefix,
+			DomainEventsStream:       defaultDomainEventsStream,
+			DomainEventsStreamMaxLen: defaultDomainEventsStreamMaxLen,
+		},
+		Telemetry: TelemetryConfig{
+			ServiceName:     defaultOTelServiceName,
+			TracesExporter:  otelExporterNone,
+			MetricsExporter: otelExporterNone,
+		},
+	}
+}
+
+// Validate reports whether cfg is process-ready.
+func (cfg Config) Validate() error {
+	switch {
+	case cfg.ShutdownTimeout <= 0:
+		return fmt.Errorf("shutdown timeout must be positive")
+	}
+	if err := cfg.InternalHTTP.Validate(); err != nil {
+		return fmt.Errorf("internal HTTP config: %w", err)
+	}
+	if err := cfg.AdminHTTP.Validate(); err != nil {
+		return fmt.Errorf("admin HTTP config: %w", err)
+	}
+	if err := cfg.Redis.Validate(); err != nil {
+		return fmt.Errorf("redis config: %w", err)
+	}
+	if _, err := parseLogLevel(cfg.Logging.Level); err != nil {
+		return fmt.Errorf("logging config: %w", err)
+	}
+	if err := cfg.Telemetry.Validate(); err != nil {
+		return fmt.Errorf("telemetry config: %w", err)
+	}
+
+	return nil
+}
+
+// LoadFromEnv loads Config from the process environment.
+func LoadFromEnv() (Config, error) {
+	cfg := DefaultConfig()
+
+	var err error
+	cfg.ShutdownTimeout, err = loadDuration(shutdownTimeoutEnvVar, cfg.ShutdownTimeout)
+	if err != nil {
+		return Config{}, err
+	}
+	cfg.Logging.Level = loadString(logLevelEnvVar, cfg.Logging.Level)
+
+	cfg.InternalHTTP.Addr = loadString(internalHTTPAddrEnvVar, cfg.InternalHTTP.Addr)
+	cfg.InternalHTTP.ReadHeaderTimeout, err = loadDuration(internalHTTPReadHeaderTimeoutEnvVar, cfg.InternalHTTP.ReadHeaderTimeout)
+	if err != nil {
+		return Config{}, err
+	}
+	cfg.InternalHTTP.ReadTimeout, err = loadDuration(internalHTTPReadTimeoutEnvVar, cfg.InternalHTTP.ReadTimeout)
+	if err != nil {
+		return Config{}, err
+	}
+	cfg.InternalHTTP.IdleTimeout, err = loadDuration(internalHTTPIdleTimeoutEnvVar, cfg.InternalHTTP.IdleTimeout)
+	if err != nil {
+		return Config{}, err
+	}
+	cfg.InternalHTTP.RequestTimeout, err = loadDuration(internalHTTPRequestTimeoutEnvVar, cfg.InternalHTTP.RequestTimeout)
+	if err != nil {
+		return Config{}, err
+	}
+
+	cfg.AdminHTTP.Addr = loadString(adminHTTPAddrEnvVar, cfg.AdminHTTP.Addr)
+	cfg.AdminHTTP.ReadHeaderTimeout, err = loadDuration(adminHTTPReadHeaderTimeoutEnvVar, cfg.AdminHTTP.ReadHeaderTimeout)
+	if err != nil {
+		return Config{}, err
+	}
+	cfg.AdminHTTP.ReadTimeout, err = loadDuration(adminHTTPReadTimeoutEnvVar, cfg.AdminHTTP.ReadTimeout)
+	if err != nil {
+		return Config{}, err
+	}
+	cfg.AdminHTTP.IdleTimeout, err = loadDuration(adminHTTPIdleTimeoutEnvVar, cfg.AdminHTTP.IdleTimeout)
+	if err != nil {
+		return Config{}, err
+	}
+
+	cfg.Redis.Addr = loadString(redisAddrEnvVar, cfg.Redis.Addr)
+	cfg.Redis.Username = loadString(redisUsernameEnvVar, cfg.Redis.Username)
+	cfg.Redis.Password = loadString(redisPasswordEnvVar, cfg.Redis.Password)
+	cfg.Redis.DB, err = loadInt(redisDBEnvVar, cfg.Redis.DB)
+	if err != nil {
+		return Config{}, err
+	}
+	cfg.Redis.TLSEnabled, err = loadBool(redisTLSEnabledEnvVar, cfg.Redis.TLSEnabled)
+	if err != nil {
+		return Config{}, err
+	}
+	cfg.Redis.OperationTimeout, err = loadDuration(redisOperationTimeoutEnvVar, cfg.Redis.OperationTimeout)
+	if err != nil {
+		return Config{}, err
+	}
+	cfg.Redis.KeyspacePrefix = loadString(redisKeyspacePrefixEnvVar, cfg.Redis.KeyspacePrefix)
+	cfg.Redis.DomainEventsStream = loadString(redisDomainEventsStreamEnvVar, cfg.Redis.DomainEventsStream)
+	cfg.Redis.DomainEventsStreamMaxLen, err = loadInt64(redisDomainEventsStreamMaxLenEnvVar, cfg.Redis.DomainEventsStreamMaxLen)
+	if err != nil {
+		return Config{}, err
+	}
+
+	cfg.Telemetry.ServiceName = loadString(otelServiceNameEnvVar, cfg.Telemetry.ServiceName)
+	cfg.Telemetry.TracesExporter = normalizeExporterValue(loadString(otelTracesExporterEnvVar, cfg.Telemetry.TracesExporter))
+	cfg.Telemetry.MetricsExporter = normalizeExporterValue(loadString(otelMetricsExporterEnvVar, cfg.Telemetry.MetricsExporter))
+	cfg.Telemetry.TracesProtocol = loadOTLPProtocol(
+		os.Getenv(otelExporterOTLPTracesProtocolEnvVar),
+		os.Getenv(otelExporterOTLPProtocolEnvVar),
+		cfg.Telemetry.TracesExporter,
+	)
+	cfg.Telemetry.MetricsProtocol = loadOTLPProtocol(
+		os.Getenv(otelExporterOTLPMetricsProtocolEnvVar),
+		os.Getenv(otelExporterOTLPProtocolEnvVar),
+		cfg.Telemetry.MetricsExporter,
+	)
+	cfg.Telemetry.StdoutTracesEnabled, err = loadBool(otelStdoutTracesEnabledEnvVar, cfg.Telemetry.StdoutTracesEnabled)
+	if err != nil {
+		return Config{}, err
+	}
+	cfg.Telemetry.StdoutMetricsEnabled, err = loadBool(otelStdoutMetricsEnabledEnvVar, cfg.Telemetry.StdoutMetricsEnabled)
+	if err != nil {
+		return Config{}, err
+	}
+
+	if err := cfg.Validate(); err != nil {
+		return Config{}, err
+	}
+
+	return cfg, nil
+}
+
+func loadString(envName string, defaultValue string) string {
+	value, ok := os.LookupEnv(envName)
+	if !ok {
+		return defaultValue
+	}
+
+	return strings.TrimSpace(value)
+}
+
+func loadDuration(envName string, defaultValue time.Duration) (time.Duration, error) {
+	value, ok := os.LookupEnv(envName)
+	if !ok {
+		return defaultValue, nil
+	}
+
+	duration, err := time.ParseDuration(strings.TrimSpace(value))
+	if err != nil {
+		return 0, fmt.Errorf("%s: parse duration: %w", envName, err)
+	}
+
+	return duration, nil
+}
+
+func loadInt(envName string, defaultValue int) (int, error) {
+	value, ok := os.LookupEnv(envName)
+	if !ok {
+		return defaultValue, nil
+	}
+
+	parsedValue, err := strconv.Atoi(strings.TrimSpace(value))
+	if err != nil {
+		return 0, fmt.Errorf("%s: parse int: %w", envName, err)
+	}
+
+	return parsedValue, nil
+}
+
+func loadInt64(envName string, defaultValue int64) (int64, error) {
+	value, ok := os.LookupEnv(envName)
+	if !ok {
+		return defaultValue, nil
+	}
+
+	parsedValue, err := strconv.ParseInt(strings.TrimSpace(value), 10, 64)
+	if err != nil {
+		return 0, fmt.Errorf("%s: parse int64: %w", envName, err)
+	}
+
+	return parsedValue, nil
+}
+
+func loadBool(envName string, defaultValue bool) (bool, error) {
+	value, ok := os.LookupEnv(envName)
+	if !ok {
+		return defaultValue, nil
+	}
+
+	parsedValue, err := strconv.ParseBool(strings.TrimSpace(value))
+	if err != nil {
+		return false, fmt.Errorf("%s: parse bool: %w", envName, err)
+	}
+
+	return parsedValue, nil
+}
+
+func parseLogLevel(value string) (string, error) {
+	switch strings.ToLower(strings.TrimSpace(value)) {
+	case "debug", "info", "warn", "error":
+		return value, nil
+	default:
+		return "", fmt.Errorf("unsupported log level %q", value)
+	}
+}
+
+func normalizeExporterValue(value string) string {
+	switch strings.TrimSpace(value) {
+	case "", otelExporterNone:
+		return otelExporterNone
+	default:
+		return strings.TrimSpace(value)
+	}
+}
+
+func loadOTLPProtocol(primary string, fallback string, exporter string) string {
+	protocol := strings.TrimSpace(primary)
+	if protocol == "" {
+		protocol = strings.TrimSpace(fallback)
+	}
+	if protocol == "" && exporter == otelExporterOTLP {
+		return otelProtocolHTTPProtobuf
+	}
+
+	return protocol
+}
+
+// ListenAddress returns the resolved listen address used by tests and process
+// startup.
+func (cfg InternalHTTPConfig) ListenAddress() string {
+	if strings.HasPrefix(cfg.Addr, ":") {
+		return net.JoinHostPort("", strings.TrimPrefix(cfg.Addr, ":"))
+	}
+
+	return cfg.Addr
+}
diff --git a/user/internal/config/config_test.go b/user/internal/config/config_test.go
new file mode 100644
index 0000000..82fc75b
--- /dev/null
+++ b/user/internal/config/config_test.go
@@ -0,0 +1,106 @@
+package config
+
+import (
+	"testing"
+	"time"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestLoadFromEnvUsesDefaults(t *testing.T) {
+	t.Setenv(redisAddrEnvVar, "127.0.0.1:6379")
+
+	cfg, err := LoadFromEnv()
+	require.NoError(t, err)
+
+	defaults := DefaultConfig()
+	require.Equal(t, defaults.ShutdownTimeout, cfg.ShutdownTimeout)
+	require.Equal(t, defaults.Logging.Level, cfg.Logging.Level)
+	require.Equal(t, defaults.InternalHTTP, cfg.InternalHTTP)
+	require.Equal(t, defaults.AdminHTTP, cfg.AdminHTTP)
+	require.Equal(t, "127.0.0.1:6379", cfg.Redis.Addr)
+	require.Equal(t, defaults.Redis.DB, cfg.Redis.DB)
+	require.Equal(t, defaults.Redis.DomainEventsStream, cfg.Redis.DomainEventsStream)
+	require.Equal(t, defaults.Redis.DomainEventsStreamMaxLen, cfg.Redis.DomainEventsStreamMaxLen)
+	require.Equal(t, defaults.Telemetry, cfg.Telemetry)
+}
+
+func TestLoadFromEnvAppliesOverrides(t *testing.T) {
+	t.Setenv(shutdownTimeoutEnvVar, "9s")
+	t.Setenv(logLevelEnvVar, "debug")
+	t.Setenv(internalHTTPAddrEnvVar, "127.0.0.1:18091")
+	t.Setenv(internalHTTPReadHeaderTimeoutEnvVar, "3s")
+	t.Setenv(internalHTTPRequestTimeoutEnvVar, "750ms")
+	t.Setenv(adminHTTPAddrEnvVar, "127.0.0.1:19091")
+	t.Setenv(adminHTTPIdleTimeoutEnvVar, "90s")
+	t.Setenv(redisAddrEnvVar, "127.0.0.1:6380")
+	t.Setenv(redisUsernameEnvVar, "alice")
+	t.Setenv(redisPasswordEnvVar, "secret")
+	t.Setenv(redisDBEnvVar, "3")
+	t.Setenv(redisTLSEnabledEnvVar, "true")
+	t.Setenv(redisOperationTimeoutEnvVar, "900ms")
+	t.Setenv(redisKeyspacePrefixEnvVar, "user:custom:")
+	t.Setenv(redisDomainEventsStreamEnvVar, "user:test_events")
+	t.Setenv(redisDomainEventsStreamMaxLenEnvVar, "2048")
+	t.Setenv(otelServiceNameEnvVar, "galaxy-user-stage12")
+	t.Setenv(otelTracesExporterEnvVar, "otlp")
+	t.Setenv(otelMetricsExporterEnvVar, "otlp")
+	t.Setenv(otelExporterOTLPTracesProtocolEnvVar, "grpc")
+	t.Setenv(otelExporterOTLPMetricsProtocolEnvVar, "http/protobuf")
+	t.Setenv(otelStdoutTracesEnabledEnvVar, "true")
+	t.Setenv(otelStdoutMetricsEnabledEnvVar, "true")
+
+	cfg, err := LoadFromEnv()
+	require.NoError(t, err)
+
+	require.Equal(t, 9*time.Second, cfg.ShutdownTimeout)
+	require.Equal(t, "debug", cfg.Logging.Level)
+	require.Equal(t, "127.0.0.1:18091", cfg.InternalHTTP.Addr)
+	require.Equal(t, 3*time.Second, cfg.InternalHTTP.ReadHeaderTimeout)
+	require.Equal(t, 750*time.Millisecond, cfg.InternalHTTP.RequestTimeout)
+	require.Equal(t, "127.0.0.1:19091", cfg.AdminHTTP.Addr)
+	require.Equal(t, 90*time.Second, cfg.AdminHTTP.IdleTimeout)
+	require.Equal(t, "127.0.0.1:6380", cfg.Redis.Addr)
+	require.Equal(t, "alice", cfg.Redis.Username)
+	require.Equal(t, "secret", cfg.Redis.Password)
+	require.Equal(t, 3, cfg.Redis.DB)
+	require.True(t, cfg.Redis.TLSEnabled)
+	require.Equal(t, 900*time.Millisecond, cfg.Redis.OperationTimeout)
+	require.Equal(t, "user:custom:", cfg.Redis.KeyspacePrefix)
+	require.Equal(t, "user:test_events", cfg.Redis.DomainEventsStream)
+	require.Equal(t, int64(2048), cfg.Redis.DomainEventsStreamMaxLen)
+	require.Equal(t, "galaxy-user-stage12", cfg.Telemetry.ServiceName)
+	require.Equal(t, "otlp", cfg.Telemetry.TracesExporter)
+	require.Equal(t, "otlp", cfg.Telemetry.MetricsExporter)
+	require.Equal(t, "grpc", cfg.Telemetry.TracesProtocol)
+	require.Equal(t, "http/protobuf", cfg.Telemetry.MetricsProtocol)
+	require.True(t, cfg.Telemetry.StdoutTracesEnabled)
+	require.True(t, cfg.Telemetry.StdoutMetricsEnabled)
+}
+
+func TestLoadFromEnvRejectsInvalidValues(t *testing.T) {
+	tests := []struct {
+		name    string
+		envName string
+		envVal  string
+	}{
+		{name: "invalid duration", envName: shutdownTimeoutEnvVar, envVal: "later"},
+		{name: "invalid bool", envName: redisTLSEnabledEnvVar, envVal: "sometimes"},
+		{name: "invalid log level", envName: logLevelEnvVar, envVal: "verbose"},
+		{name: "invalid int", envName: redisDBEnvVar, envVal: "db-three"},
+		{name: "invalid stream max len", envName: redisDomainEventsStreamMaxLenEnvVar, envVal: "many"},
+		{name: "invalid traces exporter", envName: otelTracesExporterEnvVar, envVal: "zipkin"},
+		{name: "invalid metrics protocol", envName: otelExporterOTLPMetricsProtocolEnvVar, envVal: "udp"},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			t.Setenv(redisAddrEnvVar, "127.0.0.1:6379")
+			t.Setenv(tt.envName, tt.envVal)
+
+			_, err := LoadFromEnv()
+			require.Error(t, err)
+		})
+	}
+}
diff --git a/user/internal/domain/account/model.go b/user/internal/domain/account/model.go
new file mode 100644
index 0000000..92e569d
--- /dev/null
+++ b/user/internal/domain/account/model.go
@@ -0,0 +1,136 @@
+// Package account defines the logical user-account entities owned directly by
+// User Service.
+package account
+
+import (
+	"fmt"
+	"strings"
+	"time"
+
+	"galaxy/user/internal/domain/common"
+)
+
+// RaceNameCanonicalKey stores the policy-produced reservation key used to
+// enforce replaceable race-name uniqueness.
+type RaceNameCanonicalKey string
+
+// String returns RaceNameCanonicalKey as its stored canonical string.
+func (key RaceNameCanonicalKey) String() string {
+	return string(key)
+}
+
+// IsZero reports whether RaceNameCanonicalKey does not contain a usable value.
+func (key RaceNameCanonicalKey) IsZero() bool {
+	return strings.TrimSpace(string(key)) == ""
+}
+
+// Validate reports whether RaceNameCanonicalKey is non-empty and trimmed.
+func (key RaceNameCanonicalKey) Validate() error {
+	switch {
+	case key.IsZero():
+		return fmt.Errorf("race name canonical key must not be empty")
+	case strings.TrimSpace(string(key)) != string(key):
+		return fmt.Errorf("race name canonical key must not contain surrounding whitespace")
+	default:
+		return nil
+	}
+}
+
+// UserAccount stores the current editable account state of one regular user.
+type UserAccount struct {
+	// UserID identifies the durable regular-user account.
+	UserID common.UserID
+
+	// Email stores the normalized login/contact address of the account.
+	Email common.Email
+
+	// RaceName stores the original-casing user-facing race name.
+	RaceName common.RaceName
+
+	// PreferredLanguage stores the current declared language tag.
+	PreferredLanguage common.LanguageTag
+
+	// TimeZone stores the current declared time-zone name.
+	TimeZone common.TimeZoneName
+
+	// DeclaredCountry stores the latest effective declared-country value. The
+	// zero value means the geo workflow has not synchronized any country yet.
+	DeclaredCountry common.CountryCode
+
+	// CreatedAt stores the account creation timestamp.
+	CreatedAt time.Time
+
+	// UpdatedAt stores the last account mutation timestamp.
+	UpdatedAt time.Time
+}
+
+// Validate reports whether UserAccount satisfies the frozen Stage 02
+// structural invariants.
+func (record UserAccount) Validate() error {
+	if err := record.UserID.Validate(); err != nil {
+		return fmt.Errorf("user account user id: %w", err)
+	}
+	if err := record.Email.Validate(); err != nil {
+		return fmt.Errorf("user account email: %w", err)
+	}
+	if err := record.RaceName.Validate(); err != nil {
+		return fmt.Errorf("user account race name: %w", err)
+	}
+	if err := record.PreferredLanguage.Validate(); err != nil {
+		return fmt.Errorf("user account preferred language: %w", err)
+	}
+	if err := record.TimeZone.Validate(); err != nil {
+		return fmt.Errorf("user account time zone: %w", err)
+	}
+	if !record.DeclaredCountry.IsZero() {
+		if err := record.DeclaredCountry.Validate(); err != nil {
+			return fmt.Errorf("user account declared country: %w", err)
+		}
+	}
+	if err := common.ValidateTimestamp("user account created at", record.CreatedAt); err != nil {
+		return err
+	}
+	if err := common.ValidateTimestamp("user account updated at", record.UpdatedAt); err != nil {
+		return err
+	}
+	if record.UpdatedAt.Before(record.CreatedAt) {
+		return fmt.Errorf("user account updated at must not be before created at")
+	}
+
+	return nil
+}
+
+// RaceNameReservation stores the current uniqueness reservation for one
+// canonicalized race-name key.
+type RaceNameReservation struct {
+	// CanonicalKey stores the policy-produced uniqueness key.
+	CanonicalKey RaceNameCanonicalKey
+
+	// UserID identifies the account that owns the reservation.
+	UserID common.UserID
+
+	// RaceName stores the original-casing name linked to the reservation.
+	RaceName common.RaceName
+
+	// ReservedAt stores when the reservation was acquired.
+	ReservedAt time.Time
+}
+
+// Validate reports whether RaceNameReservation satisfies the frozen Stage 02
+// structural invariants.
+func (record RaceNameReservation) Validate() error {
+	if err := record.CanonicalKey.Validate(); err != nil {
+		return fmt.Errorf("race name reservation canonical key: %w", err)
+	}
+	if err := record.UserID.Validate(); err != nil {
+		return fmt.Errorf("race name reservation user id: %w", err)
+	}
+	if err := record.RaceName.Validate(); err != nil {
+		return fmt.Errorf("race name reservation race name: %w", err)
+	}
+	if err := common.ValidateTimestamp("race name reservation reserved at", record.ReservedAt); err != nil {
+		return err
+	}
+
+	return nil
+}
diff --git a/user/internal/domain/account/model_test.go b/user/internal/domain/account/model_test.go
new file mode 100644
index 0000000..3591b4d
--- /dev/null
+++ b/user/internal/domain/account/model_test.go
@@ -0,0 +1,119 @@
+package account
+
+import (
+	"testing"
+	"time"
+
+	"galaxy/user/internal/domain/common"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestUserAccountValidate(t *testing.T) {
+	t.Parallel()
+
+	createdAt := time.Unix(1_775_240_000, 0).UTC()
+	updatedAt := createdAt.Add(2 * time.Hour)
+
+	tests := []struct {
+		name    string
+		record  UserAccount
+		wantErr bool
+	}{
+		{
+			name: "valid without declared country",
+			record: UserAccount{
+				UserID:            common.UserID("user-123"),
+				Email:             common.Email("pilot@example.com"),
+				RaceName:          common.RaceName("Pilot Nova"),
+				PreferredLanguage: common.LanguageTag("en"),
+				TimeZone:          common.TimeZoneName("Europe/Berlin"),
+				CreatedAt:         createdAt,
+				UpdatedAt:         updatedAt,
+			},
+		},
+		{
+			name: "valid with declared country",
+			record: UserAccount{
+				UserID:            common.UserID("user-123"),
+				Email:             common.Email("pilot@example.com"),
+				RaceName:          common.RaceName("Pilot Nova"),
+				PreferredLanguage: common.LanguageTag("en"),
+				TimeZone:          common.TimeZoneName("Europe/Berlin"),
+				DeclaredCountry:   common.CountryCode("DE"),
+				CreatedAt:         createdAt,
+				UpdatedAt:         updatedAt,
+			},
+		},
+		{
+			name: "updated before created",
+			record: UserAccount{
+				UserID:            common.UserID("user-123"),
+				Email:             common.Email("pilot@example.com"),
+				RaceName:          common.RaceName("Pilot Nova"),
+				PreferredLanguage: common.LanguageTag("en"),
+				TimeZone:          common.TimeZoneName("Europe/Berlin"),
+				CreatedAt:         createdAt,
+				UpdatedAt:         createdAt.Add(-time.Second),
+			},
+			wantErr: true,
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			err := tt.record.Validate()
+			if tt.wantErr {
+				require.Error(t, err)
+				return
+			}
+			require.NoError(t, err)
+		})
+	}
+}
+
+func TestRaceNameReservationValidate(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		name    string
+		record  RaceNameReservation
+		wantErr bool
+	}{
+		{
+			name: "valid",
+			record: RaceNameReservation{
+				CanonicalKey: RaceNameCanonicalKey("pilot-nova"),
+				UserID:       common.UserID("user-123"),
+				RaceName:     common.RaceName("Pilot Nova"),
+				ReservedAt:   time.Unix(1_775_240_100, 0).UTC(),
+			},
+		},
+		{
+			name: "empty canonical key",
+			record: RaceNameReservation{
+				UserID:     common.UserID("user-123"),
+				RaceName:   common.RaceName("Pilot Nova"),
+				ReservedAt: time.Unix(1_775_240_100, 0).UTC(),
+			},
+			wantErr: true,
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			err := tt.record.Validate()
+			if tt.wantErr {
+				require.Error(t, err)
+				return
+			}
+			require.NoError(t, err)
+		})
+	}
+}
diff --git a/user/internal/domain/authblock/model.go b/user/internal/domain/authblock/model.go
new file mode 100644
index 0000000..518c4d5
--- /dev/null
+++ b/user/internal/domain/authblock/model.go
@@ -0,0 +1,56 @@
+// Package authblock defines the dedicated pre-user auth-block entity stored by
+// User Service.
+package authblock
+
+import (
+	"fmt"
+	"time"
+
+	"galaxy/user/internal/domain/common"
+)
+
+// BlockedEmailSubject stores a blocked e-mail subject that may exist before
+// any user account exists.
+type BlockedEmailSubject struct {
+	// Email stores the normalized blocked e-mail subject.
+	Email common.Email
+
+	// ReasonCode stores the machine-readable reason for the block.
+	ReasonCode common.ReasonCode
+
+	// BlockedAt stores when the block became effective.
+	BlockedAt time.Time
+
+	// Actor stores optional audit metadata for the block initiator.
+	Actor common.ActorRef
+
+	// ResolvedUserID stores the linked user when the blocked e-mail already
+	// belongs to an existing account.
+	ResolvedUserID common.UserID
+}
+
+// Validate reports whether BlockedEmailSubject satisfies the frozen Stage 02
+// structural invariants.
+func (record BlockedEmailSubject) Validate() error {
+	if err := record.Email.Validate(); err != nil {
+		return fmt.Errorf("blocked email subject email: %w", err)
+	}
+	if err := record.ReasonCode.Validate(); err != nil {
+		return fmt.Errorf("blocked email subject reason code: %w", err)
+	}
+	if err := common.ValidateTimestamp("blocked email subject blocked at", record.BlockedAt); err != nil {
+		return err
+	}
+	if !record.Actor.IsZero() {
+		if err := record.Actor.Validate(); err != nil {
+			return fmt.Errorf("blocked email subject actor: %w", err)
+		}
+	}
+	if !record.ResolvedUserID.IsZero() {
+		if err := record.ResolvedUserID.Validate(); err != nil {
+			return fmt.Errorf("blocked email subject resolved user id: %w", err)
+		}
+	}
+
+	return nil
+}
diff --git a/user/internal/domain/authblock/model_test.go b/user/internal/domain/authblock/model_test.go
new file mode 100644
index 0000000..d45349f
--- /dev/null
+++ b/user/internal/domain/authblock/model_test.go
@@ -0,0 +1,61 @@
+package authblock
+
+import (
+	"testing"
+	"time"
+
+	"galaxy/user/internal/domain/common"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestBlockedEmailSubjectValidate(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		name    string
+		record  BlockedEmailSubject
+		wantErr bool
+	}{
+		{
+			name: "valid without actor or user",
+			record: BlockedEmailSubject{
+				Email:      common.Email("pilot@example.com"),
+				ReasonCode: common.ReasonCode("policy_blocked"),
+				BlockedAt:  time.Unix(1_775_240_000, 0).UTC(),
+			},
+		},
+		{
+			name: "valid with actor and user",
+			record: BlockedEmailSubject{
+				Email:          common.Email("pilot@example.com"),
+				ReasonCode:     common.ReasonCode("policy_blocked"),
+				BlockedAt:      time.Unix(1_775_240_000, 0).UTC(),
+				Actor:          common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+				ResolvedUserID: common.UserID("user-123"),
+			},
+		},
+		{
+			name: "missing blocked at",
+			record: BlockedEmailSubject{
+				Email:      common.Email("pilot@example.com"),
+				ReasonCode: common.ReasonCode("policy_blocked"),
+			},
+			wantErr: true,
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			err := tt.record.Validate()
+			if tt.wantErr {
+				require.Error(t, err)
+				return
+			}
+			require.NoError(t, err)
+		})
+	}
+}
diff --git a/user/internal/domain/common/types.go b/user/internal/domain/common/types.go
new file mode 100644
index 0000000..bc370b0
--- /dev/null
+++ b/user/internal/domain/common/types.go
@@ -0,0 +1,338 @@
+// Package common defines shared value objects used across the user-service
+// domain model.
+package common
+
+import (
+	"errors"
+	"fmt"
+	"net/mail"
+	"strings"
+	"time"
+)
+
+const (
+	maxRaceNameLength     = 64
+	maxLanguageTagLength  = 32
+	maxTimeZoneNameLength = 128
+)
+
+// UserID identifies one regular-platform user owned by User Service.
+type UserID string
+
+// String returns UserID as its stored identifier string.
+func (id UserID) String() string {
+	return string(id)
+}
+
+// IsZero reports whether UserID does not contain a usable identifier.
+func (id UserID) IsZero() bool {
+	return strings.TrimSpace(string(id)) == ""
+}
+
+// Validate reports whether UserID is non-empty, normalized, and uses the
+// frozen Stage 02 prefix.
+func (id UserID) Validate() error {
+	return validatePrefixedToken("user id", string(id), "user-")
+}
+
+// Email stores one normalized user-login e-mail address.
+type Email string + +// String returns Email as its stored canonical string. +func (email Email) String() string { + return string(email) +} + +// IsZero reports whether Email does not contain a usable address. +func (email Email) IsZero() bool { + return strings.TrimSpace(string(email)) == "" +} + +// Validate reports whether Email is non-empty, trimmed, and matches the same +// single-address syntax expected by internal REST contracts. +func (email Email) Validate() error { + raw := string(email) + if err := validateToken("email", raw); err != nil { + return err + } + + parsedAddress, err := mail.ParseAddress(raw) + if err != nil || parsedAddress.Name != "" || parsedAddress.Address != raw { + return fmt.Errorf("email %q must be a single valid email address", raw) + } + + return nil +} + +// RaceName stores one original-casing race name selected for the user +// account. +type RaceName string + +// String returns RaceName as its stored value. +func (name RaceName) String() string { + return string(name) +} + +// IsZero reports whether RaceName does not contain a usable value. +func (name RaceName) IsZero() bool { + return strings.TrimSpace(string(name)) == "" +} + +// Validate reports whether RaceName is non-empty, trimmed, and within the +// frozen OpenAPI length bound. +func (name RaceName) Validate() error { + raw := string(name) + if err := validateToken("race name", raw); err != nil { + return err + } + if len(raw) > maxRaceNameLength { + return fmt.Errorf("race name must be at most %d bytes", maxRaceNameLength) + } + + return nil +} + +// LanguageTag stores one declared BCP 47 language-tag string. +type LanguageTag string + +// String returns LanguageTag as its stored value. +func (tag LanguageTag) String() string { + return string(tag) +} + +// IsZero reports whether LanguageTag does not contain a usable value. 
+func (tag LanguageTag) IsZero() bool { + return strings.TrimSpace(string(tag)) == "" +} + +// Validate reports whether LanguageTag is non-empty, trimmed, and within the +// frozen OpenAPI length bound. Stage 02 intentionally freezes the storage +// shape and not the later boundary-level BCP 47 parser choice. +func (tag LanguageTag) Validate() error { + raw := string(tag) + if err := validateToken("language tag", raw); err != nil { + return err + } + if len(raw) > maxLanguageTagLength { + return fmt.Errorf("language tag must be at most %d bytes", maxLanguageTagLength) + } + + return nil +} + +// TimeZoneName stores one declared IANA time-zone name. +type TimeZoneName string + +// String returns TimeZoneName as its stored value. +func (name TimeZoneName) String() string { + return string(name) +} + +// IsZero reports whether TimeZoneName does not contain a usable value. +func (name TimeZoneName) IsZero() bool { + return strings.TrimSpace(string(name)) == "" +} + +// Validate reports whether TimeZoneName is non-empty, trimmed, and within the +// frozen OpenAPI length bound. Later application stages may tighten +// boundary-level validation further. +func (name TimeZoneName) Validate() error { + raw := string(name) + if err := validateToken("time zone name", raw); err != nil { + return err + } + if len(raw) > maxTimeZoneNameLength { + return fmt.Errorf("time zone name must be at most %d bytes", maxTimeZoneNameLength) + } + + return nil +} + +// CountryCode stores one ISO 3166-1 alpha-2 code. +type CountryCode string + +// String returns CountryCode as its stored value. +func (code CountryCode) String() string { + return string(code) +} + +// IsZero reports whether CountryCode does not contain a usable value. +func (code CountryCode) IsZero() bool { + return strings.TrimSpace(string(code)) == "" +} + +// Validate reports whether CountryCode is an uppercase ISO 3166-1 alpha-2 +// code. 
+func (code CountryCode) Validate() error { + raw := string(code) + if len(raw) != 2 { + return fmt.Errorf("country code %q must contain exactly two letters", raw) + } + for idx := 0; idx < len(raw); idx++ { + if raw[idx] < 'A' || raw[idx] > 'Z' { + return fmt.Errorf("country code %q must contain only uppercase ASCII letters", raw) + } + } + + return nil +} + +// ActorType stores one machine-readable actor type for audit metadata. +type ActorType string + +// String returns ActorType as its stored value. +func (actorType ActorType) String() string { + return string(actorType) +} + +// IsZero reports whether ActorType does not contain a usable value. +func (actorType ActorType) IsZero() bool { + return strings.TrimSpace(string(actorType)) == "" +} + +// Validate reports whether ActorType is non-empty and trimmed. +func (actorType ActorType) Validate() error { + return validateToken("actor type", string(actorType)) +} + +// ActorID stores one optional stable actor identifier. +type ActorID string + +// String returns ActorID as its stored value. +func (actorID ActorID) String() string { + return string(actorID) +} + +// IsZero reports whether ActorID does not contain a usable value. +func (actorID ActorID) IsZero() bool { + return strings.TrimSpace(string(actorID)) == "" +} + +// Validate reports whether ActorID is trimmed when present. +func (actorID ActorID) Validate() error { + if actorID.IsZero() { + return nil + } + + return validateToken("actor id", string(actorID)) +} + +// ActorRef stores actor metadata captured on trusted mutations. +type ActorRef struct { + // Type identifies the machine-readable actor class such as `admin`, + // `service`, or `billing`. + Type ActorType + + // ID stores the optional stable actor identifier. + ID ActorID +} + +// IsZero reports whether ActorRef does not contain any audit actor metadata. 
+func (ref ActorRef) IsZero() bool { + return ref.Type.IsZero() && ref.ID.IsZero() +} + +// Validate reports whether ActorRef contains a required type and an optional +// trimmed identifier. +func (ref ActorRef) Validate() error { + if err := ref.Type.Validate(); err != nil { + return fmt.Errorf("actor ref type: %w", err) + } + if err := ref.ID.Validate(); err != nil { + return fmt.Errorf("actor ref id: %w", err) + } + + return nil +} + +// ReasonCode stores one machine-readable reason code. +type ReasonCode string + +// String returns ReasonCode as its stored value. +func (code ReasonCode) String() string { + return string(code) +} + +// IsZero reports whether ReasonCode does not contain a usable value. +func (code ReasonCode) IsZero() bool { + return strings.TrimSpace(string(code)) == "" +} + +// Validate reports whether ReasonCode is non-empty and trimmed. +func (code ReasonCode) Validate() error { + return validateToken("reason code", string(code)) +} + +// Source stores one machine-readable mutation source. +type Source string + +// String returns Source as its stored value. +func (source Source) String() string { + return string(source) +} + +// IsZero reports whether Source does not contain a usable value. +func (source Source) IsZero() bool { + return strings.TrimSpace(string(source)) == "" +} + +// Validate reports whether Source is non-empty and trimmed. +func (source Source) Validate() error { + return validateToken("source", string(source)) +} + +// Scope stores one machine-readable sanction scope. +type Scope string + +// String returns Scope as its stored value. +func (scope Scope) String() string { + return string(scope) +} + +// IsZero reports whether Scope does not contain a usable value. +func (scope Scope) IsZero() bool { + return strings.TrimSpace(string(scope)) == "" +} + +// Validate reports whether Scope is non-empty and trimmed. 
+func (scope Scope) Validate() error { + return validateToken("scope", string(scope)) +} + +// ValidateTimestamp reports whether value is set. +func ValidateTimestamp(name string, value time.Time) error { + if value.IsZero() { + return fmt.Errorf("%s must not be zero", name) + } + + return nil +} + +func validateToken(name string, value string) error { + switch { + case strings.TrimSpace(value) == "": + return fmt.Errorf("%s must not be empty", name) + case strings.TrimSpace(value) != value: + return fmt.Errorf("%s must not contain surrounding whitespace", name) + default: + return nil + } +} + +func validatePrefixedToken(name string, value string, prefix string) error { + if err := validateToken(name, value); err != nil { + return err + } + if !strings.HasPrefix(value, prefix) { + return fmt.Errorf("%s must start with %q", name, prefix) + } + if len(value) == len(prefix) { + return fmt.Errorf("%s must contain opaque data after %q", name, prefix) + } + + return nil +} + +// ErrInvertedTimeRange reports that the logical end of a range is not after +// its start. 
+var ErrInvertedTimeRange = errors.New("time range end must be after start") diff --git a/user/internal/domain/common/types_test.go b/user/internal/domain/common/types_test.go new file mode 100644 index 0000000..f88a167 --- /dev/null +++ b/user/internal/domain/common/types_test.go @@ -0,0 +1,207 @@ +package common + +import ( + "testing" + + "github.com/stretchr/testify/require" +) + +func TestUserIDValidate(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + value UserID + wantErr bool + }{ + {name: "valid", value: UserID("user-abc123")}, + {name: "empty", value: UserID(""), wantErr: true}, + {name: "surrounding whitespace", value: UserID(" user-abc123 "), wantErr: true}, + {name: "wrong prefix", value: UserID("account-abc123"), wantErr: true}, + {name: "prefix only", value: UserID("user-"), wantErr: true}, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + err := tt.value.Validate() + if tt.wantErr { + require.Error(t, err) + return + } + require.NoError(t, err) + }) + } +} + +func TestEmailValidate(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + value Email + wantErr bool + }{ + {name: "valid", value: Email("pilot@example.com")}, + {name: "empty", value: Email(""), wantErr: true}, + {name: "display name", value: Email("Pilot "), wantErr: true}, + {name: "invalid", value: Email("not-an-email"), wantErr: true}, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + err := tt.value.Validate() + if tt.wantErr { + require.Error(t, err) + return + } + require.NoError(t, err) + }) + } +} + +func TestRaceNameValidate(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + value RaceName + wantErr bool + }{ + {name: "valid", value: RaceName("Admiral Nova")}, + {name: "empty", value: RaceName(""), wantErr: true}, + {name: "too long", value: 
RaceName("abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmn"), wantErr: true}, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + err := tt.value.Validate() + if tt.wantErr { + require.Error(t, err) + return + } + require.NoError(t, err) + }) + } +} + +func TestLanguageTagValidate(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + value LanguageTag + wantErr bool + }{ + {name: "valid", value: LanguageTag("en-US")}, + {name: "empty", value: LanguageTag(""), wantErr: true}, + {name: "surrounding whitespace", value: LanguageTag(" en "), wantErr: true}, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + err := tt.value.Validate() + if tt.wantErr { + require.Error(t, err) + return + } + require.NoError(t, err) + }) + } +} + +func TestTimeZoneNameValidate(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + value TimeZoneName + wantErr bool + }{ + {name: "valid", value: TimeZoneName("Europe/Berlin")}, + {name: "empty", value: TimeZoneName(""), wantErr: true}, + {name: "surrounding whitespace", value: TimeZoneName(" UTC "), wantErr: true}, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + err := tt.value.Validate() + if tt.wantErr { + require.Error(t, err) + return + } + require.NoError(t, err) + }) + } +} + +func TestCountryCodeValidate(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + value CountryCode + wantErr bool + }{ + {name: "valid", value: CountryCode("DE")}, + {name: "lowercase", value: CountryCode("de"), wantErr: true}, + {name: "wrong length", value: CountryCode("DEU"), wantErr: true}, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + err := tt.value.Validate() + if tt.wantErr { + require.Error(t, err) + return + } + require.NoError(t, err) + }) + } +} + +func 
TestActorRefValidate(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + value ActorRef + wantErr bool + }{ + {name: "valid without id", value: ActorRef{Type: ActorType("service")}}, + {name: "valid with id", value: ActorRef{Type: ActorType("admin"), ID: ActorID("admin-1")}}, + {name: "missing type", value: ActorRef{ID: ActorID("admin-1")}, wantErr: true}, + {name: "invalid id whitespace", value: ActorRef{Type: ActorType("admin"), ID: ActorID(" admin-1 ")}, wantErr: true}, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + err := tt.value.Validate() + if tt.wantErr { + require.Error(t, err) + return + } + require.NoError(t, err) + }) + } +} diff --git a/user/internal/domain/entitlement/model.go b/user/internal/domain/entitlement/model.go new file mode 100644 index 0000000..79cd677 --- /dev/null +++ b/user/internal/domain/entitlement/model.go @@ -0,0 +1,325 @@ +// Package entitlement defines the logical entitlement entities owned by User +// Service. +package entitlement + +import ( + "fmt" + "strings" + "time" + + "galaxy/user/internal/domain/common" +) + +// PlanCode identifies one supported entitlement plan. +type PlanCode string + +const ( + // PlanCodeFree reports the free default entitlement. + PlanCodeFree PlanCode = "free" + + // PlanCodePaidMonthly reports a finite monthly paid entitlement. + PlanCodePaidMonthly PlanCode = "paid_monthly" + + // PlanCodePaidYearly reports a finite yearly paid entitlement. + PlanCodePaidYearly PlanCode = "paid_yearly" + + // PlanCodePaidLifetime reports a non-expiring paid entitlement. + PlanCodePaidLifetime PlanCode = "paid_lifetime" +) + +// IsKnown reports whether PlanCode belongs to the frozen v1 catalog. 
+func (code PlanCode) IsKnown() bool { + switch code { + case PlanCodeFree, PlanCodePaidMonthly, PlanCodePaidYearly, PlanCodePaidLifetime: + return true + default: + return false + } +} + +// IsPaid reports whether PlanCode represents a paid entitlement state. +func (code PlanCode) IsPaid() bool { + switch code { + case PlanCodePaidMonthly, PlanCodePaidYearly, PlanCodePaidLifetime: + return true + default: + return false + } +} + +// HasFiniteExpiry reports whether PlanCode requires a bounded `ends_at` +// value in the Stage 07 entitlement timeline model. +func (code PlanCode) HasFiniteExpiry() bool { + switch code { + case PlanCodePaidMonthly, PlanCodePaidYearly: + return true + default: + return false + } +} + +// EntitlementRecordID identifies one immutable entitlement history record. +type EntitlementRecordID string + +// String returns EntitlementRecordID as its stored identifier string. +func (id EntitlementRecordID) String() string { + return string(id) +} + +// IsZero reports whether EntitlementRecordID does not contain a usable value. +func (id EntitlementRecordID) IsZero() bool { + return strings.TrimSpace(string(id)) == "" +} + +// Validate reports whether EntitlementRecordID is non-empty, normalized, and +// uses the frozen Stage 02 prefix. +func (id EntitlementRecordID) Validate() error { + switch { + case id.IsZero(): + return fmt.Errorf("entitlement record id must not be empty") + case strings.TrimSpace(string(id)) != string(id): + return fmt.Errorf("entitlement record id must not contain surrounding whitespace") + case !strings.HasPrefix(string(id), "entitlement-"): + return fmt.Errorf("entitlement record id must start with %q", "entitlement-") + case len(string(id)) == len("entitlement-"): + return fmt.Errorf("entitlement record id must contain opaque data after %q", "entitlement-") + default: + return nil + } +} + +// PeriodRecord stores one entitlement-period history record. 
+type PeriodRecord struct { + // RecordID identifies the immutable history record. + RecordID EntitlementRecordID + + // UserID identifies the account that owns the entitlement record. + UserID common.UserID + + // PlanCode stores the effective plan for the recorded period. + PlanCode PlanCode + + // Source stores the machine-readable mutation source. + Source common.Source + + // Actor stores the audit actor metadata captured for the mutation. + Actor common.ActorRef + + // ReasonCode stores the machine-readable reason for the mutation. + ReasonCode common.ReasonCode + + // StartsAt stores when the period becomes effective. + StartsAt time.Time + + // EndsAt stores the optional planned end of the period. + EndsAt *time.Time + + // CreatedAt stores when the history record was created. + CreatedAt time.Time + + // ClosedAt stores when the period was later closed early by another trusted + // mutation. + ClosedAt *time.Time + + // ClosedBy stores optional audit actor metadata for the close mutation. + ClosedBy common.ActorRef + + // ClosedReasonCode stores the reason for closing the period early. + ClosedReasonCode common.ReasonCode +} + +// Validate reports whether PeriodRecord satisfies the frozen Stage 02 +// structural invariants. 
+func (record PeriodRecord) Validate() error { + if err := record.RecordID.Validate(); err != nil { + return fmt.Errorf("entitlement period record id: %w", err) + } + if err := record.UserID.Validate(); err != nil { + return fmt.Errorf("entitlement period user id: %w", err) + } + if !record.PlanCode.IsKnown() { + return fmt.Errorf("entitlement period plan code %q is unsupported", record.PlanCode) + } + if err := record.Source.Validate(); err != nil { + return fmt.Errorf("entitlement period source: %w", err) + } + if err := record.Actor.Validate(); err != nil { + return fmt.Errorf("entitlement period actor: %w", err) + } + if err := record.ReasonCode.Validate(); err != nil { + return fmt.Errorf("entitlement period reason code: %w", err) + } + if err := common.ValidateTimestamp("entitlement period starts at", record.StartsAt); err != nil { + return err + } + if err := validatePlanBounds("entitlement period", record.PlanCode, record.StartsAt, record.EndsAt); err != nil { + return err + } + if err := common.ValidateTimestamp("entitlement period created at", record.CreatedAt); err != nil { + return err + } + if record.ClosedAt == nil { + if !record.ClosedBy.IsZero() { + return fmt.Errorf("entitlement period closed by must be empty when closed at is absent") + } + if !record.ClosedReasonCode.IsZero() { + return fmt.Errorf("entitlement period closed reason code must be empty when closed at is absent") + } + return nil + } + if record.ClosedAt.Before(record.StartsAt) { + return fmt.Errorf("entitlement period closed at must not be before starts at") + } + if record.EndsAt != nil && record.ClosedAt.After(*record.EndsAt) { + return fmt.Errorf("entitlement period closed at must not be after ends at") + } + if record.ClosedAt.Before(record.CreatedAt) { + return fmt.Errorf("entitlement period closed at must not be before created at") + } + if err := record.ClosedBy.Validate(); err != nil { + return fmt.Errorf("entitlement period closed by: %w", err) + } + if err := 
record.ClosedReasonCode.Validate(); err != nil { + return fmt.Errorf("entitlement period closed reason code: %w", err) + } + + return nil +} + +// IsEffectiveAt reports whether PeriodRecord is the currently effective +// segment at the supplied timestamp. +func (record PeriodRecord) IsEffectiveAt(now time.Time) bool { + if record.ClosedAt != nil { + return false + } + if record.StartsAt.After(now) { + return false + } + if record.EndsAt != nil && !record.EndsAt.After(now) { + return false + } + + return true +} + +// CurrentSnapshot stores the read-optimized current entitlement state of one +// user account. +type CurrentSnapshot struct { + // UserID identifies the account that owns the current entitlement. + UserID common.UserID + + // PlanCode stores the current effective plan code. + PlanCode PlanCode + + // IsPaid stores the materialized paid/free state used on hot read paths. + IsPaid bool + + // StartsAt stores when the current effective state started. + StartsAt time.Time + + // EndsAt stores the optional end of the current finite entitlement. + EndsAt *time.Time + + // Source stores the machine-readable source of the current state. + Source common.Source + + // Actor stores the actor metadata attached to the last successful mutation. + Actor common.ActorRef + + // ReasonCode stores the machine-readable reason attached to the last + // successful mutation. + ReasonCode common.ReasonCode + + // UpdatedAt stores when the snapshot was last recomputed. + UpdatedAt time.Time +} + +// Validate reports whether CurrentSnapshot satisfies the frozen Stage 02 +// structural invariants. 
+func (record CurrentSnapshot) Validate() error { + if err := record.UserID.Validate(); err != nil { + return fmt.Errorf("entitlement snapshot user id: %w", err) + } + if !record.PlanCode.IsKnown() { + return fmt.Errorf("entitlement snapshot plan code %q is unsupported", record.PlanCode) + } + if record.IsPaid != record.PlanCode.IsPaid() { + return fmt.Errorf("entitlement snapshot paid flag must match plan code %q", record.PlanCode) + } + if err := common.ValidateTimestamp("entitlement snapshot starts at", record.StartsAt); err != nil { + return err + } + if err := validatePlanBounds("entitlement snapshot", record.PlanCode, record.StartsAt, record.EndsAt); err != nil { + return err + } + if err := record.Source.Validate(); err != nil { + return fmt.Errorf("entitlement snapshot source: %w", err) + } + if err := record.Actor.Validate(); err != nil { + return fmt.Errorf("entitlement snapshot actor: %w", err) + } + if err := record.ReasonCode.Validate(); err != nil { + return fmt.Errorf("entitlement snapshot reason code: %w", err) + } + if err := common.ValidateTimestamp("entitlement snapshot updated at", record.UpdatedAt); err != nil { + return err + } + + return nil +} + +// HasFiniteExpiry reports whether CurrentSnapshot participates in the finite +// paid-expiry index. +func (record CurrentSnapshot) HasFiniteExpiry() bool { + return record.IsPaid && record.EndsAt != nil +} + +// IsExpiredAt reports whether CurrentSnapshot represents a finite paid state +// that has already reached its stored expiry. +func (record CurrentSnapshot) IsExpiredAt(now time.Time) bool { + return record.HasFiniteExpiry() && !record.EndsAt.After(now) +} + +// PaidState identifies the coarse free-versus-paid filter used by admin +// listing. +type PaidState string + +const ( + // PaidStateFree filters accounts whose current entitlement is free. + PaidStateFree PaidState = "free" + + // PaidStatePaid filters accounts whose current entitlement is paid. 
+ PaidStatePaid PaidState = "paid" +) + +// IsKnown reports whether PaidState belongs to the frozen Stage 02 filter +// vocabulary. +func (state PaidState) IsKnown() bool { + switch state { + case "", PaidStateFree, PaidStatePaid: + return true + default: + return false + } +} + +func validatePlanBounds( + name string, + planCode PlanCode, + startsAt time.Time, + endsAt *time.Time, +) error { + switch { + case planCode.HasFiniteExpiry(): + if endsAt == nil { + return fmt.Errorf("%s ends at must be present for plan code %q", name, planCode) + } + if !endsAt.After(startsAt) { + return common.ErrInvertedTimeRange + } + case endsAt != nil: + return fmt.Errorf("%s ends at must be empty for plan code %q", name, planCode) + } + + return nil +} diff --git a/user/internal/domain/entitlement/model_test.go b/user/internal/domain/entitlement/model_test.go new file mode 100644 index 0000000..8c8dc56 --- /dev/null +++ b/user/internal/domain/entitlement/model_test.go @@ -0,0 +1,159 @@ +package entitlement + +import ( + "testing" + "time" + + "galaxy/user/internal/domain/common" + + "github.com/stretchr/testify/require" +) + +func TestPeriodRecordValidate(t *testing.T) { + t.Parallel() + + startsAt := time.Unix(1_775_240_000, 0).UTC() + endsAt := startsAt.Add(30 * 24 * time.Hour) + createdAt := startsAt.Add(-time.Hour) + closedAt := startsAt.Add(12 * time.Hour) + + tests := []struct { + name string + record PeriodRecord + wantErr bool + }{ + { + name: "valid open record", + record: PeriodRecord{ + RecordID: EntitlementRecordID("entitlement-123"), + UserID: common.UserID("user-123"), + PlanCode: PlanCodePaidMonthly, + Source: common.Source("admin"), + Actor: common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}, + ReasonCode: common.ReasonCode("manual_grant"), + StartsAt: startsAt, + EndsAt: &endsAt, + CreatedAt: createdAt, + }, + }, + { + name: "valid closed record", + record: PeriodRecord{ + RecordID: EntitlementRecordID("entitlement-123"), + UserID: 
common.UserID("user-123"),
+                PlanCode:         PlanCodePaidMonthly,
+                Source:           common.Source("admin"),
+                Actor:            common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+                ReasonCode:       common.ReasonCode("manual_grant"),
+                StartsAt:         startsAt,
+                EndsAt:           &endsAt,
+                CreatedAt:        createdAt,
+                ClosedAt:         &closedAt,
+                ClosedBy:         common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-2")},
+                ClosedReasonCode: common.ReasonCode("manual_revoke"),
+            },
+        },
+        {
+            name: "close metadata without closed at",
+            record: PeriodRecord{
+                RecordID:         EntitlementRecordID("entitlement-123"),
+                UserID:           common.UserID("user-123"),
+                PlanCode:         PlanCodePaidMonthly,
+                Source:           common.Source("admin"),
+                Actor:            common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+                ReasonCode:       common.ReasonCode("manual_grant"),
+                StartsAt:         startsAt,
+                CreatedAt:        createdAt,
+                ClosedReasonCode: common.ReasonCode("manual_revoke"),
+            },
+            wantErr: true,
+        },
+    }
+
+    for _, tt := range tests {
+        tt := tt
+        t.Run(tt.name, func(t *testing.T) {
+            t.Parallel()
+
+            err := tt.record.Validate()
+            if tt.wantErr {
+                require.Error(t, err)
+                return
+            }
+            require.NoError(t, err)
+        })
+    }
+}
+
+func TestCurrentSnapshotValidate(t *testing.T) {
+    t.Parallel()
+
+    startsAt := time.Unix(1_775_240_000, 0).UTC()
+    endsAt := startsAt.Add(30 * 24 * time.Hour)
+    updatedAt := startsAt.Add(2 * time.Hour)
+
+    tests := []struct {
+        name       string
+        record     CurrentSnapshot
+        wantErr    bool
+        wantFinite bool
+    }{
+        {
+            name: "valid finite paid snapshot",
+            record: CurrentSnapshot{
+                UserID:     common.UserID("user-123"),
+                PlanCode:   PlanCodePaidMonthly,
+                IsPaid:     true,
+                StartsAt:   startsAt,
+                EndsAt:     &endsAt,
+                Source:     common.Source("admin"),
+                Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+                ReasonCode: common.ReasonCode("manual_grant"),
+                UpdatedAt:  updatedAt,
+            },
+            wantFinite: true,
+        },
+        {
+            name: "valid free snapshot",
+            record: CurrentSnapshot{
+                UserID:     common.UserID("user-123"),
+                PlanCode:   PlanCodeFree,
+                IsPaid:     false,
+                StartsAt:   startsAt,
+                Source:     common.Source("system"),
+                Actor:      common.ActorRef{Type: common.ActorType("service")},
+                ReasonCode: common.ReasonCode("default_free_plan"),
+                UpdatedAt:  updatedAt,
+            },
+        },
+        {
+            name: "paid flag mismatch",
+            record: CurrentSnapshot{
+                UserID:     common.UserID("user-123"),
+                PlanCode:   PlanCodeFree,
+                IsPaid:     true,
+                StartsAt:   startsAt,
+                Source:     common.Source("system"),
+                Actor:      common.ActorRef{Type: common.ActorType("service")},
+                ReasonCode: common.ReasonCode("default_free_plan"),
+                UpdatedAt:  updatedAt,
+            },
+            wantErr: true,
+        },
+    }
+
+    for _, tt := range tests {
+        tt := tt
+        t.Run(tt.name, func(t *testing.T) {
+            t.Parallel()
+
+            err := tt.record.Validate()
+            if tt.wantErr {
+                require.Error(t, err)
+                return
+            }
+            require.NoError(t, err)
+            require.Equal(t, tt.wantFinite, tt.record.HasFiniteExpiry())
+        })
+    }
+}
diff --git a/user/internal/domain/policy/model.go b/user/internal/domain/policy/model.go
new file mode 100644
index 0000000..a2f328c
--- /dev/null
+++ b/user/internal/domain/policy/model.go
@@ -0,0 +1,511 @@
+// Package policy defines sanction, limit, and eligibility-domain entities used
+// by User Service.
+package policy
+
+import (
+    "fmt"
+    "slices"
+    "strings"
+    "time"
+
+    "galaxy/user/internal/domain/common"
+)
+
+// SanctionCode identifies one supported sanction in the v1 policy catalog.
+type SanctionCode string
+
+const (
+    // SanctionCodeLoginBlock denies login.
+    SanctionCodeLoginBlock SanctionCode = "login_block"
+
+    // SanctionCodePrivateGameCreateBlock denies private-game creation.
+    SanctionCodePrivateGameCreateBlock SanctionCode = "private_game_create_block"
+
+    // SanctionCodePrivateGameManageBlock denies private-game management.
+    SanctionCodePrivateGameManageBlock SanctionCode = "private_game_manage_block"
+
+    // SanctionCodeGameJoinBlock denies game joining.
+    SanctionCodeGameJoinBlock SanctionCode = "game_join_block"
+
+    // SanctionCodeProfileUpdateBlock denies self-service profile/settings
+    // mutations.
+    SanctionCodeProfileUpdateBlock SanctionCode = "profile_update_block"
+)
+
+// IsKnown reports whether SanctionCode belongs to the frozen v1 catalog.
+func (code SanctionCode) IsKnown() bool {
+    switch code {
+    case SanctionCodeLoginBlock,
+        SanctionCodePrivateGameCreateBlock,
+        SanctionCodePrivateGameManageBlock,
+        SanctionCodeGameJoinBlock,
+        SanctionCodeProfileUpdateBlock:
+        return true
+    default:
+        return false
+    }
+}
+
+// LimitCode identifies one user-specific limit code recognized by User
+// Service.
+type LimitCode string
+
+const (
+    // LimitCodeMaxOwnedPrivateGames limits how many private games the user may
+    // own while the current entitlement is paid.
+    LimitCodeMaxOwnedPrivateGames LimitCode = "max_owned_private_games"
+
+    // LimitCodeMaxPendingPublicApplications caps the shared public-games
+    // budget consumed by pending public applications together with current
+    // active public memberships; Game Lobby derives the remaining
+    // pending-application headroom from this value.
+    LimitCodeMaxPendingPublicApplications LimitCode = "max_pending_public_applications"
+
+    // LimitCodeMaxActiveGameMemberships limits how many active public-game
+    // memberships the user may hold at once.
+    LimitCodeMaxActiveGameMemberships LimitCode = "max_active_game_memberships"
+)
+
+const (
+    // LimitCodeMaxActivePrivateGames is a retired legacy code recognized only
+    // so old stored records do not break current reads.
+    LimitCodeMaxActivePrivateGames LimitCode = "max_active_private_games"
+
+    // LimitCodeMaxPendingPrivateJoinRequests is a retired legacy code
+    // recognized only so old stored records do not break current reads.
+    LimitCodeMaxPendingPrivateJoinRequests LimitCode = "max_pending_private_join_requests"
+
+    // LimitCodeMaxPendingPrivateInvitesSent is a retired legacy code
+    // recognized only so old stored records do not break current reads.
+    LimitCodeMaxPendingPrivateInvitesSent LimitCode = "max_pending_private_invites_sent"
+)
+
+// IsKnown reports whether LimitCode belongs to the current supported write/API
+// catalog.
+func (code LimitCode) IsKnown() bool {
+    return code.IsSupported()
+}
+
+// IsSupported reports whether LimitCode belongs to the current supported
+// write/API catalog.
+func (code LimitCode) IsSupported() bool {
+    switch code {
+    case LimitCodeMaxOwnedPrivateGames,
+        LimitCodeMaxPendingPublicApplications,
+        LimitCodeMaxActiveGameMemberships:
+        return true
+    default:
+        return false
+    }
+}
+
+// IsRetired reports whether LimitCode is a retired legacy code recognized
+// only for read compatibility with already stored history records.
+func (code LimitCode) IsRetired() bool {
+    switch code {
+    case LimitCodeMaxActivePrivateGames,
+        LimitCodeMaxPendingPrivateJoinRequests,
+        LimitCodeMaxPendingPrivateInvitesSent:
+        return true
+    default:
+        return false
+    }
+}
+
+// IsRecognized reports whether LimitCode is either currently supported or
+// retired-but-recognized for read compatibility.
+func (code LimitCode) IsRecognized() bool {
+    return code.IsSupported() || code.IsRetired()
+}
+
+// EligibilityMarker identifies one derived eligibility boolean that may be
+// indexed for admin listing.
+type EligibilityMarker string
+
+const (
+    // EligibilityMarkerCanLogin tracks whether the user may currently log in.
+    EligibilityMarkerCanLogin EligibilityMarker = "can_login"
+
+    // EligibilityMarkerCanCreatePrivateGame tracks whether the user may create
+    // a private game.
+    EligibilityMarkerCanCreatePrivateGame EligibilityMarker = "can_create_private_game"
+
+    // EligibilityMarkerCanManagePrivateGame tracks whether the user may manage
+    // a private game.
+    EligibilityMarkerCanManagePrivateGame EligibilityMarker = "can_manage_private_game"
+
+    // EligibilityMarkerCanJoinGame tracks whether the user may join a game.
+    EligibilityMarkerCanJoinGame EligibilityMarker = "can_join_game"
+
+    // EligibilityMarkerCanUpdateProfile tracks whether the user may update
+    // self-service profile/settings fields.
+    EligibilityMarkerCanUpdateProfile EligibilityMarker = "can_update_profile"
+)
+
+// IsKnown reports whether EligibilityMarker belongs to the frozen v1 set.
+func (marker EligibilityMarker) IsKnown() bool {
+    switch marker {
+    case EligibilityMarkerCanLogin,
+        EligibilityMarkerCanCreatePrivateGame,
+        EligibilityMarkerCanManagePrivateGame,
+        EligibilityMarkerCanJoinGame,
+        EligibilityMarkerCanUpdateProfile:
+        return true
+    default:
+        return false
+    }
+}
+
+// SanctionRecordID identifies one sanction history record.
+type SanctionRecordID string
+
+// String returns SanctionRecordID as its stored identifier string.
+func (id SanctionRecordID) String() string {
+    return string(id)
+}
+
+// IsZero reports whether SanctionRecordID does not contain a usable value.
+func (id SanctionRecordID) IsZero() bool {
+    return strings.TrimSpace(string(id)) == ""
+}
+
+// Validate reports whether SanctionRecordID is non-empty, normalized, and
+// uses the frozen Stage 02 prefix.
+func (id SanctionRecordID) Validate() error {
+    return validatePrefixedRecordID("sanction record id", string(id), "sanction-")
+}
+
+// LimitRecordID identifies one limit history record.
+type LimitRecordID string
+
+// String returns LimitRecordID as its stored identifier string.
+func (id LimitRecordID) String() string {
+    return string(id)
+}
+
+// IsZero reports whether LimitRecordID does not contain a usable value.
+func (id LimitRecordID) IsZero() bool {
+    return strings.TrimSpace(string(id)) == ""
+}
+
+// Validate reports whether LimitRecordID is non-empty, normalized, and uses
+// the frozen Stage 02 prefix.
+func (id LimitRecordID) Validate() error {
+    return validatePrefixedRecordID("limit record id", string(id), "limit-")
+}
+
+// SanctionRecord stores one sanction history record.
+type SanctionRecord struct {
+    // RecordID identifies the sanction history record.
+    RecordID SanctionRecordID
+
+    // UserID identifies the account that owns the sanction.
+    UserID common.UserID
+
+    // SanctionCode stores the sanction applied to the account.
+    SanctionCode SanctionCode
+
+    // Scope stores the machine-readable scope attached to the sanction.
+    Scope common.Scope
+
+    // ReasonCode stores the reason for the sanction mutation.
+    ReasonCode common.ReasonCode
+
+    // Actor stores the audit actor metadata for the apply mutation.
+    Actor common.ActorRef
+
+    // AppliedAt stores when the sanction becomes effective.
+    AppliedAt time.Time
+
+    // ExpiresAt stores the optional planned expiry of the sanction.
+    ExpiresAt *time.Time
+
+    // RemovedAt stores when the sanction was later removed explicitly.
+    RemovedAt *time.Time
+
+    // RemovedBy stores the audit actor metadata for the remove mutation.
+    RemovedBy common.ActorRef
+
+    // RemovedReasonCode stores the reason for the remove mutation.
+    RemovedReasonCode common.ReasonCode
+}
+
+// Validate reports whether SanctionRecord satisfies the frozen structural
+// invariants that do not depend on a caller-supplied clock.
+func (record SanctionRecord) Validate() error {
+    if err := record.RecordID.Validate(); err != nil {
+        return fmt.Errorf("sanction record id: %w", err)
+    }
+    if err := record.UserID.Validate(); err != nil {
+        return fmt.Errorf("sanction user id: %w", err)
+    }
+    if !record.SanctionCode.IsKnown() {
+        return fmt.Errorf("sanction code %q is unsupported", record.SanctionCode)
+    }
+    if err := record.Scope.Validate(); err != nil {
+        return fmt.Errorf("sanction scope: %w", err)
+    }
+    if err := record.ReasonCode.Validate(); err != nil {
+        return fmt.Errorf("sanction reason code: %w", err)
+    }
+    if err := record.Actor.Validate(); err != nil {
+        return fmt.Errorf("sanction actor: %w", err)
+    }
+    if err := common.ValidateTimestamp("sanction applied at", record.AppliedAt); err != nil {
+        return err
+    }
+    if record.ExpiresAt != nil && !record.ExpiresAt.After(record.AppliedAt) {
+        return common.ErrInvertedTimeRange
+    }
+    if record.RemovedAt == nil {
+        if !record.RemovedBy.IsZero() {
+            return fmt.Errorf("sanction removed by must be empty when removed at is absent")
+        }
+        if !record.RemovedReasonCode.IsZero() {
+            return fmt.Errorf("sanction removed reason code must be empty when removed at is absent")
+        }
+        return nil
+    }
+    if record.RemovedAt.Before(record.AppliedAt) {
+        return fmt.Errorf("sanction removed at must not be before applied at")
+    }
+    if err := record.RemovedBy.Validate(); err != nil {
+        return fmt.Errorf("sanction removed by: %w", err)
+    }
+    if err := record.RemovedReasonCode.Validate(); err != nil {
+        return fmt.Errorf("sanction removed reason code: %w", err)
+    }
+
+    return nil
+}
+
+// ValidateAt reports whether SanctionRecord also satisfies the current-time
+// Stage 02 invariant that `applied_at` must not be in the future.
+func (record SanctionRecord) ValidateAt(now time.Time) error {
+    if err := record.Validate(); err != nil {
+        return err
+    }
+    if now.IsZero() {
+        return fmt.Errorf("sanction validation time must not be zero")
+    }
+    if record.AppliedAt.After(now.UTC()) {
+        return fmt.Errorf("sanction applied at must not be in the future")
+    }
+
+    return nil
+}
+
+// IsActiveAt reports whether SanctionRecord is active at now according to the
+// frozen Stage 02 rules.
+func (record SanctionRecord) IsActiveAt(now time.Time) bool {
+    now = now.UTC()
+    switch {
+    case now.IsZero():
+        return false
+    case record.AppliedAt.After(now):
+        return false
+    case record.RemovedAt != nil:
+        return false
+    case record.ExpiresAt != nil && !record.ExpiresAt.After(now):
+        return false
+    default:
+        return true
+    }
+}
+
+// LimitRecord stores one user-specific limit history record.
+type LimitRecord struct {
+    // RecordID identifies the limit history record.
+    RecordID LimitRecordID
+
+    // UserID identifies the account that owns the limit.
+    UserID common.UserID
+
+    // LimitCode stores which count-based limit is overridden.
+    LimitCode LimitCode
+
+    // Value stores the override value.
+    Value int
+
+    // ReasonCode stores the reason for the limit mutation.
+    ReasonCode common.ReasonCode
+
+    // Actor stores the audit actor metadata for the set mutation.
+    Actor common.ActorRef
+
+    // AppliedAt stores when the limit becomes effective.
+    AppliedAt time.Time
+
+    // ExpiresAt stores the optional planned expiry of the limit.
+    ExpiresAt *time.Time
+
+    // RemovedAt stores when the limit was later removed explicitly.
+    RemovedAt *time.Time
+
+    // RemovedBy stores the audit actor metadata for the remove mutation.
+    RemovedBy common.ActorRef
+
+    // RemovedReasonCode stores the reason for the remove mutation.
+    RemovedReasonCode common.ReasonCode
+}
+
+// Validate reports whether LimitRecord satisfies the structural invariants
+// that do not depend on a caller-supplied clock. Retired legacy limit codes
+// remain recognized here so already stored records still decode safely.
+func (record LimitRecord) Validate() error {
+    if err := record.RecordID.Validate(); err != nil {
+        return fmt.Errorf("limit record id: %w", err)
+    }
+    if err := record.UserID.Validate(); err != nil {
+        return fmt.Errorf("limit user id: %w", err)
+    }
+    if !record.LimitCode.IsRecognized() {
+        return fmt.Errorf("limit code %q is unsupported", record.LimitCode)
+    }
+    if record.Value < 0 {
+        return fmt.Errorf("limit value must not be negative")
+    }
+    if err := record.ReasonCode.Validate(); err != nil {
+        return fmt.Errorf("limit reason code: %w", err)
+    }
+    if err := record.Actor.Validate(); err != nil {
+        return fmt.Errorf("limit actor: %w", err)
+    }
+    if err := common.ValidateTimestamp("limit applied at", record.AppliedAt); err != nil {
+        return err
+    }
+    if record.ExpiresAt != nil && !record.ExpiresAt.After(record.AppliedAt) {
+        return common.ErrInvertedTimeRange
+    }
+    if record.RemovedAt == nil {
+        if !record.RemovedBy.IsZero() {
+            return fmt.Errorf("limit removed by must be empty when removed at is absent")
+        }
+        if !record.RemovedReasonCode.IsZero() {
+            return fmt.Errorf("limit removed reason code must be empty when removed at is absent")
+        }
+        return nil
+    }
+    if record.RemovedAt.Before(record.AppliedAt) {
+        return fmt.Errorf("limit removed at must not be before applied at")
+    }
+    if err := record.RemovedBy.Validate(); err != nil {
+        return fmt.Errorf("limit removed by: %w", err)
+    }
+    if err := record.RemovedReasonCode.Validate(); err != nil {
+        return fmt.Errorf("limit removed reason code: %w", err)
+    }
+
+    return nil
+}
+
+// ValidateAt reports whether LimitRecord also satisfies the current-time Stage
+// 02 invariant that `applied_at` must not be in the future.
+func (record LimitRecord) ValidateAt(now time.Time) error {
+    if err := record.Validate(); err != nil {
+        return err
+    }
+    if now.IsZero() {
+        return fmt.Errorf("limit validation time must not be zero")
+    }
+    if record.AppliedAt.After(now.UTC()) {
+        return fmt.Errorf("limit applied at must not be in the future")
+    }
+
+    return nil
+}
+
+// IsActiveAt reports whether LimitRecord is active at now according to the
+// frozen Stage 02 rules.
+func (record LimitRecord) IsActiveAt(now time.Time) bool {
+    now = now.UTC()
+    switch {
+    case now.IsZero():
+        return false
+    case record.AppliedAt.After(now):
+        return false
+    case record.RemovedAt != nil:
+        return false
+    case record.ExpiresAt != nil && !record.ExpiresAt.After(now):
+        return false
+    default:
+        return true
+    }
+}
+
+// ActiveSanctionsAt returns the active sanctions at now, sorted
+// deterministically by `sanction_code`. The function returns an error when the
+// input contains structurally invalid records or more than one active sanction
+// for the same `user_id + sanction_code`.
+func ActiveSanctionsAt(records []SanctionRecord, now time.Time) ([]SanctionRecord, error) {
+    active := make([]SanctionRecord, 0, len(records))
+    seen := make(map[SanctionCode]struct{}, len(records))
+
+    for _, record := range records {
+        if err := record.ValidateAt(now); err != nil {
+            return nil, err
+        }
+        if !record.IsActiveAt(now) {
+            continue
+        }
+        if _, ok := seen[record.SanctionCode]; ok {
+            return nil, fmt.Errorf("multiple active sanctions for code %q", record.SanctionCode)
+        }
+        seen[record.SanctionCode] = struct{}{}
+        active = append(active, record)
+    }
+
+    slices.SortFunc(active, func(left SanctionRecord, right SanctionRecord) int {
+        return strings.Compare(string(left.SanctionCode), string(right.SanctionCode))
+    })
+
+    return active, nil
+}
+
+// ActiveLimitsAt returns the active limits at now, sorted deterministically by
+// `limit_code`. Retired legacy limit codes are ignored so historical records
+// stored under the old catalog do not affect current effective reads. The
+// function returns an error when the input contains structurally invalid
+// records or more than one active limit for the same
+// `user_id + limit_code`.
+func ActiveLimitsAt(records []LimitRecord, now time.Time) ([]LimitRecord, error) {
+    active := make([]LimitRecord, 0, len(records))
+    seen := make(map[LimitCode]struct{}, len(records))
+
+    for _, record := range records {
+        if err := record.ValidateAt(now); err != nil {
+            return nil, err
+        }
+        if !record.IsActiveAt(now) {
+            continue
+        }
+        if !record.LimitCode.IsSupported() {
+            continue
+        }
+        if _, ok := seen[record.LimitCode]; ok {
+            return nil, fmt.Errorf("multiple active limits for code %q", record.LimitCode)
+        }
+        seen[record.LimitCode] = struct{}{}
+        active = append(active, record)
+    }
+
+    slices.SortFunc(active, func(left LimitRecord, right LimitRecord) int {
+        return strings.Compare(string(left.LimitCode), string(right.LimitCode))
+    })
+
+    return active, nil
+}
+
+func validatePrefixedRecordID(name string, value string, prefix string) error {
+    switch {
+    case strings.TrimSpace(value) == "":
+        return fmt.Errorf("%s must not be empty", name)
+    case strings.TrimSpace(value) != value:
+        return fmt.Errorf("%s must not contain surrounding whitespace", name)
+    case !strings.HasPrefix(value, prefix):
+        return fmt.Errorf("%s must start with %q", name, prefix)
+    case len(value) == len(prefix):
+        return fmt.Errorf("%s must contain opaque data after %q", name, prefix)
+    default:
+        return nil
+    }
+}
diff --git a/user/internal/domain/policy/model_test.go b/user/internal/domain/policy/model_test.go
new file mode 100644
index 0000000..c309240
--- /dev/null
+++ b/user/internal/domain/policy/model_test.go
@@ -0,0 +1,236 @@
+package policy
+
+import (
+    "testing"
+    "time"
+
+    "galaxy/user/internal/domain/common"
+
+    "github.com/stretchr/testify/require"
+)
+
+func TestSanctionRecordValidateAt(t *testing.T) {
+    t.Parallel()
+
+    now := time.Unix(1_775_240_000, 0).UTC()
+    expiresAt := now.Add(time.Hour)
+    removedAt := now.Add(30 * time.Minute)
+
+    tests := []struct {
+        name       string
+        record     SanctionRecord
+        wantErr    bool
+        wantActive bool
+    }{
+        {
+            name: "active",
+            record: SanctionRecord{
+                RecordID:     SanctionRecordID("sanction-1"),
+                UserID:       common.UserID("user-123"),
+                SanctionCode: SanctionCodeLoginBlock,
+                Scope:        common.Scope("auth"),
+                ReasonCode:   common.ReasonCode("policy_blocked"),
+                Actor:        common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+                AppliedAt:    now.Add(-time.Minute),
+                ExpiresAt:    &expiresAt,
+            },
+            wantActive: true,
+        },
+        {
+            name: "expired",
+            record: SanctionRecord{
+                RecordID:     SanctionRecordID("sanction-1"),
+                UserID:       common.UserID("user-123"),
+                SanctionCode: SanctionCodeLoginBlock,
+                Scope:        common.Scope("auth"),
+                ReasonCode:   common.ReasonCode("policy_blocked"),
+                Actor:        common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+                AppliedAt:    now.Add(-2 * time.Hour),
+                ExpiresAt:    ptrTime(now.Add(-time.Minute)),
+            },
+        },
+        {
+            name: "removed",
+            record: SanctionRecord{
+                RecordID:          SanctionRecordID("sanction-1"),
+                UserID:            common.UserID("user-123"),
+                SanctionCode:      SanctionCodeLoginBlock,
+                Scope:             common.Scope("auth"),
+                ReasonCode:        common.ReasonCode("policy_blocked"),
+                Actor:             common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+                AppliedAt:         now.Add(-time.Hour),
+                RemovedAt:         &removedAt,
+                RemovedBy:         common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-2")},
+                RemovedReasonCode: common.ReasonCode("manual_remove"),
+            },
+        },
+        {
+            name: "future applied at",
+            record: SanctionRecord{
+                RecordID:     SanctionRecordID("sanction-1"),
+                UserID:       common.UserID("user-123"),
+                SanctionCode: SanctionCodeLoginBlock,
+                Scope:        common.Scope("auth"),
+                ReasonCode:   common.ReasonCode("policy_blocked"),
+                Actor:        common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+                AppliedAt:    now.Add(time.Minute),
+            },
+            wantErr: true,
+        },
+    }
+
+    for _, tt := range tests {
+        tt := tt
+        t.Run(tt.name, func(t *testing.T) {
+            t.Parallel()
+
+            err := tt.record.ValidateAt(now)
+            if tt.wantErr {
+                require.Error(t, err)
+                return
+            }
+            require.NoError(t, err)
+            require.Equal(t, tt.wantActive, tt.record.IsActiveAt(now))
+        })
+    }
+}
+
+func TestActiveSanctionsAt(t *testing.T) {
+    t.Parallel()
+
+    now := time.Unix(1_775_240_000, 0).UTC()
+    records := []SanctionRecord{
+        {
+            RecordID:     SanctionRecordID("sanction-1"),
+            UserID:       common.UserID("user-123"),
+            SanctionCode: SanctionCodeProfileUpdateBlock,
+            Scope:        common.Scope("profile"),
+            ReasonCode:   common.ReasonCode("moderation"),
+            Actor:        common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+            AppliedAt:    now.Add(-time.Hour),
+        },
+        {
+            RecordID:     SanctionRecordID("sanction-2"),
+            UserID:       common.UserID("user-123"),
+            SanctionCode: SanctionCodeLoginBlock,
+            Scope:        common.Scope("auth"),
+            ReasonCode:   common.ReasonCode("policy"),
+            Actor:        common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-2")},
+            AppliedAt:    now.Add(-2 * time.Hour),
+            ExpiresAt:    ptrTime(now.Add(-time.Minute)),
+        },
+    }
+
+    active, err := ActiveSanctionsAt(records, now)
+    require.NoError(t, err)
+    require.Len(t, active, 1)
+    require.Equal(t, SanctionCodeProfileUpdateBlock, active[0].SanctionCode)
+}
+
+func TestActiveSanctionsAtDuplicateActiveCode(t *testing.T) {
+    t.Parallel()
+
+    now := time.Unix(1_775_240_000, 0).UTC()
+    _, err := ActiveSanctionsAt([]SanctionRecord{
+        {
+            RecordID:     SanctionRecordID("sanction-1"),
+            UserID:       common.UserID("user-123"),
+            SanctionCode: SanctionCodeLoginBlock,
+            Scope:        common.Scope("auth"),
+            ReasonCode:   common.ReasonCode("policy"),
+            Actor:        common.ActorRef{Type: common.ActorType("admin")},
+            AppliedAt:    now.Add(-time.Hour),
+        },
+        {
+            RecordID:     SanctionRecordID("sanction-2"),
+            UserID:       common.UserID("user-123"),
+            SanctionCode: SanctionCodeLoginBlock,
+            Scope:        common.Scope("auth"),
+            ReasonCode:   common.ReasonCode("policy"),
+            Actor:        common.ActorRef{Type: common.ActorType("admin")},
+            AppliedAt:    now.Add(-2 * time.Hour),
+        },
+    }, now)
+    require.Error(t, err)
+}
+
+func TestLimitRecordValidateAtAndActiveLimits(t *testing.T) {
+    t.Parallel()
+
+    now := time.Unix(1_775_240_000, 0).UTC()
+
+    record := LimitRecord{
+        RecordID:   LimitRecordID("limit-1"),
+        UserID:     common.UserID("user-123"),
+        LimitCode:  LimitCodeMaxOwnedPrivateGames,
+        Value:      3,
+        ReasonCode: common.ReasonCode("manual_override"),
+        Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+        AppliedAt:  now.Add(-time.Minute),
+    }
+    require.NoError(t, record.ValidateAt(now))
+    require.True(t, record.IsActiveAt(now))
+
+    active, err := ActiveLimitsAt([]LimitRecord{
+        record,
+        {
+            RecordID:   LimitRecordID("limit-2"),
+            UserID:     common.UserID("user-123"),
+            LimitCode:  LimitCodeMaxActivePrivateGames,
+            Value:      7,
+            ReasonCode: common.ReasonCode("manual_override"),
+            Actor:      common.ActorRef{Type: common.ActorType("admin")},
+            AppliedAt:  now.Add(-time.Hour),
+        },
+    }, now)
+    require.NoError(t, err)
+    require.Len(t, active, 1)
+    require.Equal(t, LimitCodeMaxOwnedPrivateGames, active[0].LimitCode)
+}
+
+func TestLimitCodeSupportAndRetiredRecognition(t *testing.T) {
+    t.Parallel()
+
+    require.True(t, LimitCodeMaxOwnedPrivateGames.IsSupported())
+    require.True(t, LimitCodeMaxPendingPublicApplications.IsSupported())
+    require.True(t, LimitCodeMaxActiveGameMemberships.IsSupported())
+
+    require.True(t, LimitCodeMaxActivePrivateGames.IsRetired())
+    require.True(t, LimitCodeMaxPendingPrivateJoinRequests.IsRetired())
+    require.True(t, LimitCodeMaxPendingPrivateInvitesSent.IsRetired())
+
+    require.True(t, LimitCodeMaxActivePrivateGames.IsRecognized())
+    require.False(t, LimitCode("unknown_limit").IsRecognized())
+    require.False(t, LimitCodeMaxActivePrivateGames.IsKnown())
+}
+
+func TestActiveLimitsAtDuplicateActiveCode(t *testing.T) {
+    t.Parallel()
+
+    now := time.Unix(1_775_240_000, 0).UTC()
+    _, err := ActiveLimitsAt([]LimitRecord{
+        {
+            RecordID:   LimitRecordID("limit-1"),
+            UserID:     common.UserID("user-123"),
+            LimitCode:  LimitCodeMaxOwnedPrivateGames,
+            Value:      2,
+            ReasonCode: common.ReasonCode("manual_override"),
+            Actor:      common.ActorRef{Type: common.ActorType("admin")},
+            AppliedAt:  now.Add(-time.Hour),
+        },
+        {
+            RecordID:   LimitRecordID("limit-2"),
+            UserID:     common.UserID("user-123"),
+            LimitCode:  LimitCodeMaxOwnedPrivateGames,
+            Value:      5,
+            ReasonCode: common.ReasonCode("manual_override"),
+            Actor:      common.ActorRef{Type: common.ActorType("admin")},
+            AppliedAt:  now.Add(-2 * time.Hour),
+        },
+    }, now)
+    require.Error(t, err)
+}
+
+func ptrTime(value time.Time) *time.Time {
+    return &value
+}
diff --git a/user/internal/logging/logger.go b/user/internal/logging/logger.go
new file mode 100644
index 0000000..54826b6
--- /dev/null
+++ b/user/internal/logging/logger.go
@@ -0,0 +1,43 @@
+// Package logging configures the user-service process logger and provides
+// context-aware helpers for attaching OpenTelemetry trace identifiers.
+package logging
+
+import (
+    "context"
+    "fmt"
+    "log/slog"
+    "os"
+    "strings"
+
+    "go.opentelemetry.io/otel/trace"
+)
+
+// New constructs the process-wide JSON logger from level.
+func New(level string) (*slog.Logger, error) {
+    var slogLevel slog.Level
+    if err := slogLevel.UnmarshalText([]byte(strings.TrimSpace(level))); err != nil {
+        return nil, fmt.Errorf("build logger: %w", err)
+    }
+
+    return slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
+        Level: slogLevel,
+    })), nil
+}
+
+// TraceAttrsFromContext returns slog key-value pairs for the active
+// OpenTelemetry span when ctx carries a valid span context.
+func TraceAttrsFromContext(ctx context.Context) []any {
+    if ctx == nil {
+        return nil
+    }
+
+    spanContext := trace.SpanContextFromContext(ctx)
+    if !spanContext.IsValid() {
+        return nil
+    }
+
+    return []any{
+        "otel_trace_id", spanContext.TraceID().String(),
+        "otel_span_id", spanContext.SpanID().String(),
+    }
+}
diff --git a/user/internal/ports/account_store.go b/user/internal/ports/account_store.go
new file mode 100644
index 0000000..f6d20bb
--- /dev/null
+++ b/user/internal/ports/account_store.go
@@ -0,0 +1,130 @@
+package ports
+
+import (
+    "context"
+    "fmt"
+    "time"
+
+    "galaxy/user/internal/domain/account"
+    "galaxy/user/internal/domain/common"
+)
+
+// CreateAccountInput stores the atomic account-create state that must commit
+// together.
+type CreateAccountInput struct {
+    // Account stores the durable user-account state.
+    Account account.UserAccount
+
+    // Reservation stores the canonical race-name reservation linked to Account.
+    Reservation account.RaceNameReservation
+}
+
+// Validate reports whether CreateAccountInput is structurally complete.
+func (input CreateAccountInput) Validate() error {
+    if err := input.Account.Validate(); err != nil {
+        return fmt.Errorf("create account input account: %w", err)
+    }
+    if err := input.Reservation.Validate(); err != nil {
+        return fmt.Errorf("create account input reservation: %w", err)
+    }
+    if input.Account.UserID != input.Reservation.UserID {
+        return fmt.Errorf("create account input reservation user id must match account user id")
+    }
+    if input.Account.RaceName != input.Reservation.RaceName {
+        return fmt.Errorf("create account input reservation race name must match account race name")
+    }
+
+    return nil
+}
+
+// RenameRaceNameInput stores the atomic state required to replace one stored
+// race name and its canonical reservation.
+type RenameRaceNameInput struct {
+    // UserID identifies the account that must be updated.
+    UserID common.UserID
+
+    // CurrentCanonicalKey stores the currently owned canonical reservation key.
+    CurrentCanonicalKey account.RaceNameCanonicalKey
+
+    // NewRaceName stores the replacement exact stored race name.
+    NewRaceName common.RaceName
+
+    // NewReservation stores the replacement canonical reservation.
+    NewReservation account.RaceNameReservation
+
+    // UpdatedAt stores the account mutation timestamp.
+    UpdatedAt time.Time
+}
+
+// Validate reports whether RenameRaceNameInput is structurally complete.
+func (input RenameRaceNameInput) Validate() error {
+    if err := input.UserID.Validate(); err != nil {
+        return fmt.Errorf("rename race name input user id: %w", err)
+    }
+    if err := input.CurrentCanonicalKey.Validate(); err != nil {
+        return fmt.Errorf("rename race name input current canonical key: %w", err)
+    }
+    if err := input.NewRaceName.Validate(); err != nil {
+        return fmt.Errorf("rename race name input race name: %w", err)
+    }
+    if err := input.NewReservation.Validate(); err != nil {
+        return fmt.Errorf("rename race name input reservation: %w", err)
+    }
+    if err := common.ValidateTimestamp("rename race name input updated at", input.UpdatedAt); err != nil {
+        return err
+    }
+    if input.NewReservation.UserID != input.UserID {
+        return fmt.Errorf("rename race name input reservation user id must match user id")
+    }
+    if input.NewReservation.RaceName != input.NewRaceName {
+        return fmt.Errorf("rename race name input reservation race name must match new race name")
+    }
+
+    return nil
+}
+
+// UserAccountStore persists source-of-truth user-account records and their
+// exact lookup mappings.
+type UserAccountStore interface {
+    // Create stores one new account record. Implementations must wrap
+    // ErrConflict when the user id, e-mail, or exact race-name lookup already
+    // exists.
+    Create(ctx context.Context, input CreateAccountInput) error
+
+    // GetByUserID returns the stored account identified by userID.
+    GetByUserID(ctx context.Context, userID common.UserID) (account.UserAccount, error)
+
+    // GetByEmail returns the stored account identified by the normalized e-mail
+    // address.
+    GetByEmail(ctx context.Context, email common.Email) (account.UserAccount, error)
+
+    // GetByRaceName returns the stored account identified by the exact stored
+    // race name.
+    GetByRaceName(ctx context.Context, raceName common.RaceName) (account.UserAccount, error)
+
+    // ExistsByUserID reports whether userID currently identifies a stored
+    // account.
+    ExistsByUserID(ctx context.Context, userID common.UserID) (bool, error)
+
+    // RenameRaceName replaces the stored race name of userID and swaps the
+    // exact race-name lookup atomically. Implementations must wrap ErrConflict
+    // when newRaceName is already owned by another account.
+    RenameRaceName(ctx context.Context, input RenameRaceNameInput) error
+
+    // Update replaces the stored account state for record.UserID.
+    Update(ctx context.Context, record account.UserAccount) error
+}
+
+// RaceNameReservationStore persists source-of-truth race-name reservations.
+type RaceNameReservationStore interface {
+    // Create stores one new race-name reservation keyed by its canonical
+    // uniqueness key. Implementations must wrap ErrConflict when the canonical
+    // key is already reserved.
+    Create(ctx context.Context, record account.RaceNameReservation) error
+
+    // GetByCanonicalKey returns the stored reservation identified by key.
+    GetByCanonicalKey(ctx context.Context, key account.RaceNameCanonicalKey) (account.RaceNameReservation, error)
+
+    // DeleteByCanonicalKey removes the reservation identified by key.
+    DeleteByCanonicalKey(ctx context.Context, key account.RaceNameCanonicalKey) error
+}
diff --git a/user/internal/ports/auth_directory_store.go b/user/internal/ports/auth_directory_store.go
new file mode 100644
index 0000000..37009d7
--- /dev/null
+++ b/user/internal/ports/auth_directory_store.go
@@ -0,0 +1,369 @@
+package ports
+
+import (
+    "context"
+    "fmt"
+    "time"
+
+    "galaxy/user/internal/domain/account"
+    "galaxy/user/internal/domain/common"
+    "galaxy/user/internal/domain/entitlement"
+)
+
+// AuthResolutionKind identifies the coarse auth-facing resolution state of one
+// e-mail subject.
+type AuthResolutionKind string
+
+const (
+    // AuthResolutionKindExisting reports that the e-mail belongs to an existing
+    // account.
+    AuthResolutionKindExisting AuthResolutionKind = "existing"
+
+    // AuthResolutionKindCreatable reports that the e-mail is not blocked and no
+    // account exists yet.
+    AuthResolutionKindCreatable AuthResolutionKind = "creatable"
+
+    // AuthResolutionKindBlocked reports that the e-mail subject is blocked.
+    AuthResolutionKindBlocked AuthResolutionKind = "blocked"
+)
+
+// IsKnown reports whether AuthResolutionKind belongs to the supported
+// auth-facing vocabulary.
+func (kind AuthResolutionKind) IsKnown() bool {
+    switch kind {
+    case AuthResolutionKindExisting, AuthResolutionKindCreatable, AuthResolutionKindBlocked:
+        return true
+    default:
+        return false
+    }
+}
+
+// ResolveByEmailResult stores the coarse auth-facing state of one e-mail
+// subject.
+type ResolveByEmailResult struct {
+    // Kind stores the coarse resolution state.
+    Kind AuthResolutionKind
+
+    // UserID is present only when Kind is AuthResolutionKindExisting.
+    UserID common.UserID
+
+    // BlockReasonCode is present only when Kind is AuthResolutionKindBlocked.
+    BlockReasonCode common.ReasonCode
+}
+
+// Validate reports whether ResolveByEmailResult satisfies the auth-facing
+// invariant set.
+func (result ResolveByEmailResult) Validate() error {
+    if !result.Kind.IsKnown() {
+        return fmt.Errorf("resolve-by-email result kind %q is unsupported", result.Kind)
+    }
+
+    switch result.Kind {
+    case AuthResolutionKindExisting:
+        if err := result.UserID.Validate(); err != nil {
+            return fmt.Errorf("resolve-by-email result user id: %w", err)
+        }
+        if !result.BlockReasonCode.IsZero() {
+            return fmt.Errorf("resolve-by-email result block reason code must be empty for existing outcome")
+        }
+    case AuthResolutionKindCreatable:
+        if !result.UserID.IsZero() {
+            return fmt.Errorf("resolve-by-email result user id must be empty for creatable outcome")
+        }
+        if !result.BlockReasonCode.IsZero() {
+            return fmt.Errorf("resolve-by-email result block reason code must be empty for creatable outcome")
+        }
+    case AuthResolutionKindBlocked:
+        if !result.UserID.IsZero() {
+            return fmt.Errorf("resolve-by-email result user id must be empty for blocked outcome")
+        }
+        if err := result.BlockReasonCode.Validate(); err != nil {
+            return fmt.Errorf("resolve-by-email result block reason code: %w", err)
+        }
+    }
+
+    return nil
+}
+
+// EnsureByEmailOutcome identifies the coarse auth-facing ensure result.
+type EnsureByEmailOutcome string
+
+const (
+    // EnsureByEmailOutcomeExisting reports that the e-mail already belongs to an
+    // existing account.
+    EnsureByEmailOutcomeExisting EnsureByEmailOutcome = "existing"
+
+    // EnsureByEmailOutcomeCreated reports that a new account was created.
+    EnsureByEmailOutcomeCreated EnsureByEmailOutcome = "created"
+
+    // EnsureByEmailOutcomeBlocked reports that creation or reuse is blocked by
+    // policy.
+    EnsureByEmailOutcomeBlocked EnsureByEmailOutcome = "blocked"
+)
+
+// IsKnown reports whether EnsureByEmailOutcome belongs to the supported
+// auth-facing vocabulary.
+func (outcome EnsureByEmailOutcome) IsKnown() bool {
+    switch outcome {
+    case EnsureByEmailOutcomeExisting, EnsureByEmailOutcomeCreated, EnsureByEmailOutcomeBlocked:
+        return true
+    default:
+        return false
+    }
+}
+
+// EnsureByEmailInput stores the complete create payload required for atomic
+// ensure-by-email behavior.
+type EnsureByEmailInput struct {
+    // Email stores the exact normalized e-mail subject addressed by the ensure
+    // call.
+    Email common.Email
+
+    // Account stores the fully initialized account that should be persisted when
+    // the e-mail does not yet exist and is not blocked.
+    Account account.UserAccount
+
+    // Entitlement stores the initial current entitlement snapshot for the new
+    // account.
+    Entitlement entitlement.CurrentSnapshot
+
+    // EntitlementRecord stores the initial entitlement history record that must
+    // be created atomically with Entitlement.
+    EntitlementRecord entitlement.PeriodRecord
+
+    // Reservation stores the canonical race-name reservation for Account.
+    Reservation account.RaceNameReservation
+}
+
+// Validate reports whether EnsureByEmailInput is structurally complete.
+func (input EnsureByEmailInput) Validate() error {
+    if err := input.Email.Validate(); err != nil {
+        return fmt.Errorf("ensure-by-email input email: %w", err)
+    }
+    if err := input.Account.Validate(); err != nil {
+        return fmt.Errorf("ensure-by-email input account: %w", err)
+    }
+    if err := input.Entitlement.Validate(); err != nil {
+        return fmt.Errorf("ensure-by-email input entitlement snapshot: %w", err)
+    }
+    if err := input.EntitlementRecord.Validate(); err != nil {
+        return fmt.Errorf("ensure-by-email input entitlement record: %w", err)
+    }
+    if err := input.Reservation.Validate(); err != nil {
+        return fmt.Errorf("ensure-by-email input reservation: %w", err)
+    }
+    if input.Account.Email != input.Email {
+        return fmt.Errorf("ensure-by-email input account email must match request email")
+    }
+    if input.Account.UserID != input.Entitlement.UserID {
+        return fmt.Errorf("ensure-by-email input account user id must match entitlement user id")
+    }
+    if input.Account.UserID != input.EntitlementRecord.UserID {
+        return fmt.Errorf("ensure-by-email input account user id must match entitlement record user id")
+    }
+    if input.Account.UserID != input.Reservation.UserID {
+        return fmt.Errorf("ensure-by-email input account user id must match reservation user id")
+    }
+    if input.Account.RaceName != input.Reservation.RaceName {
+        return fmt.Errorf("ensure-by-email input account race name must match reservation race name")
+    }
+    if input.EntitlementRecord.PlanCode != input.Entitlement.PlanCode {
+        return fmt.Errorf("ensure-by-email input entitlement record plan code must match entitlement snapshot plan code")
+    }
+    if input.EntitlementRecord.Source != input.Entitlement.Source {
+        return fmt.Errorf("ensure-by-email input entitlement record source must match entitlement snapshot source")
+    }
+    if input.EntitlementRecord.Actor != input.Entitlement.Actor {
+        return fmt.Errorf("ensure-by-email input entitlement record actor must match entitlement snapshot actor")
+    }
+    if
input.EntitlementRecord.ReasonCode != input.Entitlement.ReasonCode { + return fmt.Errorf("ensure-by-email input entitlement record reason code must match entitlement snapshot reason code") + } + if !input.EntitlementRecord.StartsAt.Equal(input.Entitlement.StartsAt) { + return fmt.Errorf("ensure-by-email input entitlement record starts at must match entitlement snapshot starts at") + } + if !equalOptionalTimes(input.EntitlementRecord.EndsAt, input.Entitlement.EndsAt) { + return fmt.Errorf("ensure-by-email input entitlement record ends at must match entitlement snapshot ends at") + } + + return nil +} + +// EnsureByEmailResult stores the coarse auth-facing outcome of an atomic +// ensure-by-email call. +type EnsureByEmailResult struct { + // Outcome stores the coarse ensure result. + Outcome EnsureByEmailOutcome + + // UserID is present only for existing or created outcomes. + UserID common.UserID + + // BlockReasonCode is present only for the blocked outcome. + BlockReasonCode common.ReasonCode +} + +// Validate reports whether EnsureByEmailResult satisfies the auth-facing +// invariant set. 
+func (result EnsureByEmailResult) Validate() error { + if !result.Outcome.IsKnown() { + return fmt.Errorf("ensure-by-email result outcome %q is unsupported", result.Outcome) + } + + switch result.Outcome { + case EnsureByEmailOutcomeExisting, EnsureByEmailOutcomeCreated: + if err := result.UserID.Validate(); err != nil { + return fmt.Errorf("ensure-by-email result user id: %w", err) + } + if !result.BlockReasonCode.IsZero() { + return fmt.Errorf("ensure-by-email result block reason code must be empty for existing or created outcome") + } + case EnsureByEmailOutcomeBlocked: + if !result.UserID.IsZero() { + return fmt.Errorf("ensure-by-email result user id must be empty for blocked outcome") + } + if err := result.BlockReasonCode.Validate(); err != nil { + return fmt.Errorf("ensure-by-email result block reason code: %w", err) + } + } + + return nil +} + +// AuthBlockOutcome identifies the coarse result of blocking one auth subject. +type AuthBlockOutcome string + +const ( + // AuthBlockOutcomeBlocked reports that the current mutation created a new + // block record. + AuthBlockOutcomeBlocked AuthBlockOutcome = "blocked" + + // AuthBlockOutcomeAlreadyBlocked reports that the block already existed. + AuthBlockOutcomeAlreadyBlocked AuthBlockOutcome = "already_blocked" +) + +// IsKnown reports whether AuthBlockOutcome belongs to the supported +// auth-facing vocabulary. +func (outcome AuthBlockOutcome) IsKnown() bool { + switch outcome { + case AuthBlockOutcomeBlocked, AuthBlockOutcomeAlreadyBlocked: + return true + default: + return false + } +} + +// BlockByUserIDInput stores one auth-facing block request addressed by stable +// user identifier. +type BlockByUserIDInput struct { + // UserID identifies the account that must be blocked. + UserID common.UserID + + // ReasonCode stores the machine-readable block reason. + ReasonCode common.ReasonCode + + // BlockedAt stores the timestamp applied to the blocked e-mail subject + // record when a new block is created. 
+ BlockedAt time.Time +} + +// Validate reports whether BlockByUserIDInput is structurally complete. +func (input BlockByUserIDInput) Validate() error { + if err := input.UserID.Validate(); err != nil { + return fmt.Errorf("block-by-user-id input user id: %w", err) + } + if err := input.ReasonCode.Validate(); err != nil { + return fmt.Errorf("block-by-user-id input reason code: %w", err) + } + if err := common.ValidateTimestamp("block-by-user-id input blocked at", input.BlockedAt); err != nil { + return err + } + + return nil +} + +// BlockByEmailInput stores one auth-facing block request addressed by exact +// normalized e-mail subject. +type BlockByEmailInput struct { + // Email identifies the e-mail subject that must be blocked. + Email common.Email + + // ReasonCode stores the machine-readable block reason. + ReasonCode common.ReasonCode + + // BlockedAt stores the timestamp applied to the blocked e-mail subject + // record when a new block is created. + BlockedAt time.Time +} + +// Validate reports whether BlockByEmailInput is structurally complete. +func (input BlockByEmailInput) Validate() error { + if err := input.Email.Validate(); err != nil { + return fmt.Errorf("block-by-email input email: %w", err) + } + if err := input.ReasonCode.Validate(); err != nil { + return fmt.Errorf("block-by-email input reason code: %w", err) + } + if err := common.ValidateTimestamp("block-by-email input blocked at", input.BlockedAt); err != nil { + return err + } + + return nil +} + +// BlockResult stores the coarse auth-facing result of a block mutation. +type BlockResult struct { + // Outcome reports whether a new block was applied or already existed. + Outcome AuthBlockOutcome + + // UserID stores the resolved account when the blocked subject belongs to one + // existing user. + UserID common.UserID +} + +// Validate reports whether BlockResult satisfies the auth-facing invariant +// set. 
+func (result BlockResult) Validate() error { + if !result.Outcome.IsKnown() { + return fmt.Errorf("block result outcome %q is unsupported", result.Outcome) + } + if !result.UserID.IsZero() { + if err := result.UserID.Validate(); err != nil { + return fmt.Errorf("block result user id: %w", err) + } + } + + return nil +} + +// AuthDirectoryStore performs the narrow set of atomic auth-facing reads and +// mutations that must not observe inconsistent cross-key Redis state. +type AuthDirectoryStore interface { + // ResolveByEmail returns the current coarse auth-facing resolution state for + // email. + ResolveByEmail(ctx context.Context, email common.Email) (ResolveByEmailResult, error) + + // ExistsByUserID reports whether userID currently identifies a stored + // account. + ExistsByUserID(ctx context.Context, userID common.UserID) (bool, error) + + // EnsureByEmail returns an existing user, creates a new one, or reports a + // blocked outcome atomically for one e-mail subject. + EnsureByEmail(ctx context.Context, input EnsureByEmailInput) (EnsureByEmailResult, error) + + // BlockByUserID applies a block to the account identified by userID. + BlockByUserID(ctx context.Context, input BlockByUserIDInput) (BlockResult, error) + + // BlockByEmail applies a block to email even when no account exists yet. 
+ BlockByEmail(ctx context.Context, input BlockByEmailInput) (BlockResult, error) +} + +func equalOptionalTimes(left *time.Time, right *time.Time) bool { + switch { + case left == nil && right == nil: + return true + case left == nil || right == nil: + return false + default: + return left.Equal(*right) + } +} diff --git a/user/internal/ports/authblock_store.go b/user/internal/ports/authblock_store.go new file mode 100644 index 0000000..0cf84af --- /dev/null +++ b/user/internal/ports/authblock_store.go @@ -0,0 +1,18 @@ +package ports + +import ( + "context" + + "galaxy/user/internal/domain/authblock" + "galaxy/user/internal/domain/common" +) + +// BlockedEmailStore persists the dedicated blocked-email-subject model used by +// auth-facing flows. +type BlockedEmailStore interface { + // GetByEmail returns the blocked-email subject for email. + GetByEmail(ctx context.Context, email common.Email) (authblock.BlockedEmailSubject, error) + + // Upsert stores or replaces the blocked-email subject for record.Email. + Upsert(ctx context.Context, record authblock.BlockedEmailSubject) error +} diff --git a/user/internal/ports/clock.go b/user/internal/ports/clock.go new file mode 100644 index 0000000..e51f631 --- /dev/null +++ b/user/internal/ports/clock.go @@ -0,0 +1,9 @@ +package ports + +import "time" + +// Clock returns the current wall-clock time used by timestamped mutations. +type Clock interface { + // Now returns the current time. + Now() time.Time +} diff --git a/user/internal/ports/declared_country_changed_publisher.go b/user/internal/ports/declared_country_changed_publisher.go new file mode 100644 index 0000000..8e96908 --- /dev/null +++ b/user/internal/ports/declared_country_changed_publisher.go @@ -0,0 +1,55 @@ +package ports + +import ( + "context" + "fmt" + "time" + + "galaxy/user/internal/domain/common" +) + +const ( + // DeclaredCountryChangedEventType identifies declared-country change events + // in the shared auxiliary event stream. 
+ DeclaredCountryChangedEventType = "user.declared_country.changed" +) + +// DeclaredCountryChangedEvent stores one auxiliary declared-country change +// notification emitted after a successful source-of-truth update. +type DeclaredCountryChangedEvent struct { + // UserID identifies the user whose current declared country changed. + UserID common.UserID + + // TraceID stores the optional OpenTelemetry trace identifier propagated + // from the current request context. + TraceID string + + // DeclaredCountry stores the latest effective declared country. + DeclaredCountry common.CountryCode + + // UpdatedAt stores the persisted account mutation timestamp. + UpdatedAt time.Time + + // Source stores the machine-readable upstream mutation source. + Source common.Source +} + +// Validate reports whether event is structurally complete. +func (event DeclaredCountryChangedEvent) Validate() error { + if err := validateEventEnvelope("declared-country changed event", event.UserID, event.UpdatedAt, event.Source, event.TraceID); err != nil { + return err + } + if err := event.DeclaredCountry.Validate(); err != nil { + return fmt.Errorf("declared-country changed event declared country: %w", err) + } + + return nil +} + +// DeclaredCountryChangedPublisher publishes auxiliary declared-country change +// notifications after source-of-truth account updates. +type DeclaredCountryChangedPublisher interface { + // PublishDeclaredCountryChanged propagates one committed declared-country + // change event. 
+ PublishDeclaredCountryChanged(ctx context.Context, event DeclaredCountryChangedEvent) error +} diff --git a/user/internal/ports/domain_event_publishers.go b/user/internal/ports/domain_event_publishers.go new file mode 100644 index 0000000..695e29d --- /dev/null +++ b/user/internal/ports/domain_event_publishers.go @@ -0,0 +1,537 @@ +package ports + +import ( + "context" + "fmt" + "strings" + "time" + + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + "galaxy/user/internal/domain/policy" +) + +const ( + // ProfileChangedEventType identifies profile-change events in the shared + // auxiliary event stream. + ProfileChangedEventType = "user.profile.changed" + + // SettingsChangedEventType identifies settings-change events in the shared + // auxiliary event stream. + SettingsChangedEventType = "user.settings.changed" + + // EntitlementChangedEventType identifies entitlement-change events in the + // shared auxiliary event stream. + EntitlementChangedEventType = "user.entitlement.changed" + + // SanctionChangedEventType identifies sanction-change events in the shared + // auxiliary event stream. + SanctionChangedEventType = "user.sanction.changed" + + // LimitChangedEventType identifies limit-change events in the shared + // auxiliary event stream. + LimitChangedEventType = "user.limit.changed" +) + +// ProfileChangedOperation identifies one profile-change event kind. +type ProfileChangedOperation string + +const ( + // ProfileChangedOperationInitialized reports the initial account + // materialization performed during auth-driven user creation. + ProfileChangedOperationInitialized ProfileChangedOperation = "initialized" + + // ProfileChangedOperationUpdated reports a later self-service profile + // update. + ProfileChangedOperationUpdated ProfileChangedOperation = "updated" +) + +// IsKnown reports whether operation belongs to the frozen profile-change +// event vocabulary. 
+func (operation ProfileChangedOperation) IsKnown() bool { + switch operation { + case ProfileChangedOperationInitialized, ProfileChangedOperationUpdated: + return true + default: + return false + } +} + +// SettingsChangedOperation identifies one settings-change event kind. +type SettingsChangedOperation string + +const ( + // SettingsChangedOperationInitialized reports the initial account settings + // materialization performed during auth-driven user creation. + SettingsChangedOperationInitialized SettingsChangedOperation = "initialized" + + // SettingsChangedOperationUpdated reports a later self-service settings + // update. + SettingsChangedOperationUpdated SettingsChangedOperation = "updated" +) + +// IsKnown reports whether operation belongs to the frozen settings-change +// event vocabulary. +func (operation SettingsChangedOperation) IsKnown() bool { + switch operation { + case SettingsChangedOperationInitialized, SettingsChangedOperationUpdated: + return true + default: + return false + } +} + +// EntitlementChangedOperation identifies one entitlement-change event kind. +type EntitlementChangedOperation string + +const ( + // EntitlementChangedOperationInitialized reports the initial free snapshot + // created for a new user. + EntitlementChangedOperationInitialized EntitlementChangedOperation = "initialized" + + // EntitlementChangedOperationGranted reports an explicit paid grant. + EntitlementChangedOperationGranted EntitlementChangedOperation = "granted" + + // EntitlementChangedOperationExtended reports an explicit paid extension. + EntitlementChangedOperationExtended EntitlementChangedOperation = "extended" + + // EntitlementChangedOperationRevoked reports an explicit paid revoke. + EntitlementChangedOperationRevoked EntitlementChangedOperation = "revoked" + + // EntitlementChangedOperationExpiredRepaired reports lazy repair of a + // naturally expired finite paid snapshot. 
+ EntitlementChangedOperationExpiredRepaired EntitlementChangedOperation = "expired_repaired" +) + +// IsKnown reports whether operation belongs to the frozen entitlement-change +// event vocabulary. +func (operation EntitlementChangedOperation) IsKnown() bool { + switch operation { + case EntitlementChangedOperationInitialized, + EntitlementChangedOperationGranted, + EntitlementChangedOperationExtended, + EntitlementChangedOperationRevoked, + EntitlementChangedOperationExpiredRepaired: + return true + default: + return false + } +} + +// SanctionChangedOperation identifies one sanction-change event kind. +type SanctionChangedOperation string + +const ( + // SanctionChangedOperationApplied reports a new active sanction. + SanctionChangedOperationApplied SanctionChangedOperation = "applied" + + // SanctionChangedOperationRemoved reports explicit removal of an active + // sanction. + SanctionChangedOperationRemoved SanctionChangedOperation = "removed" +) + +// IsKnown reports whether operation belongs to the frozen sanction-change +// event vocabulary. +func (operation SanctionChangedOperation) IsKnown() bool { + switch operation { + case SanctionChangedOperationApplied, SanctionChangedOperationRemoved: + return true + default: + return false + } +} + +// LimitChangedOperation identifies one limit-change event kind. +type LimitChangedOperation string + +const ( + // LimitChangedOperationSet reports a new or replacement active limit. + LimitChangedOperationSet LimitChangedOperation = "set" + + // LimitChangedOperationRemoved reports explicit removal of an active limit. + LimitChangedOperationRemoved LimitChangedOperation = "removed" +) + +// IsKnown reports whether operation belongs to the frozen limit-change event +// vocabulary. 
+func (operation LimitChangedOperation) IsKnown() bool { + switch operation { + case LimitChangedOperationSet, LimitChangedOperationRemoved: + return true + default: + return false + } +} + +// ProfileChangedEvent stores one post-commit auxiliary profile-change event. +type ProfileChangedEvent struct { + // UserID identifies the changed user. + UserID common.UserID + + // OccurredAt stores the mutation timestamp emitted into the shared event + // envelope. + OccurredAt time.Time + + // Source stores the machine-readable mutation source. + Source common.Source + + // TraceID stores the optional OpenTelemetry trace identifier propagated + // from the current request context. + TraceID string + + // Operation stores the profile-change event kind. + Operation ProfileChangedOperation + + // RaceName stores the latest exact race name after the commit. + RaceName common.RaceName +} + +// Validate reports whether event is structurally complete. +func (event ProfileChangedEvent) Validate() error { + if err := validateEventEnvelope("profile changed event", event.UserID, event.OccurredAt, event.Source, event.TraceID); err != nil { + return err + } + if !event.Operation.IsKnown() { + return fmt.Errorf("profile changed event operation %q is unsupported", event.Operation) + } + if err := event.RaceName.Validate(); err != nil { + return fmt.Errorf("profile changed event race name: %w", err) + } + + return nil +} + +// SettingsChangedEvent stores one post-commit auxiliary settings-change event. +type SettingsChangedEvent struct { + // UserID identifies the changed user. + UserID common.UserID + + // OccurredAt stores the mutation timestamp emitted into the shared event + // envelope. + OccurredAt time.Time + + // Source stores the machine-readable mutation source. + Source common.Source + + // TraceID stores the optional OpenTelemetry trace identifier propagated + // from the current request context. + TraceID string + + // Operation stores the settings-change event kind. 
+ Operation SettingsChangedOperation + + // PreferredLanguage stores the latest preferred language after the commit. + PreferredLanguage common.LanguageTag + + // TimeZone stores the latest time-zone name after the commit. + TimeZone common.TimeZoneName +} + +// Validate reports whether event is structurally complete. +func (event SettingsChangedEvent) Validate() error { + if err := validateEventEnvelope("settings changed event", event.UserID, event.OccurredAt, event.Source, event.TraceID); err != nil { + return err + } + if !event.Operation.IsKnown() { + return fmt.Errorf("settings changed event operation %q is unsupported", event.Operation) + } + if err := event.PreferredLanguage.Validate(); err != nil { + return fmt.Errorf("settings changed event preferred language: %w", err) + } + if err := event.TimeZone.Validate(); err != nil { + return fmt.Errorf("settings changed event time zone: %w", err) + } + + return nil +} + +// EntitlementChangedEvent stores one post-commit auxiliary entitlement-change +// event. +type EntitlementChangedEvent struct { + // UserID identifies the changed user. + UserID common.UserID + + // OccurredAt stores the mutation timestamp emitted into the shared event + // envelope. + OccurredAt time.Time + + // Source stores the machine-readable mutation source. + Source common.Source + + // TraceID stores the optional OpenTelemetry trace identifier propagated + // from the current request context. + TraceID string + + // Operation stores the entitlement-change event kind. + Operation EntitlementChangedOperation + + // PlanCode stores the effective plan after the commit. + PlanCode entitlement.PlanCode + + // IsPaid stores the effective paid/free flag after the commit. + IsPaid bool + + // StartsAt stores when the effective entitlement state started. + StartsAt time.Time + + // EndsAt stores the optional finite paid expiry. + EndsAt *time.Time + + // ReasonCode stores the mutation reason. 
+ ReasonCode common.ReasonCode + + // Actor stores the audit actor metadata attached to the mutation. + Actor common.ActorRef + + // UpdatedAt stores when the current entitlement snapshot was recomputed. + UpdatedAt time.Time +} + +// Validate reports whether event is structurally complete. +func (event EntitlementChangedEvent) Validate() error { + if err := validateEventEnvelope("entitlement changed event", event.UserID, event.OccurredAt, event.Source, event.TraceID); err != nil { + return err + } + if !event.Operation.IsKnown() { + return fmt.Errorf("entitlement changed event operation %q is unsupported", event.Operation) + } + if !event.PlanCode.IsKnown() { + return fmt.Errorf("entitlement changed event plan code %q is unsupported", event.PlanCode) + } + if event.IsPaid != event.PlanCode.IsPaid() { + return fmt.Errorf("entitlement changed event paid flag must match plan code %q", event.PlanCode) + } + if err := common.ValidateTimestamp("entitlement changed event starts at", event.StartsAt); err != nil { + return err + } + if event.PlanCode.HasFiniteExpiry() { + if event.EndsAt == nil { + return fmt.Errorf("entitlement changed event ends at must be present for plan code %q", event.PlanCode) + } + if !event.EndsAt.After(event.StartsAt) { + return common.ErrInvertedTimeRange + } + } else if event.EndsAt != nil { + return fmt.Errorf("entitlement changed event ends at must be empty for plan code %q", event.PlanCode) + } + if err := event.ReasonCode.Validate(); err != nil { + return fmt.Errorf("entitlement changed event reason code: %w", err) + } + if err := event.Actor.Validate(); err != nil { + return fmt.Errorf("entitlement changed event actor: %w", err) + } + if err := common.ValidateTimestamp("entitlement changed event updated at", event.UpdatedAt); err != nil { + return err + } + + return nil +} + +// SanctionChangedEvent stores one post-commit auxiliary sanction-change event. +type SanctionChangedEvent struct { + // UserID identifies the changed user. 
+ UserID common.UserID + + // OccurredAt stores the mutation timestamp emitted into the shared event + // envelope. + OccurredAt time.Time + + // Source stores the machine-readable mutation source. + Source common.Source + + // TraceID stores the optional OpenTelemetry trace identifier propagated + // from the current request context. + TraceID string + + // Operation stores the sanction-change event kind. + Operation SanctionChangedOperation + + // SanctionCode stores the affected sanction code. + SanctionCode policy.SanctionCode + + // Scope stores the machine-readable sanction scope. + Scope common.Scope + + // ReasonCode stores the mutation reason. + ReasonCode common.ReasonCode + + // Actor stores the audit actor metadata attached to the mutation. + Actor common.ActorRef + + // AppliedAt stores when the sanction became effective. + AppliedAt time.Time + + // ExpiresAt stores the optional planned sanction expiry. + ExpiresAt *time.Time + + // RemovedAt stores the optional sanction removal timestamp. + RemovedAt *time.Time +} + +// Validate reports whether event is structurally complete. 
+func (event SanctionChangedEvent) Validate() error { + if err := validateEventEnvelope("sanction changed event", event.UserID, event.OccurredAt, event.Source, event.TraceID); err != nil { + return err + } + if !event.Operation.IsKnown() { + return fmt.Errorf("sanction changed event operation %q is unsupported", event.Operation) + } + if !event.SanctionCode.IsKnown() { + return fmt.Errorf("sanction changed event sanction code %q is unsupported", event.SanctionCode) + } + if err := event.Scope.Validate(); err != nil { + return fmt.Errorf("sanction changed event scope: %w", err) + } + if err := event.ReasonCode.Validate(); err != nil { + return fmt.Errorf("sanction changed event reason code: %w", err) + } + if err := event.Actor.Validate(); err != nil { + return fmt.Errorf("sanction changed event actor: %w", err) + } + if err := common.ValidateTimestamp("sanction changed event applied at", event.AppliedAt); err != nil { + return err + } + if event.ExpiresAt != nil && !event.ExpiresAt.After(event.AppliedAt) { + return common.ErrInvertedTimeRange + } + if event.RemovedAt != nil && event.RemovedAt.Before(event.AppliedAt) { + return fmt.Errorf("sanction changed event removed at must not be before applied at") + } + + return nil +} + +// LimitChangedEvent stores one post-commit auxiliary limit-change event. +type LimitChangedEvent struct { + // UserID identifies the changed user. + UserID common.UserID + + // OccurredAt stores the mutation timestamp emitted into the shared event + // envelope. + OccurredAt time.Time + + // Source stores the machine-readable mutation source. + Source common.Source + + // TraceID stores the optional OpenTelemetry trace identifier propagated + // from the current request context. + TraceID string + + // Operation stores the limit-change event kind. + Operation LimitChangedOperation + + // LimitCode stores the affected limit code. + LimitCode policy.LimitCode + + // Value stores the active limit value when the operation is `set`. 
+ Value *int + + // ReasonCode stores the mutation reason. + ReasonCode common.ReasonCode + + // Actor stores the audit actor metadata attached to the mutation. + Actor common.ActorRef + + // AppliedAt stores when the limit became effective. + AppliedAt time.Time + + // ExpiresAt stores the optional planned limit expiry. + ExpiresAt *time.Time + + // RemovedAt stores the optional explicit limit removal timestamp. + RemovedAt *time.Time +} + +// Validate reports whether event is structurally complete. +func (event LimitChangedEvent) Validate() error { + if err := validateEventEnvelope("limit changed event", event.UserID, event.OccurredAt, event.Source, event.TraceID); err != nil { + return err + } + if !event.Operation.IsKnown() { + return fmt.Errorf("limit changed event operation %q is unsupported", event.Operation) + } + if !event.LimitCode.IsSupported() { + return fmt.Errorf("limit changed event limit code %q is unsupported", event.LimitCode) + } + switch event.Operation { + case LimitChangedOperationSet: + if event.Value == nil { + return fmt.Errorf("limit changed event value must be present for operation %q", event.Operation) + } + if *event.Value < 0 { + return fmt.Errorf("limit changed event value must not be negative") + } + case LimitChangedOperationRemoved: + if event.Value != nil && *event.Value < 0 { + return fmt.Errorf("limit changed event value must not be negative") + } + } + if err := event.ReasonCode.Validate(); err != nil { + return fmt.Errorf("limit changed event reason code: %w", err) + } + if err := event.Actor.Validate(); err != nil { + return fmt.Errorf("limit changed event actor: %w", err) + } + if err := common.ValidateTimestamp("limit changed event applied at", event.AppliedAt); err != nil { + return err + } + if event.ExpiresAt != nil && !event.ExpiresAt.After(event.AppliedAt) { + return common.ErrInvertedTimeRange + } + if event.RemovedAt != nil && event.RemovedAt.Before(event.AppliedAt) { + return fmt.Errorf("limit changed event removed 
at must not be before applied at") + } + + return nil +} + +// ProfileChangedPublisher publishes auxiliary profile-change notifications. +type ProfileChangedPublisher interface { + // PublishProfileChanged propagates one committed profile-change event. + PublishProfileChanged(ctx context.Context, event ProfileChangedEvent) error +} + +// SettingsChangedPublisher publishes auxiliary settings-change notifications. +type SettingsChangedPublisher interface { + // PublishSettingsChanged propagates one committed settings-change event. + PublishSettingsChanged(ctx context.Context, event SettingsChangedEvent) error +} + +// EntitlementChangedPublisher publishes auxiliary entitlement-change +// notifications. +type EntitlementChangedPublisher interface { + // PublishEntitlementChanged propagates one committed entitlement-change + // event. + PublishEntitlementChanged(ctx context.Context, event EntitlementChangedEvent) error +} + +// SanctionChangedPublisher publishes auxiliary sanction-change notifications. +type SanctionChangedPublisher interface { + // PublishSanctionChanged propagates one committed sanction-change event. + PublishSanctionChanged(ctx context.Context, event SanctionChangedEvent) error +} + +// LimitChangedPublisher publishes auxiliary limit-change notifications. +type LimitChangedPublisher interface { + // PublishLimitChanged propagates one committed limit-change event. 
+ PublishLimitChanged(ctx context.Context, event LimitChangedEvent) error +} + +func validateEventEnvelope(name string, userID common.UserID, occurredAt time.Time, source common.Source, traceID string) error { + if err := userID.Validate(); err != nil { + return fmt.Errorf("%s user id: %w", name, err) + } + if err := common.ValidateTimestamp(name+" occurred at", occurredAt); err != nil { + return err + } + if err := source.Validate(); err != nil { + return fmt.Errorf("%s source: %w", name, err) + } + if traceID != "" { + if strings.TrimSpace(traceID) != traceID { + return fmt.Errorf("%s trace id must not contain surrounding whitespace", name) + } + } + + return nil +} diff --git a/user/internal/ports/entitlement_store.go b/user/internal/ports/entitlement_store.go new file mode 100644 index 0000000..76bc96e --- /dev/null +++ b/user/internal/ports/entitlement_store.go @@ -0,0 +1,230 @@ +package ports + +import ( + "context" + "fmt" + + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" +) + +// EntitlementHistoryStore persists immutable entitlement period records and +// later close-state updates. +type EntitlementHistoryStore interface { + // Create stores one new entitlement period history record. Implementations + // must wrap ErrConflict when record.RecordID already exists. + Create(ctx context.Context, record entitlement.PeriodRecord) error + + // GetByRecordID returns the entitlement period history record identified by + // recordID. + GetByRecordID(ctx context.Context, recordID entitlement.EntitlementRecordID) (entitlement.PeriodRecord, error) + + // ListByUserID returns every entitlement period history record owned by + // userID. + ListByUserID(ctx context.Context, userID common.UserID) ([]entitlement.PeriodRecord, error) + + // Update replaces one stored entitlement period history record. 
+ Update(ctx context.Context, record entitlement.PeriodRecord) error +} + +// EntitlementSnapshotStore persists the read-optimized current entitlement +// snapshot. +type EntitlementSnapshotStore interface { + // GetByUserID returns the current entitlement snapshot for userID. + GetByUserID(ctx context.Context, userID common.UserID) (entitlement.CurrentSnapshot, error) + + // Put stores the current entitlement snapshot for record.UserID. + Put(ctx context.Context, record entitlement.CurrentSnapshot) error +} + +// GrantEntitlementInput stores one atomic transition from a current free +// entitlement state to a current paid state. +type GrantEntitlementInput struct { + // ExpectedCurrentSnapshot stores the exact snapshot that must still be + // current before the mutation commits. + ExpectedCurrentSnapshot entitlement.CurrentSnapshot + + // ExpectedCurrentRecord stores the current effective free period that must + // still be current before the mutation commits. + ExpectedCurrentRecord entitlement.PeriodRecord + + // UpdatedCurrentRecord stores ExpectedCurrentRecord after the close metadata + // is applied. + UpdatedCurrentRecord entitlement.PeriodRecord + + // NewRecord stores the new paid entitlement history segment. + NewRecord entitlement.PeriodRecord + + // NewSnapshot stores the new current effective entitlement snapshot. + NewSnapshot entitlement.CurrentSnapshot +} + +// Validate reports whether GrantEntitlementInput is structurally complete. 
+func (input GrantEntitlementInput) Validate() error { + if err := input.ExpectedCurrentSnapshot.Validate(); err != nil { + return fmt.Errorf("grant entitlement input expected current snapshot: %w", err) + } + if err := input.ExpectedCurrentRecord.Validate(); err != nil { + return fmt.Errorf("grant entitlement input expected current record: %w", err) + } + if err := input.UpdatedCurrentRecord.Validate(); err != nil { + return fmt.Errorf("grant entitlement input updated current record: %w", err) + } + if err := input.NewRecord.Validate(); err != nil { + return fmt.Errorf("grant entitlement input new record: %w", err) + } + if err := input.NewSnapshot.Validate(); err != nil { + return fmt.Errorf("grant entitlement input new snapshot: %w", err) + } + if input.ExpectedCurrentSnapshot.UserID != input.ExpectedCurrentRecord.UserID || + input.ExpectedCurrentSnapshot.UserID != input.UpdatedCurrentRecord.UserID || + input.ExpectedCurrentSnapshot.UserID != input.NewRecord.UserID || + input.ExpectedCurrentSnapshot.UserID != input.NewSnapshot.UserID { + return fmt.Errorf("grant entitlement input all records must belong to the same user id") + } + if input.ExpectedCurrentRecord.RecordID != input.UpdatedCurrentRecord.RecordID { + return fmt.Errorf("grant entitlement input updated current record must preserve record id") + } + + return nil +} + +// ExtendEntitlementInput stores one atomic extension of a current finite paid +// entitlement state. +type ExtendEntitlementInput struct { + // ExpectedCurrentSnapshot stores the exact snapshot that must still be + // current before the mutation commits. + ExpectedCurrentSnapshot entitlement.CurrentSnapshot + + // NewRecord stores the appended entitlement history segment that extends the + // current paid state. + NewRecord entitlement.PeriodRecord + + // NewSnapshot stores the replacement current effective entitlement snapshot. 
+ NewSnapshot entitlement.CurrentSnapshot +} + +// Validate reports whether ExtendEntitlementInput is structurally complete. +func (input ExtendEntitlementInput) Validate() error { + if err := input.ExpectedCurrentSnapshot.Validate(); err != nil { + return fmt.Errorf("extend entitlement input expected current snapshot: %w", err) + } + if err := input.NewRecord.Validate(); err != nil { + return fmt.Errorf("extend entitlement input new record: %w", err) + } + if err := input.NewSnapshot.Validate(); err != nil { + return fmt.Errorf("extend entitlement input new snapshot: %w", err) + } + if input.ExpectedCurrentSnapshot.UserID != input.NewRecord.UserID || + input.ExpectedCurrentSnapshot.UserID != input.NewSnapshot.UserID { + return fmt.Errorf("extend entitlement input all records must belong to the same user id") + } + + return nil +} + +// RevokeEntitlementInput stores one atomic transition from a current paid +// entitlement state to a new free state. +type RevokeEntitlementInput struct { + // ExpectedCurrentSnapshot stores the exact snapshot that must still be + // current before the mutation commits. + ExpectedCurrentSnapshot entitlement.CurrentSnapshot + + // ExpectedCurrentRecord stores the current effective paid period that must + // still be current before the mutation commits. + ExpectedCurrentRecord entitlement.PeriodRecord + + // UpdatedCurrentRecord stores ExpectedCurrentRecord after the close metadata + // is applied. + UpdatedCurrentRecord entitlement.PeriodRecord + + // NewRecord stores the newly created free entitlement period. + NewRecord entitlement.PeriodRecord + + // NewSnapshot stores the replacement current effective free snapshot. + NewSnapshot entitlement.CurrentSnapshot +} + +// Validate reports whether RevokeEntitlementInput is structurally complete. 
+func (input RevokeEntitlementInput) Validate() error { + if err := input.ExpectedCurrentSnapshot.Validate(); err != nil { + return fmt.Errorf("revoke entitlement input expected current snapshot: %w", err) + } + if err := input.ExpectedCurrentRecord.Validate(); err != nil { + return fmt.Errorf("revoke entitlement input expected current record: %w", err) + } + if err := input.UpdatedCurrentRecord.Validate(); err != nil { + return fmt.Errorf("revoke entitlement input updated current record: %w", err) + } + if err := input.NewRecord.Validate(); err != nil { + return fmt.Errorf("revoke entitlement input new record: %w", err) + } + if err := input.NewSnapshot.Validate(); err != nil { + return fmt.Errorf("revoke entitlement input new snapshot: %w", err) + } + if input.ExpectedCurrentSnapshot.UserID != input.ExpectedCurrentRecord.UserID || + input.ExpectedCurrentSnapshot.UserID != input.UpdatedCurrentRecord.UserID || + input.ExpectedCurrentSnapshot.UserID != input.NewRecord.UserID || + input.ExpectedCurrentSnapshot.UserID != input.NewSnapshot.UserID { + return fmt.Errorf("revoke entitlement input all records must belong to the same user id") + } + if input.ExpectedCurrentRecord.RecordID != input.UpdatedCurrentRecord.RecordID { + return fmt.Errorf("revoke entitlement input updated current record must preserve record id") + } + + return nil +} + +// RepairExpiredEntitlementInput stores one atomic lazy-repair transition from +// an expired finite paid snapshot to a materialized free state. +type RepairExpiredEntitlementInput struct { + // ExpectedExpiredSnapshot stores the exact expired snapshot that must still + // be current before the repair commits. + ExpectedExpiredSnapshot entitlement.CurrentSnapshot + + // NewRecord stores the newly created free entitlement period. + NewRecord entitlement.PeriodRecord + + // NewSnapshot stores the replacement current effective free snapshot. 
+ NewSnapshot entitlement.CurrentSnapshot +} + +// Validate reports whether RepairExpiredEntitlementInput is structurally +// complete. +func (input RepairExpiredEntitlementInput) Validate() error { + if err := input.ExpectedExpiredSnapshot.Validate(); err != nil { + return fmt.Errorf("repair expired entitlement input expected expired snapshot: %w", err) + } + if err := input.NewRecord.Validate(); err != nil { + return fmt.Errorf("repair expired entitlement input new record: %w", err) + } + if err := input.NewSnapshot.Validate(); err != nil { + return fmt.Errorf("repair expired entitlement input new snapshot: %w", err) + } + if input.ExpectedExpiredSnapshot.UserID != input.NewRecord.UserID || + input.ExpectedExpiredSnapshot.UserID != input.NewSnapshot.UserID { + return fmt.Errorf("repair expired entitlement input all records must belong to the same user id") + } + + return nil +} + +// EntitlementLifecycleStore persists atomic entitlement timeline transitions +// that must keep history and current snapshot consistent. +type EntitlementLifecycleStore interface { + // Grant atomically closes the current free period, creates a new paid + // period, and replaces the current snapshot. + Grant(ctx context.Context, input GrantEntitlementInput) error + + // Extend atomically appends one paid-history segment and replaces the + // current snapshot. + Extend(ctx context.Context, input ExtendEntitlementInput) error + + // Revoke atomically closes the current paid period, creates a new free + // period, and replaces the current snapshot. + Revoke(ctx context.Context, input RevokeEntitlementInput) error + + // RepairExpired atomically replaces one expired finite paid snapshot with a + // materialized free state. 
+ RepairExpired(ctx context.Context, input RepairExpiredEntitlementInput) error +} diff --git a/user/internal/ports/errors.go b/user/internal/ports/errors.go new file mode 100644 index 0000000..2882ca5 --- /dev/null +++ b/user/internal/ports/errors.go @@ -0,0 +1,31 @@ +// Package ports defines the storage-agnostic boundaries used by the user +// service. +package ports + +import ( + "errors" + "fmt" +) + +var ( + // ErrNotFound reports that a requested source-of-truth record does not + // exist in the dependency behind the port. + ErrNotFound = errors.New("ports: record not found") + + // ErrConflict reports that a create or update cannot be applied because the + // dependency state conflicts with the requested mutation. + ErrConflict = errors.New("ports: conflict") + + // ErrInvalidPageToken reports that a supplied pagination token cannot be + // decoded or does not match the expected filter set. + ErrInvalidPageToken = errors.New("ports: invalid page token") +) + +var ( + // ErrRaceNameConflict reports that a mutation specifically failed because a + // race-name lookup or canonical reservation is already owned by another + // user. The sentinel still matches ErrConflict via errors.Is so callers can + // preserve the stable public conflict semantics while collecting more + // precise observability. + ErrRaceNameConflict = fmt.Errorf("%w: race name conflict", ErrConflict) +) diff --git a/user/internal/ports/id_generator.go b/user/internal/ports/id_generator.go new file mode 100644 index 0000000..5d673ea --- /dev/null +++ b/user/internal/ports/id_generator.go @@ -0,0 +1,29 @@ +package ports + +import ( + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + "galaxy/user/internal/domain/policy" +) + +// IDGenerator creates new user identifiers and generated initial race names. +type IDGenerator interface { + // NewUserID returns one newly generated stable user identifier. 
+ NewUserID() (common.UserID, error) + + // NewInitialRaceName returns one generated initial race name in the + // `player-` form. + NewInitialRaceName() (common.RaceName, error) + + // NewEntitlementRecordID returns one newly generated entitlement history + // record identifier. + NewEntitlementRecordID() (entitlement.EntitlementRecordID, error) + + // NewSanctionRecordID returns one newly generated sanction history record + // identifier. + NewSanctionRecordID() (policy.SanctionRecordID, error) + + // NewLimitRecordID returns one newly generated limit history record + // identifier. + NewLimitRecordID() (policy.LimitRecordID, error) +} diff --git a/user/internal/ports/policy_store.go b/user/internal/ports/policy_store.go new file mode 100644 index 0000000..f19aca3 --- /dev/null +++ b/user/internal/ports/policy_store.go @@ -0,0 +1,188 @@ +package ports + +import ( + "context" + "fmt" + + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/policy" +) + +// SanctionStore persists sanction history records and later remove-state +// updates. +type SanctionStore interface { + // Create stores one new sanction history record. Implementations must wrap + // ErrConflict when record.RecordID already exists. + Create(ctx context.Context, record policy.SanctionRecord) error + + // GetByRecordID returns the sanction history record identified by recordID. + GetByRecordID(ctx context.Context, recordID policy.SanctionRecordID) (policy.SanctionRecord, error) + + // ListByUserID returns every sanction history record owned by userID. + ListByUserID(ctx context.Context, userID common.UserID) ([]policy.SanctionRecord, error) + + // Update replaces one stored sanction history record. + Update(ctx context.Context, record policy.SanctionRecord) error +} + +// LimitStore persists user-specific limit history records and later +// remove-state updates. +type LimitStore interface { + // Create stores one new limit history record. 
Implementations must wrap + // ErrConflict when record.RecordID already exists. + Create(ctx context.Context, record policy.LimitRecord) error + + // GetByRecordID returns the limit history record identified by recordID. + GetByRecordID(ctx context.Context, recordID policy.LimitRecordID) (policy.LimitRecord, error) + + // ListByUserID returns every limit history record owned by userID. + ListByUserID(ctx context.Context, userID common.UserID) ([]policy.LimitRecord, error) + + // Update replaces one stored limit history record. + Update(ctx context.Context, record policy.LimitRecord) error +} + +// ApplySanctionInput stores one atomic creation of a new active sanction. +type ApplySanctionInput struct { + // NewRecord stores the sanction history record that must become active. + NewRecord policy.SanctionRecord +} + +// Validate reports whether ApplySanctionInput is structurally complete. +func (input ApplySanctionInput) Validate() error { + if err := input.NewRecord.Validate(); err != nil { + return fmt.Errorf("apply sanction input new record: %w", err) + } + + return nil +} + +// RemoveSanctionInput stores one atomic removal of the current active +// sanction for one `user_id + sanction_code`. +type RemoveSanctionInput struct { + // ExpectedActiveRecord stores the exact sanction record that must still be + // active before the mutation commits. + ExpectedActiveRecord policy.SanctionRecord + + // UpdatedRecord stores ExpectedActiveRecord after remove metadata is + // applied. + UpdatedRecord policy.SanctionRecord +} + +// Validate reports whether RemoveSanctionInput is structurally complete. 
+func (input RemoveSanctionInput) Validate() error { + if err := input.ExpectedActiveRecord.Validate(); err != nil { + return fmt.Errorf("remove sanction input expected active record: %w", err) + } + if err := input.UpdatedRecord.Validate(); err != nil { + return fmt.Errorf("remove sanction input updated record: %w", err) + } + if input.ExpectedActiveRecord.RecordID != input.UpdatedRecord.RecordID { + return fmt.Errorf("remove sanction input updated record must preserve record id") + } + if input.ExpectedActiveRecord.UserID != input.UpdatedRecord.UserID { + return fmt.Errorf("remove sanction input records must belong to the same user id") + } + if input.ExpectedActiveRecord.SanctionCode != input.UpdatedRecord.SanctionCode { + return fmt.Errorf("remove sanction input records must preserve sanction code") + } + + return nil +} + +// SetLimitInput stores one atomic creation or replacement of the current +// active limit for one `user_id + limit_code`. +type SetLimitInput struct { + // ExpectedActiveRecord stores the currently active limit that must still be + // active before replacement commits. It stays nil when no active limit + // exists yet. + ExpectedActiveRecord *policy.LimitRecord + + // UpdatedActiveRecord stores ExpectedActiveRecord after remove metadata is + // applied. It stays nil when no active limit exists yet. + UpdatedActiveRecord *policy.LimitRecord + + // NewRecord stores the limit history record that must become active. + NewRecord policy.LimitRecord +} + +// Validate reports whether SetLimitInput is structurally complete. 
+func (input SetLimitInput) Validate() error { + if err := input.NewRecord.Validate(); err != nil { + return fmt.Errorf("set limit input new record: %w", err) + } + switch { + case input.ExpectedActiveRecord == nil && input.UpdatedActiveRecord == nil: + return nil + case input.ExpectedActiveRecord == nil || input.UpdatedActiveRecord == nil: + return fmt.Errorf("set limit input active replacement records must both be present or absent") + } + if err := input.ExpectedActiveRecord.Validate(); err != nil { + return fmt.Errorf("set limit input expected active record: %w", err) + } + if err := input.UpdatedActiveRecord.Validate(); err != nil { + return fmt.Errorf("set limit input updated active record: %w", err) + } + if input.ExpectedActiveRecord.RecordID != input.UpdatedActiveRecord.RecordID { + return fmt.Errorf("set limit input updated active record must preserve record id") + } + if input.ExpectedActiveRecord.UserID != input.UpdatedActiveRecord.UserID || + input.ExpectedActiveRecord.UserID != input.NewRecord.UserID { + return fmt.Errorf("set limit input records must belong to the same user id") + } + if input.ExpectedActiveRecord.LimitCode != input.UpdatedActiveRecord.LimitCode || + input.ExpectedActiveRecord.LimitCode != input.NewRecord.LimitCode { + return fmt.Errorf("set limit input records must preserve limit code") + } + + return nil +} + +// RemoveLimitInput stores one atomic removal of the current active limit for +// one `user_id + limit_code`. +type RemoveLimitInput struct { + // ExpectedActiveRecord stores the exact limit record that must still be + // active before the mutation commits. + ExpectedActiveRecord policy.LimitRecord + + // UpdatedRecord stores ExpectedActiveRecord after remove metadata is + // applied. + UpdatedRecord policy.LimitRecord +} + +// Validate reports whether RemoveLimitInput is structurally complete. 
+func (input RemoveLimitInput) Validate() error { + if err := input.ExpectedActiveRecord.Validate(); err != nil { + return fmt.Errorf("remove limit input expected active record: %w", err) + } + if err := input.UpdatedRecord.Validate(); err != nil { + return fmt.Errorf("remove limit input updated record: %w", err) + } + if input.ExpectedActiveRecord.RecordID != input.UpdatedRecord.RecordID { + return fmt.Errorf("remove limit input updated record must preserve record id") + } + if input.ExpectedActiveRecord.UserID != input.UpdatedRecord.UserID { + return fmt.Errorf("remove limit input records must belong to the same user id") + } + if input.ExpectedActiveRecord.LimitCode != input.UpdatedRecord.LimitCode { + return fmt.Errorf("remove limit input records must preserve limit code") + } + + return nil +} + +// PolicyLifecycleStore persists atomic sanction and limit transitions that +// must keep history and active-slot state consistent. +type PolicyLifecycleStore interface { + // ApplySanction atomically creates one new active sanction record. + ApplySanction(ctx context.Context, input ApplySanctionInput) error + + // RemoveSanction atomically removes one active sanction record. + RemoveSanction(ctx context.Context, input RemoveSanctionInput) error + + // SetLimit atomically creates or replaces one active limit record. + SetLimit(ctx context.Context, input SetLimitInput) error + + // RemoveLimit atomically removes one active limit record. + RemoveLimit(ctx context.Context, input RemoveLimitInput) error +} diff --git a/user/internal/ports/race_name_policy.go b/user/internal/ports/race_name_policy.go new file mode 100644 index 0000000..02447d9 --- /dev/null +++ b/user/internal/ports/race_name_policy.go @@ -0,0 +1,14 @@ +package ports + +import ( + "galaxy/user/internal/domain/account" + "galaxy/user/internal/domain/common" +) + +// RaceNamePolicy produces the canonical uniqueness key used to reserve one +// replaceable race-name slot. 
+type RaceNamePolicy interface { + // CanonicalKey returns the stable reservation key for raceName. Callers are + // expected to pass a validated raceName value. + CanonicalKey(raceName common.RaceName) (account.RaceNameCanonicalKey, error) +} diff --git a/user/internal/ports/user_list_store.go b/user/internal/ports/user_list_store.go new file mode 100644 index 0000000..d5eef2a --- /dev/null +++ b/user/internal/ports/user_list_store.go @@ -0,0 +1,129 @@ +package ports + +import ( + "context" + "fmt" + "strings" + "time" + + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + "galaxy/user/internal/domain/policy" +) + +const ( + // DefaultUserListPageSize stores the frozen default page size used by the + // trusted admin listing surface when the caller omits `page_size`. + DefaultUserListPageSize = 50 + + // MaxUserListPageSize stores the frozen maximum page size accepted by the + // trusted admin listing surface. + MaxUserListPageSize = 200 +) + +// UserListFilters stores the frozen admin-listing filter set. +type UserListFilters struct { + // PaidState stores the optional coarse free-versus-paid filter. + PaidState entitlement.PaidState + + // PaidExpiresBefore stores the optional strict upper bound for finite paid + // expiry. + PaidExpiresBefore *time.Time + + // PaidExpiresAfter stores the optional strict lower bound for finite paid + // expiry. + PaidExpiresAfter *time.Time + + // DeclaredCountry stores the optional current declared-country filter. + DeclaredCountry common.CountryCode + + // SanctionCode stores the optional active-sanction filter. + SanctionCode policy.SanctionCode + + // LimitCode stores the optional active user-specific limit filter. + LimitCode policy.LimitCode + + // CanLogin stores the optional derived login-eligibility filter. + CanLogin *bool + + // CanCreatePrivateGame stores the optional derived private-game-create + // eligibility filter. 
+ CanCreatePrivateGame *bool + + // CanJoinGame stores the optional derived game-join eligibility filter. + CanJoinGame *bool +} + +// Validate reports whether filters is structurally valid. +func (filters UserListFilters) Validate() error { + if !filters.PaidState.IsKnown() { + return fmt.Errorf("paid state %q is unsupported", filters.PaidState) + } + if filters.PaidExpiresBefore != nil && filters.PaidExpiresBefore.IsZero() { + return fmt.Errorf("paid expires before must not be zero") + } + if filters.PaidExpiresAfter != nil && filters.PaidExpiresAfter.IsZero() { + return fmt.Errorf("paid expires after must not be zero") + } + if !filters.DeclaredCountry.IsZero() { + if err := filters.DeclaredCountry.Validate(); err != nil { + return fmt.Errorf("declared country: %w", err) + } + } + if filters.SanctionCode != "" && !filters.SanctionCode.IsKnown() { + return fmt.Errorf("sanction code %q is unsupported", filters.SanctionCode) + } + if filters.LimitCode != "" && !filters.LimitCode.IsKnown() { + return fmt.Errorf("limit code %q is unsupported", filters.LimitCode) + } + + return nil +} + +// ListUsersInput stores one trusted admin-listing read request. +type ListUsersInput struct { + // PageSize stores the maximum number of ordered user identifiers returned + // in one storage page. + PageSize int + + // PageToken stores the optional opaque continuation cursor. + PageToken string + + // Filters stores the normalized filter set bound into PageToken. + Filters UserListFilters +} + +// Validate reports whether input is structurally complete. 
+func (input ListUsersInput) Validate() error { + switch { + case input.PageSize < 1: + return fmt.Errorf("page size must be at least 1") + case input.PageSize > MaxUserListPageSize: + return fmt.Errorf("page size must be at most %d", MaxUserListPageSize) + case strings.TrimSpace(input.PageToken) != input.PageToken: + return fmt.Errorf("page token must not contain surrounding whitespace") + } + if err := input.Filters.Validate(); err != nil { + return fmt.Errorf("filters: %w", err) + } + + return nil +} + +// ListUsersResult stores one deterministic ordered storage page of user ids. +type ListUsersResult struct { + // UserIDs stores the ordered user identifiers returned for the requested + // page. + UserIDs []common.UserID + + // NextPageToken stores the optional opaque continuation cursor for the next + // page. + NextPageToken string +} + +// UserListStore provides deterministic ordered admin-listing pagination over +// stored user identifiers. +type UserListStore interface { + // ListUserIDs returns one deterministic storage page of user identifiers. + ListUserIDs(ctx context.Context, input ListUsersInput) (ListUsersResult, error) +} diff --git a/user/internal/service/accountview/service.go b/user/internal/service/accountview/service.go new file mode 100644 index 0000000..a705bd9 --- /dev/null +++ b/user/internal/service/accountview/service.go @@ -0,0 +1,336 @@ +// Package accountview materializes the shared account aggregate view used by +// self-service and trusted administrative reads. +package accountview + +import ( + "context" + "errors" + "fmt" + "time" + + "galaxy/user/internal/domain/account" + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + "galaxy/user/internal/domain/policy" + "galaxy/user/internal/ports" + "galaxy/user/internal/service/shared" +) + +// ActorRefView stores transport-ready audit actor metadata. +type ActorRefView struct { + // Type stores the machine-readable actor type. 
+ Type string `json:"type"` + + // ID stores the optional stable actor identifier. + ID string `json:"id,omitempty"` +} + +// EntitlementSnapshotView stores the transport-ready current entitlement +// snapshot of one account. +type EntitlementSnapshotView struct { + // PlanCode stores the effective entitlement plan code. + PlanCode string `json:"plan_code"` + + // IsPaid reports whether the effective plan is paid. + IsPaid bool `json:"is_paid"` + + // Source stores the machine-readable mutation source. + Source string `json:"source"` + + // Actor stores the audit actor metadata attached to the snapshot. + Actor ActorRefView `json:"actor"` + + // ReasonCode stores the machine-readable reason attached to the snapshot. + ReasonCode string `json:"reason_code"` + + // StartsAt stores when the effective state started. + StartsAt time.Time `json:"starts_at"` + + // EndsAt stores the optional finite effective expiry. + EndsAt *time.Time `json:"ends_at,omitempty"` + + // UpdatedAt stores when the snapshot was last recomputed. + UpdatedAt time.Time `json:"updated_at"` +} + +// ActiveSanctionView stores one transport-ready active sanction. +type ActiveSanctionView struct { + // SanctionCode stores the active sanction code. + SanctionCode string `json:"sanction_code"` + + // Scope stores the machine-readable sanction scope. + Scope string `json:"scope"` + + // ReasonCode stores the machine-readable sanction reason. + ReasonCode string `json:"reason_code"` + + // Actor stores the audit actor metadata attached to the sanction. + Actor ActorRefView `json:"actor"` + + // AppliedAt stores when the sanction became active. + AppliedAt time.Time `json:"applied_at"` + + // ExpiresAt stores the optional planned sanction expiry. + ExpiresAt *time.Time `json:"expires_at,omitempty"` +} + +// ActiveLimitView stores one transport-ready active user-specific limit. +type ActiveLimitView struct { + // LimitCode stores the active limit code. 
+ LimitCode string `json:"limit_code"` + + // Value stores the current override value. + Value int `json:"value"` + + // ReasonCode stores the machine-readable limit reason. + ReasonCode string `json:"reason_code"` + + // Actor stores the audit actor metadata attached to the limit. + Actor ActorRefView `json:"actor"` + + // AppliedAt stores when the limit became active. + AppliedAt time.Time `json:"applied_at"` + + // ExpiresAt stores the optional planned limit expiry. + ExpiresAt *time.Time `json:"expires_at,omitempty"` +} + +// AccountView stores the transport-ready account aggregate shared by +// self-service and admin reads. +type AccountView struct { + // UserID stores the durable regular-user identifier. + UserID string `json:"user_id"` + + // Email stores the exact normalized login e-mail address. + Email string `json:"email"` + + // RaceName stores the current user-facing race name. + RaceName string `json:"race_name"` + + // PreferredLanguage stores the current BCP 47 preferred language. + PreferredLanguage string `json:"preferred_language"` + + // TimeZone stores the current IANA time-zone name. + TimeZone string `json:"time_zone"` + + // DeclaredCountry stores the optional latest effective declared country. + DeclaredCountry string `json:"declared_country,omitempty"` + + // Entitlement stores the current entitlement snapshot. + Entitlement EntitlementSnapshotView `json:"entitlement"` + + // ActiveSanctions stores the current active sanctions sorted by code. + ActiveSanctions []ActiveSanctionView `json:"active_sanctions"` + + // ActiveLimits stores the current active user-specific limits sorted by + // code. + ActiveLimits []ActiveLimitView `json:"active_limits"` + + // CreatedAt stores when the account was created. + CreatedAt time.Time `json:"created_at"` + + // UpdatedAt stores when the account was last mutated. + UpdatedAt time.Time `json:"updated_at"` +} + +// Aggregate stores the raw domain state that backs one shared account view. 
+type Aggregate struct { + // AccountRecord stores the current editable account record. + AccountRecord account.UserAccount + + // EntitlementSnapshot stores the current effective entitlement snapshot. + EntitlementSnapshot entitlement.CurrentSnapshot + + // ActiveSanctions stores the active sanctions sorted by code. + ActiveSanctions []policy.SanctionRecord + + // ActiveLimits stores the active user-specific limits sorted by code. + ActiveLimits []policy.LimitRecord +} + +// HasActiveSanction reports whether aggregate currently contains code in its +// active sanction set. +func (aggregate Aggregate) HasActiveSanction(code policy.SanctionCode) bool { + for _, record := range aggregate.ActiveSanctions { + if record.SanctionCode == code { + return true + } + } + + return false +} + +// HasActiveLimit reports whether aggregate currently contains code in its +// active user-specific limit set. +func (aggregate Aggregate) HasActiveLimit(code policy.LimitCode) bool { + for _, record := range aggregate.ActiveLimits { + if record.LimitCode == code { + return true + } + } + + return false +} + +// View materializes Aggregate into the shared transport-ready account view. 
+func (aggregate Aggregate) View() AccountView { + view := AccountView{ + UserID: aggregate.AccountRecord.UserID.String(), + Email: aggregate.AccountRecord.Email.String(), + RaceName: aggregate.AccountRecord.RaceName.String(), + PreferredLanguage: aggregate.AccountRecord.PreferredLanguage.String(), + TimeZone: aggregate.AccountRecord.TimeZone.String(), + Entitlement: EntitlementSnapshotView{ + PlanCode: string(aggregate.EntitlementSnapshot.PlanCode), + IsPaid: aggregate.EntitlementSnapshot.IsPaid, + Source: aggregate.EntitlementSnapshot.Source.String(), + Actor: actorRefView(aggregate.EntitlementSnapshot.Actor), + ReasonCode: aggregate.EntitlementSnapshot.ReasonCode.String(), + StartsAt: aggregate.EntitlementSnapshot.StartsAt.UTC(), + EndsAt: cloneOptionalTime(aggregate.EntitlementSnapshot.EndsAt), + UpdatedAt: aggregate.EntitlementSnapshot.UpdatedAt.UTC(), + }, + ActiveSanctions: make([]ActiveSanctionView, 0, len(aggregate.ActiveSanctions)), + ActiveLimits: make([]ActiveLimitView, 0, len(aggregate.ActiveLimits)), + CreatedAt: aggregate.AccountRecord.CreatedAt.UTC(), + UpdatedAt: aggregate.AccountRecord.UpdatedAt.UTC(), + } + if !aggregate.AccountRecord.DeclaredCountry.IsZero() { + view.DeclaredCountry = aggregate.AccountRecord.DeclaredCountry.String() + } + + for _, sanctionRecord := range aggregate.ActiveSanctions { + view.ActiveSanctions = append(view.ActiveSanctions, ActiveSanctionView{ + SanctionCode: string(sanctionRecord.SanctionCode), + Scope: sanctionRecord.Scope.String(), + ReasonCode: sanctionRecord.ReasonCode.String(), + Actor: actorRefView(sanctionRecord.Actor), + AppliedAt: sanctionRecord.AppliedAt.UTC(), + ExpiresAt: cloneOptionalTime(sanctionRecord.ExpiresAt), + }) + } + for _, limitRecord := range aggregate.ActiveLimits { + view.ActiveLimits = append(view.ActiveLimits, ActiveLimitView{ + LimitCode: string(limitRecord.LimitCode), + Value: limitRecord.Value, + ReasonCode: limitRecord.ReasonCode.String(), + Actor: actorRefView(limitRecord.Actor), + 
AppliedAt: limitRecord.AppliedAt.UTC(), + ExpiresAt: cloneOptionalTime(limitRecord.ExpiresAt), + }) + } + + return view +} + +type entitlementReader interface { + GetByUserID(ctx context.Context, userID common.UserID) (entitlement.CurrentSnapshot, error) +} + +// Loader materializes the shared current account aggregate for one user id. +type Loader struct { + accounts ports.UserAccountStore + entitlements entitlementReader + sanctions ports.SanctionStore + limits ports.LimitStore + clock ports.Clock +} + +// NewLoader constructs one shared account-aggregate loader. +func NewLoader( + accounts ports.UserAccountStore, + entitlements entitlementReader, + sanctions ports.SanctionStore, + limits ports.LimitStore, + clock ports.Clock, +) (*Loader, error) { + switch { + case accounts == nil: + return nil, fmt.Errorf("account view loader: user account store must not be nil") + case entitlements == nil: + return nil, fmt.Errorf("account view loader: entitlement reader must not be nil") + case sanctions == nil: + return nil, fmt.Errorf("account view loader: sanction store must not be nil") + case limits == nil: + return nil, fmt.Errorf("account view loader: limit store must not be nil") + case clock == nil: + return nil, fmt.Errorf("account view loader: clock must not be nil") + default: + return &Loader{ + accounts: accounts, + entitlements: entitlements, + sanctions: sanctions, + limits: limits, + clock: clock, + }, nil + } +} + +// Load materializes the shared account aggregate identified by userID. 
+func (loader *Loader) Load(ctx context.Context, userID common.UserID) (Aggregate, error) { + if loader == nil { + return Aggregate{}, shared.InternalError(fmt.Errorf("account view loader must not be nil")) + } + + accountRecord, err := loader.accounts.GetByUserID(ctx, userID) + switch { + case err == nil: + case errors.Is(err, ports.ErrNotFound): + return Aggregate{}, shared.SubjectNotFound() + default: + return Aggregate{}, shared.ServiceUnavailable(err) + } + + entitlementSnapshot, err := loader.entitlements.GetByUserID(ctx, userID) + switch { + case err == nil: + case errors.Is(err, ports.ErrNotFound): + return Aggregate{}, shared.InternalError(fmt.Errorf("user %q is missing entitlement snapshot", userID)) + default: + return Aggregate{}, shared.ServiceUnavailable(err) + } + + sanctionRecords, err := loader.sanctions.ListByUserID(ctx, userID) + if err != nil { + return Aggregate{}, shared.ServiceUnavailable(err) + } + + limitRecords, err := loader.limits.ListByUserID(ctx, userID) + if err != nil { + return Aggregate{}, shared.ServiceUnavailable(err) + } + + now := loader.clock.Now().UTC() + + activeSanctions, err := policy.ActiveSanctionsAt(sanctionRecords, now) + if err != nil { + return Aggregate{}, shared.InternalError(fmt.Errorf("evaluate active sanctions for user %q: %w", userID, err)) + } + activeLimits, err := policy.ActiveLimitsAt(limitRecords, now) + if err != nil { + return Aggregate{}, shared.InternalError(fmt.Errorf("evaluate active limits for user %q: %w", userID, err)) + } + + return Aggregate{ + AccountRecord: accountRecord, + EntitlementSnapshot: entitlementSnapshot, + ActiveSanctions: activeSanctions, + ActiveLimits: activeLimits, + }, nil +} + +func actorRefView(ref common.ActorRef) ActorRefView { + return ActorRefView{ + Type: ref.Type.String(), + ID: ref.ID.String(), + } +} + +func cloneOptionalTime(value *time.Time) *time.Time { + if value == nil { + return nil + } + + cloned := value.UTC() + return &cloned +} diff --git 
a/user/internal/service/adminusers/service.go b/user/internal/service/adminusers/service.go new file mode 100644 index 0000000..53cf69c --- /dev/null +++ b/user/internal/service/adminusers/service.go @@ -0,0 +1,504 @@ +// Package adminusers implements the trusted administrative user-read surface +// owned by User Service. +package adminusers + +import ( + "context" + "errors" + "fmt" + "strings" + "time" + + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + "galaxy/user/internal/domain/policy" + "galaxy/user/internal/ports" + "galaxy/user/internal/service/accountview" + "galaxy/user/internal/service/shared" +) + +// LookupResult stores one exact trusted admin user lookup result. +type LookupResult struct { + // User stores the shared account aggregate of the resolved user. + User accountview.AccountView `json:"user"` +} + +// GetUserByIDInput stores one exact trusted lookup by stable user identifier. +type GetUserByIDInput struct { + // UserID stores the stable regular-user identifier to resolve. + UserID string +} + +// GetUserByEmailInput stores one exact trusted lookup by normalized e-mail. +type GetUserByEmailInput struct { + // Email stores the normalized login/contact e-mail to resolve. + Email string +} + +// GetUserByRaceNameInput stores one exact trusted lookup by exact stored race +// name. +type GetUserByRaceNameInput struct { + // RaceName stores the exact current race name to resolve. + RaceName string +} + +// ListUsersInput stores one trusted administrative user-list request. +type ListUsersInput struct { + // PageSize stores the requested maximum number of returned users. The zero + // value selects the frozen default page size. + PageSize int + + // PageToken stores the optional opaque continuation cursor. + PageToken string + + // PaidState stores the optional coarse free-versus-paid filter. + PaidState string + + // PaidExpiresBefore stores the optional strict finite paid-expiry upper + // bound. 
+ PaidExpiresBefore *time.Time + + // PaidExpiresAfter stores the optional strict finite paid-expiry lower + // bound. + PaidExpiresAfter *time.Time + + // DeclaredCountry stores the optional current declared-country filter. + DeclaredCountry string + + // SanctionCode stores the optional active-sanction filter. + SanctionCode string + + // LimitCode stores the optional active user-specific limit filter. + LimitCode string + + // CanLogin stores the optional derived login-eligibility filter. + CanLogin *bool + + // CanCreatePrivateGame stores the optional derived private-game-create + // eligibility filter. + CanCreatePrivateGame *bool + + // CanJoinGame stores the optional derived game-join eligibility filter. + CanJoinGame *bool +} + +// ListUsersResult stores one trusted administrative page of user aggregates. +type ListUsersResult struct { + // Items stores the returned user aggregates in deterministic order. + Items []accountview.AccountView `json:"items"` + + // NextPageToken stores the optional continuation cursor for the next page. + NextPageToken string `json:"next_page_token,omitempty"` +} + +type entitlementReader interface { + GetByUserID(ctx context.Context, userID common.UserID) (entitlement.CurrentSnapshot, error) +} + +type readSupport struct { + accounts ports.UserAccountStore + loader *accountview.Loader +} + +func newReadSupport( + accounts ports.UserAccountStore, + entitlements entitlementReader, + sanctions ports.SanctionStore, + limits ports.LimitStore, + clock ports.Clock, +) (readSupport, error) { + loader, err := accountview.NewLoader(accounts, entitlements, sanctions, limits, clock) + if err != nil { + return readSupport{}, fmt.Errorf("account view loader: %w", err) + } + + return readSupport{ + accounts: accounts, + loader: loader, + }, nil +} + +// ByIDGetter executes exact trusted lookups by stable user identifier. +type ByIDGetter struct { + support readSupport +} + +// NewByIDGetter constructs one exact admin lookup by user id. 
+func NewByIDGetter( + accounts ports.UserAccountStore, + entitlements entitlementReader, + sanctions ports.SanctionStore, + limits ports.LimitStore, + clock ports.Clock, +) (*ByIDGetter, error) { + support, err := newReadSupport(accounts, entitlements, sanctions, limits, clock) + if err != nil { + return nil, fmt.Errorf("admin users by-id getter: %w", err) + } + + return &ByIDGetter{support: support}, nil +} + +// Execute resolves one exact user by stable user identifier. +func (service *ByIDGetter) Execute(ctx context.Context, input GetUserByIDInput) (LookupResult, error) { + if ctx == nil { + return LookupResult{}, shared.InvalidRequest("context must not be nil") + } + + userID, err := shared.ParseUserID(input.UserID) + if err != nil { + return LookupResult{}, err + } + + aggregate, err := service.support.loader.Load(ctx, userID) + if err != nil { + return LookupResult{}, err + } + + return LookupResult{User: aggregate.View()}, nil +} + +// ByEmailGetter executes exact trusted lookups by normalized e-mail. +type ByEmailGetter struct { + support readSupport +} + +// NewByEmailGetter constructs one exact admin lookup by normalized e-mail. +func NewByEmailGetter( + accounts ports.UserAccountStore, + entitlements entitlementReader, + sanctions ports.SanctionStore, + limits ports.LimitStore, + clock ports.Clock, +) (*ByEmailGetter, error) { + support, err := newReadSupport(accounts, entitlements, sanctions, limits, clock) + if err != nil { + return nil, fmt.Errorf("admin users by-email getter: %w", err) + } + + return &ByEmailGetter{support: support}, nil +} + +// Execute resolves one exact user by normalized e-mail. 
+func (service *ByEmailGetter) Execute(ctx context.Context, input GetUserByEmailInput) (LookupResult, error) { + if ctx == nil { + return LookupResult{}, shared.InvalidRequest("context must not be nil") + } + + email, err := shared.ParseEmail(input.Email) + if err != nil { + return LookupResult{}, err + } + + record, err := service.support.accounts.GetByEmail(ctx, email) + switch { + case err == nil: + case errors.Is(err, ports.ErrNotFound): + return LookupResult{}, shared.SubjectNotFound() + default: + return LookupResult{}, shared.ServiceUnavailable(err) + } + + aggregate, err := service.support.loader.Load(ctx, record.UserID) + if err != nil { + return LookupResult{}, err + } + + return LookupResult{User: aggregate.View()}, nil +} + +// ByRaceNameGetter executes exact trusted lookups by exact stored race name. +type ByRaceNameGetter struct { + support readSupport +} + +// NewByRaceNameGetter constructs one exact admin lookup by exact stored race +// name. +func NewByRaceNameGetter( + accounts ports.UserAccountStore, + entitlements entitlementReader, + sanctions ports.SanctionStore, + limits ports.LimitStore, + clock ports.Clock, +) (*ByRaceNameGetter, error) { + support, err := newReadSupport(accounts, entitlements, sanctions, limits, clock) + if err != nil { + return nil, fmt.Errorf("admin users by-race-name getter: %w", err) + } + + return &ByRaceNameGetter{support: support}, nil +} + +// Execute resolves one exact user by exact stored race name. 
+func (service *ByRaceNameGetter) Execute(ctx context.Context, input GetUserByRaceNameInput) (LookupResult, error) { + if ctx == nil { + return LookupResult{}, shared.InvalidRequest("context must not be nil") + } + + raceName, err := shared.ParseRaceName(input.RaceName) + if err != nil { + return LookupResult{}, err + } + + record, err := service.support.accounts.GetByRaceName(ctx, raceName) + switch { + case err == nil: + case errors.Is(err, ports.ErrNotFound): + return LookupResult{}, shared.SubjectNotFound() + default: + return LookupResult{}, shared.ServiceUnavailable(err) + } + + aggregate, err := service.support.loader.Load(ctx, record.UserID) + if err != nil { + return LookupResult{}, err + } + + return LookupResult{User: aggregate.View()}, nil +} + +// Lister executes the trusted administrative filtered user listing. +type Lister struct { + support readSupport + listStore ports.UserListStore +} + +// NewLister constructs one trusted administrative filtered user lister. +func NewLister( + accounts ports.UserAccountStore, + entitlements entitlementReader, + sanctions ports.SanctionStore, + limits ports.LimitStore, + clock ports.Clock, + listStore ports.UserListStore, +) (*Lister, error) { + if listStore == nil { + return nil, fmt.Errorf("admin users lister: user list store must not be nil") + } + + support, err := newReadSupport(accounts, entitlements, sanctions, limits, clock) + if err != nil { + return nil, fmt.Errorf("admin users lister: %w", err) + } + + return &Lister{ + support: support, + listStore: listStore, + }, nil +} + +// Execute lists users in deterministic newest-first order and combines all +// supplied filters with logical AND semantics. 
+func (service *Lister) Execute(ctx context.Context, input ListUsersInput) (ListUsersResult, error) { + if ctx == nil { + return ListUsersResult{}, shared.InvalidRequest("context must not be nil") + } + if strings.TrimSpace(input.PageToken) != input.PageToken { + return ListUsersResult{}, shared.InvalidRequest("page_token must not contain surrounding whitespace") + } + + pageSize, err := normalizePageSize(input.PageSize) + if err != nil { + return ListUsersResult{}, err + } + filters, err := parseListFilters(input) + if err != nil { + return ListUsersResult{}, err + } + + result := ListUsersResult{ + Items: make([]accountview.AccountView, 0, pageSize), + } + currentToken := input.PageToken + + for len(result.Items) < pageSize { + candidatePage, err := service.listStore.ListUserIDs(ctx, ports.ListUsersInput{ + PageSize: 1, + PageToken: currentToken, + Filters: filters, + }) + switch { + case err == nil: + case errors.Is(err, ports.ErrInvalidPageToken): + return ListUsersResult{}, shared.InvalidRequest("page_token is invalid or does not match current filters") + default: + return ListUsersResult{}, shared.ServiceUnavailable(err) + } + if len(candidatePage.UserIDs) == 0 { + result.NextPageToken = "" + return result, nil + } + + nextToken := candidatePage.NextPageToken + candidateID := candidatePage.UserIDs[0] + + aggregate, err := service.support.loader.Load(ctx, candidateID) + if err != nil { + return ListUsersResult{}, err + } + if matchesFilters(aggregate, filters) { + result.Items = append(result.Items, aggregate.View()) + result.NextPageToken = nextToken + } + + if nextToken == "" { + result.NextPageToken = "" + return result, nil + } + + currentToken = nextToken + } + + return result, nil +} + +func normalizePageSize(value int) (int, error) { + switch { + case value == 0: + return ports.DefaultUserListPageSize, nil + case value < 0: + return 0, shared.InvalidRequest("page_size must be between 1 and 200") + case value > ports.MaxUserListPageSize: + return 0, 
shared.InvalidRequest("page_size must be between 1 and 200") + default: + return value, nil + } +} + +func parseListFilters(input ListUsersInput) (ports.UserListFilters, error) { + paidState, err := parsePaidState(input.PaidState) + if err != nil { + return ports.UserListFilters{}, err + } + declaredCountry, err := parseCountryCode(input.DeclaredCountry) + if err != nil { + return ports.UserListFilters{}, err + } + sanctionCode, err := parseSanctionCode(input.SanctionCode) + if err != nil { + return ports.UserListFilters{}, err + } + limitCode, err := parseLimitCode(input.LimitCode) + if err != nil { + return ports.UserListFilters{}, err + } + + filters := ports.UserListFilters{ + PaidState: paidState, + PaidExpiresBefore: input.PaidExpiresBefore, + PaidExpiresAfter: input.PaidExpiresAfter, + DeclaredCountry: declaredCountry, + SanctionCode: sanctionCode, + LimitCode: limitCode, + CanLogin: input.CanLogin, + CanCreatePrivateGame: input.CanCreatePrivateGame, + CanJoinGame: input.CanJoinGame, + } + if err := filters.Validate(); err != nil { + return ports.UserListFilters{}, shared.InvalidRequest(err.Error()) + } + + return filters, nil +} + +func parsePaidState(value string) (entitlement.PaidState, error) { + state := entitlement.PaidState(shared.NormalizeString(value)) + if state == "" { + return "", nil + } + if !state.IsKnown() { + return "", shared.InvalidRequest(fmt.Sprintf("paid_state %q is unsupported", state)) + } + + return state, nil +} + +func parseCountryCode(value string) (common.CountryCode, error) { + code := common.CountryCode(shared.NormalizeString(value)) + if code.IsZero() { + return "", nil + } + if err := code.Validate(); err != nil { + return "", shared.InvalidRequest(fmt.Sprintf("declared_country: %s", err.Error())) + } + + return code, nil +} + +func parseSanctionCode(value string) (policy.SanctionCode, error) { + code := policy.SanctionCode(shared.NormalizeString(value)) + if code == "" { + return "", nil + } + if !code.IsKnown() { + return "",
shared.InvalidRequest(fmt.Sprintf("sanction_code %q is unsupported", code)) + } + + return code, nil +} + +func parseLimitCode(value string) (policy.LimitCode, error) { + code := policy.LimitCode(shared.NormalizeString(value)) + if code == "" { + return "", nil + } + if !code.IsKnown() { + return "", shared.InvalidRequest(fmt.Sprintf("limit_code %q is unsupported", code)) + } + + return code, nil +} + +func matchesFilters(aggregate accountview.Aggregate, filters ports.UserListFilters) bool { + switch filters.PaidState { + case entitlement.PaidStateFree: + if aggregate.EntitlementSnapshot.IsPaid { + return false + } + case entitlement.PaidStatePaid: + if !aggregate.EntitlementSnapshot.IsPaid { + return false + } + } + + if filters.PaidExpiresBefore != nil { + if !aggregate.EntitlementSnapshot.HasFiniteExpiry() || !aggregate.EntitlementSnapshot.EndsAt.Before(filters.PaidExpiresBefore.UTC()) { + return false + } + } + if filters.PaidExpiresAfter != nil { + if !aggregate.EntitlementSnapshot.HasFiniteExpiry() || !aggregate.EntitlementSnapshot.EndsAt.After(filters.PaidExpiresAfter.UTC()) { + return false + } + } + if !filters.DeclaredCountry.IsZero() && aggregate.AccountRecord.DeclaredCountry != filters.DeclaredCountry { + return false + } + if filters.SanctionCode != "" && !aggregate.HasActiveSanction(filters.SanctionCode) { + return false + } + if filters.LimitCode != "" && !aggregate.HasActiveLimit(filters.LimitCode) { + return false + } + + canLogin, canCreatePrivateGame, canJoinGame := deriveFilterEligibility(aggregate) + if filters.CanLogin != nil && canLogin != *filters.CanLogin { + return false + } + if filters.CanCreatePrivateGame != nil && canCreatePrivateGame != *filters.CanCreatePrivateGame { + return false + } + if filters.CanJoinGame != nil && canJoinGame != *filters.CanJoinGame { + return false + } + + return true +} + +func deriveFilterEligibility(aggregate accountview.Aggregate) (bool, bool, bool) { + canLogin := 
!aggregate.HasActiveSanction(policy.SanctionCodeLoginBlock) + canCreatePrivateGame := canLogin && + aggregate.EntitlementSnapshot.IsPaid && + !aggregate.HasActiveSanction(policy.SanctionCodePrivateGameCreateBlock) + canJoinGame := canLogin && + !aggregate.HasActiveSanction(policy.SanctionCodeGameJoinBlock) + + return canLogin, canCreatePrivateGame, canJoinGame +} diff --git a/user/internal/service/adminusers/service_test.go b/user/internal/service/adminusers/service_test.go new file mode 100644 index 0000000..6263d47 --- /dev/null +++ b/user/internal/service/adminusers/service_test.go @@ -0,0 +1,623 @@ +package adminusers + +import ( + "context" + "errors" + "fmt" + "testing" + "time" + + "galaxy/user/internal/domain/account" + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + "galaxy/user/internal/domain/policy" + "galaxy/user/internal/ports" + "galaxy/user/internal/service/entitlementsvc" + "galaxy/user/internal/service/shared" + + "github.com/stretchr/testify/require" +) + +func TestByIDGetterExecuteReturnsAggregate(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + service, err := NewByIDGetter( + newFakeAdminAccountStore(validAdminUserAccount("user-123", "pilot@example.com", "Pilot Nova", now)), + &fakeAdminEntitlementSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + common.UserID("user-123"): validAdminFreeSnapshot(common.UserID("user-123"), now), + }, + }, + fakeAdminSanctionStore{ + byUserID: map[common.UserID][]policy.SanctionRecord{ + common.UserID("user-123"): { + validAdminActiveSanction(common.UserID("user-123"), policy.SanctionCodeLoginBlock, now.Add(-time.Hour)), + expiredAdminSanction(common.UserID("user-123"), policy.SanctionCodeGameJoinBlock, now.Add(-2*time.Hour)), + }, + }, + }, + fakeAdminLimitStore{ + byUserID: map[common.UserID][]policy.LimitRecord{ + common.UserID("user-123"): { + validAdminActiveLimit(common.UserID("user-123"), 
policy.LimitCodeMaxOwnedPrivateGames, 3, now.Add(-time.Hour)), + }, + }, + }, + adminFixedClock{now: now}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), GetUserByIDInput{UserID: " user-123 "}) + require.NoError(t, err) + require.Equal(t, "user-123", result.User.UserID) + require.Equal(t, "pilot@example.com", result.User.Email) + require.Len(t, result.User.ActiveSanctions, 1) + require.Equal(t, string(policy.SanctionCodeLoginBlock), result.User.ActiveSanctions[0].SanctionCode) + require.Len(t, result.User.ActiveLimits, 1) + require.Equal(t, string(policy.LimitCodeMaxOwnedPrivateGames), result.User.ActiveLimits[0].LimitCode) +} + +func TestByEmailGetterExecuteUnknownUserReturnsNotFound(t *testing.T) { + t.Parallel() + + service, err := NewByEmailGetter( + newFakeAdminAccountStore(), + &fakeAdminEntitlementSnapshotStore{}, + fakeAdminSanctionStore{}, + fakeAdminLimitStore{}, + adminFixedClock{now: time.Unix(1_775_240_500, 0).UTC()}, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), GetUserByEmailInput{Email: "missing@example.com"}) + require.Error(t, err) + require.Equal(t, shared.ErrorCodeSubjectNotFound, shared.CodeOf(err)) +} + +func TestByRaceNameGetterExecuteReturnsAggregate(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + service, err := NewByRaceNameGetter( + newFakeAdminAccountStore(validAdminUserAccount("user-123", "pilot@example.com", "Pilot Nova", now)), + &fakeAdminEntitlementSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + common.UserID("user-123"): validAdminFreeSnapshot(common.UserID("user-123"), now), + }, + }, + fakeAdminSanctionStore{}, + fakeAdminLimitStore{}, + adminFixedClock{now: now}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), GetUserByRaceNameInput{RaceName: " Pilot Nova "}) + require.NoError(t, err) + require.Equal(t, "user-123", result.User.UserID) + require.Equal(t, "Pilot 
Nova", result.User.RaceName) +} + +func TestListerExecuteAppliesCombinedFiltersWithLogicalAND(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + firstExpiry := now.Add(48 * time.Hour) + secondExpiry := now.Add(72 * time.Hour) + before := now.Add(96 * time.Hour) + after := now.Add(24 * time.Hour) + canLogin := false + canCreatePrivateGame := false + canJoinGame := false + + accountStore := newFakeAdminAccountStore( + validAdminUserAccount("user-300", "u300@example.com", "User 300", now), + validAdminUserAccount("user-200", "u200@example.com", "User 200", now), + validAdminUserAccount("user-100", "u100@example.com", "User 100", now), + ) + snapshotStore := &fakeAdminEntitlementSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + common.UserID("user-300"): validAdminPaidSnapshot(common.UserID("user-300"), now, firstExpiry), + common.UserID("user-200"): validAdminPaidSnapshot(common.UserID("user-200"), now, secondExpiry), + common.UserID("user-100"): validAdminPaidSnapshot(common.UserID("user-100"), now, secondExpiry), + }, + } + sanctionStore := fakeAdminSanctionStore{ + byUserID: map[common.UserID][]policy.SanctionRecord{ + common.UserID("user-300"): { + validAdminActiveSanction(common.UserID("user-300"), policy.SanctionCodeLoginBlock, now.Add(-time.Hour)), + }, + common.UserID("user-200"): { + validAdminActiveSanction(common.UserID("user-200"), policy.SanctionCodeLoginBlock, now.Add(-time.Hour)), + }, + common.UserID("user-100"): { + validAdminActiveSanction(common.UserID("user-100"), policy.SanctionCodeLoginBlock, now.Add(-time.Hour)), + }, + }, + } + limitStore := fakeAdminLimitStore{ + byUserID: map[common.UserID][]policy.LimitRecord{ + common.UserID("user-300"): { + validAdminActiveLimit(common.UserID("user-300"), policy.LimitCodeMaxOwnedPrivateGames, 3, now.Add(-time.Hour)), + }, + common.UserID("user-100"): { + validAdminActiveLimit(common.UserID("user-100"), policy.LimitCodeMaxOwnedPrivateGames, 3, 
now.Add(-time.Hour)), + }, + }, + } + listStore := &fakeAdminListStore{ + pages: map[string]ports.ListUsersResult{ + "": { + UserIDs: []common.UserID{common.UserID("user-300")}, + NextPageToken: "cursor-1", + }, + "cursor-1": { + UserIDs: []common.UserID{common.UserID("user-200")}, + NextPageToken: "cursor-2", + }, + "cursor-2": { + UserIDs: []common.UserID{common.UserID("user-100")}, + NextPageToken: "", + }, + }, + } + + service, err := NewLister(accountStore, snapshotStore, sanctionStore, limitStore, adminFixedClock{now: now}, listStore) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), ListUsersInput{ + PageSize: 2, + PaidState: "paid", + PaidExpiresBefore: &before, + PaidExpiresAfter: &after, + DeclaredCountry: "DE", + SanctionCode: "login_block", + LimitCode: "max_owned_private_games", + CanLogin: &canLogin, + CanCreatePrivateGame: &canCreatePrivateGame, + CanJoinGame: &canJoinGame, + }) + require.NoError(t, err) + require.Len(t, result.Items, 2) + require.Equal(t, "user-300", result.Items[0].UserID) + require.Equal(t, "user-100", result.Items[1].UserID) + require.Equal(t, "", result.NextPageToken) + require.Len(t, listStore.calls, 3) + for _, call := range listStore.calls { + require.Equal(t, 1, call.PageSize) + require.Equal(t, entitlement.PaidStatePaid, call.Filters.PaidState) + require.Equal(t, common.CountryCode("DE"), call.Filters.DeclaredCountry) + require.Equal(t, policy.SanctionCodeLoginBlock, call.Filters.SanctionCode) + require.Equal(t, policy.LimitCodeMaxOwnedPrivateGames, call.Filters.LimitCode) + } +} + +func TestListerExecuteDefaultAndMaximumPageSize(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + accountStore := newFakeAdminAccountStore( + validAdminUserAccount("user-300", "u300@example.com", "User 300", now), + validAdminUserAccount("user-200", "u200@example.com", "User 200", now), + validAdminUserAccount("user-100", "u100@example.com", "User 100", now), + ) + snapshotStore := 
&fakeAdminEntitlementSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + common.UserID("user-300"): validAdminFreeSnapshot(common.UserID("user-300"), now), + common.UserID("user-200"): validAdminFreeSnapshot(common.UserID("user-200"), now), + common.UserID("user-100"): validAdminFreeSnapshot(common.UserID("user-100"), now), + }, + } + + t.Run("default page size", func(t *testing.T) { + t.Parallel() + + listStore := &fakeAdminListStore{ + pages: map[string]ports.ListUsersResult{ + "": { + UserIDs: []common.UserID{common.UserID("user-300")}, + NextPageToken: "cursor-1", + }, + "cursor-1": { + UserIDs: []common.UserID{common.UserID("user-200")}, + NextPageToken: "cursor-2", + }, + "cursor-2": { + UserIDs: []common.UserID{common.UserID("user-100")}, + NextPageToken: "", + }, + }, + } + service, err := NewLister(accountStore, snapshotStore, fakeAdminSanctionStore{}, fakeAdminLimitStore{}, adminFixedClock{now: now}, listStore) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), ListUsersInput{}) + require.NoError(t, err) + require.Len(t, result.Items, 3) + }) + + t.Run("maximum page size", func(t *testing.T) { + t.Parallel() + + listStore := &fakeAdminListStore{ + pages: map[string]ports.ListUsersResult{ + "": { + UserIDs: []common.UserID{common.UserID("user-300")}, + NextPageToken: "", + }, + }, + } + service, err := NewLister(accountStore, snapshotStore, fakeAdminSanctionStore{}, fakeAdminLimitStore{}, adminFixedClock{now: now}, listStore) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), ListUsersInput{PageSize: ports.MaxUserListPageSize}) + require.NoError(t, err) + require.Len(t, result.Items, 1) + }) + + t.Run("above maximum is rejected", func(t *testing.T) { + t.Parallel() + + service, err := NewLister(accountStore, snapshotStore, fakeAdminSanctionStore{}, fakeAdminLimitStore{}, adminFixedClock{now: now}, &fakeAdminListStore{}) + require.NoError(t, err) + + _, err = 
service.Execute(context.Background(), ListUsersInput{PageSize: ports.MaxUserListPageSize + 1})
+        require.Error(t, err)
+        require.Equal(t, shared.ErrorCodeInvalidRequest, shared.CodeOf(err))
+        require.Equal(t, "page_size must be between 1 and 200", err.Error())
+    })
+}
+
+func TestListerExecuteInvalidPageTokenReturnsInvalidRequest(t *testing.T) {
+    t.Parallel()
+
+    now := time.Unix(1_775_240_500, 0).UTC()
+    service, err := NewLister(
+        newFakeAdminAccountStore(validAdminUserAccount("user-123", "pilot@example.com", "Pilot Nova", now)),
+        &fakeAdminEntitlementSnapshotStore{
+            byUserID: map[common.UserID]entitlement.CurrentSnapshot{
+                common.UserID("user-123"): validAdminFreeSnapshot(common.UserID("user-123"), now),
+            },
+        },
+        fakeAdminSanctionStore{},
+        fakeAdminLimitStore{},
+        adminFixedClock{now: now},
+        &fakeAdminListStore{err: fmt.Errorf("wrapped: %w", ports.ErrInvalidPageToken)},
+    )
+    require.NoError(t, err)
+
+    _, err = service.Execute(context.Background(), ListUsersInput{PageToken: "bad-token"})
+    require.Error(t, err)
+    require.Equal(t, shared.ErrorCodeInvalidRequest, shared.CodeOf(err))
+    require.Equal(t, "page_token is invalid or does not match current filters", err.Error())
+}
+
+func TestListerExecuteRepairsExpiredPaidSnapshotBeforeFiltering(t *testing.T) {
+    t.Parallel()
+
+    now := time.Unix(1_775_240_500, 0).UTC()
+    expiredAt := now.Add(-time.Hour)
+    accountStore := newFakeAdminAccountStore(validAdminUserAccount("user-123", "pilot@example.com", "Pilot Nova", now))
+    snapshotStore := &fakeAdminEntitlementSnapshotStore{
+        byUserID: map[common.UserID]entitlement.CurrentSnapshot{
+            common.UserID("user-123"): {
+                UserID:     common.UserID("user-123"),
+                PlanCode:   entitlement.PlanCodePaidMonthly,
+                IsPaid:     true,
+                StartsAt:   now.Add(-30 * 24 * time.Hour),
+                EndsAt:     adminTimePointer(expiredAt),
+                Source:     common.Source("admin"),
+                Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+                ReasonCode: common.ReasonCode("manual_grant"),
+                UpdatedAt:  expiredAt,
+            },
+        },
+    }
+    reader, err := entitlementsvc.NewReader(
+        snapshotStore,
+        &fakeAdminEntitlementLifecycleStore{snapshotStore: snapshotStore},
+        adminFixedClock{now: now},
+        adminReaderIDGenerator{recordID: entitlement.EntitlementRecordID("entitlement-repair-free-record")},
+    )
+    require.NoError(t, err)
+    listStore := &fakeAdminListStore{
+        pages: map[string]ports.ListUsersResult{
+            "": {
+                UserIDs:       []common.UserID{common.UserID("user-123")},
+                NextPageToken: "",
+            },
+        },
+    }
+    service, err := NewLister(accountStore, reader, fakeAdminSanctionStore{}, fakeAdminLimitStore{}, adminFixedClock{now: now}, listStore)
+    require.NoError(t, err)
+
+    result, err := service.Execute(context.Background(), ListUsersInput{PaidState: "free"})
+    require.NoError(t, err)
+    require.Len(t, result.Items, 1)
+    require.Equal(t, "free", result.Items[0].Entitlement.PlanCode)
+    require.False(t, result.Items[0].Entitlement.IsPaid)
+
+    storedSnapshot, err := snapshotStore.GetByUserID(context.Background(), common.UserID("user-123"))
+    require.NoError(t, err)
+    require.Equal(t, entitlement.PlanCodeFree, storedSnapshot.PlanCode)
+    require.False(t, storedSnapshot.IsPaid)
+    require.Equal(t, expiredAt, storedSnapshot.StartsAt)
+}
+
+type adminFixedClock struct {
+    now time.Time
+}
+
+func (clock adminFixedClock) Now() time.Time {
+    return clock.now
+}
+
+type adminReaderIDGenerator struct {
+    recordID entitlement.EntitlementRecordID
+}
+
+func (generator adminReaderIDGenerator) NewUserID() (common.UserID, error) {
+    return "", errors.New("unexpected NewUserID call")
+}
+
+func (generator adminReaderIDGenerator) NewInitialRaceName() (common.RaceName, error) {
+    return "", errors.New("unexpected NewInitialRaceName call")
+}
+
+func (generator adminReaderIDGenerator) NewEntitlementRecordID() (entitlement.EntitlementRecordID, error) {
+    return generator.recordID, nil
+}
+
+func (generator adminReaderIDGenerator) NewSanctionRecordID() (policy.SanctionRecordID, error) {
+    return "", errors.New("unexpected NewSanctionRecordID call")
+}
+
+func (generator adminReaderIDGenerator) NewLimitRecordID() (policy.LimitRecordID, error) {
+    return "", errors.New("unexpected NewLimitRecordID call")
+}
+
+type fakeAdminAccountStore struct {
+    byUserID   map[common.UserID]account.UserAccount
+    byEmail    map[common.Email]common.UserID
+    byRaceName map[common.RaceName]common.UserID
+    updateErr  error
+    renameErr  error
+    createErr  error
+    existsByID map[common.UserID]bool
+}
+
+func newFakeAdminAccountStore(records ...account.UserAccount) *fakeAdminAccountStore {
+    store := &fakeAdminAccountStore{
+        byUserID:   make(map[common.UserID]account.UserAccount, len(records)),
+        byEmail:    make(map[common.Email]common.UserID, len(records)),
+        byRaceName: make(map[common.RaceName]common.UserID, len(records)),
+        existsByID: make(map[common.UserID]bool, len(records)),
+    }
+
+    for _, record := range records {
+        store.byUserID[record.UserID] = record
+        store.byEmail[record.Email] = record.UserID
+        store.byRaceName[record.RaceName] = record.UserID
+        store.existsByID[record.UserID] = true
+    }
+
+    return store
+}
+
+func (store *fakeAdminAccountStore) Create(context.Context, ports.CreateAccountInput) error {
+    return store.createErr
+}
+
+func (store *fakeAdminAccountStore) GetByUserID(_ context.Context, userID common.UserID) (account.UserAccount, error) {
+    record, ok := store.byUserID[userID]
+    if !ok {
+        return account.UserAccount{}, ports.ErrNotFound
+    }
+
+    return record, nil
+}
+
+func (store *fakeAdminAccountStore) GetByEmail(_ context.Context, email common.Email) (account.UserAccount, error) {
+    userID, ok := store.byEmail[email]
+    if !ok {
+        return account.UserAccount{}, ports.ErrNotFound
+    }
+
+    return store.byUserID[userID], nil
+}
+
+func (store *fakeAdminAccountStore) GetByRaceName(_ context.Context, raceName common.RaceName) (account.UserAccount, error) {
+    userID, ok := store.byRaceName[raceName]
+    if !ok {
+        return account.UserAccount{}, ports.ErrNotFound
+    }
+
+    return store.byUserID[userID], nil
+}
+
+func (store *fakeAdminAccountStore) ExistsByUserID(_ context.Context, userID common.UserID) (bool, error) {
+    return store.existsByID[userID], nil
+}
+
+func (store *fakeAdminAccountStore) RenameRaceName(context.Context, ports.RenameRaceNameInput) error {
+    return store.renameErr
+}
+
+func (store *fakeAdminAccountStore) Update(context.Context, account.UserAccount) error {
+    return store.updateErr
+}
+
+type fakeAdminEntitlementSnapshotStore struct {
+    byUserID map[common.UserID]entitlement.CurrentSnapshot
+}
+
+func (store *fakeAdminEntitlementSnapshotStore) GetByUserID(_ context.Context, userID common.UserID) (entitlement.CurrentSnapshot, error) {
+    record, ok := store.byUserID[userID]
+    if !ok {
+        return entitlement.CurrentSnapshot{}, ports.ErrNotFound
+    }
+
+    return record, nil
+}
+
+func (store *fakeAdminEntitlementSnapshotStore) Put(_ context.Context, record entitlement.CurrentSnapshot) error {
+    if store.byUserID == nil {
+        store.byUserID = make(map[common.UserID]entitlement.CurrentSnapshot)
+    }
+    store.byUserID[record.UserID] = record
+    return nil
+}
+
+type fakeAdminEntitlementLifecycleStore struct {
+    snapshotStore *fakeAdminEntitlementSnapshotStore
+}
+
+func (store *fakeAdminEntitlementLifecycleStore) Grant(context.Context, ports.GrantEntitlementInput) error {
+    return errors.New("unexpected Grant call")
+}
+
+func (store *fakeAdminEntitlementLifecycleStore) Extend(context.Context, ports.ExtendEntitlementInput) error {
+    return errors.New("unexpected Extend call")
+}
+
+func (store *fakeAdminEntitlementLifecycleStore) Revoke(context.Context, ports.RevokeEntitlementInput) error {
+    return errors.New("unexpected Revoke call")
+}
+
+func (store *fakeAdminEntitlementLifecycleStore) RepairExpired(ctx context.Context, input ports.RepairExpiredEntitlementInput) error {
+    return store.snapshotStore.Put(ctx, input.NewSnapshot)
+}
+
+type fakeAdminSanctionStore struct {
+    byUserID map[common.UserID][]policy.SanctionRecord
+}
+
+func (store fakeAdminSanctionStore) Create(context.Context, policy.SanctionRecord) error {
+    return nil
+}
+
+func (store fakeAdminSanctionStore) GetByRecordID(context.Context, policy.SanctionRecordID) (policy.SanctionRecord, error) {
+    return policy.SanctionRecord{}, ports.ErrNotFound
+}
+
+func (store fakeAdminSanctionStore) ListByUserID(_ context.Context, userID common.UserID) ([]policy.SanctionRecord, error) {
+    return append([]policy.SanctionRecord(nil), store.byUserID[userID]...), nil
+}
+
+func (store fakeAdminSanctionStore) Update(context.Context, policy.SanctionRecord) error {
+    return nil
+}
+
+type fakeAdminLimitStore struct {
+    byUserID map[common.UserID][]policy.LimitRecord
+}
+
+func (store fakeAdminLimitStore) Create(context.Context, policy.LimitRecord) error {
+    return nil
+}
+
+func (store fakeAdminLimitStore) GetByRecordID(context.Context, policy.LimitRecordID) (policy.LimitRecord, error) {
+    return policy.LimitRecord{}, ports.ErrNotFound
+}
+
+func (store fakeAdminLimitStore) ListByUserID(_ context.Context, userID common.UserID) ([]policy.LimitRecord, error) {
+    return append([]policy.LimitRecord(nil), store.byUserID[userID]...), nil
+}
+
+func (store fakeAdminLimitStore) Update(context.Context, policy.LimitRecord) error {
+    return nil
+}
+
+type fakeAdminListStore struct {
+    pages map[string]ports.ListUsersResult
+    err   error
+    calls []ports.ListUsersInput
+}
+
+func (store *fakeAdminListStore) ListUserIDs(_ context.Context, input ports.ListUsersInput) (ports.ListUsersResult, error) {
+    store.calls = append(store.calls, input)
+    if store.err != nil {
+        return ports.ListUsersResult{}, store.err
+    }
+    result, ok := store.pages[input.PageToken]
+    if !ok {
+        return ports.ListUsersResult{}, nil
+    }
+
+    return result, nil
+}
+
+func validAdminUserAccount(userID string, email string, raceName string, now time.Time) account.UserAccount {
+    return account.UserAccount{
+        UserID:            common.UserID(userID),
+        Email:             common.Email(email),
+        RaceName:          common.RaceName(raceName),
+        PreferredLanguage: common.LanguageTag("en"),
+        TimeZone:          common.TimeZoneName("Europe/Kaliningrad"),
+        DeclaredCountry:   common.CountryCode("DE"),
+        CreatedAt:         now,
+        UpdatedAt:         now,
+    }
+}
+
+func validAdminFreeSnapshot(userID common.UserID, now time.Time) entitlement.CurrentSnapshot {
+    return entitlement.CurrentSnapshot{
+        UserID:     userID,
+        PlanCode:   entitlement.PlanCodeFree,
+        IsPaid:     false,
+        StartsAt:   now,
+        Source:     common.Source("auth_registration"),
+        Actor:      common.ActorRef{Type: common.ActorType("service"), ID: common.ActorID("user-service")},
+        ReasonCode: common.ReasonCode("initial_free_entitlement"),
+        UpdatedAt:  now,
+    }
+}
+
+func validAdminPaidSnapshot(userID common.UserID, now time.Time, endsAt time.Time) entitlement.CurrentSnapshot {
+    return entitlement.CurrentSnapshot{
+        UserID:     userID,
+        PlanCode:   entitlement.PlanCodePaidMonthly,
+        IsPaid:     true,
+        StartsAt:   now.Add(-24 * time.Hour),
+        EndsAt:     adminTimePointer(endsAt),
+        Source:     common.Source("admin"),
+        Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+        ReasonCode: common.ReasonCode("manual_grant"),
+        UpdatedAt:  now,
+    }
+}
+
+func validAdminActiveSanction(userID common.UserID, code policy.SanctionCode, appliedAt time.Time) policy.SanctionRecord {
+    return policy.SanctionRecord{
+        RecordID:     policy.SanctionRecordID("sanction-" + string(code) + "-" + userID.String()),
+        UserID:       userID,
+        SanctionCode: code,
+        Scope:        common.Scope("auth"),
+        ReasonCode:   common.ReasonCode("manual_block"),
+        Actor:        common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+        AppliedAt:    appliedAt,
+    }
+}
+
+func expiredAdminSanction(userID common.UserID, code policy.SanctionCode, appliedAt time.Time) policy.SanctionRecord {
+    record := validAdminActiveSanction(userID, code, appliedAt)
+    record.ExpiresAt = adminTimePointer(appliedAt.Add(30 * time.Minute))
+    return record
+}
+
+func validAdminActiveLimit(userID common.UserID, code policy.LimitCode, value int, appliedAt time.Time) policy.LimitRecord {
+    return policy.LimitRecord{
+        RecordID:   policy.LimitRecordID("limit-" + string(code) + "-" + userID.String()),
+        UserID:     userID,
+        LimitCode:  code,
+        Value:      value,
+        ReasonCode: common.ReasonCode("manual_override"),
+        Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+        AppliedAt:  appliedAt,
+    }
+}
+
+func adminTimePointer(value time.Time) *time.Time {
+    copied := value.UTC()
+    return &copied
+}
diff --git a/user/internal/service/authdirectory/service.go b/user/internal/service/authdirectory/service.go
new file mode 100644
index 0000000..6b13cfc
--- /dev/null
+++ b/user/internal/service/authdirectory/service.go
@@ -0,0 +1,614 @@
+// Package authdirectory implements the auth-facing user-resolution, ensure,
+// existence, and block use cases owned by the user service.
+package authdirectory
+
+import (
+    "context"
+    "errors"
+    "fmt"
+    "log/slog"
+
+    "galaxy/user/internal/domain/account"
+    "galaxy/user/internal/domain/common"
+    "galaxy/user/internal/domain/entitlement"
+    "galaxy/user/internal/ports"
+    "galaxy/user/internal/service/shared"
+    "galaxy/user/internal/telemetry"
+)
+
+const (
+    initialEntitlementSource     common.Source     = "auth_registration"
+    initialEntitlementReasonCode common.ReasonCode = "initial_free_entitlement"
+    initialEntitlementActorType  common.ActorType  = "service"
+    initialEntitlementActorID    common.ActorID    = "user-service"
+
+    ensureCreateRetryLimit = 8
+)
+
+// ResolveByEmailInput stores one auth-facing resolve-by-email request.
+type ResolveByEmailInput struct {
+    // Email stores the caller-supplied e-mail subject.
+    Email string
+}
+
+// ResolveByEmailResult stores one auth-facing resolve-by-email response.
+type ResolveByEmailResult struct {
+    // Kind stores the coarse user-resolution outcome.
+    Kind string
+
+    // UserID is present only when Kind is `existing`.
+    UserID string
+
+    // BlockReasonCode is present only when Kind is `blocked`.
+    BlockReasonCode string
+}
+
+// Resolver executes the auth-facing resolve-by-email use case.
+type Resolver struct {
+    store     ports.AuthDirectoryStore
+    logger    *slog.Logger
+    telemetry *telemetry.Runtime
+}
+
+// NewResolver returns one resolve-by-email use case instance.
+func NewResolver(store ports.AuthDirectoryStore) (*Resolver, error) {
+    return NewResolverWithObservability(store, nil, nil)
+}
+
+// NewResolverWithObservability returns one resolve-by-email use case instance
+// with optional structured logging and metrics hooks.
+func NewResolverWithObservability(
+    store ports.AuthDirectoryStore,
+    logger *slog.Logger,
+    telemetryRuntime *telemetry.Runtime,
+) (*Resolver, error) {
+    if store == nil {
+        return nil, fmt.Errorf("authdirectory resolver: auth directory store must not be nil")
+    }
+
+    return &Resolver{
+        store:     store,
+        logger:    logger,
+        telemetry: telemetryRuntime,
+    }, nil
+}
+
+// Execute resolves one e-mail subject without creating any account.
+func (service *Resolver) Execute(ctx context.Context, input ResolveByEmailInput) (result ResolveByEmailResult, err error) {
+    outcome := "failed"
+    defer func() {
+        if service.telemetry != nil {
+            service.telemetry.RecordAuthResolutionOutcome(ctx, "resolve_by_email", outcome)
+        }
+        if err != nil {
+            shared.LogServiceOutcome(service.logger, ctx, "auth resolution failed", err,
+                "use_case", "resolve_by_email",
+                "outcome", outcome,
+            )
+        }
+    }()
+
+    if ctx == nil {
+        return ResolveByEmailResult{}, shared.InvalidRequest("context must not be nil")
+    }
+
+    email, err := shared.ParseEmail(input.Email)
+    if err != nil {
+        return ResolveByEmailResult{}, err
+    }
+
+    resolution, err := service.store.ResolveByEmail(ctx, email)
+    if err != nil {
+        return ResolveByEmailResult{}, shared.ServiceUnavailable(err)
+    }
+    if err := resolution.Validate(); err != nil {
+        return ResolveByEmailResult{}, shared.InternalError(err)
+    }
+
+    result = ResolveByEmailResult{
+        Kind: string(resolution.Kind),
+    }
+    if !resolution.UserID.IsZero() {
+        result.UserID = resolution.UserID.String()
+    }
+    if !resolution.BlockReasonCode.IsZero() {
+        result.BlockReasonCode = resolution.BlockReasonCode.String()
+    }
+    outcome = result.Kind
+
+    return result, nil
+}
+
+// RegistrationContext stores the create-only auth-facing initialization
+// context forwarded by authsession.
+type RegistrationContext struct {
+    // PreferredLanguage stores the initial preferred language.
+    PreferredLanguage string
+
+    // TimeZone stores the initial declared time-zone name.
+    TimeZone string
+}
+
+// EnsureByEmailInput stores one auth-facing ensure-by-email request.
+type EnsureByEmailInput struct {
+    // Email stores the caller-supplied e-mail subject.
+    Email string
+
+    // RegistrationContext stores the required create-only registration context.
+    RegistrationContext *RegistrationContext
+}
+
+// EnsureByEmailResult stores one auth-facing ensure-by-email response.
+type EnsureByEmailResult struct {
+    // Outcome stores the coarse ensure outcome.
+    Outcome string
+
+    // UserID is present only for `existing` and `created`.
+    UserID string
+
+    // BlockReasonCode is present only for `blocked`.
+    BlockReasonCode string
+}
+
+// Ensurer executes the auth-facing ensure-by-email use case.
+type Ensurer struct {
+    store                ports.AuthDirectoryStore
+    clock                ports.Clock
+    idGenerator          ports.IDGenerator
+    policy               ports.RaceNamePolicy
+    logger               *slog.Logger
+    telemetry            *telemetry.Runtime
+    profilePublisher     ports.ProfileChangedPublisher
+    settingsPublisher    ports.SettingsChangedPublisher
+    entitlementPublisher ports.EntitlementChangedPublisher
+}
+
+// NewEnsurer returns one ensure-by-email use case instance.
+func NewEnsurer(
+    store ports.AuthDirectoryStore,
+    clock ports.Clock,
+    idGenerator ports.IDGenerator,
+    policy ports.RaceNamePolicy,
+) (*Ensurer, error) {
+    return NewEnsurerWithObservability(store, clock, idGenerator, policy, nil, nil, nil, nil, nil)
+}
+
+// NewEnsurerWithObservability returns one ensure-by-email use case instance
+// with optional structured logging, metrics, and post-commit event
+// publication hooks.
+func NewEnsurerWithObservability(
+    store ports.AuthDirectoryStore,
+    clock ports.Clock,
+    idGenerator ports.IDGenerator,
+    policy ports.RaceNamePolicy,
+    logger *slog.Logger,
+    telemetryRuntime *telemetry.Runtime,
+    profilePublisher ports.ProfileChangedPublisher,
+    settingsPublisher ports.SettingsChangedPublisher,
+    entitlementPublisher ports.EntitlementChangedPublisher,
+) (*Ensurer, error) {
+    switch {
+    case store == nil:
+        return nil, fmt.Errorf("authdirectory ensurer: auth directory store must not be nil")
+    case clock == nil:
+        return nil, fmt.Errorf("authdirectory ensurer: clock must not be nil")
+    case idGenerator == nil:
+        return nil, fmt.Errorf("authdirectory ensurer: id generator must not be nil")
+    case policy == nil:
+        return nil, fmt.Errorf("authdirectory ensurer: race-name policy must not be nil")
+    default:
+        return &Ensurer{
+            store:                store,
+            clock:                clock,
+            idGenerator:          idGenerator,
+            policy:               policy,
+            logger:               logger,
+            telemetry:            telemetryRuntime,
+            profilePublisher:     profilePublisher,
+            settingsPublisher:    settingsPublisher,
+            entitlementPublisher: entitlementPublisher,
+        }, nil
+    }
+}
+
+// Execute ensures that one e-mail subject maps to an existing user, a newly
+// created user, or a blocked outcome.
+func (service *Ensurer) Execute(ctx context.Context, input EnsureByEmailInput) (result EnsureByEmailResult, err error) {
+    outcome := "failed"
+    userIDString := ""
+    defer func() {
+        if service.telemetry != nil {
+            service.telemetry.RecordUserCreationOutcome(ctx, outcome)
+        }
+        shared.LogServiceOutcome(service.logger, ctx, "ensure by email completed", err,
+            "use_case", "ensure_by_email",
+            "outcome", outcome,
+            "user_id", userIDString,
+            "source", initialEntitlementSource.String(),
+        )
+    }()
+
+    if ctx == nil {
+        return EnsureByEmailResult{}, shared.InvalidRequest("context must not be nil")
+    }
+
+    email, err := shared.ParseEmail(input.Email)
+    if err != nil {
+        return EnsureByEmailResult{}, err
+    }
+    if input.RegistrationContext == nil {
+        return EnsureByEmailResult{}, shared.InvalidRequest("registration_context must be present")
+    }
+
+    preferredLanguage, err := shared.ParseRegistrationPreferredLanguage(input.RegistrationContext.PreferredLanguage)
+    if err != nil {
+        return EnsureByEmailResult{}, err
+    }
+    timeZone, err := shared.ParseRegistrationTimeZoneName(input.RegistrationContext.TimeZone)
+    if err != nil {
+        return EnsureByEmailResult{}, err
+    }
+
+    now := service.clock.Now().UTC()
+
+    for attempt := 0; attempt < ensureCreateRetryLimit; attempt++ {
+        userID, err := service.idGenerator.NewUserID()
+        if err != nil {
+            return EnsureByEmailResult{}, shared.ServiceUnavailable(err)
+        }
+        raceName, err := service.idGenerator.NewInitialRaceName()
+        if err != nil {
+            return EnsureByEmailResult{}, shared.ServiceUnavailable(err)
+        }
+
+        accountRecord := account.UserAccount{
+            UserID:            userID,
+            Email:             email,
+            RaceName:          raceName,
+            PreferredLanguage: preferredLanguage,
+            TimeZone:          timeZone,
+            CreatedAt:         now,
+            UpdatedAt:         now,
+        }
+        entitlementSnapshot := entitlement.CurrentSnapshot{
+            UserID:     userID,
+            PlanCode:   entitlement.PlanCodeFree,
+            IsPaid:     false,
+            StartsAt:   now,
+            Source:     initialEntitlementSource,
+            Actor:      common.ActorRef{Type: initialEntitlementActorType, ID: initialEntitlementActorID},
+            ReasonCode: initialEntitlementReasonCode,
+            UpdatedAt:  now,
+        }
+        entitlementRecordID, err := service.idGenerator.NewEntitlementRecordID()
+        if err != nil {
+            return EnsureByEmailResult{}, shared.ServiceUnavailable(err)
+        }
+        entitlementRecord := entitlement.PeriodRecord{
+            RecordID:   entitlementRecordID,
+            UserID:     userID,
+            PlanCode:   entitlement.PlanCodeFree,
+            Source:     initialEntitlementSource,
+            Actor:      common.ActorRef{Type: initialEntitlementActorType, ID: initialEntitlementActorID},
+            ReasonCode: initialEntitlementReasonCode,
+            StartsAt:   now,
+            CreatedAt:  now,
+        }
+        reservation, err := shared.BuildRaceNameReservation(service.policy, userID, raceName, now)
+        if err != nil {
+            return EnsureByEmailResult{}, shared.ServiceUnavailable(err)
+        }
+
+        ensureResult, err := service.store.EnsureByEmail(ctx, ports.EnsureByEmailInput{
+            Email:             email,
+            Account:           accountRecord,
+            Entitlement:       entitlementSnapshot,
+            EntitlementRecord: entitlementRecord,
+            Reservation:       reservation,
+        })
+        if err != nil {
+            if errors.Is(err, ports.ErrRaceNameConflict) && service.telemetry != nil {
+                service.telemetry.RecordRaceNameReservationConflict(ctx, "ensure_by_email")
+            }
+            if errors.Is(err, ports.ErrConflict) {
+                continue
+            }
+            return EnsureByEmailResult{}, shared.ServiceUnavailable(err)
+        }
+        if err := ensureResult.Validate(); err != nil {
+            return EnsureByEmailResult{}, shared.InternalError(err)
+        }
+
+        result = EnsureByEmailResult{
+            Outcome: string(ensureResult.Outcome),
+        }
+        if !ensureResult.UserID.IsZero() {
+            result.UserID = ensureResult.UserID.String()
+            userIDString = result.UserID
+        }
+        if !ensureResult.BlockReasonCode.IsZero() {
+            result.BlockReasonCode = ensureResult.BlockReasonCode.String()
+        }
+        outcome = result.Outcome
+
+        if result.Outcome == string(ports.EnsureByEmailOutcomeCreated) {
+            service.publishInitializedEvents(ctx, accountRecord, entitlementSnapshot)
+        }
+
+        return result, nil
+    }
+
+    return EnsureByEmailResult{}, shared.ServiceUnavailable(fmt.Errorf("ensure-by-email conflict retry limit exceeded"))
+}
+
+func (service *Ensurer) publishInitializedEvents(
+    ctx context.Context,
+    accountRecord account.UserAccount,
+    entitlementSnapshot entitlement.CurrentSnapshot,
+) {
+    occurredAt := accountRecord.UpdatedAt.UTC()
+
+    service.publishProfileChanged(ctx, ports.ProfileChangedEvent{
+        UserID:     accountRecord.UserID,
+        OccurredAt: occurredAt,
+        Source:     initialEntitlementSource,
+        Operation:  ports.ProfileChangedOperationInitialized,
+        RaceName:   accountRecord.RaceName,
+    })
+    service.publishSettingsChanged(ctx, ports.SettingsChangedEvent{
+        UserID:            accountRecord.UserID,
+        OccurredAt:        occurredAt,
+        Source:            initialEntitlementSource,
+        Operation:         ports.SettingsChangedOperationInitialized,
+        PreferredLanguage: accountRecord.PreferredLanguage,
+        TimeZone:          accountRecord.TimeZone,
+    })
+    service.publishEntitlementChanged(ctx, ports.EntitlementChangedEvent{
+        UserID:     entitlementSnapshot.UserID,
+        OccurredAt: occurredAt,
+        Source:     initialEntitlementSource,
+        Operation:  ports.EntitlementChangedOperationInitialized,
+        PlanCode:   entitlementSnapshot.PlanCode,
+        IsPaid:     entitlementSnapshot.IsPaid,
+        StartsAt:   entitlementSnapshot.StartsAt,
+        EndsAt:     entitlementSnapshot.EndsAt,
+        ReasonCode: entitlementSnapshot.ReasonCode,
+        Actor:      entitlementSnapshot.Actor,
+        UpdatedAt:  entitlementSnapshot.UpdatedAt,
+    })
+}
+
+func (service *Ensurer) publishProfileChanged(ctx context.Context, event ports.ProfileChangedEvent) {
+    if service.profilePublisher == nil {
+        return
+    }
+    if err := service.profilePublisher.PublishProfileChanged(ctx, event); err != nil {
+        if service.telemetry != nil {
+            service.telemetry.RecordEventPublicationFailure(ctx, ports.ProfileChangedEventType)
+        }
+        shared.LogEventPublicationFailure(service.logger, ctx, ports.ProfileChangedEventType, err,
+            "use_case", "ensure_by_email",
+            "user_id", event.UserID.String(),
+            "source", event.Source.String(),
+        )
+    }
+}
+
+func (service *Ensurer) publishSettingsChanged(ctx context.Context, event ports.SettingsChangedEvent) {
+    if service.settingsPublisher == nil {
+        return
+    }
+    if err := service.settingsPublisher.PublishSettingsChanged(ctx, event); err != nil {
+        if service.telemetry != nil {
+            service.telemetry.RecordEventPublicationFailure(ctx, ports.SettingsChangedEventType)
+        }
+        shared.LogEventPublicationFailure(service.logger, ctx, ports.SettingsChangedEventType, err,
+            "use_case", "ensure_by_email",
+            "user_id", event.UserID.String(),
+            "source", event.Source.String(),
+        )
+    }
+}
+
+func (service *Ensurer) publishEntitlementChanged(ctx context.Context, event ports.EntitlementChangedEvent) {
+    if service.entitlementPublisher == nil {
+        return
+    }
+    if err := service.entitlementPublisher.PublishEntitlementChanged(ctx, event); err != nil {
+        if service.telemetry != nil {
+            service.telemetry.RecordEventPublicationFailure(ctx, ports.EntitlementChangedEventType)
+        }
+        shared.LogEventPublicationFailure(service.logger, ctx, ports.EntitlementChangedEventType, err,
+            "use_case", "ensure_by_email",
+            "user_id", event.UserID.String(),
+            "source", event.Source.String(),
+            "reason_code", event.ReasonCode.String(),
+            "actor_type", event.Actor.Type.String(),
+            "actor_id", event.Actor.ID.String(),
+        )
+    }
+}
+
+// ExistsByUserIDInput stores one auth-facing existence check request.
+type ExistsByUserIDInput struct {
+    // UserID stores the caller-supplied stable user identifier.
+    UserID string
+}
+
+// ExistsByUserIDResult stores one auth-facing existence check response.
+type ExistsByUserIDResult struct {
+    // Exists reports whether the supplied user identifier currently exists.
+    Exists bool
+}
+
+// ExistenceChecker executes the auth-facing exists-by-user-id use case.
+type ExistenceChecker struct {
+    store ports.AuthDirectoryStore
+}
+
+// NewExistenceChecker returns one exists-by-user-id use case instance.
+func NewExistenceChecker(store ports.AuthDirectoryStore) (*ExistenceChecker, error) {
+    if store == nil {
+        return nil, fmt.Errorf("authdirectory existence checker: auth directory store must not be nil")
+    }
+
+    return &ExistenceChecker{store: store}, nil
+}
+
+// Execute reports whether one stable user identifier exists.
+func (service *ExistenceChecker) Execute(ctx context.Context, input ExistsByUserIDInput) (ExistsByUserIDResult, error) {
+    if ctx == nil {
+        return ExistsByUserIDResult{}, shared.InvalidRequest("context must not be nil")
+    }
+
+    userID, err := shared.ParseUserID(input.UserID)
+    if err != nil {
+        return ExistsByUserIDResult{}, err
+    }
+
+    exists, err := service.store.ExistsByUserID(ctx, userID)
+    if err != nil {
+        return ExistsByUserIDResult{}, shared.ServiceUnavailable(err)
+    }
+
+    return ExistsByUserIDResult{Exists: exists}, nil
+}
+
+// BlockByUserIDInput stores one auth-facing block-by-user-id request.
+type BlockByUserIDInput struct {
+    // UserID stores the stable account identifier that must be blocked.
+    UserID string
+
+    // ReasonCode stores the machine-readable block reason.
+    ReasonCode string
+}
+
+// BlockByEmailInput stores one auth-facing block-by-email request.
+type BlockByEmailInput struct {
+    // Email stores the exact normalized e-mail subject that must be blocked.
+    Email string
+
+    // ReasonCode stores the machine-readable block reason.
+    ReasonCode string
+}
+
+// BlockResult stores one auth-facing block response.
+type BlockResult struct {
+    // Outcome reports whether the current call created a new block.
+    Outcome string
+
+    // UserID stores the resolved account when the blocked subject belongs to an
+    // existing user.
+    UserID string
+}
+
+// BlockByUserIDService executes the auth-facing block-by-user-id use case.
+type BlockByUserIDService struct {
+    store ports.AuthDirectoryStore
+    clock ports.Clock
+}
+
+// NewBlockByUserIDService returns one block-by-user-id use case instance.
+func NewBlockByUserIDService(store ports.AuthDirectoryStore, clock ports.Clock) (*BlockByUserIDService, error) {
+    switch {
+    case store == nil:
+        return nil, fmt.Errorf("authdirectory block-by-user-id service: auth directory store must not be nil")
+    case clock == nil:
+        return nil, fmt.Errorf("authdirectory block-by-user-id service: clock must not be nil")
+    default:
+        return &BlockByUserIDService{store: store, clock: clock}, nil
+    }
+}
+
+// Execute blocks one account addressed by stable user identifier.
+func (service *BlockByUserIDService) Execute(ctx context.Context, input BlockByUserIDInput) (BlockResult, error) {
+    if ctx == nil {
+        return BlockResult{}, shared.InvalidRequest("context must not be nil")
+    }
+
+    userID, err := shared.ParseUserID(input.UserID)
+    if err != nil {
+        return BlockResult{}, err
+    }
+    reasonCode, err := shared.ParseReasonCode(input.ReasonCode)
+    if err != nil {
+        return BlockResult{}, err
+    }
+
+    result, err := service.store.BlockByUserID(ctx, ports.BlockByUserIDInput{
+        UserID:     userID,
+        ReasonCode: reasonCode,
+        BlockedAt:  service.clock.Now().UTC(),
+    })
+    if err != nil {
+        switch {
+        case errors.Is(err, ports.ErrNotFound):
+            return BlockResult{}, shared.SubjectNotFound()
+        default:
+            return BlockResult{}, shared.ServiceUnavailable(err)
+        }
+    }
+    if err := result.Validate(); err != nil {
+        return BlockResult{}, shared.InternalError(err)
+    }
+
+    response := BlockResult{Outcome: string(result.Outcome)}
+    if !result.UserID.IsZero() {
+        response.UserID = result.UserID.String()
+    }
+
+    return response, nil
+}
+
+// BlockByEmailService executes the auth-facing block-by-email use case.
+type BlockByEmailService struct {
+    store ports.AuthDirectoryStore
+    clock ports.Clock
+}
+
+// NewBlockByEmailService returns one block-by-email use case instance.
+func NewBlockByEmailService(store ports.AuthDirectoryStore, clock ports.Clock) (*BlockByEmailService, error) {
+    switch {
+    case store == nil:
+        return nil, fmt.Errorf("authdirectory block-by-email service: auth directory store must not be nil")
+    case clock == nil:
+        return nil, fmt.Errorf("authdirectory block-by-email service: clock must not be nil")
+    default:
+        return &BlockByEmailService{store: store, clock: clock}, nil
+    }
+}
+
+// Execute blocks one exact normalized e-mail subject.
+func (service *BlockByEmailService) Execute(ctx context.Context, input BlockByEmailInput) (BlockResult, error) {
+    if ctx == nil {
+        return BlockResult{}, shared.InvalidRequest("context must not be nil")
+    }
+
+    email, err := shared.ParseEmail(input.Email)
+    if err != nil {
+        return BlockResult{}, err
+    }
+    reasonCode, err := shared.ParseReasonCode(input.ReasonCode)
+    if err != nil {
+        return BlockResult{}, err
+    }
+
+    result, err := service.store.BlockByEmail(ctx, ports.BlockByEmailInput{
+        Email:      email,
+        ReasonCode: reasonCode,
+        BlockedAt:  service.clock.Now().UTC(),
+    })
+    if err != nil {
+        return BlockResult{}, shared.ServiceUnavailable(err)
+    }
+    if err := result.Validate(); err != nil {
+        return BlockResult{}, shared.InternalError(err)
+    }
+
+    response := BlockResult{Outcome: string(result.Outcome)}
+    if !result.UserID.IsZero() {
+        response.UserID = result.UserID.String()
+    }
+
+    return response, nil
+}
diff --git a/user/internal/service/authdirectory/service_test.go b/user/internal/service/authdirectory/service_test.go
new file mode 100644
index 0000000..7fa6fe7
--- /dev/null
+++ b/user/internal/service/authdirectory/service_test.go
@@ -0,0 +1,717 @@
+package authdirectory
+
+import (
+    "context"
+    "errors"
+    "testing"
+    "time"
+
+    "galaxy/user/internal/domain/account"
+    "galaxy/user/internal/domain/common"
+    "galaxy/user/internal/domain/entitlement"
+    "galaxy/user/internal/domain/policy"
+    "galaxy/user/internal/ports"
+    "galaxy/user/internal/service/shared"
+    "galaxy/user/internal/telemetry"
+
+    "github.com/stretchr/testify/require"
+    "go.opentelemetry.io/otel/attribute"
+    sdkmetric "go.opentelemetry.io/otel/sdk/metric"
+    "go.opentelemetry.io/otel/sdk/metric/metricdata"
+    sdktrace "go.opentelemetry.io/otel/sdk/trace"
+)
+
+func TestResolverExecute(t *testing.T) {
+    t.Parallel()
+
+    tests := []struct {
+        name       string
+        store      stubAuthDirectoryStore
+        wantKind   string
+        wantUserID string
+        wantBlock  string
+    }{
+        {
+            name: "existing",
+            store: stubAuthDirectoryStore{
+                resolveByEmail: func(_ context.Context, email common.Email) (ports.ResolveByEmailResult, error) {
+                    require.Equal(t, common.Email("pilot@example.com"), email)
+                    return ports.ResolveByEmailResult{
+                        Kind:   ports.AuthResolutionKindExisting,
+                        UserID: common.UserID("user-123"),
+                    }, nil
+                },
+            },
+            wantKind:   "existing",
+            wantUserID: "user-123",
+        },
+        {
+            name: "creatable",
+            store: stubAuthDirectoryStore{
+                resolveByEmail: func(_ context.Context, email common.Email) (ports.ResolveByEmailResult, error) {
+                    require.Equal(t, common.Email("pilot@example.com"), email)
+                    return ports.ResolveByEmailResult{
+                        Kind: ports.AuthResolutionKindCreatable,
+                    }, nil
+                },
+            },
+            wantKind: "creatable",
+        },
+        {
+            name: "blocked",
+            store: stubAuthDirectoryStore{
+                resolveByEmail: func(_ context.Context, email common.Email) (ports.ResolveByEmailResult, error) {
+                    require.Equal(t, common.Email("pilot@example.com"), email)
+                    return ports.ResolveByEmailResult{
+                        Kind:            ports.AuthResolutionKindBlocked,
+                        BlockReasonCode: common.ReasonCode("policy_blocked"),
+                    }, nil
+                },
+            },
+            wantKind:  "blocked",
+            wantBlock: "policy_blocked",
+        },
+    }
+
+    for _, tt := range tests {
+        tt := tt
+        t.Run(tt.name, func(t *testing.T) {
+            t.Parallel()
+
+            resolver, err := NewResolver(tt.store)
+            require.NoError(t, err)
+
+            result, err := resolver.Execute(context.Background(), ResolveByEmailInput{
+                Email: " pilot@example.com ",
+            })
+            require.NoError(t, err)
+            require.Equal(t, tt.wantKind, result.Kind)
+            require.Equal(t, tt.wantUserID, result.UserID)
+            require.Equal(t, tt.wantBlock, result.BlockReasonCode)
+        })
+    }
+}
+
+func TestEnsurerExecuteCreatedBuildsInitialRecords(t *testing.T) {
+    t.Parallel()
+
+    now := time.Unix(1_775_240_000, 0).UTC()
+
+    ensurer, err := NewEnsurer(stubAuthDirectoryStore{
+        ensureByEmail: func(_ context.Context, input ports.EnsureByEmailInput) (ports.EnsureByEmailResult, error) {
+            require.Equal(t, common.Email("created@example.com"), input.Email)
+            require.Equal(t, common.UserID("user-created"), input.Account.UserID)
+            require.Equal(t, common.RaceName("player-test123"), input.Account.RaceName)
+            require.Equal(t, common.LanguageTag("en-US"), input.Account.PreferredLanguage)
+            require.Equal(t, common.TimeZoneName("Europe/Kaliningrad"), input.Account.TimeZone)
+            require.Equal(t, input.Account.UserID, input.Reservation.UserID)
+            require.Equal(t, input.Account.RaceName, input.Reservation.RaceName)
+            require.Equal(t, accountTestCanonicalKey(input.Account.RaceName), input.Reservation.CanonicalKey)
+            require.Equal(t, entitlement.PlanCodeFree, input.Entitlement.PlanCode)
+            require.False(t, input.Entitlement.IsPaid)
+            require.Equal(t, input.Account.UserID, input.Entitlement.UserID)
+            require.Equal(t, entitlement.EntitlementRecordID("entitlement-created"), input.EntitlementRecord.RecordID)
+            require.Equal(t, input.Account.UserID, input.EntitlementRecord.UserID)
+            require.Equal(t, input.Entitlement.PlanCode, input.EntitlementRecord.PlanCode)
+            require.Equal(t, input.Entitlement.StartsAt, input.EntitlementRecord.StartsAt)
+            require.Equal(t, input.Entitlement.Source, input.EntitlementRecord.Source)
+            require.Equal(t, input.Entitlement.Actor, input.EntitlementRecord.Actor)
+            require.Equal(t, input.Entitlement.ReasonCode, input.EntitlementRecord.ReasonCode)
+            return ports.EnsureByEmailResult{
+                Outcome: ports.EnsureByEmailOutcomeCreated,
+                UserID:  input.Account.UserID,
+            }, nil
+        },
+    }, fixedClock{now: now}, fixedIDGenerator{
+        userID:              common.UserID("user-created"),
+        raceName:            common.RaceName("player-test123"),
+        entitlementRecordID: entitlement.EntitlementRecordID("entitlement-created"),
+    }, stubRaceNamePolicy{})
+    require.NoError(t, err)
+
+    result, err := ensurer.Execute(context.Background(), EnsureByEmailInput{
+        Email: "created@example.com",
+        RegistrationContext: &RegistrationContext{
+            PreferredLanguage: "en-us",
+            TimeZone:          "Europe/Kaliningrad",
+        },
+    })
+    require.NoError(t, err)
+    require.Equal(t, "created", result.Outcome)
+    require.Equal(t, "user-created", result.UserID)
+}
+
+func TestEnsurerExecuteRejectsInvalidRegistrationContext(t *testing.T) {
+    t.Parallel()
+
+    tests := []struct {
+        name    string
+        input   EnsureByEmailInput
+        wantErr string
+    }{
+        {
+            name: "invalid preferred language",
+            input: EnsureByEmailInput{
+                Email: "pilot@example.com",
+                RegistrationContext: &RegistrationContext{
+                    PreferredLanguage: "bad@@tag",
+                    TimeZone:          "Europe/Kaliningrad",
+                },
+            },
+            wantErr: "registration_context.preferred_language must be a valid BCP 47 language tag",
+        },
+        {
+            name: "invalid time zone",
+            input: EnsureByEmailInput{
+                Email: "pilot@example.com",
+                RegistrationContext: &RegistrationContext{
+                    PreferredLanguage: "en",
+                    TimeZone:          "Mars/Olympus",
+                },
+            },
+            wantErr: "registration_context.time_zone must be a valid IANA time zone name",
+        },
+    }
+
+    for _, tt := range tests {
+        tt := tt
+        t.Run(tt.name, func(t *testing.T) {
+            t.Parallel()
+
+            ensurer, err := NewEnsurer(stubAuthDirectoryStore{}, fixedClock{now: time.Unix(1_775_240_000, 0).UTC()}, fixedIDGenerator{
+                userID:              common.UserID("user-created"),
+                raceName:            common.RaceName("player-test123"),
+                entitlementRecordID: entitlement.EntitlementRecordID("entitlement-created"),
+            }, stubRaceNamePolicy{})
+            require.NoError(t, err)
+
+            _, err = ensurer.Execute(context.Background(), tt.input)
+            require.Error(t, err)
+            require.Equal(t, shared.ErrorCodeInvalidRequest, shared.CodeOf(err))
+            require.Equal(t, tt.wantErr, err.Error())
+        })
} +} + +func TestEnsurerExecuteRetriesConflicts(t *testing.T) { + t.Parallel() + + attempt := 0 + ensurer, err := NewEnsurer(stubAuthDirectoryStore{ + ensureByEmail: func(_ context.Context, input ports.EnsureByEmailInput) (ports.EnsureByEmailResult, error) { + attempt++ + if attempt == 1 { + return ports.EnsureByEmailResult{}, ports.ErrConflict + } + return ports.EnsureByEmailResult{ + Outcome: ports.EnsureByEmailOutcomeCreated, + UserID: input.Account.UserID, + }, nil + }, + }, fixedClock{now: time.Unix(1_775_240_000, 0).UTC()}, &sequenceIDGenerator{ + userIDs: []common.UserID{"user-first", "user-second"}, + raceNames: []common.RaceName{"player-first", "player-second"}, + entitlementRecordIDs: []entitlement.EntitlementRecordID{"entitlement-first", "entitlement-second"}, + }, stubRaceNamePolicy{}) + require.NoError(t, err) + + result, err := ensurer.Execute(context.Background(), EnsureByEmailInput{ + Email: "retry@example.com", + RegistrationContext: &RegistrationContext{ + PreferredLanguage: "en", + TimeZone: "UTC", + }, + }) + require.NoError(t, err) + require.Equal(t, 2, attempt) + require.Equal(t, "user-second", result.UserID) +} + +func TestEnsurerExecuteReturnsExistingAndBlocked(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + store stubAuthDirectoryStore + want EnsureByEmailResult + }{ + { + name: "existing", + store: stubAuthDirectoryStore{ + ensureByEmail: func(_ context.Context, input ports.EnsureByEmailInput) (ports.EnsureByEmailResult, error) { + require.Equal(t, common.Email("pilot@example.com"), input.Email) + return ports.EnsureByEmailResult{ + Outcome: ports.EnsureByEmailOutcomeExisting, + UserID: common.UserID("user-existing"), + }, nil + }, + }, + want: EnsureByEmailResult{ + Outcome: "existing", + UserID: "user-existing", + }, + }, + { + name: "blocked", + store: stubAuthDirectoryStore{ + ensureByEmail: func(_ context.Context, input ports.EnsureByEmailInput) (ports.EnsureByEmailResult, error) { + require.Equal(t, 
common.Email("pilot@example.com"), input.Email) + return ports.EnsureByEmailResult{ + Outcome: ports.EnsureByEmailOutcomeBlocked, + BlockReasonCode: common.ReasonCode("policy_blocked"), + }, nil + }, + }, + want: EnsureByEmailResult{ + Outcome: "blocked", + BlockReasonCode: "policy_blocked", + }, + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + ensurer, err := NewEnsurer(tt.store, fixedClock{now: time.Unix(1_775_240_000, 0).UTC()}, fixedIDGenerator{ + userID: common.UserID("user-created"), + raceName: common.RaceName("player-test123"), + entitlementRecordID: entitlement.EntitlementRecordID("entitlement-created"), + }, stubRaceNamePolicy{}) + require.NoError(t, err) + + result, err := ensurer.Execute(context.Background(), EnsureByEmailInput{ + Email: "pilot@example.com", + RegistrationContext: &RegistrationContext{ + PreferredLanguage: "en", + TimeZone: "UTC", + }, + }) + require.NoError(t, err) + require.Equal(t, tt.want, result) + }) + } +} + +func TestEnsurerExecuteCreatedPublishesInitializedEvents(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + publisher := &recordingAuthDomainEventPublisher{} + telemetryRuntime, reader := newObservedAuthTelemetryRuntime(t) + + ensurer, err := NewEnsurerWithObservability(stubAuthDirectoryStore{ + ensureByEmail: func(_ context.Context, input ports.EnsureByEmailInput) (ports.EnsureByEmailResult, error) { + return ports.EnsureByEmailResult{ + Outcome: ports.EnsureByEmailOutcomeCreated, + UserID: input.Account.UserID, + }, nil + }, + }, fixedClock{now: now}, fixedIDGenerator{ + userID: common.UserID("user-created"), + raceName: common.RaceName("player-test123"), + entitlementRecordID: entitlement.EntitlementRecordID("entitlement-created"), + }, stubRaceNamePolicy{}, nil, telemetryRuntime, publisher, publisher, publisher) + require.NoError(t, err) + + result, err := ensurer.Execute(context.Background(), EnsureByEmailInput{ + Email: 
"created@example.com", + RegistrationContext: &RegistrationContext{ + PreferredLanguage: "en-us", + TimeZone: "Europe/Kaliningrad", + }, + }) + require.NoError(t, err) + require.Equal(t, "created", result.Outcome) + + require.Len(t, publisher.profileEvents, 1) + require.Equal(t, ports.ProfileChangedOperationInitialized, publisher.profileEvents[0].Operation) + require.Equal(t, common.Source("auth_registration"), publisher.profileEvents[0].Source) + require.Len(t, publisher.settingsEvents, 1) + require.Equal(t, ports.SettingsChangedOperationInitialized, publisher.settingsEvents[0].Operation) + require.Len(t, publisher.entitlementEvents, 1) + require.Equal(t, ports.EntitlementChangedOperationInitialized, publisher.entitlementEvents[0].Operation) + + assertMetricCount(t, reader, "user.user_creation.outcomes", map[string]string{ + "outcome": "created", + }, 1) +} + +func TestEnsurerExecuteExistingBlockedAndFailedDoNotPublishEvents(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + store stubAuthDirectoryStore + input EnsureByEmailInput + wantMetric string + wantErrCode string + wantProfileLen int + }{ + { + name: "existing", + store: stubAuthDirectoryStore{ + ensureByEmail: func(_ context.Context, input ports.EnsureByEmailInput) (ports.EnsureByEmailResult, error) { + return ports.EnsureByEmailResult{ + Outcome: ports.EnsureByEmailOutcomeExisting, + UserID: common.UserID("user-existing"), + }, nil + }, + }, + input: EnsureByEmailInput{ + Email: "pilot@example.com", + RegistrationContext: &RegistrationContext{ + PreferredLanguage: "en", + TimeZone: "UTC", + }, + }, + wantMetric: "existing", + }, + { + name: "blocked", + store: stubAuthDirectoryStore{ + ensureByEmail: func(_ context.Context, input ports.EnsureByEmailInput) (ports.EnsureByEmailResult, error) { + return ports.EnsureByEmailResult{ + Outcome: ports.EnsureByEmailOutcomeBlocked, + BlockReasonCode: common.ReasonCode("policy_blocked"), + }, nil + }, + }, + input: EnsureByEmailInput{ + Email: 
"pilot@example.com", + RegistrationContext: &RegistrationContext{ + PreferredLanguage: "en", + TimeZone: "UTC", + }, + }, + wantMetric: "blocked", + }, + { + name: "failed", + store: stubAuthDirectoryStore{}, + input: EnsureByEmailInput{ + Email: "pilot@example.com", + }, + wantMetric: "failed", + wantErrCode: shared.ErrorCodeInvalidRequest, + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + publisher := &recordingAuthDomainEventPublisher{} + telemetryRuntime, reader := newObservedAuthTelemetryRuntime(t) + ensurer, err := NewEnsurerWithObservability(tt.store, fixedClock{now: time.Unix(1_775_240_000, 0).UTC()}, fixedIDGenerator{ + userID: common.UserID("user-created"), + raceName: common.RaceName("player-test123"), + entitlementRecordID: entitlement.EntitlementRecordID("entitlement-created"), + }, stubRaceNamePolicy{}, nil, telemetryRuntime, publisher, publisher, publisher) + require.NoError(t, err) + + _, err = ensurer.Execute(context.Background(), tt.input) + if tt.wantErrCode != "" { + require.Error(t, err) + require.Equal(t, tt.wantErrCode, shared.CodeOf(err)) + } else { + require.NoError(t, err) + } + + require.Empty(t, publisher.profileEvents) + require.Empty(t, publisher.settingsEvents) + require.Empty(t, publisher.entitlementEvents) + assertMetricCount(t, reader, "user.user_creation.outcomes", map[string]string{ + "outcome": tt.wantMetric, + }, 1) + }) + } +} + +func TestEnsurerExecutePublishFailureDoesNotRollbackCreatedUser(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + publisher := &recordingAuthDomainEventPublisher{err: errors.New("publisher unavailable")} + telemetryRuntime, reader := newObservedAuthTelemetryRuntime(t) + + ensurer, err := NewEnsurerWithObservability(stubAuthDirectoryStore{ + ensureByEmail: func(_ context.Context, input ports.EnsureByEmailInput) (ports.EnsureByEmailResult, error) { + return ports.EnsureByEmailResult{ + Outcome: 
ports.EnsureByEmailOutcomeCreated, + UserID: input.Account.UserID, + }, nil + }, + }, fixedClock{now: now}, fixedIDGenerator{ + userID: common.UserID("user-created"), + raceName: common.RaceName("player-test123"), + entitlementRecordID: entitlement.EntitlementRecordID("entitlement-created"), + }, stubRaceNamePolicy{}, nil, telemetryRuntime, publisher, publisher, publisher) + require.NoError(t, err) + + result, err := ensurer.Execute(context.Background(), EnsureByEmailInput{ + Email: "created@example.com", + RegistrationContext: &RegistrationContext{ + PreferredLanguage: "en-us", + TimeZone: "Europe/Kaliningrad", + }, + }) + require.NoError(t, err) + require.Equal(t, "created", result.Outcome) + require.Len(t, publisher.profileEvents, 1) + require.Len(t, publisher.settingsEvents, 1) + require.Len(t, publisher.entitlementEvents, 1) + + assertMetricCount(t, reader, "user.event_publication_failures", map[string]string{ + "event_type": ports.ProfileChangedEventType, + }, 1) + assertMetricCount(t, reader, "user.event_publication_failures", map[string]string{ + "event_type": ports.SettingsChangedEventType, + }, 1) + assertMetricCount(t, reader, "user.event_publication_failures", map[string]string{ + "event_type": ports.EntitlementChangedEventType, + }, 1) +} + +func TestBlockByUserIDServiceMapsNotFound(t *testing.T) { + t.Parallel() + + service, err := NewBlockByUserIDService(stubAuthDirectoryStore{ + blockByUserID: func(context.Context, ports.BlockByUserIDInput) (ports.BlockResult, error) { + return ports.BlockResult{}, ports.ErrNotFound + }, + }, fixedClock{now: time.Unix(1_775_240_000, 0).UTC()}) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), BlockByUserIDInput{ + UserID: "user-missing", + ReasonCode: "policy_blocked", + }) + require.Error(t, err) + require.Equal(t, shared.ErrorCodeSubjectNotFound, shared.CodeOf(err)) +} + +type stubAuthDirectoryStore struct { + resolveByEmail func(context.Context, common.Email) 
(ports.ResolveByEmailResult, error) + ensureByEmail func(context.Context, ports.EnsureByEmailInput) (ports.EnsureByEmailResult, error) + existsByUserID func(context.Context, common.UserID) (bool, error) + blockByUserID func(context.Context, ports.BlockByUserIDInput) (ports.BlockResult, error) + blockByEmail func(context.Context, ports.BlockByEmailInput) (ports.BlockResult, error) +} + +func (store stubAuthDirectoryStore) ResolveByEmail(ctx context.Context, email common.Email) (ports.ResolveByEmailResult, error) { + if store.resolveByEmail == nil { + return ports.ResolveByEmailResult{}, errors.New("unexpected ResolveByEmail call") + } + return store.resolveByEmail(ctx, email) +} + +func (store stubAuthDirectoryStore) ExistsByUserID(ctx context.Context, userID common.UserID) (bool, error) { + if store.existsByUserID == nil { + return false, errors.New("unexpected ExistsByUserID call") + } + return store.existsByUserID(ctx, userID) +} + +func (store stubAuthDirectoryStore) EnsureByEmail(ctx context.Context, input ports.EnsureByEmailInput) (ports.EnsureByEmailResult, error) { + if store.ensureByEmail == nil { + return ports.EnsureByEmailResult{}, errors.New("unexpected EnsureByEmail call") + } + return store.ensureByEmail(ctx, input) +} + +func (store stubAuthDirectoryStore) BlockByUserID(ctx context.Context, input ports.BlockByUserIDInput) (ports.BlockResult, error) { + if store.blockByUserID == nil { + return ports.BlockResult{}, errors.New("unexpected BlockByUserID call") + } + return store.blockByUserID(ctx, input) +} + +func (store stubAuthDirectoryStore) BlockByEmail(ctx context.Context, input ports.BlockByEmailInput) (ports.BlockResult, error) { + if store.blockByEmail == nil { + return ports.BlockResult{}, errors.New("unexpected BlockByEmail call") + } + return store.blockByEmail(ctx, input) +} + +type fixedClock struct { + now time.Time +} + +func (clock fixedClock) Now() time.Time { + return clock.now +} + +type fixedIDGenerator struct { + userID 
common.UserID + raceName common.RaceName + entitlementRecordID entitlement.EntitlementRecordID + sanctionRecordID policy.SanctionRecordID + limitRecordID policy.LimitRecordID +} + +func (generator fixedIDGenerator) NewUserID() (common.UserID, error) { + return generator.userID, nil +} + +func (generator fixedIDGenerator) NewInitialRaceName() (common.RaceName, error) { + return generator.raceName, nil +} + +func (generator fixedIDGenerator) NewEntitlementRecordID() (entitlement.EntitlementRecordID, error) { + return generator.entitlementRecordID, nil +} + +func (generator fixedIDGenerator) NewSanctionRecordID() (policy.SanctionRecordID, error) { + return generator.sanctionRecordID, nil +} + +func (generator fixedIDGenerator) NewLimitRecordID() (policy.LimitRecordID, error) { + return generator.limitRecordID, nil +} + +type sequenceIDGenerator struct { + userIDs []common.UserID + raceNames []common.RaceName + entitlementRecordIDs []entitlement.EntitlementRecordID + sanctionRecordIDs []policy.SanctionRecordID + limitRecordIDs []policy.LimitRecordID +} + +func (generator *sequenceIDGenerator) NewUserID() (common.UserID, error) { + value := generator.userIDs[0] + generator.userIDs = generator.userIDs[1:] + return value, nil +} + +func (generator *sequenceIDGenerator) NewInitialRaceName() (common.RaceName, error) { + value := generator.raceNames[0] + generator.raceNames = generator.raceNames[1:] + return value, nil +} + +func (generator *sequenceIDGenerator) NewEntitlementRecordID() (entitlement.EntitlementRecordID, error) { + value := generator.entitlementRecordIDs[0] + generator.entitlementRecordIDs = generator.entitlementRecordIDs[1:] + return value, nil +} + +func (generator *sequenceIDGenerator) NewSanctionRecordID() (policy.SanctionRecordID, error) { + value := generator.sanctionRecordIDs[0] + generator.sanctionRecordIDs = generator.sanctionRecordIDs[1:] + return value, nil +} + +func (generator *sequenceIDGenerator) NewLimitRecordID() (policy.LimitRecordID, error) 
{ + value := generator.limitRecordIDs[0] + generator.limitRecordIDs = generator.limitRecordIDs[1:] + return value, nil +} + +type stubRaceNamePolicy struct{} + +func (stubRaceNamePolicy) CanonicalKey(raceName common.RaceName) (account.RaceNameCanonicalKey, error) { + return accountTestCanonicalKey(raceName), nil +} + +func accountTestCanonicalKey(raceName common.RaceName) account.RaceNameCanonicalKey { + return account.RaceNameCanonicalKey("key:" + raceName.String()) +} + +type recordingAuthDomainEventPublisher struct { + err error + profileEvents []ports.ProfileChangedEvent + settingsEvents []ports.SettingsChangedEvent + entitlementEvents []ports.EntitlementChangedEvent +} + +func (publisher *recordingAuthDomainEventPublisher) PublishProfileChanged(_ context.Context, event ports.ProfileChangedEvent) error { + if err := event.Validate(); err != nil { + return err + } + publisher.profileEvents = append(publisher.profileEvents, event) + return publisher.err +} + +func (publisher *recordingAuthDomainEventPublisher) PublishSettingsChanged(_ context.Context, event ports.SettingsChangedEvent) error { + if err := event.Validate(); err != nil { + return err + } + publisher.settingsEvents = append(publisher.settingsEvents, event) + return publisher.err +} + +func (publisher *recordingAuthDomainEventPublisher) PublishEntitlementChanged(_ context.Context, event ports.EntitlementChangedEvent) error { + if err := event.Validate(); err != nil { + return err + } + publisher.entitlementEvents = append(publisher.entitlementEvents, event) + return publisher.err +} + +func newObservedAuthTelemetryRuntime(t *testing.T) (*telemetry.Runtime, *sdkmetric.ManualReader) { + t.Helper() + + reader := sdkmetric.NewManualReader() + meterProvider := sdkmetric.NewMeterProvider(sdkmetric.WithReader(reader)) + tracerProvider := sdktrace.NewTracerProvider() + + runtime, err := telemetry.NewWithProviders(meterProvider, tracerProvider) + require.NoError(t, err) + + return runtime, reader +} + +func 
assertMetricCount(t *testing.T, reader *sdkmetric.ManualReader, metricName string, wantAttrs map[string]string, wantValue int64) { + t.Helper() + + var resourceMetrics metricdata.ResourceMetrics + require.NoError(t, reader.Collect(context.Background(), &resourceMetrics)) + + for _, scopeMetrics := range resourceMetrics.ScopeMetrics { + for _, metric := range scopeMetrics.Metrics { + if metric.Name != metricName { + continue + } + + sum, ok := metric.Data.(metricdata.Sum[int64]) + require.True(t, ok) + + for _, point := range sum.DataPoints { + if hasMetricAttributes(point.Attributes.ToSlice(), wantAttrs) { + require.Equal(t, wantValue, point.Value) + return + } + } + } + } + + require.Failf(t, "test failed", "metric %q with attrs %v not found", metricName, wantAttrs) +} + +func hasMetricAttributes(values []attribute.KeyValue, want map[string]string) bool { + if len(values) != len(want) { + return false + } + + for _, value := range values { + if want[string(value.Key)] != value.Value.AsString() { + return false + } + } + + return true +} + +var ( + _ ports.AuthDirectoryStore = stubAuthDirectoryStore{} + _ ports.Clock = fixedClock{} + _ ports.IDGenerator = fixedIDGenerator{} + _ ports.IDGenerator = (*sequenceIDGenerator)(nil) + _ ports.RaceNamePolicy = stubRaceNamePolicy{} + _ ports.ProfileChangedPublisher = (*recordingAuthDomainEventPublisher)(nil) + _ ports.SettingsChangedPublisher = (*recordingAuthDomainEventPublisher)(nil) + _ ports.EntitlementChangedPublisher = (*recordingAuthDomainEventPublisher)(nil) +) diff --git a/user/internal/service/entitlementsvc/observability_test.go b/user/internal/service/entitlementsvc/observability_test.go new file mode 100644 index 0000000..4aa8cb5 --- /dev/null +++ b/user/internal/service/entitlementsvc/observability_test.go @@ -0,0 +1,121 @@ +package entitlementsvc + +import ( + "context" + "errors" + "testing" + "time" + + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + 
"galaxy/user/internal/ports" + + "github.com/stretchr/testify/require" +) + +func TestReaderGetByUserIDPublishesExpiredRepairEvent(t *testing.T) { + t.Parallel() + + userID := common.UserID("user-123") + startsAt := time.Unix(1_775_240_000, 0).UTC() + endsAt := startsAt.Add(24 * time.Hour) + now := endsAt.Add(time.Hour) + snapshotStore := &fakeSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + userID: paidSnapshot( + userID, + entitlement.PlanCodePaidMonthly, + startsAt, + endsAt, + common.Source("admin"), + common.ReasonCode("manual_grant"), + ), + }, + } + historyStore := &fakeHistoryStore{ + byUserID: map[common.UserID][]entitlement.PeriodRecord{ + userID: { + paidRecord( + entitlement.EntitlementRecordID("entitlement-paid"), + userID, + entitlement.PlanCodePaidMonthly, + startsAt, + endsAt, + common.Source("admin"), + common.ReasonCode("manual_grant"), + ), + }, + }, + } + lifecycleStore := &fakeLifecycleStore{ + historyStore: historyStore, + snapshotStore: snapshotStore, + } + publisher := &recordingEntitlementPublisher{} + + reader, err := NewReaderWithObservability(snapshotStore, lifecycleStore, fixedClock{now: now}, fixedIDGenerator{ + recordID: entitlement.EntitlementRecordID("entitlement-free"), + }, nil, nil, publisher) + require.NoError(t, err) + + got, err := reader.GetByUserID(context.Background(), userID) + require.NoError(t, err) + require.Equal(t, entitlement.PlanCodeFree, got.PlanCode) + require.Len(t, publisher.events, 1) + require.Equal(t, ports.EntitlementChangedOperationExpiredRepaired, publisher.events[0].Operation) + require.Equal(t, common.Source("entitlement_expiry_repair"), publisher.events[0].Source) +} + +func TestGrantServiceExecutePublisherFailureDoesNotRollbackResult(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + currentFreeStartsAt := now.Add(-24 * time.Hour) + currentSnapshot := freeSnapshot(userID, currentFreeStartsAt, 
common.Source("auth_registration"), common.ReasonCode("initial_free_entitlement")) + currentRecord := freeRecord(entitlement.EntitlementRecordID("entitlement-free"), userID, currentFreeStartsAt, common.Source("auth_registration"), common.ReasonCode("initial_free_entitlement")) + lifecycleStore := &fakeLifecycleStore{} + publisher := &recordingEntitlementPublisher{err: errors.New("publisher unavailable")} + + service, err := NewGrantServiceWithObservability( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + &fakeHistoryStore{byUserID: map[common.UserID][]entitlement.PeriodRecord{userID: {currentRecord}}}, + fakeEffectiveReader{byUserID: map[common.UserID]entitlement.CurrentSnapshot{userID: currentSnapshot}}, + lifecycleStore, + fixedClock{now: now}, + fixedIDGenerator{recordID: entitlement.EntitlementRecordID("entitlement-paid")}, + nil, + nil, + publisher, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), GrantInput{ + UserID: userID.String(), + PlanCode: string(entitlement.PlanCodePaidMonthly), + Source: "admin", + ReasonCode: "manual_grant", + Actor: ActorInput{Type: "admin", ID: "admin-1"}, + StartsAt: now.Format(time.RFC3339Nano), + EndsAt: now.Add(30 * 24 * time.Hour).Format(time.RFC3339Nano), + }) + require.NoError(t, err) + require.Equal(t, entitlement.PlanCodePaidMonthly, result.Entitlement.PlanCode) + require.Len(t, publisher.events, 1) + require.Equal(t, ports.EntitlementChangedOperationGranted, publisher.events[0].Operation) +} + +type recordingEntitlementPublisher struct { + err error + events []ports.EntitlementChangedEvent +} + +func (publisher *recordingEntitlementPublisher) PublishEntitlementChanged(_ context.Context, event ports.EntitlementChangedEvent) error { + if err := event.Validate(); err != nil { + return err + } + publisher.events = append(publisher.events, event) + return publisher.err +} + +var _ ports.EntitlementChangedPublisher = (*recordingEntitlementPublisher)(nil) diff 
--git a/user/internal/service/entitlementsvc/service.go b/user/internal/service/entitlementsvc/service.go new file mode 100644 index 0000000..20365db --- /dev/null +++ b/user/internal/service/entitlementsvc/service.go @@ -0,0 +1,1114 @@ +// Package entitlementsvc implements the trusted entitlement lifecycle and +// effective-read use cases owned by User Service. +package entitlementsvc + +import ( + "context" + "errors" + "fmt" + "log/slog" + "strings" + "time" + + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + "galaxy/user/internal/ports" + "galaxy/user/internal/service/shared" + "galaxy/user/internal/telemetry" +) + +const ( + expiryRepairSource common.Source = "entitlement_expiry_repair" + expiryRepairReasonCode common.ReasonCode = "paid_entitlement_expired" + expiryRepairActorType common.ActorType = "service" + expiryRepairActorID common.ActorID = "user-service" + + expiryRepairRetryLimit = 4 +) + +// ActorInput stores one transport-facing audit actor payload. +type ActorInput struct { + // Type stores the machine-readable actor type. + Type string + + // ID stores the optional stable actor identifier. + ID string +} + +// GrantInput stores one trusted entitlement-grant command request. +type GrantInput struct { + // UserID identifies the user whose current entitlement must be replaced. + UserID string + + // PlanCode stores the paid plan that must become current. + PlanCode string + + // Source stores the machine-readable mutation source. + Source string + + // ReasonCode stores the machine-readable mutation reason. + ReasonCode string + + // Actor stores the audit actor metadata. + Actor ActorInput + + // StartsAt stores when the granted paid state becomes effective. + StartsAt string + + // EndsAt stores the optional finite paid expiry. + EndsAt string +} + +// ExtendInput stores one trusted entitlement-extension command request. 
+type ExtendInput struct { + // UserID identifies the user whose current finite paid entitlement must be + // extended. + UserID string + + // Source stores the machine-readable mutation source. + Source string + + // ReasonCode stores the machine-readable mutation reason. + ReasonCode string + + // Actor stores the audit actor metadata. + Actor ActorInput + + // EndsAt stores the replacement finite paid expiry. + EndsAt string +} + +// RevokeInput stores one trusted entitlement-revoke command request. +type RevokeInput struct { + // UserID identifies the user whose current paid entitlement must be + // revoked. + UserID string + + // Source stores the machine-readable mutation source. + Source string + + // ReasonCode stores the machine-readable mutation reason. + ReasonCode string + + // Actor stores the audit actor metadata. + Actor ActorInput +} + +// CommandResult stores one trusted entitlement mutation result. +type CommandResult struct { + // UserID identifies the mutated user. + UserID string + + // Entitlement stores the refreshed current effective snapshot. + Entitlement entitlement.CurrentSnapshot +} + +type effectiveReader interface { + GetByUserID(ctx context.Context, userID common.UserID) (entitlement.CurrentSnapshot, error) +} + +// Reader loads the current effective entitlement snapshot and lazily repairs +// expired finite paid states. +type Reader struct { + snapshots ports.EntitlementSnapshotStore + lifecycle ports.EntitlementLifecycleStore + clock ports.Clock + idGenerator ports.IDGenerator + logger *slog.Logger + telemetry *telemetry.Runtime + publisher ports.EntitlementChangedPublisher +} + +// NewReader constructs one effective entitlement reader. 
+func NewReader( + snapshots ports.EntitlementSnapshotStore, + lifecycle ports.EntitlementLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, +) (*Reader, error) { + return NewReaderWithObservability(snapshots, lifecycle, clock, idGenerator, nil, nil, nil) +} + +// NewReaderWithObservability constructs one effective entitlement reader with +// optional observability hooks. +func NewReaderWithObservability( + snapshots ports.EntitlementSnapshotStore, + lifecycle ports.EntitlementLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, + logger *slog.Logger, + telemetryRuntime *telemetry.Runtime, + publisher ports.EntitlementChangedPublisher, +) (*Reader, error) { + switch { + case snapshots == nil: + return nil, fmt.Errorf("entitlement reader: entitlement snapshot store must not be nil") + case lifecycle == nil: + return nil, fmt.Errorf("entitlement reader: entitlement lifecycle store must not be nil") + case clock == nil: + return nil, fmt.Errorf("entitlement reader: clock must not be nil") + case idGenerator == nil: + return nil, fmt.Errorf("entitlement reader: id generator must not be nil") + default: + return &Reader{ + snapshots: snapshots, + lifecycle: lifecycle, + clock: clock, + idGenerator: idGenerator, + logger: logger, + telemetry: telemetryRuntime, + publisher: publisher, + }, nil + } +} + +// GetByUserID returns the current effective entitlement snapshot for userID. +// When the stored snapshot is a naturally expired finite paid state, it +// lazily materializes the replacement free state before returning. 
+func (service *Reader) GetByUserID(ctx context.Context, userID common.UserID) (snapshot entitlement.CurrentSnapshot, err error) { + repairOutcome := "" + userIDString := userID.String() + defer func() { + if repairOutcome == "" { + return + } + if service.telemetry != nil { + service.telemetry.RecordEntitlementMutation(ctx, "expiry_repair", repairOutcome) + } + shared.LogServiceOutcome(service.logger, ctx, "entitlement expiry repair completed", err, + "use_case", "repair_expired_entitlement", + "command", "expiry_repair", + "outcome", repairOutcome, + "user_id", userIDString, + "source", expiryRepairSource.String(), + "reason_code", expiryRepairReasonCode.String(), + "actor_type", expiryRepairActorType.String(), + "actor_id", expiryRepairActorID.String(), + ) + }() + + if err := userID.Validate(); err != nil { + return entitlement.CurrentSnapshot{}, fmt.Errorf("entitlement reader: %w", err) + } + if ctx == nil { + return entitlement.CurrentSnapshot{}, fmt.Errorf("entitlement reader: nil context") + } + + for attempt := 0; attempt < expiryRepairRetryLimit; attempt++ { + currentSnapshot, err := service.snapshots.GetByUserID(ctx, userID) + if err != nil { + return entitlement.CurrentSnapshot{}, err + } + + now := service.clock.Now().UTC() + if !currentSnapshot.IsExpiredAt(now) { + return currentSnapshot, nil + } + if repairOutcome == "" { + repairOutcome = "conflict" + } + + recordID, err := service.idGenerator.NewEntitlementRecordID() + if err != nil { + repairOutcome = shared.ErrorCodeServiceUnavailable + return entitlement.CurrentSnapshot{}, err + } + + freeRecord, freeSnapshot, err := buildExpiryRepairState(currentSnapshot, recordID, now) + if err != nil { + repairOutcome = shared.ErrorCodeInternalError + return entitlement.CurrentSnapshot{}, err + } + + err = service.lifecycle.RepairExpired(ctx, ports.RepairExpiredEntitlementInput{ + ExpectedExpiredSnapshot: currentSnapshot, + NewRecord: freeRecord, + NewSnapshot: freeSnapshot, + }) + switch { + case err == nil: 
+ repairOutcome = "success" + publishEntitlementChanged(ctx, service.publisher, service.telemetry, service.logger, "repair_expired_entitlement", ports.EntitlementChangedOperationExpiredRepaired, freeSnapshot) + return freeSnapshot, nil + case errors.Is(err, ports.ErrConflict): + continue + default: + repairOutcome = shared.ErrorCodeServiceUnavailable + return entitlement.CurrentSnapshot{}, err + } + } + + latestSnapshot, err := service.snapshots.GetByUserID(ctx, userID) + if err != nil { + repairOutcome = shared.ErrorCodeServiceUnavailable + return entitlement.CurrentSnapshot{}, err + } + if latestSnapshot.IsExpiredAt(service.clock.Now().UTC()) { + repairOutcome = "conflict" + return entitlement.CurrentSnapshot{}, fmt.Errorf("entitlement reader: expiry repair retry limit exceeded for user %q", userID) + } + + return latestSnapshot, nil +} + +type commandSupport struct { + accounts ports.UserAccountStore + history ports.EntitlementHistoryStore + reader effectiveReader + lifecycle ports.EntitlementLifecycleStore + clock ports.Clock + idGenerator ports.IDGenerator +} + +func newCommandSupport( + accounts ports.UserAccountStore, + history ports.EntitlementHistoryStore, + reader effectiveReader, + lifecycle ports.EntitlementLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, +) (commandSupport, error) { + switch { + case accounts == nil: + return commandSupport{}, fmt.Errorf("user account store must not be nil") + case history == nil: + return commandSupport{}, fmt.Errorf("entitlement history store must not be nil") + case reader == nil: + return commandSupport{}, fmt.Errorf("effective entitlement reader must not be nil") + case lifecycle == nil: + return commandSupport{}, fmt.Errorf("entitlement lifecycle store must not be nil") + case clock == nil: + return commandSupport{}, fmt.Errorf("clock must not be nil") + case idGenerator == nil: + return commandSupport{}, fmt.Errorf("id generator must not be nil") + default: + return commandSupport{ + 
accounts: accounts, + history: history, + reader: reader, + lifecycle: lifecycle, + clock: clock, + idGenerator: idGenerator, + }, nil + } +} + +func (support commandSupport) ensureUserExists(ctx context.Context, userID common.UserID) error { + exists, err := support.accounts.ExistsByUserID(ctx, userID) + switch { + case err != nil: + return shared.ServiceUnavailable(err) + case !exists: + return shared.SubjectNotFound() + default: + return nil + } +} + +func (support commandSupport) loadEffectiveSnapshot( + ctx context.Context, + userID common.UserID, +) (entitlement.CurrentSnapshot, error) { + currentSnapshot, err := support.reader.GetByUserID(ctx, userID) + switch { + case err == nil: + return currentSnapshot, nil + case errors.Is(err, ports.ErrNotFound): + return entitlement.CurrentSnapshot{}, shared.InternalError(fmt.Errorf("user %q is missing entitlement snapshot", userID)) + default: + return entitlement.CurrentSnapshot{}, shared.ServiceUnavailable(err) + } +} + +func (support commandSupport) loadCurrentRecord( + ctx context.Context, + userID common.UserID, + now time.Time, +) (entitlement.PeriodRecord, error) { + historyRecords, err := support.history.ListByUserID(ctx, userID) + if err != nil { + return entitlement.PeriodRecord{}, shared.ServiceUnavailable(err) + } + + currentRecord, ok := currentRecordAt(historyRecords, now) + if !ok { + return entitlement.PeriodRecord{}, shared.InternalError(fmt.Errorf("user %q is missing current entitlement history record", userID)) + } + + return currentRecord, nil +} + +// GrantService executes the explicit trusted paid-entitlement grant command. +type GrantService struct { + support commandSupport + logger *slog.Logger + telemetry *telemetry.Runtime + publisher ports.EntitlementChangedPublisher +} + +// NewGrantService constructs one entitlement-grant use case. 
+func NewGrantService( + accounts ports.UserAccountStore, + history ports.EntitlementHistoryStore, + reader effectiveReader, + lifecycle ports.EntitlementLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, +) (*GrantService, error) { + return NewGrantServiceWithObservability(accounts, history, reader, lifecycle, clock, idGenerator, nil, nil, nil) +} + +// NewGrantServiceWithObservability constructs one entitlement-grant use case +// with optional observability hooks. +func NewGrantServiceWithObservability( + accounts ports.UserAccountStore, + history ports.EntitlementHistoryStore, + reader effectiveReader, + lifecycle ports.EntitlementLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, + logger *slog.Logger, + telemetryRuntime *telemetry.Runtime, + publisher ports.EntitlementChangedPublisher, +) (*GrantService, error) { + support, err := newCommandSupport(accounts, history, reader, lifecycle, clock, idGenerator) + if err != nil { + return nil, fmt.Errorf("entitlement grant service: %w", err) + } + + return &GrantService{ + support: support, + logger: logger, + telemetry: telemetryRuntime, + publisher: publisher, + }, nil +} + +// Execute grants a new current paid entitlement when the current effective +// entitlement is free. 
+func (service *GrantService) Execute(ctx context.Context, input GrantInput) (result CommandResult, err error) { + outcome := shared.ErrorCodeInternalError + userIDString := strings.TrimSpace(input.UserID) + sourceValue := strings.TrimSpace(input.Source) + reasonCodeValue := strings.TrimSpace(input.ReasonCode) + actorTypeValue := strings.TrimSpace(input.Actor.Type) + actorIDValue := strings.TrimSpace(input.Actor.ID) + defer func() { + if service.telemetry != nil { + service.telemetry.RecordEntitlementMutation(ctx, "grant", outcome) + } + shared.LogServiceOutcome(service.logger, ctx, "entitlement grant completed", err, + "use_case", "grant_entitlement", + "command", "grant", + "outcome", outcome, + "user_id", userIDString, + "source", sourceValue, + "reason_code", reasonCodeValue, + "actor_type", actorTypeValue, + "actor_id", actorIDValue, + ) + }() + + if ctx == nil { + outcome = shared.ErrorCodeInvalidRequest + return CommandResult{}, shared.InvalidRequest("context must not be nil") + } + + userID, err := shared.ParseUserID(input.UserID) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + userIDString = userID.String() + if err := service.support.ensureUserExists(ctx, userID); err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + + planCode, err := parsePlanCode(input.PlanCode) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + if planCode == entitlement.PlanCodeFree { + outcome = shared.ErrorCodeInvalidRequest + return CommandResult{}, shared.InvalidRequest("plan_code must not be \"free\" for grant") + } + source, err := parseSource(input.Source) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + sourceValue = source.String() + reasonCode, err := shared.ParseReasonCode(input.ReasonCode) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + reasonCodeValue = 
reasonCode.String() + actor, err := parseActor(input.Actor) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + actorTypeValue = actor.Type.String() + actorIDValue = actor.ID.String() + startsAt, err := parseTimestamp("starts_at", input.StartsAt) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + endsAt, err := parseOptionalTimestamp("ends_at", input.EndsAt) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + + now := service.support.clock.Now().UTC() + if startsAt.After(now) { + outcome = shared.ErrorCodeInvalidRequest + return CommandResult{}, shared.InvalidRequest("starts_at must not be in the future") + } + if err := validateGrantBounds(planCode, startsAt, endsAt); err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + + currentSnapshot, err := service.support.loadEffectiveSnapshot(ctx, userID) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + if currentSnapshot.IsPaid { + outcome = shared.ErrorCodeConflict + return CommandResult{}, shared.Conflict() + } + + currentRecord, err := service.support.loadCurrentRecord(ctx, userID, now) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + if currentRecord.PlanCode != entitlement.PlanCodeFree { + outcome = shared.ErrorCodeInternalError + return CommandResult{}, shared.InternalError(fmt.Errorf("user %q current entitlement record must be free before grant", userID)) + } + if startsAt.Before(currentRecord.StartsAt) { + outcome = shared.ErrorCodeInvalidRequest + return CommandResult{}, shared.InvalidRequest("starts_at must not be before the current free entitlement started") + } + + recordID, err := service.support.idGenerator.NewEntitlementRecordID() + if err != nil { + outcome = shared.ErrorCodeServiceUnavailable + return CommandResult{}, shared.ServiceUnavailable(err) + } + + 
updatedCurrentRecord := currentRecord + updatedCurrentRecord.ClosedAt = &startsAt + updatedCurrentRecord.ClosedBy = actor + updatedCurrentRecord.ClosedReasonCode = reasonCode + + newRecord := entitlement.PeriodRecord{ + RecordID: recordID, + UserID: userID, + PlanCode: planCode, + Source: source, + Actor: actor, + ReasonCode: reasonCode, + StartsAt: startsAt, + EndsAt: endsAt, + CreatedAt: now, + } + newSnapshot := entitlement.CurrentSnapshot{ + UserID: userID, + PlanCode: planCode, + IsPaid: true, + StartsAt: startsAt, + EndsAt: endsAt, + Source: source, + Actor: actor, + ReasonCode: reasonCode, + UpdatedAt: now, + } + + if err := service.support.lifecycle.Grant(ctx, ports.GrantEntitlementInput{ + ExpectedCurrentSnapshot: currentSnapshot, + ExpectedCurrentRecord: currentRecord, + UpdatedCurrentRecord: updatedCurrentRecord, + NewRecord: newRecord, + NewSnapshot: newSnapshot, + }); err != nil { + switch { + case errors.Is(err, ports.ErrConflict): + outcome = shared.ErrorCodeConflict + return CommandResult{}, shared.Conflict() + default: + outcome = shared.ErrorCodeServiceUnavailable + return CommandResult{}, shared.ServiceUnavailable(err) + } + } + outcome = "success" + result = CommandResult{UserID: userID.String(), Entitlement: newSnapshot} + publishEntitlementChanged(ctx, service.publisher, service.telemetry, service.logger, "grant_entitlement", ports.EntitlementChangedOperationGranted, newSnapshot) + + return result, nil +} + +// ExtendService executes the explicit trusted paid-entitlement extend command. +type ExtendService struct { + support commandSupport + logger *slog.Logger + telemetry *telemetry.Runtime + publisher ports.EntitlementChangedPublisher +} + +// NewExtendService constructs one entitlement-extend use case. 
+func NewExtendService( + accounts ports.UserAccountStore, + history ports.EntitlementHistoryStore, + reader effectiveReader, + lifecycle ports.EntitlementLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, +) (*ExtendService, error) { + return NewExtendServiceWithObservability(accounts, history, reader, lifecycle, clock, idGenerator, nil, nil, nil) +} + +// NewExtendServiceWithObservability constructs one entitlement-extend use +// case with optional observability hooks. +func NewExtendServiceWithObservability( + accounts ports.UserAccountStore, + history ports.EntitlementHistoryStore, + reader effectiveReader, + lifecycle ports.EntitlementLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, + logger *slog.Logger, + telemetryRuntime *telemetry.Runtime, + publisher ports.EntitlementChangedPublisher, +) (*ExtendService, error) { + support, err := newCommandSupport(accounts, history, reader, lifecycle, clock, idGenerator) + if err != nil { + return nil, fmt.Errorf("entitlement extend service: %w", err) + } + + return &ExtendService{ + support: support, + logger: logger, + telemetry: telemetryRuntime, + publisher: publisher, + }, nil +} + +// Execute extends the current finite paid entitlement by appending a new +// history segment and updating the current snapshot. 
+func (service *ExtendService) Execute(ctx context.Context, input ExtendInput) (result CommandResult, err error) { + outcome := shared.ErrorCodeInternalError + userIDString := strings.TrimSpace(input.UserID) + sourceValue := strings.TrimSpace(input.Source) + reasonCodeValue := strings.TrimSpace(input.ReasonCode) + actorTypeValue := strings.TrimSpace(input.Actor.Type) + actorIDValue := strings.TrimSpace(input.Actor.ID) + defer func() { + if service.telemetry != nil { + service.telemetry.RecordEntitlementMutation(ctx, "extend", outcome) + } + shared.LogServiceOutcome(service.logger, ctx, "entitlement extend completed", err, + "use_case", "extend_entitlement", + "command", "extend", + "outcome", outcome, + "user_id", userIDString, + "source", sourceValue, + "reason_code", reasonCodeValue, + "actor_type", actorTypeValue, + "actor_id", actorIDValue, + ) + }() + + if ctx == nil { + outcome = shared.ErrorCodeInvalidRequest + return CommandResult{}, shared.InvalidRequest("context must not be nil") + } + + userID, err := shared.ParseUserID(input.UserID) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + userIDString = userID.String() + if err := service.support.ensureUserExists(ctx, userID); err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + source, err := parseSource(input.Source) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + sourceValue = source.String() + reasonCode, err := shared.ParseReasonCode(input.ReasonCode) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + reasonCodeValue = reasonCode.String() + actor, err := parseActor(input.Actor) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + actorTypeValue = actor.Type.String() + actorIDValue = actor.ID.String() + newEndsAt, err := parseTimestamp("ends_at", input.EndsAt) + if err != nil { + outcome = 
shared.MetricOutcome(err) + return CommandResult{}, err + } + + now := service.support.clock.Now().UTC() + currentSnapshot, err := service.support.loadEffectiveSnapshot(ctx, userID) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + if !currentSnapshot.IsPaid || currentSnapshot.EndsAt == nil { + outcome = shared.ErrorCodeConflict + return CommandResult{}, shared.Conflict() + } + if !newEndsAt.After(*currentSnapshot.EndsAt) { + outcome = shared.ErrorCodeInvalidRequest + return CommandResult{}, shared.InvalidRequest("ends_at must be after the current paid entitlement ends_at") + } + + currentRecord, err := service.support.loadCurrentRecord(ctx, userID, now) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + if currentRecord.PlanCode != currentSnapshot.PlanCode || currentRecord.EndsAt == nil { + outcome = shared.ErrorCodeInternalError + return CommandResult{}, shared.InternalError(fmt.Errorf("user %q current entitlement record is inconsistent with current snapshot", userID)) + } + + recordID, err := service.support.idGenerator.NewEntitlementRecordID() + if err != nil { + outcome = shared.ErrorCodeServiceUnavailable + return CommandResult{}, shared.ServiceUnavailable(err) + } + + segmentStartsAt := currentSnapshot.EndsAt.UTC() + newRecord := entitlement.PeriodRecord{ + RecordID: recordID, + UserID: userID, + PlanCode: currentSnapshot.PlanCode, + Source: source, + Actor: actor, + ReasonCode: reasonCode, + StartsAt: segmentStartsAt, + EndsAt: &newEndsAt, + CreatedAt: now, + } + newSnapshot := entitlement.CurrentSnapshot{ + UserID: userID, + PlanCode: currentSnapshot.PlanCode, + IsPaid: true, + StartsAt: currentSnapshot.StartsAt, + EndsAt: &newEndsAt, + Source: source, + Actor: actor, + ReasonCode: reasonCode, + UpdatedAt: now, + } + + if err := service.support.lifecycle.Extend(ctx, ports.ExtendEntitlementInput{ + ExpectedCurrentSnapshot: currentSnapshot, + NewRecord: newRecord, + 
NewSnapshot: newSnapshot, + }); err != nil { + switch { + case errors.Is(err, ports.ErrConflict): + outcome = shared.ErrorCodeConflict + return CommandResult{}, shared.Conflict() + default: + outcome = shared.ErrorCodeServiceUnavailable + return CommandResult{}, shared.ServiceUnavailable(err) + } + } + outcome = "success" + result = CommandResult{UserID: userID.String(), Entitlement: newSnapshot} + publishEntitlementChanged(ctx, service.publisher, service.telemetry, service.logger, "extend_entitlement", ports.EntitlementChangedOperationExtended, newSnapshot) + + return result, nil +} + +// RevokeService executes the explicit trusted paid-entitlement revoke command. +type RevokeService struct { + support commandSupport + logger *slog.Logger + telemetry *telemetry.Runtime + publisher ports.EntitlementChangedPublisher +} + +// NewRevokeService constructs one entitlement-revoke use case. +func NewRevokeService( + accounts ports.UserAccountStore, + history ports.EntitlementHistoryStore, + reader effectiveReader, + lifecycle ports.EntitlementLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, +) (*RevokeService, error) { + return NewRevokeServiceWithObservability(accounts, history, reader, lifecycle, clock, idGenerator, nil, nil, nil) +} + +// NewRevokeServiceWithObservability constructs one entitlement-revoke use case +// with optional observability hooks. 
+func NewRevokeServiceWithObservability( + accounts ports.UserAccountStore, + history ports.EntitlementHistoryStore, + reader effectiveReader, + lifecycle ports.EntitlementLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, + logger *slog.Logger, + telemetryRuntime *telemetry.Runtime, + publisher ports.EntitlementChangedPublisher, +) (*RevokeService, error) { + support, err := newCommandSupport(accounts, history, reader, lifecycle, clock, idGenerator) + if err != nil { + return nil, fmt.Errorf("entitlement revoke service: %w", err) + } + + return &RevokeService{ + support: support, + logger: logger, + telemetry: telemetryRuntime, + publisher: publisher, + }, nil +} + +// Execute revokes the current paid entitlement and materializes a new free +// state starting at the revoke timestamp. +func (service *RevokeService) Execute(ctx context.Context, input RevokeInput) (result CommandResult, err error) { + outcome := shared.ErrorCodeInternalError + userIDString := strings.TrimSpace(input.UserID) + sourceValue := strings.TrimSpace(input.Source) + reasonCodeValue := strings.TrimSpace(input.ReasonCode) + actorTypeValue := strings.TrimSpace(input.Actor.Type) + actorIDValue := strings.TrimSpace(input.Actor.ID) + defer func() { + if service.telemetry != nil { + service.telemetry.RecordEntitlementMutation(ctx, "revoke", outcome) + } + shared.LogServiceOutcome(service.logger, ctx, "entitlement revoke completed", err, + "use_case", "revoke_entitlement", + "command", "revoke", + "outcome", outcome, + "user_id", userIDString, + "source", sourceValue, + "reason_code", reasonCodeValue, + "actor_type", actorTypeValue, + "actor_id", actorIDValue, + ) + }() + + if ctx == nil { + outcome = shared.ErrorCodeInvalidRequest + return CommandResult{}, shared.InvalidRequest("context must not be nil") + } + + userID, err := shared.ParseUserID(input.UserID) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + userIDString = userID.String() 
+ if err := service.support.ensureUserExists(ctx, userID); err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + source, err := parseSource(input.Source) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + sourceValue = source.String() + reasonCode, err := shared.ParseReasonCode(input.ReasonCode) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + reasonCodeValue = reasonCode.String() + actor, err := parseActor(input.Actor) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + actorTypeValue = actor.Type.String() + actorIDValue = actor.ID.String() + + now := service.support.clock.Now().UTC() + currentSnapshot, err := service.support.loadEffectiveSnapshot(ctx, userID) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + if !currentSnapshot.IsPaid { + outcome = shared.ErrorCodeConflict + return CommandResult{}, shared.Conflict() + } + + currentRecord, err := service.support.loadCurrentRecord(ctx, userID, now) + if err != nil { + outcome = shared.MetricOutcome(err) + return CommandResult{}, err + } + if currentRecord.PlanCode != currentSnapshot.PlanCode { + outcome = shared.ErrorCodeInternalError + return CommandResult{}, shared.InternalError(fmt.Errorf("user %q current entitlement record is inconsistent with current snapshot", userID)) + } + + recordID, err := service.support.idGenerator.NewEntitlementRecordID() + if err != nil { + outcome = shared.ErrorCodeServiceUnavailable + return CommandResult{}, shared.ServiceUnavailable(err) + } + + updatedCurrentRecord := currentRecord + updatedCurrentRecord.ClosedAt = &now + updatedCurrentRecord.ClosedBy = actor + updatedCurrentRecord.ClosedReasonCode = reasonCode + + newRecord := entitlement.PeriodRecord{ + RecordID: recordID, + UserID: userID, + PlanCode: entitlement.PlanCodeFree, + Source: source, + Actor: actor, + ReasonCode: 
reasonCode, + StartsAt: now, + CreatedAt: now, + } + newSnapshot := entitlement.CurrentSnapshot{ + UserID: userID, + PlanCode: entitlement.PlanCodeFree, + IsPaid: false, + StartsAt: now, + Source: source, + Actor: actor, + ReasonCode: reasonCode, + UpdatedAt: now, + } + + if err := service.support.lifecycle.Revoke(ctx, ports.RevokeEntitlementInput{ + ExpectedCurrentSnapshot: currentSnapshot, + ExpectedCurrentRecord: currentRecord, + UpdatedCurrentRecord: updatedCurrentRecord, + NewRecord: newRecord, + NewSnapshot: newSnapshot, + }); err != nil { + switch { + case errors.Is(err, ports.ErrConflict): + outcome = shared.ErrorCodeConflict + return CommandResult{}, shared.Conflict() + default: + outcome = shared.ErrorCodeServiceUnavailable + return CommandResult{}, shared.ServiceUnavailable(err) + } + } + outcome = "success" + result = CommandResult{UserID: userID.String(), Entitlement: newSnapshot} + publishEntitlementChanged(ctx, service.publisher, service.telemetry, service.logger, "revoke_entitlement", ports.EntitlementChangedOperationRevoked, newSnapshot) + + return result, nil +} + +func buildExpiryRepairState( + expiredSnapshot entitlement.CurrentSnapshot, + recordID entitlement.EntitlementRecordID, + now time.Time, +) (entitlement.PeriodRecord, entitlement.CurrentSnapshot, error) { + if !expiredSnapshot.IsExpiredAt(now) { + return entitlement.PeriodRecord{}, entitlement.CurrentSnapshot{}, fmt.Errorf("expired snapshot repair requires an expired finite paid snapshot") + } + + freeStartsAt := expiredSnapshot.EndsAt.UTC() + freeRecord := entitlement.PeriodRecord{ + RecordID: recordID, + UserID: expiredSnapshot.UserID, + PlanCode: entitlement.PlanCodeFree, + Source: expiryRepairSource, + Actor: common.ActorRef{Type: expiryRepairActorType, ID: expiryRepairActorID}, + ReasonCode: expiryRepairReasonCode, + StartsAt: freeStartsAt, + CreatedAt: now, + } + freeSnapshot := entitlement.CurrentSnapshot{ + UserID: expiredSnapshot.UserID, + PlanCode: entitlement.PlanCodeFree, + 
IsPaid: false, + StartsAt: freeStartsAt, + Source: expiryRepairSource, + Actor: common.ActorRef{Type: expiryRepairActorType, ID: expiryRepairActorID}, + ReasonCode: expiryRepairReasonCode, + UpdatedAt: now, + } + + if err := freeRecord.Validate(); err != nil { + return entitlement.PeriodRecord{}, entitlement.CurrentSnapshot{}, err + } + if err := freeSnapshot.Validate(); err != nil { + return entitlement.PeriodRecord{}, entitlement.CurrentSnapshot{}, err + } + + return freeRecord, freeSnapshot, nil +} + +func currentRecordAt(records []entitlement.PeriodRecord, now time.Time) (entitlement.PeriodRecord, bool) { + var ( + currentRecord entitlement.PeriodRecord + found bool + ) + + for _, record := range records { + if !record.IsEffectiveAt(now) { + continue + } + if !found || record.StartsAt.After(currentRecord.StartsAt) || + (record.StartsAt.Equal(currentRecord.StartsAt) && record.CreatedAt.After(currentRecord.CreatedAt)) { + currentRecord = record + found = true + } + } + + return currentRecord, found +} + +func parsePlanCode(value string) (entitlement.PlanCode, error) { + planCode := entitlement.PlanCode(shared.NormalizeString(value)) + if !planCode.IsKnown() { + return "", shared.InvalidRequest("plan_code is unsupported") + } + + return planCode, nil +} + +func parseSource(value string) (common.Source, error) { + source := common.Source(shared.NormalizeString(value)) + if err := source.Validate(); err != nil { + return "", shared.InvalidRequest(err.Error()) + } + + return source, nil +} + +func parseActor(input ActorInput) (common.ActorRef, error) { + ref := common.ActorRef{ + Type: common.ActorType(shared.NormalizeString(input.Type)), + ID: common.ActorID(shared.NormalizeString(input.ID)), + } + if err := ref.Validate(); err != nil { + switch { + case ref.Type.IsZero(): + return common.ActorRef{}, shared.InvalidRequest("actor.type must not be empty") + default: + return common.ActorRef{}, shared.InvalidRequest(err.Error()) + } + } + + return ref, nil +} + +func 
parseTimestamp(fieldName string, value string) (time.Time, error) { + trimmed := shared.NormalizeString(value) + if trimmed == "" { + return time.Time{}, shared.InvalidRequest(fieldName + " must not be empty") + } + + parsed, err := time.Parse(time.RFC3339Nano, trimmed) + if err != nil { + return time.Time{}, shared.InvalidRequest(fieldName + " must be a valid RFC 3339 timestamp") + } + + return parsed.UTC(), nil +} + +func parseOptionalTimestamp(fieldName string, value string) (*time.Time, error) { + trimmed := shared.NormalizeString(value) + if trimmed == "" { + return nil, nil + } + + parsed, err := parseTimestamp(fieldName, trimmed) + if err != nil { + return nil, err + } + + return &parsed, nil +} + +func publishEntitlementChanged( + ctx context.Context, + publisher ports.EntitlementChangedPublisher, + telemetryRuntime *telemetry.Runtime, + logger *slog.Logger, + useCase string, + operation ports.EntitlementChangedOperation, + snapshot entitlement.CurrentSnapshot, +) { + if publisher == nil { + return + } + + event := ports.EntitlementChangedEvent{ + UserID: snapshot.UserID, + OccurredAt: snapshot.UpdatedAt.UTC(), + Source: snapshot.Source, + Operation: operation, + PlanCode: snapshot.PlanCode, + IsPaid: snapshot.IsPaid, + StartsAt: snapshot.StartsAt, + EndsAt: snapshot.EndsAt, + ReasonCode: snapshot.ReasonCode, + Actor: snapshot.Actor, + UpdatedAt: snapshot.UpdatedAt, + } + if err := publisher.PublishEntitlementChanged(ctx, event); err != nil { + if telemetryRuntime != nil { + telemetryRuntime.RecordEventPublicationFailure(ctx, ports.EntitlementChangedEventType) + } + shared.LogEventPublicationFailure(logger, ctx, ports.EntitlementChangedEventType, err, + "use_case", useCase, + "user_id", snapshot.UserID.String(), + "source", snapshot.Source.String(), + "reason_code", snapshot.ReasonCode.String(), + "actor_type", snapshot.Actor.Type.String(), + "actor_id", snapshot.Actor.ID.String(), + ) + } +} + +func validateGrantBounds( + planCode entitlement.PlanCode, + 
startsAt time.Time, + endsAt *time.Time, +) error { + switch { + case planCode.HasFiniteExpiry(): + if endsAt == nil { + return shared.InvalidRequest("ends_at must be present for finite paid plans") + } + case planCode == entitlement.PlanCodePaidLifetime: + if endsAt != nil { + return shared.InvalidRequest("ends_at must be empty for paid_lifetime") + } + default: + return shared.InvalidRequest("plan_code is unsupported") + } + if endsAt != nil && !endsAt.After(startsAt) { + return shared.InvalidRequest("ends_at must be after starts_at") + } + + return nil +} diff --git a/user/internal/service/entitlementsvc/service_test.go b/user/internal/service/entitlementsvc/service_test.go new file mode 100644 index 0000000..165ed3a --- /dev/null +++ b/user/internal/service/entitlementsvc/service_test.go @@ -0,0 +1,565 @@ +package entitlementsvc + +import ( + "context" + "testing" + "time" + + "galaxy/user/internal/domain/account" + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + "galaxy/user/internal/domain/policy" + "galaxy/user/internal/ports" + "galaxy/user/internal/service/shared" + + "github.com/stretchr/testify/require" +) + +func TestReaderGetByUserIDRepairsExpiredFinitePaidSnapshot(t *testing.T) { + t.Parallel() + + userID := common.UserID("user-123") + startsAt := time.Unix(1_775_240_000, 0).UTC() + endsAt := startsAt.Add(24 * time.Hour) + now := endsAt.Add(2 * time.Hour) + snapshotStore := &fakeSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + userID: paidSnapshot( + userID, + entitlement.PlanCodePaidMonthly, + startsAt, + endsAt, + common.Source("admin"), + common.ReasonCode("manual_grant"), + ), + }, + } + historyStore := &fakeHistoryStore{ + byUserID: map[common.UserID][]entitlement.PeriodRecord{ + userID: { + paidRecord( + entitlement.EntitlementRecordID("entitlement-paid"), + userID, + entitlement.PlanCodePaidMonthly, + startsAt, + endsAt, + common.Source("admin"), + common.ReasonCode("manual_grant"), 
+ ), + }, + }, + } + lifecycleStore := &fakeLifecycleStore{ + historyStore: historyStore, + snapshotStore: snapshotStore, + } + + reader, err := NewReader(snapshotStore, lifecycleStore, fixedClock{now: now}, fixedIDGenerator{ + recordID: entitlement.EntitlementRecordID("entitlement-free"), + }) + require.NoError(t, err) + + got, err := reader.GetByUserID(context.Background(), userID) + require.NoError(t, err) + require.Equal(t, entitlement.PlanCodeFree, got.PlanCode) + require.False(t, got.IsPaid) + require.Equal(t, endsAt, got.StartsAt) + require.Equal(t, expiryRepairSource, got.Source) + require.Equal(t, expiryRepairReasonCode, got.ReasonCode) + require.Equal(t, common.ActorRef{Type: expiryRepairActorType, ID: expiryRepairActorID}, got.Actor) + require.Len(t, historyStore.byUserID[userID], 2) + require.Equal(t, got, snapshotStore.byUserID[userID]) + require.Equal(t, entitlement.EntitlementRecordID("entitlement-free"), lifecycleStore.repairInput.NewRecord.RecordID) +} + +func TestGrantServiceExecuteRejectsInvalidPlanRules(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + freeSnapshot := freeSnapshot(userID, now.Add(-24*time.Hour), common.Source("auth_registration"), common.ReasonCode("initial_free_entitlement")) + freeRecord := freeRecord(entitlement.EntitlementRecordID("entitlement-free"), userID, now.Add(-24*time.Hour), common.Source("auth_registration"), common.ReasonCode("initial_free_entitlement")) + + tests := []struct { + name string + input GrantInput + wantErr string + }{ + { + name: "free plan not allowed", + input: GrantInput{ + UserID: userID.String(), + PlanCode: string(entitlement.PlanCodeFree), + Source: "admin", + ReasonCode: "manual_grant", + Actor: ActorInput{Type: "admin", ID: "admin-1"}, + StartsAt: now.Format(time.RFC3339Nano), + }, + wantErr: shared.ErrorCodeInvalidRequest, + }, + { + name: "future starts at rejected", + input: GrantInput{ + UserID: userID.String(), + PlanCode: 
string(entitlement.PlanCodePaidMonthly),
+                Source:     "admin",
+                ReasonCode: "manual_grant",
+                Actor:      ActorInput{Type: "admin", ID: "admin-1"},
+                StartsAt:   now.Add(time.Hour).Format(time.RFC3339Nano),
+                EndsAt:     now.Add(31 * 24 * time.Hour).Format(time.RFC3339Nano),
+            },
+            wantErr: shared.ErrorCodeInvalidRequest,
+        },
+        {
+            name: "finite plan requires ends at",
+            input: GrantInput{
+                UserID:     userID.String(),
+                PlanCode:   string(entitlement.PlanCodePaidMonthly),
+                Source:     "admin",
+                ReasonCode: "manual_grant",
+                Actor:      ActorInput{Type: "admin", ID: "admin-1"},
+                StartsAt:   now.Format(time.RFC3339Nano),
+            },
+            wantErr: shared.ErrorCodeInvalidRequest,
+        },
+        {
+            name: "lifetime plan forbids ends at",
+            input: GrantInput{
+                UserID:     userID.String(),
+                PlanCode:   string(entitlement.PlanCodePaidLifetime),
+                Source:     "admin",
+                ReasonCode: "manual_grant",
+                Actor:      ActorInput{Type: "admin", ID: "admin-1"},
+                StartsAt:   now.Format(time.RFC3339Nano),
+                EndsAt:     now.Add(24 * time.Hour).Format(time.RFC3339Nano),
+            },
+            wantErr: shared.ErrorCodeInvalidRequest,
+        },
+    }
+
+    for _, tt := range tests {
+        tt := tt
+        t.Run(tt.name, func(t *testing.T) {
+            t.Parallel()
+
+            service, err := NewGrantService(
+                fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}},
+                &fakeHistoryStore{byUserID: map[common.UserID][]entitlement.PeriodRecord{userID: {freeRecord}}},
+                fakeEffectiveReader{byUserID: map[common.UserID]entitlement.CurrentSnapshot{userID: freeSnapshot}},
+                &fakeLifecycleStore{},
+                fixedClock{now: now},
+                fixedIDGenerator{recordID: entitlement.EntitlementRecordID("entitlement-paid")},
+            )
+            require.NoError(t, err)
+
+            _, err = service.Execute(context.Background(), tt.input)
+            require.Error(t, err)
+            require.Equal(t, tt.wantErr, shared.CodeOf(err))
+        })
+    }
+}
+
+func TestGrantServiceExecuteBuildsTransition(t *testing.T) {
+    t.Parallel()
+
+    now := time.Unix(1_775_240_000, 0).UTC()
+    userID := common.UserID("user-123")
+    currentFreeStartsAt := now.Add(-24 * time.Hour)
+    currentSnapshot := freeSnapshot(userID, currentFreeStartsAt, common.Source("auth_registration"), common.ReasonCode("initial_free_entitlement"))
+    currentRecord := freeRecord(entitlement.EntitlementRecordID("entitlement-free"), userID, currentFreeStartsAt, common.Source("auth_registration"), common.ReasonCode("initial_free_entitlement"))
+    lifecycleStore := &fakeLifecycleStore{}
+
+    service, err := NewGrantService(
+        fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}},
+        &fakeHistoryStore{byUserID: map[common.UserID][]entitlement.PeriodRecord{userID: {currentRecord}}},
+        fakeEffectiveReader{byUserID: map[common.UserID]entitlement.CurrentSnapshot{userID: currentSnapshot}},
+        lifecycleStore,
+        fixedClock{now: now},
+        fixedIDGenerator{recordID: entitlement.EntitlementRecordID("entitlement-paid")},
+    )
+    require.NoError(t, err)
+
+    result, err := service.Execute(context.Background(), GrantInput{
+        UserID:     userID.String(),
+        PlanCode:   string(entitlement.PlanCodePaidMonthly),
+        Source:     "admin",
+        ReasonCode: "manual_grant",
+        Actor:      ActorInput{Type: "admin", ID: "admin-1"},
+        StartsAt:   now.Format(time.RFC3339Nano),
+        EndsAt:     now.Add(30 * 24 * time.Hour).Format(time.RFC3339Nano),
+    })
+    require.NoError(t, err)
+    require.Equal(t, userID.String(), result.UserID)
+    require.Equal(t, entitlement.PlanCodePaidMonthly, result.Entitlement.PlanCode)
+    require.Equal(t, entitlement.EntitlementRecordID("entitlement-paid"), lifecycleStore.grantInput.NewRecord.RecordID)
+    require.Equal(t, currentSnapshot, lifecycleStore.grantInput.ExpectedCurrentSnapshot)
+    require.Equal(t, currentRecord.RecordID, lifecycleStore.grantInput.UpdatedCurrentRecord.RecordID)
+    require.NotNil(t, lifecycleStore.grantInput.UpdatedCurrentRecord.ClosedAt)
+    require.True(t, lifecycleStore.grantInput.UpdatedCurrentRecord.ClosedAt.Equal(now))
+}
+
+func TestExtendServiceExecuteBuildsExtensionSegment(t *testing.T) {
+    t.Parallel()
+
+    now := time.Unix(1_775_240_000, 0).UTC()
+    userID := common.UserID("user-123")
+    startsAt := now.Add(-24 * time.Hour)
+    currentEndsAt := now.Add(24 * time.Hour)
+    currentSnapshot := paidSnapshot(
+        userID,
+        entitlement.PlanCodePaidMonthly,
+        startsAt,
+        currentEndsAt,
+        common.Source("admin"),
+        common.ReasonCode("manual_grant"),
+    )
+    currentRecord := paidRecord(
+        entitlement.EntitlementRecordID("entitlement-paid-1"),
+        userID,
+        entitlement.PlanCodePaidMonthly,
+        startsAt,
+        currentEndsAt,
+        common.Source("admin"),
+        common.ReasonCode("manual_grant"),
+    )
+    lifecycleStore := &fakeLifecycleStore{}
+
+    service, err := NewExtendService(
+        fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}},
+        &fakeHistoryStore{byUserID: map[common.UserID][]entitlement.PeriodRecord{userID: {currentRecord}}},
+        fakeEffectiveReader{byUserID: map[common.UserID]entitlement.CurrentSnapshot{userID: currentSnapshot}},
+        lifecycleStore,
+        fixedClock{now: now},
+        fixedIDGenerator{recordID: entitlement.EntitlementRecordID("entitlement-paid-2")},
+    )
+    require.NoError(t, err)
+
+    result, err := service.Execute(context.Background(), ExtendInput{
+        UserID:     userID.String(),
+        Source:     "admin",
+        ReasonCode: "manual_extend",
+        Actor:      ActorInput{Type: "admin", ID: "admin-1"},
+        EndsAt:     currentEndsAt.Add(30 * 24 * time.Hour).Format(time.RFC3339Nano),
+    })
+    require.NoError(t, err)
+    require.Equal(t, currentEndsAt, lifecycleStore.extendInput.NewRecord.StartsAt)
+    require.Equal(t, startsAt, lifecycleStore.extendInput.NewSnapshot.StartsAt)
+    require.Equal(t, entitlement.PlanCodePaidMonthly, result.Entitlement.PlanCode)
+}
+
+func TestRevokeServiceExecuteBuildsFreeTransition(t *testing.T) {
+    t.Parallel()
+
+    now := time.Unix(1_775_240_000, 0).UTC()
+    userID := common.UserID("user-123")
+    startsAt := now.Add(-24 * time.Hour)
+    currentEndsAt := now.Add(24 * time.Hour)
+    currentSnapshot := paidSnapshot(
+        userID,
+        entitlement.PlanCodePaidMonthly,
+        startsAt,
+        currentEndsAt,
+        common.Source("admin"),
+        common.ReasonCode("manual_grant"),
+    )
+    currentRecord := paidRecord(
+        entitlement.EntitlementRecordID("entitlement-paid-1"),
+        userID,
+        entitlement.PlanCodePaidMonthly,
+        startsAt,
+        currentEndsAt,
+        common.Source("admin"),
+        common.ReasonCode("manual_grant"),
+    )
+    lifecycleStore := &fakeLifecycleStore{}
+
+    service, err := NewRevokeService(
+        fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}},
+        &fakeHistoryStore{byUserID: map[common.UserID][]entitlement.PeriodRecord{userID: {currentRecord}}},
+        fakeEffectiveReader{byUserID: map[common.UserID]entitlement.CurrentSnapshot{userID: currentSnapshot}},
+        lifecycleStore,
+        fixedClock{now: now},
+        fixedIDGenerator{recordID: entitlement.EntitlementRecordID("entitlement-free-2")},
+    )
+    require.NoError(t, err)
+
+    result, err := service.Execute(context.Background(), RevokeInput{
+        UserID:     userID.String(),
+        Source:     "admin",
+        ReasonCode: "manual_revoke",
+        Actor:      ActorInput{Type: "admin", ID: "admin-1"},
+    })
+    require.NoError(t, err)
+    require.Equal(t, entitlement.PlanCodeFree, result.Entitlement.PlanCode)
+    require.NotNil(t, lifecycleStore.revokeInput.UpdatedCurrentRecord.ClosedAt)
+    require.True(t, lifecycleStore.revokeInput.UpdatedCurrentRecord.ClosedAt.Equal(now))
+    require.Equal(t, now, lifecycleStore.revokeInput.NewRecord.StartsAt)
+}
+
+type fakeAccountStore struct {
+    existsByUserID map[common.UserID]bool
+}
+
+func (store fakeAccountStore) Create(context.Context, ports.CreateAccountInput) error {
+    return nil
+}
+
+func (store fakeAccountStore) GetByUserID(context.Context, common.UserID) (account.UserAccount, error) {
+    return account.UserAccount{}, ports.ErrNotFound
+}
+
+func (store fakeAccountStore) GetByEmail(context.Context, common.Email) (account.UserAccount, error) {
+    return account.UserAccount{}, ports.ErrNotFound
+}
+
+func (store fakeAccountStore) GetByRaceName(context.Context, common.RaceName) (account.UserAccount, error) {
+    return account.UserAccount{}, ports.ErrNotFound
+}
+
+func (store fakeAccountStore) ExistsByUserID(_ context.Context, userID common.UserID) (bool, error) {
+    return store.existsByUserID[userID], nil
+}
+
+func (store fakeAccountStore) RenameRaceName(context.Context, ports.RenameRaceNameInput) error {
+    return nil
+}
+
+func (store fakeAccountStore) Update(context.Context, account.UserAccount) error {
+    return nil
+}
+
+type fakeSnapshotStore struct {
+    byUserID map[common.UserID]entitlement.CurrentSnapshot
+}
+
+func (store *fakeSnapshotStore) GetByUserID(_ context.Context, userID common.UserID) (entitlement.CurrentSnapshot, error) {
+    record, ok := store.byUserID[userID]
+    if !ok {
+        return entitlement.CurrentSnapshot{}, ports.ErrNotFound
+    }
+
+    return record, nil
+}
+
+func (store *fakeSnapshotStore) Put(_ context.Context, record entitlement.CurrentSnapshot) error {
+    store.byUserID[record.UserID] = record
+    return nil
+}
+
+type fakeHistoryStore struct {
+    byUserID map[common.UserID][]entitlement.PeriodRecord
+}
+
+func (store *fakeHistoryStore) Create(_ context.Context, record entitlement.PeriodRecord) error {
+    store.byUserID[record.UserID] = append(store.byUserID[record.UserID], record)
+    return nil
+}
+
+func (store *fakeHistoryStore) GetByRecordID(_ context.Context, recordID entitlement.EntitlementRecordID) (entitlement.PeriodRecord, error) {
+    for _, records := range store.byUserID {
+        for _, record := range records {
+            if record.RecordID == recordID {
+                return record, nil
+            }
+        }
+    }
+
+    return entitlement.PeriodRecord{}, ports.ErrNotFound
+}
+
+func (store *fakeHistoryStore) ListByUserID(_ context.Context, userID common.UserID) ([]entitlement.PeriodRecord, error) {
+    records := store.byUserID[userID]
+    cloned := make([]entitlement.PeriodRecord, len(records))
+    copy(cloned, records)
+    return cloned, nil
+}
+
+func (store *fakeHistoryStore) Update(_ context.Context, record entitlement.PeriodRecord) error {
+    records := store.byUserID[record.UserID]
+    for idx := range records {
+        if records[idx].RecordID == record.RecordID {
+            records[idx] = record
+            store.byUserID[record.UserID] = records
+            return nil
+        }
+    }
+
+    return ports.ErrNotFound
+}
+
+type fakeEffectiveReader struct {
+    byUserID map[common.UserID]entitlement.CurrentSnapshot
+}
+
+func (reader fakeEffectiveReader) GetByUserID(_ context.Context, userID common.UserID) (entitlement.CurrentSnapshot, error) {
+    record, ok := reader.byUserID[userID]
+    if !ok {
+        return entitlement.CurrentSnapshot{}, ports.ErrNotFound
+    }
+
+    return record, nil
+}
+
+type fakeLifecycleStore struct {
+    historyStore  *fakeHistoryStore
+    snapshotStore *fakeSnapshotStore
+
+    grantInput  ports.GrantEntitlementInput
+    extendInput ports.ExtendEntitlementInput
+    revokeInput ports.RevokeEntitlementInput
+    repairInput ports.RepairExpiredEntitlementInput
+}
+
+func (store *fakeLifecycleStore) Grant(_ context.Context, input ports.GrantEntitlementInput) error {
+    store.grantInput = input
+    return nil
+}
+
+func (store *fakeLifecycleStore) Extend(_ context.Context, input ports.ExtendEntitlementInput) error {
+    store.extendInput = input
+    return nil
+}
+
+func (store *fakeLifecycleStore) Revoke(_ context.Context, input ports.RevokeEntitlementInput) error {
+    store.revokeInput = input
+    return nil
+}
+
+func (store *fakeLifecycleStore) RepairExpired(_ context.Context, input ports.RepairExpiredEntitlementInput) error {
+    store.repairInput = input
+    if store.historyStore != nil {
+        store.historyStore.byUserID[input.NewRecord.UserID] = append(store.historyStore.byUserID[input.NewRecord.UserID], input.NewRecord)
+    }
+    if store.snapshotStore != nil {
+        store.snapshotStore.byUserID[input.NewSnapshot.UserID] = input.NewSnapshot
+    }
+    return nil
+}
+
+type fixedClock struct {
+    now time.Time
+}
+
+func (clock fixedClock) Now() time.Time {
+    return clock.now
+}
+
+type fixedIDGenerator struct {
+    recordID         entitlement.EntitlementRecordID
+    sanctionRecordID policy.SanctionRecordID
+    limitRecordID    policy.LimitRecordID
+}
+
+func (generator fixedIDGenerator) NewUserID() (common.UserID, error) {
+    return "", nil
+}
+
+func (generator fixedIDGenerator) NewInitialRaceName() (common.RaceName, error) {
+    return "", nil
+}
+
+func (generator fixedIDGenerator) NewEntitlementRecordID() (entitlement.EntitlementRecordID, error) {
+    return generator.recordID, nil
+}
+
+func (generator fixedIDGenerator) NewSanctionRecordID() (policy.SanctionRecordID, error) {
+    return generator.sanctionRecordID, nil
+}
+
+func (generator fixedIDGenerator) NewLimitRecordID() (policy.LimitRecordID, error) {
+    return generator.limitRecordID, nil
+}
+
+func freeSnapshot(
+    userID common.UserID,
+    startsAt time.Time,
+    source common.Source,
+    reasonCode common.ReasonCode,
+) entitlement.CurrentSnapshot {
+    return entitlement.CurrentSnapshot{
+        UserID:     userID,
+        PlanCode:   entitlement.PlanCodeFree,
+        IsPaid:     false,
+        StartsAt:   startsAt,
+        Source:     source,
+        Actor:      common.ActorRef{Type: common.ActorType("service"), ID: common.ActorID("user-service")},
+        ReasonCode: reasonCode,
+        UpdatedAt:  startsAt,
+    }
+}
+
+func freeRecord(
+    recordID entitlement.EntitlementRecordID,
+    userID common.UserID,
+    startsAt time.Time,
+    source common.Source,
+    reasonCode common.ReasonCode,
+) entitlement.PeriodRecord {
+    return entitlement.PeriodRecord{
+        RecordID:   recordID,
+        UserID:     userID,
+        PlanCode:   entitlement.PlanCodeFree,
+        Source:     source,
+        Actor:      common.ActorRef{Type: common.ActorType("service"), ID: common.ActorID("user-service")},
+        ReasonCode: reasonCode,
+        StartsAt:   startsAt,
+        CreatedAt:  startsAt,
+    }
+}
+
+func paidSnapshot(
+    userID common.UserID,
+    planCode entitlement.PlanCode,
+    startsAt time.Time,
+    endsAt time.Time,
+    source common.Source,
+    reasonCode common.ReasonCode,
+) entitlement.CurrentSnapshot {
+    return entitlement.CurrentSnapshot{
+        UserID:     userID,
+        PlanCode:   planCode,
+        IsPaid:     true,
+        StartsAt:   startsAt,
+        EndsAt:     timePointer(endsAt),
+        Source:     source,
+        Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+        ReasonCode: reasonCode,
+        UpdatedAt:  startsAt,
+    }
+}
+
+func paidRecord(
+    recordID entitlement.EntitlementRecordID,
+    userID common.UserID,
+    planCode entitlement.PlanCode,
+    startsAt time.Time,
+    endsAt time.Time,
+    source common.Source,
+    reasonCode common.ReasonCode,
+) entitlement.PeriodRecord {
+    return entitlement.PeriodRecord{
+        RecordID:   recordID,
+        UserID:     userID,
+        PlanCode:   planCode,
+        Source:     source,
+        Actor:      common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")},
+        ReasonCode: reasonCode,
+        StartsAt:   startsAt,
+        EndsAt:     timePointer(endsAt),
+        CreatedAt:  startsAt,
+    }
+}
+
+func timePointer(value time.Time) *time.Time {
+    utcValue := value.UTC()
+    return &utcValue
+}
+
+var (
+    _ ports.UserAccountStore          = fakeAccountStore{}
+    _ ports.EntitlementSnapshotStore  = (*fakeSnapshotStore)(nil)
+    _ ports.EntitlementHistoryStore   = (*fakeHistoryStore)(nil)
+    _ ports.EntitlementLifecycleStore = (*fakeLifecycleStore)(nil)
+    _ ports.Clock                     = fixedClock{}
+    _ ports.IDGenerator               = fixedIDGenerator{}
+)
diff --git a/user/internal/service/geosync/service.go b/user/internal/service/geosync/service.go
new file mode 100644
index 0000000..3c2bf5e
--- /dev/null
+++ b/user/internal/service/geosync/service.go
@@ -0,0 +1,194 @@
+// Package geosync implements the trusted geo-facing declared-country sync
+// command owned by User Service.
+package geosync
+
+import (
+    "context"
+    "errors"
+    "fmt"
+    "log/slog"
+    "time"
+
+    "galaxy/user/internal/domain/account"
+    "galaxy/user/internal/domain/common"
+    "galaxy/user/internal/ports"
+    "galaxy/user/internal/service/shared"
+    "galaxy/user/internal/telemetry"
+
+    "golang.org/x/text/language"
+)
+
+const geoProfileServiceSource = common.Source("geo_profile_service")
+
+// SyncDeclaredCountryInput stores one trusted geo-facing country-sync request.
+type SyncDeclaredCountryInput struct {
+    // UserID identifies the regular user whose current declared country must be
+    // synchronized.
+    UserID string
+
+    // DeclaredCountry stores the new current effective declared country.
+    DeclaredCountry string
+}
+
+// SyncDeclaredCountryResult stores one trusted geo-facing country-sync result.
+type SyncDeclaredCountryResult struct {
+    // UserID identifies the synchronized user.
+    UserID string `json:"user_id"`
+
+    // DeclaredCountry stores the current effective declared country after the
+    // command completes.
+    DeclaredCountry string `json:"declared_country"`
+
+    // UpdatedAt stores the effective account mutation timestamp. Same-value
+    // no-op syncs return the current stored timestamp unchanged.
+    UpdatedAt time.Time `json:"updated_at"`
+}
+
+// SyncService executes the trusted geo-facing declared-country sync command.
+type SyncService struct {
+    accounts  ports.UserAccountStore
+    clock     ports.Clock
+    publisher ports.DeclaredCountryChangedPublisher
+    logger    *slog.Logger
+    telemetry *telemetry.Runtime
+}
+
+// NewSyncService constructs one trusted declared-country sync command.
+func NewSyncService(
+    accounts ports.UserAccountStore,
+    clock ports.Clock,
+    publisher ports.DeclaredCountryChangedPublisher,
+) (*SyncService, error) {
+    return NewSyncServiceWithObservability(accounts, clock, publisher, nil, nil)
+}
+
+// NewSyncServiceWithObservability constructs one trusted declared-country sync
+// command with optional structured logging and event-publication metrics.
+func NewSyncServiceWithObservability(
+    accounts ports.UserAccountStore,
+    clock ports.Clock,
+    publisher ports.DeclaredCountryChangedPublisher,
+    logger *slog.Logger,
+    telemetryRuntime *telemetry.Runtime,
+) (*SyncService, error) {
+    switch {
+    case accounts == nil:
+        return nil, fmt.Errorf("geo declared-country sync service: user account store must not be nil")
+    case clock == nil:
+        return nil, fmt.Errorf("geo declared-country sync service: clock must not be nil")
+    case publisher == nil:
+        return nil, fmt.Errorf("geo declared-country sync service: declared-country changed publisher must not be nil")
+    default:
+        return &SyncService{
+            accounts:  accounts,
+            clock:     clock,
+            publisher: publisher,
+            logger:    logger,
+            telemetry: telemetryRuntime,
+        }, nil
+    }
+}
+
+// Execute synchronizes the current effective declared country of one user.
+func (service *SyncService) Execute(
+    ctx context.Context,
+    input SyncDeclaredCountryInput,
+) (result SyncDeclaredCountryResult, err error) {
+    outcome := "failed"
+    userIDString := ""
+    defer func() {
+        shared.LogServiceOutcome(service.logger, ctx, "declared-country sync completed", err,
+            "use_case", "sync_declared_country",
+            "outcome", outcome,
+            "user_id", userIDString,
+            "source", geoProfileServiceSource.String(),
+        )
+    }()
+
+    if ctx == nil {
+        return SyncDeclaredCountryResult{}, shared.InvalidRequest("context must not be nil")
+    }
+
+    userID, err := shared.ParseUserID(input.UserID)
+    if err != nil {
+        return SyncDeclaredCountryResult{}, err
+    }
+    userIDString = userID.String()
+    declaredCountry, err := parseDeclaredCountry(input.DeclaredCountry)
+    if err != nil {
+        return SyncDeclaredCountryResult{}, err
+    }
+
+    record, err := service.accounts.GetByUserID(ctx, userID)
+    switch {
+    case err == nil:
+    case errors.Is(err, ports.ErrNotFound):
+        return SyncDeclaredCountryResult{}, shared.SubjectNotFound()
+    default:
+        return SyncDeclaredCountryResult{}, shared.ServiceUnavailable(err)
+    }
+
+    if record.DeclaredCountry == declaredCountry {
+        outcome = "noop"
+        return resultFromAccount(record), nil
+    }
+
+    record.DeclaredCountry = declaredCountry
+    record.UpdatedAt = service.clock.Now().UTC()
+
+    if err := service.accounts.Update(ctx, record); err != nil {
+        switch {
+        case errors.Is(err, ports.ErrNotFound):
+            return SyncDeclaredCountryResult{}, shared.SubjectNotFound()
+        case errors.Is(err, ports.ErrConflict):
+            return SyncDeclaredCountryResult{}, shared.ServiceUnavailable(err)
+        default:
+            return SyncDeclaredCountryResult{}, shared.ServiceUnavailable(err)
+        }
+    }
+
+    result = resultFromAccount(record)
+    outcome = "updated"
+
+    if err := service.publisher.PublishDeclaredCountryChanged(ctx, ports.DeclaredCountryChangedEvent{
+        UserID:          record.UserID,
+        DeclaredCountry: record.DeclaredCountry,
+        UpdatedAt:       record.UpdatedAt,
+        Source:          geoProfileServiceSource,
+    }); err != nil {
+        if service.telemetry != nil {
+            service.telemetry.RecordEventPublicationFailure(ctx, ports.DeclaredCountryChangedEventType)
+        }
+        shared.LogEventPublicationFailure(service.logger, ctx, ports.DeclaredCountryChangedEventType, err,
+            "use_case", "sync_declared_country",
+            "user_id", record.UserID.String(),
+            "source", geoProfileServiceSource.String(),
+        )
+    }
+
+    return result, nil
+}
+
+func parseDeclaredCountry(value string) (common.CountryCode, error) {
+    const message = "declared_country must be a valid ISO 3166-1 alpha-2 country code"
+
+    code := common.CountryCode(shared.NormalizeString(value))
+    if err := code.Validate(); err != nil {
+        return "", shared.InvalidRequest(message)
+    }
+
+    region, err := language.ParseRegion(code.String())
+    if err != nil || !region.IsCountry() || region.Canonicalize().String() != code.String() {
+        return "", shared.InvalidRequest(message)
+    }
+
+    return code, nil
+}
+
+func resultFromAccount(record account.UserAccount) SyncDeclaredCountryResult {
+    return SyncDeclaredCountryResult{
+        UserID:          record.UserID.String(),
+        DeclaredCountry: record.DeclaredCountry.String(),
+        UpdatedAt:       record.UpdatedAt.UTC(),
+    }
+}
diff --git a/user/internal/service/geosync/service_test.go b/user/internal/service/geosync/service_test.go
new file mode 100644
index 0000000..f2ea5ba
--- /dev/null
+++ b/user/internal/service/geosync/service_test.go
@@ -0,0 +1,299 @@
+package geosync
+
+import (
+    "context"
+    "errors"
+    "testing"
+    "time"
+
+    "galaxy/user/internal/domain/account"
+    "galaxy/user/internal/domain/common"
+    "galaxy/user/internal/ports"
+    "galaxy/user/internal/service/shared"
+
+    "github.com/stretchr/testify/require"
+)
+
+func TestSyncServiceExecuteUpdatesDeclaredCountryAndPublishesEvent(t *testing.T) {
+    t.Parallel()
+
+    createdAt := time.Unix(1_775_240_000, 0).UTC()
+    updatedAt := createdAt.Add(5 * time.Minute)
+    record := validAccountRecord(createdAt, createdAt)
+    store := newFakeAccountStore(record)
+    publisher := &recordingDeclaredCountryChangedPublisher{
+        publishHook: func(event ports.DeclaredCountryChangedEvent) error {
+            stored, err := store.GetByUserID(context.Background(), record.UserID)
+            require.NoError(t, err)
+            require.Equal(t, common.CountryCode("FR"), stored.DeclaredCountry)
+            require.Equal(t, updatedAt, stored.UpdatedAt)
+            require.Equal(t, common.Source("geo_profile_service"), event.Source)
+            return nil
+        },
+    }
+
+    service, err := NewSyncService(store, fixedClock{now: updatedAt}, publisher)
+    require.NoError(t, err)
+
+    result, err := service.Execute(context.Background(), SyncDeclaredCountryInput{
+        UserID:          record.UserID.String(),
+        DeclaredCountry: "FR",
+    })
+    require.NoError(t, err)
+    require.Equal(t, record.UserID.String(), result.UserID)
+    require.Equal(t, "FR", result.DeclaredCountry)
+    require.Equal(t, updatedAt, result.UpdatedAt)
+    require.Equal(t, 1, store.updateCalls)
+
+    stored, err := store.GetByUserID(context.Background(), record.UserID)
+    require.NoError(t, err)
+    require.Equal(t, record.Email, stored.Email)
+    require.Equal(t, record.RaceName, stored.RaceName)
+    require.Equal(t, record.PreferredLanguage, stored.PreferredLanguage)
+    require.Equal(t, record.TimeZone, stored.TimeZone)
+    require.Equal(t, common.CountryCode("FR"), stored.DeclaredCountry)
+    require.Equal(t, record.CreatedAt, stored.CreatedAt)
+    require.Equal(t, updatedAt, stored.UpdatedAt)
+
+    published := publisher.PublishedEvents()
+    require.Len(t, published, 1)
+    require.Equal(t, record.UserID, published[0].UserID)
+    require.Equal(t, common.CountryCode("FR"), published[0].DeclaredCountry)
+    require.Equal(t, updatedAt, published[0].UpdatedAt)
+    require.Equal(t, common.Source("geo_profile_service"), published[0].Source)
+}
+
+func TestSyncServiceExecuteSameCountryIsNoOp(t *testing.T) {
+    t.Parallel()
+
+    createdAt := time.Unix(1_775_240_000, 0).UTC()
+    record := validAccountRecord(createdAt, createdAt.Add(5*time.Minute))
+    store := newFakeAccountStore(record)
+    publisher := &recordingDeclaredCountryChangedPublisher{}
+
+    service, err := NewSyncService(store, fixedClock{now: createdAt.Add(time.Hour)}, publisher)
+    require.NoError(t, err)
+
+    result, err := service.Execute(context.Background(), SyncDeclaredCountryInput{
+        UserID:          record.UserID.String(),
+        DeclaredCountry: record.DeclaredCountry.String(),
+    })
+    require.NoError(t, err)
+    require.Equal(t, record.UserID.String(), result.UserID)
+    require.Equal(t, record.DeclaredCountry.String(), result.DeclaredCountry)
+    require.Equal(t, record.UpdatedAt, result.UpdatedAt)
+    require.Zero(t, store.updateCalls)
+    require.Empty(t, publisher.PublishedEvents())
+}
+
+func TestSyncServiceExecuteRejectsInvalidDeclaredCountry(t *testing.T) {
+    t.Parallel()
+
+    service, err := NewSyncService(
+        newFakeAccountStore(validAccountRecord(time.Unix(1_775_240_000, 0).UTC(), time.Unix(1_775_240_000, 0).UTC())),
+        fixedClock{now: time.Unix(1_775_240_000, 0).UTC()},
+        &recordingDeclaredCountryChangedPublisher{},
+    )
+    require.NoError(t, err)
+
+    tests := []struct {
+        name  string
+        value string
+    }{
+        {name: "alias country code", value: "UK"},
+        {name: "lowercase", value: "de"},
+        {name: "non-country region", value: "EU"},
+        {name: "wrong length", value: "DEU"},
+    }
+
+    for _, tt := range tests {
+        tt := tt
+        t.Run(tt.name, func(t *testing.T) {
+            t.Parallel()
+
+            _, err := service.Execute(context.Background(), SyncDeclaredCountryInput{
+                UserID:          "user-123",
+                DeclaredCountry: tt.value,
+            })
+            require.Error(t, err)
+            require.Equal(t, shared.ErrorCodeInvalidRequest, shared.CodeOf(err))
+            require.EqualError(t, err, "declared_country must be a valid ISO 3166-1 alpha-2 country code")
+        })
+    }
+}
+
+func TestSyncServiceExecuteUnknownUserReturnsNotFound(t *testing.T) {
+    t.Parallel()
+
+    service, err := NewSyncService(
+        newFakeAccountStore(),
+        fixedClock{now: time.Unix(1_775_240_000, 0).UTC()},
+        &recordingDeclaredCountryChangedPublisher{},
+    )
+    require.NoError(t, err)
+
+    _, err = service.Execute(context.Background(), SyncDeclaredCountryInput{
+        UserID:          "user-missing",
+        DeclaredCountry: "DE",
+    })
+    require.Error(t, err)
+    require.Equal(t, shared.ErrorCodeSubjectNotFound, shared.CodeOf(err))
+}
+
+func TestSyncServiceExecutePublisherFailureDoesNotRollbackCommit(t *testing.T) {
+    t.Parallel()
+
+    createdAt := time.Unix(1_775_240_000, 0).UTC()
+    updatedAt := createdAt.Add(time.Minute)
+    record := validAccountRecord(createdAt, createdAt)
+    store := newFakeAccountStore(record)
+    publisher := &recordingDeclaredCountryChangedPublisher{
+        err: errors.New("publisher unavailable"),
+    }
+
+    service, err := NewSyncService(store, fixedClock{now: updatedAt}, publisher)
+    require.NoError(t, err)
+
+    result, err := service.Execute(context.Background(), SyncDeclaredCountryInput{
+        UserID:          record.UserID.String(),
+        DeclaredCountry: "FR",
+    })
+    require.NoError(t, err)
+    require.Equal(t, "FR", result.DeclaredCountry)
+    require.Equal(t, updatedAt, result.UpdatedAt)
+
+    stored, err := store.GetByUserID(context.Background(), record.UserID)
+    require.NoError(t, err)
+    require.Equal(t, common.CountryCode("FR"), stored.DeclaredCountry)
+    require.Equal(t, updatedAt, stored.UpdatedAt)
+
+    published := publisher.PublishedEvents()
+    require.Len(t, published, 1)
+    require.Equal(t, common.CountryCode("FR"), published[0].DeclaredCountry)
+}
+
+type fakeAccountStore struct {
+    records     map[common.UserID]account.UserAccount
+    updateCalls int
+    updateErr   error
+}
+
+func newFakeAccountStore(records ...account.UserAccount) *fakeAccountStore {
+    byUserID := make(map[common.UserID]account.UserAccount, len(records))
+    for _, record := range records {
+        byUserID[record.UserID] = record
+    }
+
+    return &fakeAccountStore{records: byUserID}
+}
+
+func (store *fakeAccountStore) Create(context.Context, ports.CreateAccountInput) error {
+    return nil
+}
+
+func (store *fakeAccountStore) GetByUserID(_ context.Context, userID common.UserID) (account.UserAccount, error) {
+    record, ok := store.records[userID]
+    if !ok {
+        return account.UserAccount{}, ports.ErrNotFound
+    }
+
+    return record, nil
+}
+
+func (store *fakeAccountStore) GetByEmail(_ context.Context, email common.Email) (account.UserAccount, error) {
+    for _, record := range store.records {
+        if record.Email == email {
+            return record, nil
+        }
+    }
+
+    return account.UserAccount{}, ports.ErrNotFound
+}
+
+func (store *fakeAccountStore) GetByRaceName(_ context.Context, raceName common.RaceName) (account.UserAccount, error) {
+    for _, record := range store.records {
+        if record.RaceName == raceName {
+            return record, nil
+        }
+    }
+
+    return account.UserAccount{}, ports.ErrNotFound
+}
+
+func (store *fakeAccountStore) ExistsByUserID(_ context.Context, userID common.UserID) (bool, error) {
+    _, ok := store.records[userID]
+    return ok, nil
+}
+
+func (store *fakeAccountStore) RenameRaceName(context.Context, ports.RenameRaceNameInput) error {
+    return nil
+}
+
+func (store *fakeAccountStore) Update(_ context.Context, record account.UserAccount) error {
+    store.updateCalls++
+    if store.updateErr != nil {
+        return store.updateErr
+    }
+    if _, ok := store.records[record.UserID]; !ok {
+        return ports.ErrNotFound
+    }
+    store.records[record.UserID] = record
+    return nil
+}
+
+type recordingDeclaredCountryChangedPublisher struct {
+    err         error
+    publishHook func(event ports.DeclaredCountryChangedEvent) error
+    published   []ports.DeclaredCountryChangedEvent
+}
+
+func (publisher *recordingDeclaredCountryChangedPublisher) PublishDeclaredCountryChanged(
+    _ context.Context,
+    event ports.DeclaredCountryChangedEvent,
+) error {
+    if err := event.Validate(); err != nil {
+        return err
+    }
+
+    publisher.published = append(publisher.published, event)
+    if publisher.publishHook != nil {
+        if err := publisher.publishHook(event); err != nil {
+            return err
+        }
+    }
+
+    return publisher.err
+}
+
+func (publisher *recordingDeclaredCountryChangedPublisher) PublishedEvents() []ports.DeclaredCountryChangedEvent {
+    events := make([]ports.DeclaredCountryChangedEvent, len(publisher.published))
+    copy(events, publisher.published)
+    return events
+}
+
+type fixedClock struct {
+    now time.Time
+}
+
+func (clock fixedClock) Now() time.Time {
+    return clock.now
+}
+
+func validAccountRecord(createdAt time.Time, updatedAt time.Time) account.UserAccount {
+    return account.UserAccount{
+        UserID:            common.UserID("user-123"),
+        Email:             common.Email("pilot@example.com"),
+        RaceName:          common.RaceName("Pilot Nova"),
+        PreferredLanguage: common.LanguageTag("en"),
+        TimeZone:          common.TimeZoneName("Europe/Kaliningrad"),
+        DeclaredCountry:   common.CountryCode("DE"),
+        CreatedAt:         createdAt,
+        UpdatedAt:         updatedAt,
+    }
+}
+
+var (
+    _ ports.UserAccountStore                = (*fakeAccountStore)(nil)
+    _ ports.DeclaredCountryChangedPublisher = (*recordingDeclaredCountryChangedPublisher)(nil)
+    _ ports.Clock                           = fixedClock{}
+)
diff --git a/user/internal/service/lobbyeligibility/service.go b/user/internal/service/lobbyeligibility/service.go
new file mode 100644
index 0000000..26b0cb6
--- /dev/null
+++ b/user/internal/service/lobbyeligibility/service.go
@@ -0,0 +1,397 @@
+// Package lobbyeligibility implements the trusted lobby-facing eligibility
+// snapshot read owned by User Service.
+package lobbyeligibility
+
+import (
+    "context"
+    "errors"
+    "fmt"
+    "time"
+
+    "galaxy/user/internal/domain/common"
+    "galaxy/user/internal/domain/entitlement"
+    "galaxy/user/internal/domain/policy"
+    "galaxy/user/internal/ports"
+    "galaxy/user/internal/service/shared"
+)
+
+// limitCatalogEntry stores one frozen default quota for free and paid
+// entitlement states.
+type limitCatalogEntry struct {
+    code        policy.LimitCode
+    freeValue   int
+    paidValue   int
+    freeEnabled bool
+}
+
+// limitCatalog stores the frozen lobby-facing effective limit defaults used
+// to materialize numeric quotas from the current entitlement state.
+var limitCatalog = []limitCatalogEntry{
+    {
+        code:      policy.LimitCodeMaxOwnedPrivateGames,
+        paidValue: 3,
+    },
+    {
+        code:        policy.LimitCodeMaxPendingPublicApplications,
+        freeValue:   3,
+        paidValue:   10,
+        freeEnabled: true,
+    },
+    {
+        code:        policy.LimitCodeMaxActiveGameMemberships,
+        freeValue:   3,
+        paidValue:   10,
+        freeEnabled: true,
+    },
+}
+
+// ActorRefView stores transport-ready audit actor metadata.
+type ActorRefView struct {
+    // Type stores the machine-readable actor type.
+    Type string `json:"type"`
+
+    // ID stores the optional stable actor identifier.
+    ID string `json:"id,omitempty"`
+}
+
+// EntitlementSnapshotView stores the transport-ready current entitlement
+// snapshot used by lobby reads.
+type EntitlementSnapshotView struct {
+    // PlanCode stores the effective entitlement plan code.
+    PlanCode string `json:"plan_code"`
+
+    // IsPaid reports whether the effective plan is paid.
+    IsPaid bool `json:"is_paid"`
+
+    // Source stores the machine-readable mutation source.
+    Source string `json:"source"`
+
+    // Actor stores the audit actor metadata attached to the snapshot.
+    Actor ActorRefView `json:"actor"`
+
+    // ReasonCode stores the machine-readable reason attached to the snapshot.
+    ReasonCode string `json:"reason_code"`
+
+    // StartsAt stores when the effective state started.
+    StartsAt time.Time `json:"starts_at"`
+
+    // EndsAt stores the optional finite effective expiry.
+    EndsAt *time.Time `json:"ends_at,omitempty"`
+
+    // UpdatedAt stores when the snapshot was last recomputed.
+    UpdatedAt time.Time `json:"updated_at"`
+}
+
+// ActiveSanctionView stores one transport-ready active sanction that matters
+// to lobby flows.
+type ActiveSanctionView struct {
+    // SanctionCode stores the active sanction code.
+    SanctionCode string `json:"sanction_code"`
+
+    // Scope stores the machine-readable sanction scope.
+    Scope string `json:"scope"`
+
+    // ReasonCode stores the machine-readable sanction reason.
+    ReasonCode string `json:"reason_code"`
+
+    // Actor stores the audit actor metadata attached to the sanction.
+    Actor ActorRefView `json:"actor"`
+
+    // AppliedAt stores when the sanction became active.
+    AppliedAt time.Time `json:"applied_at"`
+
+    // ExpiresAt stores the optional planned sanction expiry.
+    ExpiresAt *time.Time `json:"expires_at,omitempty"`
+}
+
+// EffectiveLimitView stores one materialized effective lobby quota.
+type EffectiveLimitView struct {
+    // LimitCode stores the machine-readable quota identifier.
+    LimitCode string `json:"limit_code"`
+
+    // Value stores the effective numeric quota after defaults and user
+    // overrides are applied.
+    Value int `json:"value"`
+}
+
+// EligibilityMarkersView stores the derived booleans consumed by Game Lobby.
+type EligibilityMarkersView struct {
+    // CanLogin reports whether the user may currently log in.
+    CanLogin bool `json:"can_login"`
+
+    // CanCreatePrivateGame reports whether the user may currently create a
+    // private game.
+    CanCreatePrivateGame bool `json:"can_create_private_game"`
+
+    // CanManagePrivateGame reports whether the user may currently manage a
+    // private game.
+    CanManagePrivateGame bool `json:"can_manage_private_game"`
+
+    // CanJoinGame reports whether the user may currently join a game.
+    CanJoinGame bool `json:"can_join_game"`
+
+    // CanUpdateProfile reports whether the user may currently update self-
+    // service profile and settings fields.
+    CanUpdateProfile bool `json:"can_update_profile"`
+}
+
+// GetUserEligibilityInput stores one lobby-facing eligibility read request.
+type GetUserEligibilityInput struct {
+    // UserID identifies the regular user whose effective lobby state is needed.
+    UserID string
+}
+
+// GetUserEligibilityResult stores one lobby-facing eligibility snapshot.
+type GetUserEligibilityResult struct {
+    // Exists reports whether UserID currently identifies a stored user.
+    Exists bool `json:"exists"`
+
+    // UserID echoes the requested stable user identifier.
+    UserID string `json:"user_id"`
+
+    // Entitlement stores the current effective entitlement snapshot for known
+    // users.
+    Entitlement *EntitlementSnapshotView `json:"entitlement,omitempty"`
+
+    // ActiveSanctions stores only the currently active sanctions relevant to
+    // lobby decisions.
+    ActiveSanctions []ActiveSanctionView `json:"active_sanctions"`
+
+    // EffectiveLimits stores the materialized numeric quotas used by Game
+    // Lobby.
+    EffectiveLimits []EffectiveLimitView `json:"effective_limits"`
+
+    // Markers stores the derived decision booleans consumed by Game Lobby.
+    Markers EligibilityMarkersView `json:"markers"`
+}
+
+type entitlementReader interface {
+    GetByUserID(ctx context.Context, userID common.UserID) (entitlement.CurrentSnapshot, error)
+}
+
+// SnapshotReader executes the trusted lobby-facing eligibility snapshot read.
+type SnapshotReader struct {
+    accounts     ports.UserAccountStore
+    entitlements entitlementReader
+    sanctions    ports.SanctionStore
+    limits       ports.LimitStore
+    clock        ports.Clock
+}
+
+// NewSnapshotReader constructs one lobby-facing eligibility snapshot reader.
+func NewSnapshotReader(
+    accounts ports.UserAccountStore,
+    entitlements entitlementReader,
+    sanctions ports.SanctionStore,
+    limits ports.LimitStore,
+    clock ports.Clock,
+) (*SnapshotReader, error) {
+    switch {
+    case accounts == nil:
+        return nil, fmt.Errorf("lobby eligibility snapshot reader: user account store must not be nil")
+    case entitlements == nil:
+        return nil, fmt.Errorf("lobby eligibility snapshot reader: entitlement reader must not be nil")
+    case sanctions == nil:
+        return nil, fmt.Errorf("lobby eligibility snapshot reader: sanction store must not be nil")
+    case limits == nil:
+        return nil, fmt.Errorf("lobby eligibility snapshot reader: limit store must not be nil")
+    case clock == nil:
+        return nil, fmt.Errorf("lobby eligibility snapshot reader: clock must not be nil")
+    default:
+        return &SnapshotReader{
+            accounts:     accounts,
+            entitlements: entitlements,
+            sanctions:    sanctions,
+            limits:       limits,
+            clock:        clock,
+        }, nil
+    }
+}
+
+// Execute returns one read-optimized eligibility snapshot for Game Lobby.
+func (service *SnapshotReader) Execute(
+    ctx context.Context,
+    input GetUserEligibilityInput,
+) (GetUserEligibilityResult, error) {
+    if ctx == nil {
+        return GetUserEligibilityResult{}, shared.InvalidRequest("context must not be nil")
+    }
+
+    userID, err := shared.ParseUserID(input.UserID)
+    if err != nil {
+        return GetUserEligibilityResult{}, err
+    }
+
+    result := GetUserEligibilityResult{
+        UserID:          userID.String(),
+        ActiveSanctions: []ActiveSanctionView{},
+        EffectiveLimits: []EffectiveLimitView{},
+    }
+
+    exists, err := service.accounts.ExistsByUserID(ctx, userID)
+    if err != nil {
+        return GetUserEligibilityResult{}, shared.ServiceUnavailable(err)
+    }
+    if !exists {
+        return result, nil
+    }
+
+    now := service.clock.Now().UTC()
+
+    entitlementSnapshot, err := service.entitlements.GetByUserID(ctx, userID)
+    switch {
+    case err == nil:
+    case errors.Is(err, ports.ErrNotFound):
+        return GetUserEligibilityResult{}, shared.InternalError(fmt.Errorf("user %q is missing entitlement snapshot", userID))
+    default:
+        return GetUserEligibilityResult{}, shared.ServiceUnavailable(err)
+    }
+
+    sanctionRecords, err := service.sanctions.ListByUserID(ctx, userID)
+    if err != nil {
+        return GetUserEligibilityResult{}, shared.ServiceUnavailable(err)
+    }
+    activeSanctions, err := policy.ActiveSanctionsAt(sanctionRecords, now)
+    if err != nil {
+        return GetUserEligibilityResult{}, shared.InternalError(fmt.Errorf("evaluate active sanctions for user %q: %w", userID, err))
+    }
+
+    limitRecords, err := service.limits.ListByUserID(ctx, userID)
+    if err != nil {
+        return GetUserEligibilityResult{}, shared.ServiceUnavailable(err)
+    }
+    activeLimits, err := policy.ActiveLimitsAt(limitRecords, now)
+    if err != nil {
+        return GetUserEligibilityResult{}, shared.InternalError(fmt.Errorf("evaluate active limits for user %q: %w", userID, err))
+    }
+
+    result.Exists = true
+    result.Entitlement = entitlementSnapshotView(entitlementSnapshot)
+    result.ActiveSanctions =
lobbyRelevantSanctionViews(activeSanctions) + result.EffectiveLimits = materializeEffectiveLimits(entitlementSnapshot.IsPaid, activeLimits) + result.Markers = deriveEligibilityMarkers(entitlementSnapshot.IsPaid, activeSanctions) + + return result, nil +} + +func entitlementSnapshotView(snapshot entitlement.CurrentSnapshot) *EntitlementSnapshotView { + return &EntitlementSnapshotView{ + PlanCode: string(snapshot.PlanCode), + IsPaid: snapshot.IsPaid, + Source: snapshot.Source.String(), + Actor: actorRefView(snapshot.Actor), + ReasonCode: snapshot.ReasonCode.String(), + StartsAt: snapshot.StartsAt.UTC(), + EndsAt: cloneOptionalTime(snapshot.EndsAt), + UpdatedAt: snapshot.UpdatedAt.UTC(), + } +} + +func lobbyRelevantSanctionViews(records []policy.SanctionRecord) []ActiveSanctionView { + views := make([]ActiveSanctionView, 0, len(records)) + + for _, record := range records { + if !isLobbyRelevantSanction(record.SanctionCode) { + continue + } + + views = append(views, ActiveSanctionView{ + SanctionCode: string(record.SanctionCode), + Scope: record.Scope.String(), + ReasonCode: record.ReasonCode.String(), + Actor: actorRefView(record.Actor), + AppliedAt: record.AppliedAt.UTC(), + ExpiresAt: cloneOptionalTime(record.ExpiresAt), + }) + } + + return views +} + +func materializeEffectiveLimits(isPaid bool, overrides []policy.LimitRecord) []EffectiveLimitView { + overrideValues := make(map[policy.LimitCode]int, len(overrides)) + for _, record := range overrides { + overrideValues[record.LimitCode] = record.Value + } + + limits := make([]EffectiveLimitView, 0, len(limitCatalog)) + for _, entry := range limitCatalog { + if !isPaid && !entry.freeEnabled { + continue + } + + value := entry.freeValue + if isPaid { + value = entry.paidValue + } + if override, ok := overrideValues[entry.code]; ok { + value = override + } + + limits = append(limits, EffectiveLimitView{ + LimitCode: string(entry.code), + Value: value, + }) + } + + return limits +} + +func deriveEligibilityMarkers( + 
isPaid bool, + activeSanctions []policy.SanctionRecord, +) EligibilityMarkersView { + loginBlocked := hasActiveSanction(activeSanctions, policy.SanctionCodeLoginBlock) + createBlocked := hasActiveSanction(activeSanctions, policy.SanctionCodePrivateGameCreateBlock) + manageBlocked := hasActiveSanction(activeSanctions, policy.SanctionCodePrivateGameManageBlock) + joinBlocked := hasActiveSanction(activeSanctions, policy.SanctionCodeGameJoinBlock) + profileBlocked := hasActiveSanction(activeSanctions, policy.SanctionCodeProfileUpdateBlock) + + canLogin := !loginBlocked + + return EligibilityMarkersView{ + CanLogin: canLogin, + CanCreatePrivateGame: canLogin && isPaid && !createBlocked, + CanManagePrivateGame: canLogin && isPaid && !manageBlocked, + CanJoinGame: canLogin && !joinBlocked, + CanUpdateProfile: canLogin && !profileBlocked, + } +} + +func hasActiveSanction(records []policy.SanctionRecord, code policy.SanctionCode) bool { + for _, record := range records { + if record.SanctionCode == code { + return true + } + } + + return false +} + +func isLobbyRelevantSanction(code policy.SanctionCode) bool { + switch code { + case policy.SanctionCodeLoginBlock, + policy.SanctionCodePrivateGameCreateBlock, + policy.SanctionCodePrivateGameManageBlock, + policy.SanctionCodeGameJoinBlock: + return true + default: + return false + } +} + +func actorRefView(actor common.ActorRef) ActorRefView { + return ActorRefView{ + Type: actor.Type.String(), + ID: actor.ID.String(), + } +} + +func cloneOptionalTime(value *time.Time) *time.Time { + if value == nil { + return nil + } + + cloned := value.UTC() + return &cloned +} diff --git a/user/internal/service/lobbyeligibility/service_test.go b/user/internal/service/lobbyeligibility/service_test.go new file mode 100644 index 0000000..3d98393 --- /dev/null +++ b/user/internal/service/lobbyeligibility/service_test.go @@ -0,0 +1,524 @@ +package lobbyeligibility + +import ( + "context" + "testing" + "time" + + 
"galaxy/user/internal/adapters/redis/userstore" + "galaxy/user/internal/domain/account" + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + "galaxy/user/internal/domain/policy" + "galaxy/user/internal/ports" + "galaxy/user/internal/service/entitlementsvc" + + "github.com/alicebob/miniredis/v2" + "github.com/stretchr/testify/require" +) + +func TestSnapshotReaderExecuteReturnsStableNotFound(t *testing.T) { + t.Parallel() + + service, err := NewSnapshotReader( + fakeAccountStore{existsByUserID: map[common.UserID]bool{}}, + fakeEntitlementReader{}, + fakeSanctionStore{}, + fakeLimitStore{}, + fixedClock{now: time.Unix(1_775_240_500, 0).UTC()}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), GetUserEligibilityInput{UserID: " user-missing "}) + require.NoError(t, err) + require.False(t, result.Exists) + require.Equal(t, "user-missing", result.UserID) + require.Nil(t, result.Entitlement) + require.Empty(t, result.ActiveSanctions) + require.Empty(t, result.EffectiveLimits) + require.Equal(t, EligibilityMarkersView{}, result.Markers) +} + +func TestSnapshotReaderExecuteBuildsPaidSnapshotAndDerivedState(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + userID := common.UserID("user-123") + + service, err := NewSnapshotReader( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + fakeEntitlementReader{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + userID: paidEntitlementSnapshot(userID, now.Add(-24*time.Hour), now.Add(24*time.Hour)), + }, + }, + fakeSanctionStore{ + byUserID: map[common.UserID][]policy.SanctionRecord{ + userID: { + activeSanction(userID, policy.SanctionCodePrivateGameManageBlock, "lobby", now.Add(-time.Hour)), + activeSanction(userID, policy.SanctionCodeProfileUpdateBlock, "profile", now.Add(-30*time.Minute)), + expiredSanction(userID, policy.SanctionCodeGameJoinBlock, "lobby", now.Add(-2*time.Hour)), + }, + }, + }, + 
fakeLimitStore{ + byUserID: map[common.UserID][]policy.LimitRecord{ + userID: { + activeLimit(userID, policy.LimitCodeMaxPendingPrivateInvitesSent, 17, now.Add(-time.Hour)), + activeLimit(userID, policy.LimitCodeMaxActivePrivateGames, 2, now.Add(-2*time.Hour)), + }, + }, + }, + fixedClock{now: now}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), GetUserEligibilityInput{UserID: userID.String()}) + require.NoError(t, err) + require.True(t, result.Exists) + require.NotNil(t, result.Entitlement) + require.Equal(t, "paid_monthly", result.Entitlement.PlanCode) + require.True(t, result.Entitlement.IsPaid) + + require.Len(t, result.ActiveSanctions, 1) + require.Equal(t, "private_game_manage_block", result.ActiveSanctions[0].SanctionCode) + + require.Equal(t, EligibilityMarkersView{ + CanLogin: true, + CanCreatePrivateGame: true, + CanManagePrivateGame: false, + CanJoinGame: true, + CanUpdateProfile: false, + }, result.Markers) + + require.Equal(t, []EffectiveLimitView{ + {LimitCode: "max_owned_private_games", Value: 3}, + {LimitCode: "max_pending_public_applications", Value: 10}, + {LimitCode: "max_active_game_memberships", Value: 10}, + }, result.EffectiveLimits) +} + +func TestSnapshotReaderExecuteDeniesUnpaidAndLoginBlockedUsers(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + userID := common.UserID("user-123") + + tests := []struct { + name string + snapshot entitlement.CurrentSnapshot + sanctions []policy.SanctionRecord + limits []policy.LimitRecord + wantSanctions []string + wantMarkers EligibilityMarkersView + wantLimits []EffectiveLimitView + }{ + { + name: "unpaid defaults", + snapshot: freeEntitlementSnapshot(userID, now.Add(-24*time.Hour)), + limits: []policy.LimitRecord{activeLimit(userID, policy.LimitCodeMaxOwnedPrivateGames, 9, now.Add(-time.Hour))}, + wantSanctions: []string{}, + wantMarkers: EligibilityMarkersView{ + CanLogin: true, + CanCreatePrivateGame: false, + CanManagePrivateGame: 
false, + CanJoinGame: true, + CanUpdateProfile: true, + }, + wantLimits: []EffectiveLimitView{ + {LimitCode: "max_pending_public_applications", Value: 3}, + {LimitCode: "max_active_game_memberships", Value: 3}, + }, + }, + { + name: "login block denies all markers", + snapshot: paidEntitlementSnapshot(userID, now.Add(-24*time.Hour), now.Add(24*time.Hour)), + sanctions: []policy.SanctionRecord{ + activeSanction(userID, policy.SanctionCodeLoginBlock, "auth", now.Add(-time.Hour)), + activeSanction(userID, policy.SanctionCodeGameJoinBlock, "lobby", now.Add(-30*time.Minute)), + }, + wantSanctions: []string{"game_join_block", "login_block"}, + wantMarkers: EligibilityMarkersView{ + CanLogin: false, + CanCreatePrivateGame: false, + CanManagePrivateGame: false, + CanJoinGame: false, + CanUpdateProfile: false, + }, + wantLimits: []EffectiveLimitView{ + {LimitCode: "max_owned_private_games", Value: 3}, + {LimitCode: "max_pending_public_applications", Value: 10}, + {LimitCode: "max_active_game_memberships", Value: 10}, + }, + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + service, err := NewSnapshotReader( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + fakeEntitlementReader{byUserID: map[common.UserID]entitlement.CurrentSnapshot{userID: tt.snapshot}}, + fakeSanctionStore{byUserID: map[common.UserID][]policy.SanctionRecord{userID: tt.sanctions}}, + fakeLimitStore{byUserID: map[common.UserID][]policy.LimitRecord{userID: tt.limits}}, + fixedClock{now: now}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), GetUserEligibilityInput{UserID: userID.String()}) + require.NoError(t, err) + require.Equal(t, tt.wantMarkers, result.Markers) + require.Equal(t, tt.wantLimits, result.EffectiveLimits) + + gotSanctions := make([]string, 0, len(result.ActiveSanctions)) + for _, sanction := range result.ActiveSanctions { + gotSanctions = append(gotSanctions, 
sanction.SanctionCode) + } + require.Equal(t, tt.wantSanctions, gotSanctions) + }) + } +} + +func TestSnapshotReaderExecuteRepairsExpiredPaidSnapshotWithStore(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + store := newRedisStore(t) + userID := common.UserID("user-123") + accountRecord := validAccountRecord() + + require.NoError(t, store.Accounts().Create(context.Background(), ports.CreateAccountInput{ + Account: accountRecord, + Reservation: account.RaceNameReservation{ + CanonicalKey: account.RaceNameCanonicalKey("pilot nova"), + UserID: userID, + RaceName: accountRecord.RaceName, + ReservedAt: accountRecord.UpdatedAt, + }, + })) + + expiredEndsAt := now.Add(-time.Minute) + require.NoError(t, store.EntitlementSnapshots().Put(context.Background(), entitlement.CurrentSnapshot{ + UserID: userID, + PlanCode: entitlement.PlanCodePaidMonthly, + IsPaid: true, + StartsAt: now.Add(-30 * 24 * time.Hour), + EndsAt: timePointer(expiredEndsAt), + Source: common.Source("billing"), + Actor: common.ActorRef{Type: common.ActorType("billing"), ID: common.ActorID("invoice-1")}, + ReasonCode: common.ReasonCode("renewal"), + UpdatedAt: now.Add(-2 * time.Hour), + })) + + entitlementReader, err := entitlementsvc.NewReader( + store.EntitlementSnapshots(), + store.EntitlementLifecycle(), + fixedClock{now: now}, + fixedIDGenerator{entitlementRecordID: entitlement.EntitlementRecordID("entitlement-expiry-repair")}, + ) + require.NoError(t, err) + + service, err := NewSnapshotReader( + store.Accounts(), + entitlementReader, + store.Sanctions(), + store.Limits(), + fixedClock{now: now}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), GetUserEligibilityInput{UserID: userID.String()}) + require.NoError(t, err) + require.True(t, result.Exists) + require.NotNil(t, result.Entitlement) + require.Equal(t, "free", result.Entitlement.PlanCode) + require.False(t, result.Entitlement.IsPaid) + require.Equal(t, expiredEndsAt, 
result.Entitlement.StartsAt) + require.Equal(t, []EffectiveLimitView{ + {LimitCode: "max_pending_public_applications", Value: 3}, + {LimitCode: "max_active_game_memberships", Value: 3}, + }, result.EffectiveLimits) + + storedSnapshot, err := store.EntitlementSnapshots().GetByUserID(context.Background(), userID) + require.NoError(t, err) + require.Equal(t, entitlement.PlanCodeFree, storedSnapshot.PlanCode) + require.False(t, storedSnapshot.IsPaid) +} + +type fakeAccountStore struct { + existsByUserID map[common.UserID]bool + err error +} + +func (store fakeAccountStore) Create(context.Context, ports.CreateAccountInput) error { + return nil +} + +func (store fakeAccountStore) GetByUserID(context.Context, common.UserID) (account.UserAccount, error) { + return account.UserAccount{}, ports.ErrNotFound +} + +func (store fakeAccountStore) GetByEmail(context.Context, common.Email) (account.UserAccount, error) { + return account.UserAccount{}, ports.ErrNotFound +} + +func (store fakeAccountStore) GetByRaceName(context.Context, common.RaceName) (account.UserAccount, error) { + return account.UserAccount{}, ports.ErrNotFound +} + +func (store fakeAccountStore) ExistsByUserID(_ context.Context, userID common.UserID) (bool, error) { + if store.err != nil { + return false, store.err + } + + return store.existsByUserID[userID], nil +} + +func (store fakeAccountStore) RenameRaceName(context.Context, ports.RenameRaceNameInput) error { + return nil +} + +func (store fakeAccountStore) Update(context.Context, account.UserAccount) error { + return nil +} + +type fakeEntitlementReader struct { + byUserID map[common.UserID]entitlement.CurrentSnapshot + err error +} + +func (reader fakeEntitlementReader) GetByUserID(_ context.Context, userID common.UserID) (entitlement.CurrentSnapshot, error) { + if reader.err != nil { + return entitlement.CurrentSnapshot{}, reader.err + } + + record, ok := reader.byUserID[userID] + if !ok { + return entitlement.CurrentSnapshot{}, ports.ErrNotFound + } + 
+ return record, nil +} + +type fakeSanctionStore struct { + byUserID map[common.UserID][]policy.SanctionRecord + err error +} + +func (store fakeSanctionStore) Create(context.Context, policy.SanctionRecord) error { + return nil +} + +func (store fakeSanctionStore) GetByRecordID(context.Context, policy.SanctionRecordID) (policy.SanctionRecord, error) { + return policy.SanctionRecord{}, ports.ErrNotFound +} + +func (store fakeSanctionStore) ListByUserID(_ context.Context, userID common.UserID) ([]policy.SanctionRecord, error) { + if store.err != nil { + return nil, store.err + } + + records := store.byUserID[userID] + cloned := make([]policy.SanctionRecord, len(records)) + copy(cloned, records) + return cloned, nil +} + +func (store fakeSanctionStore) Update(context.Context, policy.SanctionRecord) error { + return nil +} + +type fakeLimitStore struct { + byUserID map[common.UserID][]policy.LimitRecord + err error +} + +func (store fakeLimitStore) Create(context.Context, policy.LimitRecord) error { + return nil +} + +func (store fakeLimitStore) GetByRecordID(context.Context, policy.LimitRecordID) (policy.LimitRecord, error) { + return policy.LimitRecord{}, ports.ErrNotFound +} + +func (store fakeLimitStore) ListByUserID(_ context.Context, userID common.UserID) ([]policy.LimitRecord, error) { + if store.err != nil { + return nil, store.err + } + + records := store.byUserID[userID] + cloned := make([]policy.LimitRecord, len(records)) + copy(cloned, records) + return cloned, nil +} + +func (store fakeLimitStore) Update(context.Context, policy.LimitRecord) error { + return nil +} + +type fixedClock struct { + now time.Time +} + +func (clock fixedClock) Now() time.Time { + return clock.now +} + +type fixedIDGenerator struct { + entitlementRecordID entitlement.EntitlementRecordID +} + +func (generator fixedIDGenerator) NewUserID() (common.UserID, error) { + return "", nil +} + +func (generator fixedIDGenerator) NewInitialRaceName() (common.RaceName, error) { + return "", 
nil +} + +func (generator fixedIDGenerator) NewEntitlementRecordID() (entitlement.EntitlementRecordID, error) { + return generator.entitlementRecordID, nil +} + +func (generator fixedIDGenerator) NewSanctionRecordID() (policy.SanctionRecordID, error) { + return "", nil +} + +func (generator fixedIDGenerator) NewLimitRecordID() (policy.LimitRecordID, error) { + return "", nil +} + +func activeSanction( + userID common.UserID, + code policy.SanctionCode, + scope string, + appliedAt time.Time, +) policy.SanctionRecord { + return policy.SanctionRecord{ + RecordID: policy.SanctionRecordID("sanction-" + string(code)), + UserID: userID, + SanctionCode: code, + Scope: common.Scope(scope), + ReasonCode: common.ReasonCode("manual_block"), + Actor: common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}, + AppliedAt: appliedAt.UTC(), + } +} + +func expiredSanction( + userID common.UserID, + code policy.SanctionCode, + scope string, + appliedAt time.Time, +) policy.SanctionRecord { + record := activeSanction(userID, code, scope, appliedAt) + expiresAt := appliedAt.Add(30 * time.Minute) + record.ExpiresAt = &expiresAt + return record +} + +func activeLimit( + userID common.UserID, + code policy.LimitCode, + value int, + appliedAt time.Time, +) policy.LimitRecord { + return policy.LimitRecord{ + RecordID: policy.LimitRecordID("limit-" + string(code)), + UserID: userID, + LimitCode: code, + Value: value, + ReasonCode: common.ReasonCode("manual_override"), + Actor: common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}, + AppliedAt: appliedAt.UTC(), + } +} + +func removedLimit( + userID common.UserID, + code policy.LimitCode, + value int, + appliedAt time.Time, +) policy.LimitRecord { + record := activeLimit(userID, code, value, appliedAt) + removedAt := appliedAt.Add(15 * time.Minute) + record.RemovedAt = &removedAt + record.RemovedBy = common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-2")} + 
record.RemovedReasonCode = common.ReasonCode("manual_remove") + return record +} + +func paidEntitlementSnapshot( + userID common.UserID, + startsAt time.Time, + endsAt time.Time, +) entitlement.CurrentSnapshot { + return entitlement.CurrentSnapshot{ + UserID: userID, + PlanCode: entitlement.PlanCodePaidMonthly, + IsPaid: true, + StartsAt: startsAt.UTC(), + EndsAt: timePointer(endsAt), + Source: common.Source("billing"), + Actor: common.ActorRef{Type: common.ActorType("billing"), ID: common.ActorID("invoice-1")}, + ReasonCode: common.ReasonCode("renewal"), + UpdatedAt: startsAt.UTC(), + } +} + +func freeEntitlementSnapshot(userID common.UserID, startsAt time.Time) entitlement.CurrentSnapshot { + return entitlement.CurrentSnapshot{ + UserID: userID, + PlanCode: entitlement.PlanCodeFree, + IsPaid: false, + StartsAt: startsAt.UTC(), + Source: common.Source("auth_registration"), + Actor: common.ActorRef{Type: common.ActorType("service"), ID: common.ActorID("user-service")}, + ReasonCode: common.ReasonCode("initial_free_entitlement"), + UpdatedAt: startsAt.UTC(), + } +} + +func validAccountRecord() account.UserAccount { + createdAt := time.Unix(1_775_240_000, 0).UTC() + return account.UserAccount{ + UserID: common.UserID("user-123"), + Email: common.Email("pilot@example.com"), + RaceName: common.RaceName("Pilot Nova"), + PreferredLanguage: common.LanguageTag("en"), + TimeZone: common.TimeZoneName("Europe/Kaliningrad"), + CreatedAt: createdAt, + UpdatedAt: createdAt, + } +} + +func newRedisStore(t *testing.T) *userstore.Store { + t.Helper() + + server := miniredis.RunT(t) + store, err := userstore.New(userstore.Config{ + Addr: server.Addr(), + DB: 0, + KeyspacePrefix: "user:test:", + OperationTimeout: 250 * time.Millisecond, + }) + require.NoError(t, err) + t.Cleanup(func() { + _ = store.Close() + }) + + return store +} + +func timePointer(value time.Time) *time.Time { + utcValue := value.UTC() + return &utcValue +} + +var _ ports.UserAccountStore = fakeAccountStore{} 
+var _ ports.SanctionStore = fakeSanctionStore{} +var _ ports.LimitStore = fakeLimitStore{} +var _ ports.Clock = fixedClock{} +var _ ports.IDGenerator = fixedIDGenerator{} diff --git a/user/internal/service/policysvc/observability_test.go b/user/internal/service/policysvc/observability_test.go new file mode 100644 index 0000000..1769cda --- /dev/null +++ b/user/internal/service/policysvc/observability_test.go @@ -0,0 +1,178 @@ +package policysvc + +import ( + "context" + "testing" + "time" + + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/policy" + "galaxy/user/internal/ports" + + "github.com/stretchr/testify/require" +) + +func TestApplySanctionServiceExecutePublishesEvent(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + sanctionStore := newFakeSanctionStore() + limitStore := newFakeLimitStore() + publisher := &recordingPolicyPublisher{} + + service, err := NewApplySanctionServiceWithObservability( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + sanctionStore, + limitStore, + &fakePolicyLifecycleStore{sanctions: sanctionStore, limits: limitStore}, + fixedClock{now: now}, + fixedIDGenerator{sanctionRecordID: policy.SanctionRecordID("sanction-1")}, + nil, + nil, + publisher, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), ApplySanctionInput{ + UserID: userID.String(), + SanctionCode: string(policy.SanctionCodeLoginBlock), + Scope: "auth", + ReasonCode: "policy_blocked", + Actor: ActorInput{Type: "admin", ID: "admin-1"}, + AppliedAt: now.Add(-time.Minute).Format(time.RFC3339Nano), + ExpiresAt: now.Add(time.Hour).Format(time.RFC3339Nano), + }) + require.NoError(t, err) + require.Len(t, publisher.sanctionEvents, 1) + require.Equal(t, ports.SanctionChangedOperationApplied, publisher.sanctionEvents[0].Operation) + require.Equal(t, common.Source("admin_internal_api"), publisher.sanctionEvents[0].Source) +} + +func 
TestRemoveSanctionServiceExecuteMissingDoesNotPublishEvent(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + sanctionStore := newFakeSanctionStore() + limitStore := newFakeLimitStore() + publisher := &recordingPolicyPublisher{} + + service, err := NewRemoveSanctionServiceWithObservability( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + sanctionStore, + limitStore, + &fakePolicyLifecycleStore{sanctions: sanctionStore, limits: limitStore}, + fixedClock{now: now}, + fixedIDGenerator{}, + nil, + nil, + publisher, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), RemoveSanctionInput{ + UserID: userID.String(), + SanctionCode: string(policy.SanctionCodeLoginBlock), + ReasonCode: "manual_remove", + Actor: ActorInput{Type: "admin", ID: "admin-1"}, + }) + require.NoError(t, err) + require.Empty(t, publisher.sanctionEvents) +} + +func TestSetLimitServiceExecutePublishesEvent(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + sanctionStore := newFakeSanctionStore() + limitStore := newFakeLimitStore() + publisher := &recordingPolicyPublisher{} + + service, err := NewSetLimitServiceWithObservability( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + sanctionStore, + limitStore, + &fakePolicyLifecycleStore{sanctions: sanctionStore, limits: limitStore}, + fixedClock{now: now}, + fixedIDGenerator{limitRecordID: policy.LimitRecordID("limit-1")}, + nil, + nil, + publisher, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), SetLimitInput{ + UserID: userID.String(), + LimitCode: string(policy.LimitCodeMaxOwnedPrivateGames), + Value: 5, + ReasonCode: "manual_override", + Actor: ActorInput{Type: "admin", ID: "admin-1"}, + AppliedAt: now.Add(-time.Minute).Format(time.RFC3339Nano), + ExpiresAt: now.Add(time.Hour).Format(time.RFC3339Nano), + }) + 
require.NoError(t, err) + require.Len(t, publisher.limitEvents, 1) + require.Equal(t, ports.LimitChangedOperationSet, publisher.limitEvents[0].Operation) + require.NotNil(t, publisher.limitEvents[0].Value) + require.Equal(t, 5, *publisher.limitEvents[0].Value) +} + +func TestRemoveLimitServiceExecuteMissingDoesNotPublishEvent(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + sanctionStore := newFakeSanctionStore() + limitStore := newFakeLimitStore() + publisher := &recordingPolicyPublisher{} + + service, err := NewRemoveLimitServiceWithObservability( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + sanctionStore, + limitStore, + &fakePolicyLifecycleStore{sanctions: sanctionStore, limits: limitStore}, + fixedClock{now: now}, + fixedIDGenerator{}, + nil, + nil, + publisher, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), RemoveLimitInput{ + UserID: userID.String(), + LimitCode: string(policy.LimitCodeMaxOwnedPrivateGames), + ReasonCode: "manual_remove", + Actor: ActorInput{Type: "admin", ID: "admin-1"}, + }) + require.NoError(t, err) + require.Empty(t, publisher.limitEvents) +} + +type recordingPolicyPublisher struct { + sanctionEvents []ports.SanctionChangedEvent + limitEvents []ports.LimitChangedEvent +} + +func (publisher *recordingPolicyPublisher) PublishSanctionChanged(_ context.Context, event ports.SanctionChangedEvent) error { + if err := event.Validate(); err != nil { + return err + } + publisher.sanctionEvents = append(publisher.sanctionEvents, event) + return nil +} + +func (publisher *recordingPolicyPublisher) PublishLimitChanged(_ context.Context, event ports.LimitChangedEvent) error { + if err := event.Validate(); err != nil { + return err + } + publisher.limitEvents = append(publisher.limitEvents, event) + return nil +} + +var ( + _ ports.SanctionChangedPublisher = (*recordingPolicyPublisher)(nil) + _ ports.LimitChangedPublisher 
= (*recordingPolicyPublisher)(nil) +) diff --git a/user/internal/service/policysvc/service.go b/user/internal/service/policysvc/service.go new file mode 100644 index 0000000..f2e1de4 --- /dev/null +++ b/user/internal/service/policysvc/service.go @@ -0,0 +1,1245 @@ +// Package policysvc implements the trusted sanction and limit command use +// cases owned by User Service. +package policysvc + +import ( + "context" + "errors" + "fmt" + "log/slog" + "strings" + "time" + + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/policy" + "galaxy/user/internal/ports" + "galaxy/user/internal/service/shared" + "galaxy/user/internal/telemetry" +) + +const adminInternalAPISource = common.Source("admin_internal_api") + +// ActorInput stores one transport-facing audit actor payload. +type ActorInput struct { + // Type stores the machine-readable actor type. + Type string + + // ID stores the optional stable actor identifier. + ID string +} + +// ApplySanctionInput stores one trusted sanction-apply command request. +type ApplySanctionInput struct { + // UserID identifies the user whose sanction set must change. + UserID string + + // SanctionCode stores the sanction that must become active. + SanctionCode string + + // Scope stores the machine-readable sanction scope. + Scope string + + // ReasonCode stores the machine-readable mutation reason. + ReasonCode string + + // Actor stores the audit actor metadata. + Actor ActorInput + + // AppliedAt stores when the sanction becomes effective. + AppliedAt string + + // ExpiresAt stores the optional planned sanction expiry. + ExpiresAt string +} + +// RemoveSanctionInput stores one trusted sanction-remove command request. +type RemoveSanctionInput struct { + // UserID identifies the user whose sanction set must change. + UserID string + + // SanctionCode stores the sanction that must no longer stay active. + SanctionCode string + + // ReasonCode stores the machine-readable mutation reason. 
+ ReasonCode string + + // Actor stores the audit actor metadata. + Actor ActorInput +} + +// SetLimitInput stores one trusted limit-set command request. +type SetLimitInput struct { + // UserID identifies the user whose limit set must change. + UserID string + + // LimitCode stores the limit override that must become active. + LimitCode string + + // Value stores the active numeric override value. + Value int + + // ReasonCode stores the machine-readable mutation reason. + ReasonCode string + + // Actor stores the audit actor metadata. + Actor ActorInput + + // AppliedAt stores when the limit becomes effective. + AppliedAt string + + // ExpiresAt stores the optional planned limit expiry. + ExpiresAt string +} + +// RemoveLimitInput stores one trusted limit-remove command request. +type RemoveLimitInput struct { + // UserID identifies the user whose limit set must change. + UserID string + + // LimitCode stores the limit override that must no longer stay active. + LimitCode string + + // ReasonCode stores the machine-readable mutation reason. + ReasonCode string + + // Actor stores the audit actor metadata. + Actor ActorInput +} + +// ActorRefView stores transport-ready audit actor metadata. +type ActorRefView struct { + // Type stores the machine-readable actor type. + Type string `json:"type"` + + // ID stores the optional stable actor identifier. + ID string `json:"id,omitempty"` +} + +// ActiveSanctionView stores one transport-ready active sanction. +type ActiveSanctionView struct { + // SanctionCode stores the active sanction code. + SanctionCode string `json:"sanction_code"` + + // Scope stores the machine-readable sanction scope. + Scope string `json:"scope"` + + // ReasonCode stores the machine-readable sanction reason. + ReasonCode string `json:"reason_code"` + + // Actor stores the audit actor metadata attached to the sanction. + Actor ActorRefView `json:"actor"` + + // AppliedAt stores when the sanction became active. 
+ AppliedAt time.Time `json:"applied_at"` + + // ExpiresAt stores the optional planned sanction expiry. + ExpiresAt *time.Time `json:"expires_at,omitempty"` +} + +// ActiveLimitView stores one transport-ready active limit. +type ActiveLimitView struct { + // LimitCode stores the active limit code. + LimitCode string `json:"limit_code"` + + // Value stores the active override value. + Value int `json:"value"` + + // ReasonCode stores the machine-readable limit reason. + ReasonCode string `json:"reason_code"` + + // Actor stores the audit actor metadata attached to the limit. + Actor ActorRefView `json:"actor"` + + // AppliedAt stores when the limit became active. + AppliedAt time.Time `json:"applied_at"` + + // ExpiresAt stores the optional planned limit expiry. + ExpiresAt *time.Time `json:"expires_at,omitempty"` +} + +// SanctionCommandResult stores one trusted sanction-command result. +type SanctionCommandResult struct { + // UserID identifies the mutated user. + UserID string `json:"user_id"` + + // ActiveSanctions stores the current active sanctions sorted by code. + ActiveSanctions []ActiveSanctionView `json:"active_sanctions"` +} + +// LimitCommandResult stores one trusted limit-command result. +type LimitCommandResult struct { + // UserID identifies the mutated user. + UserID string `json:"user_id"` + + // ActiveLimits stores the current active limits sorted by code. 
+ ActiveLimits []ActiveLimitView `json:"active_limits"` +} + +type commandSupport struct { + accounts ports.UserAccountStore + sanctions ports.SanctionStore + limits ports.LimitStore + lifecycle ports.PolicyLifecycleStore + clock ports.Clock + idGenerator ports.IDGenerator +} + +func newCommandSupport( + accounts ports.UserAccountStore, + sanctions ports.SanctionStore, + limits ports.LimitStore, + lifecycle ports.PolicyLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, +) (commandSupport, error) { + switch { + case accounts == nil: + return commandSupport{}, fmt.Errorf("user account store must not be nil") + case sanctions == nil: + return commandSupport{}, fmt.Errorf("sanction store must not be nil") + case limits == nil: + return commandSupport{}, fmt.Errorf("limit store must not be nil") + case lifecycle == nil: + return commandSupport{}, fmt.Errorf("policy lifecycle store must not be nil") + case clock == nil: + return commandSupport{}, fmt.Errorf("clock must not be nil") + case idGenerator == nil: + return commandSupport{}, fmt.Errorf("id generator must not be nil") + default: + return commandSupport{ + accounts: accounts, + sanctions: sanctions, + limits: limits, + lifecycle: lifecycle, + clock: clock, + idGenerator: idGenerator, + }, nil + } +} + +func (support commandSupport) ensureUserExists(ctx context.Context, userID common.UserID) error { + exists, err := support.accounts.ExistsByUserID(ctx, userID) + switch { + case err != nil: + return shared.ServiceUnavailable(err) + case !exists: + return shared.SubjectNotFound() + default: + return nil + } +} + +func (support commandSupport) loadActiveSanctions( + ctx context.Context, + userID common.UserID, + now time.Time, +) ([]policy.SanctionRecord, error) { + records, err := support.sanctions.ListByUserID(ctx, userID) + if err != nil { + return nil, shared.ServiceUnavailable(err) + } + + active, err := policy.ActiveSanctionsAt(records, now) + if err != nil { + return nil, 
shared.InternalError(fmt.Errorf("evaluate active sanctions for user %q: %w", userID, err)) + } + + return active, nil +} + +func (support commandSupport) loadActiveLimits( + ctx context.Context, + userID common.UserID, + now time.Time, +) ([]policy.LimitRecord, error) { + records, err := support.limits.ListByUserID(ctx, userID) + if err != nil { + return nil, shared.ServiceUnavailable(err) + } + + active, err := policy.ActiveLimitsAt(records, now) + if err != nil { + return nil, shared.InternalError(fmt.Errorf("evaluate active limits for user %q: %w", userID, err)) + } + + return active, nil +} + +// ApplySanctionService executes the explicit trusted sanction-apply command. +type ApplySanctionService struct { + support commandSupport + logger *slog.Logger + telemetry *telemetry.Runtime + publisher ports.SanctionChangedPublisher +} + +// NewApplySanctionService constructs one sanction-apply use case. +func NewApplySanctionService( + accounts ports.UserAccountStore, + sanctions ports.SanctionStore, + limits ports.LimitStore, + lifecycle ports.PolicyLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, +) (*ApplySanctionService, error) { + return NewApplySanctionServiceWithObservability(accounts, sanctions, limits, lifecycle, clock, idGenerator, nil, nil, nil) +} + +// NewApplySanctionServiceWithObservability constructs one sanction-apply use +// case with optional observability hooks. 
+func NewApplySanctionServiceWithObservability( + accounts ports.UserAccountStore, + sanctions ports.SanctionStore, + limits ports.LimitStore, + lifecycle ports.PolicyLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, + logger *slog.Logger, + telemetryRuntime *telemetry.Runtime, + publisher ports.SanctionChangedPublisher, +) (*ApplySanctionService, error) { + support, err := newCommandSupport(accounts, sanctions, limits, lifecycle, clock, idGenerator) + if err != nil { + return nil, fmt.Errorf("policy apply sanction service: %w", err) + } + + return &ApplySanctionService{ + support: support, + logger: logger, + telemetry: telemetryRuntime, + publisher: publisher, + }, nil +} + +// Execute applies one new active sanction when the current state does not +// already contain an active sanction with the same code. +func (service *ApplySanctionService) Execute(ctx context.Context, input ApplySanctionInput) (result SanctionCommandResult, err error) { + outcome := shared.ErrorCodeInternalError + userIDString := strings.TrimSpace(input.UserID) + reasonCodeValue := strings.TrimSpace(input.ReasonCode) + actorTypeValue := strings.TrimSpace(input.Actor.Type) + actorIDValue := strings.TrimSpace(input.Actor.ID) + defer func() { + if service.telemetry != nil { + service.telemetry.RecordSanctionMutation(ctx, "apply", outcome) + } + shared.LogServiceOutcome(service.logger, ctx, "sanction apply completed", err, + "use_case", "apply_sanction", + "command", "apply", + "outcome", outcome, + "user_id", userIDString, + "source", adminInternalAPISource.String(), + "reason_code", reasonCodeValue, + "actor_type", actorTypeValue, + "actor_id", actorIDValue, + ) + }() + + if ctx == nil { + outcome = shared.ErrorCodeInvalidRequest + return SanctionCommandResult{}, shared.InvalidRequest("context must not be nil") + } + + userID, err := shared.ParseUserID(input.UserID) + if err != nil { + outcome = shared.MetricOutcome(err) + return SanctionCommandResult{}, err + } + 
userIDString = userID.String() + if err := service.support.ensureUserExists(ctx, userID); err != nil { + outcome = shared.MetricOutcome(err) + return SanctionCommandResult{}, err + } + + recordID, err := service.support.idGenerator.NewSanctionRecordID() + if err != nil { + outcome = shared.ErrorCodeServiceUnavailable + return SanctionCommandResult{}, shared.ServiceUnavailable(err) + } + record, now, err := buildSanctionRecord(recordID, userID, input, service.support.clock.Now().UTC()) + if err != nil { + outcome = shared.MetricOutcome(err) + return SanctionCommandResult{}, err + } + reasonCodeValue = record.ReasonCode.String() + actorTypeValue = record.Actor.Type.String() + actorIDValue = record.Actor.ID.String() + + if err := service.support.lifecycle.ApplySanction(ctx, ports.ApplySanctionInput{ + NewRecord: record, + }); err != nil { + switch { + case errors.Is(err, ports.ErrConflict): + outcome = shared.ErrorCodeConflict + return SanctionCommandResult{}, shared.Conflict() + default: + outcome = shared.ErrorCodeServiceUnavailable + return SanctionCommandResult{}, shared.ServiceUnavailable(err) + } + } + + active, err := service.support.loadActiveSanctions(ctx, userID, now) + if err != nil { + outcome = shared.MetricOutcome(err) + return SanctionCommandResult{}, err + } + outcome = "success" + result = SanctionCommandResult{ + UserID: userID.String(), + ActiveSanctions: sanctionViews(active), + } + publishSanctionChanged(ctx, service.publisher, service.telemetry, service.logger, "apply_sanction", ports.SanctionChangedOperationApplied, record) + + return result, nil +} + +// RemoveSanctionService executes the explicit trusted sanction-remove +// command. +type RemoveSanctionService struct { + support commandSupport + logger *slog.Logger + telemetry *telemetry.Runtime + publisher ports.SanctionChangedPublisher +} + +// NewRemoveSanctionService constructs one sanction-remove use case. 
+func NewRemoveSanctionService( + accounts ports.UserAccountStore, + sanctions ports.SanctionStore, + limits ports.LimitStore, + lifecycle ports.PolicyLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, +) (*RemoveSanctionService, error) { + return NewRemoveSanctionServiceWithObservability(accounts, sanctions, limits, lifecycle, clock, idGenerator, nil, nil, nil) +} + +// NewRemoveSanctionServiceWithObservability constructs one sanction-remove use +// case with optional observability hooks. +func NewRemoveSanctionServiceWithObservability( + accounts ports.UserAccountStore, + sanctions ports.SanctionStore, + limits ports.LimitStore, + lifecycle ports.PolicyLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, + logger *slog.Logger, + telemetryRuntime *telemetry.Runtime, + publisher ports.SanctionChangedPublisher, +) (*RemoveSanctionService, error) { + support, err := newCommandSupport(accounts, sanctions, limits, lifecycle, clock, idGenerator) + if err != nil { + return nil, fmt.Errorf("policy remove sanction service: %w", err) + } + + return &RemoveSanctionService{ + support: support, + logger: logger, + telemetry: telemetryRuntime, + publisher: publisher, + }, nil +} + +// Execute removes the current active sanction of input.SanctionCode. When no +// active sanction exists, the command succeeds without changing state. 
+func (service *RemoveSanctionService) Execute(ctx context.Context, input RemoveSanctionInput) (result SanctionCommandResult, err error) { + outcome := shared.ErrorCodeInternalError + userIDString := strings.TrimSpace(input.UserID) + reasonCodeValue := strings.TrimSpace(input.ReasonCode) + actorTypeValue := strings.TrimSpace(input.Actor.Type) + actorIDValue := strings.TrimSpace(input.Actor.ID) + defer func() { + if service.telemetry != nil { + service.telemetry.RecordSanctionMutation(ctx, "remove", outcome) + } + shared.LogServiceOutcome(service.logger, ctx, "sanction remove completed", err, + "use_case", "remove_sanction", + "command", "remove", + "outcome", outcome, + "user_id", userIDString, + "source", adminInternalAPISource.String(), + "reason_code", reasonCodeValue, + "actor_type", actorTypeValue, + "actor_id", actorIDValue, + ) + }() + + if ctx == nil { + outcome = shared.ErrorCodeInvalidRequest + return SanctionCommandResult{}, shared.InvalidRequest("context must not be nil") + } + + userID, err := shared.ParseUserID(input.UserID) + if err != nil { + outcome = shared.MetricOutcome(err) + return SanctionCommandResult{}, err + } + userIDString = userID.String() + if err := service.support.ensureUserExists(ctx, userID); err != nil { + outcome = shared.MetricOutcome(err) + return SanctionCommandResult{}, err + } + + sanctionCode, err := parseSanctionCode(input.SanctionCode) + if err != nil { + outcome = shared.MetricOutcome(err) + return SanctionCommandResult{}, err + } + reasonCode, err := shared.ParseReasonCode(input.ReasonCode) + if err != nil { + outcome = shared.MetricOutcome(err) + return SanctionCommandResult{}, err + } + reasonCodeValue = reasonCode.String() + actor, err := parseActor(input.Actor) + if err != nil { + outcome = shared.MetricOutcome(err) + return SanctionCommandResult{}, err + } + actorTypeValue = actor.Type.String() + actorIDValue = actor.ID.String() + + now := service.support.clock.Now().UTC() + active, err := 
service.support.loadActiveSanctions(ctx, userID, now) + if err != nil { + outcome = shared.MetricOutcome(err) + return SanctionCommandResult{}, err + } + + current, ok := findActiveSanction(active, sanctionCode) + if !ok { + outcome = "success" + return SanctionCommandResult{ + UserID: userID.String(), + ActiveSanctions: sanctionViews(active), + }, nil + } + + updated := current + updated.RemovedAt = &now + updated.RemovedBy = actor + updated.RemovedReasonCode = reasonCode + + if err := service.support.lifecycle.RemoveSanction(ctx, ports.RemoveSanctionInput{ + ExpectedActiveRecord: current, + UpdatedRecord: updated, + }); err != nil { + switch { + case errors.Is(err, ports.ErrConflict): + active, loadErr := service.support.loadActiveSanctions(ctx, userID, now) + if loadErr != nil { + outcome = shared.MetricOutcome(loadErr) + return SanctionCommandResult{}, loadErr + } + // A concurrent remove may have already cleared the sanction; treat + // that as success, otherwise report the lost race as a conflict. + if _, ok := findActiveSanction(active, sanctionCode); !ok { + outcome = "success" + return SanctionCommandResult{ + UserID: userID.String(), + ActiveSanctions: sanctionViews(active), + }, nil + } + outcome = shared.ErrorCodeConflict + return SanctionCommandResult{}, shared.Conflict() + default: + outcome = shared.ErrorCodeServiceUnavailable + return SanctionCommandResult{}, shared.ServiceUnavailable(err) + } + } + + active, err = service.support.loadActiveSanctions(ctx, userID, now) + if err != nil { + outcome = shared.MetricOutcome(err) + return SanctionCommandResult{}, err + } + outcome = "success" + result = SanctionCommandResult{ + UserID: userID.String(), + ActiveSanctions: sanctionViews(active), + } + publishSanctionChanged(ctx, service.publisher, service.telemetry, service.logger, "remove_sanction", ports.SanctionChangedOperationRemoved, updated) + + return result, nil +} + +// SetLimitService executes the explicit trusted limit-set command.
+type SetLimitService struct { + support commandSupport + logger *slog.Logger + telemetry *telemetry.Runtime + publisher ports.LimitChangedPublisher +} + +// NewSetLimitService constructs one limit-set use case. +func NewSetLimitService( + accounts ports.UserAccountStore, + sanctions ports.SanctionStore, + limits ports.LimitStore, + lifecycle ports.PolicyLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, +) (*SetLimitService, error) { + return NewSetLimitServiceWithObservability(accounts, sanctions, limits, lifecycle, clock, idGenerator, nil, nil, nil) +} + +// NewSetLimitServiceWithObservability constructs one limit-set use case with +// optional observability hooks. +func NewSetLimitServiceWithObservability( + accounts ports.UserAccountStore, + sanctions ports.SanctionStore, + limits ports.LimitStore, + lifecycle ports.PolicyLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, + logger *slog.Logger, + telemetryRuntime *telemetry.Runtime, + publisher ports.LimitChangedPublisher, +) (*SetLimitService, error) { + support, err := newCommandSupport(accounts, sanctions, limits, lifecycle, clock, idGenerator) + if err != nil { + return nil, fmt.Errorf("policy set limit service: %w", err) + } + + return &SetLimitService{ + support: support, + logger: logger, + telemetry: telemetryRuntime, + publisher: publisher, + }, nil +} + +// Execute creates one new active limit or replaces the current active limit of +// the same code. 
+func (service *SetLimitService) Execute(ctx context.Context, input SetLimitInput) (result LimitCommandResult, err error) { + outcome := shared.ErrorCodeInternalError + userIDString := strings.TrimSpace(input.UserID) + reasonCodeValue := strings.TrimSpace(input.ReasonCode) + actorTypeValue := strings.TrimSpace(input.Actor.Type) + actorIDValue := strings.TrimSpace(input.Actor.ID) + defer func() { + if service.telemetry != nil { + service.telemetry.RecordLimitMutation(ctx, "set", outcome) + } + shared.LogServiceOutcome(service.logger, ctx, "limit set completed", err, + "use_case", "set_limit", + "command", "set", + "outcome", outcome, + "user_id", userIDString, + "source", adminInternalAPISource.String(), + "reason_code", reasonCodeValue, + "actor_type", actorTypeValue, + "actor_id", actorIDValue, + ) + }() + + if ctx == nil { + outcome = shared.ErrorCodeInvalidRequest + return LimitCommandResult{}, shared.InvalidRequest("context must not be nil") + } + + userID, err := shared.ParseUserID(input.UserID) + if err != nil { + outcome = shared.MetricOutcome(err) + return LimitCommandResult{}, err + } + userIDString = userID.String() + if err := service.support.ensureUserExists(ctx, userID); err != nil { + outcome = shared.MetricOutcome(err) + return LimitCommandResult{}, err + } + + recordID, err := service.support.idGenerator.NewLimitRecordID() + if err != nil { + outcome = shared.ErrorCodeServiceUnavailable + return LimitCommandResult{}, shared.ServiceUnavailable(err) + } + record, now, err := buildLimitRecord(recordID, userID, input, service.support.clock.Now().UTC()) + if err != nil { + outcome = shared.MetricOutcome(err) + return LimitCommandResult{}, err + } + reasonCodeValue = record.ReasonCode.String() + actorTypeValue = record.Actor.Type.String() + actorIDValue = record.Actor.ID.String() + + active, err := service.support.loadActiveLimits(ctx, userID, now) + if err != nil { + outcome = shared.MetricOutcome(err) + return LimitCommandResult{}, err + } + + current, 
ok := findActiveLimit(active, record.LimitCode) + setInput := ports.SetLimitInput{NewRecord: record} + if ok { + if record.AppliedAt.Before(current.AppliedAt) { + outcome = shared.ErrorCodeInvalidRequest + return LimitCommandResult{}, shared.InvalidRequest("applied_at must not be before the current active limit applied_at") + } + + updated := current + removedAt := record.AppliedAt + updated.RemovedAt = &removedAt + updated.RemovedBy = record.Actor + updated.RemovedReasonCode = record.ReasonCode + setInput.ExpectedActiveRecord = &current + setInput.UpdatedActiveRecord = &updated + } + + if err := service.support.lifecycle.SetLimit(ctx, setInput); err != nil { + switch { + case errors.Is(err, ports.ErrConflict): + outcome = shared.ErrorCodeConflict + return LimitCommandResult{}, shared.Conflict() + default: + outcome = shared.ErrorCodeServiceUnavailable + return LimitCommandResult{}, shared.ServiceUnavailable(err) + } + } + + active, err = service.support.loadActiveLimits(ctx, userID, now) + if err != nil { + outcome = shared.MetricOutcome(err) + return LimitCommandResult{}, err + } + outcome = "success" + result = LimitCommandResult{ + UserID: userID.String(), + ActiveLimits: limitViews(active), + } + publishLimitChanged(ctx, service.publisher, service.telemetry, service.logger, "set_limit", ports.LimitChangedOperationSet, record) + + return result, nil +} + +// RemoveLimitService executes the explicit trusted limit-remove command. +type RemoveLimitService struct { + support commandSupport + logger *slog.Logger + telemetry *telemetry.Runtime + publisher ports.LimitChangedPublisher +} + +// NewRemoveLimitService constructs one limit-remove use case.
+func NewRemoveLimitService( + accounts ports.UserAccountStore, + sanctions ports.SanctionStore, + limits ports.LimitStore, + lifecycle ports.PolicyLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, +) (*RemoveLimitService, error) { + return NewRemoveLimitServiceWithObservability(accounts, sanctions, limits, lifecycle, clock, idGenerator, nil, nil, nil) +} + +// NewRemoveLimitServiceWithObservability constructs one limit-remove use case +// with optional observability hooks. +func NewRemoveLimitServiceWithObservability( + accounts ports.UserAccountStore, + sanctions ports.SanctionStore, + limits ports.LimitStore, + lifecycle ports.PolicyLifecycleStore, + clock ports.Clock, + idGenerator ports.IDGenerator, + logger *slog.Logger, + telemetryRuntime *telemetry.Runtime, + publisher ports.LimitChangedPublisher, +) (*RemoveLimitService, error) { + support, err := newCommandSupport(accounts, sanctions, limits, lifecycle, clock, idGenerator) + if err != nil { + return nil, fmt.Errorf("policy remove limit service: %w", err) + } + + return &RemoveLimitService{ + support: support, + logger: logger, + telemetry: telemetryRuntime, + publisher: publisher, + }, nil +} + +// Execute removes the current active limit of input.LimitCode. When no active +// limit exists, the command succeeds without changing state. 
+func (service *RemoveLimitService) Execute(ctx context.Context, input RemoveLimitInput) (result LimitCommandResult, err error) { + outcome := shared.ErrorCodeInternalError + userIDString := strings.TrimSpace(input.UserID) + reasonCodeValue := strings.TrimSpace(input.ReasonCode) + actorTypeValue := strings.TrimSpace(input.Actor.Type) + actorIDValue := strings.TrimSpace(input.Actor.ID) + defer func() { + if service.telemetry != nil { + service.telemetry.RecordLimitMutation(ctx, "remove", outcome) + } + shared.LogServiceOutcome(service.logger, ctx, "limit remove completed", err, + "use_case", "remove_limit", + "command", "remove", + "outcome", outcome, + "user_id", userIDString, + "source", adminInternalAPISource.String(), + "reason_code", reasonCodeValue, + "actor_type", actorTypeValue, + "actor_id", actorIDValue, + ) + }() + + if ctx == nil { + outcome = shared.ErrorCodeInvalidRequest + return LimitCommandResult{}, shared.InvalidRequest("context must not be nil") + } + + userID, err := shared.ParseUserID(input.UserID) + if err != nil { + outcome = shared.MetricOutcome(err) + return LimitCommandResult{}, err + } + userIDString = userID.String() + if err := service.support.ensureUserExists(ctx, userID); err != nil { + outcome = shared.MetricOutcome(err) + return LimitCommandResult{}, err + } + + limitCode, err := parseLimitCode(input.LimitCode) + if err != nil { + outcome = shared.MetricOutcome(err) + return LimitCommandResult{}, err + } + reasonCode, err := shared.ParseReasonCode(input.ReasonCode) + if err != nil { + outcome = shared.MetricOutcome(err) + return LimitCommandResult{}, err + } + reasonCodeValue = reasonCode.String() + actor, err := parseActor(input.Actor) + if err != nil { + outcome = shared.MetricOutcome(err) + return LimitCommandResult{}, err + } + actorTypeValue = actor.Type.String() + actorIDValue = actor.ID.String() + + now := service.support.clock.Now().UTC() + active, err := service.support.loadActiveLimits(ctx, userID, now) + if err != nil { + 
outcome = shared.MetricOutcome(err) + return LimitCommandResult{}, err + } + + current, ok := findActiveLimit(active, limitCode) + if !ok { + outcome = "success" + return LimitCommandResult{ + UserID: userID.String(), + ActiveLimits: limitViews(active), + }, nil + } + + updated := current + updated.RemovedAt = &now + updated.RemovedBy = actor + updated.RemovedReasonCode = reasonCode + + if err := service.support.lifecycle.RemoveLimit(ctx, ports.RemoveLimitInput{ + ExpectedActiveRecord: current, + UpdatedRecord: updated, + }); err != nil { + switch { + case errors.Is(err, ports.ErrConflict): + active, loadErr := service.support.loadActiveLimits(ctx, userID, now) + if loadErr != nil { + outcome = shared.MetricOutcome(loadErr) + return LimitCommandResult{}, loadErr + } + // A concurrent remove may have already cleared the limit; treat + // that as success, otherwise report the lost race as a conflict. + if _, ok := findActiveLimit(active, limitCode); !ok { + outcome = "success" + return LimitCommandResult{ + UserID: userID.String(), + ActiveLimits: limitViews(active), + }, nil + } + outcome = shared.ErrorCodeConflict + return LimitCommandResult{}, shared.Conflict() + default: + outcome = shared.ErrorCodeServiceUnavailable + return LimitCommandResult{}, shared.ServiceUnavailable(err) + } + } + + active, err = service.support.loadActiveLimits(ctx, userID, now) + if err != nil { + outcome = shared.MetricOutcome(err) + return LimitCommandResult{}, err + } + outcome = "success" + result = LimitCommandResult{ + UserID: userID.String(), + ActiveLimits: limitViews(active), + } + publishLimitChanged(ctx, service.publisher, service.telemetry, service.logger, "remove_limit", ports.LimitChangedOperationRemoved, updated) + + return result, nil +} + +func buildSanctionRecord( + recordID policy.SanctionRecordID, + userID common.UserID, + input ApplySanctionInput, + now time.Time, +) (policy.SanctionRecord, time.Time, error) { + sanctionCode, err :=
parseSanctionCode(input.SanctionCode) + if err != nil { + return policy.SanctionRecord{}, time.Time{}, err + } + scope, err := parseScope(input.Scope) + if err != nil { + return policy.SanctionRecord{}, time.Time{}, err + } + reasonCode, err := shared.ParseReasonCode(input.ReasonCode) + if err != nil { + return policy.SanctionRecord{}, time.Time{}, err + } + actor, err := parseActor(input.Actor) + if err != nil { + return policy.SanctionRecord{}, time.Time{}, err + } + appliedAt, err := parseTimestamp("applied_at", input.AppliedAt) + if err != nil { + return policy.SanctionRecord{}, time.Time{}, err + } + expiresAt, err := parseOptionalTimestamp("expires_at", input.ExpiresAt) + if err != nil { + return policy.SanctionRecord{}, time.Time{}, err + } + + record := policy.SanctionRecord{ + RecordID: recordID, + UserID: userID, + SanctionCode: sanctionCode, + Scope: scope, + ReasonCode: reasonCode, + Actor: actor, + AppliedAt: appliedAt, + ExpiresAt: expiresAt, + } + if err := record.ValidateAt(now); err != nil { + return policy.SanctionRecord{}, time.Time{}, shared.InvalidRequest(err.Error()) + } + if !record.IsActiveAt(now) { + return policy.SanctionRecord{}, time.Time{}, shared.InvalidRequest("expires_at must be in the future relative to current service time") + } + + return record, now, nil +} + +func buildLimitRecord( + recordID policy.LimitRecordID, + userID common.UserID, + input SetLimitInput, + now time.Time, +) (policy.LimitRecord, time.Time, error) { + limitCode, err := parseLimitCode(input.LimitCode) + if err != nil { + return policy.LimitRecord{}, time.Time{}, err + } + reasonCode, err := shared.ParseReasonCode(input.ReasonCode) + if err != nil { + return policy.LimitRecord{}, time.Time{}, err + } + actor, err := parseActor(input.Actor) + if err != nil { + return policy.LimitRecord{}, time.Time{}, err + } + appliedAt, err := parseTimestamp("applied_at", input.AppliedAt) + if err != nil { + return policy.LimitRecord{}, time.Time{}, err + } + expiresAt, err 
:= parseOptionalTimestamp("expires_at", input.ExpiresAt) + if err != nil { + return policy.LimitRecord{}, time.Time{}, err + } + + record := policy.LimitRecord{ + RecordID: recordID, + UserID: userID, + LimitCode: limitCode, + Value: input.Value, + ReasonCode: reasonCode, + Actor: actor, + AppliedAt: appliedAt, + ExpiresAt: expiresAt, + } + if err := record.ValidateAt(now); err != nil { + return policy.LimitRecord{}, time.Time{}, shared.InvalidRequest(err.Error()) + } + if !record.IsActiveAt(now) { + return policy.LimitRecord{}, time.Time{}, shared.InvalidRequest("expires_at must be in the future relative to current service time") + } + + return record, now, nil +} + +func parseSanctionCode(value string) (policy.SanctionCode, error) { + code := policy.SanctionCode(shared.NormalizeString(value)) + if !code.IsKnown() { + return "", shared.InvalidRequest("sanction_code is unsupported") + } + + return code, nil +} + +func parseLimitCode(value string) (policy.LimitCode, error) { + code := policy.LimitCode(shared.NormalizeString(value)) + if !code.IsSupported() { + return "", shared.InvalidRequest("limit_code is unsupported") + } + + return code, nil +} + +func parseScope(value string) (common.Scope, error) { + scope := common.Scope(shared.NormalizeString(value)) + if err := scope.Validate(); err != nil { + return "", shared.InvalidRequest(err.Error()) + } + + return scope, nil +} + +func parseActor(input ActorInput) (common.ActorRef, error) { + ref := common.ActorRef{ + Type: common.ActorType(shared.NormalizeString(input.Type)), + ID: common.ActorID(shared.NormalizeString(input.ID)), + } + if err := ref.Validate(); err != nil { + if ref.Type.IsZero() { + return common.ActorRef{}, shared.InvalidRequest("actor.type must not be empty") + } + return common.ActorRef{}, shared.InvalidRequest(err.Error()) + } + + return ref, nil +} + +func parseTimestamp(fieldName string, value string) (time.Time, error) { + trimmed := shared.NormalizeString(value) + if trimmed == "" { + 
return time.Time{}, shared.InvalidRequest(fieldName + " must not be empty") + } + + parsed, err := time.Parse(time.RFC3339Nano, trimmed) + if err != nil { + return time.Time{}, shared.InvalidRequest(fieldName + " must be a valid RFC 3339 timestamp") + } + + return parsed.UTC(), nil +} + +func parseOptionalTimestamp(fieldName string, value string) (*time.Time, error) { + trimmed := shared.NormalizeString(value) + if trimmed == "" { + return nil, nil + } + + parsed, err := parseTimestamp(fieldName, trimmed) + if err != nil { + return nil, err + } + + return &parsed, nil +} + +func findActiveSanction( + records []policy.SanctionRecord, + code policy.SanctionCode, +) (policy.SanctionRecord, bool) { + for _, record := range records { + if record.SanctionCode == code { + return record, true + } + } + + return policy.SanctionRecord{}, false +} + +func findActiveLimit( + records []policy.LimitRecord, + code policy.LimitCode, +) (policy.LimitRecord, bool) { + for _, record := range records { + if record.LimitCode == code { + return record, true + } + } + + return policy.LimitRecord{}, false +} + +func sanctionViews(records []policy.SanctionRecord) []ActiveSanctionView { + views := make([]ActiveSanctionView, 0, len(records)) + for _, record := range records { + views = append(views, ActiveSanctionView{ + SanctionCode: string(record.SanctionCode), + Scope: record.Scope.String(), + ReasonCode: record.ReasonCode.String(), + Actor: actorRefView(record.Actor), + AppliedAt: record.AppliedAt.UTC(), + ExpiresAt: cloneOptionalTime(record.ExpiresAt), + }) + } + + return views +} + +func limitViews(records []policy.LimitRecord) []ActiveLimitView { + views := make([]ActiveLimitView, 0, len(records)) + for _, record := range records { + views = append(views, ActiveLimitView{ + LimitCode: string(record.LimitCode), + Value: record.Value, + ReasonCode: record.ReasonCode.String(), + Actor: actorRefView(record.Actor), + AppliedAt: record.AppliedAt.UTC(), + ExpiresAt: 
cloneOptionalTime(record.ExpiresAt), + }) + } + + return views +} + +func actorRefView(ref common.ActorRef) ActorRefView { + return ActorRefView{ + Type: ref.Type.String(), + ID: ref.ID.String(), + } +} + +func cloneOptionalTime(value *time.Time) *time.Time { + if value == nil { + return nil + } + + cloned := value.UTC() + return &cloned +} + +func publishSanctionChanged( + ctx context.Context, + publisher ports.SanctionChangedPublisher, + telemetryRuntime *telemetry.Runtime, + logger *slog.Logger, + useCase string, + operation ports.SanctionChangedOperation, + record policy.SanctionRecord, +) { + if publisher == nil { + return + } + + reasonCode := record.ReasonCode + actor := record.Actor + if operation == ports.SanctionChangedOperationRemoved { + reasonCode = record.RemovedReasonCode + actor = record.RemovedBy + } + + event := ports.SanctionChangedEvent{ + UserID: record.UserID, + OccurredAt: sanctionOccurredAt(record), + Source: adminInternalAPISource, + Operation: operation, + SanctionCode: record.SanctionCode, + Scope: record.Scope, + ReasonCode: reasonCode, + Actor: actor, + AppliedAt: record.AppliedAt, + ExpiresAt: record.ExpiresAt, + RemovedAt: record.RemovedAt, + } + if err := publisher.PublishSanctionChanged(ctx, event); err != nil { + if telemetryRuntime != nil { + telemetryRuntime.RecordEventPublicationFailure(ctx, ports.SanctionChangedEventType) + } + shared.LogEventPublicationFailure(logger, ctx, ports.SanctionChangedEventType, err, + "use_case", useCase, + "user_id", record.UserID.String(), + "source", adminInternalAPISource.String(), + "reason_code", reasonCode.String(), + "actor_type", actor.Type.String(), + "actor_id", actor.ID.String(), + ) + } +} + +func publishLimitChanged( + ctx context.Context, + publisher ports.LimitChangedPublisher, + telemetryRuntime *telemetry.Runtime, + logger *slog.Logger, + useCase string, + operation ports.LimitChangedOperation, + record policy.LimitRecord, +) { + if publisher == nil { + return + } + + reasonCode := 
record.ReasonCode + actor := record.Actor + if operation == ports.LimitChangedOperationRemoved { + reasonCode = record.RemovedReasonCode + actor = record.RemovedBy + } + + value := record.Value + event := ports.LimitChangedEvent{ + UserID: record.UserID, + OccurredAt: limitOccurredAt(record), + Source: adminInternalAPISource, + Operation: operation, + LimitCode: record.LimitCode, + ReasonCode: reasonCode, + Actor: actor, + AppliedAt: record.AppliedAt, + ExpiresAt: record.ExpiresAt, + RemovedAt: record.RemovedAt, + } + if operation == ports.LimitChangedOperationSet || record.RemovedAt == nil { + event.Value = &value + } + if err := publisher.PublishLimitChanged(ctx, event); err != nil { + if telemetryRuntime != nil { + telemetryRuntime.RecordEventPublicationFailure(ctx, ports.LimitChangedEventType) + } + shared.LogEventPublicationFailure(logger, ctx, ports.LimitChangedEventType, err, + "use_case", useCase, + "user_id", record.UserID.String(), + "source", adminInternalAPISource.String(), + "reason_code", reasonCode.String(), + "actor_type", actor.Type.String(), + "actor_id", actor.ID.String(), + ) + } +} + +func sanctionOccurredAt(record policy.SanctionRecord) time.Time { + if record.RemovedAt != nil { + return record.RemovedAt.UTC() + } + + return record.AppliedAt.UTC() +} + +func limitOccurredAt(record policy.LimitRecord) time.Time { + if record.RemovedAt != nil { + return record.RemovedAt.UTC() + } + + return record.AppliedAt.UTC() +} diff --git a/user/internal/service/policysvc/service_test.go b/user/internal/service/policysvc/service_test.go new file mode 100644 index 0000000..4508ec9 --- /dev/null +++ b/user/internal/service/policysvc/service_test.go @@ -0,0 +1,705 @@ +package policysvc + +import ( + "context" + "testing" + "time" + + "galaxy/user/internal/domain/account" + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + "galaxy/user/internal/domain/policy" + "galaxy/user/internal/ports" + "galaxy/user/internal/service/shared" 
+ + "github.com/stretchr/testify/require" +) + +func TestApplySanctionServiceExecuteBuildsActiveRecord(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + sanctionStore := newFakeSanctionStore() + limitStore := newFakeLimitStore() + + service, err := NewApplySanctionService( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + sanctionStore, + limitStore, + &fakePolicyLifecycleStore{sanctions: sanctionStore, limits: limitStore}, + fixedClock{now: now}, + fixedIDGenerator{sanctionRecordID: policy.SanctionRecordID("sanction-1")}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), ApplySanctionInput{ + UserID: userID.String(), + SanctionCode: string(policy.SanctionCodeLoginBlock), + Scope: "auth", + ReasonCode: "policy_blocked", + Actor: ActorInput{Type: "admin", ID: "admin-1"}, + AppliedAt: now.Add(-time.Minute).Format(time.RFC3339Nano), + ExpiresAt: now.Add(time.Hour).Format(time.RFC3339Nano), + }) + require.NoError(t, err) + require.Equal(t, userID.String(), result.UserID) + require.Len(t, result.ActiveSanctions, 1) + require.Equal(t, string(policy.SanctionCodeLoginBlock), result.ActiveSanctions[0].SanctionCode) + + records, err := sanctionStore.ListByUserID(context.Background(), userID) + require.NoError(t, err) + require.Len(t, records, 1) + require.Equal(t, policy.SanctionRecordID("sanction-1"), records[0].RecordID) +} + +func TestApplySanctionServiceExecuteRejectsExpiredSanction(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + sanctionStore := newFakeSanctionStore() + limitStore := newFakeLimitStore() + + service, err := NewApplySanctionService( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + sanctionStore, + limitStore, + &fakePolicyLifecycleStore{sanctions: sanctionStore, limits: limitStore}, + fixedClock{now: now}, + 
fixedIDGenerator{sanctionRecordID: policy.SanctionRecordID("sanction-1")}, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), ApplySanctionInput{ + UserID: userID.String(), + SanctionCode: string(policy.SanctionCodeLoginBlock), + Scope: "auth", + ReasonCode: "policy_blocked", + Actor: ActorInput{Type: "admin", ID: "admin-1"}, + AppliedAt: now.Add(-2 * time.Hour).Format(time.RFC3339Nano), + ExpiresAt: now.Add(-time.Minute).Format(time.RFC3339Nano), + }) + require.Error(t, err) + require.Equal(t, shared.ErrorCodeInvalidRequest, shared.CodeOf(err)) +} + +func TestApplySanctionServiceExecuteReturnsConflictWhenActiveSanctionExists(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + sanctionStore := newFakeSanctionStore() + existing := policy.SanctionRecord{ + RecordID: policy.SanctionRecordID("sanction-existing"), + UserID: userID, + SanctionCode: policy.SanctionCodeLoginBlock, + Scope: common.Scope("auth"), + ReasonCode: common.ReasonCode("policy_blocked"), + Actor: common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}, + AppliedAt: now.Add(-time.Hour), + } + require.NoError(t, sanctionStore.Create(context.Background(), existing)) + + service, err := NewApplySanctionService( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + sanctionStore, + newFakeLimitStore(), + &fakePolicyLifecycleStore{sanctions: sanctionStore, limits: newFakeLimitStore()}, + fixedClock{now: now}, + fixedIDGenerator{sanctionRecordID: policy.SanctionRecordID("sanction-1")}, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), ApplySanctionInput{ + UserID: userID.String(), + SanctionCode: string(policy.SanctionCodeLoginBlock), + Scope: "auth", + ReasonCode: "policy_blocked", + Actor: ActorInput{Type: "admin", ID: "admin-2"}, + AppliedAt: now.Add(-time.Minute).Format(time.RFC3339Nano), + }) + require.Error(t, err) + 
require.Equal(t, shared.ErrorCodeConflict, shared.CodeOf(err)) +} + +func TestApplySanctionServiceExecuteReturnsNotFoundForUnknownUser(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + service, err := NewApplySanctionService( + fakeAccountStore{existsByUserID: map[common.UserID]bool{}}, + newFakeSanctionStore(), + newFakeLimitStore(), + &fakePolicyLifecycleStore{sanctions: newFakeSanctionStore(), limits: newFakeLimitStore()}, + fixedClock{now: now}, + fixedIDGenerator{sanctionRecordID: policy.SanctionRecordID("sanction-1")}, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), ApplySanctionInput{ + UserID: "user-missing", + SanctionCode: string(policy.SanctionCodeLoginBlock), + Scope: "auth", + ReasonCode: "policy_blocked", + Actor: ActorInput{Type: "admin"}, + AppliedAt: now.Format(time.RFC3339Nano), + }) + require.Error(t, err) + require.Equal(t, shared.ErrorCodeSubjectNotFound, shared.CodeOf(err)) +} + +func TestRemoveSanctionServiceExecuteIsIdempotentWhenMissing(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + sanctionStore := newFakeSanctionStore() + limitStore := newFakeLimitStore() + + service, err := NewRemoveSanctionService( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + sanctionStore, + limitStore, + &fakePolicyLifecycleStore{sanctions: sanctionStore, limits: limitStore}, + fixedClock{now: now}, + fixedIDGenerator{}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), RemoveSanctionInput{ + UserID: userID.String(), + SanctionCode: string(policy.SanctionCodeLoginBlock), + ReasonCode: "manual_remove", + Actor: ActorInput{Type: "admin", ID: "admin-1"}, + }) + require.NoError(t, err) + require.Equal(t, userID.String(), result.UserID) + require.Empty(t, result.ActiveSanctions) +} + +func TestRemoveSanctionServiceExecuteTreatsConcurrentRemovalAsSuccess(t *testing.T) { + 
t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + sanctionStore := newFakeSanctionStore() + limitStore := newFakeLimitStore() + record := policy.SanctionRecord{ + RecordID: policy.SanctionRecordID("sanction-1"), + UserID: userID, + SanctionCode: policy.SanctionCodeLoginBlock, + Scope: common.Scope("auth"), + ReasonCode: common.ReasonCode("policy_blocked"), + Actor: common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}, + AppliedAt: now.Add(-time.Hour), + } + require.NoError(t, sanctionStore.Create(context.Background(), record)) + + lifecycle := &fakePolicyLifecycleStore{ + sanctions: sanctionStore, + limits: limitStore, + removeSanctionHook: func(input ports.RemoveSanctionInput) error { + updated := input.ExpectedActiveRecord + removedAt := now.Add(-time.Minute) + updated.RemovedAt = &removedAt + updated.RemovedBy = common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-2")} + updated.RemovedReasonCode = common.ReasonCode("manual_remove") + if err := sanctionStore.Update(context.Background(), updated); err != nil { + return err + } + + return ports.ErrConflict + }, + } + + service, err := NewRemoveSanctionService( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + sanctionStore, + limitStore, + lifecycle, + fixedClock{now: now}, + fixedIDGenerator{}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), RemoveSanctionInput{ + UserID: userID.String(), + SanctionCode: string(policy.SanctionCodeLoginBlock), + ReasonCode: "manual_remove", + Actor: ActorInput{Type: "admin", ID: "admin-1"}, + }) + require.NoError(t, err) + require.Empty(t, result.ActiveSanctions) +} + +func TestSetLimitServiceExecuteReplacesActiveLimit(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + sanctionStore := newFakeSanctionStore() + limitStore := newFakeLimitStore() + current := 
policy.LimitRecord{ + RecordID: policy.LimitRecordID("limit-existing"), + UserID: userID, + LimitCode: policy.LimitCodeMaxOwnedPrivateGames, + Value: 3, + ReasonCode: common.ReasonCode("manual_override"), + Actor: common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}, + AppliedAt: now.Add(-time.Hour), + } + require.NoError(t, limitStore.Create(context.Background(), current)) + + service, err := NewSetLimitService( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + sanctionStore, + limitStore, + &fakePolicyLifecycleStore{sanctions: sanctionStore, limits: limitStore}, + fixedClock{now: now}, + fixedIDGenerator{limitRecordID: policy.LimitRecordID("limit-new")}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), SetLimitInput{ + UserID: userID.String(), + LimitCode: string(policy.LimitCodeMaxOwnedPrivateGames), + Value: 5, + ReasonCode: "manual_override", + Actor: ActorInput{Type: "admin", ID: "admin-2"}, + AppliedAt: now.Format(time.RFC3339Nano), + }) + require.NoError(t, err) + require.Len(t, result.ActiveLimits, 1) + require.Equal(t, 5, result.ActiveLimits[0].Value) + + storedCurrent, err := limitStore.GetByRecordID(context.Background(), current.RecordID) + require.NoError(t, err) + require.NotNil(t, storedCurrent.RemovedAt) + require.True(t, storedCurrent.RemovedAt.Equal(now)) +} + +func TestSetLimitServiceExecuteRejectsRetroactiveReplacement(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + limitStore := newFakeLimitStore() + current := policy.LimitRecord{ + RecordID: policy.LimitRecordID("limit-existing"), + UserID: userID, + LimitCode: policy.LimitCodeMaxOwnedPrivateGames, + Value: 3, + ReasonCode: common.ReasonCode("manual_override"), + Actor: common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}, + AppliedAt: now.Add(-time.Hour), + } + require.NoError(t, 
limitStore.Create(context.Background(), current)) + + service, err := NewSetLimitService( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + newFakeSanctionStore(), + limitStore, + &fakePolicyLifecycleStore{sanctions: newFakeSanctionStore(), limits: limitStore}, + fixedClock{now: now}, + fixedIDGenerator{limitRecordID: policy.LimitRecordID("limit-new")}, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), SetLimitInput{ + UserID: userID.String(), + LimitCode: string(policy.LimitCodeMaxOwnedPrivateGames), + Value: 5, + ReasonCode: "manual_override", + Actor: ActorInput{Type: "admin", ID: "admin-2"}, + AppliedAt: now.Add(-2 * time.Hour).Format(time.RFC3339Nano), + }) + require.Error(t, err) + require.Equal(t, shared.ErrorCodeInvalidRequest, shared.CodeOf(err)) +} + +func TestSetLimitServiceExecuteRejectsRetiredLimitCodes(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + + tests := []string{ + string(policy.LimitCodeMaxActivePrivateGames), + string(policy.LimitCodeMaxPendingPrivateJoinRequests), + string(policy.LimitCodeMaxPendingPrivateInvitesSent), + } + + for _, limitCode := range tests { + limitCode := limitCode + t.Run(limitCode, func(t *testing.T) { + t.Parallel() + + service, err := NewSetLimitService( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + newFakeSanctionStore(), + newFakeLimitStore(), + &fakePolicyLifecycleStore{sanctions: newFakeSanctionStore(), limits: newFakeLimitStore()}, + fixedClock{now: now}, + fixedIDGenerator{limitRecordID: policy.LimitRecordID("limit-new")}, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), SetLimitInput{ + UserID: userID.String(), + LimitCode: limitCode, + Value: 5, + ReasonCode: "manual_override", + Actor: ActorInput{Type: "admin", ID: "admin-2"}, + AppliedAt: now.Format(time.RFC3339Nano), + }) + require.Error(t, err) + require.Equal(t, 
shared.ErrorCodeInvalidRequest, shared.CodeOf(err)) + }) + } +} + +func TestSetLimitServiceExecuteIgnoresRetiredRecordsDuringReload(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + limitStore := newFakeLimitStore() + require.NoError(t, limitStore.Create(context.Background(), policy.LimitRecord{ + RecordID: policy.LimitRecordID("limit-legacy"), + UserID: userID, + LimitCode: policy.LimitCodeMaxActivePrivateGames, + Value: 9, + ReasonCode: common.ReasonCode("legacy_override"), + Actor: common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}, + AppliedAt: now.Add(-time.Hour), + })) + + service, err := NewSetLimitService( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + newFakeSanctionStore(), + limitStore, + &fakePolicyLifecycleStore{sanctions: newFakeSanctionStore(), limits: limitStore}, + fixedClock{now: now}, + fixedIDGenerator{limitRecordID: policy.LimitRecordID("limit-new")}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), SetLimitInput{ + UserID: userID.String(), + LimitCode: string(policy.LimitCodeMaxOwnedPrivateGames), + Value: 5, + ReasonCode: "manual_override", + Actor: ActorInput{Type: "admin", ID: "admin-2"}, + AppliedAt: now.Format(time.RFC3339Nano), + }) + require.NoError(t, err) + require.Len(t, result.ActiveLimits, 1) + require.Equal(t, string(policy.LimitCodeMaxOwnedPrivateGames), result.ActiveLimits[0].LimitCode) +} + +func TestRemoveLimitServiceExecuteIsIdempotentWhenMissing(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + sanctionStore := newFakeSanctionStore() + limitStore := newFakeLimitStore() + + service, err := NewRemoveLimitService( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + sanctionStore, + limitStore, + &fakePolicyLifecycleStore{sanctions: sanctionStore, limits: limitStore}, + 
fixedClock{now: now}, + fixedIDGenerator{}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), RemoveLimitInput{ + UserID: userID.String(), + LimitCode: string(policy.LimitCodeMaxOwnedPrivateGames), + ReasonCode: "manual_remove", + Actor: ActorInput{Type: "admin", ID: "admin-1"}, + }) + require.NoError(t, err) + require.Empty(t, result.ActiveLimits) +} + +func TestRemoveLimitServiceExecuteRejectsRetiredLimitCode(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_000, 0).UTC() + userID := common.UserID("user-123") + + service, err := NewRemoveLimitService( + fakeAccountStore{existsByUserID: map[common.UserID]bool{userID: true}}, + newFakeSanctionStore(), + newFakeLimitStore(), + &fakePolicyLifecycleStore{sanctions: newFakeSanctionStore(), limits: newFakeLimitStore()}, + fixedClock{now: now}, + fixedIDGenerator{}, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), RemoveLimitInput{ + UserID: userID.String(), + LimitCode: string(policy.LimitCodeMaxPendingPrivateJoinRequests), + ReasonCode: "manual_remove", + Actor: ActorInput{Type: "admin", ID: "admin-1"}, + }) + require.Error(t, err) + require.Equal(t, shared.ErrorCodeInvalidRequest, shared.CodeOf(err)) +} + +type fakeAccountStore struct { + existsByUserID map[common.UserID]bool +} + +func (store fakeAccountStore) Create(context.Context, ports.CreateAccountInput) error { + return nil +} + +func (store fakeAccountStore) GetByUserID(context.Context, common.UserID) (account.UserAccount, error) { + return account.UserAccount{}, ports.ErrNotFound +} + +func (store fakeAccountStore) GetByEmail(context.Context, common.Email) (account.UserAccount, error) { + return account.UserAccount{}, ports.ErrNotFound +} + +func (store fakeAccountStore) GetByRaceName(context.Context, common.RaceName) (account.UserAccount, error) { + return account.UserAccount{}, ports.ErrNotFound +} + +func (store fakeAccountStore) ExistsByUserID(_ context.Context, userID 
common.UserID) (bool, error) { + return store.existsByUserID[userID], nil +} + +func (store fakeAccountStore) RenameRaceName(context.Context, ports.RenameRaceNameInput) error { + return nil +} + +func (store fakeAccountStore) Update(context.Context, account.UserAccount) error { + return nil +} + +type fakeSanctionStore struct { + byUserID map[common.UserID][]policy.SanctionRecord + byRecordID map[policy.SanctionRecordID]policy.SanctionRecord +} + +func newFakeSanctionStore() *fakeSanctionStore { + return &fakeSanctionStore{ + byUserID: make(map[common.UserID][]policy.SanctionRecord), + byRecordID: make(map[policy.SanctionRecordID]policy.SanctionRecord), + } +} + +func (store *fakeSanctionStore) Create(_ context.Context, record policy.SanctionRecord) error { + if err := record.Validate(); err != nil { + return err + } + if _, exists := store.byRecordID[record.RecordID]; exists { + return ports.ErrConflict + } + store.byRecordID[record.RecordID] = record + store.byUserID[record.UserID] = append(store.byUserID[record.UserID], record) + return nil +} + +func (store *fakeSanctionStore) GetByRecordID(_ context.Context, recordID policy.SanctionRecordID) (policy.SanctionRecord, error) { + record, ok := store.byRecordID[recordID] + if !ok { + return policy.SanctionRecord{}, ports.ErrNotFound + } + return record, nil +} + +func (store *fakeSanctionStore) ListByUserID(_ context.Context, userID common.UserID) ([]policy.SanctionRecord, error) { + records := store.byUserID[userID] + cloned := make([]policy.SanctionRecord, len(records)) + copy(cloned, records) + return cloned, nil +} + +func (store *fakeSanctionStore) Update(_ context.Context, record policy.SanctionRecord) error { + if err := record.Validate(); err != nil { + return err + } + if _, exists := store.byRecordID[record.RecordID]; !exists { + return ports.ErrNotFound + } + store.byRecordID[record.RecordID] = record + records := store.byUserID[record.UserID] + for index := range records { + if records[index].RecordID 
== record.RecordID { + records[index] = record + store.byUserID[record.UserID] = records + return nil + } + } + return ports.ErrNotFound +} + +type fakeLimitStore struct { + byUserID map[common.UserID][]policy.LimitRecord + byRecordID map[policy.LimitRecordID]policy.LimitRecord +} + +func newFakeLimitStore() *fakeLimitStore { + return &fakeLimitStore{ + byUserID: make(map[common.UserID][]policy.LimitRecord), + byRecordID: make(map[policy.LimitRecordID]policy.LimitRecord), + } +} + +func (store *fakeLimitStore) Create(_ context.Context, record policy.LimitRecord) error { + if err := record.Validate(); err != nil { + return err + } + if _, exists := store.byRecordID[record.RecordID]; exists { + return ports.ErrConflict + } + store.byRecordID[record.RecordID] = record + store.byUserID[record.UserID] = append(store.byUserID[record.UserID], record) + return nil +} + +func (store *fakeLimitStore) GetByRecordID(_ context.Context, recordID policy.LimitRecordID) (policy.LimitRecord, error) { + record, ok := store.byRecordID[recordID] + if !ok { + return policy.LimitRecord{}, ports.ErrNotFound + } + return record, nil +} + +func (store *fakeLimitStore) ListByUserID(_ context.Context, userID common.UserID) ([]policy.LimitRecord, error) { + records := store.byUserID[userID] + cloned := make([]policy.LimitRecord, len(records)) + copy(cloned, records) + return cloned, nil +} + +func (store *fakeLimitStore) Update(_ context.Context, record policy.LimitRecord) error { + if err := record.Validate(); err != nil { + return err + } + if _, exists := store.byRecordID[record.RecordID]; !exists { + return ports.ErrNotFound + } + store.byRecordID[record.RecordID] = record + records := store.byUserID[record.UserID] + for index := range records { + if records[index].RecordID == record.RecordID { + records[index] = record + store.byUserID[record.UserID] = records + return nil + } + } + return ports.ErrNotFound +} + +type fakePolicyLifecycleStore struct { + sanctions *fakeSanctionStore + 
limits *fakeLimitStore + + applySanctionHook func(input ports.ApplySanctionInput) error + removeSanctionHook func(input ports.RemoveSanctionInput) error + setLimitHook func(input ports.SetLimitInput) error + removeLimitHook func(input ports.RemoveLimitInput) error +} + +func (store *fakePolicyLifecycleStore) ApplySanction(ctx context.Context, input ports.ApplySanctionInput) error { + if store.applySanctionHook != nil { + return store.applySanctionHook(input) + } + + records, err := store.sanctions.ListByUserID(ctx, input.NewRecord.UserID) + if err != nil { + return err + } + active, err := policy.ActiveSanctionsAt(records, input.NewRecord.AppliedAt) + if err != nil { + return err + } + for _, record := range active { + if record.SanctionCode == input.NewRecord.SanctionCode { + return ports.ErrConflict + } + } + + return store.sanctions.Create(ctx, input.NewRecord) +} + +func (store *fakePolicyLifecycleStore) RemoveSanction(ctx context.Context, input ports.RemoveSanctionInput) error { + if store.removeSanctionHook != nil { + return store.removeSanctionHook(input) + } + + return store.sanctions.Update(ctx, input.UpdatedRecord) +} + +func (store *fakePolicyLifecycleStore) SetLimit(ctx context.Context, input ports.SetLimitInput) error { + if store.setLimitHook != nil { + return store.setLimitHook(input) + } + + if input.ExpectedActiveRecord != nil { + if err := store.limits.Update(ctx, *input.UpdatedActiveRecord); err != nil { + return err + } + } + + return store.limits.Create(ctx, input.NewRecord) +} + +func (store *fakePolicyLifecycleStore) RemoveLimit(ctx context.Context, input ports.RemoveLimitInput) error { + if store.removeLimitHook != nil { + return store.removeLimitHook(input) + } + + return store.limits.Update(ctx, input.UpdatedRecord) +} + +type fixedClock struct { + now time.Time +} + +func (clock fixedClock) Now() time.Time { + return clock.now +} + +type fixedIDGenerator struct { + sanctionRecordID policy.SanctionRecordID + limitRecordID 
policy.LimitRecordID +} + +func (generator fixedIDGenerator) NewUserID() (common.UserID, error) { + return "", nil +} + +func (generator fixedIDGenerator) NewInitialRaceName() (common.RaceName, error) { + return "", nil +} + +func (generator fixedIDGenerator) NewEntitlementRecordID() (entitlement.EntitlementRecordID, error) { + return "", nil +} + +func (generator fixedIDGenerator) NewSanctionRecordID() (policy.SanctionRecordID, error) { + return generator.sanctionRecordID, nil +} + +func (generator fixedIDGenerator) NewLimitRecordID() (policy.LimitRecordID, error) { + return generator.limitRecordID, nil +} + +var ( + _ ports.UserAccountStore = fakeAccountStore{} + _ ports.SanctionStore = (*fakeSanctionStore)(nil) + _ ports.LimitStore = (*fakeLimitStore)(nil) + _ ports.PolicyLifecycleStore = (*fakePolicyLifecycleStore)(nil) + _ ports.Clock = fixedClock{} + _ ports.IDGenerator = fixedIDGenerator{} +) diff --git a/user/internal/service/selfservice/observability_test.go b/user/internal/service/selfservice/observability_test.go new file mode 100644 index 0000000..43b4b4d --- /dev/null +++ b/user/internal/service/selfservice/observability_test.go @@ -0,0 +1,159 @@ +package selfservice + +import ( + "context" + "errors" + "testing" + "time" + + "galaxy/user/internal/domain/account" + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + "galaxy/user/internal/ports" + + "github.com/stretchr/testify/require" +) + +func TestProfileUpdaterExecutePublishesProfileChangedEvent(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + accountStore := newFakeAccountStore(validUserAccount()) + publisher := &recordingSelfServicePublisher{} + + service, err := NewProfileUpdaterWithObservability( + accountStore, + &fakeEntitlementSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + common.UserID("user-123"): validEntitlementSnapshot(common.UserID("user-123"), now), + }, + }, + fakeSanctionStore{}, + 
fakeLimitStore{}, + fixedClock{now: now}, + stubRaceNamePolicy{}, + nil, + nil, + publisher, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), UpdateMyProfileInput{ + UserID: "user-123", + RaceName: "Nova Prime", + }) + require.NoError(t, err) + require.Equal(t, "Nova Prime", result.Account.RaceName) + require.Len(t, publisher.profileEvents, 1) + require.Equal(t, ports.ProfileChangedOperationUpdated, publisher.profileEvents[0].Operation) + require.Equal(t, common.Source("gateway_self_service"), publisher.profileEvents[0].Source) + require.Equal(t, common.RaceName("Nova Prime"), publisher.profileEvents[0].RaceName) +} + +func TestProfileUpdaterExecutePublisherFailureDoesNotRollbackCommit(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + accountStore := newFakeAccountStore(validUserAccount()) + publisher := &recordingSelfServicePublisher{profileErr: errors.New("publisher unavailable")} + + service, err := NewProfileUpdaterWithObservability( + accountStore, + &fakeEntitlementSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + common.UserID("user-123"): validEntitlementSnapshot(common.UserID("user-123"), now), + }, + }, + fakeSanctionStore{}, + fakeLimitStore{}, + fixedClock{now: now}, + stubRaceNamePolicy{}, + nil, + nil, + publisher, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), UpdateMyProfileInput{ + UserID: "user-123", + RaceName: "Nova Prime", + }) + require.NoError(t, err) + require.Equal(t, "Nova Prime", result.Account.RaceName) + require.Len(t, publisher.profileEvents, 1) + + storedAccount, err := accountStore.GetByUserID(context.Background(), common.UserID("user-123")) + require.NoError(t, err) + require.Equal(t, common.RaceName("Nova Prime"), storedAccount.RaceName) +} + +func TestSettingsUpdaterExecuteNoOpDoesNotPublishEvent(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + accountStore := 
newFakeAccountStore(account.UserAccount{ + UserID: common.UserID("user-123"), + Email: common.Email("pilot@example.com"), + RaceName: common.RaceName("Pilot Nova"), + PreferredLanguage: common.LanguageTag("en-US"), + TimeZone: common.TimeZoneName("UTC"), + DeclaredCountry: common.CountryCode("DE"), + CreatedAt: time.Unix(1_775_240_000, 0).UTC(), + UpdatedAt: time.Unix(1_775_240_100, 0).UTC(), + }) + publisher := &recordingSelfServicePublisher{} + + service, err := NewSettingsUpdaterWithObservability( + accountStore, + &fakeEntitlementSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + common.UserID("user-123"): validEntitlementSnapshot(common.UserID("user-123"), now), + }, + }, + fakeSanctionStore{}, + fakeLimitStore{}, + fixedClock{now: now}, + nil, + nil, + publisher, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), UpdateMySettingsInput{ + UserID: "user-123", + PreferredLanguage: "en-us", + TimeZone: " UTC ", + }) + require.NoError(t, err) + require.Equal(t, "en-US", result.Account.PreferredLanguage) + require.Equal(t, "UTC", result.Account.TimeZone) + require.Empty(t, publisher.settingsEvents) +} + +type recordingSelfServicePublisher struct { + profileErr error + settingsErr error + profileEvents []ports.ProfileChangedEvent + settingsEvents []ports.SettingsChangedEvent +} + +func (publisher *recordingSelfServicePublisher) PublishProfileChanged(_ context.Context, event ports.ProfileChangedEvent) error { + if err := event.Validate(); err != nil { + return err + } + publisher.profileEvents = append(publisher.profileEvents, event) + return publisher.profileErr +} + +func (publisher *recordingSelfServicePublisher) PublishSettingsChanged(_ context.Context, event ports.SettingsChangedEvent) error { + if err := event.Validate(); err != nil { + return err + } + publisher.settingsEvents = append(publisher.settingsEvents, event) + return publisher.settingsErr +} + +var ( + _ ports.ProfileChangedPublisher = 
(*recordingSelfServicePublisher)(nil) + _ ports.SettingsChangedPublisher = (*recordingSelfServicePublisher)(nil) +) diff --git a/user/internal/service/selfservice/service.go b/user/internal/service/selfservice/service.go new file mode 100644 index 0000000..5e5b159 --- /dev/null +++ b/user/internal/service/selfservice/service.go @@ -0,0 +1,467 @@ +// Package selfservice implements the authenticated self-service account read +// and mutation use cases owned by User Service. +package selfservice + +import ( + "context" + "errors" + "fmt" + "log/slog" + "strings" + + "galaxy/user/internal/domain/account" + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + "galaxy/user/internal/domain/policy" + "galaxy/user/internal/ports" + "galaxy/user/internal/service/accountview" + "galaxy/user/internal/service/shared" + "galaxy/user/internal/telemetry" +) + +const gatewaySelfServiceSource = common.Source("gateway_self_service") + +// ActorRefView stores transport-ready audit actor metadata. +type ActorRefView = accountview.ActorRefView + +// EntitlementSnapshotView stores the transport-ready current entitlement +// snapshot of one account. +type EntitlementSnapshotView = accountview.EntitlementSnapshotView + +// ActiveSanctionView stores one transport-ready active sanction. +type ActiveSanctionView = accountview.ActiveSanctionView + +// ActiveLimitView stores one transport-ready active user-specific limit. +type ActiveLimitView = accountview.ActiveLimitView + +// AccountView stores the transport-ready authenticated self-service account +// aggregate. +type AccountView = accountview.AccountView + +// GetMyAccountInput stores one authenticated account-read request. +type GetMyAccountInput struct { + // UserID stores the authenticated regular-user identifier. + UserID string +} + +// GetMyAccountResult stores one authenticated account-read result. +type GetMyAccountResult struct { + // Account stores the read-optimized current account aggregate. 
+ Account AccountView `json:"account"` +} + +// UpdateMyProfileInput stores one self-service profile mutation request. +type UpdateMyProfileInput struct { + // UserID stores the authenticated regular-user identifier. + UserID string + + // RaceName stores the requested exact replacement race name. + RaceName string +} + +// UpdateMyProfileResult stores one self-service profile mutation result. +type UpdateMyProfileResult struct { + // Account stores the refreshed account aggregate after the mutation. + Account AccountView `json:"account"` +} + +// UpdateMySettingsInput stores one self-service settings mutation request. +type UpdateMySettingsInput struct { + // UserID stores the authenticated regular-user identifier. + UserID string + + // PreferredLanguage stores the requested BCP 47 preferred language. + PreferredLanguage string + + // TimeZone stores the requested IANA time-zone name. + TimeZone string +} + +// UpdateMySettingsResult stores one self-service settings mutation result. +type UpdateMySettingsResult struct { + // Account stores the refreshed account aggregate after the mutation. + Account AccountView `json:"account"` +} + +type entitlementReader interface { + GetByUserID(ctx context.Context, userID common.UserID) (entitlement.CurrentSnapshot, error) +} + +// AccountGetter executes the `GetMyAccount` use case. +type AccountGetter struct { + loader *accountview.Loader +} + +// NewAccountGetter constructs one authenticated account-read use case. +func NewAccountGetter( + accounts ports.UserAccountStore, + entitlements entitlementReader, + sanctions ports.SanctionStore, + limits ports.LimitStore, + clock ports.Clock, +) (*AccountGetter, error) { + loader, err := accountview.NewLoader(accounts, entitlements, sanctions, limits, clock) + if err != nil { + return nil, fmt.Errorf("selfservice account getter: %w", err) + } + + return &AccountGetter{loader: loader}, nil +} + +// Execute reads the current self-service account aggregate of input.UserID. 
+func (service *AccountGetter) Execute(ctx context.Context, input GetMyAccountInput) (GetMyAccountResult, error) { + if ctx == nil { + return GetMyAccountResult{}, shared.InvalidRequest("context must not be nil") + } + + userID, err := shared.ParseUserID(input.UserID) + if err != nil { + return GetMyAccountResult{}, err + } + + state, err := service.loader.Load(ctx, userID) + if err != nil { + return GetMyAccountResult{}, err + } + + return GetMyAccountResult{Account: state.View()}, nil +} + +// ProfileUpdater executes the `UpdateMyProfile` use case. +type ProfileUpdater struct { + accounts ports.UserAccountStore + loader *accountview.Loader + policy ports.RaceNamePolicy + clock ports.Clock + logger *slog.Logger + telemetry *telemetry.Runtime + profilePublisher ports.ProfileChangedPublisher +} + +// NewProfileUpdater constructs one self-service profile-mutation use case. +func NewProfileUpdater( + accounts ports.UserAccountStore, + entitlements entitlementReader, + sanctions ports.SanctionStore, + limits ports.LimitStore, + clock ports.Clock, + policy ports.RaceNamePolicy, +) (*ProfileUpdater, error) { + return NewProfileUpdaterWithObservability(accounts, entitlements, sanctions, limits, clock, policy, nil, nil, nil) +} + +// NewProfileUpdaterWithObservability constructs one self-service +// profile-mutation use case with optional observability hooks. 
+func NewProfileUpdaterWithObservability( + accounts ports.UserAccountStore, + entitlements entitlementReader, + sanctions ports.SanctionStore, + limits ports.LimitStore, + clock ports.Clock, + policy ports.RaceNamePolicy, + logger *slog.Logger, + telemetryRuntime *telemetry.Runtime, + profilePublisher ports.ProfileChangedPublisher, +) (*ProfileUpdater, error) { + loader, err := accountview.NewLoader(accounts, entitlements, sanctions, limits, clock) + if err != nil { + return nil, fmt.Errorf("selfservice profile updater: %w", err) + } + if policy == nil { + return nil, fmt.Errorf("selfservice profile updater: race-name policy must not be nil") + } + + return &ProfileUpdater{ + accounts: accounts, + loader: loader, + policy: policy, + clock: clock, + logger: logger, + telemetry: telemetryRuntime, + profilePublisher: profilePublisher, + }, nil +} + +// Execute updates the current self-service profile fields of input.UserID. +func (service *ProfileUpdater) Execute(ctx context.Context, input UpdateMyProfileInput) (result UpdateMyProfileResult, err error) { + outcome := "failed" + userIDString := "" + defer func() { + shared.LogServiceOutcome(service.logger, ctx, "profile update completed", err, + "use_case", "update_my_profile", + "outcome", outcome, + "user_id", userIDString, + "source", gatewaySelfServiceSource.String(), + ) + }() + + if ctx == nil { + return UpdateMyProfileResult{}, shared.InvalidRequest("context must not be nil") + } + + userID, err := shared.ParseUserID(input.UserID) + if err != nil { + return UpdateMyProfileResult{}, err + } + userIDString = userID.String() + raceName, err := parseRaceName(input.RaceName) + if err != nil { + return UpdateMyProfileResult{}, err + } + + state, err := service.loader.Load(ctx, userID) + if err != nil { + return UpdateMyProfileResult{}, err + } + if state.HasActiveSanction(policy.SanctionCodeProfileUpdateBlock) { + return UpdateMyProfileResult{}, shared.Conflict() + } + if state.AccountRecord.RaceName == raceName { + 
outcome = "noop" + return UpdateMyProfileResult{Account: state.View()}, nil + } + + now := service.clock.Now().UTC() + currentCanonicalKey, err := service.policy.CanonicalKey(state.AccountRecord.RaceName) + if err != nil { + return UpdateMyProfileResult{}, shared.ServiceUnavailable(fmt.Errorf("canonicalize current race name: %w", err)) + } + reservation, err := shared.BuildRaceNameReservation(service.policy, userID, raceName, now) + if err != nil { + return UpdateMyProfileResult{}, shared.ServiceUnavailable(err) + } + if err := service.accounts.RenameRaceName(ctx, ports.RenameRaceNameInput{ + UserID: userID, + CurrentCanonicalKey: currentCanonicalKey, + NewRaceName: raceName, + NewReservation: reservation, + UpdatedAt: now, + }); err != nil { + if errors.Is(err, ports.ErrRaceNameConflict) && service.telemetry != nil { + service.telemetry.RecordRaceNameReservationConflict(ctx, "update_my_profile") + } + switch { + case errors.Is(err, ports.ErrNotFound): + return UpdateMyProfileResult{}, shared.SubjectNotFound() + case errors.Is(err, ports.ErrConflict): + return UpdateMyProfileResult{}, shared.Conflict() + default: + return UpdateMyProfileResult{}, shared.ServiceUnavailable(err) + } + } + + updatedState, err := service.loader.Load(ctx, userID) + if err != nil { + return UpdateMyProfileResult{}, err + } + outcome = "updated" + result = UpdateMyProfileResult{Account: updatedState.View()} + service.publishProfileChanged(ctx, updatedState.AccountRecord) + + return result, nil +} + +// SettingsUpdater executes the `UpdateMySettings` use case. +type SettingsUpdater struct { + accounts ports.UserAccountStore + loader *accountview.Loader + clock ports.Clock + logger *slog.Logger + telemetry *telemetry.Runtime + settingsPublisher ports.SettingsChangedPublisher +} + +// NewSettingsUpdater constructs one self-service settings-mutation use case. 
+func NewSettingsUpdater( + accounts ports.UserAccountStore, + entitlements entitlementReader, + sanctions ports.SanctionStore, + limits ports.LimitStore, + clock ports.Clock, +) (*SettingsUpdater, error) { + return NewSettingsUpdaterWithObservability(accounts, entitlements, sanctions, limits, clock, nil, nil, nil) +} + +// NewSettingsUpdaterWithObservability constructs one self-service +// settings-mutation use case with optional observability hooks. +func NewSettingsUpdaterWithObservability( + accounts ports.UserAccountStore, + entitlements entitlementReader, + sanctions ports.SanctionStore, + limits ports.LimitStore, + clock ports.Clock, + logger *slog.Logger, + telemetryRuntime *telemetry.Runtime, + settingsPublisher ports.SettingsChangedPublisher, +) (*SettingsUpdater, error) { + loader, err := accountview.NewLoader(accounts, entitlements, sanctions, limits, clock) + if err != nil { + return nil, fmt.Errorf("selfservice settings updater: %w", err) + } + + return &SettingsUpdater{ + accounts: accounts, + loader: loader, + clock: clock, + logger: logger, + telemetry: telemetryRuntime, + settingsPublisher: settingsPublisher, + }, nil +} + +// Execute updates the current self-service settings fields of input.UserID. 
+func (service *SettingsUpdater) Execute(ctx context.Context, input UpdateMySettingsInput) (result UpdateMySettingsResult, err error) { + outcome := "failed" + userIDString := "" + defer func() { + shared.LogServiceOutcome(service.logger, ctx, "settings update completed", err, + "use_case", "update_my_settings", + "outcome", outcome, + "user_id", userIDString, + "source", gatewaySelfServiceSource.String(), + ) + }() + + if ctx == nil { + return UpdateMySettingsResult{}, shared.InvalidRequest("context must not be nil") + } + + userID, err := shared.ParseUserID(input.UserID) + if err != nil { + return UpdateMySettingsResult{}, err + } + userIDString = userID.String() + preferredLanguage, err := parsePreferredLanguage(input.PreferredLanguage) + if err != nil { + return UpdateMySettingsResult{}, err + } + timeZone, err := parseTimeZoneName(input.TimeZone) + if err != nil { + return UpdateMySettingsResult{}, err + } + + state, err := service.loader.Load(ctx, userID) + if err != nil { + return UpdateMySettingsResult{}, err + } + if state.HasActiveSanction(policy.SanctionCodeProfileUpdateBlock) { + return UpdateMySettingsResult{}, shared.Conflict() + } + if state.AccountRecord.PreferredLanguage == preferredLanguage && state.AccountRecord.TimeZone == timeZone { + outcome = "noop" + return UpdateMySettingsResult{Account: state.View()}, nil + } + + record := state.AccountRecord + record.PreferredLanguage = preferredLanguage + record.TimeZone = timeZone + record.UpdatedAt = service.clock.Now().UTC() + + if err := service.accounts.Update(ctx, record); err != nil { + switch { + case errors.Is(err, ports.ErrNotFound): + return UpdateMySettingsResult{}, shared.SubjectNotFound() + case errors.Is(err, ports.ErrConflict): + return UpdateMySettingsResult{}, shared.Conflict() + default: + return UpdateMySettingsResult{}, shared.ServiceUnavailable(err) + } + } + + updatedState, err := service.loader.Load(ctx, userID) + if err != nil { + return UpdateMySettingsResult{}, err + } + 
outcome = "updated" + result = UpdateMySettingsResult{Account: updatedState.View()} + service.publishSettingsChanged(ctx, updatedState.AccountRecord) + + return result, nil +} + +func parseRaceName(value string) (common.RaceName, error) { + return shared.ParseRaceName(value) +} + +func parsePreferredLanguage(value string) (common.LanguageTag, error) { + languageTag, err := shared.ParseLanguageTag(value) + if err != nil { + return "", reframeFieldError("preferred_language", "language tag", err) + } + + return languageTag, nil +} + +func parseTimeZoneName(value string) (common.TimeZoneName, error) { + timeZoneName, err := shared.ParseTimeZoneName(value) + if err != nil { + return "", reframeFieldError("time_zone", "time zone name", err) + } + + return timeZoneName, nil +} + +func reframeFieldError(fieldName string, valueName string, err error) error { + if err == nil { + return nil + } + + message := err.Error() + prefix := valueName + " " + if strings.HasPrefix(message, prefix) { + message = fieldName + " " + strings.TrimPrefix(message, prefix) + } else { + message = fmt.Sprintf("%s: %s", fieldName, message) + } + + return shared.InvalidRequest(message) +} + +func (service *ProfileUpdater) publishProfileChanged(ctx context.Context, record account.UserAccount) { + if service.profilePublisher == nil { + return + } + + event := ports.ProfileChangedEvent{ + UserID: record.UserID, + OccurredAt: record.UpdatedAt.UTC(), + Source: gatewaySelfServiceSource, + Operation: ports.ProfileChangedOperationUpdated, + RaceName: record.RaceName, + } + if err := service.profilePublisher.PublishProfileChanged(ctx, event); err != nil { + if service.telemetry != nil { + service.telemetry.RecordEventPublicationFailure(ctx, ports.ProfileChangedEventType) + } + shared.LogEventPublicationFailure(service.logger, ctx, ports.ProfileChangedEventType, err, + "use_case", "update_my_profile", + "user_id", record.UserID.String(), + "source", gatewaySelfServiceSource.String(), + ) + } +} + +func 
(service *SettingsUpdater) publishSettingsChanged(ctx context.Context, record account.UserAccount) { + if service.settingsPublisher == nil { + return + } + + event := ports.SettingsChangedEvent{ + UserID: record.UserID, + OccurredAt: record.UpdatedAt.UTC(), + Source: gatewaySelfServiceSource, + Operation: ports.SettingsChangedOperationUpdated, + PreferredLanguage: record.PreferredLanguage, + TimeZone: record.TimeZone, + } + if err := service.settingsPublisher.PublishSettingsChanged(ctx, event); err != nil { + if service.telemetry != nil { + service.telemetry.RecordEventPublicationFailure(ctx, ports.SettingsChangedEventType) + } + shared.LogEventPublicationFailure(service.logger, ctx, ports.SettingsChangedEventType, err, + "use_case", "update_my_settings", + "user_id", record.UserID.String(), + "source", gatewaySelfServiceSource.String(), + ) + } +} diff --git a/user/internal/service/selfservice/service_test.go b/user/internal/service/selfservice/service_test.go new file mode 100644 index 0000000..965d3eb --- /dev/null +++ b/user/internal/service/selfservice/service_test.go @@ -0,0 +1,732 @@ +package selfservice + +import ( + "context" + "strings" + "testing" + "time" + + "galaxy/user/internal/domain/account" + "galaxy/user/internal/domain/common" + "galaxy/user/internal/domain/entitlement" + "galaxy/user/internal/domain/policy" + "galaxy/user/internal/ports" + "galaxy/user/internal/service/entitlementsvc" + "galaxy/user/internal/service/shared" + + "github.com/stretchr/testify/require" +) + +func TestAccountGetterExecuteReturnsAggregate(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + accountStore := newFakeAccountStore(validUserAccount()) + snapshotStore := &fakeEntitlementSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + common.UserID("user-123"): validEntitlementSnapshot(common.UserID("user-123"), now), + }, + } + sanctionStore := fakeSanctionStore{ + byUserID: map[common.UserID][]policy.SanctionRecord{ + 
common.UserID("user-123"): { + validActiveSanction(common.UserID("user-123"), policy.SanctionCodeLoginBlock, now.Add(-time.Hour)), + expiredSanction(common.UserID("user-123"), policy.SanctionCodeGameJoinBlock, now.Add(-2*time.Hour)), + }, + }, + } + limitStore := fakeLimitStore{ + byUserID: map[common.UserID][]policy.LimitRecord{ + common.UserID("user-123"): { + validActiveLimit(common.UserID("user-123"), policy.LimitCodeMaxOwnedPrivateGames, 3, now.Add(-time.Hour)), + validActiveLimit(common.UserID("user-123"), policy.LimitCodeMaxActivePrivateGames, 1, now.Add(-2*time.Hour)), + }, + }, + } + + service, err := NewAccountGetter(accountStore, snapshotStore, sanctionStore, limitStore, fixedClock{now: now}) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), GetMyAccountInput{UserID: " user-123 "}) + require.NoError(t, err) + require.Equal(t, "user-123", result.Account.UserID) + require.Equal(t, "DE", result.Account.DeclaredCountry) + require.Len(t, result.Account.ActiveSanctions, 1) + require.Equal(t, string(policy.SanctionCodeLoginBlock), result.Account.ActiveSanctions[0].SanctionCode) + require.Len(t, result.Account.ActiveLimits, 1) + require.Equal(t, string(policy.LimitCodeMaxOwnedPrivateGames), result.Account.ActiveLimits[0].LimitCode) +} + +func TestAccountGetterExecuteUnknownUserReturnsNotFound(t *testing.T) { + t.Parallel() + + service, err := NewAccountGetter( + newFakeAccountStore(), + &fakeEntitlementSnapshotStore{}, + fakeSanctionStore{}, + fakeLimitStore{}, + fixedClock{now: time.Unix(1_775_240_500, 0).UTC()}, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), GetMyAccountInput{UserID: "user-missing"}) + require.Error(t, err) + require.Equal(t, shared.ErrorCodeSubjectNotFound, shared.CodeOf(err)) +} + +func TestAccountGetterExecuteMissingSnapshotReturnsInternalError(t *testing.T) { + t.Parallel() + + service, err := NewAccountGetter( + newFakeAccountStore(validUserAccount()), + 
&fakeEntitlementSnapshotStore{}, + fakeSanctionStore{}, + fakeLimitStore{}, + fixedClock{now: time.Unix(1_775_240_500, 0).UTC()}, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), GetMyAccountInput{UserID: "user-123"}) + require.Error(t, err) + require.Equal(t, shared.ErrorCodeInternalError, shared.CodeOf(err)) +} + +func TestAccountGetterExecuteRepairsExpiredPaidSnapshot(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + expiredAt := now.Add(-time.Hour) + snapshotStore := &fakeEntitlementSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + common.UserID("user-123"): { + UserID: common.UserID("user-123"), + PlanCode: entitlement.PlanCodePaidMonthly, + IsPaid: true, + StartsAt: now.Add(-30 * 24 * time.Hour), + EndsAt: timePointer(expiredAt), + Source: common.Source("admin"), + Actor: common.ActorRef{Type: common.ActorType("admin"), ID: common.ActorID("admin-1")}, + ReasonCode: common.ReasonCode("manual_grant"), + UpdatedAt: expiredAt, + }, + }, + } + reader, err := entitlementsvc.NewReader( + snapshotStore, + &fakeEntitlementLifecycleStore{snapshotStore: snapshotStore}, + fixedClock{now: now}, + readerIDGenerator{recordID: entitlement.EntitlementRecordID("entitlement-free-after-expiry")}, + ) + require.NoError(t, err) + + service, err := NewAccountGetter( + newFakeAccountStore(validUserAccount()), + reader, + fakeSanctionStore{}, + fakeLimitStore{}, + fixedClock{now: now}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), GetMyAccountInput{UserID: "user-123"}) + require.NoError(t, err) + require.Equal(t, "free", result.Account.Entitlement.PlanCode) + require.False(t, result.Account.Entitlement.IsPaid) + require.Equal(t, expiredAt, result.Account.Entitlement.StartsAt) +} + +func TestProfileUpdaterExecuteBlockedBySanction(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + accountStore := 
newFakeAccountStore(validUserAccount()) + service, err := NewProfileUpdater( + accountStore, + &fakeEntitlementSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + common.UserID("user-123"): validEntitlementSnapshot(common.UserID("user-123"), now), + }, + }, + fakeSanctionStore{ + byUserID: map[common.UserID][]policy.SanctionRecord{ + common.UserID("user-123"): { + validActiveSanction(common.UserID("user-123"), policy.SanctionCodeProfileUpdateBlock, now.Add(-time.Minute)), + }, + }, + }, + fakeLimitStore{}, + fixedClock{now: now}, + stubRaceNamePolicy{}, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), UpdateMyProfileInput{ + UserID: "user-123", + RaceName: "Nova Prime", + }) + require.Error(t, err) + require.Equal(t, shared.ErrorCodeConflict, shared.CodeOf(err)) + require.Equal(t, 0, accountStore.renameCalls) +} + +func TestProfileUpdaterExecuteSuccessNoOpAndConflict(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + inputRaceName string + renameErr error + wantCode string + wantRaceName string + wantRenameCalls int + wantCurrentKey account.RaceNameCanonicalKey + wantNewKey account.RaceNameCanonicalKey + }{ + { + name: "success", + inputRaceName: "Nova Prime", + wantRaceName: "Nova Prime", + wantRenameCalls: 1, + wantCurrentKey: canonicalKey(common.RaceName("Pilot Nova")), + wantNewKey: canonicalKey(common.RaceName("Nova Prime")), + }, + { + name: "same canonical different exact", + inputRaceName: "P1lot Nova", + wantRaceName: "P1lot Nova", + wantRenameCalls: 1, + wantCurrentKey: canonicalKey(common.RaceName("Pilot Nova")), + wantNewKey: canonicalKey(common.RaceName("P1lot Nova")), + }, + { + name: "no-op", + inputRaceName: " Pilot Nova ", + wantRaceName: "Pilot Nova", + wantRenameCalls: 0, + }, + { + name: "conflict", + inputRaceName: "Taken Name", + renameErr: ports.ErrConflict, + wantCode: shared.ErrorCodeConflict, + wantRaceName: "Pilot Nova", + wantRenameCalls: 1, + wantCurrentKey: 
canonicalKey(common.RaceName("Pilot Nova")), + wantNewKey: canonicalKey(common.RaceName("Taken Name")), + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + accountStore := newFakeAccountStore(validUserAccount()) + accountStore.renameErr = tt.renameErr + service, err := NewProfileUpdater( + accountStore, + &fakeEntitlementSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + common.UserID("user-123"): validEntitlementSnapshot(common.UserID("user-123"), now), + }, + }, + fakeSanctionStore{}, + fakeLimitStore{}, + fixedClock{now: now}, + stubRaceNamePolicy{}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), UpdateMyProfileInput{ + UserID: "user-123", + RaceName: tt.inputRaceName, + }) + if tt.wantCode != "" { + require.Error(t, err) + require.Equal(t, tt.wantCode, shared.CodeOf(err)) + } else { + require.NoError(t, err) + } + + require.Equal(t, tt.wantRenameCalls, accountStore.renameCalls) + if tt.wantRenameCalls > 0 { + require.Equal(t, tt.wantCurrentKey, accountStore.lastRenameInput.CurrentCanonicalKey) + require.Equal(t, tt.wantNewKey, accountStore.lastRenameInput.NewReservation.CanonicalKey) + } + + storedAccount, err := accountStore.GetByUserID(context.Background(), common.UserID("user-123")) + require.NoError(t, err) + require.Equal(t, tt.wantRaceName, storedAccount.RaceName.String()) + if tt.wantCode == "" { + require.Equal(t, tt.wantRaceName, result.Account.RaceName) + } + }) + } +} + +func TestSettingsUpdaterExecuteBlockedBySanction(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + accountStore := newFakeAccountStore(validUserAccount()) + service, err := NewSettingsUpdater( + accountStore, + &fakeEntitlementSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + common.UserID("user-123"): validEntitlementSnapshot(common.UserID("user-123"), now), + }, + }, 
+ fakeSanctionStore{ + byUserID: map[common.UserID][]policy.SanctionRecord{ + common.UserID("user-123"): { + validActiveSanction(common.UserID("user-123"), policy.SanctionCodeProfileUpdateBlock, now.Add(-time.Minute)), + }, + }, + }, + fakeLimitStore{}, + fixedClock{now: now}, + ) + require.NoError(t, err) + + _, err = service.Execute(context.Background(), UpdateMySettingsInput{ + UserID: "user-123", + PreferredLanguage: "en-US", + TimeZone: "UTC", + }) + require.Error(t, err) + require.Equal(t, shared.ErrorCodeConflict, shared.CodeOf(err)) + require.Equal(t, 0, accountStore.updateCalls) +} + +func TestSettingsUpdaterExecuteCanonicalizedNoOpAndInvalidInputs(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + accountRecord account.UserAccount + inputLanguage string + inputTimeZone string + wantCode string + wantLanguage string + wantTimeZone string + wantUpdateCalls int + }{ + { + name: "canonicalized success", + accountRecord: validUserAccount(), + inputLanguage: " en-us ", + inputTimeZone: " UTC ", + wantLanguage: "en-US", + wantTimeZone: "UTC", + wantUpdateCalls: 1, + }, + { + name: "no-op", + accountRecord: account.UserAccount{ + UserID: common.UserID("user-123"), + Email: common.Email("pilot@example.com"), + RaceName: common.RaceName("Pilot Nova"), + PreferredLanguage: common.LanguageTag("en-US"), + TimeZone: common.TimeZoneName("UTC"), + DeclaredCountry: common.CountryCode("DE"), + CreatedAt: time.Unix(1_775_240_000, 0).UTC(), + UpdatedAt: time.Unix(1_775_240_000, 0).UTC(), + }, + inputLanguage: "en-us", + inputTimeZone: " UTC ", + wantLanguage: "en-US", + wantTimeZone: "UTC", + wantUpdateCalls: 0, + }, + { + name: "invalid preferred language", + accountRecord: validUserAccount(), + inputLanguage: "bad@@tag", + inputTimeZone: "UTC", + wantCode: shared.ErrorCodeInvalidRequest, + wantLanguage: "en", + wantTimeZone: "Europe/Kaliningrad", + }, + { + name: "invalid time zone", + accountRecord: validUserAccount(), + inputLanguage: "en", + 
inputTimeZone: "Mars/Olympus", + wantCode: shared.ErrorCodeInvalidRequest, + wantLanguage: "en", + wantTimeZone: "Europe/Kaliningrad", + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + now := time.Unix(1_775_240_500, 0).UTC() + accountStore := newFakeAccountStore(tt.accountRecord) + service, err := NewSettingsUpdater( + accountStore, + &fakeEntitlementSnapshotStore{ + byUserID: map[common.UserID]entitlement.CurrentSnapshot{ + common.UserID("user-123"): validEntitlementSnapshot(common.UserID("user-123"), now), + }, + }, + fakeSanctionStore{}, + fakeLimitStore{}, + fixedClock{now: now}, + ) + require.NoError(t, err) + + result, err := service.Execute(context.Background(), UpdateMySettingsInput{ + UserID: "user-123", + PreferredLanguage: tt.inputLanguage, + TimeZone: tt.inputTimeZone, + }) + if tt.wantCode != "" { + require.Error(t, err) + require.Equal(t, tt.wantCode, shared.CodeOf(err)) + } else { + require.NoError(t, err) + } + + require.Equal(t, tt.wantUpdateCalls, accountStore.updateCalls) + + storedAccount, err := accountStore.GetByUserID(context.Background(), common.UserID("user-123")) + require.NoError(t, err) + require.Equal(t, tt.wantLanguage, storedAccount.PreferredLanguage.String()) + require.Equal(t, tt.wantTimeZone, storedAccount.TimeZone.String()) + if tt.wantCode == "" { + require.Equal(t, tt.wantLanguage, result.Account.PreferredLanguage) + require.Equal(t, tt.wantTimeZone, result.Account.TimeZone) + } + }) + } +} + +type fakeAccountStore struct { + records map[common.UserID]account.UserAccount + renameErr error + updateErr error + renameCalls int + updateCalls int + lastRenameInput ports.RenameRaceNameInput +} + +func newFakeAccountStore(records ...account.UserAccount) *fakeAccountStore { + byUserID := make(map[common.UserID]account.UserAccount, len(records)) + for _, record := range records { + byUserID[record.UserID] = record + } + + return &fakeAccountStore{records: byUserID} +} + +func 
(store *fakeAccountStore) Create(_ context.Context, input ports.CreateAccountInput) error { + if input.Account.Validate() != nil || input.Reservation.Validate() != nil { + return ports.ErrConflict + } + + return nil +} + +func (store *fakeAccountStore) GetByUserID(_ context.Context, userID common.UserID) (account.UserAccount, error) { + record, ok := store.records[userID] + if !ok { + return account.UserAccount{}, ports.ErrNotFound + } + + return record, nil +} + +func (store *fakeAccountStore) GetByEmail(_ context.Context, email common.Email) (account.UserAccount, error) { + for _, record := range store.records { + if record.Email == email { + return record, nil + } + } + + return account.UserAccount{}, ports.ErrNotFound +} + +func (store *fakeAccountStore) GetByRaceName(_ context.Context, raceName common.RaceName) (account.UserAccount, error) { + for _, record := range store.records { + if record.RaceName == raceName { + return record, nil + } + } + + return account.UserAccount{}, ports.ErrNotFound +} + +func (store *fakeAccountStore) ExistsByUserID(_ context.Context, userID common.UserID) (bool, error) { + _, ok := store.records[userID] + return ok, nil +} + +func (store *fakeAccountStore) RenameRaceName(_ context.Context, input ports.RenameRaceNameInput) error { + store.renameCalls++ + store.lastRenameInput = input + if store.renameErr != nil { + return store.renameErr + } + if err := input.Validate(); err != nil { + return err + } + + record, ok := store.records[input.UserID] + if !ok { + return ports.ErrNotFound + } + record.RaceName = input.NewRaceName + record.UpdatedAt = input.UpdatedAt.UTC() + store.records[input.UserID] = record + + return nil +} + +func (store *fakeAccountStore) Update(_ context.Context, record account.UserAccount) error { + store.updateCalls++ + if store.updateErr != nil { + return store.updateErr + } + if _, ok := store.records[record.UserID]; !ok { + return ports.ErrNotFound + } + store.records[record.UserID] = record + return nil +} 
+ +type fakeEntitlementSnapshotStore struct { + byUserID map[common.UserID]entitlement.CurrentSnapshot +} + +func (store *fakeEntitlementSnapshotStore) GetByUserID(_ context.Context, userID common.UserID) (entitlement.CurrentSnapshot, error) { + record, ok := store.byUserID[userID] + if !ok { + return entitlement.CurrentSnapshot{}, ports.ErrNotFound + } + + return record, nil +} + +func (store *fakeEntitlementSnapshotStore) Put(_ context.Context, record entitlement.CurrentSnapshot) error { + if store.byUserID != nil { + store.byUserID[record.UserID] = record + } + + return nil +} + +type fakeEntitlementLifecycleStore struct { + snapshotStore *fakeEntitlementSnapshotStore +} + +func (store *fakeEntitlementLifecycleStore) Grant(context.Context, ports.GrantEntitlementInput) error { + return nil +} + +func (store *fakeEntitlementLifecycleStore) Extend(context.Context, ports.ExtendEntitlementInput) error { + return nil +} + +func (store *fakeEntitlementLifecycleStore) Revoke(context.Context, ports.RevokeEntitlementInput) error { + return nil +} + +func (store *fakeEntitlementLifecycleStore) RepairExpired(ctx context.Context, input ports.RepairExpiredEntitlementInput) error { + if store.snapshotStore != nil { + return store.snapshotStore.Put(ctx, input.NewSnapshot) + } + + return nil +} + +type readerIDGenerator struct { + recordID entitlement.EntitlementRecordID + sanctionRecordID policy.SanctionRecordID + limitRecordID policy.LimitRecordID +} + +func (generator readerIDGenerator) NewUserID() (common.UserID, error) { + return "", nil +} + +func (generator readerIDGenerator) NewInitialRaceName() (common.RaceName, error) { + return "", nil +} + +func (generator readerIDGenerator) NewEntitlementRecordID() (entitlement.EntitlementRecordID, error) { + return generator.recordID, nil +} + +func (generator readerIDGenerator) NewSanctionRecordID() (policy.SanctionRecordID, error) { + return generator.sanctionRecordID, nil +} + +func (generator readerIDGenerator) 
NewLimitRecordID() (policy.LimitRecordID, error) {
+	return generator.limitRecordID, nil
+}
+
+type fakeSanctionStore struct {
+	byUserID map[common.UserID][]policy.SanctionRecord
+	err      error
+}
+
+func (store fakeSanctionStore) Create(context.Context, policy.SanctionRecord) error {
+	return nil
+}
+
+func (store fakeSanctionStore) GetByRecordID(context.Context, policy.SanctionRecordID) (policy.SanctionRecord, error) {
+	return policy.SanctionRecord{}, ports.ErrNotFound
+}
+
+func (store fakeSanctionStore) ListByUserID(_ context.Context, userID common.UserID) ([]policy.SanctionRecord, error) {
+	if store.err != nil {
+		return nil, store.err
+	}
+
+	records := store.byUserID[userID]
+	cloned := make([]policy.SanctionRecord, len(records))
+	copy(cloned, records)
+	return cloned, nil
+}
+
+func (store fakeSanctionStore) Update(context.Context, policy.SanctionRecord) error {
+	return nil
+}
+
+type fakeLimitStore struct {
+	byUserID map[common.UserID][]policy.LimitRecord
+	err      error
+}
+
+func (store fakeLimitStore) Create(context.Context, policy.LimitRecord) error {
+	return nil
+}
+
+func (store fakeLimitStore) GetByRecordID(context.Context, policy.LimitRecordID) (policy.LimitRecord, error) {
+	return policy.LimitRecord{}, ports.ErrNotFound
+}
+
+func (store fakeLimitStore) ListByUserID(_ context.Context, userID common.UserID) ([]policy.LimitRecord, error) {
+	if store.err != nil {
+		return nil, store.err
+	}
+
+	records := store.byUserID[userID]
+	cloned := make([]policy.LimitRecord, len(records))
+	copy(cloned, records)
+	return cloned, nil
+}
+
+func (store fakeLimitStore) Update(context.Context, policy.LimitRecord) error {
+	return nil
+}
+
+type fixedClock struct {
+	now time.Time
+}
+
+func (clock fixedClock) Now() time.Time {
+	return clock.now
+}
+
+type stubRaceNamePolicy struct{}
+
+func (stubRaceNamePolicy) CanonicalKey(raceName common.RaceName) (account.RaceNameCanonicalKey, error) {
+	return canonicalKey(raceName), nil
+}
+
+func canonicalKey(raceName common.RaceName) account.RaceNameCanonicalKey {
+	return account.RaceNameCanonicalKey(strings.NewReplacer(
+		"1", "i",
+		"0", "o",
+		"8", "b",
+	).Replace(strings.ToLower(raceName.String())))
+}
+
+func validUserAccount() account.UserAccount {
+	createdAt := time.Unix(1_775_240_000, 0).UTC()
+	return account.UserAccount{
+		UserID:            common.UserID("user-123"),
+		Email:             common.Email("pilot@example.com"),
+		RaceName:          common.RaceName("Pilot Nova"),
+		PreferredLanguage: common.LanguageTag("en"),
+		TimeZone:          common.TimeZoneName("Europe/Kaliningrad"),
+		DeclaredCountry:   common.CountryCode("DE"),
+		CreatedAt:         createdAt,
+		UpdatedAt:         createdAt,
+	}
+}
+
+func validEntitlementSnapshot(userID common.UserID, now time.Time) entitlement.CurrentSnapshot {
+	return entitlement.CurrentSnapshot{
+		UserID:     userID,
+		PlanCode:   entitlement.PlanCodeFree,
+		IsPaid:     false,
+		StartsAt:   now.Add(-time.Hour),
+		Source:     common.Source("auth_registration"),
+		Actor:      common.ActorRef{Type: common.ActorType("service"), ID: common.ActorID("user-service")},
+		ReasonCode: common.ReasonCode("initial_free_entitlement"),
+		UpdatedAt:  now,
+	}
+}
+
+func validActiveSanction(userID common.UserID, code policy.SanctionCode, appliedAt time.Time) policy.SanctionRecord {
+	return policy.SanctionRecord{
+		RecordID:     policy.SanctionRecordID("sanction-" + string(code)),
+		UserID:       userID,
+		SanctionCode: code,
+		Scope:        common.Scope("self_service"),
+		ReasonCode:   common.ReasonCode("policy_enforced"),
+		Actor:        common.ActorRef{Type: common.ActorType("service"), ID: common.ActorID("user-service")},
+		AppliedAt:    appliedAt.UTC(),
+	}
+}
+
+func expiredSanction(userID common.UserID, code policy.SanctionCode, appliedAt time.Time) policy.SanctionRecord {
+	expiresAt := appliedAt.Add(30 * time.Minute)
+	record := validActiveSanction(userID, code, appliedAt)
+	record.RecordID = policy.SanctionRecordID(record.RecordID.String() + "-expired")
+	record.ExpiresAt = &expiresAt
+	return record
+}
+
+func validActiveLimit(userID common.UserID, code policy.LimitCode, value int, appliedAt time.Time) policy.LimitRecord {
+	return policy.LimitRecord{
+		RecordID:   policy.LimitRecordID("limit-" + string(code)),
+		UserID:     userID,
+		LimitCode:  code,
+		Value:      value,
+		ReasonCode: common.ReasonCode("policy_enforced"),
+		Actor:      common.ActorRef{Type: common.ActorType("service"), ID: common.ActorID("user-service")},
+		AppliedAt:  appliedAt.UTC(),
+	}
+}
+
+func removedLimit(userID common.UserID, code policy.LimitCode, value int, appliedAt time.Time) policy.LimitRecord {
+	removedAt := appliedAt.Add(30 * time.Minute)
+	record := validActiveLimit(userID, code, value, appliedAt)
+	record.RecordID = policy.LimitRecordID(record.RecordID.String() + "-removed")
+	record.RemovedAt = &removedAt
+	record.RemovedBy = common.ActorRef{Type: common.ActorType("service"), ID: common.ActorID("user-service")}
+	record.RemovedReasonCode = common.ReasonCode("policy_reset")
+	return record
+}
+
+func timePointer(value time.Time) *time.Time {
+	utcValue := value.UTC()
+	return &utcValue
+}
+
+var (
+	_ ports.UserAccountStore          = (*fakeAccountStore)(nil)
+	_ ports.EntitlementSnapshotStore  = (*fakeEntitlementSnapshotStore)(nil)
+	_ ports.EntitlementLifecycleStore = (*fakeEntitlementLifecycleStore)(nil)
+	_ ports.SanctionStore             = fakeSanctionStore{}
+	_ ports.LimitStore                = fakeLimitStore{}
+	_ ports.RaceNamePolicy            = stubRaceNamePolicy{}
+	_ ports.IDGenerator               = readerIDGenerator{}
+)
diff --git a/user/internal/service/shared/errors.go b/user/internal/service/shared/errors.go
new file mode 100644
index 0000000..cc11e34
--- /dev/null
+++ b/user/internal/service/shared/errors.go
@@ -0,0 +1,175 @@
+// Package shared provides shared request parsing and error normalization used
+// by the user-service application and transport layers.
+package shared
+
+import (
+	"errors"
+	"net/http"
+	"strings"
+)
+
+const (
+	// ErrorCodeInvalidRequest reports malformed or semantically invalid caller
+	// input.
+	ErrorCodeInvalidRequest = "invalid_request"
+
+	// ErrorCodeConflict reports that the requested mutation conflicts with the
+	// current source-of-truth state.
+	ErrorCodeConflict = "conflict"
+
+	// ErrorCodeSubjectNotFound reports that the requested user subject does not
+	// exist.
+	ErrorCodeSubjectNotFound = "subject_not_found"
+
+	// ErrorCodeServiceUnavailable reports that a required dependency is
+	// temporarily unavailable.
+	ErrorCodeServiceUnavailable = "service_unavailable"
+
+	// ErrorCodeInternalError reports that a local invariant failed unexpectedly.
+	ErrorCodeInternalError = "internal_error"
+)
+
+var internalErrorStatusCodes = map[string]int{
+	ErrorCodeInvalidRequest:     http.StatusBadRequest,
+	ErrorCodeConflict:           http.StatusConflict,
+	ErrorCodeSubjectNotFound:    http.StatusNotFound,
+	ErrorCodeServiceUnavailable: http.StatusServiceUnavailable,
+	ErrorCodeInternalError:      http.StatusInternalServerError,
+}
+
+var internalStableMessages = map[string]string{
+	ErrorCodeConflict:           "request conflicts with current state",
+	ErrorCodeSubjectNotFound:    "subject not found",
+	ErrorCodeServiceUnavailable: "service is unavailable",
+	ErrorCodeInternalError:      "internal server error",
+}
+
+// InternalErrorProjection stores the transport-ready representation of one
+// normalized trusted-internal error.
+type InternalErrorProjection struct {
+	// StatusCode stores the HTTP status returned to the trusted caller.
+	StatusCode int
+
+	// Code stores the stable machine-readable error code written into the JSON
+	// envelope.
+	Code string
+
+	// Message stores the stable or caller-safe message written into the JSON
+	// envelope.
+	Message string
+}
+
+// ServiceError stores one normalized application-layer failure.
+type ServiceError struct {
+	// Code stores the stable machine-readable error code.
+	Code string
+
+	// Message stores the caller-safe error message.
+	Message string
+
+	// Err stores the wrapped underlying cause when one exists.
+	Err error
+}
+
+// Error returns the caller-safe message of ServiceError.
+func (err *ServiceError) Error() string {
+	if err == nil {
+		return ""
+	}
+	if strings.TrimSpace(err.Message) != "" {
+		return err.Message
+	}
+	if strings.TrimSpace(err.Code) != "" {
+		return err.Code
+	}
+	if err.Err != nil {
+		return err.Err.Error()
+	}
+
+	return ErrorCodeInternalError
+}
+
+// Unwrap returns the wrapped underlying cause.
+func (err *ServiceError) Unwrap() error {
+	if err == nil {
+		return nil
+	}
+
+	return err.Err
+}
+
+// NewServiceError returns one new normalized application-layer error.
+func NewServiceError(code string, message string, err error) *ServiceError {
+	return &ServiceError{
+		Code:    strings.TrimSpace(code),
+		Message: strings.TrimSpace(message),
+		Err:     err,
+	}
+}
+
+// InvalidRequest returns one normalized invalid-request error.
+func InvalidRequest(message string) *ServiceError {
+	return NewServiceError(ErrorCodeInvalidRequest, strings.TrimSpace(message), nil)
+}
+
+// Conflict returns one normalized conflict error.
+func Conflict() *ServiceError {
+	return NewServiceError(ErrorCodeConflict, "", nil)
+}
+
+// SubjectNotFound returns one normalized subject-not-found error.
+func SubjectNotFound() *ServiceError {
+	return NewServiceError(ErrorCodeSubjectNotFound, "", nil)
+}
+
+// ServiceUnavailable returns one normalized dependency-unavailable error.
+func ServiceUnavailable(err error) *ServiceError {
+	return NewServiceError(ErrorCodeServiceUnavailable, "", err)
+}
+
+// InternalError returns one normalized invariant-failure error.
+func InternalError(err error) *ServiceError {
+	return NewServiceError(ErrorCodeInternalError, "", err)
+}
+
+// CodeOf returns the normalized service error code carried by err when one is
+// available.
+func CodeOf(err error) string {
+	serviceErr, ok := errors.AsType[*ServiceError](err)
+	if !ok || serviceErr == nil {
+		return ""
+	}
+
+	return serviceErr.Code
+}
+
+// ProjectInternalError normalizes err to the frozen trusted-internal HTTP
+// error surface.
+func ProjectInternalError(err error) InternalErrorProjection {
+	serviceErr, ok := errors.AsType[*ServiceError](err)
+	code := CodeOf(err)
+	if _, exists := internalErrorStatusCodes[code]; !exists {
+		return InternalErrorProjection{
+			StatusCode: http.StatusInternalServerError,
+			Code:       ErrorCodeInternalError,
+			Message:    internalStableMessages[ErrorCodeInternalError],
+		}
+	}
+
+	message := ""
+	if ok && serviceErr != nil {
+		message = serviceErr.Message
+	}
+	if stable, exists := internalStableMessages[code]; exists {
+		message = stable
+	}
+	if strings.TrimSpace(message) == "" {
+		message = internalStableMessages[ErrorCodeInternalError]
+	}
+
+	return InternalErrorProjection{
+		StatusCode: internalErrorStatusCodes[code],
+		Code:       code,
+		Message:    message,
+	}
+}
diff --git a/user/internal/service/shared/normalize.go b/user/internal/service/shared/normalize.go
new file mode 100644
index 0000000..55be37f
--- /dev/null
+++ b/user/internal/service/shared/normalize.go
@@ -0,0 +1,131 @@
+package shared
+
+import (
+	"fmt"
+	"strings"
+	"time"
+
+	"galaxy/user/internal/domain/common"
+
+	"golang.org/x/text/language"
+)
+
+// NormalizeString trims surrounding Unicode whitespace from value.
+func NormalizeString(value string) string {
+	return strings.TrimSpace(value)
+}
+
+// ParseEmail trims value and validates it as one exact normalized e-mail
+// subject used by the auth-facing contract.
+func ParseEmail(value string) (common.Email, error) {
+	email := common.Email(NormalizeString(value))
+	if err := email.Validate(); err != nil {
+		return "", InvalidRequest(err.Error())
+	}
+
+	return email, nil
+}
+
+// ParseUserID trims value and validates it as one stable user identifier.
+func ParseUserID(value string) (common.UserID, error) {
+	userID := common.UserID(NormalizeString(value))
+	if err := userID.Validate(); err != nil {
+		return "", InvalidRequest(err.Error())
+	}
+
+	return userID, nil
+}
+
+// ParseRaceName trims value and validates it as one exact stored race name.
+func ParseRaceName(value string) (common.RaceName, error) {
+	raceName := common.RaceName(NormalizeString(value))
+	if err := raceName.Validate(); err != nil {
+		return "", InvalidRequest(err.Error())
+	}
+
+	return raceName, nil
+}
+
+// ParseReasonCode trims value and validates it as one machine-readable reason
+// code.
+func ParseReasonCode(value string) (common.ReasonCode, error) {
+	reasonCode := common.ReasonCode(NormalizeString(value))
+	if err := reasonCode.Validate(); err != nil {
+		return "", InvalidRequest(err.Error())
+	}
+
+	return reasonCode, nil
+}
+
+// ParseLanguageTag trims value and validates it against the current Stage 03
+// boundary and BCP 47 semantics, returning the canonical tag form.
+func ParseLanguageTag(value string) (common.LanguageTag, error) {
+	languageTag := common.LanguageTag(NormalizeString(value))
+	if err := languageTag.Validate(); err != nil {
+		return "", InvalidRequest(err.Error())
+	}
+
+	parsedTag, err := language.Parse(languageTag.String())
+	if err != nil {
+		return "", InvalidRequest("language tag must be a valid BCP 47 language tag")
+	}
+
+	canonicalTag := common.LanguageTag(parsedTag.String())
+	if err := canonicalTag.Validate(); err != nil {
+		return "", InvalidRequest(err.Error())
+	}
+
+	return canonicalTag, nil
+}
+
+// ParseTimeZoneName trims value and validates it against the current Stage 03
+// boundary and IANA time-zone semantics.
+func ParseTimeZoneName(value string) (common.TimeZoneName, error) {
+	timeZoneName := common.TimeZoneName(NormalizeString(value))
+	if err := timeZoneName.Validate(); err != nil {
+		return "", InvalidRequest(err.Error())
+	}
+	if _, err := time.LoadLocation(timeZoneName.String()); err != nil {
+		return "", InvalidRequest("time zone name must be a valid IANA time zone name")
+	}
+
+	return timeZoneName, nil
+}
+
+// ParseRegistrationPreferredLanguage trims value, validates it as one create-
+// only BCP 47 registration language tag, and returns the canonical tag form.
+func ParseRegistrationPreferredLanguage(value string) (common.LanguageTag, error) {
+	languageTag, err := ParseLanguageTag(value)
+	if err != nil {
+		return "", reframeFieldError("registration_context.preferred_language", "language tag", err)
+	}
+
+	return languageTag, nil
+}
+
+// ParseRegistrationTimeZoneName trims value and validates it as one create-
+// only IANA registration time-zone name.
+func ParseRegistrationTimeZoneName(value string) (common.TimeZoneName, error) {
+	timeZoneName, err := ParseTimeZoneName(value)
+	if err != nil {
+		return "", reframeFieldError("registration_context.time_zone", "time zone name", err)
+	}
+
+	return timeZoneName, nil
+}
+
+func reframeFieldError(fieldName string, valueName string, err error) error {
+	if err == nil {
+		return nil
+	}
+
+	message := err.Error()
+	prefix := valueName + " "
+	if strings.HasPrefix(message, prefix) {
+		message = fieldName + " " + strings.TrimPrefix(message, prefix)
+	} else {
+		message = fmt.Sprintf("%s: %s", fieldName, message)
+	}
+
+	return InvalidRequest(message)
+}
diff --git a/user/internal/service/shared/normalize_test.go b/user/internal/service/shared/normalize_test.go
new file mode 100644
index 0000000..dcad5ea
--- /dev/null
+++ b/user/internal/service/shared/normalize_test.go
@@ -0,0 +1,119 @@
+package shared
+
+import (
+	"testing"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestParseLanguageTag(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		name        string
+		input       string
+		want        string
+		wantErrCode string
+		wantErr     string
+	}{
+		{
+			name:  "canonicalizes valid tag",
+			input: " en-us ",
+			want:  "en-US",
+		},
+		{
+			name:        "rejects invalid tag",
+			input:       "en-@",
+			wantErrCode: ErrorCodeInvalidRequest,
+			wantErr:     "language tag must be a valid BCP 47 language tag",
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			got, err := ParseLanguageTag(tt.input)
+			if tt.wantErr != "" {
+				require.Error(t, err)
+				require.Empty(t, got)
+				require.Equal(t, tt.wantErrCode, CodeOf(err))
+				require.Equal(t, tt.wantErr, err.Error())
+				return
+			}
+
+			require.NoError(t, err)
+			require.Equal(t, tt.want, got.String())
+		})
+	}
+}
+
+func TestParseTimeZoneName(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		name        string
+		input       string
+		want        string
+		wantErrCode string
+		wantErr     string
+	}{
+		{
+			name:  "accepts valid zone",
+			input: " Europe/Kaliningrad ",
+			want:  "Europe/Kaliningrad",
+		},
+		{
+			name:        "rejects invalid zone",
+			input:       "Mars/Olympus",
+			wantErrCode: ErrorCodeInvalidRequest,
+			wantErr:     "time zone name must be a valid IANA time zone name",
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			got, err := ParseTimeZoneName(tt.input)
+			if tt.wantErr != "" {
+				require.Error(t, err)
+				require.Empty(t, got)
+				require.Equal(t, tt.wantErrCode, CodeOf(err))
+				require.Equal(t, tt.wantErr, err.Error())
+				return
+			}
+
+			require.NoError(t, err)
+			require.Equal(t, tt.want, got.String())
+		})
+	}
+}
+
+func TestParseRegistrationPreferredLanguage(t *testing.T) {
+	t.Parallel()
+
+	got, err := ParseRegistrationPreferredLanguage(" en-us ")
+	require.NoError(t, err)
+	require.Equal(t, "en-US", got.String())
+
+	_, err = ParseRegistrationPreferredLanguage("bad@@tag")
+	require.Error(t, err)
+	require.Equal(t, ErrorCodeInvalidRequest, CodeOf(err))
+	require.Equal(t, "registration_context.preferred_language must be a valid BCP 47 language tag", err.Error())
+}
+
+func TestParseRegistrationTimeZoneName(t *testing.T) {
+	t.Parallel()
+
+	got, err := ParseRegistrationTimeZoneName(" Europe/Kaliningrad ")
+	require.NoError(t, err)
+	require.Equal(t, "Europe/Kaliningrad", got.String())
+
+	_, err = ParseRegistrationTimeZoneName("Mars/Olympus")
+	require.Error(t, err)
+	require.Equal(t, ErrorCodeInvalidRequest, CodeOf(err))
+	require.Equal(t, "registration_context.time_zone must be a valid IANA time zone name", err.Error())
+}
diff --git a/user/internal/service/shared/observability.go b/user/internal/service/shared/observability.go
new file mode 100644
index 0000000..5c95fdb
--- /dev/null
+++ b/user/internal/service/shared/observability.go
@@ -0,0 +1,73 @@
+package shared
+
+import (
+	"context"
+	"log/slog"
+
+	"galaxy/user/internal/logging"
+)
+
+// LogServiceOutcome writes one structured service-level outcome log with a
+// stable severity derived from err and with trace fields attached when ctx
+// carries an active span.
+func LogServiceOutcome(logger *slog.Logger, ctx context.Context, message string, err error, attrs ...any) {
+	if logger == nil {
+		logger = slog.Default()
+	}
+
+	attrs = append(attrs, logging.TraceAttrsFromContext(ctx)...)
+
+	switch {
+	case err == nil:
+		logger.InfoContext(ctx, message, attrs...)
+	case isExpectedServiceErrorCode(CodeOf(err)):
+		logger.WarnContext(ctx, message, append(attrs, "error", err.Error())...)
+	default:
+		logger.ErrorContext(ctx, message, append(attrs, "error", err.Error())...)
+	}
+}
+
+// MetricOutcome returns the stable low-cardinality outcome label derived from
+// err for service metrics.
+func MetricOutcome(err error) string {
+	if err == nil {
+		return "success"
+	}
+
+	code := CodeOf(err)
+	if code == "" {
+		return ErrorCodeInternalError
+	}
+
+	return code
+}
+
+// LogEventPublicationFailure writes one structured error log for an auxiliary
+// post-commit event publication failure.
+func LogEventPublicationFailure(logger *slog.Logger, ctx context.Context, eventType string, err error, attrs ...any) {
+	if err == nil {
+		return
+	}
+	if logger == nil {
+		logger = slog.Default()
+	}
+
+	attrs = append(attrs,
+		"event_type", eventType,
+		"error", err.Error(),
+	)
+	attrs = append(attrs, logging.TraceAttrsFromContext(ctx)...)
+
+	logger.ErrorContext(ctx, "auxiliary event publication failed", attrs...)
+}
+
+func isExpectedServiceErrorCode(code string) bool {
+	switch code {
+	case ErrorCodeInvalidRequest,
+		ErrorCodeConflict,
+		ErrorCodeSubjectNotFound:
+		return true
+	default:
+		return false
+	}
+}
diff --git a/user/internal/service/shared/race_name.go b/user/internal/service/shared/race_name.go
new file mode 100644
index 0000000..dc516e6
--- /dev/null
+++ b/user/internal/service/shared/race_name.go
@@ -0,0 +1,49 @@
+package shared
+
+import (
+	"fmt"
+	"time"
+
+	"galaxy/user/internal/domain/account"
+	"galaxy/user/internal/domain/common"
+	"galaxy/user/internal/ports"
+)
+
+// BuildRaceNameReservation constructs one validated race-name reservation
+// record for userID and raceName at reservedAt.
+func BuildRaceNameReservation(
+	policy ports.RaceNamePolicy,
+	userID common.UserID,
+	raceName common.RaceName,
+	reservedAt time.Time,
+) (account.RaceNameReservation, error) {
+	if policy == nil {
+		return account.RaceNameReservation{}, fmt.Errorf("build race-name reservation: race-name policy must not be nil")
+	}
+	if err := userID.Validate(); err != nil {
+		return account.RaceNameReservation{}, fmt.Errorf("build race-name reservation: %w", err)
+	}
+	if err := raceName.Validate(); err != nil {
+		return account.RaceNameReservation{}, fmt.Errorf("build race-name reservation: %w", err)
+	}
+	if err := common.ValidateTimestamp("build race-name reservation reserved at", reservedAt); err != nil {
+		return account.RaceNameReservation{}, err
+	}
+
+	canonicalKey, err := policy.CanonicalKey(raceName)
+	if err != nil {
+		return account.RaceNameReservation{}, fmt.Errorf("build race-name reservation: %w", err)
+	}
+
+	record := account.RaceNameReservation{
+		CanonicalKey: canonicalKey,
+		UserID:       userID,
+		RaceName:     raceName,
+		ReservedAt:   reservedAt.UTC(),
+	}
+	if err := record.Validate(); err != nil {
+		return account.RaceNameReservation{}, fmt.Errorf("build race-name reservation: %w", err)
+	}
+
+	return record, nil
+}
diff --git a/user/internal/telemetry/runtime.go b/user/internal/telemetry/runtime.go
new file mode 100644
index 0000000..b377765
--- /dev/null
+++ b/user/internal/telemetry/runtime.go
@@ -0,0 +1,549 @@
+// Package telemetry provides shared OpenTelemetry runtime helpers and
+// low-cardinality user-service instruments.
+package telemetry
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"io"
+	"log/slog"
+	"net/http"
+	"os"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/prometheus/client_golang/prometheus"
+	"github.com/prometheus/client_golang/prometheus/promhttp"
+	"go.opentelemetry.io/otel"
+	"go.opentelemetry.io/otel/attribute"
+	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
+	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp"
+	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
+	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
+	otelprom "go.opentelemetry.io/otel/exporters/prometheus"
+	"go.opentelemetry.io/otel/exporters/stdout/stdoutmetric"
+	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
+	"go.opentelemetry.io/otel/metric"
+	"go.opentelemetry.io/otel/propagation"
+	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
+	"go.opentelemetry.io/otel/sdk/resource"
+	sdktrace "go.opentelemetry.io/otel/sdk/trace"
+	oteltrace "go.opentelemetry.io/otel/trace"
+)
+
+const meterName = "galaxy/user"
+
+const (
+	defaultServiceName = "galaxy-user"
+
+	processExporterNone         = "none"
+	processExporterOTLP         = "otlp"
+	processProtocolHTTPProtobuf = "http/protobuf"
+	processProtocolGRPC         = "grpc"
+)
+
+// ProcessConfig configures the process-wide OpenTelemetry runtime.
+type ProcessConfig struct {
+	// ServiceName overrides the default OpenTelemetry service name.
+	ServiceName string
+
+	// TracesExporter selects the external traces exporter. Supported values are
+	// `none` and `otlp`.
+	TracesExporter string
+
+	// MetricsExporter selects the external metrics exporter. Supported values
+	// are `none` and `otlp`.
+	MetricsExporter string
+
+	// TracesProtocol selects the OTLP traces protocol when TracesExporter is
+	// `otlp`.
+	TracesProtocol string
+
+	// MetricsProtocol selects the OTLP metrics protocol when MetricsExporter is
+	// `otlp`.
+	MetricsProtocol string
+
+	// StdoutTracesEnabled enables the additional stdout trace exporter used for
+	// local development and debugging.
+	StdoutTracesEnabled bool
+
+	// StdoutMetricsEnabled enables the additional stdout metric exporter used
+	// for local development and debugging.
+	StdoutMetricsEnabled bool
+}
+
+// Validate reports whether cfg contains a supported OpenTelemetry exporter
+// configuration.
+func (cfg ProcessConfig) Validate() error {
+	switch cfg.TracesExporter {
+	case processExporterNone, processExporterOTLP:
+	default:
+		return fmt.Errorf("unsupported traces exporter %q", cfg.TracesExporter)
+	}
+
+	switch cfg.MetricsExporter {
+	case processExporterNone, processExporterOTLP:
+	default:
+		return fmt.Errorf("unsupported metrics exporter %q", cfg.MetricsExporter)
+	}
+
+	if cfg.TracesProtocol != "" && cfg.TracesProtocol != processProtocolHTTPProtobuf && cfg.TracesProtocol != processProtocolGRPC {
+		return fmt.Errorf("unsupported OTLP traces protocol %q", cfg.TracesProtocol)
+	}
+	if cfg.MetricsProtocol != "" && cfg.MetricsProtocol != processProtocolHTTPProtobuf && cfg.MetricsProtocol != processProtocolGRPC {
+		return fmt.Errorf("unsupported OTLP metrics protocol %q", cfg.MetricsProtocol)
+	}
+
+	return nil
+}
+
+// Runtime owns the user-service OpenTelemetry providers, the Prometheus
+// metrics handler, and the custom low-cardinality instruments.
+type Runtime struct {
+	tracerProvider oteltrace.TracerProvider
+	meterProvider  metric.MeterProvider
+	promHandler    http.Handler
+
+	shutdownMu   sync.Mutex
+	shutdownDone bool
+	shutdownErr  error
+	shutdownFns  []func(context.Context) error
+
+	internalHTTPRequests         metric.Int64Counter
+	internalHTTPDuration         metric.Float64Histogram
+	authResolutionOutcomes       metric.Int64Counter
+	userCreationOutcomes         metric.Int64Counter
+	raceNameReservationConflicts metric.Int64Counter
+	entitlementMutations         metric.Int64Counter
+	sanctionMutations            metric.Int64Counter
+	limitMutations               metric.Int64Counter
+	eventPublicationFailures     metric.Int64Counter
+}
+
+// New constructs a lightweight telemetry runtime around meterProvider for
+// tests and embedded use cases that do not need process-level exporter wiring.
+func New(meterProvider metric.MeterProvider) (*Runtime, error) {
+	return NewWithProviders(meterProvider, nil)
+}
+
+// NewWithProviders constructs a telemetry runtime around explicitly supplied
+// meterProvider and tracerProvider values.
+func NewWithProviders(meterProvider metric.MeterProvider, tracerProvider oteltrace.TracerProvider) (*Runtime, error) {
+	if meterProvider == nil {
+		meterProvider = otel.GetMeterProvider()
+	}
+	if tracerProvider == nil {
+		tracerProvider = otel.GetTracerProvider()
+	}
+	if meterProvider == nil {
+		return nil, errors.New("new user telemetry runtime: nil meter provider")
+	}
+	if tracerProvider == nil {
+		return nil, errors.New("new user telemetry runtime: nil tracer provider")
+	}
+
+	return buildRuntime(meterProvider, tracerProvider, http.NotFoundHandler(), nil)
+}
+
+// NewProcess constructs the process-wide user-service OpenTelemetry runtime
+// from cfg, installs the resulting providers globally, and returns the
+// runtime.
+func NewProcess(ctx context.Context, cfg ProcessConfig, logger *slog.Logger) (*Runtime, error) {
+	return newProcess(ctx, cfg, logger, os.Stdout, os.Stdout)
+}
+
+// TracerProvider returns the runtime tracer provider.
+func (r *Runtime) TracerProvider() oteltrace.TracerProvider {
+	if r == nil || r.tracerProvider == nil {
+		return otel.GetTracerProvider()
+	}
+
+	return r.tracerProvider
+}
+
+// MeterProvider returns the runtime meter provider.
+func (r *Runtime) MeterProvider() metric.MeterProvider {
+	if r == nil || r.meterProvider == nil {
+		return otel.GetMeterProvider()
+	}
+
+	return r.meterProvider
+}
+
+// Handler returns the Prometheus handler that should be mounted on the admin
+// listener.
+func (r *Runtime) Handler() http.Handler {
+	if r == nil || r.promHandler == nil {
+		return http.NotFoundHandler()
+	}
+
+	return r.promHandler
+}
+
+// Shutdown flushes and stops the configured telemetry providers. Shutdown is
+// idempotent.
+func (r *Runtime) Shutdown(ctx context.Context) error {
+	if r == nil {
+		return nil
+	}
+
+	r.shutdownMu.Lock()
+	if r.shutdownDone {
+		err := r.shutdownErr
+		r.shutdownMu.Unlock()
+		return err
+	}
+	r.shutdownDone = true
+	r.shutdownMu.Unlock()
+
+	var shutdownErr error
+	for index := len(r.shutdownFns) - 1; index >= 0; index-- {
+		shutdownErr = errors.Join(shutdownErr, r.shutdownFns[index](ctx))
+	}
+
+	r.shutdownMu.Lock()
+	r.shutdownErr = shutdownErr
+	r.shutdownMu.Unlock()
+
+	return shutdownErr
+}
+
+// RecordInternalHTTPRequest records one internal HTTP request outcome.
+func (r *Runtime) RecordInternalHTTPRequest(ctx context.Context, attrs []attribute.KeyValue, duration time.Duration) {
+	if r == nil {
+		return
+	}
+
+	options := metric.WithAttributes(attrs...)
+	r.internalHTTPRequests.Add(normalizeContext(ctx), 1, options)
+	r.internalHTTPDuration.Record(normalizeContext(ctx), duration.Seconds()*1000, options)
+}
+
+// RecordAuthResolutionOutcome records one auth-facing resolution outcome.
+func (r *Runtime) RecordAuthResolutionOutcome(ctx context.Context, operation string, outcome string) {
+	if r == nil {
+		return
+	}
+
+	r.authResolutionOutcomes.Add(
+		normalizeContext(ctx),
+		1,
+		metric.WithAttributes(
+			attribute.String("operation", strings.TrimSpace(operation)),
+			attribute.String("outcome", strings.TrimSpace(outcome)),
+		),
+	)
+}
+
+// RecordUserCreationOutcome records one ensure-by-email coarse outcome.
+func (r *Runtime) RecordUserCreationOutcome(ctx context.Context, outcome string) {
+	if r == nil {
+		return
+	}
+
+	r.userCreationOutcomes.Add(
+		normalizeContext(ctx),
+		1,
+		metric.WithAttributes(attribute.String("outcome", strings.TrimSpace(outcome))),
+	)
+}
+
+// RecordRaceNameReservationConflict records one race-name reservation conflict
+// for operation.
+func (r *Runtime) RecordRaceNameReservationConflict(ctx context.Context, operation string) {
+	if r == nil {
+		return
+	}
+
+	r.raceNameReservationConflicts.Add(
+		normalizeContext(ctx),
+		1,
+		metric.WithAttributes(attribute.String("operation", strings.TrimSpace(operation))),
+	)
+}
+
+// RecordEntitlementMutation records one entitlement command outcome.
+func (r *Runtime) RecordEntitlementMutation(ctx context.Context, command string, outcome string) {
+	if r == nil {
+		return
+	}
+
+	r.entitlementMutations.Add(
+		normalizeContext(ctx),
+		1,
+		metric.WithAttributes(
+			attribute.String("command", strings.TrimSpace(command)),
+			attribute.String("outcome", strings.TrimSpace(outcome)),
+		),
+	)
+}
+
+// RecordSanctionMutation records one sanction command outcome.
+func (r *Runtime) RecordSanctionMutation(ctx context.Context, command string, outcome string) {
+	if r == nil {
+		return
+	}
+
+	r.sanctionMutations.Add(
+		normalizeContext(ctx),
+		1,
+		metric.WithAttributes(
+			attribute.String("command", strings.TrimSpace(command)),
+			attribute.String("outcome", strings.TrimSpace(outcome)),
+		),
+	)
+}
+
+// RecordLimitMutation records one limit command outcome.
+func (r *Runtime) RecordLimitMutation(ctx context.Context, command string, outcome string) {
+	if r == nil {
+		return
+	}
+
+	r.limitMutations.Add(
+		normalizeContext(ctx),
+		1,
+		metric.WithAttributes(
+			attribute.String("command", strings.TrimSpace(command)),
+			attribute.String("outcome", strings.TrimSpace(outcome)),
+		),
+	)
+}
+
+// RecordEventPublicationFailure records one post-commit auxiliary event
+// publication failure.
+func (r *Runtime) RecordEventPublicationFailure(ctx context.Context, eventType string) {
+	if r == nil {
+		return
+	}
+
+	r.eventPublicationFailures.Add(
+		normalizeContext(ctx),
+		1,
+		metric.WithAttributes(attribute.String("event_type", strings.TrimSpace(eventType))),
+	)
+}
+
+func newProcess(ctx context.Context, cfg ProcessConfig, logger *slog.Logger, stdoutTraceWriter io.Writer, stdoutMetricWriter io.Writer) (*Runtime, error) {
+	if ctx == nil {
+		return nil, errors.New("new user telemetry process: nil context")
+	}
+	if err := cfg.Validate(); err != nil {
+		return nil, fmt.Errorf("new user telemetry process: %w", err)
+	}
+	if logger == nil {
+		logger = slog.Default()
+	}
+	if strings.TrimSpace(cfg.ServiceName) == "" {
+		cfg.ServiceName = defaultServiceName
+	}
+
+	res, err := resource.New(
+		ctx,
+		resource.WithAttributes(attribute.String("service.name", cfg.ServiceName)),
+	)
+	if err != nil {
+		return nil, fmt.Errorf("new user telemetry process: resource: %w", err)
+	}
+
+	tracerProvider, err := newTracerProvider(ctx, res, cfg, stdoutTraceWriter)
+	if err != nil {
+		return nil, fmt.Errorf("new user telemetry process: tracer provider: %w", err)
+	}
+
+	registry := prometheus.NewRegistry()
+	prometheusExporter, err := otelprom.New(otelprom.WithRegisterer(registry))
+	if err != nil {
+		return nil, fmt.Errorf("new user telemetry process: prometheus exporter: %w", err)
+	}
+
+	meterProvider, err := newMeterProvider(ctx, res, cfg, prometheusExporter, stdoutMetricWriter)
+	if err != nil {
+		return nil, fmt.Errorf("new user telemetry process: meter provider: %w", err)
+	}
+
+	otel.SetTracerProvider(tracerProvider)
+	otel.SetMeterProvider(meterProvider)
+	otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
+		propagation.TraceContext{},
+		propagation.Baggage{},
+	))
+
+	runtime, err := buildRuntime(
+		meterProvider,
+		tracerProvider,
+		promhttp.HandlerFor(registry, promhttp.HandlerOpts{}),
+		[]func(context.Context) error{
+			meterProvider.Shutdown,
+			tracerProvider.Shutdown,
+		},
+	)
+	if err != nil {
+		return nil, fmt.Errorf("new user telemetry process: %w", err)
+	}
+
+	logger.InfoContext(ctx, "user telemetry configured",
+		"service_name", cfg.ServiceName,
+		"traces_exporter", cfg.TracesExporter,
+		"metrics_exporter", cfg.MetricsExporter,
+		"stdout_traces_enabled", cfg.StdoutTracesEnabled,
+		"stdout_metrics_enabled", cfg.StdoutMetricsEnabled,
+	)
+
+	return runtime, nil
+}
+
+func buildRuntime(
+	meterProvider metric.MeterProvider,
+	tracerProvider oteltrace.TracerProvider,
+	promHandler http.Handler,
+	shutdownFns []func(context.Context) error,
+) (*Runtime, error) {
+	meter := meterProvider.Meter(meterName)
+
+	internalHTTPRequests, err := meter.Int64Counter("user.internal_http.requests")
+	if err != nil {
+		return nil, fmt.Errorf("build user telemetry runtime: internal_http.requests: %w", err)
+	}
+	internalHTTPDuration, err := meter.Float64Histogram("user.internal_http.duration", metric.WithUnit("ms"))
+	if err != nil {
+		return nil, fmt.Errorf("build user telemetry runtime: internal_http.duration: %w", err)
+	}
+	authResolutionOutcomes, err := meter.Int64Counter("user.auth_resolution.outcomes")
+	if err != nil {
+		return nil, fmt.Errorf("build user telemetry runtime: auth_resolution.outcomes: %w", err)
+	}
+	userCreationOutcomes, err := meter.Int64Counter("user.user_creation.outcomes")
+	if err != nil {
+		return nil, fmt.Errorf("build user telemetry runtime: user_creation.outcomes: %w", err)
+	}
+	raceNameReservationConflicts, err := meter.Int64Counter("user.race_name.reservation_conflicts")
+	if err != nil {
+		return nil, fmt.Errorf("build user telemetry runtime: race_name.reservation_conflicts: %w", err)
+	}
+	entitlementMutations, err := meter.Int64Counter("user.entitlement.mutations")
+	if err != nil {
+		return nil, fmt.Errorf("build user telemetry runtime: entitlement.mutations: %w", err)
+	}
+	sanctionMutations, err := meter.Int64Counter("user.sanction.mutations")
+	if err != nil {
+		return nil, fmt.Errorf("build user telemetry runtime: sanction.mutations: %w", err)
+	}
+	limitMutations, err := meter.Int64Counter("user.limit.mutations")
+	if err != nil {
+		return nil, fmt.Errorf("build user telemetry runtime: limit.mutations: %w", err)
+	}
+	eventPublicationFailures, err := meter.Int64Counter("user.event_publication_failures")
+	if err != nil {
+		return nil, fmt.Errorf("build user telemetry runtime: event_publication_failures: %w", err)
+	}
+
+	if promHandler == nil {
+		promHandler = http.NotFoundHandler()
+	}
+
+	return &Runtime{
+		tracerProvider:               tracerProvider,
+		meterProvider:                meterProvider,
+		promHandler:                  promHandler,
+		shutdownFns:                  shutdownFns,
+		internalHTTPRequests:         internalHTTPRequests,
+		internalHTTPDuration:         internalHTTPDuration,
+		authResolutionOutcomes:       authResolutionOutcomes,
+		userCreationOutcomes:         userCreationOutcomes,
+		raceNameReservationConflicts: raceNameReservationConflicts,
+		entitlementMutations:         entitlementMutations,
+		sanctionMutations:            sanctionMutations,
+		limitMutations:               limitMutations,
+		eventPublicationFailures:     eventPublicationFailures,
+	}, nil
+}
+
+func newTracerProvider(ctx context.Context, res *resource.Resource, cfg ProcessConfig, stdoutWriter io.Writer) (*sdktrace.TracerProvider, error) {
+	options := []sdktrace.TracerProviderOption{sdktrace.WithResource(res)}
+
+	if cfg.TracesExporter == processExporterOTLP {
+		exporter, err := newOTLPTraceExporter(ctx, cfg.TracesProtocol)
+		if err != nil {
+			return nil, err
+		}
+		options = append(options, sdktrace.WithBatcher(exporter))
+	}
+	if cfg.StdoutTracesEnabled {
+		exporter, err := stdouttrace.New(
+			stdouttrace.WithPrettyPrint(),
+			stdouttrace.WithWriter(stdoutWriter),
+		)
+		if err != nil {
+			return nil, err
+		}
+		options = append(options, sdktrace.WithBatcher(exporter))
+	}
+
+	return sdktrace.NewTracerProvider(options...), nil
+}
+
+func newMeterProvider(
+	ctx context.Context,
+	res *resource.Resource,
+	cfg ProcessConfig,
+	prometheusExporter sdkmetric.Reader,
+	stdoutWriter io.Writer,
+) (*sdkmetric.MeterProvider, error) {
+	options := []sdkmetric.Option{
+		sdkmetric.WithResource(res),
+		sdkmetric.WithReader(prometheusExporter),
+	}
+
+	if cfg.MetricsExporter == processExporterOTLP {
+		exporter, err := newOTLPMetricExporter(ctx, cfg.MetricsProtocol)
+		if err != nil {
+			return nil, err
+		}
+		options = append(options, sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exporter)))
+	}
+	if cfg.StdoutMetricsEnabled {
+		exporter, err := stdoutmetric.New(
+			stdoutmetric.WithPrettyPrint(),
+			stdoutmetric.WithWriter(stdoutWriter),
+		)
+		if err != nil {
+			return nil, err
+		}
+		options = append(options, sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exporter)))
+	}
+
+	return sdkmetric.NewMeterProvider(options...), nil
+}
+
+func newOTLPTraceExporter(ctx context.Context, protocol string) (sdktrace.SpanExporter, error) {
+	switch protocol {
+	case "", processProtocolHTTPProtobuf:
+		return otlptracehttp.New(ctx)
+	case processProtocolGRPC:
+		return otlptracegrpc.New(ctx)
+	default:
+		return nil, fmt.Errorf("unsupported OTLP traces protocol %q", protocol)
+	}
+}
+
+func newOTLPMetricExporter(ctx context.Context, protocol string) (sdkmetric.Exporter, error) {
+	switch protocol {
+	case "", processProtocolHTTPProtobuf:
+		return otlpmetrichttp.New(ctx)
+	case processProtocolGRPC:
+		return otlpmetricgrpc.New(ctx)
+	default:
+		return nil, fmt.Errorf("unsupported OTLP metrics protocol %q", protocol)
+	}
+}
+
+func normalizeContext(ctx context.Context) context.Context {
+
if ctx == nil { + return context.Background() + } + + return ctx +} diff --git a/user/internal/telemetry/runtime_test.go b/user/internal/telemetry/runtime_test.go new file mode 100644 index 0000000..5a5e4a8 --- /dev/null +++ b/user/internal/telemetry/runtime_test.go @@ -0,0 +1,186 @@ +package telemetry + +import ( + "bytes" + "context" + "io" + "log/slog" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + "go.opentelemetry.io/otel/attribute" + sdkmetric "go.opentelemetry.io/otel/sdk/metric" + "go.opentelemetry.io/otel/sdk/metric/metricdata" + sdktrace "go.opentelemetry.io/otel/sdk/trace" +) + +func TestNewProcessBuildsWithoutExporters(t *testing.T) { + t.Parallel() + + runtime, err := newProcess(context.Background(), ProcessConfig{ + ServiceName: "galaxy-user-test", + TracesExporter: processExporterNone, + MetricsExporter: processExporterNone, + }, slog.New(slog.NewTextHandler(io.Discard, nil)), io.Discard, io.Discard) + require.NoError(t, err) + + assert.NotNil(t, runtime.TracerProvider()) + assert.NotNil(t, runtime.MeterProvider()) + assert.NotNil(t, runtime.Handler()) + require.NoError(t, runtime.Shutdown(context.Background())) + require.NoError(t, runtime.Shutdown(context.Background())) +} + +func TestNewProcessBuildsWithStdoutExporters(t *testing.T) { + t.Parallel() + + traceBuffer := &bytes.Buffer{} + metricBuffer := &bytes.Buffer{} + + runtime, err := newProcess(context.Background(), ProcessConfig{ + ServiceName: "galaxy-user-test", + TracesExporter: processExporterNone, + MetricsExporter: processExporterNone, + StdoutTracesEnabled: true, + StdoutMetricsEnabled: true, + }, slog.New(slog.NewTextHandler(io.Discard, nil)), traceBuffer, metricBuffer) + require.NoError(t, err) + + ctx, span := runtime.TracerProvider().Tracer("test").Start(context.Background(), "internal-request") + runtime.RecordUserCreationOutcome(ctx, "created") + span.End() + + require.NoError(t, runtime.Shutdown(context.Background())) + 
assert.NotEmpty(t, traceBuffer.String()) + assert.NotEmpty(t, metricBuffer.String()) +} + +func TestNewPreservesBusinessMetrics(t *testing.T) { + t.Parallel() + + reader := sdkmetric.NewManualReader() + meterProvider := sdkmetric.NewMeterProvider(sdkmetric.WithReader(reader)) + tracerProvider := sdktrace.NewTracerProvider() + + runtime, err := NewWithProviders(meterProvider, tracerProvider) + require.NoError(t, err) + + runtime.RecordInternalHTTPRequest(context.Background(), []attribute.KeyValue{ + attribute.String("route", "/api/v1/internal/users/:user_id/exists"), + attribute.String("method", "GET"), + attribute.String("edge_outcome", "success"), + }, 125*time.Millisecond) + runtime.RecordAuthResolutionOutcome(context.Background(), "resolve_by_email", "existing") + runtime.RecordUserCreationOutcome(context.Background(), "created") + runtime.RecordRaceNameReservationConflict(context.Background(), "update_my_profile") + runtime.RecordEntitlementMutation(context.Background(), "grant", "success") + runtime.RecordSanctionMutation(context.Background(), "apply", "conflict") + runtime.RecordLimitMutation(context.Background(), "remove", "subject_not_found") + runtime.RecordEventPublicationFailure(context.Background(), "user.profile.changed") + + assertMetricCount(t, reader, "user.internal_http.requests", map[string]string{ + "route": "/api/v1/internal/users/:user_id/exists", + "method": "GET", + "edge_outcome": "success", + }, 1) + assertHistogramCount(t, reader, "user.internal_http.duration", map[string]string{ + "route": "/api/v1/internal/users/:user_id/exists", + "method": "GET", + "edge_outcome": "success", + }, 1) + assertMetricCount(t, reader, "user.auth_resolution.outcomes", map[string]string{ + "operation": "resolve_by_email", + "outcome": "existing", + }, 1) + assertMetricCount(t, reader, "user.user_creation.outcomes", map[string]string{ + "outcome": "created", + }, 1) + assertMetricCount(t, reader, "user.race_name.reservation_conflicts", map[string]string{ + 
"operation": "update_my_profile", + }, 1) + assertMetricCount(t, reader, "user.entitlement.mutations", map[string]string{ + "command": "grant", + "outcome": "success", + }, 1) + assertMetricCount(t, reader, "user.sanction.mutations", map[string]string{ + "command": "apply", + "outcome": "conflict", + }, 1) + assertMetricCount(t, reader, "user.limit.mutations", map[string]string{ + "command": "remove", + "outcome": "subject_not_found", + }, 1) + assertMetricCount(t, reader, "user.event_publication_failures", map[string]string{ + "event_type": "user.profile.changed", + }, 1) +} + +func assertMetricCount(t *testing.T, reader *sdkmetric.ManualReader, metricName string, wantAttrs map[string]string, wantValue int64) { + t.Helper() + + var resourceMetrics metricdata.ResourceMetrics + require.NoError(t, reader.Collect(context.Background(), &resourceMetrics)) + + for _, scopeMetrics := range resourceMetrics.ScopeMetrics { + for _, metric := range scopeMetrics.Metrics { + if metric.Name != metricName { + continue + } + + sum, ok := metric.Data.(metricdata.Sum[int64]) + require.True(t, ok) + + for _, point := range sum.DataPoints { + if hasMetricAttributes(point.Attributes.ToSlice(), wantAttrs) { + assert.Equal(t, wantValue, point.Value) + return + } + } + } + } + + require.Failf(t, "test failed", "metric %q with attrs %v not found", metricName, wantAttrs) +} + +func assertHistogramCount(t *testing.T, reader *sdkmetric.ManualReader, metricName string, wantAttrs map[string]string, wantCount uint64) { + t.Helper() + + var resourceMetrics metricdata.ResourceMetrics + require.NoError(t, reader.Collect(context.Background(), &resourceMetrics)) + + for _, scopeMetrics := range resourceMetrics.ScopeMetrics { + for _, metric := range scopeMetrics.Metrics { + if metric.Name != metricName { + continue + } + + histogram, ok := metric.Data.(metricdata.Histogram[float64]) + require.True(t, ok) + + for _, point := range histogram.DataPoints { + if 
hasMetricAttributes(point.Attributes.ToSlice(), wantAttrs) { + assert.Equal(t, wantCount, point.Count) + return + } + } + } + } + + require.Failf(t, "test failed", "histogram %q with attrs %v not found", metricName, wantAttrs) +} + +func hasMetricAttributes(values []attribute.KeyValue, want map[string]string) bool { + if len(values) != len(want) { + return false + } + + for _, value := range values { + if want[string(value.Key)] != value.Value.AsString() { + return false + } + } + + return true +} diff --git a/user/openapi.yaml b/user/openapi.yaml index 4f23e04..0da01d2 100644 --- a/user/openapi.yaml +++ b/user/openapi.yaml @@ -3,10 +3,15 @@ info: title: Galaxy User Service Internal REST API version: v1 description: | - This specification documents the planned trusted internal REST contract of + This specification documents the trusted internal REST contract of `galaxy/user`. + The current runtime is implemented as an internal-only HTTP service backed + by Redis. + Scope: + - regular-user state only; system-admin identity belongs to future + `Admin Service` - auth-facing user resolution, ensure, existence, and subject blocking - gateway-facing authenticated account reads and self-service mutations - lobby-facing eligibility snapshots @@ -14,14 +19,25 @@ info: - admin/internal reads, filtered listing, and explicit mutation commands This specification is internal REST only. It intentionally does not - describe public edge transport, gateway gRPC, or any future asynchronous - event payloads. + describe public edge transport, gateway gRPC, or the auxiliary async + event contracts documented in `README.md` and `docs/flows.md`. + + The auth-facing paths listed under `AuthIntegration` are already reserved + by `Auth / Session Service` and their route shapes must remain stable. 
+ + Current transport rules: + - request bodies are strict JSON only + - unknown fields are rejected + - trailing JSON input is rejected + - error responses use `{ "error": { "code", "message" } }` + - stable error codes are `invalid_request`, `conflict`, + `subject_not_found`, `internal_error`, and `service_unavailable` servers: - url: http://localhost:8091 - description: Example local internal listener for User Service. + description: Default local internal listener for User Service. tags: - name: AuthIntegration - description: Trusted auth-facing user ownership and block-policy endpoints. + description: Trusted auth-facing user ownership and block-policy endpoints with frozen route shapes reserved by `Auth / Session Service`. - name: MyAccount description: Gateway-facing authenticated account queries and self-service mutations. - name: LobbyIntegration @@ -88,9 +104,16 @@ paths: user when registration is allowed, or returns a blocked outcome when policy denies the flow. - `registration_context` is create-only. Implementations must ignore it - for existing users and must not overwrite settings of an already - existing account. + `registration_context` is required on the current auth-to-user call. + Its frozen shape is `preferred_language` plus `time_zone`. The + registration context is create-only. Implementations must ignore it for + existing users and must not overwrite settings of an already existing + account. + + During the current rollout `Auth / Session Service` sends temporary + `preferred_language="en"` and forwards the public confirm `time_zone`. + Gateway-side geoip language derivation is a later rollout and is not + part of the current source-of-truth contract. 
requestBody: required: true content: @@ -193,6 +216,10 @@ paths: - MyAccount operationId: updateMyProfile summary: Update self-service profile fields + description: | + `race_name` uniqueness is enforced through a canonical reservation + policy that is case-insensitive, rejects the frozen anti-fraud + confusable pairs, and preserves the original stored casing. parameters: - $ref: "#/components/parameters/UserIDPath" requestBody: @@ -279,6 +306,14 @@ paths: - GeoIntegration operationId: syncDeclaredCountry summary: Synchronize the current effective declared country + description: | + Applies the latest effective declared country chosen by + `Geo Profile Service`. + + `declared_country` must be a known uppercase ISO 3166-1 alpha-2 + country code. When the supplied value is already stored on the user + account, the command is a no-op and returns the existing + `updated_at` unchanged. parameters: - $ref: "#/components/parameters/UserIDPath" requestBody: @@ -330,7 +365,7 @@ paths: tags: - AdminUsers operationId: getUserByEmail - summary: Read one user by normalized e-mail + summary: Read one user by exact-after-trim e-mail requestBody: required: true content: @@ -385,6 +420,15 @@ paths: - AdminUsers operationId: listUsers summary: List users with deterministic pagination and rich filters + description: | + Returns full user account aggregates ordered by `created_at desc`, then + `user_id desc`. + + All supplied query filters combine with logical `AND`. + + `page_token` is opaque and bound to the normalized filter set that + produced it. Malformed or filter-mismatched tokens return + `400 invalid_request`. parameters: - $ref: "#/components/parameters/PageSize" - $ref: "#/components/parameters/PageToken" @@ -457,6 +501,9 @@ paths: - AdminUsers operationId: grantEntitlement summary: Grant a new entitlement period + description: | + Grants a current paid entitlement when the current effective state is + `free`. 
parameters: - $ref: "#/components/parameters/UserIDPath" requestBody: @@ -488,6 +535,8 @@ paths: - AdminUsers operationId: extendEntitlement summary: Extend the current entitlement period + description: | + Extends the current finite paid entitlement. parameters: - $ref: "#/components/parameters/UserIDPath" requestBody: @@ -519,6 +568,8 @@ paths: - AdminUsers operationId: revokeEntitlement summary: Revoke the effective paid entitlement + description: | + Revokes the current effective paid entitlement. parameters: - $ref: "#/components/parameters/UserIDPath" requestBody: @@ -550,6 +601,8 @@ paths: - AdminUsers operationId: applySanction summary: Apply one sanction record + description: | + Applies one new active sanction record. parameters: - $ref: "#/components/parameters/UserIDPath" requestBody: @@ -581,6 +634,8 @@ paths: - AdminUsers operationId: removeSanction summary: Remove one active sanction record + description: | + Removes the current active sanction for one `sanction_code`. parameters: - $ref: "#/components/parameters/UserIDPath" requestBody: @@ -612,6 +667,9 @@ paths: - AdminUsers operationId: setLimit summary: Set one active user-specific limit record + description: | + Creates one new active limit or replaces the current active record of + the same `limit_code`. parameters: - $ref: "#/components/parameters/UserIDPath" requestBody: @@ -643,6 +701,8 @@ paths: - AdminUsers operationId: removeLimit summary: Remove one active user-specific limit record + description: | + Removes the current active user-specific limit for one `limit_code`. parameters: - $ref: "#/components/parameters/UserIDPath" requestBody: @@ -689,7 +749,7 @@ components: PageToken: name: page_token in: query - description: Opaque deterministic pagination cursor returned by the previous page. + description: Opaque deterministic pagination cursor returned by the previous page and bound to the normalized filter set that produced it. 
Malformed or filter-mismatched tokens return `400 invalid_request`. schema: type: string schemas: @@ -700,27 +760,40 @@ components: Email: type: string format: email - description: Normalized login and contact e-mail address. + description: | + Login and contact e-mail address. The service trims surrounding + whitespace and validates the value structurally, then treats the + trimmed value as the exact stored and lookup value. The service does + not lowercase or otherwise canonicalize e-mail before storage or exact + lookup. RaceName: type: string description: | Stored race name preserving the user-selected casing after successful - uniqueness checks. + uniqueness checks. Uniqueness is enforced against a canonical + reservation key rather than exact string equality only. minLength: 1 maxLength: 64 LanguageTag: type: string - description: BCP 47 language tag. + description: | + BCP 47 language tag. User Service validates semantic correctness on + auth-driven creation and stores the canonical tag form. minLength: 1 maxLength: 32 TimeZoneName: type: string - description: IANA time zone name. + description: | + IANA time zone name. User Service validates semantic correctness on + auth-driven creation and stores the trimmed caller value without + additional alias canonicalization. minLength: 1 maxLength: 128 CountryCode: type: string - description: ISO 3166-1 alpha-2 country code. + description: | + ISO 3166-1 alpha-2 country code in uppercase ASCII form. The geo sync + command additionally rejects well-formed but unknown region codes. pattern: "^[A-Z]{2}$" UserResolutionKind: type: string @@ -756,12 +829,13 @@ components: - profile_update_block LimitCode: type: string + description: | + Current supported user-specific limit codes. Retired legacy codes may + still exist in stored history for backward compatibility, but they are + not part of this write or read contract. 
enum: - max_owned_private_games - - max_active_private_games - max_pending_public_applications - - max_pending_private_join_requests - - max_pending_private_invites_sent - max_active_game_memberships ActorRef: type: object @@ -777,6 +851,12 @@ components: description: Optional stable actor identifier. RegistrationContext: type: object + description: | + Frozen create-only initialization context used by the current + auth-facing ensure-by-email contract. `preferred_language` is + semantically validated as BCP 47 and stored in canonical tag form on + create. `time_zone` is semantically validated as an IANA time zone + name and stored after trim without additional alias canonicalization. additionalProperties: false required: - preferred_language @@ -786,8 +866,10 @@ components: $ref: "#/components/schemas/LanguageTag" description: | Create-only initial preferred language. During the current rollout - phase `Auth / Session Service` sends a temporary `"en"` default - until gateway geoip-based language derivation is deployed. + `Auth / Session Service` sends a temporary `"en"` default and + forwards `time_zone`. Gateway-side geoip derivation is not part of + the current source-of-truth contract. Future derived values must + remain valid BCP 47 tags. time_zone: $ref: "#/components/schemas/TimeZoneName" description: Create-only initial IANA time zone name. @@ -825,6 +907,7 @@ components: additionalProperties: false required: - email + - registration_context properties: email: $ref: "#/components/schemas/Email" @@ -840,6 +923,11 @@ components: $ref: "#/components/schemas/EnsureUserOutcome" user_id: $ref: "#/components/schemas/UserID" + description: | + Present for `existing` and `created`. A `created` outcome returns + the durable newly materialized `user_id` created together with an + initial generated `player-` race name and free + entitlement snapshot. block_reason_code: type: string description: Present only for `outcome=blocked`. 
@@ -874,6 +962,12 @@ components: $ref: "#/components/schemas/UserID" EntitlementSnapshot: type: object + description: | + Materialized current effective entitlement snapshot. + + The current snapshot is read-optimized and repaired lazily when a + finite paid state has already reached `ends_at`, so callers do not + observe stale paid/free state. additionalProperties: false required: - plan_code @@ -927,6 +1021,9 @@ components: ActiveLimit: type: object additionalProperties: false + description: | + Current supported active user-specific limit override. Retired legacy + limit codes are ignored on reads and are not returned. required: - limit_code - value @@ -948,6 +1045,32 @@ components: expires_at: type: string format: date-time + EffectiveLimit: + type: object + additionalProperties: false + description: | + Materialized numeric quota after the frozen `free` or `paid` default + catalog is combined with any active user-specific override for the same + `limit_code`. + + `max_owned_private_games` is meaningful only while the current + entitlement is paid and is omitted from free effective limits. + + `max_active_game_memberships` applies only to public games. + + `max_pending_public_applications` stores the total public-games budget. + `Game Lobby` subtracts current active public memberships from this + value and clamps at `0` to derive remaining pending-application + headroom. + required: + - limit_code + - value + properties: + limit_code: + $ref: "#/components/schemas/LimitCode" + value: + type: integer + minimum: 0 AccountView: type: object additionalProperties: false @@ -1002,6 +1125,10 @@ components: UpdateMyProfileRequest: type: object additionalProperties: false + description: | + The current implementation accepts only `race_name` here. Attempts to + mutate `email` or `declared_country` are rejected as `400 + invalid_request` through strict unknown-field handling. 
required: - race_name properties: @@ -1044,6 +1171,8 @@ components: required: - exists - user_id + - active_sanctions + - effective_limits - markers properties: exists: @@ -1051,20 +1180,30 @@ components: user_id: $ref: "#/components/schemas/UserID" entitlement: + description: | + Current effective entitlement snapshot. Omitted when `exists=false`. $ref: "#/components/schemas/EntitlementSnapshot" active_sanctions: type: array items: $ref: "#/components/schemas/ActiveSanction" effective_limits: + description: | + Materialized effective quotas for the current supported lobby + catalog. Unknown users return an empty array. Free users omit + `max_owned_private_games`. type: array items: - $ref: "#/components/schemas/ActiveLimit" + $ref: "#/components/schemas/EffectiveLimit" markers: $ref: "#/components/schemas/EligibilityMarkers" SyncDeclaredCountryRequest: type: object additionalProperties: false + description: | + Synchronizes the latest effective declared country selected by + `Geo Profile Service`. Repeating the current stored value is accepted + as a no-op. required: - declared_country properties: @@ -1085,6 +1224,9 @@ components: updated_at: type: string format: date-time + description: | + Effective account mutation timestamp. Same-value no-op syncs return + the existing stored timestamp unchanged. UserAdminView: allOf: - $ref: "#/components/schemas/AccountView" @@ -1127,6 +1269,12 @@ components: GrantEntitlementRequest: type: object additionalProperties: false + description: | + Grants one current paid entitlement. + + `plan_code=free` is invalid here. `starts_at` may be current or past, + but not future. Finite paid plans require `ends_at`, while + `paid_lifetime` forbids it. required: - plan_code - source @@ -1148,9 +1296,13 @@ components: ends_at: type: string format: date-time + description: Required for `paid_monthly` and `paid_yearly`; omitted for `paid_lifetime`. 
ExtendEntitlementRequest: type: object additionalProperties: false + description: | + Extends the current finite paid entitlement by appending one new paid + history segment. required: - source - reason_code @@ -1169,6 +1321,9 @@ components: RevokeEntitlementRequest: type: object additionalProperties: false + description: | + Revokes the current effective paid entitlement and materializes a new + `free` snapshot immediately. required: - source - reason_code @@ -1183,6 +1338,8 @@ components: EntitlementCommandResponse: type: object additionalProperties: false + description: Resulting current effective entitlement snapshot after one + successful trusted entitlement command. required: - user_id - entitlement diff --git a/user/openapi_contract_test.go b/user/openapi_contract_test.go new file mode 100644 index 0000000..b090581 --- /dev/null +++ b/user/openapi_contract_test.go @@ -0,0 +1,311 @@ +package user + +import ( + "context" + "encoding/json" + "net/http" + "path/filepath" + "runtime" + "slices" + "testing" + + "github.com/getkin/kin-openapi/openapi3" + "github.com/stretchr/testify/require" +) + +func TestInternalOpenAPISpecValidates(t *testing.T) { + t.Parallel() + + loadOpenAPISpec(t) +} + +func TestInternalOpenAPISpecFreezesEnsureByEmailRegistrationContext(t *testing.T) { + t.Parallel() + + doc := loadOpenAPISpec(t) + operation := getOpenAPIOperation(t, doc, "/api/v1/internal/users/ensure-by-email", http.MethodPost) + + assertSchemaRef(t, requestSchemaRef(t, operation), "#/components/schemas/EnsureByEmailRequest", "ensure-by-email request schema") + + requestSchema := componentSchemaRef(t, doc, "EnsureByEmailRequest") + assertRequiredFields(t, requestSchema, "email", "registration_context") + assertSchemaRef(t, requestSchema.Value.Properties["email"], "#/components/schemas/Email", "ensure-by-email email property") + assertSchemaRef(t, requestSchema.Value.Properties["registration_context"], "#/components/schemas/RegistrationContext", "ensure-by-email 
registration_context property") + require.Contains(t, marshalOpenAPIJSON(t, requestSchema.Value), `"additionalProperties":false`) + + registrationContext := componentSchemaRef(t, doc, "RegistrationContext") + assertRequiredFields(t, registrationContext, "preferred_language", "time_zone") + assertSchemaRef(t, registrationContext.Value.Properties["preferred_language"], "#/components/schemas/LanguageTag", "registration_context preferred_language property") + assertSchemaRef(t, registrationContext.Value.Properties["time_zone"], "#/components/schemas/TimeZoneName", "registration_context time_zone property") + require.Contains(t, marshalOpenAPIJSON(t, registrationContext.Value), `"additionalProperties":false`) +} + +func TestInternalOpenAPISpecFreezesSharedResponseSchemas(t *testing.T) { + t.Parallel() + + doc := loadOpenAPISpec(t) + + tests := []struct { + name string + path string + method string + status int + wantRef string + }{ + { + name: "get my account", + path: "/api/v1/internal/users/{user_id}/account", + method: http.MethodGet, + status: http.StatusOK, + wantRef: "#/components/schemas/GetMyAccountResponse", + }, + { + name: "update my profile", + path: "/api/v1/internal/users/{user_id}/profile", + method: http.MethodPost, + status: http.StatusOK, + wantRef: "#/components/schemas/GetMyAccountResponse", + }, + { + name: "update my settings", + path: "/api/v1/internal/users/{user_id}/settings", + method: http.MethodPost, + status: http.StatusOK, + wantRef: "#/components/schemas/GetMyAccountResponse", + }, + { + name: "get user eligibility", + path: "/api/v1/internal/users/{user_id}/eligibility", + method: http.MethodGet, + status: http.StatusOK, + wantRef: "#/components/schemas/UserEligibilityResponse", + }, + { + name: "sync declared country", + path: "/api/v1/internal/users/{user_id}/declared-country/sync", + method: http.MethodPost, + status: http.StatusOK, + wantRef: "#/components/schemas/DeclaredCountrySyncResponse", + }, + { + name: "get user by id", + 
path: "/api/v1/internal/users/{user_id}", + method: http.MethodGet, + status: http.StatusOK, + wantRef: "#/components/schemas/UserLookupResponse", + }, + { + name: "get user by email", + path: "/api/v1/internal/user-lookups/by-email", + method: http.MethodPost, + status: http.StatusOK, + wantRef: "#/components/schemas/UserLookupResponse", + }, + { + name: "get user by race name", + path: "/api/v1/internal/user-lookups/by-race-name", + method: http.MethodPost, + status: http.StatusOK, + wantRef: "#/components/schemas/UserLookupResponse", + }, + { + name: "list users", + path: "/api/v1/internal/users", + method: http.MethodGet, + status: http.StatusOK, + wantRef: "#/components/schemas/UserListResponse", + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + operation := getOpenAPIOperation(t, doc, tt.path, tt.method) + assertSchemaRef(t, responseSchemaRef(t, operation, tt.status), tt.wantRef, tt.name+" response schema") + }) + } +} + +func TestInternalOpenAPISpecErrorEnvelopeRemainsStable(t *testing.T) { + t.Parallel() + + doc := loadOpenAPISpec(t) + + errorResponse := componentSchemaRef(t, doc, "ErrorResponse") + assertRequiredFields(t, errorResponse, "error") + require.Contains(t, marshalOpenAPIJSON(t, errorResponse.Value), `"additionalProperties":false`) + assertSchemaRef(t, errorResponse.Value.Properties["error"], "#/components/schemas/ErrorBody", "ErrorResponse error property") + + errorBody := componentSchemaRef(t, doc, "ErrorBody") + assertRequiredFields(t, errorBody, "code", "message") + require.Contains(t, marshalOpenAPIJSON(t, errorBody.Value), `"additionalProperties":false`) + + require.JSONEq( + t, + `{"error":{"code":"invalid_request","message":"request is invalid"}}`, + string(mustMarshalJSON(t, responseExampleValue(t, doc, "InvalidRequestError", "invalidRequest"))), + ) + require.JSONEq( + t, + `{"error":{"code":"subject_not_found","message":"subject not found"}}`, + string(mustMarshalJSON(t, 
responseExampleValue(t, doc, "SubjectNotFoundError", "subjectNotFound"))),
+	)
+}
+
+func loadOpenAPISpec(t *testing.T) *openapi3.T {
+	t.Helper()
+
+	_, thisFile, _, ok := runtime.Caller(0)
+	if !ok {
+		require.FailNow(t, "runtime.Caller failed")
+	}
+
+	specPath := filepath.Join(filepath.Dir(thisFile), "openapi.yaml")
+	loader := openapi3.NewLoader()
+	doc, err := loader.LoadFromFile(specPath)
+	if err != nil {
+		require.Failf(t, "test failed", "load spec %s: %v", specPath, err)
+	}
+	if doc == nil {
+		require.Failf(t, "test failed", "load spec %s: returned nil document", specPath)
+	}
+	if doc.Info == nil {
+		require.Failf(t, "test failed", "load spec %s: missing info section", specPath)
+	}
+	if doc.Info.Version != "v1" {
+		require.Failf(t, "test failed", "spec %s version = %q, want v1", specPath, doc.Info.Version)
+	}
+	if err := doc.Validate(context.Background()); err != nil {
+		require.Failf(t, "test failed", "validate spec %s: %v", specPath, err)
+	}
+
+	return doc
+}
+
+func getOpenAPIOperation(t *testing.T, doc *openapi3.T, path string, method string) *openapi3.Operation {
+	t.Helper()
+
+	if doc.Paths == nil {
+		require.Failf(t, "test failed", "spec is missing paths while looking up %s %s", method, path)
+	}
+	pathItem := doc.Paths.Value(path)
+	if pathItem == nil {
+		require.Failf(t, "test failed", "spec is missing path %s", path)
+	}
+	operation := pathItem.GetOperation(method)
+	if operation == nil {
+		require.Failf(t, "test failed", "spec is missing %s operation for path %s", method, path)
+	}
+
+	return operation
+}
+
+func requestSchemaRef(t *testing.T, operation *openapi3.Operation) *openapi3.SchemaRef {
+	t.Helper()
+
+	if operation.RequestBody == nil || operation.RequestBody.Value == nil {
+		require.FailNow(t, "operation is missing request body")
+	}
+	mediaType := operation.RequestBody.Value.Content.Get("application/json")
+	if mediaType == nil || mediaType.Schema == nil {
+		require.FailNow(t, "operation is missing application/json request schema")
+	}
+
+	return mediaType.Schema
+}
+
+func responseSchemaRef(t *testing.T, operation *openapi3.Operation, status int) *openapi3.SchemaRef {
+	t.Helper()
+
+	if operation.Responses == nil {
+		require.Failf(t, "test failed", "operation is missing responses for status %d", status)
+	}
+	response := operation.Responses.Status(status)
+	if response == nil || response.Value == nil {
+		require.Failf(t, "test failed", "operation is missing response for status %d", status)
+	}
+	mediaType := response.Value.Content.Get("application/json")
+	if mediaType == nil || mediaType.Schema == nil {
+		require.Failf(t, "test failed", "operation response %d is missing application/json schema", status)
+	}
+
+	return mediaType.Schema
+}
+
+func componentSchemaRef(t *testing.T, doc *openapi3.T, name string) *openapi3.SchemaRef {
+	t.Helper()
+
+	if doc.Components == nil {
+		require.Failf(t, "test failed", "spec is missing components while looking up schema %s", name)
+	}
+	schema := doc.Components.Schemas[name]
+	if schema == nil || schema.Value == nil {
+		require.Failf(t, "test failed", "spec is missing schema %s", name)
+	}
+
+	return schema
+}
+
+func responseExampleValue(t *testing.T, doc *openapi3.T, responseName string, exampleName string) any {
+	t.Helper()
+
+	if doc.Components == nil {
+		require.Failf(t, "test failed", "spec is missing components while looking up response %s", responseName)
+	}
+	response := doc.Components.Responses[responseName]
+	if response == nil || response.Value == nil {
+		require.Failf(t, "test failed", "spec is missing response %s", responseName)
+	}
+	mediaType := response.Value.Content.Get("application/json")
+	if mediaType == nil {
+		require.Failf(t, "test failed", "response %s is missing application/json content", responseName)
+	}
+	example := mediaType.Examples[exampleName]
+	if example == nil || example.Value == nil {
+		require.Failf(t, "test failed", "response %s is missing example %s", responseName, exampleName)
+	}
+
+	return example.Value.Value
+}
+
+func assertSchemaRef(t *testing.T, schemaRef *openapi3.SchemaRef, want string, name string) {
+	t.Helper()
+
+	if schemaRef == nil {
+		require.Failf(t, "test failed", "%s schema ref is nil", name)
+	}
+	if schemaRef.Ref != want {
+		require.Failf(t, "test failed", "%s ref = %q, want %q", name, schemaRef.Ref, want)
+	}
+}
+
+func assertRequiredFields(t *testing.T, schemaRef *openapi3.SchemaRef, fields ...string) {
+	t.Helper()
+
+	required := append([]string(nil), schemaRef.Value.Required...)
+	slices.Sort(required)
+	want := append([]string(nil), fields...)
+	slices.Sort(want)
+	if !slices.Equal(required, want) {
+		require.Failf(t, "test failed", "schema required fields = %v, want %v", required, want)
+	}
+}
+
+func mustMarshalJSON(t *testing.T, value any) []byte {
+	t.Helper()
+
+	data, err := json.Marshal(value)
+	if err != nil {
+		require.Failf(t, "test failed", "marshal JSON: %v", err)
+	}
+
+	return data
+}
+
+func marshalOpenAPIJSON(t *testing.T, value any) string {
+	t.Helper()
+
+	return string(mustMarshalJSON(t, value))
+}
diff --git a/user/runtime_contract_test.go b/user/runtime_contract_test.go
new file mode 100644
index 0000000..858aee4
--- /dev/null
+++ b/user/runtime_contract_test.go
@@ -0,0 +1,683 @@
+package user
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"errors"
+	"io"
+	"log/slog"
+	"net"
+	"net/http"
+	"strings"
+	"testing"
+	"time"
+
+	"galaxy/user/internal/app"
+	"galaxy/user/internal/config"
+
+	"github.com/alicebob/miniredis/v2"
+	"github.com/stretchr/testify/require"
+)
+
+type runtimeContractHarness struct {
+	baseURL string
+	client  *http.Client
+
+	runtime *app.Runtime
+	cancel  context.CancelFunc
+	runErr  chan error
+}
+
+func newRuntimeContractHarness(t *testing.T) *runtimeContractHarness {
+	t.Helper()
+
+	redisServer := miniredis.RunT(t)
+
+	cfg := config.DefaultConfig()
+	cfg.Redis.Addr = redisServer.Addr()
+	cfg.InternalHTTP.Addr = freeLoopbackAddress(t)
+	cfg.AdminHTTP.Addr = ""
+	cfg.ShutdownTimeout = 10 * time.Second
+	cfg.Telemetry.TracesExporter = "none"
+	cfg.Telemetry.MetricsExporter = "none"
+
+	logger := slog.New(slog.NewTextHandler(io.Discard, nil))
+	runtime, err := app.NewRuntime(context.Background(), cfg, logger)
+	require.NoError(t, err)
+
+	runCtx, cancel := context.WithCancel(context.Background())
+	runErr := make(chan error, 1)
+	go func() {
+		runErr <- runtime.Run(runCtx)
+	}()
+
+	client := &http.Client{
+		Timeout: 500 * time.Millisecond,
+		Transport: &http.Transport{
+			DisableKeepAlives: true,
+		},
+	}
+
+	harness := &runtimeContractHarness{
+		baseURL: "http://" + cfg.InternalHTTP.Addr,
+		client:  client,
+		runtime: runtime,
+		cancel:  cancel,
+		runErr:  runErr,
+	}
+	harness.waitUntilReady(t)
+
+	t.Cleanup(func() {
+		cancel()
+		select {
+		case err := <-runErr:
+			require.NoError(t, err)
+		case <-time.After(cfg.ShutdownTimeout + 2*time.Second):
+			t.Fatalf("runtime did not stop in time")
+		}
+		require.NoError(t, runtime.Close())
+		client.CloseIdleConnections()
+	})
+
+	return harness
+}
+
+func (h *runtimeContractHarness) waitUntilReady(t *testing.T) {
+	t.Helper()
+
+	require.Eventually(t, func() bool {
+		request, err := http.NewRequest(http.MethodGet, h.baseURL+"/api/v1/internal/users/user-missing/exists", nil)
+		if err != nil {
+			return false
+		}
+
+		response, err := h.client.Do(request)
+		if err != nil {
+			return false
+		}
+		defer response.Body.Close()
+		_, _ = io.Copy(io.Discard, response.Body)
+
+		return response.StatusCode == http.StatusOK
+	}, 5*time.Second, 25*time.Millisecond, "user runtime did not become reachable")
+}
+
+func (h *runtimeContractHarness) ensureUser(t *testing.T, email string, preferredLanguage string, timeZone string) ensureByEmailResponse {
+	t.Helper()
+
+	response := h.postJSON(t, "/api/v1/internal/users/ensure-by-email", map[string]any{
+		"email": email,
+		"registration_context": map[string]string{
+			"preferred_language": preferredLanguage,
+			"time_zone":          timeZone,
+		},
+	})
+
+	var body ensureByEmailResponse
+	requireResponseJSON(t, response, http.StatusOK, &body)
+	return body
+}
+
+func (h *runtimeContractHarness) getMyAccount(t *testing.T, userID string) accountResponse {
+	t.Helper()
+
+	response := h.get(t, "/api/v1/internal/users/"+userID+"/account")
+	var body accountResponse
+	requireResponseJSON(t, response, http.StatusOK, &body)
+	return body
+}
+
+func (h *runtimeContractHarness) currentEntitlementStartsAt(t *testing.T, userID string) time.Time {
+	t.Helper()
+
+	return h.getMyAccount(t, userID).Account.Entitlement.StartsAt
+}
+
+func (h *runtimeContractHarness) updateSettingsRaw(t *testing.T, userID string, body string) httpResponse {
+	t.Helper()
+	return h.postRawJSON(t, "/api/v1/internal/users/"+userID+"/settings", body)
+}
+
+func (h *runtimeContractHarness) getEligibility(t *testing.T, userID string) eligibilityResponse {
+	t.Helper()
+
+	response := h.get(t, "/api/v1/internal/users/"+userID+"/eligibility")
+	var body eligibilityResponse
+	requireResponseJSON(t, response, http.StatusOK, &body)
+	return body
+}
+
+func (h *runtimeContractHarness) syncDeclaredCountry(t *testing.T, userID string, country string) declaredCountrySyncResponse {
+	t.Helper()
+
+	response := h.postJSON(t, "/api/v1/internal/users/"+userID+"/declared-country/sync", map[string]string{
+		"declared_country": country,
+	})
+	var body declaredCountrySyncResponse
+	requireResponseJSON(t, response, http.StatusOK, &body)
+	return body
+}
+
+func (h *runtimeContractHarness) lookupUserByEmail(t *testing.T, email string) userLookupResponse {
+	t.Helper()
+
+	response := h.postJSON(t, "/api/v1/internal/user-lookups/by-email", map[string]string{
+		"email": email,
+	})
+	var body userLookupResponse
+	requireResponseJSON(t, response, http.StatusOK, &body)
+	return body
+}
+
+func (h *runtimeContractHarness) grantPaidEntitlement(t *testing.T, userID string, startsAt time.Time, endsAt time.Time) {
+	t.Helper()
+
+	response := h.postJSON(t, "/api/v1/internal/users/"+userID+"/entitlements/grant", map[string]any{
+		"plan_code":   "paid_monthly",
+		"source":      "admin",
+		"reason_code": "manual_grant",
+		"actor": map[string]string{
+			"type": "admin",
+			"id":   "admin-1",
+		},
+		"starts_at": startsAt.UTC().Format(time.RFC3339Nano),
+		"ends_at":   endsAt.UTC().Format(time.RFC3339Nano),
+	})
+	var body entitlementCommandResponse
+	requireResponseJSON(t, response, http.StatusOK, &body)
+}
+
+func (h *runtimeContractHarness) applySanction(t *testing.T, userID string, sanctionCode string, scope string, appliedAt time.Time) {
+	t.Helper()
+
+	response := h.postJSON(t, "/api/v1/internal/users/"+userID+"/sanctions/apply", map[string]any{
+		"sanction_code": sanctionCode,
+		"scope":         scope,
+		"reason_code":   "manual_block",
+		"actor": map[string]string{
+			"type": "admin",
+			"id":   "admin-1",
+		},
+		"applied_at": appliedAt.UTC().Format(time.RFC3339Nano),
+	})
+	var body sanctionCommandResponse
+	requireResponseJSON(t, response, http.StatusOK, &body)
+}
+
+func (h *runtimeContractHarness) setLimit(t *testing.T, userID string, limitCode string, value int, appliedAt time.Time) {
+	t.Helper()
+
+	response := h.postJSON(t, "/api/v1/internal/users/"+userID+"/limits/set", map[string]any{
+		"limit_code":  limitCode,
+		"value":       value,
+		"reason_code": "manual_override",
+		"actor": map[string]string{
+			"type": "admin",
+			"id":   "admin-1",
+		},
+		"applied_at": appliedAt.UTC().Format(time.RFC3339Nano),
+	})
+	var body limitCommandResponse
+	requireResponseJSON(t, response, http.StatusOK, &body)
+}
+
+func (h *runtimeContractHarness) listUsers(t *testing.T, rawQuery string) httpResponse {
+	t.Helper()
+
+	path := "/api/v1/internal/users"
+	if rawQuery != "" {
+		path += "?" + rawQuery
+	}
+	return h.get(t, path)
+}
+
+func (h *runtimeContractHarness) get(t *testing.T, path string) httpResponse {
+	t.Helper()
+
+	request, err := http.NewRequest(http.MethodGet, h.baseURL+path, nil)
+	require.NoError(t, err)
+
+	response, err := h.client.Do(request)
+	require.NoError(t, err)
+	defer response.Body.Close()
+
+	body, err := io.ReadAll(response.Body)
+	require.NoError(t, err)
+
+	return httpResponse{
+		StatusCode: response.StatusCode,
+		Body:       string(body),
+		Header:     response.Header.Clone(),
+	}
+}
+
+func (h *runtimeContractHarness) postJSON(t *testing.T, path string, body any) httpResponse {
+	t.Helper()
+
+	payload, err := json.Marshal(body)
+	require.NoError(t, err)
+	return h.postRawJSON(t, path, string(payload))
+}
+
+func (h *runtimeContractHarness) postRawJSON(t *testing.T, path string, body string) httpResponse {
+	t.Helper()
+
+	request, err := http.NewRequest(http.MethodPost, h.baseURL+path, bytes.NewBufferString(body))
+	require.NoError(t, err)
+	request.Header.Set("Content-Type", "application/json")
+
+	response, err := h.client.Do(request)
+	require.NoError(t, err)
+	defer response.Body.Close()
+
+	responseBody, err := io.ReadAll(response.Body)
+	require.NoError(t, err)
+
+	return httpResponse{
+		StatusCode: response.StatusCode,
+		Body:       string(responseBody),
+		Header:     response.Header.Clone(),
+	}
+}
+
+func TestRuntimeContractGetMyAccountReturnsAggregateAndDeclaredCountryStaysReadOnly(t *testing.T) {
+	t.Parallel()
+
+	h := newRuntimeContractHarness(t)
+	created := h.ensureUser(t, "pilot@example.com", "en", "Europe/Kaliningrad")
+	require.Equal(t, "created", created.Outcome)
+
+	now := time.Now().UTC().Truncate(time.Second)
+	h.grantPaidEntitlement(t, created.UserID, h.currentEntitlementStartsAt(t, created.UserID), now.Add(48*time.Hour))
+	h.applySanction(t, created.UserID, "login_block", "auth", now.Add(-30*time.Minute))
+	h.setLimit(t, created.UserID, "max_owned_private_games", 7, now.Add(-20*time.Minute))
+	syncResult := h.syncDeclaredCountry(t, created.UserID, "DE")
+
+	account := h.getMyAccount(t, created.UserID)
+	require.Equal(t, created.UserID, account.Account.UserID)
+	require.Equal(t, "pilot@example.com", account.Account.Email)
+	require.Equal(t, "en", account.Account.PreferredLanguage)
+	require.Equal(t, "Europe/Kaliningrad", account.Account.TimeZone)
+	require.Equal(t, "DE", account.Account.DeclaredCountry)
+	require.Equal(t, syncResult.UpdatedAt, account.Account.UpdatedAt)
+	require.Equal(t, "paid_monthly", account.Account.Entitlement.PlanCode)
+	require.True(t, account.Account.Entitlement.IsPaid)
+	require.Len(t, account.Account.ActiveSanctions, 1)
+	require.Equal(t, "login_block", account.Account.ActiveSanctions[0].SanctionCode)
+	require.Len(t, account.Account.ActiveLimits, 1)
+	require.Equal(t, "max_owned_private_games", account.Account.ActiveLimits[0].LimitCode)
+	require.Equal(t, 7, account.Account.ActiveLimits[0].Value)
+
+	response := h.updateSettingsRaw(t, created.UserID, `{"preferred_language":"en","time_zone":"UTC","declared_country":"FR"}`)
+	requireJSONBody(t, response, http.StatusBadRequest, `{"error":{"code":"invalid_request","message":"request body contains unknown field \"declared_country\""}}`)
+}
+
+func TestRuntimeContractEligibilitySnapshotCoversUnknownFreeAndPaidUsers(t *testing.T) {
+	t.Parallel()
+
+	h := newRuntimeContractHarness(t)
+
+	unknown := h.getEligibility(t, "user-missing")
+	require.False(t, unknown.Exists)
+	require.Equal(t, "user-missing", unknown.UserID)
+	require.Nil(t, unknown.Entitlement)
+	require.Empty(t, unknown.ActiveSanctions)
+	require.Empty(t, unknown.EffectiveLimits)
+	require.Equal(t, eligibilityMarkers{}, unknown.Markers)
+
+	freeUser := h.ensureUser(t, "free@example.com", "en", "UTC")
+	require.Equal(t, "created", freeUser.Outcome)
+
+	free := h.getEligibility(t, freeUser.UserID)
+	require.True(t, free.Exists)
+	require.NotNil(t, free.Entitlement)
+	require.Equal(t, "free", free.Entitlement.PlanCode)
+	require.False(t, free.Entitlement.IsPaid)
+	require.Equal(t, eligibilityMarkers{
+		CanLogin:             true,
+		CanCreatePrivateGame: false,
+		CanManagePrivateGame: false,
+		CanJoinGame:          true,
+		CanUpdateProfile:     true,
+	}, free.Markers)
+	require.Equal(t, []effectiveLimitView{
+		{LimitCode: "max_pending_public_applications", Value: 3},
+		{LimitCode: "max_active_game_memberships", Value: 3},
+	}, free.EffectiveLimits)
+
+	paidUser := h.ensureUser(t, "paid@example.com", "en", "Europe/Paris")
+	require.Equal(t, "created", paidUser.Outcome)
+	now := time.Now().UTC().Truncate(time.Second)
+	h.grantPaidEntitlement(t, paidUser.UserID, h.currentEntitlementStartsAt(t, paidUser.UserID), now.Add(72*time.Hour))
+	h.applySanction(t, paidUser.UserID, "private_game_manage_block", "lobby", now.Add(-30*time.Minute))
+	h.setLimit(t, paidUser.UserID, "max_pending_public_applications", 17, now.Add(-20*time.Minute))
+
+	paid := h.getEligibility(t, paidUser.UserID)
+	require.True(t, paid.Exists)
+	require.NotNil(t, paid.Entitlement)
+	require.Equal(t, "paid_monthly", paid.Entitlement.PlanCode)
+	require.True(t, paid.Entitlement.IsPaid)
+	require.Len(t, paid.ActiveSanctions, 1)
+	require.Equal(t, "private_game_manage_block", paid.ActiveSanctions[0].SanctionCode)
+	require.Equal(t, eligibilityMarkers{
+		CanLogin:             true,
+		CanCreatePrivateGame: true,
+		CanManagePrivateGame: false,
+		CanJoinGame:          true,
+		CanUpdateProfile:     true,
+	}, paid.Markers)
+	require.Equal(t, []effectiveLimitView{
+		{LimitCode: "max_owned_private_games", Value: 3},
+		{LimitCode: "max_pending_public_applications", Value: 17},
+		{LimitCode: "max_active_game_memberships", Value: 10},
+	}, paid.EffectiveLimits)
+}
+
+func TestRuntimeContractGeoSyncOnlyMutatesCurrentDeclaredCountry(t *testing.T) {
+	t.Parallel()
+
+	h := newRuntimeContractHarness(t)
+	created := h.ensureUser(t, "geo@example.com", "en", "Europe/Berlin")
+	require.Equal(t, "created", created.Outcome)
+
+	before := h.lookupUserByEmail(t, "geo@example.com")
+	require.Empty(t, before.User.DeclaredCountry)
+
+	first := h.syncDeclaredCountry(t, created.UserID, "DE")
+	after := h.lookupUserByEmail(t, "geo@example.com")
+	require.Equal(t, before.User.UserID, after.User.UserID)
+	require.Equal(t, before.User.Email, after.User.Email)
+	require.Equal(t, before.User.RaceName, after.User.RaceName)
+	require.Equal(t, before.User.PreferredLanguage, after.User.PreferredLanguage)
+	require.Equal(t, before.User.TimeZone, after.User.TimeZone)
+	require.Equal(t, before.User.Entitlement, after.User.Entitlement)
+	require.Equal(t, before.User.ActiveSanctions, after.User.ActiveSanctions)
+	require.Equal(t, before.User.ActiveLimits, after.User.ActiveLimits)
+	require.Equal(t, "DE", after.User.DeclaredCountry)
+	require.Equal(t, first.UpdatedAt, after.User.UpdatedAt)
+
+	second := h.syncDeclaredCountry(t, created.UserID, "DE")
+	require.Equal(t, first.UpdatedAt, second.UpdatedAt)
+
+	repeated := h.lookupUserByEmail(t, "geo@example.com")
+	require.Equal(t, after.User, repeated.User)
+}
+
+func TestRuntimeContractAdminListingPreservesOrderingFiltersAndPageTokenBinding(t *testing.T) {
+	t.Parallel()
+
+	h := newRuntimeContractHarness(t)
+
+	filtered := h.ensureUser(t, "filter@example.com", "en", "UTC")
+	require.Equal(t, "created", filtered.Outcome)
+	time.Sleep(10 * time.Millisecond)
+
+	latest := h.ensureUser(t, "latest@example.com", "en", "UTC")
+	require.Equal(t, "created", latest.Outcome)
+
+	now := time.Now().UTC().Truncate(time.Second)
+	h.grantPaidEntitlement(t, filtered.UserID, h.currentEntitlementStartsAt(t, filtered.UserID), now.Add(48*time.Hour))
+	h.syncDeclaredCountry(t, filtered.UserID, "DE")
+	h.applySanction(t, filtered.UserID, "login_block", "auth", now.Add(-30*time.Minute))
+	h.setLimit(t, filtered.UserID, "max_owned_private_games", 5, now.Add(-20*time.Minute))
+
+	firstPageResponse := h.listUsers(t, "page_size=1")
+	var firstPage userListResponse
+	requireResponseJSON(t, firstPageResponse, http.StatusOK, &firstPage)
+	require.Len(t, firstPage.Items, 1)
+	require.Equal(t, latest.UserID, firstPage.Items[0].UserID)
+	require.NotEmpty(t, firstPage.NextPageToken)
+
+	mismatchResponse := h.listUsers(t, "page_size=1&page_token="+firstPage.NextPageToken+"&paid_state=paid")
+	requireJSONBody(t, mismatchResponse, http.StatusBadRequest, `{"error":{"code":"invalid_request","message":"page_token is invalid or does not match current filters"}}`)
+
+	filteredResponse := h.listUsers(
+		t,
+		"paid_state=paid"+
+			"&paid_expires_after="+now.Add(time.Hour).Format(time.RFC3339)+
+			"&paid_expires_before="+now.Add(72*time.Hour).Format(time.RFC3339)+
+			"&declared_country=DE"+
+			"&sanction_code=login_block"+
+			"&limit_code=max_owned_private_games"+
+			"&can_login=false"+
+			"&can_create_private_game=false"+
+			"&can_join_game=false",
+	)
+
+	var filteredBody userListResponse
+	requireResponseJSON(t, filteredResponse, http.StatusOK, &filteredBody)
+	require.Len(t, filteredBody.Items, 1)
+	require.Equal(t, filtered.UserID, filteredBody.Items[0].UserID)
+	require.Equal(t, "DE", filteredBody.Items[0].DeclaredCountry)
+	require.Equal(t, "paid_monthly", filteredBody.Items[0].Entitlement.PlanCode)
+}
+
+type httpResponse struct {
+	StatusCode int
+	Body       string
+	Header     http.Header
+}
+
+type errorEnvelope struct {
+	Error struct {
+		Code    string `json:"code"`
+		Message string `json:"message"`
+	} `json:"error"`
+}
+
+type ensureByEmailResponse struct {
+	Outcome string `json:"outcome"`
+	UserID  string `json:"user_id,omitempty"`
+}
+
+type accountResponse struct {
+	Account accountView `json:"account"`
+}
+
+type userLookupResponse struct {
+	User accountView `json:"user"`
+}
+
+type userListResponse struct {
+	Items         []accountView `json:"items"`
+	NextPageToken string        `json:"next_page_token,omitempty"`
+}
+
+type accountView struct {
+	UserID            string                  `json:"user_id"`
+	Email             string                  `json:"email"`
+	RaceName          string                  `json:"race_name"`
+	PreferredLanguage string                  `json:"preferred_language"`
+	TimeZone          string                  `json:"time_zone"`
+	DeclaredCountry   string                  `json:"declared_country,omitempty"`
+	Entitlement       entitlementSnapshotView `json:"entitlement"`
+	ActiveSanctions   []activeSanctionView    `json:"active_sanctions"`
+	ActiveLimits      []activeLimitView       `json:"active_limits"`
+	CreatedAt         time.Time               `json:"created_at"`
+	UpdatedAt         time.Time               `json:"updated_at"`
+}
+
+type entitlementSnapshotView struct {
+	PlanCode   string       `json:"plan_code"`
+	IsPaid     bool         `json:"is_paid"`
+	Source     string       `json:"source"`
+	Actor      actorRefView `json:"actor"`
+	ReasonCode string       `json:"reason_code"`
+	StartsAt   time.Time    `json:"starts_at"`
+	EndsAt     *time.Time   `json:"ends_at,omitempty"`
+	UpdatedAt  time.Time    `json:"updated_at"`
+}
+
+type activeSanctionView struct {
+	SanctionCode string       `json:"sanction_code"`
+	Scope        string       `json:"scope"`
+	ReasonCode   string       `json:"reason_code"`
+	Actor        actorRefView `json:"actor"`
+	AppliedAt    time.Time    `json:"applied_at"`
+	ExpiresAt    *time.Time   `json:"expires_at,omitempty"`
+}
+
+type activeLimitView struct {
+	LimitCode  string       `json:"limit_code"`
+	Value      int          `json:"value"`
+	ReasonCode string       `json:"reason_code"`
+	Actor      actorRefView `json:"actor"`
+	AppliedAt  time.Time    `json:"applied_at"`
+	ExpiresAt  *time.Time   `json:"expires_at,omitempty"`
+}
+
+type actorRefView struct {
+	Type string `json:"type"`
+	ID   string `json:"id,omitempty"`
+}
+
+type eligibilityResponse struct {
+	Exists          bool                     `json:"exists"`
+	UserID          string                   `json:"user_id"`
+	Entitlement     *entitlementSnapshotView `json:"entitlement,omitempty"`
+	ActiveSanctions []activeSanctionView     `json:"active_sanctions"`
+	EffectiveLimits []effectiveLimitView     `json:"effective_limits"`
+	Markers         eligibilityMarkers       `json:"markers"`
+}
+
+type effectiveLimitView struct {
+	LimitCode string `json:"limit_code"`
+	Value     int    `json:"value"`
+}
+
+type eligibilityMarkers struct {
+	CanLogin             bool `json:"can_login"`
+	CanCreatePrivateGame bool `json:"can_create_private_game"`
+	CanManagePrivateGame bool `json:"can_manage_private_game"`
+	CanJoinGame          bool `json:"can_join_game"`
+	CanUpdateProfile     bool `json:"can_update_profile"`
+}
+
+type declaredCountrySyncResponse struct {
+	UserID          string    `json:"user_id"`
+	DeclaredCountry string    `json:"declared_country"`
+	UpdatedAt       time.Time `json:"updated_at"`
+}
+
+type entitlementCommandResponse struct {
+	UserID      string                  `json:"user_id"`
+	Entitlement entitlementSnapshotView `json:"entitlement"`
+}
+
+type sanctionCommandResponse struct {
+	UserID          string               `json:"user_id"`
+	ActiveSanctions []activeSanctionView `json:"active_sanctions"`
+}
+
+type limitCommandResponse struct {
+	UserID       string            `json:"user_id"`
+	ActiveLimits []activeLimitView `json:"active_limits"`
+}
+
+func requireResponseJSON(t *testing.T, response httpResponse, wantStatus int, target any) {
+	t.Helper()
+
+	require.Equal(t, wantStatus, response.StatusCode, "response body: %s", response.Body)
+	require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), target))
+}
+
+func requireJSONBody(t *testing.T, response httpResponse, wantStatus int, wantBody string) {
+	t.Helper()
+
+	require.Equal(t, wantStatus, response.StatusCode, "response body: %s", response.Body)
+	require.JSONEq(t, wantBody, response.Body)
+}
+
+func decodeStrictJSONPayload(payload []byte, target any) error {
+	decoder := json.NewDecoder(bytes.NewReader(payload))
+	decoder.DisallowUnknownFields()
+
+	if err := decoder.Decode(target); err != nil {
+		return err
+	}
+	if err := decoder.Decode(&struct{}{}); err != io.EOF {
+		if err == nil {
+			return errors.New("unexpected trailing JSON input")
+		}
+		return err
+	}
+
+	return nil
+}
+
+func freeLoopbackAddress(t *testing.T) string {
+	t.Helper()
+
+	listener, err := net.Listen("tcp", "127.0.0.1:0")
+	require.NoError(t, err)
+	defer listener.Close()
+
+	return listener.Addr().String()
+}
+
+func (view entitlementSnapshotView) Equal(other entitlementSnapshotView) bool {
+	return view.PlanCode == other.PlanCode &&
+		view.IsPaid == other.IsPaid &&
+		view.Source == other.Source &&
+		view.ReasonCode == other.ReasonCode &&
+		view.StartsAt.Equal(other.StartsAt) &&
+		optionalTimeEqual(view.EndsAt, other.EndsAt) &&
+		view.UpdatedAt.Equal(other.UpdatedAt)
+}
+
+func optionalTimeEqual(left *time.Time, right *time.Time) bool {
+	switch {
+	case left == nil && right == nil:
+		return true
+	case left == nil || right == nil:
+		return false
+	default:
+		return left.Equal(*right)
+	}
+}
+
+func TestEntitlementSnapshotViewEqual(t *testing.T) {
+	t.Parallel()
+
+	now := time.Now().UTC()
+	next := now.Add(time.Hour)
+	require.True(t, entitlementSnapshotView{
+		PlanCode:   "free",
+		IsPaid:     false,
+		Source:     "auth_registration",
+		ReasonCode: "initial_free_entitlement",
+		StartsAt:   now,
+		UpdatedAt:  now,
+	}.Equal(entitlementSnapshotView{
+		PlanCode:   "free",
+		IsPaid:     false,
+		Source:     "auth_registration",
+		ReasonCode: "initial_free_entitlement",
+		StartsAt:   now,
+		UpdatedAt:  now,
+	}))
+	require.False(t, entitlementSnapshotView{
+		PlanCode:   "paid_monthly",
+		IsPaid:     true,
+		Source:     "admin",
+		ReasonCode: "manual_grant",
+		StartsAt:   now,
+		EndsAt:     &next,
+		UpdatedAt:  now,
+	}.Equal(entitlementSnapshotView{
+		PlanCode:   "paid_monthly",
+		IsPaid:     true,
+		Source:     "admin",
+		ReasonCode: "manual_grant",
+		StartsAt:   now,
+		UpdatedAt:  now,
+	}))
+}
+
+func TestEligibilityUnknownMarkersZeroValueMatchesContract(t *testing.T) {
+	t.Parallel()
+
+	require.Equal(t, eligibilityMarkers{}, eligibilityMarkers{})
+	require.False(t, strings.HasPrefix("", "user-"))
+}