diff --git a/ARCHITECTURE.md b/ARCHITECTURE.md index 571b1ea..6b4d80d 100644 --- a/ARCHITECTURE.md +++ b/ARCHITECTURE.md @@ -271,13 +271,13 @@ Architectural rules fixed for this service: * `User Service` stores only the current effective `declared_country`; review workflow and history belong to `Geo Profile Service`. * During the current auth-registration rollout, `Auth / Session Service` - passes temporary `preferred_language="en"` plus the confirmed `time_zone` - into `User Service`. Gateway-side geoip language derivation is a later - rollout step and is not part of the current source-of-truth contract. + passes a preferred-language candidate derived from public + `Accept-Language`, falling back to `en` when no supported value is + available, plus the confirmed `time_zone` into `User Service`. Future billing does not become a direct dependency of other services. `Billing Service` will feed entitlement/payment outcomes into `User Service`, and the rest of the platform will continue to use `User Service` as the source of truth for current entitlements. -## 4. Mail Service +## 4. [Mail Service](mail/README.md) `Mail Service` is the internal email delivery service. @@ -286,7 +286,23 @@ Split of responsibility: * auth code emails: `Auth / Session Service -> Mail Service` directly; * all other user/admin notification emails: `Notification Service -> Mail Service`. -Mail delivery may be internally queued inside the mail service, but to its callers it is still a synchronous internal command where the caller needs a deterministic send-or-fail result. +Transport rules: + +* `Auth / Session Service -> Mail Service` uses the dedicated synchronous + trusted internal REST contract `POST /api/v1/internal/login-code-deliveries`; +* `Notification Service -> Mail Service` is an asynchronous internal command + flow carried through the event bus or an equivalent queue-backed handoff. + +`Mail Service` may internally queue both flows. 
+Its trusted operator read and resend APIs are part of the v1 service surface, +not a later add-on. +For auth callers, a successful result means the request was durably accepted +into the mail-delivery pipeline or intentionally suppressed; it does not +require that the external SMTP exchange already completed before the response +is returned. +Stable service-local delivery rules, retry semantics, and Redis-backed +processing details belong in [`mail/README.md`](mail/README.md), not in the +root architecture document. ## 5. [Geo Profile Service](geoprofile/README.md) @@ -297,7 +313,7 @@ It integrates with: * gateway as asynchronous ingest producer; * `User Service` for current effective `declared_country`; * `Auth / Session Service` for suspicious session blocking; -* `Mail Service` only for optional admin notifications. +* `Notification Service` for optional admin notifications. It owns: @@ -309,6 +325,8 @@ It owns: It does not block the request that triggered suspicion. It can only request block of suspicious sessions for subsequent requests. +It does not call `Mail Service` directly; optional admin mail must flow +through `Notification Service`. In this document, references to `Edge Service` in older geo documentation should be understood as `Edge Gateway`. @@ -519,7 +537,7 @@ It has a deliberately minimal role: * decide whether a given event should result in push, email, or both; * render and route notification payloads; * send push-targeted events toward gateway; -* send email-targeted commands toward `Mail Service`. +* send email-targeted asynchronous commands toward `Mail Service`. It is not a source of truth for user preferences in v1 unless a later feature requires it. 
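The Accept-Language rule this ARCHITECTURE.md hunk introduces (first supported tag wins, fall back to `en` when nothing matches) can be sketched as below. This is a minimal stand-in, not the service's code: the supported set, the helper name, and the primary-subtag matching are all illustrative assumptions, and a production implementation would more likely lean on `golang.org/x/text/language` matching, which this change adds as a direct dependency in `authsession/go.mod`.

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// supportedLanguages is an assumed supported set; the real list is service
// configuration, not part of this contract.
var supportedLanguages = map[string]bool{"en": true, "fr": true}

// resolvePreferredLanguage is a simplified sketch of the documented rule:
// the first supported tag from Accept-Language wins, otherwise "en".
func resolvePreferredLanguage(header string) string {
	type candidate struct {
		tag string
		q   float64
	}
	var candidates []candidate
	for _, part := range strings.Split(header, ",") {
		fields := strings.Split(strings.TrimSpace(part), ";")
		tag := strings.TrimSpace(fields[0])
		if tag == "" {
			continue
		}
		q := 1.0
		for _, param := range fields[1:] {
			if value, ok := strings.CutPrefix(strings.TrimSpace(param), "q="); ok {
				if parsed, err := strconv.ParseFloat(value, 64); err == nil {
					q = parsed
				}
			}
		}
		candidates = append(candidates, candidate{tag: tag, q: q})
	}
	// Higher quality value first; header order breaks ties.
	sort.SliceStable(candidates, func(i, j int) bool { return candidates[i].q > candidates[j].q })
	for _, c := range candidates {
		// Match on the primary subtag so "fr-FR" resolves to supported "fr".
		primary := strings.ToLower(strings.SplitN(c.tag, "-", 2)[0])
		if supportedLanguages[primary] {
			return primary
		}
	}
	return "en"
}

func main() {
	fmt.Println(resolvePreferredLanguage("fr-FR, en;q=0.8")) // fr
	fmt.Println(resolvePreferredLanguage("de-DE"))           // en
	fmt.Println(resolvePreferredLanguage(""))                // en
}
```

Note the design choice the contract implies: derivation happens once at `send-email-code` time and the result is stored on the challenge, so later confirm-time forwarding never re-reads the header.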
diff --git a/authsession/README.md b/authsession/README.md index 0819313..7f90f82 100644 --- a/authsession/README.md +++ b/authsession/README.md @@ -84,7 +84,8 @@ The service is not responsible for: - downstream business authorization - direct push delivery to clients - long-lived hot-path session caching inside gateway -- mail-service implementation details beyond the mail-delivery contract +- mail-service implementation details beyond the dedicated login-code delivery + REST contract ## Position in the System @@ -140,15 +141,23 @@ The effective DTO contract is: | `POST /api/v1/public/auth/send-email-code` | `{ "email": string }` | `{ "challenge_id": string }` | | `POST /api/v1/public/auth/confirm-email-code` | `{ "challenge_id": string, "code": string, "client_public_key": string, "time_zone": string }` | `{ "device_session_id": string }` | +`send-email-code` may additionally receive the optional public +`Accept-Language` header through gateway. Auth resolves the first supported +BCP 47 language tag from that header, falls back to `en` when no supported +value is available, uses the resolved value as the auth-mail locale for the +dedicated `Mail Service` REST contract, and stores it on the challenge as the +create-only preferred-language candidate for a later first-user ensure step. +The created `challenge_id` is sent to `Mail Service` as the raw +`Idempotency-Key` header value of that dedicated REST call. `client_public_key` is the standard base64-encoded raw 32-byte Ed25519 public key registered for the created device session. `time_zone` is the client-selected IANA time zone name. During the current rollout phase, successful confirms forward create-only user registration -context to `User Service` as `preferred_language="en"` and the supplied -`time_zone` until gateway geoip-based language derivation is deployed. +context to `User Service` as the stored preferred-language candidate from +`send-email-code` and the supplied `time_zone`. 
`User Service` now validates `preferred_language` as BCP 47 and canonicalizes -the stored value on creation, so any future derived language must already be a -valid BCP 47 tag before auth forwards it. +the stored value on creation, so the derived public language value must +already be a valid BCP 47 tag before auth forwards it. Public boundary rules: @@ -162,6 +171,9 @@ Public boundary rules: IANA time zone name - `send-email-code` remains success-shaped for existing, new, blocked, and throttled e-mail paths +- `send-email-code` may use optional public `Accept-Language` to derive and + store the auth-mail locale plus future create-only `preferred_language` + candidate; unsupported or missing values fall back to `en` - `confirm-email-code` returns a ready `device_session_id` synchronously on success @@ -236,6 +248,7 @@ Core fields: - creation and expiration timestamps - send and confirm attempt counters - minimal abuse metadata +- stored preferred-language candidate derived at send time - optional confirmation metadata used for idempotent retry ### Challenge States @@ -259,6 +272,14 @@ Supported `challenge.DeliveryState` values: - `throttled` - `failed` +For the dedicated `Mail Service` REST contract, `delivery_state=sent` means +auth successfully handed the request off to +`POST /api/v1/internal/login-code-deliveries` and the mail-delivery pipeline. +That call uses the created `challenge_id` as the raw `Idempotency-Key` header +value. +It does not require that the SMTP provider exchange already completed before +`challenge_id` was returned to the caller. + Policy rules: - initial challenge TTL is `5m` diff --git a/authsession/api/public-openapi.yaml b/authsession/api/public-openapi.yaml index d8552e0..8f038dc 100644 --- a/authsession/api/public-openapi.yaml +++ b/authsession/api/public-openapi.yaml @@ -36,7 +36,15 @@ paths: Accepts one client e-mail address and starts the public challenge flow. 
The outward result remains success-shaped even when the underlying policy suppresses mail delivery for anti-enumeration purposes. + + The JSON body stays unchanged. Gateway may additionally forward the + optional public `Accept-Language` header so auth can derive the + auth-mail locale and the create-only preferred-language candidate used + later during first-user creation. Missing or unsupported values fall + back to `en`. security: [] + parameters: + - $ref: "#/components/parameters/AcceptLanguage" requestBody: required: true content: @@ -111,6 +119,18 @@ paths: "503": $ref: "#/components/responses/ServiceUnavailableError" components: + parameters: + AcceptLanguage: + name: Accept-Language + in: header + required: false + description: | + Optional RFC 9110 `Accept-Language` header forwarded by gateway so + auth can derive the auth-mail locale and create-only + preferred-language candidate. The first supported BCP 47 tag wins; + unsupported or missing values fall back to `en`. + schema: + type: string schemas: SendEmailCodeRequest: type: object diff --git a/authsession/contract_openapi_test.go b/authsession/contract_openapi_test.go index 530a417..0cee788 100644 --- a/authsession/contract_openapi_test.go +++ b/authsession/contract_openapi_test.go @@ -62,12 +62,37 @@ func TestPublicOpenAPISpecMatchesGatewayPublicAuthContract(t *testing.T) { responseSchemaRef(t, gatewayOperation, http.StatusOK), "path "+path+" success response schema", ) + compareParameterRefs( + t, + authOperation.Parameters, + gatewayOperation.Parameters, + "path "+path+" parameters", + ) for _, status := range publicErrorStatuses(path) { assertSchemaRef(t, responseSchemaRef(t, authOperation, status), errorResponseRef, "path "+path+" error response "+http.StatusText(status)+" envelope") } } + assertOperationParameterRefs( + t, + getOperation(t, authDoc, "/api/v1/public/auth/send-email-code", http.MethodPost), + "#/components/parameters/AcceptLanguage", + ) + assertOperationParameterRefs( + t, + 
getOperation(t, gatewayDoc, "/api/v1/public/auth/send-email-code", http.MethodPost), + "#/components/parameters/AcceptLanguage", + ) + assertOperationParameterRefs( + t, + getOperation(t, authDoc, "/api/v1/public/auth/confirm-email-code", http.MethodPost), + ) + assertOperationParameterRefs( + t, + getOperation(t, gatewayDoc, "/api/v1/public/auth/confirm-email-code", http.MethodPost), + ) + compareSchemaRefs( t, authErrorEnvelope, @@ -352,6 +377,16 @@ func compareSchemaRefs(t *testing.T, got *openapi3.SchemaRef, want *openapi3.Sch } } +func compareParameterRefs(t *testing.T, got openapi3.Parameters, want openapi3.Parameters, name string) { + t.Helper() + + gotJSON := mustJSON(t, got) + wantJSON := mustJSON(t, want) + if !bytes.Equal(gotJSON, wantJSON) { + require.Failf(t, "test failed", "%s mismatch:\n got: %s\nwant: %s", name, gotJSON, wantJSON) + } +} + func assertSchemaRef(t *testing.T, schemaRef *openapi3.SchemaRef, want string, name string) { t.Helper() @@ -360,6 +395,23 @@ func assertSchemaRef(t *testing.T, schemaRef *openapi3.SchemaRef, want string, n } } +func assertOperationParameterRefs(t *testing.T, operation *openapi3.Operation, refs ...string) { + t.Helper() + + if len(operation.Parameters) != len(refs) { + require.Failf(t, "test failed", "operation parameter count = %d, want %d", len(operation.Parameters), len(refs)) + } + + for index, want := range refs { + if operation.Parameters[index] == nil { + require.Failf(t, "test failed", "operation parameter %d is nil", index) + } + if operation.Parameters[index].Ref != want { + require.Failf(t, "test failed", "operation parameter %d ref = %q, want %q", index, operation.Parameters[index].Ref, want) + } + } +} + func assertRequiredFields(t *testing.T, schemaRef *openapi3.SchemaRef, fields ...string) { t.Helper() diff --git a/authsession/docs/flows.md b/authsession/docs/flows.md index 6ee5542..e60bf8b 100644 --- a/authsession/docs/flows.md +++ b/authsession/docs/flows.md @@ -9,14 +9,14 @@ sequenceDiagram 
participant Auth participant Abuse as Resend throttle participant User as UserDirectory - participant Mail as MailSender + participant Mail as Mail Service REST participant Challenge as ChallengeStore participant Session as SessionStore participant Config as ConfigProvider participant Projection as Gateway projection publisher - Client->>Gateway: POST /api/v1/public/auth/send-email-code - Gateway->>Auth: POST /api/v1/public/auth/send-email-code + Client->>Gateway: POST /api/v1/public/auth/send-email-code + Accept-Language + Gateway->>Auth: POST /api/v1/public/auth/send-email-code + Accept-Language Auth->>Abuse: check and reserve cooldown alt throttled Abuse-->>Auth: throttled @@ -30,8 +30,8 @@ sequenceDiagram alt blocked Auth->>Challenge: mark delivery_suppressed else not blocked - Auth->>Mail: SendLoginCode(email, code) - Mail-->>Auth: sent / suppressed / failure + Auth->>Mail: POST /api/v1/internal/login-code-deliveries + Idempotency-Key=challenge_id + Mail-->>Auth: 200 {outcome=sent|suppressed} / 503 Auth->>Challenge: persist final delivery outcome end Auth-->>Gateway: 200 {challenge_id} @@ -40,7 +40,7 @@ sequenceDiagram Client->>Gateway: POST /api/v1/public/auth/confirm-email-code Gateway->>Auth: POST /api/v1/public/auth/confirm-email-code Auth->>Challenge: load and validate challenge - Auth->>User: EnsureUserByEmail(email, registration_context) + Auth->>User: EnsureUserByEmail(email, stored preferred_language + time_zone) User-->>Auth: existing / created / blocked Auth->>Config: LoadSessionLimit() Auth->>Session: CountActiveByUserID(user_id) @@ -51,6 +51,13 @@ sequenceDiagram Auth-->>Gateway: 200 {device_session_id} ``` +Auth uses the dedicated trusted `Mail Service` REST route +`POST /api/v1/internal/login-code-deliveries`. +It sends the created `challenge_id` as the raw `Idempotency-Key` header +value. +For this boundary, `sent` means durable acceptance into the mail-delivery +pipeline; SMTP completion may still happen later in `Mail Service` workers. 
+ ## Revoke and Block Flow ```mermaid diff --git a/authsession/gateway_compatibility_test.go b/authsession/gateway_compatibility_test.go index 7d8aaff..2f8a854 100644 --- a/authsession/gateway_compatibility_test.go +++ b/authsession/gateway_compatibility_test.go @@ -659,9 +659,21 @@ type gatewayCompatibilityHTTPResponse struct { func gatewayCompatibilityPostJSON(t *testing.T, url string, body string) gatewayCompatibilityHTTPResponse { t.Helper() + return gatewayCompatibilityPostJSONWithHeaders(t, url, body, nil) +} + +func gatewayCompatibilityPostJSONWithHeaders(t *testing.T, url string, body string, headers map[string]string) gatewayCompatibilityHTTPResponse { + t.Helper() + request, err := http.NewRequest(http.MethodPost, url, bytes.NewBufferString(body)) require.NoError(t, err) request.Header.Set("Content-Type", "application/json") + for key, value := range headers { + if strings.TrimSpace(value) == "" { + continue + } + request.Header.Set(key, value) + } response, err := http.DefaultClient.Do(request) require.NoError(t, err) @@ -685,6 +697,15 @@ func gatewayCompatibilityPostJSONValue(t *testing.T, url string, value any) gate return gatewayCompatibilityPostJSON(t, url, string(payload)) } +func gatewayCompatibilityPostJSONValueWithHeaders(t *testing.T, url string, value any, headers map[string]string) gatewayCompatibilityHTTPResponse { + t.Helper() + + payload, err := json.Marshal(value) + require.NoError(t, err) + + return gatewayCompatibilityPostJSONWithHeaders(t, url, string(payload), headers) +} + func gatewayCompatibilityActiveSession( t *testing.T, deviceSessionID string, diff --git a/authsession/go.mod b/authsession/go.mod index 074c8a4..6a1b07b 100644 --- a/authsession/go.mod +++ b/authsession/go.mod @@ -22,6 +22,7 @@ require ( go.opentelemetry.io/otel/trace v1.43.0 go.uber.org/zap v1.27.1 golang.org/x/crypto v0.49.0 + golang.org/x/text v0.36.0 ) require ( @@ -75,7 +76,6 @@ require ( golang.org/x/arch v0.25.0 // indirect golang.org/x/net v0.52.0 // 
indirect golang.org/x/sys v0.42.0 // indirect - golang.org/x/text v0.35.0 // indirect google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 // indirect google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 // indirect google.golang.org/grpc v1.80.0 // indirect diff --git a/authsession/go.sum b/authsession/go.sum index b94415b..faf28ad 100644 --- a/authsession/go.sum +++ b/authsession/go.sum @@ -152,8 +152,7 @@ golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo= golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw= -golang.org/x/text v0.35.0 h1:JOVx6vVDFokkpaq1AEptVzLTpDe9KGpj5tR4/X+ybL8= -golang.org/x/text v0.35.0/go.mod h1:khi/HExzZJ2pGnjenulevKNX1W67CUy0AsXcNubPGCA= +golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg= gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4= gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E= google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 h1:VPWxll4HlMw1Vs/qXtN7BvhZqsS9cdAittCNvVENElA= diff --git a/authsession/internal/adapters/contracttest/challenge_store.go b/authsession/internal/adapters/contracttest/challenge_store.go index 855715e..bec3193 100644 --- a/authsession/internal/adapters/contracttest/challenge_store.go +++ b/authsession/internal/adapters/contracttest/challenge_store.go @@ -149,13 +149,14 @@ func RunChallengeStoreContractTests(t *testing.T, newStore ChallengeStoreFactory func contractPendingChallenge(now time.Time) challenge.Challenge { record := challenge.Challenge{ - ID: common.ChallengeID("challenge-pending"), - Email: common.Email("pilot@example.com"), - CodeHash: []byte("hashed-pending-code"), - Status: challenge.StatusPendingSend, - DeliveryState: 
challenge.DeliveryPending, - CreatedAt: now, - ExpiresAt: now.Add(challenge.InitialTTL), + ID: common.ChallengeID("challenge-pending"), + Email: common.Email("pilot@example.com"), + CodeHash: []byte("hashed-pending-code"), + PreferredLanguage: "en", + Status: challenge.StatusPendingSend, + DeliveryState: challenge.DeliveryPending, + CreatedAt: now, + ExpiresAt: now.Add(challenge.InitialTTL), } if err := record.Validate(); err != nil { panic(err) @@ -176,13 +177,14 @@ func contractConfirmedChallenge(t *testing.T, now time.Time) challenge.Challenge require.NoError(t, err) record := challenge.Challenge{ - ID: common.ChallengeID("challenge-confirmed"), - Email: common.Email("pilot@example.com"), - CodeHash: []byte("hashed-code"), - Status: challenge.StatusConfirmedPendingExpire, - DeliveryState: challenge.DeliverySent, - CreatedAt: now, - ExpiresAt: now.Add(challenge.ConfirmedRetention), + ID: common.ChallengeID("challenge-confirmed"), + Email: common.Email("pilot@example.com"), + CodeHash: []byte("hashed-code"), + PreferredLanguage: "en", + Status: challenge.StatusConfirmedPendingExpire, + DeliveryState: challenge.DeliverySent, + CreatedAt: now, + ExpiresAt: now.Add(challenge.ConfirmedRetention), Attempts: challenge.AttemptCounters{ Send: 1, Confirm: 2, diff --git a/authsession/internal/adapters/mail/rest_client.go b/authsession/internal/adapters/mail/rest_client.go index 1563926..c14c131 100644 --- a/authsession/internal/adapters/mail/rest_client.go +++ b/authsession/internal/adapters/mail/rest_client.go @@ -95,9 +95,10 @@ func (c *RESTClient) SendLoginCode(ctx context.Context, input ports.SendLoginCod return ports.SendLoginCodeResult{}, fmt.Errorf("send login code: %w", err) } - payload, statusCode, err := c.doRequest(ctx, "send login code", map[string]string{ - "email": input.Email.String(), - "code": input.Code, + payload, statusCode, err := c.doRequest(ctx, "send login code", input.IdempotencyKey, map[string]string{ + "email": input.Email.String(), + "code": 
input.Code, + "locale": input.Locale, }) if err != nil { return ports.SendLoginCodeResult{}, err @@ -121,7 +122,7 @@ func (c *RESTClient) SendLoginCode(ctx context.Context, input ports.SendLoginCod return result, nil } -func (c *RESTClient) doRequest(ctx context.Context, operation string, requestBody any) ([]byte, int, error) { +func (c *RESTClient) doRequest(ctx context.Context, operation string, idempotencyKey string, requestBody any) ([]byte, int, error) { bodyBytes, err := json.Marshal(requestBody) if err != nil { return nil, 0, fmt.Errorf("%s: marshal request body: %w", operation, err) @@ -135,6 +136,7 @@ func (c *RESTClient) doRequest(ctx context.Context, operation string, requestBod return nil, 0, fmt.Errorf("%s: build request: %w", operation, err) } request.Header.Set("Content-Type", "application/json") + request.Header.Set("Idempotency-Key", idempotencyKey) response, err := c.httpClient.Do(request) if err != nil { diff --git a/authsession/internal/adapters/mail/rest_client_test.go b/authsession/internal/adapters/mail/rest_client_test.go index c43d3bf..7007ce8 100644 --- a/authsession/internal/adapters/mail/rest_client_test.go +++ b/authsession/internal/adapters/mail/rest_client_test.go @@ -128,7 +128,8 @@ func TestRESTClientSendLoginCodeSuccessCases(t *testing.T) { assert.Equal(t, http.MethodPost, requests[0].Method) assert.Equal(t, sendLoginCodePath, requests[0].Path) assert.Equal(t, "application/json", requests[0].ContentType) - assert.JSONEq(t, `{"email":"pilot@example.com","code":"654321"}`, requests[0].Body) + assert.Equal(t, "challenge-1", requests[0].IdempotencyKey) + assert.JSONEq(t, `{"email":"pilot@example.com","code":"654321","locale":"en"}`, requests[0].Body) }) } } @@ -136,9 +137,9 @@ func TestRESTClientSendLoginCodeSuccessCases(t *testing.T) { func TestRESTClientPreservesNormalizedEmailAndCodeExactly(t *testing.T) { t.Parallel() - var captured string + var captured capturedRequest server := httptest.NewServer(http.HandlerFunc(func(w 
http.ResponseWriter, r *http.Request) { - captured = captureRequest(t, r).Body + captured = captureRequest(t, r) writeJSON(t, w, http.StatusOK, map[string]string{"outcome": "sent"}) })) defer server.Close() @@ -146,12 +147,15 @@ func TestRESTClientPreservesNormalizedEmailAndCodeExactly(t *testing.T) { client := newTestRESTClient(t, server.URL, 250*time.Millisecond) result, err := client.SendLoginCode(context.Background(), ports.SendLoginCodeInput{ - Email: common.Email("Pilot+Alias@Example.com"), - Code: "123456", + Email: common.Email("Pilot+Alias@Example.com"), + IdempotencyKey: "challenge-1", + Code: "123456", + Locale: "fr-FR", }) require.NoError(t, err) assert.Equal(t, ports.SendLoginCodeOutcomeSent, result.Outcome) - assert.JSONEq(t, `{"email":"Pilot+Alias@Example.com","code":"123456"}`, captured) + assert.Equal(t, "challenge-1", captured.IdempotencyKey) + assert.JSONEq(t, `{"email":"Pilot+Alias@Example.com","code":"123456","locale":"fr-FR"}`, captured.Body) } func TestRESTClientSendLoginCodeDoesNotRetry(t *testing.T) { @@ -311,8 +315,10 @@ func TestRESTClientContextAndValidation(t *testing.T) { name: "invalid email", run: func() error { _, err := client.SendLoginCode(context.Background(), ports.SendLoginCodeInput{ - Email: common.Email(" bad@example.com "), - Code: "123456", + Email: common.Email(" bad@example.com "), + IdempotencyKey: "challenge-1", + Code: "123456", + Locale: "en", }) return err }, @@ -321,8 +327,34 @@ func TestRESTClientContextAndValidation(t *testing.T) { name: "invalid code", run: func() error { _, err := client.SendLoginCode(context.Background(), ports.SendLoginCodeInput{ - Email: common.Email("pilot@example.com"), - Code: " 123456 ", + Email: common.Email("pilot@example.com"), + IdempotencyKey: "challenge-1", + Code: " 123456 ", + Locale: "en", + }) + return err + }, + }, + { + name: "invalid locale", + run: func() error { + _, err := client.SendLoginCode(context.Background(), ports.SendLoginCodeInput{ + Email: 
common.Email("pilot@example.com"), + IdempotencyKey: "challenge-1", + Code: "123456", + Locale: " en ", + }) + return err + }, + }, + { + name: "invalid idempotency key", + run: func() error { + _, err := client.SendLoginCode(context.Background(), ports.SendLoginCodeInput{ + Email: common.Email("pilot@example.com"), + IdempotencyKey: " challenge-1 ", + Code: "123456", + Locale: "en", }) return err }, @@ -340,10 +372,11 @@ func TestRESTClientContextAndValidation(t *testing.T) { } type capturedRequest struct { - Method string - Path string - ContentType string - Body string + Method string + Path string + ContentType string + IdempotencyKey string + Body string } func captureRequest(t *testing.T, request *http.Request) capturedRequest { @@ -353,10 +386,11 @@ func captureRequest(t *testing.T, request *http.Request) capturedRequest { require.NoError(t, err) return capturedRequest{ - Method: request.Method, - Path: request.URL.Path, - ContentType: request.Header.Get("Content-Type"), - Body: strings.TrimSpace(string(body)), + Method: request.Method, + Path: request.URL.Path, + ContentType: request.Header.Get("Content-Type"), + IdempotencyKey: request.Header.Get("Idempotency-Key"), + Body: strings.TrimSpace(string(body)), } } diff --git a/authsession/internal/adapters/mail/stub_sender.go b/authsession/internal/adapters/mail/stub_sender.go index 530be15..dc6f89a 100644 --- a/authsession/internal/adapters/mail/stub_sender.go +++ b/authsession/internal/adapters/mail/stub_sender.go @@ -60,7 +60,8 @@ func (step StubStep) Validate() error { return nil } -// Attempt records one validated delivery request handled by StubSender. +// Attempt records one validated delivery request handled by StubSender, +// including the auth challenge-derived idempotency key. type Attempt struct { // Input stores the validated cleartext mail-delivery request exactly as it // was passed into SendLoginCode. 
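The `rest_client.go` changes above send the created `challenge_id` as the raw `Idempotency-Key` header of the dedicated REST call. Why that makes auth-side retries safe can be illustrated with the in-memory sketch below; every name here is hypothetical, and the real `Mail Service` would presumably record outcomes durably rather than in a map, but the invariant is the same: the first acceptance per key wins and retries observe the recorded outcome instead of enqueueing a second delivery.

```go
package main

import (
	"fmt"
	"sync"
)

// deliveryOutcome mirrors the contract's terminal outcomes (names assumed
// from the documented 200 {outcome=sent|suppressed} response shape).
type deliveryOutcome string

const (
	outcomeSent       deliveryOutcome = "sent"
	outcomeSuppressed deliveryOutcome = "suppressed"
)

// idempotentAcceptor is a toy stand-in for the receiving side of the
// login-code-deliveries route: one recorded outcome per Idempotency-Key.
type idempotentAcceptor struct {
	mu       sync.Mutex
	outcomes map[string]deliveryOutcome
}

func newIdempotentAcceptor() *idempotentAcceptor {
	return &idempotentAcceptor{outcomes: make(map[string]deliveryOutcome)}
}

// Accept records the request once per key and returns the stored outcome
// plus whether this call was the first acceptance.
func (a *idempotentAcceptor) Accept(key string, outcome deliveryOutcome) (deliveryOutcome, bool) {
	a.mu.Lock()
	defer a.mu.Unlock()
	if stored, ok := a.outcomes[key]; ok {
		return stored, false
	}
	a.outcomes[key] = outcome
	return outcome, true
}

func main() {
	acceptor := newIdempotentAcceptor()
	first, accepted := acceptor.Accept("challenge-1", outcomeSent)
	retry, acceptedAgain := acceptor.Accept("challenge-1", outcomeSent)
	fmt.Println(first, accepted)      // sent true
	fmt.Println(retry, acceptedAgain) // sent false
}
```

This also matches the documented `delivery_state=sent` semantics: acceptance into the pipeline is what gets recorded, independently of when the SMTP exchange later completes.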
diff --git a/authsession/internal/adapters/mail/stub_sender_test.go b/authsession/internal/adapters/mail/stub_sender_test.go index 1c0a502..2db3080 100644 --- a/authsession/internal/adapters/mail/stub_sender_test.go +++ b/authsession/internal/adapters/mail/stub_sender_test.go @@ -192,7 +192,9 @@ func TestStubSenderSendLoginCodeInvalidInput(t *testing.T) { func validInput() ports.SendLoginCodeInput { return ports.SendLoginCodeInput{ - Email: common.Email("pilot@example.com"), - Code: "654321", + Email: common.Email("pilot@example.com"), + IdempotencyKey: "challenge-1", + Code: "654321", + Locale: "en", } } diff --git a/authsession/internal/adapters/redis/challengestore/store.go b/authsession/internal/adapters/redis/challengestore/store.go index 3476487..a195408 100644 --- a/authsession/internal/adapters/redis/challengestore/store.go +++ b/authsession/internal/adapters/redis/challengestore/store.go @@ -24,6 +24,8 @@ import ( const expirationGracePeriod = 5 * time.Minute +const defaultPreferredLanguage = "en" + // Config configures one Redis-backed challenge store instance. type Config struct { // Addr is the Redis network address in host:port form. 
@@ -59,6 +61,7 @@ type redisRecord struct { ChallengeID string `json:"challenge_id"` Email string `json:"email"` CodeHashBase64 string `json:"code_hash_base64"` + PreferredLanguage string `json:"preferred_language,omitempty"` Status challenge.Status `json:"status"` DeliveryState challenge.DeliveryState `json:"delivery_state"` CreatedAt string `json:"created_at"` @@ -291,6 +294,7 @@ func redisRecordFromChallenge(record challenge.Challenge) (redisRecord, error) { ChallengeID: record.ID.String(), Email: record.Email.String(), CodeHashBase64: base64.StdEncoding.EncodeToString(record.CodeHash), + PreferredLanguage: record.PreferredLanguage, Status: record.Status, DeliveryState: record.DeliveryState, CreatedAt: formatTimestamp(record.CreatedAt), @@ -354,13 +358,14 @@ func challengeFromRedisRecord(stored redisRecord) (challenge.Challenge, error) { } record := challenge.Challenge{ - ID: common.ChallengeID(stored.ChallengeID), - Email: common.Email(stored.Email), - CodeHash: codeHash, - Status: stored.Status, - DeliveryState: stored.DeliveryState, - CreatedAt: createdAt, - ExpiresAt: expiresAt, + ID: common.ChallengeID(stored.ChallengeID), + Email: common.Email(stored.Email), + CodeHash: codeHash, + PreferredLanguage: normalizeStoredPreferredLanguage(stored.PreferredLanguage), + Status: stored.Status, + DeliveryState: stored.DeliveryState, + CreatedAt: createdAt, + ExpiresAt: expiresAt, Attempts: challenge.AttemptCounters{ Send: stored.SendAttemptCount, Confirm: stored.ConfirmAttemptCount, @@ -459,6 +464,15 @@ func formatOptionalTimestamp(value *time.Time) *string { return &formatted } +func normalizeStoredPreferredLanguage(value string) string { + preferredLanguage := strings.TrimSpace(value) + if preferredLanguage == "" { + return defaultPreferredLanguage + } + + return preferredLanguage +} + func redisTTL(expiresAt time.Time) time.Duration { ttl := time.Until(expiresAt.UTC()) if ttl < 0 { diff --git a/authsession/internal/adapters/redis/challengestore/store_test.go 
b/authsession/internal/adapters/redis/challengestore/store_test.go
index d0b1510..b1729ba 100644
--- a/authsession/internal/adapters/redis/challengestore/store_test.go
+++ b/authsession/internal/adapters/redis/challengestore/store_test.go
@@ -451,13 +451,14 @@ func newTestStore(t *testing.T, server *miniredis.Miniredis, cfg Config) *Store

func testPendingChallenge(now time.Time) challenge.Challenge {
    return challenge.Challenge{
-       ID:            common.ChallengeID("challenge-pending"),
-       Email:         common.Email("pilot@example.com"),
-       CodeHash:      []byte("hashed-pending-code"),
-       Status:        challenge.StatusPendingSend,
-       DeliveryState: challenge.DeliveryPending,
-       CreatedAt:     now,
-       ExpiresAt:     now.Add(challenge.InitialTTL),
+       ID:                common.ChallengeID("challenge-pending"),
+       Email:             common.Email("pilot@example.com"),
+       CodeHash:          []byte("hashed-pending-code"),
+       PreferredLanguage: "en",
+       Status:            challenge.StatusPendingSend,
+       DeliveryState:     challenge.DeliveryPending,
+       CreatedAt:         now,
+       ExpiresAt:         now.Add(challenge.InitialTTL),
    }
}

@@ -473,13 +474,14 @@ func testChallenge(now time.Time) challenge.Challenge {
    }

    return challenge.Challenge{
-       ID:            common.ChallengeID("challenge-confirmed"),
-       Email:         common.Email("pilot@example.com"),
-       CodeHash:      []byte("hashed-code"),
-       Status:        challenge.StatusConfirmedPendingExpire,
-       DeliveryState: challenge.DeliverySent,
-       CreatedAt:     now,
-       ExpiresAt:     now.Add(challenge.ConfirmedRetention),
+       ID:                common.ChallengeID("challenge-confirmed"),
+       Email:             common.Email("pilot@example.com"),
+       CodeHash:          []byte("hashed-code"),
+       PreferredLanguage: "en",
+       Status:            challenge.StatusConfirmedPendingExpire,
+       DeliveryState:     challenge.DeliverySent,
+       CreatedAt:         now,
+       ExpiresAt:         now.Add(challenge.ConfirmedRetention),
        Attempts: challenge.AttemptCounters{
            Send:    1,
            Confirm: 2,
@@ -495,6 +497,36 @@ func testChallenge(now time.Time) challenge.Challenge {
    }
}

+func TestStoreGetDefaultsMissingPreferredLanguageToEnglish(t *testing.T) {
+   t.Parallel()
+
+   server := miniredis.RunT(t)
+   store := newTestStore(t, server, Config{})
+   now := time.Unix(1_775_130_250, 0).UTC()
+
+   record := testPendingChallenge(now)
+   stored, err := redisRecordFromChallenge(record)
+   require.NoError(t, err)
+   stored.PreferredLanguage = ""
+
+   payload := mustMarshalJSON(t, map[string]any{
+       "challenge_id":          stored.ChallengeID,
+       "email":                 stored.Email,
+       "code_hash_base64":      stored.CodeHashBase64,
+       "status":                stored.Status,
+       "delivery_state":        stored.DeliveryState,
+       "created_at":            stored.CreatedAt,
+       "expires_at":            stored.ExpiresAt,
+       "send_attempt_count":    stored.SendAttemptCount,
+       "confirm_attempt_count": stored.ConfirmAttemptCount,
+   })
+   server.Set(store.lookupKey(record.ID), payload)
+
+   got, err := store.Get(context.Background(), record.ID)
+   require.NoError(t, err)
+   assert.Equal(t, "en", got.PreferredLanguage)
+}
+
func timePointer(value time.Time) *time.Time {
    return &value
}
diff --git a/authsession/internal/api/publichttp/e2e_test.go b/authsession/internal/api/publichttp/e2e_test.go
index 2b6b2ad..2e81ca7 100644
--- a/authsession/internal/api/publichttp/e2e_test.go
+++ b/authsession/internal/api/publichttp/e2e_test.go
@@ -271,10 +271,11 @@ type endToEndOptions struct {
}

type seedChallengeOptions struct {
-   ID        string
-   Code      string
-   Status    challenge.Status
-   ExpiresAt time.Time
+   ID                string
+   Code              string
+   Status            challenge.Status
+   ExpiresAt         time.Time
+   PreferredLanguage string
}

type endToEndApp struct {
@@ -312,13 +313,17 @@ func newEndToEndApp(t *testing.T, options endToEndOptions) endToEndApp {
    }

    record := challenge.Challenge{
-       ID:            common.ChallengeID(options.SeedChallenge.ID),
-       Email:         common.Email("pilot@example.com"),
-       CodeHash:      mustHashCode(t, options.SeedChallenge.Code),
-       Status:        options.SeedChallenge.Status,
-       DeliveryState: deliveryStateForSeedChallenge(options.SeedChallenge.Status),
-       CreatedAt:     now.Add(-time.Minute),
-       ExpiresAt:     expiresAt,
+       ID:                common.ChallengeID(options.SeedChallenge.ID),
+       Email:             common.Email("pilot@example.com"),
+       CodeHash:          mustHashCode(t, options.SeedChallenge.Code),
+       PreferredLanguage: options.SeedChallenge.PreferredLanguage,
+       Status:            options.SeedChallenge.Status,
+       DeliveryState:     deliveryStateForSeedChallenge(options.SeedChallenge.Status),
+       CreatedAt:         now.Add(-time.Minute),
+       ExpiresAt:         expiresAt,
+   }
+   if record.PreferredLanguage == "" {
+       record.PreferredLanguage = "en"
    }
    require.NoError(t, challengeStore.Create(context.Background(), record))
}
diff --git a/authsession/internal/api/publichttp/handler.go b/authsession/internal/api/publichttp/handler.go
index 6e6a4b0..f2e9f97 100644
--- a/authsession/internal/api/publichttp/handler.go
+++ b/authsession/internal/api/publichttp/handler.go
@@ -110,7 +110,10 @@ func handleSendEmailCode(useCase SendEmailCodeUseCase, timeout time.Duration) gi
    callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout)
    defer cancel()

-   result, err := useCase.Execute(callCtx, sendemailcode.Input{Email: request.Email})
+   result, err := useCase.Execute(callCtx, sendemailcode.Input{
+       Email:          request.Email,
+       AcceptLanguage: c.GetHeader("Accept-Language"),
+   })
    if err != nil {
        abortWithProjection(c, projectSendEmailCodeError(err))
        return
diff --git a/authsession/internal/api/publichttp/handler_test.go b/authsession/internal/api/publichttp/handler_test.go
index 9ea9953..499a3dc 100644
--- a/authsession/internal/api/publichttp/handler_test.go
+++ b/authsession/internal/api/publichttp/handler_test.go
@@ -25,7 +25,11 @@ func TestSendEmailCodeHandlerSuccess(t *testing.T) {
    t.Parallel()

    handler := mustNewHandler(t, DefaultConfig(), Dependencies{
-       SendEmailCode: sendEmailCodeFunc(func(context.Context, sendemailcode.Input) (sendemailcode.Result, error) {
+       SendEmailCode: sendEmailCodeFunc(func(_ context.Context, input sendemailcode.Input) (sendemailcode.Result, error) {
+           assert.Equal(t, sendemailcode.Input{
+               Email:          "pilot@example.com",
+               AcceptLanguage: "fr-FR, en;q=0.8",
+           }, input)
            return sendemailcode.Result{ChallengeID: "challenge-123"}, nil
        }),
        ConfirmEmailCode: confirmEmailCodeFunc(func(context.Context, confirmemailcode.Input) (confirmemailcode.Result, error) {
@@ -40,6 +44,7 @@ func TestSendEmailCodeHandlerSuccess(t *testing.T) {
        bytes.NewBufferString(`{"email":" pilot@example.com "}`),
    )
    request.Header.Set("Content-Type", "application/json")
+   request.Header.Set("Accept-Language", "fr-FR, en;q=0.8")

    handler.ServeHTTP(recorder, request)

diff --git a/authsession/internal/domain/challenge/model.go b/authsession/internal/domain/challenge/model.go
index 4555bd1..c51d48c 100644
--- a/authsession/internal/domain/challenge/model.go
+++ b/authsession/internal/domain/challenge/model.go
@@ -5,6 +5,7 @@ package challenge
import (
    "errors"
    "fmt"
+   "strings"
    "time"

    "galaxy/authsession/internal/domain/common"
@@ -239,6 +240,10 @@ type Challenge struct {
    // CodeHash stores only the hashed confirmation code.
    CodeHash []byte

+   // PreferredLanguage stores the canonical create-only preferred-language
+   // candidate derived when the challenge was created.
+   PreferredLanguage string
+
    // Status reports the coarse challenge lifecycle state.
    Status Status
@@ -279,6 +284,12 @@ func (c Challenge) Validate() error {
    if len(c.CodeHash) == 0 {
        return errors.New("challenge code hash must not be empty")
    }
+   if strings.TrimSpace(c.PreferredLanguage) == "" {
+       return errors.New("challenge preferred language must not be empty")
+   }
+   if strings.TrimSpace(c.PreferredLanguage) != c.PreferredLanguage {
+       return errors.New("challenge preferred language must not contain surrounding whitespace")
+   }
    if !c.Status.IsKnown() {
        return fmt.Errorf("challenge status %q is unsupported", c.Status)
    }
diff --git a/authsession/internal/domain/challenge/model_test.go b/authsession/internal/domain/challenge/model_test.go
index 8769f2c..c9fd8d5 100644
--- a/authsession/internal/domain/challenge/model_test.go
+++ b/authsession/internal/domain/challenge/model_test.go
@@ -404,13 +404,14 @@ func validChallenge(t *testing.T) Challenge {
    t.Helper()

    return Challenge{
-       ID:            common.ChallengeID("challenge-123"),
-       Email:         common.Email("pilot@example.com"),
-       CodeHash:      []byte("hash-123"),
-       Status:        StatusPendingSend,
-       DeliveryState: DeliveryPending,
-       CreatedAt:     time.Unix(1_775_121_600, 0).UTC(),
-       ExpiresAt:     time.Unix(1_775_121_900, 0).UTC(),
+       ID:                common.ChallengeID("challenge-123"),
+       Email:             common.Email("pilot@example.com"),
+       CodeHash:          []byte("hash-123"),
+       PreferredLanguage: "en",
+       Status:            StatusPendingSend,
+       DeliveryState:     DeliveryPending,
+       CreatedAt:         time.Unix(1_775_121_600, 0).UTC(),
+       ExpiresAt:         time.Unix(1_775_121_900, 0).UTC(),
        Attempts: AttemptCounters{
            Send:    0,
            Confirm: 0,
diff --git a/authsession/internal/ports/mail_sender.go b/authsession/internal/ports/mail_sender.go
index 4e503cb..f7fb11a 100644
--- a/authsession/internal/ports/mail_sender.go
+++ b/authsession/internal/ports/mail_sender.go
@@ -24,8 +24,16 @@ type SendLoginCodeInput struct {
    // Email identifies the normalized target e-mail address.
    Email common.Email

+   // IdempotencyKey stores the raw challenge_id value sent to Mail Service as
+   // the required Idempotency-Key header.
+   IdempotencyKey string
+
    // Code stores the cleartext login code that should be delivered to Email.
    Code string
+
+   // Locale stores the canonical BCP 47 language tag that selects the auth
+   // mail template locale.
+   Locale string
}

// Validate reports whether SendLoginCodeInput contains a complete delivery
@@ -35,10 +43,18 @@ func (i SendLoginCodeInput) Validate() error {
        return fmt.Errorf("send login code input email: %w", err)
    }
    switch {
+   case strings.TrimSpace(i.IdempotencyKey) == "":
+       return errors.New("send login code input idempotency key must not be empty")
+   case strings.TrimSpace(i.IdempotencyKey) != i.IdempotencyKey:
+       return errors.New("send login code input idempotency key must not contain surrounding whitespace")
    case strings.TrimSpace(i.Code) == "":
        return errors.New("send login code input code must not be empty")
    case strings.TrimSpace(i.Code) != i.Code:
        return errors.New("send login code input code must not contain surrounding whitespace")
+   case strings.TrimSpace(i.Locale) == "":
+       return errors.New("send login code input locale must not be empty")
+   case strings.TrimSpace(i.Locale) != i.Locale:
+       return errors.New("send login code input locale must not contain surrounding whitespace")
    default:
        return nil
    }
diff --git a/authsession/internal/ports/ports_test.go b/authsession/internal/ports/ports_test.go
index c39f01b..60d1f5f 100644
--- a/authsession/internal/ports/ports_test.go
+++ b/authsession/internal/ports/ports_test.go
@@ -310,8 +310,10 @@ func TestSendLoginCodeInputAndResultValidate(t *testing.T) {
    t.Parallel()

    input := SendLoginCodeInput{
-       Email: common.Email("pilot@example.com"),
-       Code:  "654321",
+       Email:          common.Email("pilot@example.com"),
+       IdempotencyKey: "challenge-1",
+       Code:           "654321",
+       Locale:         "en",
    }
    if err := input.Validate(); err != nil {
        require.Failf(t, "test failed", "SendLoginCodeInput.Validate() returned error: %v", err)
@@ -339,13 +341,14 @@ func TestValidateComparableChallenges(t *testing.T) {

func challengeFixture() challenge.Challenge {
    timestamp := time.Unix(10, 0).UTC()
    return challenge.Challenge{
-       ID:            common.ChallengeID("challenge-1"),
-       Email:         common.Email("pilot@example.com"),
-       CodeHash:      []byte("hash"),
-       Status:        challenge.StatusPendingSend,
-       DeliveryState: challenge.DeliveryPending,
-       CreatedAt:     timestamp,
-       ExpiresAt:     timestamp.Add(5 * time.Minute),
+       ID:                common.ChallengeID("challenge-1"),
+       Email:             common.Email("pilot@example.com"),
+       CodeHash:          []byte("hash"),
+       PreferredLanguage: "en",
+       Status:            challenge.StatusPendingSend,
+       DeliveryState:     challenge.DeliveryPending,
+       CreatedAt:         timestamp,
+       ExpiresAt:         timestamp.Add(5 * time.Minute),
    }
}
diff --git a/authsession/internal/service/confirmemailcode/service.go b/authsession/internal/service/confirmemailcode/service.go
index f2b9ad2..4b4eb93 100644
--- a/authsession/internal/service/confirmemailcode/service.go
+++ b/authsession/internal/service/confirmemailcode/service.go
@@ -19,10 +19,9 @@ import (
)

const (
-   revokeReasonConfirmRace  common.RevokeReasonCode = "confirm_race_repair"
-   revokeActorTypeService   common.RevokeActorType  = "service"
-   revokeActorIDService     = "confirmemailcode"
-   defaultPreferredLanguage = "en"
+   revokeReasonConfirmRace common.RevokeReasonCode = "confirm_race_repair"
+   revokeActorTypeService  common.RevokeActorType  = "service"
+   revokeActorIDService    = "confirmemailcode"
)

// Input describes one public confirm-email-code request.
@@ -249,7 +248,7 @@ func (s *Service) Execute(ctx context.Context, input Input) (result Result, err
    ensureUserResult, err := s.userDirectory.EnsureUserByEmail(ctx, ports.EnsureUserInput{
        Email: current.Email,
        RegistrationContext: &ports.RegistrationContext{
-           PreferredLanguage: defaultPreferredLanguage,
+           PreferredLanguage: shared.ResolvePreferredLanguage(current.PreferredLanguage),
            TimeZone:          timeZone,
        },
    })
diff --git a/authsession/internal/service/confirmemailcode/service_test.go b/authsession/internal/service/confirmemailcode/service_test.go
index 7c44436..1057389 100644
--- a/authsession/internal/service/confirmemailcode/service_test.go
+++ b/authsession/internal/service/confirmemailcode/service_test.go
@@ -70,7 +70,12 @@ func TestExecuteConfirmsChallengeByCreatingUser(t *testing.T) {
    if err := deps.userDirectory.QueueCreatedUserIDs(common.UserID("user-created")); err != nil {
        require.Failf(t, "test failed", "QueueCreatedUserIDs() returned error: %v", err)
    }
-   if err := deps.challengeStore.Create(context.Background(), sentChallengeFixture(t, deps.hasher, "challenge-1", "new@example.com", "654321", deps.now.Add(-time.Minute), deps.now.Add(time.Minute))); err != nil {
+   record := sentChallengeFixture(t, deps.hasher, "challenge-1", "new@example.com", "654321", deps.now.Add(-time.Minute), deps.now.Add(time.Minute))
+   record.PreferredLanguage = "fr-FR"
+   if err := record.Validate(); err != nil {
+       require.Failf(t, "test failed", "Validate() returned error: %v", err)
+   }
+   if err := deps.challengeStore.Create(context.Background(), record); err != nil {
        require.Failf(t, "test failed", "Create() returned error: %v", err)
    }
@@ -88,12 +93,12 @@ func TestExecuteConfirmsChallengeByCreatingUser(t *testing.T) {
        require.Failf(t, "test failed", "Execute().DeviceSessionID = %q, want %q", result.DeviceSessionID, "device-session-1")
    }

-   record, err := deps.sessionStore.Get(context.Background(), common.DeviceSessionID("device-session-1"))
+   session, err := deps.sessionStore.Get(context.Background(), common.DeviceSessionID("device-session-1"))
    if err != nil {
        require.Failf(t, "test failed", "Get() returned error: %v", err)
    }
-   if record.UserID != common.UserID("user-created") {
-       require.Failf(t, "test failed", "session user id = %q, want %q", record.UserID, common.UserID("user-created"))
+   if session.UserID != common.UserID("user-created") {
+       require.Failf(t, "test failed", "session user id = %q, want %q", session.UserID, common.UserID("user-created"))
    }
}
@@ -556,7 +561,12 @@ func TestExecutePassesRegistrationContextToUserDirectory(t *testing.T) {
    if err := recordingDirectory.delegate.QueueCreatedUserIDs(common.UserID("user-created")); err != nil {
        require.Failf(t, "test failed", "QueueCreatedUserIDs() returned error: %v", err)
    }
-   if err := deps.challengeStore.Create(context.Background(), sentChallengeFixture(t, deps.hasher, "challenge-1", "new@example.com", "654321", deps.now.Add(-time.Minute), deps.now.Add(time.Minute))); err != nil {
+   record := sentChallengeFixture(t, deps.hasher, "challenge-1", "new@example.com", "654321", deps.now.Add(-time.Minute), deps.now.Add(time.Minute))
+   record.PreferredLanguage = "fr-FR"
+   if err := record.Validate(); err != nil {
+       require.Failf(t, "test failed", "Validate() returned error: %v", err)
+   }
+   if err := deps.challengeStore.Create(context.Background(), record); err != nil {
        require.Failf(t, "test failed", "Create() returned error: %v", err)
    }
@@ -589,8 +599,8 @@ func TestExecutePassesRegistrationContextToUserDirectory(t *testing.T) {
    if recordingDirectory.lastEnsureInput.RegistrationContext == nil {
        require.FailNow(t, "last ensure registration context = nil, want value")
    }
-   if recordingDirectory.lastEnsureInput.RegistrationContext.PreferredLanguage != "en" {
-       require.Failf(t, "test failed", "preferred language = %q, want %q", recordingDirectory.lastEnsureInput.RegistrationContext.PreferredLanguage, "en")
+   if recordingDirectory.lastEnsureInput.RegistrationContext.PreferredLanguage != "fr-FR" {
+       require.Failf(t, "test failed", "preferred language = %q, want %q", recordingDirectory.lastEnsureInput.RegistrationContext.PreferredLanguage, "fr-FR")
    }
    if recordingDirectory.lastEnsureInput.RegistrationContext.TimeZone != confirmEmailCodeTimeZone {
        require.Failf(t, "test failed", "time zone = %q, want %q", recordingDirectory.lastEnsureInput.RegistrationContext.TimeZone, confirmEmailCodeTimeZone)
@@ -700,13 +710,14 @@ func sentChallengeFixture(
    }

    record := challenge.Challenge{
-       ID:            common.ChallengeID(challengeID),
-       Email:         common.Email(email),
-       CodeHash:      codeHash,
-       Status:        challenge.StatusSent,
-       DeliveryState: challenge.DeliverySent,
-       CreatedAt:     createdAt,
-       ExpiresAt:     expiresAt,
+       ID:                common.ChallengeID(challengeID),
+       Email:             common.Email(email),
+       CodeHash:          codeHash,
+       PreferredLanguage: "en",
+       Status:            challenge.StatusSent,
+       DeliveryState:     challenge.DeliverySent,
+       CreatedAt:         createdAt,
+       ExpiresAt:         expiresAt,
    }
    if err := record.Validate(); err != nil {
        require.Failf(t, "test failed", "Validate() returned error: %v", err)
diff --git a/authsession/internal/service/sendemailcode/service.go b/authsession/internal/service/sendemailcode/service.go
index d583bb4..d169e94 100644
--- a/authsession/internal/service/sendemailcode/service.go
+++ b/authsession/internal/service/sendemailcode/service.go
@@ -20,6 +20,11 @@ type Input struct {
    // Email is the user-supplied e-mail address that should receive the login
    // code.
    Email string
+
+   // AcceptLanguage stores the optional public Accept-Language header forwarded
+   // by gateway for auth-mail localization and create-only registration
+   // context.
+   AcceptLanguage string
}

// Result describes one public send-email-code response.
@@ -160,6 +165,7 @@ func (s *Service) Execute(ctx context.Context, input Input) (result Result, err
    if err != nil {
        return Result{}, err
    }
+   preferredLanguage := shared.ResolvePreferredLanguage(input.AcceptLanguage)
    now := s.clock.Now().UTC()

    abuseResult, err := s.abuseProtector.CheckAndReserve(ctx, ports.SendEmailCodeAbuseInput{
@@ -191,13 +197,14 @@ func (s *Service) Execute(ctx context.Context, input Input) (result Result, err
        return Result{}, shared.InternalError(err)
    }
    pending := challenge.Challenge{
-       ID:            challengeID,
-       Email:         email,
-       CodeHash:      codeHash,
-       Status:        pendingStatus,
-       DeliveryState: pendingDeliveryState,
-       CreatedAt:     now,
-       ExpiresAt:     now.Add(challenge.InitialTTL),
+       ID:                challengeID,
+       Email:             email,
+       CodeHash:          codeHash,
+       PreferredLanguage: preferredLanguage,
+       Status:            pendingStatus,
+       DeliveryState:     pendingDeliveryState,
+       CreatedAt:         now,
+       ExpiresAt:         now.Add(challenge.InitialTTL),
    }
    if err := pending.Validate(); err != nil {
        return Result{}, shared.InternalError(err)
@@ -240,8 +247,10 @@ func (s *Service) Execute(ctx context.Context, input Input) (result Result, err
        return result, err
    default:
        deliveryResult, err := s.mailSender.SendLoginCode(ctx, ports.SendLoginCodeInput{
-           Email: email,
-           Code:  code,
+           Email:          email,
+           IdempotencyKey: challengeID.String(),
+           Code:           code,
+           Locale:         preferredLanguage,
        })
        if err != nil {
            final.Status = challenge.StatusFailed
diff --git a/authsession/internal/service/sendemailcode/service_test.go b/authsession/internal/service/sendemailcode/service_test.go
index df781a7..af01b3f 100644
--- a/authsession/internal/service/sendemailcode/service_test.go
+++ b/authsession/internal/service/sendemailcode/service_test.go
@@ -72,6 +72,9 @@ func TestExecuteSendsChallengeForExistingAndCreatableUsers(t *testing.T) {
    if len(mailSender.RecordedInputs()) != 1 {
        require.Failf(t, "test failed", "RecordedInputs() length = %d, want 1", len(mailSender.RecordedInputs()))
    }
+   if mailSender.RecordedInputs()[0].Locale != "en" {
+       require.Failf(t, "test failed", "mail locale = %q, want %q", mailSender.RecordedInputs()[0].Locale, "en")
+   }

    record, err := challengeStore.Get(context.Background(), common.ChallengeID("challenge-1"))
    if err != nil {
@@ -83,6 +86,9 @@ func TestExecuteSendsChallengeForExistingAndCreatableUsers(t *testing.T) {
    if record.Attempts.Send != 1 {
        require.Failf(t, "test failed", "Attempts.Send = %d, want 1", record.Attempts.Send)
    }
+   if record.PreferredLanguage != "en" {
+       require.Failf(t, "test failed", "PreferredLanguage = %q, want %q", record.PreferredLanguage, "en")
+   }
    if string(record.CodeHash) == "654321" {
        require.FailNow(t, "CodeHash stored cleartext code")
    }
@@ -131,6 +137,9 @@ func TestExecuteSuppressesDeliveryForBlockedEmail(t *testing.T) {
    if record.Status != challenge.StatusDeliverySuppressed || record.DeliveryState != challenge.DeliverySuppressed {
        require.Failf(t, "test failed", "challenge state = %q/%q", record.Status, record.DeliveryState)
    }
+   if record.PreferredLanguage != "en" {
+       require.Failf(t, "test failed", "PreferredLanguage = %q, want %q", record.PreferredLanguage, "en")
+   }
}

func TestExecuteHandlesMailSenderSuppressedOutcome(t *testing.T) {
@@ -166,6 +175,9 @@ func TestExecuteHandlesMailSenderSuppressedOutcome(t *testing.T) {
    if record.Status != challenge.StatusDeliverySuppressed || record.DeliveryState != challenge.DeliverySuppressed {
        require.Failf(t, "test failed", "challenge state = %q/%q", record.Status, record.DeliveryState)
    }
+   if record.PreferredLanguage != "en" {
+       require.Failf(t, "test failed", "PreferredLanguage = %q, want %q", record.PreferredLanguage, "en")
+   }
}

func TestExecuteMarksChallengeFailedWhenMailSenderFails(t *testing.T) {
@@ -199,6 +211,9 @@ func TestExecuteMarksChallengeFailedWhenMailSenderFails(t *testing.T) {
    if record.Status != challenge.StatusFailed || record.DeliveryState != challenge.DeliveryFailed {
        require.Failf(t, "test failed", "challenge state = %q/%q", record.Status, record.DeliveryState)
    }
+   if record.PreferredLanguage != "en" {
+       require.Failf(t, "test failed", "PreferredLanguage = %q, want %q", record.PreferredLanguage, "en")
+   }
}

func TestExecuteReturnsInvalidRequestForBadEmail(t *testing.T) {
@@ -308,3 +323,69 @@ func TestExecuteSetsChallengeExpirationFromInitialTTL(t *testing.T) {
        require.Failf(t, "test failed", "ExpiresAt = %s, want %s", record.ExpiresAt, wantExpiresAt)
    }
}
+
+func TestExecuteResolvesPreferredLanguageFromAcceptLanguage(t *testing.T) {
+   t.Parallel()
+
+   tests := []struct {
+       name              string
+       acceptLanguage    string
+       wantPreferredLang string
+   }{
+       {
+           name:              "canonical valid tag wins",
+           acceptLanguage:    "fr-FR, en;q=0.8",
+           wantPreferredLang: "fr-FR",
+       },
+       {
+           name:              "wildcard falls back to english",
+           acceptLanguage:    "*",
+           wantPreferredLang: "en",
+       },
+       {
+           name:              "malformed header falls back to english",
+           acceptLanguage:    "fr-FR, @@",
+           wantPreferredLang: "en",
+       },
+       {
+           name:              "missing header falls back to english",
+           acceptLanguage:    "",
+           wantPreferredLang: "en",
+       },
+   }
+
+   for _, tt := range tests {
+       tt := tt
+
+       t.Run(tt.name, func(t *testing.T) {
+           t.Parallel()
+
+           challengeStore := &testkit.InMemoryChallengeStore{}
+           mailSender := &testkit.RecordingMailSender{}
+           service, err := New(
+               challengeStore,
+               &testkit.InMemoryUserDirectory{},
+               &testkit.SequenceIDGenerator{ChallengeIDs: []common.ChallengeID{"challenge-1"}},
+               testkit.FixedCodeGenerator{Code: "654321"},
+               testkit.DeterministicCodeHasher{},
+               mailSender,
+               testkit.FixedClock{Time: time.Unix(10, 0).UTC()},
+           )
+           require.NoError(t, err)
+
+           _, err = service.Execute(context.Background(), Input{
+               Email:          "pilot@example.com",
+               AcceptLanguage: tt.acceptLanguage,
+           })
+           require.NoError(t, err)
+
+           record, err := challengeStore.Get(context.Background(), common.ChallengeID("challenge-1"))
+           require.NoError(t, err)
+           require.Equal(t, tt.wantPreferredLang, record.PreferredLanguage)
+
+           attempts := mailSender.RecordedInputs()
+           require.Len(t, attempts, 1)
+           require.Equal(t, tt.wantPreferredLang, attempts[0].Locale)
+       })
+   }
+}
diff --git a/authsession/internal/service/sendemailcode/stub_sender_test.go b/authsession/internal/service/sendemailcode/stub_sender_test.go
index 5dc113a..7b66099 100644
--- a/authsession/internal/service/sendemailcode/stub_sender_test.go
+++ b/authsession/internal/service/sendemailcode/stub_sender_test.go
@@ -93,6 +93,7 @@ func TestExecuteWithStubSender(t *testing.T) {
            require.Len(t, attempts, tt.wantRecordedAttempt)
            assert.Equal(t, common.Email("pilot@example.com"), attempts[0].Input.Email)
            assert.Equal(t, "654321", attempts[0].Input.Code)
+           assert.Equal(t, "en", attempts[0].Input.Locale)
        })
    }
}
diff --git a/authsession/internal/service/shared/preferred_language.go b/authsession/internal/service/shared/preferred_language.go
new file mode 100644
index 0000000..b7cab4f
--- /dev/null
+++ b/authsession/internal/service/shared/preferred_language.go
@@ -0,0 +1,27 @@
+package shared
+
+import "golang.org/x/text/language"
+
+const defaultPreferredLanguage = "en"
+
+// ResolvePreferredLanguage returns the first canonical BCP 47 language tag
+// accepted from value, or the stable "en" fallback when the input is absent,
+// malformed, or too unspecific for auth registration purposes.
+func ResolvePreferredLanguage(value string) string {
+   tags, _, err := language.ParseAcceptLanguage(value)
+   if err != nil {
+       return defaultPreferredLanguage
+   }
+
+   for _, tag := range tags {
+       canonical := tag.String()
+       switch canonical {
+       case "", "und", "mul":
+           continue
+       default:
+           return canonical
+       }
+   }
+
+   return defaultPreferredLanguage
+}
diff --git a/authsession/internal/service/shared/preferred_language_test.go b/authsession/internal/service/shared/preferred_language_test.go
new file mode 100644
index 0000000..4fd9dee
--- /dev/null
+++ b/authsession/internal/service/shared/preferred_language_test.go
@@ -0,0 +1,51 @@
+package shared
+
+import "testing"
+
+func TestResolvePreferredLanguage(t *testing.T) {
+   t.Parallel()
+
+   tests := []struct {
+       name  string
+       value string
+       want  string
+   }{
+       {
+           name:  "canonical valid tag",
+           value: "fr-FR, en;q=0.8",
+           want:  "fr-FR",
+       },
+       {
+           name:  "quality ordering",
+           value: "en-US;q=0.9, fr",
+           want:  "fr",
+       },
+       {
+           name:  "wildcard falls back",
+           value: "*",
+           want:  "en",
+       },
+       {
+           name:  "malformed falls back",
+           value: "fr-FR, @@",
+           want:  "en",
+       },
+       {
+           name:  "missing falls back",
+           value: "",
+           want:  "en",
+       },
+   }
+
+   for _, tt := range tests {
+       tt := tt
+
+       t.Run(tt.name, func(t *testing.T) {
+           t.Parallel()
+
+           if got := ResolvePreferredLanguage(tt.value); got != tt.want {
+               t.Fatalf("ResolvePreferredLanguage(%q) = %q, want %q", tt.value, got, tt.want)
+           }
+       })
+   }
+}
diff --git a/authsession/internal/testkit/challenge_store_test.go b/authsession/internal/testkit/challenge_store_test.go
index 593b3ad..e49cf14 100644
--- a/authsession/internal/testkit/challenge_store_test.go
+++ b/authsession/internal/testkit/challenge_store_test.go
@@ -69,12 +69,13 @@ func TestInMemoryChallengeStoreCompareAndSwapConflict(t *testing.T) {

func challengeFixture() challenge.Challenge {
    timestamp := time.Unix(20, 0).UTC()
    return challenge.Challenge{
-       ID:            common.ChallengeID("challenge-1"),
-       Email:         common.Email("pilot@example.com"),
-       CodeHash:      []byte("hash"),
-       Status:        challenge.StatusPendingSend,
-       DeliveryState: challenge.DeliveryPending,
-       CreatedAt:     timestamp,
-       ExpiresAt:     timestamp.Add(10 * time.Minute),
+       ID:                common.ChallengeID("challenge-1"),
+       Email:             common.Email("pilot@example.com"),
+       CodeHash:          []byte("hash"),
+       PreferredLanguage: "en",
+       Status:            challenge.StatusPendingSend,
+       DeliveryState:     challenge.DeliveryPending,
+       CreatedAt:         timestamp,
+       ExpiresAt:         timestamp.Add(10 * time.Minute),
    }
}
diff --git a/authsession/internal/testkit/mail_sender.go b/authsession/internal/testkit/mail_sender.go
index 5eb86c8..f33c90a 100644
--- a/authsession/internal/testkit/mail_sender.go
+++ b/authsession/internal/testkit/mail_sender.go
@@ -8,7 +8,8 @@ import (
)

// RecordingMailSender is a deterministic MailSender double that records every
-// delivery request and returns preconfigured outcomes or errors.
+// delivery request, including the auth challenge-derived idempotency key, and
+// returns preconfigured outcomes or errors.
type RecordingMailSender struct {
    mu sync.Mutex
diff --git a/authsession/internal/testkit/support_test.go b/authsession/internal/testkit/support_test.go
index 4328107..6d5d1e7 100644
--- a/authsession/internal/testkit/support_test.go
+++ b/authsession/internal/testkit/support_test.go
@@ -97,8 +97,10 @@ func TestRecordingMailSender(t *testing.T) {
    }

    result, err := sender.SendLoginCode(context.Background(), ports.SendLoginCodeInput{
-       Email: common.Email("pilot@example.com"),
-       Code:  "654321",
+       Email:          common.Email("pilot@example.com"),
+       IdempotencyKey: "challenge-1",
+       Code:           "654321",
+       Locale:         "en",
    })
    if err != nil {
        require.Failf(t, "test failed", "SendLoginCode() returned error: %v", err)
    }
diff --git a/authsession/mail_service_rest_compatibility_test.go b/authsession/mail_service_rest_compatibility_test.go
index c64689a..2abcfd1 100644
--- a/authsession/mail_service_rest_compatibility_test.go
+++ b/authsession/mail_service_rest_compatibility_test.go
@@ -5,6 +5,7 @@ import (
    "io"
    "net/http"
    "net/http/httptest"
+   "strings"
    "sync"
    "testing"
    "time"
@@ -35,6 +36,10 @@ func TestMailServiceRESTCompatibilitySendEmailCodeSent(t *testing.T) {
    assert.Equal(t, http.StatusOK, response.StatusCode)
    assert.JSONEq(t, `{"challenge_id":"challenge-1"}`, response.Body)
    assert.Equal(t, 1, harness.mailServer.CallCount())
+   deliveries := harness.mailServer.RecordedDeliveries()
+   require.Len(t, deliveries, 1)
+   assert.Equal(t, "en", deliveries[0].Locale)
+   assert.Equal(t, "challenge-1", deliveries[0].IdempotencyKey)
}

func TestMailServiceRESTCompatibilitySendEmailCodeSuppressed(t *testing.T) {
@@ -99,6 +104,29 @@ func TestMailServiceRESTCompatibilityThrottledSendSkipsMailService(t *testing.T)
    assert.Equal(t, 1, harness.mailServer.CallCount())
}

+func TestMailServiceRESTCompatibilitySendEmailCodeForwardsLocalizedLocale(t *testing.T) {
+   t.Parallel()
+
+   harness := newMailServiceRESTCompatibilityHarness(t, mailServiceRESTCompatibilityOptions{
+       MailStatusCode: http.StatusOK,
+       MailResponse:   `{"outcome":"sent"}`,
+   })
+
+   response := gatewayCompatibilityPostJSONWithHeaders(
+       t,
+       harness.publicBaseURL+"/api/v1/public/auth/send-email-code",
+       `{"email":"pilot@example.com"}`,
+       map[string]string{"Accept-Language": "fr-FR, en;q=0.8"},
+   )
+   assert.Equal(t, http.StatusOK, response.StatusCode)
+   assert.JSONEq(t, `{"challenge_id":"challenge-1"}`, response.Body)
+
+   deliveries := harness.mailServer.RecordedDeliveries()
+   require.Len(t, deliveries, 1)
+   assert.Equal(t, "fr-FR", deliveries[0].Locale)
+   assert.Equal(t, "challenge-1", deliveries[0].IdempotencyKey)
+}
+
type mailServiceRESTCompatibilityOptions struct {
    MailStatusCode int
    MailResponse   string
@@ -191,6 +219,14 @@ type mailServiceStubServer struct {
    statusCode int
    response   string
    callCount  int
+   deliveries []mailServiceStubDelivery
+}
+
+type mailServiceStubDelivery struct {
+   Email          string
+   Code           string
+   Locale         string
+   IdempotencyKey string
}

func newMailServiceStubServer(statusCode int, response string) *mailServiceStubServer {
@@ -206,17 +242,18 @@ func (s *mailServiceStubServer) Handler() http.Handler {
        http.NotFound(writer, request)
        return
    }
-
-   s.mu.Lock()
-   s.callCount++
-   s.mu.Unlock()
+   if strings.TrimSpace(request.Header.Get("Idempotency-Key")) == "" {
+       http.Error(writer, "Idempotency-Key header must not be empty", http.StatusBadRequest)
+       return
+   }

    decoder := json.NewDecoder(request.Body)
    decoder.DisallowUnknownFields()

    var body struct {
-       Email string `json:"email"`
-       Code  string `json:"code"`
+       Email  string `json:"email"`
+       Code   string `json:"code"`
+       Locale string `json:"locale"`
    }
    if err := decoder.Decode(&body); err != nil {
        http.Error(writer, err.Error(), http.StatusBadRequest)
@@ -231,6 +268,16 @@ func (s *mailServiceStubServer) Handler() http.Handler {
        return
    }

+   s.mu.Lock()
+   s.callCount++
+   s.deliveries = append(s.deliveries, mailServiceStubDelivery{
+       Email:          body.Email,
+       Code:           body.Code,
+       Locale:         body.Locale,
+       IdempotencyKey: request.Header.Get("Idempotency-Key"),
+   })
+   s.mu.Unlock()
+
    writer.Header().Set("Content-Type", "application/json")
    writer.WriteHeader(s.statusCode)
    _, _ = io.WriteString(writer, s.response)
@@ -243,3 +290,12 @@ func (s *mailServiceStubServer) CallCount() int {

    return s.callCount
}
+
+func (s *mailServiceStubServer) RecordedDeliveries() []mailServiceStubDelivery {
+   s.mu.Lock()
+   defer s.mu.Unlock()
+
+   cloned := make([]mailServiceStubDelivery, len(s.deliveries))
+   copy(cloned, s.deliveries)
+   return cloned
+}
diff --git a/authsession/production_hardening_test.go b/authsession/production_hardening_test.go
index 6355105..833f955 100644
--- a/authsession/production_hardening_test.go
+++ b/authsession/production_hardening_test.go
@@ -732,13 +732,14 @@ func TestProductionHardeningExpiredChallengeReturnsExpiredDuringGraceAndNotFound
    require.NoError(t, err)

    record := challenge.Challenge{
-       ID:            common.ChallengeID("challenge-expired"),
-       Email:         common.Email(gatewayCompatibilityEmail),
-       CodeHash:      codeHash,
-       Status:        challenge.StatusSent,
-       DeliveryState: challenge.DeliverySent,
-       CreatedAt:     env.now.Add(-2 * time.Minute),
-       ExpiresAt:     env.now.Add(-time.Second),
+       ID:                common.ChallengeID("challenge-expired"),
+       Email:             common.Email(gatewayCompatibilityEmail),
+       CodeHash:          codeHash,
+       PreferredLanguage: "en",
+       Status:            challenge.StatusSent,
+       DeliveryState:     challenge.DeliverySent,
+       CreatedAt:         env.now.Add(-2 * time.Minute),
+       ExpiresAt:         env.now.Add(-time.Second),
    }
    require.NoError(t, record.Validate())
    require.NoError(t, app.challengeStore.Create(context.Background(), record))
}
diff --git a/authsession/user_service_rest_compatibility_test.go b/authsession/user_service_rest_compatibility_test.go
index 6265a34..6f05f3f 100644
--- a/authsession/user_service_rest_compatibility_test.go
+++ b/authsession/user_service_rest_compatibility_test.go
@@ -10,6 +10,7 @@ import (
    "net/http/httptest"
    "net/url"
    "strings"
+   "sync"
    "testing"
    "time"
@@ -57,7 +58,9 @@ func TestUserServiceRESTCompatibilityPublicSendUsesResolveByEmailOutcomes(t *tes
    attempts := harness.mailSender.RecordedAttempts()
    require.Len(t, attempts, 2)
    assert.Equal(t, common.Email("existing@example.com"), attempts[0].Input.Email)
+   assert.Equal(t, "en", attempts[0].Input.Locale)
    assert.Equal(t, common.Email("creatable@example.com"), attempts[1].Input.Email)
+   assert.Equal(t, "en", attempts[1].Input.Locale)
}

func TestUserServiceRESTCompatibilityPublicConfirmUsesEnsureOutcomes(t *testing.T) {
@@ -162,12 +165,34 @@ func TestUserServiceRESTCompatibilityInternalBlockUserUsesRESTClient(t *testing.
    })
}

+func TestUserServiceRESTCompatibilityAcceptLanguageDrivesMailLocaleAndRegistrationContext(t *testing.T) {
+   t.Parallel()
+
+   harness := newUserServiceRESTCompatibilityHarness(t)
+   require.NoError(t, harness.directory.QueueCreatedUserIDs(common.UserID("user-created")))
+
+   challengeID := harness.sendChallengeIDWithAcceptLanguage(t, "localized@example.com", "fr-FR, en;q=0.8", "fr-FR")
+
+   attempts := harness.mailSender.RecordedAttempts()
+   require.Len(t, attempts, 1)
+   assert.Equal(t, "fr-FR", attempts[0].Input.Locale)
+
+   response := gatewayCompatibilityPostJSONValue(
+       t,
+       harness.publicBaseURL+"/api/v1/public/auth/confirm-email-code",
+       gatewayCompatibilityConfirmRequest(challengeID, userServiceRESTCompatibilityCode, gatewayCompatibilityClientPublicKey),
+   )
+   assert.Equal(t, http.StatusOK, response.StatusCode)
+   assert.JSONEq(t, `{"device_session_id":"device-session-1"}`, response.Body)
+}
+
type userServiceRESTCompatibilityHarness struct {
-   publicBaseURL   string
-   internalBaseURL string
-   mailSender      *mail.StubSender
-   sessionStore    *testkit.InMemorySessionStore
-   directory       *userservice.StubDirectory
+   publicBaseURL                 string
+   internalBaseURL               string
+   mailSender                    *mail.StubSender
+   sessionStore                  *testkit.InMemorySessionStore
+   directory                     *userservice.StubDirectory
+   preferredLanguageExpectations *preferredLanguageExpectationStore
}

func newUserServiceRESTCompatibilityHarness(t *testing.T) userServiceRESTCompatibilityHarness {
@@ -176,8 +201,9 @@ func newUserServiceRESTCompatibilityHarness(t *testing.T) userServiceRESTCompati
    challengeStore := &testkit.InMemoryChallengeStore{}
    sessionStore := &testkit.InMemorySessionStore{}
    directory := &userservice.StubDirectory{}
+   preferredLanguageExpectations := newPreferredLanguageExpectationStore()

-   userServiceServer := httptest.NewServer(newUserServiceStubHandler(directory))
+   userServiceServer := httptest.NewServer(newUserServiceStubHandler(directory, preferredLanguageExpectations))
    t.Cleanup(userServiceServer.Close)

    userDirectory, err := userservice.NewRESTClient(userservice.Config{
@@ -261,18 +287,31 @@ func newUserServiceRESTCompatibilityHarness(t *testing.T) userServiceRESTCompati
    gatewayCompatibilityRunServer(t, internalServer.Run, internalServer.Shutdown, internalCfg.Addr)

    return userServiceRESTCompatibilityHarness{
-       publicBaseURL:   "http://" + publicCfg.Addr,
-       internalBaseURL: "http://" + internalCfg.Addr,
-       mailSender:      mailSender,
-       sessionStore:    sessionStore,
-       directory:       directory,
+       publicBaseURL:                 "http://" + publicCfg.Addr,
+       internalBaseURL:               "http://" + internalCfg.Addr,
+       mailSender:                    mailSender,
+       sessionStore:                  sessionStore,
+       directory:                     directory,
+       preferredLanguageExpectations: preferredLanguageExpectations,
    }
}

func (h userServiceRESTCompatibilityHarness) sendChallengeID(t *testing.T, email string) string {
    t.Helper()

-   response := gatewayCompatibilityPostJSON(t, h.publicBaseURL+"/api/v1/public/auth/send-email-code", fmt.Sprintf(`{"email":"%s"}`, email))
+   return h.sendChallengeIDWithAcceptLanguage(t, email, "", "en")
+}
+
+func (h userServiceRESTCompatibilityHarness) sendChallengeIDWithAcceptLanguage(t *testing.T, email string, acceptLanguage string, expectedPreferredLanguage string) string {
+   t.Helper()
+
+   h.preferredLanguageExpectations.Set(email, expectedPreferredLanguage)
+   response := gatewayCompatibilityPostJSONWithHeaders(
+       t,
+ h.publicBaseURL+"/api/v1/public/auth/send-email-code", + fmt.Sprintf(`{"email":"%s"}`, email), + map[string]string{"Accept-Language": acceptLanguage}, + ) assert.Equal(t, http.StatusOK, response.StatusCode) var body struct { @@ -284,7 +323,7 @@ func (h userServiceRESTCompatibilityHarness) sendChallengeID(t *testing.T, email return body.ChallengeID } -func newUserServiceStubHandler(directory *userservice.StubDirectory) http.Handler { +func newUserServiceStubHandler(directory *userservice.StubDirectory, preferredLanguageExpectations *preferredLanguageExpectationStore) http.Handler { return http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) { switch { case request.Method == http.MethodPost && request.URL.Path == "/api/v1/internal/user-resolutions/by-email": @@ -349,8 +388,13 @@ func newUserServiceStubHandler(directory *userservice.StubDirectory) http.Handle writeUserServiceStubError(writer, http.StatusBadRequest, errors.New("registration_context must be present")) return } - if ensureInput.RegistrationContext.PreferredLanguage != "en" { - writeUserServiceStubError(writer, http.StatusBadRequest, errors.New("registration_context.preferred_language must equal en during rollout")) + expectedPreferredLanguage := preferredLanguageExpectations.Expected(input.Email) + if ensureInput.RegistrationContext.PreferredLanguage != expectedPreferredLanguage { + writeUserServiceStubError( + writer, + http.StatusBadRequest, + fmt.Errorf("registration_context.preferred_language must equal %s", expectedPreferredLanguage), + ) return } if ensureInput.RegistrationContext.TimeZone != gatewayCompatibilityTimeZone { @@ -434,6 +478,44 @@ func newUserServiceStubHandler(directory *userservice.StubDirectory) http.Handle }) } +type preferredLanguageExpectationStore struct { + mu sync.Mutex + byEmail map[string]string +} + +func newPreferredLanguageExpectationStore() *preferredLanguageExpectationStore { + return &preferredLanguageExpectationStore{ + byEmail: 
make(map[string]string), + } +} + +func (s *preferredLanguageExpectationStore) Set(email string, preferredLanguage string) { + if s == nil { + return + } + + s.mu.Lock() + defer s.mu.Unlock() + + s.byEmail[email] = preferredLanguage +} + +func (s *preferredLanguageExpectationStore) Expected(email string) string { + if s == nil { + return "en" + } + + s.mu.Lock() + defer s.mu.Unlock() + + preferredLanguage := s.byEmail[email] + if preferredLanguage == "" { + return "en" + } + + return preferredLanguage +} + func decodeUserServiceStubRequest(writer http.ResponseWriter, request *http.Request, target any) bool { decoder := json.NewDecoder(request.Body) decoder.DisallowUnknownFields() diff --git a/client/go.mod b/client/go.mod index 15f4e51..458b5e5 100644 --- a/client/go.mod +++ b/client/go.mod @@ -40,7 +40,7 @@ require ( golang.org/x/image v0.36.0 // indirect golang.org/x/net v0.52.0 // indirect golang.org/x/sys v0.42.0 // indirect - golang.org/x/text v0.35.0 // indirect + golang.org/x/text v0.36.0 // indirect gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect gopkg.in/yaml.v3 v3.0.1 // indirect ) diff --git a/client/go.sum b/client/go.sum index d63ecdb..f9915af 100644 --- a/client/go.sum +++ b/client/go.sum @@ -74,7 +74,7 @@ golang.org/x/image v0.36.0 h1:Iknbfm1afbgtwPTmHnS2gTM/6PPZfH+z2EFuOkSbqwc= golang.org/x/image v0.36.0/go.mod h1:YsWD2TyyGKiIX1kZlu9QfKIsQ4nAAK9bdgdrIsE7xy4= golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0= golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo= -golang.org/x/text v0.35.0 h1:JOVx6vVDFokkpaq1AEptVzLTpDe9KGpj5tR4/X+ybL8= +golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod 
h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= diff --git a/game/go.mod b/game/go.mod index 5369e82..97ff1e2 100644 --- a/game/go.mod +++ b/game/go.mod @@ -39,7 +39,7 @@ require ( golang.org/x/crypto v0.49.0 // indirect golang.org/x/net v0.52.0 // indirect golang.org/x/sys v0.42.0 // indirect - golang.org/x/text v0.35.0 // indirect + golang.org/x/text v0.36.0 // indirect google.golang.org/protobuf v1.36.11 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect ) diff --git a/game/go.sum b/game/go.sum index 464a5d9..d498415 100644 --- a/game/go.sum +++ b/game/go.sum @@ -74,7 +74,7 @@ golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4= golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo= -golang.org/x/text v0.35.0 h1:JOVx6vVDFokkpaq1AEptVzLTpDe9KGpj5tR4/X+ybL8= +golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg= google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE= google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= diff --git a/gateway/README.md b/gateway/README.md index 912b505..0f7bd7b 100644 --- a/gateway/README.md +++ b/gateway/README.md @@ -116,14 +116,19 @@ The public auth JSON contract uses a challenge-token flow: `client_public_key`, and `time_zone`, then returns `device_session_id`. +The JSON body for `send-email-code` remains unchanged, but gateway may also +consume the standard `Accept-Language` header on that route. Gateway resolves +the first supported BCP 47 language tag, falls back to `en` when needed, and +forwards that derived preferred-language candidate to +`Auth / Session Service` for localized auth mail and possible first-user +creation. 
The public JSON DTO itself remains unchanged. `client_public_key` is the standard base64-encoded raw 32-byte Ed25519 public key for the device session being created. `time_zone` is the client-selected IANA time zone name forwarded unchanged to `Auth / Session Service`. -The current create-path source of truth for `preferred_language` is still the -temporary authsession-to-user rollout using `"en"`. Gateway-side language -derivation is a later rollout. The public `confirm-email-code` DTO itself -remains unchanged. +The current create-path source of truth for `preferred_language` is the +language candidate derived from public `Accept-Language`, with fallback to +`en`. The public `confirm-email-code` DTO itself remains unchanged. These routes remain unauthenticated and delegate only through an injected `AuthServiceClient`. diff --git a/gateway/TODO.md b/gateway/TODO.md index f9e1e12..7a86b06 100644 --- a/gateway/TODO.md +++ b/gateway/TODO.md @@ -1,10 +1,14 @@ # TODOs -## 1. Suggest User's Preferred Language when registering a new User +## 1. Improve Preferred-Language Fallback after the Current Accept-Language Rollout -Upon user's device/session registration flow, `preferred_language` value -must be obtained via existing [geoip](../pkg/geoip) package by returned -country. -The derived value must be emitted as a valid BCP 47 language tag because -`User Service` now validates that contract semantically on create. -When geoip fails to return country by IP, fallback is `en`. +The current auth-registration flow derives the preferred-language candidate +from the public `Accept-Language` header and falls back to `en` when no +supported tag is available. 
+ +A later improvement may use the existing [geoip](../pkg/geoip) package as an +additional fallback when `Accept-Language` is absent or unusable, but it must: + +- preserve the current public JSON DTOs +- continue emitting a valid BCP 47 tag for `User Service` +- keep `en` as the final safe fallback diff --git a/gateway/go.mod b/gateway/go.mod index 70b6207..ca06495 100644 --- a/gateway/go.mod +++ b/gateway/go.mod @@ -23,6 +23,7 @@ require ( go.opentelemetry.io/otel/sdk/metric v1.43.0 go.opentelemetry.io/otel/trace v1.43.0 go.uber.org/zap v1.27.1 + golang.org/x/text v0.36.0 golang.org/x/time v0.15.0 google.golang.org/grpc v1.80.0 google.golang.org/protobuf v1.36.11 @@ -56,6 +57,7 @@ require ( github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 // indirect github.com/josharian/intern v1.0.0 // indirect github.com/json-iterator/go v1.1.12 // indirect + github.com/klauspost/compress v1.18.5 // indirect github.com/klauspost/cpuid/v2 v2.3.0 // indirect github.com/leodido/go-urn v1.4.0 // indirect github.com/mailru/easyjson v0.7.7 // indirect @@ -91,7 +93,6 @@ require ( golang.org/x/exp v0.0.0-20250813145105-42675adae3e6 // indirect golang.org/x/net v0.52.0 // indirect golang.org/x/sys v0.42.0 // indirect - golang.org/x/text v0.35.0 // indirect google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 // indirect google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect diff --git a/gateway/go.sum b/gateway/go.sum index 0093f15..6db306d 100644 --- a/gateway/go.sum +++ b/gateway/go.sum @@ -17,9 +17,11 @@ github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdb github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA= github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0= github.com/bytedance/gopkg v0.1.4 h1:oZnQwnX82KAIWb7033bEwtxvTqXcYMxDBaQxo5JJHWM= +github.com/bytedance/gopkg v0.1.4/go.mod 
h1:v1zWfPm21Fb+OsyXN2VAHdL6TBb2L88anLQgdyje6R4= github.com/bytedance/sonic v1.15.0 h1:/PXeWFaR5ElNcVE84U0dOHjiMHQOwNIx3K4ymzh/uSE= github.com/bytedance/sonic v1.15.0/go.mod h1:tFkWrPz0/CUCLEF4ri4UkHekCIcdnkqXw9VduqpJh0k= github.com/bytedance/sonic/loader v0.5.1 h1:Ygpfa9zwRCCKSlrp5bBP/b/Xzc3VxsAW+5NIYXrOOpI= +github.com/bytedance/sonic/loader v0.5.1/go.mod h1:AR4NYCk5DdzZizZ5djGqQ92eEhCCcdf5x77udYiSJRo= github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM= github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw= github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= @@ -29,12 +31,15 @@ github.com/cloudwego/base64x v0.1.6/go.mod h1:OFcloc187FXDaYHvrNIjxSe8ncn0OOM8gE github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78= github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc= github.com/gabriel-vasile/mimetype v1.4.13 h1:46nXokslUBsAJE/wMsp5gtO500a4F3Nkz9Ufpk2AcUM= github.com/gabriel-vasile/mimetype v1.4.13/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s= github.com/getkin/kin-openapi v0.135.0 h1:751SjYfbiwqukYuVjwYEIKNfrSwS5YpA7DZnKSwQgtg= +github.com/getkin/kin-openapi v0.135.0/go.mod h1:6dd5FJl6RdX4usBtFBaQhk9q62Yb2J0Mk5IhUO/QqFI= github.com/gin-contrib/sse v1.1.1 h1:uGYpNwTacv5R68bSGMapo62iLTRa9l5zxGCps4hK6ko= +github.com/gin-contrib/sse v1.1.1/go.mod h1:QXzuVkA0YO7o/gun03UI1Q+FTI8ZV/n5t03kIQAI89s= github.com/gin-gonic/gin v1.12.0 
h1:b3YAbrZtnf8N//yjKeU2+MQsh2mY5htkZidOM7O0wG8= github.com/gin-gonic/gin v1.12.0/go.mod h1:VxccKfsSllpKshkBWgVgRniFFAzFb9csfngsqANjnLc= github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= @@ -53,9 +58,11 @@ github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/Nu github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY= github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY= github.com/go-playground/validator/v10 v10.30.2 h1:JiFIMtSSHb2/XBUbWM4i/MpeQm9ZK2xqPNk8vgvu5JQ= +github.com/go-playground/validator/v10 v10.30.2/go.mod h1:mAf2pIOVXjTEBrwUMGKkCWKKPs9NheYGabeB04txQSc= github.com/go-test/deep v1.0.8 h1:TDsG77qcSprGbC6vTN8OuXp5g+J+b5Pcguhf7Zt61VM= github.com/go-test/deep v1.0.8/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE= github.com/goccy/go-json v0.10.6 h1:p8HrPJzOakx/mn/bQtjgNjdTcN+/S6FcG2CTtQOrHVU= +github.com/goccy/go-json v0.10.6/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M= github.com/goccy/go-yaml v1.19.2 h1:PmFC1S6h8ljIz6gMRBopkjP1TVT7xuwrButHID66PoM= github.com/goccy/go-yaml v1.19.2/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA= github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= @@ -75,8 +82,7 @@ github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8Hm github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= -github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo= -github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ= +github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE= github.com/klauspost/cpuid/v2 v2.3.0 
h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y= github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= @@ -101,12 +107,16 @@ github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwd github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= github.com/oasdiff/yaml v0.0.9 h1:zQOvd2UKoozsSsAknnWoDJlSK4lC0mpmjfDsfqNwX48= +github.com/oasdiff/yaml v0.0.9/go.mod h1:8lvhgJG4xiKPj3HN5lDow4jZHPlx1i7dIwzkdAo6oAM= github.com/oasdiff/yaml3 v0.0.9 h1:rWPrKccrdUm8J0F3sGuU+fuh9+1K/RdJlWF7O/9yw2g= +github.com/oasdiff/yaml3 v0.0.9/go.mod h1:y5+oSEHCPT/DGrS++Wc/479ERge0zTFxaF8PbGKcg2o= github.com/pelletier/go-toml/v2 v2.3.0 h1:k59bC/lIZREW0/iVaQR8nDHxVq8OVlIzYCOJf421CaM= +github.com/pelletier/go-toml/v2 v2.3.0/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s= github.com/perimeterx/marshmallow v1.1.5/go.mod h1:dsXbUu8CRzfYP5a87xpp0xq9S3u0Vchtcl8we9tYaXw= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o= github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg= github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk= @@ -116,6 +126,7 @@ github.com/prometheus/common v0.67.5/go.mod h1:SjE/0MzDEEAyrdr5Gqc6G+sXI67maCxza github.com/prometheus/otlptranslator 
v1.0.0 h1:s0LJW/iN9dkIH+EnhiD3BlkkP5QVIUVEoIwkU+A6qos= github.com/prometheus/otlptranslator v1.0.0/go.mod h1:vRYWnXvI6aWGpsdY/mOT/cbeVRBlPWtBNDb7kGR3uKM= github.com/prometheus/procfs v0.20.1 h1:XwbrGOIplXW/AU3YhIhLODXMJYyC1isLFfYCsTEycfc= +github.com/prometheus/procfs v0.20.1/go.mod h1:o9EMBZGRyvDrSPH1RqdxhojkuXstoe4UlK79eF5TGGo= github.com/quic-go/qpack v0.6.0 h1:g7W+BMYynC1LbYLSqRt8PBg5Tgwxn214ZZR34VIOjz8= github.com/quic-go/qpack v0.6.0/go.mod h1:lUpLKChi8njB4ty2bFLX2x4gzDqXwUpaO1DP9qMDZII= github.com/quic-go/quic-go v0.59.0 h1:OLJkp1Mlm/aS7dpKgTc6cnpynnD2Xg7C1pwL6vy/SAw= @@ -152,20 +163,33 @@ go.mongodb.org/mongo-driver/v2 v2.5.0/go.mod h1:yOI9kBsufol30iFsl1slpdq1I0eHPzyb go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64= go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y= go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.68.0 h1:5FXSL2s6afUC1bzNzl1iedZZ8yqR7GOhbCoEXtyeK6Q= +go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.68.0/go.mod h1:MdHW7tLtkeGJnR4TyOrnd5D0zUGZQB1l84uHCe8hRpE= go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.67.0 h1:yI1/OhfEPy7J9eoa6Sj051C7n5dvpj0QX8g4sRchg04= go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.67.0/go.mod h1:NoUCKYWK+3ecatC4HjkRktREheMeEtrXoQxrqYFeHSc= go.opentelemetry.io/contrib/propagators/b3 v1.43.0 h1:CETqV3QLLPTy5yNrqyMr41VnAOOD4lsRved7n4QG00A= +go.opentelemetry.io/contrib/propagators/b3 v1.43.0/go.mod h1:Q4mCiCdziYzpNR0g+6UqVotAlCDZdzz6L8jwY4knOrw= go.opentelemetry.io/otel v1.43.0 h1:mYIM03dnh5zfN7HautFE4ieIig9amkNANT+xcVxAj9I= +go.opentelemetry.io/otel v1.43.0/go.mod h1:JuG+u74mvjvcm8vj8pI5XiHy1zDeoCS2LB1spIq7Ay0= go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 h1:88Y4s2C8oTui1LGM6bTWkw0ICGcOLCAI5l6zsD1j20k= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0/go.mod 
h1:Vl1/iaggsuRlrHf/hfPJPvVag77kKyvrLeD10kpMl+A= go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0 h1:RAE+JPfvEmvy+0LzyUA25/SGawPwIUbZ6u0Wug54sLc= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0/go.mod h1:AGmbycVGEsRx9mXMZ75CsOyhSP6MFIcj/6dnG+vhVjk= go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0 h1:3iZJKlCZufyRzPzlQhUIWVmfltrXuGyfjREgGP3UUjc= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0/go.mod h1:/G+nUPfhq2e+qiXMGxMwumDrP5jtzU+mWN7/sjT2rak= go.opentelemetry.io/otel/exporters/prometheus v0.65.0 h1:jOveH/b4lU9HT7y+Gfamf18BqlOuz2PWEvs8yM7Q6XE= +go.opentelemetry.io/otel/exporters/prometheus v0.65.0/go.mod h1:i1P8pcumauPtUI4YNopea1dhzEMuEqWP1xoUZDylLHo= go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0 h1:mS47AX77OtFfKG4vtp+84kuGSFZHTyxtXIN269vChY0= +go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0/go.mod h1:PJnsC41lAGncJlPUniSwM81gc80GkgWJWr3cu2nKEtU= go.opentelemetry.io/otel/metric v1.43.0 h1:d7638QeInOnuwOONPp4JAOGfbCEpYb+K6DVWvdxGzgM= +go.opentelemetry.io/otel/metric v1.43.0/go.mod h1:RDnPtIxvqlgO8GRW18W6Z/4P462ldprJtfxHxyKd2PY= go.opentelemetry.io/otel/sdk v1.43.0 h1:pi5mE86i5rTeLXqoF/hhiBtUNcrAGHLKQdhg4h4V9Dg= +go.opentelemetry.io/otel/sdk v1.43.0/go.mod h1:P+IkVU3iWukmiit/Yf9AWvpyRDlUeBaRg6Y+C58QHzg= go.opentelemetry.io/otel/sdk/metric v1.43.0 h1:S88dyqXjJkuBNLeMcVPRFXpRw2fuwdvfCGLEo89fDkw= +go.opentelemetry.io/otel/sdk/metric v1.43.0/go.mod h1:C/RJtwSEJ5hzTiUz5pXF1kILHStzb9zFlIEe85bhj6A= go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09nk+3A= +go.opentelemetry.io/otel/trace v1.43.0/go.mod h1:/QJhyVBUUswCphDVxq+8mld+AvhXZLhe+8WVFxiFff0= go.opentelemetry.io/proto/otlp v1.10.0 h1:IQRWgT5srOCYfiWnpqUYz9CVmbO8bFmKcwYxpuCSL2g= +go.opentelemetry.io/proto/otlp v1.10.0/go.mod h1:/CV4QoCR/S9yaPj8utp3lvQPoqMtxXdzn7ozvvozVqk= go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE= go.uber.org/atomic 
v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0= go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= @@ -177,22 +201,29 @@ go.uber.org/multierr v1.10.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN8 go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc= go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= go.yaml.in/yaml/v2 v2.4.4 h1:tuyd0P+2Ont/d6e2rl3be67goVK4R6deVxCUX5vyPaQ= +go.yaml.in/yaml/v2 v2.4.4/go.mod h1:gMZqIpDtDqOfM0uNfy0SkpRhvUryYH0Z6wdMYcacYXQ= go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc= go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg= golang.org/x/arch v0.25.0 h1:qnk6Ksugpi5Bz32947rkUgDt9/s5qvqDPl/gBKdMJLE= +golang.org/x/arch v0.25.0/go.mod h1:0X+GdSIP+kL5wPmpK7sdkEVTt2XoYP0cSjQSbZBwOi8= golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4= +golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA= golang.org/x/exp v0.0.0-20250813145105-42675adae3e6 h1:SbTAbRFnd5kjQXbczszQ0hdk3ctwYf3qBNH9jIsGclE= golang.org/x/exp v0.0.0-20250813145105-42675adae3e6/go.mod h1:4QTo5u+SEIbbKW1RacMZq1YEfOBqeXa19JeshGi+zc4= golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0= +golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo= -golang.org/x/text v0.35.0 h1:JOVx6vVDFokkpaq1AEptVzLTpDe9KGpj5tR4/X+ybL8= +golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw= +golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg= golang.org/x/time v0.15.0 h1:bbrp8t3bGUeFOx08pvsMYRTCVSMk89u4tKbNOZbp88U= golang.org/x/time v0.15.0/go.mod h1:Y4YMaQmXwGQZoFaVFk4YpCt4FLQMYKZe9oeV/f4MSno= gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4= 
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E= google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 h1:VPWxll4HlMw1Vs/qXtN7BvhZqsS9cdAittCNvVENElA= +google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:7QBABkRtR8z+TEnmXTqIqwJLlzrZKVfAUm7tY3yGv0M= google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 h1:m8qni9SQFH0tJc1X0vmnpw/0t+AImlSvp30sEupozUg= +google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8= google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM= google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4= google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE= diff --git a/gateway/internal/restapi/auth_service_http_client.go b/gateway/internal/restapi/auth_service_http_client.go index 65eec90..80afa05 100644 --- a/gateway/internal/restapi/auth_service_http_client.go +++ b/gateway/internal/restapi/auth_service_http_client.go @@ -92,7 +92,9 @@ func (c *HTTPAuthServiceClient) Close() error { // SendEmailCode delegates the public send-email-code route to the configured // Auth / Session Service public HTTP API. 
func (c *HTTPAuthServiceClient) SendEmailCode(ctx context.Context, input SendEmailCodeInput) (SendEmailCodeResult, error) { - payload, statusCode, err := c.doJSONRequest(ctx, authServiceSendEmailCodePath, input) + payload, statusCode, err := c.doJSONRequest(ctx, authServiceSendEmailCodePath, input, map[string]string{ + "Accept-Language": resolvePreferredLanguage(input.PreferredLanguage), + }) if err != nil { return SendEmailCodeResult{}, fmt.Errorf("send email code via auth service: %w", err) } @@ -123,7 +125,7 @@ func (c *HTTPAuthServiceClient) SendEmailCode(ctx context.Context, input SendEma // ConfirmEmailCode delegates the public confirm-email-code route to the // configured Auth / Session Service public HTTP API. func (c *HTTPAuthServiceClient) ConfirmEmailCode(ctx context.Context, input ConfirmEmailCodeInput) (ConfirmEmailCodeResult, error) { - payload, statusCode, err := c.doJSONRequest(ctx, authServiceConfirmEmailCodePath, input) + payload, statusCode, err := c.doJSONRequest(ctx, authServiceConfirmEmailCodePath, input, nil) if err != nil { return ConfirmEmailCodeResult{}, fmt.Errorf("confirm email code via auth service: %w", err) } @@ -151,7 +153,7 @@ func (c *HTTPAuthServiceClient) ConfirmEmailCode(ctx context.Context, input Conf } } -func (c *HTTPAuthServiceClient) doJSONRequest(ctx context.Context, path string, requestBody any) ([]byte, int, error) { +func (c *HTTPAuthServiceClient) doJSONRequest(ctx context.Context, path string, requestBody any, headers map[string]string) ([]byte, int, error) { if c == nil || c.httpClient == nil { return nil, 0, errors.New("nil client") } @@ -172,6 +174,12 @@ func (c *HTTPAuthServiceClient) doJSONRequest(ctx context.Context, path string, return nil, 0, fmt.Errorf("build request: %w", err) } request.Header.Set("Content-Type", "application/json") + for key, value := range headers { + if strings.TrimSpace(value) == "" { + continue + } + request.Header.Set(key, value) + } response, err := c.httpClient.Do(request) if err != 
nil { diff --git a/gateway/internal/restapi/auth_service_http_client_test.go b/gateway/internal/restapi/auth_service_http_client_test.go index d516807..b9f77a1 100644 --- a/gateway/internal/restapi/auth_service_http_client_test.go +++ b/gateway/internal/restapi/auth_service_http_client_test.go @@ -66,12 +66,14 @@ func TestHTTPAuthServiceClientSendEmailCodeSuccess(t *testing.T) { t.Parallel() var requestContentType string + var requestAcceptLanguage string var requestBody string server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { assert.Equal(t, http.MethodPost, r.Method) assert.Equal(t, authServiceSendEmailCodePath, r.URL.Path) requestContentType = r.Header.Get("Content-Type") + requestAcceptLanguage = r.Header.Get("Accept-Language") payload, err := io.ReadAll(r.Body) require.NoError(t, err) requestBody = string(payload) @@ -85,14 +87,35 @@ func TestHTTPAuthServiceClientSendEmailCodeSuccess(t *testing.T) { client := newTestHTTPAuthServiceClient(t, server) result, err := client.SendEmailCode(context.Background(), SendEmailCodeInput{ - Email: "pilot@example.com", + Email: "pilot@example.com", + PreferredLanguage: "fr-FR", }) require.NoError(t, err) assert.Equal(t, SendEmailCodeResult{ChallengeID: "challenge-123"}, result) assert.Equal(t, "application/json", requestContentType) + assert.Equal(t, "fr-FR", requestAcceptLanguage) assert.JSONEq(t, `{"email":"pilot@example.com"}`, requestBody) } +func TestHTTPAuthServiceClientSendEmailCodeDefaultsAcceptLanguageToEnglish(t *testing.T) { + t.Parallel() + + var requestAcceptLanguage string + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + requestAcceptLanguage = r.Header.Get("Accept-Language") + w.Header().Set("Content-Type", "application/json") + _, err := io.WriteString(w, `{"challenge_id":"challenge-123"}`) + require.NoError(t, err) + })) + defer server.Close() + + client := newTestHTTPAuthServiceClient(t, server) + + _, err := 
client.SendEmailCode(context.Background(), SendEmailCodeInput{Email: "pilot@example.com"}) + require.NoError(t, err) + assert.Equal(t, "en", requestAcceptLanguage) +} + func TestHTTPAuthServiceClientConfirmEmailCodeSuccess(t *testing.T) { t.Parallel() diff --git a/gateway/internal/restapi/preferred_language.go b/gateway/internal/restapi/preferred_language.go new file mode 100644 index 0000000..eb79924 --- /dev/null +++ b/gateway/internal/restapi/preferred_language.go @@ -0,0 +1,24 @@ +package restapi + +import "golang.org/x/text/language" + +const defaultPreferredLanguage = "en" + +func resolvePreferredLanguage(value string) string { + tags, _, err := language.ParseAcceptLanguage(value) + if err != nil { + return defaultPreferredLanguage + } + + for _, tag := range tags { + canonical := tag.String() + switch canonical { + case "", "und", "mul": + continue + default: + return canonical + } + } + + return defaultPreferredLanguage +} diff --git a/gateway/internal/restapi/preferred_language_test.go b/gateway/internal/restapi/preferred_language_test.go new file mode 100644 index 0000000..5e3c8cf --- /dev/null +++ b/gateway/internal/restapi/preferred_language_test.go @@ -0,0 +1,51 @@ +package restapi + +import "testing" + +func TestResolvePreferredLanguage(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + value string + want string + }{ + { + name: "canonical valid tag", + value: "fr-FR, en;q=0.8", + want: "fr-FR", + }, + { + name: "quality ordering", + value: "en-US;q=0.9, fr", + want: "fr", + }, + { + name: "wildcard falls back", + value: "*", + want: "en", + }, + { + name: "malformed falls back", + value: "fr-FR, @@", + want: "en", + }, + { + name: "missing falls back", + value: "", + want: "en", + }, + } + + for _, tt := range tests { + tt := tt + + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + if got := resolvePreferredLanguage(tt.value); got != tt.want { + t.Fatalf("resolvePreferredLanguage(%q) = %q, want %q", tt.value, got, tt.want) + 
} + }) + } +} diff --git a/gateway/internal/restapi/public_auth.go b/gateway/internal/restapi/public_auth.go index bb61c0f..6e58e34 100644 --- a/gateway/internal/restapi/public_auth.go +++ b/gateway/internal/restapi/public_auth.go @@ -55,6 +55,11 @@ type SendEmailCodeInput struct { // Email is the single client e-mail address that should receive the login // code challenge. Email string `json:"email"` + + // PreferredLanguage stores the canonical BCP 47 language tag derived from + // the public Accept-Language header for upstream auth-mail localization and + // create-only user registration context. + PreferredLanguage string `json:"-"` } // SendEmailCodeResult describes the public REST and adapter payload returned @@ -204,6 +209,7 @@ func handleSendEmailCode(authService AuthServiceClient, timeout time.Duration) g abortInvalidRequest(c, err.Error()) return } + input.PreferredLanguage = resolvePreferredLanguage(c.Request.Header.Get("Accept-Language")) callCtx, cancel := context.WithTimeout(c.Request.Context(), timeout) defer cancel() diff --git a/gateway/internal/restapi/public_auth_test.go b/gateway/internal/restapi/public_auth_test.go index 6854657..b801394 100644 --- a/gateway/internal/restapi/public_auth_test.go +++ b/gateway/internal/restapi/public_auth_test.go @@ -34,6 +34,7 @@ func TestSendEmailCodeHandlerSuccess(t *testing.T) { strings.NewReader(`{"email":" pilot@example.com "}`), ) req.Header.Set("Content-Type", "application/json") + req.Header.Set("Accept-Language", "fr-FR, en;q=0.8") recorder := httptest.NewRecorder() handler.ServeHTTP(recorder, req) @@ -43,7 +44,10 @@ func TestSendEmailCodeHandlerSuccess(t *testing.T) { assert.Equal(t, `{"challenge_id":"challenge-123"}`, recorder.Body.String()) assert.Equal(t, 1, authService.sendEmailCodeCalls) assert.Equal(t, 0, authService.confirmEmailCodeCalls) - assert.Equal(t, SendEmailCodeInput{Email: "pilot@example.com"}, authService.sendEmailCodeInput) + assert.Equal(t, SendEmailCodeInput{ + Email: 
"pilot@example.com", + PreferredLanguage: "fr-FR", + }, authService.sendEmailCodeInput) assert.True(t, authService.sendEmailCodeRouteClassOK) assert.Equal(t, PublicRouteClassPublicAuth, authService.sendEmailCodeRouteClass) } diff --git a/gateway/openapi.yaml b/gateway/openapi.yaml index 980a6ed..7ee86c5 100644 --- a/gateway/openapi.yaml +++ b/gateway/openapi.yaml @@ -134,6 +134,11 @@ paths: that must later be confirmed through `POST /api/v1/public/auth/confirm-email-code`. + The JSON body stays unchanged. Callers may additionally supply the + standard `Accept-Language` header so the gateway can derive the + auth-mail locale and first-login preferred-language candidate. Missing + or unsupported values fall back to `en`. + This route is unauthenticated and classified as `public_auth`. Public REST anti-abuse applies a per-IP bucket derived from `RemoteAddr` and an additional normalized identity bucket derived from @@ -146,6 +151,8 @@ paths: gateway preserves that projected `4xx/5xx` status and serialized error envelope after normalizing blank or invalid fields. security: [] + parameters: + - $ref: "#/components/parameters/AcceptLanguage" x-public-route-classification-note: | This route is always classified as `public_auth`. requestBody: @@ -250,6 +257,18 @@ paths: default: $ref: "#/components/responses/ProjectedAuthServiceError" components: + parameters: + AcceptLanguage: + name: Accept-Language + in: header + required: false + description: | + Optional RFC 9110 `Accept-Language` header used by gateway to derive + the auth-mail locale and first-login preferred-language candidate. + The first supported BCP 47 tag wins; unsupported or missing values + fall back to `en`. + schema: + type: string schemas: HealthzResponse: type: object diff --git a/geoprofile/PLAN.md b/geoprofile/PLAN.md index a6db87f..c7ffce1 100644 --- a/geoprofile/PLAN.md +++ b/geoprofile/PLAN.md @@ -299,9 +299,11 @@ Tasks: - Define the event payload for `country_review_recommended=true`. 
- Implement event publication on transition to `true`. -- Implement configuration-driven email notification through `Mail Service`. +- Implement configuration-driven notification handoff through + `Notification Service`. - Add notification deduplication or transition-only logic to prevent spam. -- Add failure metrics for both event publication and mail send. +- Add failure metrics for both event publication and downstream notification + handoff. Important constraints: diff --git a/geoprofile/README.md b/geoprofile/README.md index 18ef10c..6bf5034 100644 --- a/geoprofile/README.md +++ b/geoprofile/README.md @@ -32,7 +32,7 @@ The service is embedded into an already existing trusted microservice environmen - `Edge Service` - `Auth / Session Service` - `User Service` -- `Mail Service` +- `Notification Service` - Internal event bus `Edge Service` is the producer of authenticated connection observations. @@ -41,7 +41,9 @@ The service is embedded into an already existing trusted microservice environmen `Auth / Session Service` remains the owner of session lifecycle and session blocking. -`Mail Service` is used only for optional administrative notifications. +`Notification Service` is used for optional administrative notifications, +which may later result in e-mail delivery through `Mail Service`. +Geo Profile Service does not call `Mail Service` directly. The event bus is used only as an auxiliary notification channel and not as the authoritative source of business state. @@ -146,7 +148,8 @@ flowchart LR Edge -. 
async flatbuffers ingest .-> Geo[Geo Profile Service] Geo --> User[User Service] - Geo --> Mail[Mail Service] + Geo --> Notify[Notification Service] + Notify --> Mail[Mail Service] Geo --> Bus[Event Bus] Geo --> Auth @@ -467,7 +470,7 @@ Meaning: Producer: -- Geo Profile Service via `Mail Service` +- Geo Profile Service via `Notification Service` Consumers: @@ -794,16 +797,19 @@ Contract assumptions: This keeps the hot path simple and avoids synchronous enforcement coupling. -## Integration with Mail Service +## Integration with Notification Service -Mail notifications are optional and configuration-driven. +Administrative notifications are optional and configuration-driven. -Mail is sent only when: +Notification routing is triggered only when: - `country_review_recommended` transitions to `true` - Email notifications are enabled -Mail is auxiliary and must not be required for business correctness. +`Notification Service` may then fan out e-mail delivery through +`Mail Service`. +Geo Profile Service itself never sends mail directly. +The notification-driven mail path is auxiliary and must not be required for +business correctness. 
## Event Bus Integration diff --git a/go.work b/go.work index a0fb915..9558d56 100644 --- a/go.work +++ b/go.work @@ -6,6 +6,7 @@ use ( ./game ./gateway ./integration + ./mail ./pkg/calc ./pkg/connector ./pkg/error diff --git a/go.work.sum b/go.work.sum index 76fcfce..476f59f 100644 --- a/go.work.sum +++ b/go.work.sum @@ -11,6 +11,7 @@ github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWR github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI= github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= github.com/cncf/xds/go v0.0.0-20251210132809-ee656c7534f5/go.mod h1:KdCmV+x/BuvyMxRnYBlmVaq4OLiKW6iRQfvC62cvdkI= +github.com/containerd/typeurl/v2 v2.2.0/go.mod h1:8XOOxnyatxSWuG8OfsZXVnAF4iZfedjS/8UHSPJnX4g= github.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= github.com/envoyproxy/go-control-plane v0.14.0/go.mod h1:NcS5X47pLl/hfqxU70yPwL9ZMkUlwlKxtAohpi2wBEU= github.com/envoyproxy/go-control-plane/envoy v1.36.0/go.mod h1:ty89S1YCCVruQAm9OtKeEkQLTb+Lkz0k8v9W0Oxsv98= @@ -18,7 +19,7 @@ github.com/envoyproxy/go-control-plane/ratelimit v0.1.0/go.mod h1:Wk+tMFAFbCXaJP github.com/envoyproxy/protoc-gen-validate v1.3.0/go.mod h1:HvYl7zwPa5mffgyeTUHA9zHIH36nmrm7oCbo4YKoSWA= github.com/francoispqt/gojay v1.2.13/go.mod h1:ehT5mTG4ua4581f1++1WLG0vPdaA9HaiDsoyrBGkyDY= github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08= -github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0= +github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE= github.com/golang/glog v1.2.5/go.mod h1:6AhwSGph0fcJtXVM/PEHPqZlFeoLxhs7/t5UDAwmO+w= github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= @@ -36,6 
+37,9 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/lucor/goinfo v0.9.0/go.mod h1:L6m6tN5Rlova5Z83h1ZaKsMP1iiaoZ9vGTNzu5QKOD4= github.com/mcuadros/go-version v0.0.0-20190830083331-035f6764e8d2/go.mod h1:76rfSfYPWj01Z85hUf/ituArm797mNKcvINh1OlsZKo= +github.com/moby/sys/mount v0.3.4/go.mod h1:KcQJMbQdJHPlq5lcYT+/CjatWM4PuxKe+XLSVS4J6Os= +github.com/moby/sys/mountinfo v0.7.2/go.mod h1:1YOa8w8Ih7uW0wALDUgT1dTTSBrZ+HiBLGws92L2RU4= +github.com/moby/sys/reexec v0.1.0/go.mod h1:EqjBg8F3X7iZe5pU6nRZnYCMUTXoxsjiIfHup5wYIN8= github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/natefinch/atomic v1.0.1/go.mod h1:N/D/ELrljoqDyT3rZrsUmtsuzvHkeB/wWjHV22AZRbM= github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno= @@ -49,7 +53,9 @@ github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6L github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs= github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog= github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4= +github.com/russross/blackfriday v1.6.0/go.mod h1:ti0ldHuxg49ri4ksnFxlkCfN+hvslNlmVHqNRXXJNAY= github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/santhosh-tekuri/jsonschema/v5 v5.3.1/go.mod h1:uToXkOrWAZ6/Oc07xWQrPOhJotwFIyu2bBVN41fcDUY= github.com/spiffe/go-spiffe/v2 v2.6.0/go.mod h1:gm2SeUoMZEtpnzPNs2Csc0D/gX33k1xIx7lEzqblHEs= github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= @@ -84,6 +90,7 @@ golang.org/x/mod v0.21.0/go.mod 
h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY= golang.org/x/mod v0.27.0/go.mod h1:rWI627Fq0DEoudcK+MBkNkCe0EetEaDSwJJkCcjpazc= golang.org/x/mod v0.32.0/go.mod h1:SgipZ/3h2Ci89DlEtEXWUk/HteuRin+HHhN+WbNhguU= golang.org/x/mod v0.33.0/go.mod h1:swjeQEj+6r7fODbD2cqrnje9PnziFuw4bmLbBZFrQ5w= +golang.org/x/mod v0.34.0/go.mod h1:ykgH52iCZe79kzLLMhyCUzhMci+nQj+0XkbXpNYtVjY= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= @@ -136,7 +143,6 @@ golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU= golang.org/x/term v0.16.0/go.mod h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY= golang.org/x/term v0.33.0/go.mod h1:s18+ql9tYWp1IfpV9DmCtQDDSRBUjKaw9M1eAv5UeF0= golang.org/x/term v0.40.0/go.mod h1:w2P8uVp06p2iyKKuvXIm7N/y0UCRt3UfJTfZ7oOpglM= -golang.org/x/term v0.41.0/go.mod h1:3pfBgksrReYfZ5lvYM0kSO0LIkAl4Yl2bXOkKP7Ec2A= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= @@ -159,11 +165,11 @@ golang.org/x/tools v0.36.0/go.mod h1:WBDiHKJK8YgLHlcQPYQzNCkUxUypCaa5ZegCVutKm+s golang.org/x/tools v0.40.0/go.mod h1:Ik/tzLRlbscWpqqMRjyWYDisX8bG13FrdXp3o4Sr9lc= golang.org/x/tools v0.41.0/go.mod h1:XSY6eDqxVNiYgezAVqqCeihT4j1U2CCsqvH3WhQpnlg= golang.org/x/tools v0.42.0/go.mod h1:Ma6lCIwGZvHK6XtgbswSoWroEkhugApmsXyrUmBhfr0= +golang.org/x/tools v0.43.0/go.mod h1:uHkMso649BX2cZK6+RpuIPXS3ho2hZo4FVwfoy1vIk0= golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY= golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod 
h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8= golang.org/x/tools/go/vcs v0.1.0-deprecated/go.mod h1:zUrvATBAvEI9535oC0yWYsLsHIV4Z7g63sNPVMtuBy8= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= google.golang.org/genproto/googleapis/api v0.0.0-20260120221211-b8f7ae30c516/go.mod h1:p3MLuOwURrGBRoEyFHBT3GjUwaCQVKeNqqWxlcISGdw= google.golang.org/genproto/googleapis/rpc v0.0.0-20260120221211-b8f7ae30c516/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ= google.golang.org/genproto/googleapis/rpc v0.0.0-20260203192932-546029d2fa20/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ= diff --git a/integration/README.md b/integration/README.md index 504ff3e..5aab219 100644 --- a/integration/README.md +++ b/integration/README.md @@ -8,6 +8,12 @@ Each suite must raise real service processes, speak only over public HTTP/gRPC/R ```text integration/ ├── README.md +├── authsessionmail/ +│ ├── authsession_mail_test.go +│ └── harness_test.go +├── gatewayauthsessionmail/ +│ ├── gateway_authsession_mail_test.go +│ └── harness_test.go ├── authsessionuser/ │ ├── authsession_user_test.go │ └── harness_test.go @@ -33,6 +39,8 @@ integration/ ├── keys.go ├── mail_stub.go ├── process.go + ├── redis_container.go + ├── smtp_capture.go └── user_stub.go ``` @@ -48,11 +56,20 @@ integration/ - `gatewayauthsession` verifies the integration boundary between real `Edge Gateway` and real `Auth / Session Service`. - `authsessionuser` verifies the integration boundary between real `Auth / Session Service` and real `User Service`. +- `authsessionmail` verifies the integration boundary between real `Auth / Session Service` and real `Mail Service`. +- `gatewayauthsessionmail` verifies the public auth flow across real `Edge Gateway`, real `Auth / Session Service`, and real `Mail Service`. 
- `gatewayuser` verifies the direct authenticated self-service boundary between real `Edge Gateway` and real `User Service`. - `gatewayauthsessionuser` verifies the full public-auth plus authenticated-account chain across real `Edge Gateway`, real `Auth / Session Service`, and real `User Service`. -The current fast suites use one isolated `miniredis` instance plus either +The current fast suites still use one isolated `miniredis` instance plus either real downstream processes or external stateful HTTP stubs where appropriate. +`authsessionmail` and `gatewayauthsessionmail` are the deliberate exceptions: +they use one real Redis container through `testcontainers-go`, because those +boundaries must exercise the real Redis-backed `Mail Service` runtime. +`authsessionmail` additionally contains one targeted SMTP-capture scenario for +the real `smtp` provider path, while `gatewayauthsessionmail` keeps `Mail +Service` in `stub` mode and extracts the confirmation code through the trusted +operator delivery surface. ## Running @@ -62,6 +79,8 @@ Run from the module directory: cd integration go test ./gatewayauthsession/... go test ./authsessionuser/... +go test ./authsessionmail/... +go test ./gatewayauthsessionmail/... go test ./gatewayuser/... go test ./gatewayauthsessionuser/... ``` @@ -71,6 +90,8 @@ Useful regression commands after boundary changes: ```bash go test ./gatewayauthsession/... go test ./authsessionuser/... +go test ./authsessionmail/... +go test ./gatewayauthsessionmail/... go test ./gatewayuser/... go test ./gatewayauthsessionuser/... cd ../gateway && go test ./... @@ -88,8 +109,17 @@ Do not use `go test ./...` from the repository root. The repository is organized 4. Add new helpers to `internal/contracts//` only when they describe a reusable public wire contract. 5. Prefer fast deterministic infrastructure by default: in-memory test doubles, `httptest` stubs, and `miniredis`. 
-## Future Real Redis Smoke Suites +## Real Redis Suites Fast suites stay on `miniredis` by default. -When a boundary needs one real Redis smoke suite later, keep it in the same boundary package and gate it explicitly with environment-driven configuration such as `INTEGRATION_REAL_REDIS_ADDR`. -That smoke suite should complement, not replace, the deterministic `miniredis` coverage. +When one boundary explicitly needs real Redis semantics, prefer a package-local +container setup through `testcontainers-go` plus reusable helpers in +`internal/harness`, as done by `authsessionmail` and +`gatewayauthsessionmail`. + +Current rule of thumb: + +- use `miniredis` when the boundary does not depend on Redis persistence or + scheduling behavior; +- use `testcontainers-go` only when the real Redis process materially changes + the behavior being verified. diff --git a/integration/authsessionmail/authsession_mail_test.go b/integration/authsessionmail/authsession_mail_test.go new file mode 100644 index 0000000..23a2246 --- /dev/null +++ b/integration/authsessionmail/authsession_mail_test.go @@ -0,0 +1,110 @@ +package authsessionmail_test + +import ( + "net/url" + "testing" + "time" + + "github.com/stretchr/testify/require" +) + +func TestAuthsessionMailBlackBoxSendEmailCodeCreatesSuppressedDelivery(t *testing.T) { + h := newAuthsessionMailHarness(t, authsessionMailHarnessOptions{}) + email := "pilot@example.com" + + response := h.sendChallengeWithAcceptLanguage(t, email, "fr-FR, en;q=0.8") + require.NotEmpty(t, response.ChallengeID) + + list := h.eventuallyListDeliveries(t, url.Values{ + "source": []string{"authsession"}, + "status": []string{"suppressed"}, + "recipient": []string{email}, + "template_id": []string{"auth.login_code"}, + }) + require.Len(t, list.Items, 1) + require.Equal(t, "authsession", list.Items[0].Source) + require.Equal(t, "suppressed", list.Items[0].Status) + require.Equal(t, "auth.login_code", list.Items[0].TemplateID) + require.Equal(t, "fr-FR", 
list.Items[0].Locale) + require.Equal(t, []string{email}, list.Items[0].To) + + detail := h.getDelivery(t, list.Items[0].DeliveryID) + require.Equal(t, "authsession", detail.Source) + require.Equal(t, "suppressed", detail.Status) + require.Equal(t, "auth.login_code", detail.TemplateID) + require.Equal(t, "fr-FR", detail.Locale) + require.False(t, detail.LocaleFallbackUsed) + require.Equal(t, []string{email}, detail.To) + require.NotEmpty(t, detail.IdempotencyKey) + + attempts := h.getDeliveryAttempts(t, detail.DeliveryID) + require.Empty(t, attempts.Items) +} + +func TestAuthsessionMailBlackBoxSendEmailCodeReturnsServiceUnavailableWhenMailServiceStops(t *testing.T) { + h := newAuthsessionMailHarness(t, authsessionMailHarnessOptions{}) + h.stopMail(t) + + response := postJSONValueWithHeaders( + t, + h.authsessionPublicURL+authSendEmailCodePath, + map[string]string{"email": "pilot@example.com"}, + nil, + ) + + require.Equal(t, 503, response.StatusCode) + require.JSONEq(t, `{"error":{"code":"service_unavailable","message":"service is unavailable"}}`, response.Body) +} + +func TestAuthsessionMailBlackBoxSMTPDeliveryReachesSentStateAndSMTPPayload(t *testing.T) { + h := newAuthsessionMailHarness(t, authsessionMailHarnessOptions{mailSMTPMode: "smtp"}) + email := "pilot@example.com" + + response := h.sendChallengeWithAcceptLanguage(t, email, "fr-FR, en;q=0.8") + require.NotEmpty(t, response.ChallengeID) + + list := h.eventuallyListDeliveries(t, url.Values{ + "source": []string{"authsession"}, + "recipient": []string{email}, + "template_id": []string{"auth.login_code"}, + }) + require.Len(t, list.Items, 1) + require.Equal(t, "authsession", list.Items[0].Source) + require.Equal(t, "auth.login_code", list.Items[0].TemplateID) + require.Equal(t, "fr-FR", list.Items[0].Locale) + require.Equal(t, []string{email}, list.Items[0].To) + + var detail mailDeliveryDetailResponse + require.Eventually(t, func() bool { + detail = h.getDelivery(t, list.Items[0].DeliveryID) + return 
detail.Status == "sent" + }, 10*time.Second, 50*time.Millisecond) + require.Equal(t, "authsession", detail.Source) + require.Equal(t, "sent", detail.Status) + require.Equal(t, "auth.login_code", detail.TemplateID) + require.Equal(t, "fr-FR", detail.Locale) + require.True(t, detail.LocaleFallbackUsed) + require.Equal(t, []string{email}, detail.To) + require.NotEmpty(t, detail.IdempotencyKey) + + code, ok := detail.TemplateVariables["code"].(string) + require.True(t, ok) + require.Len(t, code, 6) + + var attempts mailDeliveryAttemptsResponse + require.Eventually(t, func() bool { + attempts = h.getDeliveryAttempts(t, detail.DeliveryID) + return len(attempts.Items) == 1 && attempts.Items[0].Status == "provider_accepted" + }, 10*time.Second, 50*time.Millisecond) + require.Len(t, attempts.Items, 1) + require.Equal(t, "provider_accepted", attempts.Items[0].Status) + + require.NotNil(t, h.smtp) + var payload string + require.Eventually(t, func() bool { + payload = h.smtp.LatestPayload() + return payload != "" + }, 10*time.Second, 50*time.Millisecond) + require.Contains(t, payload, "Subject:") + require.Contains(t, payload, "Your login code is "+code+".") +} diff --git a/integration/authsessionmail/harness_test.go b/integration/authsessionmail/harness_test.go new file mode 100644 index 0000000..5ec0621 --- /dev/null +++ b/integration/authsessionmail/harness_test.go @@ -0,0 +1,394 @@ +package authsessionmail_test + +import ( + "bytes" + "encoding/json" + "errors" + "io" + "net/http" + "net/url" + "path/filepath" + "runtime" + "testing" + "time" + + "galaxy/integration/internal/harness" + + "github.com/stretchr/testify/require" +) + +const ( + authSendEmailCodePath = "/api/v1/public/auth/send-email-code" + mailDeliveriesPath = "/api/v1/internal/deliveries" +) + +type authsessionMailHarness struct { + userStub *harness.UserStub + smtp *harness.SMTPCapture + + authsessionPublicURL string + mailInternalURL string + + authsessionProcess *harness.Process + mailProcess 
*harness.Process +} + +type authsessionMailHarnessOptions struct { + mailSMTPMode string +} + +type httpResponse struct { + StatusCode int + Body string + Header http.Header +} + +type sendEmailCodeResponse struct { + ChallengeID string `json:"challenge_id"` +} + +type mailDeliveryListResponse struct { + Items []mailDeliverySummary `json:"items"` +} + +type mailDeliverySummary struct { + DeliveryID string `json:"delivery_id"` + Source string `json:"source"` + TemplateID string `json:"template_id"` + Locale string `json:"locale"` + To []string `json:"to"` + Status string `json:"status"` +} + +type mailDeliveryDetailResponse struct { + DeliveryID string `json:"delivery_id"` + Source string `json:"source"` + TemplateID string `json:"template_id"` + Locale string `json:"locale"` + LocaleFallbackUsed bool `json:"locale_fallback_used"` + To []string `json:"to"` + IdempotencyKey string `json:"idempotency_key"` + Status string `json:"status"` + TemplateVariables map[string]any `json:"template_variables,omitempty"` +} + +type mailDeliveryAttemptsResponse struct { + Items []mailAttemptResponse `json:"items"` +} + +type mailAttemptResponse struct { + Status string `json:"status"` +} + +func newAuthsessionMailHarness(t *testing.T, opts authsessionMailHarnessOptions) *authsessionMailHarness { + t.Helper() + + redisRuntime := harness.StartRedisContainer(t) + userStub := harness.NewUserStub(t) + + mailInternalAddr := harness.FreeTCPAddress(t) + authsessionPublicAddr := harness.FreeTCPAddress(t) + authsessionInternalAddr := harness.FreeTCPAddress(t) + + mailBinary := harness.BuildBinary(t, "mail", "./mail/cmd/mail") + authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession") + + if opts.mailSMTPMode == "" { + opts.mailSMTPMode = "stub" + } + + mailEnv := map[string]string{ + "MAIL_LOG_LEVEL": "info", + "MAIL_INTERNAL_HTTP_ADDR": mailInternalAddr, + "MAIL_REDIS_ADDR": redisRuntime.Addr, + "MAIL_TEMPLATE_DIR": moduleTemplateDir(t), + 
"MAIL_STREAM_BLOCK_TIMEOUT": "100ms", + "MAIL_OPERATOR_REQUEST_TIMEOUT": time.Second.String(), + "MAIL_SHUTDOWN_TIMEOUT": "2s", + "OTEL_TRACES_EXPORTER": "none", + "OTEL_METRICS_EXPORTER": "none", + } + + var smtpCapture *harness.SMTPCapture + switch opts.mailSMTPMode { + case "stub": + mailEnv["MAIL_SMTP_MODE"] = "stub" + case "smtp": + smtpCapture = harness.StartSMTPCapture(t, harness.SMTPCaptureConfig{ + SupportsSTARTTLS: true, + }) + mailEnv["MAIL_SMTP_MODE"] = "smtp" + mailEnv["MAIL_SMTP_ADDR"] = smtpCapture.Addr() + mailEnv["MAIL_SMTP_FROM_EMAIL"] = "noreply@example.com" + mailEnv["MAIL_SMTP_FROM_NAME"] = "Galaxy Mail" + mailEnv["MAIL_SMTP_TIMEOUT"] = "2s" + mailEnv["MAIL_SMTP_INSECURE_SKIP_VERIFY"] = "true" + mailEnv["SSL_CERT_FILE"] = smtpCapture.RootCAPath() + default: + t.Fatalf("unsupported mail SMTP mode %q", opts.mailSMTPMode) + } + + mailProcess := harness.StartProcess(t, "mail", mailBinary, mailEnv) + waitForMailReady(t, mailProcess, "http://"+mailInternalAddr) + + authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, map[string]string{ + "AUTHSESSION_LOG_LEVEL": "info", + "AUTHSESSION_PUBLIC_HTTP_ADDR": authsessionPublicAddr, + "AUTHSESSION_INTERNAL_HTTP_ADDR": authsessionInternalAddr, + "AUTHSESSION_REDIS_ADDR": redisRuntime.Addr, + "AUTHSESSION_USER_SERVICE_MODE": "rest", + "AUTHSESSION_USER_SERVICE_BASE_URL": userStub.BaseURL(), + "AUTHSESSION_MAIL_SERVICE_MODE": "rest", + "AUTHSESSION_MAIL_SERVICE_BASE_URL": "http://" + mailInternalAddr, + "AUTHSESSION_MAIL_SERVICE_REQUEST_TIMEOUT": time.Second.String(), + "AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT": time.Second.String(), + "AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": time.Second.String(), + "OTEL_TRACES_EXPORTER": "none", + "OTEL_METRICS_EXPORTER": "none", + }) + waitForAuthsessionPublicReady(t, authsessionProcess, "http://"+authsessionPublicAddr) + + return &authsessionMailHarness{ + userStub: userStub, + smtp: smtpCapture, + authsessionPublicURL: "http://" + 
authsessionPublicAddr, + mailInternalURL: "http://" + mailInternalAddr, + authsessionProcess: authsessionProcess, + mailProcess: mailProcess, + } +} + +func (h *authsessionMailHarness) stopMail(t *testing.T) { + t.Helper() + + h.mailProcess.Stop(t) +} + +func (h *authsessionMailHarness) sendChallengeWithAcceptLanguage(t *testing.T, email string, acceptLanguage string) sendEmailCodeResponse { + t.Helper() + + response := postJSONValueWithHeaders( + t, + h.authsessionPublicURL+authSendEmailCodePath, + map[string]string{"email": email}, + map[string]string{"Accept-Language": acceptLanguage}, + ) + require.Equal(t, http.StatusOK, response.StatusCode, response.Body) + + var body sendEmailCodeResponse + require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body)) + require.NotEmpty(t, body.ChallengeID) + + return body +} + +func (h *authsessionMailHarness) eventuallyListDeliveries(t *testing.T, query url.Values) mailDeliveryListResponse { + t.Helper() + + var response mailDeliveryListResponse + require.Eventually(t, func() bool { + response = h.listDeliveries(t, query) + return len(response.Items) > 0 + }, 10*time.Second, 50*time.Millisecond) + + return response +} + +func (h *authsessionMailHarness) listDeliveries(t *testing.T, query url.Values) mailDeliveryListResponse { + t.Helper() + + target := h.mailInternalURL + mailDeliveriesPath + if encoded := query.Encode(); encoded != "" { + target += "?" 
+ encoded + } + + request, err := http.NewRequest(http.MethodGet, target, nil) + require.NoError(t, err) + + return doJSONRequest[mailDeliveryListResponse](t, request, http.StatusOK) +} + +func (h *authsessionMailHarness) getDelivery(t *testing.T, deliveryID string) mailDeliveryDetailResponse { + t.Helper() + + request, err := http.NewRequest(http.MethodGet, h.mailInternalURL+mailDeliveriesPath+"/"+url.PathEscape(deliveryID), nil) + require.NoError(t, err) + + return doJSONRequest[mailDeliveryDetailResponse](t, request, http.StatusOK) +} + +func (h *authsessionMailHarness) getDeliveryAttempts(t *testing.T, deliveryID string) mailDeliveryAttemptsResponse { + t.Helper() + + request, err := http.NewRequest(http.MethodGet, h.mailInternalURL+mailDeliveriesPath+"/"+url.PathEscape(deliveryID)+"/attempts", nil) + require.NoError(t, err) + + return doJSONRequest[mailDeliveryAttemptsResponse](t, request, http.StatusOK) +} + +func postJSONValueWithHeaders(t *testing.T, targetURL string, body any, headers map[string]string) httpResponse { + t.Helper() + + payload, err := json.Marshal(body) + require.NoError(t, err) + + request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload)) + require.NoError(t, err) + request.Header.Set("Content-Type", "application/json") + for key, value := range headers { + if value == "" { + continue + } + request.Header.Set(key, value) + } + + return doRequest(t, request) +} + +func doJSONRequest[T any](t *testing.T, request *http.Request, wantStatus int) T { + t.Helper() + + response := doRequest(t, request) + require.Equal(t, wantStatus, response.StatusCode, response.Body) + + var decoded T + require.NoError(t, json.Unmarshal([]byte(response.Body), &decoded), response.Body) + + return decoded +} + +func doRequest(t *testing.T, request *http.Request) httpResponse { + t.Helper() + + client := &http.Client{ + Timeout: 500 * time.Millisecond, + Transport: &http.Transport{ + DisableKeepAlives: true, + }, + } + 
t.Cleanup(client.CloseIdleConnections) + + response, err := client.Do(request) + require.NoError(t, err) + defer response.Body.Close() + + payload, err := io.ReadAll(response.Body) + require.NoError(t, err) + + return httpResponse{ + StatusCode: response.StatusCode, + Body: string(payload), + Header: response.Header.Clone(), + } +} + +func decodeStrictJSONPayload(payload []byte, target any) error { + decoder := json.NewDecoder(bytes.NewReader(payload)) + decoder.DisallowUnknownFields() + + if err := decoder.Decode(target); err != nil { + return err + } + if err := decoder.Decode(&struct{}{}); err != io.EOF { + if err == nil { + return errors.New("unexpected trailing JSON input") + } + return err + } + + return nil +} + +func waitForMailReady(t *testing.T, process *harness.Process, baseURL string) { + t.Helper() + + client := &http.Client{Timeout: 250 * time.Millisecond} + t.Cleanup(client.CloseIdleConnections) + + deadline := time.Now().Add(10 * time.Second) + for time.Now().Before(deadline) { + request, err := http.NewRequest(http.MethodGet, baseURL+mailDeliveriesPath, nil) + require.NoError(t, err) + + response, err := client.Do(request) + if err == nil { + _, _ = io.Copy(io.Discard, response.Body) + response.Body.Close() + if response.StatusCode == http.StatusOK { + return + } + } + + time.Sleep(25 * time.Millisecond) + } + + t.Fatalf("wait for mail readiness: timeout\n%s", process.Logs()) +} + +func waitForAuthsessionPublicReady(t *testing.T, process *harness.Process, baseURL string) { + t.Helper() + + client := &http.Client{Timeout: 250 * time.Millisecond} + t.Cleanup(client.CloseIdleConnections) + + deadline := time.Now().Add(10 * time.Second) + for time.Now().Before(deadline) { + response, err := postJSONValueMaybe(client, baseURL+authSendEmailCodePath, map[string]string{ + "email": "", + }) + if err == nil && response.StatusCode == http.StatusBadRequest { + return + } + + time.Sleep(25 * time.Millisecond) + } + + t.Fatalf("wait for authsession public 
readiness: timeout\n%s", process.Logs()) +} + +func postJSONValueMaybe(client *http.Client, targetURL string, body any) (httpResponse, error) { + payload, err := json.Marshal(body) + if err != nil { + return httpResponse{}, err + } + + request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload)) + if err != nil { + return httpResponse{}, err + } + request.Header.Set("Content-Type", "application/json") + + response, err := client.Do(request) + if err != nil { + return httpResponse{}, err + } + defer response.Body.Close() + + responseBody, err := io.ReadAll(response.Body) + if err != nil { + return httpResponse{}, err + } + + return httpResponse{ + StatusCode: response.StatusCode, + Body: string(responseBody), + Header: response.Header.Clone(), + }, nil +} + +func moduleTemplateDir(t *testing.T) string { + t.Helper() + + return filepath.Join(repositoryRoot(t), "mail", "templates") +} + +func repositoryRoot(t *testing.T) string { + t.Helper() + + _, file, _, ok := runtime.Caller(0) + if !ok { + t.Fatal("resolve repository root: runtime caller is unavailable") + } + + return filepath.Clean(filepath.Join(filepath.Dir(file), "..", "..")) +} diff --git a/integration/authsessionuser/authsession_user_test.go b/integration/authsessionuser/authsession_user_test.go index 9819785..5aec97f 100644 --- a/integration/authsessionuser/authsession_user_test.go +++ b/integration/authsessionuser/authsession_user_test.go @@ -64,6 +64,31 @@ func TestAuthsessionUserBlackBoxConfirmForExistingUserKeepsCreateOnlySettings(t require.Equal(t, "Europe/Paris", account.User.TimeZone) } +func TestAuthsessionUserBlackBoxAcceptLanguageSetsLocalizedPreferredLanguage(t *testing.T) { + t.Parallel() + + h := newAuthsessionUserHarness(t) + email := "localized@example.com" + + challengeID := h.sendChallengeWithAcceptLanguage(t, email, "fr-FR, en;q=0.8") + deliveries := h.mailStub.RecordedDeliveries() + require.NotEmpty(t, deliveries) + require.Equal(t, "fr-FR", 
deliveries[len(deliveries)-1].Locale) + + code := lastMailCodeFor(t, h.mailStub, email) + response := h.confirmCode(t, challengeID, code) + var confirmBody struct { + DeviceSessionID string `json:"device_session_id"` + } + requireJSONStatus(t, response, http.StatusOK, &confirmBody) + require.True(t, strings.HasPrefix(confirmBody.DeviceSessionID, "device-session-")) + + lookupResponse, account := lookupUserByEmail(t, h.userServiceURL, email) + require.Equalf(t, http.StatusOK, lookupResponse.StatusCode, formatStatusError(lookupResponse)) + require.Equal(t, "fr-FR", account.User.PreferredLanguage) + require.Equal(t, testTimeZone, account.User.TimeZone) +} + func TestAuthsessionUserBlackBoxBlockedEmailSendIsSuccessShapedAndConfirmIsRejectedWithoutCreatingUser(t *testing.T) { t.Parallel() diff --git a/integration/authsessionuser/harness_test.go b/integration/authsessionuser/harness_test.go index 0e95d15..fac157c 100644 --- a/integration/authsessionuser/harness_test.go +++ b/integration/authsessionuser/harness_test.go @@ -82,9 +82,18 @@ func newAuthsessionUserHarness(t *testing.T) *authsessionUserHarness { func (h *authsessionUserHarness) sendChallenge(t *testing.T, email string) string { t.Helper() - response := postJSONValue(t, h.authsessionPublicURL+"/api/v1/public/auth/send-email-code", map[string]string{ - "email": email, - }) + return h.sendChallengeWithAcceptLanguage(t, email, "") +} + +func (h *authsessionUserHarness) sendChallengeWithAcceptLanguage(t *testing.T, email string, acceptLanguage string) string { + t.Helper() + + response := postJSONValueWithHeaders( + t, + h.authsessionPublicURL+"/api/v1/public/auth/send-email-code", + map[string]string{"email": email}, + map[string]string{"Accept-Language": acceptLanguage}, + ) require.Equal(t, http.StatusOK, response.StatusCode) var body struct { @@ -116,12 +125,24 @@ type httpResponse struct { func postJSONValue(t *testing.T, targetURL string, body any) httpResponse { t.Helper() + return 
postJSONValueWithHeaders(t, targetURL, body, nil) +} + +func postJSONValueWithHeaders(t *testing.T, targetURL string, body any, headers map[string]string) httpResponse { + t.Helper() + payload, err := json.Marshal(body) require.NoError(t, err) request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload)) require.NoError(t, err) request.Header.Set("Content-Type", "application/json") + for key, value := range headers { + if value == "" { + continue + } + request.Header.Set(key, value) + } client := &http.Client{ Timeout: 250 * time.Millisecond, diff --git a/integration/gatewayauthsession/gateway_authsession_test.go b/integration/gatewayauthsession/gateway_authsession_test.go index 74fa372..38329ab 100644 --- a/integration/gatewayauthsession/gateway_authsession_test.go +++ b/integration/gatewayauthsession/gateway_authsession_test.go @@ -77,6 +77,26 @@ func TestGatewayAuthSessionConfirmCreatesProjectionAndAllowsSubscribeEvents(t *t assertBootstrapEvent(t, event, h.responseSignerPublicKey, "request-bootstrap") } +func TestGatewayAuthSessionAcceptLanguageIsForwardedToMailAndUser(t *testing.T) { + h := newGatewayAuthSessionHarness(t, gatewayAuthSessionOptions{}) + + clientPrivateKey := newClientPrivateKey("localized") + challengeID, code := h.sendChallengeWithAcceptLanguage(t, testEmail, "fr-FR, en;q=0.8") + + deliveries := h.mailStub.RecordedDeliveries() + require.NotEmpty(t, deliveries) + require.Equal(t, "fr-FR", deliveries[len(deliveries)-1].Locale) + + response := h.confirmCode(t, challengeID, code, clientPrivateKey) + require.Equal(t, http.StatusOK, response.StatusCode) + + ensureCalls := h.userStub.EnsureCalls() + require.Len(t, ensureCalls, 1) + require.Equal(t, testEmail, ensureCalls[0].Email) + require.Equal(t, "fr-FR", ensureCalls[0].PreferredLanguage) + require.Equal(t, testTimeZone, ensureCalls[0].TimeZone) +} + func TestGatewayAuthSessionRepeatedConfirmReturnsSameSessionID(t *testing.T) { h := newGatewayAuthSessionHarness(t, 
gatewayAuthSessionOptions{}) diff --git a/integration/gatewayauthsession/harness_test.go b/integration/gatewayauthsession/harness_test.go index 3c5c6d7..234cb58 100644 --- a/integration/gatewayauthsession/harness_test.go +++ b/integration/gatewayauthsession/harness_test.go @@ -196,9 +196,18 @@ func (h *gatewayAuthSessionHarness) readGatewaySessionRecord(t *testing.T, devic func (h *gatewayAuthSessionHarness) sendChallenge(t *testing.T, email string) (string, string) { t.Helper() - response := postJSONValue(t, h.gatewayPublicURL+"/api/v1/public/auth/send-email-code", map[string]string{ - "email": email, - }) + return h.sendChallengeWithAcceptLanguage(t, email, "") +} + +func (h *gatewayAuthSessionHarness) sendChallengeWithAcceptLanguage(t *testing.T, email string, acceptLanguage string) (string, string) { + t.Helper() + + response := postJSONValueWithHeaders( + t, + h.gatewayPublicURL+"/api/v1/public/auth/send-email-code", + map[string]string{"email": email}, + map[string]string{"Accept-Language": acceptLanguage}, + ) require.Equal(t, http.StatusOK, response.StatusCode) var body struct { @@ -284,12 +293,24 @@ type gatewaySessionRecord struct { func postJSONValue(t *testing.T, targetURL string, body any) httpResponse { t.Helper() + return postJSONValueWithHeaders(t, targetURL, body, nil) +} + +func postJSONValueWithHeaders(t *testing.T, targetURL string, body any, headers map[string]string) httpResponse { + t.Helper() + payload, err := json.Marshal(body) require.NoError(t, err) request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload)) require.NoError(t, err) request.Header.Set("Content-Type", "application/json") + for key, value := range headers { + if value == "" { + continue + } + request.Header.Set(key, value) + } client := &http.Client{Timeout: 5 * time.Second} diff --git a/integration/gatewayauthsessionmail/gateway_authsession_mail_test.go b/integration/gatewayauthsessionmail/gateway_authsession_mail_test.go new file mode 100644 index 
0000000..11e3648 --- /dev/null +++ b/integration/gatewayauthsessionmail/gateway_authsession_mail_test.go @@ -0,0 +1,87 @@ +package gatewayauthsessionmail_test + +import ( + "context" + "crypto/ed25519" + "net/http" + "net/url" + "testing" + + gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1" + + "github.com/stretchr/testify/require" +) + +func TestGatewayAuthsessionMailSendAndConfirmWithRealMailService(t *testing.T) { + h := newGatewayAuthsessionMailHarness(t) + + clientPrivateKey := newClientPrivateKey("real-mail") + challengeID := h.sendChallengeWithAcceptLanguage(t, testEmail, "fr-FR, en;q=0.8") + + list := h.eventuallyListDeliveries(t, url.Values{ + "source": []string{"authsession"}, + "status": []string{"suppressed"}, + "recipient": []string{testEmail}, + "template_id": []string{"auth.login_code"}, + }) + require.Len(t, list.Items, 1) + require.Equal(t, "authsession", list.Items[0].Source) + require.Equal(t, "suppressed", list.Items[0].Status) + require.Equal(t, "auth.login_code", list.Items[0].TemplateID) + require.Equal(t, "fr-FR", list.Items[0].Locale) + require.Equal(t, []string{testEmail}, list.Items[0].To) + + detail := h.getDelivery(t, list.Items[0].DeliveryID) + require.Equal(t, "authsession", detail.Source) + require.Equal(t, "suppressed", detail.Status) + require.Equal(t, "auth.login_code", detail.TemplateID) + require.Equal(t, "fr-FR", detail.Locale) + require.False(t, detail.LocaleFallbackUsed) + require.Equal(t, []string{testEmail}, detail.To) + require.NotEmpty(t, detail.IdempotencyKey) + + code := templateVariableString(t, detail.TemplateVariables, "code") + + confirm := h.confirmCode(t, challengeID, code, clientPrivateKey) + require.Equal(t, http.StatusOK, confirm.StatusCode, confirm.Body) + + var confirmBody confirmEmailCodeResponse + require.NoError(t, decodeStrictJSONPayload([]byte(confirm.Body), &confirmBody)) + require.NotEmpty(t, confirmBody.DeviceSessionID) + + record := h.waitForGatewaySession(t, confirmBody.DeviceSessionID) + 
require.Equal(t, gatewaySessionRecord{ + DeviceSessionID: confirmBody.DeviceSessionID, + UserID: "user-1", + ClientPublicKey: encodePublicKey(clientPrivateKey.Public().(ed25519.PublicKey)), + Status: "active", + }, record) + + ensureCalls := h.userStub.EnsureCalls() + require.Len(t, ensureCalls, 1) + require.Equal(t, testEmail, ensureCalls[0].Email) + require.Equal(t, "fr-FR", ensureCalls[0].PreferredLanguage) + require.Equal(t, testTimeZone, ensureCalls[0].TimeZone) + + conn := h.dialGateway(t) + client := gatewayv1.NewEdgeGatewayClient(conn) + + stream, err := client.SubscribeEvents(context.Background(), newSubscribeEventsRequest(confirmBody.DeviceSessionID, "request-bootstrap", clientPrivateKey)) + require.NoError(t, err) + + event, err := stream.Recv() + require.NoError(t, err) + assertBootstrapEvent(t, event, h.responseSignerPublicKey, "request-bootstrap") +} + +func TestGatewayAuthsessionMailUnavailablePassesThroughGatewaySurface(t *testing.T) { + h := newGatewayAuthsessionMailHarness(t) + h.stopMail(t) + + response := postJSONValue(t, h.gatewayPublicURL+gatewaySendEmailCodePath, map[string]string{ + "email": testEmail, + }) + + require.Equal(t, http.StatusServiceUnavailable, response.StatusCode) + require.JSONEq(t, `{"error":{"code":"service_unavailable","message":"service is unavailable"}}`, response.Body) +} diff --git a/integration/gatewayauthsessionmail/harness_test.go b/integration/gatewayauthsessionmail/harness_test.go new file mode 100644 index 0000000..2fa6744 --- /dev/null +++ b/integration/gatewayauthsessionmail/harness_test.go @@ -0,0 +1,546 @@ +package gatewayauthsessionmail_test + +import ( + "bytes" + "context" + "crypto/ed25519" + "crypto/sha256" + "encoding/base64" + "encoding/json" + "errors" + "io" + "net/http" + "net/url" + "path/filepath" + "runtime" + "testing" + "time" + + gatewayv1 "galaxy/gateway/proto/galaxy/gateway/v1" + contractsgatewayv1 "galaxy/integration/internal/contracts/gatewayv1" + "galaxy/integration/internal/harness" + + 
"github.com/redis/go-redis/v9" + "github.com/stretchr/testify/require" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials/insecure" +) + +const ( + gatewaySendEmailCodePath = "/api/v1/public/auth/send-email-code" + gatewayConfirmEmailCodePath = "/api/v1/public/auth/confirm-email-code" + gatewayMailDeliveriesPath = "/api/v1/internal/deliveries" + + testEmail = "pilot@example.com" + testTimeZone = "Europe/Kaliningrad" +) + +type gatewayAuthsessionMailHarness struct { + redis *redis.Client + + userStub *harness.UserStub + + authsessionPublicURL string + authsessionInternalURL string + gatewayPublicURL string + gatewayGRPCAddr string + mailInternalURL string + + responseSignerPublicKey ed25519.PublicKey + + gatewayProcess *harness.Process + authsessionProcess *harness.Process + mailProcess *harness.Process +} + +type httpResponse struct { + StatusCode int + Body string + Header http.Header +} + +type sendEmailCodeResponse struct { + ChallengeID string `json:"challenge_id"` +} + +type confirmEmailCodeResponse struct { + DeviceSessionID string `json:"device_session_id"` +} + +type gatewaySessionRecord struct { + DeviceSessionID string `json:"device_session_id"` + UserID string `json:"user_id"` + ClientPublicKey string `json:"client_public_key"` + Status string `json:"status"` + RevokedAtMS *int64 `json:"revoked_at_ms,omitempty"` +} + +type mailDeliveryListResponse struct { + Items []mailDeliverySummary `json:"items"` +} + +type mailDeliverySummary struct { + DeliveryID string `json:"delivery_id"` + Source string `json:"source"` + TemplateID string `json:"template_id"` + Locale string `json:"locale"` + To []string `json:"to"` + Status string `json:"status"` +} + +type mailDeliveryDetailResponse struct { + DeliveryID string `json:"delivery_id"` + Source string `json:"source"` + TemplateID string `json:"template_id"` + Locale string `json:"locale"` + LocaleFallbackUsed bool `json:"locale_fallback_used"` + To []string `json:"to"` + IdempotencyKey string 
`json:"idempotency_key"` + Status string `json:"status"` + TemplateVariables map[string]any `json:"template_variables,omitempty"` +} + +func newGatewayAuthsessionMailHarness(t *testing.T) *gatewayAuthsessionMailHarness { + t.Helper() + + redisRuntime := harness.StartRedisContainer(t) + redisClient := redis.NewClient(&redis.Options{ + Addr: redisRuntime.Addr, + Protocol: 2, + DisableIdentity: true, + }) + t.Cleanup(func() { + require.NoError(t, redisClient.Close()) + }) + + userStub := harness.NewUserStub(t) + + responseSignerPath, responseSignerPublicKey := harness.WriteResponseSignerPEM(t, t.Name()) + mailInternalAddr := harness.FreeTCPAddress(t) + authsessionPublicAddr := harness.FreeTCPAddress(t) + authsessionInternalAddr := harness.FreeTCPAddress(t) + gatewayPublicAddr := harness.FreeTCPAddress(t) + gatewayGRPCAddr := harness.FreeTCPAddress(t) + + mailBinary := harness.BuildBinary(t, "mail", "./mail/cmd/mail") + authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession") + gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway") + + mailProcess := harness.StartProcess(t, "mail", mailBinary, map[string]string{ + "MAIL_LOG_LEVEL": "info", + "MAIL_INTERNAL_HTTP_ADDR": mailInternalAddr, + "MAIL_REDIS_ADDR": redisRuntime.Addr, + "MAIL_TEMPLATE_DIR": moduleTemplateDir(t), + "MAIL_SMTP_MODE": "stub", + "MAIL_STREAM_BLOCK_TIMEOUT": "100ms", + "MAIL_OPERATOR_REQUEST_TIMEOUT": time.Second.String(), + "MAIL_SHUTDOWN_TIMEOUT": "2s", + "OTEL_TRACES_EXPORTER": "none", + "OTEL_METRICS_EXPORTER": "none", + }) + waitForMailReady(t, mailProcess, "http://"+mailInternalAddr) + + authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, map[string]string{ + "AUTHSESSION_LOG_LEVEL": "info", + "AUTHSESSION_PUBLIC_HTTP_ADDR": authsessionPublicAddr, + "AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT": time.Second.String(), + "AUTHSESSION_INTERNAL_HTTP_ADDR": authsessionInternalAddr, + 
"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": time.Second.String(), + "AUTHSESSION_REDIS_ADDR": redisRuntime.Addr, + "AUTHSESSION_USER_SERVICE_MODE": "rest", + "AUTHSESSION_USER_SERVICE_BASE_URL": userStub.BaseURL(), + "AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT": time.Second.String(), + "AUTHSESSION_MAIL_SERVICE_MODE": "rest", + "AUTHSESSION_MAIL_SERVICE_BASE_URL": "http://" + mailInternalAddr, + "AUTHSESSION_MAIL_SERVICE_REQUEST_TIMEOUT": time.Second.String(), + "AUTHSESSION_REDIS_GATEWAY_SESSION_CACHE_KEY_PREFIX": "gateway:session:", + "AUTHSESSION_REDIS_GATEWAY_SESSION_EVENTS_STREAM": "gateway:session_events", + "OTEL_TRACES_EXPORTER": "none", + "OTEL_METRICS_EXPORTER": "none", + }) + waitForAuthsessionPublicReady(t, authsessionProcess, "http://"+authsessionPublicAddr) + + gatewayProcess := harness.StartProcess(t, "gateway", gatewayBinary, map[string]string{ + "GATEWAY_LOG_LEVEL": "info", + "GATEWAY_PUBLIC_HTTP_ADDR": gatewayPublicAddr, + "GATEWAY_AUTHENTICATED_GRPC_ADDR": gatewayGRPCAddr, + "GATEWAY_SESSION_CACHE_REDIS_ADDR": redisRuntime.Addr, + "GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:", + "GATEWAY_SESSION_EVENTS_REDIS_STREAM": "gateway:session_events", + "GATEWAY_CLIENT_EVENTS_REDIS_STREAM": "gateway:client_events", + "GATEWAY_REPLAY_REDIS_KEY_PREFIX": "gateway:replay:", + "GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH": filepath.Clean(responseSignerPath), + "GATEWAY_AUTH_SERVICE_BASE_URL": "http://" + authsessionPublicAddr, + "GATEWAY_PUBLIC_AUTH_UPSTREAM_TIMEOUT": (500 * time.Millisecond).String(), + "GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_REQUESTS": "100", + "GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_WINDOW": "1s", + "GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_PUBLIC_AUTH_RATE_LIMIT_BURST": "100", + "GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS": "100", + "GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW": "1s", + 
"GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_SEND_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST": "100", + "GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_REQUESTS": "100", + "GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_WINDOW": "1s", + "GATEWAY_PUBLIC_HTTP_ANTI_ABUSE_CONFIRM_EMAIL_CODE_IDENTITY_RATE_LIMIT_BURST": "100", + "OTEL_TRACES_EXPORTER": "none", + "OTEL_METRICS_EXPORTER": "none", + }) + harness.WaitForHTTPStatus(t, gatewayProcess, "http://"+gatewayPublicAddr+"/healthz", http.StatusOK) + harness.WaitForTCP(t, gatewayProcess, gatewayGRPCAddr) + + return &gatewayAuthsessionMailHarness{ + redis: redisClient, + userStub: userStub, + authsessionPublicURL: "http://" + authsessionPublicAddr, + authsessionInternalURL: "http://" + authsessionInternalAddr, + gatewayPublicURL: "http://" + gatewayPublicAddr, + gatewayGRPCAddr: gatewayGRPCAddr, + mailInternalURL: "http://" + mailInternalAddr, + responseSignerPublicKey: responseSignerPublicKey, + gatewayProcess: gatewayProcess, + authsessionProcess: authsessionProcess, + mailProcess: mailProcess, + } +} + +func (h *gatewayAuthsessionMailHarness) stopMail(t *testing.T) { + t.Helper() + + h.mailProcess.Stop(t) +} + +func (h *gatewayAuthsessionMailHarness) sendChallengeWithAcceptLanguage(t *testing.T, email string, acceptLanguage string) string { + t.Helper() + + response := postJSONValueWithHeaders( + t, + h.gatewayPublicURL+gatewaySendEmailCodePath, + map[string]string{"email": email}, + map[string]string{"Accept-Language": acceptLanguage}, + ) + require.Equal(t, http.StatusOK, response.StatusCode, response.Body) + + var body sendEmailCodeResponse + require.NoError(t, decodeStrictJSONPayload([]byte(response.Body), &body)) + require.NotEmpty(t, body.ChallengeID) + return body.ChallengeID +} + +func (h *gatewayAuthsessionMailHarness) confirmCode(t *testing.T, challengeID string, code string, clientPrivateKey ed25519.PrivateKey) httpResponse { + t.Helper() + + return postJSONValue(t, 
h.gatewayPublicURL+gatewayConfirmEmailCodePath, map[string]string{
+		"challenge_id":      challengeID,
+		"code":              code,
+		"client_public_key": encodePublicKey(clientPrivateKey.Public().(ed25519.PublicKey)),
+		"time_zone":         testTimeZone,
+	})
+}
+
+func (h *gatewayAuthsessionMailHarness) eventuallyListDeliveries(t *testing.T, query url.Values) mailDeliveryListResponse {
+	t.Helper()
+
+	// Poll from the test goroutine instead of require.Eventually: listDeliveries
+	// calls require (and therefore t.FailNow), which is only safe on the
+	// goroutine running the test, while require.Eventually evaluates its
+	// condition on a separate goroutine. This mirrors waitForGatewaySession.
+	deadline := time.Now().Add(10 * time.Second)
+	for time.Now().Before(deadline) {
+		response := h.listDeliveries(t, query)
+		if len(response.Items) > 0 {
+			return response
+		}
+
+		time.Sleep(50 * time.Millisecond)
+	}
+
+	t.Fatalf("mail deliveries matching %q were not recorded in time", query.Encode())
+	return mailDeliveryListResponse{}
+}
+
+func (h *gatewayAuthsessionMailHarness) listDeliveries(t *testing.T, query url.Values) mailDeliveryListResponse {
+	t.Helper()
+
+	target := h.mailInternalURL + gatewayMailDeliveriesPath
+	if encoded := query.Encode(); encoded != "" {
+		target += "?" + encoded
+	}
+
+	request, err := http.NewRequest(http.MethodGet, target, nil)
+	require.NoError(t, err)
+
+	return doJSONRequest[mailDeliveryListResponse](t, request, http.StatusOK)
+}
+
+func (h *gatewayAuthsessionMailHarness) getDelivery(t *testing.T, deliveryID string) mailDeliveryDetailResponse {
+	t.Helper()
+
+	request, err := http.NewRequest(http.MethodGet, h.mailInternalURL+gatewayMailDeliveriesPath+"/"+url.PathEscape(deliveryID), nil)
+	require.NoError(t, err)
+
+	return doJSONRequest[mailDeliveryDetailResponse](t, request, http.StatusOK)
+}
+
+func (h *gatewayAuthsessionMailHarness) waitForGatewaySession(t *testing.T, deviceSessionID string) gatewaySessionRecord {
+	t.Helper()
+
+	deadline := time.Now().Add(5 * time.Second)
+	for time.Now().Before(deadline) {
+		payload, err := h.redis.Get(context.Background(), "gateway:session:"+deviceSessionID).Bytes()
+		if err == nil {
+			var record gatewaySessionRecord
+			require.NoError(t, decodeStrictJSONPayload(payload, &record))
+			return record
+		}
+
+		time.Sleep(25 * time.Millisecond)
+	}
+
+	t.Fatalf("gateway session projection for %s was not published in time", deviceSessionID)
+	
return gatewaySessionRecord{} +} + +func (h *gatewayAuthsessionMailHarness) dialGateway(t *testing.T) *grpc.ClientConn { + t.Helper() + + ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) + defer cancel() + + conn, err := grpc.DialContext( + ctx, + h.gatewayGRPCAddr, + grpc.WithTransportCredentials(insecure.NewCredentials()), + grpc.WithBlock(), + ) + require.NoError(t, err) + t.Cleanup(func() { + require.NoError(t, conn.Close()) + }) + + return conn +} + +func postJSONValue(t *testing.T, targetURL string, body any) httpResponse { + t.Helper() + + return postJSONValueWithHeaders(t, targetURL, body, nil) +} + +func postJSONValueWithHeaders(t *testing.T, targetURL string, body any, headers map[string]string) httpResponse { + t.Helper() + + payload, err := json.Marshal(body) + require.NoError(t, err) + + request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload)) + require.NoError(t, err) + request.Header.Set("Content-Type", "application/json") + for key, value := range headers { + if value == "" { + continue + } + request.Header.Set(key, value) + } + + return doRequest(t, request) +} + +func doJSONRequest[T any](t *testing.T, request *http.Request, wantStatus int) T { + t.Helper() + + response := doRequest(t, request) + require.Equal(t, wantStatus, response.StatusCode, response.Body) + + var decoded T + require.NoError(t, json.Unmarshal([]byte(response.Body), &decoded), response.Body) + return decoded +} + +func doRequest(t *testing.T, request *http.Request) httpResponse { + t.Helper() + + client := &http.Client{ + Timeout: 5 * time.Second, + Transport: &http.Transport{ + DisableKeepAlives: true, + }, + } + t.Cleanup(client.CloseIdleConnections) + + response, err := client.Do(request) + require.NoError(t, err) + defer response.Body.Close() + + payload, err := io.ReadAll(response.Body) + require.NoError(t, err) + + return httpResponse{ + StatusCode: response.StatusCode, + Body: string(payload), + Header: 
response.Header.Clone(), + } +} + +func decodeStrictJSONPayload(payload []byte, target any) error { + decoder := json.NewDecoder(bytes.NewReader(payload)) + decoder.DisallowUnknownFields() + + if err := decoder.Decode(target); err != nil { + return err + } + if err := decoder.Decode(&struct{}{}); err != io.EOF { + if err == nil { + return errors.New("unexpected trailing JSON input") + } + return err + } + + return nil +} + +func templateVariableString(t *testing.T, variables map[string]any, field string) string { + t.Helper() + + value, ok := variables[field] + require.True(t, ok, "template variable %q is missing", field) + + text, ok := value.(string) + require.True(t, ok, "template variable %q must be a string", field) + require.NotEmpty(t, text) + + return text +} + +func newClientPrivateKey(label string) ed25519.PrivateKey { + seed := sha256.Sum256([]byte("galaxy-integration-gateway-authsessionmail-client-" + label)) + return ed25519.NewKeyFromSeed(seed[:]) +} + +func encodePublicKey(publicKey ed25519.PublicKey) string { + return base64.StdEncoding.EncodeToString(publicKey) +} + +func newSubscribeEventsRequest(deviceSessionID string, requestID string, clientPrivateKey ed25519.PrivateKey) *gatewayv1.SubscribeEventsRequest { + payloadHash := contractsgatewayv1.ComputePayloadHash(nil) + + request := &gatewayv1.SubscribeEventsRequest{ + ProtocolVersion: contractsgatewayv1.ProtocolVersionV1, + DeviceSessionId: deviceSessionID, + MessageType: contractsgatewayv1.SubscribeMessageType, + TimestampMs: time.Now().UnixMilli(), + RequestId: requestID, + PayloadHash: payloadHash, + TraceId: "trace-" + requestID, + } + request.Signature = contractsgatewayv1.SignRequest(clientPrivateKey, contractsgatewayv1.RequestSigningFields{ + ProtocolVersion: request.GetProtocolVersion(), + DeviceSessionID: request.GetDeviceSessionId(), + MessageType: request.GetMessageType(), + TimestampMS: request.GetTimestampMs(), + RequestID: request.GetRequestId(), + PayloadHash: 
request.GetPayloadHash(), + }) + + return request +} + +func assertBootstrapEvent(t *testing.T, event *gatewayv1.GatewayEvent, responseSignerPublicKey ed25519.PublicKey, wantRequestID string) { + t.Helper() + + require.Equal(t, contractsgatewayv1.ServerTimeEventType, event.GetEventType()) + require.Equal(t, wantRequestID, event.GetEventId()) + require.Equal(t, wantRequestID, event.GetRequestId()) + require.NoError(t, contractsgatewayv1.VerifyPayloadHash(event.GetPayloadBytes(), event.GetPayloadHash())) + require.NoError(t, contractsgatewayv1.VerifyEventSignature(responseSignerPublicKey, event.GetSignature(), contractsgatewayv1.EventSigningFields{ + EventType: event.GetEventType(), + EventID: event.GetEventId(), + TimestampMS: event.GetTimestampMs(), + RequestID: event.GetRequestId(), + TraceID: event.GetTraceId(), + PayloadHash: event.GetPayloadHash(), + })) +} + +func waitForMailReady(t *testing.T, process *harness.Process, baseURL string) { + t.Helper() + + client := &http.Client{Timeout: 250 * time.Millisecond} + t.Cleanup(client.CloseIdleConnections) + + deadline := time.Now().Add(10 * time.Second) + for time.Now().Before(deadline) { + request, err := http.NewRequest(http.MethodGet, baseURL+gatewayMailDeliveriesPath, nil) + require.NoError(t, err) + + response, err := client.Do(request) + if err == nil { + _, _ = io.Copy(io.Discard, response.Body) + response.Body.Close() + if response.StatusCode == http.StatusOK { + return + } + } + + time.Sleep(25 * time.Millisecond) + } + + t.Fatalf("wait for mail readiness: timeout\n%s", process.Logs()) +} + +func waitForAuthsessionPublicReady(t *testing.T, process *harness.Process, baseURL string) { + t.Helper() + + client := &http.Client{Timeout: 250 * time.Millisecond} + t.Cleanup(client.CloseIdleConnections) + + deadline := time.Now().Add(10 * time.Second) + for time.Now().Before(deadline) { + response, err := postJSONValueMaybe(client, baseURL+gatewaySendEmailCodePath, map[string]string{ + "email": "", + }) + if err == 
nil && response.StatusCode == http.StatusBadRequest { + return + } + + time.Sleep(25 * time.Millisecond) + } + + t.Fatalf("wait for authsession public readiness: timeout\n%s", process.Logs()) +} + +func postJSONValueMaybe(client *http.Client, targetURL string, body any) (httpResponse, error) { + payload, err := json.Marshal(body) + if err != nil { + return httpResponse{}, err + } + + request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload)) + if err != nil { + return httpResponse{}, err + } + request.Header.Set("Content-Type", "application/json") + + response, err := client.Do(request) + if err != nil { + return httpResponse{}, err + } + defer response.Body.Close() + + responseBody, err := io.ReadAll(response.Body) + if err != nil { + return httpResponse{}, err + } + + return httpResponse{ + StatusCode: response.StatusCode, + Body: string(responseBody), + Header: response.Header.Clone(), + }, nil +} + +func moduleTemplateDir(t *testing.T) string { + t.Helper() + + return filepath.Join(repositoryRoot(t), "mail", "templates") +} + +func repositoryRoot(t *testing.T) string { + t.Helper() + + _, file, _, ok := runtime.Caller(0) + if !ok { + t.Fatal("resolve repository root: runtime caller is unavailable") + } + + return filepath.Clean(filepath.Join(filepath.Dir(file), "..", "..")) +} diff --git a/integration/gatewayauthsessionuser/gateway_authsession_user_test.go b/integration/gatewayauthsessionuser/gateway_authsession_user_test.go index bc0c076..924186e 100644 --- a/integration/gatewayauthsessionuser/gateway_authsession_user_test.go +++ b/integration/gatewayauthsessionuser/gateway_authsession_user_test.go @@ -61,6 +61,30 @@ func TestGatewayAuthsessionUserExistingAccountKeepsCreateOnlySettings(t *testing require.Equal(t, "Europe/Paris", accountResponse.Account.TimeZone) } +func TestGatewayAuthsessionUserAcceptLanguageSetsLocalizedPreferredLanguage(t *testing.T) { + h := newGatewayAuthsessionUserHarness(t) + + const email = 
"localized@example.com" + + challengeID := h.sendChallengeWithAcceptLanguage(t, email, "fr-FR, en;q=0.8") + deliveries := h.mailStub.RecordedDeliveries() + require.NotEmpty(t, deliveries) + require.Equal(t, "fr-FR", deliveries[len(deliveries)-1].Locale) + + code := lastMailCodeFor(t, h.mailStub, email) + clientPrivateKey := newClientPrivateKey("localized-account") + + confirmResponse := h.confirmCode(t, challengeID, code, clientPrivateKey) + var confirmBody struct { + DeviceSessionID string `json:"device_session_id"` + } + requireJSONStatus(t, confirmResponse, http.StatusOK, &confirmBody) + + accountResponse := h.executeGetMyAccount(t, confirmBody.DeviceSessionID, "request-localized-account", clientPrivateKey) + require.Equal(t, "fr-FR", accountResponse.Account.PreferredLanguage) + require.Equal(t, gatewayAuthsessionUserTestTimeZone, accountResponse.Account.TimeZone) +} + func TestGatewayAuthsessionUserBlockedEmailAndUserBehavior(t *testing.T) { h := newGatewayAuthsessionUserHarness(t) diff --git a/integration/gatewayauthsessionuser/harness_test.go b/integration/gatewayauthsessionuser/harness_test.go index 90310e0..74ac183 100644 --- a/integration/gatewayauthsessionuser/harness_test.go +++ b/integration/gatewayauthsessionuser/harness_test.go @@ -148,9 +148,18 @@ func newGatewayAuthsessionUserHarness(t *testing.T) *gatewayAuthsessionUserHarne func (h *gatewayAuthsessionUserHarness) sendChallenge(t *testing.T, email string) string { t.Helper() - response := postJSONValue(t, h.gatewayPublicURL+"/api/v1/public/auth/send-email-code", map[string]string{ - "email": email, - }) + return h.sendChallengeWithAcceptLanguage(t, email, "") +} + +func (h *gatewayAuthsessionUserHarness) sendChallengeWithAcceptLanguage(t *testing.T, email string, acceptLanguage string) string { + t.Helper() + + response := postJSONValueWithHeaders( + t, + h.gatewayPublicURL+"/api/v1/public/auth/send-email-code", + map[string]string{"email": email}, + map[string]string{"Accept-Language": 
acceptLanguage}, + ) require.Equal(t, http.StatusOK, response.StatusCode) var body struct { @@ -299,12 +308,24 @@ type userLookupResponse struct { func postJSONValue(t *testing.T, targetURL string, body any) httpResponse { t.Helper() + return postJSONValueWithHeaders(t, targetURL, body, nil) +} + +func postJSONValueWithHeaders(t *testing.T, targetURL string, body any, headers map[string]string) httpResponse { + t.Helper() + payload, err := json.Marshal(body) require.NoError(t, err) request, err := http.NewRequest(http.MethodPost, targetURL, bytes.NewReader(payload)) require.NoError(t, err) request.Header.Set("Content-Type", "application/json") + for key, value := range headers { + if value == "" { + continue + } + request.Header.Set(key, value) + } client := &http.Client{Timeout: 5 * time.Second} response, err := client.Do(request) diff --git a/integration/go.mod b/integration/go.mod index e7fa428..24ace24 100644 --- a/integration/go.mod +++ b/integration/go.mod @@ -6,26 +6,68 @@ require ( github.com/alicebob/miniredis/v2 v2.37.0 github.com/redis/go-redis/v9 v9.18.0 github.com/stretchr/testify v1.11.1 + github.com/testcontainers/testcontainers-go v0.42.0 + github.com/testcontainers/testcontainers-go/modules/redis v0.42.0 google.golang.org/grpc v1.80.0 ) require ( + dario.cat/mergo v1.0.2 // indirect + github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect + github.com/Microsoft/go-winio v0.6.2 // indirect + github.com/cenkalti/backoff/v4 v4.3.0 // indirect github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/containerd/errdefs v1.0.0 // indirect + github.com/containerd/errdefs/pkg v0.3.0 // indirect + github.com/containerd/log v0.1.0 // indirect + github.com/containerd/platforms v0.2.1 // indirect + github.com/cpuguy83/dockercfg v0.3.2 // indirect github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect + github.com/distribution/reference 
v0.6.0 // indirect + github.com/docker/go-connections v0.6.0 // indirect + github.com/docker/go-units v0.5.0 // indirect + github.com/ebitengine/purego v0.10.0 // indirect + github.com/felixge/httpsnoop v1.0.4 // indirect + github.com/go-logr/logr v1.4.3 // indirect + github.com/go-logr/stdr v1.2.2 // indirect + github.com/go-ole/go-ole v1.2.6 // indirect + github.com/google/uuid v1.6.0 // indirect + github.com/klauspost/compress v1.18.5 // indirect github.com/klauspost/cpuid/v2 v2.3.0 // indirect - github.com/kr/pretty v0.3.1 // indirect + github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect + github.com/magiconair/properties v1.8.10 // indirect + github.com/mdelapenya/tlscert v0.2.0 // indirect + github.com/moby/docker-image-spec v1.3.1 // indirect + github.com/moby/go-archive v0.2.0 // indirect + github.com/moby/moby/api v1.54.1 // indirect + github.com/moby/moby/client v0.4.0 // indirect + github.com/moby/patternmatcher v0.6.1 // indirect + github.com/moby/sys/sequential v0.6.0 // indirect + github.com/moby/sys/user v0.4.0 // indirect + github.com/moby/sys/userns v0.1.0 // indirect + github.com/moby/term v0.5.2 // indirect + github.com/opencontainers/go-digest v1.0.0 // indirect + github.com/opencontainers/image-spec v1.1.1 // indirect github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect - github.com/rogpeppe/go-internal v1.14.1 // indirect + github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect + github.com/shirou/gopsutil/v4 v4.26.3 // indirect + github.com/sirupsen/logrus v1.9.4 // indirect + github.com/tklauser/go-sysconf v0.3.16 // indirect + github.com/tklauser/numcpus v0.11.0 // indirect github.com/yuin/gopher-lua v1.1.1 // indirect + github.com/yusufpapurcu/wmi v1.2.4 // indirect + go.opentelemetry.io/auto/sdk v1.2.1 // indirect + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0 // indirect go.opentelemetry.io/otel v1.43.0 // indirect - 
go.opentelemetry.io/otel/sdk/metric v1.43.0 // indirect + go.opentelemetry.io/otel/metric v1.43.0 // indirect + go.opentelemetry.io/otel/trace v1.43.0 // indirect go.uber.org/atomic v1.11.0 // indirect + golang.org/x/crypto v0.49.0 // indirect golang.org/x/net v0.52.0 // indirect golang.org/x/sys v0.42.0 // indirect - golang.org/x/text v0.35.0 // indirect + golang.org/x/text v0.36.0 // indirect google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 // indirect google.golang.org/protobuf v1.36.11 // indirect - gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect gopkg.in/yaml.v3 v3.0.1 // indirect ) diff --git a/integration/go.sum b/integration/go.sum index 82496b7..8e09b24 100644 --- a/integration/go.sum +++ b/integration/go.sum @@ -1,57 +1,170 @@ +dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8= +dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA= +github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6 h1:He8afgbRMd7mFxO99hRNu+6tazq8nFF9lIwo9JFroBk= +github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8= +github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg= +github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E= +github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY= +github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU= github.com/alicebob/miniredis/v2 v2.37.0 h1:RheObYW32G1aiJIj81XVt78ZHJpHonHLHW7OLIshq68= github.com/alicebob/miniredis/v2 v2.37.0/go.mod h1:TcL7YfarKPGDAthEtl5NBeHZfeUQj6OXMm/+iu5cLMM= github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs= github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c= github.com/bsm/gomega v1.27.10 
h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA= github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0= +github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8= +github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI= +github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M= +github.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE= +github.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk= +github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I= +github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo= +github.com/containerd/platforms v0.2.1 h1:zvwtM3rz2YHPQsF2CHYM8+KtB5dvhISiXh5ZpSBQv6A= +github.com/containerd/platforms v0.2.1/go.mod h1:XHCb+2/hzowdiut9rkudds9bE5yJ7npe7dG/wG+uFPw= +github.com/cpuguy83/dockercfg v0.3.2 h1:DlJTyZGBDlXqUZ2Dk2Q3xHs/FtnooJJVaad2S9GKorA= +github.com/cpuguy83/dockercfg v0.3.2/go.mod h1:sugsbF4//dDlL/i+S+rtpIWp+5h0BHJHfjj5/jFyUJc= +github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s= +github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78= github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod 
h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc= +github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk= +github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E= +github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94= +github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE= +github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4= +github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= +github.com/ebitengine/purego v0.10.0 h1:QIw4xfpWT6GWTzaW5XEKy3HXoqrJGx1ijYHzTF0/ISU= +github.com/ebitengine/purego v0.10.0/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ= +github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg= +github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= +github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= +github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY= +github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0= github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= github.com/google/uuid v1.6.0 
h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE= +github.com/klauspost/compress v1.18.5/go.mod h1:cwPg85FWrGar70rWktvGQj8/hthj3wpl0PGDogxkrSQ= github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y= +github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= +github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4= +github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I= +github.com/magiconair/properties v1.8.10 h1:s31yESBquKXCV9a/ScB3ESkOjUYYv+X0rg8SYxI99mE= +github.com/magiconair/properties v1.8.10/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0= +github.com/mdelapenya/tlscert v0.2.0 h1:7H81W6Z/4weDvZBNOfQte5GpIMo0lGYEeWbkGp5LJHI= +github.com/mdelapenya/tlscert v0.2.0/go.mod h1:O4njj3ELLnJjGdkN7M/vIVCpZ+Cf0L6muqOG4tLSl8o= +github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0= +github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo= +github.com/moby/go-archive v0.2.0 h1:zg5QDUM2mi0JIM9fdQZWC7U8+2ZfixfTYoHL7rWUcP8= +github.com/moby/go-archive v0.2.0/go.mod h1:mNeivT14o8xU+5q1YnNrkQVpK+dnNe/K6fHqnTg4qPU= +github.com/moby/moby/api v1.54.1 h1:TqVzuJkOLsgLDDwNLmYqACUuTehOHRGKiPhvH8V3Nn4= +github.com/moby/moby/api v1.54.1/go.mod h1:+RQ6wluLwtYaTd1WnPLykIDPekkuyD/ROWQClE83pzs= +github.com/moby/moby/client v0.4.0 
h1:S+2XegzHQrrvTCvF6s5HFzcrywWQmuVnhOXe2kiWjIw= +github.com/moby/moby/client v0.4.0/go.mod h1:QWPbvWchQbxBNdaLSpoKpCdf5E+WxFAgNHogCWDoa7g= +github.com/moby/patternmatcher v0.6.1 h1:qlhtafmr6kgMIJjKJMDmMWq7WLkKIo23hsrpR3x084U= +github.com/moby/patternmatcher v0.6.1/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc= +github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU= +github.com/moby/sys/sequential v0.6.0/go.mod h1:uyv8EUTrca5PnDsdMGXhZe6CCe8U/UiTWd+lL+7b/Ko= +github.com/moby/sys/user v0.4.0 h1:jhcMKit7SA80hivmFJcbB1vqmw//wU61Zdui2eQXuMs= +github.com/moby/sys/user v0.4.0/go.mod h1:bG+tYYYJgaMtRKgEmuueC0hJEAZWwtIbZTB+85uoHjs= +github.com/moby/sys/userns v0.1.0 h1:tVLXkFOxVu9A64/yh59slHVv9ahO9UIev4JZusOLG/g= +github.com/moby/sys/userns v0.1.0/go.mod h1:IHUYgu/kao6N8YZlp9Cf444ySSvCmDlmzUcYfDHOl28= +github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ= +github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc= +github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U= +github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM= +github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040= +github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU= +github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE= github.com/redis/go-redis/v9 v9.18.0 h1:pMkxYPkEbMPwRdenAzUNyFNrDgHx9U+DrBabWNfSRQs= github.com/redis/go-redis/v9 v9.18.0/go.mod 
h1:k3ufPphLU5YXwNTUcCRXGxUoF1fqxnhFQmscfkCoDA0= github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= +github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= +github.com/shirou/gopsutil/v4 v4.26.3 h1:2ESdQt90yU3oXF/CdOlRCJxrP+Am1aBYubTMTfxJ1qc= +github.com/shirou/gopsutil/v4 v4.26.3/go.mod h1:LZ6ewCSkBqUpvSOf+LsTGnRinC6iaNUNMGBtDkJBaLQ= +github.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w= +github.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g= +github.com/stretchr/objx v0.5.3 h1:jmXUvGomnU1o3W/V5h2VEradbpJDwGrzugQQvL0POH4= +github.com/stretchr/objx v0.5.3/go.mod h1:rDQraq+vQZU7Fde9LOZLr8Tax6zZvy4kuNKF+QYS+U0= github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U= github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U= +github.com/testcontainers/testcontainers-go v0.42.0 h1:He3IhTzTZOygSXLJPMX7n44XtK+qhjat1nI9cneBbUY= +github.com/testcontainers/testcontainers-go v0.42.0/go.mod h1:vZjdY1YmUA1qEForxOIOazfsrdyORJAbhi0bp8plN30= +github.com/testcontainers/testcontainers-go/modules/redis v0.42.0 h1:id/6LH8ZeDrtAUVSuNvZUAJ1kVpb82y1pr9yweAWsRg= +github.com/testcontainers/testcontainers-go/modules/redis v0.42.0/go.mod h1:uF0jI8FITagQpBNOgweGBmPf6rP4K0SeL1XFPbsZSSY= +github.com/tklauser/go-sysconf v0.3.16 h1:frioLaCQSsF5Cy1jgRBrzr6t502KIIwQ0MArYICU0nA= +github.com/tklauser/go-sysconf v0.3.16/go.mod h1:/qNL9xxDhc7tx3HSRsLWNnuzbVfh3e7gh/BmM179nYI= +github.com/tklauser/numcpus v0.11.0 h1:nSTwhKH5e1dMNsCdVBukSZrURJRoHbSEQjdEbY+9RXw= +github.com/tklauser/numcpus v0.11.0/go.mod h1:z+LwcLq54uWZTX0u/bGobaV34u6V7KNlTZejzM6/3MQ= github.com/yuin/gopher-lua v1.1.1 h1:kYKnWBjvbNP4XLT3+bPEwAXJx262OhaHDWDVOPjL46M= github.com/yuin/gopher-lua v1.1.1/go.mod h1:GBR0iDaNXjAgGg9zfCvksxSRnQx76gclCIb7kdAd1Pw= +github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0= 
+github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0= github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0= github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA= go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64= go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0 h1:CqXxU8VOmDefoh0+ztfGaymYbhdB/tT3zs79QaZTNGY= go.opentelemetry.io/otel v1.43.0 h1:mYIM03dnh5zfN7HautFE4ieIig9amkNANT+xcVxAj9I= +go.opentelemetry.io/otel v1.43.0/go.mod h1:JuG+u74mvjvcm8vj8pI5XiHy1zDeoCS2LB1spIq7Ay0= go.opentelemetry.io/otel/metric v1.43.0 h1:d7638QeInOnuwOONPp4JAOGfbCEpYb+K6DVWvdxGzgM= +go.opentelemetry.io/otel/metric v1.43.0/go.mod h1:RDnPtIxvqlgO8GRW18W6Z/4P462ldprJtfxHxyKd2PY= go.opentelemetry.io/otel/sdk v1.43.0 h1:pi5mE86i5rTeLXqoF/hhiBtUNcrAGHLKQdhg4h4V9Dg= +go.opentelemetry.io/otel/sdk v1.43.0/go.mod h1:P+IkVU3iWukmiit/Yf9AWvpyRDlUeBaRg6Y+C58QHzg= go.opentelemetry.io/otel/sdk/metric v1.43.0 h1:S88dyqXjJkuBNLeMcVPRFXpRw2fuwdvfCGLEo89fDkw= +go.opentelemetry.io/otel/sdk/metric v1.43.0/go.mod h1:C/RJtwSEJ5hzTiUz5pXF1kILHStzb9zFlIEe85bhj6A= go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09nk+3A= +go.opentelemetry.io/otel/trace v1.43.0/go.mod h1:/QJhyVBUUswCphDVxq+8mld+AvhXZLhe+8WVFxiFff0= go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE= go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0= +golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4= +golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA= golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0= +golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw= +golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo= -golang.org/x/text v0.35.0 h1:JOVx6vVDFokkpaq1AEptVzLTpDe9KGpj5tR4/X+ybL8= +golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw= +golang.org/x/term v0.41.0 h1:QCgPso/Q3RTJx2Th4bDLqML4W6iJiaXFq2/ftQF13YU= +golang.org/x/term v0.41.0/go.mod h1:3pfBgksrReYfZ5lvYM0kSO0LIkAl4Yl2bXOkKP7Ec2A= +golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg= +golang.org/x/text v0.36.0/go.mod h1:NIdBknypM8iqVmPiuco0Dh6P5Jcdk8lJL0CUebqK164= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4= gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E= google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 h1:m8qni9SQFH0tJc1X0vmnpw/0t+AImlSvp30sEupozUg= +google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8= google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM= google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4= google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE= google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= 
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q= +gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA= +pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk= +pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04= diff --git a/integration/internal/harness/mail_stub.go b/integration/internal/harness/mail_stub.go index e459dd7..2d543f3 100644 --- a/integration/internal/harness/mail_stub.go +++ b/integration/internal/harness/mail_stub.go @@ -22,6 +22,9 @@ type LoginCodeDelivery struct { // Code stores the cleartext login code requested by authsession. Code string + + // Locale stores the canonical BCP 47 language tag selected by authsession. + Locale string } // MailBehavior overrides one external mail-stub response. @@ -100,8 +103,9 @@ func (s *MailStub) handle(writer http.ResponseWriter, request *http.Request) { } var payload struct { - Email string `json:"email"` - Code string `json:"code"` + Email string `json:"email"` + Code string `json:"code"` + Locale string `json:"locale"` } if err := decodeStrictJSONRequest(request, &payload); err != nil { http.Error(writer, err.Error(), http.StatusBadRequest) @@ -110,8 +114,9 @@ func (s *MailStub) handle(writer http.ResponseWriter, request *http.Request) { s.mu.Lock() s.deliveries = append(s.deliveries, LoginCodeDelivery{ - Email: payload.Email, - Code: payload.Code, + Email: payload.Email, + Code: payload.Code, + Locale: payload.Locale, }) behavior := s.behavior s.mu.Unlock() diff --git a/integration/internal/harness/redis_container.go b/integration/internal/harness/redis_container.go new file mode 100644 index 0000000..1c5311d --- /dev/null +++ b/integration/internal/harness/redis_container.go @@ -0,0 +1,47 @@ +package harness + +import ( + "context" + "testing" + + testcontainers 
"github.com/testcontainers/testcontainers-go" + rediscontainer "github.com/testcontainers/testcontainers-go/modules/redis" +) + +const defaultRedisContainerImage = "redis:7" + +// RedisRuntime stores one started real Redis container together with the +// externally reachable endpoint used by black-box suites. +type RedisRuntime struct { + Container *rediscontainer.RedisContainer + Addr string +} + +// StartRedisContainer starts one isolated real Redis container and registers +// automatic cleanup for the suite. +func StartRedisContainer(t testing.TB) *RedisRuntime { + t.Helper() + + ctx := context.Background() + + container, err := rediscontainer.Run(ctx, defaultRedisContainerImage) + if err != nil { + t.Fatalf("start redis container: %v", err) + } + + t.Cleanup(func() { + if err := testcontainers.TerminateContainer(container); err != nil { + t.Errorf("terminate redis container: %v", err) + } + }) + + addr, err := container.Endpoint(ctx, "") + if err != nil { + t.Fatalf("resolve redis container endpoint: %v", err) + } + + return &RedisRuntime{ + Container: container, + Addr: addr, + } +} diff --git a/integration/internal/harness/smtp_capture.go b/integration/internal/harness/smtp_capture.go new file mode 100644 index 0000000..65541f4 --- /dev/null +++ b/integration/internal/harness/smtp_capture.go @@ -0,0 +1,377 @@ +package harness + +import ( + "bytes" + "crypto/rand" + "crypto/rsa" + "crypto/tls" + "crypto/x509" + "crypto/x509/pkix" + "encoding/pem" + "io" + "math/big" + "net" + "os" + "path/filepath" + "strings" + "sync" + "testing" + "time" +) + +// SMTPCaptureConfig configures one local SMTP capture server. +type SMTPCaptureConfig struct { + // SupportsSTARTTLS controls whether the server advertises and accepts the + // STARTTLS upgrade command. + SupportsSTARTTLS bool + + // FinalDataReply stores the final SMTP status line returned after the + // message body has been received. Empty value keeps the default accepted + // reply. 
+ FinalDataReply string +} + +// SMTPCapture stores one running local SMTP capture server together with the +// generated trust anchor used by external processes. +type SMTPCapture struct { + addr string + rootCAPath string + listener net.Listener + tlsConfig *tls.Config + + connsMu sync.Mutex + conns map[net.Conn]struct{} + + payloadsMu sync.Mutex + payloads []string + + acceptWG sync.WaitGroup + connWG sync.WaitGroup +} + +// StartSMTPCapture starts one local SMTP server suitable for black-box tests +// that need to observe captured message payloads. +func StartSMTPCapture(t testing.TB, cfg SMTPCaptureConfig) *SMTPCapture { + t.Helper() + + if cfg.FinalDataReply == "" { + cfg.FinalDataReply = "250 2.0.0 accepted" + } + + serverCertificate, rootCAPEM := newSMTPCertificates(t) + rootCAPath := filepath.Join(t.TempDir(), "smtp-root-ca.pem") + if err := os.WriteFile(rootCAPath, rootCAPEM, 0o600); err != nil { + t.Fatalf("write SMTP root CA: %v", err) + } + + listener, err := net.Listen("tcp", "127.0.0.1:0") + if err != nil { + t.Fatalf("start SMTP capture listener: %v", err) + } + + capture := &SMTPCapture{ + addr: listener.Addr().String(), + rootCAPath: rootCAPath, + listener: listener, + tlsConfig: &tls.Config{ + Certificates: []tls.Certificate{serverCertificate}, + MinVersion: tls.VersionTLS12, + }, + conns: make(map[net.Conn]struct{}), + } + + capture.acceptWG.Add(1) + go func() { + defer capture.acceptWG.Done() + for { + conn, err := listener.Accept() + if err != nil { + return + } + + capture.trackConn(conn) + capture.connWG.Add(1) + go func() { + defer capture.connWG.Done() + defer capture.untrackConn(conn) + defer func() { + _ = conn.Close() + }() + + capture.serveConnection(conn, cfg) + }() + } + }() + + t.Cleanup(func() { + _ = capture.listener.Close() + capture.closeConnections() + capture.acceptWG.Wait() + capture.connWG.Wait() + }) + + return capture +} + +// Addr returns the externally reachable TCP address of the capture server. 
+func (capture *SMTPCapture) Addr() string { + if capture == nil { + return "" + } + + return capture.addr +} + +// RootCAPath returns the PEM path that should be trusted by clients talking to +// the capture server over STARTTLS. +func (capture *SMTPCapture) RootCAPath() string { + if capture == nil { + return "" + } + + return capture.rootCAPath +} + +// LatestPayload returns the most recently captured SMTP DATA payload. +func (capture *SMTPCapture) LatestPayload() string { + if capture == nil { + return "" + } + + capture.payloadsMu.Lock() + defer capture.payloadsMu.Unlock() + + if len(capture.payloads) == 0 { + return "" + } + + return capture.payloads[len(capture.payloads)-1] +} + +func (capture *SMTPCapture) trackConn(conn net.Conn) { + capture.connsMu.Lock() + defer capture.connsMu.Unlock() + capture.conns[conn] = struct{}{} +} + +func (capture *SMTPCapture) untrackConn(conn net.Conn) { + capture.connsMu.Lock() + defer capture.connsMu.Unlock() + delete(capture.conns, conn) +} + +func (capture *SMTPCapture) closeConnections() { + capture.connsMu.Lock() + defer capture.connsMu.Unlock() + + for conn := range capture.conns { + _ = conn.Close() + } +} + +func (capture *SMTPCapture) appendPayload(payload string) { + capture.payloadsMu.Lock() + defer capture.payloadsMu.Unlock() + capture.payloads = append(capture.payloads, payload) +} + +func (capture *SMTPCapture) serveConnection(conn net.Conn, cfg SMTPCaptureConfig) { + reader := newSMTPLineReader(conn) + writer := newSMTPLineWriter(conn) + writer.writeLine("220 localhost ESMTP") + + tlsActive := false + for { + line, err := reader.readLine() + if err != nil { + return + } + + command := strings.ToUpper(line) + switch { + case strings.HasPrefix(command, "EHLO "), strings.HasPrefix(command, "HELO "): + if cfg.SupportsSTARTTLS && !tlsActive { + writer.writeLines( + "250-localhost", + "250-8BITMIME", + "250-STARTTLS", + "250 SMTPUTF8", + ) + continue + } + + writer.writeLines( + "250-localhost", + "250-8BITMIME", + 
"250 SMTPUTF8", + ) + case command == "STARTTLS": + if !cfg.SupportsSTARTTLS { + writer.writeLine("454 4.7.0 TLS not available") + continue + } + + writer.writeLine("220 Ready to start TLS") + tlsConn := tls.Server(conn, capture.tlsConfig) + if err := tlsConn.Handshake(); err != nil { + return + } + + capture.trackConn(tlsConn) + capture.untrackConn(conn) + conn = tlsConn + reader = newSMTPLineReader(conn) + writer = newSMTPLineWriter(conn) + tlsActive = true + case strings.HasPrefix(command, "MAIL FROM:"): + writer.writeLine("250 2.1.0 Ok") + case strings.HasPrefix(command, "RCPT TO:"): + writer.writeLine("250 2.1.5 Ok") + case command == "DATA": + writer.writeLine("354 End data with .") + + var payload strings.Builder + for { + dataLine, err := reader.readRawLine() + if err != nil { + return + } + if dataLine == ".\r\n" { + break + } + payload.WriteString(dataLine) + } + + capture.appendPayload(payload.String()) + writer.writeLine(cfg.FinalDataReply) + case command == "RSET": + writer.writeLine("250 2.0.0 Ok") + case command == "QUIT": + writer.writeLine("221 2.0.0 Bye") + return + default: + writer.writeLine("250 2.0.0 Ok") + } + } +} + +type smtpLineReader struct { + conn net.Conn +} + +func newSMTPLineReader(conn net.Conn) *smtpLineReader { + return &smtpLineReader{conn: conn} +} + +func (reader *smtpLineReader) readLine() (string, error) { + line, err := reader.readRawLine() + if err != nil { + return "", err + } + + return strings.TrimSuffix(strings.TrimSuffix(line, "\n"), "\r"), nil +} + +func (reader *smtpLineReader) readRawLine() (string, error) { + var buffer bytes.Buffer + tmp := make([]byte, 1) + for { + if _, err := reader.conn.Read(tmp); err != nil { + return "", err + } + + buffer.WriteByte(tmp[0]) + if tmp[0] == '\n' { + return buffer.String(), nil + } + } +} + +type smtpLineWriter struct { + conn net.Conn +} + +func newSMTPLineWriter(conn net.Conn) *smtpLineWriter { + return &smtpLineWriter{conn: conn} +} + +func (writer *smtpLineWriter) 
writeLine(line string) { + _, _ = io.WriteString(writer.conn, line+"\r\n") +} + +func (writer *smtpLineWriter) writeLines(lines ...string) { + for _, line := range lines { + writer.writeLine(line) + } +} + +func newSMTPCertificates(t testing.TB) (tls.Certificate, []byte) { + t.Helper() + + rootKey, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + t.Fatalf("generate SMTP root key: %v", err) + } + + now := time.Now() + rootTemplate := x509.Certificate{ + SerialNumber: big.NewInt(1), + Subject: pkix.Name{ + CommonName: "galaxy-integration-smtp-root", + }, + NotBefore: now.Add(-time.Hour), + NotAfter: now.Add(24 * time.Hour), + KeyUsage: x509.KeyUsageCertSign | x509.KeyUsageCRLSign | x509.KeyUsageDigitalSignature, + IsCA: true, + BasicConstraintsValid: true, + } + + rootDER, err := x509.CreateCertificate(rand.Reader, &rootTemplate, &rootTemplate, &rootKey.PublicKey, rootKey) + if err != nil { + t.Fatalf("create SMTP root certificate: %v", err) + } + + rootPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: rootDER}) + + serverKey, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + t.Fatalf("generate SMTP server key: %v", err) + } + + serverTemplate := x509.Certificate{ + SerialNumber: big.NewInt(2), + Subject: pkix.Name{ + CommonName: "127.0.0.1", + }, + NotBefore: now.Add(-time.Hour), + NotAfter: now.Add(24 * time.Hour), + KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature, + ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth}, + BasicConstraintsValid: true, + DNSNames: []string{"localhost"}, + IPAddresses: []net.IP{net.ParseIP("127.0.0.1")}, + } + + rootCert, err := x509.ParseCertificate(rootDER) + if err != nil { + t.Fatalf("parse SMTP root certificate: %v", err) + } + + serverDER, err := x509.CreateCertificate(rand.Reader, &serverTemplate, rootCert, &serverKey.PublicKey, rootKey) + if err != nil { + t.Fatalf("create SMTP server certificate: %v", err) + } + + serverPEM := 
pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: serverDER}) + serverKeyPEM := pem.EncodeToMemory(&pem.Block{ + Type: "RSA PRIVATE KEY", + Bytes: x509.MarshalPKCS1PrivateKey(serverKey), + }) + + certificate, err := tls.X509KeyPair(append(serverPEM, rootPEM...), serverKeyPEM) + if err != nil { + t.Fatalf("load SMTP server key pair: %v", err) + } + + return certificate, rootPEM +} diff --git a/mail/PLAN.md b/mail/PLAN.md new file mode 100644 index 0000000..5fc00e8 --- /dev/null +++ b/mail/PLAN.md @@ -0,0 +1,834 @@ +# Mail Service Implementation Plan + +This plan has already been implemented and stays here for historical reasons. + +It should NOT be treated as a source of truth for service functionality. + +## Summary + +This plan describes the full v1 implementation path for `galaxy/mail`. +It is intentionally decision-complete: the implementer should not need to +invent service boundaries, storage layout, contracts, or retry semantics while +building the service. + +The target outcome is one runnable internal service that: + +- accepts auth-code mail synchronously over trusted REST +- consumes generic non-auth mail asynchronously from `Redis Streams` +- renders templates or accepts pre-rendered content +- delivers through SMTP or a deterministic stub +- retries bounded transient failures +- stores durable delivery audit state +- exposes trusted operator reads and resend controls + +## Global Rules + +- keep one logical delivery equal to one SMTP envelope +- keep `suppressed` separate from failure +- require explicit idempotency for every accepted command +- prefer deterministic Redis-backed scheduling over in-memory timers +- keep operator inspection possible without direct Redis access +- treat filesystem templates as the v1 source of truth +- keep public and trusted contracts explicit and versionable + +## Target Runtime Layout + +```text +mail/ +├── cmd/ +│ └── mail/ +│ └── main.go +├── internal/ +│ ├── app/ +│ │ ├── app.go +│ │ ├── bootstrap.go +│ │ └──
runtime.go +│ ├── config/ +│ │ ├── config.go +│ │ ├── env.go +│ │ └── validation.go +│ ├── domain/ +│ │ ├── delivery/ +│ │ │ ├── model.go +│ │ │ ├── state.go +│ │ │ └── errors.go +│ │ ├── attempt/ +│ │ │ ├── model.go +│ │ │ ├── state.go +│ │ │ └── policy.go +│ │ ├── idempotency/ +│ │ │ └── model.go +│ │ ├── template/ +│ │ │ ├── model.go +│ │ │ └── locale.go +│ │ └── common/ +│ │ ├── email.go +│ │ ├── locale.go +│ │ ├── attachment.go +│ │ └── ids.go +│ ├── ports/ +│ │ ├── deliverystore.go +│ │ ├── attemptstore.go +│ │ ├── idempotencystore.go +│ │ ├── commandsubscriber.go +│ │ ├── attemptscheduler.go +│ │ ├── templatecatalog.go +│ │ ├── provider.go +│ │ ├── clock.go +│ │ └── idgenerator.go +│ ├── service/ +│ │ ├── acceptauthdelivery/ +│ │ ├── acceptgenericdelivery/ +│ │ ├── executeattempt/ +│ │ ├── listdeliveries/ +│ │ ├── getdelivery/ +│ │ ├── listattempts/ +│ │ └── resenddelivery/ +│ ├── api/ +│ │ ├── internalhttp/ +│ │ └── streamcommand/ +│ ├── adapters/ +│ │ ├── redis/ +│ │ ├── smtp/ +│ │ ├── templates/ +│ │ ├── stubprovider/ +│ │ ├── clock/ +│ │ └── id/ +│ ├── worker/ +│ │ ├── command_consumer.go +│ │ ├── scheduler.go +│ │ ├── attempt_worker.go +│ │ └── cleanup_worker.go +│ ├── observability/ +│ │ ├── logging.go +│ │ ├── metrics.go +│ │ └── tracing.go +│ └── testkit/ +│ ├── redis.go +│ ├── provider.go +│ ├── clock.go +│ ├── templates.go +│ └── commands.go +├── templates/ +│ └── ... 
+├── docs/ +│ ├── README.md +│ └── stage-01-vocabulary-and-ownership.md +├── README.md +└── PLAN.md +``` + +## Target Configuration + +Planned environment variables: + +- `MAIL_INTERNAL_HTTP_ADDR` with default `:8080` +- `MAIL_REDIS_ADDR` required +- `MAIL_REDIS_COMMAND_STREAM` with default `mail:delivery_commands` +- `MAIL_REDIS_ATTEMPT_SCHEDULE_KEY` with default `mail:attempt_schedule` +- `MAIL_REDIS_DEAD_LETTER_PREFIX` with default `mail:dead_letters:` +- `MAIL_SMTP_MODE=stub|smtp` with default `stub` +- `MAIL_SMTP_ADDR` required in `smtp` mode +- `MAIL_SMTP_USERNAME` optional +- `MAIL_SMTP_PASSWORD` optional +- `MAIL_SMTP_FROM_EMAIL` required in `smtp` mode +- `MAIL_SMTP_FROM_NAME` optional +- `MAIL_SMTP_TIMEOUT` with default `15s` +- `MAIL_TEMPLATE_DIR` with default `templates` +- `MAIL_ATTEMPT_WORKER_CONCURRENCY` with default `4` +- `MAIL_STREAM_BLOCK_TIMEOUT` with default `2s` +- `MAIL_OPERATOR_REQUEST_TIMEOUT` with default `5s` +- `MAIL_IDEMPOTENCY_TTL` with default `168h` +- `MAIL_DELIVERY_TTL` with default `720h` +- `MAIL_ATTEMPT_TTL` with default `2160h` + +## ~~Stage 01.~~ Freeze Vocabulary and Ownership + +Status: implemented. + +### Goal + +Freeze the service vocabulary and remove cross-service ambiguity before any +implementation work starts. + +### Tasks + +- Freeze that `Mail Service` owns delivery acceptance, attempts, retry, + suppression, audit, and resend. +- Freeze that `Notification Service` owns the business decision to request + non-auth mail. +- Freeze that `Auth / Session Service` uses the dedicated auth REST contract. +- Freeze that `Geo Profile Service` routes optional admin mail through + `Notification Service`, not directly to `Mail Service`. +- Freeze that operator APIs are part of v1, not a later add-on. 
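The caller freezes above pin down a closed set of accepted command sources (`authsession`, `notification`, `operator_resend`). A minimal Go sketch of that closed set follows; the type and helper names are illustrative assumptions, not the service's actual API:

```go
package main

import "fmt"

// Source identifies which trusted caller produced a delivery command.
// The value set mirrors the frozen list; the type itself is illustrative.
type Source string

const (
	SourceAuthSession    Source = "authsession"
	SourceNotification   Source = "notification"
	SourceOperatorResend Source = "operator_resend"
)

// IsValid reports whether the source is one of the frozen accepted values.
func (s Source) IsValid() bool {
	switch s {
	case SourceAuthSession, SourceNotification, SourceOperatorResend:
		return true
	}
	return false
}

func main() {
	fmt.Println(Source("authsession").IsValid()) // true
	// Geo-profile admin mail has no direct path into Mail Service,
	// so "geoprofile" is never an accepted source.
	fmt.Println(Source("geoprofile").IsValid()) // false
}
```

Anything outside the set must reach `Mail Service` indirectly, arriving as `notification` after routing through `Notification Service`.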
+ +### Artifacts + +- stable service README +- aligned architecture references +- list of accepted source values: + - `authsession` + - `notification` + - `operator_resend` + +### Exit Criteria + +- no document still treats `Geo Profile Service` as a direct `Mail Service` + caller +- no document claims that all `Mail Service` callers use the same transport + +### Targeted Tests + +- documentation review only + +## ~~Stage 02.~~ Define the Domain Model and State Rules + +Status: implemented. + +### Goal + +Describe the logical delivery entities and freeze their valid state +transitions. + +### Tasks + +- Define `mail_delivery`, `mail_attempt`, `mail_idempotency_record`, + `mail_template`, and `mail_dead_letter_entry`. +- Freeze delivery states: + - `accepted` + - `queued` + - `rendered` + - `sending` + - `sent` + - `suppressed` + - `failed` + - `dead_letter` +- Freeze attempt states: + - `scheduled` + - `in_progress` + - `provider_accepted` + - `provider_rejected` + - `transport_failed` + - `timed_out` +- Freeze resend as clone-only with immutable parent history. +- Freeze terminal-state resend eligibility: + - `sent` + - `suppressed` + - `failed` + - `dead_letter` + +### Artifacts + +- domain models +- state transition table +- resend eligibility rules + +### Exit Criteria + +- every use case can rely on one explicit state machine + +### Targeted Tests + +- unit tests for allowed and forbidden delivery transitions +- unit tests for resend eligibility + +## ~~Stage 03.~~ Freeze the Redis Physical Model + +Status: implemented. + +### Goal + +Lock the Redis layout so repository and scheduling adapters can be implemented +without revisiting the data design. 
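As a concreteness check on that goal, per-entity key construction can be sketched as plain helpers; the placeholder segments (`deliveryID`, `attemptID`, `source`, `key`) are assumptions layered on the frozen `mail:` prefixes, and the key catalog itself stays authoritative:

```go
package main

import "fmt"

// deliveryKey builds the primary record key for one logical delivery.
func deliveryKey(deliveryID string) string {
	return "mail:deliveries:" + deliveryID
}

// attemptKey builds the per-attempt record key, scoped by its parent delivery.
func attemptKey(deliveryID, attemptID string) string {
	return "mail:attempts:" + deliveryID + ":" + attemptID
}

// idempotencyKey builds the reservation key for one (source, idempotency key)
// pair, matching the frozen duplicate-handling rule.
func idempotencyKey(source, key string) string {
	return "mail:idempotency:" + source + ":" + key
}

func main() {
	fmt.Println(deliveryKey("d-123"))                 // mail:deliveries:d-123
	fmt.Println(attemptKey("d-123", "a-1"))           // mail:attempts:d-123:a-1
	fmt.Println(idempotencyKey("authsession", "k-9")) // mail:idempotency:authsession:k-9
}
```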
+
+### Tasks
+
+- Freeze primary keys:
+  - `mail:deliveries:<delivery_id>`
+  - `mail:attempts:<delivery_id>:<attempt_no>`
+  - `mail:idempotency:<source>:<idempotency_key>`
+  - `mail:dead_letters:<delivery_id>`
+- Freeze scheduler and ingress keys:
+  - `mail:delivery_commands`
+  - `mail:attempt_schedule`
+- Freeze search indexes:
+  - `mail:idx:recipient:<recipient>`
+  - `mail:idx:status:<status>`
+  - `mail:idx:source:<source>`
+  - `mail:idx:template:<template_id>`
+  - `mail:idx:idempotency:<source>:<idempotency_key>`
+  - `mail:idx:created_at`
+- Freeze storage format:
+  - canonical JSON blob in Redis string keys for delivery and attempt records
+  - sorted-set indexes scored by `created_at_ms`
+- Explicitly reject Redis storage for template contents in v1 because the
+  template catalog is filesystem-backed.
+- Freeze retention:
+  - idempotency `7d`
+  - delivery `30d`
+  - attempts and dead letters `90d`
+- Freeze atomic write boundaries:
+  - reserve idempotency
+  - store delivery
+  - schedule first attempt
+  - create resend clone
+
+### Artifacts
+
+- Redis key catalog
+- atomicity notes for Lua or optimistic transaction usage
+- retention and cleanup notes
+
+### Exit Criteria
+
+- the Redis adapters can be implemented without unresolved naming or
+  transactional questions
+
+### Targeted Tests
+
+- repository tests for key naming
+- atomicity tests for duplicate idempotency races
+- cleanup tests for TTL-driven record expiry
+
+## ~~Stage 04.~~ Freeze the Auth REST Contract
+
+Status: implemented.
+
+### Goal
+
+Define the direct trusted contract from `Auth / Session Service`.
+
+### Tasks
+
+- Freeze route `POST /api/v1/internal/login-code-deliveries`.
+- Freeze required `Idempotency-Key` header.
+- Freeze body fields:
+  - `email`
+  - `code`
+  - `locale`
+- Freeze success outcomes:
+  - `sent`
+  - `suppressed`
+- Freeze trusted error codes:
+  - `invalid_request`
+  - `internal_error`
+  - `service_unavailable`
+- Freeze the meaning of `sent` as durable acceptance into the mail pipeline,
+  not immediate SMTP completion.
+- Freeze auth-client behavior of no automatic retry on upstream or transport + failures. + +### Artifacts + +- request/response DTOs +- handler contract notes +- error mapping table + +### Exit Criteria + +- the auth REST client and server can be built from the frozen contract + +### Targeted Tests + +- strict JSON decoding tests +- required header validation tests +- idempotent repeat request tests +- sent versus suppressed response tests + +## ~~Stage 05.~~ Freeze the Async Generic Contract + +Status: implemented. + +### Goal + +Define the exact `Redis Streams` command format used by +`Notification Service`. + +### Tasks + +- Freeze the stream name `mail:delivery_commands`. +- Freeze required fields: + - `delivery_id` + - `source` + - `payload_mode` + - `idempotency_key` + - `requested_at_ms` + - `payload_json` +- Freeze optional fields: + - `request_id` + - `trace_id` +- Freeze that async `source` accepts only: + - `notification` +- Freeze payload modes: + - `rendered` + - `template` +- Freeze the rendered payload shape with: + - recipient envelope + - `subject` + - `text_body` + - optional `html_body` + - attachments +- Freeze the template payload shape with: + - recipient envelope + - `template_id` + - `locale` + - `variables` + - attachments +- Freeze duplicate handling by `(source, idempotency_key)`. +- Freeze `request_id` and `trace_id` as tracing-only metadata excluded from + the idempotency fingerprint. +- Freeze the malformed-command path into dedicated operator-visible + `mail_malformed_command_entry` state outside `mail_delivery`. 
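The duplicate-handling rule frozen above can be sketched as a pure function over the envelope: only `(source, idempotency_key)` feeds the fingerprint, while `request_id` and `trace_id` are carried but ignored. The struct fields mirror the frozen envelope; the hash construction itself is an illustrative assumption, not part of the contract.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// command mirrors the frozen stream envelope fields.
type command struct {
	DeliveryID     string
	Source         string
	PayloadMode    string
	IdempotencyKey string
	RequestedAtMS  int64
	PayloadJSON    string
	RequestID      string // tracing-only, excluded from the fingerprint
	TraceID        string // tracing-only, excluded from the fingerprint
}

// fingerprint implements duplicate handling by (source, idempotency_key):
// two commands with the same pair are the same logical request regardless
// of tracing metadata. The separator byte avoids ambiguous concatenation.
func fingerprint(c command) string {
	sum := sha256.Sum256([]byte(c.Source + "\x00" + c.IdempotencyKey))
	return hex.EncodeToString(sum[:])
}

func main() {
	a := command{Source: "notification", IdempotencyKey: "notification:mail-123", TraceID: "trace-1"}
	b := a
	b.TraceID = "trace-2" // different tracing metadata, same logical request
	fmt.Println(fingerprint(a) == fingerprint(b)) // prints true
}
```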
+ +### Artifacts + +- stream field catalog +- typed stream command contract +- `AsyncAPI` specification +- `payload_json` schema notes +- malformed command handling rules + +### Exit Criteria + +- `Notification Service` can publish one command without needing a follow-up + design round + +### Targeted Tests + +- strict stream-entry decoding tests +- duplicate idempotency tests +- malformed command recording-contract tests +- rendered and template payload acceptance tests + +## ~~Stage 06.~~ Build the Runnable Service Skeleton + +Status: implemented. + +### Goal + +Create one runnable internal process with config, Redis, HTTP server, and +workers. + +### Tasks + +- Implement `cmd/mail`. +- Implement config loading and validation. +- Wire Redis client, template catalog, provider adapter, HTTP server, and + workers. +- Add graceful shutdown across: + - HTTP server + - stream consumer + - scheduler + - attempt workers + - cleanup worker +- Add startup validation for required Redis and provider config. + +### Artifacts + +- runnable `cmd/mail` +- bootstrap wiring +- graceful shutdown logic + +### Exit Criteria + +- the process starts and stops cleanly with valid config + +### Targeted Tests + +- startup with stub mode +- startup failure on invalid Redis config +- graceful shutdown without leaked goroutines + +## ~~Stage 07.~~ Implement Auth Delivery Acceptance + +Status: implemented. + +### Goal + +Accept auth-code deliveries synchronously and durably. + +### Tasks + +- Implement the auth acceptance use case. +- Validate `email`, `code`, `locale`, and `Idempotency-Key`. +- Classify explicit suppression without treating it as failure. +- Persist delivery, idempotency record, and first scheduled attempt + atomically. +- Keep `suppressed` acceptance as the explicit exception that persists only + delivery plus idempotency state without a first attempt. +- Return stable `sent` or `suppressed`. +- Add telemetry for accepted auth requests. 
+- Reject mismatched replays with the same idempotency key.
+
+### Artifacts
+
+- auth acceptance service
+- internal HTTP handler
+- DTO validation and error mapping
+
+### Exit Criteria
+
+- auth requests create one durable delivery or fail closed without partial
+  state
+
+### Targeted Tests
+
+- valid request accepted as `sent`
+- valid request accepted as `suppressed` without attempt scheduling
+- duplicate identical request returns same result
+- duplicate mismatched request is rejected
+- Redis persistence failure surfaces `503 service_unavailable`
+
+## ~~Stage 08.~~ Implement Async Generic Acceptance
+
+Status: implemented.
+
+### Goal
+
+Consume generic mail commands from `Redis Streams` and convert them into
+durable deliveries.
+
+### Tasks
+
+- Implement plain `XREAD`-based stream consumption.
+- Decode and validate stream entries.
+- Persist one delivery and schedule one first attempt atomically.
+- Advance the consumer offset only after durable acceptance.
+- Meter malformed entries and record them as operator-visible
+  `mail_malformed_command_entry` state.
+- Keep duplicate idempotency requests as no-op accepts.
+
+### Artifacts
+
+- stream consumer worker
+- generic acceptance service
+- malformed command recorder
+
+### Exit Criteria
+
+- valid commands are never lost after they are read from the stream
+
+### Targeted Tests
+
+- rendered command acceptance
+- template command acceptance
+- duplicate command no-op behavior
+- malformed command recording
+- consumer restart continuing from the correct offset
+
+## ~~Stage 09.~~ Implement the Template Catalog and Rendering
+
+Status: implemented.
+
+### Goal
+
+Provide deterministic rendering for template-mode deliveries.
+
+### Tasks
+
+- Implement filesystem-backed template discovery under `templates/`.
+- Freeze directory layout as `<template_id>/<locale>/subject.tmpl`,
+  `text.tmpl`, and optional `html.tmpl`.
+- Implement locale validation and fallback to `en`.
+- Record `locale_fallback_used`.
+- Validate required variables before rendering. +- Reject unknown missing required variables deterministically. +- Add dedicated auth template family: + - `auth.login_code` + +### Artifacts + +- template catalog adapter +- renderer +- auth template assets + +### Exit Criteria + +- template mode always produces one final deterministic subject/body bundle or + one classified render failure + +### Targeted Tests + +- exact locale render +- unsupported locale fallback to `en` +- missing `en` fallback failure +- missing required variable failure +- deterministic render snapshots + +## ~~Stage 10.~~ Implement the Provider Layer + +Status: implemented. + +### Goal + +Provide concrete delivery adapters for SMTP and deterministic local testing. + +### Tasks + +- Freeze provider result classifications: + - `accepted` + - `suppressed` + - `transient_failure` + - `permanent_failure` +- Implement SMTP adapter with: + - dial/connect + - optional auth + - envelope mapping + - MIME body construction + - inline attachment mapping + - timeout classification +- Implement stub adapter with scriptable outcomes. +- Redact provider summaries before storing them in audit fields. + +### Artifacts + +- SMTP adapter +- stub provider adapter +- MIME builder helpers + +### Exit Criteria + +- one attempt can be executed against either adapter with stable classified + outcomes + +### Targeted Tests + +- SMTP request construction tests +- attachment mapping tests +- timeout classification tests +- stub scripted outcome tests + +## ~~Stage 11.~~ Implement the Attempt Scheduler and Workers + +Status: implemented. + +### Goal + +Run due attempts exactly once per scheduled slot and apply retry policy. + +### Tasks + +- Implement `mail:attempt_schedule` claim logic. +- Enforce at most one active attempt per delivery. +- Execute provider calls through the attempt service. +- Schedule retries at: + - `1m` + - `5m` + - `30m` +- Transition exhausted deliveries to `dead_letter`. 
+- Keep recoverable state across process restarts. +- Ensure claimed but unfinished work becomes visible again after worker crash + recovery. + +### Artifacts + +- scheduler worker +- attempt worker +- retry planner +- dead-letter writer + +### Exit Criteria + +- the service survives restarts and resumes scheduled work without duplicate + attempt ownership + +### Targeted Tests + +- immediate first attempt +- transient retry chain to success +- retry exhaustion to dead letter +- crash recovery of in-progress attempt ownership + +## ~~Stage 12.~~ Implement the Operator API + +Status: implemented. + +### Goal + +Provide trusted read and resend controls without direct Redis access. + +### Tasks + +- Implement delivery lookup by `delivery_id`. +- Implement filtered list with deterministic cursor pagination. +- Implement attempt history reads. +- Implement resend clone creation. +- Freeze cursor format as opaque base64 of `created_at_ms:delivery_id`. +- Reject resend for non-terminal statuses. + +### Artifacts + +- operator HTTP handlers +- list query DTOs +- resend service + +### Exit Criteria + +- operators can inspect and resend deliveries safely through the service API + +### Targeted Tests + +- list filtering by recipient, status, source, template, and idempotency key +- cursor pagination tests +- resend allowed for terminal states only +- resend creates a linked clone rather than mutating the original + +## ~~Stage 13.~~ Add Observability and Runbook Coverage + +Status: implemented. + +### Goal + +Make the service operable without reading the code. 
+ +### Tasks + +- Add counters for: + - accepted auth deliveries + - accepted generic deliveries + - suppressed deliveries + - delivery statuses + - attempt outcomes + - dead letters + - locale fallback +- Add gauges or histograms for: + - scheduled depth + - oldest scheduled age + - SMTP latency +- Add structured logs with: + - `delivery_id` + - `source` + - `template_id` + - `attempt_no` +- Add traces around: + - acceptance + - rendering + - provider send + - resend +- Write operator runbook content for: + - backlog growth + - dead-letter spikes + - repeated suppressions + - SMTP auth or timeout failures + - malformed stream commands + +### Artifacts + +- telemetry runtime +- logging helpers +- runbook section drafts + +### Exit Criteria + +- common failure modes are visible and actionable + +### Targeted Tests + +- metric emission tests +- log field presence tests +- trace smoke tests where practical + +## ~~Stage 14.~~ Complete the Test Matrix + +Status: implemented. + +### Goal + +Reach a safe verification baseline across unit, integration, and end-to-end +scenarios. 
+ +### Tasks + +- Add unit tests for: + - validation + - state transitions + - idempotency + - rendering + - provider classification + - retry planning +- Add integration tests for: + - auth REST to durable delivery + - stream command to durable delivery + - attempt execution against stub provider + - operator API against Redis-backed state +- Add end-to-end scenarios for: + - auth `sent` + - auth `suppressed` + - template locale fallback + - transient retry to success + - retry exhaustion to dead letter + - duplicate idempotency key + - resend clone + - graceful shutdown with pending work + +### Artifacts + +- unit test suite +- integration harness +- end-to-end scenarios + +### Exit Criteria + +- the planned behavior is covered closely enough to refactor safely + +### Targeted Tests + +- execute the smallest relevant subset: + - `go test ./mail/...` + - focused integration packages once they exist + +## ~~Stage 15.~~ Align Cross-Service Documentation + +Status: implemented. + +### Goal + +Update existing documentation so the repository tells one coherent story about +`Mail Service`. 
+ +### Tasks + +- Update `ARCHITECTURE.md`: + - direct auth mail is synchronous trusted REST + - generic notification mail is asynchronous through `Notification Service` + - clarify that durable acceptance may precede SMTP completion +- Update `geoprofile` docs: + - remove direct `Geo Profile Service -> Mail Service` + - route optional admin mail through `Notification Service` +- Update `authsession` docs: + - clarify localized mail acceptance semantics + - clarify that `sent` means accepted into the mail pipeline +- Update `gateway` docs: + - document `Accept-Language` as the public auth locale source + - keep JSON bodies unchanged +- Update `user` docs: + - document the auth-provided preferred-language candidate rule for new-user + creation + +### Artifacts + +- aligned service READMEs and docs +- aligned architecture narrative + +### Exit Criteria + +- no first-class document contradicts the new `Mail Service` model + +### Targeted Tests + +- documentation review +- contract-document sync review + +## Final Acceptance Checklist + +The implementation is complete only when all of the following hold: + +- the process starts with Redis and stub provider config +- auth REST intake works with explicit idempotency +- async generic stream intake works with duplicate suppression +- template rendering and locale fallback are deterministic +- SMTP and stub providers both work through the same port +- retries and dead-letter flow operate after restarts +- operator reads and resend clone work +- metrics, logs, and traces cover the main failure modes +- repository documentation is aligned with the final service model diff --git a/mail/README.md b/mail/README.md new file mode 100644 index 0000000..f163888 --- /dev/null +++ b/mail/README.md @@ -0,0 +1,460 @@ +# Mail Service + +`Mail Service` is the internal e-mail delivery service of Galaxy. 
+ +Canonical contracts: + +- [Internal REST API](api/internal-openapi.yaml) +- [Async generic command contract](api/delivery-commands-asyncapi.yaml) +- [Extended service docs](docs/README.md) + +## Purpose + +`Mail Service` owns durable intake, rendering, execution, retry, audit, and +operator recovery for outbound e-mail. + +It does not decide whether a business event should become e-mail. That +decision belongs to `Notification Service`. + +## Responsibility Boundaries + +`Mail Service` is responsible for: + +- direct auth-code mail intake from `Auth / Session Service` +- async generic mail intake from `Notification Service` +- validation of recipient envelope, payload shape, locale, and attachments +- deterministic template rendering for template-mode deliveries +- provider execution through `stub` or `smtp` +- retry scheduling, dead-letter escalation, and operator-visible audit state +- trusted operator reads and resend by clone creation + +`Mail Service` is not responsible for: + +- end-user authentication or authorization +- notification preference ownership +- deciding whether non-auth mail should be sent at all +- direct calls from `Geo Profile Service` +- hot-reloading templates or editing template catalog state at runtime + +Cross-service routing rules: + +- `Auth / Session Service -> Mail Service` is synchronous trusted REST +- `Notification Service -> Mail Service` is asynchronous `Redis Streams` +- `Geo Profile Service` must route optional admin e-mail through + `Notification Service`, not directly to `Mail Service` + +## Runtime Surface + +`cmd/mail` starts one internal-only process with: + +- one trusted internal HTTP listener on `MAIL_INTERNAL_HTTP_ADDR` +- one async command consumer +- one attempt scheduler +- one attempt worker pool +- one cleanup worker + +The service has no public ingress and no dedicated admin listener. 
+ +Intentional runtime omissions: + +- no `/healthz` +- no `/readyz` +- no `/metrics` + +Operational behavior: + +- startup performs bounded Redis connectivity checks and fails fast on invalid + runtime configuration +- the template catalog is parsed once at startup and kept immutable for the + lifetime of the process +- template changes require process restart +- operator handlers execute under `MAIL_OPERATOR_REQUEST_TIMEOUT` + +## Configuration + +Required for all starts: + +- `MAIL_REDIS_ADDR` + +Primary configuration groups: + +- process and logging: + - `MAIL_SHUTDOWN_TIMEOUT` + - `MAIL_LOG_LEVEL` +- internal HTTP: + - `MAIL_INTERNAL_HTTP_ADDR` + - `MAIL_INTERNAL_HTTP_READ_HEADER_TIMEOUT` + - `MAIL_INTERNAL_HTTP_READ_TIMEOUT` + - `MAIL_INTERNAL_HTTP_IDLE_TIMEOUT` +- Redis connectivity: + - `MAIL_REDIS_USERNAME` + - `MAIL_REDIS_PASSWORD` + - `MAIL_REDIS_DB` + - `MAIL_REDIS_TLS_ENABLED` + - `MAIL_REDIS_OPERATION_TIMEOUT` + - `MAIL_REDIS_COMMAND_STREAM` +- SMTP provider: + - `MAIL_SMTP_MODE=stub|smtp` + - `MAIL_SMTP_ADDR` + - `MAIL_SMTP_USERNAME` + - `MAIL_SMTP_PASSWORD` + - `MAIL_SMTP_FROM_EMAIL` + - `MAIL_SMTP_FROM_NAME` + - `MAIL_SMTP_TIMEOUT` + - `MAIL_SMTP_INSECURE_SKIP_VERIFY` +- template catalog: + - `MAIL_TEMPLATE_DIR` +- worker and operator behavior: + - `MAIL_ATTEMPT_WORKER_CONCURRENCY` + - `MAIL_STREAM_BLOCK_TIMEOUT` + - `MAIL_OPERATOR_REQUEST_TIMEOUT` +- OpenTelemetry: + - `OTEL_SERVICE_NAME` + - `OTEL_TRACES_EXPORTER` + - `OTEL_METRICS_EXPORTER` + - `OTEL_EXPORTER_OTLP_PROTOCOL` + - `OTEL_EXPORTER_OTLP_TRACES_PROTOCOL` + - `OTEL_EXPORTER_OTLP_METRICS_PROTOCOL` + - `MAIL_OTEL_STDOUT_TRACES_ENABLED` + - `MAIL_OTEL_STDOUT_METRICS_ENABLED` + +Defaults worth knowing: + +- `MAIL_INTERNAL_HTTP_ADDR=:8080` +- `MAIL_SMTP_MODE=stub` +- `MAIL_SMTP_TIMEOUT=15s` + +Additional SMTP note: + +- `MAIL_SMTP_INSECURE_SKIP_VERIFY=false` by default and is intended only for + local self-signed SMTP capture or similar non-production environments +- 
`MAIL_TEMPLATE_DIR=templates` +- `MAIL_ATTEMPT_WORKER_CONCURRENCY=4` +- `MAIL_STREAM_BLOCK_TIMEOUT=2s` +- `MAIL_OPERATOR_REQUEST_TIMEOUT=5s` +- `MAIL_SHUTDOWN_TIMEOUT=5s` + +Current implementation caveats: + +- `MAIL_REDIS_COMMAND_STREAM` is effective for the async command consumer +- `MAIL_REDIS_ATTEMPT_SCHEDULE_KEY` and `MAIL_REDIS_DEAD_LETTER_PREFIX` are + parsed but the Redis adapters still use the fixed keys + `mail:attempt_schedule` and `mail:dead_letters:` +- `MAIL_IDEMPOTENCY_TTL`, `MAIL_DELIVERY_TTL`, and `MAIL_ATTEMPT_TTL` are + parsed but the Redis adapters still enforce fixed retentions of `7d`, `30d`, + and `90d` + +## Stable Input Contracts + +### 1. Auth delivery REST + +Route: + +- `POST /api/v1/internal/login-code-deliveries` + +Headers: + +- required `Idempotency-Key` + +Request body: + +- `email` +- `code` +- `locale` + +Stable success outcomes: + +- `sent` +- `suppressed` + +Important semantics: + +- `sent` means the request was durably accepted into the internal + mail-delivery pipeline +- `sent` does not mean that SMTP delivery has already completed +- new durable auth deliveries surface as: + - `queued` in `MAIL_SMTP_MODE=smtp` + - `suppressed` in `MAIL_SMTP_MODE=stub` +- duplicate replays with the same normalized request return the same stable + outcome +- mismatched replays on the same `(source, idempotency_key)` return + `409 conflict` + +### 2. 
Async generic command intake + +Ingress stream: + +- `mail:delivery_commands` + +Stable envelope fields: + +- `delivery_id` +- `source` +- `payload_mode` +- `idempotency_key` +- `request_id` +- `trace_id` +- `payload_json` + +Contract rules: + +- async `source` is fixed to `notification` +- supported `payload_mode` values are `rendered` and `template` +- `request_id` and `trace_id` are observability-only metadata and do not + participate in idempotency fingerprinting +- malformed commands are metered, logged, and recorded as dedicated + malformed-command entries +- malformed commands do not create a durable delivery record +- stream offset advances only after durable acceptance or durable + malformed-command recording + +### 3. Trusted operator REST + +Routes: + +- `GET /api/v1/internal/deliveries` +- `GET /api/v1/internal/deliveries/{delivery_id}` +- `GET /api/v1/internal/deliveries/{delivery_id}/attempts` +- `POST /api/v1/internal/deliveries/{delivery_id}/resend` + +List filters: + +- `recipient` +- `status` +- `source` +- `template_id` +- `idempotency_key` +- `from_created_at_ms` +- `to_created_at_ms` +- `limit` +- `cursor` + +Stable list behavior: + +- ordering is `created_at_ms DESC`, then `delivery_id DESC` +- cursor is an opaque base64url encoding of `created_at_ms:delivery_id` +- `idempotency_key` without `source` matches across all stable sources + +Stable resend rules: + +- resend is clone-only +- resend is allowed only for terminal delivery states +- resend creates a new delivery with `source=operator_resend` +- resend clones preserve audit history of the original instead of mutating it + +## Delivery Model + +### Source vocabulary + +Stable `mail_delivery.source` values: + +- `authsession` +- `notification` +- `operator_resend` + +### Payload modes + +Stable `mail_delivery.payload_mode` values: + +- `rendered` +- `template` + +Rules: + +- `rendered` stores final `subject`, `text_body`, and optional `html_body` +- `template` stores `template_id`, 
canonical `locale`, and strict JSON-object + `template_variables` +- raw attachment bodies are stored separately from the delivery audit record + +### Delivery statuses + +Stable operator-visible `mail_delivery.status` values: + +- `queued` +- `rendered` +- `sending` +- `sent` +- `suppressed` +- `failed` +- `dead_letter` + +Status meanings: + +- `queued`: durable intake completed and the next attempt is scheduled +- `rendered`: template content has been materialized +- `sending`: one worker currently owns the active attempt +- `sent`: provider accepted the envelope +- `suppressed`: delivery was intentionally skipped as a successful business + outcome +- `failed`: terminal failure without dead-letter escalation +- `dead_letter`: retry budget was exhausted and operator follow-up is required + +Stable transition rules: + +- newly accepted durable deliveries surface as `queued` or `suppressed` +- `queued -> rendered` is used only for `payload_mode=template` +- `queued|rendered -> sending` happens on successful claim +- `sending -> sent|suppressed|failed|queued|dead_letter` depends on provider + classification and retry policy + +The internal type `delivery.StatusAccepted` still exists in code, but it is +not part of the stable public delivery-status vocabulary and is not emitted by +the current runtime. 
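The stable transition rules above fit in a small table; the sketch below encodes them directly (the template-only restriction on `queued -> rendered` is noted in a comment rather than enforced, since payload mode lives outside this function).

```go
package main

import "fmt"

// allowedTransitions encodes the stable transition rules for
// mail_delivery.status. "sending" fans out according to provider
// classification and retry policy; queued -> rendered applies only to
// payload_mode=template deliveries.
var allowedTransitions = map[string][]string{
	"queued":   {"rendered", "sending"},
	"rendered": {"sending"},
	"sending":  {"sent", "suppressed", "failed", "queued", "dead_letter"},
	// sent, suppressed, failed, and dead_letter are terminal.
}

// canTransition reports whether a delivery may move between two stable
// statuses.
func canTransition(from, to string) bool {
	for _, next := range allowedTransitions[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(canTransition("queued", "sending")) // prints true
	fmt.Println(canTransition("sending", "queued")) // prints true: retry reschedule
	fmt.Println(canTransition("sent", "queued"))    // prints false: sent is terminal
}
```

Keeping the table as data rather than scattered `if` checks is also what makes the planned "allowed and forbidden delivery transitions" tests a direct enumeration.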
+
+### Attempt statuses
+
+Stable `mail_attempt.status` values:
+
+- `scheduled`
+- `in_progress`
+- `render_failed`
+- `provider_accepted`
+- `provider_rejected`
+- `transport_failed`
+- `timed_out`
+
+Rules:
+
+- there is at most one active `in_progress` attempt per delivery
+- `render_failed` means template rendering failed before provider execution
+- `provider_accepted` ends the delivery as `sent`
+- `provider_rejected` is used for:
+  - provider-side suppression ending in `suppressed`
+  - permanent provider failure ending in `failed`
+- retryable paths are expressed through:
+  - `transport_failed`
+  - `timed_out`
+
+## Template and Locale Policy
+
+Template layout:
+
+- `<template_id>/<locale>/subject.tmpl`
+- `<template_id>/<locale>/text.tmpl`
+- optional `<template_id>/<locale>/html.tmpl`
+
+Required auth fallback files:
+
+- `auth.login_code/en/subject.tmpl`
+- `auth.login_code/en/text.tmpl`
+
+Rendering rules:
+
+- the process loads the full catalog at startup
+- exact locale match is attempted first
+- the only fallback locale is `en`
+- there are no intermediate reductions such as `fr-CA -> fr -> en`
+- `locale_fallback_used=true` is stored durably when fallback is applied
+- subject and text use `text/template`
+- optional HTML uses `html/template`
+- missing required variables and template lookup failures are classified into
+  stable render-failure codes
+
+## Redis Logical Model
+
+Primary keys:
+
+- `mail:deliveries:<delivery_id>`
+- `mail:attempts:<delivery_id>:<attempt_no>`
+- `mail:idempotency:<source>:<idempotency_key>`
+- `mail:dead_letters:<delivery_id>`
+- `mail:delivery_payloads:<delivery_id>`
+- `mail:malformed_commands:<stream_entry_id>`
+- `mail:stream_offsets:<stream>`
+
+Scheduling and ingress keys:
+
+- `mail:delivery_commands`
+- `mail:attempt_schedule`
+
+Operator indexes:
+
+- `mail:idx:recipient:<recipient>`
+- `mail:idx:status:<status>`
+- `mail:idx:source:<source>`
+- `mail:idx:template:<template_id>`
+- `mail:idx:idempotency:<source>:<idempotency_key>`
+- `mail:idx:created_at`
+- `mail:idx:malformed_command:created_at`
+
+Storage rules:
+
+- dynamic Redis key segments are base64url-encoded
+- durable records are stored as strict JSON blobs
+- timestamps are stored in
Unix milliseconds +- raw attachment payloads are separated from audit metadata +- malformed async commands are stored idempotently by `stream_entry_id` + +Current fixed retentions: + +- idempotency: `7d` +- deliveries and payload audit: `30d` +- attempts and dead letters: `90d` +- malformed commands: `90d` + +## Provider, Retry, and Failure Policy + +Provider modes: + +- `stub` +- `smtp` + +SMTP rules: + +- outbound SMTP requires `STARTTLS` +- servers without `STARTTLS` support are treated as permanent failure +- SMTP authentication is enabled only when both username and password are set + +Retry ladder: + +- attempt `1 -> 2`: `1m` +- attempt `2 -> 3`: `5m` +- attempt `3 -> 4`: `30m` +- after attempt `4`: `dead_letter` + +Failure handling: + +- retryable provider failures become `transport_failed` or `timed_out`, then + either reschedule or escalate to `dead_letter` +- permanent provider failures become `failed` +- render failures become `failed` with `render_failed` +- stale claimed work is recovered after `MAIL_SMTP_TIMEOUT + 30s` + +## Observability + +The runtime exports telemetry through configured OpenTelemetry exporters only. 
+ +Main signals: + +- `mail.delivery.accepted_auth` +- `mail.delivery.accepted_generic` +- `mail.delivery.suppressed` +- `mail.delivery.status_transitions` +- `mail.attempt.outcomes` +- `mail.delivery.dead_letters` +- `mail.template.locale_fallback` +- `mail.attempt_schedule.depth` +- `mail.attempt_schedule.oldest_age_ms` +- `mail.provider.send.duration_ms` +- `mail.stream_commands.malformed` + +Additional behavior: + +- internal HTTP uses `otelhttp` +- Redis clients use `redisotel` +- structured logs include `otel_trace_id` and `otel_span_id` when available + +## Verification + +Relevant commands: + +- `cd mail && go test ./...` +- `cd integration && go test ./authsessionmail/...` +- `cd integration && go test ./gatewayauthsessionmail/...` + +Extended references: + +- [Runtime and components](docs/runtime.md) +- [Main flows](docs/flows.md) +- [Configuration and contract examples](docs/examples.md) +- [Operator runbook](docs/runbook.md) diff --git a/mail/api/delivery-commands-asyncapi.yaml b/mail/api/delivery-commands-asyncapi.yaml new file mode 100644 index 0000000..64764ed --- /dev/null +++ b/mail/api/delivery-commands-asyncapi.yaml @@ -0,0 +1,215 @@ +asyncapi: 3.1.0 +info: + title: Mail Service Async Generic Command Contract + version: 1.0.0 + description: | + Stable contract for generic asynchronous delivery commands published by + Notification Service to Mail Service through Redis Streams. 
+channels: + deliveryCommands: + address: mail:delivery_commands + messages: + renderedDeliveryCommand: + $ref: '#/components/messages/RenderedDeliveryCommand' + templateDeliveryCommand: + $ref: '#/components/messages/TemplateDeliveryCommand' +operations: + publishDeliveryCommand: + action: send + channel: + $ref: '#/channels/deliveryCommands' + messages: + - $ref: '#/channels/deliveryCommands/messages/renderedDeliveryCommand' + - $ref: '#/channels/deliveryCommands/messages/templateDeliveryCommand' +components: + messages: + RenderedDeliveryCommand: + name: RenderedDeliveryCommand + title: Rendered delivery command + summary: Generic asynchronous delivery command with final rendered content. + payload: + $ref: '#/components/schemas/RenderedDeliveryCommandEnvelope' + examples: + - name: rendered + summary: Rendered delivery command example. + payload: + delivery_id: mail-123 + source: notification + payload_mode: rendered + idempotency_key: notification:mail-123 + requested_at_ms: "1775121700000" + request_id: req-123 + trace_id: trace-123 + payload_json: '{"to":["pilot@example.com"],"cc":[],"bcc":[],"reply_to":["noreply@example.com"],"subject":"Turn ready","text_body":"Turn 54 is ready.","html_body":"
<p>Turn 54 is ready.</p>
","attachments":[{"filename":"report.txt","content_type":"text/plain","content_base64":"cmVwb3J0"}]}' + TemplateDeliveryCommand: + name: TemplateDeliveryCommand + title: Template delivery command + summary: Generic asynchronous delivery command with template rendering data. + payload: + $ref: '#/components/schemas/TemplateDeliveryCommandEnvelope' + examples: + - name: template + summary: Template delivery command example. + payload: + delivery_id: mail-124 + source: notification + payload_mode: template + idempotency_key: notification:mail-124 + requested_at_ms: "1775121700001" + payload_json: '{"to":["pilot@example.com"],"cc":[],"bcc":[],"reply_to":[],"template_id":"game.turn_ready","locale":"fr-FR","variables":{"turn_number":54},"attachments":[]}' + schemas: + RenderedDeliveryCommandEnvelope: + type: object + additionalProperties: false + required: + - delivery_id + - source + - payload_mode + - idempotency_key + - requested_at_ms + - payload_json + properties: + delivery_id: + type: string + source: + type: string + const: notification + payload_mode: + type: string + const: rendered + idempotency_key: + type: string + requested_at_ms: + type: string + pattern: '^[0-9]+$' + request_id: + type: string + trace_id: + type: string + payload_json: + type: string + contentMediaType: application/json + contentSchema: + $ref: '#/components/schemas/RenderedPayloadJSON' + TemplateDeliveryCommandEnvelope: + type: object + additionalProperties: false + required: + - delivery_id + - source + - payload_mode + - idempotency_key + - requested_at_ms + - payload_json + properties: + delivery_id: + type: string + source: + type: string + const: notification + payload_mode: + type: string + const: template + idempotency_key: + type: string + requested_at_ms: + type: string + pattern: '^[0-9]+$' + request_id: + type: string + trace_id: + type: string + payload_json: + type: string + contentMediaType: application/json + contentSchema: + $ref: 
'#/components/schemas/TemplatePayloadJSON' + RenderedPayloadJSON: + type: object + additionalProperties: false + required: + - to + - cc + - bcc + - reply_to + - subject + - text_body + - attachments + properties: + to: + $ref: '#/components/schemas/EmailList' + cc: + $ref: '#/components/schemas/EmailList' + bcc: + $ref: '#/components/schemas/EmailList' + reply_to: + $ref: '#/components/schemas/EmailList' + subject: + type: string + minLength: 1 + text_body: + type: string + minLength: 1 + html_body: + type: string + attachments: + $ref: '#/components/schemas/AttachmentList' + TemplatePayloadJSON: + type: object + additionalProperties: false + required: + - to + - cc + - bcc + - reply_to + - template_id + - locale + - variables + - attachments + properties: + to: + $ref: '#/components/schemas/EmailList' + cc: + $ref: '#/components/schemas/EmailList' + bcc: + $ref: '#/components/schemas/EmailList' + reply_to: + $ref: '#/components/schemas/EmailList' + template_id: + type: string + minLength: 1 + locale: + type: string + minLength: 1 + variables: + type: object + additionalProperties: true + attachments: + $ref: '#/components/schemas/AttachmentList' + EmailList: + type: array + items: + type: string + format: email + AttachmentList: + type: array + maxItems: 5 + items: + $ref: '#/components/schemas/Attachment' + Attachment: + type: object + additionalProperties: false + required: + - filename + - content_type + - content_base64 + properties: + filename: + type: string + minLength: 1 + content_type: + type: string + minLength: 1 + content_base64: + type: string + description: Inline base64 payload. The sum of all attachment `content_base64` lengths must not exceed 2097152 bytes. 
diff --git a/mail/api/internal-openapi.yaml b/mail/api/internal-openapi.yaml
new file mode 100644
index 0000000..8f018bd
--- /dev/null
+++ b/mail/api/internal-openapi.yaml
@@ -0,0 +1,725 @@
+openapi: 3.0.3
+info:
+  title: Galaxy Mail Service Internal REST API
+  version: v1
+  description: |
+    This specification documents the trusted internal REST contract of
+    `galaxy/mail`.
+
+    The current document freezes:
+    - the dedicated auth-delivery route used by `Auth / Session Service`;
+    - the trusted operator read and resend routes used for delivery audit and
+      recovery.
+
+    Contract rules:
+    - the internal surface lives under `/api/v1/internal`;
+    - request and response bodies are JSON only;
+    - auth-delivery intake requires the `Idempotency-Key` header;
+    - request bodies use strict JSON decoding with unknown-field rejection;
+    - trailing JSON input is rejected;
+    - success outcomes are limited to `sent` and `suppressed` on the auth
+      route;
+    - mismatched replays on the same `Idempotency-Key` return `409 conflict`;
+    - operator listing order is `created_at_ms DESC`, then `delivery_id DESC`;
+    - operator list cursors are opaque base64url encodings of
+      `created_at_ms:delivery_id`;
+    - `sent` means durable acceptance into the mail-delivery pipeline rather
+      than immediate SMTP completion;
+    - auth callers must not automatically retry transport or upstream
+      failures;
+    - `Auth / Session Service` sends the created `challenge_id` as the raw
+      `Idempotency-Key` header value.
+servers:
+  - url: http://localhost:8080
+    description: Default local internal listener for Mail Service.
+tags:
+  - name: AuthIntegration
+    description: Trusted auth-facing mail-delivery intake frozen for `Auth / Session Service`.
+  - name: OperatorIntegration
+    description: Trusted operator-facing delivery reads and resend controls.
+paths:
+  /api/v1/internal/login-code-deliveries:
+    post:
+      tags:
+        - AuthIntegration
+      operationId: acceptLoginCodeDelivery
+      summary: Accept one auth login-code delivery request
+      description: |
+        Validates one trusted auth login-code delivery request and accepts it
+        durably into the internal mail-delivery pipeline or intentionally
+        suppresses outward delivery while keeping the auth flow success-shaped.
+      parameters:
+        - $ref: "#/components/parameters/IdempotencyKey"
+      requestBody:
+        required: true
+        content:
+          application/json:
+            schema:
+              $ref: "#/components/schemas/LoginCodeDeliveryRequest"
+      responses:
+        "200":
+          description: Stable auth-delivery acceptance outcome.
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/LoginCodeDeliveryResponse"
+        "400":
+          $ref: "#/components/responses/InvalidRequestError"
+        "409":
+          $ref: "#/components/responses/ConflictError"
+        "500":
+          $ref: "#/components/responses/InternalError"
+        "503":
+          $ref: "#/components/responses/ServiceUnavailableError"
+  /api/v1/internal/deliveries:
+    get:
+      tags:
+        - OperatorIntegration
+      operationId: listDeliveries
+      summary: List durable deliveries for trusted operators
+      description: |
+        Returns one deterministic page of brief delivery summaries ordered by
+        `created_at_ms DESC`, then `delivery_id DESC`.
+      parameters:
+        - $ref: "#/components/parameters/RecipientFilter"
+        - $ref: "#/components/parameters/StatusFilter"
+        - $ref: "#/components/parameters/SourceFilter"
+        - $ref: "#/components/parameters/TemplateIDFilter"
+        - $ref: "#/components/parameters/IdempotencyKeyFilter"
+        - $ref: "#/components/parameters/FromCreatedAtMSFilter"
+        - $ref: "#/components/parameters/ToCreatedAtMSFilter"
+        - $ref: "#/components/parameters/ListLimit"
+        - $ref: "#/components/parameters/ListCursor"
+      responses:
+        "200":
+          description: One deterministic page of delivery summaries.
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/DeliveryListResponse"
+        "400":
+          $ref: "#/components/responses/InvalidRequestError"
+        "500":
+          $ref: "#/components/responses/InternalError"
+        "503":
+          $ref: "#/components/responses/ServiceUnavailableError"
+  /api/v1/internal/deliveries/{delivery_id}:
+    get:
+      tags:
+        - OperatorIntegration
+      operationId: getDelivery
+      summary: Get one durable delivery for trusted operators
+      parameters:
+        - $ref: "#/components/parameters/DeliveryIDPath"
+      responses:
+        "200":
+          description: One full delivery view with the optional dead-letter entry.
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/DeliveryDetailResponse"
+        "400":
+          $ref: "#/components/responses/InvalidRequestError"
+        "404":
+          $ref: "#/components/responses/DeliveryNotFoundError"
+        "500":
+          $ref: "#/components/responses/InternalError"
+        "503":
+          $ref: "#/components/responses/ServiceUnavailableError"
+  /api/v1/internal/deliveries/{delivery_id}/attempts:
+    get:
+      tags:
+        - OperatorIntegration
+      operationId: listDeliveryAttempts
+      summary: Get the attempt history of one durable delivery
+      parameters:
+        - $ref: "#/components/parameters/DeliveryIDPath"
+      responses:
+        "200":
+          description: The ordered attempt history of one durable delivery.
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/DeliveryAttemptsResponse"
+        "400":
+          $ref: "#/components/responses/InvalidRequestError"
+        "404":
+          $ref: "#/components/responses/DeliveryNotFoundError"
+        "500":
+          $ref: "#/components/responses/InternalError"
+        "503":
+          $ref: "#/components/responses/ServiceUnavailableError"
+  /api/v1/internal/deliveries/{delivery_id}/resend:
+    post:
+      tags:
+        - OperatorIntegration
+      operationId: resendDelivery
+      summary: Clone one terminal delivery for resend
+      parameters:
+        - $ref: "#/components/parameters/DeliveryIDPath"
+      responses:
+        "200":
+          description: The clone delivery was created successfully.
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/DeliveryResendResponse"
+        "400":
+          $ref: "#/components/responses/InvalidRequestError"
+        "404":
+          $ref: "#/components/responses/DeliveryNotFoundError"
+        "409":
+          $ref: "#/components/responses/ResendNotAllowedError"
+        "500":
+          $ref: "#/components/responses/InternalError"
+        "503":
+          $ref: "#/components/responses/ServiceUnavailableError"
+components:
+  parameters:
+    IdempotencyKey:
+      name: Idempotency-Key
+      in: header
+      required: true
+      description: |
+        Caller-owned stable deduplication key. `Auth / Session Service` uses
+        the created `challenge_id` as the raw header value.
+      schema:
+        type: string
+    DeliveryIDPath:
+      name: delivery_id
+      in: path
+      required: true
+      description: Mail Service delivery identifier.
+      schema:
+        type: string
+    RecipientFilter:
+      name: recipient
+      in: query
+      required: false
+      description: Effective-recipient filter covering `to`, `cc`, and `bcc`.
+      schema:
+        type: string
+    StatusFilter:
+      name: status
+      in: query
+      required: false
+      description: Delivery lifecycle status filter.
+      schema:
+        type: string
+        enum:
+          - queued
+          - rendered
+          - sending
+          - sent
+          - suppressed
+          - failed
+          - dead_letter
+    SourceFilter:
+      name: source
+      in: query
+      required: false
+      description: Delivery source filter.
+      schema:
+        type: string
+        enum:
+          - authsession
+          - notification
+          - operator_resend
+    TemplateIDFilter:
+      name: template_id
+      in: query
+      required: false
+      description: Template family filter.
+      schema:
+        type: string
+    IdempotencyKeyFilter:
+      name: idempotency_key
+      in: query
+      required: false
+      description: |
+        Idempotency-key filter. When `source` is omitted, Mail Service matches
+        the key across all frozen sources.
+      schema:
+        type: string
+    FromCreatedAtMSFilter:
+      name: from_created_at_ms
+      in: query
+      required: false
+      description: Inclusive lower bound for `created_at_ms`.
+      schema:
+        type: integer
+        format: int64
+    ToCreatedAtMSFilter:
+      name: to_created_at_ms
+      in: query
+      required: false
+      description: Inclusive upper bound for `created_at_ms`.
+      schema:
+        type: integer
+        format: int64
+    ListLimit:
+      name: limit
+      in: query
+      required: false
+      description: |
+        Maximum number of returned deliveries. The frozen default is `50` and
+        the maximum is `200`.
+      schema:
+        type: integer
+        minimum: 1
+        maximum: 200
+    ListCursor:
+      name: cursor
+      in: query
+      required: false
+      description: |
+        Opaque continuation cursor encoded as base64url of
+        `created_at_ms:delivery_id`.
+      schema:
+        type: string
+  schemas:
+    LoginCodeDeliveryRequest:
+      type: object
+      additionalProperties: false
+      required:
+        - email
+        - code
+        - locale
+      properties:
+        email:
+          type: string
+          description: Normalized destination e-mail address.
+        code:
+          type: string
+          description: Exact login code generated by `Auth / Session Service`.
+        locale:
+          type: string
+          description: Canonical BCP 47 language tag already resolved upstream.
+    LoginCodeDeliveryResponse:
+      type: object
+      additionalProperties: false
+      required:
+        - outcome
+      properties:
+        outcome:
+          type: string
+          description: Stable coarse outcome of the auth-delivery acceptance.
+          enum:
+            - sent
+            - suppressed
+    DeliverySummaryResponse:
+      type: object
+      additionalProperties: false
+      required:
+        - delivery_id
+        - source
+        - payload_mode
+        - to
+        - cc
+        - bcc
+        - reply_to
+        - locale_fallback_used
+        - idempotency_key
+        - status
+        - attempt_count
+        - created_at_ms
+        - updated_at_ms
+      properties:
+        delivery_id:
+          type: string
+        resend_parent_delivery_id:
+          type: string
+        source:
+          type: string
+          enum:
+            - authsession
+            - notification
+            - operator_resend
+        payload_mode:
+          type: string
+          enum:
+            - rendered
+            - template
+        template_id:
+          type: string
+        to:
+          type: array
+          items:
+            type: string
+        cc:
+          type: array
+          items:
+            type: string
+        bcc:
+          type: array
+          items:
+            type: string
+        reply_to:
+          type: array
+          items:
+            type: string
+        locale:
+          type: string
+        locale_fallback_used:
+          type: boolean
+        idempotency_key:
+          type: string
+        status:
+          type: string
+          enum:
+            - queued
+            - rendered
+            - sending
+            - sent
+            - suppressed
+            - failed
+            - dead_letter
+        attempt_count:
+          type: integer
+        last_attempt_status:
+          type: string
+          enum:
+            - scheduled
+            - in_progress
+            - render_failed
+            - provider_accepted
+            - provider_rejected
+            - transport_failed
+            - timed_out
+        provider_summary:
+          type: string
+        created_at_ms:
+          type: integer
+          format: int64
+        updated_at_ms:
+          type: integer
+          format: int64
+        sent_at_ms:
+          type: integer
+          format: int64
+        suppressed_at_ms:
+          type: integer
+          format: int64
+        failed_at_ms:
+          type: integer
+          format: int64
+        dead_lettered_at_ms:
+          type: integer
+          format: int64
+    DeliveryListResponse:
+      type: object
+      additionalProperties: false
+      required:
+        - items
+      properties:
+        items:
+          type: array
+          items:
+            $ref: "#/components/schemas/DeliverySummaryResponse"
+        next_cursor:
+          type: string
+    AttachmentResponse:
+      type: object
+      additionalProperties: false
+      required:
+        - filename
+        - content_type
+        - size_bytes
+      properties:
+        filename:
+          type: string
+        content_type:
+          type: string
+        size_bytes:
+          type: integer
+          format: int64
+    DeadLetterResponse:
+      type: object
+      additionalProperties: false
+      required:
+        - delivery_id
+        - final_attempt_no
+        - failure_classification
+        - created_at_ms
+      properties:
+        delivery_id:
+          type: string
+        final_attempt_no:
+          type: integer
+        failure_classification:
+          type: string
+        provider_summary:
+          type: string
+        created_at_ms:
+          type: integer
+          format: int64
+        recovery_hint:
+          type: string
+    DeliveryDetailResponse:
+      type: object
+      additionalProperties: false
+      required:
+        - delivery_id
+        - source
+        - payload_mode
+        - to
+        - cc
+        - bcc
+        - reply_to
+        - attachments
+        - locale_fallback_used
+        - idempotency_key
+        - status
+        - attempt_count
+        - created_at_ms
+        - updated_at_ms
+      properties:
+        delivery_id:
+          type: string
+        resend_parent_delivery_id:
+          type: string
+        source:
+          type: string
+          enum:
+            - authsession
+            - notification
+            - operator_resend
+        payload_mode:
+          type: string
+          enum:
+            - rendered
+            - template
+        template_id:
+          type: string
+        template_variables:
+          type: object
+          additionalProperties: true
+        to:
+          type: array
+          items:
+            type: string
+        cc:
+          type: array
+          items:
+            type: string
+        bcc:
+          type: array
+          items:
+            type: string
+        reply_to:
+          type: array
+          items:
+            type: string
+        subject:
+          type: string
+        text_body:
+          type: string
+        html_body:
+          type: string
+        attachments:
+          type: array
+          items:
+            $ref: "#/components/schemas/AttachmentResponse"
+        locale:
+          type: string
+        locale_fallback_used:
+          type: boolean
+        idempotency_key:
+          type: string
+        status:
+          type: string
+          enum:
+            - queued
+            - rendered
+            - sending
+            - sent
+            - suppressed
+            - failed
+            - dead_letter
+        attempt_count:
+          type: integer
+        last_attempt_status:
+          type: string
+          enum:
+            - scheduled
+            - in_progress
+            - render_failed
+            - provider_accepted
+            - provider_rejected
+            - transport_failed
+            - timed_out
+        provider_summary:
+          type: string
+        created_at_ms:
+          type: integer
+          format: int64
+        updated_at_ms:
+          type: integer
+          format: int64
+        sent_at_ms:
+          type: integer
+          format: int64
+        suppressed_at_ms:
+          type: integer
+          format: int64
+        failed_at_ms:
+          type: integer
+          format: int64
+        dead_lettered_at_ms:
+          type: integer
+          format: int64
+        dead_letter:
+          $ref: "#/components/schemas/DeadLetterResponse"
+    AttemptResponse:
+      type: object
+      additionalProperties: false
+      required:
+        - delivery_id
+        - attempt_no
+        - scheduled_for_ms
+        - status
+      properties:
+        delivery_id:
+          type: string
+        attempt_no:
+          type: integer
+        scheduled_for_ms:
+          type: integer
+          format: int64
+        started_at_ms:
+          type: integer
+          format: int64
+        finished_at_ms:
+          type: integer
+          format: int64
+        status:
+          type: string
+          enum:
+            - scheduled
+            - in_progress
+            - render_failed
+            - provider_accepted
+            - provider_rejected
+            - transport_failed
+            - timed_out
+        provider_classification:
+          type: string
+        provider_summary:
+          type: string
+    DeliveryAttemptsResponse:
+      type: object
+      additionalProperties: false
+      required:
+        - items
+      properties:
+        items:
+          type: array
+          items:
+            $ref: "#/components/schemas/AttemptResponse"
+    DeliveryResendResponse:
+      type: object
+      additionalProperties: false
+      required:
+        - delivery_id
+      properties:
+        delivery_id:
+          type: string
+    ErrorResponse:
+      type: object
+      additionalProperties: false
+      required:
+        - error
+      properties:
+        error:
+          $ref: "#/components/schemas/ErrorBody"
+    ErrorBody:
+      type: object
+      additionalProperties: false
+      required:
+        - code
+        - message
+      properties:
+        code:
+          type: string
+          description: Stable internal API error code.
+        message:
+          type: string
+          description: Human-readable trusted error message.
+  responses:
+    InvalidRequestError:
+      description: Request validation failed.
+      content:
+        application/json:
+          schema:
+            $ref: "#/components/schemas/ErrorResponse"
+          examples:
+            missingHeader:
+              value:
+                error:
+                  code: invalid_request
+                  message: Idempotency-Key header must not be empty
+            invalidCursor:
+              value:
+                error:
+                  code: invalid_request
+                  message: cursor is invalid
+    ConflictError:
+      description: The current idempotency scope belongs to a different normalized request.
+      content:
+        application/json:
+          schema:
+            $ref: "#/components/schemas/ErrorResponse"
+          examples:
+            conflict:
+              value:
+                error:
+                  code: conflict
+                  message: request conflicts with current state
+    DeliveryNotFoundError:
+      description: The requested delivery does not exist.
+      content:
+        application/json:
+          schema:
+            $ref: "#/components/schemas/ErrorResponse"
+          examples:
+            missingDelivery:
+              value:
+                error:
+                  code: delivery_not_found
+                  message: delivery not found
+    ResendNotAllowedError:
+      description: The requested delivery is not in a terminal resend-eligible state.
+      content:
+        application/json:
+          schema:
+            $ref: "#/components/schemas/ErrorResponse"
+          examples:
+            resendNotAllowed:
+              value:
+                error:
+                  code: resend_not_allowed
+                  message: delivery status does not allow resend
+    InternalError:
+      description: Internal application invariant failed.
+      content:
+        application/json:
+          schema:
+            $ref: "#/components/schemas/ErrorResponse"
+          examples:
+            internal:
+              value:
+                error:
+                  code: internal_error
+                  message: internal server error
+    ServiceUnavailableError:
+      description: Durable acceptance or trusted delivery inspection could not be completed.
+      content:
+        application/json:
+          schema:
+            $ref: "#/components/schemas/ErrorResponse"
+          examples:
+            unavailable:
+              value:
+                error:
+                  code: service_unavailable
+                  message: service is unavailable
diff --git a/mail/cmd/mail/main.go b/mail/cmd/mail/main.go
new file mode 100644
index 0000000..d2cf7c8
--- /dev/null
+++ b/mail/cmd/mail/main.go
@@ -0,0 +1,45 @@
+package main
+
+import (
+	"context"
+	"fmt"
+	"os"
+	"os/signal"
+	"syscall"
+
+	"galaxy/mail/internal/app"
+	"galaxy/mail/internal/config"
+	"galaxy/mail/internal/logging"
+)
+
+func main() {
+	if err := run(); err != nil {
+		_, _ = fmt.Fprintf(os.Stderr, "mail: %v\n", err)
+		os.Exit(1)
+	}
+}
+
+func run() error {
+	cfg, err := config.LoadFromEnv()
+	if err != nil {
+		return err
+	}
+
+	logger, err := logging.New(cfg.Logging.Level)
+	if err != nil {
+		return err
+	}
+
+	rootCtx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
+	defer stop()
+
+	runtime, err := app.NewRuntime(rootCtx, cfg, logger)
+	if err != nil {
+		return err
+	}
+	defer func() {
+		_ = runtime.Close()
+	}()
+
+	return runtime.Run(rootCtx)
+}
diff --git a/mail/contract_asyncapi_test.go b/mail/contract_asyncapi_test.go
new file mode 100644
index 0000000..3f63345
--- /dev/null
+++ b/mail/contract_asyncapi_test.go
@@ -0,0 +1,189 @@
+package mail
+
+import (
+	"os"
+	"path/filepath"
+	"runtime"
+	"testing"
+
+	"gopkg.in/yaml.v3"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestDeliveryCommandsAsyncAPISpecLoads(t *testing.T) {
+	t.Parallel()
+
+	doc := loadAsyncAPISpec(t)
+	require.Equal(t, "3.1.0", getStringValue(t, doc, "asyncapi"))
+}
+
+func TestDeliveryCommandsAsyncAPISpecFreezesChannelAndOperation(t *testing.T) {
+	t.Parallel()
+
+	doc := loadAsyncAPISpec(t)
+
+	channel := getMapValue(t, doc, "channels", "deliveryCommands")
+	require.Equal(t, "mail:delivery_commands", getStringValue(t, channel, "address"))
+
+	channelMessages := getMapValue(t, channel, "messages")
+	require.Equal(t, "#/components/messages/RenderedDeliveryCommand", getStringValue(t, getMapValue(t, channelMessages, "renderedDeliveryCommand"), "$ref"))
+	require.Equal(t, "#/components/messages/TemplateDeliveryCommand", getStringValue(t, getMapValue(t, channelMessages, "templateDeliveryCommand"), "$ref"))
+
+	operation := getMapValue(t, doc, "operations", "publishDeliveryCommand")
+	require.Equal(t, "send", getStringValue(t, operation, "action"))
+	require.Equal(t, "#/channels/deliveryCommands", getStringValue(t, getMapValue(t, operation, "channel"), "$ref"))
+
+	messageRefs := getSliceValue(t, operation, "messages")
+	require.Len(t, messageRefs, 2)
+	require.Equal(t, "#/channels/deliveryCommands/messages/renderedDeliveryCommand", getStringValue(t, messageRefs[0].(map[string]any), "$ref"))
+	require.Equal(t, "#/channels/deliveryCommands/messages/templateDeliveryCommand", getStringValue(t, messageRefs[1].(map[string]any), "$ref"))
+}
+
+func TestDeliveryCommandsAsyncAPISpecFreezesEnvelopeSchemas(t *testing.T) {
+	t.Parallel()
+
+	doc := loadAsyncAPISpec(t)
+	schemas := getMapValue(t, getMapValue(t, doc, "components"), "schemas")
+
+	renderedEnvelope := getMapValue(t, schemas, "RenderedDeliveryCommandEnvelope")
+	require.ElementsMatch(
+		t,
+		[]any{"delivery_id", "source", "payload_mode", "idempotency_key", "requested_at_ms", "payload_json"},
+		getSliceValue(t, renderedEnvelope, "required"),
+	)
+	require.Equal(t, "notification", getScalarValue(t, getMapValue(t, getMapValue(t, renderedEnvelope, "properties"), "source"), "const"))
+	require.Equal(t, "rendered", getScalarValue(t, getMapValue(t, getMapValue(t, renderedEnvelope, "properties"), "payload_mode"), "const"))
+	require.Equal(
+		t,
+		"#/components/schemas/RenderedPayloadJSON",
+		getStringValue(t, getMapValue(t, getMapValue(t, getMapValue(t, renderedEnvelope, "properties"), "payload_json"), "contentSchema"), "$ref"),
+	)
+
+	templateEnvelope := getMapValue(t, schemas, "TemplateDeliveryCommandEnvelope")
+	require.Equal(t, "notification", getScalarValue(t, getMapValue(t, getMapValue(t, templateEnvelope, "properties"), "source"), "const"))
+	require.Equal(t, "template", getScalarValue(t, getMapValue(t, getMapValue(t, templateEnvelope, "properties"), "payload_mode"), "const"))
+	require.Equal(
+		t,
+		"#/components/schemas/TemplatePayloadJSON",
+		getStringValue(t, getMapValue(t, getMapValue(t, getMapValue(t, templateEnvelope, "properties"), "payload_json"), "contentSchema"), "$ref"),
+	)
+}
+
+func TestDeliveryCommandsAsyncAPISpecFreezesPayloadSchemasAndExamples(t *testing.T) {
+	t.Parallel()
+
+	doc := loadAsyncAPISpec(t)
+	components := getMapValue(t, doc, "components")
+	schemas := getMapValue(t, components, "schemas")
+	messages := getMapValue(t, components, "messages")
+
+	renderedPayload := getMapValue(t, schemas, "RenderedPayloadJSON")
+	require.ElementsMatch(
+		t,
+		[]any{"to", "cc", "bcc", "reply_to", "subject", "text_body", "attachments"},
+		getSliceValue(t, renderedPayload, "required"),
+	)
+
+	templatePayload := getMapValue(t, schemas, "TemplatePayloadJSON")
+	require.ElementsMatch(
+		t,
+		[]any{"to", "cc", "bcc", "reply_to", "template_id", "locale", "variables", "attachments"},
+		getSliceValue(t, templatePayload, "required"),
+	)
+
+	attachment := getMapValue(t, schemas, "Attachment")
+	require.ElementsMatch(
+		t,
+		[]any{"filename", "content_type", "content_base64"},
+		getSliceValue(t, attachment, "required"),
+	)
+
+	renderedExamples := getSliceValue(t, getMapValue(t, messages, "RenderedDeliveryCommand"), "examples")
+	require.NotEmpty(t, getStringValue(t, getMapValue(t, renderedExamples[0].(map[string]any), "payload"), "payload_json"))
+
+	templateExamples := getSliceValue(t, getMapValue(t, messages, "TemplateDeliveryCommand"), "examples")
+	require.NotEmpty(t, getStringValue(t, getMapValue(t, templateExamples[0].(map[string]any), "payload"), "payload_json"))
+}
+
+func loadAsyncAPISpec(t *testing.T) map[string]any {
+	t.Helper()
+
+	_, thisFile, _, ok := runtime.Caller(0)
+	if !ok {
+		require.FailNow(t, "runtime.Caller failed")
+	}
+
+	specPath := filepath.Join(filepath.Dir(thisFile), "api", "delivery-commands-asyncapi.yaml")
+	payload, err := os.ReadFile(specPath)
+	if err != nil {
+		require.Failf(t, "test failed", "read spec %s: %v", specPath, err)
+	}
+
+	var doc map[string]any
+	if err := yaml.Unmarshal(payload, &doc); err != nil {
+		require.Failf(t, "test failed", "decode spec %s: %v", specPath, err)
+	}
+
+	return doc
+}
+
+func getMapValue(t *testing.T, value map[string]any, path ...string) map[string]any {
+	t.Helper()
+
+	current := value
+	for _, segment := range path {
+		raw, ok := current[segment]
+		if !ok {
+			require.Failf(t, "test failed", "missing map key %s", segment)
+		}
+		next, ok := raw.(map[string]any)
+		if !ok {
+			require.Failf(t, "test failed", "value at %s is not a map", segment)
+		}
+		current = next
+	}
+
+	return current
+}
+
+func getStringValue(t *testing.T, value map[string]any, key string) string {
+	t.Helper()
+
+	raw, ok := value[key]
+	if !ok {
+		require.Failf(t, "test failed", "missing key %s", key)
+	}
+	result, ok := raw.(string)
+	if !ok {
+		require.Failf(t, "test failed", "value at %s is not a string", key)
+	}
+
+	return result
+}
+
+func getScalarValue(t *testing.T, value map[string]any, key string) any {
+	t.Helper()
+
+	raw, ok := value[key]
+	if !ok {
+		require.Failf(t, "test failed", "missing key %s", key)
+	}
+
+	return raw
+}
+
+func getSliceValue(t *testing.T, value map[string]any, key string) []any {
+	t.Helper()
+
+	raw, ok := value[key]
+	if !ok {
+		require.Failf(t, "test failed", "missing key %s", key)
+	}
+	result, ok := raw.([]any)
+	if !ok {
+		require.Failf(t, "test failed", "value at %s is not a slice", key)
+	}
+
+	return result
+}
diff --git a/mail/contract_openapi_test.go b/mail/contract_openapi_test.go
new file mode 100644
index 0000000..5c6c698
--- /dev/null
+++ b/mail/contract_openapi_test.go
@@ -0,0 +1,283 @@
+package mail
+
+import (
+	"context"
+	"encoding/json"
+	"net/http"
+	"path/filepath"
+	"runtime"
+	"testing"
+
+	"github.com/getkin/kin-openapi/openapi3"
+	"github.com/stretchr/testify/require"
+)
+
+func TestInternalOpenAPISpecValidates(t *testing.T) {
+	t.Parallel()
+
+	loadSpec(t)
+}
+
+func TestInternalOpenAPISpecFreezesLoginCodeDeliveryContract(t *testing.T) {
+	t.Parallel()
+
+	doc := loadSpec(t)
+	operation := getOperation(t, doc, "/api/v1/internal/login-code-deliveries", http.MethodPost)
+
+	require.Equal(t, "acceptLoginCodeDelivery", operation.OperationID)
+	assertOperationParameterRefs(t, operation, "#/components/parameters/IdempotencyKey")
+	assertSchemaRef(t, requestSchemaRef(t, operation), "#/components/schemas/LoginCodeDeliveryRequest", "login-code-deliveries request schema")
+	assertSchemaRef(t, responseSchemaRef(t, operation, http.StatusOK), "#/components/schemas/LoginCodeDeliveryResponse", "login-code-deliveries success schema")
+	assertSchemaRef(t, responseSchemaRef(t, operation, http.StatusBadRequest), "#/components/schemas/ErrorResponse", "bad request schema")
+	assertSchemaRef(t, responseSchemaRef(t, operation, http.StatusConflict), "#/components/schemas/ErrorResponse", "conflict schema")
+	assertSchemaRef(t, responseSchemaRef(t, operation, http.StatusInternalServerError), "#/components/schemas/ErrorResponse", "internal error schema")
+	assertSchemaRef(t, responseSchemaRef(t, operation, http.StatusServiceUnavailable), "#/components/schemas/ErrorResponse", "service unavailable schema")
+
+	request := componentSchemaRef(t, doc, "LoginCodeDeliveryRequest")
+	assertRequiredFields(t, request, "email", "code", "locale")
+
+	response := componentSchemaRef(t, doc, "LoginCodeDeliveryResponse")
+	assertRequiredFields(t, response, "outcome")
+	assertStringEnum(t, response, "outcome", "sent", "suppressed")
+}
+
+func TestInternalOpenAPISpecFreezesOperatorContract(t *testing.T) {
+	t.Parallel()
+
+	doc := loadSpec(t)
+
+	listOperation := getOperation(t, doc, "/api/v1/internal/deliveries", http.MethodGet)
+	require.Equal(t, "listDeliveries", listOperation.OperationID)
+	assertOperationParameterRefs(
+		t,
+		listOperation,
+		"#/components/parameters/RecipientFilter",
+		"#/components/parameters/StatusFilter",
+		"#/components/parameters/SourceFilter",
+		"#/components/parameters/TemplateIDFilter",
+		"#/components/parameters/IdempotencyKeyFilter",
+		"#/components/parameters/FromCreatedAtMSFilter",
+		"#/components/parameters/ToCreatedAtMSFilter",
+		"#/components/parameters/ListLimit",
+		"#/components/parameters/ListCursor",
+	)
+	assertSchemaRef(t, responseSchemaRef(t, listOperation, http.StatusOK), "#/components/schemas/DeliveryListResponse", "deliveries list success schema")
+
+	deliveryGetOperation := getOperation(t, doc, "/api/v1/internal/deliveries/{delivery_id}", http.MethodGet)
+	require.Equal(t, "getDelivery", deliveryGetOperation.OperationID)
+	assertOperationParameterRefs(t, deliveryGetOperation, "#/components/parameters/DeliveryIDPath")
+	assertSchemaRef(t, responseSchemaRef(t, deliveryGetOperation, http.StatusOK), "#/components/schemas/DeliveryDetailResponse", "delivery get success schema")
+
+	attemptsOperation := getOperation(t, doc, "/api/v1/internal/deliveries/{delivery_id}/attempts", http.MethodGet)
+	require.Equal(t, "listDeliveryAttempts", attemptsOperation.OperationID)
+	assertOperationParameterRefs(t, attemptsOperation, "#/components/parameters/DeliveryIDPath")
+	assertSchemaRef(t, responseSchemaRef(t, attemptsOperation, http.StatusOK), "#/components/schemas/DeliveryAttemptsResponse", "delivery attempts success schema")
+
+	resendOperation := getOperation(t, doc, "/api/v1/internal/deliveries/{delivery_id}/resend", http.MethodPost)
+	require.Equal(t, "resendDelivery", resendOperation.OperationID)
+	assertOperationParameterRefs(t, resendOperation, "#/components/parameters/DeliveryIDPath")
+	assertSchemaRef(t, responseSchemaRef(t, resendOperation, http.StatusOK), "#/components/schemas/DeliveryResendResponse", "delivery resend success schema")
+	assertSchemaRef(t, responseSchemaRef(t, resendOperation, http.StatusConflict), "#/components/schemas/ErrorResponse", "delivery resend conflict schema")
+
+	listResponse := componentSchemaRef(t, doc, "DeliveryListResponse")
+	assertRequiredFields(t, listResponse, "items")
+
+	detailResponse := componentSchemaRef(t, doc, "DeliveryDetailResponse")
+	assertRequiredFields(t, detailResponse, "delivery_id", "source", "payload_mode", "to", "cc", "bcc", "reply_to", "attachments", "locale_fallback_used", "idempotency_key", "status", "attempt_count", "created_at_ms", "updated_at_ms")
+
+	attemptsResponse := componentSchemaRef(t, doc, "DeliveryAttemptsResponse")
+	assertRequiredFields(t, attemptsResponse, "items")
+
+	resendResponse := componentSchemaRef(t, doc, "DeliveryResendResponse")
+	assertRequiredFields(t, resendResponse, "delivery_id")
+}
+
+func TestInternalOpenAPISpecErrorExamplesMatchStableErrors(t *testing.T) {
+	t.Parallel()
+
+	doc := loadSpec(t)
+
+	require.JSONEq(
+		t,
+		`{"error":{"code":"invalid_request","message":"Idempotency-Key header must not be empty"}}`,
+		string(mustJSON(t, responseExampleValue(t, doc, "InvalidRequestError", "missingHeader"))),
+	)
+	require.JSONEq(
+		t,
+		`{"error":{"code":"conflict","message":"request conflicts with current state"}}`,
+		string(mustJSON(t, responseExampleValue(t, doc, "ConflictError", "conflict"))),
+	)
+	require.JSONEq(
+		t,
+		`{"error":{"code":"internal_error","message":"internal server error"}}`,
+		string(mustJSON(t, responseExampleValue(t, doc, "InternalError", "internal"))),
+	)
+	require.JSONEq(
+		t,
+		`{"error":{"code":"service_unavailable","message":"service is unavailable"}}`,
+		string(mustJSON(t, responseExampleValue(t, doc, "ServiceUnavailableError", "unavailable"))),
+	)
+	require.JSONEq(
+		t,
+		`{"error":{"code":"delivery_not_found","message":"delivery not found"}}`,
+		string(mustJSON(t, responseExampleValue(t, doc, "DeliveryNotFoundError", "missingDelivery"))),
+	)
+	require.JSONEq(
+		t,
+		`{"error":{"code":"resend_not_allowed","message":"delivery status does not allow resend"}}`,
+		string(mustJSON(t, responseExampleValue(t, doc, "ResendNotAllowedError", "resendNotAllowed"))),
+	)
+}
+
+func loadSpec(t *testing.T) *openapi3.T {
+	t.Helper()
+
+	_, thisFile, _, ok := runtime.Caller(0)
+	if !ok {
+		require.FailNow(t, "runtime.Caller failed")
+	}
+
+	specPath := filepath.Join(filepath.Dir(thisFile), "api", "internal-openapi.yaml")
+	loader := openapi3.NewLoader()
+	doc, err := loader.LoadFromFile(specPath)
+	if err != nil {
+		require.Failf(t, "test failed", "load spec %s: %v", specPath, err)
+	}
+	if doc == nil {
+		require.Failf(t, "test failed", "load spec %s: returned nil document", specPath)
+	}
+	if err := doc.Validate(context.Background()); err != nil {
+		require.Failf(t, "test failed", "validate spec %s: %v", specPath, err)
+	}
+
+	return doc
+}
+
+func getOperation(t *testing.T, doc *openapi3.T, path string, method string) *openapi3.Operation {
+	t.Helper()
+
+	if doc.Paths == nil {
+		require.FailNow(t, "spec is missing paths")
+	}
+	pathItem := doc.Paths.Value(path)
+	if pathItem == nil {
+		require.Failf(t, "test failed", "spec is missing path %s", path)
+	}
+	operation := pathItem.GetOperation(method)
+	if operation == nil {
+		require.Failf(t, "test failed", "spec is missing %s operation for path %s", method, path)
+	}
+
+	return operation
+}
+
+func requestSchemaRef(t *testing.T, operation *openapi3.Operation) *openapi3.SchemaRef {
+	t.Helper()
+
+	if operation.RequestBody == nil || operation.RequestBody.Value == nil {
+		require.FailNow(t, "operation is missing request body")
+	}
+	mediaType := operation.RequestBody.Value.Content.Get("application/json")
+	if mediaType == nil || mediaType.Schema == nil {
+		require.FailNow(t, "operation is missing application/json request schema")
+	}
+
+	return mediaType.Schema
+}
+
+func responseSchemaRef(t *testing.T, operation *openapi3.Operation, status int) *openapi3.SchemaRef {
+	t.Helper()
+
+	responseRef := operation.Responses.Status(status)
+	if responseRef == nil || responseRef.Value == nil {
+		require.Failf(t, "test failed", "operation is missing %d response", status)
+	}
+	mediaType := responseRef.Value.Content.Get("application/json")
+	if mediaType == nil || mediaType.Schema == nil {
+		require.Failf(t, "test failed", "operation is missing application/json schema for %d response", status)
+	}
+
+	return mediaType.Schema
+}
+
+func componentSchemaRef(t *testing.T, doc *openapi3.T, name string) *openapi3.SchemaRef {
+	t.Helper()
+
+	if doc.Components.Schemas == nil {
+		require.FailNow(t, "spec is missing component schemas")
+	}
+	schemaRef := doc.Components.Schemas[name]
+	if schemaRef == nil {
+		require.Failf(t, "test failed", "spec is missing component schema %s", name)
+	}
+
+	return schemaRef
+}
+
+func responseExampleValue(t *testing.T, doc *openapi3.T, responseName string, exampleName string) any {
+	t.Helper()
+
+	responseRef := doc.Components.Responses[responseName]
+	if responseRef == nil || responseRef.Value == nil {
+		require.Failf(t, "test failed", "spec is missing component response %s", responseName)
+	}
+	mediaType := responseRef.Value.Content.Get("application/json")
+	if mediaType == nil {
+		require.Failf(t, "test failed", "response %s is missing application/json content", responseName)
+	}
+	exampleRef := mediaType.Examples[exampleName]
+	if exampleRef == nil || exampleRef.Value == nil {
+		require.Failf(t, "test failed", "response %s is missing example %s", responseName, exampleName)
+	}
+
+	return exampleRef.Value.Value
+}
+
+func assertSchemaRef(t *testing.T, schemaRef *openapi3.SchemaRef, want string, name string) {
+	t.Helper()
+
+	require.NotNil(t, schemaRef, "%s schema ref", name)
+	require.Equal(t, want, schemaRef.Ref, "%s schema ref", name)
+}
+
+func assertRequiredFields(t *testing.T, schemaRef *openapi3.SchemaRef, fields ...string) {
+	t.Helper()
+
+	require.NotNil(t, schemaRef)
+	require.ElementsMatch(t, fields, schemaRef.Value.Required)
+}
+
+func assertStringEnum(t *testing.T, schemaRef *openapi3.SchemaRef, property string, values ...string) {
+	t.Helper()
+
+	require.NotNil(t, schemaRef)
+	propertyRef := schemaRef.Value.Properties[property]
+	require.NotNil(t, propertyRef, "schema property %s", property)
+
+	got := make([]string, 0, len(propertyRef.Value.Enum))
+	for _, value := range propertyRef.Value.Enum {
+		got = append(got, value.(string))
+	}
+
+	require.ElementsMatch(t, values, got)
+}
+
+func assertOperationParameterRefs(t *testing.T, operation *openapi3.Operation, refs ...string) {
+	t.Helper()
+
+	got := make([]string, 0, len(operation.Parameters))
+	for _, parameterRef := range operation.Parameters {
+		got = append(got, parameterRef.Ref)
+	}
+
+	require.ElementsMatch(t, refs, got)
+}
+
+func mustJSON(t *testing.T, value any) []byte {
+	t.Helper()
+
+	payload, err := json.Marshal(value)
+	require.NoError(t, err)
+
+	return payload
+}
diff --git a/mail/docs/README.md b/mail/docs/README.md
new file mode 100644
index 0000000..0db6456
--- /dev/null
+++ b/mail/docs/README.md
@@ -0,0 +1,23 @@
+# Mail Service Docs
+
+This directory keeps service-local documentation that is more operational or
+more example-heavy than [`../README.md`](../README.md).
+ +Sections: + +- [Runtime and components](runtime.md) +- [Main flows](flows.md) +- [Configuration and contract examples](examples.md) +- [Operator runbook](runbook.md) + +Primary references: + +- [`../README.md`](../README.md) for stable service scope, contracts, data + model, Redis layout, and retry policy +- [`../api/internal-openapi.yaml`](../api/internal-openapi.yaml) for the + trusted internal REST contract +- [`../api/delivery-commands-asyncapi.yaml`](../api/delivery-commands-asyncapi.yaml) + for the trusted async generic command contract +- [`../../ARCHITECTURE.md`](../../ARCHITECTURE.md) for system-level service + boundaries and transport rules +- [`../../TESTING.md`](../../TESTING.md) for the cross-service testing matrix diff --git a/mail/docs/examples.md b/mail/docs/examples.md new file mode 100644 index 0000000..1cd2ee5 --- /dev/null +++ b/mail/docs/examples.md @@ -0,0 +1,129 @@ +# Configuration and Contract Examples + +The examples below are illustrative. IDs, timestamps, and keys are placeholders +unless explicitly stated otherwise. 
+ +## Example Environment + +Minimal local runtime with stub provider: + +```dotenv +MAIL_REDIS_ADDR=127.0.0.1:6379 +MAIL_INTERNAL_HTTP_ADDR=:8080 +MAIL_TEMPLATE_DIR=templates +MAIL_SMTP_MODE=stub + +OTEL_TRACES_EXPORTER=none +OTEL_METRICS_EXPORTER=none +``` + +SMTP-backed shape: + +```dotenv +MAIL_REDIS_ADDR=127.0.0.1:6379 +MAIL_INTERNAL_HTTP_ADDR=:8080 +MAIL_TEMPLATE_DIR=templates + +MAIL_SMTP_MODE=smtp +MAIL_SMTP_ADDR=127.0.0.1:1025 +MAIL_SMTP_FROM_EMAIL=noreply@example.com +MAIL_SMTP_TIMEOUT=15s +# Optional for local self-signed SMTP capture only: +# MAIL_SMTP_INSECURE_SKIP_VERIFY=true + +OTEL_TRACES_EXPORTER=none +OTEL_METRICS_EXPORTER=none +``` + +## Auth Delivery REST + +Request: + +```bash +curl -X POST http://127.0.0.1:8080/api/v1/internal/login-code-deliveries \ + -H 'Content-Type: application/json' \ + -H 'Idempotency-Key: challenge-123' \ + -d '{ + "email": "pilot@example.com", + "code": "123456", + "locale": "fr-FR" + }' +``` + +Success response: + +```json +{ + "outcome": "sent" +} +``` + +Suppressed response: + +```json +{ + "outcome": "suppressed" +} +``` + +## Async Generic Command Examples + +Rendered payload: + +```bash +redis-cli XADD mail:delivery_commands '*' \ + delivery_id mail-123 \ + source notification \ + payload_mode rendered \ + idempotency_key notification:mail-123 \ + request_id req-123 \ + trace_id trace-123 \ + payload_json '{"to":["pilot@example.com"],"cc":[],"bcc":[],"reply_to":[],"subject":"Turn ready","text_body":"Turn 54 is ready.","html_body":"
<p>Turn 54 is ready.</p>
","attachments":[]}' +``` + +Template payload: + +```bash +redis-cli XADD mail:delivery_commands '*' \ + delivery_id mail-124 \ + source notification \ + payload_mode template \ + idempotency_key notification:mail-124 \ + request_id req-124 \ + trace_id trace-124 \ + payload_json '{"to":["pilot@example.com"],"cc":[],"bcc":[],"reply_to":[],"template_id":"game.turn_ready","locale":"fr-FR","variables":{"turn_number":54},"attachments":[]}' +``` + +## Operator API Examples + +List deliveries: + +```bash +curl 'http://127.0.0.1:8080/api/v1/internal/deliveries?source=authsession&status=sent&limit=10' +``` + +Get one delivery: + +```bash +curl http://127.0.0.1:8080/api/v1/internal/deliveries/delivery-123 +``` + +List attempts: + +```bash +curl http://127.0.0.1:8080/api/v1/internal/deliveries/delivery-123/attempts +``` + +Resend one terminal delivery: + +```bash +curl -X POST http://127.0.0.1:8080/api/v1/internal/deliveries/delivery-123/resend +``` + +Example resend response: + +```json +{ + "delivery_id": "delivery-456" +} +``` diff --git a/mail/docs/flows.md b/mail/docs/flows.md new file mode 100644 index 0000000..3cd9c07 --- /dev/null +++ b/mail/docs/flows.md @@ -0,0 +1,100 @@ +# Main Flows + +## Auth / Session -> Mail + +```mermaid +sequenceDiagram + participant Auth as Auth / Session Service + participant Mail as Mail Service + participant Redis + participant Scheduler + participant SMTP as Provider + + Auth->>Mail: POST /api/v1/internal/login-code-deliveries + Idempotency-Key + Mail->>Mail: validate request and idempotency scope + alt MAIL_SMTP_MODE = stub + Mail->>Redis: persist delivery as suppressed + Mail-->>Auth: 200 {outcome=suppressed} + else MAIL_SMTP_MODE = smtp + Mail->>Redis: persist delivery as queued + attempt #1 scheduled + Mail-->>Auth: 200 {outcome=sent} + Scheduler->>Redis: claim due attempt + Scheduler->>SMTP: send rendered auth mail + SMTP-->>Scheduler: accepted or classified failure + Scheduler->>Redis: commit sent / retry / failed / dead_letter + 
end +``` + +`sent` on this boundary means durable intake into the mail-delivery pipeline. +It does not mean SMTP completion. + +## Notification -> Mail + +```mermaid +sequenceDiagram + participant Notify as Notification Service + participant Stream as Redis Stream mail:delivery_commands + participant Consumer as Command consumer + participant Mail as Mail Service + participant Redis + + Notify->>Stream: XADD generic command + Consumer->>Stream: XREAD from last stored offset + Consumer->>Mail: decode and validate command + alt malformed or conflicting command + Mail->>Redis: record malformed command entry + Consumer->>Redis: save stream offset + else valid command + Mail->>Redis: persist delivery + first attempt + optional payload bundle + Consumer->>Redis: save stream offset + end +``` + +## Retry and Dead Letter + +```mermaid +sequenceDiagram + participant Scheduler + participant Redis + participant Worker as Attempt worker + participant SMTP as Provider + + Scheduler->>Redis: find next due delivery + Scheduler->>Redis: load work item + alt template delivery not yet rendered + Scheduler->>Redis: render and store materialized content + end + Scheduler->>Redis: claim scheduled attempt + Scheduler->>Worker: enqueue claimed work + Worker->>SMTP: send materialized message + SMTP-->>Worker: accepted / suppressed / transient_failure / permanent_failure + alt accepted + Worker->>Redis: commit sent + provider_accepted + else suppressed + Worker->>Redis: commit suppressed + provider_rejected + else transient failure before retry budget ends + Worker->>Redis: commit transport_failed|timed_out + next scheduled attempt + else retry budget exhausted + Worker->>Redis: commit dead_letter + dead-letter entry + else permanent failure + Worker->>Redis: commit failed + provider_rejected + end +``` + +## Operator Resend + +```mermaid +sequenceDiagram + participant Ops as Trusted operator + participant Mail as Mail Service + participant Redis + + Ops->>Mail: POST 
/api/v1/internal/deliveries/{delivery_id}/resend + Mail->>Redis: load original delivery and optional payload bundle + Mail->>Mail: verify original status is terminal + Mail->>Redis: create clone delivery with source=operator_resend + Mail-->>Ops: 200 {delivery_id=} +``` + +Resend always creates a new delivery and never mutates the original delivery or +its attempt history. diff --git a/mail/docs/runbook.md b/mail/docs/runbook.md new file mode 100644 index 0000000..8ec4166 --- /dev/null +++ b/mail/docs/runbook.md @@ -0,0 +1,177 @@ +# Operator Runbook + +This runbook covers the checks that matter most during startup, steady-state +verification, shutdown, and common `Mail Service` incidents. + +## Startup Checks + +Before starting the process, confirm: + +- `MAIL_REDIS_ADDR` points to the Redis deployment that stores deliveries, + attempts, idempotency reservations, malformed commands, and stream offsets +- the configured Redis ACL, DB, TLS, and timeout settings match the target + environment +- `MAIL_TEMPLATE_DIR` points to the intended immutable template catalog +- if `MAIL_SMTP_MODE=smtp`, the SMTP address, sender identity, and optional + credentials are configured together +- the OpenTelemetry exporter settings point at the intended collector when + traces or metrics are expected outside the process + +At startup the process performs bounded `PING` checks for both Redis clients +used by the runtime and parses the full template catalog. + +Startup fails fast if those checks fail or if the template catalog cannot be +loaded. + +Known startup caveats: + +- there is no `/healthz`, `/readyz`, or `/metrics` route +- traces and metrics are exported only through the configured OpenTelemetry + exporters +- template changes are not hot-reloaded; restart is required after template + edits + +## Steady-State Verification + +Practical readiness verification is: + +1. 
confirm the process emitted startup logs for the internal HTTP listener, + command consumer, scheduler, and worker pool +2. open a TCP connection to `MAIL_INTERNAL_HTTP_ADDR` +3. issue one trusted smoke request such as + `GET /api/v1/internal/deliveries/does-not-exist` +4. verify Redis connectivity and OpenTelemetry exporter health out of band + +Expected steady-state signals: + +- `mail.attempt_schedule.depth` remains bounded +- `mail.attempt_schedule.oldest_age_ms` stays near the active retry ladder +- `mail.delivery.dead_letters` changes rarely +- `mail.stream_commands.malformed` changes only on bad upstream commands +- internal HTTP logs include `otel_trace_id` and `otel_span_id` + +## Shutdown + +The process handles `SIGINT` and `SIGTERM`. + +Shutdown behavior: + +- coordinated shutdown is bounded by `MAIL_SHUTDOWN_TIMEOUT` +- the internal HTTP listener is stopped before process resources are closed +- Redis clients are closed after the app stops +- OpenTelemetry providers are flushed during runtime cleanup + +During a planned restart: + +1. send `SIGTERM` +2. wait for listener and worker shutdown logs +3. restart the process with the same Redis and template configuration +4. repeat the steady-state verification steps + +## Incident Triage + +### Attempt Schedule Backlog Grows + +Symptoms: + +- `mail.attempt_schedule.depth` rises steadily +- `mail.attempt_schedule.oldest_age_ms` increases instead of oscillating +- queued deliveries remain in `queued` or `rendered` longer than expected + +Checks: + +1. confirm the scheduler is still logging regular activity +2. confirm Redis connectivity and latency for attempt-schedule keys +3. confirm attempt workers are running and not blocked on SMTP +4. inspect `mail.provider.send.duration_ms` for elevated latency +5. 
verify `MAIL_ATTEMPT_WORKER_CONCURRENCY` is appropriate for the workload + +### Dead-Letter Spikes + +Symptoms: + +- `mail.delivery.dead_letters` increases rapidly +- operator reads show repeated `dead_letter` deliveries with recent + `transport_failed` or `timed_out` attempts + +Checks: + +1. inspect recent provider summaries on dead-lettered deliveries +2. confirm SMTP reachability from the Mail Service process +3. compare the spike against `mail.provider.send.duration_ms` and timeout logs +4. verify the remote SMTP server is accepting `STARTTLS` and mail submission + +Expected behavior: + +- dead letters appear only after the fixed retry ladder is exhausted +- each dead-lettered delivery has a matching dead-letter entry + +### Repeated `suppressed` Outcomes + +Symptoms: + +- `mail.delivery.suppressed` rises unexpectedly +- auth or generic deliveries end as `suppressed` + +Checks: + +1. determine whether the source is `authsession` or `notification` +2. for auth deliveries, confirm the service is not intentionally running in + `MAIL_SMTP_MODE=stub` +3. inspect provider summaries for policy-driven suppression markers +4. confirm the upstream business workflow still expects those deliveries to be + skipped + +Expected behavior: + +- auth suppression is valid in stub mode and still counts as successful intake +- provider-side suppression is recorded as + `mail_attempt.status=provider_rejected` together with + `mail_delivery.status=suppressed` + +### SMTP Authentication Failures + +Symptoms: + +- provider summaries indicate auth or login failures +- delivery attempts shift toward `failed` or repeated retryable failures, + depending on provider classification + +Checks: + +1. verify `MAIL_SMTP_USERNAME` and `MAIL_SMTP_PASSWORD` are both configured +2. verify the credential pair is valid for the target SMTP server +3. verify the sender identity matches the allowed submission account +4. 
confirm the server advertises the expected authentication mechanisms + +### SMTP Timeouts + +Symptoms: + +- `mail.attempt.outcomes{status="timed_out"}` increases +- `mail.provider.send.duration_ms` shifts upward +- logs show retry scheduling or dead-letter transitions after timeout paths + +Checks: + +1. confirm network reachability to `MAIL_SMTP_ADDR` +2. compare observed send duration with `MAIL_SMTP_TIMEOUT` +3. verify the SMTP server is not stalling during `STARTTLS`, auth, or `DATA` +4. confirm the process is not CPU-starved or blocked on Redis + +### Malformed Stream Commands + +Symptoms: + +- `mail.stream_commands.malformed` increases +- logs contain `stream command rejected` + +Checks: + +1. inspect `failure_code`, `delivery_id`, `source`, and `stream_entry_id` +2. confirm the upstream command payload still matches + [`../api/delivery-commands-asyncapi.yaml`](../api/delivery-commands-asyncapi.yaml) +3. confirm the producer still sends canonical `payload_mode`, locale, and + idempotency fields +4. review stored malformed-command records through the operator tooling or + direct Redis inspection diff --git a/mail/docs/runtime.md b/mail/docs/runtime.md new file mode 100644 index 0000000..57c47bb --- /dev/null +++ b/mail/docs/runtime.md @@ -0,0 +1,187 @@ +# Runtime and Components + +The diagram below focuses on the deployed `galaxy/mail` process and its runtime +dependencies. 
+ +```mermaid +flowchart LR + subgraph Callers + Auth["Auth / Session Service"] + Notify["Notification Service"] + Ops["Trusted operators"] + end + + subgraph Mail["Mail Service process"] + InternalHTTP["Trusted internal HTTP listener\n/api/v1/internal/*"] + Consumer["Redis Stream command consumer"] + Scheduler["Attempt scheduler"] + Workers["Attempt worker pool"] + Cleanup["Index cleanup worker"] + Services["Application services"] + Templates["Immutable template catalog"] + Telemetry["Logs, traces, metrics"] + end + + Redis["Redis\nstate + streams + indexes"] + Provider["SMTP or stub provider"] + + Auth --> InternalHTTP + Ops --> InternalHTTP + Notify --> Redis + InternalHTTP --> Services + Consumer --> Services + Scheduler --> Services + Workers --> Services + Cleanup --> Services + Services --> Templates + Services --> Redis + Services --> Provider + InternalHTTP --> Telemetry + Consumer --> Telemetry + Scheduler --> Telemetry + Workers --> Telemetry +``` + +## Listener + +`mail` exposes exactly one HTTP listener: + +| Listener | Default addr | Purpose | +| --- | --- | --- | +| Internal HTTP | `:8080` | Trusted intake, operator reads, and resend | + +Shared listener defaults: + +- read-header timeout: `2s` +- read timeout: `10s` +- idle timeout: `1m` + +Intentional omissions: + +- no public listener +- no `/healthz` +- no `/readyz` +- no `/metrics` + +## Startup Wiring + +`cmd/mail` loads config, constructs logging, and builds the runtime through +`internal/app.NewRuntime`. + +The runtime wires: + +- Redis clients for state access and blocking stream consumption +- filesystem-backed template catalog +- provider adapter selected by `MAIL_SMTP_MODE` +- acceptance, render, execution, operator-read, and resend services +- internal HTTP server +- command consumer +- scheduler +- attempt worker pool +- cleanup worker + +Before startup completes, the process performs bounded `PING` checks for both +Redis clients and validates the template catalog. 
Startup fails fast on invalid +configuration or unavailable Redis. + +## Background Components + +### Command consumer + +- reads one plain `XREAD` stream +- starts from stored offset or `0-0` +- advances offset only after durable command acceptance or durable malformed + command recording + +### Scheduler + +- polls due work every `250ms` +- recovers stale claims every `30s` +- derives recovery deadline from `MAIL_SMTP_TIMEOUT + 30s` + +### Attempt worker pool + +- processes only already claimed work items +- concurrency is controlled by `MAIL_ATTEMPT_WORKER_CONCURRENCY` + +### Cleanup worker + +- removes stale delivery-index members after primary delivery expiry +- does not clean `mail:attempt_schedule` +- does not clean malformed-command index entries + +## Configuration Groups + +Required for all starts: + +- `MAIL_REDIS_ADDR` + +Core process config: + +- `MAIL_SHUTDOWN_TIMEOUT` +- `MAIL_LOG_LEVEL` + +Internal HTTP config: + +- `MAIL_INTERNAL_HTTP_ADDR` +- `MAIL_INTERNAL_HTTP_READ_HEADER_TIMEOUT` +- `MAIL_INTERNAL_HTTP_READ_TIMEOUT` +- `MAIL_INTERNAL_HTTP_IDLE_TIMEOUT` + +Redis connectivity: + +- `MAIL_REDIS_USERNAME` +- `MAIL_REDIS_PASSWORD` +- `MAIL_REDIS_DB` +- `MAIL_REDIS_TLS_ENABLED` +- `MAIL_REDIS_OPERATION_TIMEOUT` +- `MAIL_REDIS_COMMAND_STREAM` +- `MAIL_REDIS_ATTEMPT_SCHEDULE_KEY` +- `MAIL_REDIS_DEAD_LETTER_PREFIX` + +SMTP provider: + +- `MAIL_SMTP_MODE` +- `MAIL_SMTP_ADDR` +- `MAIL_SMTP_USERNAME` +- `MAIL_SMTP_PASSWORD` +- `MAIL_SMTP_FROM_EMAIL` +- `MAIL_SMTP_FROM_NAME` +- `MAIL_SMTP_TIMEOUT` +- `MAIL_SMTP_INSECURE_SKIP_VERIFY` + +Templates and workers: + +- `MAIL_TEMPLATE_DIR` +- `MAIL_ATTEMPT_WORKER_CONCURRENCY` +- `MAIL_STREAM_BLOCK_TIMEOUT` +- `MAIL_OPERATOR_REQUEST_TIMEOUT` +- `MAIL_IDEMPOTENCY_TTL` +- `MAIL_DELIVERY_TTL` +- `MAIL_ATTEMPT_TTL` + +Telemetry: + +- `OTEL_SERVICE_NAME` +- `OTEL_TRACES_EXPORTER` +- `OTEL_METRICS_EXPORTER` +- `OTEL_EXPORTER_OTLP_PROTOCOL` +- `OTEL_EXPORTER_OTLP_TRACES_PROTOCOL` +- `OTEL_EXPORTER_OTLP_METRICS_PROTOCOL` 
+- `MAIL_OTEL_STDOUT_TRACES_ENABLED` +- `MAIL_OTEL_STDOUT_METRICS_ENABLED` + +## Runtime Notes + +- `MAIL_REDIS_COMMAND_STREAM` is the only Redis key override that currently + changes runtime behavior +- `MAIL_SMTP_INSECURE_SKIP_VERIFY` is a local-development escape hatch for + self-signed SMTP capture only and should remain disabled in production +- attempt-schedule and dead-letter key overrides are parsed but not yet wired + into Redis adapters +- retention overrides are parsed but storage still uses the fixed `7d`, `30d`, + and `90d` values +- template catalog parsing is eager and immutable +- auth deliveries in `MAIL_SMTP_MODE=stub` surface as `suppressed` +- auth deliveries in `MAIL_SMTP_MODE=smtp` surface as `queued` and later move + through normal attempt execution diff --git a/mail/go.mod b/mail/go.mod new file mode 100644 index 0000000..7e813ed --- /dev/null +++ b/mail/go.mod @@ -0,0 +1,100 @@ +module galaxy/mail + +go 1.26.1 + +require ( + github.com/alicebob/miniredis/v2 v2.37.0 + github.com/getkin/kin-openapi v0.135.0 + github.com/google/uuid v1.6.0 + github.com/redis/go-redis/extra/redisotel/v9 v9.18.0 + github.com/redis/go-redis/v9 v9.18.0 + github.com/stretchr/testify v1.11.1 + github.com/testcontainers/testcontainers-go v0.42.0 + github.com/testcontainers/testcontainers-go/modules/redis v0.42.0 + github.com/wneessen/go-mail v0.7.2 + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0 + go.opentelemetry.io/otel v1.43.0 + go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0 + go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.43.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0 + go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.43.0 + go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0 + go.opentelemetry.io/otel/metric v1.43.0 + go.opentelemetry.io/otel/sdk v1.43.0 + 
go.opentelemetry.io/otel/sdk/metric v1.43.0 + go.opentelemetry.io/otel/trace v1.43.0 + golang.org/x/text v0.36.0 + gopkg.in/yaml.v3 v3.0.1 +) + +require ( + dario.cat/mergo v1.0.2 // indirect + github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect + github.com/Microsoft/go-winio v0.6.2 // indirect + github.com/cenkalti/backoff/v4 v4.3.0 // indirect + github.com/cenkalti/backoff/v5 v5.0.3 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/containerd/errdefs v1.0.0 // indirect + github.com/containerd/errdefs/pkg v0.3.0 // indirect + github.com/containerd/log v0.1.0 // indirect + github.com/containerd/platforms v0.2.1 // indirect + github.com/cpuguy83/dockercfg v0.3.2 // indirect + github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect + github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect + github.com/distribution/reference v0.6.0 // indirect + github.com/docker/go-connections v0.6.0 // indirect + github.com/docker/go-units v0.5.0 // indirect + github.com/ebitengine/purego v0.10.0 // indirect + github.com/felixge/httpsnoop v1.0.4 // indirect + github.com/go-logr/logr v1.4.3 // indirect + github.com/go-logr/stdr v1.2.2 // indirect + github.com/go-ole/go-ole v1.2.6 // indirect + github.com/go-openapi/jsonpointer v0.21.0 // indirect + github.com/go-openapi/swag v0.23.0 // indirect + github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 // indirect + github.com/josharian/intern v1.0.0 // indirect + github.com/klauspost/compress v1.18.5 // indirect + github.com/klauspost/cpuid/v2 v2.3.0 // indirect + github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect + github.com/magiconair/properties v1.8.10 // indirect + github.com/mailru/easyjson v0.7.7 // indirect + github.com/mdelapenya/tlscert v0.2.0 // indirect + github.com/moby/docker-image-spec v1.3.1 // indirect + github.com/moby/go-archive v0.2.0 // indirect + github.com/moby/moby/api v1.54.1 // indirect + 
github.com/moby/moby/client v0.4.0 // indirect + github.com/moby/patternmatcher v0.6.1 // indirect + github.com/moby/sys/sequential v0.6.0 // indirect + github.com/moby/sys/user v0.4.0 // indirect + github.com/moby/sys/userns v0.1.0 // indirect + github.com/moby/term v0.5.2 // indirect + github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect + github.com/oasdiff/yaml v0.0.9 // indirect + github.com/oasdiff/yaml3 v0.0.9 // indirect + github.com/opencontainers/go-digest v1.0.0 // indirect + github.com/opencontainers/image-spec v1.1.1 // indirect + github.com/perimeterx/marshmallow v1.1.5 // indirect + github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect + github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect + github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0 // indirect + github.com/shirou/gopsutil/v4 v4.26.3 // indirect + github.com/sirupsen/logrus v1.9.4 // indirect + github.com/tklauser/go-sysconf v0.3.16 // indirect + github.com/tklauser/numcpus v0.11.0 // indirect + github.com/ugorji/go/codec v1.3.1 // indirect + github.com/woodsbury/decimal128 v1.3.0 // indirect + github.com/yuin/gopher-lua v1.1.1 // indirect + github.com/yusufpapurcu/wmi v1.2.4 // indirect + go.opentelemetry.io/auto/sdk v1.2.1 // indirect + go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 // indirect + go.opentelemetry.io/proto/otlp v1.10.0 // indirect + go.uber.org/atomic v1.11.0 // indirect + golang.org/x/crypto v0.49.0 // indirect + golang.org/x/net v0.52.0 // indirect + golang.org/x/sys v0.42.0 // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 // indirect + google.golang.org/grpc v1.80.0 // indirect + google.golang.org/protobuf v1.36.11 // indirect +) diff --git a/mail/go.sum b/mail/go.sum new file mode 100644 index 0000000..e7f565b --- /dev/null +++ b/mail/go.sum @@ -0,0 +1,225 
@@ +dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8= +dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA= +github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6 h1:He8afgbRMd7mFxO99hRNu+6tazq8nFF9lIwo9JFroBk= +github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8= +github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg= +github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E= +github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY= +github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU= +github.com/alicebob/miniredis/v2 v2.37.0 h1:RheObYW32G1aiJIj81XVt78ZHJpHonHLHW7OLIshq68= +github.com/alicebob/miniredis/v2 v2.37.0/go.mod h1:TcL7YfarKPGDAthEtl5NBeHZfeUQj6OXMm/+iu5cLMM= +github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs= +github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c= +github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA= +github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0= +github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8= +github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= +github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM= +github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI= +github.com/containerd/errdefs 
v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M= +github.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE= +github.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk= +github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I= +github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo= +github.com/containerd/platforms v0.2.1 h1:zvwtM3rz2YHPQsF2CHYM8+KtB5dvhISiXh5ZpSBQv6A= +github.com/containerd/platforms v0.2.1/go.mod h1:XHCb+2/hzowdiut9rkudds9bE5yJ7npe7dG/wG+uFPw= +github.com/cpuguy83/dockercfg v0.3.2 h1:DlJTyZGBDlXqUZ2Dk2Q3xHs/FtnooJJVaad2S9GKorA= +github.com/cpuguy83/dockercfg v0.3.2/go.mod h1:sugsbF4//dDlL/i+S+rtpIWp+5h0BHJHfjj5/jFyUJc= +github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s= +github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78= +github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc= +github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk= +github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E= +github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94= +github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE= +github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4= +github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= +github.com/ebitengine/purego 
v0.10.0 h1:QIw4xfpWT6GWTzaW5XEKy3HXoqrJGx1ijYHzTF0/ISU= +github.com/ebitengine/purego v0.10.0/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ= +github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg= +github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= +github.com/getkin/kin-openapi v0.135.0 h1:751SjYfbiwqukYuVjwYEIKNfrSwS5YpA7DZnKSwQgtg= +github.com/getkin/kin-openapi v0.135.0/go.mod h1:6dd5FJl6RdX4usBtFBaQhk9q62Yb2J0Mk5IhUO/QqFI= +github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= +github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= +github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= +github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= +github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY= +github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0= +github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ= +github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY= +github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE= +github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ= +github.com/go-test/deep v1.0.8 h1:TDsG77qcSprGbC6vTN8OuXp5g+J+b5Pcguhf7Zt61VM= +github.com/go-test/deep v1.0.8/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE= +github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= +github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= +github.com/google/go-cmp 
v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= +github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= +github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 h1:HWRh5R2+9EifMyIHV7ZV+MIZqgz+PMpZ14Jynv3O2Zs= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0/go.mod h1:JfhWUomR1baixubs02l85lZYYOm7LV6om4ceouMv45c= +github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= +github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= +github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE= +github.com/klauspost/compress v1.18.5/go.mod h1:cwPg85FWrGar70rWktvGQj8/hthj3wpl0PGDogxkrSQ= +github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y= +github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0= +github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= +github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4= +github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I= +github.com/magiconair/properties v1.8.10 h1:s31yESBquKXCV9a/ScB3ESkOjUYYv+X0rg8SYxI99mE= +github.com/magiconair/properties v1.8.10/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0= +github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0= +github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc= +github.com/mdelapenya/tlscert v0.2.0 h1:7H81W6Z/4weDvZBNOfQte5GpIMo0lGYEeWbkGp5LJHI= +github.com/mdelapenya/tlscert 
v0.2.0/go.mod h1:O4njj3ELLnJjGdkN7M/vIVCpZ+Cf0L6muqOG4tLSl8o= +github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0= +github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo= +github.com/moby/go-archive v0.2.0 h1:zg5QDUM2mi0JIM9fdQZWC7U8+2ZfixfTYoHL7rWUcP8= +github.com/moby/go-archive v0.2.0/go.mod h1:mNeivT14o8xU+5q1YnNrkQVpK+dnNe/K6fHqnTg4qPU= +github.com/moby/moby/api v1.54.1 h1:TqVzuJkOLsgLDDwNLmYqACUuTehOHRGKiPhvH8V3Nn4= +github.com/moby/moby/api v1.54.1/go.mod h1:+RQ6wluLwtYaTd1WnPLykIDPekkuyD/ROWQClE83pzs= +github.com/moby/moby/client v0.4.0 h1:S+2XegzHQrrvTCvF6s5HFzcrywWQmuVnhOXe2kiWjIw= +github.com/moby/moby/client v0.4.0/go.mod h1:QWPbvWchQbxBNdaLSpoKpCdf5E+WxFAgNHogCWDoa7g= +github.com/moby/patternmatcher v0.6.1 h1:qlhtafmr6kgMIJjKJMDmMWq7WLkKIo23hsrpR3x084U= +github.com/moby/patternmatcher v0.6.1/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc= +github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU= +github.com/moby/sys/sequential v0.6.0/go.mod h1:uyv8EUTrca5PnDsdMGXhZe6CCe8U/UiTWd+lL+7b/Ko= +github.com/moby/sys/user v0.4.0 h1:jhcMKit7SA80hivmFJcbB1vqmw//wU61Zdui2eQXuMs= +github.com/moby/sys/user v0.4.0/go.mod h1:bG+tYYYJgaMtRKgEmuueC0hJEAZWwtIbZTB+85uoHjs= +github.com/moby/sys/userns v0.1.0 h1:tVLXkFOxVu9A64/yh59slHVv9ahO9UIev4JZusOLG/g= +github.com/moby/sys/userns v0.1.0/go.mod h1:IHUYgu/kao6N8YZlp9Cf444ySSvCmDlmzUcYfDHOl28= +github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ= +github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc= +github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9M+97sNutRR1RKhG96O6jWumTTnw= +github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8= +github.com/oasdiff/yaml v0.0.9 h1:zQOvd2UKoozsSsAknnWoDJlSK4lC0mpmjfDsfqNwX48= +github.com/oasdiff/yaml v0.0.9/go.mod 
h1:8lvhgJG4xiKPj3HN5lDow4jZHPlx1i7dIwzkdAo6oAM= +github.com/oasdiff/yaml3 v0.0.9 h1:rWPrKccrdUm8J0F3sGuU+fuh9+1K/RdJlWF7O/9yw2g= +github.com/oasdiff/yaml3 v0.0.9/go.mod h1:y5+oSEHCPT/DGrS++Wc/479ERge0zTFxaF8PbGKcg2o= +github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U= +github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM= +github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040= +github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M= +github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s= +github.com/perimeterx/marshmallow v1.1.5/go.mod h1:dsXbUu8CRzfYP5a87xpp0xq9S3u0Vchtcl8we9tYaXw= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU= +github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE= +github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0 h1:QY4nmPHLFAJjtT5O4OMUEOxP8WVaRNOFpcbmxT2NLZU= +github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0/go.mod h1:WH8cY/0fT41Bsf341qzo8v4nx0GCE8FykAA23IVbVmo= +github.com/redis/go-redis/extra/redisotel/v9 v9.18.0 h1:2dKdoEYBJ0CZCLPiCdvvc7luz3DPwY6hKdzjL6m1eHE= +github.com/redis/go-redis/extra/redisotel/v9 v9.18.0/go.mod h1:WzkrVG9ro9BwCQD0eJOWn6AGL4Z1CleGflM45w1hu10= +github.com/redis/go-redis/v9 v9.18.0 h1:pMkxYPkEbMPwRdenAzUNyFNrDgHx9U+DrBabWNfSRQs= +github.com/redis/go-redis/v9 v9.18.0/go.mod h1:k3ufPphLU5YXwNTUcCRXGxUoF1fqxnhFQmscfkCoDA0= +github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= +github.com/rogpeppe/go-internal 
v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= +github.com/shirou/gopsutil/v4 v4.26.3 h1:2ESdQt90yU3oXF/CdOlRCJxrP+Am1aBYubTMTfxJ1qc= +github.com/shirou/gopsutil/v4 v4.26.3/go.mod h1:LZ6ewCSkBqUpvSOf+LsTGnRinC6iaNUNMGBtDkJBaLQ= +github.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w= +github.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g= +github.com/stretchr/objx v0.5.3 h1:jmXUvGomnU1o3W/V5h2VEradbpJDwGrzugQQvL0POH4= +github.com/stretchr/objx v0.5.3/go.mod h1:rDQraq+vQZU7Fde9LOZLr8Tax6zZvy4kuNKF+QYS+U0= +github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U= +github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U= +github.com/testcontainers/testcontainers-go v0.42.0 h1:He3IhTzTZOygSXLJPMX7n44XtK+qhjat1nI9cneBbUY= +github.com/testcontainers/testcontainers-go v0.42.0/go.mod h1:vZjdY1YmUA1qEForxOIOazfsrdyORJAbhi0bp8plN30= +github.com/testcontainers/testcontainers-go/modules/redis v0.42.0 h1:id/6LH8ZeDrtAUVSuNvZUAJ1kVpb82y1pr9yweAWsRg= +github.com/testcontainers/testcontainers-go/modules/redis v0.42.0/go.mod h1:uF0jI8FITagQpBNOgweGBmPf6rP4K0SeL1XFPbsZSSY= +github.com/tklauser/go-sysconf v0.3.16 h1:frioLaCQSsF5Cy1jgRBrzr6t502KIIwQ0MArYICU0nA= +github.com/tklauser/go-sysconf v0.3.16/go.mod h1:/qNL9xxDhc7tx3HSRsLWNnuzbVfh3e7gh/BmM179nYI= +github.com/tklauser/numcpus v0.11.0 h1:nSTwhKH5e1dMNsCdVBukSZrURJRoHbSEQjdEbY+9RXw= +github.com/tklauser/numcpus v0.11.0/go.mod h1:z+LwcLq54uWZTX0u/bGobaV34u6V7KNlTZejzM6/3MQ= +github.com/ugorji/go/codec v1.3.1 h1:waO7eEiFDwidsBN6agj1vJQ4AG7lh2yqXyOXqhgQuyY= +github.com/ugorji/go/codec v1.3.1/go.mod h1:pRBVtBSKl77K30Bv8R2P+cLSGaTtex6fsA2Wjqmfxj4= +github.com/wneessen/go-mail v0.7.2 h1:xxPnhZ6IZLSgxShebmZ6DPKh1b6OJcoHfzy7UjOkzS8= +github.com/wneessen/go-mail v0.7.2/go.mod h1:+TkW6QP3EVkgTEqHtVmnAE/1MRhmzb8Y9/W3pweuS+k= +github.com/woodsbury/decimal128 v1.3.0 
h1:8pffMNWIlC0O5vbyHWFZAt5yWvWcrHA+3ovIIjVWss0= +github.com/woodsbury/decimal128 v1.3.0/go.mod h1:C5UTmyTjW3JftjUFzOVhC20BEQa2a4ZKOB5I6Zjb+ds= +github.com/yuin/gopher-lua v1.1.1 h1:kYKnWBjvbNP4XLT3+bPEwAXJx262OhaHDWDVOPjL46M= +github.com/yuin/gopher-lua v1.1.1/go.mod h1:GBR0iDaNXjAgGg9zfCvksxSRnQx76gclCIb7kdAd1Pw= +github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0= +github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0= +github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0= +github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA= +go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64= +go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0 h1:CqXxU8VOmDefoh0+ztfGaymYbhdB/tT3zs79QaZTNGY= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0/go.mod h1:BuhAPThV8PBHBvg8ZzZ/Ok3idOdhWIodywz2xEcRbJo= +go.opentelemetry.io/otel v1.43.0 h1:mYIM03dnh5zfN7HautFE4ieIig9amkNANT+xcVxAj9I= +go.opentelemetry.io/otel v1.43.0/go.mod h1:JuG+u74mvjvcm8vj8pI5XiHy1zDeoCS2LB1spIq7Ay0= +go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0 h1:8UQVDcZxOJLtX6gxtDt3vY2WTgvZqMQRzjsqiIHQdkc= +go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0/go.mod h1:2lmweYCiHYpEjQ/lSJBYhj9jP1zvCvQW4BqL9dnT7FQ= +go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.43.0 h1:w1K+pCJoPpQifuVpsKamUdn9U0zM3xUziVOqsGksUrY= +go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.43.0/go.mod h1:HBy4BjzgVE8139ieRI75oXm3EcDN+6GhD88JT1Kjvxg= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 h1:88Y4s2C8oTui1LGM6bTWkw0ICGcOLCAI5l6zsD1j20k= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0/go.mod h1:Vl1/iaggsuRlrHf/hfPJPvVag77kKyvrLeD10kpMl+A= 
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0 h1:RAE+JPfvEmvy+0LzyUA25/SGawPwIUbZ6u0Wug54sLc= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0/go.mod h1:AGmbycVGEsRx9mXMZ75CsOyhSP6MFIcj/6dnG+vhVjk= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0 h1:3iZJKlCZufyRzPzlQhUIWVmfltrXuGyfjREgGP3UUjc= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0/go.mod h1:/G+nUPfhq2e+qiXMGxMwumDrP5jtzU+mWN7/sjT2rak= +go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.43.0 h1:TC+BewnDpeiAmcscXbGMfxkO+mwYUwE/VySwvw88PfA= +go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.43.0/go.mod h1:J/ZyF4vfPwsSr9xJSPyQ4LqtcTPULFR64KwTikGLe+A= +go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0 h1:mS47AX77OtFfKG4vtp+84kuGSFZHTyxtXIN269vChY0= +go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0/go.mod h1:PJnsC41lAGncJlPUniSwM81gc80GkgWJWr3cu2nKEtU= +go.opentelemetry.io/otel/metric v1.43.0 h1:d7638QeInOnuwOONPp4JAOGfbCEpYb+K6DVWvdxGzgM= +go.opentelemetry.io/otel/metric v1.43.0/go.mod h1:RDnPtIxvqlgO8GRW18W6Z/4P462ldprJtfxHxyKd2PY= +go.opentelemetry.io/otel/sdk v1.43.0 h1:pi5mE86i5rTeLXqoF/hhiBtUNcrAGHLKQdhg4h4V9Dg= +go.opentelemetry.io/otel/sdk v1.43.0/go.mod h1:P+IkVU3iWukmiit/Yf9AWvpyRDlUeBaRg6Y+C58QHzg= +go.opentelemetry.io/otel/sdk/metric v1.43.0 h1:S88dyqXjJkuBNLeMcVPRFXpRw2fuwdvfCGLEo89fDkw= +go.opentelemetry.io/otel/sdk/metric v1.43.0/go.mod h1:C/RJtwSEJ5hzTiUz5pXF1kILHStzb9zFlIEe85bhj6A= +go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09nk+3A= +go.opentelemetry.io/otel/trace v1.43.0/go.mod h1:/QJhyVBUUswCphDVxq+8mld+AvhXZLhe+8WVFxiFff0= +go.opentelemetry.io/proto/otlp v1.10.0 h1:IQRWgT5srOCYfiWnpqUYz9CVmbO8bFmKcwYxpuCSL2g= +go.opentelemetry.io/proto/otlp v1.10.0/go.mod h1:/CV4QoCR/S9yaPj8utp3lvQPoqMtxXdzn7ozvvozVqk= +go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE= +go.uber.org/atomic v1.11.0/go.mod 
h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= +golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4= +golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA= +golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0= +golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw= +golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo= +golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw= +golang.org/x/term v0.41.0 h1:QCgPso/Q3RTJx2Th4bDLqML4W6iJiaXFq2/ftQF13YU= +golang.org/x/term v0.41.0/go.mod h1:3pfBgksrReYfZ5lvYM0kSO0LIkAl4Yl2bXOkKP7Ec2A= +golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg= +golang.org/x/text v0.36.0/go.mod h1:NIdBknypM8iqVmPiuco0Dh6P5Jcdk8lJL0CUebqK164= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4= +gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E= +google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 h1:VPWxll4HlMw1Vs/qXtN7BvhZqsS9cdAittCNvVENElA= +google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:7QBABkRtR8z+TEnmXTqIqwJLlzrZKVfAUm7tY3yGv0M= +google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 h1:m8qni9SQFH0tJc1X0vmnpw/0t+AImlSvp30sEupozUg= 
+google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8= +google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM= +google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4= +google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE= +google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= +gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q= +gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA= +pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk= +pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04= diff --git a/mail/internal/adapters/id/uuid.go b/mail/internal/adapters/id/uuid.go new file mode 100644 index 0000000..58a40e2 --- /dev/null +++ b/mail/internal/adapters/id/uuid.go @@ -0,0 +1,23 @@ +// Package id provides internal identifier generators used by Mail Service. +package id + +import ( + "fmt" + + "galaxy/mail/internal/domain/common" + + "github.com/google/uuid" +) + +// Generator builds UUID-backed internal delivery identifiers. +type Generator struct{} + +// NewDeliveryID returns one new UUID v4 delivery identifier. 
+func (Generator) NewDeliveryID() (common.DeliveryID, error) { + value, err := uuid.NewRandom() + if err != nil { + return "", fmt.Errorf("new delivery id: %w", err) + } + + return common.DeliveryID(value.String()), nil +} diff --git a/mail/internal/adapters/redisstate/atomic_writer.go b/mail/internal/adapters/redisstate/atomic_writer.go new file mode 100644 index 0000000..b74930b --- /dev/null +++ b/mail/internal/adapters/redisstate/atomic_writer.go @@ -0,0 +1,501 @@ +package redisstate + +import ( + "context" + "errors" + "fmt" + "time" + + "galaxy/mail/internal/domain/attempt" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/domain/idempotency" + "galaxy/mail/internal/service/acceptgenericdelivery" + + "github.com/redis/go-redis/v9" +) + +// AtomicWriter performs the minimal multi-key Redis mutations that later Mail +// Service acceptance flows will need. +type AtomicWriter struct { + client *redis.Client + keyspace Keyspace +} + +// CreateAcceptanceInput describes the frozen write set required to durably +// accept one delivery into Redis-backed state. +type CreateAcceptanceInput struct { + // Delivery stores the accepted delivery record. + Delivery deliverydomain.Delivery + + // FirstAttempt stores the optional first scheduled attempt record. + FirstAttempt *attempt.Attempt + + // DeliveryPayload stores the optional raw attachment payload bundle. + DeliveryPayload *acceptgenericdelivery.DeliveryPayload + + // Idempotency stores the optional idempotency reservation to create + // together with the delivery. Resend clone creation can omit it. + Idempotency *idempotency.Record +} + +// MarkRenderedInput describes the durable mutation applied after successful +// template materialization. +type MarkRenderedInput struct { + // Delivery stores the rendered delivery record. + Delivery deliverydomain.Delivery +} + +// Validate reports whether input contains one rendered template delivery. 
+func (input MarkRenderedInput) Validate() error { + if err := input.Delivery.Validate(); err != nil { + return fmt.Errorf("delivery: %w", err) + } + if input.Delivery.PayloadMode != deliverydomain.PayloadModeTemplate { + return fmt.Errorf("delivery payload mode must be %q", deliverydomain.PayloadModeTemplate) + } + if input.Delivery.Status != deliverydomain.StatusRendered { + return fmt.Errorf("delivery status must be %q", deliverydomain.StatusRendered) + } + + return nil +} + +// MarkRenderFailedInput describes the durable mutation applied after one +// classified render failure. +type MarkRenderFailedInput struct { + // Delivery stores the failed delivery record. + Delivery deliverydomain.Delivery + + // Attempt stores the terminal render-failed attempt. + Attempt attempt.Attempt +} + +// Validate reports whether input contains one failed delivery and its +// terminal render-failed attempt. +func (input MarkRenderFailedInput) Validate() error { + if err := input.Delivery.Validate(); err != nil { + return fmt.Errorf("delivery: %w", err) + } + if err := input.Attempt.Validate(); err != nil { + return fmt.Errorf("attempt: %w", err) + } + if input.Delivery.PayloadMode != deliverydomain.PayloadModeTemplate { + return fmt.Errorf("delivery payload mode must be %q", deliverydomain.PayloadModeTemplate) + } + if input.Delivery.Status != deliverydomain.StatusFailed { + return fmt.Errorf("delivery status must be %q", deliverydomain.StatusFailed) + } + if input.Attempt.Status != attempt.StatusRenderFailed { + return fmt.Errorf("attempt status must be %q", attempt.StatusRenderFailed) + } + if input.Attempt.DeliveryID != input.Delivery.DeliveryID { + return errors.New("attempt delivery id must match delivery id") + } + if input.Delivery.LastAttemptStatus != attempt.StatusRenderFailed { + return fmt.Errorf("delivery last attempt status must be %q", attempt.StatusRenderFailed) + } + + return nil +} + +// Validate reports whether CreateAcceptanceInput is internally consistent. 
+func (input CreateAcceptanceInput) Validate() error { + if err := input.Delivery.Validate(); err != nil { + return fmt.Errorf("delivery: %w", err) + } + + switch { + case input.FirstAttempt == nil: + if input.Delivery.Status != deliverydomain.StatusSuppressed { + return errors.New("first attempt must not be nil unless delivery status is suppressed") + } + case input.Delivery.Status == deliverydomain.StatusSuppressed: + return errors.New("suppressed delivery must not create first attempt") + default: + if err := input.FirstAttempt.Validate(); err != nil { + return fmt.Errorf("first attempt: %w", err) + } + if input.FirstAttempt.DeliveryID != input.Delivery.DeliveryID { + return errors.New("first attempt delivery id must match delivery id") + } + if input.FirstAttempt.Status != attempt.StatusScheduled { + return fmt.Errorf("first attempt status must be %q", attempt.StatusScheduled) + } + } + + if input.DeliveryPayload != nil { + if err := input.DeliveryPayload.Validate(); err != nil { + return fmt.Errorf("delivery payload: %w", err) + } + if input.DeliveryPayload.DeliveryID != input.Delivery.DeliveryID { + return errors.New("delivery payload delivery id must match delivery id") + } + } + + if input.Idempotency == nil { + return nil + } + + if err := input.Idempotency.Validate(); err != nil { + return fmt.Errorf("idempotency: %w", err) + } + if input.Idempotency.DeliveryID != input.Delivery.DeliveryID { + return errors.New("idempotency delivery id must match delivery id") + } + if input.Idempotency.Source != input.Delivery.Source { + return errors.New("idempotency source must match delivery source") + } + if input.Idempotency.IdempotencyKey != input.Delivery.IdempotencyKey { + return errors.New("idempotency key must match delivery idempotency key") + } + if input.Idempotency.ExpiresAt.Sub(input.Idempotency.CreatedAt) != IdempotencyTTL { + return fmt.Errorf("idempotency retention must equal %s", IdempotencyTTL) + } + + return nil +} + +// NewAtomicWriter constructs a 
low-level Redis mutation helper. +func NewAtomicWriter(client *redis.Client) (*AtomicWriter, error) { + if client == nil { + return nil, errors.New("new redis atomic writer: nil client") + } + + return &AtomicWriter{ + client: client, + keyspace: Keyspace{}, + }, nil +} + +// CreateAcceptance stores one delivery, the optional first scheduled attempt +// and its schedule entry, the optional raw payload bundle, the delivery-level +// secondary indexes, and an optional idempotency record in one optimistic Redis transaction. +func (writer *AtomicWriter) CreateAcceptance(ctx context.Context, input CreateAcceptanceInput) error { + if writer == nil || writer.client == nil { + return errors.New("create acceptance in redis: nil writer") + } + if ctx == nil { + return errors.New("create acceptance in redis: nil context") + } + if err := input.Validate(); err != nil { + return fmt.Errorf("create acceptance in redis: %w", err) + } + + deliveryPayload, err := MarshalDelivery(input.Delivery) + if err != nil { + return fmt.Errorf("create acceptance in redis: %w", err) + } + var ( + attemptKey string + attemptPayload []byte + deliveryPayloadKey string + deliveryPayloadBytes []byte + scheduleScore float64 + idempotencyKey string + idempotencyPayload []byte + idempotencyTTL time.Duration + ) + if input.FirstAttempt != nil { + attemptPayload, err = MarshalAttempt(*input.FirstAttempt) + if err != nil { + return fmt.Errorf("create acceptance in redis: %w", err) + } + attemptKey = writer.keyspace.Attempt(input.FirstAttempt.DeliveryID, input.FirstAttempt.AttemptNo) + scheduleScore = ScheduledForScore(input.FirstAttempt.ScheduledFor) + } + if input.DeliveryPayload != nil { + deliveryPayloadBytes, err = MarshalDeliveryPayload(*input.DeliveryPayload) + if err != nil { + return fmt.Errorf("create acceptance in redis: %w", err) + } + deliveryPayloadKey = writer.keyspace.DeliveryPayload(input.DeliveryPayload.DeliveryID) + } + if input.Idempotency != nil { + idempotencyPayload, err =
MarshalIdempotency(*input.Idempotency) + if err != nil { + return fmt.Errorf("create acceptance in redis: %w", err) + } + idempotencyTTL, err = ttlUntil(input.Idempotency.ExpiresAt) + if err != nil { + return fmt.Errorf("create acceptance in redis: %w", err) + } + idempotencyKey = writer.keyspace.Idempotency(input.Idempotency.Source, input.Idempotency.IdempotencyKey) + } + + deliveryKey := writer.keyspace.Delivery(input.Delivery.DeliveryID) + watchKeys := []string{deliveryKey} + if attemptKey != "" { + watchKeys = append(watchKeys, attemptKey) + } + if deliveryPayloadKey != "" { + watchKeys = append(watchKeys, deliveryPayloadKey) + } + if idempotencyKey != "" { + watchKeys = append(watchKeys, idempotencyKey) + } + + indexKeys := writer.keyspace.DeliveryIndexKeys(input.Delivery) + createdAtScore := CreatedAtScore(input.Delivery.CreatedAt) + deliveryMember := input.Delivery.DeliveryID.String() + + watchErr := writer.client.Watch(ctx, func(tx *redis.Tx) error { + for _, key := range watchKeys { + if err := ensureKeyAbsent(ctx, tx, key); err != nil { + return fmt.Errorf("create acceptance in redis: %w", err) + } + } + + _, err := tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error { + pipe.Set(ctx, deliveryKey, deliveryPayload, DeliveryTTL) + if attemptKey != "" { + pipe.Set(ctx, attemptKey, attemptPayload, AttemptTTL) + } + if deliveryPayloadKey != "" { + pipe.Set(ctx, deliveryPayloadKey, deliveryPayloadBytes, DeliveryTTL) + } + if idempotencyKey != "" { + pipe.Set(ctx, idempotencyKey, idempotencyPayload, idempotencyTTL) + } + if attemptKey != "" { + pipe.ZAdd(ctx, writer.keyspace.AttemptSchedule(), redis.Z{ + Score: scheduleScore, + Member: deliveryMember, + }) + } + for _, indexKey := range indexKeys { + pipe.ZAdd(ctx, indexKey, redis.Z{ + Score: createdAtScore, + Member: deliveryMember, + }) + } + + return nil + }) + if err != nil { + return fmt.Errorf("create acceptance in redis: %w", err) + } + + return nil + }, watchKeys...) 
+ + switch { + case errors.Is(watchErr, redis.TxFailedErr): + return fmt.Errorf("create acceptance in redis: %w", ErrConflict) + case watchErr != nil: + return watchErr + default: + return nil + } +} + +// MarkRendered stores the successful materialization result for one queued +// template delivery and updates the delivery-status secondary index +// atomically. +func (writer *AtomicWriter) MarkRendered(ctx context.Context, input MarkRenderedInput) error { + if writer == nil || writer.client == nil { + return errors.New("mark rendered in redis: nil writer") + } + if ctx == nil { + return errors.New("mark rendered in redis: nil context") + } + if err := input.Validate(); err != nil { + return fmt.Errorf("mark rendered in redis: %w", err) + } + + deliveryKey := writer.keyspace.Delivery(input.Delivery.DeliveryID) + deliveryPayload, err := MarshalDelivery(input.Delivery) + if err != nil { + return fmt.Errorf("mark rendered in redis: %w", err) + } + + watchErr := writer.client.Watch(ctx, func(tx *redis.Tx) error { + currentDelivery, err := loadDeliveryFromTx(ctx, tx, deliveryKey) + if err != nil { + return fmt.Errorf("mark rendered in redis: %w", err) + } + if currentDelivery.Status != deliverydomain.StatusQueued { + return fmt.Errorf("mark rendered in redis: %w", ErrConflict) + } + + deliveryTTL, err := ttlForExistingKey(ctx, tx, deliveryKey, DeliveryTTL) + if err != nil { + return fmt.Errorf("mark rendered in redis: %w", err) + } + + createdAtScore := CreatedAtScore(currentDelivery.CreatedAt) + deliveryMember := input.Delivery.DeliveryID.String() + + _, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error { + pipe.Set(ctx, deliveryKey, deliveryPayload, deliveryTTL) + pipe.ZRem(ctx, writer.keyspace.StatusIndex(currentDelivery.Status), deliveryMember) + pipe.ZAdd(ctx, writer.keyspace.StatusIndex(input.Delivery.Status), redis.Z{ + Score: createdAtScore, + Member: deliveryMember, + }) + return nil + }) + if err != nil { + return fmt.Errorf("mark rendered in redis: 
%w", err) + } + + return nil + }, deliveryKey) + + switch { + case errors.Is(watchErr, redis.TxFailedErr): + return fmt.Errorf("mark rendered in redis: %w", ErrConflict) + case watchErr != nil: + return watchErr + default: + return nil + } +} + +// MarkRenderFailed stores one terminal render-failed attempt together with +// the owning failed delivery and updates the delivery-status secondary index +// atomically. +func (writer *AtomicWriter) MarkRenderFailed(ctx context.Context, input MarkRenderFailedInput) error { + if writer == nil || writer.client == nil { + return errors.New("mark render failed in redis: nil writer") + } + if ctx == nil { + return errors.New("mark render failed in redis: nil context") + } + if err := input.Validate(); err != nil { + return fmt.Errorf("mark render failed in redis: %w", err) + } + + deliveryKey := writer.keyspace.Delivery(input.Delivery.DeliveryID) + attemptKey := writer.keyspace.Attempt(input.Attempt.DeliveryID, input.Attempt.AttemptNo) + + deliveryPayload, err := MarshalDelivery(input.Delivery) + if err != nil { + return fmt.Errorf("mark render failed in redis: %w", err) + } + attemptPayload, err := MarshalAttempt(input.Attempt) + if err != nil { + return fmt.Errorf("mark render failed in redis: %w", err) + } + + watchErr := writer.client.Watch(ctx, func(tx *redis.Tx) error { + currentDelivery, err := loadDeliveryFromTx(ctx, tx, deliveryKey) + if err != nil { + return fmt.Errorf("mark render failed in redis: %w", err) + } + currentAttempt, err := loadAttemptFromTx(ctx, tx, attemptKey) + if err != nil { + return fmt.Errorf("mark render failed in redis: %w", err) + } + if currentDelivery.Status != deliverydomain.StatusQueued { + return fmt.Errorf("mark render failed in redis: %w", ErrConflict) + } + if currentAttempt.Status != attempt.StatusScheduled { + return fmt.Errorf("mark render failed in redis: %w", ErrConflict) + } + + deliveryTTL, err := ttlForExistingKey(ctx, tx, deliveryKey, DeliveryTTL) + if err != nil { + return 
fmt.Errorf("mark render failed in redis: %w", err) + } + attemptTTL, err := ttlForExistingKey(ctx, tx, attemptKey, AttemptTTL) + if err != nil { + return fmt.Errorf("mark render failed in redis: %w", err) + } + + createdAtScore := CreatedAtScore(currentDelivery.CreatedAt) + deliveryMember := input.Delivery.DeliveryID.String() + + _, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error { + pipe.Set(ctx, deliveryKey, deliveryPayload, deliveryTTL) + pipe.Set(ctx, attemptKey, attemptPayload, attemptTTL) + pipe.ZRem(ctx, writer.keyspace.StatusIndex(currentDelivery.Status), deliveryMember) + pipe.ZAdd(ctx, writer.keyspace.StatusIndex(input.Delivery.Status), redis.Z{ + Score: createdAtScore, + Member: deliveryMember, + }) + pipe.ZRem(ctx, writer.keyspace.AttemptSchedule(), deliveryMember) + return nil + }) + if err != nil { + return fmt.Errorf("mark render failed in redis: %w", err) + } + + return nil + }, deliveryKey, attemptKey) + + switch { + case errors.Is(watchErr, redis.TxFailedErr): + return fmt.Errorf("mark render failed in redis: %w", ErrConflict) + case watchErr != nil: + return watchErr + default: + return nil + } +} + +func ensureKeyAbsent(ctx context.Context, tx *redis.Tx, key string) error { + exists, err := tx.Exists(ctx, key).Result() + if err != nil { + return err + } + if exists > 0 { + return ErrConflict + } + + return nil +} + +func loadDeliveryFromTx(ctx context.Context, tx *redis.Tx, key string) (deliverydomain.Delivery, error) { + payload, err := tx.Get(ctx, key).Bytes() + switch { + case errors.Is(err, redis.Nil): + return deliverydomain.Delivery{}, ErrConflict + case err != nil: + return deliverydomain.Delivery{}, err + } + + record, err := UnmarshalDelivery(payload) + if err != nil { + return deliverydomain.Delivery{}, err + } + + return record, nil +} + +func loadAttemptFromTx(ctx context.Context, tx *redis.Tx, key string) (attempt.Attempt, error) { + payload, err := tx.Get(ctx, key).Bytes() + switch { + case errors.Is(err, redis.Nil): + 
return attempt.Attempt{}, ErrConflict + case err != nil: + return attempt.Attempt{}, err + } + + record, err := UnmarshalAttempt(payload) + if err != nil { + return attempt.Attempt{}, err + } + + return record, nil +} + +func ttlForExistingKey(ctx context.Context, tx *redis.Tx, key string, fallback time.Duration) (time.Duration, error) { + ttl, err := tx.PTTL(ctx, key).Result() + if err != nil { + return 0, err + } + if ttl <= 0 { + return fallback, nil + } + + return ttl, nil +} + +func ttlUntil(expiresAt time.Time) (time.Duration, error) { + ttl := time.Until(expiresAt) + if ttl <= 0 { + return 0, errors.New("idempotency expires at must be in the future") + } + + return ttl, nil +} diff --git a/mail/internal/adapters/redisstate/atomic_writer_test.go b/mail/internal/adapters/redisstate/atomic_writer_test.go new file mode 100644 index 0000000..f790e07 --- /dev/null +++ b/mail/internal/adapters/redisstate/atomic_writer_test.go @@ -0,0 +1,429 @@ +package redisstate + +import ( + "context" + "errors" + "sync" + "testing" + "time" + + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + + "github.com/alicebob/miniredis/v2" + "github.com/redis/go-redis/v9" + "github.com/stretchr/testify/require" +) + +func TestAtomicWriterCreateAcceptanceStoresStateWithoutIdempotencyRecord(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + writer, err := NewAtomicWriter(client) + require.NoError(t, err) + + record := validDelivery(t) + record.Source = deliverydomain.SourceNotification + record.ResendParentDeliveryID = "" + record.Status = deliverydomain.StatusQueued + record.SentAt = nil + record.LocaleFallbackUsed = false + record.UpdatedAt = record.CreatedAt.Add(time.Minute) + require.NoError(t, record.Validate()) + + firstAttempt := validScheduledAttempt(t, 
record.DeliveryID) + input := CreateAcceptanceInput{ + Delivery: record, + FirstAttempt: ptr(firstAttempt), + DeliveryPayload: ptr(validDeliveryPayload(t, record.DeliveryID)), + } + + require.NoError(t, writer.CreateAcceptance(context.Background(), input)) + + storedDelivery, err := client.Get(context.Background(), Keyspace{}.Delivery(record.DeliveryID)).Bytes() + require.NoError(t, err) + decodedDelivery, err := UnmarshalDelivery(storedDelivery) + require.NoError(t, err) + require.Equal(t, record, decodedDelivery) + + storedAttempt, err := client.Get(context.Background(), Keyspace{}.Attempt(record.DeliveryID, firstAttempt.AttemptNo)).Bytes() + require.NoError(t, err) + decodedAttempt, err := UnmarshalAttempt(storedAttempt) + require.NoError(t, err) + require.Equal(t, firstAttempt, decodedAttempt) + + storedDeliveryPayload, err := client.Get(context.Background(), Keyspace{}.DeliveryPayload(record.DeliveryID)).Bytes() + require.NoError(t, err) + decodedDeliveryPayload, err := UnmarshalDeliveryPayload(storedDeliveryPayload) + require.NoError(t, err) + require.Equal(t, *input.DeliveryPayload, decodedDeliveryPayload) + + scheduledDeliveries, err := client.ZRange(context.Background(), Keyspace{}.AttemptSchedule(), 0, -1).Result() + require.NoError(t, err) + require.Equal(t, []string{record.DeliveryID.String()}, scheduledDeliveries) + + recipientMembers, err := client.ZRange(context.Background(), Keyspace{}.RecipientIndex(record.Envelope.To[0]), 0, -1).Result() + require.NoError(t, err) + require.Equal(t, []string{record.DeliveryID.String()}, recipientMembers) + + idempotencyMembers, err := client.ZRange(context.Background(), Keyspace{}.IdempotencyIndex(record.Source, record.IdempotencyKey), 0, -1).Result() + require.NoError(t, err) + require.Equal(t, []string{record.DeliveryID.String()}, idempotencyMembers) +} + +func TestAtomicWriterCreateAcceptanceDetectsDuplicateIdempotencyRace(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := 
redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + writer, err := NewAtomicWriter(client) + require.NoError(t, err) + + record := validDelivery(t) + record.Source = deliverydomain.SourceNotification + record.ResendParentDeliveryID = "" + record.Status = deliverydomain.StatusQueued + record.SentAt = nil + record.LocaleFallbackUsed = false + record.UpdatedAt = record.CreatedAt.Add(time.Minute) + require.NoError(t, record.Validate()) + + input := CreateAcceptanceInput{ + Delivery: record, + FirstAttempt: ptr(validScheduledAttempt(t, record.DeliveryID)), + DeliveryPayload: ptr(validDeliveryPayload(t, record.DeliveryID)), + Idempotency: ptr(validIdempotencyRecord(t, record.Source, record.DeliveryID, record.IdempotencyKey)), + } + + const contenders = 8 + + var ( + wg sync.WaitGroup + successes int + conflicts int + mu sync.Mutex + ) + + for range contenders { + wg.Add(1) + go func() { + defer wg.Done() + + err := writer.CreateAcceptance(context.Background(), input) + + mu.Lock() + defer mu.Unlock() + switch { + case err == nil: + successes++ + case errors.Is(err, ErrConflict): + conflicts++ + default: + t.Errorf("unexpected error: %v", err) + } + }() + } + wg.Wait() + + require.Equal(t, 1, successes) + require.Equal(t, contenders-1, conflicts) + + require.True(t, server.Exists(Keyspace{}.Delivery(record.DeliveryID))) + require.NotNil(t, input.FirstAttempt) + require.True(t, server.Exists(Keyspace{}.Attempt(record.DeliveryID, input.FirstAttempt.AttemptNo))) + require.True(t, server.Exists(Keyspace{}.DeliveryPayload(record.DeliveryID))) + require.True(t, server.Exists(Keyspace{}.Idempotency(record.Source, record.IdempotencyKey))) + + scheduleCard, err := client.ZCard(context.Background(), Keyspace{}.AttemptSchedule()).Result() + require.NoError(t, err) + require.EqualValues(t, 1, scheduleCard) + + createdAtCard, err := client.ZCard(context.Background(), Keyspace{}.CreatedAtIndex()).Result() + 
require.NoError(t, err) + require.EqualValues(t, 1, createdAtCard) + + idempotencyCard, err := client.ZCard(context.Background(), Keyspace{}.IdempotencyIndex(record.Source, record.IdempotencyKey)).Result() + require.NoError(t, err) + require.EqualValues(t, 1, idempotencyCard) +} + +func TestCreateAcceptanceInputValidateRejectsMismatchedDeliveryPayload(t *testing.T) { + t.Parallel() + + record := validDelivery(t) + record.Source = deliverydomain.SourceNotification + record.ResendParentDeliveryID = "" + record.Status = deliverydomain.StatusQueued + record.SentAt = nil + record.LocaleFallbackUsed = false + record.UpdatedAt = record.CreatedAt.Add(time.Minute) + require.NoError(t, record.Validate()) + + payload := validDeliveryPayload(t, common.DeliveryID("delivery-other")) + input := CreateAcceptanceInput{ + Delivery: record, + FirstAttempt: ptr(validScheduledAttempt(t, record.DeliveryID)), + DeliveryPayload: &payload, + Idempotency: ptr(validIdempotencyRecord(t, record.Source, record.DeliveryID, record.IdempotencyKey)), + } + + err := input.Validate() + require.Error(t, err) + require.ErrorContains(t, err, "delivery payload delivery id must match delivery id") +} + +func TestCreateAcceptanceInputValidateRejectsMismatchedIdempotency(t *testing.T) { + t.Parallel() + + record := validDelivery(t) + record.Source = deliverydomain.SourceNotification + record.ResendParentDeliveryID = "" + record.Status = deliverydomain.StatusQueued + record.SentAt = nil + record.LocaleFallbackUsed = false + record.UpdatedAt = record.CreatedAt.Add(time.Minute) + require.NoError(t, record.Validate()) + + input := CreateAcceptanceInput{ + Delivery: record, + FirstAttempt: ptr(validScheduledAttempt(t, record.DeliveryID)), + Idempotency: ptr(validIdempotencyRecord(t, deliverydomain.SourceAuthSession, record.DeliveryID, record.IdempotencyKey)), + } + + err := input.Validate() + require.Error(t, err) + require.ErrorContains(t, err, "idempotency source must match delivery source") +} + +func 
TestCreateAcceptanceInputValidateRejectsUnexpectedIdempotencyRetention(t *testing.T) { + t.Parallel() + + record := validDelivery(t) + record.Source = deliverydomain.SourceNotification + record.ResendParentDeliveryID = "" + record.Status = deliverydomain.StatusQueued + record.SentAt = nil + record.LocaleFallbackUsed = false + record.UpdatedAt = record.CreatedAt.Add(time.Minute) + require.NoError(t, record.Validate()) + + idempotencyRecord := validIdempotencyRecord(t, record.Source, record.DeliveryID, record.IdempotencyKey) + idempotencyRecord.ExpiresAt = idempotencyRecord.CreatedAt.Add(time.Hour) + + input := CreateAcceptanceInput{ + Delivery: record, + FirstAttempt: ptr(validScheduledAttempt(t, record.DeliveryID)), + Idempotency: ptr(idempotencyRecord), + } + + err := input.Validate() + require.Error(t, err) + require.ErrorContains(t, err, "idempotency retention must equal") +} + +func TestAtomicWriterCreateAcceptanceStoresSuppressedStateWithoutAttempt(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + writer, err := NewAtomicWriter(client) + require.NoError(t, err) + + record := validDelivery(t) + record.Source = deliverydomain.SourceAuthSession + record.ResendParentDeliveryID = "" + record.Status = deliverydomain.StatusSuppressed + record.AttemptCount = 0 + record.LastAttemptStatus = "" + record.ProviderSummary = "" + record.LocaleFallbackUsed = false + record.UpdatedAt = record.CreatedAt.Add(time.Minute) + record.SentAt = nil + record.SuppressedAt = ptr(record.UpdatedAt) + require.NoError(t, record.Validate()) + + input := CreateAcceptanceInput{ + Delivery: record, + Idempotency: ptr(validIdempotencyRecord(t, record.Source, record.DeliveryID, record.IdempotencyKey)), + } + + require.NoError(t, writer.CreateAcceptance(context.Background(), input)) + + storedDelivery, err := client.Get(context.Background(), 
Keyspace{}.Delivery(record.DeliveryID)).Bytes() + require.NoError(t, err) + decodedDelivery, err := UnmarshalDelivery(storedDelivery) + require.NoError(t, err) + require.Equal(t, record, decodedDelivery) + + require.False(t, server.Exists(Keyspace{}.Attempt(record.DeliveryID, 1))) + + scheduleCard, err := client.ZCard(context.Background(), Keyspace{}.AttemptSchedule()).Result() + require.NoError(t, err) + require.Zero(t, scheduleCard) +} + +func TestAtomicWriterMarkRenderedUpdatesDeliveryAndStatusIndex(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + writer, err := NewAtomicWriter(client) + require.NoError(t, err) + + record := validQueuedTemplateDelivery(t) + firstAttempt := validScheduledAttempt(t, record.DeliveryID) + createInput := CreateAcceptanceInput{ + Delivery: record, + FirstAttempt: ptr(firstAttempt), + Idempotency: ptr(validIdempotencyRecord(t, record.Source, record.DeliveryID, record.IdempotencyKey)), + } + require.NoError(t, writer.CreateAcceptance(context.Background(), createInput)) + + rendered := record + rendered.Status = deliverydomain.StatusRendered + rendered.Content = deliverydomain.Content{ + Subject: "Turn 54", + TextBody: "Hello Pilot", + HTMLBody: "
<p>Hello Pilot</p>
", + } + rendered.LocaleFallbackUsed = true + rendered.UpdatedAt = rendered.CreatedAt.Add(time.Minute) + require.NoError(t, rendered.Validate()) + + require.NoError(t, writer.MarkRendered(context.Background(), MarkRenderedInput{ + Delivery: rendered, + })) + + storedDelivery, err := client.Get(context.Background(), Keyspace{}.Delivery(record.DeliveryID)).Bytes() + require.NoError(t, err) + decodedDelivery, err := UnmarshalDelivery(storedDelivery) + require.NoError(t, err) + require.Equal(t, rendered, decodedDelivery) + + queuedMembers, err := client.ZRange(context.Background(), Keyspace{}.StatusIndex(deliverydomain.StatusQueued), 0, -1).Result() + require.NoError(t, err) + require.Empty(t, queuedMembers) + + renderedMembers, err := client.ZRange(context.Background(), Keyspace{}.StatusIndex(deliverydomain.StatusRendered), 0, -1).Result() + require.NoError(t, err) + require.Equal(t, []string{record.DeliveryID.String()}, renderedMembers) +} + +func TestAtomicWriterMarkRenderFailedUpdatesDeliveryAttemptAndStatusIndex(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + writer, err := NewAtomicWriter(client) + require.NoError(t, err) + + record := validQueuedTemplateDelivery(t) + firstAttempt := validScheduledAttempt(t, record.DeliveryID) + createInput := CreateAcceptanceInput{ + Delivery: record, + FirstAttempt: ptr(firstAttempt), + Idempotency: ptr(validIdempotencyRecord(t, record.Source, record.DeliveryID, record.IdempotencyKey)), + } + require.NoError(t, writer.CreateAcceptance(context.Background(), createInput)) + + failed := record + failed.Status = deliverydomain.StatusFailed + failed.LastAttemptStatus = attempt.StatusRenderFailed + failed.ProviderSummary = "missing required variables: player.name" + failed.UpdatedAt = failed.CreatedAt.Add(time.Minute) + failed.FailedAt = ptr(failed.UpdatedAt) + require.NoError(t, 
failed.Validate()) + + renderFailedAttempt := validRenderFailedAttempt(t, record.DeliveryID) + + require.NoError(t, writer.MarkRenderFailed(context.Background(), MarkRenderFailedInput{ + Delivery: failed, + Attempt: renderFailedAttempt, + })) + + storedDelivery, err := client.Get(context.Background(), Keyspace{}.Delivery(record.DeliveryID)).Bytes() + require.NoError(t, err) + decodedDelivery, err := UnmarshalDelivery(storedDelivery) + require.NoError(t, err) + require.Equal(t, failed, decodedDelivery) + + storedAttempt, err := client.Get(context.Background(), Keyspace{}.Attempt(record.DeliveryID, 1)).Bytes() + require.NoError(t, err) + decodedAttempt, err := UnmarshalAttempt(storedAttempt) + require.NoError(t, err) + require.Equal(t, renderFailedAttempt, decodedAttempt) + + queuedMembers, err := client.ZRange(context.Background(), Keyspace{}.StatusIndex(deliverydomain.StatusQueued), 0, -1).Result() + require.NoError(t, err) + require.Empty(t, queuedMembers) + + failedMembers, err := client.ZRange(context.Background(), Keyspace{}.StatusIndex(deliverydomain.StatusFailed), 0, -1).Result() + require.NoError(t, err) + require.Equal(t, []string{record.DeliveryID.String()}, failedMembers) + + scheduledMembers, err := client.ZRange(context.Background(), Keyspace{}.AttemptSchedule(), 0, -1).Result() + require.NoError(t, err) + require.Empty(t, scheduledMembers) +} + +func TestAtomicWriterMarkRenderedRejectsUnexpectedCurrentState(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + writer, err := NewAtomicWriter(client) + require.NoError(t, err) + + record := validQueuedTemplateDelivery(t) + firstAttempt := validScheduledAttempt(t, record.DeliveryID) + require.NoError(t, writer.CreateAcceptance(context.Background(), CreateAcceptanceInput{ + Delivery: record, + FirstAttempt: ptr(firstAttempt), + Idempotency: ptr(validIdempotencyRecord(t, 
record.Source, record.DeliveryID, record.IdempotencyKey)), + })) + + failed := record + failed.Status = deliverydomain.StatusFailed + failed.LastAttemptStatus = attempt.StatusRenderFailed + failed.ProviderSummary = "missing required variables: player.name" + failed.UpdatedAt = failed.CreatedAt.Add(time.Minute) + failed.FailedAt = ptr(failed.UpdatedAt) + require.NoError(t, failed.Validate()) + require.NoError(t, writer.MarkRenderFailed(context.Background(), MarkRenderFailedInput{ + Delivery: failed, + Attempt: validRenderFailedAttempt(t, record.DeliveryID), + })) + + rendered := record + rendered.Status = deliverydomain.StatusRendered + rendered.Content = deliverydomain.Content{ + Subject: "Turn 54", + TextBody: "Hello Pilot", + } + rendered.UpdatedAt = rendered.CreatedAt.Add(2 * time.Minute) + require.NoError(t, rendered.Validate()) + + err = writer.MarkRendered(context.Background(), MarkRenderedInput{Delivery: rendered}) + require.Error(t, err) + require.ErrorIs(t, err, ErrConflict) +} + +func ptr[T any](value T) *T { + return &value +} + +var _ = attempt.Attempt{} diff --git a/mail/internal/adapters/redisstate/attempt_execution_store.go b/mail/internal/adapters/redisstate/attempt_execution_store.go new file mode 100644 index 0000000..baccdad --- /dev/null +++ b/mail/internal/adapters/redisstate/attempt_execution_store.go @@ -0,0 +1,502 @@ +package redisstate + +import ( + "context" + "errors" + "fmt" + "time" + + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/service/acceptgenericdelivery" + "galaxy/mail/internal/service/executeattempt" + "galaxy/mail/internal/telemetry" + + "github.com/redis/go-redis/v9" +) + +var errNotClaimable = errors.New("attempt is not claimable") + +// AttemptExecutionStore provides the Redis-backed durable storage used by the +// attempt scheduler and attempt execution service. 
+type AttemptExecutionStore struct { + client *redis.Client + keys Keyspace +} + +// NewAttemptExecutionStore constructs one Redis-backed attempt execution +// store. +func NewAttemptExecutionStore(client *redis.Client) (*AttemptExecutionStore, error) { + if client == nil { + return nil, errors.New("new attempt execution store: nil redis client") + } + + return &AttemptExecutionStore{ + client: client, + keys: Keyspace{}, + }, nil +} + +// NextDueDeliveryIDs returns up to limit due delivery identifiers ordered by +// the attempt schedule score. +func (store *AttemptExecutionStore) NextDueDeliveryIDs(ctx context.Context, now time.Time, limit int64) ([]common.DeliveryID, error) { + if store == nil || store.client == nil { + return nil, errors.New("next due delivery ids: nil store") + } + if ctx == nil { + return nil, errors.New("next due delivery ids: nil context") + } + if limit <= 0 { + return nil, errors.New("next due delivery ids: non-positive limit") + } + + values, err := store.client.ZRangeByScore(ctx, store.keys.AttemptSchedule(), &redis.ZRangeBy{ + Min: "-inf", + Max: fmt.Sprintf("%d", now.UTC().UnixMilli()), + Count: limit, + }).Result() + if err != nil { + return nil, fmt.Errorf("next due delivery ids: %w", err) + } + + ids := make([]common.DeliveryID, len(values)) + for index, value := range values { + ids[index] = common.DeliveryID(value) + } + + return ids, nil +} + +// ReadAttemptScheduleSnapshot returns the current depth of the durable attempt +// schedule together with its oldest scheduled timestamp when one exists. 
+func (store *AttemptExecutionStore) ReadAttemptScheduleSnapshot(ctx context.Context) (telemetry.AttemptScheduleSnapshot, error) { + if store == nil || store.client == nil { + return telemetry.AttemptScheduleSnapshot{}, errors.New("read attempt schedule snapshot: nil store") + } + if ctx == nil { + return telemetry.AttemptScheduleSnapshot{}, errors.New("read attempt schedule snapshot: nil context") + } + + depth, err := store.client.ZCard(ctx, store.keys.AttemptSchedule()).Result() + if err != nil { + return telemetry.AttemptScheduleSnapshot{}, fmt.Errorf("read attempt schedule snapshot: depth: %w", err) + } + + snapshot := telemetry.AttemptScheduleSnapshot{ + Depth: depth, + } + if depth == 0 { + return snapshot, nil + } + + values, err := store.client.ZRangeWithScores(ctx, store.keys.AttemptSchedule(), 0, 0).Result() + if err != nil { + return telemetry.AttemptScheduleSnapshot{}, fmt.Errorf("read attempt schedule snapshot: oldest scheduled entry: %w", err) + } + if len(values) == 0 { + return snapshot, nil + } + + oldestScheduledFor := time.UnixMilli(int64(values[0].Score)).UTC() + snapshot.OldestScheduledFor = &oldestScheduledFor + return snapshot, nil +} + +// SendingDeliveryIDs returns every delivery id currently indexed as +// `mail_delivery.status=sending`. 
+func (store *AttemptExecutionStore) SendingDeliveryIDs(ctx context.Context) ([]common.DeliveryID, error) { + if store == nil || store.client == nil { + return nil, errors.New("sending delivery ids: nil store") + } + if ctx == nil { + return nil, errors.New("sending delivery ids: nil context") + } + + values, err := store.client.ZRange(ctx, store.keys.StatusIndex(deliverydomain.StatusSending), 0, -1).Result() + if err != nil { + return nil, fmt.Errorf("sending delivery ids: %w", err) + } + + ids := make([]common.DeliveryID, len(values)) + for index, value := range values { + ids[index] = common.DeliveryID(value) + } + + return ids, nil +} + +// RemoveScheduledDelivery removes deliveryID from the attempt schedule set. +func (store *AttemptExecutionStore) RemoveScheduledDelivery(ctx context.Context, deliveryID common.DeliveryID) error { + if store == nil || store.client == nil { + return errors.New("remove scheduled delivery: nil store") + } + if ctx == nil { + return errors.New("remove scheduled delivery: nil context") + } + if err := deliveryID.Validate(); err != nil { + return fmt.Errorf("remove scheduled delivery: %w", err) + } + + if err := store.client.ZRem(ctx, store.keys.AttemptSchedule(), deliveryID.String()).Err(); err != nil { + return fmt.Errorf("remove scheduled delivery: %w", err) + } + + return nil +} + +// LoadWorkItem loads the current delivery and its latest attempt when both are +// present. 
+func (store *AttemptExecutionStore) LoadWorkItem(ctx context.Context, deliveryID common.DeliveryID) (executeattempt.WorkItem, bool, error) { + if store == nil || store.client == nil { + return executeattempt.WorkItem{}, false, errors.New("load attempt work item: nil store") + } + if ctx == nil { + return executeattempt.WorkItem{}, false, errors.New("load attempt work item: nil context") + } + if err := deliveryID.Validate(); err != nil { + return executeattempt.WorkItem{}, false, fmt.Errorf("load attempt work item: %w", err) + } + + deliveryRecord, found, err := store.loadDelivery(ctx, deliveryID) + if err != nil || !found { + return executeattempt.WorkItem{}, found, err + } + if deliveryRecord.AttemptCount < 1 { + return executeattempt.WorkItem{}, false, nil + } + + attemptRecord, found, err := store.loadAttempt(ctx, deliveryID, deliveryRecord.AttemptCount) + if err != nil || !found { + return executeattempt.WorkItem{}, found, err + } + + return executeattempt.WorkItem{ + Delivery: deliveryRecord, + Attempt: attemptRecord, + }, true, nil +} + +// LoadPayload loads one stored raw attachment payload bundle. 
+func (store *AttemptExecutionStore) LoadPayload(ctx context.Context, deliveryID common.DeliveryID) (acceptgenericdelivery.DeliveryPayload, bool, error) { + if store == nil || store.client == nil { + return acceptgenericdelivery.DeliveryPayload{}, false, errors.New("load attempt payload: nil store") + } + if ctx == nil { + return acceptgenericdelivery.DeliveryPayload{}, false, errors.New("load attempt payload: nil context") + } + if err := deliveryID.Validate(); err != nil { + return acceptgenericdelivery.DeliveryPayload{}, false, fmt.Errorf("load attempt payload: %w", err) + } + + payload, err := store.client.Get(ctx, store.keys.DeliveryPayload(deliveryID)).Bytes() + switch { + case errors.Is(err, redis.Nil): + return acceptgenericdelivery.DeliveryPayload{}, false, nil + case err != nil: + return acceptgenericdelivery.DeliveryPayload{}, false, fmt.Errorf("load attempt payload: %w", err) + } + + record, err := UnmarshalDeliveryPayload(payload) + if err != nil { + return acceptgenericdelivery.DeliveryPayload{}, false, fmt.Errorf("load attempt payload: %w", err) + } + + return record, true, nil +} + +// ClaimDueAttempt transitions one due scheduled attempt into `in_progress` +// ownership and returns the claimed work item. 
+func (store *AttemptExecutionStore) ClaimDueAttempt(ctx context.Context, deliveryID common.DeliveryID, now time.Time) (executeattempt.WorkItem, bool, error) { + if store == nil || store.client == nil { + return executeattempt.WorkItem{}, false, errors.New("claim due attempt: nil store") + } + if ctx == nil { + return executeattempt.WorkItem{}, false, errors.New("claim due attempt: nil context") + } + if err := deliveryID.Validate(); err != nil { + return executeattempt.WorkItem{}, false, fmt.Errorf("claim due attempt: %w", err) + } + + claimedAt := now.UTC().Truncate(time.Millisecond) + if claimedAt.IsZero() { + return executeattempt.WorkItem{}, false, errors.New("claim due attempt: zero claim time") + } + + deliveryKey := store.keys.Delivery(deliveryID) + + var claimed executeattempt.WorkItem + + watchErr := store.client.Watch(ctx, func(tx *redis.Tx) error { + deliveryRecord, err := loadDeliveryFromTx(ctx, tx, deliveryKey) + switch { + case errors.Is(err, ErrConflict): + return errNotClaimable + case err != nil: + return fmt.Errorf("claim due attempt: %w", err) + } + if deliveryRecord.AttemptCount < 1 { + return errNotClaimable + } + + attemptKey := store.keys.Attempt(deliveryID, deliveryRecord.AttemptCount) + attemptRecord, err := loadAttemptFromTx(ctx, tx, attemptKey) + switch { + case errors.Is(err, ErrConflict): + return errNotClaimable + case err != nil: + return fmt.Errorf("claim due attempt: %w", err) + } + + score, err := tx.ZScore(ctx, store.keys.AttemptSchedule(), deliveryID.String()).Result() + switch { + case errors.Is(err, redis.Nil): + return errNotClaimable + case err != nil: + return fmt.Errorf("claim due attempt: read attempt schedule: %w", err) + } + + switch deliveryRecord.Status { + case deliverydomain.StatusQueued, deliverydomain.StatusRendered: + default: + return errNotClaimable + } + if attemptRecord.Status != attempt.StatusScheduled { + return errNotClaimable + } + if score > ScheduledForScore(claimedAt) || 
attemptRecord.ScheduledFor.After(claimedAt) { + return errNotClaimable + } + + claimedDelivery := deliveryRecord + claimedDelivery.Status = deliverydomain.StatusSending + claimedDelivery.UpdatedAt = claimedAt + if err := claimedDelivery.Validate(); err != nil { + return fmt.Errorf("claim due attempt: build claimed delivery: %w", err) + } + + claimedAttempt := attemptRecord + claimedAttempt.Status = attempt.StatusInProgress + claimedAttempt.StartedAt = ptrTime(claimedAt) + if err := claimedAttempt.Validate(); err != nil { + return fmt.Errorf("claim due attempt: build claimed attempt: %w", err) + } + + deliveryPayload, err := MarshalDelivery(claimedDelivery) + if err != nil { + return fmt.Errorf("claim due attempt: %w", err) + } + attemptPayload, err := MarshalAttempt(claimedAttempt) + if err != nil { + return fmt.Errorf("claim due attempt: %w", err) + } + + deliveryTTL, err := ttlForExistingKey(ctx, tx, deliveryKey, DeliveryTTL) + if err != nil { + return fmt.Errorf("claim due attempt: delivery ttl: %w", err) + } + attemptTTL, err := ttlForExistingKey(ctx, tx, attemptKey, AttemptTTL) + if err != nil { + return fmt.Errorf("claim due attempt: attempt ttl: %w", err) + } + + createdAtScore := CreatedAtScore(deliveryRecord.CreatedAt) + + _, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error { + pipe.Set(ctx, deliveryKey, deliveryPayload, deliveryTTL) + pipe.Set(ctx, attemptKey, attemptPayload, attemptTTL) + pipe.ZRem(ctx, store.keys.StatusIndex(deliveryRecord.Status), deliveryID.String()) + pipe.ZAdd(ctx, store.keys.StatusIndex(deliverydomain.StatusSending), redis.Z{ + Score: createdAtScore, + Member: deliveryID.String(), + }) + pipe.ZRem(ctx, store.keys.AttemptSchedule(), deliveryID.String()) + return nil + }) + if err != nil { + return fmt.Errorf("claim due attempt: %w", err) + } + + claimed = executeattempt.WorkItem{ + Delivery: claimedDelivery, + Attempt: claimedAttempt, + } + return nil + }, deliveryKey) + + switch { + case errors.Is(watchErr, 
errNotClaimable), errors.Is(watchErr, redis.TxFailedErr): + return executeattempt.WorkItem{}, false, nil + case watchErr != nil: + return executeattempt.WorkItem{}, false, watchErr + default: + return claimed, true, nil + } +} + +// Commit atomically stores one complete attempt execution outcome. +func (store *AttemptExecutionStore) Commit(ctx context.Context, input executeattempt.CommitStateInput) error { + if store == nil || store.client == nil { + return errors.New("commit attempt outcome: nil store") + } + if ctx == nil { + return errors.New("commit attempt outcome: nil context") + } + if err := input.Validate(); err != nil { + return fmt.Errorf("commit attempt outcome: %w", err) + } + + deliveryKey := store.keys.Delivery(input.Delivery.DeliveryID) + currentAttemptKey := store.keys.Attempt(input.Attempt.DeliveryID, input.Attempt.AttemptNo) + + deliveryPayload, err := MarshalDelivery(input.Delivery) + if err != nil { + return fmt.Errorf("commit attempt outcome: %w", err) + } + attemptPayload, err := MarshalAttempt(input.Attempt) + if err != nil { + return fmt.Errorf("commit attempt outcome: %w", err) + } + + var ( + nextAttemptKey string + nextAttemptPayload []byte + nextAttemptScore float64 + deadLetterKey string + deadLetterPayload []byte + ) + if input.NextAttempt != nil { + nextAttemptKey = store.keys.Attempt(input.NextAttempt.DeliveryID, input.NextAttempt.AttemptNo) + nextAttemptPayload, err = MarshalAttempt(*input.NextAttempt) + if err != nil { + return fmt.Errorf("commit attempt outcome: %w", err) + } + nextAttemptScore = ScheduledForScore(input.NextAttempt.ScheduledFor) + } + if input.DeadLetter != nil { + deadLetterKey = store.keys.DeadLetter(input.DeadLetter.DeliveryID) + deadLetterPayload, err = MarshalDeadLetter(*input.DeadLetter) + if err != nil { + return fmt.Errorf("commit attempt outcome: %w", err) + } + } + + watchKeys := []string{deliveryKey, currentAttemptKey} + if nextAttemptKey != "" { + watchKeys = append(watchKeys, nextAttemptKey) + } + if 
deadLetterKey != "" { + watchKeys = append(watchKeys, deadLetterKey) + } + + watchErr := store.client.Watch(ctx, func(tx *redis.Tx) error { + currentDelivery, err := loadDeliveryFromTx(ctx, tx, deliveryKey) + if err != nil { + return fmt.Errorf("commit attempt outcome: %w", err) + } + currentAttempt, err := loadAttemptFromTx(ctx, tx, currentAttemptKey) + if err != nil { + return fmt.Errorf("commit attempt outcome: %w", err) + } + if currentDelivery.Status != deliverydomain.StatusSending { + return fmt.Errorf("commit attempt outcome: %w", ErrConflict) + } + if currentAttempt.Status != attempt.StatusInProgress { + return fmt.Errorf("commit attempt outcome: %w", ErrConflict) + } + if nextAttemptKey != "" { + if err := ensureKeyAbsent(ctx, tx, nextAttemptKey); err != nil { + return fmt.Errorf("commit attempt outcome: %w", err) + } + } + if deadLetterKey != "" { + if err := ensureKeyAbsent(ctx, tx, deadLetterKey); err != nil { + return fmt.Errorf("commit attempt outcome: %w", err) + } + } + + deliveryTTL, err := ttlForExistingKey(ctx, tx, deliveryKey, DeliveryTTL) + if err != nil { + return fmt.Errorf("commit attempt outcome: delivery ttl: %w", err) + } + attemptTTL, err := ttlForExistingKey(ctx, tx, currentAttemptKey, AttemptTTL) + if err != nil { + return fmt.Errorf("commit attempt outcome: attempt ttl: %w", err) + } + createdAtScore := CreatedAtScore(currentDelivery.CreatedAt) + + _, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error { + pipe.Set(ctx, deliveryKey, deliveryPayload, deliveryTTL) + pipe.Set(ctx, currentAttemptKey, attemptPayload, attemptTTL) + pipe.ZRem(ctx, store.keys.StatusIndex(currentDelivery.Status), input.Delivery.DeliveryID.String()) + pipe.ZAdd(ctx, store.keys.StatusIndex(input.Delivery.Status), redis.Z{ + Score: createdAtScore, + Member: input.Delivery.DeliveryID.String(), + }) + pipe.ZRem(ctx, store.keys.AttemptSchedule(), input.Delivery.DeliveryID.String()) + if nextAttemptKey != "" { + pipe.Set(ctx, nextAttemptKey, 
nextAttemptPayload, AttemptTTL) + pipe.ZAdd(ctx, store.keys.AttemptSchedule(), redis.Z{ + Score: nextAttemptScore, + Member: input.Delivery.DeliveryID.String(), + }) + } + if deadLetterKey != "" { + pipe.Set(ctx, deadLetterKey, deadLetterPayload, DeadLetterTTL) + } + return nil + }) + if err != nil { + return fmt.Errorf("commit attempt outcome: %w", err) + } + + return nil + }, watchKeys...) + + switch { + case errors.Is(watchErr, redis.TxFailedErr): + return fmt.Errorf("commit attempt outcome: %w", ErrConflict) + case watchErr != nil: + return watchErr + default: + return nil + } +} + +func (store *AttemptExecutionStore) loadDelivery(ctx context.Context, deliveryID common.DeliveryID) (deliverydomain.Delivery, bool, error) { + payload, err := store.client.Get(ctx, store.keys.Delivery(deliveryID)).Bytes() + switch { + case errors.Is(err, redis.Nil): + return deliverydomain.Delivery{}, false, nil + case err != nil: + return deliverydomain.Delivery{}, false, fmt.Errorf("load attempt delivery: %w", err) + } + + record, err := UnmarshalDelivery(payload) + if err != nil { + return deliverydomain.Delivery{}, false, fmt.Errorf("load attempt delivery: %w", err) + } + + return record, true, nil +} + +func (store *AttemptExecutionStore) loadAttempt(ctx context.Context, deliveryID common.DeliveryID, attemptNo int) (attempt.Attempt, bool, error) { + payload, err := store.client.Get(ctx, store.keys.Attempt(deliveryID, attemptNo)).Bytes() + switch { + case errors.Is(err, redis.Nil): + return attempt.Attempt{}, false, nil + case err != nil: + return attempt.Attempt{}, false, fmt.Errorf("load attempt record: %w", err) + } + + record, err := UnmarshalAttempt(payload) + if err != nil { + return attempt.Attempt{}, false, fmt.Errorf("load attempt record: %w", err) + } + + return record, true, nil +} + +func ptrTime(value time.Time) *time.Time { + return &value +} diff --git a/mail/internal/adapters/redisstate/attempt_execution_store_test.go 
b/mail/internal/adapters/redisstate/attempt_execution_store_test.go new file mode 100644 index 0000000..702c21b --- /dev/null +++ b/mail/internal/adapters/redisstate/attempt_execution_store_test.go @@ -0,0 +1,301 @@ +package redisstate + +import ( + "context" + "sync" + "testing" + "time" + + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/service/executeattempt" + + "github.com/alicebob/miniredis/v2" + "github.com/redis/go-redis/v9" + "github.com/stretchr/testify/require" +) + +func TestAttemptExecutionStoreClaimDueAttemptTransitionsState(t *testing.T) { + t.Parallel() + + server, client, store := newAttemptExecutionFixture(t) + record := queuedRenderedDelivery(t, common.DeliveryID("delivery-claim")) + createAcceptedDelivery(t, store, record) + + claimed, found, err := store.ClaimDueAttempt(context.Background(), record.DeliveryID, record.CreatedAt.Add(time.Minute)) + require.NoError(t, err) + require.True(t, found) + require.Equal(t, deliverydomain.StatusSending, claimed.Delivery.Status) + require.Equal(t, attempt.StatusInProgress, claimed.Attempt.Status) + require.NotNil(t, claimed.Attempt.StartedAt) + + require.False(t, server.Exists(Keyspace{}.AttemptSchedule())) + + storedDelivery, err := client.Get(context.Background(), Keyspace{}.Delivery(record.DeliveryID)).Bytes() + require.NoError(t, err) + decodedDelivery, err := UnmarshalDelivery(storedDelivery) + require.NoError(t, err) + require.Equal(t, claimed.Delivery, decodedDelivery) + + sendingMembers, err := client.ZRange(context.Background(), Keyspace{}.StatusIndex(deliverydomain.StatusSending), 0, -1).Result() + require.NoError(t, err) + require.Equal(t, []string{record.DeliveryID.String()}, sendingMembers) +} + +func TestAttemptExecutionStoreClaimDueAttemptAllowsOnlyOneOwner(t *testing.T) { + t.Parallel() + + _, _, store := newAttemptExecutionFixture(t) + record := queuedRenderedDelivery(t, 
common.DeliveryID("delivery-race")) + createAcceptedDelivery(t, store, record) + + const contenders = 8 + + var ( + waitGroup sync.WaitGroup + mu sync.Mutex + successes int + ) + + for range contenders { + waitGroup.Add(1) + go func() { + defer waitGroup.Done() + + _, found, err := store.ClaimDueAttempt(context.Background(), record.DeliveryID, record.CreatedAt.Add(time.Minute)) + require.NoError(t, err) + + mu.Lock() + defer mu.Unlock() + if found { + successes++ + } + }() + } + waitGroup.Wait() + + require.Equal(t, 1, successes) +} + +func TestAttemptExecutionStoreCommitSchedulesRetry(t *testing.T) { + t.Parallel() + + _, client, store := newAttemptExecutionFixture(t) + workItem := inProgressWorkItem(t, common.DeliveryID("delivery-retry"), 1) + seedWorkItemState(t, client, workItem) + + finishedAt := workItem.Attempt.StartedAt.Add(30 * time.Second) + currentAttempt := workItem.Attempt + currentAttempt.Status = attempt.StatusTransportFailed + currentAttempt.FinishedAt = ptrTimeAttemptStore(finishedAt) + currentAttempt.ProviderClassification = "transient_failure" + currentAttempt.ProviderSummary = "provider=smtp result=transient_failure phase=data smtp_code=451" + require.NoError(t, currentAttempt.Validate()) + + nextAttempt := attempt.Attempt{ + DeliveryID: workItem.Delivery.DeliveryID, + AttemptNo: 2, + ScheduledFor: finishedAt.Add(time.Minute), + Status: attempt.StatusScheduled, + } + require.NoError(t, nextAttempt.Validate()) + + deliveryRecord := workItem.Delivery + deliveryRecord.Status = deliverydomain.StatusQueued + deliveryRecord.AttemptCount = nextAttempt.AttemptNo + deliveryRecord.LastAttemptStatus = currentAttempt.Status + deliveryRecord.ProviderSummary = currentAttempt.ProviderSummary + deliveryRecord.UpdatedAt = finishedAt + require.NoError(t, deliveryRecord.Validate()) + + input := executeattempt.CommitStateInput{ + Delivery: deliveryRecord, + Attempt: currentAttempt, + NextAttempt: &nextAttempt, + } + require.NoError(t, input.Validate()) + 
require.NoError(t, store.Commit(context.Background(), input)) + + reloaded, found, err := store.LoadWorkItem(context.Background(), workItem.Delivery.DeliveryID) + require.NoError(t, err) + require.True(t, found) + require.Equal(t, deliveryRecord, reloaded.Delivery) + require.Equal(t, nextAttempt, reloaded.Attempt) + + firstAttemptPayload, err := client.Get(context.Background(), Keyspace{}.Attempt(workItem.Delivery.DeliveryID, 1)).Bytes() + require.NoError(t, err) + firstAttemptRecord, err := UnmarshalAttempt(firstAttemptPayload) + require.NoError(t, err) + require.Equal(t, currentAttempt, firstAttemptRecord) + + scheduledMembers, err := client.ZRange(context.Background(), Keyspace{}.AttemptSchedule(), 0, -1).Result() + require.NoError(t, err) + require.Equal(t, []string{workItem.Delivery.DeliveryID.String()}, scheduledMembers) +} + +func TestAttemptExecutionStoreCommitCreatesDeadLetter(t *testing.T) { + t.Parallel() + + _, client, store := newAttemptExecutionFixture(t) + workItem := inProgressWorkItem(t, common.DeliveryID("delivery-dead-letter"), 4) + seedWorkItemState(t, client, workItem) + + finishedAt := workItem.Attempt.StartedAt.Add(30 * time.Second) + currentAttempt := workItem.Attempt + currentAttempt.Status = attempt.StatusTimedOut + currentAttempt.FinishedAt = ptrTimeAttemptStore(finishedAt) + currentAttempt.ProviderClassification = "deadline_exceeded" + currentAttempt.ProviderSummary = "attempt claim TTL expired" + require.NoError(t, currentAttempt.Validate()) + + deliveryRecord := workItem.Delivery + deliveryRecord.Status = deliverydomain.StatusDeadLetter + deliveryRecord.LastAttemptStatus = currentAttempt.Status + deliveryRecord.ProviderSummary = currentAttempt.ProviderSummary + deliveryRecord.UpdatedAt = finishedAt + deliveryRecord.DeadLetteredAt = ptrTimeAttemptStore(finishedAt) + require.NoError(t, deliveryRecord.Validate()) + + deadLetter := &deliverydomain.DeadLetterEntry{ + DeliveryID: deliveryRecord.DeliveryID, + FinalAttemptNo: 
currentAttempt.AttemptNo, + FailureClassification: "retry_exhausted", + ProviderSummary: currentAttempt.ProviderSummary, + CreatedAt: finishedAt, + RecoveryHint: "check SMTP connectivity", + } + require.NoError(t, deadLetter.ValidateFor(deliveryRecord)) + + input := executeattempt.CommitStateInput{ + Delivery: deliveryRecord, + Attempt: currentAttempt, + DeadLetter: deadLetter, + } + require.NoError(t, input.Validate()) + require.NoError(t, store.Commit(context.Background(), input)) + + storedDelivery, found, err := store.LoadWorkItem(context.Background(), workItem.Delivery.DeliveryID) + require.NoError(t, err) + require.True(t, found) + require.Equal(t, deliveryRecord, storedDelivery.Delivery) + require.Equal(t, currentAttempt, storedDelivery.Attempt) + + deadLetterPayload, err := client.Get(context.Background(), Keyspace{}.DeadLetter(workItem.Delivery.DeliveryID)).Bytes() + require.NoError(t, err) + decodedDeadLetter, err := UnmarshalDeadLetter(deadLetterPayload) + require.NoError(t, err) + require.Equal(t, *deadLetter, decodedDeadLetter) +} + +func newAttemptExecutionFixture(t *testing.T) (*miniredis.Miniredis, *redis.Client, *AttemptExecutionStore) { + t.Helper() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + store, err := NewAttemptExecutionStore(client) + require.NoError(t, err) + + return server, client, store +} + +func createAcceptedDelivery(t *testing.T, store *AttemptExecutionStore, record deliverydomain.Delivery) { + t.Helper() + + client := store.client + writer, err := NewAtomicWriter(client) + require.NoError(t, err) + + firstAttempt := attempt.Attempt{ + DeliveryID: record.DeliveryID, + AttemptNo: 1, + ScheduledFor: record.CreatedAt, + Status: attempt.StatusScheduled, + } + require.NoError(t, firstAttempt.Validate()) + + require.NoError(t, writer.CreateAcceptance(context.Background(), CreateAcceptanceInput{ + Delivery: record, + 
FirstAttempt: &firstAttempt, + })) +} + +func queuedRenderedDelivery(t *testing.T, deliveryID common.DeliveryID) deliverydomain.Delivery { + t.Helper() + + record := validDelivery(t) + record.DeliveryID = deliveryID + record.ResendParentDeliveryID = "" + record.Source = deliverydomain.SourceNotification + record.PayloadMode = deliverydomain.PayloadModeRendered + record.TemplateID = "" + record.Locale = "" + record.TemplateVariables = nil + record.LocaleFallbackUsed = false + record.Attachments = nil + record.Status = deliverydomain.StatusQueued + record.AttemptCount = 1 + record.LastAttemptStatus = "" + record.ProviderSummary = "" + record.CreatedAt = time.Unix(1_775_121_700, 0).UTC() + record.UpdatedAt = record.CreatedAt + record.SentAt = nil + record.SuppressedAt = nil + record.FailedAt = nil + record.DeadLetteredAt = nil + record.IdempotencyKey = common.IdempotencyKey("notification:" + deliveryID.String()) + require.NoError(t, record.Validate()) + + return record +} + +func inProgressWorkItem(t *testing.T, deliveryID common.DeliveryID, attemptNo int) executeattempt.WorkItem { + t.Helper() + + deliveryRecord := queuedRenderedDelivery(t, deliveryID) + deliveryRecord.Status = deliverydomain.StatusSending + deliveryRecord.AttemptCount = attemptNo + deliveryRecord.UpdatedAt = deliveryRecord.CreatedAt.Add(time.Duration(attemptNo) * time.Minute) + require.NoError(t, deliveryRecord.Validate()) + + scheduledFor := deliveryRecord.CreatedAt.Add(time.Duration(attemptNo-1) * time.Minute) + startedAt := scheduledFor.Add(5 * time.Second) + attemptRecord := attempt.Attempt{ + DeliveryID: deliveryID, + AttemptNo: attemptNo, + ScheduledFor: scheduledFor, + StartedAt: &startedAt, + Status: attempt.StatusInProgress, + } + require.NoError(t, attemptRecord.Validate()) + + return executeattempt.WorkItem{ + Delivery: deliveryRecord, + Attempt: attemptRecord, + } +} + +func seedWorkItemState(t *testing.T, client *redis.Client, item executeattempt.WorkItem) { + t.Helper() + + 
deliveryPayload, err := MarshalDelivery(item.Delivery) + require.NoError(t, err) + attemptPayload, err := MarshalAttempt(item.Attempt) + require.NoError(t, err) + + err = client.Set(context.Background(), Keyspace{}.Delivery(item.Delivery.DeliveryID), deliveryPayload, DeliveryTTL).Err() + require.NoError(t, err) + err = client.Set(context.Background(), Keyspace{}.Attempt(item.Attempt.DeliveryID, item.Attempt.AttemptNo), attemptPayload, AttemptTTL).Err() + require.NoError(t, err) + err = client.ZAdd(context.Background(), Keyspace{}.StatusIndex(deliverydomain.StatusSending), redis.Z{ + Score: CreatedAtScore(item.Delivery.CreatedAt), + Member: item.Delivery.DeliveryID.String(), + }).Err() + require.NoError(t, err) +} + +func ptrTimeAttemptStore(value time.Time) *time.Time { + return &value +} diff --git a/mail/internal/adapters/redisstate/auth_acceptance_store.go b/mail/internal/adapters/redisstate/auth_acceptance_store.go new file mode 100644 index 0000000..19fa6ab --- /dev/null +++ b/mail/internal/adapters/redisstate/auth_acceptance_store.go @@ -0,0 +1,117 @@ +package redisstate + +import ( + "context" + "errors" + "fmt" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/domain/idempotency" + "galaxy/mail/internal/service/acceptauthdelivery" + + "github.com/redis/go-redis/v9" +) + +// AcceptanceStore provides the Redis-backed durable storage used by the +// auth-delivery acceptance use case. +type AcceptanceStore struct { + client *redis.Client + writer *AtomicWriter + keys Keyspace +} + +// NewAcceptanceStore constructs one Redis-backed auth acceptance store. 
+func NewAcceptanceStore(client *redis.Client) (*AcceptanceStore, error) { + if client == nil { + return nil, errors.New("new auth acceptance store: nil redis client") + } + + writer, err := NewAtomicWriter(client) + if err != nil { + return nil, fmt.Errorf("new auth acceptance store: %w", err) + } + + return &AcceptanceStore{ + client: client, + writer: writer, + keys: Keyspace{}, + }, nil +} + +// CreateAcceptance stores one auth-delivery acceptance write set in Redis. +func (store *AcceptanceStore) CreateAcceptance(ctx context.Context, input acceptauthdelivery.CreateAcceptanceInput) error { + if store == nil || store.client == nil || store.writer == nil { + return errors.New("create auth acceptance: nil store") + } + if ctx == nil { + return errors.New("create auth acceptance: nil context") + } + if err := input.Validate(); err != nil { + return fmt.Errorf("create auth acceptance: %w", err) + } + + err := store.writer.CreateAcceptance(ctx, CreateAcceptanceInput{ + Delivery: input.Delivery, + FirstAttempt: input.FirstAttempt, + Idempotency: &input.Idempotency, + }) + if errors.Is(err, ErrConflict) { + return fmt.Errorf("create auth acceptance: %w", acceptauthdelivery.ErrConflict) + } + if err != nil { + return fmt.Errorf("create auth acceptance: %w", err) + } + + return nil +} + +// GetIdempotency loads one accepted idempotency scope from Redis. 
+func (store *AcceptanceStore) GetIdempotency(ctx context.Context, source deliverydomain.Source, key common.IdempotencyKey) (idempotency.Record, bool, error) { + if store == nil || store.client == nil { + return idempotency.Record{}, false, errors.New("get auth acceptance idempotency: nil store") + } + if ctx == nil { + return idempotency.Record{}, false, errors.New("get auth acceptance idempotency: nil context") + } + + payload, err := store.client.Get(ctx, store.keys.Idempotency(source, key)).Bytes() + switch { + case errors.Is(err, redis.Nil): + return idempotency.Record{}, false, nil + case err != nil: + return idempotency.Record{}, false, fmt.Errorf("get auth acceptance idempotency: %w", err) + } + + record, err := UnmarshalIdempotency(payload) + if err != nil { + return idempotency.Record{}, false, fmt.Errorf("get auth acceptance idempotency: %w", err) + } + + return record, true, nil +} + +// GetDelivery loads one accepted delivery from Redis. +func (store *AcceptanceStore) GetDelivery(ctx context.Context, deliveryID common.DeliveryID) (deliverydomain.Delivery, bool, error) { + if store == nil || store.client == nil { + return deliverydomain.Delivery{}, false, errors.New("get auth acceptance delivery: nil store") + } + if ctx == nil { + return deliverydomain.Delivery{}, false, errors.New("get auth acceptance delivery: nil context") + } + + payload, err := store.client.Get(ctx, store.keys.Delivery(deliveryID)).Bytes() + switch { + case errors.Is(err, redis.Nil): + return deliverydomain.Delivery{}, false, nil + case err != nil: + return deliverydomain.Delivery{}, false, fmt.Errorf("get auth acceptance delivery: %w", err) + } + + record, err := UnmarshalDelivery(payload) + if err != nil { + return deliverydomain.Delivery{}, false, fmt.Errorf("get auth acceptance delivery: %w", err) + } + + return record, true, nil +} diff --git a/mail/internal/adapters/redisstate/auth_acceptance_store_test.go b/mail/internal/adapters/redisstate/auth_acceptance_store_test.go new 
file mode 100644 index 0000000..97ad342 --- /dev/null +++ b/mail/internal/adapters/redisstate/auth_acceptance_store_test.go @@ -0,0 +1,117 @@ +package redisstate + +import ( + "context" + "testing" + "time" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/domain/idempotency" + "galaxy/mail/internal/service/acceptauthdelivery" + + "github.com/alicebob/miniredis/v2" + "github.com/redis/go-redis/v9" + "github.com/stretchr/testify/require" +) + +func TestAcceptanceStoreCreateAndReadQueuedDelivery(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + store, err := NewAcceptanceStore(client) + require.NoError(t, err) + + record := validDelivery(t) + record.Source = deliverydomain.SourceAuthSession + record.ResendParentDeliveryID = "" + record.Status = deliverydomain.StatusQueued + record.AttemptCount = 1 + record.LastAttemptStatus = "" + record.ProviderSummary = "" + record.LocaleFallbackUsed = false + record.UpdatedAt = record.CreatedAt + record.SentAt = nil + require.NoError(t, record.Validate()) + + input := acceptauthdelivery.CreateAcceptanceInput{ + Delivery: record, + FirstAttempt: ptr(validScheduledAttempt(t, record.DeliveryID)), + Idempotency: validIdempotencyRecord(t, record.Source, record.DeliveryID, record.IdempotencyKey), + } + + require.NoError(t, store.CreateAcceptance(context.Background(), input)) + + storedDelivery, found, err := store.GetDelivery(context.Background(), record.DeliveryID) + require.NoError(t, err) + require.True(t, found) + require.Equal(t, record, storedDelivery) + + storedIdempotency, found, err := store.GetIdempotency(context.Background(), record.Source, record.IdempotencyKey) + require.NoError(t, err) + require.True(t, found) + require.Equal(t, input.Idempotency, storedIdempotency) +} + +func 
TestAcceptanceStoreCreateAndReadSuppressedDelivery(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + store, err := NewAcceptanceStore(client) + require.NoError(t, err) + + record := validDelivery(t) + record.Source = deliverydomain.SourceAuthSession + record.ResendParentDeliveryID = "" + record.Status = deliverydomain.StatusSuppressed + record.AttemptCount = 0 + record.LastAttemptStatus = "" + record.ProviderSummary = "" + record.LocaleFallbackUsed = false + record.UpdatedAt = record.CreatedAt.Add(time.Minute) + record.SentAt = nil + record.SuppressedAt = ptr(record.UpdatedAt) + require.NoError(t, record.Validate()) + + input := acceptauthdelivery.CreateAcceptanceInput{ + Delivery: record, + Idempotency: validIdempotencyRecord(t, record.Source, record.DeliveryID, record.IdempotencyKey), + } + + require.NoError(t, store.CreateAcceptance(context.Background(), input)) + + storedDelivery, found, err := store.GetDelivery(context.Background(), record.DeliveryID) + require.NoError(t, err) + require.True(t, found) + require.Equal(t, record, storedDelivery) + + attemptExists := server.Exists(Keyspace{}.Attempt(record.DeliveryID, 1)) + require.False(t, attemptExists) +} + +func TestAcceptanceStoreReturnsNotFound(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + store, err := NewAcceptanceStore(client) + require.NoError(t, err) + + deliveryRecord, found, err := store.GetDelivery(context.Background(), common.DeliveryID("missing")) + require.NoError(t, err) + require.False(t, found) + require.Equal(t, deliverydomain.Delivery{}, deliveryRecord) + + idempotencyRecord, found, err := store.GetIdempotency(context.Background(), deliverydomain.SourceAuthSession, common.IdempotencyKey("missing")) + 
require.NoError(t, err) + require.False(t, found) + require.Equal(t, idempotency.Record{}, idempotencyRecord) +} diff --git a/mail/internal/adapters/redisstate/codecs.go b/mail/internal/adapters/redisstate/codecs.go new file mode 100644 index 0000000..cffb329 --- /dev/null +++ b/mail/internal/adapters/redisstate/codecs.go @@ -0,0 +1,697 @@ +package redisstate + +import ( + "bytes" + "encoding/json" + "fmt" + "io" + "strings" + "time" + + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/domain/idempotency" + "galaxy/mail/internal/domain/malformedcommand" + "galaxy/mail/internal/service/acceptgenericdelivery" +) + +type deliveryRecord struct { + DeliveryID string `json:"delivery_id"` + ResendParentDeliveryID string `json:"resend_parent_delivery_id,omitempty"` + Source deliverydomain.Source `json:"source"` + PayloadMode deliverydomain.PayloadMode `json:"payload_mode"` + TemplateID string `json:"template_id,omitempty"` + TemplateVariables *map[string]any `json:"template_variables,omitempty"` + To []string `json:"to"` + Cc []string `json:"cc"` + Bcc []string `json:"bcc"` + ReplyTo []string `json:"reply_to"` + Subject string `json:"subject,omitempty"` + TextBody string `json:"text_body,omitempty"` + HTMLBody string `json:"html_body,omitempty"` + Attachments []attachmentRecord `json:"attachments"` + Locale string `json:"locale,omitempty"` + LocaleFallbackUsed bool `json:"locale_fallback_used"` + IdempotencyKey string `json:"idempotency_key"` + Status deliverydomain.Status `json:"status"` + AttemptCount int `json:"attempt_count"` + LastAttemptStatus attempt.Status `json:"last_attempt_status,omitempty"` + ProviderSummary string `json:"provider_summary,omitempty"` + CreatedAtMS int64 `json:"created_at_ms"` + UpdatedAtMS int64 `json:"updated_at_ms"` + SentAtMS *int64 `json:"sent_at_ms,omitempty"` + SuppressedAtMS *int64 `json:"suppressed_at_ms,omitempty"` + FailedAtMS 
*int64 `json:"failed_at_ms,omitempty"` + DeadLetteredAtMS *int64 `json:"dead_lettered_at_ms,omitempty"` +} + +type attemptRecord struct { + DeliveryID string `json:"delivery_id"` + AttemptNo int `json:"attempt_no"` + ScheduledForMS int64 `json:"scheduled_for_ms"` + StartedAtMS *int64 `json:"started_at_ms,omitempty"` + FinishedAtMS *int64 `json:"finished_at_ms,omitempty"` + Status attempt.Status `json:"status"` + ProviderClassification string `json:"provider_classification,omitempty"` + ProviderSummary string `json:"provider_summary,omitempty"` +} + +type idempotencyRecord struct { + Source deliverydomain.Source `json:"source"` + IdempotencyKey string `json:"idempotency_key"` + DeliveryID string `json:"delivery_id"` + RequestFingerprint string `json:"request_fingerprint"` + CreatedAtMS int64 `json:"created_at_ms"` + ExpiresAtMS int64 `json:"expires_at_ms"` +} + +type deadLetterRecord struct { + DeliveryID string `json:"delivery_id"` + FinalAttemptNo int `json:"final_attempt_no"` + FailureClassification string `json:"failure_classification"` + ProviderSummary string `json:"provider_summary,omitempty"` + CreatedAtMS int64 `json:"created_at_ms"` + RecoveryHint string `json:"recovery_hint,omitempty"` +} + +type deliveryPayloadRecord struct { + DeliveryID string `json:"delivery_id"` + Attachments []deliveryPayloadAttachmentRecord `json:"attachments"` +} + +type deliveryPayloadAttachmentRecord struct { + Filename string `json:"filename"` + ContentType string `json:"content_type"` + ContentBase64 string `json:"content_base64"` + SizeBytes int64 `json:"size_bytes"` +} + +type malformedCommandRecord struct { + StreamEntryID string `json:"stream_entry_id"` + DeliveryID string `json:"delivery_id,omitempty"` + Source string `json:"source,omitempty"` + IdempotencyKey string `json:"idempotency_key,omitempty"` + FailureCode malformedcommand.FailureCode `json:"failure_code"` + FailureMessage string `json:"failure_message"` + RawFieldsJSON map[string]any `json:"raw_fields_json"` + 
RecordedAtMS int64 `json:"recorded_at_ms"` +} + +type streamOffsetRecord struct { + Stream string `json:"stream"` + LastProcessedEntryID string `json:"last_processed_entry_id"` + UpdatedAtMS int64 `json:"updated_at_ms"` +} + +// StreamOffset stores the persisted progress of one plain-XREAD consumer. +type StreamOffset struct { + // Stream stores the Redis Stream name. + Stream string + + // LastProcessedEntryID stores the last durably processed entry id. + LastProcessedEntryID string + + // UpdatedAt stores when the offset was updated. + UpdatedAt time.Time +} + +// Validate reports whether offset contains a complete persisted progress +// record. +func (offset StreamOffset) Validate() error { + if strings.TrimSpace(offset.Stream) == "" { + return fmt.Errorf("stream offset stream must not be empty") + } + if strings.TrimSpace(offset.LastProcessedEntryID) == "" { + return fmt.Errorf("stream offset last processed entry id must not be empty") + } + if err := common.ValidateTimestamp("stream offset updated at", offset.UpdatedAt); err != nil { + return err + } + + return nil +} + +type attachmentRecord struct { + Filename string `json:"filename"` + ContentType string `json:"content_type"` + SizeBytes int64 `json:"size_bytes"` +} + +// MarshalDelivery encodes record into the strict Redis JSON shape used for +// mail_delivery records. 
+func MarshalDelivery(record deliverydomain.Delivery) ([]byte, error) { + if err := record.Validate(); err != nil { + return nil, fmt.Errorf("marshal redis delivery record: %w", err) + } + + stored := deliveryRecord{ + DeliveryID: record.DeliveryID.String(), + ResendParentDeliveryID: record.ResendParentDeliveryID.String(), + Source: record.Source, + PayloadMode: record.PayloadMode, + TemplateID: record.TemplateID.String(), + TemplateVariables: optionalJSONObject(record.TemplateVariables), + To: cloneEmailStrings(record.Envelope.To), + Cc: cloneEmailStrings(record.Envelope.Cc), + Bcc: cloneEmailStrings(record.Envelope.Bcc), + ReplyTo: cloneEmailStrings(record.Envelope.ReplyTo), + Subject: record.Content.Subject, + TextBody: record.Content.TextBody, + HTMLBody: record.Content.HTMLBody, + Attachments: cloneAttachments(record.Attachments), + Locale: record.Locale.String(), + LocaleFallbackUsed: record.LocaleFallbackUsed, + IdempotencyKey: record.IdempotencyKey.String(), + Status: record.Status, + AttemptCount: record.AttemptCount, + LastAttemptStatus: record.LastAttemptStatus, + ProviderSummary: record.ProviderSummary, + CreatedAtMS: record.CreatedAt.UTC().UnixMilli(), + UpdatedAtMS: record.UpdatedAt.UTC().UnixMilli(), + SentAtMS: optionalUnixMilli(record.SentAt), + SuppressedAtMS: optionalUnixMilli(record.SuppressedAt), + FailedAtMS: optionalUnixMilli(record.FailedAt), + DeadLetteredAtMS: optionalUnixMilli(record.DeadLetteredAt), + } + + payload, err := json.Marshal(stored) + if err != nil { + return nil, fmt.Errorf("marshal redis delivery record: %w", err) + } + + return payload, nil +} + +// UnmarshalDelivery decodes payload from the strict Redis JSON shape used for +// mail_delivery records. 
+func UnmarshalDelivery(payload []byte) (deliverydomain.Delivery, error) { + var stored deliveryRecord + if err := decodeStrictJSON("decode redis delivery record", payload, &stored); err != nil { + return deliverydomain.Delivery{}, err + } + + record := deliverydomain.Delivery{ + DeliveryID: common.DeliveryID(stored.DeliveryID), + ResendParentDeliveryID: common.DeliveryID(stored.ResendParentDeliveryID), + Source: stored.Source, + PayloadMode: stored.PayloadMode, + TemplateID: common.TemplateID(stored.TemplateID), + TemplateVariables: cloneJSONObjectPtr(stored.TemplateVariables), + Envelope: deliverydomain.Envelope{ + To: cloneEmails(stored.To), + Cc: cloneEmails(stored.Cc), + Bcc: cloneEmails(stored.Bcc), + ReplyTo: cloneEmails(stored.ReplyTo), + }, + Content: deliverydomain.Content{ + Subject: stored.Subject, + TextBody: stored.TextBody, + HTMLBody: stored.HTMLBody, + }, + Attachments: inflateAttachments(stored.Attachments), + Locale: common.Locale(stored.Locale), + LocaleFallbackUsed: stored.LocaleFallbackUsed, + IdempotencyKey: common.IdempotencyKey(stored.IdempotencyKey), + Status: stored.Status, + AttemptCount: stored.AttemptCount, + LastAttemptStatus: stored.LastAttemptStatus, + ProviderSummary: stored.ProviderSummary, + CreatedAt: time.UnixMilli(stored.CreatedAtMS).UTC(), + UpdatedAt: time.UnixMilli(stored.UpdatedAtMS).UTC(), + SentAt: inflateOptionalTime(stored.SentAtMS), + SuppressedAt: inflateOptionalTime(stored.SuppressedAtMS), + FailedAt: inflateOptionalTime(stored.FailedAtMS), + DeadLetteredAt: inflateOptionalTime(stored.DeadLetteredAtMS), + } + if err := record.Validate(); err != nil { + return deliverydomain.Delivery{}, fmt.Errorf("decode redis delivery record: %w", err) + } + + return record, nil +} + +// MarshalAttempt encodes record into the strict Redis JSON shape used for +// mail_attempt records. 
+func MarshalAttempt(record attempt.Attempt) ([]byte, error) { + if err := record.Validate(); err != nil { + return nil, fmt.Errorf("marshal redis attempt record: %w", err) + } + + stored := attemptRecord{ + DeliveryID: record.DeliveryID.String(), + AttemptNo: record.AttemptNo, + ScheduledForMS: record.ScheduledFor.UTC().UnixMilli(), + StartedAtMS: optionalUnixMilli(record.StartedAt), + FinishedAtMS: optionalUnixMilli(record.FinishedAt), + Status: record.Status, + ProviderClassification: record.ProviderClassification, + ProviderSummary: record.ProviderSummary, + } + + payload, err := json.Marshal(stored) + if err != nil { + return nil, fmt.Errorf("marshal redis attempt record: %w", err) + } + + return payload, nil +} + +// UnmarshalAttempt decodes payload from the strict Redis JSON shape used for +// mail_attempt records. +func UnmarshalAttempt(payload []byte) (attempt.Attempt, error) { + var stored attemptRecord + if err := decodeStrictJSON("decode redis attempt record", payload, &stored); err != nil { + return attempt.Attempt{}, err + } + + record := attempt.Attempt{ + DeliveryID: common.DeliveryID(stored.DeliveryID), + AttemptNo: stored.AttemptNo, + ScheduledFor: time.UnixMilli(stored.ScheduledForMS).UTC(), + StartedAt: inflateOptionalTime(stored.StartedAtMS), + FinishedAt: inflateOptionalTime(stored.FinishedAtMS), + Status: stored.Status, + ProviderClassification: stored.ProviderClassification, + ProviderSummary: stored.ProviderSummary, + } + if err := record.Validate(); err != nil { + return attempt.Attempt{}, fmt.Errorf("decode redis attempt record: %w", err) + } + + return record, nil +} + +// MarshalIdempotency encodes record into the strict Redis JSON shape used for +// mail_idempotency_record values. 
+func MarshalIdempotency(record idempotency.Record) ([]byte, error) { + if err := record.Validate(); err != nil { + return nil, fmt.Errorf("marshal redis idempotency record: %w", err) + } + + stored := idempotencyRecord{ + Source: record.Source, + IdempotencyKey: record.IdempotencyKey.String(), + DeliveryID: record.DeliveryID.String(), + RequestFingerprint: record.RequestFingerprint, + CreatedAtMS: record.CreatedAt.UTC().UnixMilli(), + ExpiresAtMS: record.ExpiresAt.UTC().UnixMilli(), + } + + payload, err := json.Marshal(stored) + if err != nil { + return nil, fmt.Errorf("marshal redis idempotency record: %w", err) + } + + return payload, nil +} + +// UnmarshalIdempotency decodes payload from the strict Redis JSON shape used +// for mail_idempotency_record values. +func UnmarshalIdempotency(payload []byte) (idempotency.Record, error) { + var stored idempotencyRecord + if err := decodeStrictJSON("decode redis idempotency record", payload, &stored); err != nil { + return idempotency.Record{}, err + } + + record := idempotency.Record{ + Source: stored.Source, + IdempotencyKey: common.IdempotencyKey(stored.IdempotencyKey), + DeliveryID: common.DeliveryID(stored.DeliveryID), + RequestFingerprint: stored.RequestFingerprint, + CreatedAt: time.UnixMilli(stored.CreatedAtMS).UTC(), + ExpiresAt: time.UnixMilli(stored.ExpiresAtMS).UTC(), + } + if err := record.Validate(); err != nil { + return idempotency.Record{}, fmt.Errorf("decode redis idempotency record: %w", err) + } + + return record, nil +} + +// MarshalDeadLetter encodes entry into the strict Redis JSON shape used for +// mail_dead_letter_entry values. 
+func MarshalDeadLetter(entry deliverydomain.DeadLetterEntry) ([]byte, error) { + if err := entry.Validate(); err != nil { + return nil, fmt.Errorf("marshal redis dead-letter record: %w", err) + } + + stored := deadLetterRecord{ + DeliveryID: entry.DeliveryID.String(), + FinalAttemptNo: entry.FinalAttemptNo, + FailureClassification: entry.FailureClassification, + ProviderSummary: entry.ProviderSummary, + CreatedAtMS: entry.CreatedAt.UTC().UnixMilli(), + RecoveryHint: entry.RecoveryHint, + } + + payload, err := json.Marshal(stored) + if err != nil { + return nil, fmt.Errorf("marshal redis dead-letter record: %w", err) + } + + return payload, nil +} + +// UnmarshalDeadLetter decodes payload from the strict Redis JSON shape used +// for mail_dead_letter_entry values. +func UnmarshalDeadLetter(payload []byte) (deliverydomain.DeadLetterEntry, error) { + var stored deadLetterRecord + if err := decodeStrictJSON("decode redis dead-letter record", payload, &stored); err != nil { + return deliverydomain.DeadLetterEntry{}, err + } + + entry := deliverydomain.DeadLetterEntry{ + DeliveryID: common.DeliveryID(stored.DeliveryID), + FinalAttemptNo: stored.FinalAttemptNo, + FailureClassification: stored.FailureClassification, + ProviderSummary: stored.ProviderSummary, + CreatedAt: time.UnixMilli(stored.CreatedAtMS).UTC(), + RecoveryHint: stored.RecoveryHint, + } + if err := entry.Validate(); err != nil { + return deliverydomain.DeadLetterEntry{}, fmt.Errorf("decode redis dead-letter record: %w", err) + } + + return entry, nil +} + +// MarshalDeliveryPayload encodes payload into the strict Redis JSON shape used +// for raw generic-delivery attachment bundles. 
+func MarshalDeliveryPayload(payload acceptgenericdelivery.DeliveryPayload) ([]byte, error) { + if err := payload.Validate(); err != nil { + return nil, fmt.Errorf("marshal redis delivery payload record: %w", err) + } + + stored := deliveryPayloadRecord{ + DeliveryID: payload.DeliveryID.String(), + Attachments: cloneDeliveryPayloadAttachments(payload.Attachments), + } + + encoded, err := json.Marshal(stored) + if err != nil { + return nil, fmt.Errorf("marshal redis delivery payload record: %w", err) + } + + return encoded, nil +} + +// UnmarshalDeliveryPayload decodes payload from the strict Redis JSON shape +// used for raw generic-delivery attachment bundles. +func UnmarshalDeliveryPayload(payload []byte) (acceptgenericdelivery.DeliveryPayload, error) { + var stored deliveryPayloadRecord + if err := decodeStrictJSON("decode redis delivery payload record", payload, &stored); err != nil { + return acceptgenericdelivery.DeliveryPayload{}, err + } + + record := acceptgenericdelivery.DeliveryPayload{ + DeliveryID: common.DeliveryID(stored.DeliveryID), + Attachments: inflateDeliveryPayloadAttachments(stored.Attachments), + } + if err := record.Validate(); err != nil { + return acceptgenericdelivery.DeliveryPayload{}, fmt.Errorf("decode redis delivery payload record: %w", err) + } + + return record, nil +} + +// MarshalMalformedCommand encodes entry into the strict Redis JSON shape used +// for operator-visible malformed async command records. 
+func MarshalMalformedCommand(entry malformedcommand.Entry) ([]byte, error) { + if err := entry.Validate(); err != nil { + return nil, fmt.Errorf("marshal redis malformed command record: %w", err) + } + + stored := malformedCommandRecord{ + StreamEntryID: entry.StreamEntryID, + DeliveryID: entry.DeliveryID, + Source: entry.Source, + IdempotencyKey: entry.IdempotencyKey, + FailureCode: entry.FailureCode, + FailureMessage: entry.FailureMessage, + RawFieldsJSON: cloneJSONObject(entry.RawFields), + RecordedAtMS: entry.RecordedAt.UTC().UnixMilli(), + } + + encoded, err := json.Marshal(stored) + if err != nil { + return nil, fmt.Errorf("marshal redis malformed command record: %w", err) + } + + return encoded, nil +} + +// UnmarshalMalformedCommand decodes payload from the strict Redis JSON shape +// used for operator-visible malformed async command records. +func UnmarshalMalformedCommand(payload []byte) (malformedcommand.Entry, error) { + var stored malformedCommandRecord + if err := decodeStrictJSON("decode redis malformed command record", payload, &stored); err != nil { + return malformedcommand.Entry{}, err + } + + entry := malformedcommand.Entry{ + StreamEntryID: stored.StreamEntryID, + DeliveryID: stored.DeliveryID, + Source: stored.Source, + IdempotencyKey: stored.IdempotencyKey, + FailureCode: stored.FailureCode, + FailureMessage: stored.FailureMessage, + RawFields: cloneJSONObject(stored.RawFieldsJSON), + RecordedAt: time.UnixMilli(stored.RecordedAtMS).UTC(), + } + if err := entry.Validate(); err != nil { + return malformedcommand.Entry{}, fmt.Errorf("decode redis malformed command record: %w", err) + } + + return entry, nil +} + +// MarshalStreamOffset encodes offset into the strict Redis JSON shape used for +// persisted consumer progress. 
+func MarshalStreamOffset(offset StreamOffset) ([]byte, error) { + if err := offset.Validate(); err != nil { + return nil, fmt.Errorf("marshal redis stream offset record: %w", err) + } + + stored := streamOffsetRecord{ + Stream: offset.Stream, + LastProcessedEntryID: offset.LastProcessedEntryID, + UpdatedAtMS: offset.UpdatedAt.UTC().UnixMilli(), + } + + encoded, err := json.Marshal(stored) + if err != nil { + return nil, fmt.Errorf("marshal redis stream offset record: %w", err) + } + + return encoded, nil +} + +// UnmarshalStreamOffset decodes payload from the strict Redis JSON shape used +// for persisted consumer progress. +func UnmarshalStreamOffset(payload []byte) (StreamOffset, error) { + var stored streamOffsetRecord + if err := decodeStrictJSON("decode redis stream offset record", payload, &stored); err != nil { + return StreamOffset{}, err + } + + offset := StreamOffset{ + Stream: stored.Stream, + LastProcessedEntryID: stored.LastProcessedEntryID, + UpdatedAt: time.UnixMilli(stored.UpdatedAtMS).UTC(), + } + if err := offset.Validate(); err != nil { + return StreamOffset{}, fmt.Errorf("decode redis stream offset record: %w", err) + } + + return offset, nil +} + +func decodeStrictJSON(operation string, payload []byte, target any) error { + decoder := json.NewDecoder(bytes.NewReader(payload)) + decoder.DisallowUnknownFields() + + if err := decoder.Decode(target); err != nil { + return fmt.Errorf("%s: %w", operation, err) + } + if err := decoder.Decode(&struct{}{}); err != io.EOF { + if err == nil { + return fmt.Errorf("%s: unexpected trailing JSON input", operation) + } + return fmt.Errorf("%s: %w", operation, err) + } + + return nil +} + +func cloneEmailStrings(values []common.Email) []string { + if values == nil { + return nil + } + + cloned := make([]string, len(values)) + for index, value := range values { + cloned[index] = value.String() + } + + return cloned +} + +func cloneEmails(values []string) []common.Email { + if values == nil { + return nil + } + 
+ cloned := make([]common.Email, len(values)) + for index, value := range values { + cloned[index] = common.Email(value) + } + + return cloned +} + +func cloneAttachments(values []common.AttachmentMetadata) []attachmentRecord { + if values == nil { + return nil + } + + cloned := make([]attachmentRecord, len(values)) + for index, value := range values { + cloned[index] = attachmentRecord{ + Filename: value.Filename, + ContentType: value.ContentType, + SizeBytes: value.SizeBytes, + } + } + + return cloned +} + +func inflateAttachments(values []attachmentRecord) []common.AttachmentMetadata { + if values == nil { + return nil + } + + cloned := make([]common.AttachmentMetadata, len(values)) + for index, value := range values { + cloned[index] = common.AttachmentMetadata{ + Filename: value.Filename, + ContentType: value.ContentType, + SizeBytes: value.SizeBytes, + } + } + + return cloned +} + +func optionalJSONObject(value map[string]any) *map[string]any { + if value == nil { + return nil + } + + cloned := make(map[string]any, len(value)) + for key, item := range value { + cloned[key] = cloneJSONValue(item) + } + + return &cloned +} + +func cloneJSONObjectPtr(value *map[string]any) map[string]any { + if value == nil { + return nil + } + + cloned := make(map[string]any, len(*value)) + for key, item := range *value { + cloned[key] = cloneJSONValue(item) + } + + return cloned +} + +func cloneJSONObject(value map[string]any) map[string]any { + if value == nil { + return nil + } + + cloned := make(map[string]any, len(value)) + for key, item := range value { + cloned[key] = cloneJSONValue(item) + } + + return cloned +} + +func cloneJSONValue(value any) any { + switch typed := value.(type) { + case map[string]any: + cloned := make(map[string]any, len(typed)) + for key, item := range typed { + cloned[key] = cloneJSONValue(item) + } + return cloned + case []any: + cloned := make([]any, len(typed)) + for index, item := range typed { + cloned[index] = cloneJSONValue(item) + } + 
return cloned + default: + return typed + } +} + +func cloneDeliveryPayloadAttachments(values []acceptgenericdelivery.AttachmentPayload) []deliveryPayloadAttachmentRecord { + if values == nil { + return nil + } + + cloned := make([]deliveryPayloadAttachmentRecord, len(values)) + for index, value := range values { + cloned[index] = deliveryPayloadAttachmentRecord{ + Filename: value.Filename, + ContentType: value.ContentType, + ContentBase64: value.ContentBase64, + SizeBytes: value.SizeBytes, + } + } + + return cloned +} + +func inflateDeliveryPayloadAttachments(values []deliveryPayloadAttachmentRecord) []acceptgenericdelivery.AttachmentPayload { + if values == nil { + return nil + } + + cloned := make([]acceptgenericdelivery.AttachmentPayload, len(values)) + for index, value := range values { + cloned[index] = acceptgenericdelivery.AttachmentPayload{ + Filename: value.Filename, + ContentType: value.ContentType, + ContentBase64: value.ContentBase64, + SizeBytes: value.SizeBytes, + } + } + + return cloned +} + +func optionalUnixMilli(value *time.Time) *int64 { + if value == nil { + return nil + } + + milliseconds := value.UTC().UnixMilli() + return &milliseconds +} + +func inflateOptionalTime(value *int64) *time.Time { + if value == nil { + return nil + } + + converted := time.UnixMilli(*value).UTC() + return &converted +} diff --git a/mail/internal/adapters/redisstate/codecs_test.go b/mail/internal/adapters/redisstate/codecs_test.go new file mode 100644 index 0000000..dcd91bc --- /dev/null +++ b/mail/internal/adapters/redisstate/codecs_test.go @@ -0,0 +1,124 @@ +package redisstate + +import ( + "bytes" + "testing" + + "galaxy/mail/internal/domain/attempt" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/domain/idempotency" + + "github.com/stretchr/testify/require" +) + +func TestDeliveryCodecRoundTrip(t *testing.T) { + t.Parallel() + + record := validDelivery(t) + + payload, err := MarshalDelivery(record) + require.NoError(t, err) + + 
decoded, err := UnmarshalDelivery(payload) + require.NoError(t, err) + require.Equal(t, record, decoded) +} + +func TestAttemptCodecRoundTrip(t *testing.T) { + t.Parallel() + + record := validTerminalAttempt(t, validDelivery(t).DeliveryID) + + payload, err := MarshalAttempt(record) + require.NoError(t, err) + + decoded, err := UnmarshalAttempt(payload) + require.NoError(t, err) + require.Equal(t, record, decoded) +} + +func TestIdempotencyCodecRoundTrip(t *testing.T) { + t.Parallel() + + deliveryRecord := validDelivery(t) + record := validIdempotencyRecord(t, deliveryRecord.Source, deliveryRecord.DeliveryID, deliveryRecord.IdempotencyKey) + + payload, err := MarshalIdempotency(record) + require.NoError(t, err) + + decoded, err := UnmarshalIdempotency(payload) + require.NoError(t, err) + require.Equal(t, record, decoded) +} + +func TestDeadLetterCodecRoundTrip(t *testing.T) { + t.Parallel() + + record := validDeadLetterEntry(t, validDelivery(t).DeliveryID) + + payload, err := MarshalDeadLetter(record) + require.NoError(t, err) + + decoded, err := UnmarshalDeadLetter(payload) + require.NoError(t, err) + require.Equal(t, record, decoded) +} + +func TestDeliveryCodecRejectsUnknownField(t *testing.T) { + t.Parallel() + + payload, err := MarshalDelivery(validDelivery(t)) + require.NoError(t, err) + + payload = append(payload[:len(payload)-1], []byte(`,"extra":true}`)...) 
+ + _, err = UnmarshalDelivery(payload) + require.Error(t, err) + require.ErrorContains(t, err, "unknown field") +} + +func TestAttemptCodecRejectsWrongType(t *testing.T) { + t.Parallel() + + payload, err := MarshalAttempt(validTerminalAttempt(t, validDelivery(t).DeliveryID)) + require.NoError(t, err) + + payload = bytes.Replace(payload, []byte(`"attempt_no":2`), []byte(`"attempt_no":"2"`), 1) + + _, err = UnmarshalAttempt(payload) + require.Error(t, err) + require.ErrorContains(t, err, "cannot unmarshal") +} + +func TestIdempotencyCodecRejectsTrailingJSON(t *testing.T) { + t.Parallel() + + deliveryRecord := validDelivery(t) + payload, err := MarshalIdempotency(validIdempotencyRecord(t, deliveryRecord.Source, deliveryRecord.DeliveryID, deliveryRecord.IdempotencyKey)) + require.NoError(t, err) + + payload = append(payload, []byte(` {}`)...) + + _, err = UnmarshalIdempotency(payload) + require.Error(t, err) + require.ErrorContains(t, err, "unexpected trailing JSON input") +} + +func TestDeadLetterCodecRejectsUnknownField(t *testing.T) { + t.Parallel() + + payload, err := MarshalDeadLetter(validDeadLetterEntry(t, validDelivery(t).DeliveryID)) + require.NoError(t, err) + + payload = append(payload[:len(payload)-1], []byte(`,"unexpected":"value"}`)...) + + _, err = UnmarshalDeadLetter(payload) + require.Error(t, err) + require.ErrorContains(t, err, "unknown field") +} + +var ( + _ = attempt.Attempt{} + _ = deliverydomain.DeadLetterEntry{} + _ = idempotency.Record{} +) diff --git a/mail/internal/adapters/redisstate/errors.go b/mail/internal/adapters/redisstate/errors.go new file mode 100644 index 0000000..ebefcf3 --- /dev/null +++ b/mail/internal/adapters/redisstate/errors.go @@ -0,0 +1,12 @@ +// Package redisstate defines the frozen Redis keyspace, strict JSON records, +// and low-level mutation helpers used by future Mail Service Redis adapters. 
+package redisstate + +import "errors" + +var ( + // ErrConflict reports that a Redis mutation could not be applied because + // one of the watched or newly created keys already existed or changed + // concurrently. + ErrConflict = errors.New("redis state conflict") +) diff --git a/mail/internal/adapters/redisstate/fixtures_test.go b/mail/internal/adapters/redisstate/fixtures_test.go new file mode 100644 index 0000000..26ed85b --- /dev/null +++ b/mail/internal/adapters/redisstate/fixtures_test.go @@ -0,0 +1,201 @@ +package redisstate + +import ( + "encoding/base64" + "time" + + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/domain/idempotency" + "galaxy/mail/internal/domain/malformedcommand" + "galaxy/mail/internal/service/acceptgenericdelivery" + + "github.com/stretchr/testify/require" +) + +func validDelivery(t require.TestingT) deliverydomain.Delivery { + locale, err := common.ParseLocale("fr-fr") + require.NoError(t, err) + + createdAt := time.Unix(1_775_121_700, 0).UTC() + updatedAt := createdAt.Add(2 * time.Minute) + sentAt := updatedAt.Add(15 * time.Second) + + record := deliverydomain.Delivery{ + DeliveryID: common.DeliveryID("delivery-123"), + ResendParentDeliveryID: common.DeliveryID("delivery-parent-001"), + Source: deliverydomain.SourceOperatorResend, + PayloadMode: deliverydomain.PayloadModeTemplate, + TemplateID: common.TemplateID("auth.login_code"), + Envelope: deliverydomain.Envelope{ + To: []common.Email{common.Email("pilot@example.com")}, + Cc: []common.Email{common.Email("copilot@example.com")}, + Bcc: []common.Email{common.Email("ops@example.com")}, + ReplyTo: []common.Email{common.Email("noreply@example.com")}, + }, + Content: deliverydomain.Content{ + Subject: "Your login code", + TextBody: "Code: 123456", + HTMLBody: "
<p>Code: 123456</p>
", + }, + Attachments: []common.AttachmentMetadata{ + {Filename: "instructions.txt", ContentType: "text/plain; charset=utf-8", SizeBytes: 128}, + }, + Locale: locale, + TemplateVariables: map[string]any{ + "code": "123456", + }, + LocaleFallbackUsed: true, + IdempotencyKey: common.IdempotencyKey("operator:resend:delivery-123"), + Status: deliverydomain.StatusSent, + AttemptCount: 2, + LastAttemptStatus: attempt.StatusProviderAccepted, + ProviderSummary: "queued by provider", + CreatedAt: createdAt, + UpdatedAt: updatedAt, + SentAt: &sentAt, + } + require.NoError(t, record.Validate()) + + return record +} + +func validScheduledAttempt(t require.TestingT, deliveryID common.DeliveryID) attempt.Attempt { + scheduledFor := time.Unix(1_775_121_820, 0).UTC() + + record := attempt.Attempt{ + DeliveryID: deliveryID, + AttemptNo: 1, + ScheduledFor: scheduledFor, + Status: attempt.StatusScheduled, + } + require.NoError(t, record.Validate()) + + return record +} + +func validQueuedTemplateDelivery(t require.TestingT) deliverydomain.Delivery { + record := validDelivery(t) + record.DeliveryID = common.DeliveryID("delivery-queued") + record.ResendParentDeliveryID = "" + record.Source = deliverydomain.SourceNotification + record.Status = deliverydomain.StatusQueued + record.AttemptCount = 1 + record.LastAttemptStatus = "" + record.ProviderSummary = "" + record.LocaleFallbackUsed = false + record.Content = deliverydomain.Content{} + record.CreatedAt = time.Unix(1_775_121_700, 0).UTC() + record.UpdatedAt = record.CreatedAt + record.SentAt = nil + record.SuppressedAt = nil + record.FailedAt = nil + record.DeadLetteredAt = nil + record.IdempotencyKey = common.IdempotencyKey("notification:delivery-queued") + require.NoError(t, record.Validate()) + + return record +} + +func validTerminalAttempt(t require.TestingT, deliveryID common.DeliveryID) attempt.Attempt { + scheduledFor := time.Unix(1_775_121_820, 0).UTC() + startedAt := scheduledFor.Add(5 * time.Second) + finishedAt := 
startedAt.Add(2 * time.Second) + + record := attempt.Attempt{ + DeliveryID: deliveryID, + AttemptNo: 2, + ScheduledFor: scheduledFor, + StartedAt: &startedAt, + FinishedAt: &finishedAt, + Status: attempt.StatusProviderAccepted, + ProviderClassification: "accepted", + ProviderSummary: "queued by provider", + } + require.NoError(t, record.Validate()) + + return record +} + +func validRenderFailedAttempt(t require.TestingT, deliveryID common.DeliveryID) attempt.Attempt { + record := validScheduledAttempt(t, deliveryID) + startedAt := record.ScheduledFor.Add(time.Second) + finishedAt := startedAt + record.StartedAt = &startedAt + record.FinishedAt = &finishedAt + record.Status = attempt.StatusRenderFailed + record.ProviderClassification = "missing_required_variable" + record.ProviderSummary = "missing required variables: player.name" + require.NoError(t, record.Validate()) + + return record +} + +func validIdempotencyRecord(t require.TestingT, source deliverydomain.Source, deliveryID common.DeliveryID, key common.IdempotencyKey) idempotency.Record { + createdAt := time.Now().UTC().Truncate(time.Millisecond).Add(-time.Minute) + + record := idempotency.Record{ + Source: source, + IdempotencyKey: key, + DeliveryID: deliveryID, + RequestFingerprint: "sha256:abcdef123456", + CreatedAt: createdAt, + ExpiresAt: createdAt.Add(IdempotencyTTL), + } + require.NoError(t, record.Validate()) + + return record +} + +func validDeadLetterEntry(t require.TestingT, deliveryID common.DeliveryID) deliverydomain.DeadLetterEntry { + entry := deliverydomain.DeadLetterEntry{ + DeliveryID: deliveryID, + FinalAttemptNo: 3, + FailureClassification: "retry_exhausted", + ProviderSummary: "smtp timeout", + CreatedAt: time.Unix(1_775_122_000, 0).UTC(), + RecoveryHint: "check SMTP connectivity", + } + require.NoError(t, entry.Validate()) + + return entry +} + +func validDeliveryPayload(t require.TestingT, deliveryID common.DeliveryID) acceptgenericdelivery.DeliveryPayload { + payload := 
acceptgenericdelivery.DeliveryPayload{ + DeliveryID: deliveryID, + Attachments: []acceptgenericdelivery.AttachmentPayload{ + { + Filename: "instructions.txt", + ContentType: "text/plain; charset=utf-8", + ContentBase64: base64.StdEncoding.EncodeToString([]byte("read me")), + SizeBytes: int64(len([]byte("read me"))), + }, + }, + } + require.NoError(t, payload.Validate()) + + return payload +} + +func validMalformedCommandEntry(t require.TestingT) malformedcommand.Entry { + entry := malformedcommand.Entry{ + StreamEntryID: "1775121700000-0", + DeliveryID: "mail-123", + Source: "notification", + IdempotencyKey: "notification:mail-123", + FailureCode: malformedcommand.FailureCodeInvalidPayload, + FailureMessage: "payload_json.subject is required", + RawFields: map[string]any{ + "delivery_id": "mail-123", + "source": "notification", + "payload_mode": "rendered", + "idempotency_key": "notification:mail-123", + }, + RecordedAt: time.Unix(1_775_121_700, 0).UTC(), + } + require.NoError(t, entry.Validate()) + + return entry +} diff --git a/mail/internal/adapters/redisstate/generic_acceptance_store.go b/mail/internal/adapters/redisstate/generic_acceptance_store.go new file mode 100644 index 0000000..8c9ab7b --- /dev/null +++ b/mail/internal/adapters/redisstate/generic_acceptance_store.go @@ -0,0 +1,148 @@ +package redisstate + +import ( + "context" + "errors" + "fmt" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/domain/idempotency" + "galaxy/mail/internal/service/acceptgenericdelivery" + + "github.com/redis/go-redis/v9" +) + +// GenericAcceptanceStore provides the Redis-backed durable storage used by the +// generic-delivery acceptance use case. +type GenericAcceptanceStore struct { + client *redis.Client + writer *AtomicWriter + keys Keyspace +} + +// NewGenericAcceptanceStore constructs one Redis-backed generic acceptance +// store. 
+func NewGenericAcceptanceStore(client *redis.Client) (*GenericAcceptanceStore, error) { + if client == nil { + return nil, errors.New("new generic acceptance store: nil redis client") + } + + writer, err := NewAtomicWriter(client) + if err != nil { + return nil, fmt.Errorf("new generic acceptance store: %w", err) + } + + return &GenericAcceptanceStore{ + client: client, + writer: writer, + keys: Keyspace{}, + }, nil +} + +// CreateAcceptance stores one generic-delivery acceptance write set in Redis. +func (store *GenericAcceptanceStore) CreateAcceptance(ctx context.Context, input acceptgenericdelivery.CreateAcceptanceInput) error { + if store == nil || store.client == nil || store.writer == nil { + return errors.New("create generic acceptance: nil store") + } + if ctx == nil { + return errors.New("create generic acceptance: nil context") + } + if err := input.Validate(); err != nil { + return fmt.Errorf("create generic acceptance: %w", err) + } + + writerInput := CreateAcceptanceInput{ + Delivery: input.Delivery, + FirstAttempt: &input.FirstAttempt, + Idempotency: &input.Idempotency, + } + if input.DeliveryPayload != nil { + writerInput.DeliveryPayload = input.DeliveryPayload + } + + err := store.writer.CreateAcceptance(ctx, writerInput) + if errors.Is(err, ErrConflict) { + return fmt.Errorf("create generic acceptance: %w", acceptgenericdelivery.ErrConflict) + } + if err != nil { + return fmt.Errorf("create generic acceptance: %w", err) + } + + return nil +} + +// GetIdempotency loads one accepted idempotency scope from Redis. 
+func (store *GenericAcceptanceStore) GetIdempotency(ctx context.Context, source deliverydomain.Source, key common.IdempotencyKey) (idempotency.Record, bool, error) { + if store == nil || store.client == nil { + return idempotency.Record{}, false, errors.New("get generic acceptance idempotency: nil store") + } + if ctx == nil { + return idempotency.Record{}, false, errors.New("get generic acceptance idempotency: nil context") + } + + payload, err := store.client.Get(ctx, store.keys.Idempotency(source, key)).Bytes() + switch { + case errors.Is(err, redis.Nil): + return idempotency.Record{}, false, nil + case err != nil: + return idempotency.Record{}, false, fmt.Errorf("get generic acceptance idempotency: %w", err) + } + + record, err := UnmarshalIdempotency(payload) + if err != nil { + return idempotency.Record{}, false, fmt.Errorf("get generic acceptance idempotency: %w", err) + } + + return record, true, nil +} + +// GetDelivery loads one accepted delivery by its identifier. +func (store *GenericAcceptanceStore) GetDelivery(ctx context.Context, deliveryID common.DeliveryID) (deliverydomain.Delivery, bool, error) { + if store == nil || store.client == nil { + return deliverydomain.Delivery{}, false, errors.New("get generic acceptance delivery: nil store") + } + if ctx == nil { + return deliverydomain.Delivery{}, false, errors.New("get generic acceptance delivery: nil context") + } + + payload, err := store.client.Get(ctx, store.keys.Delivery(deliveryID)).Bytes() + switch { + case errors.Is(err, redis.Nil): + return deliverydomain.Delivery{}, false, nil + case err != nil: + return deliverydomain.Delivery{}, false, fmt.Errorf("get generic acceptance delivery: %w", err) + } + + record, err := UnmarshalDelivery(payload) + if err != nil { + return deliverydomain.Delivery{}, false, fmt.Errorf("get generic acceptance delivery: %w", err) + } + + return record, true, nil +} + +// GetDeliveryPayload loads one raw accepted attachment bundle by delivery id. 
+func (store *GenericAcceptanceStore) GetDeliveryPayload(ctx context.Context, deliveryID common.DeliveryID) (acceptgenericdelivery.DeliveryPayload, bool, error) { + if store == nil || store.client == nil { + return acceptgenericdelivery.DeliveryPayload{}, false, errors.New("get generic acceptance delivery payload: nil store") + } + if ctx == nil { + return acceptgenericdelivery.DeliveryPayload{}, false, errors.New("get generic acceptance delivery payload: nil context") + } + + payload, err := store.client.Get(ctx, store.keys.DeliveryPayload(deliveryID)).Bytes() + switch { + case errors.Is(err, redis.Nil): + return acceptgenericdelivery.DeliveryPayload{}, false, nil + case err != nil: + return acceptgenericdelivery.DeliveryPayload{}, false, fmt.Errorf("get generic acceptance delivery payload: %w", err) + } + + record, err := UnmarshalDeliveryPayload(payload) + if err != nil { + return acceptgenericdelivery.DeliveryPayload{}, false, fmt.Errorf("get generic acceptance delivery payload: %w", err) + } + + return record, true, nil +} diff --git a/mail/internal/adapters/redisstate/generic_acceptance_store_test.go b/mail/internal/adapters/redisstate/generic_acceptance_store_test.go new file mode 100644 index 0000000..2d63d39 --- /dev/null +++ b/mail/internal/adapters/redisstate/generic_acceptance_store_test.go @@ -0,0 +1,145 @@ +package redisstate + +import ( + "context" + "testing" + "time" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/service/acceptgenericdelivery" + + "github.com/alicebob/miniredis/v2" + "github.com/redis/go-redis/v9" + "github.com/stretchr/testify/require" +) + +func TestGenericAcceptanceStoreCreateAndReadRenderedDelivery(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + store, err := NewGenericAcceptanceStore(client) + require.NoError(t, 
err) + + record := validDelivery(t) + record.Source = deliverydomain.SourceNotification + record.ResendParentDeliveryID = "" + record.PayloadMode = deliverydomain.PayloadModeRendered + record.TemplateID = "" + record.TemplateVariables = nil + record.Locale = "" + record.LocaleFallbackUsed = false + record.Status = deliverydomain.StatusQueued + record.AttemptCount = 1 + record.LastAttemptStatus = "" + record.ProviderSummary = "" + record.SentAt = nil + record.UpdatedAt = record.CreatedAt + require.NoError(t, record.Validate()) + + input := acceptgenericdelivery.CreateAcceptanceInput{ + Delivery: record, + FirstAttempt: validScheduledAttempt(t, record.DeliveryID), + DeliveryPayload: ptr(validDeliveryPayload(t, record.DeliveryID)), + Idempotency: validIdempotencyRecord(t, record.Source, record.DeliveryID, record.IdempotencyKey), + } + + require.NoError(t, store.CreateAcceptance(context.Background(), input)) + + storedDelivery, found, err := store.GetDelivery(context.Background(), record.DeliveryID) + require.NoError(t, err) + require.True(t, found) + require.Equal(t, record, storedDelivery) + + storedPayload, found, err := store.GetDeliveryPayload(context.Background(), record.DeliveryID) + require.NoError(t, err) + require.True(t, found) + require.Equal(t, *input.DeliveryPayload, storedPayload) +} + +func TestGenericAcceptanceStoreReturnsMissingPayload(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + store, err := NewGenericAcceptanceStore(client) + require.NoError(t, err) + + payload, found, err := store.GetDeliveryPayload(context.Background(), common.DeliveryID("missing")) + require.NoError(t, err) + require.False(t, found) + require.Equal(t, acceptgenericdelivery.DeliveryPayload{}, payload) +} + +func TestMalformedCommandStoreRecordIsIdempotent(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := 
redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + store, err := NewMalformedCommandStore(client) + require.NoError(t, err) + + entry := validMalformedCommandEntry(t) + + require.NoError(t, store.Record(context.Background(), entry)) + require.NoError(t, store.Record(context.Background(), entry)) + + storedEntry, found, err := store.Get(context.Background(), entry.StreamEntryID) + require.NoError(t, err) + require.True(t, found) + require.Equal(t, entry, storedEntry) + + indexCard, err := client.ZCard(context.Background(), Keyspace{}.MalformedCommandCreatedAtIndex()).Result() + require.NoError(t, err) + require.EqualValues(t, 1, indexCard) +} + +func TestMalformedCommandStoreAppliesRetention(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + store, err := NewMalformedCommandStore(client) + require.NoError(t, err) + + entry := validMalformedCommandEntry(t) + require.NoError(t, store.Record(context.Background(), entry)) + + ttl := server.TTL(Keyspace{}.MalformedCommand(entry.StreamEntryID)) + require.InDelta(t, DeadLetterTTL.Seconds(), ttl.Seconds(), 1) +} + +func TestStreamOffsetStoreSaveAndLoad(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + store, err := NewStreamOffsetStore(client) + require.NoError(t, err) + + require.NoError(t, store.Save(context.Background(), "mail:delivery_commands", "1775121700000-0")) + + entryID, found, err := store.Load(context.Background(), "mail:delivery_commands") + require.NoError(t, err) + require.True(t, found) + require.Equal(t, "1775121700000-0", entryID) + + payload, err := client.Get(context.Background(), Keyspace{}.StreamOffset("mail:delivery_commands")).Bytes() + require.NoError(t, err) 
+ offset, err := UnmarshalStreamOffset(payload) + require.NoError(t, err) + require.Equal(t, "mail:delivery_commands", offset.Stream) + require.Equal(t, "1775121700000-0", offset.LastProcessedEntryID) + require.WithinDuration(t, time.Now().UTC(), offset.UpdatedAt, time.Second) +} diff --git a/mail/internal/adapters/redisstate/index_cleaner.go b/mail/internal/adapters/redisstate/index_cleaner.go new file mode 100644 index 0000000..ce03e4f --- /dev/null +++ b/mail/internal/adapters/redisstate/index_cleaner.go @@ -0,0 +1,118 @@ +package redisstate + +import ( + "context" + "errors" + "fmt" + "strings" + + "galaxy/mail/internal/domain/common" + + "github.com/redis/go-redis/v9" +) + +// CleanupReport describes the work done by IndexCleaner. +type CleanupReport struct { + // ScannedIndexes stores how many secondary index keys were inspected. + ScannedIndexes int + + // ScannedMembers stores how many index members were examined. + ScannedMembers int + + // RemovedMembers stores how many stale members were removed. + RemovedMembers int +} + +// IndexCleaner removes stale delivery references from the Mail Service +// secondary indexes after primary delivery keys expire by TTL. +type IndexCleaner struct { + client *redis.Client + keyspace Keyspace +} + +// NewIndexCleaner constructs one delivery-index cleanup helper. +func NewIndexCleaner(client *redis.Client) (*IndexCleaner, error) { + if client == nil { + return nil, errors.New("new redis index cleaner: nil client") + } + + return &IndexCleaner{ + client: client, + keyspace: Keyspace{}, + }, nil +} + +// CleanDeliveryIndexes scans every `mail:idx:*` key and removes members that +// no longer have a primary delivery record. 
+func (cleaner *IndexCleaner) CleanDeliveryIndexes(ctx context.Context) (CleanupReport, error) { + if cleaner == nil || cleaner.client == nil { + return CleanupReport{}, errors.New("clean delivery indexes in redis: nil cleaner") + } + if ctx == nil { + return CleanupReport{}, errors.New("clean delivery indexes in redis: nil context") + } + + var ( + report CleanupReport + cursor uint64 + ) + + for { + keys, nextCursor, err := cleaner.client.Scan(ctx, cursor, cleaner.keyspace.SecondaryIndexPattern(), 0).Result() + if err != nil { + return CleanupReport{}, fmt.Errorf("clean delivery indexes in redis: %w", err) + } + + for _, key := range keys { + if key == cleaner.keyspace.MalformedCommandCreatedAtIndex() { + continue + } + + report.ScannedIndexes++ + + members, err := cleaner.client.ZRange(ctx, key, 0, -1).Result() + if err != nil { + return CleanupReport{}, fmt.Errorf("clean delivery indexes in redis: read index %q: %w", key, err) + } + + report.ScannedMembers += len(members) + for _, member := range members { + remove, err := cleaner.shouldRemoveMember(ctx, member) + if err != nil { + return CleanupReport{}, fmt.Errorf("clean delivery indexes in redis: inspect index %q member %q: %w", key, member, err) + } + if !remove { + continue + } + + if err := cleaner.client.ZRem(ctx, key, member).Err(); err != nil { + return CleanupReport{}, fmt.Errorf("clean delivery indexes in redis: remove index %q member %q: %w", key, member, err) + } + report.RemovedMembers++ + } + } + + if nextCursor == 0 { + return report, nil + } + cursor = nextCursor + } +} + +func (cleaner *IndexCleaner) shouldRemoveMember(ctx context.Context, member string) (bool, error) { + if strings.TrimSpace(member) == "" { + return true, nil + } + + deliveryID := common.DeliveryID(member) + if err := deliveryID.Validate(); err != nil { + return true, nil + } + + exists, err := cleaner.client.Exists(ctx, cleaner.keyspace.Delivery(deliveryID)).Result() + if err != nil { + return false, err + } + + return 
exists == 0, nil +} diff --git a/mail/internal/adapters/redisstate/index_cleaner_test.go b/mail/internal/adapters/redisstate/index_cleaner_test.go new file mode 100644 index 0000000..35edd2f --- /dev/null +++ b/mail/internal/adapters/redisstate/index_cleaner_test.go @@ -0,0 +1,112 @@ +package redisstate + +import ( + "context" + "testing" + "time" + + "galaxy/mail/internal/domain/attempt" + deliverydomain "galaxy/mail/internal/domain/delivery" + + "github.com/alicebob/miniredis/v2" + "github.com/redis/go-redis/v9" + "github.com/stretchr/testify/require" +) + +func TestIndexCleanerRemovesStaleMembersAfterDeliveryExpiry(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + writer, err := NewAtomicWriter(client) + require.NoError(t, err) + cleaner, err := NewIndexCleaner(client) + require.NoError(t, err) + + record := validDelivery(t) + record.Source = deliverydomain.SourceNotification + record.ResendParentDeliveryID = "" + record.Status = deliverydomain.StatusQueued + record.SentAt = nil + record.LocaleFallbackUsed = false + record.UpdatedAt = record.CreatedAt.Add(time.Minute) + require.NoError(t, record.Validate()) + + input := CreateAcceptanceInput{ + Delivery: record, + FirstAttempt: ptr(validScheduledAttempt(t, record.DeliveryID)), + Idempotency: ptr(validIdempotencyRecord(t, record.Source, record.DeliveryID, record.IdempotencyKey)), + } + require.NoError(t, writer.CreateAcceptance(context.Background(), input)) + + deadLetterEntry := validDeadLetterEntry(t, record.DeliveryID) + deadLetterPayload, err := MarshalDeadLetter(deadLetterEntry) + require.NoError(t, err) + require.NoError(t, client.Set(context.Background(), Keyspace{}.DeadLetter(record.DeliveryID), deadLetterPayload, DeadLetterTTL).Err()) + + server.FastForward(DeliveryTTL + time.Second) + + require.False(t, server.Exists(Keyspace{}.Delivery(record.DeliveryID))) + 
require.True(t, server.Exists(Keyspace{}.Attempt(record.DeliveryID, input.FirstAttempt.AttemptNo))) + require.True(t, server.Exists(Keyspace{}.DeadLetter(record.DeliveryID))) + + report, err := cleaner.CleanDeliveryIndexes(context.Background()) + require.NoError(t, err) + require.Positive(t, report.ScannedIndexes) + require.Positive(t, report.ScannedMembers) + require.Positive(t, report.RemovedMembers) + + assertZCard := func(key string, want int64) { + t.Helper() + + got, err := client.ZCard(context.Background(), key).Result() + require.NoError(t, err) + require.Equal(t, want, got) + } + + assertZCard(Keyspace{}.CreatedAtIndex(), 0) + assertZCard(Keyspace{}.SourceIndex(record.Source), 0) + assertZCard(Keyspace{}.StatusIndex(record.Status), 0) + assertZCard(Keyspace{}.RecipientIndex(record.Envelope.To[0]), 0) + assertZCard(Keyspace{}.RecipientIndex(record.Envelope.Cc[0]), 0) + assertZCard(Keyspace{}.RecipientIndex(record.Envelope.Bcc[0]), 0) + assertZCard(Keyspace{}.TemplateIndex(record.TemplateID), 0) + assertZCard(Keyspace{}.IdempotencyIndex(record.Source, record.IdempotencyKey), 0) + + require.True(t, server.Exists(Keyspace{}.Attempt(record.DeliveryID, input.FirstAttempt.AttemptNo))) + require.True(t, server.Exists(Keyspace{}.DeadLetter(record.DeliveryID))) + scheduleCard, err := client.ZCard(context.Background(), Keyspace{}.AttemptSchedule()).Result() + require.NoError(t, err) + require.EqualValues(t, 1, scheduleCard) +} + +func TestIndexCleanerSkipsMalformedCommandIndex(t *testing.T) { + t.Parallel() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + cleaner, err := NewIndexCleaner(client) + require.NoError(t, err) + + entry := validMalformedCommandEntry(t) + require.NoError(t, client.ZAdd(context.Background(), Keyspace{}.MalformedCommandCreatedAtIndex(), redis.Z{ + Score: float64(entry.RecordedAt.UTC().UnixMilli()), + Member: entry.StreamEntryID, + 
}).Err()) + + report, err := cleaner.CleanDeliveryIndexes(context.Background()) + require.NoError(t, err) + require.Zero(t, report.ScannedIndexes) + require.Zero(t, report.ScannedMembers) + require.Zero(t, report.RemovedMembers) + + indexMembers, err := client.ZRange(context.Background(), Keyspace{}.MalformedCommandCreatedAtIndex(), 0, -1).Result() + require.NoError(t, err) + require.Equal(t, []string{entry.StreamEntryID}, indexMembers) +} + +var _ = attempt.Attempt{} diff --git a/mail/internal/adapters/redisstate/keyspace.go b/mail/internal/adapters/redisstate/keyspace.go new file mode 100644 index 0000000..2ea57b6 --- /dev/null +++ b/mail/internal/adapters/redisstate/keyspace.go @@ -0,0 +1,172 @@ +package redisstate + +import ( + "encoding/base64" + "sort" + "strconv" + "time" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" +) + +const defaultPrefix = "mail:" + +const ( + // IdempotencyTTL is the frozen Redis retention for idempotency records. + IdempotencyTTL = 7 * 24 * time.Hour + + // DeliveryTTL is the frozen Redis retention for accepted delivery records. + DeliveryTTL = 30 * 24 * time.Hour + + // AttemptTTL is the frozen Redis retention for attempt records. + AttemptTTL = 90 * 24 * time.Hour + + // DeadLetterTTL is the frozen Redis retention for dead-letter entries. + DeadLetterTTL = 90 * 24 * time.Hour +) + +// Keyspace builds the frozen Mail Service Redis keys. All dynamic key +// segments are encoded with base64url so raw key structure does not depend on +// user-provided or caller-provided characters. +type Keyspace struct{} + +// Delivery returns the primary Redis key for one mail_delivery record. +func (Keyspace) Delivery(deliveryID common.DeliveryID) string { + return defaultPrefix + "deliveries:" + encodeKeyComponent(deliveryID.String()) +} + +// Attempt returns the primary Redis key for one mail_attempt record. 
+func (Keyspace) Attempt(deliveryID common.DeliveryID, attemptNo int) string { + return defaultPrefix + "attempts:" + encodeKeyComponent(deliveryID.String()) + ":" + encodeKeyComponent(strconv.Itoa(attemptNo)) +} + +// Idempotency returns the primary Redis key for one mail_idempotency_record. +func (Keyspace) Idempotency(source deliverydomain.Source, key common.IdempotencyKey) string { + return defaultPrefix + "idempotency:" + encodeKeyComponent(string(source)) + ":" + encodeKeyComponent(key.String()) +} + +// DeadLetter returns the primary Redis key for one mail_dead_letter_entry. +func (Keyspace) DeadLetter(deliveryID common.DeliveryID) string { + return defaultPrefix + "dead_letters:" + encodeKeyComponent(deliveryID.String()) +} + +// DeliveryPayload returns the primary Redis key for one raw generic-delivery +// payload bundle. +func (Keyspace) DeliveryPayload(deliveryID common.DeliveryID) string { + return defaultPrefix + "delivery_payloads:" + encodeKeyComponent(deliveryID.String()) +} + +// MalformedCommand returns the primary Redis key for one operator-visible +// malformed async command record. +func (Keyspace) MalformedCommand(streamEntryID string) string { + return defaultPrefix + "malformed_commands:" + encodeKeyComponent(streamEntryID) +} + +// StreamOffset returns the primary Redis key for one persisted stream-consumer +// offset. +func (Keyspace) StreamOffset(stream string) string { + return defaultPrefix + "stream_offsets:" + encodeKeyComponent(stream) +} + +// DeliveryCommands returns the frozen async ingress Redis Stream key. +func (Keyspace) DeliveryCommands() string { + return defaultPrefix + "delivery_commands" +} + +// AttemptSchedule returns the frozen attempt schedule sorted-set key. +func (Keyspace) AttemptSchedule() string { + return defaultPrefix + "attempt_schedule" +} + +// RecipientIndex returns the secondary index key for one effective recipient. 
+func (Keyspace) RecipientIndex(email common.Email) string { + return defaultPrefix + "idx:recipient:" + encodeKeyComponent(email.String()) +} + +// StatusIndex returns the secondary index key for one delivery status. +func (Keyspace) StatusIndex(status deliverydomain.Status) string { + return defaultPrefix + "idx:status:" + encodeKeyComponent(string(status)) +} + +// SourceIndex returns the secondary index key for one delivery source. +func (Keyspace) SourceIndex(source deliverydomain.Source) string { + return defaultPrefix + "idx:source:" + encodeKeyComponent(string(source)) +} + +// TemplateIndex returns the secondary index key for one template id. +func (Keyspace) TemplateIndex(templateID common.TemplateID) string { + return defaultPrefix + "idx:template:" + encodeKeyComponent(templateID.String()) +} + +// IdempotencyIndex returns the secondary lookup key for one `(source, +// idempotency_key)` scope. +func (Keyspace) IdempotencyIndex(source deliverydomain.Source, key common.IdempotencyKey) string { + return defaultPrefix + "idx:idempotency:" + encodeKeyComponent(string(source)) + ":" + encodeKeyComponent(key.String()) +} + +// CreatedAtIndex returns the newest-first delivery ordering index key. +func (Keyspace) CreatedAtIndex() string { + return defaultPrefix + "idx:created_at" +} + +// MalformedCommandCreatedAtIndex returns the newest-first malformed-command +// ordering index key. +func (Keyspace) MalformedCommandCreatedAtIndex() string { + return defaultPrefix + "idx:malformed_command:created_at" +} + +// SecondaryIndexPattern returns the key-scan pattern that matches every +// delivery-level secondary index owned by Mail Service. +func (Keyspace) SecondaryIndexPattern() string { + return defaultPrefix + "idx:*" +} + +// DeliveryIndexKeys returns the full set of secondary index keys that must +// reference record at creation time. Recipient indexing covers `to`, `cc`, and +// `bcc`, but intentionally excludes `reply_to`. 
+func (keyspace Keyspace) DeliveryIndexKeys(record deliverydomain.Delivery) []string { + keys := []string{ + keyspace.StatusIndex(record.Status), + keyspace.SourceIndex(record.Source), + keyspace.IdempotencyIndex(record.Source, record.IdempotencyKey), + keyspace.CreatedAtIndex(), + } + if !record.TemplateID.IsZero() { + keys = append(keys, keyspace.TemplateIndex(record.TemplateID)) + } + + seen := make(map[string]struct{}, len(keys)+len(record.Envelope.To)+len(record.Envelope.Cc)+len(record.Envelope.Bcc)) + for _, key := range keys { + seen[key] = struct{}{} + } + for _, group := range [][]common.Email{record.Envelope.To, record.Envelope.Cc, record.Envelope.Bcc} { + for _, email := range group { + seen[keyspace.RecipientIndex(email)] = struct{}{} + } + } + + keys = keys[:0] + for key := range seen { + keys = append(keys, key) + } + sort.Strings(keys) + + return keys +} + +// CreatedAtScore returns the frozen sorted-set score representation for +// delivery creation timestamps. +func CreatedAtScore(createdAt time.Time) float64 { + return float64(createdAt.UTC().UnixMilli()) +} + +// ScheduledForScore returns the frozen sorted-set score representation for +// attempt schedule timestamps. 
+func ScheduledForScore(scheduledFor time.Time) float64 { + return float64(scheduledFor.UTC().UnixMilli()) +} + +func encodeKeyComponent(value string) string { + return base64.RawURLEncoding.EncodeToString([]byte(value)) +} diff --git a/mail/internal/adapters/redisstate/keyspace_test.go b/mail/internal/adapters/redisstate/keyspace_test.go new file mode 100644 index 0000000..61ee078 --- /dev/null +++ b/mail/internal/adapters/redisstate/keyspace_test.go @@ -0,0 +1,68 @@ +package redisstate + +import ( + "testing" + "time" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + + "github.com/stretchr/testify/require" +) + +func TestKeyspaceBuildsStableKeys(t *testing.T) { + t.Parallel() + + keyspace := Keyspace{} + + require.Equal(t, "mail:deliveries:ZGVsaXZlcnktMTIz", keyspace.Delivery(common.DeliveryID("delivery-123"))) + require.Equal(t, "mail:attempts:ZGVsaXZlcnktMTIz:MQ", keyspace.Attempt(common.DeliveryID("delivery-123"), 1)) + require.Equal(t, "mail:idempotency:bm90aWZpY2F0aW9u:bm90aWZpY2F0aW9uOm1haWwtMTIz", keyspace.Idempotency(deliverydomain.SourceNotification, common.IdempotencyKey("notification:mail-123"))) + require.Equal(t, "mail:dead_letters:ZGVsaXZlcnktMTIz", keyspace.DeadLetter(common.DeliveryID("delivery-123"))) + require.Equal(t, "mail:delivery_commands", keyspace.DeliveryCommands()) + require.Equal(t, "mail:attempt_schedule", keyspace.AttemptSchedule()) + require.Equal(t, "mail:idx:recipient:cGlsb3RAZXhhbXBsZS5jb20", keyspace.RecipientIndex(common.Email("pilot@example.com"))) + require.Equal(t, "mail:idx:status:c2VudA", keyspace.StatusIndex(deliverydomain.StatusSent)) + require.Equal(t, "mail:idx:source:bm90aWZpY2F0aW9u", keyspace.SourceIndex(deliverydomain.SourceNotification)) + require.Equal(t, "mail:idx:template:YXV0aC5sb2dpbl9jb2Rl", keyspace.TemplateIndex(common.TemplateID("auth.login_code"))) + require.Equal(t, "mail:idx:idempotency:bm90aWZpY2F0aW9u:bm90aWZpY2F0aW9uOm1haWwtMTIz", 
keyspace.IdempotencyIndex(deliverydomain.SourceNotification, common.IdempotencyKey("notification:mail-123"))) + require.Equal(t, "mail:idx:created_at", keyspace.CreatedAtIndex()) + require.Equal(t, "mail:idx:*", keyspace.SecondaryIndexPattern()) +} + +func TestDeliveryIndexKeysDedupeRecipientsAndIgnoreReplyTo(t *testing.T) { + t.Parallel() + + record := validDelivery(t) + record.Source = deliverydomain.SourceNotification + record.ResendParentDeliveryID = "" + record.Status = deliverydomain.StatusQueued + record.SentAt = nil + record.LocaleFallbackUsed = false + record.UpdatedAt = record.CreatedAt.Add(time.Minute) + record.Envelope.Cc = []common.Email{common.Email("pilot@example.com")} + record.Envelope.ReplyTo = []common.Email{common.Email("reply@example.com")} + require.NoError(t, record.Validate()) + + require.Equal(t, []string{ + "mail:idx:created_at", + "mail:idx:idempotency:bm90aWZpY2F0aW9u:b3BlcmF0b3I6cmVzZW5kOmRlbGl2ZXJ5LTEyMw", + "mail:idx:recipient:b3BzQGV4YW1wbGUuY29t", + "mail:idx:recipient:cGlsb3RAZXhhbXBsZS5jb20", + "mail:idx:source:bm90aWZpY2F0aW9u", + "mail:idx:status:cXVldWVk", + "mail:idx:template:YXV0aC5sb2dpbl9jb2Rl", + }, Keyspace{}.DeliveryIndexKeys(record)) +} + +func TestScoresAndRetentionConstants(t *testing.T) { + t.Parallel() + + value := time.Unix(1_775_240_000, 123_000_000).UTC() + require.Equal(t, float64(value.UnixMilli()), CreatedAtScore(value)) + require.Equal(t, float64(value.UnixMilli()), ScheduledForScore(value)) + require.Equal(t, 7*24*time.Hour, IdempotencyTTL) + require.Equal(t, 30*24*time.Hour, DeliveryTTL) + require.Equal(t, 90*24*time.Hour, AttemptTTL) + require.Equal(t, 90*24*time.Hour, DeadLetterTTL) +} diff --git a/mail/internal/adapters/redisstate/malformed_command_store.go b/mail/internal/adapters/redisstate/malformed_command_store.go new file mode 100644 index 0000000..ac5c9a0 --- /dev/null +++ b/mail/internal/adapters/redisstate/malformed_command_store.go @@ -0,0 +1,111 @@ +package redisstate + +import ( + "context" + 
"errors" + "fmt" + + "galaxy/mail/internal/domain/malformedcommand" + + "github.com/redis/go-redis/v9" +) + +// MalformedCommandStore provides the Redis-backed storage used for +// operator-visible malformed async command records. +type MalformedCommandStore struct { + client *redis.Client + keys Keyspace +} + +// NewMalformedCommandStore constructs one Redis-backed malformed-command +// store. +func NewMalformedCommandStore(client *redis.Client) (*MalformedCommandStore, error) { + if client == nil { + return nil, errors.New("new malformed command store: nil redis client") + } + + return &MalformedCommandStore{ + client: client, + keys: Keyspace{}, + }, nil +} + +// Record stores entry idempotently by stream entry id. +func (store *MalformedCommandStore) Record(ctx context.Context, entry malformedcommand.Entry) error { + if store == nil || store.client == nil { + return errors.New("record malformed command: nil store") + } + if ctx == nil { + return errors.New("record malformed command: nil context") + } + if err := entry.Validate(); err != nil { + return fmt.Errorf("record malformed command: %w", err) + } + + payload, err := MarshalMalformedCommand(entry) + if err != nil { + return fmt.Errorf("record malformed command: %w", err) + } + + key := store.keys.MalformedCommand(entry.StreamEntryID) + indexKey := store.keys.MalformedCommandCreatedAtIndex() + score := float64(entry.RecordedAt.UTC().UnixMilli()) + + watchErr := store.client.Watch(ctx, func(tx *redis.Tx) error { + exists, err := tx.Exists(ctx, key).Result() + if err != nil { + return fmt.Errorf("record malformed command: %w", err) + } + if exists > 0 { + return nil + } + + _, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error { + pipe.Set(ctx, key, payload, DeadLetterTTL) + pipe.ZAdd(ctx, indexKey, redis.Z{ + Score: score, + Member: entry.StreamEntryID, + }) + + return nil + }) + if err != nil { + return fmt.Errorf("record malformed command: %w", err) + } + + return nil + }, key) + switch { + case 
errors.Is(watchErr, redis.TxFailedErr): + return nil + case watchErr != nil: + return watchErr + default: + return nil + } +} + +// Get loads one malformed-command entry by stream entry id. +func (store *MalformedCommandStore) Get(ctx context.Context, streamEntryID string) (malformedcommand.Entry, bool, error) { + if store == nil || store.client == nil { + return malformedcommand.Entry{}, false, errors.New("get malformed command: nil store") + } + if ctx == nil { + return malformedcommand.Entry{}, false, errors.New("get malformed command: nil context") + } + + payload, err := store.client.Get(ctx, store.keys.MalformedCommand(streamEntryID)).Bytes() + switch { + case errors.Is(err, redis.Nil): + return malformedcommand.Entry{}, false, nil + case err != nil: + return malformedcommand.Entry{}, false, fmt.Errorf("get malformed command: %w", err) + } + + entry, err := UnmarshalMalformedCommand(payload) + if err != nil { + return malformedcommand.Entry{}, false, fmt.Errorf("get malformed command: %w", err) + } + + return entry, true, nil +} diff --git a/mail/internal/adapters/redisstate/operator_store.go b/mail/internal/adapters/redisstate/operator_store.go new file mode 100644 index 0000000..0c2d510 --- /dev/null +++ b/mail/internal/adapters/redisstate/operator_store.go @@ -0,0 +1,532 @@ +package redisstate + +import ( + "context" + "errors" + "fmt" + "slices" + "time" + + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/service/acceptgenericdelivery" + "galaxy/mail/internal/service/listattempts" + "galaxy/mail/internal/service/listdeliveries" + "galaxy/mail/internal/service/resenddelivery" + + "github.com/redis/go-redis/v9" +) + +// OperatorStore provides the Redis-backed durable storage used by the +// operator read and resend workflows. 
+type OperatorStore struct { + client *redis.Client + writer *AtomicWriter + keys Keyspace +} + +// NewOperatorStore constructs one Redis-backed operator store. +func NewOperatorStore(client *redis.Client) (*OperatorStore, error) { + if client == nil { + return nil, errors.New("new operator store: nil redis client") + } + + writer, err := NewAtomicWriter(client) + if err != nil { + return nil, fmt.Errorf("new operator store: %w", err) + } + + return &OperatorStore{ + client: client, + writer: writer, + keys: Keyspace{}, + }, nil +} + +// GetDelivery loads one accepted delivery by its identifier. +func (store *OperatorStore) GetDelivery(ctx context.Context, deliveryID common.DeliveryID) (deliverydomain.Delivery, bool, error) { + if store == nil || store.client == nil { + return deliverydomain.Delivery{}, false, errors.New("get operator delivery: nil store") + } + if ctx == nil { + return deliverydomain.Delivery{}, false, errors.New("get operator delivery: nil context") + } + if err := deliveryID.Validate(); err != nil { + return deliverydomain.Delivery{}, false, fmt.Errorf("get operator delivery: %w", err) + } + + payload, err := store.client.Get(ctx, store.keys.Delivery(deliveryID)).Bytes() + switch { + case errors.Is(err, redis.Nil): + return deliverydomain.Delivery{}, false, nil + case err != nil: + return deliverydomain.Delivery{}, false, fmt.Errorf("get operator delivery: %w", err) + } + + record, err := UnmarshalDelivery(payload) + if err != nil { + return deliverydomain.Delivery{}, false, fmt.Errorf("get operator delivery: %w", err) + } + + return record, true, nil +} + +// GetDeadLetter loads the dead-letter entry associated with deliveryID when +// one exists. 
+func (store *OperatorStore) GetDeadLetter(ctx context.Context, deliveryID common.DeliveryID) (deliverydomain.DeadLetterEntry, bool, error) { + if store == nil || store.client == nil { + return deliverydomain.DeadLetterEntry{}, false, errors.New("get operator dead-letter entry: nil store") + } + if ctx == nil { + return deliverydomain.DeadLetterEntry{}, false, errors.New("get operator dead-letter entry: nil context") + } + if err := deliveryID.Validate(); err != nil { + return deliverydomain.DeadLetterEntry{}, false, fmt.Errorf("get operator dead-letter entry: %w", err) + } + + payload, err := store.client.Get(ctx, store.keys.DeadLetter(deliveryID)).Bytes() + switch { + case errors.Is(err, redis.Nil): + return deliverydomain.DeadLetterEntry{}, false, nil + case err != nil: + return deliverydomain.DeadLetterEntry{}, false, fmt.Errorf("get operator dead-letter entry: %w", err) + } + + entry, err := UnmarshalDeadLetter(payload) + if err != nil { + return deliverydomain.DeadLetterEntry{}, false, fmt.Errorf("get operator dead-letter entry: %w", err) + } + + return entry, true, nil +} + +// GetDeliveryPayload loads one raw accepted attachment bundle by delivery id. 
+func (store *OperatorStore) GetDeliveryPayload(ctx context.Context, deliveryID common.DeliveryID) (acceptgenericdelivery.DeliveryPayload, bool, error) { + if store == nil || store.client == nil { + return acceptgenericdelivery.DeliveryPayload{}, false, errors.New("get operator delivery payload: nil store") + } + if ctx == nil { + return acceptgenericdelivery.DeliveryPayload{}, false, errors.New("get operator delivery payload: nil context") + } + if err := deliveryID.Validate(); err != nil { + return acceptgenericdelivery.DeliveryPayload{}, false, fmt.Errorf("get operator delivery payload: %w", err) + } + + payload, err := store.client.Get(ctx, store.keys.DeliveryPayload(deliveryID)).Bytes() + switch { + case errors.Is(err, redis.Nil): + return acceptgenericdelivery.DeliveryPayload{}, false, nil + case err != nil: + return acceptgenericdelivery.DeliveryPayload{}, false, fmt.Errorf("get operator delivery payload: %w", err) + } + + record, err := UnmarshalDeliveryPayload(payload) + if err != nil { + return acceptgenericdelivery.DeliveryPayload{}, false, fmt.Errorf("get operator delivery payload: %w", err) + } + + return record, true, nil +} + +// ListAttempts loads exactly expectedCount attempts in ascending attempt +// number order. Missing attempts are treated as durable-state corruption. 
+func (store *OperatorStore) ListAttempts(ctx context.Context, deliveryID common.DeliveryID, expectedCount int) ([]attempt.Attempt, error) { + if store == nil || store.client == nil { + return nil, errors.New("list operator attempts: nil store") + } + if ctx == nil { + return nil, errors.New("list operator attempts: nil context") + } + if err := deliveryID.Validate(); err != nil { + return nil, fmt.Errorf("list operator attempts: %w", err) + } + if expectedCount < 0 { + return nil, errors.New("list operator attempts: negative expected count") + } + if expectedCount == 0 { + return []attempt.Attempt{}, nil + } + + result := make([]attempt.Attempt, 0, expectedCount) + for attemptNo := 1; attemptNo <= expectedCount; attemptNo++ { + payload, err := store.client.Get(ctx, store.keys.Attempt(deliveryID, attemptNo)).Bytes() + switch { + case errors.Is(err, redis.Nil): + return nil, fmt.Errorf("list operator attempts: missing attempt %d for delivery %q", attemptNo, deliveryID) + case err != nil: + return nil, fmt.Errorf("list operator attempts: %w", err) + } + + record, err := UnmarshalAttempt(payload) + if err != nil { + return nil, fmt.Errorf("list operator attempts: %w", err) + } + result = append(result, record) + } + + return result, nil +} + +// List loads one filtered ordered page of delivery records. 
+func (store *OperatorStore) List(ctx context.Context, input listdeliveries.Input) (listdeliveries.Result, error) {
+	if store == nil || store.client == nil {
+		return listdeliveries.Result{}, errors.New("list operator deliveries: nil store")
+	}
+	if ctx == nil {
+		return listdeliveries.Result{}, errors.New("list operator deliveries: nil context")
+	}
+	if err := input.Validate(); err != nil {
+		return listdeliveries.Result{}, fmt.Errorf("list operator deliveries: %w", err)
+	}
+
+	selection := chooseListIndex(store.keys, input.Filters)
+	if selection.mergeIdempotency {
+		return store.listMergedIdempotency(ctx, input, selection.keys)
+	}
+
+	return store.listSingleIndex(ctx, input, selection.keys[0])
+}
+
+// CreateResend atomically creates the cloned delivery, its first attempt, and
+// the optional cloned raw payload bundle.
+func (store *OperatorStore) CreateResend(ctx context.Context, input resenddelivery.CreateResendInput) error {
+	if store == nil || store.client == nil || store.writer == nil {
+		return errors.New("create operator resend: nil store")
+	}
+	if ctx == nil {
+		return errors.New("create operator resend: nil context")
+	}
+	if err := input.Validate(); err != nil {
+		return fmt.Errorf("create operator resend: %w", err)
+	}
+
+	writerInput := CreateAcceptanceInput{
+		Delivery:     input.Delivery,
+		FirstAttempt: &input.FirstAttempt,
+	}
+	if input.DeliveryPayload != nil {
+		writerInput.DeliveryPayload = input.DeliveryPayload
+	}
+
+	if err := store.writer.CreateAcceptance(ctx, writerInput); err != nil {
+		return fmt.Errorf("create operator resend: %w", err)
+	}
+
+	return nil
+}
+
+type listSelection struct {
+	keys             []string
+	mergeIdempotency bool
+}
+
+func chooseListIndex(keyspace Keyspace, filters listdeliveries.Filters) listSelection {
+	switch {
+	case filters.IdempotencyKey != "" && filters.Source != "":
+		return listSelection{
+			keys: []string{keyspace.IdempotencyIndex(filters.Source, filters.IdempotencyKey)},
+		}
+	case filters.IdempotencyKey != "":
+		return listSelection{
+			keys: []string{
+				keyspace.IdempotencyIndex(deliverydomain.SourceAuthSession, filters.IdempotencyKey),
+				keyspace.IdempotencyIndex(deliverydomain.SourceNotification, filters.IdempotencyKey),
+				keyspace.IdempotencyIndex(deliverydomain.SourceOperatorResend, filters.IdempotencyKey),
+			},
+			mergeIdempotency: true,
+		}
+	case filters.Recipient != "":
+		return listSelection{keys: []string{keyspace.RecipientIndex(filters.Recipient)}}
+	case filters.TemplateID != "":
+		return listSelection{keys: []string{keyspace.TemplateIndex(filters.TemplateID)}}
+	case filters.Status != "":
+		return listSelection{keys: []string{keyspace.StatusIndex(filters.Status)}}
+	case filters.Source != "":
+		return listSelection{keys: []string{keyspace.SourceIndex(filters.Source)}}
+	default:
+		return listSelection{keys: []string{keyspace.CreatedAtIndex()}}
+	}
+}
+
+func (store *OperatorStore) listSingleIndex(ctx context.Context, input listdeliveries.Input, indexKey string) (listdeliveries.Result, error) {
+	startIndex := int64(0)
+	if input.Cursor != nil {
+		cursorIndex, err := cursorStartIndex(ctx, store.client, indexKey, *input.Cursor)
+		if err != nil {
+			return listdeliveries.Result{}, err
+		}
+		startIndex = cursorIndex
+	}
+
+	items, nextCursor, err := store.collectFromIndex(ctx, indexKey, startIndex, input.Limit, input.Filters)
+	if err != nil {
+		return listdeliveries.Result{}, err
+	}
+
+	return listdeliveries.Result{
+		Items:      items,
+		NextCursor: nextCursor,
+	}, nil
+}
+
+func (store *OperatorStore) listMergedIdempotency(ctx context.Context, input listdeliveries.Input, indexKeys []string) (listdeliveries.Result, error) {
+	iterators := make([]*redisIndexIterator, 0, len(indexKeys))
+	for _, key := range indexKeys {
+		iterators = append(iterators, &redisIndexIterator{
+			client:    store.client,
+			indexKey:  key,
+			batchSize: listBatchSize(input.Limit),
+			cursor:    input.Cursor,
+		})
+	}
+
+	heads := make([]indexedRef, 0, len(iterators))
+	for index, iterator := 
range iterators { + ref, err := iterator.Next(ctx) + if err != nil { + return listdeliveries.Result{}, err + } + if ref != nil { + heads = append(heads, indexedRef{streamIndex: index, ref: *ref}) + } + } + + items := make([]deliverydomain.Delivery, 0, input.Limit+1) + for len(heads) > 0 && len(items) <= input.Limit { + bestIndex := 0 + for index := 1; index < len(heads); index++ { + if compareDeliveryOrder(heads[index].ref, heads[bestIndex].ref) < 0 { + bestIndex = index + } + } + + selected := heads[bestIndex] + heads = slices.Delete(heads, bestIndex, bestIndex+1) + + record, found, err := store.GetDelivery(ctx, selected.ref.DeliveryID) + if err != nil { + return listdeliveries.Result{}, err + } + if found && input.Filters.Matches(record) { + items = append(items, record) + } + + nextRef, err := iterators[selected.streamIndex].Next(ctx) + if err != nil { + return listdeliveries.Result{}, err + } + if nextRef != nil { + heads = append(heads, indexedRef{streamIndex: selected.streamIndex, ref: *nextRef}) + } + } + + result := listdeliveries.Result{} + if len(items) > input.Limit { + next := cursorFromDelivery(items[input.Limit-1]) + result.NextCursor = &next + items = items[:input.Limit] + } + result.Items = items + + return result, nil +} + +func (store *OperatorStore) collectFromIndex( + ctx context.Context, + indexKey string, + startIndex int64, + limit int, + filters listdeliveries.Filters, +) ([]deliverydomain.Delivery, *listdeliveries.Cursor, error) { + items := make([]deliverydomain.Delivery, 0, limit+1) + batchSize := listBatchSize(limit) + + for len(items) <= limit { + batch, err := store.client.ZRevRangeWithScores(ctx, indexKey, startIndex, startIndex+int64(batchSize)-1).Result() + if err != nil { + return nil, nil, fmt.Errorf("list operator deliveries: %w", err) + } + if len(batch) == 0 { + break + } + + startIndex += int64(len(batch)) + for _, member := range batch { + deliveryID, err := memberDeliveryID(member.Member) + if err != nil { + return nil, nil, 
fmt.Errorf("list operator deliveries: %w", err) + } + + record, found, err := store.GetDelivery(ctx, deliveryID) + if err != nil { + return nil, nil, err + } + if !found || !filters.Matches(record) { + continue + } + + items = append(items, record) + if len(items) > limit { + break + } + } + } + + var nextCursor *listdeliveries.Cursor + if len(items) > limit { + next := cursorFromDelivery(items[limit-1]) + nextCursor = &next + items = items[:limit] + } + + return items, nextCursor, nil +} + +type indexedRef struct { + streamIndex int + ref deliveryRef +} + +type deliveryRef struct { + CreatedAt time.Time + DeliveryID common.DeliveryID +} + +type redisIndexIterator struct { + client *redis.Client + indexKey string + batchSize int + offset int64 + cursor *listdeliveries.Cursor + batch []redis.Z + position int +} + +func (iterator *redisIndexIterator) Next(ctx context.Context) (*deliveryRef, error) { + for { + if iterator.position >= len(iterator.batch) { + batch, err := iterator.client.ZRevRangeWithScores( + ctx, + iterator.indexKey, + iterator.offset, + iterator.offset+int64(iterator.batchSize)-1, + ).Result() + if err != nil { + return nil, fmt.Errorf("list operator deliveries: %w", err) + } + if len(batch) == 0 { + return nil, nil + } + + iterator.batch = batch + iterator.position = 0 + iterator.offset += int64(len(batch)) + } + + ref, err := deliveryRefFromSortedSet(iterator.batch[iterator.position]) + iterator.position++ + if err != nil { + return nil, fmt.Errorf("list operator deliveries: %w", err) + } + if iterator.cursor != nil && !isAfterCursor(ref, *iterator.cursor) { + continue + } + + return &ref, nil + } +} + +func cursorStartIndex(ctx context.Context, client *redis.Client, indexKey string, cursor listdeliveries.Cursor) (int64, error) { + score, err := client.ZScore(ctx, indexKey, cursor.DeliveryID.String()).Result() + switch { + case errors.Is(err, redis.Nil): + return 0, listdeliveries.ErrInvalidCursor + case err != nil: + return 0, fmt.Errorf("list 
operator deliveries: %w", err)
+    }
+    if !time.UnixMilli(int64(score)).UTC().Equal(cursor.CreatedAt.UTC()) {
+        return 0, listdeliveries.ErrInvalidCursor
+    }
+
+    rank, err := client.ZRevRank(ctx, indexKey, cursor.DeliveryID.String()).Result()
+    switch {
+    case errors.Is(err, redis.Nil):
+        return 0, listdeliveries.ErrInvalidCursor
+    case err != nil:
+        return 0, fmt.Errorf("list operator deliveries: %w", err)
+    default:
+        return rank + 1, nil
+    }
+}
+
+func compareDeliveryOrder(left deliveryRef, right deliveryRef) int {
+    switch {
+    case left.CreatedAt.After(right.CreatedAt):
+        return -1
+    case left.CreatedAt.Before(right.CreatedAt):
+        return 1
+    case left.DeliveryID.String() > right.DeliveryID.String():
+        return -1
+    case left.DeliveryID.String() < right.DeliveryID.String():
+        return 1
+    default:
+        return 0
+    }
+}
+
+func isAfterCursor(ref deliveryRef, cursor listdeliveries.Cursor) bool {
+    return compareDeliveryOrder(ref, deliveryRef{
+        CreatedAt:  cursor.CreatedAt.UTC(),
+        DeliveryID: cursor.DeliveryID,
+    }) > 0
+}
+
+func cursorFromDelivery(record deliverydomain.Delivery) listdeliveries.Cursor {
+    return listdeliveries.Cursor{
+        CreatedAt:  record.CreatedAt.UTC(),
+        DeliveryID: record.DeliveryID,
+    }
+}
+
+func deliveryRefFromSortedSet(member redis.Z) (deliveryRef, error) {
+    deliveryID, err := memberDeliveryID(member.Member)
+    if err != nil {
+        return deliveryRef{}, err
+    }
+
+    return deliveryRef{
+        CreatedAt:  time.UnixMilli(int64(member.Score)).UTC(),
+        DeliveryID: deliveryID,
+    }, nil
+}
+
+func memberDeliveryID(member any) (common.DeliveryID, error) {
+    value, ok := member.(string)
+    if !ok {
+        return "", fmt.Errorf("unexpected delivery index member type %T", member)
+    }
+
+    deliveryID := common.DeliveryID(value)
+    if err := deliveryID.Validate(); err != nil {
+        return "", fmt.Errorf("delivery index member delivery id: %w", err)
+    }
+
+    return deliveryID, nil
+}
+
+func listBatchSize(limit int) int {
+    size := limit * 4
+    if size < limit+1 {
+        size = limit + 1
+    }
+    if size < 100 {
+        size = 100
+    }
+
+    return size
+}
+
+var _ listdeliveries.Store = (*OperatorStore)(nil)
+var _ listattempts.Store = (*OperatorStore)(nil)
+var _ resenddelivery.Store = (*OperatorStore)(nil)
diff --git a/mail/internal/adapters/redisstate/operator_store_test.go b/mail/internal/adapters/redisstate/operator_store_test.go
new file mode 100644
index 0000000..1ebd888
--- /dev/null
+++ b/mail/internal/adapters/redisstate/operator_store_test.go
@@ -0,0 +1,346 @@
+package redisstate
+
+import (
+    "context"
+    "testing"
+    "time"
+
+    "galaxy/mail/internal/domain/attempt"
+    "galaxy/mail/internal/domain/common"
+    deliverydomain "galaxy/mail/internal/domain/delivery"
+    "galaxy/mail/internal/service/listdeliveries"
+    "galaxy/mail/internal/service/resenddelivery"
+
+    "github.com/alicebob/miniredis/v2"
+    "github.com/redis/go-redis/v9"
+    "github.com/stretchr/testify/require"
+)
+
+func TestOperatorStoreListFilters(t *testing.T) {
+    t.Parallel()
+
+    type testCase struct {
+        name    string
+        filters listdeliveries.Filters
+        wantIDs []common.DeliveryID
+    }
+
+    cases := []testCase{
+        {
+            name:    "recipient",
+            filters: listdeliveries.Filters{Recipient: common.Email("recipient-filter@example.com")},
+            wantIDs: []common.DeliveryID{"delivery-recipient"},
+        },
+        {
+            name:    "status",
+            filters: listdeliveries.Filters{Status: deliverydomain.StatusSuppressed},
+            wantIDs: []common.DeliveryID{"delivery-status"},
+        },
+        {
+            name:    "source",
+            filters: listdeliveries.Filters{Source: deliverydomain.SourceOperatorResend},
+            wantIDs: []common.DeliveryID{"delivery-source"},
+        },
+        {
+            name:    "template",
+            filters: listdeliveries.Filters{TemplateID: common.TemplateID("template.filter")},
+            wantIDs: []common.DeliveryID{"delivery-template"},
+        },
+        {
+            name:    "idempotency",
+            filters: listdeliveries.Filters{IdempotencyKey: common.IdempotencyKey("idempotency-filter")},
+            wantIDs: []common.DeliveryID{"delivery-idempotency"},
+        },
+    }
+
+    for _, tt := range cases {
+        tt := tt
+
+        t.Run(tt.name, func(t *testing.T) {
+            t.Parallel()
+
+            store, client := newOperatorStoreForTest(t)
+            seedOperatorFilterDataset(t, client)
+
+            result, err := store.List(context.Background(), listdeliveries.Input{
+                Limit:   10,
+                Filters: tt.filters,
+            })
+            require.NoError(t, err)
+            require.Equal(t, tt.wantIDs, deliveryIDs(result.Items))
+            require.Nil(t, result.NextCursor)
+        })
+    }
+}
+
+func TestOperatorStoreListCursorPaginationUsesCreatedAtDescDeliveryIDDesc(t *testing.T) {
+    t.Parallel()
+
+    store, client := newOperatorStoreForTest(t)
+
+    createdAt := time.Unix(1_775_122_500, 0).UTC()
+    seedDeliveryRecord(t, client, buildStoredDelivery("delivery-a", createdAt, deliverydomain.SourceNotification, common.IdempotencyKey("notification:delivery-a"), deliverydomain.StatusSent))
+    seedDeliveryRecord(t, client, buildStoredDelivery("delivery-c", createdAt, deliverydomain.SourceNotification, common.IdempotencyKey("notification:delivery-c"), deliverydomain.StatusSent))
+    seedDeliveryRecord(t, client, buildStoredDelivery("delivery-b", createdAt, deliverydomain.SourceNotification, common.IdempotencyKey("notification:delivery-b"), deliverydomain.StatusSent))
+
+    firstPage, err := store.List(context.Background(), listdeliveries.Input{Limit: 2})
+    require.NoError(t, err)
+    require.Equal(t, []common.DeliveryID{"delivery-c", "delivery-b"}, deliveryIDs(firstPage.Items))
+    require.NotNil(t, firstPage.NextCursor)
+
+    secondPage, err := store.List(context.Background(), listdeliveries.Input{
+        Limit:  2,
+        Cursor: firstPage.NextCursor,
+    })
+    require.NoError(t, err)
+    require.Equal(t, []common.DeliveryID{"delivery-a"}, deliveryIDs(secondPage.Items))
+    require.Nil(t, secondPage.NextCursor)
+}
+
+func TestOperatorStoreListMergesIdempotencyAcrossSources(t *testing.T) {
+    t.Parallel()
+
+    store, client := newOperatorStoreForTest(t)
+    sharedKey := common.IdempotencyKey("shared-idempotency")
+    seedDeliveryRecord(t, client, buildStoredDelivery("delivery-auth", time.Unix(1_775_122_100, 0).UTC(), deliverydomain.SourceAuthSession, sharedKey, deliverydomain.StatusSuppressed))
+    seedDeliveryRecord(t, client, buildStoredDelivery("delivery-notification", time.Unix(1_775_122_200, 0).UTC(), deliverydomain.SourceNotification, sharedKey, deliverydomain.StatusSent))
+    seedDeliveryRecord(t, client, buildStoredDelivery("delivery-resend", time.Unix(1_775_122_300, 0).UTC(), deliverydomain.SourceOperatorResend, sharedKey, deliverydomain.StatusSent))
+
+    result, err := store.List(context.Background(), listdeliveries.Input{
+        Limit: 10,
+        Filters: listdeliveries.Filters{
+            IdempotencyKey: sharedKey,
+        },
+    })
+    require.NoError(t, err)
+    require.Equal(t, []common.DeliveryID{"delivery-resend", "delivery-notification", "delivery-auth"}, deliveryIDs(result.Items))
+}
+
+func TestOperatorStoreGetDeadLetter(t *testing.T) {
+    t.Parallel()
+
+    store, client := newOperatorStoreForTest(t)
+    record := buildStoredDelivery("delivery-dead-letter", time.Unix(1_775_122_400, 0).UTC(), deliverydomain.SourceNotification, common.IdempotencyKey("notification:delivery-dead-letter"), deliverydomain.StatusDeadLetter)
+    seedDeliveryRecord(t, client, record)
+
+    entry := validDeadLetterEntry(t, record.DeliveryID)
+    payload, err := MarshalDeadLetter(entry)
+    require.NoError(t, err)
+    require.NoError(t, client.Set(context.Background(), Keyspace{}.DeadLetter(record.DeliveryID), payload, DeadLetterTTL).Err())
+
+    got, found, err := store.GetDeadLetter(context.Background(), record.DeliveryID)
+    require.NoError(t, err)
+    require.True(t, found)
+    require.Equal(t, entry, got)
+}
+
+func TestOperatorStoreListAttempts(t *testing.T) {
+    t.Parallel()
+
+    store, client := newOperatorStoreForTest(t)
+    record := buildStoredDelivery("delivery-attempts", time.Unix(1_775_122_410, 0).UTC(), deliverydomain.SourceNotification, common.IdempotencyKey("notification:delivery-attempts"), deliverydomain.StatusFailed)
+    record.AttemptCount = 2
+    failedAt := record.UpdatedAt
+    record.FailedAt = &failedAt
+    require.NoError(t, record.Validate())
+    seedDeliveryRecord(t, client, record)
+
+    firstAttempt := validTerminalAttempt(t, record.DeliveryID)
+    firstAttempt.AttemptNo = 1
+    secondAttempt := validTerminalAttempt(t, record.DeliveryID)
+    secondAttempt.AttemptNo = 2
+    secondAttempt.Status = attempt.StatusProviderRejected
+    payload, err := MarshalAttempt(firstAttempt)
+    require.NoError(t, err)
+    require.NoError(t, client.Set(context.Background(), Keyspace{}.Attempt(record.DeliveryID, 1), payload, AttemptTTL).Err())
+    payload, err = MarshalAttempt(secondAttempt)
+    require.NoError(t, err)
+    require.NoError(t, client.Set(context.Background(), Keyspace{}.Attempt(record.DeliveryID, 2), payload, AttemptTTL).Err())
+
+    got, err := store.ListAttempts(context.Background(), record.DeliveryID, 2)
+    require.NoError(t, err)
+    require.Equal(t, []attempt.Attempt{firstAttempt, secondAttempt}, got)
+}
+
+func TestOperatorStoreCreateResendAtomicallyCreatesCloneState(t *testing.T) {
+    t.Parallel()
+
+    store, client := newOperatorStoreForTest(t)
+
+    createdAt := time.Unix(1_775_122_600, 0).UTC()
+    clone := buildStoredDelivery("delivery-clone", createdAt, deliverydomain.SourceOperatorResend, common.IdempotencyKey("operator:resend:delivery-parent"), deliverydomain.StatusQueued)
+    clone.ResendParentDeliveryID = common.DeliveryID("delivery-parent")
+    clone.AttemptCount = 1
+    require.NoError(t, clone.Validate())
+
+    firstAttempt := validScheduledAttempt(t, clone.DeliveryID)
+    firstAttempt.AttemptNo = 1
+    firstAttempt.ScheduledFor = createdAt
+    require.NoError(t, firstAttempt.Validate())
+
+    deliveryPayload := validDeliveryPayload(t, clone.DeliveryID)
+    input := resenddelivery.CreateResendInput{
+        Delivery:        clone,
+        FirstAttempt:    firstAttempt,
+        DeliveryPayload: &deliveryPayload,
+    }
+
+    require.NoError(t, store.CreateResend(context.Background(), input))
+
+    storedDelivery, found, err := store.GetDelivery(context.Background(), clone.DeliveryID)
+    require.NoError(t, err)
+    require.True(t, found)
+    require.Equal(t, clone, storedDelivery)
+
+    storedPayload, found, err := store.GetDeliveryPayload(context.Background(), clone.DeliveryID)
+    require.NoError(t, err)
+    require.True(t, found)
+    require.Equal(t, deliveryPayload, storedPayload)
+
+    attemptPayload, err := client.Get(context.Background(), Keyspace{}.Attempt(clone.DeliveryID, 1)).Bytes()
+    require.NoError(t, err)
+    decodedAttempt, err := UnmarshalAttempt(attemptPayload)
+    require.NoError(t, err)
+    require.Equal(t, firstAttempt, decodedAttempt)
+
+    scheduledMembers, err := client.ZRange(context.Background(), Keyspace{}.AttemptSchedule(), 0, -1).Result()
+    require.NoError(t, err)
+    require.Equal(t, []string{clone.DeliveryID.String()}, scheduledMembers)
+
+    indexMembers, err := client.ZRange(context.Background(), Keyspace{}.IdempotencyIndex(clone.Source, clone.IdempotencyKey), 0, -1).Result()
+    require.NoError(t, err)
+    require.Equal(t, []string{clone.DeliveryID.String()}, indexMembers)
+
+    _, err = client.Get(context.Background(), Keyspace{}.Idempotency(clone.Source, clone.IdempotencyKey)).Bytes()
+    require.ErrorIs(t, err, redis.Nil)
+}
+
+func newOperatorStoreForTest(t *testing.T) (*OperatorStore, *redis.Client) {
+    t.Helper()
+
+    server := miniredis.RunT(t)
+    client := redis.NewClient(&redis.Options{Addr: server.Addr()})
+    t.Cleanup(func() { require.NoError(t, client.Close()) })
+
+    store, err := NewOperatorStore(client)
+    require.NoError(t, err)
+
+    return store, client
+}
+
+func seedOperatorFilterDataset(t *testing.T, client *redis.Client) {
+    t.Helper()
+
+    seedDeliveryRecord(t, client, func() deliverydomain.Delivery {
+        record := buildStoredDelivery("delivery-recipient", time.Unix(1_775_122_001, 0).UTC(), deliverydomain.SourceNotification, common.IdempotencyKey("notification:delivery-recipient"), deliverydomain.StatusSent)
+        record.Envelope.To = []common.Email{common.Email("recipient-filter@example.com")}
+        require.NoError(t, record.Validate())
+        return record
+    }())
+
+    seedDeliveryRecord(t, client, func() deliverydomain.Delivery {
+        record := buildStoredDelivery("delivery-status", time.Unix(1_775_122_002, 0).UTC(), deliverydomain.SourceAuthSession, common.IdempotencyKey("authsession:delivery-status"), deliverydomain.StatusSuppressed)
+        record.SentAt = nil
+        suppressedAt := record.UpdatedAt
+        record.SuppressedAt = &suppressedAt
+        require.NoError(t, record.Validate())
+        return record
+    }())
+
+    seedDeliveryRecord(t, client, buildStoredDelivery("delivery-source", time.Unix(1_775_122_003, 0).UTC(), deliverydomain.SourceOperatorResend, common.IdempotencyKey("operator:resend:delivery-source"), deliverydomain.StatusSent))
+
+    seedDeliveryRecord(t, client, func() deliverydomain.Delivery {
+        record := buildStoredDelivery("delivery-template", time.Unix(1_775_122_004, 0).UTC(), deliverydomain.SourceNotification, common.IdempotencyKey("notification:delivery-template"), deliverydomain.StatusSent)
+        record.TemplateID = common.TemplateID("template.filter")
+        record.PayloadMode = deliverydomain.PayloadModeTemplate
+        record.Locale = common.Locale("en")
+        record.TemplateVariables = map[string]any{"name": "Pilot"}
+        require.NoError(t, record.Validate())
+        return record
+    }())
+
+    seedDeliveryRecord(t, client, buildStoredDelivery("delivery-idempotency", time.Unix(1_775_122_005, 0).UTC(), deliverydomain.SourceNotification, common.IdempotencyKey("idempotency-filter"), deliverydomain.StatusSent))
+}
+
+func seedDeliveryRecord(t *testing.T, client *redis.Client, record deliverydomain.Delivery) {
+    t.Helper()
+
+    keyspace := Keyspace{}
+    payload, err := MarshalDelivery(record)
+    require.NoError(t, err)
+    require.NoError(t, client.Set(context.Background(), keyspace.Delivery(record.DeliveryID), payload, DeliveryTTL).Err())
+
+    score := CreatedAtScore(record.CreatedAt)
+    for _, indexKey := range keyspace.DeliveryIndexKeys(record) {
+        require.NoError(t, client.ZAdd(context.Background(), indexKey, redis.Z{
+            Score:  score,
+            Member: record.DeliveryID.String(),
+        }).Err())
+    }
+}
+
+func buildStoredDelivery(
+    deliveryID string,
+    createdAt time.Time,
+    source deliverydomain.Source,
+    idempotencyKey common.IdempotencyKey,
+    status deliverydomain.Status,
+) deliverydomain.Delivery {
+    updatedAt := createdAt.Add(time.Minute)
+    record := deliverydomain.Delivery{
+        DeliveryID:  common.DeliveryID(deliveryID),
+        Source:      source,
+        PayloadMode: deliverydomain.PayloadModeRendered,
+        Envelope: deliverydomain.Envelope{
+            To: []common.Email{common.Email("pilot@example.com")},
+        },
+        Content: deliverydomain.Content{
+            Subject:  "Test subject",
+            TextBody: "Test body",
+        },
+        IdempotencyKey: idempotencyKey,
+        Status:         status,
+        CreatedAt:      createdAt,
+        UpdatedAt:      updatedAt,
+    }
+
+    switch status {
+    case deliverydomain.StatusSent:
+        record.AttemptCount = 1
+        record.LastAttemptStatus = attempt.StatusProviderAccepted
+        sentAt := updatedAt
+        record.SentAt = &sentAt
+    case deliverydomain.StatusSuppressed:
+        suppressedAt := updatedAt
+        record.SuppressedAt = &suppressedAt
+    case deliverydomain.StatusFailed:
+        record.AttemptCount = 1
+        record.LastAttemptStatus = attempt.StatusProviderRejected
+        failedAt := updatedAt
+        record.FailedAt = &failedAt
+    case deliverydomain.StatusDeadLetter:
+        record.AttemptCount = 1
+        record.LastAttemptStatus = attempt.StatusTimedOut
+        deadLetteredAt := updatedAt
+        record.DeadLetteredAt = &deadLetteredAt
+    default:
+        record.AttemptCount = 1
+    }
+    if source == deliverydomain.SourceOperatorResend {
+        record.ResendParentDeliveryID = common.DeliveryID("parent-" + deliveryID)
+    }
+    if err := record.Validate(); err != nil {
+        panic(err)
+    }
+
+    return record
+}
+
+func deliveryIDs(records []deliverydomain.Delivery) []common.DeliveryID {
+    result := make([]common.DeliveryID, len(records))
+    for index, record := range records {
+        result[index] = record.DeliveryID
+    }
+
+    return result
+}
diff --git a/mail/internal/adapters/redisstate/render_store.go b/mail/internal/adapters/redisstate/render_store.go
new file mode 100644
index 0000000..ff33e70
--- /dev/null
+++ b/mail/internal/adapters/redisstate/render_store.go
@@ -0,0 +1,74 @@
+package redisstate
+
+import (
+    "context"
+    "errors"
+    "fmt"
+
+    "galaxy/mail/internal/service/renderdelivery"
+
+    "github.com/redis/go-redis/v9"
+)
+
+// RenderStore provides the Redis-backed durable storage used by the
+// render-delivery use case.
+type RenderStore struct {
+    writer *AtomicWriter
+}
+
+// NewRenderStore constructs one Redis-backed render-delivery store.
+func NewRenderStore(client *redis.Client) (*RenderStore, error) {
+    if client == nil {
+        return nil, errors.New("new render store: nil redis client")
+    }
+
+    writer, err := NewAtomicWriter(client)
+    if err != nil {
+        return nil, fmt.Errorf("new render store: %w", err)
+    }
+
+    return &RenderStore{writer: writer}, nil
+}
+
+// MarkRendered stores one successfully materialized template delivery.
+func (store *RenderStore) MarkRendered(ctx context.Context, input renderdelivery.MarkRenderedInput) error {
+    if store == nil || store.writer == nil {
+        return errors.New("mark rendered in render store: nil store")
+    }
+    if ctx == nil {
+        return errors.New("mark rendered in render store: nil context")
+    }
+    if err := input.Validate(); err != nil {
+        return fmt.Errorf("mark rendered in render store: %w", err)
+    }
+
+    if err := store.writer.MarkRendered(ctx, MarkRenderedInput{
+        Delivery: input.Delivery,
+    }); err != nil {
+        return fmt.Errorf("mark rendered in render store: %w", err)
+    }
+
+    return nil
+}
+
+// MarkRenderFailed stores one classified terminal render failure.
+func (store *RenderStore) MarkRenderFailed(ctx context.Context, input renderdelivery.MarkRenderFailedInput) error {
+    if store == nil || store.writer == nil {
+        return errors.New("mark render failed in render store: nil store")
+    }
+    if ctx == nil {
+        return errors.New("mark render failed in render store: nil context")
+    }
+    if err := input.Validate(); err != nil {
+        return fmt.Errorf("mark render failed in render store: %w", err)
+    }
+
+    if err := store.writer.MarkRenderFailed(ctx, MarkRenderFailedInput{
+        Delivery: input.Delivery,
+        Attempt:  input.Attempt,
+    }); err != nil {
+        return fmt.Errorf("mark render failed in render store: %w", err)
+    }
+
+    return nil
+}
diff --git a/mail/internal/adapters/redisstate/stream_offset_store.go b/mail/internal/adapters/redisstate/stream_offset_store.go
new file mode 100644
index 0000000..956f9c5
--- /dev/null
+++ b/mail/internal/adapters/redisstate/stream_offset_store.go
@@ -0,0 +1,79 @@
+package redisstate
+
+import (
+    "context"
+    "errors"
+    "fmt"
+    "time"
+
+    "github.com/redis/go-redis/v9"
+)
+
+// StreamOffsetStore provides the Redis-backed storage used for persisted
+// plain-XREAD consumer progress.
+type StreamOffsetStore struct {
+    client *redis.Client
+    keys   Keyspace
+}
+
+// NewStreamOffsetStore constructs one Redis-backed stream-offset store.
+func NewStreamOffsetStore(client *redis.Client) (*StreamOffsetStore, error) {
+    if client == nil {
+        return nil, errors.New("new stream offset store: nil redis client")
+    }
+
+    return &StreamOffsetStore{
+        client: client,
+        keys:   Keyspace{},
+    }, nil
+}
+
+// Load returns the last processed entry id for stream when one is stored.
+func (store *StreamOffsetStore) Load(ctx context.Context, stream string) (string, bool, error) {
+    if store == nil || store.client == nil {
+        return "", false, errors.New("load stream offset: nil store")
+    }
+    if ctx == nil {
+        return "", false, errors.New("load stream offset: nil context")
+    }
+
+    payload, err := store.client.Get(ctx, store.keys.StreamOffset(stream)).Bytes()
+    switch {
+    case errors.Is(err, redis.Nil):
+        return "", false, nil
+    case err != nil:
+        return "", false, fmt.Errorf("load stream offset: %w", err)
+    }
+
+    offset, err := UnmarshalStreamOffset(payload)
+    if err != nil {
+        return "", false, fmt.Errorf("load stream offset: %w", err)
+    }
+
+    return offset.LastProcessedEntryID, true, nil
+}
+
+// Save stores the last processed entry id for stream.
+func (store *StreamOffsetStore) Save(ctx context.Context, stream string, entryID string) error {
+    if store == nil || store.client == nil {
+        return errors.New("save stream offset: nil store")
+    }
+    if ctx == nil {
+        return errors.New("save stream offset: nil context")
+    }
+
+    offset := StreamOffset{
+        Stream:               stream,
+        LastProcessedEntryID: entryID,
+        UpdatedAt:            time.Now().UTC().Truncate(time.Millisecond),
+    }
+    payload, err := MarshalStreamOffset(offset)
+    if err != nil {
+        return fmt.Errorf("save stream offset: %w", err)
+    }
+    if err := store.client.Set(ctx, store.keys.StreamOffset(stream), payload, 0).Err(); err != nil {
+        return fmt.Errorf("save stream offset: %w", err)
+    }
+
+    return nil
+}
diff --git a/mail/internal/adapters/smtp/provider.go b/mail/internal/adapters/smtp/provider.go
new file mode 100644
index 0000000..668863b
--- /dev/null
+++ b/mail/internal/adapters/smtp/provider.go
@@ -0,0 +1,440 @@
+// Package smtp provides the SMTP-backed provider adapter used by Mail
+// Service.
+package smtp
+
+import (
+    "bytes"
+    "context"
+    "crypto/tls"
+    "errors"
+    "fmt"
+    "net"
+    stdmail "net/mail"
+    "strconv"
+    "strings"
+    "time"
+
+    "galaxy/mail/internal/ports"
+
+    gomail "github.com/wneessen/go-mail"
+)
+
+const providerName = "smtp"
+
+// Config stores the SMTP provider connection settings.
+type Config struct {
+    // Addr stores the SMTP server network address.
+    Addr string
+
+    // Username stores the optional SMTP authentication username.
+    Username string
+
+    // Password stores the optional SMTP authentication password.
+    Password string
+
+    // FromEmail stores the envelope sender mailbox.
+    FromEmail string
+
+    // FromName stores the optional display name of the sender.
+    FromName string
+
+    // Timeout stores the maximum SMTP dial-and-send window enforced by the
+    // adapter when the caller does not provide an earlier deadline.
+    Timeout time.Duration
+
+    // InsecureSkipVerify disables SMTP certificate verification. This is meant
+    // only for local development and black-box tests with self-signed capture
+    // servers.
+    InsecureSkipVerify bool
+
+    // TLSConfig stores the optional TLS client configuration override used by
+    // tests. Production wiring leaves it nil and uses secure defaults.
+    TLSConfig *tls.Config
+}
+
+// Provider stores the SMTP-backed delivery adapter.
+type Provider struct {
+    client    *gomail.Client
+    fromEmail string
+    fromName  string
+    timeout   time.Duration
+}
+
+// New constructs one SMTP-backed provider and validates cfg.
+func New(cfg Config) (*Provider, error) {
+    if err := cfg.Validate(); err != nil {
+        return nil, fmt.Errorf("new smtp provider: %w", err)
+    }
+
+    host, portText, err := net.SplitHostPort(strings.TrimSpace(cfg.Addr))
+    if err != nil {
+        return nil, fmt.Errorf("new smtp provider: split smtp addr: %w", err)
+    }
+    port, err := strconv.Atoi(portText)
+    if err != nil {
+        return nil, fmt.Errorf("new smtp provider: parse smtp port: %w", err)
+    }
+
+    options := []gomail.Option{
+        gomail.WithPort(port),
+        gomail.WithTimeout(cfg.Timeout),
+        gomail.WithTLSPolicy(gomail.TLSMandatory),
+    }
+    if cfg.TLSConfig != nil {
+        options = append(options, gomail.WithTLSConfig(cfg.TLSConfig))
+    } else if cfg.InsecureSkipVerify {
+        options = append(options, gomail.WithTLSConfig(&tls.Config{
+            MinVersion:         tls.VersionTLS12,
+            ServerName:         host,
+            InsecureSkipVerify: true, //nolint:gosec // Explicit opt-in for local integration scenarios only.
+        }))
+    } else {
+        options = append(options, gomail.WithTLSConfig(&tls.Config{
+            MinVersion: tls.VersionTLS12,
+            ServerName: host,
+        }))
+    }
+    if cfg.Username != "" {
+        options = append(options,
+            gomail.WithUsername(cfg.Username),
+            gomail.WithPassword(cfg.Password),
+            gomail.WithSMTPAuth(gomail.SMTPAuthAutoDiscover),
+        )
+    }
+
+    client, err := gomail.NewClient(host, options...)
+    if err != nil {
+        return nil, fmt.Errorf("new smtp provider: %w", err)
+    }
+
+    return &Provider{
+        client:    client,
+        fromEmail: cfg.FromEmail,
+        fromName:  cfg.FromName,
+        timeout:   cfg.Timeout,
+    }, nil
+}
+
+// Send attempts one outbound SMTP delivery and returns a classified provider
+// outcome whenever the interaction reached a stable SMTP result.
+func (provider *Provider) Send(ctx context.Context, message ports.Message) (ports.Result, error) {
+    switch {
+    case ctx == nil:
+        return ports.Result{}, errors.New("send with smtp provider: nil context")
+    case provider == nil || provider.client == nil:
+        return ports.Result{}, errors.New("send with smtp provider: nil provider")
+    }
+    if err := message.Validate(); err != nil {
+        return ports.Result{}, fmt.Errorf("send with smtp provider: %w", err)
+    }
+
+    if err := ctx.Err(); err != nil {
+        if errors.Is(err, context.DeadlineExceeded) {
+            return newResult(ports.ClassificationTransientFailure, summaryFields{
+                Phase: "context",
+            }, map[string]string{
+                "phase": "context",
+                "error": "deadline_exceeded",
+            })
+        }
+
+        return ports.Result{}, fmt.Errorf("send with smtp provider: %w", err)
+    }
+
+    msg, err := provider.buildMessage(message)
+    if err != nil {
+        return newResult(ports.ClassificationPermanentFailure, summaryFields{
+            Phase: "build",
+        }, map[string]string{
+            "phase": "build",
+            "error": classifyLocalBuildError(err),
+        })
+    }
+
+    sendCtx, cancel := provider.sendContext(ctx)
+    defer cancel()
+
+    err = provider.client.DialAndSendWithContext(sendCtx, msg)
+    if err == nil {
+        return newResult(ports.ClassificationAccepted, summaryFields{}, nil)
+    }
+
+    return provider.classifySendError(err)
+}
+
+// Close releases SMTP client resources.
+func (provider *Provider) Close() error {
+    if provider == nil || provider.client == nil {
+        return nil
+    }
+
+    provider.client.Close()
+    return nil
+}
+
+// Validate reports whether cfg stores a complete SMTP provider configuration.
+func (cfg Config) Validate() error {
+    host, port, err := net.SplitHostPort(strings.TrimSpace(cfg.Addr))
+    switch {
+    case err != nil || port == "":
+        return fmt.Errorf("smtp addr %q must use host:port form", cfg.Addr)
+    case host != "" && strings.Contains(host, " "):
+        return fmt.Errorf("smtp addr %q must use host:port form", cfg.Addr)
+    case cfg.Timeout <= 0:
+        return fmt.Errorf("smtp timeout must be positive")
+    case strings.TrimSpace(cfg.Username) == "" && strings.TrimSpace(cfg.Password) != "":
+        return fmt.Errorf("smtp username and password must be configured together")
+    case strings.TrimSpace(cfg.Username) != "" && strings.TrimSpace(cfg.Password) == "":
+        return fmt.Errorf("smtp username and password must be configured together")
+    }
+
+    parsed, err := stdmail.ParseAddress(strings.TrimSpace(cfg.FromEmail))
+    if err != nil || parsed == nil || parsed.Name != "" || parsed.Address != strings.TrimSpace(cfg.FromEmail) {
+        return fmt.Errorf("smtp from email %q must be a single valid email address", cfg.FromEmail)
+    }
+
+    return nil
+}
+
+func (provider *Provider) buildMessage(message ports.Message) (*gomail.Msg, error) {
+    msg := gomail.NewMsg()
+    msg.EnvelopeFrom(provider.fromEmail)
+
+    switch strings.TrimSpace(provider.fromName) {
+    case "":
+        if err := msg.From(provider.fromEmail); err != nil {
+            return nil, fmt.Errorf("set from header: %w", err)
+        }
+    default:
+        if err := msg.FromFormat(provider.fromName, provider.fromEmail); err != nil {
+            return nil, fmt.Errorf("set from header: %w", err)
+        }
+    }
+
+    msg.SetBodyString(gomail.TypeTextPlain, message.Content.TextBody)
+    if message.Content.HTMLBody != "" {
+        msg.AddAlternativeString(gomail.TypeTextHTML, message.Content.HTMLBody)
+    }
+    msg.Subject(message.Content.Subject)
+
+    for _, address := range message.Envelope.To {
+        if err := msg.AddTo(address.String()); err != nil {
+            return nil, fmt.Errorf("add to recipient: %w", err)
+        }
+    }
+    for _, address := range message.Envelope.Cc {
+        if err := msg.AddCc(address.String()); err != nil {
+            return nil, fmt.Errorf("add cc recipient: %w", err)
+        }
+    }
+    for _, address := range message.Envelope.Bcc {
+        if err := msg.AddBcc(address.String()); err != nil {
+            return nil, fmt.Errorf("add bcc recipient: %w", err)
+        }
+    }
+    for _, address := range message.Envelope.ReplyTo {
+        if err := msg.ReplyTo(address.String()); err != nil {
+            return nil, fmt.Errorf("add reply-to recipient: %w", err)
+        }
+    }
+    for _, attachment := range message.Attachments {
+        if err := attachment.Validate(); err != nil {
+            return nil, fmt.Errorf("attach file %q: %w", attachment.Metadata.Filename, err)
+        }
+        if err := msg.AttachReader(
+            attachment.Metadata.Filename,
+            bytes.NewReader(attachment.Content),
+            gomail.WithFileContentType(gomail.ContentType(attachment.Metadata.ContentType)),
+        ); err != nil {
+            return nil, fmt.Errorf("attach file %q: %w", attachment.Metadata.Filename, err)
+        }
+    }
+
+    return msg, nil
+}
+
+func (provider *Provider) classifySendError(err error) (ports.Result, error) {
+    switch {
+    case errors.Is(err, context.DeadlineExceeded):
+        return newResult(ports.ClassificationTransientFailure, summaryFields{
+            Phase: "send",
+        }, map[string]string{
+            "phase": "send",
+            "error": "deadline_exceeded",
+        })
+    case strings.Contains(strings.ToLower(err.Error()), "starttls"):
+        return newResult(ports.ClassificationPermanentFailure, summaryFields{
+            Phase: "tls",
+        }, map[string]string{
+            "phase": "tls",
+            "error": "starttls_required",
+        })
+    }
+
+    var sendErr *gomail.SendError
+    if errors.As(err, &sendErr) {
+        codeText := ""
+        if code := sendErr.ErrorCode(); code > 0 {
+            codeText = strconv.Itoa(code)
+        }
+        phase := smtpReasonPhase(sendErr, err)
+
+        details := map[string]string{
+            "phase": phase,
+            "error": sanitizeDetailValue(strings.ToLower(sendErr.Reason.String())),
+        }
+        if codeText != "" {
+            details["smtp_code"] = codeText
+        }
+
+        switch {
+        case sendErr.ErrorCode() >= 500:
+            return newResult(ports.ClassificationPermanentFailure, summaryFields{
+                Phase:    phase,
+                SMTPCode: codeText,
+            }, details)
+        case sendErr.ErrorCode() >= 400:
+            return newResult(ports.ClassificationTransientFailure, summaryFields{
+                Phase:    phase,
+                SMTPCode: codeText,
+            }, details)
+        case sendErr.IsTemp():
+            return newResult(ports.ClassificationTransientFailure, summaryFields{
+                Phase: phase,
+            }, details)
+        default:
+            return newResult(ports.ClassificationPermanentFailure, summaryFields{
+                Phase: phase,
+            }, details)
+        }
+    }
+
+    var netErr net.Error
+    if errors.As(err, &netErr) {
+        return newResult(ports.ClassificationTransientFailure, summaryFields{
+            Phase: "dial",
+        }, map[string]string{
+            "phase":   "dial",
+            "net_op":  "smtp",
+            "net_err": sanitizeDetailValue(strings.ToLower(netErr.Error())),
+        })
+    }
+
+    return newResult(ports.ClassificationPermanentFailure, summaryFields{
+        Phase: "send",
+    }, map[string]string{
+        "phase": "send",
+        "error": sanitizeDetailValue(strings.ToLower(err.Error())),
+    })
+}
+
+func (provider *Provider) sendContext(ctx context.Context) (context.Context, context.CancelFunc) {
+    if deadline, ok := ctx.Deadline(); ok {
+        remaining := time.Until(deadline)
+        if remaining <= provider.timeout {
+            return ctx, func() {}
+        }
+    }
+
+    return context.WithTimeout(ctx, provider.timeout)
+}
+
+type summaryFields struct {
+    Phase    string
+    SMTPCode string
+}
+
+func newResult(classification ports.Classification, fields summaryFields, details map[string]string) (ports.Result, error) {
+    summary, err := ports.BuildSafeSummary(ports.SummaryFields{
+        Provider: providerName,
+        Result:   string(classification),
+        Phase:    fields.Phase,
+        SMTPCode: fields.SMTPCode,
+    })
+    if err != nil {
+        return ports.Result{}, fmt.Errorf("build smtp provider summary: %w", err)
+    }
+
+    result := ports.Result{
+        Classification: classification,
+        Summary:        summary,
+        Details:        ports.CloneDetails(details),
+    }
+    if err := result.Validate(); err != nil {
+        return ports.Result{}, fmt.Errorf("build smtp provider result: %w", err)
+    }
+
+    return result, nil
+}
+
+func classifyLocalBuildError(err error) string {
+    return sanitizeDetailValue(strings.ToLower(err.Error()))
+}
+
+func smtpReasonPhase(sendErr *gomail.SendError, err error) string {
+    if sendErr == nil {
+        return "send"
+    }
+
+    switch sendErr.Reason {
+    case gomail.ErrConnCheck:
+        return "dial"
+    case gomail.ErrSMTPMailFrom:
+        return "mail_from"
+    case gomail.ErrSMTPRcptTo:
+        return "rcpt_to"
+    case gomail.ErrSMTPData:
+        return "data"
+    case gomail.ErrSMTPDataClose:
+        return "data"
+    case gomail.ErrSMTPReset:
+        return "reset"
+    case gomail.ErrWriteContent:
+        return "build"
+    case gomail.ErrGetSender, gomail.ErrGetRcpts:
+        return "build"
+    case gomail.ErrNoUnencoded:
+        return "build"
+    default:
+        lower := strings.ToLower(err.Error())
+        switch {
+        case strings.Contains(lower, "starttls"):
+            return "tls"
+        case strings.Contains(lower, "auth"):
+            return "auth"
+        default:
+            return "send"
+        }
+    }
+}
+
+func sanitizeDetailValue(value string) string {
+    value = strings.TrimSpace(value)
+    if value == "" {
+        return "unknown"
+    }
+
+    var builder strings.Builder
+    for _, r := range value {
+        if r > 0x7f {
+            builder.WriteByte('_')
+            continue
+        }
+        switch {
+        case r >= 'a' && r <= 'z':
+            builder.WriteRune(r)
+        case r >= '0' && r <= '9':
+            builder.WriteRune(r)
+        case r == '.', r == '_', r == '-':
+            builder.WriteRune(r)
+        default:
+            builder.WriteByte('_')
+        }
+    }
+
+    if builder.Len() == 0 {
+        return "unknown"
+    }
+
+    return builder.String()
+}
diff --git a/mail/internal/adapters/smtp/provider_test.go b/mail/internal/adapters/smtp/provider_test.go
new file mode 100644
index 0000000..abff8d7
--- /dev/null
+++ b/mail/internal/adapters/smtp/provider_test.go
@@ -0,0 +1,453 @@
+package smtp
+
+import (
+    "bytes"
+    "context"
+    "crypto/rand"
+    "crypto/rsa"
+    "crypto/tls"
+    "crypto/x509"
+    "crypto/x509/pkix"
+    "encoding/pem"
+    "io"
+    "math/big"
+    "net"
+    "strings"
+    "sync"
+    "testing"
+    "time"
+
+    "galaxy/mail/internal/domain/common"
+    deliverydomain "galaxy/mail/internal/domain/delivery"
+    "galaxy/mail/internal/ports"
+
+    "github.com/stretchr/testify/require"
+)
+
+func TestProviderBuildMessageIncludesHeadersBodiesAndAttachments(t *testing.T) {
+    t.Parallel()
+
+    provider := newTestProvider(t)
+    message := testMessage(t)
+
+    msg, err := provider.buildMessage(message)
+    require.NoError(t, err)
+
+    var buffer bytes.Buffer
+    _, err = msg.WriteTo(&buffer)
+    require.NoError(t, err)
+
+    payload := buffer.String()
+    require.Contains(t, payload, "From: \"Galaxy Mail\" ")
+    require.Contains(t, payload, "To: ")
+    require.Contains(t, payload, "Cc: ")
+    require.Contains(t, payload, "Reply-To: ")
+    require.Contains(t, payload, "Subject: Turn update")
+    require.Contains(t, payload, "multipart/mixed")
+    require.Contains(t, payload, "multipart/alternative")
+    require.Contains(t, payload, "text/plain")
+    require.Contains(t, payload, "text/html")
+    require.Contains(t, payload, "guide.txt")
+    require.Contains(t, payload, "charset=utf-8")
+    require.NotContains(t, payload, "\nBcc:")
+}
+
+func TestProviderSendClassifiesAccepted(t *testing.T) {
+    t.Parallel()
+
+    server := startSMTPTestServer(t, smtpTestServerConfig{
+        supportsSTARTTLS: true,
+        finalDataReply:   "250 2.0.0 accepted",
+    })
+
+    provider := newLiveProvider(t, server.addr)
+    result, err := provider.Send(context.Background(), testMessage(t))
+    require.NoError(t, err)
+    require.Equal(t, ports.ClassificationAccepted, result.Classification)
+    require.Equal(t, "provider=smtp result=accepted", result.Summary)
+    require.Contains(t, server.data(), "Subject: Turn update")
+    require.NotContains(t, server.data(), "\nBcc:")
+}
+
+func TestProviderSendClassifiesTransientSMTPFailure(t *testing.T) {
+    t.Parallel()
+
+    server := startSMTPTestServer(t, smtpTestServerConfig{
+        supportsSTARTTLS: true,
+        finalDataReply:   "451 4.3.0 temporary_failure",
+    })
+
+    provider := newLiveProvider(t, server.addr)
+    result, err := provider.Send(context.Background(), testMessage(t))
+    require.NoError(t, err)
+    require.Equal(t, ports.ClassificationTransientFailure, result.Classification)
+    require.Contains(t, result.Summary, "provider=smtp")
+    require.Contains(t, result.Summary, "result=transient_failure")
+    require.Contains(t, result.Summary, "phase=data")
+    require.Contains(t, result.Summary, "smtp_code=451")
+}
+
+func TestProviderSendClassifiesPermanentSMTPFailure(t *testing.T) {
+    t.Parallel()
+
+    server := startSMTPTestServer(t, smtpTestServerConfig{
+        supportsSTARTTLS: true,
+        finalDataReply:   "550 5.7.1 permanent_failure",
+    })
+
+    provider := newLiveProvider(t, server.addr)
+    result, err := provider.Send(context.Background(), testMessage(t))
+    require.NoError(t, err)
+    require.Equal(t, ports.ClassificationPermanentFailure, result.Classification)
+    require.Contains(t, result.Summary, "provider=smtp")
+    require.Contains(t, result.Summary, "result=permanent_failure")
+    require.Contains(t, result.Summary, "phase=data")
+    require.Contains(t, result.Summary, "smtp_code=550")
+}
+
+func TestProviderSendClassifiesMissingSTARTTLSAsPermanentFailure(t *testing.T) {
+    t.Parallel()
+
+    server := startSMTPTestServer(t, smtpTestServerConfig{
+        supportsSTARTTLS: false,
+        finalDataReply:   "250 2.0.0 accepted",
+    })
+
+    provider := newLiveProvider(t, server.addr)
+    result, err := provider.Send(context.Background(), testMessage(t))
+    require.NoError(t, err)
+    require.Equal(t, ports.ClassificationPermanentFailure, result.Classification)
+    require.Contains(t, result.Summary, "provider=smtp")
+    require.Contains(t, result.Summary, "result=permanent_failure")
+    require.Contains(t, result.Summary, "phase=tls")
+}
+
+func TestProviderSendClassifiesExpiredDeadlineAsTransientFailure(t *testing.T) {
+    t.Parallel()
+
+    provider := newTestProvider(t)
+
+    ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(-time.Second))
+    defer cancel()
+
+    result, err := provider.Send(ctx, testMessage(t))
+    require.NoError(t, err)
+    require.Equal(t, ports.ClassificationTransientFailure, result.Classification)
+    require.Contains(t, result.Summary, "result=transient_failure")
+    require.Contains(t, result.Summary, "phase=context")
+}
+
+func TestNewRejectsUnpairedAuthConfiguration(t *testing.T) {
+    t.Parallel()
+
+    _, err := New(Config{
+        Addr:      "127.0.0.1:2525",
+        Username:  "mailer",
+        FromEmail: "noreply@example.com",
+        Timeout:   time.Second,
+    })
+    require.Error(t, err)
+    require.Contains(t, err.Error(), "smtp username and password")
+}
+
+func newTestProvider(t *testing.T) *Provider {
+    t.Helper()
+
+    provider, err := New(Config{
+        Addr:      "127.0.0.1:2525",
+        FromEmail: "noreply@example.com",
+        FromName:  "Galaxy Mail",
+        Timeout:   15 * time.Second,
+        TLSConfig: &tls.Config{
+            ServerName:         "localhost",
+            InsecureSkipVerify: true, //nolint:gosec // test-only self-signed SMTP server.
+        },
+    })
+    require.NoError(t, err)
+    t.Cleanup(func() {
+        require.NoError(t, provider.Close())
+    })
+
+    return provider
+}
+
+func newLiveProvider(t *testing.T, addr string) *Provider {
+    t.Helper()
+
+    provider, err := New(Config{
+        Addr:      addr,
+        FromEmail: "noreply@example.com",
+        FromName:  "Galaxy Mail",
+        Timeout:   5 * time.Second,
+        TLSConfig: &tls.Config{
+            ServerName:         "localhost",
+            InsecureSkipVerify: true, //nolint:gosec // test-only self-signed SMTP server.
+        },
+    })
+    require.NoError(t, err)
+    t.Cleanup(func() {
+        require.NoError(t, provider.Close())
+    })
+
+    return provider
+}
+
+func testMessage(t *testing.T) ports.Message {
+    t.Helper()
+
+    message := ports.Message{
+        Envelope: deliverydomain.Envelope{
+            To:      []common.Email{common.Email("pilot@example.com")},
+            Cc:      []common.Email{common.Email("copilot@example.com")},
+            Bcc:     []common.Email{common.Email("ops@example.com")},
+            ReplyTo: []common.Email{common.Email("reply@example.com")},
+        },
+        Content: deliverydomain.Content{
+            Subject:  "Turn update",
+            TextBody: "Turn 54 is ready.",
+            HTMLBody: "

<p>Turn 54 is ready.</p>

", + }, + Attachments: []ports.Attachment{ + { + Metadata: common.AttachmentMetadata{ + Filename: "guide.txt", + ContentType: "text/plain; charset=utf-8", + SizeBytes: int64(len([]byte("read me"))), + }, + Content: []byte("read me"), + }, + }, + } + require.NoError(t, message.Validate()) + + return message +} + +type smtpTestServerConfig struct { + supportsSTARTTLS bool + finalDataReply string +} + +type smtpTestServer struct { + addr string + listener net.Listener + tlsConfig *tls.Config + + mu sync.Mutex + conn net.Conn + payload strings.Builder +} + +func startSMTPTestServer(t *testing.T, cfg smtpTestServerConfig) *smtpTestServer { + t.Helper() + + certificate := newTestCertificate(t) + listener, err := net.Listen("tcp", "127.0.0.1:0") + require.NoError(t, err) + + server := &smtpTestServer{ + addr: listener.Addr().String(), + listener: listener, + tlsConfig: &tls.Config{ + Certificates: []tls.Certificate{certificate}, + MinVersion: tls.VersionTLS12, + }, + } + + done := make(chan struct{}) + go func() { + defer close(done) + + conn, err := listener.Accept() + if err != nil { + return + } + server.mu.Lock() + server.conn = conn + server.mu.Unlock() + defer func() { + _ = conn.Close() + }() + + server.serveConnection(conn, cfg) + }() + + t.Cleanup(func() { + server.mu.Lock() + if server.conn != nil { + _ = server.conn.Close() + } + server.mu.Unlock() + _ = listener.Close() + <-done + }) + + return server +} + +func (server *smtpTestServer) data() string { + server.mu.Lock() + defer server.mu.Unlock() + return server.payload.String() +} + +func (server *smtpTestServer) serveConnection(conn net.Conn, cfg smtpTestServerConfig) { + reader := newSMTPLineReader(conn) + writer := newSMTPLineWriter(conn) + writer.writeLine("220 localhost ESMTP") + + tlsActive := false + for { + line, err := reader.readLine() + if err != nil { + return + } + + command := strings.ToUpper(line) + switch { + case strings.HasPrefix(command, "EHLO "), strings.HasPrefix(command, "HELO "): + if 
cfg.supportsSTARTTLS && !tlsActive {
+				writer.writeLines(
+					"250-localhost",
+					"250-8BITMIME",
+					"250-STARTTLS",
+					"250 SMTPUTF8",
+				)
+				continue
+			}
+			writer.writeLines(
+				"250-localhost",
+				"250-8BITMIME",
+				"250 SMTPUTF8",
+			)
+		case command == "STARTTLS":
+			writer.writeLine("220 Ready to start TLS")
+			tlsConn := tls.Server(conn, server.tlsConfig)
+			if err := tlsConn.Handshake(); err != nil {
+				return
+			}
+			conn = tlsConn
+			server.mu.Lock()
+			server.conn = conn
+			server.mu.Unlock()
+			reader = newSMTPLineReader(conn)
+			writer = newSMTPLineWriter(conn)
+			tlsActive = true
+		case strings.HasPrefix(command, "MAIL FROM:"):
+			writer.writeLine("250 2.1.0 Ok")
+		case strings.HasPrefix(command, "RCPT TO:"):
+			writer.writeLine("250 2.1.5 Ok")
+		case command == "DATA":
+			writer.writeLine("354 End data with <CR><LF>.<CR><LF>")
+
+			var builder strings.Builder
+			for {
+				dataLine, err := reader.readRawLine()
+				if err != nil {
+					return
+				}
+				if dataLine == ".\r\n" {
+					break
+				}
+				builder.WriteString(dataLine)
+			}
+
+			server.mu.Lock()
+			server.payload.WriteString(builder.String())
+			server.mu.Unlock()
+
+			writer.writeLine(cfg.finalDataReply)
+		case command == "RSET":
+			writer.writeLine("250 2.0.0 Ok")
+		case command == "QUIT":
+			writer.writeLine("221 2.0.0 Bye")
+			return
+		default:
+			writer.writeLine("250 2.0.0 Ok")
+		}
+	}
+}
+
+type smtpLineReader struct {
+	reader *bytes.Buffer
+	conn   net.Conn
+}
+
+func newSMTPLineReader(conn net.Conn) *smtpLineReader {
+	return &smtpLineReader{conn: conn}
+}
+
+func (reader *smtpLineReader) readLine() (string, error) {
+	line, err := reader.readRawLine()
+	if err != nil {
+		return "", err
+	}
+	return strings.TrimSuffix(strings.TrimSuffix(line, "\n"), "\r"), nil
+}
+
+func (reader *smtpLineReader) readRawLine() (string, error) {
+	var buffer bytes.Buffer
+	tmp := make([]byte, 1)
+	for {
+		_, err := reader.conn.Read(tmp)
+		if err != nil {
+			return "", err
+		}
+		buffer.WriteByte(tmp[0])
+		if tmp[0] == '\n' {
+			return buffer.String(), nil
+		}
+	}
+}
+
+type
smtpLineWriter struct { + conn net.Conn +} + +func newSMTPLineWriter(conn net.Conn) *smtpLineWriter { + return &smtpLineWriter{conn: conn} +} + +func (writer *smtpLineWriter) writeLine(line string) { + _, _ = io.WriteString(writer.conn, line+"\r\n") +} + +func (writer *smtpLineWriter) writeLines(lines ...string) { + for _, line := range lines { + writer.writeLine(line) + } +} + +func newTestCertificate(t *testing.T) tls.Certificate { + t.Helper() + + privateKey, err := rsa.GenerateKey(rand.Reader, 2048) + require.NoError(t, err) + + template := x509.Certificate{ + SerialNumber: big.NewInt(1), + Subject: pkix.Name{ + CommonName: "localhost", + }, + NotBefore: time.Now().Add(-time.Hour), + NotAfter: time.Now().Add(time.Hour), + KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature, + ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth}, + BasicConstraintsValid: true, + DNSNames: []string{"localhost"}, + IPAddresses: []net.IP{net.ParseIP("127.0.0.1")}, + } + + der, err := x509.CreateCertificate(rand.Reader, &template, &template, &privateKey.PublicKey, privateKey) + require.NoError(t, err) + + certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}) + keyPEM := pem.EncodeToMemory(&pem.Block{ + Type: "RSA PRIVATE KEY", + Bytes: x509.MarshalPKCS1PrivateKey(privateKey), + }) + + certificate, err := tls.X509KeyPair(certPEM, keyPEM) + require.NoError(t, err) + return certificate +} diff --git a/mail/internal/adapters/stubprovider/provider.go b/mail/internal/adapters/stubprovider/provider.go new file mode 100644 index 0000000..d17d2d3 --- /dev/null +++ b/mail/internal/adapters/stubprovider/provider.go @@ -0,0 +1,211 @@ +// Package stubprovider provides the deterministic local provider used by Mail +// Service tests and local bootstrap flows. 
+package stubprovider + +import ( + "context" + "errors" + "fmt" + "sync" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/ports" +) + +const providerName = "stub" + +// ScriptedOutcome stores one queued stub-provider result consumed by the next +// Send call. +type ScriptedOutcome struct { + // Classification stores the stable provider result classification. + Classification ports.Classification + + // Script stores the optional stable script label included in the redacted + // provider summary. + Script string + + // Details stores optional in-memory-only diagnostic fields associated with + // the scripted result. + Details map[string]string +} + +// Validate reports whether outcome contains one supported queued stub result. +func (outcome ScriptedOutcome) Validate() error { + if !outcome.Classification.IsKnown() { + return fmt.Errorf("stub scripted classification %q is unsupported", outcome.Classification) + } + if outcome.Script != "" { + if _, err := ports.BuildSafeSummary(ports.SummaryFields{ + Provider: providerName, + Result: string(outcome.Classification), + Script: outcome.Script, + }); err != nil { + return fmt.Errorf("stub scripted outcome: %w", err) + } + } + for key, value := range outcome.Details { + result := ports.Result{ + Classification: outcome.Classification, + Summary: "provider=stub result=accepted", + Details: map[string]string{ + key: value, + }, + } + if err := result.Validate(); err != nil { + return fmt.Errorf("stub scripted details: %w", err) + } + } + + return nil +} + +// Provider stores one deterministic in-memory provider implementation. +type Provider struct { + mu sync.Mutex + queue []ScriptedOutcome + inputs []ports.Message + closed bool +} + +// New constructs the deterministic stub provider. 
+func New(initial ...ScriptedOutcome) (*Provider, error) { + provider := &Provider{} + if err := provider.Enqueue(initial...); err != nil { + return nil, fmt.Errorf("new stub provider: %w", err) + } + + return provider, nil +} + +// Send records message and returns the next scripted outcome, or a stable +// accepted outcome when no script remains. +func (provider *Provider) Send(ctx context.Context, message ports.Message) (ports.Result, error) { + switch { + case ctx == nil: + return ports.Result{}, errors.New("send with stub provider: nil context") + case provider == nil: + return ports.Result{}, errors.New("send with stub provider: nil provider") + } + if err := message.Validate(); err != nil { + return ports.Result{}, fmt.Errorf("send with stub provider: %w", err) + } + + provider.mu.Lock() + defer provider.mu.Unlock() + + if provider.closed { + return ports.Result{}, errors.New("send with stub provider: provider is closed") + } + + provider.inputs = append(provider.inputs, cloneMessage(message)) + + if len(provider.queue) == 0 { + return scriptedResult(ScriptedOutcome{ + Classification: ports.ClassificationAccepted, + }) + } + + next := provider.queue[0] + provider.queue = provider.queue[1:] + return scriptedResult(next) +} + +// Close marks the provider as closed. Future Send calls fail fast. +func (provider *Provider) Close() error { + if provider == nil { + return nil + } + + provider.mu.Lock() + defer provider.mu.Unlock() + provider.closed = true + return nil +} + +// Enqueue appends scripted outcomes to the stub queue. 
+func (provider *Provider) Enqueue(outcomes ...ScriptedOutcome) error { + if provider == nil { + return errors.New("enqueue stub provider outcomes: nil provider") + } + + provider.mu.Lock() + defer provider.mu.Unlock() + + for index, outcome := range outcomes { + if err := outcome.Validate(); err != nil { + return fmt.Errorf("enqueue stub provider outcomes[%d]: %w", index, err) + } + provider.queue = append(provider.queue, ScriptedOutcome{ + Classification: outcome.Classification, + Script: outcome.Script, + Details: ports.CloneDetails(outcome.Details), + }) + } + + return nil +} + +// Inputs returns a detached snapshot of the accepted Send inputs. +func (provider *Provider) Inputs() []ports.Message { + if provider == nil { + return nil + } + + provider.mu.Lock() + defer provider.mu.Unlock() + + inputs := make([]ports.Message, len(provider.inputs)) + for index, input := range provider.inputs { + inputs[index] = cloneMessage(input) + } + + return inputs +} + +func scriptedResult(outcome ScriptedOutcome) (ports.Result, error) { + summary, err := ports.BuildSafeSummary(ports.SummaryFields{ + Provider: providerName, + Result: string(outcome.Classification), + Script: outcome.Script, + }) + if err != nil { + return ports.Result{}, fmt.Errorf("build stub provider summary: %w", err) + } + + result := ports.Result{ + Classification: outcome.Classification, + Summary: summary, + Details: ports.CloneDetails(outcome.Details), + } + if err := result.Validate(); err != nil { + return ports.Result{}, fmt.Errorf("build stub provider result: %w", err) + } + + return result, nil +} + +func cloneMessage(message ports.Message) ports.Message { + cloned := ports.Message{ + Envelope: deliverydomain.Envelope{ + To: append([]common.Email(nil), message.Envelope.To...), + Cc: append([]common.Email(nil), message.Envelope.Cc...), + Bcc: append([]common.Email(nil), message.Envelope.Bcc...), + ReplyTo: append([]common.Email(nil), message.Envelope.ReplyTo...), + }, + Content: message.Content, + 
} + if len(message.Attachments) > 0 { + cloned.Attachments = make([]ports.Attachment, len(message.Attachments)) + for index, attachment := range message.Attachments { + content := make([]byte, len(attachment.Content)) + copy(content, attachment.Content) + cloned.Attachments[index] = ports.Attachment{ + Metadata: attachment.Metadata, + Content: content, + } + } + } + + return cloned +} diff --git a/mail/internal/adapters/stubprovider/provider_test.go b/mail/internal/adapters/stubprovider/provider_test.go new file mode 100644 index 0000000..08308b7 --- /dev/null +++ b/mail/internal/adapters/stubprovider/provider_test.go @@ -0,0 +1,123 @@ +package stubprovider + +import ( + "context" + "fmt" + "sync" + "testing" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/ports" + + "github.com/stretchr/testify/require" +) + +func TestProviderSendUsesAcceptedDefault(t *testing.T) { + t.Parallel() + + provider, err := New() + require.NoError(t, err) + + result, err := provider.Send(context.Background(), testMessage(t)) + require.NoError(t, err) + require.Equal(t, ports.ClassificationAccepted, result.Classification) + require.Equal(t, "provider=stub result=accepted", result.Summary) + require.Len(t, provider.Inputs(), 1) +} + +func TestProviderSendConsumesScriptedOutcomesInOrder(t *testing.T) { + t.Parallel() + + provider, err := New( + ScriptedOutcome{ + Classification: ports.ClassificationTransientFailure, + Script: "retry_later", + }, + ScriptedOutcome{ + Classification: ports.ClassificationSuppressed, + Script: "policy_skip", + }, + ) + require.NoError(t, err) + + first, err := provider.Send(context.Background(), testMessage(t)) + require.NoError(t, err) + require.Equal(t, ports.ClassificationTransientFailure, first.Classification) + require.Equal(t, "provider=stub result=transient_failure script=retry_later", first.Summary) + + second, err := provider.Send(context.Background(), testMessage(t)) + 
require.NoError(t, err) + require.Equal(t, ports.ClassificationSuppressed, second.Classification) + require.Equal(t, "provider=stub result=suppressed script=policy_skip", second.Summary) + + third, err := provider.Send(context.Background(), testMessage(t)) + require.NoError(t, err) + require.Equal(t, ports.ClassificationAccepted, third.Classification) +} + +func TestProviderSendConsumesQueueSafelyAcrossGoroutines(t *testing.T) { + t.Parallel() + + const sendCount = 24 + + initial := make([]ScriptedOutcome, 0, sendCount) + for index := 0; index < sendCount; index++ { + initial = append(initial, ScriptedOutcome{ + Classification: ports.ClassificationAccepted, + Script: fmt.Sprintf("case_%02d", index), + }) + } + + provider, err := New(initial...) + require.NoError(t, err) + + message := testMessage(t) + summaries := make(chan string, sendCount) + errs := make(chan error, sendCount) + var waitGroup sync.WaitGroup + for index := 0; index < sendCount; index++ { + waitGroup.Add(1) + go func() { + defer waitGroup.Done() + result, sendErr := provider.Send(context.Background(), message) + if sendErr != nil { + errs <- sendErr + return + } + summaries <- result.Summary + }() + } + waitGroup.Wait() + close(summaries) + close(errs) + + for err := range errs { + require.NoError(t, err) + } + + seen := make(map[string]struct{}, sendCount) + for summary := range summaries { + seen[summary] = struct{}{} + } + + require.Len(t, seen, sendCount) + require.Len(t, provider.Inputs(), sendCount) +} + +func testMessage(t *testing.T) ports.Message { + t.Helper() + + message := ports.Message{ + Envelope: deliverydomain.Envelope{ + To: []common.Email{common.Email("pilot@example.com")}, + }, + Content: deliverydomain.Content{ + Subject: "Turn update", + TextBody: "Turn 54 is ready.", + }, + } + require.NoError(t, message.Validate()) + + return message +} diff --git a/mail/internal/adapters/templates/catalog.go b/mail/internal/adapters/templates/catalog.go new file mode 100644 index 
0000000..a328ef2 --- /dev/null +++ b/mail/internal/adapters/templates/catalog.go @@ -0,0 +1,574 @@ +// Package templates provides the filesystem-backed template catalog used by +// Mail Service. +package templates + +import ( + "crypto/sha256" + "encoding/hex" + "errors" + "fmt" + htmltemplate "html/template" + "os" + "path/filepath" + "sort" + "strings" + texttemplate "text/template" + "text/template/parse" + + "galaxy/mail/internal/domain/common" + templatedomain "galaxy/mail/internal/domain/template" +) + +const ( + subjectTemplateFile = "subject.tmpl" + textTemplateFile = "text.tmpl" + htmlTemplateFile = "html.tmpl" +) + +var ( + // ErrTemplateNotFound reports that no template family exists for the + // requested template identifier. + ErrTemplateNotFound = errors.New("template catalog template not found") + + // ErrFallbackMissing reports that the requested locale is unavailable and + // the mandatory `en` fallback variant is also missing. + ErrFallbackMissing = errors.New("template catalog fallback locale missing") + + // ErrTemplateParseFailed reports that one filesystem template file could + // not be parsed into the in-memory registry. + ErrTemplateParseFailed = errors.New("template catalog template parse failed") + + requiredStartupTemplate = templateKey{ + TemplateID: common.TemplateID("auth.login_code"), + Locale: common.Locale("en"), + } +) + +// Catalog stores the immutable in-memory template registry built at process +// startup. +type Catalog struct { + rootDir string + templates map[templateKey]*compiledTemplate + availableLocales map[common.TemplateID][]common.Locale +} + +// ResolvedTemplate stores one resolved template variant together with lookup +// metadata such as locale fallback usage and required variable paths. 
+type ResolvedTemplate struct { + record templatedomain.Template + resolvedLocale common.Locale + localeFallbackUsed bool + requiredVariablePaths []string + subject *texttemplate.Template + text *texttemplate.Template + html *htmltemplate.Template +} + +type templateKey struct { + TemplateID common.TemplateID + Locale common.Locale +} + +type compiledTemplate struct { + record templatedomain.Template + requiredVariablePaths []string + subject *texttemplate.Template + text *texttemplate.Template + html *htmltemplate.Template +} + +type templateSources struct { + TemplateID common.TemplateID + Locale common.Locale + Subject string + Text string + HTML string +} + +// NewCatalog constructs Catalog for rootDir, parses the full template +// registry, and validates the mandatory auth login-code fallback template. +func NewCatalog(rootDir string) (*Catalog, error) { + if strings.TrimSpace(rootDir) == "" { + return nil, fmt.Errorf("new template catalog: root dir must not be empty") + } + + cleanRootDir := filepath.Clean(rootDir) + info, err := os.Stat(cleanRootDir) + if err != nil { + return nil, fmt.Errorf("new template catalog: stat root dir %q: %w", cleanRootDir, err) + } + if !info.IsDir() { + return nil, fmt.Errorf("new template catalog: root dir %q must be a directory", cleanRootDir) + } + + registry, availableLocales, err := loadRegistry(cleanRootDir) + if err != nil { + return nil, fmt.Errorf("new template catalog: %w", err) + } + if _, ok := registry[requiredStartupTemplate]; !ok { + return nil, fmt.Errorf( + "new template catalog: required template %q locale %q is missing", + requiredStartupTemplate.TemplateID, + requiredStartupTemplate.Locale, + ) + } + + return &Catalog{ + rootDir: cleanRootDir, + templates: registry, + availableLocales: availableLocales, + }, nil +} + +// RootDir returns the configured template catalog root directory. 
+func (catalog *Catalog) RootDir() string { + if catalog == nil { + return "" + } + + return catalog.rootDir +} + +// Lookup resolves one template family for locale, applying the frozen exact +// match followed by `en` fallback rule. +func (catalog *Catalog) Lookup(templateID common.TemplateID, locale common.Locale) (ResolvedTemplate, error) { + if catalog == nil { + return ResolvedTemplate{}, errors.New("lookup template: nil catalog") + } + if err := templateID.Validate(); err != nil { + return ResolvedTemplate{}, fmt.Errorf("lookup template: template id: %w", err) + } + if err := locale.Validate(); err != nil { + return ResolvedTemplate{}, fmt.Errorf("lookup template: locale: %w", err) + } + + exactKey := templateKey{TemplateID: templateID, Locale: locale} + if compiled, ok := catalog.templates[exactKey]; ok { + return compiled.resolve(false), nil + } + + fallbackKey := templateKey{TemplateID: templateID, Locale: common.Locale("en")} + if compiled, ok := catalog.templates[fallbackKey]; ok { + return compiled.resolve(true), nil + } + + if _, ok := catalog.availableLocales[templateID]; ok { + return ResolvedTemplate{}, fmt.Errorf( + "lookup template %q locale %q: %w", + templateID, + locale, + ErrFallbackMissing, + ) + } + + return ResolvedTemplate{}, fmt.Errorf( + "lookup template %q locale %q: %w", + templateID, + locale, + ErrTemplateNotFound, + ) +} + +// Template returns the resolved logical template record. +func (resolved ResolvedTemplate) Template() templatedomain.Template { + return resolved.record +} + +// ResolvedLocale returns the filesystem locale variant that will actually be +// executed. +func (resolved ResolvedTemplate) ResolvedLocale() common.Locale { + return resolved.resolvedLocale +} + +// LocaleFallbackUsed reports whether lookup fell back from the requested +// locale to `en`. 
+func (resolved ResolvedTemplate) LocaleFallbackUsed() bool { + return resolved.localeFallbackUsed +} + +// RequiredVariablePaths returns the sorted list of dot-path variables used by +// the resolved template variant. +func (resolved ResolvedTemplate) RequiredVariablePaths() []string { + return append([]string(nil), resolved.requiredVariablePaths...) +} + +// ExecuteSubject executes the resolved subject template with data. +func (resolved ResolvedTemplate) ExecuteSubject(data any) (string, error) { + return executeTextTemplate("subject", resolved.subject, data) +} + +// ExecuteText executes the resolved plaintext body template with data. +func (resolved ResolvedTemplate) ExecuteText(data any) (string, error) { + return executeTextTemplate("text", resolved.text, data) +} + +// ExecuteHTML executes the resolved HTML body template with data. The second +// return value reports whether the resolved variant contains HTML content. +func (resolved ResolvedTemplate) ExecuteHTML(data any) (string, bool, error) { + if resolved.html == nil { + return "", false, nil + } + + rendered, err := executeHTMLTemplate("html", resolved.html, data) + if err != nil { + return "", true, err + } + + return rendered, true, nil +} + +func loadRegistry(rootDir string) (map[templateKey]*compiledTemplate, map[common.TemplateID][]common.Locale, error) { + sourceBundles := make(map[templateKey]*templateSources) + + if err := filepath.WalkDir(rootDir, func(path string, entry os.DirEntry, walkErr error) error { + if walkErr != nil { + return walkErr + } + + relativePath, err := filepath.Rel(rootDir, path) + if err != nil { + return err + } + if relativePath == "." 
{
+			return nil
+		}
+
+		relativePath = filepath.ToSlash(relativePath)
+		if entry.IsDir() {
+			return nil
+		}
+
+		parts := strings.Split(relativePath, "/")
+		if len(parts) != 3 {
+			return fmt.Errorf("invalid template path %q: expected <template_id>/<locale>/<file>", relativePath)
+		}
+
+		templateID := common.TemplateID(parts[0])
+		if err := templateID.Validate(); err != nil {
+			return fmt.Errorf("invalid template path %q: %w", relativePath, err)
+		}
+
+		locale, err := common.ParseLocale(parts[1])
+		if err != nil {
+			return fmt.Errorf("invalid template path %q: %w", relativePath, err)
+		}
+
+		contentsBytes, err := os.ReadFile(path)
+		if err != nil {
+			return fmt.Errorf("read template file %q: %w", path, err)
+		}
+
+		key := templateKey{TemplateID: templateID, Locale: locale}
+		bundle := sourceBundles[key]
+		if bundle == nil {
+			bundle = &templateSources{
+				TemplateID: templateID,
+				Locale:     locale,
+			}
+			sourceBundles[key] = bundle
+		}
+
+		switch parts[2] {
+		case subjectTemplateFile:
+			if bundle.Subject != "" {
+				return fmt.Errorf("duplicate template subject for %q locale %q", templateID, locale)
+			}
+			bundle.Subject = string(contentsBytes)
+		case textTemplateFile:
+			if bundle.Text != "" {
+				return fmt.Errorf("duplicate template text body for %q locale %q", templateID, locale)
+			}
+			bundle.Text = string(contentsBytes)
+		case htmlTemplateFile:
+			if bundle.HTML != "" {
+				return fmt.Errorf("duplicate template html body for %q locale %q", templateID, locale)
+			}
+			bundle.HTML = string(contentsBytes)
+		default:
+			return fmt.Errorf("invalid template path %q: unsupported file name %q", relativePath, parts[2])
+		}
+
+		return nil
+	}); err != nil {
+		return nil, nil, err
+	}
+
+	registry := make(map[templateKey]*compiledTemplate, len(sourceBundles))
+	availableLocales := make(map[common.TemplateID][]common.Locale)
+
+	for key, bundle := range sourceBundles {
+		compiled, err := compileTemplate(*bundle)
+		if err != nil {
+			return nil, nil, err
+		}
+
+		registry[key] = compiled
availableLocales[key.TemplateID] = append(availableLocales[key.TemplateID], key.Locale) + } + + for templateID := range availableLocales { + sort.Slice(availableLocales[templateID], func(left int, right int) bool { + return availableLocales[templateID][left].String() < availableLocales[templateID][right].String() + }) + } + + return registry, availableLocales, nil +} + +func compileTemplate(source templateSources) (*compiledTemplate, error) { + if source.Subject == "" { + return nil, fmt.Errorf("template %q locale %q is missing %s", source.TemplateID, source.Locale, subjectTemplateFile) + } + if source.Text == "" { + return nil, fmt.Errorf("template %q locale %q is missing %s", source.TemplateID, source.Locale, textTemplateFile) + } + + subject, err := parseText(source.TemplateID, source.Locale, "subject", source.Subject) + if err != nil { + return nil, err + } + textBody, err := parseText(source.TemplateID, source.Locale, "text", source.Text) + if err != nil { + return nil, err + } + + var htmlBody *htmltemplate.Template + if source.HTML != "" { + htmlBody, err = parseHTML(source.TemplateID, source.Locale, "html", source.HTML) + if err != nil { + return nil, err + } + } + + record := templatedomain.Template{ + TemplateID: source.TemplateID, + Locale: source.Locale, + SubjectTemplate: source.Subject, + TextTemplate: source.Text, + HTMLTemplate: source.HTML, + Version: computeVersion(source), + } + if err := record.Validate(); err != nil { + return nil, fmt.Errorf("compile template %q locale %q: %w", source.TemplateID, source.Locale, err) + } + + requiredVariablePaths := collectRequiredVariablePaths(subject.Tree, textBody.Tree) + if htmlBody != nil { + requiredVariablePaths = mergeRequiredVariablePaths(requiredVariablePaths, collectRequiredVariablePaths(htmlBody.Tree)) + } + + return &compiledTemplate{ + record: record, + requiredVariablePaths: requiredVariablePaths, + subject: subject, + text: textBody, + html: htmlBody, + }, nil +} + +func parseText(templateID 
common.TemplateID, locale common.Locale, part string, source string) (*texttemplate.Template, error) {
+	parsed, err := texttemplate.New(part).Option("missingkey=error").Parse(source)
+	if err != nil {
+		return nil, fmt.Errorf(
+			"parse template %q locale %q part %q: %w: %v",
+			templateID,
+			locale,
+			part,
+			ErrTemplateParseFailed,
+			err,
+		)
+	}
+
+	return parsed, nil
+}
+
+func parseHTML(templateID common.TemplateID, locale common.Locale, part string, source string) (*htmltemplate.Template, error) {
+	parsed, err := htmltemplate.New(part).Option("missingkey=error").Parse(source)
+	if err != nil {
+		return nil, fmt.Errorf(
+			"parse template %q locale %q part %q: %w: %v",
+			templateID,
+			locale,
+			part,
+			ErrTemplateParseFailed,
+			err,
+		)
+	}
+
+	return parsed, nil
+}
+
+func computeVersion(source templateSources) string {
+	sum := sha256.New()
+	for _, part := range []string{
+		source.TemplateID.String(),
+		source.Locale.String(),
+		source.Subject,
+		source.Text,
+		source.HTML,
+	} {
+		_, _ = sum.Write([]byte(part))
+		_, _ = sum.Write([]byte{0})
+	}
+
+	return "sha256:" + hex.EncodeToString(sum.Sum(nil))
+}
+
+func (compiled *compiledTemplate) resolve(localeFallbackUsed bool) ResolvedTemplate {
+	return ResolvedTemplate{
+		record:                compiled.record,
+		resolvedLocale:        compiled.record.Locale,
+		localeFallbackUsed:    localeFallbackUsed,
+		requiredVariablePaths: append([]string(nil), compiled.requiredVariablePaths...),
+		subject:               compiled.subject,
+		text:                  compiled.text,
+		html:                  compiled.html,
+	}
+}
+
+func executeTextTemplate(name string, tmpl *texttemplate.Template, data any) (string, error) {
+	if tmpl == nil {
+		return "", fmt.Errorf("execute %s template: nil template", name)
+	}
+
+	var builder strings.Builder
+	if err := tmpl.Execute(&builder, data); err != nil {
+		return "", fmt.Errorf("execute %s template: %w", name, err)
+	}
+
+	return builder.String(), nil
+}
+
+func executeHTMLTemplate(name string, tmpl *htmltemplate.Template, data any) (string, error) {
+	if tmpl == nil {
+		return "", fmt.Errorf("execute %s template: nil template", name)
+	}
+
+	var builder strings.Builder
+	if err := tmpl.Execute(&builder, data); err != nil {
+		return "", fmt.Errorf("execute %s template: %w", name, err)
+	}
+
+	return builder.String(), nil
+}
+
+func collectRequiredVariablePaths(trees ...*parse.Tree) []string {
+	paths := make(map[string]struct{})
+
+	for _, tree := range trees {
+		if tree == nil || tree.Root == nil {
+			continue
+		}
+		collectNodePaths(tree.Root, nil, paths)
+	}
+
+	collected := make([]string, 0, len(paths))
+	for path := range paths {
+		collected = append(collected, path)
+	}
+	sort.Strings(collected)
+
+	return collected
+}
+
+func mergeRequiredVariablePaths(existing []string, additional []string) []string {
+	merged := make(map[string]struct{}, len(existing)+len(additional))
+	for _, path := range existing {
+		merged[path] = struct{}{}
+	}
+	for _, path := range additional {
+		merged[path] = struct{}{}
+	}
+
+	combined := make([]string, 0, len(merged))
+	for path := range merged {
+		combined = append(combined, path)
+	}
+	sort.Strings(combined)
+
+	return combined
+}
+
+func collectNodePaths(node parse.Node, scope []string, paths map[string]struct{}) {
+	switch typed := node.(type) {
+	case *parse.ListNode:
+		if typed == nil {
+			return
+		}
+		for _, child := range typed.Nodes {
+			collectNodePaths(child, scope, paths)
+		}
+	case *parse.ActionNode:
+		collectPipePaths(typed.Pipe, scope, paths)
+	case *parse.IfNode:
+		collectPipePaths(typed.Pipe, scope, paths)
+		collectNodePaths(typed.List, scope, paths)
+		collectNodePaths(typed.ElseList, scope, paths)
+	case *parse.RangeNode:
+		collectPipePaths(typed.Pipe, scope, paths)
+		collectNodePaths(typed.List, scopeForPipe(typed.Pipe, scope), paths)
+		collectNodePaths(typed.ElseList, scope, paths)
+	case *parse.WithNode:
+		collectPipePaths(typed.Pipe, scope, paths)
+		collectNodePaths(typed.List, scopeForPipe(typed.Pipe, scope), paths)
+		collectNodePaths(typed.ElseList, scope, paths)
+	case *parse.TemplateNode:
+		collectPipePaths(typed.Pipe, scope, paths)
+	}
+}
+
+func collectPipePaths(pipe *parse.PipeNode, scope []string, paths map[string]struct{}) {
+	if pipe == nil {
+		return
+	}
+
+	for _, command := range pipe.Cmds {
+		for _, arg := range command.Args {
+			path, ok := nodePath(arg, scope)
+			if !ok || len(path) == 0 {
+				continue
+			}
+			paths[strings.Join(path, ".")] = struct{}{}
+		}
+	}
+}
+
+func scopeForPipe(pipe *parse.PipeNode, scope []string) []string {
+	if pipe == nil || len(pipe.Cmds) != 1 || len(pipe.Cmds[0].Args) != 1 {
+		return nil
+	}
+
+	path, ok := nodePath(pipe.Cmds[0].Args[0], scope)
+	if !ok {
+		return nil
+	}
+
+	return path
+}
+
+func nodePath(node parse.Node, scope []string) ([]string, bool) {
+	switch typed := node.(type) {
+	case *parse.FieldNode:
+		return appendPath(scope, typed.Ident), true
+	case *parse.ChainNode:
+		prefix, ok := nodePath(typed.Node, scope)
+		if !ok {
+			return nil, false
+		}
+		return appendPath(prefix, typed.Field), true
+	case *parse.DotNode:
+		if len(scope) == 0 {
+			return nil, false
+		}
+		return append([]string(nil), scope...), true
+	default:
+		return nil, false
+	}
+}
+
+func appendPath(prefix []string, suffix []string) []string {
+	combined := make([]string, 0, len(prefix)+len(suffix))
+	combined = append(combined, prefix...)
+	combined = append(combined, suffix...)
+	return combined
+}
diff --git a/mail/internal/adapters/templates/catalog_test.go b/mail/internal/adapters/templates/catalog_test.go
new file mode 100644
index 0000000..686f39b
--- /dev/null
+++ b/mail/internal/adapters/templates/catalog_test.go
@@ -0,0 +1,204 @@
+package templates
+
+import (
+	"errors"
+	"os"
+	"path/filepath"
+	"testing"
+
+	"galaxy/mail/internal/domain/common"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestNewCatalogBuildsImmutableRegistry(t *testing.T) {
+	t.Parallel()
+
+	rootDir := t.TempDir()
+	writeTemplateFile(t, rootDir, filepath.Join("auth.login_code", "en", "subject.tmpl"), "Your login code")
+	writeTemplateFile(t, rootDir, filepath.Join("auth.login_code", "en", "text.tmpl"), "Code: {{.code}}")
+	writeTemplateFile(t, rootDir, filepath.Join("game.turn_ready", "fr-fr", "subject.tmpl"), "Tour {{.turn_number}}")
+	writeTemplateFile(t, rootDir, filepath.Join("game.turn_ready", "fr-fr", "text.tmpl"), "Bonjour {{with .player}}{{.name}}{{end}}")
+	writeTemplateFile(t, rootDir, filepath.Join("game.turn_ready", "fr-fr", "html.tmpl"), "<p>{{.player.name}}</p>")
+
+	catalog, err := NewCatalog(rootDir)
+	require.NoError(t, err)
+	require.Equal(t, filepath.Clean(rootDir), catalog.RootDir())
+
+	locale, err := common.ParseLocale("fr-FR")
+	require.NoError(t, err)
+	resolved, err := catalog.Lookup(common.TemplateID("game.turn_ready"), locale)
+	require.NoError(t, err)
+	require.False(t, resolved.LocaleFallbackUsed())
+	require.Equal(t, common.Locale("fr-FR"), resolved.ResolvedLocale())
+	require.Equal(t, []string{"player", "player.name", "turn_number"}, resolved.RequiredVariablePaths())
+
+	subject, err := resolved.ExecuteSubject(map[string]any{
+		"turn_number": 54,
+		"player": map[string]any{
+			"name": "Pilot",
+		},
+	})
+	require.NoError(t, err)
+	require.Equal(t, "Tour 54", subject)
+
+	textBody, err := resolved.ExecuteText(map[string]any{
+		"player": map[string]any{
+			"name": "Pilot",
+		},
+	})
+	require.NoError(t, err)
+	require.Equal(t, "Bonjour Pilot", textBody)
+
+	htmlBody, ok, err := resolved.ExecuteHTML(map[string]any{
+		"player": map[string]any{
+			"name": "Pilot",
+		},
+	})
+	require.NoError(t, err)
+	require.True(t, ok)
+	require.Equal(t, "<p>Pilot</p>", htmlBody)
+}
+
+func TestCatalogLookupFallsBackToEnglish(t *testing.T) {
+	t.Parallel()
+
+	rootDir := t.TempDir()
+	writeTemplateFile(t, rootDir, filepath.Join("auth.login_code", "en", "subject.tmpl"), "Your login code")
+	writeTemplateFile(t, rootDir, filepath.Join("auth.login_code", "en", "text.tmpl"), "Code: {{.code}}")
+	writeTemplateFile(t, rootDir, filepath.Join("game.turn_ready", "en", "subject.tmpl"), "Turn {{.turn_number}}")
+	writeTemplateFile(t, rootDir, filepath.Join("game.turn_ready", "en", "text.tmpl"), "Hello {{.player.name}}")
+
+	catalog, err := NewCatalog(rootDir)
+	require.NoError(t, err)
+
+	locale, err := common.ParseLocale("fr-FR")
+	require.NoError(t, err)
+	resolved, err := catalog.Lookup(common.TemplateID("game.turn_ready"), locale)
+	require.NoError(t, err)
+	require.True(t, resolved.LocaleFallbackUsed())
+	require.Equal(t, common.Locale("en"), resolved.ResolvedLocale())
+}
+
+func TestCatalogLookupRejectsMissingEnglishFallback(t *testing.T) {
+	t.Parallel()
+
+	rootDir := t.TempDir()
+	writeTemplateFile(t, rootDir, filepath.Join("auth.login_code", "en", "subject.tmpl"), "Your login code")
+	writeTemplateFile(t, rootDir, filepath.Join("auth.login_code", "en", "text.tmpl"), "Code: {{.code}}")
+	writeTemplateFile(t, rootDir, filepath.Join("game.turn_ready", "fr-FR", "subject.tmpl"), "Tour {{.turn_number}}")
+	writeTemplateFile(t, rootDir, filepath.Join("game.turn_ready", "fr-FR", "text.tmpl"), "Bonjour {{.player.name}}")
+
+	catalog, err := NewCatalog(rootDir)
+	require.NoError(t, err)
+
+	locale, err := common.ParseLocale("de-DE")
+	require.NoError(t, err)
+	_, err = catalog.Lookup(common.TemplateID("game.turn_ready"), locale)
+	require.Error(t, err)
+	require.True(t, errors.Is(err, ErrFallbackMissing))
+}
+
+func TestCatalogLookupRejectsUnknownTemplateFamily(t *testing.T) {
+	t.Parallel()
+
+	rootDir := t.TempDir()
+	writeTemplateFile(t, rootDir, filepath.Join("auth.login_code", "en", "subject.tmpl"), "Your login code")
+	writeTemplateFile(t, rootDir, filepath.Join("auth.login_code", "en", "text.tmpl"), "Code: {{.code}}")
+
+	catalog, err := NewCatalog(rootDir)
+	require.NoError(t, err)
+
+	locale, err := common.ParseLocale("en")
+	require.NoError(t, err)
+	_, err = catalog.Lookup(common.TemplateID("game.turn_ready"), locale)
+	require.Error(t, err)
+	require.True(t, errors.Is(err, ErrTemplateNotFound))
+}
+
+func TestCatalogAllowsTemplateWithoutHTML(t *testing.T) {
+	t.Parallel()
+
+	rootDir := t.TempDir()
+	writeTemplateFile(t, rootDir, filepath.Join("auth.login_code", "en", "subject.tmpl"), "Your login code")
+	writeTemplateFile(t, rootDir, filepath.Join("auth.login_code", "en", "text.tmpl"), "Code: {{.code}}")
+
+	catalog, err := NewCatalog(rootDir)
+	require.NoError(t, err)
+
+	locale, err := common.ParseLocale("en")
+	require.NoError(t, err)
+	resolved, err := catalog.Lookup(common.TemplateID("auth.login_code"), locale)
+	require.NoError(t, err)
+
+	htmlBody, ok, err := resolved.ExecuteHTML(map[string]any{"code": "123456"})
+	require.NoError(t, err)
+	require.False(t, ok)
+	require.Empty(t, htmlBody)
+}
+
+func TestCatalogVersionIsDeterministic(t *testing.T) {
+	t.Parallel()
+
+	rootDir := t.TempDir()
+	writeTemplateFile(t, rootDir, filepath.Join("auth.login_code", "en", "subject.tmpl"), "Your login code")
+	writeTemplateFile(t, rootDir, filepath.Join("auth.login_code", "en", "text.tmpl"), "Code: {{.code}}")
+	writeTemplateFile(t, rootDir, filepath.Join("game.turn_ready", "en", "subject.tmpl"), "Turn {{.turn_number}}")
+	writeTemplateFile(t, rootDir, filepath.Join("game.turn_ready", "en", "text.tmpl"), "Hello {{.player.name}}")
+
+	firstCatalog, err := NewCatalog(rootDir)
+	require.NoError(t, err)
+	secondCatalog, err := NewCatalog(rootDir)
+	require.NoError(t, err)
+
+	locale, err := common.ParseLocale("en")
+	require.NoError(t, err)
+	firstResolved, err := firstCatalog.Lookup(common.TemplateID("game.turn_ready"), locale)
+	require.NoError(t, err)
+	secondResolved, err := secondCatalog.Lookup(common.TemplateID("game.turn_ready"), locale)
+	require.NoError(t, err)
+
+	require.Equal(t, firstResolved.Template().Version, secondResolved.Template().Version)
+}
+
+func TestNewCatalogRejectsMissingDirectory(t *testing.T) {
+	t.Parallel()
+
+	_, err := NewCatalog(filepath.Join(t.TempDir(), "missing"))
+	require.Error(t, err)
+	require.Contains(t, err.Error(), "stat root dir")
+}
+
+func TestNewCatalogRejectsMissingRequiredStartupTemplate(t *testing.T) {
+	t.Parallel()
+
+	rootDir := t.TempDir()
+	writeTemplateFile(t, rootDir, filepath.Join("game.turn_ready", "en", "subject.tmpl"), "Turn {{.turn_number}}")
+	writeTemplateFile(t, rootDir, filepath.Join("game.turn_ready", "en", "text.tmpl"), "Hello {{.player.name}}")
+
+	_, err := NewCatalog(rootDir)
+	require.Error(t, err)
+	require.Contains(t, err.Error(), `required template "auth.login_code" locale "en" is missing`)
+}
+
+func TestNewCatalogRejectsBrokenTemplateParse(t *testing.T) {
+	t.Parallel()
+
+	rootDir := t.TempDir()
+	writeTemplateFile(t, rootDir, filepath.Join("auth.login_code", "en", "subject.tmpl"), "Your login code")
+	writeTemplateFile(t, rootDir, filepath.Join("auth.login_code", "en", "text.tmpl"), "Code: {{.code}}")
+	writeTemplateFile(t, rootDir, filepath.Join("game.turn_ready", "en", "subject.tmpl"), "{{if .turn_number}")
+	writeTemplateFile(t, rootDir, filepath.Join("game.turn_ready", "en", "text.tmpl"), "Hello {{.player.name}}")
+
+	_, err := NewCatalog(rootDir)
+	require.Error(t, err)
+	require.True(t, errors.Is(err, ErrTemplateParseFailed))
+}
+
+func writeTemplateFile(t *testing.T, rootDir string, relativePath string, contents string) {
+	t.Helper()
+
+	absolutePath := filepath.Join(rootDir, relativePath)
+	require.NoError(t, os.MkdirAll(filepath.Dir(absolutePath), 0o755))
+	require.NoError(t, os.WriteFile(absolutePath, []byte(contents), 0o644))
+}
diff --git a/mail/internal/api/internalhttp/contract.go b/mail/internal/api/internalhttp/contract.go
new file mode 100644
index 0000000..b480820
--- /dev/null
+++ b/mail/internal/api/internalhttp/contract.go
@@ -0,0 +1,294 @@
+// Package internalhttp defines the frozen trusted internal HTTP contract used
+// by Mail Service.
+package internalhttp
+
+import (
+	"bytes"
+	"crypto/sha256"
+	"encoding/hex"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"io"
+	"mime"
+	"net/http"
+	"strings"
+
+	"galaxy/mail/internal/domain/common"
+)
+
+const (
+	// LoginCodeDeliveriesPath is the dedicated trusted route used by
+	// Auth / Session Service for auth login-code delivery intake.
+	LoginCodeDeliveriesPath = "/api/v1/internal/login-code-deliveries"
+
+	// IdempotencyKeyHeader is the required header that scopes auth-delivery
+	// deduplication.
+	IdempotencyKeyHeader = "Idempotency-Key"
+
+	// ErrorCodeInvalidRequest identifies trusted validation failures.
+	ErrorCodeInvalidRequest = "invalid_request"
+
+	// ErrorCodeInternalError identifies trusted invariant failures.
+	ErrorCodeInternalError = "internal_error"
+
+	// ErrorCodeServiceUnavailable identifies trusted availability failures.
+	ErrorCodeServiceUnavailable = "service_unavailable"
+
+	// ErrorCodeConflict identifies conflicting idempotency replays.
+	ErrorCodeConflict = "conflict"
+
+	jsonMediaType = "application/json"
+)
+
+// LoginCodeDeliveryRequest stores the strict JSON body accepted on the frozen
+// auth-delivery route before normalization.
+type LoginCodeDeliveryRequest struct {
+	// Email stores the destination e-mail address.
+	Email string `json:"email"`
+
+	// Code stores the exact login code generated by Auth / Session Service.
+	Code string `json:"code"`
+
+	// Locale stores the caller-selected BCP 47 language tag.
+	Locale string `json:"locale"`
+}
+
+// LoginCodeDeliveryCommand stores the normalized auth-delivery request shape
+// that later Mail Service handlers and services can consume directly.
+type LoginCodeDeliveryCommand struct {
+	// IdempotencyKey stores the caller-owned stable deduplication key.
+	IdempotencyKey common.IdempotencyKey
+
+	// Email stores the normalized recipient address.
+	Email common.Email
+
+	// Code stores the exact login code after boundary validation.
+	Code string
+
+	// Locale stores the canonical BCP 47 language tag.
+	Locale common.Locale
+}
+
+// Validate reports whether command satisfies the frozen auth-delivery
+// contract.
+func (command LoginCodeDeliveryCommand) Validate() error {
+	if err := command.IdempotencyKey.Validate(); err != nil {
+		return fmt.Errorf("idempotency key: %w", err)
+	}
+	if err := command.Email.Validate(); err != nil {
+		return fmt.Errorf("email: %w", err)
+	}
+	if strings.TrimSpace(command.Code) == "" {
+		return errors.New("code must not be empty")
+	}
+	if strings.TrimSpace(command.Code) != command.Code {
+		return errors.New("code must not contain surrounding whitespace")
+	}
+	if err := command.Locale.Validate(); err != nil {
+		return fmt.Errorf("locale: %w", err)
+	}
+
+	return nil
+}
+
+// Fingerprint returns the stable auth-delivery idempotency fingerprint of
+// command.
+func (command LoginCodeDeliveryCommand) Fingerprint() (string, error) {
+	if err := command.Validate(); err != nil {
+		return "", err
+	}
+
+	normalized := struct {
+		IdempotencyKey string `json:"idempotency_key"`
+		Email          string `json:"email"`
+		Code           string `json:"code"`
+		Locale         string `json:"locale"`
+	}{
+		IdempotencyKey: command.IdempotencyKey.String(),
+		Email:          command.Email.String(),
+		Code:           command.Code,
+		Locale:         command.Locale.String(),
+	}
+
+	payload, err := json.Marshal(normalized)
+	if err != nil {
+		return "", fmt.Errorf("marshal login code delivery fingerprint: %w", err)
+	}
+
+	sum := sha256.Sum256(payload)
+
+	return "sha256:" + hex.EncodeToString(sum[:]), nil
+}
+
+// LoginCodeDeliveryOutcome identifies the stable successful auth-delivery
+// intake outcomes.
+type LoginCodeDeliveryOutcome string
+
+const (
+	// LoginCodeDeliveryOutcomeSent reports durable acceptance into the internal
+	// mail-delivery pipeline.
+	LoginCodeDeliveryOutcomeSent LoginCodeDeliveryOutcome = "sent"
+
+	// LoginCodeDeliveryOutcomeSuppressed reports intentional outward delivery
+	// suppression while keeping the auth flow success-shaped.
+	LoginCodeDeliveryOutcomeSuppressed LoginCodeDeliveryOutcome = "suppressed"
+)
+
+// IsKnown reports whether outcome belongs to the frozen auth success surface.
+func (outcome LoginCodeDeliveryOutcome) IsKnown() bool {
+	switch outcome {
+	case LoginCodeDeliveryOutcomeSent, LoginCodeDeliveryOutcomeSuppressed:
+		return true
+	default:
+		return false
+	}
+}
+
+// LoginCodeDeliveryResponse stores the stable successful auth-delivery
+// response body.
+type LoginCodeDeliveryResponse struct {
+	// Outcome stores the stable coarse acceptance result.
+	Outcome LoginCodeDeliveryOutcome `json:"outcome"`
+}
+
+// Validate reports whether response satisfies the frozen success contract.
+func (response LoginCodeDeliveryResponse) Validate() error {
+	if !response.Outcome.IsKnown() {
+		return fmt.Errorf("login code delivery outcome %q is unsupported", response.Outcome)
+	}
+
+	return nil
+}
+
+// ErrorResponse stores the stable trusted error envelope used by Mail Service.
+type ErrorResponse struct {
+	// Error stores the stable trusted error body.
+	Error ErrorBody `json:"error"`
+}
+
+// Validate reports whether response satisfies the frozen trusted error
+// envelope contract.
+func (response ErrorResponse) Validate() error {
+	return response.Error.Validate()
+}
+
+// ErrorBody stores the stable trusted error shape returned by Mail Service.
+type ErrorBody struct {
+	// Code stores the stable machine-readable error code.
+	Code string `json:"code"`
+
+	// Message stores the trusted human-readable error message.
+	Message string `json:"message"`
+}
+
+// Validate reports whether body contains a complete trusted error payload.
+func (body ErrorBody) Validate() error {
+	switch {
+	case strings.TrimSpace(body.Code) == "":
+		return errors.New("error code must not be empty")
+	case strings.TrimSpace(body.Code) != body.Code:
+		return errors.New("error code must not contain surrounding whitespace")
+	case strings.TrimSpace(body.Message) == "":
+		return errors.New("error message must not be empty")
+	default:
+		return nil
+	}
+}
+
+// DecodeLoginCodeDeliveryCommand validates one trusted HTTP request and
+// returns the normalized auth-delivery command shape frozen by Stage 04.
+func DecodeLoginCodeDeliveryCommand(request *http.Request) (LoginCodeDeliveryCommand, error) {
+	if request == nil {
+		return LoginCodeDeliveryCommand{}, errors.New("login code delivery request must not be nil")
+	}
+
+	if err := validateJSONContentType(request.Header.Get("Content-Type")); err != nil {
+		return LoginCodeDeliveryCommand{}, err
+	}
+
+	idempotencyKey, err := parseIdempotencyKey(request.Header.Get(IdempotencyKeyHeader))
+	if err != nil {
+		return LoginCodeDeliveryCommand{}, err
+	}
+
+	body, err := decodeLoginCodeDeliveryRequest(request.Body)
+	if err != nil {
+		return LoginCodeDeliveryCommand{}, err
+	}
+
+	command := LoginCodeDeliveryCommand{
+		IdempotencyKey: idempotencyKey,
+		Email:          common.Email(strings.TrimSpace(body.Email)),
+		Code:           body.Code,
+	}
+
+	locale, err := common.ParseLocale(strings.TrimSpace(body.Locale))
+	if err != nil {
+		return LoginCodeDeliveryCommand{}, fmt.Errorf("locale: %w", err)
+	}
+	command.Locale = locale
+
+	if err := command.Validate(); err != nil {
+		return LoginCodeDeliveryCommand{}, err
+	}
+
+	return command, nil
+}
+
+func decodeLoginCodeDeliveryRequest(body io.ReadCloser) (LoginCodeDeliveryRequest, error) {
+	if body == nil {
+		return LoginCodeDeliveryRequest{}, errors.New("request body must not be nil")
+	}
+	defer body.Close()
+
+	payload, err := io.ReadAll(body)
+	if err != nil {
+		return LoginCodeDeliveryRequest{}, fmt.Errorf("read request body: %w", err)
+	}
+
+	decoder := json.NewDecoder(bytes.NewReader(payload))
+	decoder.DisallowUnknownFields()
+
+	var request LoginCodeDeliveryRequest
+	if err := decoder.Decode(&request); err != nil {
+		return LoginCodeDeliveryRequest{}, fmt.Errorf("decode request body: %w", err)
+	}
+	if err := decoder.Decode(&struct{}{}); err != io.EOF {
+		if err == nil {
+			return LoginCodeDeliveryRequest{}, errors.New("decode request body: unexpected trailing JSON input")
+		}
+
+		return LoginCodeDeliveryRequest{}, fmt.Errorf("decode request body: %w", err)
+	}
+
+	return request, nil
+}
+
+func parseIdempotencyKey(value string) (common.IdempotencyKey, error) {
+	switch {
+	case strings.TrimSpace(value) == "":
+		return "", errors.New("Idempotency-Key header must not be empty")
+	case strings.TrimSpace(value) != value:
+		return "", errors.New("Idempotency-Key header must not contain surrounding whitespace")
+	default:
+		key := common.IdempotencyKey(value)
+		if err := key.Validate(); err != nil {
+			return "", fmt.Errorf("idempotency key: %w", err)
+		}
+
+		return key, nil
+	}
+}
+
+func validateJSONContentType(value string) error {
+	mediaType, _, err := mime.ParseMediaType(value)
+	if err != nil {
+		return fmt.Errorf("Content-Type must be %s", jsonMediaType)
+	}
+	if mediaType != jsonMediaType {
+		return fmt.Errorf("Content-Type must be %s", jsonMediaType)
+	}
+
+	return nil
+}
diff --git a/mail/internal/api/internalhttp/contract_test.go b/mail/internal/api/internalhttp/contract_test.go
new file mode 100644
index 0000000..295209f
--- /dev/null
+++ b/mail/internal/api/internalhttp/contract_test.go
@@ -0,0 +1,184 @@
+package internalhttp
+
+import (
+	"net/http"
+	"net/http/httptest"
+	"strings"
+	"testing"
+
+	"galaxy/mail/internal/domain/common"
+
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+)
+
+func TestDecodeLoginCodeDeliveryCommandSuccess(t *testing.T) {
+	t.Parallel()
+
+	request := httptest.NewRequest(http.MethodPost, LoginCodeDeliveriesPath, strings.NewReader(`{"email":" pilot@example.com ","code":"123456","locale":" en "}`))
+	request.Header.Set("Content-Type", "application/json")
+	request.Header.Set(IdempotencyKeyHeader, "challenge-1")
+
+	command, err := DecodeLoginCodeDeliveryCommand(request)
+	require.NoError(t, err)
+	assert.Equal(t, LoginCodeDeliveryCommand{
+		IdempotencyKey: common.IdempotencyKey("challenge-1"),
+		Email:          common.Email("pilot@example.com"),
+		Code:           "123456",
+		Locale:         common.Locale("en"),
+	}, command)
+}
+
+func TestDecodeLoginCodeDeliveryCommandRejectsInvalidRequests(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		name        string
+		contentType string
+		headerValue string
+		body        string
+		wantErr     string
+	}{
+		{
+			name:        "missing content type",
+			headerValue: "challenge-1",
+			body:        `{"email":"pilot@example.com","code":"123456","locale":"en"}`,
+			wantErr:     "Content-Type must be application/json",
+		},
+		{
+			name:        "missing idempotency key",
+			contentType: "application/json",
+			body:        `{"email":"pilot@example.com","code":"123456","locale":"en"}`,
+			wantErr:     "Idempotency-Key header must not be empty",
+		},
+		{
+			name:        "idempotency key surrounding whitespace",
+			contentType: "application/json",
+			headerValue: " challenge-1 ",
+			body:        `{"email":"pilot@example.com","code":"123456","locale":"en"}`,
+			wantErr:     "Idempotency-Key header must not contain surrounding whitespace",
+		},
+		{
+			name:        "unknown field",
+			contentType: "application/json",
+			headerValue: "challenge-1",
+			body:        `{"email":"pilot@example.com","code":"123456","locale":"en","extra":true}`,
+			wantErr:     "decode request body",
+		},
+		{
+			name:        "trailing json",
+			contentType: "application/json",
+			headerValue: "challenge-1",
+			body:        `{"email":"pilot@example.com","code":"123456","locale":"en"}{}`,
+			wantErr:     "unexpected trailing JSON input",
+		},
+		{
+			name:        "code surrounding whitespace",
+			contentType: "application/json",
+			headerValue: "challenge-1",
+			body:        `{"email":"pilot@example.com","code":" 123456 ","locale":"en"}`,
+			wantErr:     "code must not contain surrounding whitespace",
+		},
+		{
+			name:        "invalid locale",
+			contentType: "application/json",
+			headerValue: "challenge-1",
+			body:        `{"email":"pilot@example.com","code":"123456","locale":"english"}`,
+			wantErr:     "locale:",
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			request := httptest.NewRequest(http.MethodPost, LoginCodeDeliveriesPath, strings.NewReader(tt.body))
+			if tt.contentType != "" {
+				request.Header.Set("Content-Type", tt.contentType)
+			}
+			if tt.headerValue != "" {
+				request.Header.Set(IdempotencyKeyHeader, tt.headerValue)
+			}
+
+			_, err := DecodeLoginCodeDeliveryCommand(request)
+			require.Error(t, err)
+			assert.ErrorContains(t, err, tt.wantErr)
+		})
+	}
+}
+
+func TestDecodeLoginCodeDeliveryCommandRepeatedEquivalentRequestsMatch(t *testing.T) {
+	t.Parallel()
+
+	first := httptest.NewRequest(http.MethodPost, LoginCodeDeliveriesPath, strings.NewReader(`{"email":"pilot@example.com","code":"123456","locale":"en"}`))
+	first.Header.Set("Content-Type", "application/json")
+	first.Header.Set(IdempotencyKeyHeader, "challenge-1")
+
+	second := httptest.NewRequest(http.MethodPost, LoginCodeDeliveriesPath, strings.NewReader(`{"email":" pilot@example.com ","code":"123456","locale":" en "}`))
+	second.Header.Set("Content-Type", "application/json")
+	second.Header.Set(IdempotencyKeyHeader, "challenge-1")
+
+	firstCommand, err := DecodeLoginCodeDeliveryCommand(first)
+	require.NoError(t, err)
+	secondCommand, err := DecodeLoginCodeDeliveryCommand(second)
+	require.NoError(t, err)
+
+	assert.Equal(t, firstCommand, secondCommand)
+}
+
+func TestLoginCodeDeliveryCommandFingerprintStableForEquivalentRequests(t *testing.T) {
+	t.Parallel()
+
+	first := LoginCodeDeliveryCommand{
+		IdempotencyKey: common.IdempotencyKey("challenge-1"),
+		Email:          common.Email("pilot@example.com"),
+		Code:           "123456",
+		Locale:         common.Locale("en"),
+	}
+	second := LoginCodeDeliveryCommand{
+		IdempotencyKey: common.IdempotencyKey("challenge-1"),
+		Email:          common.Email("pilot@example.com"),
+		Code:           "123456",
+		Locale:         common.Locale("en"),
+	}
+
+	firstFingerprint, err := first.Fingerprint()
+	require.NoError(t, err)
+	secondFingerprint, err := second.Fingerprint()
+	require.NoError(t, err)
+
+	assert.Equal(t, firstFingerprint, secondFingerprint)
+}
+
+func TestLoginCodeDeliveryResponseValidate(t *testing.T) {
+	t.Parallel()
+
+	require.NoError(t, LoginCodeDeliveryResponse{Outcome: LoginCodeDeliveryOutcomeSent}.Validate())
+	require.NoError(t, LoginCodeDeliveryResponse{Outcome: LoginCodeDeliveryOutcomeSuppressed}.Validate())
+
+	err := LoginCodeDeliveryResponse{Outcome: LoginCodeDeliveryOutcome("queued")}.Validate()
+	require.Error(t, err)
+	assert.ErrorContains(t, err, "unsupported")
+}
+
+func TestErrorResponseValidate(t *testing.T) {
+	t.Parallel()
+
+	require.NoError(t, ErrorResponse{
+		Error: ErrorBody{
+			Code:    ErrorCodeInvalidRequest,
+			Message: "field-specific validation detail",
+		},
+	}.Validate())
+
+	err := ErrorResponse{
+		Error: ErrorBody{
+			Code:    " invalid_request ",
+			Message: "",
+		},
+	}.Validate()
+	require.Error(t, err)
+	assert.ErrorContains(t, err, "error code")
+}
diff --git a/mail/internal/api/internalhttp/handler.go b/mail/internal/api/internalhttp/handler.go
new file mode 100644
index 0000000..8ba98f4
--- /dev/null
+++ b/mail/internal/api/internalhttp/handler.go
@@ -0,0 +1,63 @@
+package internalhttp
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"net/http"
+
+	"galaxy/mail/internal/service/acceptauthdelivery"
+)
+
+// AcceptLoginCodeDeliveryUseCase accepts one auth login-code delivery request.
+type AcceptLoginCodeDeliveryUseCase interface {
+	// Execute durably accepts one normalized auth login-code delivery command.
+	Execute(context.Context, acceptauthdelivery.Input) (acceptauthdelivery.Result, error)
+}
+
+func newAcceptLoginCodeDeliveryHandler(useCase AcceptLoginCodeDeliveryUseCase) http.HandlerFunc {
+	return func(writer http.ResponseWriter, request *http.Request) {
+		ctx := request.Context()
+
+		command, err := DecodeLoginCodeDeliveryCommand(request)
+		if err != nil {
+			writeErrorResponse(writer, http.StatusBadRequest, ErrorCodeInvalidRequest, err.Error())
+			return
+		}
+
+		result, err := useCase.Execute(ctx, acceptauthdelivery.Input{
+			IdempotencyKey: command.IdempotencyKey,
+			Email:          command.Email,
+			Code:           command.Code,
+			Locale:         command.Locale,
+		})
+		if err != nil {
+			switch {
+			case errors.Is(err, acceptauthdelivery.ErrConflict):
+				writeErrorResponse(writer, http.StatusConflict, ErrorCodeConflict, "request conflicts with current state")
+			case errors.Is(err, acceptauthdelivery.ErrServiceUnavailable):
+				writeErrorResponse(writer, http.StatusServiceUnavailable, ErrorCodeServiceUnavailable, "service is unavailable")
+			default:
+				writeErrorResponse(writer, http.StatusInternalServerError, ErrorCodeInternalError, "internal server error")
+			}
+			return
+		}
+
+		if err := result.Validate(); err != nil {
+			writeErrorResponse(writer, http.StatusInternalServerError, ErrorCodeInternalError, "internal server error")
+			return
+		}
+
+		response := LoginCodeDeliveryResponse{
+			Outcome: LoginCodeDeliveryOutcome(result.Outcome),
+		}
+		if err := response.Validate(); err != nil {
+			writeErrorResponse(writer, http.StatusInternalServerError, ErrorCodeInternalError, "internal server error")
+			return
+		}
+
+		writer.Header().Set("Content-Type", "application/json")
+		writer.WriteHeader(http.StatusOK)
+		_ = json.NewEncoder(writer).Encode(response)
+	}
+}
diff --git a/mail/internal/api/internalhttp/handler_test.go b/mail/internal/api/internalhttp/handler_test.go
new file mode 100644
index 0000000..e258da5
--- /dev/null
+++ b/mail/internal/api/internalhttp/handler_test.go
@@ -0,0 +1,236 @@
+package internalhttp
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"io"
+	"log/slog"
+	"net/http"
+	"net/http/httptest"
+	"testing"
+
+	"galaxy/mail/internal/service/acceptauthdelivery"
+	mailtelemetry "galaxy/mail/internal/telemetry"
+
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+	"go.opentelemetry.io/otel/attribute"
+	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
+	"go.opentelemetry.io/otel/sdk/metric/metricdata"
+	sdktrace "go.opentelemetry.io/otel/sdk/trace"
+	"go.opentelemetry.io/otel/sdk/trace/tracetest"
+)
+
+func TestLoginCodeDeliveryHandlerReturnsSuccessOutcomes(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		name        string
+		result      acceptauthdelivery.Result
+		wantOutcome string
+	}{
+		{name: "sent", result: acceptauthdelivery.Result{Outcome: acceptauthdelivery.OutcomeSent}, wantOutcome: "sent"},
+		{name: "suppressed", result: acceptauthdelivery.Result{Outcome: acceptauthdelivery.OutcomeSuppressed}, wantOutcome: "suppressed"},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			handler := newHandler(Dependencies{
+				Logger: slog.New(slog.NewJSONHandler(io.Discard, nil)),
+				AcceptLoginCodeDelivery: acceptLoginCodeDeliveryFunc(func(context.Context, acceptauthdelivery.Input) (acceptauthdelivery.Result, error) {
+					return tt.result, nil
+				}),
+			})
+
+			response := doLoginCodeDeliveryRequest(t, handler, `{"email":"pilot@example.com","code":"123456","locale":"en"}`, "challenge-1")
+			defer response.Body.Close()
+
+			require.Equal(t, http.StatusOK, response.StatusCode)
+			require.Equal(t, "application/json", response.Header.Get("Content-Type"))
+
+			var payload LoginCodeDeliveryResponse
+			require.NoError(t, decodeJSONBody(response, &payload))
+			require.Equal(t, LoginCodeDeliveryOutcome(tt.wantOutcome), payload.Outcome)
+		})
+	}
+}
+
+func TestLoginCodeDeliveryHandlerMapsErrors(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		name       string
+		body       string
+		header     string
+		useCaseErr error
+		wantCode   int
+		wantErr    string
+	}{
+		{
+			name:     "invalid request",
+			body:     `{"email":"pilot@example.com","code":"123456","locale":"en"}`,
+			wantCode: http.StatusBadRequest,
+			wantErr:  ErrorCodeInvalidRequest,
+		},
+		{
+			name:       "conflict",
+			body:       `{"email":"pilot@example.com","code":"123456","locale":"en"}`,
+			header:     "challenge-1",
+			useCaseErr: acceptauthdelivery.ErrConflict,
+			wantCode:   http.StatusConflict,
+			wantErr:    ErrorCodeConflict,
+		},
+		{
+			name:       "service unavailable",
+			body:       `{"email":"pilot@example.com","code":"123456","locale":"en"}`,
+			header:     "challenge-1",
+			useCaseErr: acceptauthdelivery.ErrServiceUnavailable,
+			wantCode:   http.StatusServiceUnavailable,
+			wantErr:    ErrorCodeServiceUnavailable,
+		},
+		{
+			name:       "internal error",
+			body:       `{"email":"pilot@example.com","code":"123456","locale":"en"}`,
+			header:     "challenge-1",
+			useCaseErr: context.DeadlineExceeded,
+			wantCode:   http.StatusInternalServerError,
+			wantErr:    ErrorCodeInternalError,
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			handler := newHandler(Dependencies{
+				Logger: slog.New(slog.NewJSONHandler(io.Discard, nil)),
+				AcceptLoginCodeDelivery: acceptLoginCodeDeliveryFunc(func(context.Context, acceptauthdelivery.Input) (acceptauthdelivery.Result, error) {
+					if tt.useCaseErr != nil {
+						return acceptauthdelivery.Result{}, tt.useCaseErr
+					}
+
+					return acceptauthdelivery.Result{Outcome: acceptauthdelivery.OutcomeSent}, nil
+				}),
+			})
+
+			response := doLoginCodeDeliveryRequest(t, handler, tt.body, tt.header)
+			defer response.Body.Close()
+
+			require.Equal(t, tt.wantCode, response.StatusCode)
+
+			var payload ErrorResponse
+			require.NoError(t, decodeJSONBody(response, &payload))
+			require.Equal(t, tt.wantErr, payload.Error.Code)
+		})
+	}
+}
+
+func TestLoginCodeDeliveryHandlerEmitsMetricsAndSpan(t *testing.T) {
+	t.Parallel()
+
+	reader := sdkmetric.NewManualReader()
+	meterProvider := sdkmetric.NewMeterProvider(sdkmetric.WithReader(reader))
+	recorder := tracetest.NewSpanRecorder()
+	tracerProvider := sdktrace.NewTracerProvider(sdktrace.WithSpanProcessor(recorder))
+
+	telemetryRuntime, err := mailtelemetry.NewWithProviders(meterProvider, tracerProvider)
+	require.NoError(t, err)
+
+	loggerBuffer := &bytes.Buffer{}
+	logger := slog.New(slog.NewJSONHandler(loggerBuffer, nil))
+	handler := newHandler(Dependencies{
+		Logger:    logger,
+		Telemetry: telemetryRuntime,
+		AcceptLoginCodeDelivery: acceptLoginCodeDeliveryFunc(func(context.Context, acceptauthdelivery.Input) (acceptauthdelivery.Result, error) {
+			return acceptauthdelivery.Result{Outcome: acceptauthdelivery.OutcomeSent}, nil
+		}),
+	})
+
+	response := doLoginCodeDeliveryRequest(t, handler, `{"email":"pilot@example.com","code":"123456","locale":"en"}`, "challenge-1")
+	defer response.Body.Close()
+
+	require.Equal(t, http.StatusOK, response.StatusCode)
+	require.Len(t, recorder.Ended(), 1)
+	assert.Equal(t, LoginCodeDeliveriesPath, recorder.Ended()[0].Name())
+	assert.Contains(t, loggerBuffer.String(), "otel_trace_id")
+	assert.Contains(t, loggerBuffer.String(), "otel_span_id")
+
+	assertMetricCount(t, reader, "mail.internal_http.requests", map[string]string{
+		"route":        LoginCodeDeliveriesPath,
+		"method":       http.MethodPost,
+		"edge_outcome": "success",
+	}, 1)
+}
+
+type acceptLoginCodeDeliveryFunc func(context.Context, acceptauthdelivery.Input) (acceptauthdelivery.Result, error)
+
+func (fn acceptLoginCodeDeliveryFunc) Execute(ctx context.Context, input acceptauthdelivery.Input) (acceptauthdelivery.Result, error) {
+	return fn(ctx, input)
+}
+
+func doLoginCodeDeliveryRequest(t *testing.T, handler http.Handler, body string, idempotencyKey string) *http.Response {
+	t.Helper()
+
+	request := httptest.NewRequest(http.MethodPost, LoginCodeDeliveriesPath, bytes.NewBufferString(body))
+	request.Header.Set("Content-Type", "application/json")
+	if idempotencyKey != "" {
+		request.Header.Set(IdempotencyKeyHeader, idempotencyKey)
+	}
+
+	recorder := httptest.NewRecorder()
+	handler.ServeHTTP(recorder, request)
+
+	return recorder.Result()
+}
+
+func decodeJSONBody(response *http.Response, target any) error {
+	return json.NewDecoder(response.Body).Decode(target)
+}
+
+func assertMetricCount(t *testing.T, reader *sdkmetric.ManualReader, metricName string, wantAttrs map[string]string, wantValue int64) {
+	t.Helper()
+
+	var resourceMetrics metricdata.ResourceMetrics
+	require.NoError(t, reader.Collect(context.Background(), &resourceMetrics))
+
+	for _, scopeMetrics := range resourceMetrics.ScopeMetrics {
+		for _, metric := range scopeMetrics.Metrics {
+			if metric.Name != metricName {
+				continue
+			}
+
+			sum, ok := metric.Data.(metricdata.Sum[int64])
+			require.True(t, ok)
+
+			for _, point := range sum.DataPoints {
+				if hasMetricAttributes(point.Attributes.ToSlice(), wantAttrs) {
+					assert.Equal(t, wantValue, point.Value)
+					return
+				}
+			}
+		}
+	}
+
+	require.Failf(t, "metric not found", "metric %q with attrs %v not found", metricName, wantAttrs)
+}
+
+func hasMetricAttributes(values []attribute.KeyValue, want map[string]string) bool {
+	if len(values) != len(want) {
+		return false
+	}
+
+	for _, value := range values {
+		if want[string(value.Key)] != value.Value.AsString() {
+			return false
+		}
+	}
+
+	return true
+}
diff --git a/mail/internal/api/internalhttp/observability.go b/mail/internal/api/internalhttp/observability.go
new file mode 100644
index 0000000..4140c47
--- /dev/null
+++ b/mail/internal/api/internalhttp/observability.go
@@ -0,0 +1,114 @@
+package internalhttp
+
+import (
+	"log/slog"
+	"net/http"
+	"time"
+
+	"galaxy/mail/internal/logging"
+	"galaxy/mail/internal/telemetry"
+
+	"go.opentelemetry.io/otel/attribute"
+)
+
+type edgeOutcome string
+
+const (
+	edgeOutcomeSuccess  edgeOutcome = "success"
+	edgeOutcomeRejected edgeOutcome = "rejected"
+	edgeOutcomeFailed   edgeOutcome = "failed"
+)
+
+func instrumentRoute(route string,
logger *slog.Logger, telemetryRuntime *telemetry.Runtime, next http.Handler) http.Handler { + if logger == nil { + logger = slog.Default() + } + + return http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) { + startedAt := time.Now() + recorder := &observedResponseWriter{ + ResponseWriter: writer, + statusCode: http.StatusOK, + } + + next.ServeHTTP(recorder, request) + + duration := time.Since(startedAt) + outcome := outcomeFromStatusCode(recorder.statusCode) + attrs := []attribute.KeyValue{ + attribute.String("route", route), + attribute.String("method", request.Method), + attribute.String("edge_outcome", string(outcome)), + } + if recorder.errorCode != "" { + attrs = append(attrs, attribute.String("error_code", recorder.errorCode)) + } + if telemetryRuntime != nil { + telemetryRuntime.RecordInternalHTTPRequest(request.Context(), attrs, duration) + } + + logArgs := []any{ + "component", "internal_http", + "transport", "http", + "route", route, + "method", request.Method, + "status_code", recorder.statusCode, + "duration_ms", float64(duration.Microseconds()) / 1000, + "edge_outcome", string(outcome), + } + if recorder.errorCode != "" { + logArgs = append(logArgs, "error_code", recorder.errorCode) + } + logArgs = append(logArgs, logging.TraceAttrsFromContext(request.Context())...) + + switch outcome { + case edgeOutcomeSuccess: + logger.Info("internal request completed", logArgs...) + case edgeOutcomeFailed: + logger.Error("internal request failed", logArgs...) + default: + logger.Warn("internal request rejected", logArgs...) 
+ } + }) +} + +type observedResponseWriter struct { + http.ResponseWriter + + statusCode int + errorCode string + wroteHeader bool +} + +func (writer *observedResponseWriter) WriteHeader(statusCode int) { + if writer.wroteHeader { + return + } + + writer.statusCode = statusCode + writer.wroteHeader = true + writer.ResponseWriter.WriteHeader(statusCode) +} + +func (writer *observedResponseWriter) Write(payload []byte) (int, error) { + if !writer.wroteHeader { + writer.WriteHeader(http.StatusOK) + } + + return writer.ResponseWriter.Write(payload) +} + +func (writer *observedResponseWriter) SetErrorCode(code string) { + writer.errorCode = code +} + +func outcomeFromStatusCode(statusCode int) edgeOutcome { + switch { + case statusCode >= 500: + return edgeOutcomeFailed + case statusCode >= 400: + return edgeOutcomeRejected + default: + return edgeOutcomeSuccess + } +} diff --git a/mail/internal/api/internalhttp/operator_contract.go b/mail/internal/api/internalhttp/operator_contract.go new file mode 100644 index 0000000..bb17d28 --- /dev/null +++ b/mail/internal/api/internalhttp/operator_contract.go @@ -0,0 +1,625 @@ +package internalhttp + +import ( + "encoding/base64" + "errors" + "fmt" + "net/http" + "strconv" + "strings" + "time" + + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/service/listdeliveries" +) + +const ( + // ErrorCodeDeliveryNotFound identifies a missing trusted delivery lookup + // target. + ErrorCodeDeliveryNotFound = "delivery_not_found" + + // ErrorCodeResendNotAllowed identifies resend requests against non-terminal + // deliveries. + ErrorCodeResendNotAllowed = "resend_not_allowed" + + deliveryIDPathValue = "delivery_id" +) + +// DeliveryListQuery stores the raw trusted query-string values accepted by the +// operator delivery-list route before normalization. 
+type DeliveryListQuery struct { + // Recipient stores the optional recipient filter covering `to`, `cc`, and + // `bcc`. + Recipient string + + // Status stores the optional delivery-status filter. + Status string + + // Source stores the optional delivery-source filter. + Source string + + // TemplateID stores the optional template-family filter. + TemplateID string + + // IdempotencyKey stores the optional idempotency-key filter. + IdempotencyKey string + + // FromCreatedAtMS stores the optional inclusive lower creation-time bound. + FromCreatedAtMS string + + // ToCreatedAtMS stores the optional inclusive upper creation-time bound. + ToCreatedAtMS string + + // Limit stores the optional page size. + Limit string + + // Cursor stores the optional opaque continuation cursor. + Cursor string +} + +// DeliverySummaryResponse stores one brief operator-facing delivery record. +type DeliverySummaryResponse struct { + DeliveryID string `json:"delivery_id"` + ResendParentDeliveryID string `json:"resend_parent_delivery_id,omitempty"` + Source string `json:"source"` + PayloadMode string `json:"payload_mode"` + TemplateID string `json:"template_id,omitempty"` + To []string `json:"to"` + Cc []string `json:"cc"` + Bcc []string `json:"bcc"` + ReplyTo []string `json:"reply_to"` + Locale string `json:"locale,omitempty"` + LocaleFallbackUsed bool `json:"locale_fallback_used"` + IdempotencyKey string `json:"idempotency_key"` + Status string `json:"status"` + AttemptCount int `json:"attempt_count"` + LastAttemptStatus string `json:"last_attempt_status,omitempty"` + ProviderSummary string `json:"provider_summary,omitempty"` + CreatedAtMS int64 `json:"created_at_ms"` + UpdatedAtMS int64 `json:"updated_at_ms"` + SentAtMS *int64 `json:"sent_at_ms,omitempty"` + SuppressedAtMS *int64 `json:"suppressed_at_ms,omitempty"` + FailedAtMS *int64 `json:"failed_at_ms,omitempty"` + DeadLetteredAtMS *int64 `json:"dead_lettered_at_ms,omitempty"` +} + +// DeliveryListResponse stores one 
deterministic page of brief delivery +// summaries. +type DeliveryListResponse struct { + Items []DeliverySummaryResponse `json:"items"` + NextCursor string `json:"next_cursor,omitempty"` +} + +// AttachmentResponse stores one durable attachment audit record. +type AttachmentResponse struct { + Filename string `json:"filename"` + ContentType string `json:"content_type"` + SizeBytes int64 `json:"size_bytes"` +} + +// DeadLetterResponse stores one operator-visible dead-letter entry. +type DeadLetterResponse struct { + DeliveryID string `json:"delivery_id"` + FinalAttemptNo int `json:"final_attempt_no"` + FailureClassification string `json:"failure_classification"` + ProviderSummary string `json:"provider_summary,omitempty"` + CreatedAtMS int64 `json:"created_at_ms"` + RecoveryHint string `json:"recovery_hint,omitempty"` +} + +// DeliveryDetailResponse stores one full operator-facing delivery view. +type DeliveryDetailResponse struct { + DeliveryID string `json:"delivery_id"` + ResendParentDeliveryID string `json:"resend_parent_delivery_id,omitempty"` + Source string `json:"source"` + PayloadMode string `json:"payload_mode"` + TemplateID string `json:"template_id,omitempty"` + TemplateVariables map[string]any `json:"template_variables,omitempty"` + To []string `json:"to"` + Cc []string `json:"cc"` + Bcc []string `json:"bcc"` + ReplyTo []string `json:"reply_to"` + Subject string `json:"subject,omitempty"` + TextBody string `json:"text_body,omitempty"` + HTMLBody string `json:"html_body,omitempty"` + Attachments []AttachmentResponse `json:"attachments"` + Locale string `json:"locale,omitempty"` + LocaleFallbackUsed bool `json:"locale_fallback_used"` + IdempotencyKey string `json:"idempotency_key"` + Status string `json:"status"` + AttemptCount int `json:"attempt_count"` + LastAttemptStatus string `json:"last_attempt_status,omitempty"` + ProviderSummary string `json:"provider_summary,omitempty"` + CreatedAtMS int64 `json:"created_at_ms"` + UpdatedAtMS int64 
`json:"updated_at_ms"` + SentAtMS *int64 `json:"sent_at_ms,omitempty"` + SuppressedAtMS *int64 `json:"suppressed_at_ms,omitempty"` + FailedAtMS *int64 `json:"failed_at_ms,omitempty"` + DeadLetteredAtMS *int64 `json:"dead_lettered_at_ms,omitempty"` + DeadLetter *DeadLetterResponse `json:"dead_letter,omitempty"` +} + +// AttemptResponse stores one operator-facing delivery-attempt record. +type AttemptResponse struct { + DeliveryID string `json:"delivery_id"` + AttemptNo int `json:"attempt_no"` + ScheduledForMS int64 `json:"scheduled_for_ms"` + StartedAtMS *int64 `json:"started_at_ms,omitempty"` + FinishedAtMS *int64 `json:"finished_at_ms,omitempty"` + Status string `json:"status"` + ProviderClassification string `json:"provider_classification,omitempty"` + ProviderSummary string `json:"provider_summary,omitempty"` +} + +// DeliveryAttemptsResponse stores the attempt history of one accepted +// delivery. +type DeliveryAttemptsResponse struct { + Items []AttemptResponse `json:"items"` +} + +// DeliveryResendResponse stores the identifier of the clone delivery created +// by one resend request. +type DeliveryResendResponse struct { + DeliveryID string `json:"delivery_id"` +} + +// DecodeDeliveryListInput validates one trusted operator delivery-list +// request and returns the normalized list input. +func DecodeDeliveryListInput(request *http.Request) (listdeliveries.Input, error) { + if request == nil { + return listdeliveries.Input{}, errors.New("delivery list request must not be nil") + } + + query, err := decodeDeliveryListQuery(request) + if err != nil { + return listdeliveries.Input{}, err + } + + input, err := query.Normalize() + if err != nil { + return listdeliveries.Input{}, err + } + + return input, nil +} + +// DecodeDeliveryIDFromPath validates one trusted path delivery identifier. 
+func DecodeDeliveryIDFromPath(request *http.Request) (common.DeliveryID, error) { + if request == nil { + return "", errors.New("delivery lookup request must not be nil") + } + + return parseDeliveryID(request.PathValue(deliveryIDPathValue)) +} + +// EncodeDeliveryListCursor encodes cursor into the frozen opaque base64url +// format `created_at_ms:delivery_id`. +func EncodeDeliveryListCursor(cursor listdeliveries.Cursor) (string, error) { + if err := cursor.Validate(); err != nil { + return "", fmt.Errorf("encode delivery list cursor: %w", err) + } + + payload := fmt.Sprintf("%d:%s", cursor.CreatedAt.UTC().UnixMilli(), cursor.DeliveryID.String()) + + return base64.RawURLEncoding.EncodeToString([]byte(payload)), nil +} + +// DecodeDeliveryListCursor decodes raw from the frozen opaque cursor format. +func DecodeDeliveryListCursor(raw string) (listdeliveries.Cursor, error) { + if strings.TrimSpace(raw) == "" { + return listdeliveries.Cursor{}, errors.New("cursor must not be empty") + } + if strings.TrimSpace(raw) != raw { + return listdeliveries.Cursor{}, errors.New("cursor must not contain surrounding whitespace") + } + + payload, err := base64.RawURLEncoding.DecodeString(raw) + if err != nil { + return listdeliveries.Cursor{}, fmt.Errorf("decode cursor: %w", err) + } + + createdAtRaw, deliveryIDRaw, ok := strings.Cut(string(payload), ":") + if !ok { + return listdeliveries.Cursor{}, errors.New("decode cursor: invalid cursor payload") + } + + createdAtMS, err := strconv.ParseInt(createdAtRaw, 10, 64) + if err != nil { + return listdeliveries.Cursor{}, fmt.Errorf("decode cursor created_at_ms: %w", err) + } + + cursor := listdeliveries.Cursor{ + CreatedAt: time.UnixMilli(createdAtMS).UTC(), + DeliveryID: common.DeliveryID(deliveryIDRaw), + } + if err := cursor.Validate(); err != nil { + return listdeliveries.Cursor{}, fmt.Errorf("decode cursor: %w", err) + } + + return cursor, nil +} + +// Normalize converts the raw trusted query-string shape into the operator list 
+// input consumed by the service layer. +func (query DeliveryListQuery) Normalize() (listdeliveries.Input, error) { + var input listdeliveries.Input + + recipient, err := parseOptionalEmail(query.Recipient, "recipient") + if err != nil { + return listdeliveries.Input{}, err + } + status, err := parseOptionalStatus(query.Status) + if err != nil { + return listdeliveries.Input{}, err + } + source, err := parseOptionalSource(query.Source) + if err != nil { + return listdeliveries.Input{}, err + } + templateID, err := parseOptionalTemplateID(query.TemplateID) + if err != nil { + return listdeliveries.Input{}, err + } + idempotencyKey, err := parseOptionalIdempotencyKey(query.IdempotencyKey) + if err != nil { + return listdeliveries.Input{}, err + } + fromCreatedAt, err := parseOptionalUnixMilli(query.FromCreatedAtMS, "from_created_at_ms") + if err != nil { + return listdeliveries.Input{}, err + } + toCreatedAt, err := parseOptionalUnixMilli(query.ToCreatedAtMS, "to_created_at_ms") + if err != nil { + return listdeliveries.Input{}, err + } + limit, err := parseOptionalLimit(query.Limit) + if err != nil { + return listdeliveries.Input{}, err + } + cursor, err := parseOptionalCursor(query.Cursor) + if err != nil { + return listdeliveries.Input{}, err + } + + input = listdeliveries.Input{ + Limit: limit, + Cursor: cursor, + Filters: listdeliveries.Filters{ + Recipient: recipient, + Status: status, + Source: source, + TemplateID: templateID, + IdempotencyKey: idempotencyKey, + FromCreatedAt: fromCreatedAt, + ToCreatedAt: toCreatedAt, + }, + } + if err := input.Validate(); err != nil { + return listdeliveries.Input{}, err + } + + return input, nil +} + +func decodeDeliveryListQuery(request *http.Request) (DeliveryListQuery, error) { + values := request.URL.Query() + + recipient, err := singleQueryValue(values, "recipient") + if err != nil { + return DeliveryListQuery{}, err + } + status, err := singleQueryValue(values, "status") + if err != nil { + return 
DeliveryListQuery{}, err + } + source, err := singleQueryValue(values, "source") + if err != nil { + return DeliveryListQuery{}, err + } + templateID, err := singleQueryValue(values, "template_id") + if err != nil { + return DeliveryListQuery{}, err + } + idempotencyKey, err := singleQueryValue(values, "idempotency_key") + if err != nil { + return DeliveryListQuery{}, err + } + fromCreatedAtMS, err := singleQueryValue(values, "from_created_at_ms") + if err != nil { + return DeliveryListQuery{}, err + } + toCreatedAtMS, err := singleQueryValue(values, "to_created_at_ms") + if err != nil { + return DeliveryListQuery{}, err + } + limit, err := singleQueryValue(values, "limit") + if err != nil { + return DeliveryListQuery{}, err + } + cursor, err := singleQueryValue(values, "cursor") + if err != nil { + return DeliveryListQuery{}, err + } + + return DeliveryListQuery{ + Recipient: recipient, + Status: status, + Source: source, + TemplateID: templateID, + IdempotencyKey: idempotencyKey, + FromCreatedAtMS: fromCreatedAtMS, + ToCreatedAtMS: toCreatedAtMS, + Limit: limit, + Cursor: cursor, + }, nil +} + +func singleQueryValue(values map[string][]string, key string) (string, error) { + rawValues := values[key] + switch len(rawValues) { + case 0: + return "", nil + case 1: + return rawValues[0], nil + default: + return "", fmt.Errorf("query parameter %q must appear at most once", key) + } +} + +func parseDeliveryID(raw string) (common.DeliveryID, error) { + deliveryID := common.DeliveryID(raw) + if err := deliveryID.Validate(); err != nil { + return "", fmt.Errorf("delivery id: %w", err) + } + + return deliveryID, nil +} + +func parseOptionalEmail(raw string, name string) (common.Email, error) { + if raw == "" { + return "", nil + } + + email := common.Email(strings.TrimSpace(raw)) + if err := email.Validate(); err != nil { + return "", fmt.Errorf("%s: %w", name, err) + } + + return email, nil +} + +func parseOptionalStatus(raw string) (deliverydomain.Status, error) { + if 
raw == "" { + return "", nil + } + + status := deliverydomain.Status(strings.TrimSpace(raw)) + if !status.IsKnown() { + return "", fmt.Errorf("status %q is unsupported", raw) + } + + return status, nil +} + +func parseOptionalSource(raw string) (deliverydomain.Source, error) { + if raw == "" { + return "", nil + } + + source := deliverydomain.Source(strings.TrimSpace(raw)) + if !source.IsKnown() { + return "", fmt.Errorf("source %q is unsupported", raw) + } + + return source, nil +} + +func parseOptionalTemplateID(raw string) (common.TemplateID, error) { + if raw == "" { + return "", nil + } + + templateID := common.TemplateID(strings.TrimSpace(raw)) + if err := templateID.Validate(); err != nil { + return "", fmt.Errorf("template id: %w", err) + } + + return templateID, nil +} + +func parseOptionalIdempotencyKey(raw string) (common.IdempotencyKey, error) { + if raw == "" { + return "", nil + } + + key := common.IdempotencyKey(strings.TrimSpace(raw)) + if err := key.Validate(); err != nil { + return "", fmt.Errorf("idempotency key: %w", err) + } + + return key, nil +} + +func parseOptionalUnixMilli(raw string, name string) (*time.Time, error) { + if raw == "" { + return nil, nil + } + + value, err := strconv.ParseInt(strings.TrimSpace(raw), 10, 64) + if err != nil { + return nil, fmt.Errorf("%s: %w", name, err) + } + timestamp := time.UnixMilli(value).UTC() + if err := common.ValidateTimestamp(name, timestamp); err != nil { + return nil, err + } + + return &timestamp, nil +} + +func parseOptionalLimit(raw string) (int, error) { + if raw == "" { + return 0, nil + } + + value, err := strconv.Atoi(strings.TrimSpace(raw)) + if err != nil { + return 0, fmt.Errorf("limit: %w", err) + } + if value < 1 { + return 0, errors.New("limit must be at least 1") + } + + return value, nil +} + +func parseOptionalCursor(raw string) (*listdeliveries.Cursor, error) { + if raw == "" { + return nil, nil + } + + cursor, err := DecodeDeliveryListCursor(raw) + if err != nil { + return nil, err 
+ } + + return &cursor, nil +} + +func summaryResponseFromDelivery(record deliverydomain.Delivery) DeliverySummaryResponse { + return DeliverySummaryResponse{ + DeliveryID: record.DeliveryID.String(), + ResendParentDeliveryID: record.ResendParentDeliveryID.String(), + Source: string(record.Source), + PayloadMode: string(record.PayloadMode), + TemplateID: record.TemplateID.String(), + To: emailStrings(record.Envelope.To), + Cc: emailStrings(record.Envelope.Cc), + Bcc: emailStrings(record.Envelope.Bcc), + ReplyTo: emailStrings(record.Envelope.ReplyTo), + Locale: record.Locale.String(), + LocaleFallbackUsed: record.LocaleFallbackUsed, + IdempotencyKey: record.IdempotencyKey.String(), + Status: string(record.Status), + AttemptCount: record.AttemptCount, + LastAttemptStatus: string(record.LastAttemptStatus), + ProviderSummary: record.ProviderSummary, + CreatedAtMS: record.CreatedAt.UTC().UnixMilli(), + UpdatedAtMS: record.UpdatedAt.UTC().UnixMilli(), + SentAtMS: unixMilliPtr(record.SentAt), + SuppressedAtMS: unixMilliPtr(record.SuppressedAt), + FailedAtMS: unixMilliPtr(record.FailedAt), + DeadLetteredAtMS: unixMilliPtr(record.DeadLetteredAt), + } +} + +func detailResponseFromDelivery(record deliverydomain.Delivery, deadLetter *deliverydomain.DeadLetterEntry) DeliveryDetailResponse { + response := DeliveryDetailResponse{ + DeliveryID: record.DeliveryID.String(), + ResendParentDeliveryID: record.ResendParentDeliveryID.String(), + Source: string(record.Source), + PayloadMode: string(record.PayloadMode), + TemplateID: record.TemplateID.String(), + TemplateVariables: cloneJSONObject(record.TemplateVariables), + To: emailStrings(record.Envelope.To), + Cc: emailStrings(record.Envelope.Cc), + Bcc: emailStrings(record.Envelope.Bcc), + ReplyTo: emailStrings(record.Envelope.ReplyTo), + Subject: record.Content.Subject, + TextBody: record.Content.TextBody, + HTMLBody: record.Content.HTMLBody, + Attachments: attachmentResponses(record.Attachments), + Locale: record.Locale.String(), + 
LocaleFallbackUsed: record.LocaleFallbackUsed, + IdempotencyKey: record.IdempotencyKey.String(), + Status: string(record.Status), + AttemptCount: record.AttemptCount, + LastAttemptStatus: string(record.LastAttemptStatus), + ProviderSummary: record.ProviderSummary, + CreatedAtMS: record.CreatedAt.UTC().UnixMilli(), + UpdatedAtMS: record.UpdatedAt.UTC().UnixMilli(), + SentAtMS: unixMilliPtr(record.SentAt), + SuppressedAtMS: unixMilliPtr(record.SuppressedAt), + FailedAtMS: unixMilliPtr(record.FailedAt), + DeadLetteredAtMS: unixMilliPtr(record.DeadLetteredAt), + } + if deadLetter != nil { + response.DeadLetter = &DeadLetterResponse{ + DeliveryID: deadLetter.DeliveryID.String(), + FinalAttemptNo: deadLetter.FinalAttemptNo, + FailureClassification: deadLetter.FailureClassification, + ProviderSummary: deadLetter.ProviderSummary, + CreatedAtMS: deadLetter.CreatedAt.UTC().UnixMilli(), + RecoveryHint: deadLetter.RecoveryHint, + } + } + + return response +} + +func attemptResponseFromRecord(record attempt.Attempt) AttemptResponse { + return AttemptResponse{ + DeliveryID: record.DeliveryID.String(), + AttemptNo: record.AttemptNo, + ScheduledForMS: record.ScheduledFor.UTC().UnixMilli(), + StartedAtMS: unixMilliPtr(record.StartedAt), + FinishedAtMS: unixMilliPtr(record.FinishedAt), + Status: string(record.Status), + ProviderClassification: record.ProviderClassification, + ProviderSummary: record.ProviderSummary, + } +} + +func attachmentResponses(attachments []common.AttachmentMetadata) []AttachmentResponse { + if len(attachments) == 0 { + return []AttachmentResponse{} + } + + result := make([]AttachmentResponse, len(attachments)) + for index, attachment := range attachments { + result[index] = AttachmentResponse{ + Filename: attachment.Filename, + ContentType: attachment.ContentType, + SizeBytes: attachment.SizeBytes, + } + } + + return result +} + +func emailStrings(values []common.Email) []string { + if len(values) == 0 { + return []string{} + } + + result := make([]string, 
len(values)) + for index, value := range values { + result[index] = value.String() + } + + return result +} + +func unixMilliPtr(value *time.Time) *int64 { + if value == nil { + return nil + } + + encoded := value.UTC().UnixMilli() + return &encoded +} + +func cloneJSONObject(value map[string]any) map[string]any { + if value == nil { + return nil + } + + cloned := make(map[string]any, len(value)) + for key, entry := range value { + cloned[key] = entry + } + + return cloned +} diff --git a/mail/internal/api/internalhttp/operator_contract_test.go b/mail/internal/api/internalhttp/operator_contract_test.go new file mode 100644 index 0000000..46c821a --- /dev/null +++ b/mail/internal/api/internalhttp/operator_contract_test.go @@ -0,0 +1,76 @@ +package internalhttp + +import ( + "net/http" + "net/http/httptest" + "testing" + "time" + + "galaxy/mail/internal/domain/common" + "galaxy/mail/internal/service/listdeliveries" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestDecodeDeliveryListInputSuccess(t *testing.T) { + t.Parallel() + + cursor, err := EncodeDeliveryListCursor(listdeliveries.Cursor{ + CreatedAt: time.Unix(1_775_122_000, 0).UTC(), + DeliveryID: common.DeliveryID("delivery-123"), + }) + require.NoError(t, err) + + request := httptest.NewRequest( + http.MethodGet, + DeliveriesPath+"?recipient=pilot@example.com&status=sent&source=notification&template_id=template.welcome&idempotency_key=notification:delivery-123&from_created_at_ms=1775122000000&to_created_at_ms=1775122600000&limit=25&cursor="+cursor, + nil, + ) + + input, err := DecodeDeliveryListInput(request) + require.NoError(t, err) + require.Equal(t, 25, input.Limit) + require.Equal(t, common.Email("pilot@example.com"), input.Filters.Recipient) + require.Equal(t, common.TemplateID("template.welcome"), input.Filters.TemplateID) + require.Equal(t, common.IdempotencyKey("notification:delivery-123"), input.Filters.IdempotencyKey) + require.NotNil(t, input.Cursor) + 
require.Equal(t, common.DeliveryID("delivery-123"), input.Cursor.DeliveryID) +} + +func TestDecodeDeliveryListInputRejectsInvalidCursor(t *testing.T) { + t.Parallel() + + request := httptest.NewRequest(http.MethodGet, DeliveriesPath+"?cursor=bad", nil) + + _, err := DecodeDeliveryListInput(request) + require.Error(t, err) + assert.ErrorContains(t, err, "decode cursor") +} + +func TestDeliveryListCursorRoundTrip(t *testing.T) { + t.Parallel() + + cursor := listdeliveries.Cursor{ + CreatedAt: time.Unix(1_775_122_500, 0).UTC(), + DeliveryID: common.DeliveryID("delivery-xyz"), + } + + encoded, err := EncodeDeliveryListCursor(cursor) + require.NoError(t, err) + + decoded, err := DecodeDeliveryListCursor(encoded) + require.NoError(t, err) + require.Equal(t, cursor, decoded) +} + +func TestDecodeDeliveryIDFromPath(t *testing.T) { + t.Parallel() + + request := httptest.NewRequest(http.MethodGet, "/api/v1/internal/deliveries/delivery-123", nil) + request.SetPathValue(deliveryIDPathValue, "delivery-123") + + deliveryID, err := DecodeDeliveryIDFromPath(request) + require.NoError(t, err) + require.Equal(t, common.DeliveryID("delivery-123"), deliveryID) +} diff --git a/mail/internal/api/internalhttp/operator_handler.go b/mail/internal/api/internalhttp/operator_handler.go new file mode 100644 index 0000000..85c631d --- /dev/null +++ b/mail/internal/api/internalhttp/operator_handler.go @@ -0,0 +1,195 @@ +package internalhttp + +import ( + "context" + "encoding/json" + "errors" + "net/http" + "time" + + "galaxy/mail/internal/service/getdelivery" + "galaxy/mail/internal/service/listattempts" + "galaxy/mail/internal/service/listdeliveries" + "galaxy/mail/internal/service/resenddelivery" +) + +const defaultOperatorRequestTimeout = 5 * time.Second + +// ListDeliveriesUseCase lists accepted deliveries for trusted operators. +type ListDeliveriesUseCase interface { + // Execute returns one filtered deterministic ordered page of deliveries. 
+ Execute(context.Context, listdeliveries.Input) (listdeliveries.Result, error) +} + +// GetDeliveryUseCase resolves one accepted delivery for trusted operators. +type GetDeliveryUseCase interface { + // Execute returns one exact delivery view and its optional dead-letter + // entry. + Execute(context.Context, getdelivery.Input) (getdelivery.Result, error) +} + +// ListAttemptsUseCase resolves one delivery-attempt history for trusted +// operators. +type ListAttemptsUseCase interface { + // Execute returns the full attempt history of one accepted delivery. + Execute(context.Context, listattempts.Input) (listattempts.Result, error) +} + +// ResendDeliveryUseCase clones one accepted terminal delivery for trusted +// operator resend. +type ResendDeliveryUseCase interface { + // Execute creates one new clone delivery and returns its identifier. + Execute(context.Context, resenddelivery.Input) (resenddelivery.Result, error) +} + +func newListDeliveriesHandler(useCase ListDeliveriesUseCase, timeout time.Duration) http.HandlerFunc { + return func(writer http.ResponseWriter, request *http.Request) { + input, err := DecodeDeliveryListInput(request) + if err != nil { + writeErrorResponse(writer, http.StatusBadRequest, ErrorCodeInvalidRequest, err.Error()) + return + } + + callCtx, cancel := context.WithTimeout(request.Context(), effectiveOperatorTimeout(timeout)) + defer cancel() + + result, err := useCase.Execute(callCtx, input) + if err != nil { + switch { + case errors.Is(err, listdeliveries.ErrInvalidCursor): + writeErrorResponse(writer, http.StatusBadRequest, ErrorCodeInvalidRequest, "cursor is invalid") + case errors.Is(err, listdeliveries.ErrServiceUnavailable): + writeErrorResponse(writer, http.StatusServiceUnavailable, ErrorCodeServiceUnavailable, "service is unavailable") + default: + writeErrorResponse(writer, http.StatusInternalServerError, ErrorCodeInternalError, "internal server error") + } + return + } + + response := DeliveryListResponse{ + Items: 
make([]DeliverySummaryResponse, len(result.Items)), + } + for index, record := range result.Items { + response.Items[index] = summaryResponseFromDelivery(record) + } + if result.NextCursor != nil { + encodedCursor, err := EncodeDeliveryListCursor(*result.NextCursor) + if err != nil { + writeErrorResponse(writer, http.StatusInternalServerError, ErrorCodeInternalError, "internal server error") + return + } + response.NextCursor = encodedCursor + } + + writeJSONResponse(writer, http.StatusOK, response) + } +} + +func newGetDeliveryHandler(useCase GetDeliveryUseCase, timeout time.Duration) http.HandlerFunc { + return func(writer http.ResponseWriter, request *http.Request) { + deliveryID, err := DecodeDeliveryIDFromPath(request) + if err != nil { + writeErrorResponse(writer, http.StatusBadRequest, ErrorCodeInvalidRequest, err.Error()) + return + } + + callCtx, cancel := context.WithTimeout(request.Context(), effectiveOperatorTimeout(timeout)) + defer cancel() + + result, err := useCase.Execute(callCtx, getdelivery.Input{DeliveryID: deliveryID}) + if err != nil { + switch { + case errors.Is(err, getdelivery.ErrNotFound): + writeErrorResponse(writer, http.StatusNotFound, ErrorCodeDeliveryNotFound, "delivery not found") + case errors.Is(err, getdelivery.ErrServiceUnavailable): + writeErrorResponse(writer, http.StatusServiceUnavailable, ErrorCodeServiceUnavailable, "service is unavailable") + default: + writeErrorResponse(writer, http.StatusInternalServerError, ErrorCodeInternalError, "internal server error") + } + return + } + + writeJSONResponse(writer, http.StatusOK, detailResponseFromDelivery(result.Delivery, result.DeadLetter)) + } +} + +func newListAttemptsHandler(useCase ListAttemptsUseCase, timeout time.Duration) http.HandlerFunc { + return func(writer http.ResponseWriter, request *http.Request) { + deliveryID, err := DecodeDeliveryIDFromPath(request) + if err != nil { + writeErrorResponse(writer, http.StatusBadRequest, ErrorCodeInvalidRequest, err.Error()) + return 
+ } + + callCtx, cancel := context.WithTimeout(request.Context(), effectiveOperatorTimeout(timeout)) + defer cancel() + + result, err := useCase.Execute(callCtx, listattempts.Input{DeliveryID: deliveryID}) + if err != nil { + switch { + case errors.Is(err, listattempts.ErrNotFound): + writeErrorResponse(writer, http.StatusNotFound, ErrorCodeDeliveryNotFound, "delivery not found") + case errors.Is(err, listattempts.ErrServiceUnavailable): + writeErrorResponse(writer, http.StatusServiceUnavailable, ErrorCodeServiceUnavailable, "service is unavailable") + default: + writeErrorResponse(writer, http.StatusInternalServerError, ErrorCodeInternalError, "internal server error") + } + return + } + + response := DeliveryAttemptsResponse{ + Items: make([]AttemptResponse, len(result.Attempts)), + } + for index, record := range result.Attempts { + response.Items[index] = attemptResponseFromRecord(record) + } + + writeJSONResponse(writer, http.StatusOK, response) + } +} + +func newResendDeliveryHandler(useCase ResendDeliveryUseCase, timeout time.Duration) http.HandlerFunc { + return func(writer http.ResponseWriter, request *http.Request) { + deliveryID, err := DecodeDeliveryIDFromPath(request) + if err != nil { + writeErrorResponse(writer, http.StatusBadRequest, ErrorCodeInvalidRequest, err.Error()) + return + } + + callCtx, cancel := context.WithTimeout(request.Context(), effectiveOperatorTimeout(timeout)) + defer cancel() + + result, err := useCase.Execute(callCtx, resenddelivery.Input{DeliveryID: deliveryID}) + if err != nil { + switch { + case errors.Is(err, resenddelivery.ErrNotFound): + writeErrorResponse(writer, http.StatusNotFound, ErrorCodeDeliveryNotFound, "delivery not found") + case errors.Is(err, resenddelivery.ErrNotAllowed): + writeErrorResponse(writer, http.StatusConflict, ErrorCodeResendNotAllowed, "delivery status does not allow resend") + case errors.Is(err, resenddelivery.ErrServiceUnavailable): + writeErrorResponse(writer, http.StatusServiceUnavailable, 
ErrorCodeServiceUnavailable, "service is unavailable") + default: + writeErrorResponse(writer, http.StatusInternalServerError, ErrorCodeInternalError, "internal server error") + } + return + } + + writeJSONResponse(writer, http.StatusOK, DeliveryResendResponse{ + DeliveryID: result.DeliveryID.String(), + }) + } +} + +func effectiveOperatorTimeout(timeout time.Duration) time.Duration { + if timeout <= 0 { + return defaultOperatorRequestTimeout + } + + return timeout +} + +func writeJSONResponse(writer http.ResponseWriter, statusCode int, payload any) { + writer.Header().Set("Content-Type", "application/json") + writer.WriteHeader(statusCode) + _ = json.NewEncoder(writer).Encode(payload) +} diff --git a/mail/internal/api/internalhttp/operator_handler_test.go b/mail/internal/api/internalhttp/operator_handler_test.go new file mode 100644 index 0000000..a743ee0 --- /dev/null +++ b/mail/internal/api/internalhttp/operator_handler_test.go @@ -0,0 +1,313 @@ +package internalhttp + +import ( + "context" + "encoding/json" + "errors" + "io" + "log/slog" + "net/http" + "net/http/httptest" + "testing" + "time" + + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/service/getdelivery" + "galaxy/mail/internal/service/listattempts" + "galaxy/mail/internal/service/listdeliveries" + "galaxy/mail/internal/service/resenddelivery" + + "github.com/stretchr/testify/require" +) + +func TestOperatorHandlersReturnSuccessResponses(t *testing.T) { + t.Parallel() + + listDelivery := validOperatorDelivery("delivery-list", deliverydomain.StatusSent) + getDeliveryRecord := validOperatorDelivery("delivery-get", deliverydomain.StatusDeadLetter) + deadLetter := validOperatorDeadLetter(getDeliveryRecord.DeliveryID) + attemptRecord := validOperatorAttempt(getDeliveryRecord.DeliveryID, 1, attempt.StatusProviderRejected) + + handler := newHandler(Dependencies{ + Logger: 
slog.New(slog.NewJSONHandler(io.Discard, nil)), + OperatorRequestTimeout: time.Second, + ListDeliveries: listDeliveriesFunc(func(context.Context, listdeliveries.Input) (listdeliveries.Result, error) { + return listdeliveries.Result{ + Items: []deliverydomain.Delivery{listDelivery}, + NextCursor: &listdeliveries.Cursor{ + CreatedAt: listDelivery.CreatedAt, + DeliveryID: listDelivery.DeliveryID, + }, + }, nil + }), + GetDelivery: getDeliveryFunc(func(context.Context, getdelivery.Input) (getdelivery.Result, error) { + return getdelivery.Result{ + Delivery: getDeliveryRecord, + DeadLetter: &deadLetter, + }, nil + }), + ListAttempts: listAttemptsFunc(func(context.Context, listattempts.Input) (listattempts.Result, error) { + return listattempts.Result{ + Delivery: getDeliveryRecord, + Attempts: []attempt.Attempt{attemptRecord}, + }, nil + }), + ResendDelivery: resendDeliveryFunc(func(context.Context, resenddelivery.Input) (resenddelivery.Result, error) { + return resenddelivery.Result{DeliveryID: common.DeliveryID("delivery-clone")}, nil + }), + }) + + t.Run("list", func(t *testing.T) { + request := httptest.NewRequest(http.MethodGet, DeliveriesPath+"?limit=1", nil) + response := httptest.NewRecorder() + handler.ServeHTTP(response, request) + + require.Equal(t, http.StatusOK, response.Code) + var payload DeliveryListResponse + require.NoError(t, json.NewDecoder(response.Body).Decode(&payload)) + require.Len(t, payload.Items, 1) + require.Equal(t, "delivery-list", payload.Items[0].DeliveryID) + require.NotEmpty(t, payload.NextCursor) + }) + + t.Run("get", func(t *testing.T) { + request := httptest.NewRequest(http.MethodGet, "/api/v1/internal/deliveries/delivery-get", nil) + response := httptest.NewRecorder() + handler.ServeHTTP(response, request) + + require.Equal(t, http.StatusOK, response.Code) + var payload DeliveryDetailResponse + require.NoError(t, json.NewDecoder(response.Body).Decode(&payload)) + require.Equal(t, "delivery-get", payload.DeliveryID) + 
require.NotNil(t, payload.DeadLetter) + }) + + t.Run("attempts", func(t *testing.T) { + request := httptest.NewRequest(http.MethodGet, "/api/v1/internal/deliveries/delivery-get/attempts", nil) + response := httptest.NewRecorder() + handler.ServeHTTP(response, request) + + require.Equal(t, http.StatusOK, response.Code) + var payload DeliveryAttemptsResponse + require.NoError(t, json.NewDecoder(response.Body).Decode(&payload)) + require.Len(t, payload.Items, 1) + require.Equal(t, 1, payload.Items[0].AttemptNo) + }) + + t.Run("resend", func(t *testing.T) { + request := httptest.NewRequest(http.MethodPost, "/api/v1/internal/deliveries/delivery-get/resend", nil) + response := httptest.NewRecorder() + handler.ServeHTTP(response, request) + + require.Equal(t, http.StatusOK, response.Code) + var payload DeliveryResendResponse + require.NoError(t, json.NewDecoder(response.Body).Decode(&payload)) + require.Equal(t, "delivery-clone", payload.DeliveryID) + }) +} + +func TestOperatorHandlersMapErrors(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + method string + path string + deps Dependencies + wantStatus int + wantCode string + }{ + { + name: "list bad request", + method: http.MethodGet, + path: DeliveriesPath + "?limit=0", + deps: Dependencies{Logger: slog.New(slog.NewJSONHandler(io.Discard, nil)), ListDeliveries: listDeliveriesFunc(func(context.Context, listdeliveries.Input) (listdeliveries.Result, error) { + return listdeliveries.Result{}, nil + })}, + wantStatus: http.StatusBadRequest, + wantCode: ErrorCodeInvalidRequest, + }, + { + name: "get not found", + method: http.MethodGet, + path: "/api/v1/internal/deliveries/missing", + deps: Dependencies{Logger: slog.New(slog.NewJSONHandler(io.Discard, nil)), GetDelivery: getDeliveryFunc(func(context.Context, getdelivery.Input) (getdelivery.Result, error) { + return getdelivery.Result{}, getdelivery.ErrNotFound + })}, + wantStatus: http.StatusNotFound, + wantCode: ErrorCodeDeliveryNotFound, + }, + { + 
name: "attempts unavailable", + method: http.MethodGet, + path: "/api/v1/internal/deliveries/missing/attempts", + deps: Dependencies{Logger: slog.New(slog.NewJSONHandler(io.Discard, nil)), ListAttempts: listAttemptsFunc(func(context.Context, listattempts.Input) (listattempts.Result, error) { + return listattempts.Result{}, listattempts.ErrServiceUnavailable + })}, + wantStatus: http.StatusServiceUnavailable, + wantCode: ErrorCodeServiceUnavailable, + }, + { + name: "resend not allowed", + method: http.MethodPost, + path: "/api/v1/internal/deliveries/missing/resend", + deps: Dependencies{Logger: slog.New(slog.NewJSONHandler(io.Discard, nil)), ResendDelivery: resendDeliveryFunc(func(context.Context, resenddelivery.Input) (resenddelivery.Result, error) { + return resenddelivery.Result{}, resenddelivery.ErrNotAllowed + })}, + wantStatus: http.StatusConflict, + wantCode: ErrorCodeResendNotAllowed, + }, + { + name: "resend internal error", + method: http.MethodPost, + path: "/api/v1/internal/deliveries/missing/resend", + deps: Dependencies{Logger: slog.New(slog.NewJSONHandler(io.Discard, nil)), ResendDelivery: resendDeliveryFunc(func(context.Context, resenddelivery.Input) (resenddelivery.Result, error) { + return resenddelivery.Result{}, errors.New("boom") + })}, + wantStatus: http.StatusInternalServerError, + wantCode: ErrorCodeInternalError, + }, + } + + for _, tt := range tests { + tt := tt + + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + tt.deps.OperatorRequestTimeout = time.Second + handler := newHandler(tt.deps) + request := httptest.NewRequest(tt.method, tt.path, nil) + response := httptest.NewRecorder() + + handler.ServeHTTP(response, request) + + require.Equal(t, tt.wantStatus, response.Code) + var payload ErrorResponse + require.NoError(t, json.NewDecoder(response.Body).Decode(&payload)) + require.Equal(t, tt.wantCode, payload.Error.Code) + }) + } +} + +func TestOperatorHandlersApplyRequestTimeout(t *testing.T) { + t.Parallel() + + deadlineObserved := 
make(chan struct{}, 1) + handler := newHandler(Dependencies{ + Logger: slog.New(slog.NewJSONHandler(io.Discard, nil)), + OperatorRequestTimeout: 50 * time.Millisecond, + ListDeliveries: listDeliveriesFunc(func(ctx context.Context, input listdeliveries.Input) (listdeliveries.Result, error) { + _ = input + if _, ok := ctx.Deadline(); ok { + deadlineObserved <- struct{}{} + } + return listdeliveries.Result{}, nil + }), + }) + + request := httptest.NewRequest(http.MethodGet, DeliveriesPath, nil) + response := httptest.NewRecorder() + handler.ServeHTTP(response, request) + + require.Equal(t, http.StatusOK, response.Code) + select { + case <-deadlineObserved: + default: + t.Fatal("expected operator handler to apply request timeout") + } +} + +type listDeliveriesFunc func(context.Context, listdeliveries.Input) (listdeliveries.Result, error) + +func (fn listDeliveriesFunc) Execute(ctx context.Context, input listdeliveries.Input) (listdeliveries.Result, error) { + return fn(ctx, input) +} + +type getDeliveryFunc func(context.Context, getdelivery.Input) (getdelivery.Result, error) + +func (fn getDeliveryFunc) Execute(ctx context.Context, input getdelivery.Input) (getdelivery.Result, error) { + return fn(ctx, input) +} + +type listAttemptsFunc func(context.Context, listattempts.Input) (listattempts.Result, error) + +func (fn listAttemptsFunc) Execute(ctx context.Context, input listattempts.Input) (listattempts.Result, error) { + return fn(ctx, input) +} + +type resendDeliveryFunc func(context.Context, resenddelivery.Input) (resenddelivery.Result, error) + +func (fn resendDeliveryFunc) Execute(ctx context.Context, input resenddelivery.Input) (resenddelivery.Result, error) { + return fn(ctx, input) +} + +func validOperatorDelivery(id string, status deliverydomain.Status) deliverydomain.Delivery { + createdAt := time.Unix(1_775_122_000, 0).UTC() + updatedAt := createdAt.Add(time.Minute) + record := deliverydomain.Delivery{ + DeliveryID: common.DeliveryID(id), + Source: 
deliverydomain.SourceNotification, + PayloadMode: deliverydomain.PayloadModeRendered, + Envelope: deliverydomain.Envelope{To: []common.Email{common.Email("pilot@example.com")}}, + Content: deliverydomain.Content{Subject: "Turn ready", TextBody: "Turn ready"}, + IdempotencyKey: common.IdempotencyKey("notification:" + id), + Status: status, + AttemptCount: 1, + CreatedAt: createdAt, + UpdatedAt: updatedAt, + } + + switch status { + case deliverydomain.StatusSent: + sentAt := updatedAt + record.SentAt = &sentAt + record.LastAttemptStatus = attempt.StatusProviderAccepted + case deliverydomain.StatusDeadLetter: + deadLetteredAt := updatedAt + record.DeadLetteredAt = &deadLetteredAt + record.LastAttemptStatus = attempt.StatusTimedOut + } + + if err := record.Validate(); err != nil { + panic(err) + } + return record +} + +func validOperatorDeadLetter(deliveryID common.DeliveryID) deliverydomain.DeadLetterEntry { + entry := deliverydomain.DeadLetterEntry{ + DeliveryID: deliveryID, + FinalAttemptNo: 1, + FailureClassification: "retry_exhausted", + ProviderSummary: "smtp timeout", + CreatedAt: time.Unix(1_775_122_100, 0).UTC(), + RecoveryHint: "check SMTP connectivity", + } + if err := entry.Validate(); err != nil { + panic(err) + } + + return entry +} + +func validOperatorAttempt(deliveryID common.DeliveryID, attemptNo int, status attempt.Status) attempt.Attempt { + scheduledFor := time.Unix(1_775_122_050, 0).UTC() + startedAt := scheduledFor.Add(time.Second) + finishedAt := startedAt.Add(time.Second) + record := attempt.Attempt{ + DeliveryID: deliveryID, + AttemptNo: attemptNo, + ScheduledFor: scheduledFor, + StartedAt: &startedAt, + FinishedAt: &finishedAt, + Status: status, + } + if err := record.Validate(); err != nil { + panic(err) + } + + return record +} diff --git a/mail/internal/api/internalhttp/server.go b/mail/internal/api/internalhttp/server.go new file mode 100644 index 0000000..43b3ce4 --- /dev/null +++ b/mail/internal/api/internalhttp/server.go @@ -0,0 +1,277 
@@ +// Package internalhttp provides the trusted internal HTTP listener used by the +// runnable Mail Service process. +package internalhttp + +import ( + "context" + "encoding/json" + "errors" + "fmt" + "log/slog" + "net" + "net/http" + "sync" + "time" + + "galaxy/mail/internal/telemetry" + + "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp" +) + +const ( + // DeliveriesPath is the trusted operator list route reserved by the Stage 6 + // runnable skeleton. + DeliveriesPath = "/api/v1/internal/deliveries" + + // DeliveryByIDPath is the trusted operator get-delivery route reserved by + // the Stage 6 runnable skeleton. + DeliveryByIDPath = "/api/v1/internal/deliveries/{delivery_id}" + + // DeliveryAttemptsPath is the trusted operator list-attempts route reserved + // by the Stage 6 runnable skeleton. + DeliveryAttemptsPath = "/api/v1/internal/deliveries/{delivery_id}/attempts" + + // DeliveryResendPath is the trusted operator resend route reserved by the + // Stage 6 runnable skeleton. + DeliveryResendPath = "/api/v1/internal/deliveries/{delivery_id}/resend" +) + +// Config describes the trusted internal HTTP listener owned by Mail Service. +type Config struct { + // Addr is the TCP listen address used by the trusted internal HTTP server. + Addr string + + // ReadHeaderTimeout bounds how long the listener may spend reading request + // headers before the server rejects the connection. + ReadHeaderTimeout time.Duration + + // ReadTimeout bounds how long the listener may spend reading one trusted + // internal request. + ReadTimeout time.Duration + + // IdleTimeout bounds how long the listener keeps an idle keep-alive + // connection open. + IdleTimeout time.Duration +} + +// Validate reports whether cfg contains a usable internal HTTP listener +// configuration. 
+func (cfg Config) Validate() error { + switch { + case cfg.Addr == "": + return errors.New("internal HTTP addr must not be empty") + case cfg.ReadHeaderTimeout <= 0: + return errors.New("internal HTTP read header timeout must be positive") + case cfg.ReadTimeout <= 0: + return errors.New("internal HTTP read timeout must be positive") + case cfg.IdleTimeout <= 0: + return errors.New("internal HTTP idle timeout must be positive") + default: + return nil + } +} + +// Dependencies describes the collaborators used by the trusted internal HTTP +// transport layer. +type Dependencies struct { + // Logger writes structured transport logs. When nil, slog.Default is used. + Logger *slog.Logger + + // Telemetry records low-cardinality transport and auth-delivery metrics. + Telemetry *telemetry.Runtime + + // AcceptLoginCodeDelivery handles the dedicated auth-delivery route when + // provided. + AcceptLoginCodeDelivery AcceptLoginCodeDeliveryUseCase + + // ListDeliveries handles the trusted operator delivery-list route when + // provided. + ListDeliveries ListDeliveriesUseCase + + // GetDelivery handles the trusted operator exact delivery-read route when + // provided. + GetDelivery GetDeliveryUseCase + + // ListAttempts handles the trusted operator attempt-history route when + // provided. + ListAttempts ListAttemptsUseCase + + // ResendDelivery handles the trusted operator resend route when provided. + ResendDelivery ResendDeliveryUseCase + + // OperatorRequestTimeout bounds one trusted operator use-case execution. + OperatorRequestTimeout time.Duration +} + +// Server owns the trusted internal HTTP listener exposed by Mail Service. +type Server struct { + cfg Config + + handler http.Handler + logger *slog.Logger + + stateMu sync.RWMutex + server *http.Server + listener net.Listener +} + +// NewServer constructs one trusted internal HTTP server for cfg and deps. 
+func NewServer(cfg Config, deps Dependencies) (*Server, error) {
+	if err := cfg.Validate(); err != nil {
+		return nil, fmt.Errorf("new internal HTTP server: %w", err)
+	}
+
+	logger := deps.Logger
+	if logger == nil {
+		logger = slog.Default()
+	}
+
+	return &Server{
+		cfg:     cfg,
+		handler: newHandler(deps),
+		logger:  logger.With("component", "internal_http"),
+	}, nil
+}
+
+// Run binds the configured listener and serves the trusted internal HTTP
+// surface until Shutdown closes the server.
+func (server *Server) Run(ctx context.Context) error {
+	if ctx == nil {
+		return errors.New("run internal HTTP server: nil context")
+	}
+	if err := ctx.Err(); err != nil {
+		return fmt.Errorf("run internal HTTP server: %w", err)
+	}
+
+	listener, err := net.Listen("tcp", server.cfg.Addr)
+	if err != nil {
+		return fmt.Errorf("run internal HTTP server: listen on %q: %w", server.cfg.Addr, err)
+	}
+
+	httpServer := &http.Server{
+		Handler:           server.handler,
+		ReadHeaderTimeout: server.cfg.ReadHeaderTimeout,
+		ReadTimeout:       server.cfg.ReadTimeout,
+		IdleTimeout:       server.cfg.IdleTimeout,
+	}
+
+	server.stateMu.Lock()
+	server.server = httpServer
+	server.listener = listener
+	server.stateMu.Unlock()
+
+	server.logger.Info("internal HTTP server started", "addr", listener.Addr().String())
+
+	defer func() {
+		server.stateMu.Lock()
+		server.server = nil
+		server.listener = nil
+		server.stateMu.Unlock()
+	}()
+
+	// http.Server.Serve always returns a non-nil error, so only the
+	// graceful-close and failure cases need handling here.
+	err = httpServer.Serve(listener)
+	if errors.Is(err, http.ErrServerClosed) {
+		server.logger.Info("internal HTTP server stopped")
+		return nil
+	}
+	return fmt.Errorf("run internal HTTP server: serve on %q: %w", server.cfg.Addr, err)
+}
+
+// Shutdown gracefully stops the trusted internal HTTP server within ctx.
+func (server *Server) Shutdown(ctx context.Context) error { + if ctx == nil { + return errors.New("shutdown internal HTTP server: nil context") + } + + server.stateMu.RLock() + httpServer := server.server + server.stateMu.RUnlock() + + if httpServer == nil { + return nil + } + + if err := httpServer.Shutdown(ctx); err != nil && !errors.Is(err, http.ErrServerClosed) { + return fmt.Errorf("shutdown internal HTTP server: %w", err) + } + + return nil +} + +func newHandler(deps Dependencies) http.Handler { + logger := deps.Logger + if logger == nil { + logger = slog.Default() + } + + mux := http.NewServeMux() + + loginCodeHandler := http.HandlerFunc(serviceUnavailableHandler) + if deps.AcceptLoginCodeDelivery != nil { + loginCodeHandler = newAcceptLoginCodeDeliveryHandler(deps.AcceptLoginCodeDelivery) + } + listDeliveriesHandler := http.HandlerFunc(serviceUnavailableHandler) + if deps.ListDeliveries != nil { + listDeliveriesHandler = newListDeliveriesHandler(deps.ListDeliveries, deps.OperatorRequestTimeout) + } + getDeliveryHandler := http.HandlerFunc(serviceUnavailableHandler) + if deps.GetDelivery != nil { + getDeliveryHandler = newGetDeliveryHandler(deps.GetDelivery, deps.OperatorRequestTimeout) + } + listAttemptsHandler := http.HandlerFunc(serviceUnavailableHandler) + if deps.ListAttempts != nil { + listAttemptsHandler = newListAttemptsHandler(deps.ListAttempts, deps.OperatorRequestTimeout) + } + resendDeliveryHandler := http.HandlerFunc(serviceUnavailableHandler) + if deps.ResendDelivery != nil { + resendDeliveryHandler = newResendDeliveryHandler(deps.ResendDelivery, deps.OperatorRequestTimeout) + } + + mux.Handle("POST "+LoginCodeDeliveriesPath, wrapObservedRoute(LoginCodeDeliveriesPath, logger, deps.Telemetry, loginCodeHandler)) + mux.Handle("GET "+DeliveriesPath, wrapObservedRoute(DeliveriesPath, logger, deps.Telemetry, listDeliveriesHandler)) + mux.Handle("GET "+DeliveryByIDPath, wrapObservedRoute(DeliveryByIDPath, logger, deps.Telemetry, getDeliveryHandler)) 
+ mux.Handle("GET "+DeliveryAttemptsPath, wrapObservedRoute(DeliveryAttemptsPath, logger, deps.Telemetry, listAttemptsHandler)) + mux.Handle("POST "+DeliveryResendPath, wrapObservedRoute(DeliveryResendPath, logger, deps.Telemetry, resendDeliveryHandler)) + + return mux +} + +func wrapObservedRoute(route string, logger *slog.Logger, telemetryRuntime *telemetry.Runtime, next http.Handler) http.Handler { + handler := instrumentRoute(route, logger, telemetryRuntime, next) + + options := []otelhttp.Option{} + if telemetryRuntime != nil { + options = append(options, + otelhttp.WithTracerProvider(telemetryRuntime.TracerProvider()), + otelhttp.WithMeterProvider(telemetryRuntime.MeterProvider()), + ) + } + + return otelhttp.NewHandler(handler, route, options...) +} + +func serviceUnavailableHandler(writer http.ResponseWriter, request *http.Request) { + _ = request + writeErrorResponse(writer, http.StatusServiceUnavailable, ErrorCodeServiceUnavailable, "service is unavailable") +} + +func writeErrorResponse(writer http.ResponseWriter, statusCode int, code string, message string) { + if recorder, ok := writer.(*observedResponseWriter); ok { + recorder.SetErrorCode(code) + } + + payload := ErrorResponse{ + Error: ErrorBody{ + Code: code, + Message: message, + }, + } + + writer.Header().Set("Content-Type", "application/json") + writer.WriteHeader(statusCode) + _ = json.NewEncoder(writer).Encode(payload) +} diff --git a/mail/internal/api/internalhttp/server_test.go b/mail/internal/api/internalhttp/server_test.go new file mode 100644 index 0000000..5819a87 --- /dev/null +++ b/mail/internal/api/internalhttp/server_test.go @@ -0,0 +1,205 @@ +package internalhttp + +import ( + "context" + "encoding/json" + "io" + "net" + "net/http" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestNewServerRejectsInvalidConfiguration(t *testing.T) { + t.Parallel() + + cfg := Config{ + ReadHeaderTimeout: time.Second, + ReadTimeout: 
time.Second, + IdleTimeout: time.Second, + } + + _, err := NewServer(cfg, Dependencies{}) + require.Error(t, err) + assert.Contains(t, err.Error(), "addr") +} + +func TestServerRunAndShutdown(t *testing.T) { + t.Parallel() + + cfg := testConfig(t) + server, err := NewServer(cfg, Dependencies{}) + require.NoError(t, err) + + runErr := make(chan error, 1) + go func() { + runErr <- server.Run(context.Background()) + }() + + client := newTestHTTPClient(t) + waitForReservedRouteReady(t, client, cfg.Addr) + + shutdownCtx, cancel := context.WithTimeout(context.Background(), time.Second) + defer cancel() + require.NoError(t, server.Shutdown(shutdownCtx)) + waitForServerRunResult(t, runErr) +} + +func TestReservedRoutesReturnStableServiceUnavailable(t *testing.T) { + t.Parallel() + + cfg := testConfig(t) + server, err := NewServer(cfg, Dependencies{}) + require.NoError(t, err) + + runErr := make(chan error, 1) + go func() { + runErr <- server.Run(context.Background()) + }() + + client := newTestHTTPClient(t) + waitForReservedRouteReady(t, client, cfg.Addr) + + tests := []struct { + method string + path string + }{ + {method: http.MethodPost, path: LoginCodeDeliveriesPath}, + {method: http.MethodGet, path: DeliveriesPath}, + {method: http.MethodGet, path: "/api/v1/internal/deliveries/delivery-123"}, + {method: http.MethodGet, path: "/api/v1/internal/deliveries/delivery-123/attempts"}, + {method: http.MethodPost, path: "/api/v1/internal/deliveries/delivery-123/resend"}, + } + + for _, tt := range tests { + tt := tt + + t.Run(tt.method+" "+tt.path, func(t *testing.T) { + request, err := http.NewRequest(tt.method, "http://"+cfg.Addr+tt.path, nil) + require.NoError(t, err) + + response, err := client.Do(request) + require.NoError(t, err) + defer response.Body.Close() + + require.Equal(t, http.StatusServiceUnavailable, response.StatusCode) + require.Equal(t, "application/json", response.Header.Get("Content-Type")) + + var payload ErrorResponse + require.NoError(t, 
json.NewDecoder(response.Body).Decode(&payload)) + require.Equal(t, ErrorCodeServiceUnavailable, payload.Error.Code) + require.Equal(t, "service is unavailable", payload.Error.Message) + }) + } + + shutdownCtx, cancel := context.WithTimeout(context.Background(), time.Second) + defer cancel() + require.NoError(t, server.Shutdown(shutdownCtx)) + waitForServerRunResult(t, runErr) +} + +func TestServerDoesNotExposeProbeOrUnknownRoutes(t *testing.T) { + t.Parallel() + + cfg := testConfig(t) + server, err := NewServer(cfg, Dependencies{}) + require.NoError(t, err) + + runErr := make(chan error, 1) + go func() { + runErr <- server.Run(context.Background()) + }() + + client := newTestHTTPClient(t) + waitForReservedRouteReady(t, client, cfg.Addr) + + for _, path := range []string{"/healthz", "/readyz", "/metrics", "/unknown"} { + request, err := http.NewRequest(http.MethodGet, "http://"+cfg.Addr+path, nil) + require.NoError(t, err) + + response, err := client.Do(request) + require.NoError(t, err) + _, _ = io.ReadAll(response.Body) + response.Body.Close() + + assert.Equalf(t, http.StatusNotFound, response.StatusCode, "path %s", path) + } + + shutdownCtx, cancel := context.WithTimeout(context.Background(), time.Second) + defer cancel() + require.NoError(t, server.Shutdown(shutdownCtx)) + waitForServerRunResult(t, runErr) +} + +func testConfig(t *testing.T) Config { + t.Helper() + + return Config{ + Addr: mustFreeAddr(t), + ReadHeaderTimeout: time.Second, + ReadTimeout: 2 * time.Second, + IdleTimeout: time.Minute, + } +} + +func newTestHTTPClient(t *testing.T) *http.Client { + t.Helper() + + transport := &http.Transport{DisableKeepAlives: true} + t.Cleanup(transport.CloseIdleConnections) + + return &http.Client{ + Timeout: 250 * time.Millisecond, + Transport: transport, + } +} + +func waitForReservedRouteReady(t *testing.T, client *http.Client, addr string) { + t.Helper() + + require.Eventually(t, func() bool { + request, err := http.NewRequest(http.MethodPost, 
"http://"+addr+LoginCodeDeliveriesPath, nil) + if err != nil { + return false + } + + response, err := client.Do(request) + if err != nil { + return false + } + defer response.Body.Close() + _, _ = io.ReadAll(response.Body) + + return response.StatusCode == http.StatusServiceUnavailable + }, 5*time.Second, 25*time.Millisecond, "internal HTTP server did not become reachable") +} + +func waitForServerRunResult(t *testing.T, runErr <-chan error) { + t.Helper() + + var err error + require.Eventually(t, func() bool { + select { + case err = <-runErr: + return true + default: + return false + } + }, 5*time.Second, 10*time.Millisecond, "internal HTTP server did not stop") + require.NoError(t, err) +} + +func mustFreeAddr(t *testing.T) string { + t.Helper() + + listener, err := net.Listen("tcp", "127.0.0.1:0") + require.NoError(t, err) + defer func() { + assert.NoError(t, listener.Close()) + }() + + return listener.Addr().String() +} diff --git a/mail/internal/api/streamcommand/contract.go b/mail/internal/api/streamcommand/contract.go new file mode 100644 index 0000000..4c6dd68 --- /dev/null +++ b/mail/internal/api/streamcommand/contract.go @@ -0,0 +1,693 @@ +// Package streamcommand defines the frozen Redis Streams command contract used +// by Mail Service for generic asynchronous delivery intake. +package streamcommand + +import ( + "bytes" + "crypto/sha256" + "encoding/base64" + "encoding/hex" + "encoding/json" + "errors" + "fmt" + "io" + "sort" + "strconv" + "strings" + "time" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/domain/malformedcommand" +) + +const ( + // DeliveryCommandsStream is the frozen Redis Stream name used for generic + // asynchronous delivery commands. + DeliveryCommandsStream = "mail:delivery_commands" + + // MaxAttachments is the frozen attachment-count limit for one generic + // asynchronous command. 
+ MaxAttachments = 5 + + // MaxEncodedAttachmentPayloadBytes is the frozen limit for the total + // encoded attachment payload, measured as the sum of attachment + // `content_base64` string lengths. + MaxEncodedAttachmentPayloadBytes = 2 * 1024 * 1024 +) + +const ( + fieldDeliveryID = "delivery_id" + fieldSource = "source" + fieldPayloadMode = "payload_mode" + fieldIdempotency = "idempotency_key" + fieldRequestedAtMS = "requested_at_ms" + fieldPayloadJSON = "payload_json" + fieldRequestID = "request_id" + fieldTraceID = "trace_id" +) + +var ( + requiredFieldNames = map[string]struct{}{ + fieldDeliveryID: {}, + fieldSource: {}, + fieldPayloadMode: {}, + fieldIdempotency: {}, + fieldRequestedAtMS: {}, + fieldPayloadJSON: {}, + } + optionalFieldNames = map[string]struct{}{ + fieldRequestID: {}, + fieldTraceID: {}, + } +) + +// ClassifyDecodeError maps one command-decoding or command-validation error to +// the stable malformed-command failure code surface. +func ClassifyDecodeError(err error) malformedcommand.FailureCode { + if err == nil { + return malformedcommand.FailureCodeInvalidCommand + } + + message := err.Error() + switch { + case strings.Contains(message, "delivery envelope"), + strings.Contains(message, "must contain at least one recipient"): + return malformedcommand.FailureCodeInvalidEnvelope + case strings.Contains(message, "payload_json"), + strings.Contains(message, "stream command attachments"), + strings.Contains(message, "delivery content"), + strings.Contains(message, "template id"), + strings.Contains(message, "locale"), + strings.Contains(message, "variables"): + return malformedcommand.FailureCodeInvalidPayload + default: + return malformedcommand.FailureCodeInvalidCommand + } +} + +// Command stores one normalized generic asynchronous command accepted from the +// Redis Streams contract. +type Command struct { + // DeliveryID stores the publisher-owned logical delivery identifier. 
+ DeliveryID common.DeliveryID + + // Source stores the frozen async source vocabulary value. + Source deliverydomain.Source + + // PayloadMode stores whether the command contains final rendered content or + // template-selection data. + PayloadMode deliverydomain.PayloadMode + + // IdempotencyKey stores the caller-owned stable deduplication key. + IdempotencyKey common.IdempotencyKey + + // RequestedAt stores when the publisher originally requested the generic + // delivery. + RequestedAt time.Time + + // RequestID stores the optional tracing request identifier. + RequestID string + + // TraceID stores the optional tracing trace identifier. + TraceID string + + // Envelope stores the SMTP addressing information frozen by the stream + // payload contract. + Envelope deliverydomain.Envelope + + // Attachments stores the normalized attachment list including computed + // decoded sizes. + Attachments []Attachment + + // Subject stores the required final subject for rendered-mode commands. + Subject string + + // TextBody stores the required plaintext body for rendered-mode commands. + TextBody string + + // HTMLBody stores the optional HTML body for rendered-mode commands. + HTMLBody string + + // TemplateID stores the required template family for template-mode + // commands. + TemplateID common.TemplateID + + // Locale stores the required canonical BCP 47 locale for template-mode + // commands. + Locale common.Locale + + // Variables stores the arbitrary template variables object for + // template-mode commands. + Variables map[string]any +} + +// Validate reports whether Command satisfies the frozen Stage 05 stream +// contract. 
+func (command Command) Validate() error { + if err := command.DeliveryID.Validate(); err != nil { + return fmt.Errorf("stream command delivery id: %w", err) + } + if command.Source != deliverydomain.SourceNotification { + return fmt.Errorf("stream command source %q is unsupported", command.Source) + } + if !command.PayloadMode.IsKnown() { + return fmt.Errorf("stream command payload mode %q is unsupported", command.PayloadMode) + } + if err := command.IdempotencyKey.Validate(); err != nil { + return fmt.Errorf("stream command idempotency key: %w", err) + } + if err := common.ValidateTimestamp("stream command requested at", command.RequestedAt); err != nil { + return err + } + if err := command.Envelope.Validate(); err != nil { + return err + } + if len(command.Attachments) > MaxAttachments { + return fmt.Errorf("stream command attachments must contain at most %d entries", MaxAttachments) + } + + totalEncodedPayloadBytes := 0 + for index, attachment := range command.Attachments { + if err := attachment.Validate(); err != nil { + return fmt.Errorf("stream command attachments[%d]: %w", index, err) + } + totalEncodedPayloadBytes += len(attachment.ContentBase64) + } + if totalEncodedPayloadBytes > MaxEncodedAttachmentPayloadBytes { + return fmt.Errorf( + "stream command encoded attachment payload must not exceed %d bytes", + MaxEncodedAttachmentPayloadBytes, + ) + } + + switch command.PayloadMode { + case deliverydomain.PayloadModeRendered: + if err := (deliverydomain.Content{ + Subject: command.Subject, + TextBody: command.TextBody, + HTMLBody: command.HTMLBody, + }).ValidateMaterialized(); err != nil { + return err + } + if !command.TemplateID.IsZero() { + return errors.New("rendered stream command must not contain template id") + } + if !command.Locale.IsZero() { + return errors.New("rendered stream command must not contain locale") + } + if len(command.Variables) != 0 { + return errors.New("rendered stream command must not contain template variables") + } + case 
deliverydomain.PayloadModeTemplate: + if err := command.TemplateID.Validate(); err != nil { + return fmt.Errorf("stream command template id: %w", err) + } + if err := command.Locale.Validate(); err != nil { + return fmt.Errorf("stream command locale: %w", err) + } + if command.Variables == nil { + return errors.New("template stream command variables must not be nil") + } + if command.Subject != "" { + return errors.New("template stream command must not contain subject") + } + if command.TextBody != "" { + return errors.New("template stream command must not contain text body") + } + if command.HTMLBody != "" { + return errors.New("template stream command must not contain html body") + } + } + + return nil +} + +// Fingerprint returns the stable Stage 05 request fingerprint used by later +// idempotency handling. The fingerprint excludes tracing-only metadata +// (`request_id`, `trace_id`) but includes the normalized business fields of +// the command. +func (command Command) Fingerprint() (string, error) { + if err := command.Validate(); err != nil { + return "", err + } + + normalized := fingerprintCommand{ + DeliveryID: command.DeliveryID.String(), + Source: command.Source, + PayloadMode: command.PayloadMode, + IdempotencyKey: command.IdempotencyKey.String(), + RequestedAtMS: command.RequestedAt.UTC().UnixMilli(), + Envelope: fingerprintEnvelope{ + To: cloneEmails(command.Envelope.To), + Cc: cloneEmails(command.Envelope.Cc), + Bcc: cloneEmails(command.Envelope.Bcc), + ReplyTo: cloneEmails(command.Envelope.ReplyTo), + }, + Attachments: cloneAttachments(command.Attachments), + Subject: command.Subject, + TextBody: command.TextBody, + HTMLBody: command.HTMLBody, + TemplateID: command.TemplateID.String(), + Locale: command.Locale.String(), + Variables: command.Variables, + } + + payload, err := json.Marshal(normalized) + if err != nil { + return "", fmt.Errorf("marshal stream command fingerprint: %w", err) + } + + sum := sha256.Sum256(payload) + + return "sha256:" + 
hex.EncodeToString(sum[:]), nil +} + +// Attachment stores one inline base64 attachment accepted by the asynchronous +// generic stream contract. +type Attachment struct { + // Filename stores the user-facing attachment filename. + Filename string + + // ContentType stores the MIME media type of the attachment. + ContentType string + + // ContentBase64 stores the exact inline base64 payload published on the + // stream. + ContentBase64 string + + // SizeBytes stores the computed decoded attachment size in bytes. + SizeBytes int64 +} + +// Validate reports whether Attachment contains a valid inline base64 payload +// and a complete metadata header. +func (attachment Attachment) Validate() error { + if _, err := base64.StdEncoding.DecodeString(attachment.ContentBase64); err != nil { + return fmt.Errorf("attachment content_base64 must be valid base64: %w", err) + } + + metadata := common.AttachmentMetadata{ + Filename: attachment.Filename, + ContentType: attachment.ContentType, + SizeBytes: attachment.SizeBytes, + } + if err := metadata.Validate(); err != nil { + return err + } + + return nil +} + +// DecodeCommand validates one raw Redis Streams entry and returns the +// normalized asynchronous generic command frozen by Stage 05. 
+func DecodeCommand(fields map[string]any) (Command, error) { + if fields == nil { + return Command{}, errors.New("stream command fields must not be nil") + } + + if err := validateFieldSet(fields); err != nil { + return Command{}, err + } + + deliveryIDValue, err := requiredString(fields, fieldDeliveryID) + if err != nil { + return Command{}, err + } + sourceValue, err := requiredString(fields, fieldSource) + if err != nil { + return Command{}, err + } + payloadModeValue, err := requiredString(fields, fieldPayloadMode) + if err != nil { + return Command{}, err + } + idempotencyValue, err := requiredString(fields, fieldIdempotency) + if err != nil { + return Command{}, err + } + requestedAtValue, err := requiredString(fields, fieldRequestedAtMS) + if err != nil { + return Command{}, err + } + payloadJSONValue, err := requiredString(fields, fieldPayloadJSON) + if err != nil { + return Command{}, err + } + + requestedAtMS, err := strconv.ParseInt(requestedAtValue, 10, 64) + if err != nil { + return Command{}, fmt.Errorf("stream field %q must be a base-10 Unix milliseconds string", fieldRequestedAtMS) + } + + command := Command{ + DeliveryID: common.DeliveryID(deliveryIDValue), + Source: deliverydomain.Source(sourceValue), + PayloadMode: deliverydomain.PayloadMode(payloadModeValue), + IdempotencyKey: common.IdempotencyKey(idempotencyValue), + RequestedAt: time.UnixMilli(requestedAtMS).UTC(), + } + + if requestIDValue, ok, err := optionalString(fields, fieldRequestID); err != nil { + return Command{}, err + } else if ok { + command.RequestID = requestIDValue + } + if traceIDValue, ok, err := optionalString(fields, fieldTraceID); err != nil { + return Command{}, err + } else if ok { + command.TraceID = traceIDValue + } + + switch command.PayloadMode { + case deliverydomain.PayloadModeRendered: + if err := decodeRenderedPayload(payloadJSONValue, &command); err != nil { + return Command{}, err + } + case deliverydomain.PayloadModeTemplate: + if err := 
decodeTemplatePayload(payloadJSONValue, &command); err != nil { + return Command{}, err + } + default: + return Command{}, fmt.Errorf("stream field %q value %q is unsupported", fieldPayloadMode, payloadModeValue) + } + + if err := command.Validate(); err != nil { + return Command{}, err + } + + return command, nil +} + +type renderedPayloadJSON struct { + To *[]string `json:"to"` + Cc *[]string `json:"cc"` + Bcc *[]string `json:"bcc"` + ReplyTo *[]string `json:"reply_to"` + Subject *string `json:"subject"` + TextBody *string `json:"text_body"` + HTMLBody *string `json:"html_body,omitempty"` + Attachments *[]attachmentJSON `json:"attachments"` +} + +type templatePayloadJSON struct { + To *[]string `json:"to"` + Cc *[]string `json:"cc"` + Bcc *[]string `json:"bcc"` + ReplyTo *[]string `json:"reply_to"` + TemplateID *string `json:"template_id"` + Locale *string `json:"locale"` + Variables *json.RawMessage `json:"variables"` + Attachments *[]attachmentJSON `json:"attachments"` +} + +type attachmentJSON struct { + Filename *string `json:"filename"` + ContentType *string `json:"content_type"` + ContentBase64 *string `json:"content_base64"` +} + +type fingerprintCommand struct { + DeliveryID string `json:"delivery_id"` + Source deliverydomain.Source `json:"source"` + PayloadMode deliverydomain.PayloadMode `json:"payload_mode"` + IdempotencyKey string `json:"idempotency_key"` + RequestedAtMS int64 `json:"requested_at_ms"` + Envelope fingerprintEnvelope `json:"envelope"` + Attachments []Attachment `json:"attachments"` + Subject string `json:"subject,omitempty"` + TextBody string `json:"text_body,omitempty"` + HTMLBody string `json:"html_body,omitempty"` + TemplateID string `json:"template_id,omitempty"` + Locale string `json:"locale,omitempty"` + Variables map[string]any `json:"variables,omitempty"` +} + +type fingerprintEnvelope struct { + To []string `json:"to"` + Cc []string `json:"cc"` + Bcc []string `json:"bcc"` + ReplyTo []string `json:"reply_to"` +} + +func 
validateFieldSet(fields map[string]any) error { + missing := make([]string, 0, len(requiredFieldNames)) + for name := range requiredFieldNames { + if _, ok := fields[name]; !ok { + missing = append(missing, name) + } + } + sort.Strings(missing) + if len(missing) > 0 { + return fmt.Errorf("stream command is missing required fields: %s", strings.Join(missing, ", ")) + } + + unexpected := make([]string, 0) + for name := range fields { + if _, ok := requiredFieldNames[name]; ok { + continue + } + if _, ok := optionalFieldNames[name]; ok { + continue + } + unexpected = append(unexpected, name) + } + sort.Strings(unexpected) + if len(unexpected) > 0 { + return fmt.Errorf("stream command contains unsupported fields: %s", strings.Join(unexpected, ", ")) + } + + return nil +} + +func requiredString(fields map[string]any, name string) (string, error) { + value, ok := fields[name] + if !ok { + return "", fmt.Errorf("stream field %q is required", name) + } + + result, ok := value.(string) + if !ok { + return "", fmt.Errorf("stream field %q must be a string", name) + } + + return result, nil +} + +func optionalString(fields map[string]any, name string) (string, bool, error) { + value, ok := fields[name] + if !ok { + return "", false, nil + } + + result, ok := value.(string) + if !ok { + return "", false, fmt.Errorf("stream field %q must be a string", name) + } + + return result, true, nil +} + +func decodeRenderedPayload(payload string, command *Command) error { + var raw renderedPayloadJSON + if err := decodeStrictJSON("decode payload_json", payload, &raw); err != nil { + return err + } + + envelope, attachments, err := decodeCommonPayloadFields( + raw.To, + raw.Cc, + raw.Bcc, + raw.ReplyTo, + raw.Attachments, + ) + if err != nil { + return err + } + if raw.Subject == nil { + return errors.New("payload_json.subject is required") + } + if raw.TextBody == nil { + return errors.New("payload_json.text_body is required") + } + + command.Envelope = envelope + command.Attachments = 
attachments + command.Subject = *raw.Subject + command.TextBody = *raw.TextBody + if raw.HTMLBody != nil { + command.HTMLBody = *raw.HTMLBody + } + + return nil +} + +func decodeTemplatePayload(payload string, command *Command) error { + var raw templatePayloadJSON + if err := decodeStrictJSON("decode payload_json", payload, &raw); err != nil { + return err + } + + envelope, attachments, err := decodeCommonPayloadFields( + raw.To, + raw.Cc, + raw.Bcc, + raw.ReplyTo, + raw.Attachments, + ) + if err != nil { + return err + } + if raw.TemplateID == nil { + return errors.New("payload_json.template_id is required") + } + if raw.Locale == nil { + return errors.New("payload_json.locale is required") + } + if raw.Variables == nil { + return errors.New("payload_json.variables is required") + } + + variables, err := decodeVariables(*raw.Variables) + if err != nil { + return err + } + + locale, err := common.ParseLocale(*raw.Locale) + if err != nil { + return fmt.Errorf("payload_json.locale: %w", err) + } + + command.Envelope = envelope + command.Attachments = attachments + command.TemplateID = common.TemplateID(*raw.TemplateID) + command.Locale = locale + command.Variables = variables + + return nil +} + +func decodeCommonPayloadFields( + to *[]string, + cc *[]string, + bcc *[]string, + replyTo *[]string, + attachments *[]attachmentJSON, +) (deliverydomain.Envelope, []Attachment, error) { + if to == nil { + return deliverydomain.Envelope{}, nil, errors.New("payload_json.to is required") + } + if cc == nil { + return deliverydomain.Envelope{}, nil, errors.New("payload_json.cc is required") + } + if bcc == nil { + return deliverydomain.Envelope{}, nil, errors.New("payload_json.bcc is required") + } + if replyTo == nil { + return deliverydomain.Envelope{}, nil, errors.New("payload_json.reply_to is required") + } + if attachments == nil { + return deliverydomain.Envelope{}, nil, errors.New("payload_json.attachments is required") + } + + envelope := deliverydomain.Envelope{ + To: 
inflateEmails(*to), + Cc: inflateEmails(*cc), + Bcc: inflateEmails(*bcc), + ReplyTo: inflateEmails(*replyTo), + } + inflatedAttachments, err := inflateAttachments(*attachments) + if err != nil { + return deliverydomain.Envelope{}, nil, err + } + + return envelope, inflatedAttachments, nil +} + +func inflateAttachments(raw []attachmentJSON) ([]Attachment, error) { + attachments := make([]Attachment, 0, len(raw)) + for index, entry := range raw { + if entry.Filename == nil { + return nil, fmt.Errorf("payload_json.attachments[%d].filename is required", index) + } + if entry.ContentType == nil { + return nil, fmt.Errorf("payload_json.attachments[%d].content_type is required", index) + } + if entry.ContentBase64 == nil { + return nil, fmt.Errorf("payload_json.attachments[%d].content_base64 is required", index) + } + + decoded, err := base64.StdEncoding.DecodeString(*entry.ContentBase64) + if err != nil { + return nil, fmt.Errorf( + "payload_json.attachments[%d].content_base64 must be valid base64: %w", + index, + err, + ) + } + + attachments = append(attachments, Attachment{ + Filename: *entry.Filename, + ContentType: *entry.ContentType, + ContentBase64: *entry.ContentBase64, + SizeBytes: int64(len(decoded)), + }) + } + + return attachments, nil +} + +func inflateEmails(values []string) []common.Email { + emails := make([]common.Email, len(values)) + for index, value := range values { + emails[index] = common.Email(value) + } + + return emails +} + +func decodeVariables(raw json.RawMessage) (map[string]any, error) { + var variables map[string]any + if err := decodeStrictJSON("decode payload_json.variables", string(raw), &variables); err != nil { + return nil, err + } + if variables == nil { + return nil, errors.New("payload_json.variables must be a JSON object") + } + + return variables, nil +} + +func decodeStrictJSON(label string, raw string, target any) error { + decoder := json.NewDecoder(bytes.NewBufferString(raw)) + decoder.DisallowUnknownFields() + + if err := 
decoder.Decode(target); err != nil { + return fmt.Errorf("%s: %w", label, err) + } + if err := decoder.Decode(&struct{}{}); err != io.EOF { + if err == nil { + return fmt.Errorf("%s: unexpected trailing JSON input", label) + } + + return fmt.Errorf("%s: %w", label, err) + } + + return nil +} + +func cloneEmails(values []common.Email) []string { + result := make([]string, len(values)) + for index, value := range values { + result[index] = value.String() + } + + return result +} + +func cloneAttachments(values []Attachment) []Attachment { + result := make([]Attachment, len(values)) + copy(result, values) + + return result +} diff --git a/mail/internal/api/streamcommand/contract_test.go b/mail/internal/api/streamcommand/contract_test.go new file mode 100644 index 0000000..83bfccf --- /dev/null +++ b/mail/internal/api/streamcommand/contract_test.go @@ -0,0 +1,466 @@ +package streamcommand + +import ( + "encoding/base64" + "encoding/json" + "testing" + "time" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestDecodeCommandSuccessRendered(t *testing.T) { + t.Parallel() + + command, err := DecodeCommand(validRenderedFields(t)) + require.NoError(t, err) + + require.Equal(t, Command{ + DeliveryID: common.DeliveryID("mail-123"), + Source: deliverydomain.SourceNotification, + PayloadMode: deliverydomain.PayloadModeRendered, + IdempotencyKey: common.IdempotencyKey("notification:mail-123"), + RequestedAt: mustUnixMilli(1_775_121_700_000), + RequestID: "req-123", + TraceID: "trace-123", + Envelope: deliverydomain.Envelope{ + To: []common.Email{"pilot@example.com"}, + Cc: []common.Email{}, + Bcc: []common.Email{}, + ReplyTo: []common.Email{"noreply@example.com"}, + }, + Attachments: []Attachment{ + { + Filename: "report.txt", + ContentType: "text/plain", + ContentBase64: base64.StdEncoding.EncodeToString([]byte("report")), + SizeBytes: 6, + 
}, + }, + Subject: "Turn ready", + TextBody: "Turn 54 is ready.", + HTMLBody: "

<p>Turn 54 is ready.</p>

", + }, command) +} + +func TestDecodeCommandSuccessTemplate(t *testing.T) { + t.Parallel() + + command, err := DecodeCommand(validTemplateFields(t)) + require.NoError(t, err) + + require.Equal(t, common.TemplateID("game.turn_ready"), command.TemplateID) + require.Equal(t, common.Locale("fr-FR"), command.Locale) + require.Equal(t, map[string]any{ + "turn_number": float64(54), + "player": map[string]any{ + "name": "Pilot", + }, + }, command.Variables) + require.Empty(t, command.Subject) + require.Empty(t, command.TextBody) +} + +func TestDecodeCommandRejectsInvalidEntry(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + fields map[string]any + wantErr string + }{ + { + name: "missing required field", + fields: func(t *testing.T) map[string]any { + fields := validRenderedFields(t) + delete(fields, fieldDeliveryID) + return fields + }(t), + wantErr: "missing required fields: delivery_id", + }, + { + name: "unsupported field", + fields: func(t *testing.T) map[string]any { + fields := validRenderedFields(t) + fields["extra"] = "value" + return fields + }(t), + wantErr: "unsupported fields: extra", + }, + { + name: "non string field", + fields: func(t *testing.T) map[string]any { + fields := validRenderedFields(t) + fields[fieldDeliveryID] = 42 + return fields + }(t), + wantErr: `stream field "delivery_id" must be a string`, + }, + { + name: "invalid requested at", + fields: func(t *testing.T) map[string]any { + fields := validRenderedFields(t) + fields[fieldRequestedAtMS] = "not-a-timestamp" + return fields + }(t), + wantErr: `stream field "requested_at_ms" must be a base-10 Unix milliseconds string`, + }, + { + name: "unsupported source", + fields: func(t *testing.T) map[string]any { + fields := validRenderedFields(t) + fields[fieldSource] = "operator_resend" + return fields + }(t), + wantErr: `stream command source "operator_resend" is unsupported`, + }, + { + name: "unsupported payload mode", + fields: func(t *testing.T) map[string]any { + fields 
:= validRenderedFields(t) + fields[fieldPayloadMode] = "unknown" + return fields + }(t), + wantErr: `stream field "payload_mode" value "unknown" is unsupported`, + }, + } + + for _, tt := range tests { + tt := tt + + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + _, err := DecodeCommand(tt.fields) + require.Error(t, err) + assert.ErrorContains(t, err, tt.wantErr) + }) + } +} + +func TestDecodeCommandRejectsInvalidPayload(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + fields map[string]any + wantErr string + }{ + { + name: "payload must be object", + fields: func(t *testing.T) map[string]any { + fields := validRenderedFields(t) + fields[fieldPayloadJSON] = `[]` + return fields + }(t), + wantErr: "decode payload_json", + }, + { + name: "rendered payload unknown field", + fields: func(t *testing.T) map[string]any { + fields := validRenderedFields(t) + fields[fieldPayloadJSON] = mustJSONString(t, map[string]any{ + "to": []string{"pilot@example.com"}, + "cc": []string{}, + "bcc": []string{}, + "reply_to": []string{}, + "subject": "Turn ready", + "text_body": "Turn 54 is ready.", + "attachments": []map[string]any{}, + "template_id": "game.turn_ready", + }) + return fields + }(t), + wantErr: "unknown field", + }, + { + name: "trailing json input", + fields: func(t *testing.T) map[string]any { + fields := validRenderedFields(t) + fields[fieldPayloadJSON] = validRenderedPayloadJSON(t) + `{}` + return fields + }(t), + wantErr: "unexpected trailing JSON input", + }, + { + name: "empty recipients", + fields: func(t *testing.T) map[string]any { + fields := validRenderedFields(t) + fields[fieldPayloadJSON] = mustJSONString(t, map[string]any{ + "to": []string{}, + "cc": []string{}, + "bcc": []string{}, + "reply_to": []string{}, + "subject": "Turn ready", + "text_body": "Turn 54 is ready.", + "attachments": []map[string]any{}, + }) + return fields + }(t), + wantErr: "must contain at least one recipient", + }, + { + name: "invalid locale", + fields: 
func(t *testing.T) map[string]any { + fields := validTemplateFields(t) + fields[fieldPayloadJSON] = mustJSONString(t, map[string]any{ + "to": []string{"pilot@example.com"}, + "cc": []string{}, + "bcc": []string{}, + "reply_to": []string{}, + "template_id": "game.turn_ready", + "locale": "english", + "variables": map[string]any{}, + "attachments": []map[string]any{}, + }) + return fields + }(t), + wantErr: "payload_json.locale:", + }, + { + name: "variables must be object", + fields: func(t *testing.T) map[string]any { + fields := validTemplateFields(t) + fields[fieldPayloadJSON] = mustJSONString(t, map[string]any{ + "to": []string{"pilot@example.com"}, + "cc": []string{}, + "bcc": []string{}, + "reply_to": []string{}, + "template_id": "game.turn_ready", + "locale": "fr-FR", + "variables": []string{"not", "object"}, + "attachments": []map[string]any{}, + }) + return fields + }(t), + wantErr: "decode payload_json.variables", + }, + { + name: "invalid attachment base64", + fields: func(t *testing.T) map[string]any { + fields := validRenderedFields(t) + fields[fieldPayloadJSON] = mustJSONString(t, map[string]any{ + "to": []string{"pilot@example.com"}, + "cc": []string{}, + "bcc": []string{}, + "reply_to": []string{}, + "subject": "Turn ready", + "text_body": "Turn 54 is ready.", + "attachments": []map[string]any{ + { + "filename": "report.txt", + "content_type": "text/plain", + "content_base64": "!@#", + }, + }, + }) + return fields + }(t), + wantErr: "content_base64 must be valid base64", + }, + { + name: "too many attachments", + fields: func(t *testing.T) map[string]any { + fields := validRenderedFields(t) + attachments := make([]map[string]any, 0, MaxAttachments+1) + for index := 0; index < MaxAttachments+1; index++ { + attachments = append(attachments, map[string]any{ + "filename": "report.txt", + "content_type": "text/plain", + "content_base64": base64.StdEncoding.EncodeToString([]byte("a")), + }) + } + fields[fieldPayloadJSON] = mustJSONString(t, map[string]any{ 
+ "to": []string{"pilot@example.com"}, + "cc": []string{}, + "bcc": []string{}, + "reply_to": []string{}, + "subject": "Turn ready", + "text_body": "Turn 54 is ready.", + "attachments": attachments, + }) + return fields + }(t), + wantErr: "must contain at most 5 entries", + }, + { + name: "encoded attachment payload limit exceeded", + fields: func(t *testing.T) map[string]any { + fields := validRenderedFields(t) + fields[fieldPayloadJSON] = mustJSONString(t, map[string]any{ + "to": []string{"pilot@example.com"}, + "cc": []string{}, + "bcc": []string{}, + "reply_to": []string{}, + "subject": "Turn ready", + "text_body": "Turn 54 is ready.", + "attachments": []map[string]any{{ + "filename": "report.txt", + "content_type": "text/plain", + "content_base64": oversizedBase64(), + }}, + }) + return fields + }(t), + wantErr: "encoded attachment payload must not exceed", + }, + } + + for _, tt := range tests { + tt := tt + + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + _, err := DecodeCommand(tt.fields) + require.Error(t, err) + assert.ErrorContains(t, err, tt.wantErr) + }) + } +} + +func TestCommandFingerprintIgnoresTracingFields(t *testing.T) { + t.Parallel() + + first, err := DecodeCommand(validRenderedFields(t)) + require.NoError(t, err) + + secondFields := validRenderedFields(t) + secondFields[fieldRequestID] = "req-456" + secondFields[fieldTraceID] = "trace-456" + second, err := DecodeCommand(secondFields) + require.NoError(t, err) + + firstFingerprint, err := first.Fingerprint() + require.NoError(t, err) + secondFingerprint, err := second.Fingerprint() + require.NoError(t, err) + + require.Equal(t, firstFingerprint, secondFingerprint) +} + +func TestCommandFingerprintChangesForBusinessFields(t *testing.T) { + t.Parallel() + + first, err := DecodeCommand(validRenderedFields(t)) + require.NoError(t, err) + + secondFields := validRenderedFields(t) + secondFields[fieldPayloadJSON] = mustJSONString(t, map[string]any{ + "to": []string{"pilot@example.com"}, + 
"cc": []string{}, + "bcc": []string{}, + "reply_to": []string{"noreply@example.com"}, + "subject": "Different subject", + "text_body": "Turn 54 is ready.", + "html_body": "

<p>Turn 54 is ready.</p>

", + "attachments": []map[string]any{{"filename": "report.txt", "content_type": "text/plain", "content_base64": base64.StdEncoding.EncodeToString([]byte("report"))}}, + }) + second, err := DecodeCommand(secondFields) + require.NoError(t, err) + + firstFingerprint, err := first.Fingerprint() + require.NoError(t, err) + secondFingerprint, err := second.Fingerprint() + require.NoError(t, err) + + require.NotEqual(t, firstFingerprint, secondFingerprint) +} + +func validRenderedFields(t *testing.T) map[string]any { + t.Helper() + + return map[string]any{ + fieldDeliveryID: "mail-123", + fieldSource: "notification", + fieldPayloadMode: "rendered", + fieldIdempotency: "notification:mail-123", + fieldRequestedAtMS: "1775121700000", + fieldRequestID: "req-123", + fieldTraceID: "trace-123", + fieldPayloadJSON: validRenderedPayloadJSON(t), + } +} + +func validTemplateFields(t *testing.T) map[string]any { + t.Helper() + + return map[string]any{ + fieldDeliveryID: "mail-124", + fieldSource: "notification", + fieldPayloadMode: "template", + fieldIdempotency: "notification:mail-124", + fieldRequestedAtMS: "1775121700001", + fieldPayloadJSON: validTemplatePayloadJSON(t), + } +} + +func validRenderedPayloadJSON(t *testing.T) string { + t.Helper() + + return mustJSONString(t, map[string]any{ + "to": []string{"pilot@example.com"}, + "cc": []string{}, + "bcc": []string{}, + "reply_to": []string{"noreply@example.com"}, + "subject": "Turn ready", + "text_body": "Turn 54 is ready.", + "html_body": "

<p>Turn 54 is ready.</p>

", + "attachments": []map[string]any{ + { + "filename": "report.txt", + "content_type": "text/plain", + "content_base64": base64.StdEncoding.EncodeToString([]byte("report")), + }, + }, + }) +} + +func validTemplatePayloadJSON(t *testing.T) string { + t.Helper() + + return mustJSONString(t, map[string]any{ + "to": []string{"pilot@example.com"}, + "cc": []string{}, + "bcc": []string{}, + "reply_to": []string{}, + "template_id": "game.turn_ready", + "locale": "fr-FR", + "variables": map[string]any{ + "turn_number": 54, + "player": map[string]any{ + "name": "Pilot", + }, + }, + "attachments": []map[string]any{}, + }) +} + +func mustJSONString(t *testing.T, value any) string { + t.Helper() + + payload, err := json.Marshal(value) + require.NoError(t, err) + + return string(payload) +} + +func oversizedBase64() string { + return string(bytesOf('A', MaxEncodedAttachmentPayloadBytes+4)) +} + +func bytesOf(value byte, size int) []byte { + result := make([]byte, size) + for index := range result { + result[index] = value + } + return result +} + +func mustUnixMilli(value int64) time.Time { + return time.UnixMilli(value).UTC() +} diff --git a/mail/internal/app/app.go b/mail/internal/app/app.go new file mode 100644 index 0000000..97e3012 --- /dev/null +++ b/mail/internal/app/app.go @@ -0,0 +1,168 @@ +// Package app wires the Mail Service process lifecycle and coordinates +// component startup and graceful shutdown. +package app + +import ( + "context" + "errors" + "fmt" + "sync" + + "galaxy/mail/internal/config" +) + +// Component is a long-lived Mail Service subsystem that participates in +// coordinated startup and graceful shutdown. +type Component interface { + // Run starts the component and blocks until it stops. + Run(context.Context) error + + // Shutdown stops the component within the provided timeout-bounded context. + Shutdown(context.Context) error +} + +// App owns the process-level lifecycle of Mail Service and its registered +// components. 
+type App struct { + cfg config.Config + components []Component +} + +// New constructs App with a defensive copy of the supplied components. +func New(cfg config.Config, components ...Component) *App { + clonedComponents := append([]Component(nil), components...) + + return &App{ + cfg: cfg, + components: clonedComponents, + } +} + +// Run starts all configured components, waits for cancellation or the first +// component failure, and then executes best-effort graceful shutdown. +func (app *App) Run(ctx context.Context) error { + if ctx == nil { + return errors.New("run mail app: nil context") + } + if err := app.validate(); err != nil { + return err + } + if len(app.components) == 0 { + <-ctx.Done() + return nil + } + + runCtx, cancel := context.WithCancel(ctx) + defer cancel() + + results := make(chan componentResult, len(app.components)) + var runWaitGroup sync.WaitGroup + + for index, component := range app.components { + runWaitGroup.Add(1) + + go func(componentIndex int, component Component) { + defer runWaitGroup.Done() + results <- componentResult{ + index: componentIndex, + err: component.Run(runCtx), + } + }(index, component) + } + + var runErr error + + select { + case <-ctx.Done(): + case result := <-results: + runErr = classifyComponentResult(ctx, result) + } + + cancel() + + shutdownErr := app.shutdownComponents() + waitErr := app.waitForComponents(&runWaitGroup) + + return errors.Join(runErr, shutdownErr, waitErr) +} + +type componentResult struct { + index int + err error +} + +func (app *App) validate() error { + if app.cfg.ShutdownTimeout <= 0 { + return fmt.Errorf("run mail app: shutdown timeout must be positive, got %s", app.cfg.ShutdownTimeout) + } + + for index, component := range app.components { + if component == nil { + return fmt.Errorf("run mail app: component %d is nil", index) + } + } + + return nil +} + +func classifyComponentResult(parentCtx context.Context, result componentResult) error { + switch { + case result.err == nil: + if 
parentCtx.Err() != nil {
+			return nil
+		}
+		return fmt.Errorf("run mail app: component %d exited without error before shutdown", result.index)
+	case errors.Is(result.err, context.Canceled) && parentCtx.Err() != nil:
+		return nil
+	default:
+		return fmt.Errorf("run mail app: component %d: %w", result.index, result.err)
+	}
+}
+
+func (app *App) shutdownComponents() error {
+	var shutdownWaitGroup sync.WaitGroup
+	errs := make(chan error, len(app.components))
+
+	for index, component := range app.components {
+		shutdownWaitGroup.Add(1)
+
+		go func(componentIndex int, component Component) {
+			defer shutdownWaitGroup.Done()
+
+			shutdownCtx, cancel := context.WithTimeout(context.Background(), app.cfg.ShutdownTimeout)
+			defer cancel()
+
+			if err := component.Shutdown(shutdownCtx); err != nil {
+				errs <- fmt.Errorf("shutdown mail component %d: %w", componentIndex, err)
+			}
+		}(index, component)
+	}
+
+	shutdownWaitGroup.Wait()
+	close(errs)
+
+	var joined error
+	for err := range errs {
+		joined = errors.Join(joined, err)
+	}
+
+	return joined
+}
+
+func (app *App) waitForComponents(runWaitGroup *sync.WaitGroup) error {
+	done := make(chan struct{})
+	go func() {
+		runWaitGroup.Wait()
+		close(done)
+	}()
+
+	waitCtx, cancel := context.WithTimeout(context.Background(), app.cfg.ShutdownTimeout)
+	defer cancel()
+
+	select {
+	case <-done:
+		return nil
+	case <-waitCtx.Done():
+		return fmt.Errorf("wait for mail components: %w", waitCtx.Err())
+	}
+}
diff --git a/mail/internal/app/app_test.go b/mail/internal/app/app_test.go
new file mode 100644
index 0000000..aa78715
--- /dev/null
+++ b/mail/internal/app/app_test.go
@@ -0,0 +1,85 @@
+package app
+
+import (
+	"context"
+	"sync"
+	"testing"
+	"time"
+
+	"galaxy/mail/internal/config"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestAppRunStopsComponentsOnContextCancellation(t *testing.T) {
+	t.Parallel()
+
+	component := &blockingComponent{}
+	app := New(config.Config{ShutdownTimeout: time.Second}, component)
+
+	ctx, cancel := context.WithCancel(context.Background())
+	done := make(chan error, 1)
+	go func() {
+		done <- app.Run(ctx)
+	}()
+
+	require.Eventually(t, func() bool {
+		component.mu.Lock()
+		defer component.mu.Unlock()
+		return component.runStarted
+	}, time.Second, 10*time.Millisecond)
+
+	cancel()
+
+	require.Eventually(t, func() bool {
+		select {
+		case err := <-done:
+			return err == nil
+		default:
+			return false
+		}
+	}, time.Second, 10*time.Millisecond)
+	require.Equal(t, 1, component.shutdownCalls)
+}
+
+func TestAppRunReportsEarlyComponentExit(t *testing.T) {
+	t.Parallel()
+
+	app := New(config.Config{ShutdownTimeout: time.Second}, componentFunc(func(context.Context) error {
+		return nil
+	}))
+
+	err := app.Run(context.Background())
+	require.Error(t, err)
+	require.Contains(t, err.Error(), "exited without error before shutdown")
+}
+
+type blockingComponent struct {
+	mu            sync.Mutex
+	runStarted    bool
+	shutdownCalls int
+}
+
+func (component *blockingComponent) Run(ctx context.Context) error {
+	component.mu.Lock()
+	component.runStarted = true
+	component.mu.Unlock()
+
+	<-ctx.Done()
+	return ctx.Err()
+}
+
+func (component *blockingComponent) Shutdown(context.Context) error {
+	component.shutdownCalls++
+	return nil
+}
+
+type componentFunc func(context.Context) error
+
+func (fn componentFunc) Run(ctx context.Context) error {
+	return fn(ctx)
+}
+
+func (fn componentFunc) Shutdown(context.Context) error {
+	return nil
+}
diff --git a/mail/internal/app/bootstrap.go b/mail/internal/app/bootstrap.go
new file mode 100644
index 0000000..9b2db42
--- /dev/null
+++ b/mail/internal/app/bootstrap.go
@@ -0,0 +1,112 @@
+package app
+
+import (
+	"context"
+	"fmt"
+	"log/slog"
+
+	"galaxy/mail/internal/adapters/smtp"
+	"galaxy/mail/internal/adapters/stubprovider"
+	templatedir "galaxy/mail/internal/adapters/templates"
+	"galaxy/mail/internal/config"
+	"galaxy/mail/internal/ports"
+	"galaxy/mail/internal/telemetry"
+
+	"github.com/redis/go-redis/extra/redisotel/v9"
+	"github.com/redis/go-redis/v9"
+)
+
+func newRedisClient(cfg config.RedisConfig) *redis.Client {
+	return redis.NewClient(&redis.Options{
+		Addr:         cfg.Addr,
+		Username:     cfg.Username,
+		Password:     cfg.Password,
+		DB:           cfg.DB,
+		TLSConfig:    cfg.TLSConfig(),
+		DialTimeout:  cfg.OperationTimeout,
+		ReadTimeout:  cfg.OperationTimeout,
+		WriteTimeout: cfg.OperationTimeout,
+	})
+}
+
+func instrumentRedisClient(client *redis.Client, telemetryRuntime *telemetry.Runtime) error {
+	if client == nil {
+		return fmt.Errorf("instrument redis client: nil client")
+	}
+	if telemetryRuntime == nil {
+		return nil
+	}
+
+	if err := redisotel.InstrumentTracing(
+		client,
+		redisotel.WithTracerProvider(telemetryRuntime.TracerProvider()),
+		redisotel.WithDBStatement(false),
+	); err != nil {
+		return fmt.Errorf("instrument redis client tracing: %w", err)
+	}
+	if err := redisotel.InstrumentMetrics(
+		client,
+		redisotel.WithMeterProvider(telemetryRuntime.MeterProvider()),
+	); err != nil {
+		return fmt.Errorf("instrument redis client metrics: %w", err)
+	}
+
+	return nil
+}
+
+func pingRedis(ctx context.Context, cfg config.RedisConfig, client *redis.Client) error {
+	if client == nil {
+		return fmt.Errorf("ping redis: nil client")
+	}
+
+	pingCtx, cancel := context.WithTimeout(ctx, cfg.OperationTimeout)
+	defer cancel()
+
+	if err := client.Ping(pingCtx).Err(); err != nil {
+		return fmt.Errorf("ping redis: %w", err)
+	}
+
+	return nil
+}
+
+func newTemplateCatalog(cfg config.TemplateConfig) (*templatedir.Catalog, error) {
+	catalog, err := templatedir.NewCatalog(cfg.Dir)
+	if err != nil {
+		return nil, fmt.Errorf("new template catalog: %w", err)
+	}
+
+	return catalog, nil
+}
+
+func newProvider(cfg config.SMTPConfig, logger *slog.Logger) (ports.Provider, error) {
+	if logger == nil {
+		logger = slog.Default()
+	}
+
+	switch cfg.Mode {
+	case config.SMTPModeStub:
+		provider, err := stubprovider.New()
+		if err != nil {
+			return nil, fmt.Errorf("new stub provider: %w", err)
+		}
+		logger.Info("mail provider configured", "mode", cfg.Mode)
+		return provider, nil
+	case config.SMTPModeSMTP:
+		provider, err := smtp.New(smtp.Config{
+			Addr:               cfg.Addr,
+			Username:           cfg.Username,
+			Password:           cfg.Password,
+			FromEmail:          cfg.FromEmail,
+			FromName:           cfg.FromName,
+			Timeout:            cfg.Timeout,
+			InsecureSkipVerify: cfg.InsecureSkipVerify,
+		})
+		if err != nil {
+			return nil, fmt.Errorf("new smtp provider: %w", err)
+		}
+		logger.Info("mail provider configured", "mode", cfg.Mode, "addr", cfg.Addr)
+		return provider, nil
+	default:
+		return nil, fmt.Errorf("new provider: unsupported mode %q", cfg.Mode)
+	}
+}
diff --git a/mail/internal/app/bootstrap_test.go b/mail/internal/app/bootstrap_test.go
new file mode 100644
index 0000000..8677903
--- /dev/null
+++ b/mail/internal/app/bootstrap_test.go
@@ -0,0 +1,53 @@
+package app
+
+import (
+	"io"
+	"log/slog"
+	"testing"
+	"time"
+
+	"galaxy/mail/internal/config"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestNewProviderBuildsStubProvider(t *testing.T) {
+	t.Parallel()
+
+	provider, err := newProvider(config.SMTPConfig{
+		Mode: config.SMTPModeStub,
+	}, bootstrapTestLogger())
+	require.NoError(t, err)
+	require.NoError(t, provider.Close())
+}
+
+func TestNewProviderBuildsSMTPProvider(t *testing.T) {
+	t.Parallel()
+
+	provider, err := newProvider(config.SMTPConfig{
+		Mode:      config.SMTPModeSMTP,
+		Addr:      "127.0.0.1:2525",
+		FromEmail: "noreply@example.com",
+		Timeout:   15 * time.Second,
+	}, bootstrapTestLogger())
+	require.NoError(t, err)
+	require.NoError(t, provider.Close())
+}
+
+func TestNewProviderRejectsInvalidSMTPAuthPair(t *testing.T) {
+	t.Parallel()
+
+	_, err := newProvider(config.SMTPConfig{
+		Mode:      config.SMTPModeSMTP,
+		Addr:      "127.0.0.1:2525",
+		Username:  "mailer",
+		FromEmail: "noreply@example.com",
+		Timeout:   15 * time.Second,
+	}, bootstrapTestLogger())
+	require.Error(t, err)
+	require.Contains(t, err.Error(), "smtp username and password")
+}
+
+func bootstrapTestLogger() *slog.Logger {
+	return slog.New(slog.NewJSONHandler(io.Discard, nil))
+}
diff --git a/mail/internal/app/runtime.go b/mail/internal/app/runtime.go
new file mode 100644
index 0000000..1c75b23
--- /dev/null
+++ b/mail/internal/app/runtime.go
@@ -0,0 +1,370 @@
+package app
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"log/slog"
+	"time"
+
+	"galaxy/mail/internal/adapters/id"
+	"galaxy/mail/internal/adapters/redisstate"
+	templatedir "galaxy/mail/internal/adapters/templates"
+	"galaxy/mail/internal/api/internalhttp"
+	"galaxy/mail/internal/config"
+	"galaxy/mail/internal/ports"
+	"galaxy/mail/internal/service/acceptauthdelivery"
+	"galaxy/mail/internal/service/acceptgenericdelivery"
+	"galaxy/mail/internal/service/executeattempt"
+	"galaxy/mail/internal/service/getdelivery"
+	"galaxy/mail/internal/service/listattempts"
+	"galaxy/mail/internal/service/listdeliveries"
+	"galaxy/mail/internal/service/renderdelivery"
+	"galaxy/mail/internal/service/resenddelivery"
+	"galaxy/mail/internal/telemetry"
+	"galaxy/mail/internal/worker"
+
+	"github.com/redis/go-redis/v9"
+)
+
+// Runtime owns the runnable Mail Service process plus the cleanup functions
+// that release runtime resources after shutdown.
+type Runtime struct {
+	cfg config.Config
+
+	app *App
+
+	templateCatalog       *templatedir.Catalog
+	renderDeliveryService *renderdelivery.Service
+
+	cleanupFns []func() error
+}
+
+type runtimeClock interface {
+	Now() time.Time
+}
+
+type runtimeProviderFactory func(config.SMTPConfig, *slog.Logger) (ports.Provider, error)
+
+type runtimeDependencies struct {
+	clock             runtimeClock
+	providerFactory   runtimeProviderFactory
+	schedulerPoll     time.Duration
+	schedulerRecovery time.Duration
+	schedulerGrace    time.Duration
+}
+
+func (deps runtimeDependencies) withDefaults() runtimeDependencies {
+	if deps.clock == nil {
+		deps.clock = systemClock{}
+	}
+	if deps.providerFactory == nil {
+		deps.providerFactory = newProvider
+	}
+
+	return deps
+}
+
+// NewRuntime constructs the runnable Mail Service process from cfg.
+func NewRuntime(ctx context.Context, cfg config.Config, logger *slog.Logger) (*Runtime, error) {
+	return newRuntime(ctx, cfg, logger, runtimeDependencies{})
+}
+
+func newRuntime(ctx context.Context, cfg config.Config, logger *slog.Logger, deps runtimeDependencies) (*Runtime, error) {
+	if ctx == nil {
+		return nil, fmt.Errorf("new mail runtime: nil context")
+	}
+	if err := cfg.Validate(); err != nil {
+		return nil, fmt.Errorf("new mail runtime: %w", err)
+	}
+	if logger == nil {
+		logger = slog.Default()
+	}
+	deps = deps.withDefaults()
+
+	runtime := &Runtime{
+		cfg: cfg,
+	}
+
+	cleanupOnError := func(err error) (*Runtime, error) {
+		if cleanupErr := runtime.Close(); cleanupErr != nil {
+			return nil, fmt.Errorf("%w; cleanup: %w", err, cleanupErr)
+		}
+
+		return nil, err
+	}
+
+	telemetryRuntime, err := telemetry.NewProcess(ctx, telemetry.ProcessConfig{
+		ServiceName:          cfg.Telemetry.ServiceName,
+		TracesExporter:       cfg.Telemetry.TracesExporter,
+		MetricsExporter:      cfg.Telemetry.MetricsExporter,
+		TracesProtocol:       cfg.Telemetry.TracesProtocol,
+		MetricsProtocol:      cfg.Telemetry.MetricsProtocol,
+		StdoutTracesEnabled:  cfg.Telemetry.StdoutTracesEnabled,
+		StdoutMetricsEnabled: cfg.Telemetry.StdoutMetricsEnabled,
+	}, logger)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: telemetry: %w", err))
+	}
+	runtime.cleanupFns = append(runtime.cleanupFns, func() error {
+		shutdownCtx, cancel := context.WithTimeout(context.Background(), cfg.ShutdownTimeout)
+		defer cancel()
+		return telemetryRuntime.Shutdown(shutdownCtx)
+	})
+
+	redisClient := newRedisClient(cfg.Redis)
+	if err := instrumentRedisClient(redisClient, telemetryRuntime); err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: %w", err))
+	}
+	runtime.cleanupFns = append(runtime.cleanupFns, func() error {
+		return redisClient.Close()
+	})
+	if err := pingRedis(ctx, cfg.Redis, redisClient); err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: %w", err))
+	}
+
+	templateCatalog, err := newTemplateCatalog(cfg.Templates)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: %w", err))
+	}
+	runtime.templateCatalog = templateCatalog
+
+	provider, err := deps.providerFactory(cfg.SMTP, logger.With("component", "provider"))
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: %w", err))
+	}
+	runtime.cleanupFns = append(runtime.cleanupFns, provider.Close)
+
+	acceptanceStore, err := redisstate.NewAcceptanceStore(redisClient)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: auth acceptance store: %w", err))
+	}
+	authAcceptanceService, err := acceptauthdelivery.New(acceptauthdelivery.Config{
+		Store:               acceptanceStore,
+		DeliveryIDGenerator: id.Generator{},
+		Clock:               deps.clock,
+		Telemetry:           telemetryRuntime,
+		TracerProvider:      telemetryRuntime.TracerProvider(),
+		Logger:              logger,
+		IdempotencyTTL:      redisstate.IdempotencyTTL,
+		SuppressOutbound:    cfg.SMTP.Mode == config.SMTPModeStub,
+	})
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: auth acceptance service: %w", err))
+	}
+
+	genericAcceptanceStore, err := redisstate.NewGenericAcceptanceStore(redisClient)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: generic acceptance store: %w", err))
+	}
+	genericAcceptanceService, err := acceptgenericdelivery.New(acceptgenericdelivery.Config{
+		Store:          genericAcceptanceStore,
+		Clock:          deps.clock,
+		Telemetry:      telemetryRuntime,
+		TracerProvider: telemetryRuntime.TracerProvider(),
+		Logger:         logger,
+		IdempotencyTTL: redisstate.IdempotencyTTL,
+	})
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: generic acceptance service: %w", err))
+	}
+
+	renderStore, err := redisstate.NewRenderStore(redisClient)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: render store: %w", err))
+	}
+	renderDeliveryService, err := renderdelivery.New(renderdelivery.Config{
+		Catalog:        templateCatalog,
+		Store:          renderStore,
+		Clock:          deps.clock,
+		Telemetry:      telemetryRuntime,
+		TracerProvider: telemetryRuntime.TracerProvider(),
+		Logger:         logger,
+	})
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: render delivery service: %w", err))
+	}
+	runtime.renderDeliveryService = renderDeliveryService
+
+	malformedCommandStore, err := redisstate.NewMalformedCommandStore(redisClient)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: malformed command store: %w", err))
+	}
+	streamOffsetStore, err := redisstate.NewStreamOffsetStore(redisClient)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: stream offset store: %w", err))
+	}
+	attemptExecutionStore, err := redisstate.NewAttemptExecutionStore(redisClient)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: attempt execution store: %w", err))
+	}
+	telemetryRuntime.SetAttemptScheduleSnapshotReader(attemptExecutionStore)
+	operatorStore, err := redisstate.NewOperatorStore(redisClient)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: operator store: %w", err))
+	}
+	attemptExecutionService, err := executeattempt.New(executeattempt.Config{
+		Renderer:       renderDeliveryService,
+		Provider:       provider,
+		PayloadLoader:  attemptExecutionStore,
+		Store:          attemptExecutionStore,
+		Clock:          deps.clock,
+		Telemetry:      telemetryRuntime,
+		TracerProvider: telemetryRuntime.TracerProvider(),
+		Logger:         logger,
+		AttemptTimeout: cfg.SMTP.Timeout,
+	})
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: attempt execution service: %w", err))
+	}
+	listDeliveriesService, err := listdeliveries.New(listdeliveries.Config{
+		Store: operatorStore,
+	})
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: list deliveries service: %w", err))
+	}
+	getDeliveryService, err := getdelivery.New(getdelivery.Config{
+		Store: operatorStore,
+	})
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: get delivery service: %w", err))
+	}
+	listAttemptsService, err := listattempts.New(listattempts.Config{
+		Store: operatorStore,
+	})
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: list attempts service: %w", err))
+	}
+	resendDeliveryService, err := resenddelivery.New(resenddelivery.Config{
+		Store:               operatorStore,
+		DeliveryIDGenerator: id.Generator{},
+		Clock:               deps.clock,
+		Telemetry:           telemetryRuntime,
+		TracerProvider:      telemetryRuntime.TracerProvider(),
+		Logger:              logger,
+	})
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: resend delivery service: %w", err))
+	}
+
+	commandConsumerRedisClient := newRedisClient(cfg.Redis)
+	if err := instrumentRedisClient(commandConsumerRedisClient, telemetryRuntime); err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: %w", err))
+	}
+	runtime.cleanupFns = append(runtime.cleanupFns, func() error {
+		err := commandConsumerRedisClient.Close()
+		if errors.Is(err, redis.ErrClosed) {
+			return nil
+		}
+		return err
+	})
+	if err := pingRedis(ctx, cfg.Redis, commandConsumerRedisClient); err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: %w", err))
+	}
+
+	httpServer, err := internalhttp.NewServer(internalhttp.Config{
+		Addr:              cfg.InternalHTTP.Addr,
+		ReadHeaderTimeout: cfg.InternalHTTP.ReadHeaderTimeout,
+		ReadTimeout:       cfg.InternalHTTP.ReadTimeout,
+		IdleTimeout:       cfg.InternalHTTP.IdleTimeout,
+	}, internalhttp.Dependencies{
+		Logger:                  logger,
+		Telemetry:               telemetryRuntime,
+		AcceptLoginCodeDelivery: authAcceptanceService,
+		ListDeliveries:          listDeliveriesService,
+		GetDelivery:             getDeliveryService,
+		ListAttempts:            listAttemptsService,
+		ResendDelivery:          resendDeliveryService,
+		OperatorRequestTimeout:  cfg.OperatorRequestTimeout,
+	})
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: internal HTTP server: %w", err))
+	}
+
+	commandConsumer, err := worker.NewCommandConsumer(worker.CommandConsumerConfig{
+		Client:            commandConsumerRedisClient,
+		Stream:            cfg.Redis.CommandStream,
+		BlockTimeout:      cfg.StreamBlockTimeout,
+		Acceptor:          genericAcceptanceService,
+		MalformedRecorder: malformedCommandStore,
+		OffsetStore:       streamOffsetStore,
+		Telemetry:         telemetryRuntime,
+		Clock:             deps.clock,
+	}, logger)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: command consumer: %w", err))
+	}
+	attemptWorkQueue := make(chan executeattempt.WorkItem, cfg.AttemptWorkerConcurrency)
+	scheduler, err := worker.NewScheduler(worker.SchedulerConfig{
+		Store:            attemptExecutionStore,
+		Service:          attemptExecutionService,
+		WorkQueue:        attemptWorkQueue,
+		Clock:            deps.clock,
+		AttemptTimeout:   cfg.SMTP.Timeout,
+		Telemetry:        telemetryRuntime,
+		PollInterval:     deps.schedulerPoll,
+		RecoveryInterval: deps.schedulerRecovery,
+		RecoveryGrace:    deps.schedulerGrace,
+	}, logger)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: scheduler: %w", err))
+	}
+	attemptWorkers, err := worker.NewAttemptWorkerPool(worker.AttemptWorkerPoolConfig{
+		Concurrency: cfg.AttemptWorkerConcurrency,
+		WorkQueue:   attemptWorkQueue,
+		Service:     attemptExecutionService,
+	}, logger)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: attempt worker pool: %w", err))
+	}
+	indexCleaner, err := redisstate.NewIndexCleaner(redisClient)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: cleanup index cleaner: %w", err))
+	}
+	cleanupWorker, err := worker.NewCleanupWorker(indexCleaner, logger)
+	if err != nil {
+		return cleanupOnError(fmt.Errorf("new mail runtime: cleanup worker: %w", err))
+	}
+
+	runtime.app = New(cfg, httpServer, commandConsumer, scheduler, attemptWorkers, cleanupWorker)
+
+	return runtime, nil
+}
+
+type systemClock struct{}
+
+func (systemClock) Now() time.Time {
+	return time.Now()
+}
+
+// Run serves the internal HTTP listener and background workers until ctx is
+// canceled or one component fails.
+func (runtime *Runtime) Run(ctx context.Context) error {
+	if ctx == nil {
+		return errors.New("run mail runtime: nil context")
+	}
+	if runtime == nil {
+		return errors.New("run mail runtime: nil runtime")
+	}
+	if runtime.app == nil {
+		return errors.New("run mail runtime: nil app")
+	}
+
+	return runtime.app.Run(ctx)
+}
+
+// Close releases every runtime dependency in reverse construction order.
+func (runtime *Runtime) Close() error {
+	if runtime == nil {
+		return nil
+	}
+
+	var joined error
+	for index := len(runtime.cleanupFns) - 1; index >= 0; index-- {
+		if err := runtime.cleanupFns[index](); err != nil {
+			joined = errors.Join(joined, err)
+		}
+	}
+
+	return joined
+}
diff --git a/mail/internal/app/runtime_smoke_test.go b/mail/internal/app/runtime_smoke_test.go
new file mode 100644
index 0000000..5249be1
--- /dev/null
+++ b/mail/internal/app/runtime_smoke_test.go
@@ -0,0 +1,262 @@
+package app
+
+import (
+	"context"
+	"crypto/rand"
+	"crypto/rsa"
+	"crypto/tls"
+	"crypto/x509"
+	"crypto/x509/pkix"
+	"encoding/pem"
+	"io"
+	"log/slog"
+	"math/big"
+	"net"
+	"net/http"
+	"net/url"
+	"os"
+	"path/filepath"
+	"strings"
+	"testing"
+	"time"
+
+	smtpadapter "galaxy/mail/internal/adapters/smtp"
+	"galaxy/mail/internal/config"
+	"galaxy/mail/internal/ports"
+
+	"github.com/stretchr/testify/require"
+	testcontainers "github.com/testcontainers/testcontainers-go"
+	rediscontainer "github.com/testcontainers/testcontainers-go/modules/redis"
+	"github.com/testcontainers/testcontainers-go/wait"
+)
+
+const (
+	realRuntimeSmokeEnv     = "MAIL_REAL_RUNTIME_SMOKE"
+	realRuntimeRedisImage   = "redis:7"
+	realRuntimeMailpitImage = "axllent/mailpit:v1.28.2"
+	realRuntimeMailpitCert  = "/tmp/mailpit/server.crt"
+	realRuntimeMailpitKey   = "/tmp/mailpit/server.key"
+)
+
+func TestRealRuntimeCompatibility(t *testing.T) {
+	if os.Getenv(realRuntimeSmokeEnv) != "1" {
+		t.Skipf("set %s=1 to run the real runtime smoke suite", realRuntimeSmokeEnv)
+	}
+
+	ctx := context.Background()
+
+	redisContainer, err := rediscontainer.Run(ctx, realRuntimeRedisImage)
+	require.NoError(t, err)
+	testcontainers.CleanupContainer(t, redisContainer)
+
+	redisAddr, err := redisContainer.Endpoint(ctx, "")
+	require.NoError(t, err)
+
+	certFiles := writeMailpitTLSFiles(t)
+	mailpitContainer, err := testcontainers.Run(
+		ctx,
+		realRuntimeMailpitImage,
+		testcontainers.WithExposedPorts("1025/tcp", "8025/tcp"),
+		testcontainers.WithFiles(
+			testcontainers.ContainerFile{
+				HostFilePath:      certFiles.certPath,
+				ContainerFilePath: realRuntimeMailpitCert,
+				FileMode:          0o644,
+			},
+			testcontainers.ContainerFile{
+				HostFilePath:      certFiles.keyPath,
+				ContainerFilePath: realRuntimeMailpitKey,
+				FileMode:          0o600,
+			},
+		),
+		testcontainers.WithEnv(map[string]string{
+			"MP_SMTP_TLS_CERT":         realRuntimeMailpitCert,
+			"MP_SMTP_TLS_KEY":          realRuntimeMailpitKey,
+			"MP_SMTP_REQUIRE_STARTTLS": "true",
+		}),
+		testcontainers.WithWaitStrategy(
+			wait.ForAll(
+				wait.ForListeningPort("1025/tcp"),
+				wait.ForListeningPort("8025/tcp"),
+			).WithDeadline(30*time.Second),
+		),
+	)
+	require.NoError(t, err)
+	testcontainers.CleanupContainer(t, mailpitContainer)
+
+	smtpAddr, err := mailpitContainer.PortEndpoint(ctx, "1025/tcp", "")
+	require.NoError(t, err)
+	mailpitHTTPBaseURL, err := mailpitContainer.PortEndpoint(ctx, "8025/tcp", "http")
+	require.NoError(t, err)
+
+	cfg := config.DefaultConfig()
+	cfg.Redis.Addr = redisAddr
+	cfg.Templates.Dir = writeRuntimeTemplates(t)
+	cfg.InternalHTTP.Addr = mustFreeAddr(t)
+	cfg.ShutdownTimeout = time.Second
+	cfg.StreamBlockTimeout = 100 * time.Millisecond
+	cfg.AttemptWorkerConcurrency = 1
+	cfg.OperatorRequestTimeout = time.Second
+	cfg.SMTP.Mode = config.SMTPModeSMTP
+	cfg.SMTP.Addr = smtpAddr
+	cfg.SMTP.FromEmail = "noreply@example.com"
+	cfg.SMTP.Timeout = 2 * time.Second
+
+	instance := startSmokeRuntime(t, cfg, runtimeDependencies{
+		providerFactory: func(cfg config.SMTPConfig, _ *slog.Logger) (ports.Provider, error) {
+			return smtpadapter.New(smtpadapter.Config{
+				Addr:      cfg.Addr,
+				FromEmail: cfg.FromEmail,
+				FromName:  cfg.FromName,
+				Timeout:   cfg.Timeout,
+				TLSConfig: certFiles.clientTLSConfig,
+			})
+		},
+		schedulerPoll: 25 * time.Millisecond,
+	})
+
+	response := postLoginCodeDelivery(t, instance.baseURL, loginCodeDeliveryRequest{
+		idempotencyKey: "real-runtime-smoke",
+		email:          "pilot@example.com",
+		code:           "246810",
+		locale:         "fr-FR",
+	})
+	require.Equal(t, "sent", string(response.Outcome))
+
+	list := eventuallyListDeliveries(t, instance.baseURL, url.Values{
+		"source":          []string{"authsession"},
+		"idempotency_key": []string{"real-runtime-smoke"},
+	})
+	require.Len(t, list.Items, 1)
+
+	detail := eventuallyDeliveryStatus(t, instance.baseURL, list.Items[0].DeliveryID, "sent")
+	require.Equal(t, "authsession", detail.Source)
+	require.Equal(t, "auth.login_code", detail.TemplateID)
+	require.Equal(t, "fr-FR", detail.Locale)
+	require.True(t, detail.LocaleFallbackUsed)
+	require.Equal(t, []string{"pilot@example.com"}, detail.To)
+
+	attempts := getDeliveryAttempts(t, instance.baseURL, detail.DeliveryID)
+	require.Len(t, attempts.Items, 1)
+	require.Equal(t, "provider_accepted", attempts.Items[0].Status)
+
+	messageText := waitForMailpitLatestText(t, mailpitHTTPBaseURL)
+	require.Contains(t, messageText, "246810")
+}
+
+type smokeTLSFiles struct {
+	certPath        string
+	keyPath         string
+	clientTLSConfig *tls.Config
+}
+
+func writeMailpitTLSFiles(t *testing.T) smokeTLSFiles {
+	t.Helper()
+
+	privateKey, err := rsa.GenerateKey(rand.Reader, 2048)
+	require.NoError(t, err)
+
+	template := x509.Certificate{
+		SerialNumber: big.NewInt(1),
+		Subject: pkix.Name{
+			CommonName: "localhost",
+		},
+		NotBefore:             time.Now().Add(-time.Hour),
+		NotAfter:              time.Now().Add(time.Hour),
+		KeyUsage:              x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
+		ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
+		BasicConstraintsValid: true,
+		DNSNames:              []string{"localhost"},
+		IPAddresses:           []net.IP{net.ParseIP("127.0.0.1")},
+	}
+
+	der, err := x509.CreateCertificate(rand.Reader, &template, &template, &privateKey.PublicKey, privateKey)
+	require.NoError(t, err)
+
+	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
+	keyPEM := pem.EncodeToMemory(&pem.Block{
+		Type:  "RSA PRIVATE KEY",
+		Bytes: x509.MarshalPKCS1PrivateKey(privateKey),
+	})
+
+	root := t.TempDir()
+	certPath := filepath.Join(root, "server.crt")
+	keyPath := filepath.Join(root, "server.key")
+	require.NoError(t, os.WriteFile(certPath, certPEM, 0o644))
+	require.NoError(t, os.WriteFile(keyPath, keyPEM, 0o600))
+
+	rootCAs := x509.NewCertPool()
+	require.True(t, rootCAs.AppendCertsFromPEM(certPEM))
+
+	return smokeTLSFiles{
+		certPath: certPath,
+		keyPath:  keyPath,
+		clientTLSConfig: &tls.Config{
+			MinVersion: tls.VersionTLS12,
+			RootCAs:    rootCAs,
+			ServerName: "localhost",
+		},
+	}
+}
+
+func startSmokeRuntime(t *testing.T, cfg config.Config, deps runtimeDependencies) *runtimeInstance {
+	t.Helper()
+
+	runtime, err := newRuntime(context.Background(), cfg, testLogger(), deps)
+	require.NoError(t, err)
+
+	instance := &runtimeInstance{
+		baseURL: "http://" + cfg.InternalHTTP.Addr,
+		runtime: runtime,
+		done:    make(chan error, 1),
+	}
+
+	runCtx, cancel := context.WithCancel(context.Background())
+	instance.cancel = cancel
+	go func() {
+		instance.done <- runtime.Run(runCtx)
+	}()
+
+	waitForRuntimeReady(t, instance.baseURL)
+	t.Cleanup(func() {
+		instance.stop(t)
+	})
+
+	return instance
+}
+
+func waitForMailpitLatestText(t *testing.T, baseURL string) string {
+	t.Helper()
+
+	client := &http.Client{
+		Timeout: 500 * time.Millisecond,
+		Transport: &http.Transport{
+			DisableKeepAlives: true,
+		},
+	}
+	t.Cleanup(client.CloseIdleConnections)
+
+	var payload string
+	require.Eventually(t, func() bool {
+		request, err := http.NewRequest(http.MethodGet, baseURL+"/view/latest.txt", nil)
+		require.NoError(t, err)
+
+		response, err := client.Do(request)
+		if err != nil {
+			return false
+		}
+		defer response.Body.Close()
+
+		body, err := io.ReadAll(response.Body)
+		require.NoError(t, err)
+
+		if response.StatusCode != http.StatusOK {
+			return false
+		}
+
+		payload = string(body)
+		return strings.TrimSpace(payload) != ""
+	}, 20*time.Second, 100*time.Millisecond)
+
+	return payload
+}
diff --git a/mail/internal/app/runtime_stage14_test.go b/mail/internal/app/runtime_stage14_test.go
new file mode 100644
index 0000000..8cdac2d
--- /dev/null
+++ b/mail/internal/app/runtime_stage14_test.go
@@ -0,0 +1,709 @@
+package app
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"io"
+	"log/slog"
+	"net/http"
+	"net/url"
+	"os"
+	"path/filepath"
+	"strconv"
+	"sync"
+	"testing"
+	"time"
+
+	"galaxy/mail/internal/adapters/stubprovider"
+	"galaxy/mail/internal/api/internalhttp"
+	"galaxy/mail/internal/api/streamcommand"
+	"galaxy/mail/internal/config"
+	"galaxy/mail/internal/ports"
+
+	"github.com/alicebob/miniredis/v2"
+	"github.com/redis/go-redis/v9"
+	"github.com/stretchr/testify/require"
+)
+
+func TestRuntimeAuthDeliverySentWithLocaleFallbackAndDuplicateIdempotency(t *testing.T) {
+	t.Parallel()
+
+	env := newRuntimeTestEnvironment(t)
+	clock := newRuntimeTestClock(runtimeClockStart())
+	instance := env.start(t, runtimeInstanceOptions{
+		clock:    clock,
+		smtpMode: config.SMTPModeSMTP,
+		scriptedOutcomes: []stubprovider.ScriptedOutcome{
+			{Classification: ports.ClassificationAccepted, Script: "accepted"},
+		},
+	})
+
+	first := postLoginCodeDelivery(t, instance.baseURL, loginCodeDeliveryRequest{
+		idempotencyKey: "challenge-1",
+		email:          "pilot@example.com",
+		code:           "123456",
+		locale:         "fr-FR",
+	})
+	require.Equal(t, internalhttp.LoginCodeDeliveryOutcomeSent, first.Outcome)
+
+	second := postLoginCodeDelivery(t, instance.baseURL, loginCodeDeliveryRequest{
+		idempotencyKey: "challenge-1",
+		email:          "pilot@example.com",
+		code:           "123456",
+		locale:         "fr-FR",
+	})
+	require.Equal(t, internalhttp.LoginCodeDeliveryOutcomeSent, second.Outcome)
+
+	list := eventuallyListDeliveries(t, instance.baseURL, url.Values{
+		"source":          []string{"authsession"},
+		"idempotency_key": []string{"challenge-1"},
+	})
+	require.Len(t, list.Items, 1)
+
+	detail := eventuallyDeliveryStatus(t, instance.baseURL, list.Items[0].DeliveryID, "sent")
+	require.Equal(t, "authsession", detail.Source)
+	require.Equal(t, "auth.login_code", detail.TemplateID)
+	require.Equal(t, "fr-FR", detail.Locale)
+	require.True(t, detail.LocaleFallbackUsed)
+	require.Equal(t, "challenge-1", detail.IdempotencyKey)
+	require.Len(t, detail.To, 1)
+	require.Equal(t, "pilot@example.com", detail.To[0])
+
+	attempts := getDeliveryAttempts(t, instance.baseURL, detail.DeliveryID)
+	require.Len(t, attempts.Items, 1)
+	require.Equal(t, "provider_accepted", attempts.Items[0].Status)
+
+	require.Eventually(t, func() bool {
+		return len(instance.stubProvider.Inputs()) == 1
+	}, 5*time.Second, 20*time.Millisecond)
+
+	inputs := instance.stubProvider.Inputs()
+	require.Len(t, inputs, 1)
+	require.Equal(t, "Your login code", inputs[0].Content.Subject)
+	require.Contains(t, inputs[0].Content.TextBody, "123456")
+}
+
+func TestRuntimeAuthDeliverySuppressedInStubMode(t *testing.T) {
+	t.Parallel()
+
+	env := newRuntimeTestEnvironment(t)
+	clock := newRuntimeTestClock(runtimeClockStart())
+	instance := env.start(t, runtimeInstanceOptions{
+		clock:    clock,
+		smtpMode: config.SMTPModeStub,
+	})
+
+	response := postLoginCodeDelivery(t, instance.baseURL, loginCodeDeliveryRequest{
+		idempotencyKey: "challenge-suppressed",
+		email:          "pilot@example.com",
+		code:           "654321",
+		locale:         "en",
+	})
+	require.Equal(t, internalhttp.LoginCodeDeliveryOutcomeSuppressed, response.Outcome)
+
+	list := eventuallyListDeliveries(t, instance.baseURL, url.Values{
+		"source":          []string{"authsession"},
+		"idempotency_key": []string{"challenge-suppressed"},
+	})
+	require.Len(t, list.Items, 1)
+	require.Equal(t, "suppressed", list.Items[0].Status)
+
+	detail := getDelivery(t, instance.baseURL, list.Items[0].DeliveryID)
+	require.Equal(t, "suppressed", detail.Status)
+
+	attempts := getDeliveryAttempts(t, instance.baseURL, detail.DeliveryID)
+	require.Empty(t, attempts.Items)
+}
+
+func TestRuntimeGenericCommandAndOperatorRoutesSupportResendClone(t *testing.T) {
+	t.Parallel()
+
+	env := newRuntimeTestEnvironment(t)
+	clock := newRuntimeTestClock(runtimeClockStart())
+	instance := env.start(t, runtimeInstanceOptions{
+		clock:    clock,
+		smtpMode: config.SMTPModeSMTP,
+		scriptedOutcomes: []stubprovider.ScriptedOutcome{
+			{Classification: ports.ClassificationAccepted, Script: "original"},
+			{Classification: ports.ClassificationAccepted, Script: "resend"},
+		},
+	})
+
+	publishRenderedCommand(t, env.redisClient, "delivery-generic", "notification:delivery-generic", "Turn ready")
+
+	detail := eventuallyDeliveryStatus(t, instance.baseURL, "delivery-generic", "sent")
+	require.Equal(t, "notification", detail.Source)
+	require.Equal(t, "rendered", detail.PayloadMode)
+	require.Equal(t, "Turn ready", detail.Subject)
+
+	list := eventuallyListDeliveries(t, instance.baseURL, url.Values{
+		"source":             []string{"notification"},
+		"idempotency_key":    []string{"notification:delivery-generic"},
+		"status":             []string{"sent"},
+		"recipient":          []string{"pilot@example.com"},
+		"from_created_at_ms": []string{formatUnixMilli(clock.Now().Add(-time.Second))},
+	})
+	require.Len(t, list.Items, 1)
+	require.Equal(t, detail.DeliveryID, list.Items[0].DeliveryID)
+
+	attempts := getDeliveryAttempts(t, instance.baseURL, detail.DeliveryID)
+	require.Len(t, attempts.Items, 1)
+	require.Equal(t, "provider_accepted", attempts.Items[0].Status)
+
+	cloneID := resendDelivery(t, instance.baseURL, detail.DeliveryID)
+	clone := eventuallyDeliveryStatus(t, instance.baseURL, cloneID, "sent")
+	require.Equal(t, "operator_resend", clone.Source)
+	require.Equal(t, detail.DeliveryID, clone.ResendParentDeliveryID)
+
+	require.Eventually(t, func() bool {
+		return len(instance.stubProvider.Inputs()) == 2
+	}, 5*time.Second, 20*time.Millisecond)
+}
+
+func TestRuntimeRetriesTransientFailureUntilSuccess(t *testing.T) {
+	t.Parallel()
+
+	env := newRuntimeTestEnvironment(t)
+	clock := newRuntimeTestClock(runtimeClockStart())
+	instance := env.start(t, runtimeInstanceOptions{
+		clock:    clock,
+		smtpMode: config.SMTPModeSMTP,
+		scriptedOutcomes: []stubprovider.ScriptedOutcome{
+			{Classification: ports.ClassificationTransientFailure, Script: "retry-1"},
+			{Classification: ports.ClassificationAccepted, Script: "accepted"},
+		},
+	})
+
+	publishRenderedCommand(t, env.redisClient, "delivery-retry", "notification:delivery-retry", "Retry success")
+
+	require.Eventually(t, func() bool {
+		detail, found := tryGetDelivery(t, instance.baseURL, "delivery-retry")
+		if !found {
+			return false
+		}
+		return detail.Status == "queued" && detail.AttemptCount == 2
+	}, 5*time.Second, 20*time.Millisecond)
+
+	clock.Advance(time.Minute)
+
+	detail := eventuallyDeliveryStatus(t, instance.baseURL, "delivery-retry", "sent")
+	require.Equal(t, 2, detail.AttemptCount)
+
+	attempts := getDeliveryAttempts(t, instance.baseURL, detail.DeliveryID)
+	require.Len(t, attempts.Items, 2)
+	require.Equal(t, "transport_failed", attempts.Items[0].Status)
+	require.Equal(t, "provider_accepted", attempts.Items[1].Status)
+}
+
+func TestRuntimeMovesDeliveryToDeadLetterAfterRetryExhaustion(t *testing.T) {
+	t.Parallel()
+
+	env := newRuntimeTestEnvironment(t)
+	clock := newRuntimeTestClock(runtimeClockStart())
+	instance := env.start(t, runtimeInstanceOptions{
+		clock:    clock,
+		smtpMode: config.SMTPModeSMTP,
+		scriptedOutcomes: []stubprovider.ScriptedOutcome{
+			{Classification: ports.ClassificationTransientFailure, Script: "retry-1"},
+			{Classification: ports.ClassificationTransientFailure, Script: "retry-2"},
+			{Classification: ports.ClassificationTransientFailure, Script: "retry-3"},
+			{Classification: ports.ClassificationTransientFailure, Script: "retry-4"},
+		},
+	})
+
+	publishRenderedCommand(t, env.redisClient, "delivery-dead-letter", "notification:delivery-dead-letter", "Dead letter")
+
+	require.Eventually(t, func() bool {
+		detail, found := tryGetDelivery(t, instance.baseURL, "delivery-dead-letter")
+		if !found {
+			return false
+		}
+		return detail.Status == "queued" && detail.AttemptCount == 2
+	}, 5*time.Second, 20*time.Millisecond)
+
+	clock.Advance(time.Minute)
+	require.Eventually(t,
func() bool { + detail, found := tryGetDelivery(t, instance.baseURL, "delivery-dead-letter") + if !found { + return false + } + return detail.Status == "queued" && detail.AttemptCount == 3 + }, 5*time.Second, 20*time.Millisecond) + + clock.Advance(5 * time.Minute) + require.Eventually(t, func() bool { + detail, found := tryGetDelivery(t, instance.baseURL, "delivery-dead-letter") + if !found { + return false + } + return detail.Status == "queued" && detail.AttemptCount == 4 + }, 5*time.Second, 20*time.Millisecond) + + clock.Advance(30 * time.Minute) + detail := eventuallyDeliveryStatus(t, instance.baseURL, "delivery-dead-letter", "dead_letter") + require.NotNil(t, detail.DeadLetter) + require.Equal(t, "retry_exhausted", detail.DeadLetter.FailureClassification) +} + +func TestRuntimeRecoversPendingAttemptAfterGracefulShutdown(t *testing.T) { + t.Parallel() + + env := newRuntimeTestEnvironment(t) + clock := newRuntimeTestClock(runtimeClockStart()) + blocking := &blockingProvider{startedCh: make(chan struct{})} + first := env.start(t, runtimeInstanceOptions{ + clock: clock, + smtpMode: config.SMTPModeSMTP, + smtpTimeout: 20 * time.Millisecond, + providerFactory: func(config.SMTPConfig, *slog.Logger) (ports.Provider, error) { + return blocking, nil + }, + }) + + publishRenderedCommand(t, env.redisClient, "delivery-recover", "notification:delivery-recover", "Recover") + + require.Eventually(t, blocking.started, 5*time.Second, 20*time.Millisecond) + require.Eventually(t, func() bool { + detail, found := tryGetDelivery(t, first.baseURL, "delivery-recover") + if !found { + return false + } + return detail.Status == "sending" + }, 5*time.Second, 20*time.Millisecond) + + first.stop(t) + + clock.Advance(30 * time.Millisecond) + + second := env.start(t, runtimeInstanceOptions{ + clock: clock, + smtpMode: config.SMTPModeSMTP, + smtpTimeout: 20 * time.Millisecond, + scriptedOutcomes: []stubprovider.ScriptedOutcome{ + {Classification: ports.ClassificationAccepted, Script: 
"recovered"}, + }, + }) + + require.Eventually(t, func() bool { + detail, found := tryGetDelivery(t, second.baseURL, "delivery-recover") + if !found { + return false + } + return detail.Status == "queued" && detail.AttemptCount == 2 && detail.LastAttemptStatus == "timed_out" + }, 5*time.Second, 20*time.Millisecond) + + clock.Advance(time.Minute) + + detail := eventuallyDeliveryStatus(t, second.baseURL, "delivery-recover", "sent") + require.Equal(t, 2, detail.AttemptCount) + + attempts := getDeliveryAttempts(t, second.baseURL, detail.DeliveryID) + require.Len(t, attempts.Items, 2) + require.Equal(t, "timed_out", attempts.Items[0].Status) + require.Equal(t, "provider_accepted", attempts.Items[1].Status) +} + +type runtimeTestEnvironment struct { + redisServer *miniredis.Miniredis + redisClient *redis.Client + templateDir string +} + +func newRuntimeTestEnvironment(t *testing.T) *runtimeTestEnvironment { + t.Helper() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { + require.NoError(t, client.Close()) + }) + + return &runtimeTestEnvironment{ + redisServer: server, + redisClient: client, + templateDir: writeRuntimeTemplates(t), + } +} + +type runtimeInstanceOptions struct { + clock *runtimeTestClock + smtpMode string + smtpTimeout time.Duration + scriptedOutcomes []stubprovider.ScriptedOutcome + providerFactory runtimeProviderFactory +} + +type runtimeInstance struct { + baseURL string + runtime *Runtime + cancel context.CancelFunc + done chan error + closeOnce sync.Once + stubProvider *stubprovider.Provider +} + +func (env *runtimeTestEnvironment) start(t *testing.T, opts runtimeInstanceOptions) *runtimeInstance { + t.Helper() + + if opts.clock == nil { + opts.clock = newRuntimeTestClock(runtimeClockStart()) + } + if opts.smtpMode == "" { + opts.smtpMode = config.SMTPModeSMTP + } + if opts.smtpTimeout <= 0 { + opts.smtpTimeout = 20 * time.Millisecond + } + + cfg := config.DefaultConfig() + 
cfg.Redis.Addr = env.redisServer.Addr() + cfg.Templates.Dir = env.templateDir + cfg.InternalHTTP.Addr = mustFreeAddr(t) + cfg.ShutdownTimeout = time.Second + cfg.StreamBlockTimeout = 20 * time.Millisecond + cfg.AttemptWorkerConcurrency = 1 + cfg.SMTP.Mode = opts.smtpMode + cfg.SMTP.Timeout = opts.smtpTimeout + if opts.smtpMode == config.SMTPModeSMTP { + cfg.SMTP.Addr = "127.0.0.1:2525" + cfg.SMTP.FromEmail = "noreply@example.com" + } + + instance := &runtimeInstance{ + baseURL: "http://" + cfg.InternalHTTP.Addr, + done: make(chan error, 1), + } + + deps := runtimeDependencies{ + clock: opts.clock, + schedulerPoll: 10 * time.Millisecond, + schedulerRecovery: 10 * time.Millisecond, + schedulerGrace: 5 * time.Millisecond, + } + if opts.providerFactory != nil { + deps.providerFactory = opts.providerFactory + } else if opts.smtpMode == config.SMTPModeSMTP { + deps.providerFactory = func(config.SMTPConfig, *slog.Logger) (ports.Provider, error) { + provider, err := stubprovider.New(opts.scriptedOutcomes...) 
+ if err == nil { + instance.stubProvider = provider + } + return provider, err + } + } + + runtime, err := newRuntime(context.Background(), cfg, testLogger(), deps) + require.NoError(t, err) + instance.runtime = runtime + + runCtx, cancel := context.WithCancel(context.Background()) + instance.cancel = cancel + go func() { + instance.done <- runtime.Run(runCtx) + }() + + waitForRuntimeReady(t, instance.baseURL) + t.Cleanup(func() { + instance.stop(t) + }) + + return instance +} + +func (instance *runtimeInstance) stop(t *testing.T) { + t.Helper() + + instance.closeOnce.Do(func() { + if instance.cancel != nil { + instance.cancel() + } + + select { + case err := <-instance.done: + require.NoError(t, err) + case <-time.After(5 * time.Second): + require.FailNow(t, "runtime did not stop before timeout") + } + + require.NoError(t, instance.runtime.Close()) + }) +} + +type runtimeTestClock struct { + mu sync.RWMutex + now time.Time +} + +func newRuntimeTestClock(now time.Time) *runtimeTestClock { + return &runtimeTestClock{now: now.UTC().Truncate(time.Millisecond)} +} + +func runtimeClockStart() time.Time { + return time.Now().UTC().Truncate(time.Millisecond) +} + +func (clock *runtimeTestClock) Now() time.Time { + clock.mu.RLock() + defer clock.mu.RUnlock() + + return clock.now +} + +func (clock *runtimeTestClock) Advance(step time.Duration) { + clock.mu.Lock() + defer clock.mu.Unlock() + + clock.now = clock.now.Add(step).UTC().Truncate(time.Millisecond) +} + +type blockingProvider struct { + mu sync.RWMutex + startedOnce sync.Once + startedCh chan struct{} +} + +func (provider *blockingProvider) started() bool { + if provider == nil { + return false + } + provider.mu.RLock() + startedCh := provider.startedCh + provider.mu.RUnlock() + if startedCh == nil { + return false + } + + select { + case <-startedCh: + return true + default: + return false + } +} + +func (provider *blockingProvider) Send(ctx context.Context, message ports.Message) (ports.Result, error) { + 
provider.startedOnce.Do(func() { + provider.mu.Lock() + if provider.startedCh == nil { + provider.startedCh = make(chan struct{}) + } + startedCh := provider.startedCh + provider.mu.Unlock() + close(startedCh) + }) + if err := message.Validate(); err != nil { + return ports.Result{}, err + } + + <-ctx.Done() + return ports.Result{}, ctx.Err() +} + +func (provider *blockingProvider) Close() error { + return nil +} + +func writeRuntimeTemplates(t *testing.T) string { + t.Helper() + + rootDir := t.TempDir() + templateDir := filepath.Join(rootDir, "auth.login_code", "en") + require.NoError(t, os.MkdirAll(templateDir, 0o755)) + require.NoError(t, os.WriteFile(filepath.Join(templateDir, "subject.tmpl"), []byte("Your login code"), 0o644)) + require.NoError(t, os.WriteFile(filepath.Join(templateDir, "text.tmpl"), []byte("Code: {{.code}}"), 0o644)) + + return rootDir +} + +type loginCodeDeliveryRequest struct { + idempotencyKey string + email string + code string + locale string +} + +func postLoginCodeDelivery(t *testing.T, baseURL string, request loginCodeDeliveryRequest) internalhttp.LoginCodeDeliveryResponse { + t.Helper() + + body, err := json.Marshal(map[string]string{ + "email": request.email, + "code": request.code, + "locale": request.locale, + }) + require.NoError(t, err) + + httpRequest, err := http.NewRequest(http.MethodPost, baseURL+internalhttp.LoginCodeDeliveriesPath, bytes.NewReader(body)) + require.NoError(t, err) + httpRequest.Header.Set("Content-Type", "application/json") + httpRequest.Header.Set(internalhttp.IdempotencyKeyHeader, request.idempotencyKey) + + response := doJSONRequest[internalhttp.LoginCodeDeliveryResponse](t, httpRequest, http.StatusOK) + require.NoError(t, response.Validate()) + + return response +} + +func publishRenderedCommand(t *testing.T, client *redis.Client, deliveryID string, idempotencyKey string, subject string) { + t.Helper() + + _, err := client.XAdd(context.Background(), &redis.XAddArgs{ + Stream: 
streamcommand.DeliveryCommandsStream, + Values: map[string]any{ + "delivery_id": deliveryID, + "source": "notification", + "payload_mode": "rendered", + "idempotency_key": idempotencyKey, + "requested_at_ms": "1775121700000", + "payload_json": `{"to":["pilot@example.com"],"cc":[],"bcc":[],"reply_to":["noreply@example.com"],"subject":"` + subject + `","text_body":"Turn 54 is ready.","html_body":"
<p>Turn 54 is ready.</p>
","attachments":[]}`, + }, + }).Result() + require.NoError(t, err) +} + +func waitForRuntimeReady(t *testing.T, baseURL string) { + t.Helper() + + require.Eventually(t, func() bool { + request, err := http.NewRequest(http.MethodGet, baseURL+internalhttp.DeliveriesPath, nil) + if err != nil { + return false + } + + response, err := http.DefaultClient.Do(request) + if err != nil { + return false + } + defer response.Body.Close() + + _, _ = io.Copy(io.Discard, response.Body) + return response.StatusCode == http.StatusOK + }, 5*time.Second, 20*time.Millisecond) +} + +func eventuallyListDeliveries(t *testing.T, baseURL string, query url.Values) internalhttp.DeliveryListResponse { + t.Helper() + + var response internalhttp.DeliveryListResponse + require.Eventually(t, func() bool { + response = listDeliveries(t, baseURL, query) + return len(response.Items) > 0 + }, 5*time.Second, 20*time.Millisecond) + + return response +} + +func listDeliveries(t *testing.T, baseURL string, query url.Values) internalhttp.DeliveryListResponse { + t.Helper() + + target := baseURL + internalhttp.DeliveriesPath + if encoded := query.Encode(); encoded != "" { + target += "?" 
+ encoded + } + + request, err := http.NewRequest(http.MethodGet, target, nil) + require.NoError(t, err) + + return doJSONRequest[internalhttp.DeliveryListResponse](t, request, http.StatusOK) +} + +func eventuallyDeliveryStatus(t *testing.T, baseURL string, deliveryID string, status string) internalhttp.DeliveryDetailResponse { + t.Helper() + + var response internalhttp.DeliveryDetailResponse + require.Eventually(t, func() bool { + var found bool + response, found = tryGetDelivery(t, baseURL, deliveryID) + if !found { + return false + } + return response.Status == status + }, 5*time.Second, 20*time.Millisecond) + + return response +} + +func getDelivery(t *testing.T, baseURL string, deliveryID string) internalhttp.DeliveryDetailResponse { + t.Helper() + + response, found := tryGetDelivery(t, baseURL, deliveryID) + require.True(t, found, "delivery %s not found", deliveryID) + + return response +} + +func tryGetDelivery(t *testing.T, baseURL string, deliveryID string) (internalhttp.DeliveryDetailResponse, bool) { + t.Helper() + + request, err := http.NewRequest(http.MethodGet, baseURL+internalhttp.DeliveriesPath+"/"+url.PathEscape(deliveryID), nil) + require.NoError(t, err) + + response, payload := doRequest(t, request) + if response.StatusCode == http.StatusNotFound { + var notFound internalhttp.ErrorResponse + require.NoError(t, json.Unmarshal(payload, &notFound), string(payload)) + require.NoError(t, notFound.Validate()) + require.Equal(t, internalhttp.ErrorCodeDeliveryNotFound, notFound.Error.Code) + return internalhttp.DeliveryDetailResponse{}, false + } + + return decodeBody[internalhttp.DeliveryDetailResponse](t, response.StatusCode, payload, http.StatusOK), true +} + +func getDeliveryAttempts(t *testing.T, baseURL string, deliveryID string) internalhttp.DeliveryAttemptsResponse { + t.Helper() + + request, err := http.NewRequest(http.MethodGet, baseURL+internalhttp.DeliveriesPath+"/"+url.PathEscape(deliveryID)+"/attempts", nil) + require.NoError(t, err) + + 
return doJSONRequest[internalhttp.DeliveryAttemptsResponse](t, request, http.StatusOK) +} + +func resendDelivery(t *testing.T, baseURL string, deliveryID string) string { + t.Helper() + + request, err := http.NewRequest(http.MethodPost, baseURL+internalhttp.DeliveriesPath+"/"+url.PathEscape(deliveryID)+"/resend", nil) + require.NoError(t, err) + + response := doJSONRequest[internalhttp.DeliveryResendResponse](t, request, http.StatusOK) + require.NotEmpty(t, response.DeliveryID) + + return response.DeliveryID +} + +func doJSONRequest[T any](t *testing.T, request *http.Request, wantStatus int) T { + t.Helper() + + response, payload := doRequest(t, request) + return decodeBody[T](t, response.StatusCode, payload, wantStatus) +} + +func doRequest(t *testing.T, request *http.Request) (*http.Response, []byte) { + t.Helper() + + response, err := http.DefaultClient.Do(request) + require.NoError(t, err) + defer response.Body.Close() + + payload, err := io.ReadAll(response.Body) + require.NoError(t, err) + + return response, payload +} + +func decodeBody[T any](t *testing.T, gotStatus int, payload []byte, wantStatus int) T { + t.Helper() + + require.Equal(t, wantStatus, gotStatus, string(payload)) + + var decoded T + require.NoError(t, json.Unmarshal(payload, &decoded), string(payload)) + + return decoded +} + +func formatUnixMilli(value time.Time) string { + return strconv.FormatInt(value.UTC().Truncate(time.Millisecond).UnixMilli(), 10) +} + +var _ ports.Provider = (*blockingProvider)(nil) diff --git a/mail/internal/app/runtime_test.go b/mail/internal/app/runtime_test.go new file mode 100644 index 0000000..07333f7 --- /dev/null +++ b/mail/internal/app/runtime_test.go @@ -0,0 +1,184 @@ +package app + +import ( + "context" + "io" + "log/slog" + "net" + "os" + "path/filepath" + "testing" + "time" + + "galaxy/mail/internal/config" + + "github.com/alicebob/miniredis/v2" + "github.com/stretchr/testify/require" +) + +func TestNewRuntimeStartsWithStubMode(t *testing.T) { + 
t.Parallel() + + redisServer := miniredis.RunT(t) + templateDir := writeStage6Templates(t) + + cfg := config.DefaultConfig() + cfg.Redis.Addr = redisServer.Addr() + cfg.Templates.Dir = templateDir + cfg.InternalHTTP.Addr = mustFreeAddr(t) + + runtime, err := NewRuntime(context.Background(), cfg, testLogger()) + require.NoError(t, err) + require.NoError(t, runtime.Close()) +} + +func TestNewRuntimeRejectsInvalidRedisConfig(t *testing.T) { + t.Parallel() + + templateDir := writeStage6Templates(t) + + cfg := config.DefaultConfig() + cfg.Redis.Addr = "127.0.0.1" + cfg.Templates.Dir = templateDir + cfg.InternalHTTP.Addr = mustFreeAddr(t) + + _, err := NewRuntime(context.Background(), cfg, testLogger()) + require.Error(t, err) + require.Contains(t, err.Error(), "redis addr") +} + +func TestNewRuntimeRejectsUnavailableRedis(t *testing.T) { + t.Parallel() + + templateDir := writeStage6Templates(t) + + cfg := config.DefaultConfig() + cfg.Redis.Addr = "127.0.0.1:6399" + cfg.Redis.OperationTimeout = 100 * time.Millisecond + cfg.Templates.Dir = templateDir + cfg.InternalHTTP.Addr = mustFreeAddr(t) + + _, err := NewRuntime(context.Background(), cfg, testLogger()) + require.Error(t, err) + require.Contains(t, err.Error(), "ping redis") +} + +func TestNewRuntimeRejectsMissingTemplateDirectory(t *testing.T) { + t.Parallel() + + redisServer := miniredis.RunT(t) + + cfg := config.DefaultConfig() + cfg.Redis.Addr = redisServer.Addr() + cfg.Templates.Dir = filepath.Join(t.TempDir(), "missing") + cfg.InternalHTTP.Addr = mustFreeAddr(t) + + _, err := NewRuntime(context.Background(), cfg, testLogger()) + require.Error(t, err) + require.Contains(t, err.Error(), "template catalog") +} + +func TestNewRuntimeRejectsMissingRequiredTemplateFile(t *testing.T) { + t.Parallel() + + redisServer := miniredis.RunT(t) + rootDir := t.TempDir() + require.NoError(t, os.MkdirAll(filepath.Join(rootDir, "auth.login_code", "en"), 0o755)) + require.NoError(t, os.WriteFile(filepath.Join(rootDir, 
"auth.login_code", "en", "subject.tmpl"), []byte("Subject"), 0o644)) + + cfg := config.DefaultConfig() + cfg.Redis.Addr = redisServer.Addr() + cfg.Templates.Dir = rootDir + cfg.InternalHTTP.Addr = mustFreeAddr(t) + + _, err := NewRuntime(context.Background(), cfg, testLogger()) + require.Error(t, err) + require.Contains(t, err.Error(), "text.tmpl") +} + +func TestNewRuntimeRejectsBrokenTemplateCatalog(t *testing.T) { + t.Parallel() + + redisServer := miniredis.RunT(t) + rootDir := t.TempDir() + require.NoError(t, os.MkdirAll(filepath.Join(rootDir, "auth.login_code", "en"), 0o755)) + require.NoError(t, os.WriteFile(filepath.Join(rootDir, "auth.login_code", "en", "subject.tmpl"), []byte("Your login code"), 0o644)) + require.NoError(t, os.WriteFile(filepath.Join(rootDir, "auth.login_code", "en", "text.tmpl"), []byte("Code: {{.code}}"), 0o644)) + require.NoError(t, os.MkdirAll(filepath.Join(rootDir, "game.turn_ready", "en"), 0o755)) + require.NoError(t, os.WriteFile(filepath.Join(rootDir, "game.turn_ready", "en", "subject.tmpl"), []byte("{{if .turn_number}"), 0o644)) + require.NoError(t, os.WriteFile(filepath.Join(rootDir, "game.turn_ready", "en", "text.tmpl"), []byte("Turn ready"), 0o644)) + + cfg := config.DefaultConfig() + cfg.Redis.Addr = redisServer.Addr() + cfg.Templates.Dir = rootDir + cfg.InternalHTTP.Addr = mustFreeAddr(t) + + _, err := NewRuntime(context.Background(), cfg, testLogger()) + require.Error(t, err) + require.Contains(t, err.Error(), "template parse failed") +} + +func TestRuntimeRunStopsOnContextCancellation(t *testing.T) { + t.Parallel() + + redisServer := miniredis.RunT(t) + templateDir := writeStage6Templates(t) + + cfg := config.DefaultConfig() + cfg.Redis.Addr = redisServer.Addr() + cfg.Templates.Dir = templateDir + cfg.InternalHTTP.Addr = mustFreeAddr(t) + cfg.ShutdownTimeout = time.Second + + runtime, err := NewRuntime(context.Background(), cfg, testLogger()) + require.NoError(t, err) + defer func() { + require.NoError(t, runtime.Close()) + 
}() + + runCtx, cancel := context.WithCancel(context.Background()) + done := make(chan error, 1) + go func() { + done <- runtime.Run(runCtx) + }() + + time.Sleep(100 * time.Millisecond) + cancel() + + require.Eventually(t, func() bool { + select { + case err := <-done: + return err == nil + default: + return false + } + }, 5*time.Second, 10*time.Millisecond) +} + +func writeStage6Templates(t *testing.T) string { + t.Helper() + + rootDir := t.TempDir() + templateDir := filepath.Join(rootDir, "auth.login_code", "en") + require.NoError(t, os.MkdirAll(templateDir, 0o755)) + require.NoError(t, os.WriteFile(filepath.Join(templateDir, "subject.tmpl"), []byte("Your login code"), 0o644)) + require.NoError(t, os.WriteFile(filepath.Join(templateDir, "text.tmpl"), []byte("Code: {{.code}}"), 0o644)) + + return rootDir +} + +func testLogger() *slog.Logger { + return slog.New(slog.NewJSONHandler(io.Discard, nil)) +} + +func mustFreeAddr(t *testing.T) string { + t.Helper() + + listener, err := net.Listen("tcp", "127.0.0.1:0") + require.NoError(t, err) + defer func() { + require.NoError(t, listener.Close()) + }() + + return listener.Addr().String() +} diff --git a/mail/internal/config/config.go b/mail/internal/config/config.go new file mode 100644 index 0000000..640bef5 --- /dev/null +++ b/mail/internal/config/config.go @@ -0,0 +1,402 @@ +// Package config loads the Mail Service process configuration from environment +// variables. 
+package config + +import ( + "crypto/tls" + "fmt" + "strings" + "time" + + "galaxy/mail/internal/telemetry" +) + +const ( + shutdownTimeoutEnvVar = "MAIL_SHUTDOWN_TIMEOUT" + logLevelEnvVar = "MAIL_LOG_LEVEL" + + internalHTTPAddrEnvVar = "MAIL_INTERNAL_HTTP_ADDR" + internalHTTPReadHeaderTimeoutEnvVar = "MAIL_INTERNAL_HTTP_READ_HEADER_TIMEOUT" + internalHTTPReadTimeoutEnvVar = "MAIL_INTERNAL_HTTP_READ_TIMEOUT" + internalHTTPIdleTimeoutEnvVar = "MAIL_INTERNAL_HTTP_IDLE_TIMEOUT" + + redisAddrEnvVar = "MAIL_REDIS_ADDR" + redisUsernameEnvVar = "MAIL_REDIS_USERNAME" + redisPasswordEnvVar = "MAIL_REDIS_PASSWORD" + redisDBEnvVar = "MAIL_REDIS_DB" + redisTLSEnabledEnvVar = "MAIL_REDIS_TLS_ENABLED" + redisOperationTimeoutEnvVar = "MAIL_REDIS_OPERATION_TIMEOUT" + redisCommandStreamEnvVar = "MAIL_REDIS_COMMAND_STREAM" + redisAttemptScheduleEnvVar = "MAIL_REDIS_ATTEMPT_SCHEDULE_KEY" + redisDeadLetterPrefixEnvVar = "MAIL_REDIS_DEAD_LETTER_PREFIX" + + smtpModeEnvVar = "MAIL_SMTP_MODE" + smtpAddrEnvVar = "MAIL_SMTP_ADDR" + smtpUsernameEnvVar = "MAIL_SMTP_USERNAME" + smtpPasswordEnvVar = "MAIL_SMTP_PASSWORD" + smtpFromEmailEnvVar = "MAIL_SMTP_FROM_EMAIL" + smtpFromNameEnvVar = "MAIL_SMTP_FROM_NAME" + smtpTimeoutEnvVar = "MAIL_SMTP_TIMEOUT" + smtpInsecureSkipVerifyEnvVar = "MAIL_SMTP_INSECURE_SKIP_VERIFY" + + templateDirEnvVar = "MAIL_TEMPLATE_DIR" + + attemptWorkerConcurrencyEnvVar = "MAIL_ATTEMPT_WORKER_CONCURRENCY" + streamBlockTimeoutEnvVar = "MAIL_STREAM_BLOCK_TIMEOUT" + operatorRequestTimeoutEnvVar = "MAIL_OPERATOR_REQUEST_TIMEOUT" + idempotencyTTLEnvVar = "MAIL_IDEMPOTENCY_TTL" + deliveryTTLEnvVar = "MAIL_DELIVERY_TTL" + attemptTTLEnvVar = "MAIL_ATTEMPT_TTL" + + otelServiceNameEnvVar = "OTEL_SERVICE_NAME" + otelTracesExporterEnvVar = "OTEL_TRACES_EXPORTER" + otelMetricsExporterEnvVar = "OTEL_METRICS_EXPORTER" + otelExporterOTLPProtocolEnvVar = "OTEL_EXPORTER_OTLP_PROTOCOL" + otelExporterOTLPTracesProtocolEnvVar = "OTEL_EXPORTER_OTLP_TRACES_PROTOCOL" + 
otelExporterOTLPMetricsProtocolEnvVar = "OTEL_EXPORTER_OTLP_METRICS_PROTOCOL" + otelStdoutTracesEnabledEnvVar = "MAIL_OTEL_STDOUT_TRACES_ENABLED" + otelStdoutMetricsEnabledEnvVar = "MAIL_OTEL_STDOUT_METRICS_ENABLED" + + defaultShutdownTimeout = 5 * time.Second + defaultLogLevel = "info" + defaultInternalHTTPAddr = ":8080" + defaultReadHeaderTimeout = 2 * time.Second + defaultReadTimeout = 10 * time.Second + defaultIdleTimeout = time.Minute + defaultRedisDB = 0 + defaultRedisOperationTimeout = 250 * time.Millisecond + defaultRedisCommandStream = "mail:delivery_commands" + defaultRedisAttemptScheduleKey = "mail:attempt_schedule" + defaultRedisDeadLetterPrefix = "mail:dead_letters:" + defaultSMTPMode = SMTPModeStub + defaultSMTPTimeout = 15 * time.Second + defaultTemplateDir = "templates" + defaultAttemptWorkerCount = 4 + defaultStreamBlockTimeout = 2 * time.Second + defaultOperatorRequestTimeout = 5 * time.Second + defaultIdempotencyTTL = 7 * 24 * time.Hour + defaultDeliveryTTL = 30 * 24 * time.Hour + defaultAttemptTTL = 90 * 24 * time.Hour + defaultOTelServiceName = "galaxy-mail" +) + +const ( + // SMTPModeStub configures the deterministic in-process stub provider. + SMTPModeStub = "stub" + + // SMTPModeSMTP configures the real SMTP-backed provider adapter. + SMTPModeSMTP = "smtp" +) + +// Config stores the full Mail Service process configuration. +type Config struct { + // ShutdownTimeout bounds graceful shutdown of every long-lived component. + ShutdownTimeout time.Duration + + // Logging configures the process-wide structured logger. + Logging LoggingConfig + + // InternalHTTP configures the trusted internal HTTP listener. + InternalHTTP InternalHTTPConfig + + // Redis configures the shared Redis client and Redis-owned keys used by the + // runnable service skeleton. + Redis RedisConfig + + // SMTP configures the runtime mail provider mode and provider-specific + // connection details. 
+ SMTP SMTPConfig + + // Templates configures the filesystem-backed template catalog root. + Templates TemplateConfig + + // AttemptWorkerConcurrency stores how many idle attempt workers the process + // starts. + AttemptWorkerConcurrency int + + // StreamBlockTimeout stores the maximum Redis Streams blocking read window + // used by the future command consumer. + StreamBlockTimeout time.Duration + + // OperatorRequestTimeout stores the future application-layer request budget + // for trusted operator handlers. + OperatorRequestTimeout time.Duration + + // IdempotencyTTL stores the configured retention for idempotency records. + IdempotencyTTL time.Duration + + // DeliveryTTL stores the configured retention for delivery records. + DeliveryTTL time.Duration + + // AttemptTTL stores the configured retention for attempt and dead-letter + // records. + AttemptTTL time.Duration + + // Telemetry configures the process-wide OpenTelemetry runtime. + Telemetry TelemetryConfig +} + +// LoggingConfig configures the process-wide structured logger. +type LoggingConfig struct { + // Level stores the process log level accepted by log/slog. + Level string +} + +// InternalHTTPConfig configures the trusted internal HTTP listener. +type InternalHTTPConfig struct { + // Addr stores the TCP listen address. + Addr string + + // ReadHeaderTimeout bounds request-header reading. + ReadHeaderTimeout time.Duration + + // ReadTimeout bounds reading one request. + ReadTimeout time.Duration + + // IdleTimeout bounds how long keep-alive connections stay open. + IdleTimeout time.Duration +} + +// Validate reports whether cfg stores a usable internal HTTP listener +// configuration. 
+func (cfg InternalHTTPConfig) Validate() error { + switch { + case strings.TrimSpace(cfg.Addr) == "": + return fmt.Errorf("internal HTTP addr must not be empty") + case !isTCPAddr(cfg.Addr): + return fmt.Errorf("internal HTTP addr %q must use host:port form", cfg.Addr) + case cfg.ReadHeaderTimeout <= 0: + return fmt.Errorf("internal HTTP read header timeout must be positive") + case cfg.ReadTimeout <= 0: + return fmt.Errorf("internal HTTP read timeout must be positive") + case cfg.IdleTimeout <= 0: + return fmt.Errorf("internal HTTP idle timeout must be positive") + default: + return nil + } +} + +// RedisConfig configures the shared Redis client used by the runnable process. +type RedisConfig struct { + // Addr stores the Redis network address. + Addr string + + // Username stores the optional Redis ACL username. + Username string + + // Password stores the optional Redis ACL password. + Password string + + // DB stores the Redis logical database index. + DB int + + // TLSEnabled reports whether TLS must be used for Redis connections. + TLSEnabled bool + + // OperationTimeout bounds one Redis round trip including the startup PING. + OperationTimeout time.Duration + + // CommandStream stores the configured Redis Streams key for async command + // intake. + CommandStream string + + // AttemptScheduleKey stores the configured sorted-set key of scheduled + // attempts. + AttemptScheduleKey string + + // DeadLetterPrefix stores the configured Redis key prefix of dead-letter + // entries. + DeadLetterPrefix string +} + +// TLSConfig returns the conservative TLS configuration used by the Redis +// client when TLSEnabled is true. +func (cfg RedisConfig) TLSConfig() *tls.Config { + if !cfg.TLSEnabled { + return nil + } + + return &tls.Config{MinVersion: tls.VersionTLS12} +} + +// Validate reports whether cfg stores a usable Redis configuration. 
+func (cfg RedisConfig) Validate() error { + switch { + case strings.TrimSpace(cfg.Addr) == "": + return fmt.Errorf("redis addr must not be empty") + case !isTCPAddr(cfg.Addr): + return fmt.Errorf("redis addr %q must use host:port form", cfg.Addr) + case cfg.DB < 0: + return fmt.Errorf("redis db must not be negative") + case cfg.OperationTimeout <= 0: + return fmt.Errorf("redis operation timeout must be positive") + case strings.TrimSpace(cfg.CommandStream) == "": + return fmt.Errorf("redis command stream must not be empty") + case strings.TrimSpace(cfg.AttemptScheduleKey) == "": + return fmt.Errorf("redis attempt schedule key must not be empty") + case strings.TrimSpace(cfg.DeadLetterPrefix) == "": + return fmt.Errorf("redis dead-letter prefix must not be empty") + default: + return nil + } +} + +// SMTPConfig configures the selected provider adapter. +type SMTPConfig struct { + // Mode selects the runtime provider implementation. Supported values are + // `stub` and `smtp`. + Mode string + + // Addr stores the SMTP server address when Mode is `smtp`. + Addr string + + // Username stores the optional SMTP authentication username. + Username string + + // Password stores the optional SMTP authentication password. + Password string + + // FromEmail stores the RFC 5322 single mailbox used as the envelope sender + // when Mode is `smtp`. + FromEmail string + + // FromName stores the optional display name attached to FromEmail. + FromName string + + // Timeout stores the maximum SMTP dial-and-send window. + Timeout time.Duration + + // InsecureSkipVerify disables SMTP certificate verification. This is meant + // only for local development and black-box tests with self-signed capture + // servers. + InsecureSkipVerify bool +} + +// Validate reports whether cfg stores a usable provider configuration. 
+func (cfg SMTPConfig) Validate() error { + switch cfg.Mode { + case SMTPModeStub: + return nil + case SMTPModeSMTP: + switch { + case strings.TrimSpace(cfg.Addr) == "": + return fmt.Errorf("smtp addr must not be empty") + case !isTCPAddr(cfg.Addr): + return fmt.Errorf("smtp addr %q must use host:port form", cfg.Addr) + case cfg.Timeout <= 0: + return fmt.Errorf("smtp timeout must be positive") + case strings.TrimSpace(cfg.Username) == "" && strings.TrimSpace(cfg.Password) != "": + return fmt.Errorf("smtp username and password must be configured together") + case strings.TrimSpace(cfg.Username) != "" && strings.TrimSpace(cfg.Password) == "": + return fmt.Errorf("smtp username and password must be configured together") + default: + return validateMailbox("smtp from email", cfg.FromEmail) + } + default: + return fmt.Errorf("smtp mode %q is unsupported", cfg.Mode) + } +} + +// TemplateConfig configures the filesystem-backed template catalog. +type TemplateConfig struct { + // Dir stores the root directory of the template catalog. + Dir string +} + +// TelemetryConfig configures the Mail Service OpenTelemetry runtime. +type TelemetryConfig struct { + // ServiceName overrides the default OpenTelemetry service name. + ServiceName string + + // TracesExporter selects the external traces exporter. Supported values are + // `none` and `otlp`. + TracesExporter string + + // MetricsExporter selects the external metrics exporter. Supported values + // are `none` and `otlp`. + MetricsExporter string + + // TracesProtocol selects the OTLP traces protocol when TracesExporter is + // `otlp`. + TracesProtocol string + + // MetricsProtocol selects the OTLP metrics protocol when MetricsExporter is + // `otlp`. + MetricsProtocol string + + // StdoutTracesEnabled enables the additional stdout trace exporter used for + // local development and debugging. 
+ StdoutTracesEnabled bool + + // StdoutMetricsEnabled enables the additional stdout metric exporter used + // for local development and debugging. + StdoutMetricsEnabled bool +} + +// Validate reports whether cfg stores a usable template catalog root. +func (cfg TemplateConfig) Validate() error { + if strings.TrimSpace(cfg.Dir) == "" { + return fmt.Errorf("template dir must not be empty") + } + + return nil +} + +// DefaultConfig returns the default Mail Service process configuration. +func DefaultConfig() Config { + return Config{ + ShutdownTimeout: defaultShutdownTimeout, + Logging: LoggingConfig{ + Level: defaultLogLevel, + }, + InternalHTTP: InternalHTTPConfig{ + Addr: defaultInternalHTTPAddr, + ReadHeaderTimeout: defaultReadHeaderTimeout, + ReadTimeout: defaultReadTimeout, + IdleTimeout: defaultIdleTimeout, + }, + Redis: RedisConfig{ + DB: defaultRedisDB, + OperationTimeout: defaultRedisOperationTimeout, + CommandStream: defaultRedisCommandStream, + AttemptScheduleKey: defaultRedisAttemptScheduleKey, + DeadLetterPrefix: defaultRedisDeadLetterPrefix, + }, + SMTP: SMTPConfig{ + Mode: defaultSMTPMode, + Timeout: defaultSMTPTimeout, + }, + Templates: TemplateConfig{ + Dir: defaultTemplateDir, + }, + AttemptWorkerConcurrency: defaultAttemptWorkerCount, + StreamBlockTimeout: defaultStreamBlockTimeout, + OperatorRequestTimeout: defaultOperatorRequestTimeout, + IdempotencyTTL: defaultIdempotencyTTL, + DeliveryTTL: defaultDeliveryTTL, + AttemptTTL: defaultAttemptTTL, + Telemetry: TelemetryConfig{ + ServiceName: defaultOTelServiceName, + TracesExporter: "none", + MetricsExporter: "none", + TracesProtocol: "", + MetricsProtocol: "", + StdoutTracesEnabled: false, + StdoutMetricsEnabled: false, + }, + } +} + +// Validate reports whether cfg contains a supported OpenTelemetry +// configuration. 
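+// For illustration, based on the supported values documented on the struct
+// fields above (the authoritative rules live in telemetry.ProcessConfig):
+//
+//	cfg := DefaultConfig().Telemetry
+//	cfg.TracesExporter = "stdout" // only "none" and "otlp" are supported
+//	cfg.Validate()                // expected to return an error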
+func (cfg TelemetryConfig) Validate() error { + return telemetry.ProcessConfig{ + ServiceName: cfg.ServiceName, + TracesExporter: cfg.TracesExporter, + MetricsExporter: cfg.MetricsExporter, + TracesProtocol: cfg.TracesProtocol, + MetricsProtocol: cfg.MetricsProtocol, + StdoutTracesEnabled: cfg.StdoutTracesEnabled, + StdoutMetricsEnabled: cfg.StdoutMetricsEnabled, + }.Validate() +} diff --git a/mail/internal/config/config_test.go b/mail/internal/config/config_test.go new file mode 100644 index 0000000..943359f --- /dev/null +++ b/mail/internal/config/config_test.go @@ -0,0 +1,256 @@ +package config + +import ( + "testing" + "time" + + "github.com/stretchr/testify/require" +) + +func TestLoadFromEnvUsesDefaults(t *testing.T) { + t.Setenv(redisAddrEnvVar, "127.0.0.1:6379") + + cfg, err := LoadFromEnv() + require.NoError(t, err) + + defaults := DefaultConfig() + require.Equal(t, defaults.ShutdownTimeout, cfg.ShutdownTimeout) + require.Equal(t, defaults.Logging, cfg.Logging) + require.Equal(t, defaults.InternalHTTP, cfg.InternalHTTP) + require.Equal(t, "127.0.0.1:6379", cfg.Redis.Addr) + require.Equal(t, defaults.Redis.DB, cfg.Redis.DB) + require.Equal(t, defaults.Redis.OperationTimeout, cfg.Redis.OperationTimeout) + require.Equal(t, defaults.Redis.CommandStream, cfg.Redis.CommandStream) + require.Equal(t, defaults.Redis.AttemptScheduleKey, cfg.Redis.AttemptScheduleKey) + require.Equal(t, defaults.Redis.DeadLetterPrefix, cfg.Redis.DeadLetterPrefix) + require.Equal(t, defaults.SMTP, cfg.SMTP) + require.Equal(t, defaults.Templates, cfg.Templates) + require.Equal(t, defaults.AttemptWorkerConcurrency, cfg.AttemptWorkerConcurrency) + require.Equal(t, defaults.StreamBlockTimeout, cfg.StreamBlockTimeout) + require.Equal(t, defaults.OperatorRequestTimeout, cfg.OperatorRequestTimeout) + require.Equal(t, defaults.IdempotencyTTL, cfg.IdempotencyTTL) + require.Equal(t, defaults.DeliveryTTL, cfg.DeliveryTTL) + require.Equal(t, defaults.AttemptTTL, cfg.AttemptTTL) + require.Equal(t, 
defaults.Telemetry, cfg.Telemetry) +} + +func TestLoadFromEnvAppliesOverrides(t *testing.T) { + t.Setenv(shutdownTimeoutEnvVar, "9s") + t.Setenv(logLevelEnvVar, "debug") + t.Setenv(internalHTTPAddrEnvVar, "127.0.0.1:18080") + t.Setenv(internalHTTPReadHeaderTimeoutEnvVar, "3s") + t.Setenv(internalHTTPReadTimeoutEnvVar, "11s") + t.Setenv(internalHTTPIdleTimeoutEnvVar, "61s") + t.Setenv(redisAddrEnvVar, "127.0.0.1:6380") + t.Setenv(redisUsernameEnvVar, "alice") + t.Setenv(redisPasswordEnvVar, "secret") + t.Setenv(redisDBEnvVar, "3") + t.Setenv(redisTLSEnabledEnvVar, "true") + t.Setenv(redisOperationTimeoutEnvVar, "750ms") + t.Setenv(redisCommandStreamEnvVar, "mail:test_commands") + t.Setenv(redisAttemptScheduleEnvVar, "mail:test_schedule") + t.Setenv(redisDeadLetterPrefixEnvVar, "mail:test_dead_letters:") + t.Setenv(smtpModeEnvVar, SMTPModeSMTP) + t.Setenv(smtpAddrEnvVar, "127.0.0.1:2525") + t.Setenv(smtpUsernameEnvVar, "mailer") + t.Setenv(smtpPasswordEnvVar, "smtp-secret") + t.Setenv(smtpFromEmailEnvVar, "noreply@example.com") + t.Setenv(smtpFromNameEnvVar, "Galaxy Mail") + t.Setenv(smtpTimeoutEnvVar, "19s") + t.Setenv(smtpInsecureSkipVerifyEnvVar, "true") + t.Setenv(templateDirEnvVar, "/tmp/templates") + t.Setenv(attemptWorkerConcurrencyEnvVar, "8") + t.Setenv(streamBlockTimeoutEnvVar, "5s") + t.Setenv(operatorRequestTimeoutEnvVar, "6s") + t.Setenv(idempotencyTTLEnvVar, "48h") + t.Setenv(deliveryTTLEnvVar, "96h") + t.Setenv(attemptTTLEnvVar, "240h") + t.Setenv(otelServiceNameEnvVar, "custom-mail") + t.Setenv(otelTracesExporterEnvVar, "otlp") + t.Setenv(otelMetricsExporterEnvVar, "otlp") + t.Setenv(otelExporterOTLPProtocolEnvVar, "grpc") + t.Setenv(otelStdoutTracesEnabledEnvVar, "true") + t.Setenv(otelStdoutMetricsEnabledEnvVar, "true") + + cfg, err := LoadFromEnv() + require.NoError(t, err) + + require.Equal(t, 9*time.Second, cfg.ShutdownTimeout) + require.Equal(t, "debug", cfg.Logging.Level) + require.Equal(t, InternalHTTPConfig{ + Addr: "127.0.0.1:18080", + 
ReadHeaderTimeout: 3 * time.Second, + ReadTimeout: 11 * time.Second, + IdleTimeout: 61 * time.Second, + }, cfg.InternalHTTP) + require.Equal(t, RedisConfig{ + Addr: "127.0.0.1:6380", + Username: "alice", + Password: "secret", + DB: 3, + TLSEnabled: true, + OperationTimeout: 750 * time.Millisecond, + CommandStream: "mail:test_commands", + AttemptScheduleKey: "mail:test_schedule", + DeadLetterPrefix: "mail:test_dead_letters:", + }, cfg.Redis) + require.Equal(t, SMTPConfig{ + Mode: SMTPModeSMTP, + Addr: "127.0.0.1:2525", + Username: "mailer", + Password: "smtp-secret", + FromEmail: "noreply@example.com", + FromName: "Galaxy Mail", + Timeout: 19 * time.Second, + InsecureSkipVerify: true, + }, cfg.SMTP) + require.Equal(t, TemplateConfig{Dir: "/tmp/templates"}, cfg.Templates) + require.Equal(t, 8, cfg.AttemptWorkerConcurrency) + require.Equal(t, 5*time.Second, cfg.StreamBlockTimeout) + require.Equal(t, 6*time.Second, cfg.OperatorRequestTimeout) + require.Equal(t, 48*time.Hour, cfg.IdempotencyTTL) + require.Equal(t, 96*time.Hour, cfg.DeliveryTTL) + require.Equal(t, 240*time.Hour, cfg.AttemptTTL) + require.Equal(t, TelemetryConfig{ + ServiceName: "custom-mail", + TracesExporter: "otlp", + MetricsExporter: "otlp", + TracesProtocol: "grpc", + MetricsProtocol: "grpc", + StdoutTracesEnabled: true, + StdoutMetricsEnabled: true, + }, cfg.Telemetry) +} + +func TestLoadFromEnvRejectsInvalidValues(t *testing.T) { + tests := []struct { + name string + envName string + envVal string + }{ + {name: "invalid duration", envName: shutdownTimeoutEnvVar, envVal: "later"}, + {name: "invalid log level", envName: logLevelEnvVar, envVal: "verbose"}, + {name: "invalid redis db", envName: redisDBEnvVar, envVal: "db-three"}, + {name: "invalid redis tls", envName: redisTLSEnabledEnvVar, envVal: "sometimes"}, + {name: "invalid redis timeout", envName: redisOperationTimeoutEnvVar, envVal: "never"}, + {name: "invalid smtp mode", envName: smtpModeEnvVar, envVal: "ses"}, + {name: "invalid smtp timeout", 
envName: smtpTimeoutEnvVar, envVal: "fast"}, + {name: "invalid smtp insecure skip verify", envName: smtpInsecureSkipVerifyEnvVar, envVal: "sometimes"}, + {name: "invalid worker count", envName: attemptWorkerConcurrencyEnvVar, envVal: "many"}, + {name: "invalid otel traces exporter", envName: otelTracesExporterEnvVar, envVal: "stdout"}, + {name: "invalid otel metrics exporter", envName: otelMetricsExporterEnvVar, envVal: "stdout"}, + {name: "invalid otel traces protocol", envName: otelExporterOTLPTracesProtocolEnvVar, envVal: "udp"}, + {name: "invalid otel metrics protocol", envName: otelExporterOTLPMetricsProtocolEnvVar, envVal: "udp"}, + {name: "invalid otel stdout traces", envName: otelStdoutTracesEnabledEnvVar, envVal: "sometimes"}, + } + + for _, tt := range tests { + tt := tt + + t.Run(tt.name, func(t *testing.T) { + t.Setenv(redisAddrEnvVar, "127.0.0.1:6379") + t.Setenv(tt.envName, tt.envVal) + if tt.envName == smtpTimeoutEnvVar { + t.Setenv(smtpModeEnvVar, SMTPModeSMTP) + t.Setenv(smtpAddrEnvVar, "127.0.0.1:2525") + t.Setenv(smtpFromEmailEnvVar, "noreply@example.com") + } + + _, err := LoadFromEnv() + require.Error(t, err) + }) + } +} + +func TestLoadFromEnvRejectsMissingRequiredRedisAddr(t *testing.T) { + _, err := LoadFromEnv() + require.Error(t, err) + require.Contains(t, err.Error(), "redis addr") +} + +func TestLoadFromEnvRejectsInvalidRedisAddr(t *testing.T) { + t.Setenv(redisAddrEnvVar, "127.0.0.1") + + _, err := LoadFromEnv() + require.Error(t, err) + require.Contains(t, err.Error(), "redis addr") +} + +func TestLoadFromEnvRejectsInvalidSMTPConfiguration(t *testing.T) { + t.Setenv(redisAddrEnvVar, "127.0.0.1:6379") + t.Setenv(smtpModeEnvVar, SMTPModeSMTP) + + t.Run("missing addr", func(t *testing.T) { + t.Setenv(smtpFromEmailEnvVar, "noreply@example.com") + + _, err := LoadFromEnv() + require.Error(t, err) + require.Contains(t, err.Error(), "smtp addr") + }) + + t.Run("missing from email", func(t *testing.T) { + t.Setenv(smtpAddrEnvVar, 
"127.0.0.1:2525") + + _, err := LoadFromEnv() + require.Error(t, err) + require.Contains(t, err.Error(), "smtp from email") + }) + + t.Run("username without password", func(t *testing.T) { + t.Setenv(smtpAddrEnvVar, "127.0.0.1:2525") + t.Setenv(smtpFromEmailEnvVar, "noreply@example.com") + t.Setenv(smtpUsernameEnvVar, "mailer") + + _, err := LoadFromEnv() + require.Error(t, err) + require.Contains(t, err.Error(), "smtp username and password") + }) + + t.Run("password without username", func(t *testing.T) { + t.Setenv(smtpAddrEnvVar, "127.0.0.1:2525") + t.Setenv(smtpFromEmailEnvVar, "noreply@example.com") + t.Setenv(smtpPasswordEnvVar, "secret") + + _, err := LoadFromEnv() + require.Error(t, err) + require.Contains(t, err.Error(), "smtp username and password") + }) +} + +func TestLoadFromEnvRejectsNonPositiveDurationsAndCounts(t *testing.T) { + tests := []struct { + name string + envName string + envVal string + }{ + {name: "shutdown timeout", envName: shutdownTimeoutEnvVar, envVal: "0s"}, + {name: "read header timeout", envName: internalHTTPReadHeaderTimeoutEnvVar, envVal: "0s"}, + {name: "read timeout", envName: internalHTTPReadTimeoutEnvVar, envVal: "0s"}, + {name: "idle timeout", envName: internalHTTPIdleTimeoutEnvVar, envVal: "0s"}, + {name: "redis operation timeout", envName: redisOperationTimeoutEnvVar, envVal: "0s"}, + {name: "smtp timeout", envName: smtpTimeoutEnvVar, envVal: "0s"}, + {name: "attempt worker concurrency", envName: attemptWorkerConcurrencyEnvVar, envVal: "0"}, + {name: "stream block timeout", envName: streamBlockTimeoutEnvVar, envVal: "0s"}, + {name: "operator request timeout", envName: operatorRequestTimeoutEnvVar, envVal: "0s"}, + {name: "idempotency ttl", envName: idempotencyTTLEnvVar, envVal: "0s"}, + {name: "delivery ttl", envName: deliveryTTLEnvVar, envVal: "0s"}, + {name: "attempt ttl", envName: attemptTTLEnvVar, envVal: "0s"}, + } + + for _, tt := range tests { + tt := tt + + t.Run(tt.name, func(t *testing.T) { + 
t.Setenv(redisAddrEnvVar, "127.0.0.1:6379") + t.Setenv(tt.envName, tt.envVal) + if tt.envName == smtpTimeoutEnvVar { + t.Setenv(smtpModeEnvVar, SMTPModeSMTP) + t.Setenv(smtpAddrEnvVar, "127.0.0.1:2525") + t.Setenv(smtpFromEmailEnvVar, "noreply@example.com") + } + + _, err := LoadFromEnv() + require.Error(t, err) + }) + } +} diff --git a/mail/internal/config/env.go b/mail/internal/config/env.go new file mode 100644 index 0000000..7200bcc --- /dev/null +++ b/mail/internal/config/env.go @@ -0,0 +1,205 @@ +package config + +import ( + "fmt" + "os" + "strconv" + "strings" + "time" +) + +// LoadFromEnv builds Config from environment variables and validates the +// resulting configuration. +func LoadFromEnv() (Config, error) { + cfg := DefaultConfig() + + var err error + + cfg.ShutdownTimeout, err = durationEnv(shutdownTimeoutEnvVar, cfg.ShutdownTimeout) + if err != nil { + return Config{}, err + } + + cfg.Logging.Level = stringEnv(logLevelEnvVar, cfg.Logging.Level) + + cfg.InternalHTTP.Addr = stringEnv(internalHTTPAddrEnvVar, cfg.InternalHTTP.Addr) + cfg.InternalHTTP.ReadHeaderTimeout, err = durationEnv(internalHTTPReadHeaderTimeoutEnvVar, cfg.InternalHTTP.ReadHeaderTimeout) + if err != nil { + return Config{}, err + } + cfg.InternalHTTP.ReadTimeout, err = durationEnv(internalHTTPReadTimeoutEnvVar, cfg.InternalHTTP.ReadTimeout) + if err != nil { + return Config{}, err + } + cfg.InternalHTTP.IdleTimeout, err = durationEnv(internalHTTPIdleTimeoutEnvVar, cfg.InternalHTTP.IdleTimeout) + if err != nil { + return Config{}, err + } + + cfg.Redis.Addr = stringEnv(redisAddrEnvVar, cfg.Redis.Addr) + cfg.Redis.Username = stringEnv(redisUsernameEnvVar, cfg.Redis.Username) + cfg.Redis.Password = stringEnv(redisPasswordEnvVar, cfg.Redis.Password) + cfg.Redis.DB, err = intEnv(redisDBEnvVar, cfg.Redis.DB) + if err != nil { + return Config{}, err + } + cfg.Redis.TLSEnabled, err = boolEnv(redisTLSEnabledEnvVar, cfg.Redis.TLSEnabled) + if err != nil { + return Config{}, err + } + 
cfg.Redis.OperationTimeout, err = durationEnv(redisOperationTimeoutEnvVar, cfg.Redis.OperationTimeout) + if err != nil { + return Config{}, err + } + cfg.Redis.CommandStream = stringEnv(redisCommandStreamEnvVar, cfg.Redis.CommandStream) + cfg.Redis.AttemptScheduleKey = stringEnv(redisAttemptScheduleEnvVar, cfg.Redis.AttemptScheduleKey) + cfg.Redis.DeadLetterPrefix = stringEnv(redisDeadLetterPrefixEnvVar, cfg.Redis.DeadLetterPrefix) + + cfg.SMTP.Mode = stringEnv(smtpModeEnvVar, cfg.SMTP.Mode) + cfg.SMTP.Addr = stringEnv(smtpAddrEnvVar, cfg.SMTP.Addr) + cfg.SMTP.Username = stringEnv(smtpUsernameEnvVar, cfg.SMTP.Username) + cfg.SMTP.Password = stringEnv(smtpPasswordEnvVar, cfg.SMTP.Password) + cfg.SMTP.FromEmail = stringEnv(smtpFromEmailEnvVar, cfg.SMTP.FromEmail) + cfg.SMTP.FromName = stringEnv(smtpFromNameEnvVar, cfg.SMTP.FromName) + cfg.SMTP.Timeout, err = durationEnv(smtpTimeoutEnvVar, cfg.SMTP.Timeout) + if err != nil { + return Config{}, err + } + cfg.SMTP.InsecureSkipVerify, err = boolEnv(smtpInsecureSkipVerifyEnvVar, cfg.SMTP.InsecureSkipVerify) + if err != nil { + return Config{}, err + } + + cfg.Templates.Dir = stringEnv(templateDirEnvVar, cfg.Templates.Dir) + + cfg.AttemptWorkerConcurrency, err = intEnv(attemptWorkerConcurrencyEnvVar, cfg.AttemptWorkerConcurrency) + if err != nil { + return Config{}, err + } + cfg.StreamBlockTimeout, err = durationEnv(streamBlockTimeoutEnvVar, cfg.StreamBlockTimeout) + if err != nil { + return Config{}, err + } + cfg.OperatorRequestTimeout, err = durationEnv(operatorRequestTimeoutEnvVar, cfg.OperatorRequestTimeout) + if err != nil { + return Config{}, err + } + cfg.IdempotencyTTL, err = durationEnv(idempotencyTTLEnvVar, cfg.IdempotencyTTL) + if err != nil { + return Config{}, err + } + cfg.DeliveryTTL, err = durationEnv(deliveryTTLEnvVar, cfg.DeliveryTTL) + if err != nil { + return Config{}, err + } + cfg.AttemptTTL, err = durationEnv(attemptTTLEnvVar, cfg.AttemptTTL) + if err != nil { + return Config{}, err + } + + 
cfg.Telemetry.ServiceName = stringEnv(otelServiceNameEnvVar, cfg.Telemetry.ServiceName) + cfg.Telemetry.TracesExporter = normalizeExporterValue(stringEnv(otelTracesExporterEnvVar, cfg.Telemetry.TracesExporter)) + cfg.Telemetry.MetricsExporter = normalizeExporterValue(stringEnv(otelMetricsExporterEnvVar, cfg.Telemetry.MetricsExporter)) + cfg.Telemetry.TracesProtocol = normalizeProtocolValue( + os.Getenv(otelExporterOTLPTracesProtocolEnvVar), + os.Getenv(otelExporterOTLPProtocolEnvVar), + cfg.Telemetry.TracesProtocol, + ) + cfg.Telemetry.MetricsProtocol = normalizeProtocolValue( + os.Getenv(otelExporterOTLPMetricsProtocolEnvVar), + os.Getenv(otelExporterOTLPProtocolEnvVar), + cfg.Telemetry.MetricsProtocol, + ) + cfg.Telemetry.StdoutTracesEnabled, err = boolEnv(otelStdoutTracesEnabledEnvVar, cfg.Telemetry.StdoutTracesEnabled) + if err != nil { + return Config{}, err + } + cfg.Telemetry.StdoutMetricsEnabled, err = boolEnv(otelStdoutMetricsEnabledEnvVar, cfg.Telemetry.StdoutMetricsEnabled) + if err != nil { + return Config{}, err + } + + if err := validateSlogLevel(cfg.Logging.Level); err != nil { + return Config{}, fmt.Errorf("%s: %w", logLevelEnvVar, err) + } + if err := cfg.Validate(); err != nil { + return Config{}, err + } + + return cfg, nil +} + +func stringEnv(name string, fallback string) string { + value, ok := os.LookupEnv(name) + if !ok { + return fallback + } + + return strings.TrimSpace(value) +} + +func durationEnv(name string, fallback time.Duration) (time.Duration, error) { + value, ok := os.LookupEnv(name) + if !ok { + return fallback, nil + } + + parsed, err := time.ParseDuration(strings.TrimSpace(value)) + if err != nil { + return 0, fmt.Errorf("%s: parse duration: %w", name, err) + } + + return parsed, nil +} + +func intEnv(name string, fallback int) (int, error) { + value, ok := os.LookupEnv(name) + if !ok { + return fallback, nil + } + + parsed, err := strconv.Atoi(strings.TrimSpace(value)) + if err != nil { + return 0, fmt.Errorf("%s: parse int: 
%w", name, err) + } + + return parsed, nil +} + +func boolEnv(name string, fallback bool) (bool, error) { + value, ok := os.LookupEnv(name) + if !ok { + return fallback, nil + } + + parsed, err := strconv.ParseBool(strings.TrimSpace(value)) + if err != nil { + return false, fmt.Errorf("%s: parse bool: %w", name, err) + } + + return parsed, nil +} + +func normalizeExporterValue(value string) string { + trimmed := strings.TrimSpace(value) + switch trimmed { + case "", "none": + return "none" + default: + return trimmed + } +} + +func normalizeProtocolValue(primary string, fallback string, defaultValue string) string { + primary = strings.TrimSpace(primary) + if primary != "" { + return primary + } + + fallback = strings.TrimSpace(fallback) + if fallback != "" { + return fallback + } + + return strings.TrimSpace(defaultValue) +} diff --git a/mail/internal/config/validation.go b/mail/internal/config/validation.go new file mode 100644 index 0000000..a10243b --- /dev/null +++ b/mail/internal/config/validation.go @@ -0,0 +1,88 @@ +package config + +import ( + "fmt" + "log/slog" + "net" + "net/mail" + "strings" +) + +// Validate reports whether cfg stores a usable Mail Service process +// configuration. 
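+// As exercised by the env-loading tests, the defaults become valid once the
+// one required field, the Redis address, is set:
+//
+//	cfg := DefaultConfig()
+//	cfg.Redis.Addr = "127.0.0.1:6379"
+//	_ = cfg.Validate() // nil with stub SMTP mode and the default TTLs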
+func (cfg Config) Validate() error { + switch { + case cfg.ShutdownTimeout <= 0: + return fmt.Errorf("%s must be positive", shutdownTimeoutEnvVar) + case cfg.AttemptWorkerConcurrency <= 0: + return fmt.Errorf("%s must be positive", attemptWorkerConcurrencyEnvVar) + case cfg.StreamBlockTimeout <= 0: + return fmt.Errorf("%s must be positive", streamBlockTimeoutEnvVar) + case cfg.OperatorRequestTimeout <= 0: + return fmt.Errorf("%s must be positive", operatorRequestTimeoutEnvVar) + case cfg.IdempotencyTTL <= 0: + return fmt.Errorf("%s must be positive", idempotencyTTLEnvVar) + case cfg.DeliveryTTL <= 0: + return fmt.Errorf("%s must be positive", deliveryTTLEnvVar) + case cfg.AttemptTTL <= 0: + return fmt.Errorf("%s must be positive", attemptTTLEnvVar) + } + + if err := cfg.InternalHTTP.Validate(); err != nil { + return err + } + if err := cfg.Redis.Validate(); err != nil { + return err + } + if err := cfg.SMTP.Validate(); err != nil { + return err + } + if err := cfg.Templates.Validate(); err != nil { + return err + } + if err := cfg.Telemetry.Validate(); err != nil { + return err + } + + return nil +} + +func validateSlogLevel(level string) error { + var slogLevel slog.Level + if err := slogLevel.UnmarshalText([]byte(strings.TrimSpace(level))); err != nil { + return fmt.Errorf("invalid slog level %q: %w", level, err) + } + + return nil +} + +func isTCPAddr(value string) bool { + host, port, err := net.SplitHostPort(strings.TrimSpace(value)) + if err != nil { + return false + } + + if port == "" { + return false + } + + if host == "" { + return true + } + + return !strings.Contains(host, " ") +} + +func validateMailbox(name string, value string) error { + trimmed := strings.TrimSpace(value) + if trimmed == "" { + return fmt.Errorf("%s must not be empty", name) + } + + parsed, err := mail.ParseAddress(trimmed) + if err != nil || parsed == nil || parsed.Name != "" || parsed.Address != trimmed { + return fmt.Errorf("%s %q must be a single valid email address", name, 
value) + } + + return nil +} diff --git a/mail/internal/domain/attempt/model.go b/mail/internal/domain/attempt/model.go new file mode 100644 index 0000000..a71e414 --- /dev/null +++ b/mail/internal/domain/attempt/model.go @@ -0,0 +1,200 @@ +// Package attempt defines the logical delivery-attempt entity owned by Mail +// Service. +package attempt + +import ( + "fmt" + "strings" + "time" + + "galaxy/mail/internal/domain/common" +) + +// Status identifies the lifecycle state of one concrete delivery attempt. +type Status string + +const ( + // StatusScheduled reports that the attempt is durably planned but has not + // started execution yet. + StatusScheduled Status = "scheduled" + + // StatusInProgress reports that one worker currently owns the attempt. + StatusInProgress Status = "in_progress" + + // StatusProviderAccepted reports that the provider accepted the SMTP + // envelope. + StatusProviderAccepted Status = "provider_accepted" + + // StatusProviderRejected reports that the provider rejected the SMTP + // envelope. + StatusProviderRejected Status = "provider_rejected" + + // StatusTransportFailed reports that the attempt failed before a stable + // provider accept or reject result was obtained. + StatusTransportFailed Status = "transport_failed" + + // StatusTimedOut reports that the provider call exceeded the configured + // execution deadline. + StatusTimedOut Status = "timed_out" + + // StatusRenderFailed reports that template rendering failed before any + // provider interaction was attempted. + StatusRenderFailed Status = "render_failed" +) + +// IsKnown reports whether Status is supported by the current Mail Service +// attempt state machine. 
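+// For example:
+//
+//	StatusScheduled.IsKnown()  // true
+//	Status("queued").IsKnown() // false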
+func (status Status) IsKnown() bool { + switch status { + case StatusScheduled, + StatusInProgress, + StatusProviderAccepted, + StatusProviderRejected, + StatusTransportFailed, + StatusTimedOut, + StatusRenderFailed: + return true + default: + return false + } +} + +// IsTerminal reports whether Status can no longer accept a lifecycle +// transition. +func (status Status) IsTerminal() bool { + switch status { + case StatusProviderAccepted, + StatusProviderRejected, + StatusTransportFailed, + StatusTimedOut, + StatusRenderFailed: + return true + default: + return false + } +} + +// CanTransitionTo reports whether the current Status may move to next under +// the frozen Stage 2 attempt lifecycle rules. +func (status Status) CanTransitionTo(next Status) bool { + switch status { + case StatusScheduled: + switch next { + case StatusInProgress, StatusRenderFailed: + return true + } + case StatusInProgress: + switch next { + case StatusProviderAccepted, StatusProviderRejected, StatusTransportFailed, StatusTimedOut: + return true + } + } + + return false +} + +// Attempt stores one durable execution record for a delivery attempt. +type Attempt struct { + // DeliveryID identifies the owning logical delivery. + DeliveryID common.DeliveryID + + // AttemptNo stores the monotonically increasing attempt sequence number. + AttemptNo int + + // ScheduledFor stores when the attempt becomes due. + ScheduledFor time.Time + + // StartedAt stores when a worker claimed the attempt for execution. + StartedAt *time.Time + + // FinishedAt stores when the attempt reached a terminal outcome. + FinishedAt *time.Time + + // Status stores the current attempt lifecycle state. + Status Status + + // ProviderClassification stores provider-specific or adapter-specific + // result classification details when available. + ProviderClassification string + + // ProviderSummary stores redacted provider outcome details when available. 
+ ProviderSummary string +} + +// Validate reports whether Attempt satisfies the frozen Stage 2 structural and +// lifecycle invariants. +func (record Attempt) Validate() error { + if err := record.DeliveryID.Validate(); err != nil { + return fmt.Errorf("attempt delivery id: %w", err) + } + if record.AttemptNo < 1 { + return fmt.Errorf("attempt number must be at least 1") + } + if err := common.ValidateTimestamp("attempt scheduled for", record.ScheduledFor); err != nil { + return err + } + if !record.Status.IsKnown() { + return fmt.Errorf("attempt status %q is unsupported", record.Status) + } + if err := validateOptionalToken("attempt provider classification", record.ProviderClassification); err != nil { + return err + } + if err := validateOptionalToken("attempt provider summary", record.ProviderSummary); err != nil { + return err + } + + switch record.Status { + case StatusScheduled: + if record.StartedAt != nil { + return fmt.Errorf("scheduled attempt must not contain started at") + } + if record.FinishedAt != nil { + return fmt.Errorf("scheduled attempt must not contain finished at") + } + case StatusInProgress: + if record.StartedAt == nil { + return fmt.Errorf("in-progress attempt must contain started at") + } + if err := common.ValidateTimestamp("attempt started at", *record.StartedAt); err != nil { + return err + } + if record.StartedAt.Before(record.ScheduledFor) { + return fmt.Errorf("attempt started at must not be before scheduled for") + } + if record.FinishedAt != nil { + return fmt.Errorf("in-progress attempt must not contain finished at") + } + default: + if record.StartedAt == nil { + return fmt.Errorf("terminal attempt must contain started at") + } + if err := common.ValidateTimestamp("attempt started at", *record.StartedAt); err != nil { + return err + } + if record.StartedAt.Before(record.ScheduledFor) { + return fmt.Errorf("attempt started at must not be before scheduled for") + } + if record.FinishedAt == nil { + return fmt.Errorf("terminal 
attempt must contain finished at") + } + if err := common.ValidateTimestamp("attempt finished at", *record.FinishedAt); err != nil { + return err + } + if record.FinishedAt.Before(*record.StartedAt) { + return fmt.Errorf("attempt finished at must not be before started at") + } + } + + return nil +} + +func validateOptionalToken(name string, value string) error { + if value == "" { + return nil + } + if strings.TrimSpace(value) != value { + return fmt.Errorf("%s must not contain surrounding whitespace", name) + } + + return nil +} diff --git a/mail/internal/domain/attempt/model_test.go b/mail/internal/domain/attempt/model_test.go new file mode 100644 index 0000000..d2ef32f --- /dev/null +++ b/mail/internal/domain/attempt/model_test.go @@ -0,0 +1,168 @@ +package attempt + +import ( + "testing" + "time" + + "galaxy/mail/internal/domain/common" + + "github.com/stretchr/testify/require" +) + +func TestStatusCanTransitionTo(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + from Status + to Status + want bool + }{ + {name: "scheduled to in progress", from: StatusScheduled, to: StatusInProgress, want: true}, + {name: "scheduled to render failed", from: StatusScheduled, to: StatusRenderFailed, want: true}, + {name: "scheduled to accepted", from: StatusScheduled, to: StatusProviderAccepted, want: false}, + {name: "in progress to accepted", from: StatusInProgress, to: StatusProviderAccepted, want: true}, + {name: "in progress to rejected", from: StatusInProgress, to: StatusProviderRejected, want: true}, + {name: "in progress to transport failed", from: StatusInProgress, to: StatusTransportFailed, want: true}, + {name: "in progress to timed out", from: StatusInProgress, to: StatusTimedOut, want: true}, + {name: "accepted terminal", from: StatusProviderAccepted, to: StatusTimedOut, want: false}, + } + + for _, tt := range tests { + tt := tt + + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + require.Equal(t, tt.want, tt.from.CanTransitionTo(tt.to)) + 
}) + } +} + +func TestStatusIsTerminal(t *testing.T) { + t.Parallel() + + require.False(t, StatusScheduled.IsTerminal()) + require.False(t, StatusInProgress.IsTerminal()) + require.True(t, StatusProviderAccepted.IsTerminal()) + require.True(t, StatusProviderRejected.IsTerminal()) + require.True(t, StatusTransportFailed.IsTerminal()) + require.True(t, StatusTimedOut.IsTerminal()) + require.True(t, StatusRenderFailed.IsTerminal()) +} + +func TestAttemptValidate(t *testing.T) { + t.Parallel() + + scheduledFor := time.Unix(1_775_121_700, 0).UTC() + startedAt := scheduledFor.Add(time.Minute) + finishedAt := startedAt.Add(2 * time.Second) + + tests := []struct { + name string + record Attempt + wantErr bool + }{ + { + name: "valid scheduled", + record: Attempt{ + DeliveryID: common.DeliveryID("delivery-123"), + AttemptNo: 1, + ScheduledFor: scheduledFor, + Status: StatusScheduled, + }, + }, + { + name: "valid in progress", + record: Attempt{ + DeliveryID: common.DeliveryID("delivery-123"), + AttemptNo: 2, + ScheduledFor: scheduledFor, + StartedAt: &startedAt, + Status: StatusInProgress, + }, + }, + { + name: "valid terminal", + record: Attempt{ + DeliveryID: common.DeliveryID("delivery-123"), + AttemptNo: 3, + ScheduledFor: scheduledFor, + StartedAt: &startedAt, + FinishedAt: &finishedAt, + Status: StatusProviderAccepted, + }, + }, + { + name: "valid render failed", + record: Attempt{ + DeliveryID: common.DeliveryID("delivery-123"), + AttemptNo: 4, + ScheduledFor: scheduledFor, + StartedAt: &startedAt, + FinishedAt: &finishedAt, + Status: StatusRenderFailed, + ProviderClassification: "missing_required_variable", + ProviderSummary: "missing required variables: player.name", + }, + }, + { + name: "attempt number must be positive", + record: Attempt{ + DeliveryID: common.DeliveryID("delivery-123"), + ScheduledFor: scheduledFor, + Status: StatusScheduled, + }, + wantErr: true, + }, + { + name: "in progress missing started at", + record: Attempt{ + DeliveryID: 
common.DeliveryID("delivery-123"), + AttemptNo: 1, + ScheduledFor: scheduledFor, + Status: StatusInProgress, + }, + wantErr: true, + }, + { + name: "terminal missing finished at", + record: Attempt{ + DeliveryID: common.DeliveryID("delivery-123"), + AttemptNo: 1, + ScheduledFor: scheduledFor, + StartedAt: &startedAt, + Status: StatusProviderRejected, + }, + wantErr: true, + }, + { + name: "finished before started", + record: Attempt{ + DeliveryID: common.DeliveryID("delivery-123"), + AttemptNo: 1, + ScheduledFor: scheduledFor, + StartedAt: &startedAt, + FinishedAt: &scheduledFor, + Status: StatusTimedOut, + }, + wantErr: true, + }, + } + + for _, tt := range tests { + tt := tt + + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + err := tt.record.Validate() + if tt.wantErr { + require.Error(t, err) + return + } + + require.NoError(t, err) + }) + } +} diff --git a/mail/internal/domain/common/types.go b/mail/internal/domain/common/types.go new file mode 100644 index 0000000..bdd8f0a --- /dev/null +++ b/mail/internal/domain/common/types.go @@ -0,0 +1,202 @@ +// Package common defines shared value objects used across the Mail Service +// domain model. +package common + +import ( + "fmt" + "mime" + "net/mail" + "strings" + "time" + + "golang.org/x/text/language" +) + +// DeliveryID identifies one logical mail delivery accepted by Mail Service. +type DeliveryID string + +// String returns DeliveryID as its stored identifier string. +func (id DeliveryID) String() string { + return string(id) +} + +// IsZero reports whether DeliveryID does not contain a usable value. +func (id DeliveryID) IsZero() bool { + return strings.TrimSpace(string(id)) == "" +} + +// Validate reports whether DeliveryID is non-empty and already normalized for +// domain use. +func (id DeliveryID) Validate() error { + return validateToken("delivery id", string(id)) +} + +// TemplateID identifies one template family owned by the filesystem-backed +// Mail Service template catalog. 
+type TemplateID string
+
+// String returns TemplateID as its stored identifier string.
+func (id TemplateID) String() string {
+	return string(id)
+}
+
+// IsZero reports whether TemplateID does not contain a usable value.
+func (id TemplateID) IsZero() bool {
+	return strings.TrimSpace(string(id)) == ""
+}
+
+// Validate reports whether TemplateID is non-empty and already normalized for
+// domain use.
+func (id TemplateID) Validate() error {
+	return validateToken("template id", string(id))
+}
+
+// IdempotencyKey stores the caller-owned key used to deduplicate accepted
+// delivery commands.
+type IdempotencyKey string
+
+// String returns IdempotencyKey as its stored string.
+func (key IdempotencyKey) String() string {
+	return string(key)
+}
+
+// IsZero reports whether IdempotencyKey does not contain a usable value.
+func (key IdempotencyKey) IsZero() bool {
+	return strings.TrimSpace(string(key)) == ""
+}
+
+// Validate reports whether IdempotencyKey is non-empty and already normalized
+// for domain use.
+func (key IdempotencyKey) Validate() error {
+	return validateToken("idempotency key", string(key))
+}
+
+// Email stores one normalized recipient or reply-to address.
+type Email string
+
+// String returns Email as its stored canonical string.
+func (email Email) String() string {
+	return string(email)
+}
+
+// IsZero reports whether Email does not contain a usable address.
+func (email Email) IsZero() bool {
+	return strings.TrimSpace(string(email)) == ""
+}
+
+// Validate reports whether Email is non-empty, trimmed, and matches the same
+// single-address syntax expected by the trusted Mail Service contracts.
+func (email Email) Validate() error {
+	raw := string(email)
+	if err := validateToken("email", raw); err != nil {
+		return err
+	}
+
+	parsedAddress, err := mail.ParseAddress(raw)
+	if err != nil || parsedAddress.Name != "" || parsedAddress.Address != raw {
+		return fmt.Errorf("email %q must be a single valid email address", raw)
+	}
+
+	return nil
+}
+
+// Locale stores one canonical BCP 47 language tag used by template selection
+// and rendering.
+type Locale string
+
+// ParseLocale validates value as a BCP 47 language tag and returns the
+// canonical stored representation used by the Mail Service domain model.
+func ParseLocale(value string) (Locale, error) {
+	if err := validateToken("locale", value); err != nil {
+		return "", err
+	}
+
+	tag, err := language.Parse(value)
+	if err != nil {
+		return "", fmt.Errorf("locale %q must be a valid BCP 47 language tag: %w", value, err)
+	}
+
+	return Locale(tag.String()), nil
+}
+
+// String returns Locale as its stored canonical string.
+func (locale Locale) String() string {
+	return string(locale)
+}
+
+// IsZero reports whether Locale does not contain a usable value.
+func (locale Locale) IsZero() bool {
+	return strings.TrimSpace(string(locale)) == ""
+}
+
+// Validate reports whether Locale stores a canonical BCP 47 language tag.
+func (locale Locale) Validate() error {
+	raw := string(locale)
+	if err := validateToken("locale", raw); err != nil {
+		return err
+	}
+
+	tag, err := language.Parse(raw)
+	if err != nil {
+		return fmt.Errorf("locale %q must be a valid BCP 47 language tag: %w", raw, err)
+	}
+
+	canonical := tag.String()
+	if raw != canonical {
+		return fmt.Errorf("locale %q must use canonical BCP 47 form %q", raw, canonical)
+	}
+
+	return nil
+}
+
+// AttachmentMetadata stores only the durable audit metadata kept for one
+// accepted attachment. Raw bytes remain outside the long-lived domain model.
+type AttachmentMetadata struct {
+	// Filename stores the user-facing attachment filename.
+	Filename string
+
+	// ContentType stores the MIME media type used for SMTP body construction.
+	ContentType string
+
+	// SizeBytes stores the decoded payload size in bytes.
+	SizeBytes int64
+}
+
+// Validate reports whether AttachmentMetadata contains a complete attachment
+// audit entry.
+func (metadata AttachmentMetadata) Validate() error {
+	if err := validateToken("attachment filename", metadata.Filename); err != nil {
+		return err
+	}
+	if err := validateToken("attachment content type", metadata.ContentType); err != nil {
+		return err
+	}
+	if _, _, err := mime.ParseMediaType(metadata.ContentType); err != nil {
+		return fmt.Errorf("attachment content type %q must be a valid MIME media type: %w", metadata.ContentType, err)
+	}
+	if metadata.SizeBytes < 0 {
+		return fmt.Errorf("attachment size bytes must not be negative")
+	}
+
+	return nil
+}
+
+// ValidateTimestamp reports whether value is present.
+func ValidateTimestamp(name string, value time.Time) error {
+	if value.IsZero() {
+		return fmt.Errorf("%s must not be zero", name)
+	}
+
+	return nil
+}
+
+func validateToken(name string, value string) error {
+	switch {
+	case strings.TrimSpace(value) == "":
+		return fmt.Errorf("%s must not be empty", name)
+	case strings.TrimSpace(value) != value:
+		return fmt.Errorf("%s must not contain surrounding whitespace", name)
+	default:
+		return nil
+	}
+}
diff --git a/mail/internal/domain/common/types_test.go b/mail/internal/domain/common/types_test.go
new file mode 100644
index 0000000..ee6288f
--- /dev/null
+++ b/mail/internal/domain/common/types_test.go
@@ -0,0 +1,190 @@
+package common
+
+import (
+	"testing"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestIdentifierValidate(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		name    string
+		run     func() error
+		wantErr bool
+	}{
+		{
+			name: "valid delivery id",
+			run: func() error {
+				return DeliveryID("delivery-123").Validate()
+			},
+		},
+		{
+			name: "valid template id",
+			run: func() error {
+				return TemplateID("auth.login_code").Validate()
+			},
+		},
+		{
+			name: "valid idempotency key",
+			run: func() error {
+				return IdempotencyKey("notification:delivery-123").Validate()
+			},
+		},
+		{
+			name: "empty delivery id",
+			run: func() error {
+				return DeliveryID("").Validate()
+			},
+			wantErr: true,
+		},
+		{
+			name: "template id with whitespace",
+			run: func() error {
+				return TemplateID(" auth.login_code ").Validate()
+			},
+			wantErr: true,
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			err := tt.run()
+			if tt.wantErr {
+				require.Error(t, err)
+				return
+			}
+
+			require.NoError(t, err)
+		})
+	}
+}
+
+func TestEmailValidate(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		name    string
+		value   Email
+		wantErr bool
+	}{
+		{name: "valid", value: Email("pilot@example.com")},
+		{name: "empty", value: Email(""), wantErr: true},
+		{name: "display name forbidden", value: Email("Pilot <pilot@example.com>"), wantErr: true},
+		{name: "whitespace forbidden", value: Email(" pilot@example.com "), wantErr: true},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			err := tt.value.Validate()
+			if tt.wantErr {
+				require.Error(t, err)
+				return
+			}
+
+			require.NoError(t, err)
+		})
+	}
+}
+
+func TestParseLocale(t *testing.T) {
+	t.Parallel()
+
+	value, err := ParseLocale("fr-fr")
+	require.NoError(t, err)
+	require.Equal(t, Locale("fr-FR"), value)
+	require.NoError(t, value.Validate())
+}
+
+func TestLocaleValidate(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		name    string
+		value   Locale
+		wantErr bool
+	}{
+		{name: "canonical language", value: Locale("en")},
+		{name: "canonical regional", value: Locale("fr-FR")},
+		{name: "non canonical", value: Locale("fr-fr"), wantErr: true},
+		{name: "invalid syntax", value: Locale("not a locale"), wantErr: true},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			err := tt.value.Validate()
+			if tt.wantErr {
+				require.Error(t, err)
+				return
+			}
+
+			require.NoError(t, err)
+		})
+	}
+}
+
+func TestAttachmentMetadataValidate(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		name    string
+		value   AttachmentMetadata
+		wantErr bool
+	}{
+		{
+			name: "valid",
+			value: AttachmentMetadata{
+				Filename:    "report.txt",
+				ContentType: "text/plain; charset=utf-8",
+				SizeBytes:   512,
+			},
+		},
+		{
+			name: "invalid content type",
+			value: AttachmentMetadata{
+				Filename:    "report.txt",
+				ContentType: "plain text",
+				SizeBytes:   512,
+			},
+			wantErr: true,
+		},
+		{
+			name: "negative size",
+			value: AttachmentMetadata{
+				Filename:    "report.txt",
+				ContentType: "text/plain",
+				SizeBytes:   -1,
+			},
+			wantErr: true,
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			err := tt.value.Validate()
+			if tt.wantErr {
+				require.Error(t, err)
+				return
+			}
+
+			require.NoError(t, err)
+		})
+	}
+}
diff --git a/mail/internal/domain/delivery/model.go b/mail/internal/domain/delivery/model.go
new file mode 100644
index 0000000..41d903b
--- /dev/null
+++ b/mail/internal/domain/delivery/model.go
@@ -0,0 +1,625 @@
+// Package delivery defines the logical delivery and dead-letter entities owned
+// directly by Mail Service.
+package delivery
+
+import (
+	"encoding/json"
+	"fmt"
+	"strings"
+	"time"
+
+	"galaxy/mail/internal/domain/attempt"
+	"galaxy/mail/internal/domain/common"
+)
+
+// Source identifies the trusted caller or workflow that created one delivery.
+type Source string
+
+const (
+	// SourceAuthSession reports deliveries accepted from Auth / Session Service.
+	SourceAuthSession Source = "authsession"
+
+	// SourceNotification reports deliveries accepted from Notification Service.
+	SourceNotification Source = "notification"
+
+	// SourceOperatorResend reports clone deliveries created by the operator
+	// resend workflow.
+	SourceOperatorResend Source = "operator_resend"
+)
+
+// IsKnown reports whether Source belongs to the frozen v1 source vocabulary.
+func (source Source) IsKnown() bool {
+	switch source {
+	case SourceAuthSession, SourceNotification, SourceOperatorResend:
+		return true
+	default:
+		return false
+	}
+}
+
+// PayloadMode identifies whether the delivery carries pre-rendered content or
+// template-selection metadata.
+type PayloadMode string
+
+const (
+	// PayloadModeRendered reports that the delivery already stores final
+	// rendered content.
+	PayloadModeRendered PayloadMode = "rendered"
+
+	// PayloadModeTemplate reports that final content is produced later from a
+	// template and locale.
+	PayloadModeTemplate PayloadMode = "template"
+)
+
+// IsKnown reports whether PayloadMode is supported by the current domain
+// model.
+func (mode PayloadMode) IsKnown() bool {
+	switch mode {
+	case PayloadModeRendered, PayloadModeTemplate:
+		return true
+	default:
+		return false
+	}
+}
+
+// Status identifies the lifecycle state of one logical mail delivery.
+type Status string
+
+const (
+	// StatusAccepted reports that intake validation succeeded and a durable
+	// delivery record exists.
+	StatusAccepted Status = "accepted"
+
+	// StatusQueued reports that the next attempt is durably scheduled.
+	StatusQueued Status = "queued"
+
+	// StatusRendered reports that template-mode content has been materialized.
+	StatusRendered Status = "rendered"
+
+	// StatusSending reports that one worker currently owns the active attempt.
+	StatusSending Status = "sending"
+
+	// StatusSent reports that the provider accepted the SMTP envelope.
+	StatusSent Status = "sent"
+
+	// StatusSuppressed reports that delivery was intentionally skipped as a
+	// successful business outcome.
+	StatusSuppressed Status = "suppressed"
+
+	// StatusFailed reports that delivery ended in a terminal failure without a
+	// dead-letter entry.
+	StatusFailed Status = "failed"
+
+	// StatusDeadLetter reports that delivery reached an operator-visible
+	// dead-letter state.
+	StatusDeadLetter Status = "dead_letter"
+)
+
+// IsKnown reports whether Status belongs to the frozen v1 delivery lifecycle.
+func (status Status) IsKnown() bool {
+	switch status {
+	case StatusAccepted,
+		StatusQueued,
+		StatusRendered,
+		StatusSending,
+		StatusSent,
+		StatusSuppressed,
+		StatusFailed,
+		StatusDeadLetter:
+		return true
+	default:
+		return false
+	}
+}
+
+// IsTerminal reports whether Status can no longer accept lifecycle
+// transitions.
+func (status Status) IsTerminal() bool {
+	switch status {
+	case StatusSent, StatusSuppressed, StatusFailed, StatusDeadLetter:
+		return true
+	default:
+		return false
+	}
+}
+
+// CanTransitionTo reports whether the current Status may move to next under
+// the frozen Stage 2 delivery lifecycle rules.
+func (status Status) CanTransitionTo(next Status) bool {
+	switch status {
+	case StatusAccepted:
+		switch next {
+		case StatusQueued, StatusSuppressed:
+			return true
+		}
+	case StatusQueued:
+		switch next {
+		case StatusRendered, StatusSending, StatusFailed:
+			return true
+		}
+	case StatusRendered:
+		switch next {
+		case StatusSending, StatusFailed:
+			return true
+		}
+	case StatusSending:
+		switch next {
+		case StatusSent, StatusSuppressed, StatusQueued, StatusFailed, StatusDeadLetter:
+			return true
+		}
+	}
+
+	return false
+}
+
+// AllowsResend reports whether deliveries in Status may be cloned through the
+// trusted resend workflow.
+func (status Status) AllowsResend() bool {
+	switch status {
+	case StatusSent, StatusSuppressed, StatusFailed, StatusDeadLetter:
+		return true
+	default:
+		return false
+	}
+}
+
+// Envelope stores the SMTP-addressing fields of one logical delivery.
+type Envelope struct {
+	// To stores the primary recipients.
+	To []common.Email
+
+	// Cc stores the carbon-copy recipients.
+	Cc []common.Email
+
+	// Bcc stores the blind-carbon-copy recipients.
+	Bcc []common.Email
+
+	// ReplyTo stores the reply-to addresses attached to the message headers.
+	ReplyTo []common.Email
+}
+
+// Validate reports whether Envelope contains only valid addresses and at
+// least one effective recipient.
+func (envelope Envelope) Validate() error {
+	recipientCount := 0
+
+	validateGroup := func(name string, values []common.Email) error {
+		for index, value := range values {
+			if err := value.Validate(); err != nil {
+				return fmt.Errorf("%s[%d]: %w", name, index, err)
+			}
+		}
+		return nil
+	}
+
+	if err := validateGroup("delivery envelope to", envelope.To); err != nil {
+		return err
+	}
+	recipientCount += len(envelope.To)
+
+	if err := validateGroup("delivery envelope cc", envelope.Cc); err != nil {
+		return err
+	}
+	recipientCount += len(envelope.Cc)
+
+	if err := validateGroup("delivery envelope bcc", envelope.Bcc); err != nil {
+		return err
+	}
+	recipientCount += len(envelope.Bcc)
+
+	if err := validateGroup("delivery envelope reply to", envelope.ReplyTo); err != nil {
+		return err
+	}
+
+	if recipientCount == 0 {
+		return fmt.Errorf("delivery envelope must contain at least one recipient")
+	}
+
+	return nil
+}
+
+// Content stores the materialized subject and body parts of one delivery.
+type Content struct {
+	// Subject stores the final subject line.
+	Subject string
+
+	// TextBody stores the final plaintext body.
+	TextBody string
+
+	// HTMLBody stores the optional final HTML body.
+	HTMLBody string
+}
+
+// ValidateMaterialized reports whether Content contains the minimum subject
+// and plaintext body required for a concrete outbound message.
+func (content Content) ValidateMaterialized() error {
+	if content.Subject == "" {
+		return fmt.Errorf("delivery content subject must not be empty")
+	}
+	if content.TextBody == "" {
+		return fmt.Errorf("delivery content text body must not be empty")
+	}
+
+	return nil
+}
+
+// Delivery stores one durable logical mail delivery record.
+type Delivery struct {
+	// DeliveryID identifies the delivery.
+	DeliveryID common.DeliveryID
+
+	// ResendParentDeliveryID identifies the original delivery when the current
+	// record was created by the resend workflow.
+	ResendParentDeliveryID common.DeliveryID
+
+	// Source stores the frozen source vocabulary value.
+	Source Source
+
+	// PayloadMode stores whether the delivery uses pre-rendered content or
+	// deferred template rendering.
+	PayloadMode PayloadMode
+
+	// TemplateID stores the template family used by template-mode deliveries.
+	TemplateID common.TemplateID
+
+	// Envelope stores the SMTP addressing information.
+	Envelope Envelope
+
+	// Content stores the final rendered subject and bodies when materialized.
+	Content Content
+
+	// Attachments stores long-lived attachment metadata only.
+	Attachments []common.AttachmentMetadata
+
+	// Locale stores the canonical locale used for template selection when
+	// applicable.
+	Locale common.Locale
+
+	// LocaleFallbackUsed reports whether rendering fell back from the requested
+	// locale to `en`.
+	LocaleFallbackUsed bool
+
+	// TemplateVariables stores the JSON object used for later template
+	// rendering when PayloadMode is `template`.
+	TemplateVariables map[string]any
+
+	// IdempotencyKey stores the caller-owned deduplication key.
+	IdempotencyKey common.IdempotencyKey
+
+	// Status stores the current delivery lifecycle state.
+	Status Status
+
+	// AttemptCount stores how many attempts have been created for the delivery.
+	AttemptCount int
+
+	// LastAttemptStatus stores the latest recorded attempt outcome when one is
+	// available.
+	LastAttemptStatus attempt.Status
+
+	// ProviderSummary stores redacted provider outcome details when available.
+	ProviderSummary string
+
+	// CreatedAt stores when the delivery was created.
+	CreatedAt time.Time
+
+	// UpdatedAt stores when the delivery was last mutated.
+	UpdatedAt time.Time
+
+	// SentAt stores when the delivery entered the sent terminal state.
+	SentAt *time.Time
+
+	// SuppressedAt stores when the delivery entered the suppressed terminal
+	// state.
+	SuppressedAt *time.Time
+
+	// FailedAt stores when the delivery entered the failed terminal state.
+	FailedAt *time.Time
+
+	// DeadLetteredAt stores when the delivery entered the dead-letter terminal
+	// state.
+	DeadLetteredAt *time.Time
+}
+
+// Validate reports whether Delivery satisfies the frozen Stage 2 structural
+// and lifecycle invariants.
+func (record Delivery) Validate() error {
+	if err := record.DeliveryID.Validate(); err != nil {
+		return fmt.Errorf("delivery id: %w", err)
+	}
+	if !record.Source.IsKnown() {
+		return fmt.Errorf("delivery source %q is unsupported", record.Source)
+	}
+	if !record.PayloadMode.IsKnown() {
+		return fmt.Errorf("delivery payload mode %q is unsupported", record.PayloadMode)
+	}
+	if err := record.Envelope.Validate(); err != nil {
+		return err
+	}
+	for index, attachment := range record.Attachments {
+		if err := attachment.Validate(); err != nil {
+			return fmt.Errorf("delivery attachments[%d]: %w", index, err)
+		}
+	}
+	if err := record.IdempotencyKey.Validate(); err != nil {
+		return fmt.Errorf("delivery idempotency key: %w", err)
+	}
+	if !record.Status.IsKnown() {
+		return fmt.Errorf("delivery status %q is unsupported", record.Status)
+	}
+	if record.AttemptCount < 0 {
+		return fmt.Errorf("delivery attempt count must not be negative")
+	}
+	if record.LastAttemptStatus != "" && !record.LastAttemptStatus.IsKnown() {
+		return fmt.Errorf("delivery last attempt status %q is unsupported", record.LastAttemptStatus)
+	}
+	if err := validateOptionalToken("delivery provider summary", record.ProviderSummary); err != nil {
+		return err
+	}
+	if err := common.ValidateTimestamp("delivery created at", record.CreatedAt); err != nil {
+		return err
+	}
+	if err := common.ValidateTimestamp("delivery updated at", record.UpdatedAt); err != nil {
+		return err
+	}
+	if record.UpdatedAt.Before(record.CreatedAt) {
+		return fmt.Errorf("delivery updated at must not be before created at")
+	}
+
+	switch record.Source {
+	case SourceOperatorResend:
+		if err := record.ResendParentDeliveryID.Validate(); err != nil {
+			return fmt.Errorf("delivery resend parent delivery id: %w", err)
+		}
+		if record.ResendParentDeliveryID == record.DeliveryID {
+			return fmt.Errorf("delivery resend parent delivery id must differ from delivery id")
+		}
+	default:
+		if !record.ResendParentDeliveryID.IsZero() {
+			return fmt.Errorf("delivery resend parent delivery id must be empty unless source is %q", SourceOperatorResend)
+		}
+	}
+
+	switch record.PayloadMode {
+	case PayloadModeRendered:
+		if !record.TemplateID.IsZero() {
+			return fmt.Errorf("rendered delivery must not contain template id")
+		}
+		if !record.Locale.IsZero() {
+			return fmt.Errorf("rendered delivery must not contain locale")
+		}
+		if record.LocaleFallbackUsed {
+			return fmt.Errorf("rendered delivery must not mark locale fallback")
+		}
+		if len(record.TemplateVariables) != 0 {
+			return fmt.Errorf("rendered delivery must not contain template variables")
+		}
+		if err := record.Content.ValidateMaterialized(); err != nil {
+			return err
+		}
+	case PayloadModeTemplate:
+		if err := record.TemplateID.Validate(); err != nil {
+			return fmt.Errorf("delivery template id: %w", err)
+		}
+		if err := record.Locale.Validate(); err != nil {
+			return fmt.Errorf("delivery locale: %w", err)
+		}
+		if err := validateJSONObject("delivery template variables", record.TemplateVariables); err != nil {
+			return err
+		}
+		if record.Status == StatusRendered || record.Status == StatusSending || record.Status == StatusSent {
+			if err := record.Content.ValidateMaterialized(); err != nil {
+				return err
+			}
+		}
+	}
+
+	if record.Status == StatusRendered && record.PayloadMode != PayloadModeTemplate {
+		return fmt.Errorf("delivery status %q requires payload mode %q", StatusRendered, PayloadModeTemplate)
+	}
+
+	if err := validateTerminalTimestamps(record); err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// DeadLetterEntry stores the operator-visible dead-letter record for one
+// delivery that exhausted normal automated handling.
+type DeadLetterEntry struct {
+	// DeliveryID identifies the dead-lettered delivery.
+	DeliveryID common.DeliveryID
+
+	// FinalAttemptNo stores the last attempt number associated with the
+	// dead-letter transition.
+	FinalAttemptNo int
+
+	// FailureClassification stores the final machine-readable failure class.
+	FailureClassification string
+
+	// ProviderSummary stores redacted provider outcome details when available.
+	ProviderSummary string
+
+	// CreatedAt stores when the dead-letter entry was created.
+	CreatedAt time.Time
+
+	// RecoveryHint stores an optional operator-facing recovery note.
+	RecoveryHint string
+}
+
+// Validate reports whether DeadLetterEntry contains a complete dead-letter
+// record.
+func (entry DeadLetterEntry) Validate() error {
+	if err := entry.DeliveryID.Validate(); err != nil {
+		return fmt.Errorf("dead-letter delivery id: %w", err)
+	}
+	if entry.FinalAttemptNo < 1 {
+		return fmt.Errorf("dead-letter final attempt number must be at least 1")
+	}
+	if err := validateToken("dead-letter failure classification", entry.FailureClassification); err != nil {
+		return err
+	}
+	if err := validateOptionalToken("dead-letter provider summary", entry.ProviderSummary); err != nil {
+		return err
+	}
+	if err := validateOptionalToken("dead-letter recovery hint", entry.RecoveryHint); err != nil {
+		return err
+	}
+	if err := common.ValidateTimestamp("dead-letter created at", entry.CreatedAt); err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// ValidateFor reports whether entry is the required dead-letter record for
+// record.
+func (entry DeadLetterEntry) ValidateFor(record Delivery) error {
+	if err := record.Validate(); err != nil {
+		return err
+	}
+	if err := entry.Validate(); err != nil {
+		return err
+	}
+	if record.Status != StatusDeadLetter {
+		return fmt.Errorf("dead-letter entry requires delivery status %q", StatusDeadLetter)
+	}
+	if entry.DeliveryID != record.DeliveryID {
+		return fmt.Errorf("dead-letter delivery id must match delivery id")
+	}
+	if record.AttemptCount < entry.FinalAttemptNo {
+		return fmt.Errorf("dead-letter final attempt number must not exceed delivery attempt count")
+	}
+	if record.DeadLetteredAt == nil {
+		return fmt.Errorf("dead-letter delivery must contain dead-lettered at")
+	}
+	if entry.CreatedAt.Before(*record.DeadLetteredAt) {
+		return fmt.Errorf("dead-letter created at must not be before delivery dead-lettered at")
+	}
+
+	return nil
+}
+
+// ValidateDeadLetterState reports whether record and entry satisfy the frozen
+// rule that only dead-lettered deliveries may own a dead-letter entry.
+func ValidateDeadLetterState(record Delivery, entry *DeadLetterEntry) error {
+	if err := record.Validate(); err != nil {
+		return err
+	}
+
+	if record.Status == StatusDeadLetter {
+		if entry == nil {
+			return fmt.Errorf("dead-letter delivery requires dead-letter entry")
+		}
+		return entry.ValidateFor(record)
+	}
+
+	if entry != nil {
+		return fmt.Errorf("dead-letter entry is not allowed for delivery status %q", record.Status)
+	}
+
+	return nil
+}
+
+func validateTerminalTimestamps(record Delivery) error {
+	if record.SentAt != nil {
+		if err := common.ValidateTimestamp("delivery sent at", *record.SentAt); err != nil {
+			return err
+		}
+		if record.SentAt.Before(record.CreatedAt) {
+			return fmt.Errorf("delivery sent at must not be before created at")
+		}
+	}
+	if record.SuppressedAt != nil {
+		if err := common.ValidateTimestamp("delivery suppressed at", *record.SuppressedAt); err != nil {
+			return err
+		}
+		if record.SuppressedAt.Before(record.CreatedAt) {
+			return fmt.Errorf("delivery suppressed at must not be before created at")
+		}
+	}
+	if record.FailedAt != nil {
+		if err := common.ValidateTimestamp("delivery failed at", *record.FailedAt); err != nil {
+			return err
+		}
+		if record.FailedAt.Before(record.CreatedAt) {
+			return fmt.Errorf("delivery failed at must not be before created at")
+		}
+	}
+	if record.DeadLetteredAt != nil {
+		if err := common.ValidateTimestamp("delivery dead-lettered at", *record.DeadLetteredAt); err != nil {
+			return err
+		}
+		if record.DeadLetteredAt.Before(record.CreatedAt) {
+			return fmt.Errorf("delivery dead-lettered at must not be before created at")
+		}
+	}
+
+	switch record.Status {
+	case StatusAccepted, StatusQueued, StatusRendered, StatusSending:
+		if record.SentAt != nil || record.SuppressedAt != nil || record.FailedAt != nil || record.DeadLetteredAt != nil {
+			return fmt.Errorf("non-terminal delivery must not contain terminal timestamp fields")
+		}
+	case StatusSent:
+		if record.SentAt == nil {
+			return fmt.Errorf("sent delivery must contain sent at")
+		}
+		if record.SuppressedAt != nil || record.FailedAt != nil || record.DeadLetteredAt != nil {
+			return fmt.Errorf("sent delivery must not contain other terminal timestamp fields")
+		}
+	case StatusSuppressed:
+		if record.SuppressedAt == nil {
+			return fmt.Errorf("suppressed delivery must contain suppressed at")
+		}
+		if record.SentAt != nil || record.FailedAt != nil || record.DeadLetteredAt != nil {
+			return fmt.Errorf("suppressed delivery must not contain other terminal timestamp fields")
+		}
+	case StatusFailed:
+		if record.FailedAt == nil {
+			return fmt.Errorf("failed delivery must contain failed at")
+		}
+		if record.SentAt != nil || record.SuppressedAt != nil || record.DeadLetteredAt != nil {
+			return fmt.Errorf("failed delivery must not contain other terminal timestamp fields")
+		}
+	case StatusDeadLetter:
+		if record.DeadLetteredAt == nil {
+			return fmt.Errorf("dead-letter delivery must contain dead-lettered at")
+		}
+		if record.SentAt != nil || record.SuppressedAt != nil || record.FailedAt != nil {
+			return fmt.Errorf("dead-letter delivery must not contain other terminal timestamp fields")
+		}
+	}
+
+	return nil
+}
+
+func validateToken(name string, value string) error {
+	switch {
+	case strings.TrimSpace(value) == "":
+		return fmt.Errorf("%s must not be empty", name)
+	case strings.TrimSpace(value) != value:
+		return fmt.Errorf("%s must not contain surrounding whitespace", name)
+	default:
+		return nil
+	}
+}
+
+func validateOptionalToken(name string, value string) error {
+	if value == "" {
+		return nil
+	}
+
+	return validateToken(name, value)
+}
+
+func validateJSONObject(name string, value map[string]any) error {
+	if value == nil {
+		return fmt.Errorf("%s must not be nil", name)
+	}
+
+	if _, err := json.Marshal(value); err != nil {
+		return fmt.Errorf("%s must be JSON-serializable: %w", name, err)
+	}
+
+	return nil
+}
diff --git a/mail/internal/domain/delivery/model_test.go b/mail/internal/domain/delivery/model_test.go
new file mode 100644
index 0000000..685f369
--- /dev/null
+++ b/mail/internal/domain/delivery/model_test.go
@@ -0,0 +1,321 @@
+package delivery
+
+import (
+	"testing"
+	"time"
+
+	"galaxy/mail/internal/domain/attempt"
+	"galaxy/mail/internal/domain/common"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestStatusCanTransitionTo(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		name string
+		from Status
+		to   Status
+		want bool
+	}{
+		{name: "accepted to queued", from: StatusAccepted, to: StatusQueued, want: true},
+		{name: "accepted to suppressed", from: StatusAccepted, to: StatusSuppressed, want: true},
+		{name: "accepted to sent", from: StatusAccepted, to: StatusSent, want: false},
+		{name: "queued to rendered", from: StatusQueued, to: StatusRendered, want: true},
+		{name: "queued to sending", from: StatusQueued, to: StatusSending, want: true},
+		{name: "queued to failed", from: StatusQueued, to: StatusFailed, want: true},
+		{name: "rendered to sending", from: StatusRendered, to: StatusSending, want: true},
+		{name: "rendered to failed", from: StatusRendered, to: StatusFailed, want: true},
+		{name: "sending to sent", from: StatusSending, to: StatusSent, want: true},
+		{name: "sending to dead letter", from: StatusSending, to: StatusDeadLetter, want: true},
+		{name: "failed terminal", from: StatusFailed, to: StatusDeadLetter, want: false},
+		{name: "dead letter terminal", from: StatusDeadLetter, to: StatusQueued, want: false},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			require.Equal(t, tt.want, tt.from.CanTransitionTo(tt.to))
+		})
+	}
+}
+
+func TestStatusTerminalAndResend(t *testing.T) {
+	t.Parallel()
+
+	require.False(t, StatusAccepted.IsTerminal())
+	require.False(t, StatusQueued.AllowsResend())
+	require.True(t, StatusSent.IsTerminal())
+	require.True(t, StatusSent.AllowsResend())
+	require.True(t, StatusSuppressed.AllowsResend())
+	require.True(t, StatusFailed.AllowsResend())
+	require.True(t, StatusDeadLetter.AllowsResend())
+}
+
+func TestDeliveryValidate(t *testing.T) {
+	t.Parallel()
+
+	base := validRenderedDelivery(t)
+	templateQueued := validTemplateQueuedDelivery(t)
+
+	tests := []struct {
+		name    string
+		record  Delivery
+		wantErr bool
+	}{
+		{name: "valid rendered delivery", record: base},
+		{name: "valid template queued delivery", record: templateQueued},
+		{
+			name: "operator resend requires parent id",
+			record: func() Delivery {
+				record := base
+				record.Source = SourceOperatorResend
+				record.ResendParentDeliveryID = ""
+				return record
+			}(),
+			wantErr: true,
+		},
+		{
+			name: "non resend must not carry parent id",
+			record: func() Delivery {
+				record := base
+				record.ResendParentDeliveryID = common.DeliveryID("delivery-parent")
+				return record
+			}(),
+			wantErr: true,
+		},
+		{
+			name: "rendered status requires template mode",
+			record: func() Delivery {
+				record := base
+				record.Status = StatusRendered
+				record.UpdatedAt = record.CreatedAt.Add(time.Minute)
+				record.SentAt = nil
+				return record
+			}(),
+			wantErr: true,
+		},
+		{
+			name: "rendered payload requires materialized content",
+			record: func() Delivery {
+				record := base
+				record.Content = Content{}
+				return record
+			}(),
+			wantErr: true,
+		},
+		{
+			name: "template mode requires template id",
+			record: func() Delivery {
+				record := templateQueued
+				record.TemplateID = ""
+				return record
+			}(),
+			wantErr: true,
+		},
+		{
+			name: "template mode requires locale",
+			record: func() Delivery {
+				record := templateQueued
+				record.Locale = ""
+				return record
+			}(),
+			wantErr: true,
+		},
+		{
+			name: "template mode requires template variables",
+			record: func() Delivery {
+				record := templateQueued
+				record.TemplateVariables = nil
+				return record
+			}(),
+			wantErr: true,
+		},
+		{
+			name: "template rendered requires content",
+			record: func() Delivery {
+				record := templateQueued
+				record.Status = StatusRendered
+				record.UpdatedAt = record.CreatedAt.Add(2 * time.Minute)
+				record.Content = Content{}
+				return record
+			}(),
+			wantErr: true,
+		},
+		{
+			name: "non terminal must not carry terminal timestamps",
+			record: func() Delivery {
+				record := templateQueued
+				record.FailedAt = ptrTime(record.CreatedAt.Add(time.Minute))
+				return record
+			}(),
+			wantErr: true,
+		},
+		{
+			name: "rendered delivery must not contain template variables",
+			record: func() Delivery {
+				record := base
+				record.TemplateVariables = map[string]any{"code": "123456"}
+				return record
+			}(),
+			wantErr: true,
+		},
+		{
+			name: "template variables must be json serializable",
+			record: func() Delivery {
+				record := templateQueued
+				record.TemplateVariables = map[string]any{"invalid": func() {}}
+				return record
+			}(),
+			wantErr: true,
+		},
+		{
+			name: "failed requires failed at",
+			record: func() Delivery {
+				record := templateQueued
+				record.Status = StatusFailed
+				record.UpdatedAt = record.CreatedAt.Add(2 * time.Minute)
+				return record
+			}(),
+			wantErr: true,
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+
+			err := tt.record.Validate()
+			if tt.wantErr {
+				require.Error(t, err)
+				return
+			}
+
+			require.NoError(t, err)
+		})
+	}
+}
+
+func TestValidateDeadLetterState(t *testing.T) {
+	t.Parallel()
+
+	record := validDeadLetterDelivery(t)
+	entry := validDeadLetterEntry(t, record)
+
+	require.NoError(t, ValidateDeadLetterState(record, &entry))
+
+	err := ValidateDeadLetterState(record, nil)
+	require.Error(t, err)
+
+	failed := validTemplateQueuedDelivery(t)
+	failed.Status = StatusFailed
+	failed.UpdatedAt = failed.CreatedAt.Add(2 * time.Minute)
+	failed.FailedAt = ptrTime(failed.CreatedAt.Add(2 * time.Minute))
+	require.NoError(t, ValidateDeadLetterState(failed, nil))
+	require.Error(t, ValidateDeadLetterState(failed, &entry))
+
+	mismatched := entry
+	mismatched.DeliveryID = common.DeliveryID("delivery-other")
+	require.Error(t, ValidateDeadLetterState(record, &mismatched))
+}
+
+func validRenderedDelivery(t *testing.T) Delivery {
+	t.Helper()
+
+	createdAt := time.Unix(1_775_121_700, 0).UTC()
+	sentAt := createdAt.Add(5 * time.Minute)
+
+	record := Delivery{
+		DeliveryID:        common.DeliveryID("delivery-123"),
+		Source:            SourceNotification,
+		PayloadMode:       PayloadModeRendered,
+		Envelope:          validEnvelope(),
+		Content:           Content{Subject: "Turn ready", TextBody: "Turn 54 is ready."},
+		Attachments:       []common.AttachmentMetadata{{Filename: "report.txt", ContentType: "text/plain", SizeBytes: 64}},
+		TemplateVariables: nil,
+		IdempotencyKey:    common.IdempotencyKey("notification:delivery-123"),
+		Status:            StatusSent,
+		AttemptCount:      1,
+		LastAttemptStatus: attempt.StatusProviderAccepted,
+		ProviderSummary:   "queued by provider",
+		CreatedAt:         createdAt,
+		UpdatedAt:         sentAt,
+		SentAt:            &sentAt,
+	}
+
+	require.NoError(t, record.Validate())
+	return record
+}
+
+func validTemplateQueuedDelivery(t *testing.T) Delivery {
+	t.Helper()
+
+	createdAt := time.Unix(1_775_121_700, 0).UTC()
+	locale, err := common.ParseLocale("fr-fr")
+	require.NoError(t, err)
+
+	record := Delivery{
+		DeliveryID:  common.DeliveryID("delivery-124"),
+		Source:      SourceNotification,
+		PayloadMode: PayloadModeTemplate,
+		TemplateID:  common.TemplateID("game.turn_ready"),
+		Envelope:    validEnvelope(),
+		Locale:      locale,
+		TemplateVariables: map[string]any{
+			"turn_number": float64(54),
+		},
+		IdempotencyKey: common.IdempotencyKey("notification:delivery-124"),
+		Status:         StatusQueued,
+		CreatedAt:      createdAt,
+		UpdatedAt:      createdAt.Add(time.Minute),
+	}
+
+	require.NoError(t, record.Validate())
+	return record
+}
+
+func validDeadLetterDelivery(t *testing.T) Delivery {
+	t.Helper()
+
+	record := validTemplateQueuedDelivery(t)
+	record.Status = StatusDeadLetter
+	record.AttemptCount = 3
+	record.LastAttemptStatus = attempt.StatusTimedOut
+	record.UpdatedAt = record.CreatedAt.Add(10 * time.Minute)
+	record.DeadLetteredAt = ptrTime(record.CreatedAt.Add(10 * time.Minute))
+
require.NoError(t, record.Validate()) + return record +} + +func validDeadLetterEntry(t *testing.T, record Delivery) DeadLetterEntry { + t.Helper() + + entry := DeadLetterEntry{ + DeliveryID: record.DeliveryID, + FinalAttemptNo: 3, + FailureClassification: "retry_exhausted", + ProviderSummary: "smtp timeout", + CreatedAt: record.DeadLetteredAt.Add(time.Second), + RecoveryHint: "check SMTP connectivity", + } + + require.NoError(t, entry.ValidateFor(record)) + return entry +} + +func validEnvelope() Envelope { + return Envelope{ + To: []common.Email{"pilot@example.com"}, + } +} + +func ptrTime(value time.Time) *time.Time { + return &value +} diff --git a/mail/internal/domain/idempotency/model.go b/mail/internal/domain/idempotency/model.go new file mode 100644 index 0000000..30dff32 --- /dev/null +++ b/mail/internal/domain/idempotency/model.go @@ -0,0 +1,74 @@ +// Package idempotency defines the deduplication record used by Mail Service +// acceptance flows. +package idempotency + +import ( + "fmt" + "strings" + "time" + + "galaxy/mail/internal/domain/common" + "galaxy/mail/internal/domain/delivery" +) + +// Record stores the first accepted fingerprint bound to one `(source, +// idempotency_key)` scope. +type Record struct { + // Source stores the frozen delivery source vocabulary value. + Source delivery.Source + + // IdempotencyKey stores the caller-owned deduplication key. + IdempotencyKey common.IdempotencyKey + + // DeliveryID stores the accepted delivery linked to the scope. + DeliveryID common.DeliveryID + + // RequestFingerprint stores the stable fingerprint of the first accepted + // request. + RequestFingerprint string + + // CreatedAt stores when the deduplication record was created. + CreatedAt time.Time + + // ExpiresAt stores when the deduplication record becomes invalid. + ExpiresAt time.Time +} + +// Validate reports whether Record satisfies the frozen Stage 2 structural +// invariants. 
+func (record Record) Validate() error { + if !record.Source.IsKnown() { + return fmt.Errorf("idempotency source %q is unsupported", record.Source) + } + if err := record.IdempotencyKey.Validate(); err != nil { + return fmt.Errorf("idempotency key: %w", err) + } + if err := record.DeliveryID.Validate(); err != nil { + return fmt.Errorf("idempotency delivery id: %w", err) + } + if err := validateToken("idempotency request fingerprint", record.RequestFingerprint); err != nil { + return err + } + if err := common.ValidateTimestamp("idempotency created at", record.CreatedAt); err != nil { + return err + } + if err := common.ValidateTimestamp("idempotency expires at", record.ExpiresAt); err != nil { + return err + } + if !record.ExpiresAt.After(record.CreatedAt) { + return fmt.Errorf("idempotency expires at must be after created at") + } + + return nil +} + +func validateToken(name string, value string) error { + switch { + case strings.TrimSpace(value) == "": + return fmt.Errorf("%s must not be empty", name) + case strings.TrimSpace(value) != value: + return fmt.Errorf("%s must not contain surrounding whitespace", name) + default: + return nil + } +} diff --git a/mail/internal/domain/idempotency/model_test.go b/mail/internal/domain/idempotency/model_test.go new file mode 100644 index 0000000..50dd738 --- /dev/null +++ b/mail/internal/domain/idempotency/model_test.go @@ -0,0 +1,74 @@ +package idempotency + +import ( + "testing" + "time" + + "galaxy/mail/internal/domain/common" + "galaxy/mail/internal/domain/delivery" + + "github.com/stretchr/testify/require" +) + +func TestRecordValidate(t *testing.T) { + t.Parallel() + + createdAt := time.Unix(1_775_121_700, 0).UTC() + + tests := []struct { + name string + record Record + wantErr bool + }{ + { + name: "valid", + record: Record{ + Source: delivery.SourceNotification, + IdempotencyKey: common.IdempotencyKey("notification:delivery-123"), + DeliveryID: common.DeliveryID("delivery-123"), + RequestFingerprint: 
"sha256:abcdef", + CreatedAt: createdAt, + ExpiresAt: createdAt.Add(7 * 24 * time.Hour), + }, + }, + { + name: "expires at must be after created at", + record: Record{ + Source: delivery.SourceNotification, + IdempotencyKey: common.IdempotencyKey("notification:delivery-123"), + DeliveryID: common.DeliveryID("delivery-123"), + RequestFingerprint: "sha256:abcdef", + CreatedAt: createdAt, + ExpiresAt: createdAt, + }, + wantErr: true, + }, + { + name: "fingerprint required", + record: Record{ + Source: delivery.SourceNotification, + IdempotencyKey: common.IdempotencyKey("notification:delivery-123"), + DeliveryID: common.DeliveryID("delivery-123"), + CreatedAt: createdAt, + ExpiresAt: createdAt.Add(time.Hour), + }, + wantErr: true, + }, + } + + for _, tt := range tests { + tt := tt + + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + err := tt.record.Validate() + if tt.wantErr { + require.Error(t, err) + return + } + + require.NoError(t, err) + }) + } +} diff --git a/mail/internal/domain/malformedcommand/model.go b/mail/internal/domain/malformedcommand/model.go new file mode 100644 index 0000000..b924008 --- /dev/null +++ b/mail/internal/domain/malformedcommand/model.go @@ -0,0 +1,130 @@ +// Package malformedcommand defines the operator-visible record used for +// malformed asynchronous generic delivery commands. +package malformedcommand + +import ( + "encoding/json" + "fmt" + "strings" + "time" + + "galaxy/mail/internal/domain/common" +) + +// FailureCode identifies the stable malformed-command rejection reason. +type FailureCode string + +const ( + // FailureCodeInvalidEnvelope reports that the command could not be accepted + // because the recipient envelope was invalid. + FailureCodeInvalidEnvelope FailureCode = "invalid_envelope" + + // FailureCodeInvalidPayload reports that the command payload could not be + // decoded or validated. 
+ FailureCodeInvalidPayload FailureCode = "invalid_payload" + + // FailureCodeInvalidCommand reports that the top-level stream envelope was + // malformed or unsupported. + FailureCodeInvalidCommand FailureCode = "invalid_command" + + // FailureCodeIdempotencyConflict reports that the stream command reused an + // existing idempotency scope with a different request fingerprint. + FailureCodeIdempotencyConflict FailureCode = "idempotency_conflict" +) + +// IsKnown reports whether code belongs to the frozen malformed-command +// rejection surface. +func (code FailureCode) IsKnown() bool { + switch code { + case FailureCodeInvalidEnvelope, + FailureCodeInvalidPayload, + FailureCodeInvalidCommand, + FailureCodeIdempotencyConflict: + return true + default: + return false + } +} + +// Entry stores one operator-visible malformed asynchronous command record. +type Entry struct { + // StreamEntryID stores the Redis Stream entry identifier of the malformed + // command. + StreamEntryID string + + // DeliveryID stores the optional raw delivery identifier extracted from the + // stream entry when available. + DeliveryID string + + // Source stores the optional raw source value extracted from the stream + // entry when available. + Source string + + // IdempotencyKey stores the optional raw idempotency key extracted from the + // stream entry when available. + IdempotencyKey string + + // FailureCode stores the stable malformed-command rejection reason. + FailureCode FailureCode + + // FailureMessage stores the detailed validation or decoding failure. + FailureMessage string + + // RawFields stores the raw top-level stream fields captured for later + // operator inspection. + RawFields map[string]any + + // RecordedAt stores when the malformed command was durably recorded. + RecordedAt time.Time +} + +// Validate reports whether entry contains a complete malformed-command record. 
+func (entry Entry) Validate() error { + if strings.TrimSpace(entry.StreamEntryID) == "" { + return fmt.Errorf("malformed command stream entry id must not be empty") + } + if !entry.FailureCode.IsKnown() { + return fmt.Errorf("malformed command failure code %q is unsupported", entry.FailureCode) + } + if strings.TrimSpace(entry.FailureMessage) == "" { + return fmt.Errorf("malformed command failure message must not be empty") + } + if strings.TrimSpace(entry.FailureMessage) != entry.FailureMessage { + return fmt.Errorf("malformed command failure message must not contain surrounding whitespace") + } + if entry.RawFields == nil { + return fmt.Errorf("malformed command raw fields must not be nil") + } + if err := validateJSONObject("malformed command raw fields", entry.RawFields); err != nil { + return err + } + if err := common.ValidateTimestamp("malformed command recorded at", entry.RecordedAt); err != nil { + return err + } + + return nil +} + +func validateJSONObject(name string, value map[string]any) error { + if value == nil { + return fmt.Errorf("%s must not be nil", name) + } + + payload, err := json.Marshal(value) + if err != nil { + return fmt.Errorf("%s: %w", name, err) + } + if string(payload) == "null" { + return fmt.Errorf("%s must encode as a JSON object", name) + } + + var decoded map[string]any + if err := json.Unmarshal(payload, &decoded); err != nil { + return fmt.Errorf("%s: %w", name, err) + } + if decoded == nil { + return fmt.Errorf("%s must encode as a JSON object", name) + } + + return nil +} diff --git a/mail/internal/domain/malformedcommand/model_test.go b/mail/internal/domain/malformedcommand/model_test.go new file mode 100644 index 0000000..263fcd3 --- /dev/null +++ b/mail/internal/domain/malformedcommand/model_test.go @@ -0,0 +1,61 @@ +package malformedcommand + +import ( + "testing" + "time" + + "github.com/stretchr/testify/require" +) + +func TestEntryValidate(t *testing.T) { + t.Parallel() + + entry := Entry{ + StreamEntryID: 
"1775121700000-0", + DeliveryID: "mail-123", + Source: "notification", + IdempotencyKey: "notification:mail-123", + FailureCode: FailureCodeInvalidPayload, + FailureMessage: "payload_json.subject is required", + RawFields: map[string]any{ + "delivery_id": "mail-123", + "source": "notification", + "payload_mode": "rendered", + "idempotency_key": "notification:mail-123", + }, + RecordedAt: time.Unix(1_775_121_700, 0).UTC(), + } + + require.NoError(t, entry.Validate()) +} + +func TestEntryValidateRejectsInvalidValue(t *testing.T) { + t.Parallel() + + entry := Entry{ + StreamEntryID: "1775121700000-0", + FailureCode: FailureCode("unsupported"), + FailureMessage: "failure", + RawFields: map[string]any{}, + RecordedAt: time.Unix(1_775_121_700, 0).UTC(), + } + + err := entry.Validate() + require.Error(t, err) + require.ErrorContains(t, err, "failure code") +} + +func TestEntryValidateRejectsNilRawFields(t *testing.T) { + t.Parallel() + + entry := Entry{ + StreamEntryID: "1775121700000-0", + FailureCode: FailureCodeInvalidCommand, + FailureMessage: "missing required fields", + RecordedAt: time.Unix(1_775_121_700, 0).UTC(), + } + + err := entry.Validate() + require.Error(t, err) + require.ErrorContains(t, err, "raw fields") +} diff --git a/mail/internal/domain/template/model.go b/mail/internal/domain/template/model.go new file mode 100644 index 0000000..c1a5bc2 --- /dev/null +++ b/mail/internal/domain/template/model.go @@ -0,0 +1,65 @@ +// Package template defines the logical template entity used by the +// filesystem-backed Mail Service template catalog. +package template + +import ( + "fmt" + "strings" + + "galaxy/mail/internal/domain/common" +) + +// Template stores one locale-specific template bundle. +type Template struct { + // TemplateID identifies the template family. + TemplateID common.TemplateID + + // Locale stores the canonical locale of the template variant. + Locale common.Locale + + // SubjectTemplate stores the subject template source. 
+ SubjectTemplate string + + // TextTemplate stores the plaintext body template source. + TextTemplate string + + // HTMLTemplate stores the optional HTML body template source. + HTMLTemplate string + + // Version stores the template version marker projected into the domain + // model. + Version string +} + +// Validate reports whether Template satisfies the frozen Stage 2 structural +// invariants. +func (record Template) Validate() error { + if err := record.TemplateID.Validate(); err != nil { + return fmt.Errorf("template id: %w", err) + } + if err := record.Locale.Validate(); err != nil { + return fmt.Errorf("template locale: %w", err) + } + if record.SubjectTemplate == "" { + return fmt.Errorf("template subject template must not be empty") + } + if record.TextTemplate == "" { + return fmt.Errorf("template text template must not be empty") + } + if err := validateToken("template version", record.Version); err != nil { + return err + } + + return nil +} + +func validateToken(name string, value string) error { + switch { + case strings.TrimSpace(value) == "": + return fmt.Errorf("%s must not be empty", name) + case strings.TrimSpace(value) != value: + return fmt.Errorf("%s must not contain surrounding whitespace", name) + default: + return nil + } +} diff --git a/mail/internal/domain/template/model_test.go b/mail/internal/domain/template/model_test.go new file mode 100644 index 0000000..b1ec93b --- /dev/null +++ b/mail/internal/domain/template/model_test.go @@ -0,0 +1,71 @@ +package template + +import ( + "testing" + + "galaxy/mail/internal/domain/common" + + "github.com/stretchr/testify/require" +) + +func TestTemplateValidate(t *testing.T) { + t.Parallel() + + locale, err := common.ParseLocale("en-us") + require.NoError(t, err) + + tests := []struct { + name string + record Template + wantErr bool + }{ + { + name: "valid", + record: Template{ + TemplateID: common.TemplateID("auth.login_code"), + Locale: locale, + SubjectTemplate: "Your code", + TextTemplate: 
"Code: {{.Code}}", + HTMLTemplate: "<p>Code: {{.Code}}</p>
", + Version: "sha256:abcd", + }, + }, + { + name: "non canonical locale rejected", + record: Template{ + TemplateID: common.TemplateID("auth.login_code"), + Locale: common.Locale("en-us"), + SubjectTemplate: "Your code", + TextTemplate: "Code: {{.Code}}", + Version: "sha256:abcd", + }, + wantErr: true, + }, + { + name: "missing subject template", + record: Template{ + TemplateID: common.TemplateID("auth.login_code"), + Locale: locale, + TextTemplate: "Code: {{.Code}}", + Version: "sha256:abcd", + }, + wantErr: true, + }, + } + + for _, tt := range tests { + tt := tt + + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + err := tt.record.Validate() + if tt.wantErr { + require.Error(t, err) + return + } + + require.NoError(t, err) + }) + } +} diff --git a/mail/internal/logging/logger.go b/mail/internal/logging/logger.go new file mode 100644 index 0000000..5c1a781 --- /dev/null +++ b/mail/internal/logging/logger.go @@ -0,0 +1,91 @@ +// Package logging configures the Mail Service process logger and provides +// context-aware helpers for trace, delivery, attempt, and command fields. +package logging + +import ( + "context" + "fmt" + "log/slog" + "os" + "strings" + + "galaxy/mail/internal/api/streamcommand" + "galaxy/mail/internal/domain/attempt" + deliverydomain "galaxy/mail/internal/domain/delivery" + + "go.opentelemetry.io/otel/trace" +) + +// New constructs the process-wide JSON logger from level. +func New(level string) (*slog.Logger, error) { + var slogLevel slog.Level + if err := slogLevel.UnmarshalText([]byte(strings.TrimSpace(level))); err != nil { + return nil, fmt.Errorf("build logger: %w", err) + } + + return slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{ + Level: slogLevel, + })), nil +} + +// TraceAttrsFromContext returns slog key-value pairs for the active +// OpenTelemetry span when ctx carries a valid span context. 
+func TraceAttrsFromContext(ctx context.Context) []any { + if ctx == nil { + return nil + } + + spanContext := trace.SpanContextFromContext(ctx) + if !spanContext.IsValid() { + return nil + } + + return []any{ + "otel_trace_id", spanContext.TraceID().String(), + "otel_span_id", spanContext.SpanID().String(), + } +} + +// DeliveryAttrs returns structured delivery-identifying log fields. +func DeliveryAttrs(record deliverydomain.Delivery) []any { + attrs := []any{ + "delivery_id", record.DeliveryID.String(), + "source", string(record.Source), + } + if !record.TemplateID.IsZero() { + attrs = append(attrs, "template_id", record.TemplateID.String()) + } + + return attrs +} + +// AttemptAttrs returns structured attempt-identifying log fields. +func AttemptAttrs(record attempt.Attempt) []any { + return []any{ + "delivery_id", record.DeliveryID.String(), + "attempt_no", record.AttemptNo, + } +} + +// DeliveryAttemptAttrs returns structured delivery and attempt fields. +func DeliveryAttemptAttrs(deliveryRecord deliverydomain.Delivery, attemptRecord attempt.Attempt) []any { + attrs := DeliveryAttrs(deliveryRecord) + attrs = append(attrs, "attempt_no", attemptRecord.AttemptNo) + return attrs +} + +// CommandAttrs returns structured generic-command log fields. +func CommandAttrs(command streamcommand.Command) []any { + attrs := []any{ + "delivery_id", command.DeliveryID.String(), + "source", string(command.Source), + } + if !command.TemplateID.IsZero() { + attrs = append(attrs, "template_id", command.TemplateID.String()) + } + if strings.TrimSpace(command.TraceID) != "" { + attrs = append(attrs, "trace_id", command.TraceID) + } + + return attrs +} diff --git a/mail/internal/ports/provider.go b/mail/internal/ports/provider.go new file mode 100644 index 0000000..cd46650 --- /dev/null +++ b/mail/internal/ports/provider.go @@ -0,0 +1,299 @@ +// Package ports defines the stable interfaces that connect Mail Service use +// cases to external delivery infrastructure. 
+package ports + +import ( + "context" + "fmt" + "slices" + "strings" + "unicode/utf8" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" +) + +// Provider executes one materialized outbound message against a concrete +// delivery backend such as SMTP or a deterministic local stub. +type Provider interface { + // Send attempts one outbound message delivery and returns a classified + // provider result when the operation reached a stable backend outcome. + Send(context.Context, Message) (Result, error) + + // Close releases provider-owned resources. Implementations must allow + // repeated calls. + Close() error +} + +// Classification identifies the stable provider-level outcome surface frozen +// for Stage 10. +type Classification string + +const ( + // ClassificationAccepted reports that the provider accepted the SMTP + // envelope after the final DATA exchange. + ClassificationAccepted Classification = "accepted" + + // ClassificationSuppressed reports that delivery was intentionally skipped + // by provider-local policy. + ClassificationSuppressed Classification = "suppressed" + + // ClassificationTransientFailure reports that the provider interaction + // failed in a retryable way. + ClassificationTransientFailure Classification = "transient_failure" + + // ClassificationPermanentFailure reports that the provider interaction + // failed in a terminal non-retryable way. + ClassificationPermanentFailure Classification = "permanent_failure" +) + +// IsKnown reports whether classification belongs to the frozen provider +// result surface. 
+func (classification Classification) IsKnown() bool { + switch classification { + case ClassificationAccepted, + ClassificationSuppressed, + ClassificationTransientFailure, + ClassificationPermanentFailure: + return true + default: + return false + } +} + +// Attachment stores one fully decoded outbound attachment together with the +// durable metadata that remains in the delivery audit. +type Attachment struct { + // Metadata stores the attachment audit fields used by the delivery domain. + Metadata common.AttachmentMetadata + + // Content stores the decoded attachment payload bytes used for MIME body + // construction. + Content []byte +} + +// Validate reports whether attachment contains a consistent decoded outbound +// payload. +func (attachment Attachment) Validate() error { + if err := attachment.Metadata.Validate(); err != nil { + return fmt.Errorf("attachment metadata: %w", err) + } + if int64(len(attachment.Content)) != attachment.Metadata.SizeBytes { + return fmt.Errorf( + "attachment content length must match size bytes: got %d, want %d", + len(attachment.Content), + attachment.Metadata.SizeBytes, + ) + } + + return nil +} + +// Message stores one fully materialized outbound message ready for provider +// handoff. +type Message struct { + // Envelope stores the SMTP routing information. + Envelope deliverydomain.Envelope + + // Content stores the materialized subject and body parts. + Content deliverydomain.Content + + // Attachments stores the decoded outbound attachments. + Attachments []Attachment +} + +// Validate reports whether message is ready for provider execution. 
+func (message Message) Validate() error { + if err := message.Envelope.Validate(); err != nil { + return fmt.Errorf("message envelope: %w", err) + } + if err := message.Content.ValidateMaterialized(); err != nil { + return fmt.Errorf("message content: %w", err) + } + for index, attachment := range message.Attachments { + if err := attachment.Validate(); err != nil { + return fmt.Errorf("message attachments[%d]: %w", index, err) + } + } + + return nil +} + +// SummaryFields stores the tokenized safe-summary fields allowed in provider +// audit strings. +type SummaryFields struct { + // Provider stores the provider implementation identifier. + Provider string + + // Result stores the stable provider classification. + Result string + + // Phase stores the optional backend stage that produced the outcome. + Phase string + + // SMTPCode stores the optional SMTP response code. + SMTPCode string + + // Script stores the optional stub-script outcome label. + Script string +} + +// BuildSafeSummary renders one stable ASCII summary string for provider audit +// fields. 
+func BuildSafeSummary(fields SummaryFields) (string, error) { + switch { + case !isSafeSummaryValue(fields.Provider): + return "", fmt.Errorf("provider summary field provider must be a non-empty ASCII token") + case !isSafeSummaryValue(fields.Result): + return "", fmt.Errorf("provider summary field result must be a non-empty ASCII token") + case fields.Phase != "" && !isSafeSummaryValue(fields.Phase): + return "", fmt.Errorf("provider summary field phase must be an ASCII token") + case fields.SMTPCode != "" && !isSafeSummaryValue(fields.SMTPCode): + return "", fmt.Errorf("provider summary field smtp_code must be an ASCII token") + case fields.Script != "" && !isSafeSummaryValue(fields.Script): + return "", fmt.Errorf("provider summary field script must be an ASCII token") + } + + parts := []string{ + "provider=" + fields.Provider, + "result=" + fields.Result, + } + if fields.Phase != "" { + parts = append(parts, "phase="+fields.Phase) + } + if fields.SMTPCode != "" { + parts = append(parts, "smtp_code="+fields.SMTPCode) + } + if fields.Script != "" { + parts = append(parts, "script="+fields.Script) + } + + return strings.Join(parts, " "), nil +} + +// Result stores the stable provider-layer outcome together with the redacted +// summary that can be persisted in delivery audit records. +type Result struct { + // Classification stores the stable provider result classification. + Classification Classification + + // Summary stores the stable persisted provider summary. + Summary string + + // Details stores optional in-memory-only provider details for structured + // logs and diagnostics. Callers must not persist this map directly. + Details map[string]string +} + +// Validate reports whether result contains a supported provider outcome and a +// valid safe summary. 
+func (result Result) Validate() error { + if !result.Classification.IsKnown() { + return fmt.Errorf("provider result classification %q is unsupported", result.Classification) + } + if err := validateSafeSummary(result.Summary); err != nil { + return err + } + for key, value := range result.Details { + if !isSafeSummaryValue(key) { + return fmt.Errorf("provider result detail key %q must be an ASCII token", key) + } + if !isSafeDetailValue(value) { + return fmt.Errorf("provider result detail value for %q must use printable ASCII without line breaks", key) + } + } + + return nil +} + +// CloneDetails returns a detached copy of details suitable for in-memory +// logging. +func CloneDetails(details map[string]string) map[string]string { + if details == nil { + return nil + } + + cloned := make(map[string]string, len(details)) + for key, value := range details { + cloned[key] = value + } + + return cloned +} + +func validateSafeSummary(summary string) error { + if strings.TrimSpace(summary) == "" { + return fmt.Errorf("provider result summary must not be empty") + } + if !utf8.ValidString(summary) { + return fmt.Errorf("provider result summary must be valid UTF-8") + } + + tokens := strings.Split(summary, " ") + if len(tokens) < 2 { + return fmt.Errorf("provider result summary must contain provider and result tokens") + } + + seen := make(map[string]struct{}, len(tokens)) + for _, token := range tokens { + key, value, ok := strings.Cut(token, "=") + if !ok { + return fmt.Errorf("provider result summary token %q must use key=value form", token) + } + if _, exists := seen[key]; exists { + return fmt.Errorf("provider result summary token %q must not repeat", key) + } + seen[key] = struct{}{} + + if !slices.Contains([]string{"provider", "result", "phase", "smtp_code", "script"}, key) { + return fmt.Errorf("provider result summary token %q is unsupported", key) + } + if !isSafeSummaryValue(value) { + return fmt.Errorf("provider result summary token %q must use a non-empty 
ASCII value", key)
+		}
+	}
+
+	if _, ok := seen["provider"]; !ok {
+		return fmt.Errorf("provider result summary must include provider token")
+	}
+	if _, ok := seen["result"]; !ok {
+		return fmt.Errorf("provider result summary must include result token")
+	}
+
+	return nil
+}
+
+func isSafeSummaryValue(value string) bool {
+	if strings.TrimSpace(value) == "" || strings.TrimSpace(value) != value {
+		return false
+	}
+
+	for _, r := range value {
+		if r >= utf8.RuneSelf {
+			return false
+		}
+		switch {
+		case r >= 'a' && r <= 'z':
+		case r >= 'A' && r <= 'Z':
+		case r >= '0' && r <= '9':
+		case r == '.', r == '_', r == '-':
+		default:
+			return false
+		}
+	}
+
+	return true
+}
+
+func isSafeDetailValue(value string) bool {
+	if strings.TrimSpace(value) != value {
+		return false
+	}
+	for _, r := range value {
+		// Reject non-ASCII (r >= utf8.RuneSelf) and control characters.
+		if r >= utf8.RuneSelf || r < 0x20 || r == 0x7f {
+			return false
+		}
+	}
+
+	return true
+}
diff --git a/mail/internal/ports/provider_test.go b/mail/internal/ports/provider_test.go
new file mode 100644
index 0000000..2239a0a
--- /dev/null
+++ b/mail/internal/ports/provider_test.go
@@ -0,0 +1,30 @@
+package ports
+
+import (
+	"testing"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestBuildSafeSummaryBuildsStableTokenOrder(t *testing.T) {
+	t.Parallel()
+
+	summary, err := BuildSafeSummary(SummaryFields{
+		Provider: "smtp",
+		Result:   "transient_failure",
+		Phase:    "data",
+		SMTPCode: "451",
+	})
+	require.NoError(t, err)
+	require.Equal(t, "provider=smtp result=transient_failure phase=data smtp_code=451", summary)
+}
+
+func TestResultValidateRejectsUnsafeSummary(t *testing.T) {
+	t.Parallel()
+
+	result := Result{
+		Classification: ClassificationAccepted,
+		Summary:        "provider=smtp result=accepted extra=value",
+	}
+	require.Error(t, result.Validate())
+}
diff --git a/mail/internal/service/acceptauthdelivery/service.go b/mail/internal/service/acceptauthdelivery/service.go
new file mode 100644
index 0000000..f1ad245
--- /dev/null
+++ b/mail/internal/service/acceptauthdelivery/service.go
@@ -0,0 +1,544 @@
+// Package acceptauthdelivery implements synchronous durable acceptance of auth
+// login-code deliveries.
+package acceptauthdelivery
+
+import (
+	"context"
+	"crypto/sha256"
+	"encoding/hex"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"log/slog"
+	"strings"
+	"time"
+
+	"galaxy/mail/internal/domain/attempt"
+	"galaxy/mail/internal/domain/common"
+	deliverydomain "galaxy/mail/internal/domain/delivery"
+	"galaxy/mail/internal/domain/idempotency"
+	"galaxy/mail/internal/logging"
+
+	"go.opentelemetry.io/otel"
+	"go.opentelemetry.io/otel/attribute"
+	oteltrace "go.opentelemetry.io/otel/trace"
+)
+
+var (
+	// ErrConflict reports that the idempotency scope already belongs to a
+	// different normalized auth request.
+	ErrConflict = errors.New("accept auth delivery conflict")
+
+	// ErrServiceUnavailable reports that durable acceptance could not be
+	// completed or recovered safely.
+	ErrServiceUnavailable = errors.New("accept auth delivery service unavailable")
+)
+
+const (
+	// AuthTemplateID is the dedicated template family used for auth login-code
+	// deliveries.
+	AuthTemplateID common.TemplateID = "auth.login_code"
+
+	maxCreateRetries = 3
+	tracerName       = "galaxy/mail/acceptauthdelivery"
+)
+
+// Outcome identifies the stable auth-delivery acceptance outcome.
+type Outcome string
+
+const (
+	// OutcomeSent reports that the delivery was accepted into the durable
+	// internal pipeline.
+	OutcomeSent Outcome = "sent"
+
+	// OutcomeSuppressed reports that outward delivery was intentionally skipped
+	// while the auth flow remained success-shaped.
+	OutcomeSuppressed Outcome = "suppressed"
+)
+
+// IsKnown reports whether outcome belongs to the stable auth-delivery surface.
+func (outcome Outcome) IsKnown() bool {
+	switch outcome {
+	case OutcomeSent, OutcomeSuppressed:
+		return true
+	default:
+		return false
+	}
+}
+
+// Result stores the coarse auth-delivery acceptance outcome.
+type Result struct {
+	// Outcome stores the stable auth-delivery result.
+	Outcome Outcome
+}
+
+// Validate reports whether result contains a supported auth-delivery outcome.
+func (result Result) Validate() error {
+	if !result.Outcome.IsKnown() {
+		return fmt.Errorf("accept auth delivery outcome %q is unsupported", result.Outcome)
+	}
+
+	return nil
+}
+
+// Input stores one normalized auth-delivery acceptance command.
+type Input struct {
+	// IdempotencyKey stores the caller-owned stable deduplication key.
+	IdempotencyKey common.IdempotencyKey
+
+	// Email stores the normalized recipient mailbox.
+	Email common.Email
+
+	// Code stores the exact login code.
+	Code string
+
+	// Locale stores the canonical BCP 47 language tag selected upstream.
+	Locale common.Locale
+}
+
+// Validate reports whether input contains one valid auth-delivery command.
+func (input Input) Validate() error {
+	if err := input.IdempotencyKey.Validate(); err != nil {
+		return fmt.Errorf("idempotency key: %w", err)
+	}
+	if err := input.Email.Validate(); err != nil {
+		return fmt.Errorf("email: %w", err)
+	}
+	if strings.TrimSpace(input.Code) == "" {
+		return errors.New("code must not be empty")
+	}
+	if strings.TrimSpace(input.Code) != input.Code {
+		return errors.New("code must not contain surrounding whitespace")
+	}
+	if err := input.Locale.Validate(); err != nil {
+		return fmt.Errorf("locale: %w", err)
+	}
+
+	return nil
+}
+
+// Fingerprint returns the stable idempotency fingerprint of input.
+func (input Input) Fingerprint() (string, error) {
+	if err := input.Validate(); err != nil {
+		return "", err
+	}
+
+	normalized := struct {
+		IdempotencyKey string `json:"idempotency_key"`
+		Email          string `json:"email"`
+		Code           string `json:"code"`
+		Locale         string `json:"locale"`
+	}{
+		IdempotencyKey: input.IdempotencyKey.String(),
+		Email:          input.Email.String(),
+		Code:           input.Code,
+		Locale:         input.Locale.String(),
+	}
+
+	payload, err := json.Marshal(normalized)
+	if err != nil {
+		return "", fmt.Errorf("marshal auth-delivery fingerprint: %w", err)
+	}
+
+	sum := sha256.Sum256(payload)
+
+	return "sha256:" + hex.EncodeToString(sum[:]), nil
+}
+
+// CreateAcceptanceInput stores the durable write set required for one
+// auth-delivery acceptance attempt.
+type CreateAcceptanceInput struct {
+	// Delivery stores the accepted delivery record.
+	Delivery deliverydomain.Delivery
+
+	// FirstAttempt stores the optional first scheduled attempt.
+	FirstAttempt *attempt.Attempt
+
+	// Idempotency stores the idempotency reservation bound to Delivery.
+	Idempotency idempotency.Record
+}
+
+// Validate reports whether input contains a consistent durable write set.
+func (input CreateAcceptanceInput) Validate() error {
+	if err := input.Delivery.Validate(); err != nil {
+		return fmt.Errorf("delivery: %w", err)
+	}
+	if err := input.Idempotency.Validate(); err != nil {
+		return fmt.Errorf("idempotency: %w", err)
+	}
+	if input.Idempotency.DeliveryID != input.Delivery.DeliveryID {
+		return errors.New("idempotency delivery id must match delivery id")
+	}
+	if input.Idempotency.Source != input.Delivery.Source {
+		return errors.New("idempotency source must match delivery source")
+	}
+	if input.Idempotency.IdempotencyKey != input.Delivery.IdempotencyKey {
+		return errors.New("idempotency key must match delivery idempotency key")
+	}
+
+	switch {
+	case input.FirstAttempt == nil:
+		if input.Delivery.Status != deliverydomain.StatusSuppressed {
+			return errors.New("first attempt must not be nil unless delivery is suppressed")
+		}
+	case input.Delivery.Status == deliverydomain.StatusSuppressed:
+		return errors.New("suppressed delivery must not create first attempt")
+	default:
+		if err := input.FirstAttempt.Validate(); err != nil {
+			return fmt.Errorf("first attempt: %w", err)
+		}
+		if input.FirstAttempt.DeliveryID != input.Delivery.DeliveryID {
+			return errors.New("first attempt delivery id must match delivery id")
+		}
+		if input.FirstAttempt.Status != attempt.StatusScheduled {
+			return fmt.Errorf("first attempt status must be %q", attempt.StatusScheduled)
+		}
+	}
+
+	return nil
+}
+
+// Store describes the durable storage required by the auth-delivery use case.
+type Store interface {
+	// CreateAcceptance stores the complete durable write set for one auth
+	// acceptance attempt. Implementations must wrap ErrConflict when the write
+	// set races with an already accepted idempotency scope.
+	CreateAcceptance(context.Context, CreateAcceptanceInput) error
+
+	// GetIdempotency loads the idempotency reservation for one auth-delivery
+	// scope.
+	GetIdempotency(context.Context, deliverydomain.Source, common.IdempotencyKey) (idempotency.Record, bool, error)
+
+	// GetDelivery loads one accepted delivery by its internal identifier.
+	GetDelivery(context.Context, common.DeliveryID) (deliverydomain.Delivery, bool, error)
+}
+
+// DeliveryIDGenerator describes the source of new internal delivery
+// identifiers.
+type DeliveryIDGenerator interface {
+	// NewDeliveryID returns one new internal delivery identifier.
+	NewDeliveryID() (common.DeliveryID, error)
+}
+
+// Clock provides the current wall-clock time.
+type Clock interface {
+	// Now returns the current time.
+	Now() time.Time
+}
+
+// Telemetry records low-cardinality auth-delivery outcomes.
+type Telemetry interface {
+	// RecordAuthDeliveryOutcome records one coarse auth-delivery outcome.
+	RecordAuthDeliveryOutcome(context.Context, string)
+
+	// RecordAcceptedAuthDelivery records one newly accepted auth delivery.
+	RecordAcceptedAuthDelivery(context.Context)
+
+	// RecordDeliveryStatusTransition records one durable delivery status
+	// transition.
+	RecordDeliveryStatusTransition(context.Context, string, string)
+}
+
+// Config stores the dependencies and policy switches used by Service.
+type Config struct {
+	// Store owns the durable accepted state.
+	Store Store
+
+	// DeliveryIDGenerator builds internal delivery identifiers.
+	DeliveryIDGenerator DeliveryIDGenerator
+
+	// Clock provides wall-clock timestamps.
+	Clock Clock
+
+	// Telemetry records low-cardinality acceptance outcomes.
+	Telemetry Telemetry
+
+	// TracerProvider constructs the application span recorder used by the auth
+	// acceptance flow.
+	TracerProvider oteltrace.TracerProvider
+
+	// Logger writes structured auth acceptance logs.
+	Logger *slog.Logger
+
+	// IdempotencyTTL stores how long accepted idempotency scopes remain valid.
+	IdempotencyTTL time.Duration
+
+	// SuppressOutbound reports whether new auth deliveries should be accepted
+	// directly as suppressed.
+	SuppressOutbound bool
+}
+
+// Service accepts auth login-code deliveries synchronously and durably.
+type Service struct {
+	store               Store
+	deliveryIDGenerator DeliveryIDGenerator
+	clock               Clock
+	telemetry           Telemetry
+	tracerProvider      oteltrace.TracerProvider
+	logger              *slog.Logger
+	idempotencyTTL      time.Duration
+	suppressOutbound    bool
+}
+
+// New constructs Service from cfg.
+func New(cfg Config) (*Service, error) {
+	switch {
+	case cfg.Store == nil:
+		return nil, errors.New("new accept auth delivery service: nil store")
+	case cfg.DeliveryIDGenerator == nil:
+		return nil, errors.New("new accept auth delivery service: nil delivery id generator")
+	case cfg.Clock == nil:
+		return nil, errors.New("new accept auth delivery service: nil clock")
+	case cfg.IdempotencyTTL <= 0:
+		return nil, errors.New("new accept auth delivery service: non-positive idempotency ttl")
+	default:
+		tracerProvider := cfg.TracerProvider
+		if tracerProvider == nil {
+			tracerProvider = otel.GetTracerProvider()
+		}
+		logger := cfg.Logger
+		if logger == nil {
+			logger = slog.Default()
+		}
+
+		return &Service{
+			store:               cfg.Store,
+			deliveryIDGenerator: cfg.DeliveryIDGenerator,
+			clock:               cfg.Clock,
+			telemetry:           cfg.Telemetry,
+			tracerProvider:      tracerProvider,
+			logger:              logger.With("component", "accept_auth_delivery"),
+			idempotencyTTL:      cfg.IdempotencyTTL,
+			suppressOutbound:    cfg.SuppressOutbound,
+		}, nil
+	}
+}
+
+// Execute accepts one auth login-code delivery command.
+func (service *Service) Execute(ctx context.Context, input Input) (Result, error) {
+	if ctx == nil {
+		return Result{}, errors.New("accept auth delivery: nil context")
+	}
+	if service == nil {
+		return Result{}, errors.New("accept auth delivery: nil service")
+	}
+	if err := input.Validate(); err != nil {
+		return Result{}, fmt.Errorf("accept auth delivery: %w", err)
+	}
+
+	ctx, span := service.tracerProvider.Tracer(tracerName).Start(ctx, "mail.accept_auth_delivery")
+	defer span.End()
+	span.SetAttributes(
+		attribute.String("mail.locale", input.Locale.String()),
+	)
+
+	fingerprint, err := input.Fingerprint()
+	if err != nil {
+		return Result{}, fmt.Errorf("accept auth delivery: %w", err)
+	}
+
+	if result, handled, err := service.resolveReplay(ctx, input.IdempotencyKey, fingerprint); handled {
+		if err != nil {
+			service.recordOutcome(ctx, replayOutcomeForError(err))
+			return Result{}, err
+		}
+
+		service.recordOutcome(ctx, "duplicate")
+		return result, nil
+	}
+
+	for range maxCreateRetries {
+		createInput, result, err := service.buildCreateInput(input, fingerprint)
+		if err != nil {
+			return Result{}, fmt.Errorf("accept auth delivery: %w", err)
+		}
+
+		if err := service.store.CreateAcceptance(ctx, createInput); err != nil {
+			if !errors.Is(err, ErrConflict) {
+				service.recordOutcome(ctx, "service_unavailable")
+				return Result{}, fmt.Errorf("%w: create acceptance: %v", ErrServiceUnavailable, err)
+			}
+
+			if replayResult, handled, replayErr := service.resolveReplay(ctx, input.IdempotencyKey, fingerprint); handled {
+				if replayErr != nil {
+					service.recordOutcome(ctx, replayOutcomeForError(replayErr))
+					return Result{}, replayErr
+				}
+
+				service.recordOutcome(ctx, "duplicate")
+				return replayResult, nil
+			}
+
+			continue
+		}
+
+		service.recordOutcome(ctx, string(result.Outcome))
+		service.recordAcceptedDelivery(ctx)
+		service.recordStatusTransition(ctx, createInput.Delivery)
+		span.SetAttributes(
+			attribute.String("mail.delivery_id", createInput.Delivery.DeliveryID.String()),
+			attribute.String("mail.source", string(createInput.Delivery.Source)),
+			attribute.String("mail.status", string(createInput.Delivery.Status)),
+		)
+		logArgs := logging.DeliveryAttrs(createInput.Delivery)
+		logArgs = append(logArgs,
+			"status", string(createInput.Delivery.Status),
+			"outcome", string(result.Outcome),
+			"locale", input.Locale.String(),
+		)
+		logArgs = append(logArgs, logging.TraceAttrsFromContext(ctx)...)
+		service.logger.Info("auth delivery accepted", logArgs...)
+		return result, nil
+	}
+
+	service.recordOutcome(ctx, "service_unavailable")
+	return Result{}, fmt.Errorf("%w: delivery id conflict retry limit exceeded", ErrServiceUnavailable)
+}
+
+func (service *Service) buildCreateInput(input Input, fingerprint string) (CreateAcceptanceInput, Result, error) {
+	now := service.clock.Now().UTC().Truncate(time.Millisecond)
+
+	deliveryID, err := service.deliveryIDGenerator.NewDeliveryID()
+	if err != nil {
+		return CreateAcceptanceInput{}, Result{}, fmt.Errorf("%w: generate delivery id: %v", ErrServiceUnavailable, err)
+	}
+
+	deliveryRecord := deliverydomain.Delivery{
+		DeliveryID:  deliveryID,
+		Source:      deliverydomain.SourceAuthSession,
+		PayloadMode: deliverydomain.PayloadModeTemplate,
+		TemplateID:  AuthTemplateID,
+		Envelope:    deliverydomain.Envelope{To: []common.Email{input.Email}},
+		Locale:      input.Locale,
+		TemplateVariables: map[string]any{
+			"code": input.Code,
+		},
+		IdempotencyKey: input.IdempotencyKey,
+		CreatedAt:      now,
+		UpdatedAt:      now,
+	}
+
+	result := Result{}
+	var firstAttempt *attempt.Attempt
+
+	if service.suppressOutbound {
+		deliveryRecord.Status = deliverydomain.StatusSuppressed
+		deliveryRecord.SuppressedAt = ptrTime(now)
+		result.Outcome = OutcomeSuppressed
+	} else {
+		deliveryRecord.Status = deliverydomain.StatusQueued
+		deliveryRecord.AttemptCount = 1
+		scheduledAttempt := attempt.Attempt{
+			DeliveryID:   deliveryID,
+			AttemptNo:    1,
+			ScheduledFor: now,
+			Status:       attempt.StatusScheduled,
+		}
+		firstAttempt = &scheduledAttempt
+		result.Outcome = OutcomeSent
+	}
+
+	if err := deliveryRecord.Validate(); err != nil {
+		return CreateAcceptanceInput{}, Result{}, fmt.Errorf("build auth delivery record: %w", err)
+	}
+	if err := result.Validate(); err != nil {
+		return CreateAcceptanceInput{}, Result{}, fmt.Errorf("build auth delivery result: %w", err)
+	}
+
+	createInput := CreateAcceptanceInput{
+		Delivery:     deliveryRecord,
+		FirstAttempt: firstAttempt,
+		Idempotency: idempotency.Record{
+			Source:             deliverydomain.SourceAuthSession,
+			IdempotencyKey:     input.IdempotencyKey,
+			DeliveryID:         deliveryID,
+			RequestFingerprint: fingerprint,
+			CreatedAt:          now,
+			ExpiresAt:          now.Add(service.idempotencyTTL),
+		},
+	}
+	if err := createInput.Validate(); err != nil {
+		return CreateAcceptanceInput{}, Result{}, fmt.Errorf("build auth create input: %w", err)
+	}
+
+	return createInput, result, nil
+}
+
+func (service *Service) recordAcceptedDelivery(ctx context.Context) {
+	if service == nil || service.telemetry == nil {
+		return
+	}
+
+	service.telemetry.RecordAcceptedAuthDelivery(ctx)
+}
+
+func (service *Service) recordStatusTransition(ctx context.Context, record deliverydomain.Delivery) {
+	if service == nil || service.telemetry == nil {
+		return
+	}
+
+	service.telemetry.RecordDeliveryStatusTransition(ctx, string(record.Status), string(record.Source))
+}
+
+func (service *Service) resolveReplay(ctx context.Context, key common.IdempotencyKey, fingerprint string) (Result, bool, error) {
+	record, found, err := service.store.GetIdempotency(ctx, deliverydomain.SourceAuthSession, key)
+	if err != nil {
+		return Result{}, true, fmt.Errorf("%w: load idempotency: %v", ErrServiceUnavailable, err)
+	}
+	if !found {
+		return Result{}, false, nil
+	}
+	if record.RequestFingerprint != fingerprint {
+		return Result{}, true, fmt.Errorf("%w: request conflicts with current state", ErrConflict)
+	}
+
+	deliveryRecord, found, err := service.store.GetDelivery(ctx, record.DeliveryID)
+	if err != nil {
+		return Result{}, true, fmt.Errorf("%w: load delivery: %v", ErrServiceUnavailable, err)
+	}
+	if !found {
+		return Result{}, true, fmt.Errorf("%w: delivery %q is missing for idempotency scope", ErrServiceUnavailable, record.DeliveryID)
+	}
+
+	return deriveReplayResult(deliveryRecord)
+}
+
+func deriveReplayResult(record deliverydomain.Delivery) (Result, bool, error) {
+	switch record.Status {
+	case deliverydomain.StatusSuppressed:
+		return Result{Outcome: OutcomeSuppressed}, true, nil
+	case deliverydomain.StatusAccepted,
+		deliverydomain.StatusQueued,
+		deliverydomain.StatusRendered,
+		deliverydomain.StatusSending,
+		deliverydomain.StatusSent,
+		deliverydomain.StatusFailed,
+		deliverydomain.StatusDeadLetter:
+		return Result{Outcome: OutcomeSent}, true, nil
+	default:
+		return Result{}, true, fmt.Errorf("%w: unsupported replay delivery status %q", ErrServiceUnavailable, record.Status)
+	}
+}
+
+func (service *Service) recordOutcome(ctx context.Context, outcome string) {
+	if service == nil || service.telemetry == nil || strings.TrimSpace(outcome) == "" {
+		return
+	}
+
+	service.telemetry.RecordAuthDeliveryOutcome(ctx, outcome)
+}
+
+func replayOutcomeForError(err error) string {
+	switch {
+	case errors.Is(err, ErrConflict):
+		return "conflict"
+	case errors.Is(err, ErrServiceUnavailable):
+		return "service_unavailable"
+	default:
+		return ""
+	}
+}
+
+func ptrTime(value time.Time) *time.Time {
+	return &value
+}
diff --git a/mail/internal/service/acceptauthdelivery/service_test.go b/mail/internal/service/acceptauthdelivery/service_test.go
new file mode 100644
index 0000000..4a97ba6
--- /dev/null
+++ b/mail/internal/service/acceptauthdelivery/service_test.go
@@ -0,0 +1,320 @@
+package acceptauthdelivery
+
+import (
+	"bytes"
+	"context"
+	"errors"
+	"log/slog"
+	"testing"
+	"time"
+
+	"galaxy/mail/internal/domain/attempt"
+	"galaxy/mail/internal/domain/common"
+	deliverydomain "galaxy/mail/internal/domain/delivery"
+	"galaxy/mail/internal/domain/idempotency"
+
+	"github.com/stretchr/testify/require"
+	sdktrace "go.opentelemetry.io/otel/sdk/trace"
+	"go.opentelemetry.io/otel/sdk/trace/tracetest"
+)
+
+func TestServiceExecuteAcceptsQueuedDelivery(t *testing.T) {
+	t.Parallel()
+
+	store := &stubStore{}
+	telemetry := &stubTelemetry{}
+	service := newTestService(t, Config{
+		Store:               store,
+		DeliveryIDGenerator: stubIDGenerator{ids: []common.DeliveryID{"delivery-queued"}},
+		Clock:               stubClock{now: fixedNow()},
+		Telemetry:           telemetry,
+		IdempotencyTTL:      7 * 24 * time.Hour,
+	})
+
+	result, err := service.Execute(context.Background(), validInput())
+	require.NoError(t, err)
+	require.Equal(t, Result{Outcome: OutcomeSent}, result)
+	require.Len(t, store.createInputs, 1)
+	require.NotNil(t, store.createInputs[0].FirstAttempt)
+	require.Equal(t, deliverydomain.StatusQueued, store.createInputs[0].Delivery.Status)
+	require.Equal(t, []string{"sent"}, telemetry.outcomes)
+	require.Equal(t, 1, telemetry.accepted)
+	require.Equal(t, []string{"authsession:queued"}, telemetry.statuses)
+}
+
+func TestServiceExecuteAcceptsSuppressedDelivery(t *testing.T) {
+	t.Parallel()
+
+	store := &stubStore{}
+	telemetry := &stubTelemetry{}
+	service := newTestService(t, Config{
+		Store:               store,
+		DeliveryIDGenerator: stubIDGenerator{ids: []common.DeliveryID{"delivery-suppressed"}},
+		Clock:               stubClock{now: fixedNow()},
+		Telemetry:           telemetry,
+		IdempotencyTTL:      7 * 24 * time.Hour,
+		SuppressOutbound:    true,
+	})
+
+	result, err := service.Execute(context.Background(), validInput())
+	require.NoError(t, err)
+	require.Equal(t, Result{Outcome: OutcomeSuppressed}, result)
+	require.Len(t, store.createInputs, 1)
+	require.Nil(t, store.createInputs[0].FirstAttempt)
+	require.Equal(t, deliverydomain.StatusSuppressed, store.createInputs[0].Delivery.Status)
+	require.Equal(t, []string{"suppressed"}, telemetry.outcomes)
+	require.Equal(t, 1, telemetry.accepted)
+	require.Equal(t, []string{"authsession:suppressed"}, telemetry.statuses)
+}
+
+func TestServiceExecuteReturnsStableDuplicateResult(t *testing.T) {
+	t.Parallel()
+
+	input := validInput()
+	fingerprint, err := input.Fingerprint()
+	require.NoError(t, err)
+
+	store := &stubStore{
+		idempotencyRecord: &idempotency.Record{
+			Source:             deliverydomain.SourceAuthSession,
+			IdempotencyKey:     input.IdempotencyKey,
+			DeliveryID:         common.DeliveryID("delivery-existing"),
+			RequestFingerprint: fingerprint,
+			CreatedAt:          fixedNow(),
+			ExpiresAt:          fixedNow().Add(7 * 24 * time.Hour),
+		},
+		deliveryRecord: &deliverydomain.Delivery{
+			DeliveryID:  common.DeliveryID("delivery-existing"),
+			Source:      deliverydomain.SourceAuthSession,
+			PayloadMode: deliverydomain.PayloadModeTemplate,
+			TemplateID:  AuthTemplateID,
+			Envelope: deliverydomain.Envelope{
+				To: []common.Email{input.Email},
+			},
+			Locale: input.Locale,
+			TemplateVariables: map[string]any{
+				"code": input.Code,
+			},
+			IdempotencyKey: input.IdempotencyKey,
+			Status:         deliverydomain.StatusSuppressed,
+			CreatedAt:      fixedNow(),
+			UpdatedAt:      fixedNow(),
+			SuppressedAt:   ptrTime(fixedNow()),
+		},
+	}
+	require.NoError(t, store.idempotencyRecord.Validate())
+	require.NoError(t, store.deliveryRecord.Validate())
+
+	telemetry := &stubTelemetry{}
+	service := newTestService(t, Config{
+		Store:               store,
+		DeliveryIDGenerator: stubIDGenerator{},
+		Clock:               stubClock{now: fixedNow()},
+		Telemetry:           telemetry,
+		IdempotencyTTL:      7 * 24 * time.Hour,
+	})
+
+	result, err := service.Execute(context.Background(), input)
+	require.NoError(t, err)
+	require.Equal(t, Result{Outcome: OutcomeSuppressed}, result)
+	require.Empty(t, store.createInputs)
+	require.Equal(t, []string{"duplicate"}, telemetry.outcomes)
+}
+
+func TestServiceExecuteRejectsConflictingReplay(t *testing.T) {
+	t.Parallel()
+
+	input := validInput()
+	store := &stubStore{
+		idempotencyRecord: &idempotency.Record{
+			Source:             deliverydomain.SourceAuthSession,
+			IdempotencyKey:     input.IdempotencyKey,
+			DeliveryID:         common.DeliveryID("delivery-existing"),
+			RequestFingerprint: "sha256:other",
+			CreatedAt:          fixedNow(),
+			ExpiresAt:          fixedNow().Add(7 * 24 * time.Hour),
+		},
+	}
+	require.NoError(t, store.idempotencyRecord.Validate())
+
+	telemetry := &stubTelemetry{}
+	service := newTestService(t, Config{
+		Store:               store,
+		DeliveryIDGenerator: stubIDGenerator{},
+		Clock:               stubClock{now: fixedNow()},
+		Telemetry:           telemetry,
+		IdempotencyTTL:      7 * 24 * time.Hour,
+	})
+
+	_, err := service.Execute(context.Background(), input)
+	require.Error(t, err)
+	require.ErrorIs(t, err, ErrConflict)
+	require.Equal(t, []string{"conflict"}, telemetry.outcomes)
+}
+
+func TestServiceExecuteReturnsServiceUnavailableOnCreateFailure(t *testing.T) {
+	t.Parallel()
+
+	telemetry := &stubTelemetry{}
+	service := newTestService(t, Config{
+		Store: &stubStore{
+			createErr: errors.New("redis unavailable"),
+		},
+		DeliveryIDGenerator: stubIDGenerator{ids: []common.DeliveryID{"delivery-queued"}},
+		Clock:               stubClock{now: fixedNow()},
+		Telemetry:           telemetry,
+		IdempotencyTTL:      7 * 24 * time.Hour,
+	})
+
+	_, err := service.Execute(context.Background(), validInput())
+	require.Error(t, err)
+	require.ErrorIs(t, err, ErrServiceUnavailable)
+	require.Equal(t, []string{"service_unavailable"}, telemetry.outcomes)
+}
+
+func TestServiceExecuteLogsAcceptedDeliveryAndCreatesSpan(t *testing.T) {
+	t.Parallel()
+
+	store := &stubStore{}
+	telemetry := &stubTelemetry{}
+	loggerBuffer := &bytes.Buffer{}
+	recorder := tracetest.NewSpanRecorder()
+	tracerProvider := sdktrace.NewTracerProvider(sdktrace.WithSpanProcessor(recorder))
+
+	service := newTestService(t, Config{
+		Store:               store,
+		DeliveryIDGenerator: stubIDGenerator{ids: []common.DeliveryID{"delivery-queued"}},
+		Clock:               stubClock{now: fixedNow()},
+		Telemetry:           telemetry,
+		TracerProvider:      tracerProvider,
+		Logger:              slog.New(slog.NewJSONHandler(loggerBuffer, nil)),
+		IdempotencyTTL:      7 * 24 * time.Hour,
+	})
+
+	_, err := service.Execute(context.Background(), validInput())
+	require.NoError(t, err)
+	require.Contains(t, loggerBuffer.String(), "\"delivery_id\":\"delivery-queued\"")
+	require.Contains(t, loggerBuffer.String(), "\"source\":\"authsession\"")
+	require.Contains(t, loggerBuffer.String(), "\"template_id\":\"auth.login_code\"")
+	require.Contains(t, loggerBuffer.String(), "\"otel_trace_id\":")
+	require.True(t, hasSpanNamed(recorder.Ended(), "mail.accept_auth_delivery"))
+}
+
+func TestInputFingerprintStableForEquivalentInput(t *testing.T) {
+	t.Parallel()
+
+	first := validInput()
+	second := validInput()
+
+	firstFingerprint, err := first.Fingerprint()
+	require.NoError(t, err)
+	secondFingerprint, err := second.Fingerprint()
+	require.NoError(t, err)
+
+	require.Equal(t, firstFingerprint, secondFingerprint)
+}
+
+type stubStore struct {
+	createInputs      []CreateAcceptanceInput
+	createErr         error
+	idempotencyRecord *idempotency.Record
+	deliveryRecord    *deliverydomain.Delivery
+}
+
+func (store *stubStore) CreateAcceptance(_ context.Context, input CreateAcceptanceInput) error {
+	store.createInputs = append(store.createInputs, input)
+	return store.createErr
+}
+
+func (store *stubStore) GetIdempotency(_ context.Context, _ deliverydomain.Source, _ common.IdempotencyKey) (idempotency.Record, bool, error) {
+	if store.idempotencyRecord == nil {
+		return idempotency.Record{}, false, nil
+	}
+
+	return *store.idempotencyRecord, true, nil
+}
+
+func (store *stubStore) GetDelivery(_ context.Context, _ common.DeliveryID) (deliverydomain.Delivery, bool, error) {
+	if store.deliveryRecord == nil {
+		return deliverydomain.Delivery{}, false, nil
+	}
+
+	return *store.deliveryRecord, true, nil
+}
+
+type stubIDGenerator struct {
+	ids []common.DeliveryID
+}
+
+func (generator stubIDGenerator) NewDeliveryID() (common.DeliveryID, error) {
+	if len(generator.ids) == 0 {
+		return "", errors.New("no delivery ids left")
+	}
+
+	return generator.ids[0], nil
+}
+
+type stubClock struct {
+	now time.Time
+}
+
+func (clock stubClock) Now() time.Time {
+	return clock.now
+}
+
+type stubTelemetry struct {
+	outcomes []string
+	accepted int
+	statuses []string
+}
+
+func (telemetry *stubTelemetry) RecordAuthDeliveryOutcome(_ context.Context, outcome string) {
+	telemetry.outcomes = append(telemetry.outcomes, outcome)
+}
+
+func (telemetry *stubTelemetry) RecordAcceptedAuthDelivery(context.Context) {
+	telemetry.accepted++
+}
+
+func (telemetry *stubTelemetry) RecordDeliveryStatusTransition(_ context.Context, status string, source string) {
+	telemetry.statuses = append(telemetry.statuses, source+":"+status)
+}
+
+func newTestService(t *testing.T, cfg Config) *Service {
+	t.Helper()
+
+	service, err := New(cfg)
+	require.NoError(t, err)
+
+	return service
+}
+
+func validInput() Input {
+	locale, err := common.ParseLocale("en")
+	if err != nil {
+		panic(err)
+	}
+
+	return Input{
+		IdempotencyKey: common.IdempotencyKey("challenge-123"),
+		Email:          common.Email("pilot@example.com"),
+		Code:           "123456",
+		Locale:         locale,
+	}
+}
+
+func fixedNow() time.Time {
+	return time.Unix(1_775_121_700, 0).UTC()
+}
+
+func hasSpanNamed(spans []sdktrace.ReadOnlySpan, name string) bool {
+	for _, span := range spans {
+		if span.Name() == name {
+			return true
+		}
+	}
+
+	return false
+}
+
+var _ = attempt.Attempt{}
diff --git a/mail/internal/service/acceptgenericdelivery/service.go b/mail/internal/service/acceptgenericdelivery/service.go
new file mode 100644
index 0000000..b195e83
--- /dev/null
+++ b/mail/internal/service/acceptgenericdelivery/service.go
@@ -0,0 +1,598 @@
+// Package acceptgenericdelivery implements durable asynchronous acceptance of
+// generic delivery commands consumed from Redis Streams.
+package acceptgenericdelivery
+
+import (
+	"context"
+	"encoding/base64"
+	"errors"
+	"fmt"
+	"log/slog"
+	"strings"
+	"time"
+
+	"galaxy/mail/internal/api/streamcommand"
+	"galaxy/mail/internal/domain/attempt"
+	"galaxy/mail/internal/domain/common"
+	deliverydomain "galaxy/mail/internal/domain/delivery"
+	"galaxy/mail/internal/domain/idempotency"
+	"galaxy/mail/internal/logging"
+
+	"go.opentelemetry.io/otel"
+	"go.opentelemetry.io/otel/attribute"
+	oteltrace "go.opentelemetry.io/otel/trace"
+)
+
+var (
+	// ErrConflict reports that the idempotency scope already belongs to a
+	// different normalized generic request.
+	ErrConflict = errors.New("accept generic delivery conflict")
+
+	// ErrServiceUnavailable reports that durable generic acceptance could not
+	// be completed or recovered safely.
+	ErrServiceUnavailable = errors.New("accept generic delivery service unavailable")
+)
+
+const tracerName = "galaxy/mail/acceptgenericdelivery"
+
+// Outcome identifies the coarse generic-delivery acceptance outcome.
+type Outcome string
+
+const (
+	// OutcomeAccepted reports that the command was durably accepted into the
+	// internal delivery pipeline.
+	OutcomeAccepted Outcome = "accepted"
+
+	// OutcomeDuplicate reports that the command matched an already accepted
+	// idempotent request and therefore became a no-op replay.
+	OutcomeDuplicate Outcome = "duplicate"
+)
+
+// IsKnown reports whether outcome belongs to the supported generic-acceptance
+// outcome surface.
+func (outcome Outcome) IsKnown() bool {
+	switch outcome {
+	case OutcomeAccepted, OutcomeDuplicate:
+		return true
+	default:
+		return false
+	}
+}
+
+// Result stores the coarse generic-delivery acceptance outcome.
+type Result struct {
+	// Outcome stores the stable generic-acceptance result.
+	Outcome Outcome
+}
+
+// Validate reports whether result contains a supported generic-acceptance
+// outcome.
+func (result Result) Validate() error {
+	if !result.Outcome.IsKnown() {
+		return fmt.Errorf("accept generic delivery outcome %q is unsupported", result.Outcome)
+	}
+
+	return nil
+}
+
+// AttachmentPayload stores one durably persisted raw attachment payload owned
+// by a generic delivery.
+type AttachmentPayload struct {
+	// Filename stores the user-facing attachment filename.
+	Filename string
+
+	// ContentType stores the MIME media type used for SMTP body construction.
+	ContentType string
+
+	// ContentBase64 stores the exact accepted inline base64 payload.
+	ContentBase64 string
+
+	// SizeBytes stores the decoded attachment size in bytes.
+	SizeBytes int64
+}
+
+// Validate reports whether payload contains a complete attachment body.
+func (payload AttachmentPayload) Validate() error {
+	metadata := common.AttachmentMetadata{
+		Filename:    payload.Filename,
+		ContentType: payload.ContentType,
+		SizeBytes:   payload.SizeBytes,
+	}
+	if err := metadata.Validate(); err != nil {
+		return err
+	}
+
+	decoded, err := base64.StdEncoding.DecodeString(payload.ContentBase64)
+	if err != nil {
+		return fmt.Errorf("attachment content_base64 must be valid base64: %w", err)
+	}
+	if int64(len(decoded)) != payload.SizeBytes {
+		return fmt.Errorf(
+			"attachment size bytes must match decoded content size: got %d, want %d",
+			payload.SizeBytes,
+			len(decoded),
+		)
+	}
+
+	return nil
+}
+
+// DeliveryPayload stores the raw attachment payloads that must survive stream
+// offset advancement.
+type DeliveryPayload struct {
+	// DeliveryID identifies the owning accepted delivery.
+	DeliveryID common.DeliveryID
+
+	// Attachments stores the raw inline attachment payloads.
+	Attachments []AttachmentPayload
+}
+
+// Validate reports whether payload contains a complete attachment bundle.
+func (payload DeliveryPayload) Validate() error {
+	if err := payload.DeliveryID.Validate(); err != nil {
+		return fmt.Errorf("delivery payload delivery id: %w", err)
+	}
+	if len(payload.Attachments) == 0 {
+		return errors.New("delivery payload attachments must not be empty")
+	}
+	for index, attachment := range payload.Attachments {
+		if err := attachment.Validate(); err != nil {
+			return fmt.Errorf("delivery payload attachments[%d]: %w", index, err)
+		}
+	}
+
+	return nil
+}
+
+// CreateAcceptanceInput stores the durable write set required for one
+// generic-delivery acceptance attempt.
+type CreateAcceptanceInput struct {
+	// Delivery stores the accepted delivery record.
+	Delivery deliverydomain.Delivery
+
+	// FirstAttempt stores the first scheduled attempt.
+	FirstAttempt attempt.Attempt
+
+	// DeliveryPayload stores the optional raw attachment payload bundle.
+	DeliveryPayload *DeliveryPayload
+
+	// Idempotency stores the idempotency reservation bound to Delivery.
+	Idempotency idempotency.Record
+}
+
+// Validate reports whether input contains a consistent durable write set.
+func (input CreateAcceptanceInput) Validate() error {
+	if err := input.Delivery.Validate(); err != nil {
+		return fmt.Errorf("delivery: %w", err)
+	}
+	if err := input.FirstAttempt.Validate(); err != nil {
+		return fmt.Errorf("first attempt: %w", err)
+	}
+	if input.FirstAttempt.DeliveryID != input.Delivery.DeliveryID {
+		return errors.New("first attempt delivery id must match delivery id")
+	}
+	if input.FirstAttempt.Status != attempt.StatusScheduled {
+		return fmt.Errorf("first attempt status must be %q", attempt.StatusScheduled)
+	}
+	if err := input.Idempotency.Validate(); err != nil {
+		return fmt.Errorf("idempotency: %w", err)
+	}
+	if input.Idempotency.DeliveryID != input.Delivery.DeliveryID {
+		return errors.New("idempotency delivery id must match delivery id")
+	}
+	if input.Idempotency.Source != input.Delivery.Source {
+		return errors.New("idempotency source must match delivery source")
+	}
+	if input.Idempotency.IdempotencyKey != input.Delivery.IdempotencyKey {
+		return errors.New("idempotency key must match delivery idempotency key")
+	}
+	if input.DeliveryPayload != nil {
+		if err := input.DeliveryPayload.Validate(); err != nil {
+			return fmt.Errorf("delivery payload: %w", err)
+		}
+		if input.DeliveryPayload.DeliveryID != input.Delivery.DeliveryID {
+			return errors.New("delivery payload delivery id must match delivery id")
+		}
+	}
+
+	return nil
+}
+
+// Store describes the durable storage required by the generic-delivery use
+// case.
+type Store interface {
+	// CreateAcceptance stores the complete durable write set for one generic
+	// acceptance attempt. Implementations must wrap ErrConflict when the write
+	// set races with an already accepted idempotency scope or delivery key.
+	CreateAcceptance(context.Context, CreateAcceptanceInput) error
+
+	// GetIdempotency loads the idempotency reservation for one generic-delivery
+	// scope.
+	GetIdempotency(context.Context, deliverydomain.Source, common.IdempotencyKey) (idempotency.Record, bool, error)
+
+	// GetDelivery loads one accepted delivery by its identifier.
+	GetDelivery(context.Context, common.DeliveryID) (deliverydomain.Delivery, bool, error)
+}
+
+// Clock provides the current wall-clock time.
+type Clock interface {
+	// Now returns the current time.
+	Now() time.Time
+}
+
+// Telemetry records low-cardinality generic-delivery outcomes.
+type Telemetry interface {
+	// RecordGenericDeliveryOutcome records one coarse generic-acceptance
+	// outcome.
+	RecordGenericDeliveryOutcome(context.Context, string)
+
+	// RecordAcceptedGenericDelivery records one newly accepted generic
+	// delivery.
+	RecordAcceptedGenericDelivery(context.Context)
+
+	// RecordDeliveryStatusTransition records one durable delivery status
+	// transition.
+	RecordDeliveryStatusTransition(context.Context, string, string)
+}
+
+// Config stores the dependencies and policy used by Service.
+type Config struct {
+	// Store owns the durable accepted state.
+	Store Store
+
+	// Clock provides wall-clock timestamps.
+	Clock Clock
+
+	// Telemetry records low-cardinality acceptance outcomes.
+	Telemetry Telemetry
+
+	// TracerProvider constructs the application span recorder used by the
+	// generic acceptance flow.
+	TracerProvider oteltrace.TracerProvider
+
+	// Logger writes structured generic acceptance logs.
+	Logger *slog.Logger
+
+	// IdempotencyTTL stores how long accepted idempotency scopes remain valid.
+	IdempotencyTTL time.Duration
+}
+
+// Service durably accepts generic asynchronous delivery commands.
+type Service struct {
+	store          Store
+	clock          Clock
+	telemetry      Telemetry
+	tracerProvider oteltrace.TracerProvider
+	logger         *slog.Logger
+	idempotencyTTL time.Duration
+}
+
+// New constructs Service from cfg.
+func New(cfg Config) (*Service, error) {
+	switch {
+	case cfg.Store == nil:
+		return nil, errors.New("new accept generic delivery service: nil store")
+	case cfg.Clock == nil:
+		return nil, errors.New("new accept generic delivery service: nil clock")
+	case cfg.IdempotencyTTL <= 0:
+		return nil, errors.New("new accept generic delivery service: non-positive idempotency ttl")
+	default:
+		tracerProvider := cfg.TracerProvider
+		if tracerProvider == nil {
+			tracerProvider = otel.GetTracerProvider()
+		}
+		logger := cfg.Logger
+		if logger == nil {
+			logger = slog.Default()
+		}
+
+		return &Service{
+			store:          cfg.Store,
+			clock:          cfg.Clock,
+			telemetry:      cfg.Telemetry,
+			tracerProvider: tracerProvider,
+			logger:         logger.With("component", "accept_generic_delivery"),
+			idempotencyTTL: cfg.IdempotencyTTL,
+		}, nil
+	}
+}
+
+// Execute accepts one normalized generic-delivery command.
+func (service *Service) Execute(ctx context.Context, command streamcommand.Command) (Result, error) {
+	if ctx == nil {
+		return Result{}, errors.New("accept generic delivery: nil context")
+	}
+	if service == nil {
+		return Result{}, errors.New("accept generic delivery: nil service")
+	}
+	if err := command.Validate(); err != nil {
+		return Result{}, fmt.Errorf("accept generic delivery: %w", err)
+	}
+
+	ctx, span := service.tracerProvider.Tracer(tracerName).Start(ctx, "mail.accept_generic_delivery")
+	defer span.End()
+	span.SetAttributes(
+		attribute.String("mail.delivery_id", command.DeliveryID.String()),
+		attribute.String("mail.source", string(command.Source)),
+		attribute.String("mail.payload_mode", string(command.PayloadMode)),
+	)
+	if strings.TrimSpace(command.TraceID) != "" {
+		span.SetAttributes(attribute.String("mail.command_trace_id", command.TraceID))
+	}
+	if !command.TemplateID.IsZero() {
+		span.SetAttributes(attribute.String("mail.template_id", command.TemplateID.String()))
+	}
+
+	fingerprint, err := command.Fingerprint()
+	if err != nil {
+		return Result{}, fmt.Errorf("accept
generic delivery: %w", err) + } + + if result, handled, err := service.resolveReplay(ctx, command, fingerprint); handled { + if err != nil { + service.recordOutcome(ctx, replayOutcomeForError(err)) + return Result{}, err + } + + service.recordOutcome(ctx, string(result.Outcome)) + return result, nil + } + + createInput, result, err := service.buildCreateInput(command, fingerprint) + if err != nil { + return Result{}, fmt.Errorf("accept generic delivery: %w", err) + } + + if err := service.store.CreateAcceptance(ctx, createInput); err != nil { + if !errors.Is(err, ErrConflict) { + service.recordOutcome(ctx, "service_unavailable") + return Result{}, fmt.Errorf("%w: create acceptance: %v", ErrServiceUnavailable, err) + } + + if replayResult, handled, replayErr := service.resolveReplay(ctx, command, fingerprint); handled { + if replayErr != nil { + service.recordOutcome(ctx, replayOutcomeForError(replayErr)) + return Result{}, replayErr + } + + service.recordOutcome(ctx, string(replayResult.Outcome)) + return replayResult, nil + } + + service.recordOutcome(ctx, "service_unavailable") + return Result{}, fmt.Errorf("%w: create acceptance conflict without replay state", ErrServiceUnavailable) + } + + service.recordOutcome(ctx, string(result.Outcome)) + service.recordAcceptedDelivery(ctx) + service.recordStatusTransition(ctx, createInput.Delivery) + span.SetAttributes( + attribute.String("mail.status", string(createInput.Delivery.Status)), + attribute.String("mail.outcome", string(result.Outcome)), + ) + logArgs := logging.CommandAttrs(command) + logArgs = append(logArgs, + "status", string(createInput.Delivery.Status), + "outcome", string(result.Outcome), + "payload_mode", string(command.PayloadMode), + ) + logArgs = append(logArgs, logging.TraceAttrsFromContext(ctx)...) + service.logger.Info("generic delivery accepted", logArgs...) 
+ return result, nil +} + +func (service *Service) buildCreateInput(command streamcommand.Command, fingerprint string) (CreateAcceptanceInput, Result, error) { + now := service.clock.Now().UTC().Truncate(time.Millisecond) + + deliveryRecord := deliverydomain.Delivery{ + DeliveryID: command.DeliveryID, + Source: command.Source, + PayloadMode: command.PayloadMode, + Envelope: command.Envelope, + Attachments: attachmentMetadata(command.Attachments), + IdempotencyKey: command.IdempotencyKey, + Status: deliverydomain.StatusQueued, + AttemptCount: 1, + CreatedAt: now, + UpdatedAt: now, + } + + switch command.PayloadMode { + case deliverydomain.PayloadModeRendered: + deliveryRecord.Content = deliverydomain.Content{ + Subject: command.Subject, + TextBody: command.TextBody, + HTMLBody: command.HTMLBody, + } + case deliverydomain.PayloadModeTemplate: + deliveryRecord.TemplateID = command.TemplateID + deliveryRecord.Locale = command.Locale + deliveryRecord.TemplateVariables = cloneJSONObject(command.Variables) + default: + return CreateAcceptanceInput{}, Result{}, fmt.Errorf("build generic delivery record: unsupported payload mode %q", command.PayloadMode) + } + + if err := deliveryRecord.Validate(); err != nil { + return CreateAcceptanceInput{}, Result{}, fmt.Errorf("build generic delivery record: %w", err) + } + + firstAttempt := attempt.Attempt{ + DeliveryID: command.DeliveryID, + AttemptNo: 1, + ScheduledFor: now, + Status: attempt.StatusScheduled, + } + if err := firstAttempt.Validate(); err != nil { + return CreateAcceptanceInput{}, Result{}, fmt.Errorf("build generic first attempt: %w", err) + } + + createInput := CreateAcceptanceInput{ + Delivery: deliveryRecord, + FirstAttempt: firstAttempt, + Idempotency: idempotency.Record{ + Source: command.Source, + IdempotencyKey: command.IdempotencyKey, + DeliveryID: command.DeliveryID, + RequestFingerprint: fingerprint, + CreatedAt: now, + ExpiresAt: now.Add(service.idempotencyTTL), + }, + } + if len(command.Attachments) > 0 { 
+ createInput.DeliveryPayload = &DeliveryPayload{ + DeliveryID: command.DeliveryID, + Attachments: attachmentPayloads(command.Attachments), + } + } + if err := createInput.Validate(); err != nil { + return CreateAcceptanceInput{}, Result{}, fmt.Errorf("build generic create input: %w", err) + } + + result := Result{Outcome: OutcomeAccepted} + if err := result.Validate(); err != nil { + return CreateAcceptanceInput{}, Result{}, fmt.Errorf("build generic delivery result: %w", err) + } + + return createInput, result, nil +} + +func (service *Service) resolveReplay(ctx context.Context, command streamcommand.Command, fingerprint string) (Result, bool, error) { + record, found, err := service.store.GetIdempotency(ctx, command.Source, command.IdempotencyKey) + if err != nil { + return Result{}, true, fmt.Errorf("%w: load idempotency: %v", ErrServiceUnavailable, err) + } + if !found { + return Result{}, false, nil + } + if record.RequestFingerprint != fingerprint { + return Result{}, true, fmt.Errorf("%w: request conflicts with current state", ErrConflict) + } + + deliveryRecord, found, err := service.store.GetDelivery(ctx, record.DeliveryID) + if err != nil { + return Result{}, true, fmt.Errorf("%w: load delivery: %v", ErrServiceUnavailable, err) + } + if !found { + return Result{}, true, fmt.Errorf("%w: delivery %q is missing for idempotency scope", ErrServiceUnavailable, record.DeliveryID) + } + + if deliveryRecord.DeliveryID != command.DeliveryID { + return Result{}, true, fmt.Errorf("%w: idempotency delivery %q mismatches command delivery %q", ErrServiceUnavailable, deliveryRecord.DeliveryID, command.DeliveryID) + } + + return deriveReplayResult(deliveryRecord) +} + +func deriveReplayResult(record deliverydomain.Delivery) (Result, bool, error) { + switch record.Status { + case deliverydomain.StatusAccepted, + deliverydomain.StatusQueued, + deliverydomain.StatusRendered, + deliverydomain.StatusSending, + deliverydomain.StatusSent, + deliverydomain.StatusSuppressed, + 
deliverydomain.StatusFailed, + deliverydomain.StatusDeadLetter: + return Result{Outcome: OutcomeDuplicate}, true, nil + default: + return Result{}, true, fmt.Errorf("%w: unsupported replay delivery status %q", ErrServiceUnavailable, record.Status) + } +} + +func (service *Service) recordAcceptedDelivery(ctx context.Context) { + if service == nil || service.telemetry == nil { + return + } + + service.telemetry.RecordAcceptedGenericDelivery(ctx) +} + +func (service *Service) recordStatusTransition(ctx context.Context, record deliverydomain.Delivery) { + if service == nil || service.telemetry == nil { + return + } + + service.telemetry.RecordDeliveryStatusTransition(ctx, string(record.Status), string(record.Source)) +} + +func (service *Service) recordOutcome(ctx context.Context, outcome string) { + if service == nil || service.telemetry == nil || strings.TrimSpace(outcome) == "" { + return + } + + service.telemetry.RecordGenericDeliveryOutcome(ctx, outcome) +} + +func replayOutcomeForError(err error) string { + switch { + case errors.Is(err, ErrConflict): + return "conflict" + case errors.Is(err, ErrServiceUnavailable): + return "service_unavailable" + default: + return "" + } +} + +func attachmentMetadata(values []streamcommand.Attachment) []common.AttachmentMetadata { + if values == nil { + return nil + } + + result := make([]common.AttachmentMetadata, len(values)) + for index, value := range values { + result[index] = common.AttachmentMetadata{ + Filename: value.Filename, + ContentType: value.ContentType, + SizeBytes: value.SizeBytes, + } + } + + return result +} + +func attachmentPayloads(values []streamcommand.Attachment) []AttachmentPayload { + result := make([]AttachmentPayload, len(values)) + for index, value := range values { + result[index] = AttachmentPayload{ + Filename: value.Filename, + ContentType: value.ContentType, + ContentBase64: value.ContentBase64, + SizeBytes: value.SizeBytes, + } + } + + return result +} + +func cloneJSONObject(value 
map[string]any) map[string]any { + if value == nil { + return nil + } + + cloned := make(map[string]any, len(value)) + for key, item := range value { + cloned[key] = cloneJSONValue(item) + } + + return cloned +} + +func cloneJSONValue(value any) any { + switch typed := value.(type) { + case map[string]any: + cloned := make(map[string]any, len(typed)) + for key, item := range typed { + cloned[key] = cloneJSONValue(item) + } + return cloned + case []any: + cloned := make([]any, len(typed)) + for index, item := range typed { + cloned[index] = cloneJSONValue(item) + } + return cloned + default: + return typed + } +} diff --git a/mail/internal/service/acceptgenericdelivery/service_test.go b/mail/internal/service/acceptgenericdelivery/service_test.go new file mode 100644 index 0000000..df66a67 --- /dev/null +++ b/mail/internal/service/acceptgenericdelivery/service_test.go @@ -0,0 +1,319 @@ +package acceptgenericdelivery + +import ( + "bytes" + "context" + "encoding/base64" + "errors" + "log/slog" + "testing" + "time" + + "galaxy/mail/internal/api/streamcommand" + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/domain/idempotency" + + "github.com/stretchr/testify/require" + sdktrace "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/sdk/trace/tracetest" +) + +func TestServiceExecuteAcceptsRenderedDelivery(t *testing.T) { + t.Parallel() + + store := &stubStore{} + telemetry := &stubTelemetry{} + service := newTestService(t, Config{ + Store: store, + Clock: stubClock{now: fixedNow()}, + Telemetry: telemetry, + IdempotencyTTL: 7 * 24 * time.Hour, + }) + + result, err := service.Execute(context.Background(), validRenderedCommand(t)) + require.NoError(t, err) + require.Equal(t, Result{Outcome: OutcomeAccepted}, result) + require.Len(t, store.createInputs, 1) + require.Equal(t, deliverydomain.StatusQueued, store.createInputs[0].Delivery.Status) + 
require.Equal(t, deliverydomain.PayloadModeRendered, store.createInputs[0].Delivery.PayloadMode) + require.Equal(t, "Turn ready", store.createInputs[0].Delivery.Content.Subject) + require.NotNil(t, store.createInputs[0].DeliveryPayload) + require.Equal(t, []string{"accepted"}, telemetry.outcomes) + require.Equal(t, 1, telemetry.accepted) + require.Equal(t, []string{"notification:queued"}, telemetry.statuses) +} + +func TestServiceExecuteAcceptsTemplateDelivery(t *testing.T) { + t.Parallel() + + store := &stubStore{} + telemetry := &stubTelemetry{} + service := newTestService(t, Config{ + Store: store, + Clock: stubClock{now: fixedNow()}, + Telemetry: telemetry, + IdempotencyTTL: 7 * 24 * time.Hour, + }) + + result, err := service.Execute(context.Background(), validTemplateCommand(t)) + require.NoError(t, err) + require.Equal(t, Result{Outcome: OutcomeAccepted}, result) + require.Len(t, store.createInputs, 1) + require.Nil(t, store.createInputs[0].DeliveryPayload) + require.Equal(t, common.TemplateID("game.turn_ready"), store.createInputs[0].Delivery.TemplateID) + require.Equal(t, map[string]any{ + "turn_number": float64(54), + "player": map[string]any{ + "name": "Pilot", + }, + }, store.createInputs[0].Delivery.TemplateVariables) + require.Equal(t, []string{"accepted"}, telemetry.outcomes) + require.Equal(t, 1, telemetry.accepted) + require.Equal(t, []string{"notification:queued"}, telemetry.statuses) +} + +func TestServiceExecuteReturnsStableDuplicateResult(t *testing.T) { + t.Parallel() + + command := validTemplateCommand(t) + fingerprint, err := command.Fingerprint() + require.NoError(t, err) + + store := &stubStore{ + idempotencyRecord: &idempotency.Record{ + Source: deliverydomain.SourceNotification, + IdempotencyKey: command.IdempotencyKey, + DeliveryID: command.DeliveryID, + RequestFingerprint: fingerprint, + CreatedAt: fixedNow(), + ExpiresAt: fixedNow().Add(7 * 24 * time.Hour), + }, + deliveryRecord: &deliverydomain.Delivery{ + DeliveryID: 
command.DeliveryID, + Source: deliverydomain.SourceNotification, + PayloadMode: deliverydomain.PayloadModeTemplate, + TemplateID: command.TemplateID, + Envelope: command.Envelope, + Locale: command.Locale, + TemplateVariables: map[string]any{ + "turn_number": float64(54), + "player": map[string]any{ + "name": "Pilot", + }, + }, + IdempotencyKey: command.IdempotencyKey, + Status: deliverydomain.StatusQueued, + AttemptCount: 1, + CreatedAt: fixedNow(), + UpdatedAt: fixedNow(), + }, + } + require.NoError(t, store.idempotencyRecord.Validate()) + require.NoError(t, store.deliveryRecord.Validate()) + + telemetry := &stubTelemetry{} + service := newTestService(t, Config{ + Store: store, + Clock: stubClock{now: fixedNow()}, + Telemetry: telemetry, + IdempotencyTTL: 7 * 24 * time.Hour, + }) + + result, err := service.Execute(context.Background(), command) + require.NoError(t, err) + require.Equal(t, Result{Outcome: OutcomeDuplicate}, result) + require.Empty(t, store.createInputs) + require.Equal(t, []string{"duplicate"}, telemetry.outcomes) +} + +func TestServiceExecuteRejectsConflictingReplay(t *testing.T) { + t.Parallel() + + command := validRenderedCommand(t) + store := &stubStore{ + idempotencyRecord: &idempotency.Record{ + Source: deliverydomain.SourceNotification, + IdempotencyKey: command.IdempotencyKey, + DeliveryID: command.DeliveryID, + RequestFingerprint: "sha256:other", + CreatedAt: fixedNow(), + ExpiresAt: fixedNow().Add(7 * 24 * time.Hour), + }, + } + require.NoError(t, store.idempotencyRecord.Validate()) + + telemetry := &stubTelemetry{} + service := newTestService(t, Config{ + Store: store, + Clock: stubClock{now: fixedNow()}, + Telemetry: telemetry, + IdempotencyTTL: 7 * 24 * time.Hour, + }) + + _, err := service.Execute(context.Background(), command) + require.Error(t, err) + require.ErrorIs(t, err, ErrConflict) + require.Equal(t, []string{"conflict"}, telemetry.outcomes) +} + +func TestServiceExecuteReturnsServiceUnavailableOnCreateFailure(t *testing.T) { 
+ t.Parallel() + + telemetry := &stubTelemetry{} + service := newTestService(t, Config{ + Store: &stubStore{ + createErr: errors.New("redis unavailable"), + }, + Clock: stubClock{now: fixedNow()}, + Telemetry: telemetry, + IdempotencyTTL: 7 * 24 * time.Hour, + }) + + _, err := service.Execute(context.Background(), validRenderedCommand(t)) + require.Error(t, err) + require.ErrorIs(t, err, ErrServiceUnavailable) + require.Equal(t, []string{"service_unavailable"}, telemetry.outcomes) +} + +func TestServiceExecuteLogsAcceptedDeliveryAndCreatesSpan(t *testing.T) { + t.Parallel() + + store := &stubStore{} + telemetry := &stubTelemetry{} + loggerBuffer := &bytes.Buffer{} + recorder := tracetest.NewSpanRecorder() + tracerProvider := sdktrace.NewTracerProvider(sdktrace.WithSpanProcessor(recorder)) + command := validTemplateCommand(t) + command.TraceID = "trace-123" + + service := newTestService(t, Config{ + Store: store, + Clock: stubClock{now: fixedNow()}, + Telemetry: telemetry, + TracerProvider: tracerProvider, + Logger: slog.New(slog.NewJSONHandler(loggerBuffer, nil)), + IdempotencyTTL: 7 * 24 * time.Hour, + }) + + _, err := service.Execute(context.Background(), command) + require.NoError(t, err) + require.Contains(t, loggerBuffer.String(), "\"delivery_id\":\"mail-124\"") + require.Contains(t, loggerBuffer.String(), "\"source\":\"notification\"") + require.Contains(t, loggerBuffer.String(), "\"template_id\":\"game.turn_ready\"") + require.Contains(t, loggerBuffer.String(), "\"trace_id\":\"trace-123\"") + require.Contains(t, loggerBuffer.String(), "\"otel_trace_id\":") + require.True(t, hasSpanNamed(recorder.Ended(), "mail.accept_generic_delivery")) +} + +type stubStore struct { + createInputs []CreateAcceptanceInput + createErr error + idempotencyRecord *idempotency.Record + deliveryRecord *deliverydomain.Delivery +} + +func (store *stubStore) CreateAcceptance(_ context.Context, input CreateAcceptanceInput) error { + store.createInputs = append(store.createInputs, 
input) + return store.createErr +} + +func (store *stubStore) GetIdempotency(_ context.Context, _ deliverydomain.Source, _ common.IdempotencyKey) (idempotency.Record, bool, error) { + if store.idempotencyRecord == nil { + return idempotency.Record{}, false, nil + } + + return *store.idempotencyRecord, true, nil +} + +func (store *stubStore) GetDelivery(_ context.Context, _ common.DeliveryID) (deliverydomain.Delivery, bool, error) { + if store.deliveryRecord == nil { + return deliverydomain.Delivery{}, false, nil + } + + return *store.deliveryRecord, true, nil +} + +type stubClock struct { + now time.Time +} + +func (clock stubClock) Now() time.Time { + return clock.now +} + +type stubTelemetry struct { + outcomes []string + accepted int + statuses []string +} + +func (telemetry *stubTelemetry) RecordGenericDeliveryOutcome(_ context.Context, outcome string) { + telemetry.outcomes = append(telemetry.outcomes, outcome) +} + +func (telemetry *stubTelemetry) RecordAcceptedGenericDelivery(context.Context) { + telemetry.accepted++ +} + +func (telemetry *stubTelemetry) RecordDeliveryStatusTransition(_ context.Context, status string, source string) { + telemetry.statuses = append(telemetry.statuses, source+":"+status) +} + +func newTestService(t *testing.T, cfg Config) *Service { + t.Helper() + + service, err := New(cfg) + require.NoError(t, err) + + return service +} + +func validRenderedCommand(t *testing.T) streamcommand.Command { + t.Helper() + + command, err := streamcommand.DecodeCommand(map[string]any{ + "delivery_id": "mail-123", + "source": "notification", + "payload_mode": "rendered", + "idempotency_key": "notification:mail-123", + "requested_at_ms": "1775121700000", + "payload_json": `{"to":["pilot@example.com"],"cc":[],"bcc":[],"reply_to":["noreply@example.com"],"subject":"Turn ready","text_body":"Turn 54 is ready.","html_body":"
<p>Turn 54 is ready.</p>
","attachments":[{"filename":"report.txt","content_type":"text/plain","content_base64":"` + base64.StdEncoding.EncodeToString([]byte("report")) + `"}]}`, + }) + require.NoError(t, err) + + return command +} + +func validTemplateCommand(t *testing.T) streamcommand.Command { + t.Helper() + + command, err := streamcommand.DecodeCommand(map[string]any{ + "delivery_id": "mail-124", + "source": "notification", + "payload_mode": "template", + "idempotency_key": "notification:mail-124", + "requested_at_ms": "1775121700001", + "payload_json": `{"to":["pilot@example.com"],"cc":[],"bcc":[],"reply_to":[],"template_id":"game.turn_ready","locale":"fr-FR","variables":{"turn_number":54,"player":{"name":"Pilot"}},"attachments":[]}`, + }) + require.NoError(t, err) + + return command +} + +func fixedNow() time.Time { + return time.Unix(1_775_121_700, 0).UTC() +} + +func hasSpanNamed(spans []sdktrace.ReadOnlySpan, name string) bool { + for _, span := range spans { + if span.Name() == name { + return true + } + } + + return false +} + +var _ = attempt.Attempt{} diff --git a/mail/internal/service/executeattempt/service.go b/mail/internal/service/executeattempt/service.go new file mode 100644 index 0000000..30335c3 --- /dev/null +++ b/mail/internal/service/executeattempt/service.go @@ -0,0 +1,781 @@ +// Package executeattempt implements provider execution, retry planning, and +// terminal state handling for claimed delivery attempts. 
+package executeattempt + +import ( + "context" + "encoding/base64" + "errors" + "fmt" + "log/slog" + "strings" + "time" + + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/logging" + "galaxy/mail/internal/ports" + "galaxy/mail/internal/service/acceptgenericdelivery" + "galaxy/mail/internal/service/renderdelivery" + + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/attribute" + oteltrace "go.opentelemetry.io/otel/trace" +) + +var ( + // ErrServiceUnavailable reports that attempt execution could not safely + // load or persist durable state. + ErrServiceUnavailable = errors.New("execute attempt service unavailable") +) + +var retryDelays = [...]time.Duration{ + time.Minute, + 5 * time.Minute, + 30 * time.Minute, +} + +const ( + retryExhaustedClassification = "retry_exhausted" + retryRecoveryHint = "check SMTP connectivity" + claimTTLClassification = "claim_ttl_expired" + claimTTLSummary = "attempt claim TTL expired" + deadlineExceededDetail = "deadline_exceeded" + tracerName = "galaxy/mail/executeattempt" +) + +// WorkItem stores one delivery together with the concrete attempt that should +// be prepared, executed, or recovered. +type WorkItem struct { + // Delivery stores the owning logical delivery record. + Delivery deliverydomain.Delivery + + // Attempt stores the concrete delivery attempt record. + Attempt attempt.Attempt +} + +// ValidateForPreparation reports whether item can be prepared for claim-time +// rendering decisions. 
+func (item WorkItem) ValidateForPreparation() error { + if err := item.validateCommon(); err != nil { + return err + } + if item.Attempt.Status != attempt.StatusScheduled { + return fmt.Errorf("work attempt status must be %q", attempt.StatusScheduled) + } + switch item.Delivery.Status { + case deliverydomain.StatusQueued, deliverydomain.StatusRendered: + default: + return fmt.Errorf( + "work delivery status must be %q or %q", + deliverydomain.StatusQueued, + deliverydomain.StatusRendered, + ) + } + + return nil +} + +// ValidateForExecution reports whether item represents one claimed in-flight +// provider execution. +func (item WorkItem) ValidateForExecution() error { + if err := item.validateCommon(); err != nil { + return err + } + if item.Delivery.Status != deliverydomain.StatusSending { + return fmt.Errorf("work delivery status must be %q", deliverydomain.StatusSending) + } + if item.Attempt.Status != attempt.StatusInProgress { + return fmt.Errorf("work attempt status must be %q", attempt.StatusInProgress) + } + + return nil +} + +func (item WorkItem) validateCommon() error { + if err := item.Delivery.Validate(); err != nil { + return fmt.Errorf("work delivery: %w", err) + } + if err := item.Attempt.Validate(); err != nil { + return fmt.Errorf("work attempt: %w", err) + } + if item.Attempt.DeliveryID != item.Delivery.DeliveryID { + return errors.New("work attempt delivery id must match delivery id") + } + if item.Delivery.AttemptCount != item.Attempt.AttemptNo { + return errors.New("work delivery attempt count must match attempt number") + } + + return nil +} + +// CommitStateInput stores one complete durable attempt outcome mutation. +type CommitStateInput struct { + // Delivery stores the mutated delivery record. + Delivery deliverydomain.Delivery + + // Attempt stores the terminal current attempt record. + Attempt attempt.Attempt + + // NextAttempt stores the optional next scheduled retry attempt. 
+ NextAttempt *attempt.Attempt + + // DeadLetter stores the optional dead-letter record when Delivery becomes + // `dead_letter`. + DeadLetter *deliverydomain.DeadLetterEntry +} + +// Validate reports whether input stores one complete and internally +// consistent durable mutation. +func (input CommitStateInput) Validate() error { + if err := input.Delivery.Validate(); err != nil { + return fmt.Errorf("delivery: %w", err) + } + if err := input.Attempt.Validate(); err != nil { + return fmt.Errorf("attempt: %w", err) + } + if !input.Attempt.Status.IsTerminal() { + return errors.New("attempt status must be terminal") + } + if input.Attempt.DeliveryID != input.Delivery.DeliveryID { + return errors.New("attempt delivery id must match delivery id") + } + if input.Delivery.LastAttemptStatus != input.Attempt.Status { + return errors.New("delivery last attempt status must match attempt status") + } + + if input.NextAttempt != nil { + if err := input.NextAttempt.Validate(); err != nil { + return fmt.Errorf("next attempt: %w", err) + } + if input.NextAttempt.DeliveryID != input.Delivery.DeliveryID { + return errors.New("next attempt delivery id must match delivery id") + } + if input.NextAttempt.Status != attempt.StatusScheduled { + return fmt.Errorf("next attempt status must be %q", attempt.StatusScheduled) + } + if input.Delivery.Status != deliverydomain.StatusQueued { + return fmt.Errorf("delivery status with next attempt must be %q", deliverydomain.StatusQueued) + } + if input.Delivery.AttemptCount != input.NextAttempt.AttemptNo { + return errors.New("delivery attempt count must match next attempt number") + } + if input.NextAttempt.AttemptNo != input.Attempt.AttemptNo+1 { + return errors.New("next attempt number must increment current attempt number") + } + if input.DeadLetter != nil { + return errors.New("next attempt and dead-letter entry are mutually exclusive") + } + } else if input.Delivery.AttemptCount != input.Attempt.AttemptNo { + return errors.New("delivery 
attempt count must match current attempt number without next attempt") + } + + if err := deliverydomain.ValidateDeadLetterState(input.Delivery, input.DeadLetter); err != nil { + return fmt.Errorf("dead-letter state: %w", err) + } + + switch input.Delivery.Status { + case deliverydomain.StatusSent: + if input.Attempt.Status != attempt.StatusProviderAccepted { + return fmt.Errorf("sent delivery requires attempt status %q", attempt.StatusProviderAccepted) + } + case deliverydomain.StatusSuppressed, deliverydomain.StatusFailed: + if input.Attempt.Status != attempt.StatusProviderRejected { + return fmt.Errorf( + "%s delivery requires attempt status %q", + input.Delivery.Status, + attempt.StatusProviderRejected, + ) + } + case deliverydomain.StatusQueued: + if input.NextAttempt == nil { + return errors.New("queued delivery requires next attempt") + } + switch input.Attempt.Status { + case attempt.StatusTransportFailed, attempt.StatusTimedOut: + default: + return fmt.Errorf( + "queued delivery requires attempt status %q or %q", + attempt.StatusTransportFailed, + attempt.StatusTimedOut, + ) + } + case deliverydomain.StatusDeadLetter: + switch input.Attempt.Status { + case attempt.StatusTransportFailed, attempt.StatusTimedOut: + default: + return fmt.Errorf( + "dead-letter delivery requires attempt status %q or %q", + attempt.StatusTransportFailed, + attempt.StatusTimedOut, + ) + } + default: + return fmt.Errorf("unsupported delivery status %q for commit input", input.Delivery.Status) + } + + return nil +} + +// Renderer materializes template-mode deliveries before a scheduler claims an +// attempt for outbound execution. +type Renderer interface { + // Execute renders or terminally fails one queued template-mode delivery. + Execute(context.Context, renderdelivery.Input) (renderdelivery.Result, error) +} + +// PayloadLoader loads raw attachment payloads for a delivery. 
+type PayloadLoader interface { + // LoadPayload returns the stored attachment payload bundle when one exists. + LoadPayload(context.Context, common.DeliveryID) (acceptgenericdelivery.DeliveryPayload, bool, error) +} + +// Store persists durable attempt execution outcomes. +type Store interface { + // Commit applies one complete durable attempt outcome mutation. + Commit(context.Context, CommitStateInput) error +} + +// Clock provides wall-clock time. +type Clock interface { + // Now returns the current time. + Now() time.Time +} + +// Telemetry records low-cardinality attempt-execution metrics. +type Telemetry interface { + // RecordDeliveryStatusTransition records one durable delivery status + // transition. + RecordDeliveryStatusTransition(context.Context, string, string) + + // RecordAttemptOutcome records one durable terminal attempt outcome. + RecordAttemptOutcome(context.Context, string, string) + + // RecordProviderSendDuration records one provider-send latency sample. + RecordProviderSendDuration(context.Context, string, string, time.Duration) +} + +// Config stores the dependencies used by Service. +type Config struct { + // Renderer stores the template renderer used during pre-claim preparation. + Renderer Renderer + + // Provider stores the outbound provider adapter. + Provider ports.Provider + + // PayloadLoader loads raw attachment payloads for SMTP construction. + PayloadLoader PayloadLoader + + // Store persists durable attempt execution outcomes. + Store Store + + // Clock provides wall-clock timestamps. + Clock Clock + + // Telemetry records low-cardinality attempt-execution metrics. + Telemetry Telemetry + + // TracerProvider constructs the application span recorder used by provider + // sends. + TracerProvider oteltrace.TracerProvider + + // Logger writes structured attempt-execution logs. + Logger *slog.Logger + + // AttemptTimeout bounds one provider execution budget. 
+ AttemptTimeout time.Duration +} + +// Service prepares template deliveries, executes claimed attempts, and +// applies retry policy. +type Service struct { + renderer Renderer + provider ports.Provider + payloadLoader PayloadLoader + store Store + clock Clock + telemetry Telemetry + tracerProvider oteltrace.TracerProvider + logger *slog.Logger + attemptTimeout time.Duration +} + +// New constructs Service from cfg. +func New(cfg Config) (*Service, error) { + switch { + case cfg.Renderer == nil: + return nil, errors.New("new execute attempt service: nil renderer") + case cfg.Provider == nil: + return nil, errors.New("new execute attempt service: nil provider") + case cfg.PayloadLoader == nil: + return nil, errors.New("new execute attempt service: nil payload loader") + case cfg.Store == nil: + return nil, errors.New("new execute attempt service: nil store") + case cfg.Clock == nil: + return nil, errors.New("new execute attempt service: nil clock") + case cfg.AttemptTimeout <= 0: + return nil, errors.New("new execute attempt service: non-positive attempt timeout") + default: + tracerProvider := cfg.TracerProvider + if tracerProvider == nil { + tracerProvider = otel.GetTracerProvider() + } + logger := cfg.Logger + if logger == nil { + logger = slog.Default() + } + + return &Service{ + renderer: cfg.Renderer, + provider: cfg.Provider, + payloadLoader: cfg.PayloadLoader, + store: cfg.Store, + clock: cfg.Clock, + telemetry: cfg.Telemetry, + tracerProvider: tracerProvider, + logger: logger.With("component", "execute_attempt"), + attemptTimeout: cfg.AttemptTimeout, + }, nil + } +} + +// Prepare renders one template-mode queued delivery when its content has not +// been materialized yet. The boolean result reports whether the scheduler may +// proceed to claim the attempt. 
+func (service *Service) Prepare(ctx context.Context, item WorkItem) (bool, error) { + if ctx == nil { + return false, errors.New("prepare execute attempt: nil context") + } + if service == nil { + return false, errors.New("prepare execute attempt: nil service") + } + if err := item.ValidateForPreparation(); err != nil { + return false, fmt.Errorf("prepare execute attempt: %w", err) + } + if item.Delivery.PayloadMode != deliverydomain.PayloadModeTemplate { + return true, nil + } + if item.Delivery.Status == deliverydomain.StatusRendered { + return true, nil + } + if err := item.Delivery.Content.ValidateMaterialized(); err == nil { + return true, nil + } + + result, err := service.renderer.Execute(ctx, renderdelivery.Input{ + Delivery: item.Delivery, + Attempt: item.Attempt, + }) + if err != nil { + return false, fmt.Errorf("prepare execute attempt: %w", err) + } + if result.Outcome == renderdelivery.OutcomeFailed { + return false, nil + } + + return true, nil +} + +// Execute runs one claimed in-progress attempt through the provider and +// durably records the resulting outcome. 
+func (service *Service) Execute(ctx context.Context, item WorkItem) error { + if ctx == nil { + return errors.New("execute attempt: nil context") + } + if service == nil { + return errors.New("execute attempt: nil service") + } + if err := item.ValidateForExecution(); err != nil { + return fmt.Errorf("execute attempt: %w", err) + } + + message, err := service.buildMessage(ctx, item.Delivery) + if err != nil { + return err + } + + sendStartedAt := time.Now() + sendCtx, span := service.tracerProvider.Tracer(tracerName).Start( + ctx, + "mail.provider_send", + oteltrace.WithAttributes( + attribute.String("mail.delivery_id", item.Delivery.DeliveryID.String()), + attribute.String("mail.source", string(item.Delivery.Source)), + attribute.Int("mail.attempt_no", item.Attempt.AttemptNo), + ), + ) + if !item.Delivery.TemplateID.IsZero() { + span.SetAttributes(attribute.String("mail.template_id", item.Delivery.TemplateID.String())) + } + providerCtx, cancel := context.WithTimeout(sendCtx, service.attemptTimeout) + defer cancel() + defer span.End() + + result, err := service.provider.Send(providerCtx, message) + if err != nil { + span.RecordError(err) + return fmt.Errorf("execute attempt: send provider message: %w", err) + } + if err := result.Validate(); err != nil { + span.RecordError(err) + return fmt.Errorf("execute attempt: provider result: %w", err) + } + providerName := providerNameFromSummary(result.Summary) + sendDuration := time.Since(sendStartedAt) + service.recordProviderSendDuration(sendCtx, providerName, string(result.Classification), sendDuration) + span.SetAttributes( + attribute.String("mail.provider", providerName), + attribute.String("mail.provider_outcome", string(result.Classification)), + attribute.String("mail.provider_summary", result.Summary), + ) + + commit, err := service.commitForProviderResult(item, result) + if err != nil { + return err + } + if err := service.store.Commit(ctx, commit); err != nil { + return fmt.Errorf("%w: commit attempt outcome: 
%v", ErrServiceUnavailable, err) + } + service.recordCommitMetrics(sendCtx, commit, item.Delivery.Source) + service.logProviderResult(sendCtx, item, result, commit, providerName, sendDuration) + + return nil +} + +// RecoverExpired marks one stale in-progress attempt as expired and applies +// the same retry policy used for runtime timeouts. +func (service *Service) RecoverExpired(ctx context.Context, item WorkItem) error { + if ctx == nil { + return errors.New("recover expired attempt: nil context") + } + if service == nil { + return errors.New("recover expired attempt: nil service") + } + if err := item.ValidateForExecution(); err != nil { + return fmt.Errorf("recover expired attempt: %w", err) + } + + commit, err := service.commitForTimeout(item, claimTTLClassification, claimTTLSummary) + if err != nil { + return err + } + if err := service.store.Commit(ctx, commit); err != nil { + return fmt.Errorf("%w: commit recovered attempt outcome: %v", ErrServiceUnavailable, err) + } + service.recordCommitMetrics(ctx, commit, item.Delivery.Source) + + return nil +} + +func (service *Service) buildMessage(ctx context.Context, deliveryRecord deliverydomain.Delivery) (ports.Message, error) { + message := ports.Message{ + Envelope: deliveryRecord.Envelope, + Content: deliveryRecord.Content, + } + if err := message.Content.ValidateMaterialized(); err != nil { + return ports.Message{}, fmt.Errorf("execute attempt: delivery content: %w", err) + } + if len(deliveryRecord.Attachments) == 0 { + if err := message.Validate(); err != nil { + return ports.Message{}, fmt.Errorf("execute attempt: provider message: %w", err) + } + return message, nil + } + + payload, found, err := service.payloadLoader.LoadPayload(ctx, deliveryRecord.DeliveryID) + if err != nil { + return ports.Message{}, fmt.Errorf("%w: load delivery payload: %v", ErrServiceUnavailable, err) + } + if !found { + return ports.Message{}, fmt.Errorf("%w: delivery payload %q is missing", ErrServiceUnavailable, 
deliveryRecord.DeliveryID) + } + if len(payload.Attachments) != len(deliveryRecord.Attachments) { + return ports.Message{}, fmt.Errorf( + "%w: delivery payload attachment count %d mismatches delivery attachment count %d", + ErrServiceUnavailable, + len(payload.Attachments), + len(deliveryRecord.Attachments), + ) + } + + message.Attachments = make([]ports.Attachment, len(payload.Attachments)) + for index, attachmentPayload := range payload.Attachments { + metadata := deliveryRecord.Attachments[index] + if metadata.Filename != attachmentPayload.Filename || + metadata.ContentType != attachmentPayload.ContentType || + metadata.SizeBytes != attachmentPayload.SizeBytes { + return ports.Message{}, fmt.Errorf( + "%w: delivery payload attachment %d metadata mismatches delivery audit metadata", + ErrServiceUnavailable, + index, + ) + } + + content, err := base64.StdEncoding.DecodeString(attachmentPayload.ContentBase64) + if err != nil { + return ports.Message{}, fmt.Errorf( + "%w: decode delivery payload attachment %d: %v", + ErrServiceUnavailable, + index, + err, + ) + } + + message.Attachments[index] = ports.Attachment{ + Metadata: metadata, + Content: content, + } + } + if err := message.Validate(); err != nil { + return ports.Message{}, fmt.Errorf("execute attempt: provider message: %w", err) + } + + return message, nil +} + +func (service *Service) commitForProviderResult(item WorkItem, result ports.Result) (CommitStateInput, error) { + switch result.Classification { + case ports.ClassificationAccepted: + return service.commitTerminal(item, attempt.StatusProviderAccepted, deliverydomain.StatusSent, result.Summary, "") + case ports.ClassificationSuppressed: + return service.commitTerminal(item, attempt.StatusProviderRejected, deliverydomain.StatusSuppressed, result.Summary, "suppressed") + case ports.ClassificationPermanentFailure: + return service.commitTerminal(item, attempt.StatusProviderRejected, deliverydomain.StatusFailed, result.Summary, "permanent_failure") + 
case ports.ClassificationTransientFailure: + classification := attempt.StatusTransportFailed + providerClassification := "transient_failure" + if result.Details["error"] == deadlineExceededDetail { + classification = attempt.StatusTimedOut + providerClassification = deadlineExceededDetail + } + return service.commitForRetryableResult(item, classification, providerClassification, result.Summary) + default: + return CommitStateInput{}, fmt.Errorf("execute attempt: unsupported provider classification %q", result.Classification) + } +} + +func (service *Service) commitForTimeout(item WorkItem, providerClassification string, providerSummary string) (CommitStateInput, error) { + return service.commitForRetryableResult(item, attempt.StatusTimedOut, providerClassification, providerSummary) +} + +func (service *Service) commitForRetryableResult( + item WorkItem, + attemptStatus attempt.Status, + providerClassification string, + providerSummary string, +) (CommitStateInput, error) { + finishedAt := normalizedFinishedAt(service.clock.Now(), item.Attempt) + + currentAttempt := item.Attempt + currentAttempt.Status = attemptStatus + currentAttempt.FinishedAt = ptrTime(finishedAt) + currentAttempt.ProviderClassification = providerClassification + currentAttempt.ProviderSummary = providerSummary + if err := currentAttempt.Validate(); err != nil { + return CommitStateInput{}, fmt.Errorf("execute attempt: build terminal attempt: %w", err) + } + + nextDelay, ok := retryDelayForAttempt(currentAttempt.AttemptNo) + if ok { + nextScheduledFor := finishedAt.Add(nextDelay) + nextAttempt := attempt.Attempt{ + DeliveryID: item.Delivery.DeliveryID, + AttemptNo: currentAttempt.AttemptNo + 1, + ScheduledFor: nextScheduledFor, + Status: attempt.StatusScheduled, + } + if err := nextAttempt.Validate(); err != nil { + return CommitStateInput{}, fmt.Errorf("execute attempt: build next attempt: %w", err) + } + + deliveryRecord := item.Delivery + deliveryRecord.Status = deliverydomain.StatusQueued + 
deliveryRecord.AttemptCount = nextAttempt.AttemptNo + deliveryRecord.LastAttemptStatus = currentAttempt.Status + deliveryRecord.ProviderSummary = providerSummary + deliveryRecord.UpdatedAt = finishedAt + if err := deliveryRecord.Validate(); err != nil { + return CommitStateInput{}, fmt.Errorf("execute attempt: build queued delivery: %w", err) + } + + input := CommitStateInput{ + Delivery: deliveryRecord, + Attempt: currentAttempt, + NextAttempt: &nextAttempt, + } + if err := input.Validate(); err != nil { + return CommitStateInput{}, fmt.Errorf("execute attempt: build queued commit: %w", err) + } + + return input, nil + } + + deliveryRecord := item.Delivery + deliveryRecord.Status = deliverydomain.StatusDeadLetter + deliveryRecord.LastAttemptStatus = currentAttempt.Status + deliveryRecord.ProviderSummary = providerSummary + deliveryRecord.UpdatedAt = finishedAt + deliveryRecord.DeadLetteredAt = ptrTime(finishedAt) + if err := deliveryRecord.Validate(); err != nil { + return CommitStateInput{}, fmt.Errorf("execute attempt: build dead-letter delivery: %w", err) + } + + deadLetter := &deliverydomain.DeadLetterEntry{ + DeliveryID: deliveryRecord.DeliveryID, + FinalAttemptNo: currentAttempt.AttemptNo, + FailureClassification: retryExhaustedClassification, + ProviderSummary: providerSummary, + CreatedAt: finishedAt, + RecoveryHint: retryRecoveryHint, + } + + input := CommitStateInput{ + Delivery: deliveryRecord, + Attempt: currentAttempt, + DeadLetter: deadLetter, + } + if err := input.Validate(); err != nil { + return CommitStateInput{}, fmt.Errorf("execute attempt: build dead-letter commit: %w", err) + } + + return input, nil +} + +func (service *Service) commitTerminal( + item WorkItem, + attemptStatus attempt.Status, + deliveryStatus deliverydomain.Status, + providerSummary string, + providerClassification string, +) (CommitStateInput, error) { + finishedAt := normalizedFinishedAt(service.clock.Now(), item.Attempt) + + currentAttempt := item.Attempt + 
currentAttempt.Status = attemptStatus + currentAttempt.FinishedAt = ptrTime(finishedAt) + currentAttempt.ProviderClassification = providerClassification + currentAttempt.ProviderSummary = providerSummary + if err := currentAttempt.Validate(); err != nil { + return CommitStateInput{}, fmt.Errorf("execute attempt: build terminal attempt: %w", err) + } + + deliveryRecord := item.Delivery + deliveryRecord.Status = deliveryStatus + deliveryRecord.LastAttemptStatus = currentAttempt.Status + deliveryRecord.ProviderSummary = providerSummary + deliveryRecord.UpdatedAt = finishedAt + switch deliveryStatus { + case deliverydomain.StatusSent: + deliveryRecord.SentAt = ptrTime(finishedAt) + case deliverydomain.StatusSuppressed: + deliveryRecord.SuppressedAt = ptrTime(finishedAt) + case deliverydomain.StatusFailed: + deliveryRecord.FailedAt = ptrTime(finishedAt) + } + if err := deliveryRecord.Validate(); err != nil { + return CommitStateInput{}, fmt.Errorf("execute attempt: build terminal delivery: %w", err) + } + + input := CommitStateInput{ + Delivery: deliveryRecord, + Attempt: currentAttempt, + } + if err := input.Validate(); err != nil { + return CommitStateInput{}, fmt.Errorf("execute attempt: build terminal commit: %w", err) + } + + return input, nil +} + +func retryDelayForAttempt(attemptNo int) (time.Duration, bool) { + if attemptNo < 1 || attemptNo > len(retryDelays) { + return 0, false + } + + return retryDelays[attemptNo-1], true +} + +func normalizedFinishedAt(now time.Time, record attempt.Attempt) time.Time { + finishedAt := now.UTC().Truncate(time.Millisecond) + if record.StartedAt != nil && finishedAt.Before(*record.StartedAt) { + return *record.StartedAt + } + + return finishedAt +} + +func ptrTime(value time.Time) *time.Time { + return &value +} + +func (service *Service) recordCommitMetrics(ctx context.Context, commit CommitStateInput, source deliverydomain.Source) { + if service == nil || service.telemetry == nil { + return + } + + 
service.telemetry.RecordDeliveryStatusTransition(ctx, string(commit.Delivery.Status), string(source)) + service.telemetry.RecordAttemptOutcome(ctx, string(commit.Attempt.Status), string(source)) +} + +func (service *Service) recordProviderSendDuration(ctx context.Context, provider string, outcome string, duration time.Duration) { + if service == nil || service.telemetry == nil { + return + } + + service.telemetry.RecordProviderSendDuration(ctx, provider, outcome, duration) +} + +func (service *Service) logProviderResult( + ctx context.Context, + item WorkItem, + result ports.Result, + commit CommitStateInput, + providerName string, + sendDuration time.Duration, +) { + logArgs := logging.DeliveryAttemptAttrs(item.Delivery, item.Attempt) + logArgs = append(logArgs, + "provider", providerName, + "provider_outcome", string(result.Classification), + "provider_summary", result.Summary, + "delivery_status", string(commit.Delivery.Status), + "attempt_status", string(commit.Attempt.Status), + "duration_ms", float64(sendDuration.Microseconds())/1000, + ) + logArgs = append(logArgs, logging.TraceAttrsFromContext(ctx)...) + service.logger.Info("provider send completed", logArgs...) + + if commit.NextAttempt != nil { + retryArgs := logging.DeliveryAttemptAttrs(item.Delivery, item.Attempt) + retryArgs = append(retryArgs, + "next_attempt_no", commit.NextAttempt.AttemptNo, + "next_scheduled_for", commit.NextAttempt.ScheduledFor, + "provider_summary", result.Summary, + ) + retryArgs = append(retryArgs, logging.TraceAttrsFromContext(ctx)...) + service.logger.Info("delivery retry scheduled", retryArgs...) 
+	}
+
+	if commit.DeadLetter != nil {
+		deadLetterArgs := logging.DeliveryAttemptAttrs(item.Delivery, item.Attempt)
+		deadLetterArgs = append(deadLetterArgs,
+			"failure_classification", commit.DeadLetter.FailureClassification,
+			"recovery_hint", commit.DeadLetter.RecoveryHint,
+			"provider_summary", commit.DeadLetter.ProviderSummary,
+		)
+		deadLetterArgs = append(deadLetterArgs, logging.TraceAttrsFromContext(ctx)...)
+		service.logger.Warn("delivery moved to dead letter", deadLetterArgs...)
+	}
+}
+
+func providerNameFromSummary(summary string) string {
+	// strings.Fields splits on any whitespace and never yields empty or
+	// padded tokens, so no extra trimming of the value is needed.
+	for _, token := range strings.Fields(summary) {
+		if key, value, ok := strings.Cut(token, "="); ok && key == "provider" && value != "" {
+			return value
+		}
+	}
+
+	return "unknown"
+}
diff --git a/mail/internal/service/executeattempt/service_test.go b/mail/internal/service/executeattempt/service_test.go
new file mode 100644
index 0000000..b16e9c7
--- /dev/null
+++ b/mail/internal/service/executeattempt/service_test.go
@@ -0,0 +1,570 @@
+package executeattempt
+
+import (
+	"bytes"
+	"context"
+	"log/slog"
+	"testing"
+	"time"
+
+	"galaxy/mail/internal/domain/attempt"
+	"galaxy/mail/internal/domain/common"
+	deliverydomain "galaxy/mail/internal/domain/delivery"
+	"galaxy/mail/internal/ports"
+	"galaxy/mail/internal/service/acceptgenericdelivery"
+	"galaxy/mail/internal/service/renderdelivery"
+
+	"github.com/stretchr/testify/require"
+	sdktrace "go.opentelemetry.io/otel/sdk/trace"
+	"go.opentelemetry.io/otel/sdk/trace/tracetest"
+)
+
+func TestServicePrepareRendersQueuedTemplateDelivery(t *testing.T) {
+	t.Parallel()
+
+	renderedDelivery := queuedTemplateWorkItem(t).Delivery
+	renderedDelivery.Status = deliverydomain.StatusRendered
+	renderedDelivery.Content = deliverydomain.Content{
+		Subject:  "Turn 54",
+		TextBody: "Hello Pilot",
+	}
+	renderedDelivery.UpdatedAt = renderedDelivery.CreatedAt.Add(time.Minute)
+	require.NoError(t, renderedDelivery.Validate())
+
+	renderer := 
&stubRenderer{ + result: renderdelivery.Result{ + Outcome: renderdelivery.OutcomeRendered, + Delivery: renderedDelivery, + ResolvedLocale: common.Locale("en"), + TemplateVersion: "sha256:template", + LocaleFallbackUsed: false, + }, + } + + service := newTestService(t, Config{ + Renderer: renderer, + Provider: stubProvider{}, + PayloadLoader: stubPayloadLoader{}, + Store: &stubStore{}, + Clock: stubClock{now: renderedDelivery.UpdatedAt}, + AttemptTimeout: 15 * time.Second, + }) + + ready, err := service.Prepare(context.Background(), queuedTemplateWorkItem(t)) + require.NoError(t, err) + require.True(t, ready) + require.Len(t, renderer.inputs, 1) +} + +func TestServiceExecuteAcceptedRenderedDelivery(t *testing.T) { + t.Parallel() + + store := &stubStore{} + service := newTestService(t, Config{ + Renderer: &stubRenderer{}, + Provider: stubProvider{ + result: ports.Result{ + Classification: ports.ClassificationAccepted, + Summary: "provider=smtp result=accepted", + }, + }, + PayloadLoader: stubPayloadLoader{}, + Store: store, + Clock: stubClock{now: fixedNow().Add(time.Minute)}, + AttemptTimeout: 15 * time.Second, + }) + + err := service.Execute(context.Background(), renderedWorkItem(t, 1)) + require.NoError(t, err) + require.Len(t, store.inputs, 1) + require.Equal(t, deliverydomain.StatusSent, store.inputs[0].Delivery.Status) + require.Equal(t, attempt.StatusProviderAccepted, store.inputs[0].Attempt.Status) + require.Nil(t, store.inputs[0].NextAttempt) + require.Nil(t, store.inputs[0].DeadLetter) +} + +func TestServiceExecuteMapsSuppressedToProviderRejected(t *testing.T) { + t.Parallel() + + store := &stubStore{} + service := newTestService(t, Config{ + Renderer: &stubRenderer{}, + Provider: stubProvider{ + result: ports.Result{ + Classification: ports.ClassificationSuppressed, + Summary: "provider=stub result=suppressed script=policy_skip", + }, + }, + PayloadLoader: stubPayloadLoader{}, + Store: store, + Clock: stubClock{now: fixedNow().Add(time.Minute)}, + 
AttemptTimeout: 15 * time.Second, + }) + + err := service.Execute(context.Background(), renderedWorkItem(t, 1)) + require.NoError(t, err) + require.Len(t, store.inputs, 1) + require.Equal(t, deliverydomain.StatusSuppressed, store.inputs[0].Delivery.Status) + require.Equal(t, attempt.StatusProviderRejected, store.inputs[0].Attempt.Status) +} + +func TestServiceExecuteMapsPermanentFailureToFailed(t *testing.T) { + t.Parallel() + + store := &stubStore{} + service := newTestService(t, Config{ + Renderer: &stubRenderer{}, + Provider: stubProvider{ + result: ports.Result{ + Classification: ports.ClassificationPermanentFailure, + Summary: "provider=smtp result=permanent_failure phase=data smtp_code=550", + }, + }, + PayloadLoader: stubPayloadLoader{}, + Store: store, + Clock: stubClock{now: fixedNow().Add(time.Minute)}, + AttemptTimeout: 15 * time.Second, + }) + + err := service.Execute(context.Background(), renderedWorkItem(t, 1)) + require.NoError(t, err) + require.Len(t, store.inputs, 1) + require.Equal(t, deliverydomain.StatusFailed, store.inputs[0].Delivery.Status) + require.Equal(t, attempt.StatusProviderRejected, store.inputs[0].Attempt.Status) + require.Nil(t, store.inputs[0].DeadLetter) +} + +func TestServiceExecuteBuildsRetryChainAndDeadLetter(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + attemptNo int + wantStatus deliverydomain.Status + wantAttemptStatus attempt.Status + wantNextAttemptNo int + wantNextDelay time.Duration + wantDeadLetterEntry bool + }{ + { + name: "attempt one schedules retry after one minute", + attemptNo: 1, + wantStatus: deliverydomain.StatusQueued, + wantAttemptStatus: attempt.StatusTransportFailed, + wantNextAttemptNo: 2, + wantNextDelay: time.Minute, + }, + { + name: "attempt two schedules retry after five minutes", + attemptNo: 2, + wantStatus: deliverydomain.StatusQueued, + wantAttemptStatus: attempt.StatusTransportFailed, + wantNextAttemptNo: 3, + wantNextDelay: 5 * time.Minute, + }, + { + name: "attempt 
three schedules retry after thirty minutes", + attemptNo: 3, + wantStatus: deliverydomain.StatusQueued, + wantAttemptStatus: attempt.StatusTransportFailed, + wantNextAttemptNo: 4, + wantNextDelay: 30 * time.Minute, + }, + { + name: "attempt four becomes dead letter", + attemptNo: 4, + wantStatus: deliverydomain.StatusDeadLetter, + wantAttemptStatus: attempt.StatusTransportFailed, + wantDeadLetterEntry: true, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + store := &stubStore{} + service := newTestService(t, Config{ + Renderer: &stubRenderer{}, + Provider: stubProvider{ + result: ports.Result{ + Classification: ports.ClassificationTransientFailure, + Summary: "provider=smtp result=transient_failure phase=data smtp_code=451", + Details: map[string]string{ + "phase": "data", + }, + }, + }, + PayloadLoader: stubPayloadLoader{}, + Store: store, + Clock: stubClock{now: fixedNow().Add(time.Minute)}, + AttemptTimeout: 15 * time.Second, + }) + + workItem := renderedWorkItem(t, tt.attemptNo) + err := service.Execute(context.Background(), workItem) + require.NoError(t, err) + require.Len(t, store.inputs, 1) + + input := store.inputs[0] + require.Equal(t, tt.wantStatus, input.Delivery.Status) + require.Equal(t, tt.wantAttemptStatus, input.Attempt.Status) + + if tt.wantDeadLetterEntry { + require.NotNil(t, input.DeadLetter) + require.Nil(t, input.NextAttempt) + require.Equal(t, "retry_exhausted", input.DeadLetter.FailureClassification) + return + } + + require.NotNil(t, input.NextAttempt) + require.Nil(t, input.DeadLetter) + require.Equal(t, tt.wantNextAttemptNo, input.NextAttempt.AttemptNo) + require.Equal(t, input.Attempt.FinishedAt.Add(tt.wantNextDelay), input.NextAttempt.ScheduledFor) + }) + } +} + +func TestServiceExecuteClassifiesDeadlineExceededAsTimedOut(t *testing.T) { + t.Parallel() + + store := &stubStore{} + service := newTestService(t, Config{ + Renderer: &stubRenderer{}, + Provider: stubProvider{ + result: 
ports.Result{ + Classification: ports.ClassificationTransientFailure, + Summary: "provider=smtp result=transient_failure phase=context", + Details: map[string]string{ + "error": "deadline_exceeded", + }, + }, + }, + PayloadLoader: stubPayloadLoader{}, + Store: store, + Clock: stubClock{now: fixedNow().Add(time.Minute)}, + AttemptTimeout: 15 * time.Second, + }) + + err := service.Execute(context.Background(), renderedWorkItem(t, 1)) + require.NoError(t, err) + require.Len(t, store.inputs, 1) + require.Equal(t, attempt.StatusTimedOut, store.inputs[0].Attempt.Status) + require.Equal(t, "deadline_exceeded", store.inputs[0].Attempt.ProviderClassification) +} + +func TestServiceRecoverExpiredSchedulesTimedOutRetry(t *testing.T) { + t.Parallel() + + store := &stubStore{} + service := newTestService(t, Config{ + Renderer: &stubRenderer{}, + Provider: stubProvider{}, + PayloadLoader: stubPayloadLoader{}, + Store: store, + Clock: stubClock{now: fixedNow().Add(time.Minute)}, + AttemptTimeout: 15 * time.Second, + }) + + err := service.RecoverExpired(context.Background(), renderedWorkItem(t, 1)) + require.NoError(t, err) + require.Len(t, store.inputs, 1) + require.Equal(t, attempt.StatusTimedOut, store.inputs[0].Attempt.Status) + require.Equal(t, "claim_ttl_expired", store.inputs[0].Attempt.ProviderClassification) + require.Equal(t, "attempt claim TTL expired", store.inputs[0].Attempt.ProviderSummary) + require.NotNil(t, store.inputs[0].NextAttempt) +} + +func TestServiceExecuteRecordsMetricsAndLogsProviderResult(t *testing.T) { + t.Parallel() + + store := &stubStore{} + telemetry := &stubTelemetry{} + loggerBuffer := &bytes.Buffer{} + recorder := tracetest.NewSpanRecorder() + tracerProvider := sdktrace.NewTracerProvider(sdktrace.WithSpanProcessor(recorder)) + + service := newTestService(t, Config{ + Renderer: &stubRenderer{}, + Provider: stubProvider{ + result: ports.Result{ + Classification: ports.ClassificationAccepted, + Summary: "provider=smtp result=accepted", + }, + }, + 
PayloadLoader: stubPayloadLoader{}, + Store: store, + Clock: stubClock{now: fixedNow().Add(time.Minute)}, + Telemetry: telemetry, + TracerProvider: tracerProvider, + Logger: slog.New(slog.NewJSONHandler(loggerBuffer, nil)), + AttemptTimeout: 15 * time.Second, + }) + + err := service.Execute(context.Background(), sendingTemplateWorkItem(t, 1)) + require.NoError(t, err) + require.Equal(t, []string{"notification:sent"}, telemetry.statuses) + require.Equal(t, []string{"notification:provider_accepted"}, telemetry.attempts) + require.Equal(t, []string{"smtp:accepted"}, telemetry.providerDurations) + require.Contains(t, loggerBuffer.String(), "\"delivery_id\":\"delivery-template-sending\"") + require.Contains(t, loggerBuffer.String(), "\"source\":\"notification\"") + require.Contains(t, loggerBuffer.String(), "\"template_id\":\"game.turn_ready\"") + require.Contains(t, loggerBuffer.String(), "\"attempt_no\":1") + require.Contains(t, loggerBuffer.String(), "\"otel_trace_id\":") + require.True(t, hasExecuteSpanNamed(recorder.Ended(), "mail.provider_send")) +} + +func TestServiceExecuteReturnsServiceUnavailableOnMissingPayload(t *testing.T) { + t.Parallel() + + service := newTestService(t, Config{ + Renderer: &stubRenderer{}, + Provider: stubProvider{ + result: ports.Result{ + Classification: ports.ClassificationAccepted, + Summary: "provider=smtp result=accepted", + }, + }, + PayloadLoader: stubPayloadLoader{}, + Store: &stubStore{}, + Clock: stubClock{now: fixedNow().Add(time.Minute)}, + AttemptTimeout: 15 * time.Second, + }) + + workItem := renderedWorkItem(t, 1) + workItem.Delivery.Attachments = []common.AttachmentMetadata{ + {Filename: "guide.txt", ContentType: "text/plain; charset=utf-8", SizeBytes: int64(len([]byte("read me")))}, + } + require.NoError(t, workItem.Delivery.Validate()) + + err := service.Execute(context.Background(), workItem) + require.Error(t, err) + require.ErrorIs(t, err, ErrServiceUnavailable) +} + +type stubRenderer struct { + result 
renderdelivery.Result
+	err    error
+	inputs []renderdelivery.Input
+}
+
+func (renderer *stubRenderer) Execute(_ context.Context, input renderdelivery.Input) (renderdelivery.Result, error) {
+	renderer.inputs = append(renderer.inputs, input)
+	return renderer.result, renderer.err
+}
+
+type stubProvider struct {
+	result ports.Result
+	err    error
+}
+
+// Send uses a value receiver so stubProvider can be passed by value in
+// Config; it intentionally records nothing and only returns the canned
+// result, because an append through a value receiver would be silently
+// discarded.
+func (provider stubProvider) Send(_ context.Context, _ ports.Message) (ports.Result, error) {
+	return provider.result, provider.err
+}
+
+func (provider stubProvider) Close() error {
+	return nil
+}
+
+type stubPayloadLoader struct {
+	payload acceptgenericdelivery.DeliveryPayload
+	found   bool
+	err     error
+}
+
+func (loader stubPayloadLoader) LoadPayload(context.Context, common.DeliveryID) (acceptgenericdelivery.DeliveryPayload, bool, error) {
+	return loader.payload, loader.found, loader.err
+}
+
+type stubStore struct {
+	inputs []CommitStateInput
+	err    error
+}
+
+func (store *stubStore) Commit(_ context.Context, input CommitStateInput) error {
+	store.inputs = append(store.inputs, input)
+	return store.err
+}
+
+type stubClock struct {
+	now time.Time
+}
+
+func (clock stubClock) Now() time.Time {
+	return clock.now
+}
+
+type stubTelemetry struct {
+	statuses          []string
+	attempts          []string
+	providerDurations []string
+}
+
+func (telemetry *stubTelemetry) RecordDeliveryStatusTransition(_ context.Context, status string, source string) {
+	telemetry.statuses = append(telemetry.statuses, source+":"+status)
+}
+
+func (telemetry *stubTelemetry) RecordAttemptOutcome(_ context.Context, status string, source string) {
+	telemetry.attempts = append(telemetry.attempts, source+":"+status)
+}
+
+func (telemetry *stubTelemetry) RecordProviderSendDuration(_ context.Context, provider string, outcome string, _ time.Duration) {
+	telemetry.providerDurations = append(telemetry.providerDurations, provider+":"+outcome)
+}
+
+func newTestService(t 
*testing.T, cfg Config) *Service { + t.Helper() + + service, err := New(cfg) + require.NoError(t, err) + + return service +} + +func queuedTemplateWorkItem(t *testing.T) WorkItem { + t.Helper() + + createdAt := fixedNow().Add(-time.Minute) + deliveryRecord := deliverydomain.Delivery{ + DeliveryID: common.DeliveryID("delivery-template"), + Source: deliverydomain.SourceNotification, + PayloadMode: deliverydomain.PayloadModeTemplate, + TemplateID: common.TemplateID("game.turn_ready"), + Envelope: deliverydomain.Envelope{ + To: []common.Email{common.Email("pilot@example.com")}, + }, + Locale: common.Locale("en"), + TemplateVariables: map[string]any{ + "player": map[string]any{ + "name": "Pilot", + }, + "turn_number": float64(54), + }, + IdempotencyKey: common.IdempotencyKey("notification:delivery-template"), + Status: deliverydomain.StatusQueued, + AttemptCount: 1, + CreatedAt: createdAt, + UpdatedAt: createdAt, + } + require.NoError(t, deliveryRecord.Validate()) + + attemptRecord := attempt.Attempt{ + DeliveryID: deliveryRecord.DeliveryID, + AttemptNo: 1, + ScheduledFor: createdAt, + Status: attempt.StatusScheduled, + } + require.NoError(t, attemptRecord.Validate()) + + return WorkItem{ + Delivery: deliveryRecord, + Attempt: attemptRecord, + } +} + +func renderedWorkItem(t *testing.T, attemptNo int) WorkItem { + t.Helper() + + createdAt := fixedNow().Add(-time.Duration(attemptNo) * time.Minute) + deliveryRecord := deliverydomain.Delivery{ + DeliveryID: common.DeliveryID("delivery-rendered"), + Source: deliverydomain.SourceNotification, + PayloadMode: deliverydomain.PayloadModeRendered, + Envelope: deliverydomain.Envelope{ + To: []common.Email{common.Email("pilot@example.com")}, + }, + Content: deliverydomain.Content{ + Subject: "Turn ready", + TextBody: "Turn 54 is ready.", + }, + IdempotencyKey: common.IdempotencyKey("notification:delivery-rendered"), + Status: deliverydomain.StatusSending, + AttemptCount: attemptNo, + CreatedAt: createdAt, + UpdatedAt: 
createdAt.Add(time.Second), + } + require.NoError(t, deliveryRecord.Validate()) + + scheduledFor := createdAt + startedAt := scheduledFor.Add(5 * time.Second) + attemptRecord := attempt.Attempt{ + DeliveryID: deliveryRecord.DeliveryID, + AttemptNo: attemptNo, + ScheduledFor: scheduledFor, + StartedAt: &startedAt, + Status: attempt.StatusInProgress, + } + require.NoError(t, attemptRecord.Validate()) + + return WorkItem{ + Delivery: deliveryRecord, + Attempt: attemptRecord, + } +} + +func sendingTemplateWorkItem(t *testing.T, attemptNo int) WorkItem { + t.Helper() + + createdAt := fixedNow().Add(-time.Duration(attemptNo) * time.Minute) + deliveryRecord := deliverydomain.Delivery{ + DeliveryID: common.DeliveryID("delivery-template-sending"), + Source: deliverydomain.SourceNotification, + PayloadMode: deliverydomain.PayloadModeTemplate, + TemplateID: common.TemplateID("game.turn_ready"), + Envelope: deliverydomain.Envelope{ + To: []common.Email{common.Email("pilot@example.com")}, + }, + Content: deliverydomain.Content{ + Subject: "Turn ready", + TextBody: "Turn 54 is ready.", + }, + Locale: common.Locale("en"), + TemplateVariables: map[string]any{ + "turn_number": float64(54), + }, + IdempotencyKey: common.IdempotencyKey("notification:delivery-template-sending"), + Status: deliverydomain.StatusSending, + AttemptCount: attemptNo, + CreatedAt: createdAt, + UpdatedAt: createdAt.Add(time.Second), + } + require.NoError(t, deliveryRecord.Validate()) + + scheduledFor := createdAt + startedAt := scheduledFor.Add(5 * time.Second) + attemptRecord := attempt.Attempt{ + DeliveryID: deliveryRecord.DeliveryID, + AttemptNo: attemptNo, + ScheduledFor: scheduledFor, + StartedAt: &startedAt, + Status: attempt.StatusInProgress, + } + require.NoError(t, attemptRecord.Validate()) + + return WorkItem{ + Delivery: deliveryRecord, + Attempt: attemptRecord, + } +} + +func fixedNow() time.Time { + return time.Unix(1_775_121_700, 0).UTC() +} + +var _ Renderer = (*stubRenderer)(nil) +var _ 
ports.Provider = stubProvider{} +var _ PayloadLoader = stubPayloadLoader{} +var _ Store = (*stubStore)(nil) +var _ Telemetry = (*stubTelemetry)(nil) + +func hasExecuteSpanNamed(spans []sdktrace.ReadOnlySpan, name string) bool { + for _, span := range spans { + if span.Name() == name { + return true + } + } + + return false +} diff --git a/mail/internal/service/getdelivery/service.go b/mail/internal/service/getdelivery/service.go new file mode 100644 index 0000000..c1be4c9 --- /dev/null +++ b/mail/internal/service/getdelivery/service.go @@ -0,0 +1,128 @@ +// Package getdelivery implements trusted operator lookup of one accepted mail +// delivery. +package getdelivery + +import ( + "context" + "errors" + "fmt" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" +) + +var ( + // ErrNotFound reports that the requested delivery does not exist. + ErrNotFound = errors.New("get delivery not found") + + // ErrServiceUnavailable reports that trusted lookup could not load durable + // state safely. + ErrServiceUnavailable = errors.New("get delivery service unavailable") +) + +// Input stores one exact trusted lookup by delivery identifier. +type Input struct { + // DeliveryID stores the exact accepted delivery identifier to resolve. + DeliveryID common.DeliveryID +} + +// Validate reports whether input contains a complete lookup key. +func (input Input) Validate() error { + if err := input.DeliveryID.Validate(); err != nil { + return fmt.Errorf("delivery id: %w", err) + } + + return nil +} + +// Result stores one full delivery record and its optional dead-letter entry. +type Result struct { + // Delivery stores the resolved accepted delivery record. + Delivery deliverydomain.Delivery + + // DeadLetter stores the optional dead-letter entry when Delivery is in the + // `dead_letter` terminal state. + DeadLetter *deliverydomain.DeadLetterEntry +} + +// Validate reports whether result contains a consistent delivery view. 
+func (result Result) Validate() error { + if err := result.Delivery.Validate(); err != nil { + return fmt.Errorf("delivery: %w", err) + } + if err := deliverydomain.ValidateDeadLetterState(result.Delivery, result.DeadLetter); err != nil { + return fmt.Errorf("dead-letter state: %w", err) + } + + return nil +} + +// Store provides exact lookup of one accepted delivery and its dead-letter +// entry. +type Store interface { + // GetDelivery loads one accepted delivery by its identifier. + GetDelivery(context.Context, common.DeliveryID) (deliverydomain.Delivery, bool, error) + + // GetDeadLetter loads the dead-letter entry associated with deliveryID when + // one exists. + GetDeadLetter(context.Context, common.DeliveryID) (deliverydomain.DeadLetterEntry, bool, error) +} + +// Config stores the dependencies used by Service. +type Config struct { + // Store owns durable delivery and dead-letter state. + Store Store +} + +// Service executes trusted exact delivery lookups. +type Service struct { + store Store +} + +// New constructs Service from cfg. +func New(cfg Config) (*Service, error) { + if cfg.Store == nil { + return nil, errors.New("new get delivery service: nil store") + } + + return &Service{store: cfg.Store}, nil +} + +// Execute loads one accepted delivery and its optional dead-letter entry. 
+func (service *Service) Execute(ctx context.Context, input Input) (Result, error) { + if ctx == nil { + return Result{}, errors.New("execute get delivery: nil context") + } + if service == nil { + return Result{}, errors.New("execute get delivery: nil service") + } + if err := input.Validate(); err != nil { + return Result{}, fmt.Errorf("execute get delivery: %w", err) + } + + record, found, err := service.store.GetDelivery(ctx, input.DeliveryID) + switch { + case err != nil: + return Result{}, fmt.Errorf("%w: load delivery: %v", ErrServiceUnavailable, err) + case !found: + return Result{}, ErrNotFound + } + + result := Result{Delivery: record} + if record.Status == deliverydomain.StatusDeadLetter { + entry, found, err := service.store.GetDeadLetter(ctx, input.DeliveryID) + switch { + case err != nil: + return Result{}, fmt.Errorf("%w: load dead-letter entry: %v", ErrServiceUnavailable, err) + case !found: + return Result{}, fmt.Errorf("%w: missing dead-letter entry for delivery %q", ErrServiceUnavailable, input.DeliveryID) + default: + result.DeadLetter = &entry + } + } + if err := result.Validate(); err != nil { + return Result{}, fmt.Errorf("%w: invalid result: %v", ErrServiceUnavailable, err) + } + + return result, nil +} diff --git a/mail/internal/service/getdelivery/service_test.go b/mail/internal/service/getdelivery/service_test.go new file mode 100644 index 0000000..2cd0786 --- /dev/null +++ b/mail/internal/service/getdelivery/service_test.go @@ -0,0 +1,154 @@ +package getdelivery + +import ( + "context" + "errors" + "testing" + "time" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + + "github.com/stretchr/testify/require" +) + +func TestServiceExecuteReturnsDeliveryWithoutDeadLetter(t *testing.T) { + t.Parallel() + + store := &stubStore{ + delivery: ptrDelivery(validSentDelivery()), + } + service := newTestService(t, Config{Store: store}) + + result, err := service.Execute(context.Background(), 
Input{DeliveryID: store.delivery.DeliveryID}) + require.NoError(t, err) + require.Equal(t, *store.delivery, result.Delivery) + require.Nil(t, result.DeadLetter) +} + +func TestServiceExecuteReturnsDeadLetterEntry(t *testing.T) { + t.Parallel() + + record := validDeadLetterDelivery() + entry := validDeadLetterEntry(record.DeliveryID) + store := &stubStore{ + delivery: &record, + deadLetter: &entry, + } + service := newTestService(t, Config{Store: store}) + + result, err := service.Execute(context.Background(), Input{DeliveryID: record.DeliveryID}) + require.NoError(t, err) + require.Equal(t, record, result.Delivery) + require.NotNil(t, result.DeadLetter) + require.Equal(t, entry, *result.DeadLetter) +} + +func TestServiceExecuteReturnsNotFound(t *testing.T) { + t.Parallel() + + service := newTestService(t, Config{Store: &stubStore{}}) + + _, err := service.Execute(context.Background(), Input{DeliveryID: common.DeliveryID("missing")}) + require.ErrorIs(t, err, ErrNotFound) +} + +type stubStore struct { + delivery *deliverydomain.Delivery + deadLetter *deliverydomain.DeadLetterEntry + getDeliveryErr error + getDeadErr error +} + +func (store *stubStore) GetDelivery(context.Context, common.DeliveryID) (deliverydomain.Delivery, bool, error) { + if store.getDeliveryErr != nil { + return deliverydomain.Delivery{}, false, store.getDeliveryErr + } + if store.delivery == nil { + return deliverydomain.Delivery{}, false, nil + } + return *store.delivery, true, nil +} + +func (store *stubStore) GetDeadLetter(context.Context, common.DeliveryID) (deliverydomain.DeadLetterEntry, bool, error) { + if store.getDeadErr != nil { + return deliverydomain.DeadLetterEntry{}, false, store.getDeadErr + } + if store.deadLetter == nil { + return deliverydomain.DeadLetterEntry{}, false, nil + } + return *store.deadLetter, true, nil +} + +func newTestService(t *testing.T, cfg Config) *Service { + t.Helper() + + service, err := New(cfg) + require.NoError(t, err) + + return service +} + +func 
validSentDelivery() deliverydomain.Delivery {
+	createdAt := time.Unix(1_775_121_700, 0).UTC()
+	updatedAt := createdAt.Add(time.Minute)
+	sentAt := updatedAt.Add(time.Second)
+
+	record := deliverydomain.Delivery{
+		DeliveryID:     common.DeliveryID("delivery-sent"),
+		Source:         deliverydomain.SourceNotification,
+		PayloadMode:    deliverydomain.PayloadModeRendered,
+		Envelope:       deliverydomain.Envelope{To: []common.Email{common.Email("pilot@example.com")}},
+		Content:        deliverydomain.Content{Subject: "Ready", TextBody: "Turn ready"},
+		IdempotencyKey: common.IdempotencyKey("notification:delivery-sent"),
+		Status:         deliverydomain.StatusSent,
+		AttemptCount:   1,
+		CreatedAt:      createdAt,
+		UpdatedAt:      updatedAt,
+		SentAt:         &sentAt,
+	}
+	if err := record.Validate(); err != nil {
+		panic(err)
+	}
+
+	return record
+}
+
+func validDeadLetterDelivery() deliverydomain.Delivery {
+	record := validSentDelivery()
+	record.DeliveryID = common.DeliveryID("delivery-dead-letter")
+	record.IdempotencyKey = common.IdempotencyKey("notification:delivery-dead-letter")
+	record.Status = deliverydomain.StatusDeadLetter
+	record.UpdatedAt = record.CreatedAt.Add(2 * time.Minute)
+	record.SentAt = nil
+	deadLetteredAt := record.UpdatedAt
+	record.DeadLetteredAt = &deadLetteredAt
+	if err := record.Validate(); err != nil {
+		panic(err)
+	}
+
+	return record
+}
+
+func validDeadLetterEntry(deliveryID common.DeliveryID) deliverydomain.DeadLetterEntry {
+	entry := deliverydomain.DeadLetterEntry{
+		DeliveryID:            deliveryID,
+		FinalAttemptNo:        1,
+		FailureClassification: "retry_exhausted",
+		ProviderSummary:       "smtp timeout",
+		CreatedAt:             time.Unix(1_775_121_900, 0).UTC(),
+		RecoveryHint:          "check SMTP connectivity",
+	}
+	if err := entry.Validate(); err != nil {
+		panic(err)
+	}
+
+	return entry
+}
+
+func ptrDelivery(record deliverydomain.Delivery) *deliverydomain.Delivery {
+	return &record
+}
+
+func TestServiceExecuteWrapsStoreFailure(t *testing.T) {
+	t.Parallel()
+
+	service := newTestService(t, Config{Store: &stubStore{getDeliveryErr: errors.New("redis unavailable")}})
+
+	_, err := service.Execute(context.Background(), Input{DeliveryID: common.DeliveryID("delivery-sent")})
+	require.ErrorIs(t, err, ErrServiceUnavailable)
+	require.ErrorContains(t, err, "redis unavailable")
+}
+
+func TestServiceExecuteWrapsDeadLetterLoadFailure(t *testing.T) {
+	t.Parallel()
+
+	record := validDeadLetterDelivery()
+	service := newTestService(t, Config{Store: &stubStore{delivery: &record, getDeadErr: errors.New("redis unavailable")}})
+
+	_, err := service.Execute(context.Background(), Input{DeliveryID: record.DeliveryID})
+	require.ErrorIs(t, err, ErrServiceUnavailable)
+}
+
+func TestServiceExecuteFailsClosedOnMissingDeadLetterEntry(t *testing.T) {
+	t.Parallel()
+
+	record := validDeadLetterDelivery()
+	service := newTestService(t, Config{Store: &stubStore{delivery: &record}})
+
+	_, err := service.Execute(context.Background(), Input{DeliveryID: record.DeliveryID})
+	require.ErrorIs(t, err, ErrServiceUnavailable)
+	require.ErrorContains(t, err, "missing dead-letter entry")
+}
+
+var _ Store = (*stubStore)(nil)
diff --git a/mail/internal/service/listattempts/service.go
b/mail/internal/service/listattempts/service.go new file mode 100644 index 0000000..963baa2 --- /dev/null +++ b/mail/internal/service/listattempts/service.go @@ -0,0 +1,137 @@ +// Package listattempts implements trusted operator reads of delivery-attempt +// history. +package listattempts + +import ( + "context" + "errors" + "fmt" + + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" +) + +var ( + // ErrNotFound reports that the requested delivery does not exist. + ErrNotFound = errors.New("list attempts delivery not found") + + // ErrServiceUnavailable reports that attempt history could not load durable + // state safely. + ErrServiceUnavailable = errors.New("list attempts service unavailable") +) + +// Input stores one trusted attempt-history lookup request. +type Input struct { + // DeliveryID stores the exact accepted delivery identifier to inspect. + DeliveryID common.DeliveryID +} + +// Validate reports whether input contains a complete lookup key. +func (input Input) Validate() error { + if err := input.DeliveryID.Validate(); err != nil { + return fmt.Errorf("delivery id: %w", err) + } + + return nil +} + +// Result stores the ordered attempt history of one accepted delivery. +type Result struct { + // Delivery stores the owning accepted delivery record. + Delivery deliverydomain.Delivery + + // Attempts stores the concrete attempt history in `attempt_no ASC` order. + Attempts []attempt.Attempt +} + +// Validate reports whether result contains a structurally valid attempt +// history. 
+func (result Result) Validate() error {
+	if err := result.Delivery.Validate(); err != nil {
+		return fmt.Errorf("delivery: %w", err)
+	}
+	if len(result.Attempts) != result.Delivery.AttemptCount {
+		return fmt.Errorf("attempt count %d does not match delivery attempt count %d", len(result.Attempts), result.Delivery.AttemptCount)
+	}
+	for index, record := range result.Attempts {
+		if err := record.Validate(); err != nil {
+			return fmt.Errorf("attempts[%d]: %w", index, err)
+		}
+		if record.DeliveryID != result.Delivery.DeliveryID {
+			return fmt.Errorf("attempts[%d]: delivery id mismatch", index)
+		}
+		if record.AttemptNo != index+1 {
+			return fmt.Errorf("attempts[%d]: expected attempt number %d, got %d", index, index+1, record.AttemptNo)
+		}
+	}
+
+	return nil
+}
+
+// Store provides exact delivery lookup and ordered attempt-history reads.
+type Store interface {
+	// GetDelivery loads one accepted delivery by its identifier.
+	GetDelivery(ctx context.Context, deliveryID common.DeliveryID) (deliverydomain.Delivery, bool, error)
+
+	// ListAttempts loads exactly expectedCount attempts in ascending attempt
+	// number order. Implementations must fail closed when the stored sequence
+	// contains a gap.
+	ListAttempts(ctx context.Context, deliveryID common.DeliveryID, expectedCount int) ([]attempt.Attempt, error)
+}
+
+// Config stores the dependencies used by Service.
+type Config struct {
+	// Store owns durable delivery and attempt state.
+	Store Store
+}
+
+// Service executes trusted attempt-history reads.
+type Service struct {
+	store Store
+}
+
+// New constructs Service from cfg.
+func New(cfg Config) (*Service, error) {
+	if cfg.Store == nil {
+		return nil, errors.New("new list attempts service: nil store")
+	}
+
+	return &Service{store: cfg.Store}, nil
+}
+
+// Execute loads one delivery and its complete attempt history.
+func (service *Service) Execute(ctx context.Context, input Input) (Result, error) { + if ctx == nil { + return Result{}, errors.New("execute list attempts: nil context") + } + if service == nil { + return Result{}, errors.New("execute list attempts: nil service") + } + if err := input.Validate(); err != nil { + return Result{}, fmt.Errorf("execute list attempts: %w", err) + } + + record, found, err := service.store.GetDelivery(ctx, input.DeliveryID) + switch { + case err != nil: + return Result{}, fmt.Errorf("%w: load delivery: %v", ErrServiceUnavailable, err) + case !found: + return Result{}, ErrNotFound + } + + attempts, err := service.store.ListAttempts(ctx, input.DeliveryID, record.AttemptCount) + if err != nil { + return Result{}, fmt.Errorf("%w: load attempts: %v", ErrServiceUnavailable, err) + } + + result := Result{ + Delivery: record, + Attempts: attempts, + } + if err := result.Validate(); err != nil { + return Result{}, fmt.Errorf("%w: invalid result: %v", ErrServiceUnavailable, err) + } + + return result, nil +} diff --git a/mail/internal/service/listattempts/service_test.go b/mail/internal/service/listattempts/service_test.go new file mode 100644 index 0000000..b5db62a --- /dev/null +++ b/mail/internal/service/listattempts/service_test.go @@ -0,0 +1,136 @@ +package listattempts + +import ( + "context" + "testing" + "time" + + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + + "github.com/stretchr/testify/require" +) + +func TestServiceExecuteReturnsEmptyHistory(t *testing.T) { + t.Parallel() + + record := validDelivery(0) + store := &stubStore{delivery: &record} + service := newTestService(t, Config{Store: store}) + + result, err := service.Execute(context.Background(), Input{DeliveryID: record.DeliveryID}) + require.NoError(t, err) + require.Equal(t, record, result.Delivery) + require.Empty(t, result.Attempts) +} + +func TestServiceExecuteReturnsOrderedHistory(t 
*testing.T) { + t.Parallel() + + record := validDelivery(2) + store := &stubStore{ + delivery: &record, + attempts: []attempt.Attempt{ + validAttempt(record.DeliveryID, 1, attempt.StatusProviderRejected), + validAttempt(record.DeliveryID, 2, attempt.StatusProviderAccepted), + }, + } + service := newTestService(t, Config{Store: store}) + + result, err := service.Execute(context.Background(), Input{DeliveryID: record.DeliveryID}) + require.NoError(t, err) + require.Len(t, result.Attempts, 2) + require.Equal(t, 1, result.Attempts[0].AttemptNo) + require.Equal(t, 2, result.Attempts[1].AttemptNo) +} + +func TestServiceExecuteFailsClosedOnGap(t *testing.T) { + t.Parallel() + + record := validDelivery(2) + store := &stubStore{ + delivery: &record, + attempts: []attempt.Attempt{ + validAttempt(record.DeliveryID, 1, attempt.StatusProviderRejected), + validAttempt(record.DeliveryID, 3, attempt.StatusProviderAccepted), + }, + } + service := newTestService(t, Config{Store: store}) + + _, err := service.Execute(context.Background(), Input{DeliveryID: record.DeliveryID}) + require.ErrorIs(t, err, ErrServiceUnavailable) +} + +type stubStore struct { + delivery *deliverydomain.Delivery + attempts []attempt.Attempt +} + +func (store *stubStore) GetDelivery(context.Context, common.DeliveryID) (deliverydomain.Delivery, bool, error) { + if store.delivery == nil { + return deliverydomain.Delivery{}, false, nil + } + + return *store.delivery, true, nil +} + +func (store *stubStore) ListAttempts(context.Context, common.DeliveryID, int) ([]attempt.Attempt, error) { + return append([]attempt.Attempt(nil), store.attempts...), nil +} + +func newTestService(t *testing.T, cfg Config) *Service { + t.Helper() + + service, err := New(cfg) + require.NoError(t, err) + + return service +} + +func validDelivery(attemptCount int) deliverydomain.Delivery { + createdAt := time.Unix(1_775_121_700, 0).UTC() + updatedAt := createdAt.Add(time.Minute) + failedAt := updatedAt.Add(time.Second) + + record := 
deliverydomain.Delivery{ + DeliveryID: common.DeliveryID("delivery-attempts"), + Source: deliverydomain.SourceNotification, + PayloadMode: deliverydomain.PayloadModeRendered, + Envelope: deliverydomain.Envelope{To: []common.Email{common.Email("pilot@example.com")}}, + Content: deliverydomain.Content{Subject: "Ready", TextBody: "Turn ready"}, + IdempotencyKey: common.IdempotencyKey("notification:delivery-attempts"), + Status: deliverydomain.StatusFailed, + AttemptCount: attemptCount, + CreatedAt: createdAt, + UpdatedAt: updatedAt, + FailedAt: &failedAt, + } + if err := record.Validate(); err != nil { + panic(err) + } + + return record +} + +func validAttempt(deliveryID common.DeliveryID, attemptNo int, status attempt.Status) attempt.Attempt { + scheduledFor := time.Unix(1_775_121_760+int64(attemptNo), 0).UTC() + startedAt := scheduledFor.Add(time.Second) + finishedAt := startedAt.Add(time.Second) + + record := attempt.Attempt{ + DeliveryID: deliveryID, + AttemptNo: attemptNo, + ScheduledFor: scheduledFor, + StartedAt: &startedAt, + FinishedAt: &finishedAt, + Status: status, + } + if err := record.Validate(); err != nil { + panic(err) + } + + return record +} + +var _ Store = (*stubStore)(nil) diff --git a/mail/internal/service/listdeliveries/service.go b/mail/internal/service/listdeliveries/service.go new file mode 100644 index 0000000..3e4fec7 --- /dev/null +++ b/mail/internal/service/listdeliveries/service.go @@ -0,0 +1,280 @@ +// Package listdeliveries implements trusted operator listing of accepted mail +// deliveries. +package listdeliveries + +import ( + "context" + "errors" + "fmt" + "strings" + "time" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" +) + +var ( + // ErrInvalidCursor reports that the supplied opaque pagination cursor is + // malformed or no longer matches durable state. 
+ ErrInvalidCursor = errors.New("list deliveries invalid cursor") + + // ErrServiceUnavailable reports that trusted listing could not load durable + // state safely. + ErrServiceUnavailable = errors.New("list deliveries service unavailable") +) + +const ( + // DefaultLimit stores the frozen default page size used by the operator + // listing surface. + DefaultLimit = 50 + + // MaxLimit stores the frozen maximum page size accepted by the operator + // listing surface. + MaxLimit = 200 +) + +// Cursor stores one deterministic continuation position in the delivery sort +// order `created_at_ms DESC, delivery_id DESC`. +type Cursor struct { + // CreatedAt stores the durable creation time of the last visible delivery. + CreatedAt time.Time + + // DeliveryID stores the durable identifier of the last visible delivery. + DeliveryID common.DeliveryID +} + +// Validate reports whether cursor contains a complete continuation tuple. +func (cursor Cursor) Validate() error { + if err := common.ValidateTimestamp("delivery list cursor created at", cursor.CreatedAt); err != nil { + return err + } + if err := cursor.DeliveryID.Validate(); err != nil { + return fmt.Errorf("delivery list cursor delivery id: %w", err) + } + + return nil +} + +// Filters stores the supported operator-listing filters. +type Filters struct { + // Recipient stores the optional recipient envelope filter covering `to`, + // `cc`, and `bcc`. + Recipient common.Email + + // Status stores the optional delivery lifecycle filter. + Status deliverydomain.Status + + // Source stores the optional delivery source filter. + Source deliverydomain.Source + + // TemplateID stores the optional template family filter. + TemplateID common.TemplateID + + // IdempotencyKey stores the optional idempotency-key filter. + IdempotencyKey common.IdempotencyKey + + // FromCreatedAt stores the optional inclusive lower creation-time bound. 
+ FromCreatedAt *time.Time + + // ToCreatedAt stores the optional inclusive upper creation-time bound. + ToCreatedAt *time.Time +} + +// Validate reports whether filters is structurally valid. +func (filters Filters) Validate() error { + if !filters.Recipient.IsZero() { + if err := filters.Recipient.Validate(); err != nil { + return fmt.Errorf("recipient: %w", err) + } + } + if filters.Status != "" && !filters.Status.IsKnown() { + return fmt.Errorf("status %q is unsupported", filters.Status) + } + if filters.Source != "" && !filters.Source.IsKnown() { + return fmt.Errorf("source %q is unsupported", filters.Source) + } + if !filters.TemplateID.IsZero() { + if err := filters.TemplateID.Validate(); err != nil { + return fmt.Errorf("template id: %w", err) + } + } + if !filters.IdempotencyKey.IsZero() { + if err := filters.IdempotencyKey.Validate(); err != nil { + return fmt.Errorf("idempotency key: %w", err) + } + } + if filters.FromCreatedAt != nil { + if err := common.ValidateTimestamp("from created at", *filters.FromCreatedAt); err != nil { + return err + } + } + if filters.ToCreatedAt != nil { + if err := common.ValidateTimestamp("to created at", *filters.ToCreatedAt); err != nil { + return err + } + } + if filters.FromCreatedAt != nil && filters.ToCreatedAt != nil && filters.FromCreatedAt.After(*filters.ToCreatedAt) { + return errors.New("from created at must not be after to created at") + } + + return nil +} + +// Input stores one trusted operator-listing request. +type Input struct { + // Limit stores the maximum number of returned deliveries. The zero value + // selects the frozen default limit. + Limit int + + // Cursor stores the optional continuation cursor for the next page. + Cursor *Cursor + + // Filters stores the normalized listing filters. + Filters Filters +} + +// Validate reports whether input contains a complete supported listing +// request. 
+func (input Input) Validate() error { + switch { + case input.Limit < 0: + return errors.New("limit must not be negative") + case input.Limit > MaxLimit: + return fmt.Errorf("limit must be at most %d", MaxLimit) + } + if input.Cursor != nil { + if err := input.Cursor.Validate(); err != nil { + return fmt.Errorf("cursor: %w", err) + } + } + if err := input.Filters.Validate(); err != nil { + return fmt.Errorf("filters: %w", err) + } + + return nil +} + +// Result stores one deterministic ordered page of delivery records. +type Result struct { + // Items stores the returned deliveries in `created_at DESC, delivery_id + // DESC` order. + Items []deliverydomain.Delivery + + // NextCursor stores the optional cursor for the next page. + NextCursor *Cursor +} + +// Validate reports whether result contains valid delivery records and an +// optional next cursor. +func (result Result) Validate() error { + for index, record := range result.Items { + if err := record.Validate(); err != nil { + return fmt.Errorf("items[%d]: %w", index, err) + } + } + if result.NextCursor != nil { + if err := result.NextCursor.Validate(); err != nil { + return fmt.Errorf("next cursor: %w", err) + } + } + + return nil +} + +// Store provides deterministic ordered listing over durable delivery state. +type Store interface { + // List returns one filtered ordered page of delivery records. + List(context.Context, Input) (Result, error) +} + +// Config stores the dependencies used by Service. +type Config struct { + // Store loads one deterministic ordered page of durable deliveries. + Store Store +} + +// Service executes trusted operator delivery-list reads. +type Service struct { + store Store +} + +// New constructs Service from cfg. 
+func New(cfg Config) (*Service, error) { + if cfg.Store == nil { + return nil, errors.New("new list deliveries service: nil store") + } + + return &Service{store: cfg.Store}, nil +} + +// Execute validates input, applies the default limit when omitted, and loads +// one deterministic page of deliveries. +func (service *Service) Execute(ctx context.Context, input Input) (Result, error) { + if ctx == nil { + return Result{}, errors.New("execute list deliveries: nil context") + } + if service == nil { + return Result{}, errors.New("execute list deliveries: nil service") + } + if input.Limit == 0 { + input.Limit = DefaultLimit + } + if err := input.Validate(); err != nil { + return Result{}, fmt.Errorf("execute list deliveries: %w", err) + } + + result, err := service.store.List(ctx, input) + switch { + case errors.Is(err, ErrInvalidCursor): + return Result{}, err + case err != nil: + return Result{}, fmt.Errorf("%w: %v", ErrServiceUnavailable, err) + } + if err := result.Validate(); err != nil { + return Result{}, fmt.Errorf("%w: invalid result: %v", ErrServiceUnavailable, err) + } + if len(result.Items) > input.Limit { + return Result{}, fmt.Errorf("%w: invalid result: returned %d items for limit %d", ErrServiceUnavailable, len(result.Items), input.Limit) + } + + return result, nil +} + +// Matches reports whether record satisfies filters. 
+func (filters Filters) Matches(record deliverydomain.Delivery) bool { + if filters.Recipient != "" && !containsRecipient(record.Envelope, filters.Recipient) { + return false + } + if filters.Status != "" && record.Status != filters.Status { + return false + } + if filters.Source != "" && record.Source != filters.Source { + return false + } + if filters.TemplateID != "" && record.TemplateID != filters.TemplateID { + return false + } + if filters.IdempotencyKey != "" && record.IdempotencyKey != filters.IdempotencyKey { + return false + } + if filters.FromCreatedAt != nil && record.CreatedAt.Before(filters.FromCreatedAt.UTC()) { + return false + } + if filters.ToCreatedAt != nil && record.CreatedAt.After(filters.ToCreatedAt.UTC()) { + return false + } + + return true +} + +func containsRecipient(envelope deliverydomain.Envelope, email common.Email) bool { + for _, group := range [][]common.Email{envelope.To, envelope.Cc, envelope.Bcc} { + for _, candidate := range group { + if strings.EqualFold(candidate.String(), email.String()) { + return true + } + } + } + + return false +} diff --git a/mail/internal/service/listdeliveries/service_test.go b/mail/internal/service/listdeliveries/service_test.go new file mode 100644 index 0000000..ada8eba --- /dev/null +++ b/mail/internal/service/listdeliveries/service_test.go @@ -0,0 +1,230 @@ +package listdeliveries + +import ( + "context" + "errors" + "testing" + "time" + + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + + "github.com/stretchr/testify/require" +) + +func TestServiceExecuteAppliesDefaultLimit(t *testing.T) { + t.Parallel() + + store := &stubStore{ + result: Result{ + Items: []deliverydomain.Delivery{validDelivery("delivery-default", "notification:delivery-default")}, + }, + } + service := newTestService(t, Config{Store: store}) + + result, err := service.Execute(context.Background(), Input{}) + require.NoError(t, err) + require.Len(t, result.Items, 1) + 
require.Equal(t, DefaultLimit, store.lastInput.Limit) +} + +func TestInputValidateRejectsInvalidFiltersAndCursor(t *testing.T) { + t.Parallel() + + validCursor := Cursor{ + CreatedAt: time.Unix(1_775_121_700, 0).UTC(), + DeliveryID: common.DeliveryID("delivery-cursor"), + } + validFrom := time.Unix(1_775_121_700, 0).UTC() + validTo := validFrom.Add(time.Minute) + + tests := []struct { + name string + input Input + wantErr string + }{ + { + name: "invalid recipient", + input: Input{ + Filters: Filters{Recipient: common.Email("not-an-email")}, + }, + wantErr: "recipient:", + }, + { + name: "invalid status", + input: Input{ + Filters: Filters{Status: deliverydomain.Status("bad")}, + }, + wantErr: `status "bad" is unsupported`, + }, + { + name: "invalid source", + input: Input{ + Filters: Filters{Source: deliverydomain.Source("bad")}, + }, + wantErr: `source "bad" is unsupported`, + }, + { + name: "invalid template id", + input: Input{ + Filters: Filters{TemplateID: common.TemplateID(" bad-template")}, + }, + wantErr: "template id:", + }, + { + name: "invalid idempotency key", + input: Input{ + Filters: Filters{IdempotencyKey: common.IdempotencyKey(" bad-key")}, + }, + wantErr: "idempotency key:", + }, + { + name: "invalid created at range", + input: Input{ + Filters: Filters{ + FromCreatedAt: &validTo, + ToCreatedAt: &validFrom, + }, + }, + wantErr: "from created at must not be after to created at", + }, + { + name: "invalid cursor", + input: Input{ + Cursor: &Cursor{ + CreatedAt: time.Time{}, + DeliveryID: common.DeliveryID("delivery-cursor"), + }, + }, + wantErr: "cursor:", + }, + { + name: "valid cursor and filters", + input: Input{ + Limit: 1, + Cursor: &Cursor{ + CreatedAt: validCursor.CreatedAt, + DeliveryID: validCursor.DeliveryID, + }, + Filters: Filters{ + Recipient: common.Email("pilot@example.com"), + Status: deliverydomain.StatusSent, + Source: deliverydomain.SourceNotification, + TemplateID: common.TemplateID("auth.login_code"), + IdempotencyKey: 
common.IdempotencyKey("notification:delivery-123"), + FromCreatedAt: &validFrom, + ToCreatedAt: &validTo, + }, + }, + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + err := tt.input.Validate() + if tt.wantErr == "" { + require.NoError(t, err) + return + } + + require.Error(t, err) + require.ErrorContains(t, err, tt.wantErr) + }) + } +} + +func TestServiceExecutePropagatesInvalidCursor(t *testing.T) { + t.Parallel() + + service := newTestService(t, Config{ + Store: &stubStore{listErr: ErrInvalidCursor}, + }) + + _, err := service.Execute(context.Background(), Input{Limit: 1}) + require.ErrorIs(t, err, ErrInvalidCursor) +} + +func TestServiceExecuteWrapsServiceUnavailable(t *testing.T) { + t.Parallel() + + service := newTestService(t, Config{ + Store: &stubStore{listErr: errors.New("redis unavailable")}, + }) + + _, err := service.Execute(context.Background(), Input{Limit: 1}) + require.ErrorIs(t, err, ErrServiceUnavailable) + require.ErrorContains(t, err, "redis unavailable") +} + +func TestServiceExecuteRejectsOversizedResult(t *testing.T) { + t.Parallel() + + service := newTestService(t, Config{ + Store: &stubStore{ + result: Result{ + Items: []deliverydomain.Delivery{ + validDelivery("delivery-one", "notification:delivery-one"), + validDelivery("delivery-two", "notification:delivery-two"), + }, + }, + }, + }) + + _, err := service.Execute(context.Background(), Input{Limit: 1}) + require.ErrorIs(t, err, ErrServiceUnavailable) + require.ErrorContains(t, err, "returned 2 items for limit 1") +} + +type stubStore struct { + lastInput Input + result Result + listErr error +} + +func (store *stubStore) List(_ context.Context, input Input) (Result, error) { + store.lastInput = input + if store.listErr != nil { + return Result{}, store.listErr + } + + return store.result, nil +} + +func newTestService(t *testing.T, cfg Config) *Service { + t.Helper() + + service, err := New(cfg) + require.NoError(t, err) + + return 
service +} + +func validDelivery(deliveryID string, idempotencyKey common.IdempotencyKey) deliverydomain.Delivery { + createdAt := time.Unix(1_775_121_700, 0).UTC() + updatedAt := createdAt.Add(time.Minute) + sentAt := updatedAt.Add(time.Second) + + record := deliverydomain.Delivery{ + DeliveryID: common.DeliveryID(deliveryID), + Source: deliverydomain.SourceNotification, + PayloadMode: deliverydomain.PayloadModeRendered, + Envelope: deliverydomain.Envelope{To: []common.Email{common.Email("pilot@example.com")}}, + Content: deliverydomain.Content{Subject: "Ready", TextBody: "Turn ready"}, + IdempotencyKey: idempotencyKey, + Status: deliverydomain.StatusSent, + AttemptCount: 1, + CreatedAt: createdAt, + UpdatedAt: updatedAt, + SentAt: &sentAt, + } + if err := record.Validate(); err != nil { + panic(err) + } + + return record +} + +var _ Store = (*stubStore)(nil) diff --git a/mail/internal/service/renderdelivery/service.go b/mail/internal/service/renderdelivery/service.go new file mode 100644 index 0000000..9debf17 --- /dev/null +++ b/mail/internal/service/renderdelivery/service.go @@ -0,0 +1,695 @@ +// Package renderdelivery implements deterministic rendering of template-mode +// deliveries. +package renderdelivery + +import ( + "context" + "errors" + "fmt" + "log/slog" + "slices" + "strings" + "time" + + templatedir "galaxy/mail/internal/adapters/templates" + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/logging" + + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/attribute" + oteltrace "go.opentelemetry.io/otel/trace" +) + +var ( + // ErrServiceUnavailable reports that rendered or failed state could not be + // persisted durably. + ErrServiceUnavailable = errors.New("render delivery service unavailable") +) + +const tracerName = "galaxy/mail/renderdelivery" + +// FailureClassification identifies the stable render-failure classification +// surface. 
+type FailureClassification string + +const ( + // FailureTemplateNotFound reports that the requested template family does + // not exist in the catalog. + FailureTemplateNotFound FailureClassification = "template_not_found" + + // FailureFallbackMissing reports that the requested locale is unavailable + // and the mandatory `en` fallback variant is also missing. + FailureFallbackMissing FailureClassification = "fallback_missing" + + // FailureTemplateParseFailed reports that a template variant could not be + // parsed into a runnable form. + FailureTemplateParseFailed FailureClassification = "template_parse_failed" + + // FailureMissingRequiredVariable reports that the accepted template + // variables do not provide one or more required dot-path values. + FailureMissingRequiredVariable FailureClassification = "missing_required_variable" + + // FailureTemplateExecuteFailed reports that template execution failed after + // lookup and variable validation. + FailureTemplateExecuteFailed FailureClassification = "template_execute_failed" +) + +// IsKnown reports whether classification belongs to the stable render-failure +// surface. +func (classification FailureClassification) IsKnown() bool { + switch classification { + case FailureTemplateNotFound, + FailureFallbackMissing, + FailureTemplateParseFailed, + FailureMissingRequiredVariable, + FailureTemplateExecuteFailed: + return true + default: + return false + } +} + +// Outcome identifies the coarse result of one render-delivery execution. +type Outcome string + +const ( + // OutcomeRendered reports that template content was materialized and stored + // durably as `mail_delivery.status=rendered`. + OutcomeRendered Outcome = "rendered" + + // OutcomeFailed reports that rendering reached a classified terminal + // failure and stored `mail_delivery.status=failed`. + OutcomeFailed Outcome = "failed" +) + +// IsKnown reports whether outcome belongs to the supported render-delivery +// result surface. 
+func (outcome Outcome) IsKnown() bool { + switch outcome { + case OutcomeRendered, OutcomeFailed: + return true + default: + return false + } +} + +// Input stores one queued template delivery together with its current +// scheduled attempt. +type Input struct { + // Delivery stores the queued template-mode delivery to render. + Delivery deliverydomain.Delivery + + // Attempt stores the current scheduled attempt associated with Delivery. + Attempt attempt.Attempt +} + +// Validate reports whether input contains one queued template delivery and +// its scheduled attempt. +func (input Input) Validate() error { + if err := input.Delivery.Validate(); err != nil { + return fmt.Errorf("delivery: %w", err) + } + if err := input.Attempt.Validate(); err != nil { + return fmt.Errorf("attempt: %w", err) + } + if input.Delivery.PayloadMode != deliverydomain.PayloadModeTemplate { + return fmt.Errorf("delivery payload mode must be %q", deliverydomain.PayloadModeTemplate) + } + if input.Delivery.Status != deliverydomain.StatusQueued { + return fmt.Errorf("delivery status must be %q", deliverydomain.StatusQueued) + } + if input.Attempt.DeliveryID != input.Delivery.DeliveryID { + return errors.New("attempt delivery id must match delivery id") + } + if input.Attempt.AttemptNo < 1 { + return errors.New("attempt number must be at least 1") + } + if input.Attempt.Status != attempt.StatusScheduled { + return fmt.Errorf("attempt status must be %q", attempt.StatusScheduled) + } + + return nil +} + +// Result stores the durable outcome of one render-delivery execution. +type Result struct { + // Outcome stores the coarse render-delivery result. + Outcome Outcome + + // Delivery stores the durably persisted delivery record after rendering or + // render failure handling. + Delivery deliverydomain.Delivery + + // Attempt stores the durably persisted terminal attempt when Outcome is + // failed. Successful rendering keeps the scheduled attempt unchanged and + // therefore leaves Attempt nil. 
+ Attempt *attempt.Attempt + + // ResolvedLocale stores the actual filesystem locale variant used by + // template lookup when available. + ResolvedLocale common.Locale + + // LocaleFallbackUsed reports whether template lookup fell back from the + // requested locale to `en`. + LocaleFallbackUsed bool + + // TemplateVersion stores the version marker of the resolved template + // variant when available. + TemplateVersion string + + // FailureClassification stores the stable classified failure code when + // Outcome is failed. + FailureClassification FailureClassification +} + +// Validate reports whether result contains a complete supported render +// outcome. +func (result Result) Validate() error { + if !result.Outcome.IsKnown() { + return fmt.Errorf("render delivery outcome %q is unsupported", result.Outcome) + } + if err := result.Delivery.Validate(); err != nil { + return fmt.Errorf("delivery: %w", err) + } + + switch result.Outcome { + case OutcomeRendered: + if result.Attempt != nil { + return errors.New("rendered result must not contain terminal attempt") + } + if result.Delivery.Status != deliverydomain.StatusRendered { + return fmt.Errorf("rendered result delivery status must be %q", deliverydomain.StatusRendered) + } + if result.ResolvedLocale.IsZero() { + return errors.New("rendered result resolved locale must not be empty") + } + if err := result.ResolvedLocale.Validate(); err != nil { + return fmt.Errorf("resolved locale: %w", err) + } + if strings.TrimSpace(result.TemplateVersion) == "" { + return errors.New("rendered result template version must not be empty") + } + if result.FailureClassification != "" { + return errors.New("rendered result must not contain failure classification") + } + case OutcomeFailed: + if result.Attempt == nil { + return errors.New("failed result must contain terminal attempt") + } + if err := result.Attempt.Validate(); err != nil { + return fmt.Errorf("attempt: %w", err) + } + if result.Attempt.DeliveryID != 
result.Delivery.DeliveryID { + return errors.New("attempt delivery id must match delivery id") + } + if result.Delivery.Status != deliverydomain.StatusFailed { + return fmt.Errorf("failed result delivery status must be %q", deliverydomain.StatusFailed) + } + if result.Attempt.Status != attempt.StatusRenderFailed { + return fmt.Errorf("failed result attempt status must be %q", attempt.StatusRenderFailed) + } + if !result.FailureClassification.IsKnown() { + return fmt.Errorf("failed result classification %q is unsupported", result.FailureClassification) + } + if !result.ResolvedLocale.IsZero() { + if err := result.ResolvedLocale.Validate(); err != nil { + return fmt.Errorf("resolved locale: %w", err) + } + } + if result.Delivery.LastAttemptStatus != attempt.StatusRenderFailed { + return fmt.Errorf("failed result delivery last attempt status must be %q", attempt.StatusRenderFailed) + } + } + + return nil +} + +// MarkRenderedInput stores the durable mutation applied after successful +// template materialization. +type MarkRenderedInput struct { + // Delivery stores the rendered delivery record. + Delivery deliverydomain.Delivery +} + +// Validate reports whether input contains one rendered delivery record. +func (input MarkRenderedInput) Validate() error { + if err := input.Delivery.Validate(); err != nil { + return fmt.Errorf("delivery: %w", err) + } + if input.Delivery.Status != deliverydomain.StatusRendered { + return fmt.Errorf("delivery status must be %q", deliverydomain.StatusRendered) + } + + return nil +} + +// MarkRenderFailedInput stores the durable mutation applied after classified +// render failure. +type MarkRenderFailedInput struct { + // Delivery stores the failed delivery record. + Delivery deliverydomain.Delivery + + // Attempt stores the terminal render-failed attempt record. + Attempt attempt.Attempt +} + +// Validate reports whether input contains one failed delivery record and its +// terminal render-failed attempt. 
+func (input MarkRenderFailedInput) Validate() error { + if err := input.Delivery.Validate(); err != nil { + return fmt.Errorf("delivery: %w", err) + } + if err := input.Attempt.Validate(); err != nil { + return fmt.Errorf("attempt: %w", err) + } + if input.Delivery.Status != deliverydomain.StatusFailed { + return fmt.Errorf("delivery status must be %q", deliverydomain.StatusFailed) + } + if input.Attempt.Status != attempt.StatusRenderFailed { + return fmt.Errorf("attempt status must be %q", attempt.StatusRenderFailed) + } + if input.Attempt.DeliveryID != input.Delivery.DeliveryID { + return errors.New("attempt delivery id must match delivery id") + } + if input.Delivery.LastAttemptStatus != attempt.StatusRenderFailed { + return fmt.Errorf("delivery last attempt status must be %q", attempt.StatusRenderFailed) + } + + return nil +} + +// Store describes the durable persistence required by the render-delivery +// use case. +type Store interface { + // MarkRendered stores the successful materialization result. + MarkRendered(context.Context, MarkRenderedInput) error + + // MarkRenderFailed stores one classified terminal render failure. + MarkRenderFailed(context.Context, MarkRenderFailedInput) error +} + +// TemplateCatalog describes the immutable in-memory template registry used by +// the renderer. +type TemplateCatalog interface { + // Lookup resolves one template family for locale using the frozen exact + // match followed by `en` fallback rule. + Lookup(common.TemplateID, common.Locale) (templatedir.ResolvedTemplate, error) +} + +// Clock provides the current wall-clock time. +type Clock interface { + // Now returns the current time. + Now() time.Time +} + +// Telemetry records low-cardinality render and delivery lifecycle metrics. +type Telemetry interface { + // RecordDeliveryStatusTransition records one durable delivery status + // transition. 
+ RecordDeliveryStatusTransition(context.Context, string, string) + + // RecordAttemptOutcome records one durable terminal attempt outcome. + RecordAttemptOutcome(context.Context, string, string) + + // RecordLocaleFallback records one template locale fallback event. + RecordLocaleFallback(context.Context, string, string, string) +} + +// Config stores the dependencies used by Service. +type Config struct { + // Catalog stores the immutable in-memory template registry. + Catalog TemplateCatalog + + // Store owns the durable rendered and failed delivery state. + Store Store + + // Clock provides the current time. + Clock Clock + + // Telemetry records low-cardinality render and delivery lifecycle metrics. + Telemetry Telemetry + + // TracerProvider constructs the application span recorder used by the + // render flow. + TracerProvider oteltrace.TracerProvider + + // Logger writes structured render logs. + Logger *slog.Logger +} + +// Service materializes queued template deliveries deterministically. +type Service struct { + catalog TemplateCatalog + store Store + clock Clock + telemetry Telemetry + tracerProvider oteltrace.TracerProvider + logger *slog.Logger +} + +// New constructs Service from cfg. 
+func New(cfg Config) (*Service, error) { + switch { + case cfg.Catalog == nil: + return nil, errors.New("new render delivery service: nil catalog") + case cfg.Store == nil: + return nil, errors.New("new render delivery service: nil store") + case cfg.Clock == nil: + return nil, errors.New("new render delivery service: nil clock") + default: + tracerProvider := cfg.TracerProvider + if tracerProvider == nil { + tracerProvider = otel.GetTracerProvider() + } + logger := cfg.Logger + if logger == nil { + logger = slog.Default() + } + + return &Service{ + catalog: cfg.Catalog, + store: cfg.Store, + clock: cfg.Clock, + telemetry: cfg.Telemetry, + tracerProvider: tracerProvider, + logger: logger.With("component", "render_delivery"), + }, nil + } +} + +// Execute resolves, validates, renders, and durably stores one template-mode +// delivery outcome. +func (service *Service) Execute(ctx context.Context, input Input) (Result, error) { + if ctx == nil { + return Result{}, errors.New("render delivery: nil context") + } + if service == nil { + return Result{}, errors.New("render delivery: nil service") + } + if err := input.Validate(); err != nil { + return Result{}, fmt.Errorf("render delivery: %w", err) + } + + ctx, span := service.tracerProvider.Tracer(tracerName).Start(ctx, "mail.render_delivery") + defer span.End() + span.SetAttributes( + attribute.String("mail.delivery_id", input.Delivery.DeliveryID.String()), + attribute.String("mail.source", string(input.Delivery.Source)), + attribute.String("mail.template_id", input.Delivery.TemplateID.String()), + attribute.Int("mail.attempt_no", input.Attempt.AttemptNo), + attribute.String("mail.requested_locale", input.Delivery.Locale.String()), + ) + + resolved, err := service.catalog.Lookup(input.Delivery.TemplateID, input.Delivery.Locale) + if err != nil { + classification := classifyLookupError(err) + return service.fail(ctx, input, classification, failureSummaryForLookup(input.Delivery, classification), nil) + } + + 
requiredPaths := resolved.RequiredVariablePaths() + missingPaths := collectMissingPaths(input.Delivery.TemplateVariables, requiredPaths) + if len(missingPaths) > 0 { + result, failErr := service.fail( + ctx, + input, + FailureMissingRequiredVariable, + failureSummaryForMissingVariables(missingPaths), + &resolved, + ) + if failErr != nil { + return Result{}, failErr + } + return result, nil + } + + content, err := renderContent(resolved, input.Delivery.TemplateVariables) + if err != nil { + result, failErr := service.fail( + ctx, + input, + FailureTemplateExecuteFailed, + "template execution failed", + &resolved, + ) + if failErr != nil { + return Result{}, failErr + } + return result, nil + } + + renderedDelivery := input.Delivery + renderedDelivery.Content = content + renderedDelivery.Status = deliverydomain.StatusRendered + renderedDelivery.LocaleFallbackUsed = resolved.LocaleFallbackUsed() + renderedDelivery.UpdatedAt = service.clock.Now().UTC().Truncate(time.Millisecond) + if err := renderedDelivery.Validate(); err != nil { + return Result{}, fmt.Errorf("render delivery: build rendered delivery: %w", err) + } + + if err := service.store.MarkRendered(ctx, MarkRenderedInput{Delivery: renderedDelivery}); err != nil { + return Result{}, fmt.Errorf("%w: store rendered delivery: %v", ErrServiceUnavailable, err) + } + service.recordStatusTransition(ctx, renderedDelivery) + + result := Result{ + Outcome: OutcomeRendered, + Delivery: renderedDelivery, + ResolvedLocale: resolved.ResolvedLocale(), + LocaleFallbackUsed: resolved.LocaleFallbackUsed(), + TemplateVersion: resolved.Template().Version, + } + if err := result.Validate(); err != nil { + return Result{}, fmt.Errorf("render delivery: build rendered result: %w", err) + } + span.SetAttributes( + attribute.String("mail.resolved_locale", result.ResolvedLocale.String()), + attribute.Bool("mail.locale_fallback_used", result.LocaleFallbackUsed), + attribute.String("mail.status", string(renderedDelivery.Status)), + ) + 
logArgs := logging.DeliveryAttemptAttrs(renderedDelivery, input.Attempt) + logArgs = append(logArgs, + "requested_locale", input.Delivery.Locale.String(), + "resolved_locale", result.ResolvedLocale.String(), + "locale_fallback_used", result.LocaleFallbackUsed, + "template_version", result.TemplateVersion, + ) + logArgs = append(logArgs, logging.TraceAttrsFromContext(ctx)...) + if result.LocaleFallbackUsed { + service.recordLocaleFallback(ctx, renderedDelivery.TemplateID.String(), input.Delivery.Locale.String(), result.ResolvedLocale.String()) + service.logger.Info("delivery rendered with locale fallback", logArgs...) + } else { + service.logger.Info("delivery rendered", logArgs...) + } + + return result, nil +} + +func (service *Service) fail( + ctx context.Context, + input Input, + classification FailureClassification, + summary string, + resolved *templatedir.ResolvedTemplate, +) (Result, error) { + failureAt := service.clock.Now().UTC().Truncate(time.Millisecond) + if failureAt.Before(input.Attempt.ScheduledFor) { + failureAt = input.Attempt.ScheduledFor + } + + failedDelivery := input.Delivery + failedDelivery.Status = deliverydomain.StatusFailed + failedDelivery.LastAttemptStatus = attempt.StatusRenderFailed + failedDelivery.ProviderSummary = summary + failedDelivery.UpdatedAt = failureAt + failedDelivery.FailedAt = ptrTime(failureAt) + + failedAttempt := input.Attempt + failedAttempt.Status = attempt.StatusRenderFailed + failedAttempt.StartedAt = ptrTime(failureAt) + failedAttempt.FinishedAt = ptrTime(failureAt) + failedAttempt.ProviderClassification = string(classification) + failedAttempt.ProviderSummary = summary + + storeInput := MarkRenderFailedInput{ + Delivery: failedDelivery, + Attempt: failedAttempt, + } + if err := storeInput.Validate(); err != nil { + return Result{}, fmt.Errorf("render delivery: build failed result: %w", err) + } + + if err := service.store.MarkRenderFailed(ctx, storeInput); err != nil { + return Result{}, fmt.Errorf("%w: store 
failed delivery: %v", ErrServiceUnavailable, err) + } + service.recordStatusTransition(ctx, failedDelivery) + service.recordAttemptOutcome(ctx, failedAttempt.Status, failedDelivery.Source) + + result := Result{ + Outcome: OutcomeFailed, + Delivery: failedDelivery, + Attempt: &failedAttempt, + FailureClassification: classification, + } + if resolved != nil { + result.ResolvedLocale = resolved.ResolvedLocale() + result.LocaleFallbackUsed = resolved.LocaleFallbackUsed() + result.TemplateVersion = resolved.Template().Version + } + if err := result.Validate(); err != nil { + return Result{}, fmt.Errorf("render delivery: build failed result: %w", err) + } + spanAttrs := []attribute.KeyValue{ + attribute.String("mail.status", string(failedDelivery.Status)), + attribute.String("mail.attempt_status", string(failedAttempt.Status)), + attribute.String("mail.failure_classification", string(classification)), + } + if resolved != nil { + spanAttrs = append(spanAttrs, attribute.String("mail.resolved_locale", resolved.ResolvedLocale().String())) + } + oteltrace.SpanFromContext(ctx).SetAttributes(spanAttrs...) + logArgs := logging.DeliveryAttemptAttrs(failedDelivery, failedAttempt) + logArgs = append(logArgs, + "failure_classification", string(classification), + "provider_summary", summary, + ) + if resolved != nil { + logArgs = append(logArgs, + "requested_locale", input.Delivery.Locale.String(), + "resolved_locale", resolved.ResolvedLocale().String(), + "locale_fallback_used", resolved.LocaleFallbackUsed(), + ) + } + logArgs = append(logArgs, logging.TraceAttrsFromContext(ctx)...) + service.logger.Warn("delivery rendering failed", logArgs...) 
+ + return result, nil +} + +func (service *Service) recordStatusTransition(ctx context.Context, record deliverydomain.Delivery) { + if service == nil || service.telemetry == nil { + return + } + + service.telemetry.RecordDeliveryStatusTransition(ctx, string(record.Status), string(record.Source)) +} + +func (service *Service) recordAttemptOutcome(ctx context.Context, status attempt.Status, source deliverydomain.Source) { + if service == nil || service.telemetry == nil { + return + } + + service.telemetry.RecordAttemptOutcome(ctx, string(status), string(source)) +} + +func (service *Service) recordLocaleFallback(ctx context.Context, templateID string, requestedLocale string, resolvedLocale string) { + if service == nil || service.telemetry == nil { + return + } + + service.telemetry.RecordLocaleFallback(ctx, templateID, requestedLocale, resolvedLocale) +} + +func renderContent(resolved templatedir.ResolvedTemplate, variables map[string]any) (deliverydomain.Content, error) { + subject, err := resolved.ExecuteSubject(variables) + if err != nil { + return deliverydomain.Content{}, err + } + + textBody, err := resolved.ExecuteText(variables) + if err != nil { + return deliverydomain.Content{}, err + } + + htmlBody, ok, err := resolved.ExecuteHTML(variables) + if err != nil { + return deliverydomain.Content{}, err + } + if !ok { + htmlBody = "" + } + + content := deliverydomain.Content{ + Subject: subject, + TextBody: textBody, + HTMLBody: htmlBody, + } + if err := content.ValidateMaterialized(); err != nil { + return deliverydomain.Content{}, err + } + + return content, nil +} + +func collectMissingPaths(variables map[string]any, requiredPaths []string) []string { + missing := make([]string, 0) + for _, path := range requiredPaths { + if hasJSONPath(variables, path) { + continue + } + missing = append(missing, path) + } + + return missing +} + +func hasJSONPath(value map[string]any, path string) bool { + if len(value) == 0 || strings.TrimSpace(path) == "" { + return 
false + } + + current := any(value) + for _, part := range strings.Split(path, ".") { + typed, ok := current.(map[string]any) + if !ok { + return false + } + + next, ok := typed[part] + if !ok { + return false + } + current = next + } + + return true +} + +func classifyLookupError(err error) FailureClassification { + switch { + case errors.Is(err, templatedir.ErrFallbackMissing): + return FailureFallbackMissing + case errors.Is(err, templatedir.ErrTemplateParseFailed): + return FailureTemplateParseFailed + default: + return FailureTemplateNotFound + } +} + +func failureSummaryForLookup(record deliverydomain.Delivery, classification FailureClassification) string { + switch classification { + case FailureFallbackMissing: + return fmt.Sprintf( + "template %q locale %q and fallback %q are unavailable", + record.TemplateID, + record.Locale, + common.Locale("en"), + ) + case FailureTemplateParseFailed: + return "template parsing failed" + default: + return fmt.Sprintf("template %q is not available", record.TemplateID) + } +} + +func failureSummaryForMissingVariables(missingPaths []string) string { + cloned := append([]string(nil), missingPaths...) 
+ slices.Sort(cloned) + + return "missing required variables: " + strings.Join(cloned, ", ") +} + +func ptrTime(value time.Time) *time.Time { + return &value +} diff --git a/mail/internal/service/renderdelivery/service_test.go b/mail/internal/service/renderdelivery/service_test.go new file mode 100644 index 0000000..730bec2 --- /dev/null +++ b/mail/internal/service/renderdelivery/service_test.go @@ -0,0 +1,385 @@ +package renderdelivery + +import ( + "bytes" + "context" + "errors" + "log/slog" + "os" + "path/filepath" + "testing" + "time" + + templatedir "galaxy/mail/internal/adapters/templates" + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + + "github.com/stretchr/testify/require" + sdktrace "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/sdk/trace/tracetest" +) + +func TestServiceExecuteRendersExactLocale(t *testing.T) { + t.Parallel() + + catalog := newTestCatalog(t, map[string]string{ + filepath.Join("auth.login_code", "en", "subject.tmpl"): "Your login code", + filepath.Join("auth.login_code", "en", "text.tmpl"): "Code: {{.code}}", + filepath.Join("game.turn_ready", "fr-fr", "subject.tmpl"): "Tour {{.turn_number}}", + filepath.Join("game.turn_ready", "fr-fr", "text.tmpl"): "Bonjour {{with .player}}{{.name}}{{end}}", + filepath.Join("game.turn_ready", "fr-fr", "html.tmpl"): "
<p>{{.player.name}}</p>
", + }) + + store := &stubStore{} + service := newTestService(t, Config{ + Catalog: catalog, + Store: store, + Clock: stubClock{now: fixedNow()}, + }) + + result, err := service.Execute(context.Background(), validInput(t, "fr-FR")) + require.NoError(t, err) + require.Equal(t, OutcomeRendered, result.Outcome) + require.Equal(t, common.Locale("fr-FR"), result.ResolvedLocale) + require.False(t, result.LocaleFallbackUsed) + require.NotEmpty(t, result.TemplateVersion) + require.Nil(t, result.Attempt) + require.Equal(t, deliverydomain.StatusRendered, result.Delivery.Status) + require.Equal(t, deliverydomain.Content{ + Subject: "Tour 54", + TextBody: "Bonjour Pilot", + HTMLBody: "
<p>Pilot</p>
", + }, result.Delivery.Content) + require.Len(t, store.renderedInputs, 1) + require.Empty(t, store.failedInputs) +} + +func TestServiceExecuteFallsBackToEnglish(t *testing.T) { + t.Parallel() + + catalog := newTestCatalog(t, map[string]string{ + filepath.Join("auth.login_code", "en", "subject.tmpl"): "Your login code", + filepath.Join("auth.login_code", "en", "text.tmpl"): "Code: {{.code}}", + filepath.Join("game.turn_ready", "en", "subject.tmpl"): "Turn {{.turn_number}}", + filepath.Join("game.turn_ready", "en", "text.tmpl"): "Hello {{.player.name}}", + }) + + store := &stubStore{} + service := newTestService(t, Config{ + Catalog: catalog, + Store: store, + Clock: stubClock{now: fixedNow()}, + }) + + result, err := service.Execute(context.Background(), validInput(t, "fr-FR")) + require.NoError(t, err) + require.Equal(t, OutcomeRendered, result.Outcome) + require.Equal(t, common.Locale("en"), result.ResolvedLocale) + require.True(t, result.LocaleFallbackUsed) + require.True(t, result.Delivery.LocaleFallbackUsed) +} + +func TestServiceExecuteRecordsLocaleFallbackAndLogsFields(t *testing.T) { + t.Parallel() + + catalog := newTestCatalog(t, map[string]string{ + filepath.Join("auth.login_code", "en", "subject.tmpl"): "Your login code", + filepath.Join("auth.login_code", "en", "text.tmpl"): "Code: {{.code}}", + filepath.Join("game.turn_ready", "en", "subject.tmpl"): "Turn {{.turn_number}}", + filepath.Join("game.turn_ready", "en", "text.tmpl"): "Hello {{.player.name}}", + }) + + telemetry := &stubTelemetry{} + loggerBuffer := &bytes.Buffer{} + recorder := tracetest.NewSpanRecorder() + tracerProvider := sdktrace.NewTracerProvider(sdktrace.WithSpanProcessor(recorder)) + + service := newTestService(t, Config{ + Catalog: catalog, + Store: &stubStore{}, + Clock: stubClock{now: fixedNow()}, + Telemetry: telemetry, + TracerProvider: tracerProvider, + Logger: slog.New(slog.NewJSONHandler(loggerBuffer, nil)), + }) + + _, err := service.Execute(context.Background(), 
validInput(t, "fr-FR")) + require.NoError(t, err) + require.Equal(t, []string{"notification:rendered"}, telemetry.statuses) + require.Equal(t, []string{"game.turn_ready:fr-FR:en"}, telemetry.fallbacks) + require.Contains(t, loggerBuffer.String(), "\"delivery_id\":\"delivery-123\"") + require.Contains(t, loggerBuffer.String(), "\"source\":\"notification\"") + require.Contains(t, loggerBuffer.String(), "\"template_id\":\"game.turn_ready\"") + require.Contains(t, loggerBuffer.String(), "\"attempt_no\":1") + require.Contains(t, loggerBuffer.String(), "\"otel_trace_id\":") + require.True(t, hasRenderSpanNamed(recorder.Ended(), "mail.render_delivery")) +} + +func TestServiceExecuteFailsOnMissingRequiredVariable(t *testing.T) { + t.Parallel() + + catalog := newTestCatalog(t, map[string]string{ + filepath.Join("auth.login_code", "en", "subject.tmpl"): "Your login code", + filepath.Join("auth.login_code", "en", "text.tmpl"): "Code: {{.code}}", + filepath.Join("game.turn_ready", "en", "subject.tmpl"): "Turn {{.turn_number}}", + filepath.Join("game.turn_ready", "en", "text.tmpl"): "Hello {{.player.name}}", + }) + + store := &stubStore{} + service := newTestService(t, Config{ + Catalog: catalog, + Store: store, + Clock: stubClock{now: fixedNow()}, + }) + + input := validInput(t, "en") + delete(input.Delivery.TemplateVariables, "player") + + result, err := service.Execute(context.Background(), input) + require.NoError(t, err) + require.Equal(t, OutcomeFailed, result.Outcome) + require.Equal(t, FailureMissingRequiredVariable, result.FailureClassification) + require.NotNil(t, result.Attempt) + require.Equal(t, attempt.StatusRenderFailed, result.Attempt.Status) + require.Equal(t, "missing required variables: player.name", result.Attempt.ProviderSummary) + require.Len(t, store.failedInputs, 1) + require.Empty(t, store.renderedInputs) +} + +func TestServiceExecuteFailsOnTemplateExecutionError(t *testing.T) { + t.Parallel() + + catalog := newTestCatalog(t, map[string]string{ + 
filepath.Join("auth.login_code", "en", "subject.tmpl"): "Your login code", + filepath.Join("auth.login_code", "en", "text.tmpl"): "Code: {{.code}}", + filepath.Join("game.turn_ready", "en", "subject.tmpl"): "{{call .callable}}", + filepath.Join("game.turn_ready", "en", "text.tmpl"): "Hello {{.player.name}}", + }) + + store := &stubStore{} + service := newTestService(t, Config{ + Catalog: catalog, + Store: store, + Clock: stubClock{now: fixedNow()}, + }) + + input := validInput(t, "en") + input.Delivery.TemplateVariables["callable"] = "not-a-func" + + result, err := service.Execute(context.Background(), input) + require.NoError(t, err) + require.Equal(t, OutcomeFailed, result.Outcome) + require.Equal(t, FailureTemplateExecuteFailed, result.FailureClassification) + require.Equal(t, "template execution failed", result.Attempt.ProviderSummary) +} + +func TestServiceExecuteClassifiesTemplateNotFound(t *testing.T) { + t.Parallel() + + service := newTestService(t, Config{ + Catalog: stubCatalog{ + lookupErr: templatedir.ErrTemplateNotFound, + }, + Store: &stubStore{}, + Clock: stubClock{now: fixedNow()}, + }) + + result, err := service.Execute(context.Background(), validInput(t, "en")) + require.NoError(t, err) + require.Equal(t, OutcomeFailed, result.Outcome) + require.Equal(t, FailureTemplateNotFound, result.FailureClassification) +} + +func TestServiceExecuteClassifiesFallbackMissing(t *testing.T) { + t.Parallel() + + service := newTestService(t, Config{ + Catalog: stubCatalog{ + lookupErr: templatedir.ErrFallbackMissing, + }, + Store: &stubStore{}, + Clock: stubClock{now: fixedNow()}, + }) + + result, err := service.Execute(context.Background(), validInput(t, "fr-FR")) + require.NoError(t, err) + require.Equal(t, OutcomeFailed, result.Outcome) + require.Equal(t, FailureFallbackMissing, result.FailureClassification) +} + +func TestServiceExecuteClassifiesTemplateParseFailure(t *testing.T) { + t.Parallel() + + service := newTestService(t, Config{ + Catalog: stubCatalog{ 
+ lookupErr: templatedir.ErrTemplateParseFailed, + }, + Store: &stubStore{}, + Clock: stubClock{now: fixedNow()}, + }) + + result, err := service.Execute(context.Background(), validInput(t, "en")) + require.NoError(t, err) + require.Equal(t, OutcomeFailed, result.Outcome) + require.Equal(t, FailureTemplateParseFailed, result.FailureClassification) +} + +func TestServiceExecuteReturnsServiceUnavailableOnStoreFailure(t *testing.T) { + t.Parallel() + + catalog := newTestCatalog(t, map[string]string{ + filepath.Join("auth.login_code", "en", "subject.tmpl"): "Your login code", + filepath.Join("auth.login_code", "en", "text.tmpl"): "Code: {{.code}}", + filepath.Join("game.turn_ready", "en", "subject.tmpl"): "Turn {{.turn_number}}", + filepath.Join("game.turn_ready", "en", "text.tmpl"): "Hello {{.player.name}}", + }) + + service := newTestService(t, Config{ + Catalog: catalog, + Store: &stubStore{ + markRenderedErr: errors.New("redis unavailable"), + }, + Clock: stubClock{now: fixedNow()}, + }) + + _, err := service.Execute(context.Background(), validInput(t, "en")) + require.Error(t, err) + require.ErrorIs(t, err, ErrServiceUnavailable) +} + +type stubStore struct { + renderedInputs []MarkRenderedInput + failedInputs []MarkRenderFailedInput + markRenderedErr error + markFailedErr error +} + +func (store *stubStore) MarkRendered(_ context.Context, input MarkRenderedInput) error { + store.renderedInputs = append(store.renderedInputs, input) + return store.markRenderedErr +} + +func (store *stubStore) MarkRenderFailed(_ context.Context, input MarkRenderFailedInput) error { + store.failedInputs = append(store.failedInputs, input) + return store.markFailedErr +} + +type stubCatalog struct { + lookupResult templatedir.ResolvedTemplate + lookupErr error +} + +func (catalog stubCatalog) Lookup(common.TemplateID, common.Locale) (templatedir.ResolvedTemplate, error) { + return catalog.lookupResult, catalog.lookupErr +} + +type stubClock struct { + now time.Time +} + +func (clock 
stubClock) Now() time.Time { + return clock.now +} + +func newTestService(t *testing.T, cfg Config) *Service { + t.Helper() + + service, err := New(cfg) + require.NoError(t, err) + + return service +} + +func newTestCatalog(t *testing.T, files map[string]string) *templatedir.Catalog { + t.Helper() + + rootDir := t.TempDir() + for path, contents := range files { + absolutePath := filepath.Join(rootDir, path) + require.NoError(t, os.MkdirAll(filepath.Dir(absolutePath), 0o755)) + require.NoError(t, os.WriteFile(absolutePath, []byte(contents), 0o644)) + } + + catalog, err := templatedir.NewCatalog(rootDir) + require.NoError(t, err) + + return catalog +} + +type stubTelemetry struct { + statuses []string + attempts []string + fallbacks []string +} + +func (telemetry *stubTelemetry) RecordDeliveryStatusTransition(_ context.Context, status string, source string) { + telemetry.statuses = append(telemetry.statuses, source+":"+status) +} + +func (telemetry *stubTelemetry) RecordAttemptOutcome(_ context.Context, status string, source string) { + telemetry.attempts = append(telemetry.attempts, source+":"+status) +} + +func (telemetry *stubTelemetry) RecordLocaleFallback(_ context.Context, templateID string, requestedLocale string, resolvedLocale string) { + telemetry.fallbacks = append(telemetry.fallbacks, templateID+":"+requestedLocale+":"+resolvedLocale) +} + +func hasRenderSpanNamed(spans []sdktrace.ReadOnlySpan, name string) bool { + for _, span := range spans { + if span.Name() == name { + return true + } + } + + return false +} + +func validInput(t *testing.T, localeValue string) Input { + t.Helper() + + locale, err := common.ParseLocale(localeValue) + require.NoError(t, err) + + createdAt := fixedNow().Add(-time.Minute) + deliveryRecord := deliverydomain.Delivery{ + DeliveryID: common.DeliveryID("delivery-123"), + Source: deliverydomain.SourceNotification, + PayloadMode: deliverydomain.PayloadModeTemplate, + TemplateID: common.TemplateID("game.turn_ready"), + Envelope: 
deliverydomain.Envelope{ + To: []common.Email{common.Email("pilot@example.com")}, + }, + Locale: locale, + TemplateVariables: map[string]any{ + "turn_number": float64(54), + "player": map[string]any{ + "name": "Pilot", + }, + }, + IdempotencyKey: common.IdempotencyKey("notification:delivery-123"), + Status: deliverydomain.StatusQueued, + AttemptCount: 1, + CreatedAt: createdAt, + UpdatedAt: createdAt, + } + require.NoError(t, deliveryRecord.Validate()) + + scheduledFor := createdAt + attemptRecord := attempt.Attempt{ + DeliveryID: deliveryRecord.DeliveryID, + AttemptNo: 1, + ScheduledFor: scheduledFor, + Status: attempt.StatusScheduled, + } + require.NoError(t, attemptRecord.Validate()) + + return Input{ + Delivery: deliveryRecord, + Attempt: attemptRecord, + } +} + +func fixedNow() time.Time { + return time.Unix(1_775_121_700, 0).UTC() +} diff --git a/mail/internal/service/resenddelivery/service.go b/mail/internal/service/resenddelivery/service.go new file mode 100644 index 0000000..abfbb2a --- /dev/null +++ b/mail/internal/service/resenddelivery/service.go @@ -0,0 +1,366 @@ +// Package resenddelivery implements trusted operator resend by clone creation. +package resenddelivery + +import ( + "context" + "errors" + "fmt" + "log/slog" + "time" + + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/logging" + "galaxy/mail/internal/service/acceptgenericdelivery" + + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/attribute" + oteltrace "go.opentelemetry.io/otel/trace" +) + +var ( + // ErrNotFound reports that the requested original delivery does not exist. + ErrNotFound = errors.New("resend delivery not found") + + // ErrNotAllowed reports that the original delivery is not in a terminal + // state and therefore cannot be cloned for resend. 
+ ErrNotAllowed = errors.New("resend delivery not allowed") + + // ErrServiceUnavailable reports that clone creation could not load or + // persist durable state safely. + ErrServiceUnavailable = errors.New("resend delivery service unavailable") +) + +const tracerName = "galaxy/mail/resenddelivery" + +// Input stores one trusted resend request by original delivery identifier. +type Input struct { + // DeliveryID stores the original accepted delivery identifier to clone. + DeliveryID common.DeliveryID +} + +// Validate reports whether input contains a complete resend target. +func (input Input) Validate() error { + if err := input.DeliveryID.Validate(); err != nil { + return fmt.Errorf("delivery id: %w", err) + } + + return nil +} + +// Result stores the new clone delivery identifier created by resend. +type Result struct { + // DeliveryID stores the identifier of the newly created clone delivery. + DeliveryID common.DeliveryID +} + +// Validate reports whether result contains a usable clone delivery identifier. +func (result Result) Validate() error { + if err := result.DeliveryID.Validate(); err != nil { + return fmt.Errorf("delivery id: %w", err) + } + + return nil +} + +// CreateResendInput stores the durable write set required for one clone-only +// resend operation. +type CreateResendInput struct { + // Delivery stores the new cloned delivery record. + Delivery deliverydomain.Delivery + + // FirstAttempt stores the initial scheduled attempt of the clone. + FirstAttempt attempt.Attempt + + // DeliveryPayload stores the optional cloned raw attachment payload bundle. + DeliveryPayload *acceptgenericdelivery.DeliveryPayload +} + +// Validate reports whether input contains a complete resend write set. 
+func (input CreateResendInput) Validate() error { + if err := input.Delivery.Validate(); err != nil { + return fmt.Errorf("delivery: %w", err) + } + if input.Delivery.Source != deliverydomain.SourceOperatorResend { + return fmt.Errorf("delivery source must be %q", deliverydomain.SourceOperatorResend) + } + if input.Delivery.Status != deliverydomain.StatusQueued { + return fmt.Errorf("delivery status must be %q", deliverydomain.StatusQueued) + } + if input.Delivery.AttemptCount != 1 { + return errors.New("delivery attempt count must equal 1") + } + if input.Delivery.LastAttemptStatus != "" { + return errors.New("delivery last attempt status must be empty") + } + if input.Delivery.ProviderSummary != "" { + return errors.New("delivery provider summary must be empty") + } + if input.Delivery.SentAt != nil || input.Delivery.SuppressedAt != nil || input.Delivery.FailedAt != nil || input.Delivery.DeadLetteredAt != nil { + return errors.New("delivery terminal timestamps must be empty") + } + if err := input.FirstAttempt.Validate(); err != nil { + return fmt.Errorf("first attempt: %w", err) + } + if input.FirstAttempt.DeliveryID != input.Delivery.DeliveryID { + return errors.New("first attempt delivery id must match delivery id") + } + if input.FirstAttempt.AttemptNo != 1 { + return errors.New("first attempt number must equal 1") + } + if input.FirstAttempt.Status != attempt.StatusScheduled { + return fmt.Errorf("first attempt status must be %q", attempt.StatusScheduled) + } + if input.DeliveryPayload != nil { + if err := input.DeliveryPayload.Validate(); err != nil { + return fmt.Errorf("delivery payload: %w", err) + } + if input.DeliveryPayload.DeliveryID != input.Delivery.DeliveryID { + return errors.New("delivery payload delivery id must match delivery id") + } + } + + return nil +} + +// Store provides the durable delivery state required by clone-only resend. +type Store interface { + // GetDelivery loads one accepted delivery by its identifier. 
+ GetDelivery(context.Context, common.DeliveryID) (deliverydomain.Delivery, bool, error) + + // GetDeliveryPayload loads the raw attachment payload bundle of deliveryID + // when one exists. + GetDeliveryPayload(context.Context, common.DeliveryID) (acceptgenericdelivery.DeliveryPayload, bool, error) + + // CreateResend atomically creates the cloned delivery, its first attempt, + // the optional cloned delivery payload, and the related delivery indexes. + CreateResend(context.Context, CreateResendInput) error +} + +// DeliveryIDGenerator describes the source of new internal delivery +// identifiers. +type DeliveryIDGenerator interface { + // NewDeliveryID returns one new internal delivery identifier. + NewDeliveryID() (common.DeliveryID, error) +} + +// Clock provides the current wall-clock time. +type Clock interface { + // Now returns the current time. + Now() time.Time +} + +// Telemetry records low-cardinality resend metrics. +type Telemetry interface { + // RecordDeliveryStatusTransition records one durable delivery status + // transition. + RecordDeliveryStatusTransition(context.Context, string, string) +} + +// Config stores the dependencies used by Service. +type Config struct { + // Store owns durable resend state. + Store Store + + // DeliveryIDGenerator builds internal clone identifiers. + DeliveryIDGenerator DeliveryIDGenerator + + // Clock provides wall-clock timestamps. + Clock Clock + + // Telemetry records low-cardinality resend metrics. + Telemetry Telemetry + + // TracerProvider constructs the application span recorder used by resend. + TracerProvider oteltrace.TracerProvider + + // Logger writes structured resend logs. + Logger *slog.Logger +} + +// Service executes clone-only trusted resend requests. +type Service struct { + store Store + deliveryIDGenerator DeliveryIDGenerator + clock Clock + telemetry Telemetry + tracerProvider oteltrace.TracerProvider + logger *slog.Logger +} + +// New constructs Service from cfg. 
+func New(cfg Config) (*Service, error) { + switch { + case cfg.Store == nil: + return nil, errors.New("new resend delivery service: nil store") + case cfg.DeliveryIDGenerator == nil: + return nil, errors.New("new resend delivery service: nil delivery id generator") + case cfg.Clock == nil: + return nil, errors.New("new resend delivery service: nil clock") + default: + tracerProvider := cfg.TracerProvider + if tracerProvider == nil { + tracerProvider = otel.GetTracerProvider() + } + logger := cfg.Logger + if logger == nil { + logger = slog.Default() + } + + return &Service{ + store: cfg.Store, + deliveryIDGenerator: cfg.DeliveryIDGenerator, + clock: cfg.Clock, + telemetry: cfg.Telemetry, + tracerProvider: tracerProvider, + logger: logger.With("component", "resend_delivery"), + }, nil + } +} + +// Execute clones one terminal delivery into a new queued delivery with a +// fresh first attempt. +func (service *Service) Execute(ctx context.Context, input Input) (Result, error) { + if ctx == nil { + return Result{}, errors.New("execute resend delivery: nil context") + } + if service == nil { + return Result{}, errors.New("execute resend delivery: nil service") + } + if err := input.Validate(); err != nil { + return Result{}, fmt.Errorf("execute resend delivery: %w", err) + } + + ctx, span := service.tracerProvider.Tracer(tracerName).Start( + ctx, + "mail.resend_delivery", + oteltrace.WithAttributes(attribute.String("mail.parent_delivery_id", input.DeliveryID.String())), + ) + defer span.End() + + original, found, err := service.store.GetDelivery(ctx, input.DeliveryID) + switch { + case err != nil: + return Result{}, fmt.Errorf("%w: load original delivery: %v", ErrServiceUnavailable, err) + case !found: + return Result{}, ErrNotFound + case !original.Status.AllowsResend(): + return Result{}, ErrNotAllowed + } + + now := service.clock.Now().UTC().Truncate(time.Millisecond) + cloneID, err := service.deliveryIDGenerator.NewDeliveryID() + if err != nil { + return Result{}, 
fmt.Errorf("%w: generate delivery id: %v", ErrServiceUnavailable, err) + } + + clone := buildClonedDelivery(original, cloneID, now) + firstAttempt := attempt.Attempt{ + DeliveryID: cloneID, + AttemptNo: 1, + ScheduledFor: now, + Status: attempt.StatusScheduled, + } + + var clonedPayload *acceptgenericdelivery.DeliveryPayload + if len(original.Attachments) > 0 { + payload, found, err := service.store.GetDeliveryPayload(ctx, original.DeliveryID) + switch { + case err != nil: + return Result{}, fmt.Errorf("%w: load original delivery payload: %v", ErrServiceUnavailable, err) + case !found: + return Result{}, fmt.Errorf("%w: missing original delivery payload for %q", ErrServiceUnavailable, original.DeliveryID) + default: + cloned := cloneDeliveryPayload(payload, cloneID) + clonedPayload = &cloned + } + } + + createInput := CreateResendInput{ + Delivery: clone, + FirstAttempt: firstAttempt, + DeliveryPayload: clonedPayload, + } + if err := createInput.Validate(); err != nil { + return Result{}, fmt.Errorf("%w: build resend input: %v", ErrServiceUnavailable, err) + } + if err := service.store.CreateResend(ctx, createInput); err != nil { + return Result{}, fmt.Errorf("%w: create resend clone: %v", ErrServiceUnavailable, err) + } + service.recordStatusTransition(ctx, createInput.Delivery) + + result := Result{DeliveryID: cloneID} + if err := result.Validate(); err != nil { + return Result{}, fmt.Errorf("%w: invalid result: %v", ErrServiceUnavailable, err) + } + span.SetAttributes( + attribute.String("mail.delivery_id", cloneID.String()), + attribute.String("mail.source", string(createInput.Delivery.Source)), + ) + logArgs := logging.DeliveryAttrs(createInput.Delivery) + logArgs = append(logArgs, + "parent_delivery_id", original.DeliveryID.String(), + "status", string(createInput.Delivery.Status), + ) + logArgs = append(logArgs, logging.TraceAttrsFromContext(ctx)...) + service.logger.Info("resend clone created", logArgs...) 
+ + return result, nil +} + +func (service *Service) recordStatusTransition(ctx context.Context, record deliverydomain.Delivery) { + if service == nil || service.telemetry == nil { + return + } + + service.telemetry.RecordDeliveryStatusTransition(ctx, string(record.Status), string(record.Source)) +} + +func buildClonedDelivery(original deliverydomain.Delivery, cloneID common.DeliveryID, now time.Time) deliverydomain.Delivery { + return deliverydomain.Delivery{ + DeliveryID: cloneID, + ResendParentDeliveryID: original.DeliveryID, + Source: deliverydomain.SourceOperatorResend, + PayloadMode: original.PayloadMode, + TemplateID: original.TemplateID, + Envelope: deliverydomain.Envelope{ + To: append([]common.Email(nil), original.Envelope.To...), + Cc: append([]common.Email(nil), original.Envelope.Cc...), + Bcc: append([]common.Email(nil), original.Envelope.Bcc...), + ReplyTo: append([]common.Email(nil), original.Envelope.ReplyTo...), + }, + Content: original.Content, + Attachments: append([]common.AttachmentMetadata(nil), original.Attachments...), + Locale: original.Locale, + LocaleFallbackUsed: original.LocaleFallbackUsed, + TemplateVariables: cloneJSONObject(original.TemplateVariables), + IdempotencyKey: common.IdempotencyKey("operator:resend:" + original.DeliveryID.String()), + Status: deliverydomain.StatusQueued, + AttemptCount: 1, + CreatedAt: now, + UpdatedAt: now, + } +} + +func cloneDeliveryPayload(payload acceptgenericdelivery.DeliveryPayload, cloneID common.DeliveryID) acceptgenericdelivery.DeliveryPayload { + cloned := acceptgenericdelivery.DeliveryPayload{ + DeliveryID: cloneID, + Attachments: make([]acceptgenericdelivery.AttachmentPayload, len(payload.Attachments)), + } + copy(cloned.Attachments, payload.Attachments) + return cloned +} + +func cloneJSONObject(value map[string]any) map[string]any { + if value == nil { + return nil + } + + cloned := make(map[string]any, len(value)) + for key, entry := range value { + cloned[key] = entry + } + + return cloned 
+} diff --git a/mail/internal/service/resenddelivery/service_test.go b/mail/internal/service/resenddelivery/service_test.go new file mode 100644 index 0000000..61adf58 --- /dev/null +++ b/mail/internal/service/resenddelivery/service_test.go @@ -0,0 +1,273 @@ +package resenddelivery + +import ( + "bytes" + "context" + "log/slog" + "testing" + "time" + + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/service/acceptgenericdelivery" + + "github.com/stretchr/testify/require" + sdktrace "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/sdk/trace/tracetest" +) + +func TestServiceExecuteRejectsNonTerminalStatus(t *testing.T) { + t.Parallel() + + tests := []deliverydomain.Status{ + deliverydomain.StatusAccepted, + deliverydomain.StatusQueued, + deliverydomain.StatusRendered, + deliverydomain.StatusSending, + } + + for _, status := range tests { + status := status + + t.Run(string(status), func(t *testing.T) { + t.Parallel() + + record := validOriginalDelivery() + record.Status = status + record.SentAt = nil + record.FailedAt = nil + record.DeadLetteredAt = nil + record.SuppressedAt = nil + require.NoError(t, record.Validate()) + + store := &stubStore{delivery: &record} + service := newTestService(t, Config{ + Store: store, + DeliveryIDGenerator: &stubIDGenerator{ids: []common.DeliveryID{"clone-1"}}, + Clock: stubClock{now: fixedNow()}, + }) + + _, err := service.Execute(context.Background(), Input{DeliveryID: record.DeliveryID}) + require.ErrorIs(t, err, ErrNotAllowed) + }) + } +} + +func TestServiceExecuteCreatesLinkedClone(t *testing.T) { + t.Parallel() + + original := validOriginalDelivery() + originalCopy := original + payload := validPayload(original.DeliveryID) + store := &stubStore{ + delivery: &original, + payload: &payload, + } + service := newTestService(t, Config{ + Store: store, + DeliveryIDGenerator: &stubIDGenerator{ids: 
[]common.DeliveryID{"clone-123"}}, + Clock: stubClock{now: fixedNow()}, + }) + + result, err := service.Execute(context.Background(), Input{DeliveryID: original.DeliveryID}) + require.NoError(t, err) + require.Equal(t, Result{DeliveryID: common.DeliveryID("clone-123")}, result) + require.Len(t, store.createInputs, 1) + + createInput := store.createInputs[0] + require.Equal(t, common.DeliveryID("clone-123"), createInput.Delivery.DeliveryID) + require.Equal(t, original.DeliveryID, createInput.Delivery.ResendParentDeliveryID) + require.Equal(t, deliverydomain.SourceOperatorResend, createInput.Delivery.Source) + require.Equal(t, common.IdempotencyKey("operator:resend:"+original.DeliveryID.String()), createInput.Delivery.IdempotencyKey) + require.Equal(t, deliverydomain.StatusQueued, createInput.Delivery.Status) + require.Equal(t, 1, createInput.Delivery.AttemptCount) + require.Empty(t, createInput.Delivery.LastAttemptStatus) + require.Nil(t, createInput.Delivery.SentAt) + require.Nil(t, createInput.Delivery.FailedAt) + require.Equal(t, attempt.StatusScheduled, createInput.FirstAttempt.Status) + require.Equal(t, 1, createInput.FirstAttempt.AttemptNo) + require.NotNil(t, createInput.DeliveryPayload) + require.Equal(t, common.DeliveryID("clone-123"), createInput.DeliveryPayload.DeliveryID) + require.Equal(t, payload.Attachments, createInput.DeliveryPayload.Attachments) + require.Equal(t, originalCopy, original) +} + +func TestServiceExecuteLogsCloneCreationAndCreatesSpan(t *testing.T) { + t.Parallel() + + original := validOriginalDelivery() + payload := validPayload(original.DeliveryID) + loggerBuffer := &bytes.Buffer{} + recorder := tracetest.NewSpanRecorder() + tracerProvider := sdktrace.NewTracerProvider(sdktrace.WithSpanProcessor(recorder)) + telemetry := &stubTelemetry{} + + store := &stubStore{ + delivery: &original, + payload: &payload, + } + service := newTestService(t, Config{ + Store: store, + DeliveryIDGenerator: &stubIDGenerator{ids: 
[]common.DeliveryID{"clone-456"}}, + Clock: stubClock{now: fixedNow()}, + Telemetry: telemetry, + TracerProvider: tracerProvider, + Logger: slog.New(slog.NewJSONHandler(loggerBuffer, nil)), + }) + + _, err := service.Execute(context.Background(), Input{DeliveryID: original.DeliveryID}) + require.NoError(t, err) + require.Equal(t, []string{"operator_resend:queued"}, telemetry.statuses) + require.Contains(t, loggerBuffer.String(), "\"delivery_id\":\"clone-456\"") + require.Contains(t, loggerBuffer.String(), "\"source\":\"operator_resend\"") + require.Contains(t, loggerBuffer.String(), "\"template_id\":\"game.turn_ready\"") + require.Contains(t, loggerBuffer.String(), "\"otel_trace_id\":") + require.True(t, hasResendSpanNamed(recorder.Ended(), "mail.resend_delivery")) +} + +type stubStore struct { + delivery *deliverydomain.Delivery + payload *acceptgenericdelivery.DeliveryPayload + createInputs []CreateResendInput +} + +func (store *stubStore) GetDelivery(context.Context, common.DeliveryID) (deliverydomain.Delivery, bool, error) { + if store.delivery == nil { + return deliverydomain.Delivery{}, false, nil + } + + return *store.delivery, true, nil +} + +func (store *stubStore) GetDeliveryPayload(context.Context, common.DeliveryID) (acceptgenericdelivery.DeliveryPayload, bool, error) { + if store.payload == nil { + return acceptgenericdelivery.DeliveryPayload{}, false, nil + } + + return *store.payload, true, nil +} + +func (store *stubStore) CreateResend(_ context.Context, input CreateResendInput) error { + store.createInputs = append(store.createInputs, input) + return nil +} + +type stubIDGenerator struct { + ids []common.DeliveryID +} + +func (generator *stubIDGenerator) NewDeliveryID() (common.DeliveryID, error) { + if len(generator.ids) == 0 { + return "", nil + } + + next := generator.ids[0] + generator.ids = generator.ids[1:] + return next, nil +} + +type stubClock struct { + now time.Time +} + +func (clock stubClock) Now() time.Time { + return clock.now +} + 
+type stubTelemetry struct { + statuses []string +} + +func (telemetry *stubTelemetry) RecordDeliveryStatusTransition(_ context.Context, status string, source string) { + telemetry.statuses = append(telemetry.statuses, source+":"+status) +} + +func newTestService(t *testing.T, cfg Config) *Service { + t.Helper() + + service, err := New(cfg) + require.NoError(t, err) + + return service +} + +func fixedNow() time.Time { + return time.Unix(1_775_122_100, 0).UTC() +} + +func validOriginalDelivery() deliverydomain.Delivery { + createdAt := time.Unix(1_775_121_700, 0).UTC() + updatedAt := createdAt.Add(time.Minute) + sentAt := updatedAt + + record := deliverydomain.Delivery{ + DeliveryID: common.DeliveryID("delivery-original"), + Source: deliverydomain.SourceNotification, + PayloadMode: deliverydomain.PayloadModeTemplate, + TemplateID: common.TemplateID("game.turn_ready"), + Envelope: deliverydomain.Envelope{ + To: []common.Email{common.Email("pilot@example.com")}, + Cc: []common.Email{common.Email("copilot@example.com")}, + Bcc: []common.Email{common.Email("ops@example.com")}, + ReplyTo: []common.Email{common.Email("noreply@example.com")}, + }, + Content: deliverydomain.Content{ + Subject: "Turn ready", + TextBody: "Your next turn is ready", + }, + Attachments: []common.AttachmentMetadata{ + {Filename: "instructions.txt", ContentType: "text/plain; charset=utf-8", SizeBytes: 7}, + }, + Locale: common.Locale("en"), + TemplateVariables: map[string]any{"turn": 7}, + LocaleFallbackUsed: true, + IdempotencyKey: common.IdempotencyKey("notification:delivery-original"), + Status: deliverydomain.StatusSent, + AttemptCount: 2, + LastAttemptStatus: attempt.StatusProviderAccepted, + ProviderSummary: "provider=smtp result=accepted", + CreatedAt: createdAt, + UpdatedAt: updatedAt, + SentAt: &sentAt, + } + if err := record.Validate(); err != nil { + panic(err) + } + + return record +} + +func validPayload(deliveryID common.DeliveryID) acceptgenericdelivery.DeliveryPayload { + payload 
:= acceptgenericdelivery.DeliveryPayload{ + DeliveryID: deliveryID, + Attachments: []acceptgenericdelivery.AttachmentPayload{ + { + Filename: "instructions.txt", + ContentType: "text/plain; charset=utf-8", + ContentBase64: "cmVhZCBtZQ==", + SizeBytes: 7, + }, + }, + } + if err := payload.Validate(); err != nil { + panic(err) + } + + return payload +} + +var _ Store = (*stubStore)(nil) +var _ DeliveryIDGenerator = (*stubIDGenerator)(nil) +var _ Clock = stubClock{} +var _ Telemetry = (*stubTelemetry)(nil) + +func hasResendSpanNamed(spans []sdktrace.ReadOnlySpan, name string) bool { + for _, span := range spans { + if span.Name() == name { + return true + } + } + + return false +} diff --git a/mail/internal/telemetry/runtime.go b/mail/internal/telemetry/runtime.go new file mode 100644 index 0000000..773e569 --- /dev/null +++ b/mail/internal/telemetry/runtime.go @@ -0,0 +1,661 @@ +// Package telemetry provides lightweight OpenTelemetry helpers and +// low-cardinality Mail Service instruments. 
+package telemetry + +import ( + "context" + "errors" + "fmt" + "log/slog" + "os" + "strings" + "sync" + "time" + + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc" + "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp" + "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc" + "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp" + "go.opentelemetry.io/otel/exporters/stdout/stdoutmetric" + "go.opentelemetry.io/otel/exporters/stdout/stdouttrace" + "go.opentelemetry.io/otel/metric" + "go.opentelemetry.io/otel/propagation" + sdkmetric "go.opentelemetry.io/otel/sdk/metric" + "go.opentelemetry.io/otel/sdk/resource" + sdktrace "go.opentelemetry.io/otel/sdk/trace" + oteltrace "go.opentelemetry.io/otel/trace" +) + +const meterName = "galaxy/mail" + +const ( + defaultServiceName = "galaxy-mail" + + processExporterNone = "none" + processExporterOTLP = "otlp" + processProtocolHTTPProtobuf = "http/protobuf" + processProtocolGRPC = "grpc" +) + +// ProcessConfig configures the process-wide OpenTelemetry runtime. +type ProcessConfig struct { + // ServiceName overrides the default OpenTelemetry service name. + ServiceName string + + // TracesExporter selects the external traces exporter. Supported values are + // `none` and `otlp`. + TracesExporter string + + // MetricsExporter selects the external metrics exporter. Supported values + // are `none` and `otlp`. + MetricsExporter string + + // TracesProtocol selects the OTLP traces protocol when TracesExporter is + // `otlp`. + TracesProtocol string + + // MetricsProtocol selects the OTLP metrics protocol when MetricsExporter is + // `otlp`. + MetricsProtocol string + + // StdoutTracesEnabled enables the additional stdout trace exporter used for + // local development and debugging. 
+ StdoutTracesEnabled bool + + // StdoutMetricsEnabled enables the additional stdout metric exporter used + // for local development and debugging. + StdoutMetricsEnabled bool +} + +// Validate reports whether cfg contains a supported OpenTelemetry exporter +// configuration. +func (cfg ProcessConfig) Validate() error { + switch cfg.TracesExporter { + case processExporterNone, processExporterOTLP: + default: + return fmt.Errorf("unsupported traces exporter %q", cfg.TracesExporter) + } + + switch cfg.MetricsExporter { + case processExporterNone, processExporterOTLP: + default: + return fmt.Errorf("unsupported metrics exporter %q", cfg.MetricsExporter) + } + + if cfg.TracesProtocol != "" && cfg.TracesProtocol != processProtocolHTTPProtobuf && cfg.TracesProtocol != processProtocolGRPC { + return fmt.Errorf("unsupported OTLP traces protocol %q", cfg.TracesProtocol) + } + if cfg.MetricsProtocol != "" && cfg.MetricsProtocol != processProtocolHTTPProtobuf && cfg.MetricsProtocol != processProtocolGRPC { + return fmt.Errorf("unsupported OTLP metrics protocol %q", cfg.MetricsProtocol) + } + + return nil +} + +// Runtime owns the Mail Service OpenTelemetry providers and low-cardinality +// custom instruments. 
+type Runtime struct { + tracerProvider oteltrace.TracerProvider + meterProvider metric.MeterProvider + + shutdownMu sync.Mutex + shutdownDone bool + shutdownErr error + shutdownFns []func(context.Context) error + + attemptScheduleReaderMu sync.RWMutex + attemptScheduleReader AttemptScheduleSnapshotReader + + internalHTTPRequests metric.Int64Counter + internalHTTPDuration metric.Float64Histogram + authDeliveryOutcomes metric.Int64Counter + genericDeliveryOutcomes metric.Int64Counter + malformedCommands metric.Int64Counter + acceptedAuthDeliveries metric.Int64Counter + acceptedGenericDeliveries metric.Int64Counter + suppressedDeliveries metric.Int64Counter + deliveryStatusTransitions metric.Int64Counter + attemptOutcomes metric.Int64Counter + deadLetters metric.Int64Counter + localeFallbacks metric.Int64Counter + providerSendDuration metric.Float64Histogram +} + +// AttemptScheduleSnapshot stores the current observable state of the durable +// attempt schedule. +type AttemptScheduleSnapshot struct { + // Depth stores how many delivery ids are currently present in the attempt + // schedule. + Depth int64 + + // OldestScheduledFor stores the oldest currently scheduled due time when + // one exists. + OldestScheduledFor *time.Time +} + +// AttemptScheduleSnapshotReader loads one current schedule snapshot for +// observable gauge reporting. +type AttemptScheduleSnapshotReader interface { + // ReadAttemptScheduleSnapshot returns the current attempt schedule depth and + // its oldest scheduled timestamp when one exists. + ReadAttemptScheduleSnapshot(context.Context) (AttemptScheduleSnapshot, error) +} + +// New constructs a lightweight telemetry runtime around meterProvider for +// tests and embedded use cases that do not need process-level exporter wiring. 
+func New(meterProvider metric.MeterProvider) (*Runtime, error) { + return NewWithProviders(meterProvider, nil) +} + +// NewWithProviders constructs a telemetry runtime around explicitly supplied +// meterProvider and tracerProvider values. +func NewWithProviders(meterProvider metric.MeterProvider, tracerProvider oteltrace.TracerProvider) (*Runtime, error) { + if meterProvider == nil { + meterProvider = otel.GetMeterProvider() + } + if tracerProvider == nil { + tracerProvider = otel.GetTracerProvider() + } + if meterProvider == nil { + return nil, errors.New("new mail telemetry runtime: nil meter provider") + } + if tracerProvider == nil { + return nil, errors.New("new mail telemetry runtime: nil tracer provider") + } + + return buildRuntime(meterProvider, tracerProvider, nil) +} + +// NewProcess constructs the process-wide Mail Service OpenTelemetry runtime +// from cfg, installs the resulting providers globally, and returns the +// runtime. +func NewProcess(ctx context.Context, cfg ProcessConfig, logger *slog.Logger) (*Runtime, error) { + if ctx == nil { + return nil, errors.New("new mail telemetry process: nil context") + } + if err := cfg.Validate(); err != nil { + return nil, fmt.Errorf("new mail telemetry process: %w", err) + } + if logger == nil { + logger = slog.Default() + } + + serviceName := strings.TrimSpace(cfg.ServiceName) + if serviceName == "" { + serviceName = defaultServiceName + } + + res := resource.NewSchemaless(attribute.String("service.name", serviceName)) + + tracerProvider, err := newTracerProvider(ctx, res, cfg) + if err != nil { + return nil, fmt.Errorf("new mail telemetry process: tracer provider: %w", err) + } + meterProvider, err := newMeterProvider(ctx, res, cfg) + if err != nil { + return nil, fmt.Errorf("new mail telemetry process: meter provider: %w", err) + } + + otel.SetTracerProvider(tracerProvider) + otel.SetMeterProvider(meterProvider) + otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator( + 
propagation.TraceContext{}, + propagation.Baggage{}, + )) + + runtime, err := buildRuntime(meterProvider, tracerProvider, []func(context.Context) error{ + meterProvider.Shutdown, + tracerProvider.Shutdown, + }) + if err != nil { + return nil, fmt.Errorf("new mail telemetry process: runtime: %w", err) + } + + logger.Info("mail telemetry configured", + "service_name", serviceName, + "traces_exporter", cfg.TracesExporter, + "metrics_exporter", cfg.MetricsExporter, + ) + + return runtime, nil +} + +// TracerProvider returns the runtime tracer provider. +func (runtime *Runtime) TracerProvider() oteltrace.TracerProvider { + if runtime == nil || runtime.tracerProvider == nil { + return otel.GetTracerProvider() + } + + return runtime.tracerProvider +} + +// MeterProvider returns the runtime meter provider. +func (runtime *Runtime) MeterProvider() metric.MeterProvider { + if runtime == nil || runtime.meterProvider == nil { + return otel.GetMeterProvider() + } + + return runtime.meterProvider +} + +// Shutdown flushes and stops the configured telemetry providers. Shutdown is +// idempotent. +func (runtime *Runtime) Shutdown(ctx context.Context) error { + if runtime == nil { + return nil + } + + runtime.shutdownMu.Lock() + if runtime.shutdownDone { + err := runtime.shutdownErr + runtime.shutdownMu.Unlock() + return err + } + runtime.shutdownDone = true + runtime.shutdownMu.Unlock() + + var shutdownErr error + for index := len(runtime.shutdownFns) - 1; index >= 0; index-- { + shutdownErr = errors.Join(shutdownErr, runtime.shutdownFns[index](ctx)) + } + + runtime.shutdownMu.Lock() + runtime.shutdownErr = shutdownErr + runtime.shutdownMu.Unlock() + + return shutdownErr +} + +// RecordInternalHTTPRequest records one internal HTTP request outcome. +func (runtime *Runtime) RecordInternalHTTPRequest(ctx context.Context, attrs []attribute.KeyValue, duration time.Duration) { + if runtime == nil { + return + } + + options := metric.WithAttributes(attrs...) 
+ runtime.internalHTTPRequests.Add(normalizeContext(ctx), 1, options) + runtime.internalHTTPDuration.Record(normalizeContext(ctx), duration.Seconds()*1000, options) +} + +// RecordAuthDeliveryOutcome records one auth-delivery acceptance outcome. +func (runtime *Runtime) RecordAuthDeliveryOutcome(ctx context.Context, outcome string) { + if runtime == nil { + return + } + + runtime.authDeliveryOutcomes.Add( + normalizeContext(ctx), + 1, + metric.WithAttributes(attribute.String("outcome", strings.TrimSpace(outcome))), + ) +} + +// RecordGenericDeliveryOutcome records one generic-delivery acceptance +// outcome. +func (runtime *Runtime) RecordGenericDeliveryOutcome(ctx context.Context, outcome string) { + if runtime == nil { + return + } + + runtime.genericDeliveryOutcomes.Add( + normalizeContext(ctx), + 1, + metric.WithAttributes(attribute.String("outcome", strings.TrimSpace(outcome))), + ) +} + +// RecordMalformedCommand records one malformed or rejected async stream +// command. +func (runtime *Runtime) RecordMalformedCommand(ctx context.Context, failureCode string) { + if runtime == nil { + return + } + + runtime.malformedCommands.Add( + normalizeContext(ctx), + 1, + metric.WithAttributes(attribute.String("failure_code", strings.TrimSpace(failureCode))), + ) +} + +// RecordAcceptedAuthDelivery records one newly accepted auth delivery. +func (runtime *Runtime) RecordAcceptedAuthDelivery(ctx context.Context) { + if runtime == nil { + return + } + + runtime.acceptedAuthDeliveries.Add(normalizeContext(ctx), 1) +} + +// RecordAcceptedGenericDelivery records one newly accepted generic delivery. +func (runtime *Runtime) RecordAcceptedGenericDelivery(ctx context.Context) { + if runtime == nil { + return + } + + runtime.acceptedGenericDeliveries.Add(normalizeContext(ctx), 1) +} + +// RecordDeliveryStatusTransition records one durable delivery status +// transition. 
+func (runtime *Runtime) RecordDeliveryStatusTransition(ctx context.Context, status string, source string) {
+	if runtime == nil {
+		return
+	}
+
+	trimmedStatus := strings.TrimSpace(status)
+	trimmedSource := strings.TrimSpace(source)
+	options := metric.WithAttributes(
+		attribute.String("status", trimmedStatus),
+		attribute.String("source", trimmedSource),
+	)
+	runtime.deliveryStatusTransitions.Add(normalizeContext(ctx), 1, options)
+
+	switch trimmedStatus {
+	case "suppressed":
+		runtime.suppressedDeliveries.Add(normalizeContext(ctx), 1)
+	case "dead_letter":
+		runtime.deadLetters.Add(
+			normalizeContext(ctx),
+			1,
+			metric.WithAttributes(attribute.String("source", trimmedSource)),
+		)
+	}
+}
+
+// RecordAttemptOutcome records one durable terminal attempt outcome.
+func (runtime *Runtime) RecordAttemptOutcome(ctx context.Context, status string, source string) {
+	if runtime == nil {
+		return
+	}
+
+	runtime.attemptOutcomes.Add(
+		normalizeContext(ctx),
+		1,
+		metric.WithAttributes(
+			attribute.String("status", strings.TrimSpace(status)),
+			attribute.String("source", strings.TrimSpace(source)),
+		),
+	)
+}
+
+// RecordLocaleFallback records one template locale fallback event.
+func (runtime *Runtime) RecordLocaleFallback(ctx context.Context, templateID string, requestedLocale string, resolvedLocale string) {
+	if runtime == nil {
+		return
+	}
+
+	runtime.localeFallbacks.Add(
+		normalizeContext(ctx),
+		1,
+		metric.WithAttributes(
+			attribute.String("template_id", strings.TrimSpace(templateID)),
+			attribute.String("requested_locale", strings.TrimSpace(requestedLocale)),
+			attribute.String("resolved_locale", strings.TrimSpace(resolvedLocale)),
+		),
+	)
+}
+
+// RecordProviderSendDuration records one provider send duration sample.
+func (runtime *Runtime) RecordProviderSendDuration(ctx context.Context, provider string, outcome string, duration time.Duration) { + if runtime == nil { + return + } + + runtime.providerSendDuration.Record( + normalizeContext(ctx), + duration.Seconds()*1000, + metric.WithAttributes( + attribute.String("provider", strings.TrimSpace(provider)), + attribute.String("outcome", strings.TrimSpace(outcome)), + ), + ) +} + +// SetAttemptScheduleSnapshotReader installs the current attempt-schedule +// reader used by the observable schedule gauges. +func (runtime *Runtime) SetAttemptScheduleSnapshotReader(reader AttemptScheduleSnapshotReader) { + if runtime == nil { + return + } + + runtime.attemptScheduleReaderMu.Lock() + runtime.attemptScheduleReader = reader + runtime.attemptScheduleReaderMu.Unlock() +} + +func buildRuntime(meterProvider metric.MeterProvider, tracerProvider oteltrace.TracerProvider, shutdownFns []func(context.Context) error) (*Runtime, error) { + meter := meterProvider.Meter(meterName) + runtime := &Runtime{ + tracerProvider: tracerProvider, + meterProvider: meterProvider, + shutdownFns: append([]func(context.Context) error(nil), shutdownFns...), + } + + internalHTTPRequests, err := meter.Int64Counter("mail.internal_http.requests") + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: internal_http.requests: %w", err) + } + internalHTTPDuration, err := meter.Float64Histogram("mail.internal_http.duration", metric.WithUnit("ms")) + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: internal_http.duration: %w", err) + } + authDeliveryOutcomes, err := meter.Int64Counter("mail.auth_delivery.outcomes") + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: auth_delivery.outcomes: %w", err) + } + genericDeliveryOutcomes, err := meter.Int64Counter("mail.generic_delivery.outcomes") + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: generic_delivery.outcomes: %w", err) + } + 
malformedCommands, err := meter.Int64Counter("mail.stream_commands.malformed") + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: stream_commands.malformed: %w", err) + } + acceptedAuthDeliveries, err := meter.Int64Counter("mail.delivery.accepted_auth") + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: delivery.accepted_auth: %w", err) + } + acceptedGenericDeliveries, err := meter.Int64Counter("mail.delivery.accepted_generic") + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: delivery.accepted_generic: %w", err) + } + suppressedDeliveries, err := meter.Int64Counter("mail.delivery.suppressed") + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: delivery.suppressed: %w", err) + } + deliveryStatusTransitions, err := meter.Int64Counter("mail.delivery.status_transitions") + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: delivery.status_transitions: %w", err) + } + attemptOutcomes, err := meter.Int64Counter("mail.attempt.outcomes") + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: attempt.outcomes: %w", err) + } + deadLetters, err := meter.Int64Counter("mail.delivery.dead_letters") + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: delivery.dead_letters: %w", err) + } + localeFallbacks, err := meter.Int64Counter("mail.template.locale_fallback") + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: template.locale_fallback: %w", err) + } + providerSendDuration, err := meter.Float64Histogram("mail.provider.send.duration_ms", metric.WithUnit("ms")) + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: provider.send.duration_ms: %w", err) + } + attemptScheduleDepth, err := meter.Int64ObservableGauge("mail.attempt_schedule.depth") + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: attempt_schedule.depth: %w", err) + } + attemptScheduleOldestAge, err := 
meter.Int64ObservableGauge("mail.attempt_schedule.oldest_age_ms", metric.WithUnit("ms")) + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: attempt_schedule.oldest_age_ms: %w", err) + } + registration, err := meter.RegisterCallback(func(ctx context.Context, observer metric.Observer) error { + runtime.observeAttemptSchedule(ctx, observer, attemptScheduleDepth, attemptScheduleOldestAge) + return nil + }, attemptScheduleDepth, attemptScheduleOldestAge) + if err != nil { + return nil, fmt.Errorf("build mail telemetry runtime: attempt schedule callback: %w", err) + } + runtime.shutdownFns = append(runtime.shutdownFns, func(context.Context) error { + return registration.Unregister() + }) + + runtime.internalHTTPRequests = internalHTTPRequests + runtime.internalHTTPDuration = internalHTTPDuration + runtime.authDeliveryOutcomes = authDeliveryOutcomes + runtime.genericDeliveryOutcomes = genericDeliveryOutcomes + runtime.malformedCommands = malformedCommands + runtime.acceptedAuthDeliveries = acceptedAuthDeliveries + runtime.acceptedGenericDeliveries = acceptedGenericDeliveries + runtime.suppressedDeliveries = suppressedDeliveries + runtime.deliveryStatusTransitions = deliveryStatusTransitions + runtime.attemptOutcomes = attemptOutcomes + runtime.deadLetters = deadLetters + runtime.localeFallbacks = localeFallbacks + runtime.providerSendDuration = providerSendDuration + + return runtime, nil +} + +func newTracerProvider(ctx context.Context, res *resource.Resource, cfg ProcessConfig) (*sdktrace.TracerProvider, error) { + options := []sdktrace.TracerProviderOption{ + sdktrace.WithResource(res), + } + + if exporter, err := traceExporter(ctx, cfg); err != nil { + return nil, err + } else if exporter != nil { + options = append(options, sdktrace.WithBatcher(exporter)) + } + + if cfg.StdoutTracesEnabled { + exporter, err := stdouttrace.New(stdouttrace.WithWriter(os.Stdout)) + if err != nil { + return nil, fmt.Errorf("stdout traces exporter: %w", err) + } + 
options = append(options, sdktrace.WithBatcher(exporter)) + } + + return sdktrace.NewTracerProvider(options...), nil +} + +func newMeterProvider(ctx context.Context, res *resource.Resource, cfg ProcessConfig) (*sdkmetric.MeterProvider, error) { + options := []sdkmetric.Option{ + sdkmetric.WithResource(res), + } + + if exporter, err := metricExporter(ctx, cfg); err != nil { + return nil, err + } else if exporter != nil { + options = append(options, sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exporter))) + } + + if cfg.StdoutMetricsEnabled { + exporter, err := stdoutmetric.New(stdoutmetric.WithWriter(os.Stdout)) + if err != nil { + return nil, fmt.Errorf("stdout metrics exporter: %w", err) + } + options = append(options, sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exporter))) + } + + return sdkmetric.NewMeterProvider(options...), nil +} + +func traceExporter(ctx context.Context, cfg ProcessConfig) (sdktrace.SpanExporter, error) { + if cfg.TracesExporter != processExporterOTLP { + return nil, nil + } + + switch normalizeProtocol(cfg.TracesProtocol) { + case processProtocolGRPC: + exporter, err := otlptracegrpc.New(ctx) + if err != nil { + return nil, fmt.Errorf("otlp grpc traces exporter: %w", err) + } + return exporter, nil + default: + exporter, err := otlptracehttp.New(ctx) + if err != nil { + return nil, fmt.Errorf("otlp http traces exporter: %w", err) + } + return exporter, nil + } +} + +func metricExporter(ctx context.Context, cfg ProcessConfig) (sdkmetric.Exporter, error) { + if cfg.MetricsExporter != processExporterOTLP { + return nil, nil + } + + switch normalizeProtocol(cfg.MetricsProtocol) { + case processProtocolGRPC: + exporter, err := otlpmetricgrpc.New(ctx) + if err != nil { + return nil, fmt.Errorf("otlp grpc metrics exporter: %w", err) + } + return exporter, nil + default: + exporter, err := otlpmetrichttp.New(ctx) + if err != nil { + return nil, fmt.Errorf("otlp http metrics exporter: %w", err) + } + return exporter, nil + } +} + +func 
normalizeProtocol(value string) string { + switch strings.TrimSpace(value) { + case processProtocolGRPC: + return processProtocolGRPC + default: + return processProtocolHTTPProtobuf + } +} + +func normalizeContext(ctx context.Context) context.Context { + if ctx == nil { + return context.Background() + } + + return ctx +} + +func (runtime *Runtime) observeAttemptSchedule( + ctx context.Context, + observer metric.Observer, + depthGauge metric.Int64ObservableGauge, + oldestAgeGauge metric.Int64ObservableGauge, +) { + depth := int64(0) + oldestAge := int64(0) + + reader := runtime.currentAttemptScheduleReader() + if reader != nil { + snapshot, err := reader.ReadAttemptScheduleSnapshot(ctx) + if err != nil { + otel.Handle(fmt.Errorf("observe mail attempt schedule: %w", err)) + } else { + if snapshot.Depth > 0 { + depth = snapshot.Depth + } + if snapshot.OldestScheduledFor != nil { + oldestAge = time.Since(snapshot.OldestScheduledFor.UTC()).Milliseconds() + if oldestAge < 0 { + oldestAge = 0 + } + } + } + } + + observer.ObserveInt64(depthGauge, depth) + observer.ObserveInt64(oldestAgeGauge, oldestAge) +} + +func (runtime *Runtime) currentAttemptScheduleReader() AttemptScheduleSnapshotReader { + runtime.attemptScheduleReaderMu.RLock() + defer runtime.attemptScheduleReaderMu.RUnlock() + return runtime.attemptScheduleReader +} diff --git a/mail/internal/telemetry/runtime_test.go b/mail/internal/telemetry/runtime_test.go new file mode 100644 index 0000000..1d44d53 --- /dev/null +++ b/mail/internal/telemetry/runtime_test.go @@ -0,0 +1,227 @@ +package telemetry + +import ( + "context" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + "go.opentelemetry.io/otel/attribute" + sdkmetric "go.opentelemetry.io/otel/sdk/metric" + "go.opentelemetry.io/otel/sdk/metric/metricdata" + sdktrace "go.opentelemetry.io/otel/sdk/trace" +) + +func TestRuntimeRecordsMetrics(t *testing.T) { + t.Parallel() + + reader := sdkmetric.NewManualReader() 
+ meterProvider := sdkmetric.NewMeterProvider(sdkmetric.WithReader(reader)) + tracerProvider := sdktrace.NewTracerProvider() + + runtime, err := NewWithProviders(meterProvider, tracerProvider) + require.NoError(t, err) + + runtime.RecordInternalHTTPRequest(context.Background(), []attribute.KeyValue{ + attribute.String("route", "/api/v1/internal/login-code-deliveries"), + attribute.String("method", "POST"), + attribute.String("edge_outcome", "success"), + }, 5*time.Millisecond) + runtime.RecordAuthDeliveryOutcome(context.Background(), "sent") + runtime.RecordGenericDeliveryOutcome(context.Background(), "accepted") + runtime.RecordMalformedCommand(context.Background(), "invalid_payload") + runtime.RecordAcceptedAuthDelivery(context.Background()) + runtime.RecordAcceptedGenericDelivery(context.Background()) + runtime.RecordDeliveryStatusTransition(context.Background(), "queued", "notification") + runtime.RecordDeliveryStatusTransition(context.Background(), "suppressed", "authsession") + runtime.RecordDeliveryStatusTransition(context.Background(), "dead_letter", "notification") + runtime.RecordAttemptOutcome(context.Background(), "provider_accepted", "notification") + runtime.RecordLocaleFallback(context.Background(), "auth.login_code", "fr-FR", "en") + runtime.RecordProviderSendDuration(context.Background(), "smtp", "accepted", 15*time.Millisecond) + scheduledAt := time.Now().Add(-time.Second).UTC() + runtime.SetAttemptScheduleSnapshotReader(stubAttemptScheduleSnapshotReader{ + snapshot: AttemptScheduleSnapshot{ + Depth: 3, + OldestScheduledFor: &scheduledAt, + }, + }) + + assertMetricCount(t, reader, "mail.internal_http.requests", map[string]string{ + "route": "/api/v1/internal/login-code-deliveries", + "method": "POST", + "edge_outcome": "success", + }, 1) + assertMetricCount(t, reader, "mail.auth_delivery.outcomes", map[string]string{ + "outcome": "sent", + }, 1) + assertMetricCount(t, reader, "mail.generic_delivery.outcomes", map[string]string{ + "outcome": 
"accepted", + }, 1) + assertMetricCount(t, reader, "mail.stream_commands.malformed", map[string]string{ + "failure_code": "invalid_payload", + }, 1) + assertMetricCount(t, reader, "mail.delivery.accepted_auth", nil, 1) + assertMetricCount(t, reader, "mail.delivery.accepted_generic", nil, 1) + assertMetricCount(t, reader, "mail.delivery.suppressed", nil, 1) + assertMetricCount(t, reader, "mail.delivery.status_transitions", map[string]string{ + "status": "queued", + "source": "notification", + }, 1) + assertMetricCount(t, reader, "mail.delivery.status_transitions", map[string]string{ + "status": "suppressed", + "source": "authsession", + }, 1) + assertMetricCount(t, reader, "mail.delivery.dead_letters", map[string]string{ + "source": "notification", + }, 1) + assertMetricCount(t, reader, "mail.attempt.outcomes", map[string]string{ + "status": "provider_accepted", + "source": "notification", + }, 1) + assertMetricCount(t, reader, "mail.template.locale_fallback", map[string]string{ + "template_id": "auth.login_code", + "requested_locale": "fr-FR", + "resolved_locale": "en", + }, 1) + assertHistogramCount(t, reader, "mail.provider.send.duration_ms", map[string]string{ + "provider": "smtp", + "outcome": "accepted", + }, 1) + assertGaugeValue(t, reader, "mail.attempt_schedule.depth", nil, 3) + assertGaugePositive(t, reader, "mail.attempt_schedule.oldest_age_ms", nil) +} + +func assertMetricCount(t *testing.T, reader *sdkmetric.ManualReader, metricName string, wantAttrs map[string]string, wantValue int64) { + t.Helper() + + var resourceMetrics metricdata.ResourceMetrics + require.NoError(t, reader.Collect(context.Background(), &resourceMetrics)) + + for _, scopeMetrics := range resourceMetrics.ScopeMetrics { + for _, metric := range scopeMetrics.Metrics { + if metric.Name != metricName { + continue + } + + sum, ok := metric.Data.(metricdata.Sum[int64]) + require.True(t, ok) + + for _, point := range sum.DataPoints { + if hasMetricAttributes(point.Attributes.ToSlice(), 
wantAttrs) { + assert.Equal(t, wantValue, point.Value) + return + } + } + } + } + + require.Failf(t, "test failed", "metric %q with attrs %v not found", metricName, wantAttrs) +} + +func assertHistogramCount(t *testing.T, reader *sdkmetric.ManualReader, metricName string, wantAttrs map[string]string, wantCount uint64) { + t.Helper() + + var resourceMetrics metricdata.ResourceMetrics + require.NoError(t, reader.Collect(context.Background(), &resourceMetrics)) + + for _, scopeMetrics := range resourceMetrics.ScopeMetrics { + for _, metric := range scopeMetrics.Metrics { + if metric.Name != metricName { + continue + } + + histogram, ok := metric.Data.(metricdata.Histogram[float64]) + require.True(t, ok) + + for _, point := range histogram.DataPoints { + if hasMetricAttributes(point.Attributes.ToSlice(), wantAttrs) { + assert.Equal(t, wantCount, point.Count) + return + } + } + } + } + + require.Failf(t, "test failed", "histogram %q with attrs %v not found", metricName, wantAttrs) +} + +func assertGaugeValue(t *testing.T, reader *sdkmetric.ManualReader, metricName string, wantAttrs map[string]string, wantValue int64) { + t.Helper() + + var resourceMetrics metricdata.ResourceMetrics + require.NoError(t, reader.Collect(context.Background(), &resourceMetrics)) + + for _, scopeMetrics := range resourceMetrics.ScopeMetrics { + for _, metric := range scopeMetrics.Metrics { + if metric.Name != metricName { + continue + } + + gauge, ok := metric.Data.(metricdata.Gauge[int64]) + require.True(t, ok) + + for _, point := range gauge.DataPoints { + if hasMetricAttributes(point.Attributes.ToSlice(), wantAttrs) { + assert.Equal(t, wantValue, point.Value) + return + } + } + } + } + + require.Failf(t, "test failed", "gauge %q with attrs %v not found", metricName, wantAttrs) +} + +func assertGaugePositive(t *testing.T, reader *sdkmetric.ManualReader, metricName string, wantAttrs map[string]string) { + t.Helper() + + var resourceMetrics metricdata.ResourceMetrics + require.NoError(t, 
reader.Collect(context.Background(), &resourceMetrics)) + + for _, scopeMetrics := range resourceMetrics.ScopeMetrics { + for _, metric := range scopeMetrics.Metrics { + if metric.Name != metricName { + continue + } + + gauge, ok := metric.Data.(metricdata.Gauge[int64]) + require.True(t, ok) + + for _, point := range gauge.DataPoints { + if hasMetricAttributes(point.Attributes.ToSlice(), wantAttrs) { + assert.Greater(t, point.Value, int64(0)) + return + } + } + } + } + + require.Failf(t, "test failed", "gauge %q with attrs %v not found", metricName, wantAttrs) +} + +func hasMetricAttributes(values []attribute.KeyValue, want map[string]string) bool { + if len(want) == 0 { + return len(values) == 0 + } + if len(values) != len(want) { + return false + } + + for _, value := range values { + if want[string(value.Key)] != value.Value.AsString() { + return false + } + } + + return true +} + +type stubAttemptScheduleSnapshotReader struct { + snapshot AttemptScheduleSnapshot + err error +} + +func (reader stubAttemptScheduleSnapshotReader) ReadAttemptScheduleSnapshot(context.Context) (AttemptScheduleSnapshot, error) { + return reader.snapshot, reader.err +} diff --git a/mail/internal/worker/attempt_worker.go b/mail/internal/worker/attempt_worker.go new file mode 100644 index 0000000..23c1d5a --- /dev/null +++ b/mail/internal/worker/attempt_worker.go @@ -0,0 +1,148 @@ +package worker + +import ( + "context" + "errors" + "fmt" + "log/slog" + "sync" + + "galaxy/mail/internal/service/executeattempt" +) + +// AttemptExecutionService executes one claimed in-progress attempt. +type AttemptExecutionService interface { + // Execute runs one claimed attempt through provider execution and durable + // state mutation. + Execute(context.Context, executeattempt.WorkItem) error +} + +// AttemptWorkerPoolConfig stores the dependencies used by AttemptWorkerPool. +type AttemptWorkerPoolConfig struct { + // Concurrency stores how many workers run concurrently. 
+ Concurrency int + + // WorkQueue stores the claimed attempt handoff channel produced by the + // scheduler. + WorkQueue <-chan executeattempt.WorkItem + + // Service executes one claimed attempt. + Service AttemptExecutionService +} + +// AttemptWorkerPool executes claimed attempts concurrently. +type AttemptWorkerPool struct { + concurrency int + workQueue <-chan executeattempt.WorkItem + service AttemptExecutionService + logger *slog.Logger +} + +// NewAttemptWorkerPool constructs one attempt worker pool. +func NewAttemptWorkerPool(cfg AttemptWorkerPoolConfig, logger *slog.Logger) (*AttemptWorkerPool, error) { + switch { + case cfg.Concurrency <= 0: + return nil, errors.New("new attempt worker pool: concurrency must be positive") + case cfg.WorkQueue == nil: + return nil, errors.New("new attempt worker pool: nil work queue") + case cfg.Service == nil: + return nil, errors.New("new attempt worker pool: nil attempt execution service") + } + if logger == nil { + logger = slog.Default() + } + + return &AttemptWorkerPool{ + concurrency: cfg.Concurrency, + workQueue: cfg.WorkQueue, + service: cfg.Service, + logger: logger.With("component", "attempt_worker_pool", "concurrency", cfg.Concurrency), + }, nil +} + +// Run starts the attempt worker pool and blocks until ctx is canceled or one +// worker returns an execution error. 
+func (pool *AttemptWorkerPool) Run(ctx context.Context) error {
+	if pool == nil {
+		return errors.New("run attempt worker pool: nil pool")
+	}
+	if ctx == nil {
+		return errors.New("run attempt worker pool: nil context")
+	}
+	if err := ctx.Err(); err != nil {
+		return err
+	}
+
+	pool.logger.Info("attempt worker pool started")
+	defer pool.logger.Info("attempt worker pool stopped")
+
+	runCtx, cancel := context.WithCancel(ctx)
+	defer cancel()
+
+	errs := make(chan error, pool.concurrency)
+	var waitGroup sync.WaitGroup
+
+	for index := 0; index < pool.concurrency; index++ {
+		waitGroup.Add(1)
+		go func(workerIndex int) {
+			defer waitGroup.Done()
+			if err := pool.runWorker(runCtx, workerIndex); err != nil {
+				errs <- err
+			}
+		}(index)
+	}
+
+	done := make(chan struct{})
+	go func() {
+		waitGroup.Wait()
+		close(done)
+	}()
+
+	select {
+	case <-ctx.Done():
+		cancel()
+		<-done
+		return ctx.Err()
+	case err := <-errs:
+		cancel()
+		<-done
+		return err
+	case <-done:
+		if ctx.Err() != nil {
+			return ctx.Err()
+		}
+		return errors.New("run attempt worker pool: work queue closed before shutdown")
+	}
+}
+
+func (pool *AttemptWorkerPool) runWorker(ctx context.Context, workerIndex int) error {
+	pool.logger.Debug("attempt worker started", "worker_index", workerIndex)
+	defer pool.logger.Debug("attempt worker stopped", "worker_index", workerIndex)
+
+	for {
+		select {
+		case <-ctx.Done():
+			return ctx.Err()
+		case item, ok := <-pool.workQueue:
+			if !ok {
+				return nil
+			}
+			if err := pool.service.Execute(ctx, item); err != nil {
+				return fmt.Errorf("attempt worker %d: %w", workerIndex, err)
+			}
+		}
+	}
+}
+
+// Shutdown stops the attempt worker pool within ctx. The pool does not own
+// additional resources beyond its run loop.
+func (pool *AttemptWorkerPool) Shutdown(ctx context.Context) error { + if ctx == nil { + return errors.New("shutdown attempt worker pool: nil context") + } + if pool == nil { + return nil + } + + return nil +} diff --git a/mail/internal/worker/attempt_worker_test.go b/mail/internal/worker/attempt_worker_test.go new file mode 100644 index 0000000..34ac609 --- /dev/null +++ b/mail/internal/worker/attempt_worker_test.go @@ -0,0 +1,347 @@ +package worker + +import ( + "context" + "errors" + "io" + "log/slog" + "sync" + "testing" + "time" + + "galaxy/mail/internal/adapters/redisstate" + "galaxy/mail/internal/adapters/stubprovider" + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/ports" + "galaxy/mail/internal/service/executeattempt" + "galaxy/mail/internal/service/renderdelivery" + + "github.com/alicebob/miniredis/v2" + "github.com/redis/go-redis/v9" + "github.com/stretchr/testify/require" +) + +func TestAttemptWorkersSendImmediateFirstAttempt(t *testing.T) { + t.Parallel() + + fixture := newAttemptWorkerFixture(t, nil) + createAcceptedRenderedDelivery(t, fixture.client, common.DeliveryID("delivery-immediate"), fixture.clock.Now()) + + cancel, wait := fixture.run(t) + defer func() { + cancel() + wait() + }() + + require.Eventually(t, func() bool { + deliveryRecord := loadDeliveryRecord(t, fixture.client, common.DeliveryID("delivery-immediate")) + return deliveryRecord.Status == deliverydomain.StatusSent + }, 5*time.Second, 20*time.Millisecond) + + require.Len(t, fixture.provider.Inputs(), 1) +} + +func TestAttemptWorkersRetryTransientFailuresUntilSuccess(t *testing.T) { + t.Parallel() + + fixture := newAttemptWorkerFixture(t, []stubprovider.ScriptedOutcome{ + { + Classification: ports.ClassificationTransientFailure, + Script: "retry_1", + }, + { + Classification: ports.ClassificationTransientFailure, + Script: "retry_2", + }, + { + Classification: 
ports.ClassificationAccepted, + Script: "accepted", + }, + }) + createAcceptedRenderedDelivery(t, fixture.client, common.DeliveryID("delivery-retry-success"), fixture.clock.Now()) + + cancel, wait := fixture.run(t) + defer func() { + cancel() + wait() + }() + + require.Eventually(t, func() bool { + deliveryRecord := loadDeliveryRecord(t, fixture.client, common.DeliveryID("delivery-retry-success")) + return deliveryRecord.AttemptCount == 2 && deliveryRecord.Status == deliverydomain.StatusQueued + }, 5*time.Second, 20*time.Millisecond) + + fixture.clock.Advance(time.Minute) + + require.Eventually(t, func() bool { + deliveryRecord := loadDeliveryRecord(t, fixture.client, common.DeliveryID("delivery-retry-success")) + return deliveryRecord.AttemptCount == 3 && deliveryRecord.Status == deliverydomain.StatusQueued + }, 5*time.Second, 20*time.Millisecond) + + fixture.clock.Advance(5 * time.Minute) + + require.Eventually(t, func() bool { + deliveryRecord := loadDeliveryRecord(t, fixture.client, common.DeliveryID("delivery-retry-success")) + return deliveryRecord.Status == deliverydomain.StatusSent + }, 5*time.Second, 20*time.Millisecond) + + require.Len(t, fixture.provider.Inputs(), 3) +} + +func TestAttemptWorkersDeadLetterAfterRetryExhaustion(t *testing.T) { + t.Parallel() + + fixture := newAttemptWorkerFixture(t, []stubprovider.ScriptedOutcome{ + {Classification: ports.ClassificationTransientFailure, Script: "retry_1"}, + {Classification: ports.ClassificationTransientFailure, Script: "retry_2"}, + {Classification: ports.ClassificationTransientFailure, Script: "retry_3"}, + {Classification: ports.ClassificationTransientFailure, Script: "retry_4"}, + }) + deliveryID := common.DeliveryID("delivery-dead-letter") + createAcceptedRenderedDelivery(t, fixture.client, deliveryID, fixture.clock.Now()) + + cancel, wait := fixture.run(t) + defer func() { + cancel() + wait() + }() + + require.Eventually(t, func() bool { + return loadDeliveryRecord(t, fixture.client, 
deliveryID).AttemptCount == 2 + }, 5*time.Second, 20*time.Millisecond) + + fixture.clock.Advance(time.Minute) + require.Eventually(t, func() bool { + return loadDeliveryRecord(t, fixture.client, deliveryID).AttemptCount == 3 + }, 5*time.Second, 20*time.Millisecond) + + fixture.clock.Advance(5 * time.Minute) + require.Eventually(t, func() bool { + return loadDeliveryRecord(t, fixture.client, deliveryID).AttemptCount == 4 + }, 5*time.Second, 20*time.Millisecond) + + fixture.clock.Advance(30 * time.Minute) + require.Eventually(t, func() bool { + return loadDeliveryRecord(t, fixture.client, deliveryID).Status == deliverydomain.StatusDeadLetter + }, 5*time.Second, 20*time.Millisecond) + + deadLetter := loadDeadLetterRecord(t, fixture.client, deliveryID) + require.Equal(t, "retry_exhausted", deadLetter.FailureClassification) + require.Len(t, fixture.provider.Inputs(), 4) +} + +func TestAttemptWorkersRecoverExpiredClaimAfterCrash(t *testing.T) { + t.Parallel() + + fixture := newAttemptWorkerFixture(t, []stubprovider.ScriptedOutcome{ + {Classification: ports.ClassificationAccepted, Script: "accepted"}, + }) + deliveryID := common.DeliveryID("delivery-recovered") + createAcceptedRenderedDelivery(t, fixture.client, deliveryID, fixture.clock.Now()) + + claimed, found, err := fixture.store.ClaimDueAttempt(context.Background(), deliveryID, fixture.clock.Now()) + require.NoError(t, err) + require.True(t, found) + require.Equal(t, deliverydomain.StatusSending, claimed.Delivery.Status) + + fixture.clock.Advance(20 * time.Millisecond) + + cancel, wait := fixture.run(t) + defer func() { + cancel() + wait() + }() + + require.Eventually(t, func() bool { + deliveryRecord := loadDeliveryRecord(t, fixture.client, deliveryID) + return deliveryRecord.Status == deliverydomain.StatusQueued && deliveryRecord.AttemptCount == 2 + }, 5*time.Second, 20*time.Millisecond) + + fixture.clock.Advance(time.Minute) + + require.Eventually(t, func() bool { + deliveryRecord := loadDeliveryRecord(t, 
fixture.client, deliveryID) + return deliveryRecord.Status == deliverydomain.StatusSent + }, 5*time.Second, 20*time.Millisecond) + + require.Len(t, fixture.provider.Inputs(), 1) +} + +type attemptWorkerFixture struct { + client *redis.Client + store *redisstate.AttemptExecutionStore + service *executeattempt.Service + scheduler *Scheduler + pool *AttemptWorkerPool + provider *stubprovider.Provider + clock *schedulerTestClock +} + +func newAttemptWorkerFixture(t *testing.T, scripted []stubprovider.ScriptedOutcome) attemptWorkerFixture { + t.Helper() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + store, err := redisstate.NewAttemptExecutionStore(client) + require.NoError(t, err) + + provider, err := stubprovider.New(scripted...) + require.NoError(t, err) + t.Cleanup(func() { require.NoError(t, provider.Close()) }) + + clock := &schedulerTestClock{now: time.Unix(1_775_121_700, 0).UTC()} + workQueue := make(chan executeattempt.WorkItem, 1) + + service, err := executeattempt.New(executeattempt.Config{ + Renderer: noopRenderer{}, + Provider: provider, + PayloadLoader: store, + Store: store, + Clock: clock, + AttemptTimeout: 5 * time.Millisecond, + }) + require.NoError(t, err) + + scheduler, err := NewScheduler(SchedulerConfig{ + Store: store, + Service: service, + WorkQueue: workQueue, + Clock: clock, + AttemptTimeout: 5 * time.Millisecond, + PollInterval: 10 * time.Millisecond, + RecoveryInterval: 10 * time.Millisecond, + RecoveryGrace: 5 * time.Millisecond, + }, testWorkerLogger()) + require.NoError(t, err) + + pool, err := NewAttemptWorkerPool(AttemptWorkerPoolConfig{ + Concurrency: 1, + WorkQueue: workQueue, + Service: service, + }, testWorkerLogger()) + require.NoError(t, err) + + return attemptWorkerFixture{ + client: client, + store: store, + service: service, + scheduler: scheduler, + pool: pool, + provider: provider, + clock: clock, + } +} + +func 
(fixture attemptWorkerFixture) run(t *testing.T) (context.CancelFunc, func()) { + t.Helper() + + ctx, cancel := context.WithCancel(context.Background()) + schedulerDone := make(chan error, 1) + poolDone := make(chan error, 1) + + go func() { + schedulerDone <- fixture.scheduler.Run(ctx) + }() + go func() { + poolDone <- fixture.pool.Run(ctx) + }() + + wait := func() { + require.ErrorIs(t, <-schedulerDone, context.Canceled) + require.ErrorIs(t, <-poolDone, context.Canceled) + } + + return cancel, wait +} + +type schedulerTestClock struct { + mu sync.Mutex + now time.Time +} + +func (clock *schedulerTestClock) Now() time.Time { + clock.mu.Lock() + defer clock.mu.Unlock() + return clock.now +} + +func (clock *schedulerTestClock) Advance(delta time.Duration) { + clock.mu.Lock() + defer clock.mu.Unlock() + clock.now = clock.now.Add(delta) +} + +type noopRenderer struct{} + +func (noopRenderer) Execute(context.Context, renderdelivery.Input) (renderdelivery.Result, error) { + return renderdelivery.Result{}, errors.New("unexpected render invocation") +} + +func createAcceptedRenderedDelivery(t *testing.T, client *redis.Client, deliveryID common.DeliveryID, createdAt time.Time) { + t.Helper() + + writer, err := redisstate.NewAtomicWriter(client) + require.NoError(t, err) + + deliveryRecord := deliverydomain.Delivery{ + DeliveryID: deliveryID, + Source: deliverydomain.SourceNotification, + PayloadMode: deliverydomain.PayloadModeRendered, + Envelope: deliverydomain.Envelope{ + To: []common.Email{common.Email("pilot@example.com")}, + }, + Content: deliverydomain.Content{ + Subject: "Turn ready", + TextBody: "Turn 54 is ready.", + }, + IdempotencyKey: common.IdempotencyKey("notification:" + deliveryID.String()), + Status: deliverydomain.StatusQueued, + AttemptCount: 1, + CreatedAt: createdAt.UTC().Truncate(time.Millisecond), + UpdatedAt: createdAt.UTC().Truncate(time.Millisecond), + } + require.NoError(t, deliveryRecord.Validate()) + + firstAttempt := attempt.Attempt{ + 
DeliveryID: deliveryID, + AttemptNo: 1, + ScheduledFor: createdAt.UTC().Truncate(time.Millisecond), + Status: attempt.StatusScheduled, + } + require.NoError(t, firstAttempt.Validate()) + + require.NoError(t, writer.CreateAcceptance(context.Background(), redisstate.CreateAcceptanceInput{ + Delivery: deliveryRecord, + FirstAttempt: &firstAttempt, + })) +} + +func loadDeliveryRecord(t *testing.T, client *redis.Client, deliveryID common.DeliveryID) deliverydomain.Delivery { + t.Helper() + + payload, err := client.Get(context.Background(), redisstate.Keyspace{}.Delivery(deliveryID)).Bytes() + require.NoError(t, err) + record, err := redisstate.UnmarshalDelivery(payload) + require.NoError(t, err) + + return record +} + +func loadDeadLetterRecord(t *testing.T, client *redis.Client, deliveryID common.DeliveryID) deliverydomain.DeadLetterEntry { + t.Helper() + + payload, err := client.Get(context.Background(), redisstate.Keyspace{}.DeadLetter(deliveryID)).Bytes() + require.NoError(t, err) + record, err := redisstate.UnmarshalDeadLetter(payload) + require.NoError(t, err) + + return record +} + +func testWorkerLogger() *slog.Logger { + return slog.New(slog.NewJSONHandler(io.Discard, nil)) +} diff --git a/mail/internal/worker/cleanup_worker.go b/mail/internal/worker/cleanup_worker.go new file mode 100644 index 0000000..ba77ccf --- /dev/null +++ b/mail/internal/worker/cleanup_worker.go @@ -0,0 +1,73 @@ +package worker + +import ( + "context" + "errors" + "log/slog" + "time" + + "galaxy/mail/internal/adapters/redisstate" +) + +const cleanupInterval = time.Hour + +// CleanupWorker stores the idle index cleanup worker used by the Stage 6 +// runtime skeleton. +type CleanupWorker struct { + cleaner *redisstate.IndexCleaner + logger *slog.Logger +} + +// NewCleanupWorker constructs the idle Stage 6 cleanup worker. 
+func NewCleanupWorker(cleaner *redisstate.IndexCleaner, logger *slog.Logger) (*CleanupWorker, error) { + if cleaner == nil { + return nil, errors.New("new cleanup worker: nil index cleaner") + } + if logger == nil { + logger = slog.Default() + } + + return &CleanupWorker{ + cleaner: cleaner, + logger: logger.With("component", "cleanup_worker"), + }, nil +} + +// Run starts the idle cleanup worker and blocks until ctx is canceled. +func (worker *CleanupWorker) Run(ctx context.Context) error { + if ctx == nil { + return errors.New("run cleanup worker: nil context") + } + if err := ctx.Err(); err != nil { + return err + } + if worker == nil || worker.cleaner == nil { + return errors.New("run cleanup worker: nil cleanup worker") + } + + worker.logger.Info("cleanup worker started", "interval", cleanupInterval.String()) + ticker := time.NewTicker(cleanupInterval) + defer ticker.Stop() + + for { + select { + case <-ctx.Done(): + worker.logger.Info("cleanup worker stopped") + return ctx.Err() + case <-ticker.C: + } + } +} + +// Shutdown stops the cleanup worker within ctx. The Stage 6 skeleton has no +// additional resources to release. +func (worker *CleanupWorker) Shutdown(ctx context.Context) error { + if ctx == nil { + return errors.New("shutdown cleanup worker: nil context") + } + if worker == nil { + return nil + } + + return nil +} diff --git a/mail/internal/worker/command_consumer.go b/mail/internal/worker/command_consumer.go new file mode 100644 index 0000000..5ab5a90 --- /dev/null +++ b/mail/internal/worker/command_consumer.go @@ -0,0 +1,326 @@ +// Package worker provides the long-lived background components used by the +// runnable Mail Service process. 
+package worker + +import ( + "context" + "errors" + "fmt" + "log/slog" + "strings" + "sync" + "time" + + "galaxy/mail/internal/api/streamcommand" + "galaxy/mail/internal/domain/malformedcommand" + "galaxy/mail/internal/logging" + "galaxy/mail/internal/service/acceptgenericdelivery" + + "github.com/redis/go-redis/v9" +) + +// AcceptGenericDeliveryUseCase accepts one generic asynchronous delivery +// command. +type AcceptGenericDeliveryUseCase interface { + // Execute durably accepts one normalized generic-delivery command. + Execute(context.Context, streamcommand.Command) (acceptgenericdelivery.Result, error) +} + +// MalformedCommandRecorder stores one operator-visible malformed async command +// record. +type MalformedCommandRecorder interface { + // Record persists entry idempotently by stream entry id. + Record(context.Context, malformedcommand.Entry) error +} + +// StreamOffsetStore stores the last durably processed entry id of one plain +// XREAD consumer. +type StreamOffsetStore interface { + // Load returns the last processed entry id for stream when one is stored. + Load(context.Context, string) (string, bool, error) + + // Save stores the last processed entry id for stream. + Save(context.Context, string, string) error +} + +// CommandConsumerTelemetry records low-cardinality stream-consumer events. +type CommandConsumerTelemetry interface { + // RecordMalformedCommand records one malformed or rejected async stream + // command. + RecordMalformedCommand(context.Context, string) +} + +// Clock provides the current wall-clock time. +type Clock interface { + // Now returns the current time. + Now() time.Time +} + +type systemClock struct{} + +func (systemClock) Now() time.Time { + return time.Now() +} + +// CommandConsumerConfig stores the dependencies used by CommandConsumer. +type CommandConsumerConfig struct { + // Client stores the Redis client used for XREAD. + Client *redis.Client + + // Stream stores the Redis Stream name to consume. 
+ Stream string + + // BlockTimeout stores the blocking XREAD timeout. + BlockTimeout time.Duration + + // Acceptor durably accepts valid generic-delivery commands. + Acceptor AcceptGenericDeliveryUseCase + + // MalformedRecorder persists operator-visible malformed-command entries. + MalformedRecorder MalformedCommandRecorder + + // OffsetStore stores the last durably processed stream entry id. + OffsetStore StreamOffsetStore + + // Telemetry records malformed-command counters. + Telemetry CommandConsumerTelemetry + + // Clock provides wall-clock timestamps for malformed-command records. + Clock Clock +} + +// CommandConsumer stores the Redis Streams consumer used for generic +// asynchronous delivery intake. +type CommandConsumer struct { + client *redis.Client + stream string + blockTimeout time.Duration + acceptor AcceptGenericDeliveryUseCase + malformedRecorder MalformedCommandRecorder + offsetStore StreamOffsetStore + telemetry CommandConsumerTelemetry + clock Clock + logger *slog.Logger + closeOnce sync.Once +} + +// NewCommandConsumer constructs the generic-delivery command consumer. 
+func NewCommandConsumer(cfg CommandConsumerConfig, logger *slog.Logger) (*CommandConsumer, error) { + switch { + case cfg.Client == nil: + return nil, errors.New("new command consumer: nil redis client") + case strings.TrimSpace(cfg.Stream) == "": + return nil, errors.New("new command consumer: stream must not be empty") + case cfg.BlockTimeout <= 0: + return nil, errors.New("new command consumer: block timeout must be positive") + case cfg.Acceptor == nil: + return nil, errors.New("new command consumer: nil acceptor") + case cfg.MalformedRecorder == nil: + return nil, errors.New("new command consumer: nil malformed recorder") + case cfg.OffsetStore == nil: + return nil, errors.New("new command consumer: nil offset store") + } + if cfg.Clock == nil { + cfg.Clock = systemClock{} + } + if logger == nil { + logger = slog.Default() + } + + return &CommandConsumer{ + client: cfg.Client, + stream: cfg.Stream, + blockTimeout: cfg.BlockTimeout, + acceptor: cfg.Acceptor, + malformedRecorder: cfg.MalformedRecorder, + offsetStore: cfg.OffsetStore, + telemetry: cfg.Telemetry, + clock: cfg.Clock, + logger: logger.With("component", "command_consumer", "stream", cfg.Stream), + }, nil +} + +// Run starts the command consumer and blocks until ctx is canceled or Redis +// returns an unexpected error. 
+func (consumer *CommandConsumer) Run(ctx context.Context) error { + if ctx == nil { + return errors.New("run command consumer: nil context") + } + if err := ctx.Err(); err != nil { + return err + } + if consumer == nil || consumer.client == nil { + return errors.New("run command consumer: nil consumer") + } + + lastID, found, err := consumer.offsetStore.Load(ctx, consumer.stream) + if err != nil { + return fmt.Errorf("run command consumer: load stream offset: %w", err) + } + if !found { + lastID = "0-0" + } + + consumer.logger.Info("command consumer started", "block_timeout", consumer.blockTimeout.String(), "start_entry_id", lastID) + + for { + streams, err := consumer.client.XRead(ctx, &redis.XReadArgs{ + Streams: []string{consumer.stream, lastID}, + Count: 1, + Block: consumer.blockTimeout, + }).Result() + switch { + case err == nil: + for _, stream := range streams { + for _, message := range stream.Messages { + if err := consumer.handleMessage(ctx, message); err != nil { + return err + } + if err := consumer.offsetStore.Save(ctx, consumer.stream, message.ID); err != nil { + return fmt.Errorf("run command consumer: save stream offset: %w", err) + } + lastID = message.ID + } + } + case errors.Is(err, redis.Nil): + continue + case ctx.Err() != nil && (errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) || errors.Is(err, redis.ErrClosed)): + consumer.logger.Info("command consumer stopped") + return ctx.Err() + default: + return fmt.Errorf("run command consumer: %w", err) + } + } +} + +func (consumer *CommandConsumer) handleMessage(ctx context.Context, message redis.XMessage) error { + rawFields := cloneRawFields(message.Values) + + command, err := streamcommand.DecodeCommand(rawFields) + if err != nil { + return consumer.recordMalformed(ctx, message.ID, rawFields,
streamcommand.ClassifyDecodeError(err), err) + } + + result, err := consumer.acceptor.Execute(ctx, command) + switch { + case err == nil: + logArgs := logging.CommandAttrs(command) + logArgs = append(logArgs, + "stream_entry_id", message.ID, + "outcome", string(result.Outcome), + ) + logArgs = append(logArgs, logging.TraceAttrsFromContext(ctx)...) + consumer.logger.Info("generic command accepted", logArgs...) + return nil + case errors.Is(err, acceptgenericdelivery.ErrConflict): + return consumer.recordMalformed(ctx, message.ID, rawFields, malformedcommand.FailureCodeIdempotencyConflict, err) + case errors.Is(err, acceptgenericdelivery.ErrServiceUnavailable): + return fmt.Errorf("handle command %q: %w", message.ID, err) + default: + return fmt.Errorf("handle command %q: %w", message.ID, err) + } +} + +func (consumer *CommandConsumer) recordMalformed( + ctx context.Context, + streamEntryID string, + rawFields map[string]any, + failureCode malformedcommand.FailureCode, + cause error, +) error { + entry := malformedcommand.Entry{ + StreamEntryID: streamEntryID, + DeliveryID: optionalRawString(rawFields, "delivery_id"), + Source: optionalRawString(rawFields, "source"), + IdempotencyKey: optionalRawString(rawFields, "idempotency_key"), + FailureCode: failureCode, + FailureMessage: strings.TrimSpace(cause.Error()), + RawFields: cloneRawFields(rawFields), + RecordedAt: consumer.clock.Now().UTC().Truncate(time.Millisecond), + } + if err := consumer.malformedRecorder.Record(ctx, entry); err != nil { + return fmt.Errorf("record malformed command %q: %w", streamEntryID, err) + } + if consumer.telemetry != nil { + consumer.telemetry.RecordMalformedCommand(ctx, string(failureCode)) + } + + consumer.logger.Warn("stream command rejected", + append([]any{ + "stream_entry_id", streamEntryID, + "delivery_id", entry.DeliveryID, + "source", entry.Source, + "idempotency_key", entry.IdempotencyKey, + "trace_id", optionalRawString(rawFields, "trace_id"), + "failure_code", 
string(entry.FailureCode), + "failure_message", entry.FailureMessage, + }, logging.TraceAttrsFromContext(ctx)...)..., + ) + + return nil +} + +func cloneRawFields(values map[string]any) map[string]any { + if values == nil { + return map[string]any{} + } + + cloned := make(map[string]any, len(values)) + for key, value := range values { + cloned[key] = cloneRawValue(value) + } + + return cloned +} + +func cloneRawValue(value any) any { + switch typed := value.(type) { + case map[string]any: + return cloneRawFields(typed) + case []any: + cloned := make([]any, len(typed)) + for index, item := range typed { + cloned[index] = cloneRawValue(item) + } + return cloned + default: + return typed + } +} + +func optionalRawString(values map[string]any, key string) string { + raw, ok := values[key] + if !ok { + return "" + } + + value, ok := raw.(string) + if !ok { + return "" + } + + return value +} + +// Shutdown stops the command consumer within ctx. The consumer uses the +// shared process Redis client and therefore has no dedicated resources to +// release here. 
+func (consumer *CommandConsumer) Shutdown(ctx context.Context) error { + if ctx == nil { + return errors.New("shutdown command consumer: nil context") + } + if consumer == nil { + return nil + } + + // The shared Redis client is owned and closed by the process wiring, not + // by this consumer, so there is nothing to release here. + return nil +} diff --git a/mail/internal/worker/command_consumer_test.go b/mail/internal/worker/command_consumer_test.go new file mode 100644 index 0000000..d807447 --- /dev/null +++ b/mail/internal/worker/command_consumer_test.go @@ -0,0 +1,391 @@ +package worker + +import ( + "context" + "errors" + "io" + "log/slog" + "testing" + "time" + + "galaxy/mail/internal/adapters/redisstate" + "galaxy/mail/internal/service/acceptgenericdelivery" + + "github.com/alicebob/miniredis/v2" + "github.com/redis/go-redis/v9" + "github.com/stretchr/testify/require" +) + +func TestCommandConsumerAcceptsRenderedCommand(t *testing.T) { + t.Parallel() + + fixture := newCommandConsumerFixture(t) + messageID := addRenderedCommand(t, fixture.client, "mail-123", "notification:mail-123") + + ctx, cancel := context.WithCancel(context.Background()) + done := make(chan error, 1) + go func() { + done <- fixture.consumer.Run(ctx) + }() + + require.Eventually(t, func() bool { + delivery, found, err := fixture.acceptanceStore.GetDelivery(context.Background(), "mail-123") + if err != nil || !found { + return false + } + entryID, found, err := fixture.offsetStore.Load(context.Background(), fixture.stream) + return err == nil && found && entryID == messageID && delivery.DeliveryID == "mail-123" + }, 5*time.Second, 20*time.Millisecond) + + cancel() + require.ErrorIs(t, <-done, context.Canceled) +} + +func TestCommandConsumerAcceptsTemplateCommand(t *testing.T) { + t.Parallel() + + fixture := newCommandConsumerFixture(t) + messageID := addTemplateCommand(t, fixture.client, "mail-124", "notification:mail-124") + + ctx, cancel := context.WithCancel(context.Background()) + done :=
make(chan error, 1) + go func() { + done <- fixture.consumer.Run(ctx) + }() + + require.Eventually(t, func() bool { + delivery, found, err := fixture.acceptanceStore.GetDelivery(context.Background(), "mail-124") + if err != nil || !found { + return false + } + entryID, found, err := fixture.offsetStore.Load(context.Background(), fixture.stream) + return err == nil && found && entryID == messageID && delivery.TemplateID == "game.turn_ready" + }, 5*time.Second, 20*time.Millisecond) + + cancel() + require.ErrorIs(t, <-done, context.Canceled) +} + +func TestCommandConsumerRecordsMalformedCommandAndContinues(t *testing.T) { + t.Parallel() + + fixture := newCommandConsumerFixture(t) + malformedID := addMalformedRenderedCommand(t, fixture.client, "mail-bad", "notification:mail-bad") + validID := addRenderedCommand(t, fixture.client, "mail-125", "notification:mail-125") + + ctx, cancel := context.WithCancel(context.Background()) + done := make(chan error, 1) + go func() { + done <- fixture.consumer.Run(ctx) + }() + + require.Eventually(t, func() bool { + _, deliveryFound, deliveryErr := fixture.acceptanceStore.GetDelivery(context.Background(), "mail-125") + entry, malformedFound, malformedErr := fixture.malformedStore.Get(context.Background(), malformedID) + entryID, offsetFound, offsetErr := fixture.offsetStore.Load(context.Background(), fixture.stream) + return deliveryErr == nil && + malformedErr == nil && + offsetErr == nil && + deliveryFound && + malformedFound && + entry.FailureCode == "invalid_payload" && + offsetFound && + entryID == validID + }, 5*time.Second, 20*time.Millisecond) + + cancel() + require.ErrorIs(t, <-done, context.Canceled) +} + +func TestCommandConsumerRestartsFromSavedOffset(t *testing.T) { + t.Parallel() + + fixture := newCommandConsumerFixture(t) + firstID := addRenderedCommand(t, fixture.client, "mail-126", "notification:mail-126") + + firstCtx, firstCancel := context.WithCancel(context.Background()) + firstDone := make(chan error, 1) + go 
func() { + firstDone <- fixture.consumer.Run(firstCtx) + }() + + require.Eventually(t, func() bool { + entryID, found, err := fixture.offsetStore.Load(context.Background(), fixture.stream) + return err == nil && found && entryID == firstID + }, 5*time.Second, 20*time.Millisecond) + + firstCancel() + require.ErrorIs(t, <-firstDone, context.Canceled) + + secondID := addRenderedCommand(t, fixture.client, "mail-127", "notification:mail-127") + + secondCtx, secondCancel := context.WithCancel(context.Background()) + secondDone := make(chan error, 1) + go func() { + secondDone <- fixture.consumer.Run(secondCtx) + }() + + require.Eventually(t, func() bool { + _, firstFound, firstErr := fixture.acceptanceStore.GetDelivery(context.Background(), "mail-126") + _, secondFound, secondErr := fixture.acceptanceStore.GetDelivery(context.Background(), "mail-127") + entryID, offsetFound, offsetErr := fixture.offsetStore.Load(context.Background(), fixture.stream) + return firstErr == nil && + secondErr == nil && + offsetErr == nil && + firstFound && + secondFound && + offsetFound && + entryID == secondID + }, 5*time.Second, 20*time.Millisecond) + + secondCancel() + require.ErrorIs(t, <-secondDone, context.Canceled) +} + +func TestCommandConsumerDoesNotDuplicateAcceptanceAfterOffsetSaveFailure(t *testing.T) { + t.Parallel() + + fixture := newCommandConsumerFixture(t) + messageID := addRenderedCommand(t, fixture.client, "mail-128", "notification:mail-128") + failingOffsetStore := &scriptedOffsetStore{ + saveErrs: []error{errors.New("offset unavailable")}, + } + consumer := newCommandConsumerForTest(t, fixture.client, fixture.stream, fixture.acceptor, fixture.malformedStore, failingOffsetStore) + + err := consumer.Run(context.Background()) + require.Error(t, err) + require.ErrorContains(t, err, "save stream offset") + + delivery, found, err := fixture.acceptanceStore.GetDelivery(context.Background(), "mail-128") + require.NoError(t, err) + require.True(t, found) + require.Equal(t, 
"mail-128", delivery.DeliveryID.String()) + + indexCard, err := fixture.client.ZCard(context.Background(), redisstate.Keyspace{}.CreatedAtIndex()).Result() + require.NoError(t, err) + require.EqualValues(t, 1, indexCard) + + replayConsumer := newCommandConsumerForTest(t, fixture.client, fixture.stream, fixture.acceptor, fixture.malformedStore, failingOffsetStore) + replayCtx, replayCancel := context.WithCancel(context.Background()) + replayDone := make(chan error, 1) + go func() { + replayDone <- replayConsumer.Run(replayCtx) + }() + + require.Eventually(t, func() bool { + return failingOffsetStore.lastEntryID == messageID + }, 5*time.Second, 20*time.Millisecond) + + replayCancel() + require.ErrorIs(t, <-replayDone, context.Canceled) + + indexCard, err = fixture.client.ZCard(context.Background(), redisstate.Keyspace{}.CreatedAtIndex()).Result() + require.NoError(t, err) + require.EqualValues(t, 1, indexCard) + + scheduleCard, err := fixture.client.ZCard(context.Background(), redisstate.Keyspace{}.AttemptSchedule()).Result() + require.NoError(t, err) + require.EqualValues(t, 1, scheduleCard) +} + +func TestCommandConsumerRecordsIdempotencyConflictAsMalformed(t *testing.T) { + t.Parallel() + + fixture := newCommandConsumerFixture(t) + addRenderedCommand(t, fixture.client, "mail-129", "notification:shared") + conflictID := addRenderedCommandWithSubject(t, fixture.client, "mail-130", "notification:shared", "Different subject") + + ctx, cancel := context.WithCancel(context.Background()) + done := make(chan error, 1) + go func() { + done <- fixture.consumer.Run(ctx) + }() + + require.Eventually(t, func() bool { + _, firstFound, firstErr := fixture.acceptanceStore.GetDelivery(context.Background(), "mail-129") + _, secondFound, secondErr := fixture.acceptanceStore.GetDelivery(context.Background(), "mail-130") + entry, malformedFound, malformedErr := fixture.malformedStore.Get(context.Background(), conflictID) + return firstErr == nil && + secondErr == nil && + malformedErr 
== nil && + firstFound && + !secondFound && + malformedFound && + entry.FailureCode == "idempotency_conflict" + }, 5*time.Second, 20*time.Millisecond) + + cancel() + require.ErrorIs(t, <-done, context.Canceled) +} + +type commandConsumerFixture struct { + client *redis.Client + stream string + consumer *CommandConsumer + acceptor *acceptgenericdelivery.Service + acceptanceStore *redisstate.GenericAcceptanceStore + malformedStore *redisstate.MalformedCommandStore + offsetStore *redisstate.StreamOffsetStore +} + +func newCommandConsumerFixture(t *testing.T) commandConsumerFixture { + t.Helper() + + server := miniredis.RunT(t) + client := redis.NewClient(&redis.Options{Addr: server.Addr()}) + t.Cleanup(func() { require.NoError(t, client.Close()) }) + + acceptanceStore, err := redisstate.NewGenericAcceptanceStore(client) + require.NoError(t, err) + now := time.Now().UTC().Truncate(time.Millisecond) + acceptor, err := acceptgenericdelivery.New(acceptgenericdelivery.Config{ + Store: acceptanceStore, + Clock: testClock{now: now}, + IdempotencyTTL: redisstate.IdempotencyTTL, + }) + require.NoError(t, err) + + malformedStore, err := redisstate.NewMalformedCommandStore(client) + require.NoError(t, err) + offsetStore, err := redisstate.NewStreamOffsetStore(client) + require.NoError(t, err) + + stream := redisstate.Keyspace{}.DeliveryCommands() + consumer := newCommandConsumerForTest(t, client, stream, acceptor, malformedStore, offsetStore) + + return commandConsumerFixture{ + client: client, + stream: stream, + consumer: consumer, + acceptor: acceptor, + acceptanceStore: acceptanceStore, + malformedStore: malformedStore, + offsetStore: offsetStore, + } +} + +func newCommandConsumerForTest( + t *testing.T, + client *redis.Client, + stream string, + acceptor AcceptGenericDeliveryUseCase, + malformedRecorder MalformedCommandRecorder, + offsetStore StreamOffsetStore, +) *CommandConsumer { + t.Helper() + + consumer, err := NewCommandConsumer(CommandConsumerConfig{ + Client: 
client, + Stream: stream, + BlockTimeout: 20 * time.Millisecond, + Acceptor: acceptor, + MalformedRecorder: malformedRecorder, + OffsetStore: offsetStore, + Clock: testClock{now: time.Now().UTC().Truncate(time.Millisecond)}, + }, testLogger()) + require.NoError(t, err) + + return consumer +} + +func addRenderedCommand(t *testing.T, client *redis.Client, deliveryID string, idempotencyKey string) string { + t.Helper() + + return addRenderedCommandWithSubject(t, client, deliveryID, idempotencyKey, "Turn ready") +} + +func addRenderedCommandWithSubject(t *testing.T, client *redis.Client, deliveryID string, idempotencyKey string, subject string) string { + t.Helper() + + messageID, err := client.XAdd(context.Background(), &redis.XAddArgs{ + Stream: redisstate.Keyspace{}.DeliveryCommands(), + Values: map[string]any{ + "delivery_id": deliveryID, + "source": "notification", + "payload_mode": "rendered", + "idempotency_key": idempotencyKey, + "requested_at_ms": "1775121700000", + "payload_json": `{"to":["pilot@example.com"],"cc":[],"bcc":[],"reply_to":["noreply@example.com"],"subject":"` + subject + `","text_body":"Turn 54 is ready.","html_body":"<p>Turn 54 is ready.</p>","attachments":[]}`, + }, + }).Result() + require.NoError(t, err) + + return messageID +} + +func addTemplateCommand(t *testing.T, client *redis.Client, deliveryID string, idempotencyKey string) string { + t.Helper() + + messageID, err := client.XAdd(context.Background(), &redis.XAddArgs{ + Stream: redisstate.Keyspace{}.DeliveryCommands(), + Values: map[string]any{ + "delivery_id": deliveryID, + "source": "notification", + "payload_mode": "template", + "idempotency_key": idempotencyKey, + "requested_at_ms": "1775121700001", + "payload_json": `{"to":["pilot@example.com"],"cc":[],"bcc":[],"reply_to":[],"template_id":"game.turn_ready","locale":"fr-FR","variables":{"turn_number":54},"attachments":[]}`, + }, + }).Result() + require.NoError(t, err) + + return messageID +} + +func addMalformedRenderedCommand(t *testing.T, client *redis.Client, deliveryID string, idempotencyKey string) string { + t.Helper() + + messageID, err := client.XAdd(context.Background(), &redis.XAddArgs{ + Stream: redisstate.Keyspace{}.DeliveryCommands(), + Values: map[string]any{ + "delivery_id": deliveryID, + "source": "notification", + "payload_mode": "rendered", + "idempotency_key": idempotencyKey, + "requested_at_ms": "1775121700000", + "payload_json": `{"to":["pilot@example.com"],"cc":[],"bcc":[],"reply_to":[],"text_body":"Turn 54 is ready.","attachments":[]}`, + }, + }).Result() + require.NoError(t, err) + + return messageID +} + +type testClock struct { + now time.Time +} + +func (clock testClock) Now() time.Time { + return clock.now +} + +type scriptedOffsetStore struct { + lastEntryID string + found bool + saveErrs []error + saveCalls int +} + +func (store *scriptedOffsetStore) Load(context.Context, string) (string, bool, error) { + if !store.found { + return "", false, nil + } + + return store.lastEntryID, true, nil +} + +func (store *scriptedOffsetStore) Save(_ context.Context, _ string, entryID string) error { + if store.saveCalls < len(store.saveErrs) &&
store.saveErrs[store.saveCalls] != nil { + store.saveCalls++ + return store.saveErrs[store.saveCalls-1] + } + + store.saveCalls++ + store.lastEntryID = entryID + store.found = true + return nil +} + +func testLogger() *slog.Logger { + return slog.New(slog.NewJSONHandler(io.Discard, nil)) +} diff --git a/mail/internal/worker/scheduler.go b/mail/internal/worker/scheduler.go new file mode 100644 index 0000000..81fab77 --- /dev/null +++ b/mail/internal/worker/scheduler.go @@ -0,0 +1,347 @@ +package worker + +import ( + "context" + "errors" + "fmt" + "log/slog" + "time" + + "galaxy/mail/internal/domain/attempt" + "galaxy/mail/internal/domain/common" + deliverydomain "galaxy/mail/internal/domain/delivery" + "galaxy/mail/internal/logging" + "galaxy/mail/internal/service/executeattempt" +) + +const ( + defaultSchedulePollInterval = 250 * time.Millisecond + defaultRecoveryInterval = 30 * time.Second + defaultRecoveryGrace = 30 * time.Second +) + +// AttemptExecutionStore describes the durable state operations used by the +// attempt scheduler. +type AttemptExecutionStore interface { + // NextDueDeliveryIDs returns up to limit due delivery identifiers. + NextDueDeliveryIDs(context.Context, time.Time, int64) ([]common.DeliveryID, error) + + // SendingDeliveryIDs returns every delivery currently indexed as sending. + SendingDeliveryIDs(context.Context) ([]common.DeliveryID, error) + + // LoadWorkItem loads the current delivery and active attempt for deliveryID. + LoadWorkItem(context.Context, common.DeliveryID) (executeattempt.WorkItem, bool, error) + + // ClaimDueAttempt atomically claims the due scheduled attempt for + // deliveryID. + ClaimDueAttempt(context.Context, common.DeliveryID, time.Time) (executeattempt.WorkItem, bool, error) + + // RemoveScheduledDelivery removes deliveryID from the attempt schedule set. 
+ RemoveScheduledDelivery(context.Context, common.DeliveryID) error +} + +// AttemptPreparationService prepares queued template deliveries and recovers +// stale claimed attempts. +type AttemptPreparationService interface { + // Prepare renders one queued template delivery when needed and reports + // whether the scheduler may continue to claim the attempt. + Prepare(context.Context, executeattempt.WorkItem) (bool, error) + + // RecoverExpired marks one stale in-progress attempt as timed out. + RecoverExpired(context.Context, executeattempt.WorkItem) error +} + +// SchedulerTelemetry records low-cardinality scheduler-side delivery +// transitions. +type SchedulerTelemetry interface { + // RecordDeliveryStatusTransition records one durable delivery status + // transition. + RecordDeliveryStatusTransition(context.Context, string, string) +} + +// SchedulerConfig stores the dependencies used by Scheduler. +type SchedulerConfig struct { + // Store owns the durable scheduled and in-progress attempt state. + Store AttemptExecutionStore + + // Service prepares queued template deliveries and recovers stale claims. + Service AttemptPreparationService + + // WorkQueue stores the claimed attempt handoff channel consumed by the + // attempt worker pool. + WorkQueue chan<- executeattempt.WorkItem + + // Clock provides the scheduler wall clock. + Clock Clock + + // AttemptTimeout stores the provider execution budget used to derive claim + // recovery deadlines. + AttemptTimeout time.Duration + + // Telemetry records scheduler-side delivery transitions. + Telemetry SchedulerTelemetry + + // PollInterval overrides the default due-attempt polling interval when + // positive. + PollInterval time.Duration + + // RecoveryInterval overrides the default stale-claim recovery interval when + // positive. + RecoveryInterval time.Duration + + // RecoveryGrace overrides the default stale-claim grace window when + // positive. 
+ RecoveryGrace time.Duration +} + +// Scheduler polls due attempts, optionally renders queued template +// deliveries, atomically claims runnable work, and recovers stale in-progress +// ownership. +type Scheduler struct { + store AttemptExecutionStore + service AttemptPreparationService + workQueue chan<- executeattempt.WorkItem + clock Clock + attemptTimeout time.Duration + telemetry SchedulerTelemetry + pollInterval time.Duration + recoveryInterval time.Duration + recoveryGrace time.Duration + logger *slog.Logger +} + +// NewScheduler constructs one attempt scheduler. +func NewScheduler(cfg SchedulerConfig, logger *slog.Logger) (*Scheduler, error) { + switch { + case cfg.Store == nil: + return nil, errors.New("new scheduler: nil attempt execution store") + case cfg.Service == nil: + return nil, errors.New("new scheduler: nil attempt preparation service") + case cfg.WorkQueue == nil: + return nil, errors.New("new scheduler: nil work queue") + case cfg.Clock == nil: + return nil, errors.New("new scheduler: nil clock") + case cfg.AttemptTimeout <= 0: + return nil, errors.New("new scheduler: non-positive attempt timeout") + } + if logger == nil { + logger = slog.Default() + } + + pollInterval := cfg.PollInterval + if pollInterval <= 0 { + pollInterval = defaultSchedulePollInterval + } + + recoveryInterval := cfg.RecoveryInterval + if recoveryInterval <= 0 { + recoveryInterval = defaultRecoveryInterval + } + + recoveryGrace := cfg.RecoveryGrace + if recoveryGrace <= 0 { + recoveryGrace = defaultRecoveryGrace + } + + return &Scheduler{ + store: cfg.Store, + service: cfg.Service, + workQueue: cfg.WorkQueue, + clock: cfg.Clock, + attemptTimeout: cfg.AttemptTimeout, + telemetry: cfg.Telemetry, + pollInterval: pollInterval, + recoveryInterval: recoveryInterval, + recoveryGrace: recoveryGrace, + logger: logger.With( + "component", "scheduler", + "poll_interval", pollInterval.String(), + "recovery_interval", recoveryInterval.String(), + "recovery_grace", 
recoveryGrace.String(), + ), + }, nil +} + +// Run starts the scheduler loop and blocks until ctx is canceled or one +// durable state operation fails. +func (scheduler *Scheduler) Run(ctx context.Context) error { + if ctx == nil { + return errors.New("run scheduler: nil context") + } + if err := ctx.Err(); err != nil { + return err + } + if scheduler == nil { + return errors.New("run scheduler: nil scheduler") + } + + scheduler.logger.Info("scheduler started") + defer scheduler.logger.Info("scheduler stopped") + + if err := scheduler.recoverExpired(ctx); err != nil { + return err + } + + pollTicker := time.NewTicker(scheduler.pollInterval) + defer pollTicker.Stop() + + recoveryTicker := time.NewTicker(scheduler.recoveryInterval) + defer recoveryTicker.Stop() + + for { + select { + case <-ctx.Done(): + return ctx.Err() + case <-pollTicker.C: + if err := scheduler.dispatchDueAttempts(ctx); err != nil { + return err + } + case <-recoveryTicker.C: + if err := scheduler.recoverExpired(ctx); err != nil { + return err + } + } + } +} + +// Shutdown stops the scheduler within ctx. The scheduler does not own +// additional resources beyond its run loop. 
+func (scheduler *Scheduler) Shutdown(ctx context.Context) error {
+	if ctx == nil {
+		return errors.New("shutdown scheduler: nil context")
+	}
+
+	return nil
+}
+
+func (scheduler *Scheduler) dispatchDueAttempts(ctx context.Context) error {
+	for {
+		now := scheduler.clock.Now().UTC().Truncate(time.Millisecond)
+		deliveryIDs, err := scheduler.store.NextDueDeliveryIDs(ctx, now, 1)
+		if err != nil {
+			return fmt.Errorf("dispatch due attempts: %w", err)
+		}
+		if len(deliveryIDs) == 0 {
+			return nil
+		}
+
+		if err := scheduler.dispatchOne(ctx, deliveryIDs[0], now); err != nil {
+			return err
+		}
+	}
+}
+
+func (scheduler *Scheduler) dispatchOne(ctx context.Context, deliveryID common.DeliveryID, now time.Time) error {
+	workItem, found, err := scheduler.store.LoadWorkItem(ctx, deliveryID)
+	if err != nil {
+		return fmt.Errorf("dispatch due delivery %q: load work item: %w", deliveryID, err)
+	}
+	if !found {
+		if err := scheduler.store.RemoveScheduledDelivery(ctx, deliveryID); err != nil {
+			return fmt.Errorf("dispatch due delivery %q: remove stale schedule: %w", deliveryID, err)
+		}
+		return nil
+	}
+	if !isSchedulable(workItem) {
+		if err := scheduler.store.RemoveScheduledDelivery(ctx, deliveryID); err != nil {
+			return fmt.Errorf("dispatch due delivery %q: remove unschedulable entry: %w", deliveryID, err)
+		}
+		return nil
+	}
+
+	ready, err := scheduler.service.Prepare(ctx, workItem)
+	if err != nil {
+		return fmt.Errorf("dispatch due delivery %q: prepare attempt: %w", deliveryID, err)
+	}
+	if !ready {
+		return nil
+	}
+
+	claimed, found, err := scheduler.store.ClaimDueAttempt(ctx, deliveryID, now)
+	if err != nil {
+		return fmt.Errorf("dispatch due delivery %q: claim attempt: %w", deliveryID, err)
+	}
+	if !found {
+		return nil
+	}
+	scheduler.recordStatusTransition(ctx, claimed.Delivery)
+
+	select {
+	case <-ctx.Done():
+		return ctx.Err()
+	case scheduler.workQueue <- claimed:
+		logArgs := logging.DeliveryAttemptAttrs(claimed.Delivery, claimed.Attempt)
+		logArgs = append(logArgs, logging.TraceAttrsFromContext(ctx)...)
+		scheduler.logger.Debug("attempt claimed", logArgs...)
+		return nil
+	}
+}
+
+func (scheduler *Scheduler) recoverExpired(ctx context.Context) error {
+	now := scheduler.clock.Now().UTC().Truncate(time.Millisecond)
+	deadline := now.Add(-(scheduler.attemptTimeout + scheduler.recoveryGrace))
+
+	deliveryIDs, err := scheduler.store.SendingDeliveryIDs(ctx)
+	if err != nil {
+		return fmt.Errorf("recover expired attempts: %w", err)
+	}
+
+	for _, deliveryID := range deliveryIDs {
+		workItem, found, err := scheduler.store.LoadWorkItem(ctx, deliveryID)
+		if err != nil {
+			return fmt.Errorf("recover expired delivery %q: load work item: %w", deliveryID, err)
+		}
+		if !found || !isRecoverable(workItem) || workItem.Attempt.StartedAt == nil {
+			continue
+		}
+		if workItem.Attempt.StartedAt.After(deadline) {
+			continue
+		}
+
+		if err := scheduler.service.RecoverExpired(ctx, workItem); err != nil {
+			return fmt.Errorf("recover expired delivery %q: %w", deliveryID, err)
+		}
+
+		logArgs := logging.DeliveryAttemptAttrs(workItem.Delivery, workItem.Attempt)
+		logArgs = append(logArgs, "started_at", workItem.Attempt.StartedAt)
+		logArgs = append(logArgs, logging.TraceAttrsFromContext(ctx)...)
+		scheduler.logger.Warn("attempt claim expired", logArgs...)
+ } + + return nil +} + +func (scheduler *Scheduler) recordStatusTransition(ctx context.Context, record deliverydomain.Delivery) { + if scheduler == nil || scheduler.telemetry == nil { + return + } + + scheduler.telemetry.RecordDeliveryStatusTransition(ctx, string(record.Status), string(record.Source)) +} + +func isSchedulable(item executeattempt.WorkItem) bool { + if item.Delivery.AttemptCount != item.Attempt.AttemptNo { + return false + } + switch item.Delivery.Status { + case deliverydomain.StatusQueued, deliverydomain.StatusRendered: + default: + return false + } + + return item.Attempt.Status == attempt.StatusScheduled +} + +func isRecoverable(item executeattempt.WorkItem) bool { + if item.Delivery.AttemptCount != item.Attempt.AttemptNo { + return false + } + if item.Delivery.Status != deliverydomain.StatusSending { + return false + } + + return item.Attempt.Status == attempt.StatusInProgress +} diff --git a/mail/templates/auth.login_code/en/subject.tmpl b/mail/templates/auth.login_code/en/subject.tmpl new file mode 100644 index 0000000..3ce9ccf --- /dev/null +++ b/mail/templates/auth.login_code/en/subject.tmpl @@ -0,0 +1 @@ +Your login code diff --git a/mail/templates/auth.login_code/en/text.tmpl b/mail/templates/auth.login_code/en/text.tmpl new file mode 100644 index 0000000..650f52f --- /dev/null +++ b/mail/templates/auth.login_code/en/text.tmpl @@ -0,0 +1 @@ +Your login code is {{.code}}. diff --git a/user/README.md b/user/README.md index 7ceedfc..5c1cc4e 100644 --- a/user/README.md +++ b/user/README.md @@ -111,10 +111,11 @@ decisions: - Existing users ignore the registration context completely. - Existing users must not have settings overwritten by a later auth flow. 
 - The current rollout source of truth is:
-  - `Auth / Session Service` sends temporary `preferred_language="en"`
+  - `Auth / Session Service` forwards the preferred-language candidate derived
+    from public `Accept-Language`
+  - unsupported or missing public language input falls back to `en`
   - `Auth / Session Service` forwards the public confirm `time_zone`
-  - gateway-side geoip language derivation is not part of the current
-    contract yet
+  - the create-only registration context remains unchanged for existing users
 
 Auth-facing blocking semantics:
 
diff --git a/user/docs/flows.md b/user/docs/flows.md
index f0ba36f..eeec4dd 100644
--- a/user/docs/flows.md
+++ b/user/docs/flows.md
@@ -28,8 +28,9 @@ Rules:
 - `registration_context` is create-only
 - existing users ignore the supplied registration context
 - blocked subjects return `blocked` rather than creating a user
-- the current rollout sends temporary `preferred_language="en"` from
-  authsession and forwards the public confirm `time_zone`
+- the current rollout sends the preferred-language candidate derived from
+  public `Accept-Language`, falls back to `en` when no supported value is
+  available, and forwards the public confirm `time_zone`
 
 Create side effects:
 
diff --git a/user/docs/runbook.md b/user/docs/runbook.md
index 9bf1e96..a7198d0 100644
--- a/user/docs/runbook.md
+++ b/user/docs/runbook.md
@@ -84,8 +84,9 @@ Checks:
 
 - Keep `Auth / Session Service` and `User Service` aligned on the current
   `registration_context` shape.
-- During the current rollout, treat authsession-provided
-  `preferred_language="en"` as the active create-path contract.
+- During the current rollout, treat the authsession-provided
+  `preferred_language` derived from public `Accept-Language`, with fallback to
+  `en`, as the active create-path contract.
 - Gateway direct `user.*` self-service routing depends on the internal REST
   routes staying stable.
- Do not roll out billing-driven entitlement mutations assuming another diff --git a/user/go.mod b/user/go.mod index 51cf0e5..2ff646c 100644 --- a/user/go.mod +++ b/user/go.mod @@ -23,7 +23,7 @@ require ( go.opentelemetry.io/otel/sdk v1.43.0 go.opentelemetry.io/otel/sdk/metric v1.43.0 go.opentelemetry.io/otel/trace v1.43.0 - golang.org/x/text v0.35.0 + golang.org/x/text v0.36.0 ) require ( @@ -51,6 +51,7 @@ require ( github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 // indirect github.com/josharian/intern v1.0.0 // indirect github.com/json-iterator/go v1.1.12 // indirect + github.com/klauspost/compress v1.18.5 // indirect github.com/klauspost/cpuid/v2 v2.3.0 // indirect github.com/leodido/go-urn v1.4.0 // indirect github.com/mailru/easyjson v0.7.7 // indirect diff --git a/user/go.sum b/user/go.sum index 9c868ef..bc5e821 100644 --- a/user/go.sum +++ b/user/go.sum @@ -70,8 +70,7 @@ github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8Hm github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= -github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo= -github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ= +github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE= github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y= github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= @@ -198,8 +197,7 @@ golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.42.0 
h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo= golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw= -golang.org/x/text v0.35.0 h1:JOVx6vVDFokkpaq1AEptVzLTpDe9KGpj5tR4/X+ybL8= -golang.org/x/text v0.35.0/go.mod h1:khi/HExzZJ2pGnjenulevKNX1W67CUy0AsXcNubPGCA= +golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg= gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4= gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E= google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 h1:VPWxll4HlMw1Vs/qXtN7BvhZqsS9cdAittCNvVENElA=