chore(backend): finalize backend baseline and frontend handoff contract

freeze the current backend contract for frontend integration

document the stabilized backend surface and handoff expectations
mark the current state as the baseline for further frontend work
greebo
2026-03-20 16:46:24 +03:00
parent 5aa35b1d04
commit 54b36ba76c
8 changed files with 1103 additions and 23 deletions


@@ -0,0 +1,528 @@
# Backend Integration Contract
This document is the frontend handoff contract for the `svg-service` backend. It is written as an integration baseline, not as an internal backend README.
## 1. Base URL and Auth
- Base URL: `http://<host>:9020`
- API prefix: `/api/v1`
- Auth header: `X-API-Key`
All non-`/healthz` routes require an API key.
Auth failure contract:
- missing API key -> `401` with string detail: `Missing API key`
- invalid API key -> `403` with string detail: `Invalid API key`
- valid non-admin key on admin-only route -> `403` with string detail: `Admin role required`
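A frontend client can map these three failures to distinct UI states. A minimal sketch, assuming a helper of this shape on the client side (the function name and return labels are illustrative, not part of the backend contract):

```python
def classify_auth_failure(status: int, detail: str) -> str:
    """Map the auth failure contract above to client-side states.

    Hypothetical helper: the backend only guarantees the status/detail
    pairs; the returned labels are client-side conventions.
    """
    if status == 401 and detail == "Missing API key":
        return "missing_key"
    if status == 403 and detail == "Invalid API key":
        return "invalid_key"
    if status == 403 and detail == "Admin role required":
        return "insufficient_role"
    return "unexpected"
```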
## 2. Roles and Access Boundaries
- `admin`
- full access to protected routes
- required for all `/api/v1/admin/...` routes
- `operator`
- allowed on non-admin protected routes
- denied on admin-only routes
- `viewer`
- allowed on non-admin protected routes
- denied on admin-only routes
Frontend implication:
- admin UI must treat admin routes as optional capabilities gated by role
- frontend must not assume `operator` or `viewer` can call cleanup, audit, backfill, or current-artifact admin routes
## 3. Core Entities
### Upload
Represents one uploaded SVG source and its normalized/sanitized artifacts.
Important fields:
- `upload_id`
- `original_filename`
- `content_type`
- `size_bytes`
- `original_storage_path`
- `sanitized_storage_path`
- `normalized_storage_path`
- `normalized_elements_count`
- `normalized_seats_count`
- `normalized_groups_count`
- `normalized_sectors_count`
### Scheme
Top-level business object created from an upload.
Important fields:
- `scheme_id`
- `source_upload_id`
- `name`
- `status`
- `current_version_number`
- `published_at`
### Scheme Version
Versioned snapshot of the scheme structure and publish state.
Important fields:
- `scheme_version_id`
- `scheme_id`
- `version_number`
- `status`
- `normalized_storage_path`
- `normalized_*_count`
### Sector
Structure entity in a specific `scheme_version`.
Important fields:
- `sector_record_id`
- `sector_id`
- `element_id`
- `name`
Business identity priority:
- use `sector_id` when present
- fallback to `element_id`
- never treat `sector_record_id` as business identity across versions
### Group
Important fields:
- `group_record_id`
- `group_id`
- `element_id`
- `name`
Business identity priority:
- use `group_id` when present
- fallback to `element_id`
- never treat `group_record_id` as business identity across versions
### Seat
Important fields:
- `seat_record_id`
- `seat_id`
- `element_id`
- `sector_id`
- `group_id`
- `row_label`
- `seat_number`
Business identity priority:
- use `seat_id` when present
- fallback to `element_id`
- never treat `seat_record_id` as business identity across versions
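The identity rules for seats, sectors, and groups all follow the same pattern, which can be captured in one small helper. A sketch (the function name is an assumption):

```python
def business_identity(record: dict, business_id_field: str) -> str:
    """Resolve cross-version business identity for a structure record.

    Prefer the business id (seat_id / sector_id / group_id), fall back
    to element_id, and never use *_record_id, which is only row identity
    within one scheme_version.
    """
    return record.get(business_id_field) or record["element_id"]
```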
### Pricing Category
Important fields:
- `pricing_category_id`
- `scheme_id`
- `name`
- `code`
### Price Rule
Important fields:
- `price_rule_id`
- `scheme_id`
- `pricing_category_id`
- `target_type`
- `target_ref`
- `amount`
- `currency`
### Artifact
Artifact registry row for generated backend files.
Important fields:
- `artifact_id`
- `artifact_type`
- `artifact_variant`
- `storage_path`
- `status`
- `meta_json`
Important artifact types currently exercised by regression:
- `sanitized_svg`
- `normalized_json`
- `display_svg`
- `publish_preview`
## 4. Lifecycle State Machine
### Fresh Upload
Flow:
1. `POST /api/v1/schemes/upload`
2. backend creates:
- `upload`
- `scheme`
- initial `scheme_version`
- structure rows
- initial artifacts
Expected initial state:
- `scheme.status = draft`
- `scheme.current_version_number = 1`
- current version status = `draft`
### Current Draft
If current scheme/version is still draft:
- editor works directly against current version
- `draft/ensure` is idempotent
- `draft/ensure` returns `created=false`
### Ensure Draft From Published Current
If current scheme/version is published:
- `POST /api/v1/schemes/{scheme_id}/draft/ensure`
- backend creates a new draft version
- current pointer switches to the new draft
- version number increments
### Publish
Preconditions:
- current scheme is draft
- current version is draft
- publish readiness must be satisfied
Publish path:
1. optional `draft/pricing/snapshot`
2. `GET draft/publish-readiness`
3. optional `GET draft/publish-preview`
4. `POST /api/v1/schemes/{scheme_id}/publish`
Expected result:
- scheme becomes `published`
- current version becomes `published`
### Rollback
Path:
- `POST /api/v1/schemes/{scheme_id}/rollback`
Effect:
- current pointer switches to requested historical `version_number`
- scheme returns to `draft`
- target version becomes current editable draft
### Unpublish
Path:
- `POST /api/v1/schemes/{scheme_id}/unpublish`
Effect:
- current scheme becomes `draft`
- current version becomes `draft`
## 5. Editor Flow
### Entry Point
- `GET /api/v1/schemes/{scheme_id}/editor/context`
Use it first to decide whether:
- current draft can be edited directly
- or a new draft must be created from published current
Important response fields:
- `current_scheme_version_id`
- `current_version_number`
- `scheme_status`
- `scheme_version_status`
- `current_is_draft`
- `create_draft_available`
- `recommended_action`
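The context fields above are enough to drive the entry decision. A minimal sketch of that branch (the returned action labels are client-side assumptions, not backend values):

```python
def editor_entry_action(ctx: dict) -> str:
    """Decide how to enter the editor from editor/context fields."""
    if ctx["current_is_draft"]:
        # Current version is an editable draft: work against it directly.
        return "edit_current_draft"
    if ctx["create_draft_available"]:
        # Current is published: a new draft must be created first.
        return "ensure_draft_from_published"
    return "read_only"
```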
### Draft Read Models
- `POST /api/v1/schemes/{scheme_id}/draft/ensure`
- `GET /api/v1/schemes/{scheme_id}/draft/summary`
- `GET /api/v1/schemes/{scheme_id}/draft/structure`
- `GET /api/v1/schemes/{scheme_id}/draft/validation`
- `GET /api/v1/schemes/{scheme_id}/draft/compare-preview`
Frontend should treat `draft/structure` as the main editable read model.
### Patch Operations
Supported flows:
- single seat patch
- bulk seat patch
- sector create/patch/delete
- group create/patch/delete
- repair references
- remap preview/apply
Frontend rule:
- always send `expected_scheme_version_id` when mutating or reading draft state after editor entry
### Stale Conflict Handling
If the backend returns a stale or draft-editability conflict:
- stop optimistic local mutation flow
- re-read:
- `editor/context`
- `draft/summary`
- `draft/structure`
Do not keep editing against stale cached `scheme_version_id`.
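The recovery rule can be expressed as a small wrapper: on a stale conflict, drop the cached version id, re-read editor state, and retry once. This is a sketch; `StaleDraftError` and the two callables are assumptions standing in for the real client code:

```python
class StaleDraftError(Exception):
    """Raised when the backend reports a stale or draft editability conflict."""


def mutate_with_recovery(mutate, reload_version_id, version_id):
    """Apply a draft mutation; on a stale conflict, re-read state and retry once.

    `mutate` sends the request with expected_scheme_version_id;
    `reload_version_id` re-reads editor/context and the draft read models
    and returns the fresh scheme_version_id.
    """
    try:
        return mutate(version_id)
    except StaleDraftError:
        fresh_id = reload_version_id()  # never keep editing the stale cached id
        return mutate(fresh_id)
```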
## 6. Pricing Flow
### Categories
- `GET /api/v1/schemes/{scheme_id}/pricing`
- `POST /api/v1/schemes/{scheme_id}/pricing/categories`
- `PUT /api/v1/schemes/{scheme_id}/pricing/categories/{pricing_category_id}`
- `DELETE /api/v1/schemes/{scheme_id}/pricing/categories/{pricing_category_id}`
### Rules
- `POST /api/v1/schemes/{scheme_id}/pricing/rules`
- `PUT /api/v1/schemes/{scheme_id}/pricing/rules/{price_rule_id}`
- `DELETE /api/v1/schemes/{scheme_id}/pricing/rules/{price_rule_id}`
### Read Models
- `GET /api/v1/schemes/{scheme_id}/pricing`
- `GET /api/v1/schemes/{scheme_id}/pricing/coverage`
- `GET /api/v1/schemes/{scheme_id}/pricing/unpriced-seats`
- `GET /api/v1/schemes/{scheme_id}/pricing/explain/{seat_id}`
- `GET /api/v1/schemes/{scheme_id}/pricing/rules/diagnostics`
- `GET /api/v1/schemes/{scheme_id}/current/seats/{seat_id}/price`
- `GET /api/v1/schemes/{scheme_id}/test/seats/{seat_id}`
Frontend rule:
- empty pricing on a fresh upload is valid
- do not treat `categories=[]` and `rules=[]` as backend failure
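A client-side check should therefore distinguish "not configured yet" from "failed". A sketch (state labels are assumptions):

```python
def pricing_state(pricing: dict) -> str:
    """Classify a pricing read model; empty lists are a valid fresh state."""
    categories = pricing.get("categories", [])
    rules = pricing.get("rules", [])
    if not categories and not rules:
        return "empty_valid"      # fresh upload, nothing configured yet
    if categories and not rules:
        return "categories_only"  # partially configured, not an error
    return "configured"
```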
## 7. Publish Flow
Main endpoints:
- `POST /api/v1/schemes/{scheme_id}/draft/pricing/snapshot`
- `GET /api/v1/schemes/{scheme_id}/draft/publish-readiness`
- `GET /api/v1/schemes/{scheme_id}/draft/publish-preview`
- `POST /api/v1/schemes/{scheme_id}/publish`
Frontend sequencing rule:
1. ensure draft
2. mutate if needed
3. create/refresh pricing
4. build pricing snapshot
5. read publish readiness
6. read publish preview if UI needs preview surface
7. publish
## 8. Admin/Ops Flow
Admin-only endpoints:
- `GET /api/v1/admin/schemes/{scheme_id}/current/artifacts`
- `GET /api/v1/admin/schemes/{scheme_id}/current/validation`
- `POST /api/v1/admin/schemes/{scheme_id}/current/display/regenerate`
- `POST /api/v1/admin/display/backfill`
- `GET /api/v1/admin/artifacts/publish-preview/audit`
- `POST /api/v1/admin/artifacts/publish-preview/cleanup`
- `GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview`
- `POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup`
Healthy publish-preview audit contract:
- `orphan_files_count = 0`
- `missing_files_for_db_rows_count = 0`
- `db_rows_count == disk_files_count`
Frontend implication:
- admin tools must not be shown as generally available functionality
- admin cleanup/destructive flows must be role-gated on the client and still handle backend `403`
## 9. Typed Error Catalog
### Auth
- `401` string detail: `Missing API key`
- `403` string detail: `Invalid API key`
- `403` string detail: `Admin role required`
### Lifecycle / Draft / Publish
- `stale_draft_version`
- `stale_current_version`
- `current_version_inconsistent`
- `draft_not_editable`
- `publish_not_ready`
### Editor Uniqueness / References
- `editor_uniqueness_error`
- `editor_reference_error`
- `duplicate_seat_id`
- `duplicate_seat_id_in_payload`
- `duplicate_sector_id`
- `duplicate_group_id`
- `duplicate_sector_element_id`
- `duplicate_group_element_id`
- `unknown_sector_id`
- `unknown_group_id`
- `unknown_sector_ids`
- `unknown_group_ids`
- `unknown_target_sector_id`
- `unknown_target_group_id`
- `business_identifier_nullification_forbidden`
### Pricing / Remap / Test
- `invalid_amount`
- `remap_filter_required`
- `test_preview_failed`
### Validation Report Codes
These appear inside validation report payloads rather than as top-level HTTP conflict codes:
- `duplicate_seat_ids`
- `missing_seat_contract`
- `seats_without_sector_or_group`
- `seats_without_price`
Frontend rule:
- do not parse only HTTP status
- always inspect structured `detail.code` when `detail` is an object
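Since `detail` may be either a plain string (auth, some 404s) or a structured object, error handling should branch on its type rather than on status alone. A sketch:

```python
def typed_error_code(detail):
    """Return the structured detail.code when present, else None.

    Plain-string details (e.g. auth failures) carry no typed code.
    """
    if isinstance(detail, dict):
        return detail.get("code")
    return None
```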
## 10. Frontend Obligations
- always handle auth failures `401` and `403`
- always handle stale/conflict responses on draft, publish, and lifecycle operations
- never treat `*_record_id` as stable cross-version business identity
- always prefer business ids:
- seat -> `seat_id`, fallback `element_id`
- sector -> `sector_id`, fallback `element_id`
- group -> `group_id`, fallback `element_id`
- re-read current/draft state after:
- any `409`
- publish
- rollback
- unpublish
- `draft/ensure` returning a newly created draft
- do not assume current version remains stable across concurrent operator sessions
- do not assume publish-preview artifacts or display artifacts are frontend-owned resources
## 11. Non-Persistent Assumptions Frontend Must Avoid
The frontend must not assume that these remain stable forever:
- `scheme_version_id`
- `seat_record_id`
- `sector_record_id`
- `group_record_id`
- artifact `storage_path`
- publish-preview cache artifacts
These are safe to treat as business-stable:
- `scheme_id`
- `version_number` within one scheme
- `seat_id` when present
- `sector_id` when present
- `group_id` when present
## 12. Known Limitations / Deferred Tech Debt
- some lifecycle negative contracts still return mixed styles:
- typed object conflicts for `409`
- plain string details for some `404` and auth cases
- validation warnings and error code families are not yet unified into one single global error envelope
- admin/ops routes are backend-internal tools, not end-user product APIs
- corruption remediation smoke exists only for `publish_preview`, not for every artifact type
## 13. Regression Baseline Frontend Can Rely On
The frontend can rely on the following regression-backed flows:
- fresh upload on clean DB
- current/draft/editor read flow
- editor mutations and stale draft protection
- pricing setup and publish flow
- version lifecycle:
- publish
- ensure draft from published current
- rollback
- unpublish
- admin ops:
- audit
- cleanup
- destructive pricing cleanup for safe fixture categories
- full admin permission matrix on implemented admin endpoints
- controlled `publish_preview` corruption detection and remediation
- negative upload validation
- negative auth matrix
- negative lifecycle matrix
## 14. Recommended Frontend Integration Sequence
For normal editor work:
1. authenticate
2. upload or pick `scheme_id`
3. read `editor/context`
4. call `draft/ensure` if needed
5. read `draft/structure`
6. mutate using current `scheme_version_id`
7. on `409`, reload editor state before retry
8. configure pricing if needed
9. create pricing snapshot
10. read publish readiness / preview
11. publish
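The editor sequence above can be sketched against a hypothetical client object. Method names, the `ready` field, and the response shapes are assumptions for illustration; conflict handling is reduced to the reload-and-retry rule from section 5:

```python
def editor_publish_sequence(api):
    """Walk the recommended editor sequence with a hypothetical client."""
    ctx = api.editor_context()                       # step 3
    version_id = ctx["current_scheme_version_id"]
    if not ctx["current_is_draft"]:                  # step 4, only if needed
        version_id = api.ensure_draft(expected=version_id)["scheme_version_id"]
    api.draft_structure(version_id)                  # step 5
    api.pricing_snapshot(version_id)                 # step 9
    readiness = api.publish_readiness(version_id)    # step 10
    if readiness["ready"]:                           # 'ready' field is an assumption
        api.publish(version_id)                      # step 11
    return readiness["ready"]
```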
For admin UI:
1. verify admin role in client auth state
2. call admin endpoints
3. still handle backend `403`
4. treat cleanup and remediation as explicit operator actions, not background automation


@@ -17,18 +17,22 @@ export API_URL="http://127.0.0.1:9020"
export API_KEY="admin-local-dev-key"
export FIXTURE_SVG_PATH="/home/adminko/svg-service/sample-contract.svg"
## Active regression contour
Primary operator regressions:
- `backend/scripts/smoke_core.sh`
- `backend/scripts/smoke_pricing_publish.sh`
- `backend/scripts/smoke_version_lifecycle.sh`
- `backend/scripts/smoke_lifecycle_negative.sh`
- `backend/scripts/smoke_admin_ops.sh`
- `backend/scripts/smoke_auth_negative.sh`
- `backend/scripts/smoke_authz_admin_all.sh`
- `backend/scripts/smoke_artifact_corruption.sh`
- `backend/scripts/smoke_upload_negative.sh`
- `backend/scripts/smoke_regression.sh`
Only this set is part of the active backend regression contour.
The scripts are expected to fail fast on any contract break or unexpected 5xx.
@@ -37,11 +41,21 @@ The scripts are expected to fail fast on any contract break or unexpected 5xx.
- first runs `smoke_core.sh`
- then runs `smoke_pricing_publish.sh`
- then runs `smoke_version_lifecycle.sh`
- then runs `smoke_lifecycle_negative.sh`
- then runs `smoke_admin_ops.sh`
- then runs `smoke_authz_admin_all.sh`
- then runs `smoke_auth_negative.sh`
- then runs `smoke_artifact_corruption.sh`
- then runs `smoke_upload_negative.sh`
- returns non-zero if any scenario fails
## Standalone/manual scripts
- `backend/scripts/editor_mutation_regression.sh`
- `backend/scripts/cleanup_test_pricing_data.sh`
These scripts are intentionally not called by `smoke_regression.sh`.
## Scenario split
### Core smoke on clean DB
@@ -92,6 +106,21 @@ Important:
- it verifies the rolled-back current structure matches the target version semantics, not the later mutated draft
- it checks audit trail for `scheme.published`, `scheme.version.created`, `scheme.rolled_back`, and `scheme.unpublished`
### Lifecycle negative smoke
Use:
- `backend/scripts/smoke_lifecycle_negative.sh`
This scenario uses fresh disposable scheme data to verify negative lifecycle contracts without leaving the database in a broken state.
Important:
- it checks rollback to a nonexistent version
- it checks stale current-version guards on `draft/ensure`
- it checks stale expected-version guards on `publish`
- it creates a temporary `current_version_inconsistent` pointer only inside the scenario and restores it before exit
### Admin/ops smoke
Use:
@@ -113,9 +142,9 @@ Important:
Use:
- `backend/scripts/smoke_authz_admin_all.sh`
This scenario uploads a fresh SVG, prepares its own cleanup fixture data, and then checks permission boundaries for admin/operator/viewer on all currently implemented admin endpoints used by the regression contour.
Important:
@@ -124,6 +153,38 @@ Important:
- the scenario does not rely on historical scheme ids or dirty pricing state
- destructive pricing cleanup execution is validated with fresh self-created fixture categories only
### Artifact corruption smoke
Use:
- `backend/scripts/smoke_artifact_corruption.sh`
This scenario creates fresh publish-preview artifacts and then simulates two controlled corruption cases only on the artifacts created inside the scenario.
Important:
- case A removes a preview file while leaving its DB row in place
- case B removes a preview DB row while leaving its file on disk
- audit must detect both inconsistencies correctly
- cleanup dry-run must stay readable and non-destructive
- cleanup execute must remediate the introduced inconsistency
- the scenario does not touch historical schemes or unrelated artifact rows/files
### Auth negative smoke
Use:
- `backend/scripts/smoke_auth_negative.sh`
This scenario checks the negative auth matrix on a representative route set.
Important:
- missing API key must return `401`
- invalid API key must return `403`
- valid non-admin key must return `403` only on admin-only endpoints
- the route set includes protected, editor, pricing, admin, and admin-cleanup endpoints
### Negative upload smoke
Use:
@@ -239,7 +300,26 @@ Validate:
- rolled-back current structure matches version 1 semantics after version 2 mutation
- lifecycle audit events are present and JSON-serializable
## 5. Lifecycle negative smoke coverage
`smoke_lifecycle_negative.sh` checks:
- POST /api/v1/schemes/upload -> 200
- GET current on the fresh scheme -> 200
- POST rollback with nonexistent `target_version_number` -> controlled 404
- POST draft/ensure with stale `expected_current_scheme_version_id` -> typed 409
- POST publish with stale `expected_scheme_version_id` -> typed 409
- GET current after temporary `current_version_inconsistent` pointer corruption -> typed 409
- GET current again after scenario restoration -> 200
Validate:
- rollback to missing version stays controlled and non-500
- ensure-draft stale current pointer returns typed `stale_current_version`
- publish stale expected version stays controlled and non-500
- temporary pointer inconsistency returns typed `current_version_inconsistent`
- the temporary inconsistency is restored before the scenario exits
## 6. Admin/ops smoke coverage
`smoke_admin_ops.sh` checks:
@@ -275,14 +355,22 @@ Validate:
- protected pricing category and its rule remain after destructive cleanup
- repeated cleanup state remains stable after destructive cleanup
## 7. Admin authz smoke coverage
`smoke_authz_admin_all.sh` checks:
- POST /api/v1/schemes/upload -> 200
- POST draft ensure on the fresh scheme -> 200
- POST pricing fixture categories/rule for cleanup authz checks -> 200
- POST draft/publish-preview refresh fixture -> 200
- GET /api/v1/admin/schemes/{scheme_id}/current/artifacts as admin -> 200
- GET /api/v1/admin/schemes/{scheme_id}/current/artifacts as operator/viewer -> 403
- GET /api/v1/admin/schemes/{scheme_id}/current/validation as admin -> 200
- GET /api/v1/admin/schemes/{scheme_id}/current/validation as operator/viewer -> 403
- POST /api/v1/admin/schemes/{scheme_id}/current/display/regenerate as admin -> 200
- POST /api/v1/admin/schemes/{scheme_id}/current/display/regenerate as operator/viewer -> 403
- POST /api/v1/admin/display/backfill as admin -> 200
- POST /api/v1/admin/display/backfill as operator/viewer -> 403
- GET /api/v1/admin/artifacts/publish-preview/audit as admin -> 200
- GET /api/v1/admin/artifacts/publish-preview/audit as operator/viewer -> 403
- POST /api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true as admin -> 200
@@ -300,7 +388,55 @@ Validate:
- admin endpoints stay available to admin
- operator and viewer are denied without 500
- destructive cleanup execution remains constrained to self-created safe fixture data
## 8. Auth negative smoke coverage
`smoke_auth_negative.sh` checks:
- GET /api/v1/manifest without API key -> 401
- GET /api/v1/manifest with invalid API key -> 403
- GET /api/v1/schemes/{scheme_id}/editor/context without API key -> 401
- GET /api/v1/schemes/{scheme_id}/editor/context with invalid API key -> 403
- GET /api/v1/schemes/{scheme_id}/pricing without API key -> 401
- GET /api/v1/schemes/{scheme_id}/pricing with invalid API key -> 403
- GET /api/v1/admin/artifacts/publish-preview/audit without API key -> 401
- GET /api/v1/admin/artifacts/publish-preview/audit with invalid API key -> 403
- GET /api/v1/admin/artifacts/publish-preview/audit with valid viewer key -> 403
- GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview without API key -> 401
- GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview with invalid API key -> 403
- GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview with valid viewer key -> 403
Validate:
- missing key contract is consistently `401`
- invalid key contract is consistently `403`
- valid non-admin key is denied only on admin-only endpoints
## 9. Artifact corruption smoke coverage
`smoke_artifact_corruption.sh` checks:
- POST /api/v1/schemes/upload -> 200
- POST draft ensure on the fresh scheme -> 200
- GET initial /api/v1/admin/artifacts/publish-preview/audit -> healthy 200
- case A: manually delete fresh preview file while keeping DB row
- GET audit after case A -> reports exactly one missing file for DB row
- POST cleanup dry_run=true after case A -> 200
- POST cleanup dry_run=false after case A -> 200 and deletes the broken DB row
- case B: manually delete fresh preview DB row while keeping file
- GET audit after case B -> reports exactly one orphan file
- POST cleanup dry_run=true after case B -> 200
- POST cleanup dry_run=false after case B -> 200 and deletes the orphan file
- final audit -> healthy 200
Validate:
- audit sees DB-row-without-file and file-without-DB-row separately and correctly
- dry-run remains readable and non-destructive in both corruption cases
- execute cleanup remediates only the inconsistency introduced in the scenario
- final audit is healthy again: `orphan_files_count=0`, `missing_files_for_db_rows_count=0`
## 10. Negative upload smoke coverage
`smoke_upload_negative.sh` checks:
@@ -315,7 +451,7 @@ Validate:
- configured max file size is read from manifest, not hardcoded in the script
- no negative upload case returns 500
## 11. Legacy endpoint families
The sections below remain the API baseline by area, but regression execution is now split between clean-DB core smoke and pricing/publish smoke.


@@ -0,0 +1,173 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT
# shellcheck source=backend/scripts/smoke_common.sh
source "${SCRIPT_DIR}/smoke_common.sh"
set -a
source "${REPO_ROOT}/.env"
set +a
wait_for_health
create_fresh_scheme_from_upload "smoke-artifact-corruption"
request "scheme_current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
CURRENT_VERSION_ID="$(json_get "${TMP_DIR}/scheme_current.body" "scheme_version_id")"
echo "CURRENT_VERSION_ID=${CURRENT_VERSION_ID}"
request "ensure_draft" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/ensure?expected_current_scheme_version_id=${CURRENT_VERSION_ID}" \
"200"
DRAFT_VERSION_ID="$(json_get "${TMP_DIR}/ensure_draft.body" "scheme_version_id")"
echo "DRAFT_VERSION_ID=${DRAFT_VERSION_ID}"
request "initial_publish_preview_audit" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"200"
assert_json_int_eq "${TMP_DIR}/initial_publish_preview_audit.body" "orphan_files_count" "0"
assert_json_int_eq "${TMP_DIR}/initial_publish_preview_audit.body" "missing_files_for_db_rows_count" "0"
request "publish_preview_refresh_case_a" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/publish-preview?refresh=true&expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200"
request "admin_current_artifacts_case_a" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/artifacts" \
"200"
read -r CASE_A_ARTIFACT_ID CASE_A_STORAGE_PATH <<EOF
$(python3 - "${TMP_DIR}/admin_current_artifacts_case_a.body" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
items = [item for item in payload.get("items", []) if item.get("artifact_type") == "publish_preview"]
if not items:
raise SystemExit("No publish_preview artifact found for case A")
item = items[-1]
print(item["artifact_id"], item["storage_path"])
PY
)
EOF
echo "CASE_A_ARTIFACT_ID=${CASE_A_ARTIFACT_ID}"
echo "CASE_A_STORAGE_PATH=${CASE_A_STORAGE_PATH}"
docker compose exec -T svg-service python - "${CASE_A_STORAGE_PATH}" <<'PY'
from pathlib import Path
import sys
path = Path(sys.argv[1])
if not path.exists():
raise SystemExit(f"Case A preview file missing before manual removal: {path}")
path.unlink()
PY
echo "[OK] case A manually removed preview file while DB row remains"
request "audit_case_a_broken" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"200"
assert_json_int_eq "${TMP_DIR}/audit_case_a_broken.body" "orphan_files_count" "0"
assert_json_int_eq "${TMP_DIR}/audit_case_a_broken.body" "missing_files_for_db_rows_count" "1"
assert_file_contains "${TMP_DIR}/audit_case_a_broken.body" "\"artifact_id\":\"${CASE_A_ARTIFACT_ID}\""
request "cleanup_case_a_dry_run" "POST" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true" \
"200"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_dry_run.body" "orphan_files_count" "0"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_dry_run.body" "missing_files_for_db_rows_count" "1"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_dry_run.body" "deleted_files_count" "0"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_dry_run.body" "deleted_db_rows_count" "0"
request "cleanup_case_a_execute" "POST" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/cleanup?dry_run=false" \
"200"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_execute.body" "orphan_files_count" "0"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_execute.body" "missing_files_for_db_rows_count" "1"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_execute.body" "deleted_files_count" "0"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_execute.body" "deleted_db_rows_count" "1"
assert_file_contains "${TMP_DIR}/cleanup_case_a_execute.body" "\"${CASE_A_ARTIFACT_ID}\""
request "audit_case_a_healthy" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"200"
assert_json_int_eq "${TMP_DIR}/audit_case_a_healthy.body" "orphan_files_count" "0"
assert_json_int_eq "${TMP_DIR}/audit_case_a_healthy.body" "missing_files_for_db_rows_count" "0"
request "publish_preview_refresh_case_b" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/publish-preview?refresh=true&expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200"
request "admin_current_artifacts_case_b" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/artifacts" \
"200"
read -r CASE_B_ARTIFACT_ID CASE_B_STORAGE_PATH <<EOF
$(python3 - "${TMP_DIR}/admin_current_artifacts_case_b.body" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
items = [item for item in payload.get("items", []) if item.get("artifact_type") == "publish_preview"]
if not items:
raise SystemExit("No publish_preview artifact found for case B")
item = items[-1]
print(item["artifact_id"], item["storage_path"])
PY
)
EOF
echo "CASE_B_ARTIFACT_ID=${CASE_B_ARTIFACT_ID}"
echo "CASE_B_STORAGE_PATH=${CASE_B_STORAGE_PATH}"
CASE_B_DELETE_COUNT="$(docker compose exec -T postgres psql -U "${POSTGRES_USER}" -d "${POSTGRES_DB}" -Atc "with deleted as (delete from scheme_artifacts where artifact_id='${CASE_B_ARTIFACT_ID}' and artifact_type='publish_preview' and scheme_id='${SCHEME_ID}' returning 1) select count(*) from deleted;")"
if [[ "${CASE_B_DELETE_COUNT}" != "1" ]]; then
fail "Case B expected to delete exactly one publish_preview DB row, got ${CASE_B_DELETE_COUNT}"
fi
echo "[OK] case B manually removed publish_preview DB row while file remains"
request "audit_case_b_broken" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"200"
assert_json_int_eq "${TMP_DIR}/audit_case_b_broken.body" "orphan_files_count" "1"
assert_json_int_eq "${TMP_DIR}/audit_case_b_broken.body" "missing_files_for_db_rows_count" "0"
assert_file_contains "${TMP_DIR}/audit_case_b_broken.body" "\"${CASE_B_STORAGE_PATH}\""
request "cleanup_case_b_dry_run" "POST" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true" \
"200"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_dry_run.body" "orphan_files_count" "1"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_dry_run.body" "missing_files_for_db_rows_count" "0"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_dry_run.body" "deleted_files_count" "0"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_dry_run.body" "deleted_db_rows_count" "0"
request "cleanup_case_b_execute" "POST" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/cleanup?dry_run=false" \
"200"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_execute.body" "orphan_files_count" "1"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_execute.body" "missing_files_for_db_rows_count" "0"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_execute.body" "deleted_files_count" "1"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_execute.body" "deleted_db_rows_count" "0"
assert_file_contains "${TMP_DIR}/cleanup_case_b_execute.body" "\"${CASE_B_STORAGE_PATH}\""
request "final_publish_preview_audit" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"200"
assert_json_int_eq "${TMP_DIR}/final_publish_preview_audit.body" "orphan_files_count" "0"
assert_json_int_eq "${TMP_DIR}/final_publish_preview_audit.body" "missing_files_for_db_rows_count" "0"
FINAL_DB_ROWS_COUNT="$(json_get "${TMP_DIR}/final_publish_preview_audit.body" "db_rows_count")"
FINAL_DISK_FILES_COUNT="$(json_get "${TMP_DIR}/final_publish_preview_audit.body" "disk_files_count")"
if [[ "${FINAL_DB_ROWS_COUNT}" != "${FINAL_DISK_FILES_COUNT}" ]]; then
fail "Final publish-preview audit mismatch after remediation: db_rows_count=${FINAL_DB_ROWS_COUNT}, disk_files_count=${FINAL_DISK_FILES_COUNT}"
fi
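# Illustrative sketch only (helper name assumed, not part of this script): the
# same db/disk parity check can be factored into a reusable predicate instead
# of comparing the two json_get results inline:
audit_counts_match() {
  python3 - "$1" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
sys.exit(0 if payload.get("db_rows_count") == payload.get("disk_files_count") else 1)
PY
}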
echo
echo "===== done ====="
echo "[OK] smoke artifact corruption completed successfully"
echo "FRESH_SCHEME_ID=${SCHEME_ID}"
echo "CASE_A_ARTIFACT_ID=${CASE_A_ARTIFACT_ID}"
echo "CASE_B_ARTIFACT_ID=${CASE_B_ARTIFACT_ID}"

View File

@@ -0,0 +1,78 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT
# shellcheck source=backend/scripts/smoke_common.sh
source "${SCRIPT_DIR}/smoke_common.sh"
INVALID_API_KEY="${INVALID_API_KEY:-definitely-invalid-api-key}"
VIEWER_API_KEY="${VIEWER_API_KEY:-viewer-local-dev-key}"
wait_for_health
create_fresh_scheme_from_upload "smoke-auth-negative"
request "scheme_current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
CURRENT_VERSION_ID="$(json_get "${TMP_DIR}/scheme_current.body" "scheme_version_id")"
echo "CURRENT_VERSION_ID=${CURRENT_VERSION_ID}"
request_without_api_key "manifest_missing_key" "GET" \
"${API_URL}/api/v1/manifest" \
"401"
request_with_api_key "${INVALID_API_KEY}" "manifest_invalid_key" "GET" \
"${API_URL}/api/v1/manifest" \
"403"
assert_file_contains "${TMP_DIR}/manifest_missing_key.body" "Missing API key"
assert_file_contains "${TMP_DIR}/manifest_invalid_key.body" "Invalid API key"
request_without_api_key "editor_context_missing_key" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/editor/context" \
"401"
request_with_api_key "${INVALID_API_KEY}" "editor_context_invalid_key" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/editor/context" \
"403"
assert_file_contains "${TMP_DIR}/editor_context_missing_key.body" "Missing API key"
assert_file_contains "${TMP_DIR}/editor_context_invalid_key.body" "Invalid API key"
request_without_api_key "pricing_bundle_missing_key" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing" \
"401"
request_with_api_key "${INVALID_API_KEY}" "pricing_bundle_invalid_key" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing" \
"403"
assert_file_contains "${TMP_DIR}/pricing_bundle_missing_key.body" "Missing API key"
assert_file_contains "${TMP_DIR}/pricing_bundle_invalid_key.body" "Invalid API key"
request_without_api_key "admin_audit_missing_key" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"401"
request_with_api_key "${INVALID_API_KEY}" "admin_audit_invalid_key" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"403"
request_with_api_key "${VIEWER_API_KEY}" "admin_audit_wrong_role" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"403"
assert_file_contains "${TMP_DIR}/admin_audit_missing_key.body" "Missing API key"
assert_file_contains "${TMP_DIR}/admin_audit_invalid_key.body" "Invalid API key"
assert_file_contains "${TMP_DIR}/admin_audit_wrong_role.body" "Admin role required"
request_without_api_key "admin_cleanup_preview_missing_key" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup-preview" \
"401"
request_with_api_key "${INVALID_API_KEY}" "admin_cleanup_preview_invalid_key" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup-preview" \
"403"
request_with_api_key "${VIEWER_API_KEY}" "admin_cleanup_preview_wrong_role" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup-preview" \
"403"
assert_file_contains "${TMP_DIR}/admin_cleanup_preview_missing_key.body" "Missing API key"
assert_file_contains "${TMP_DIR}/admin_cleanup_preview_invalid_key.body" "Invalid API key"
assert_file_contains "${TMP_DIR}/admin_cleanup_preview_wrong_role.body" "Admin role required"
echo
echo "===== done ====="
echo "[OK] smoke auth negative completed successfully"
echo "FRESH_SCHEME_ID=${SCHEME_ID}"

View File

@@ -14,7 +14,7 @@ VIEWER_API_KEY="${VIEWER_API_KEY:-viewer-local-dev-key}"
wait_for_health
create_fresh_scheme_from_upload "smoke-authz-admin-all"
request "scheme_current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
CURRENT_VERSION_ID="$(json_get "${TMP_DIR}/scheme_current.body" "scheme_version_id")"
@@ -38,17 +38,17 @@ from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
seat = next((item for item in payload.get("seats", []) if item.get("seat_id")), None)
if seat is None:
    raise SystemExit("No seat with seat_id found for authz admin all smoke")
print(seat["seat_id"])
PY
)"
echo "TARGET_SEAT_ID=${TARGET_SEAT_ID}"
STAMP="$(date +%s)-$$"
CLEANUP_PREFIX="AUTHZ_ADMINALL_${STAMP}_"
DELETE_CATEGORY_NAME="authz-adminall-delete-${STAMP}"
DELETE_CATEGORY_CODE="${CLEANUP_PREFIX}DELETE"
KEEP_CATEGORY_NAME="authz-adminall-keep-${STAMP}"
KEEP_CATEGORY_CODE="${CLEANUP_PREFIX}KEEP"
request "create_delete_category" "POST" \
@@ -68,7 +68,7 @@ echo "KEEP_CATEGORY_ID=${KEEP_CATEGORY_ID}"
request "create_keep_category_rule" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/rules?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200" \
"{\"pricing_category_id\":\"${KEEP_CATEGORY_ID}\",\"target_type\":\"seat\",\"target_ref\":\"${TARGET_SEAT_ID}\",\"amount\":\"777.00\",\"currency\":\"RUB\"}"
KEEP_RULE_ID="$(json_get "${TMP_DIR}/create_keep_category_rule.body" "price_rule_id")"
echo "KEEP_RULE_ID=${KEEP_RULE_ID}"
@@ -79,6 +79,42 @@ request "publish_preview_refresh" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/publish-preview?refresh=true&expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200"
request_with_api_key "${ADMIN_API_KEY}" "admin_current_artifacts" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/artifacts" "200"
request_with_api_key "${OPERATOR_API_KEY}" "operator_current_artifacts" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/artifacts" "403"
request_with_api_key "${VIEWER_API_KEY}" "viewer_current_artifacts" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/artifacts" "403"
assert_file_contains "${TMP_DIR}/operator_current_artifacts.body" "Admin role required"
assert_file_contains "${TMP_DIR}/viewer_current_artifacts.body" "Admin role required"
request_with_api_key "${ADMIN_API_KEY}" "admin_current_validation" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/validation" "200"
request_with_api_key "${OPERATOR_API_KEY}" "operator_current_validation" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/validation" "403"
request_with_api_key "${VIEWER_API_KEY}" "viewer_current_validation" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/validation" "403"
assert_file_contains "${TMP_DIR}/operator_current_validation.body" "Admin role required"
assert_file_contains "${TMP_DIR}/viewer_current_validation.body" "Admin role required"
request_with_api_key "${ADMIN_API_KEY}" "admin_display_regenerate" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/display/regenerate?mode=passthrough" "200"
request_with_api_key "${OPERATOR_API_KEY}" "operator_display_regenerate" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/display/regenerate?mode=passthrough" "403"
request_with_api_key "${VIEWER_API_KEY}" "viewer_display_regenerate" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/display/regenerate?mode=passthrough" "403"
assert_file_contains "${TMP_DIR}/operator_display_regenerate.body" "Admin role required"
assert_file_contains "${TMP_DIR}/viewer_display_regenerate.body" "Admin role required"
request_with_api_key "${ADMIN_API_KEY}" "admin_display_backfill" "POST" \
"${API_URL}/api/v1/admin/display/backfill?mode=passthrough&limit=1&only_missing=true" "200"
request_with_api_key "${OPERATOR_API_KEY}" "operator_display_backfill" "POST" \
"${API_URL}/api/v1/admin/display/backfill?mode=passthrough&limit=1&only_missing=true" "403"
request_with_api_key "${VIEWER_API_KEY}" "viewer_display_backfill" "POST" \
"${API_URL}/api/v1/admin/display/backfill?mode=passthrough&limit=1&only_missing=true" "403"
assert_file_contains "${TMP_DIR}/operator_display_backfill.body" "Admin role required"
assert_file_contains "${TMP_DIR}/viewer_display_backfill.body" "Admin role required"
request_with_api_key "${ADMIN_API_KEY}" "admin_publish_preview_audit" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" "200"
request_with_api_key "${OPERATOR_API_KEY}" "operator_publish_preview_audit" "GET" \
@@ -152,15 +188,15 @@ category_ids = {item["pricing_category_id"] for item in payload.get("categories"
rule_ids = {item["price_rule_id"] for item in payload.get("rules", [])}
if delete_category_id in category_ids:
    raise SystemExit("Authz admin-all cleanup left deletable category behind")
if keep_category_id not in category_ids:
    raise SystemExit("Authz admin-all cleanup removed protected category")
if keep_rule_id not in rule_ids:
    raise SystemExit("Authz admin-all cleanup removed protected rule")
PY
echo "[OK] admin cleanup execute remained destructive only for safe fixture category"
echo
echo "===== done ====="
echo "[OK] smoke authz admin all completed successfully"
echo "FRESH_SCHEME_ID=${SCHEME_ID}"

View File

@@ -153,6 +153,55 @@ PY
fi
}
request_without_api_key() {
local name="$1"
local method="$2"
local url="$3"
local expected_status="$4"
local body="${5:-}"
local out_file="${TMP_DIR}/${name}.body"
local status_file="${TMP_DIR}/${name}.status"
echo
echo "===== ${name} ====="
if [[ -n "${body}" ]]; then
curl -sS \
-X "${method}" \
-H "Content-Type: application/json" \
-o "${out_file}" \
-w "%{http_code}" \
"${url}" \
--data "${body}" > "${status_file}"
else
curl -sS \
-X "${method}" \
-o "${out_file}" \
-w "%{http_code}" \
"${url}" > "${status_file}"
fi
local actual_status
actual_status="$(python3 - "$status_file" <<'PY'
from pathlib import Path
import sys
print(Path(sys.argv[1]).read_text(encoding="utf-8").strip())
PY
)"
echo "[${method}] ${url} -> ${actual_status}"
python3 - "$out_file" <<'PY'
from pathlib import Path
import sys
print(Path(sys.argv[1]).read_text(encoding="utf-8"))
PY
echo
if [[ "${actual_status}" != "${expected_status}" ]]; then
fail "Unexpected HTTP status for ${name}: expected ${expected_status}, got ${actual_status}"
fi
}
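# Hypothetical convenience wrapper (illustration only, not used by the smoke
# scripts): the expected-vs-actual status comparison in the request helpers can
# also stand alone for ad-hoc probing of a single endpoint:
assert_status_equals() {
  local name="$1"
  local expected="$2"
  local actual="$3"
  if [[ "${actual}" != "${expected}" ]]; then
    echo "Unexpected HTTP status for ${name}: expected ${expected}, got ${actual}" >&2
    return 1
  fi
}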
upload_svg() {
local name="$1"
local upload_filename="$2"

View File

@@ -0,0 +1,68 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT
# shellcheck source=backend/scripts/smoke_common.sh
source "${SCRIPT_DIR}/smoke_common.sh"
set -a
source "${REPO_ROOT}/.env"
set +a
wait_for_health
create_fresh_scheme_from_upload "smoke-lifecycle-negative"
request "scheme_current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
CURRENT_VERSION_ID="$(json_get "${TMP_DIR}/scheme_current.body" "scheme_version_id")"
echo "CURRENT_VERSION_ID=${CURRENT_VERSION_ID}"
request "rollback_nonexistent_version" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/rollback" \
"404" \
"{\"target_version_number\":999}"
assert_file_contains "${TMP_DIR}/rollback_nonexistent_version.body" "Target scheme version not found"
request "ensure_draft_stale_current_version" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/ensure?expected_current_scheme_version_id=deadbeefdeadbeefdeadbeefdeadbeef" \
"409"
assert_json_eq "${TMP_DIR}/ensure_draft_stale_current_version.body" "detail.code" "stale_current_version"
request "publish_stale_expected_version" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/publish?expected_scheme_version_id=deadbeefdeadbeefdeadbeefdeadbeef" \
"409"
assert_json_eq "${TMP_DIR}/publish_stale_expected_version.body" "detail.code" "publish_not_ready"
assert_file_contains "${TMP_DIR}/publish_stale_expected_version.body" "\"actual_scheme_version_id\":\"${CURRENT_VERSION_ID}\""
INCONSISTENT_VERSION_NUMBER="999"
UPDATED_VERSION_NUMBER="$(docker compose exec -T postgres psql -U "${POSTGRES_USER}" -d "${POSTGRES_DB}" -Atc "update schemes set current_version_number=${INCONSISTENT_VERSION_NUMBER} where scheme_id='${SCHEME_ID}' and current_version_number=1 returning current_version_number;" | python3 -c 'import sys; lines=[line.strip() for line in sys.stdin.read().splitlines() if line.strip()]; print(lines[0] if lines else "")')"
if [[ "${UPDATED_VERSION_NUMBER}" != "${INCONSISTENT_VERSION_NUMBER}" ]]; then
fail "Failed to introduce temporary current_version_inconsistent state for ${SCHEME_ID}"
fi
echo "[OK] introduced temporary current_version_inconsistent state for ${SCHEME_ID}"
restore_current_version_pointer() {
docker compose exec -T postgres psql -U "${POSTGRES_USER}" -d "${POSTGRES_DB}" -Atc "update schemes set current_version_number=1 where scheme_id='${SCHEME_ID}' and current_version_number=${INCONSISTENT_VERSION_NUMBER};" >/dev/null
}
trap 'restore_current_version_pointer; rm -rf "${TMP_DIR}"' EXIT
request "current_version_inconsistent" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/current" \
"409"
assert_json_eq "${TMP_DIR}/current_version_inconsistent.body" "detail.code" "current_version_inconsistent"
assert_file_contains "${TMP_DIR}/current_version_inconsistent.body" "\"current_version_number\":${INCONSISTENT_VERSION_NUMBER}"
restore_current_version_pointer
request "scheme_current_restored" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
assert_json_eq "${TMP_DIR}/scheme_current_restored.body" "scheme_version_id" "${CURRENT_VERSION_ID}"
assert_json_int_eq "${TMP_DIR}/scheme_current_restored.body" "version_number" "1"
echo
echo "===== done ====="
echo "[OK] smoke lifecycle negative completed successfully"
echo "FRESH_SCHEME_ID=${SCHEME_ID}"

View File

@@ -14,13 +14,25 @@ echo
echo "===== smoke version lifecycle ====="
bash "${SCRIPT_DIR}/smoke_version_lifecycle.sh"
echo
echo "===== smoke lifecycle negative ====="
bash "${SCRIPT_DIR}/smoke_lifecycle_negative.sh"
echo
echo "===== smoke admin ops ====="
bash "${SCRIPT_DIR}/smoke_admin_ops.sh"
echo
echo "===== smoke authz admin all ====="
bash "${SCRIPT_DIR}/smoke_authz_admin_all.sh"
echo
echo "===== smoke auth negative ====="
bash "${SCRIPT_DIR}/smoke_auth_negative.sh"
echo
echo "===== smoke artifact corruption ====="
bash "${SCRIPT_DIR}/smoke_artifact_corruption.sh"
echo
echo "===== smoke upload negative ====="