chore(backend): finalize backend baseline and frontend handoff contract

freeze the current backend contract for frontend integration

document the stabilized backend surface and handoff expectations
mark the current state as the baseline for further frontend work
This commit is contained in:
greebo
2026-03-20 16:46:24 +03:00
parent 5aa35b1d04
commit 54b36ba76c
8 changed files with 1103 additions and 23 deletions


@@ -17,18 +17,22 @@ export API_URL="http://127.0.0.1:9020"
export API_KEY="admin-local-dev-key"
export FIXTURE_SVG_PATH="/home/adminko/svg-service/sample-contract.svg"
## Main scripts
## Active regression contour
Primary operator regressions:
- `backend/scripts/smoke_core.sh`
- `backend/scripts/smoke_pricing_publish.sh`
- `backend/scripts/smoke_version_lifecycle.sh`
- `backend/scripts/smoke_lifecycle_negative.sh`
- `backend/scripts/smoke_admin_ops.sh`
- `backend/scripts/smoke_authz_admin_ops.sh`
- `backend/scripts/smoke_auth_negative.sh`
- `backend/scripts/smoke_authz_admin_all.sh`
- `backend/scripts/smoke_artifact_corruption.sh`
- `backend/scripts/smoke_upload_negative.sh`
- `backend/scripts/smoke_regression.sh`
- `backend/scripts/editor_mutation_regression.sh`
Only this set is part of the active backend regression contour.
The scripts are expected to fail fast on any contract break or unexpected 5xx.
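The shared fail-fast expectation can be sketched as a small helper in the style these scripts presumably share. `API_URL` and `API_KEY` are the environment variables exported above; the `X-API-Key` header name is an assumption of this sketch.

```shell
#!/usr/bin/env bash
# Hedged sketch of the fail-fast pattern: probe a route, compare the HTTP
# status to the expected one, and abort the whole run on any mismatch.
# The X-API-Key header name is an assumption; API_URL/API_KEY are the
# environment variables exported above.
set -euo pipefail

expect_status() {
  local method="$1" path="$2" want="$3"
  local got
  got=$(curl -s -o /dev/null -w '%{http_code}' \
        -X "$method" -H "X-API-Key: ${API_KEY}" "${API_URL}${path}")
  if [[ "$got" != "$want" ]]; then
    echo "FAIL: ${method} ${path} -> ${got}, expected ${want}" >&2
    exit 1
  fi
  echo "OK: ${method} ${path} -> ${got}"
}

# Example probe (commented out so the helper can be sourced safely):
# expect_status GET /api/v1/manifest 200
```

Any non-matching status, including an unexpected 5xx, terminates the scenario with a non-zero exit code, which is exactly what lets an orchestrator stop at the first contract break.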
@@ -37,11 +41,21 @@ The scripts are expected to fail fast on any contract break or unexpected 5xx.
- first runs `smoke_core.sh`
- then runs `smoke_pricing_publish.sh`
- then runs `smoke_version_lifecycle.sh`
- then runs `smoke_lifecycle_negative.sh`
- then runs `smoke_admin_ops.sh`
- then runs `smoke_authz_admin_ops.sh`
- then runs `smoke_authz_admin_all.sh`
- then runs `smoke_auth_negative.sh`
- then runs `smoke_artifact_corruption.sh`
- then runs `smoke_upload_negative.sh`
- returns non-zero if any scenario fails
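The ordering contract above can be sketched as a loop that stops at the first failing scenario. This is a sketch of the orchestration behavior, not the real `smoke_regression.sh`; the side-by-side script layout is an assumption.

```shell
#!/usr/bin/env bash
# Sketch of the orchestration contract: run scenarios in order, stop on the
# first failure, return non-zero. Not the real smoke_regression.sh.
set -euo pipefail

run_contour() {
  local dir="$1"; shift
  local s
  for s in "$@"; do
    echo "=== running ${s} ==="
    if ! "${dir}/${s}"; then
      echo "FAIL: ${s}" >&2
      return 1
    fi
  done
  echo "all scenarios passed"
}

# Example invocation with the documented ordering (commented out):
# run_contour backend/scripts \
#   smoke_core.sh smoke_pricing_publish.sh smoke_version_lifecycle.sh \
#   smoke_lifecycle_negative.sh smoke_admin_ops.sh smoke_authz_admin_ops.sh \
#   smoke_authz_admin_all.sh smoke_auth_negative.sh \
#   smoke_artifact_corruption.sh smoke_upload_negative.sh
```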
## Standalone/manual scripts
- `backend/scripts/editor_mutation_regression.sh`
- `backend/scripts/cleanup_test_pricing_data.sh`
These scripts are intentionally not called by `smoke_regression.sh`.
## Scenario split
### Core smoke on clean DB
@@ -92,6 +106,21 @@ Important:
- it verifies that the rolled-back current structure matches the target version's semantics, not the later mutated draft
- it checks audit trail for `scheme.published`, `scheme.version.created`, `scheme.rolled_back`, and `scheme.unpublished`
### Lifecycle negative smoke
Use:
- `backend/scripts/smoke_lifecycle_negative.sh`
This scenario uses fresh disposable scheme data to verify negative lifecycle contracts without leaving the database in a broken state.
Important:
- it checks rollback to a nonexistent version
- it checks stale current-version guards on `draft/ensure`
- it checks stale expected-version guards on `publish`
- it creates a temporary `current_version_inconsistent` pointer only inside the scenario and restores it before exit
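A single stale-guard probe from this scenario can be sketched as follows. The typed error codes (`stale_current_version`, `current_version_inconsistent`) are the ones documented here; the `error_code` response field name and the exact `draft/ensure` URL shape are assumptions of the sketch.

```shell
# Sketch of one stale-guard probe: assert a typed 409 and extract the error
# code from the response body. The error_code field name and the exact
# draft/ensure URL shape are assumptions; the expected codes
# (stale_current_version, current_version_inconsistent) are documented above.
check_typed_409() {
  local path="$1" body="$2" want_code="$3"
  local resp status payload code
  resp=$(curl -s -w $'\n%{http_code}' -X POST \
         -H "X-API-Key: ${API_KEY}" -H 'Content-Type: application/json' \
         -d "$body" "${API_URL}${path}")
  status="${resp##*$'\n'}"
  payload="${resp%$'\n'*}"
  code=$(grep -o '"error_code":"[^"]*"' <<<"$payload" | cut -d'"' -f4)
  if [[ "$status" != "409" || "$code" != "$want_code" ]]; then
    echo "FAIL: ${path} -> ${status} (${code:-no code})" >&2
    return 1
  fi
  echo "OK: ${path} -> typed 409 ${code}"
}

# Hypothetical usage with a deliberately stale version id:
# check_typed_409 "/api/v1/schemes/${SCHEME_ID}/draft/ensure" \
#   '{"expected_current_scheme_version_id": "stale-id"}' \
#   stale_current_version
```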
### Admin/ops smoke
Use:
@@ -113,9 +142,9 @@ Important:
Use:
- `backend/scripts/smoke_authz_admin_ops.sh`
- `backend/scripts/smoke_authz_admin_all.sh`
This scenario uploads a fresh SVG, prepares its own cleanup fixture data, and then checks permission boundaries for admin/operator/viewer on admin/ops endpoints.
This scenario uploads a fresh SVG, prepares its own cleanup fixture data, and then checks permission boundaries for admin/operator/viewer on all currently implemented admin endpoints used by the regression contour.
Important:
@@ -124,6 +153,38 @@ Important:
- the scenario does not rely on historical scheme ids or dirty pricing state
- destructive pricing cleanup execution is validated with fresh self-created fixture categories only
### Artifact corruption smoke
Use:
- `backend/scripts/smoke_artifact_corruption.sh`
This scenario creates fresh publish-preview artifacts and then simulates two controlled corruption cases only on the artifacts created inside the scenario.
Important:
- case A removes a preview file while leaving its DB row in place
- case B removes a preview DB row while leaving its file on disk
- audit must detect both inconsistencies correctly
- cleanup dry-run must stay readable and non-destructive
- cleanup execute must remediate the introduced inconsistency
- the scenario does not touch historical schemes or unrelated artifact rows/files
### Auth negative smoke
Use:
- `backend/scripts/smoke_auth_negative.sh`
This scenario checks the negative auth matrix on a representative route set.
Important:
- missing API key must return `401`
- invalid API key must return `403`
- valid non-admin key must return `403` only on admin-only endpoints
- the route set includes protected, editor, pricing, admin, and admin-cleanup endpoints
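The missing-key/invalid-key half of this matrix can be sketched as a reusable check. The `X-API-Key` header name is an assumption; the 401-versus-403 contract is the one stated above.

```shell
# Sketch of the negative auth matrix: no key must yield 401, a bogus key 403.
# The X-API-Key header name is an assumption of this sketch.
auth_status() {
  local path="$1" key="${2-}"
  local args=(-s -o /dev/null -w '%{http_code}')
  [[ -n "$key" ]] && args+=(-H "X-API-Key: ${key}")
  curl "${args[@]}" "${API_URL}${path}"
}

check_auth_matrix() {
  local path="$1"
  [[ "$(auth_status "$path")" == "401" ]] \
    || { echo "FAIL: ${path} without key" >&2; return 1; }
  [[ "$(auth_status "$path" definitely-not-a-valid-key)" == "403" ]] \
    || { echo "FAIL: ${path} with invalid key" >&2; return 1; }
  echo "OK: ${path} auth matrix holds"
}

# Hypothetical usage over the documented route set:
# check_auth_matrix /api/v1/manifest
# check_auth_matrix "/api/v1/schemes/${SCHEME_ID}/pricing"
```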
### Negative upload smoke
Use:
@@ -239,7 +300,26 @@ Validate:
- rolled-back current structure matches version 1 semantics after version 2 mutation
- lifecycle audit events are present and JSON-serializable
## 5. Admin/ops smoke coverage
## 5. Lifecycle negative smoke coverage
`smoke_lifecycle_negative.sh` checks:
- POST /api/v1/schemes/upload -> 200
- GET current on the fresh scheme -> 200
- POST rollback with nonexistent `target_version_number` -> controlled 404
- POST draft/ensure with stale `expected_current_scheme_version_id` -> typed 409
- POST publish with stale `expected_scheme_version_id` -> typed 409
- GET current after temporary `current_version_inconsistent` pointer corruption -> typed 409
- GET current again after scenario restoration -> 200
Validate:
- rollback to missing version stays controlled and non-500
- ensure-draft stale current pointer returns typed `stale_current_version`
- publish stale expected version stays controlled and non-500
- temporary pointer inconsistency returns typed `current_version_inconsistent`
- the temporary inconsistency is restored before the scenario exits
## 6. Admin/ops smoke coverage
`smoke_admin_ops.sh` checks:
@@ -275,14 +355,22 @@ Validate:
- protected pricing category and its rule remain after destructive cleanup
- repeated cleanup state remains stable after destructive cleanup
## 6. Admin authz smoke coverage
## 7. Admin authz smoke coverage
`smoke_authz_admin_ops.sh` checks:
`smoke_authz_admin_all.sh` checks:
- POST /api/v1/schemes/upload -> 200
- POST draft ensure on the fresh scheme -> 200
- POST pricing fixture categories/rule for cleanup authz checks -> 200
- POST draft/publish-preview refresh fixture -> 200
- GET /api/v1/admin/schemes/{scheme_id}/current/artifacts as admin -> 200
- GET /api/v1/admin/schemes/{scheme_id}/current/artifacts as operator/viewer -> 403
- GET /api/v1/admin/schemes/{scheme_id}/current/validation as admin -> 200
- GET /api/v1/admin/schemes/{scheme_id}/current/validation as operator/viewer -> 403
- POST /api/v1/admin/schemes/{scheme_id}/current/display/regenerate as admin -> 200
- POST /api/v1/admin/schemes/{scheme_id}/current/display/regenerate as operator/viewer -> 403
- POST /api/v1/admin/display/backfill as admin -> 200
- POST /api/v1/admin/display/backfill as operator/viewer -> 403
- GET /api/v1/admin/artifacts/publish-preview/audit as admin -> 200
- GET /api/v1/admin/artifacts/publish-preview/audit as operator/viewer -> 403
- POST /api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true as admin -> 200
@@ -300,7 +388,55 @@ Validate:
- admin endpoints stay available to admin
- operator and viewer are denied without 500
- destructive cleanup execution remains constrained to self-created safe fixture data
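The admin/operator/viewer boundary above can be sketched as one matrix assertion per endpoint. The per-role key variables (`ADMIN_KEY`, `OPERATOR_KEY`, `VIEWER_KEY`) and the `X-API-Key` header name are assumptions of this sketch.

```shell
# Sketch of the role matrix: admin must get 200, operator and viewer a clean
# 403 (never a 500). Per-role key variable names are assumptions.
assert_role_matrix() {
  local method="$1" path="$2"
  local role key want got
  for role in admin operator viewer; do
    case "$role" in
      admin)    key="${ADMIN_KEY}";    want=200 ;;
      operator) key="${OPERATOR_KEY}"; want=403 ;;
      viewer)   key="${VIEWER_KEY}";   want=403 ;;
    esac
    got=$(curl -s -o /dev/null -w '%{http_code}' -X "$method" \
          -H "X-API-Key: ${key}" "${API_URL}${path}")
    if [[ "$got" != "$want" ]]; then
      echo "FAIL: ${role} ${method} ${path} -> ${got}, expected ${want}" >&2
      return 1
    fi
  done
  echo "OK: ${method} ${path} role matrix holds"
}

# Hypothetical usage over the documented admin routes:
# assert_role_matrix GET "/api/v1/admin/schemes/${SCHEME_ID}/current/artifacts"
# assert_role_matrix POST /api/v1/admin/display/backfill
```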
## 7. Negative upload smoke coverage
## 8. Auth negative smoke coverage
`smoke_auth_negative.sh` checks:
- GET /api/v1/manifest without API key -> 401
- GET /api/v1/manifest with invalid API key -> 403
- GET /api/v1/schemes/{scheme_id}/editor/context without API key -> 401
- GET /api/v1/schemes/{scheme_id}/editor/context with invalid API key -> 403
- GET /api/v1/schemes/{scheme_id}/pricing without API key -> 401
- GET /api/v1/schemes/{scheme_id}/pricing with invalid API key -> 403
- GET /api/v1/admin/artifacts/publish-preview/audit without API key -> 401
- GET /api/v1/admin/artifacts/publish-preview/audit with invalid API key -> 403
- GET /api/v1/admin/artifacts/publish-preview/audit with valid viewer key -> 403
- GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview without API key -> 401
- GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview with invalid API key -> 403
- GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview with valid viewer key -> 403
Validate:
- missing key contract is consistently `401`
- invalid key contract is consistently `403`
- valid non-admin key is denied only on admin-only endpoints
## 9. Artifact corruption smoke coverage
`smoke_artifact_corruption.sh` checks:
- POST /api/v1/schemes/upload -> 200
- POST draft ensure on the fresh scheme -> 200
- GET initial /api/v1/admin/artifacts/publish-preview/audit -> healthy 200
- case A: manually delete fresh preview file while keeping DB row
- GET audit after case A -> reports exactly one missing file for DB row
- POST cleanup dry_run=true after case A -> 200
- POST cleanup dry_run=false after case A -> 200 and deletes the broken DB row
- case B: manually delete fresh preview DB row while keeping file
- GET audit after case B -> reports exactly one orphan file
- POST cleanup dry_run=true after case B -> 200
- POST cleanup dry_run=false after case B -> 200 and deletes the orphan file
- final audit -> healthy 200
Validate:
- audit sees DB-row-without-file and file-without-DB-row separately and correctly
- dry-run remains readable and non-destructive in both corruption cases
- execute cleanup remediates only the inconsistency introduced in the scenario
- final audit is healthy again: `orphan_files_count=0`, `missing_files_for_db_rows_count=0`
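The final health assertion can be sketched against the audit payload. The two counter names are the ones quoted above; the flat-JSON parsing (no whitespace after colons) is a simplification of this sketch.

```shell
# Sketch of the final health check on the publish-preview audit payload;
# counter names come from the validation list above, the grep-based JSON
# parsing is a simplification.
audit_is_healthy() {
  local json="$1" orphans missing
  orphans=$(grep -o '"orphan_files_count":[0-9]*' <<<"$json" | cut -d: -f2)
  missing=$(grep -o '"missing_files_for_db_rows_count":[0-9]*' <<<"$json" | cut -d: -f2)
  [[ "$orphans" == "0" && "$missing" == "0" ]]
}

# Hypothetical usage against a live backend:
# json=$(curl -s -H "X-API-Key: ${API_KEY}" \
#        "${API_URL}/api/v1/admin/artifacts/publish-preview/audit")
# audit_is_healthy "$json" || { echo "final audit not healthy" >&2; exit 1; }
```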
## 10. Negative upload smoke coverage
`smoke_upload_negative.sh` checks:
@@ -315,7 +451,7 @@ Validate:
- the configured max file size is read from the manifest, not hardcoded in the script
- no negative upload case returns 500
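Reading the limit from the manifest rather than hardcoding it can be sketched as below. The `max_upload_bytes` field name is an assumption; the real manifest key may differ.

```shell
# Sketch of deriving the upload size limit from the manifest instead of
# hardcoding it. The max_upload_bytes field name is an assumption.
manifest_max_bytes() {
  grep -o '"max_upload_bytes":[0-9]*' <<<"$1" | head -n1 | cut -d: -f2
}

# Hypothetical usage against a live backend: build a file one byte over the
# limit and require a controlled (non-5xx) rejection.
# manifest=$(curl -s -H "X-API-Key: ${API_KEY}" "${API_URL}/api/v1/manifest")
# max=$(manifest_max_bytes "$manifest")
# truncate -s $((max + 1)) /tmp/too-big.svg
# status=$(curl -s -o /dev/null -w '%{http_code}' -X POST \
#          -H "X-API-Key: ${API_KEY}" -F "file=@/tmp/too-big.svg" \
#          "${API_URL}/api/v1/schemes/upload")
# case "$status" in 5*) echo "FAIL: oversized upload -> ${status}" >&2; exit 1;; esac
```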
## 8. Legacy endpoint families
## 11. Legacy endpoint families
The sections below remain the per-area API baseline, but regression execution is now split between the clean-DB core smoke and the pricing/publish smoke.