
Smoke regression checklist

This file is the backend manual regression baseline for svg-service.

Preconditions

  • docker compose stack is up
  • backend responds on port 9020
  • valid admin API key is available
  • stable SVG fixture exists in repository, e.g. sample-contract.svg

Environment

Export these variables in your shell:

export API_URL="http://127.0.0.1:9020"
export API_KEY="admin-local-dev-key"
export FIXTURE_SVG_PATH="/home/adminko/svg-service/sample-contract.svg"

Active regression contour

Primary operator regressions:

  • backend/scripts/smoke_core.sh
  • backend/scripts/smoke_pricing_publish.sh
  • backend/scripts/smoke_version_lifecycle.sh
  • backend/scripts/smoke_lifecycle_negative.sh
  • backend/scripts/smoke_admin_ops.sh
  • backend/scripts/smoke_auth_negative.sh
  • backend/scripts/smoke_authz_admin_all.sh
  • backend/scripts/smoke_artifact_corruption.sh
  • backend/scripts/smoke_upload_negative.sh
  • backend/scripts/smoke_regression.sh

Only this set is part of the active backend regression contour.

The scripts are expected to fail fast on any contract break or unexpected 5xx.

smoke_regression.sh is now an orchestration wrapper:

  • first runs smoke_core.sh
  • then runs smoke_pricing_publish.sh
  • then runs smoke_version_lifecycle.sh
  • then runs smoke_lifecycle_negative.sh
  • then runs smoke_admin_ops.sh
  • then runs smoke_authz_admin_all.sh
  • then runs smoke_auth_negative.sh
  • then runs smoke_artifact_corruption.sh
  • then runs smoke_upload_negative.sh
  • returns non-zero if any scenario fails
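
The fail-fast sequencing above can be sketched as a small runner. This is a sketch of the pattern, not the actual smoke_regression.sh internals:

```shell
# Sketch of the fail-fast orchestration pattern; the real
# smoke_regression.sh wiring may differ.
run_scenarios() {
  local script
  for script in "$@"; do
    echo "=== ${script} ==="
    if ! "${script}"; then
      echo "FAILED: ${script}" >&2
      return 1            # stop at the first broken scenario
    fi
  done
  echo "all scenarios passed"
}

# run_scenarios backend/scripts/smoke_core.sh backend/scripts/smoke_pricing_publish.sh ...
```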

Standalone/manual scripts

  • backend/scripts/editor_mutation_regression.sh
  • backend/scripts/cleanup_test_pricing_data.sh

These scripts are intentionally not called by smoke_regression.sh.

Scenario split

Core smoke on clean DB

Use:

  • backend/scripts/smoke_core.sh

This scenario is designed for a fully clean database.

It uploads a fresh SVG fixture, resolves the created scheme_id, validates current/draft read models, validates empty pricing state, and then runs editor_mutation_regression.sh on the same fresh scheme.

Important:

  • it does not require pre-existing scheme_id
  • it does not require pricing categories or price rules
  • it does not require publish snapshot or published baseline
  • empty pricing on a fresh upload is a valid state, not a failure
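
The "empty pricing is a valid state" rule can be expressed as a small guard. The `categories` key comes from the pricing sections of this checklist, but the exact envelope shape and the substring match are assumptions:

```shell
# Sketch: accept categories=[] as the normal fresh-upload state.
# The JSON key and the spacing variants handled are assumptions.
assert_empty_pricing() {
  local bundle="$1"
  case "${bundle}" in
    *'"categories":[]'*|*'"categories": []'*) return 0 ;;
    *) echo "expected empty categories in fresh pricing bundle" >&2; return 1 ;;
  esac
}
```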

Pricing/publish smoke with fixture setup

Use:

  • backend/scripts/smoke_pricing_publish.sh

This scenario also uploads a fresh SVG fixture, then prepares its own pricing fixture before validating pricing and publish flow.

Important:

  • it creates its own pricing category
  • it creates its own pricing rule
  • it intentionally checks both a priced seat and an unpriced seat on the same fresh scheme
  • it does not rely on historical pricing IDs, rules, or old schemes

Version lifecycle smoke

Use:

  • backend/scripts/smoke_version_lifecycle.sh

This scenario uploads a fresh SVG, publishes version 1, creates version 2 from published current, mutates the new draft, publishes version 2, rolls back to version 1, and then runs unpublish on the current scheme.

Important:

  • it validates multi-version lifecycle beyond fresh upload
  • it checks that draft/ensure creates a new draft only after current becomes published
  • it verifies rollback switches current_version_number to the requested target version
  • it verifies the rolled-back current structure matches the target version semantics, not the later mutated draft
  • it checks audit trail for scheme.published, scheme.version.created, scheme.rolled_back, and scheme.unpublished
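
The audit-trail check can be sketched as a presence scan over the raw audit payload. Event names are taken from this checklist; matching by substring is a simplification of whatever structured parsing the real script does:

```shell
# Sketch: fail if any expected lifecycle event name is absent from the
# audit payload.
assert_audit_events() {
  local payload="$1"; shift
  local ev
  for ev in "$@"; do
    case "${payload}" in
      *"${ev}"*) ;;
      *) echo "missing audit event: ${ev}" >&2; return 1 ;;
    esac
  done
}

# assert_audit_events "${audit_json}" \
#   scheme.published scheme.version.created scheme.rolled_back scheme.unpublished
```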

Lifecycle negative smoke

Use:

  • backend/scripts/smoke_lifecycle_negative.sh

This scenario uses fresh disposable scheme data to verify negative lifecycle contracts without leaving the database in a broken state.

Important:

  • it checks rollback to a nonexistent version
  • it checks stale current-version guards on draft/ensure
  • it checks stale expected-version guards on publish
  • it introduces a temporary current_version_inconsistent pointer state only inside the scenario and restores it before exit
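
The restore-before-exit guarantee can be sketched as a wrapper that always runs the restore step, whether or not the corrupting check failed. The pointer handling itself is hypothetical:

```shell
# Sketch: run a check against temporarily corrupted state and always
# restore afterwards, preserving the check's exit code.
run_with_restore() {
  local saved="$1"; shift
  local rc=0
  "$@" || rc=$?                       # the potentially failing check
  echo "restore pointer -> ${saved}"  # always runs, success or failure
  return "${rc}"
}
```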

Admin/ops smoke

Use:

  • backend/scripts/smoke_admin_ops.sh

This scenario uploads a fresh SVG and prepares its own admin-cleanup fixture inside the scenario before checking current-artifact inspection, validation, publish-preview audit/cleanup, and pricing-category cleanup preview/dry-run.

Important:

  • it creates its own pricing categories for cleanup preview
  • it creates its own protected pricing rule so cleanup preview has both deletable and skipped categories
  • it does not rely on historical orphan artifacts, old schemes, or dirty pricing state
  • it checks publish-preview cleanup in both dry-run and execute modes
  • it requires the final publish-preview audit state to be healthy: orphan_files_count=0 and missing_files_for_db_rows_count=0
  • it executes destructive pricing cleanup only for self-created safe fixture data

Admin authz smoke

Use:

  • backend/scripts/smoke_authz_admin_all.sh

This scenario uploads a fresh SVG, prepares its own cleanup fixture data, and then checks permission boundaries for admin/operator/viewer on all currently implemented admin endpoints used by the regression contour.

Important:

  • admin must be allowed on tested admin endpoints
  • operator and viewer must be denied with controlled 403 responses
  • the scenario does not rely on historical scheme ids or dirty pricing state
  • destructive pricing cleanup execution is validated with fresh self-created fixture categories only

Artifact corruption smoke

Use:

  • backend/scripts/smoke_artifact_corruption.sh

This scenario creates fresh publish-preview artifacts and then simulates two controlled corruption cases only on the artifacts created inside the scenario.

Important:

  • case A removes a preview file while leaving its DB row in place
  • case B removes a preview DB row while leaving its file on disk
  • audit must detect both inconsistencies correctly
  • cleanup dry-run must stay readable and non-destructive
  • cleanup execute must remediate the introduced inconsistency
  • the scenario does not touch historical schemes or unrelated artifact rows/files

Auth negative smoke

Use:

  • backend/scripts/smoke_auth_negative.sh

This scenario checks the negative auth matrix on a representative route set.

Important:

  • missing API key must return 401
  • invalid API key must return 403
  • valid non-admin key must return 403 only on admin-only endpoints
  • the route set includes protected, editor, pricing, admin, and admin-cleanup endpoints
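
The status matrix above is naturally checked with a helper that fetches only the HTTP status code. This is a sketch of the pattern, not the actual script's internals; the `X-API-Key` header name in the usage comments is an assumption:

```shell
# Sketch: assert an exact HTTP status for a request, failing fast on
# mismatch. curl reports 000 when no connection could be made.
expect_status() {
  local expected="$1"; shift
  local actual
  actual="$(curl -s -o /dev/null -w '%{http_code}' "$@" || true)"
  if [ "${actual}" != "${expected}" ]; then
    echo "expected ${expected}, got ${actual} for: $*" >&2
    return 1
  fi
}

# expect_status 401 "${API_URL}/api/v1/manifest"                       # missing key
# expect_status 403 -H "X-API-Key: wrong" "${API_URL}/api/v1/manifest" # invalid key
```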

Negative upload smoke

Use:

  • backend/scripts/smoke_upload_negative.sh

This scenario checks controlled upload failures for invalid inputs.

Important:

  • empty upload must fail with a controlled 4xx
  • non-SVG uploads must fail with a controlled 4xx
  • invalid extension/content-type combinations must fail with a controlled 4xx
  • oversize upload must fail with a controlled 413 when the configured size limit is exceeded
  • no negative case is allowed to return 500

1. Health / system

  • GET /healthz -> 200 (smoke uses a bounded retry/wait loop and fails explicitly if the API never becomes ready)
  • GET /api/v1/ping -> 200
  • GET /api/v1/db/ping -> 200
  • GET /api/v1/manifest -> 200
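
The bounded retry/wait loop mentioned above can be sketched as follows, assuming API_URL from the Environment section; attempt counts and probe flags are illustrative:

```shell
# Sketch: poll /healthz up to a fixed number of attempts, then fail
# explicitly instead of hanging forever.
wait_for_api() {
  local attempts="${1:-30}" delay="${2:-1}" i=1
  while [ "${i}" -le "${attempts}" ]; do
    if curl -fsS "${API_URL}/healthz" >/dev/null 2>&1; then
      echo "API ready after ${i} attempt(s)"
      return 0
    fi
    sleep "${delay}"
    i=$((i + 1))
  done
  echo "API not ready after ${attempts} attempts" >&2
  return 1
}
```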

2. Core smoke coverage

smoke_core.sh checks:

  • GET /healthz -> 200
  • GET /api/v1/ping -> 200
  • GET /api/v1/db/ping -> 200
  • GET /api/v1/manifest -> 200
  • POST /api/v1/schemes/upload -> 200
  • GET /api/v1/schemes -> 200 and resolves the fresh scheme_id
  • GET /api/v1/schemes/{scheme_id} -> 200
  • GET /api/v1/schemes/{scheme_id}/versions -> 200
  • GET /api/v1/schemes/{scheme_id}/current -> 200
  • GET /api/v1/schemes/{scheme_id}/editor/context -> 200
  • POST /api/v1/schemes/{scheme_id}/draft/ensure -> 200
  • GET /api/v1/schemes/{scheme_id}/draft/summary -> 200
  • GET /api/v1/schemes/{scheme_id}/draft/structure -> 200
  • GET /api/v1/schemes/{scheme_id}/draft/validation -> 200
  • GET /api/v1/schemes/{scheme_id}/draft/compare-preview -> 200
  • GET draft entities by record id -> 200
  • stale expected_scheme_version_id conflict -> 409 with typed stale_draft_version
  • GET current sectors/groups/seats -> 200
  • GET current SVG display meta -> 200
  • GET pricing bundle -> 200 with empty categories/rules
  • GET pricing coverage -> 200 with zero priced seats
  • GET pricing explain/{seat_id} -> 200 with no_price_rule
  • GET pricing rules diagnostics -> 200 with empty state
  • GET audit -> 200
  • backend/scripts/editor_mutation_regression.sh on the same fresh scheme

Validate:

  • fresh upload is readable immediately through current/draft/editor endpoints
  • empty pricing is accepted as normal state for a newly uploaded scheme
  • no endpoint in core smoke returns 500

3. Pricing/publish smoke coverage

smoke_pricing_publish.sh checks:

  • POST /api/v1/schemes/upload -> 200
  • GET current / POST draft ensure on the fresh scheme -> 200
  • POST pricing category -> 200
  • POST price rule -> 200
  • GET pricing bundle -> 200 with created fixture data
  • GET pricing coverage -> 200 with both priced and unpriced seats present
  • GET pricing explain/{priced_seat_id} -> 200 with matched rule
  • GET pricing explain/{unpriced_seat_id} -> 200 with no_price_rule
  • GET current/seats/{priced_seat_id}/price -> 200
  • GET test/seats/{priced_seat_id} -> 200
  • GET test/seats/{unpriced_seat_id} -> 200
  • POST draft/pricing/snapshot -> 200
  • GET draft/publish-readiness -> 200
  • GET draft/publish-preview?refresh=true -> 200
  • GET draft/publish-preview -> 200
  • POST publish -> 200
  • GET scheme detail/current after publish -> 200 and published state
  • GET audit -> 200 and contains scheme.published

Validate:

  • fixture setup is fully self-contained
  • priced-seat checks happen only after explicit pricing fixture creation
  • publish flow is validated on a fresh scheme, not on historical DB data

4. Version lifecycle smoke coverage

smoke_version_lifecycle.sh checks:

  • POST /api/v1/schemes/upload -> 200
  • GET scheme detail/current immediately after upload -> version 1 draft
  • POST draft ensure on version 1 -> 200 and remains same draft
  • POST pricing category/rule fixture -> 200
  • POST draft/pricing/snapshot on version 1 -> 200
  • POST publish on version 1 -> 200
  • POST draft ensure from published current -> 200 and creates version 2
  • PATCH one draft seat field on version 2 -> 200
  • GET draft compare-preview on version 2 -> 200 and shows changed state
  • POST draft/pricing/snapshot on version 2 -> 200
  • POST publish on version 2 -> 200
  • POST rollback to version 1 -> 200
  • POST unpublish current -> 200
  • GET audit -> 200 with lifecycle events present

Validate:

  • version numbering advances from 1 to 2 only when current was published
  • current pointer tracks the published version before rollback
  • rollback switches current pointer back to the requested target version
  • rolled-back current structure matches version 1 semantics after version 2 mutation
  • lifecycle audit events are present and JSON-serializable

5. Lifecycle negative smoke coverage

smoke_lifecycle_negative.sh checks:

  • POST /api/v1/schemes/upload -> 200
  • GET current on the fresh scheme -> 200
  • POST rollback with nonexistent target_version_number -> controlled 404
  • POST draft/ensure with stale expected_current_scheme_version_id -> typed 409
  • POST publish with stale expected_scheme_version_id -> typed 409
  • GET current after temporary current_version_inconsistent pointer corruption -> typed 409
  • GET current again after scenario restoration -> 200

Validate:

  • rollback to missing version stays controlled and non-500
  • ensure-draft stale current pointer returns typed stale_current_version
  • publish stale expected version stays controlled and non-500
  • temporary pointer inconsistency returns typed current_version_inconsistent
  • the temporary inconsistency is restored before the scenario exits

6. Admin/ops smoke coverage

smoke_admin_ops.sh checks:

  • POST /api/v1/schemes/upload -> 200
  • POST draft ensure on the fresh scheme -> 200
  • POST pricing category fixture for cleanup preview -> 200
  • POST protected pricing rule fixture -> 200
  • POST draft/pricing/snapshot -> 200
  • GET draft/publish-preview?refresh=true -> 200
  • GET draft/publish-preview -> 200
  • GET /api/v1/admin/schemes/{scheme_id}/current/artifacts -> 200
  • GET /api/v1/admin/schemes/{scheme_id}/current/validation -> 200
  • GET /api/v1/admin/artifacts/publish-preview/audit -> 200
  • POST /api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true -> 200
  • POST /api/v1/admin/artifacts/publish-preview/cleanup?dry_run=false -> 200
  • GET /api/v1/admin/artifacts/publish-preview/audit after cleanup -> 200
  • GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview -> 200
  • POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=true -> 200
  • POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=false -> 200
  • GET /api/v1/schemes/{scheme_id}/pricing after destructive cleanup -> 200
  • repeated cleanup preview/dry-run after destructive cleanup -> 200

Validate:

  • admin artifact listing stays readable for current draft version
  • admin validation stays readable for current draft version
  • publish-preview cleanup dry-run stays non-destructive and mirrors pre-clean audit counts
  • publish-preview cleanup execute removes all orphan preview files and missing DB rows
  • final publish-preview audit is strict healthy state: orphan_files_count=0, missing_files_for_db_rows_count=0, and db_rows_count == disk_files_count
  • pricing cleanup preview identifies both deletable and protected categories created inside the scenario
  • pricing cleanup dry-run never mutates fixture data
  • destructive pricing cleanup deletes only the safe category without rules
  • protected pricing category and its rule remain after destructive cleanup
  • repeated cleanup state remains stable after destructive cleanup

7. Admin authz smoke coverage

smoke_authz_admin_all.sh checks:

  • POST /api/v1/schemes/upload -> 200
  • POST draft ensure on the fresh scheme -> 200
  • POST pricing fixture categories/rule for cleanup authz checks -> 200
  • POST draft/publish-preview refresh fixture -> 200
  • GET /api/v1/admin/schemes/{scheme_id}/current/artifacts as admin -> 200
  • GET /api/v1/admin/schemes/{scheme_id}/current/artifacts as operator/viewer -> 403
  • GET /api/v1/admin/schemes/{scheme_id}/current/validation as admin -> 200
  • GET /api/v1/admin/schemes/{scheme_id}/current/validation as operator/viewer -> 403
  • POST /api/v1/admin/schemes/{scheme_id}/current/display/regenerate as admin -> 200
  • POST /api/v1/admin/schemes/{scheme_id}/current/display/regenerate as operator/viewer -> 403
  • POST /api/v1/admin/display/backfill as admin -> 200
  • POST /api/v1/admin/display/backfill as operator/viewer -> 403
  • GET /api/v1/admin/artifacts/publish-preview/audit as admin -> 200
  • GET /api/v1/admin/artifacts/publish-preview/audit as operator/viewer -> 403
  • POST /api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true as admin -> 200
  • POST /api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true as operator/viewer -> 403
  • GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview as admin -> 200
  • GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview as operator/viewer -> 403
  • POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=true as admin -> 200
  • POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=true as operator/viewer -> 403
  • POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=false as operator/viewer -> 403
  • POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=false as admin -> 200

Validate:

  • expected role matrix is explicit and enforced
  • admin endpoints stay available to admin
  • operator and viewer are denied without 500
  • destructive cleanup execution remains constrained to self-created safe fixture data

8. Auth negative smoke coverage

smoke_auth_negative.sh checks:

  • GET /api/v1/manifest without API key -> 401
  • GET /api/v1/manifest with invalid API key -> 403
  • GET /api/v1/schemes/{scheme_id}/editor/context without API key -> 401
  • GET /api/v1/schemes/{scheme_id}/editor/context with invalid API key -> 403
  • GET /api/v1/schemes/{scheme_id}/pricing without API key -> 401
  • GET /api/v1/schemes/{scheme_id}/pricing with invalid API key -> 403
  • GET /api/v1/admin/artifacts/publish-preview/audit without API key -> 401
  • GET /api/v1/admin/artifacts/publish-preview/audit with invalid API key -> 403
  • GET /api/v1/admin/artifacts/publish-preview/audit with valid viewer key -> 403
  • GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview without API key -> 401
  • GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview with invalid API key -> 403
  • GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview with valid viewer key -> 403

Validate:

  • missing key contract is consistently 401
  • invalid key contract is consistently 403
  • valid non-admin key is denied only on admin-only endpoints

9. Artifact corruption smoke coverage

smoke_artifact_corruption.sh checks:

  • POST /api/v1/schemes/upload -> 200
  • POST draft ensure on the fresh scheme -> 200
  • GET initial /api/v1/admin/artifacts/publish-preview/audit -> healthy 200
  • case A: manually delete fresh preview file while keeping DB row
  • GET audit after case A -> reports exactly one missing file for DB row
  • POST cleanup dry_run=true after case A -> 200
  • POST cleanup dry_run=false after case A -> 200 and deletes the broken DB row
  • case B: manually delete fresh preview DB row while keeping file
  • GET audit after case B -> reports exactly one orphan file
  • POST cleanup dry_run=true after case B -> 200
  • POST cleanup dry_run=false after case B -> 200 and deletes the orphan file
  • final audit -> healthy 200

Validate:

  • audit sees DB-row-without-file and file-without-DB-row separately and correctly
  • dry-run remains readable and non-destructive in both corruption cases
  • execute cleanup remediates only the inconsistency introduced in the scenario
  • final audit is healthy again: orphan_files_count=0, missing_files_for_db_rows_count=0

10. Negative upload smoke coverage

smoke_upload_negative.sh checks:

  • POST /api/v1/schemes/upload with empty SVG body -> controlled 400
  • POST /api/v1/schemes/upload with non-SVG text/plain body -> controlled 400
  • POST /api/v1/schemes/upload with SVG body but invalid extension/content-type pair -> controlled 400
  • POST /api/v1/schemes/upload with body larger than manifest max_file_size_bytes -> controlled 413

Validate:

  • upload validation rejects bad inputs with explicit 4xx contracts
  • configured max file size is read from manifest, not hardcoded in the script
  • no negative upload case returns 500
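
The oversize case keys off the manifest limit rather than a hardcoded size. A sketch of building a body one byte over that limit; the `max_file_size_bytes` field path in the usage comment is an assumption about the manifest shape:

```shell
# Sketch: emit a body exactly one byte over the configured limit.
oversize_body() {
  local limit="$1"                 # bytes allowed by the manifest
  head -c "$((limit + 1))" /dev/zero
}

# usage against a live stack (not run here):
#   limit="$(curl -fsS "${API_URL}/api/v1/manifest" -H "X-API-Key: ${API_KEY}" \
#     | jq -r '.max_file_size_bytes')"           # field path is hypothetical
#   oversize_body "${limit}" > /tmp/oversize.svg # then upload, expect 413
```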

11. Legacy endpoint families

The sections below keep their original numbering and remain the API baseline by area, but regression execution is now split across the scenario scripts of the active contour above.

5. Scheme registry

  • GET /api/v1/schemes -> 200
  • GET /api/v1/schemes/{scheme_id} -> 200
  • GET /api/v1/schemes/{scheme_id}/current -> 200
  • GET /api/v1/schemes/{scheme_id}/versions -> 200

Validate:

  • scheme_id is stable
  • current version exists
  • version list contains current version
  • status and counts are consistent

6. Editor entry flow

  • GET /api/v1/schemes/{scheme_id}/editor/context -> 200
  • POST /api/v1/schemes/{scheme_id}/draft/ensure -> 200

Validate:

  • editor context returns current_scheme_version_id
  • editor context distinguishes draft vs published state correctly
  • ensure endpoint is idempotent on current draft
  • ensure endpoint creates a new draft from published current when needed
  • returned scheme_version_id is reusable as expected_scheme_version_id

7. Draft read model

Using current draft version id from draft/ensure:

  • GET /api/v1/schemes/{scheme_id}/draft/summary?expected_scheme_version_id={draft_version_id} -> 200
  • GET /api/v1/schemes/{scheme_id}/draft/structure?expected_scheme_version_id={draft_version_id} -> 200
  • GET /api/v1/schemes/{scheme_id}/draft/validation?expected_scheme_version_id={draft_version_id} -> 200
  • GET /api/v1/schemes/{scheme_id}/draft/compare-preview?expected_scheme_version_id={draft_version_id} -> 200

Validate:

  • summary returns total_seats / total_sectors / total_groups
  • summary returns validation_summary / structure_diff_summary / publish_readiness
  • structure returns lists for seats / sectors / groups
  • validation is deterministic
  • compare preview returns stable diff structure
  • stale expected_scheme_version_id returns typed 409 conflict
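
The typed-409 contract can be checked with a small guard over the response body. The substring match is a simplification, and the placement of the error code inside the body is an assumption:

```shell
# Sketch: assert that a conflict body carries the expected typed code,
# e.g. stale_draft_version. Real scripts may parse structured JSON instead.
assert_typed_conflict() {
  local body="$1" code="$2"
  case "${body}" in
    *"\"${code}\""*) return 0 ;;
    *) echo "expected typed code ${code} in response body" >&2; return 1 ;;
  esac
}
```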

8. Draft entity reads

  • GET /api/v1/schemes/{scheme_id}/draft/seats/records/{seat_record_id} -> 200
  • GET /api/v1/schemes/{scheme_id}/draft/sectors/records/{sector_record_id} -> 200
  • GET /api/v1/schemes/{scheme_id}/draft/groups/records/{group_record_id} -> 200

Validate:

  • record endpoints return exact draft entities
  • unknown record id returns 404
  • stale expected_scheme_version_id returns typed 409 conflict

9. Structure read model

  • GET /api/v1/schemes/{scheme_id}/current/sectors -> 200
  • GET /api/v1/schemes/{scheme_id}/current/groups -> 200
  • GET /api/v1/schemes/{scheme_id}/current/seats -> 200

Validate:

  • total counts are non-negative
  • known sample scheme returns expected object lists
  • seats contain seat_id / sector_id / group_id contract where applicable

10. SVG / display pipeline

  • GET /api/v1/schemes/{scheme_id}/current/svg -> 200
  • GET /api/v1/schemes/{scheme_id}/current/svg/display -> 200
  • GET /api/v1/schemes/{scheme_id}/current/svg/display/meta -> 200
  • GET /api/v1/schemes/{scheme_id}/current/svg/display?mode=optimized -> 200 or explicit controlled failure
  • GET /api/v1/schemes/{scheme_id}/current/svg/display/meta?mode=optimized -> 200 or explicit controlled failure

Validate:

  • response content type for svg endpoints is image/svg+xml
  • meta returns scheme_id, scheme_version_id, view_box, width, height
  • no 500 on passthrough mode
  • unsupported mode returns 422

11. Pricing read model

  • GET /api/v1/schemes/{scheme_id}/pricing -> 200
  • GET /api/v1/schemes/{scheme_id}/pricing/coverage -> 200
  • GET /api/v1/schemes/{scheme_id}/pricing/unpriced-seats -> 200
  • GET /api/v1/schemes/{scheme_id}/pricing/explain/{seat_id} -> 200
  • GET /api/v1/schemes/{scheme_id}/pricing/rules/diagnostics -> 200
  • GET /api/v1/schemes/{scheme_id}/current/seats/{seat_id}/price -> 200 only after pricing fixture exists
  • GET /api/v1/schemes/{scheme_id}/test/seats/{seat_id} -> 200 for known seat

Validate:

  • fresh clean upload is allowed to have categories=[] and rules=[]
  • fresh clean upload is allowed to have zero priced seats and no_price_rule explanations
  • priced seat checks belong to pricing/publish smoke after fixture setup
  • diagnostics returns stable empty state with zero rules on clean upload
  • diagnostics returns matched seat visibility after fixture setup
  • priced test seat amount is serialized as string when pricing exists

12. Draft mutation regression

Use:

  • backend/scripts/editor_mutation_regression.sh

This script checks:

  • create sector
  • create group
  • patch seat
  • bulk seat update
  • patch sector
  • patch group
  • duplicate entity validation paths
  • stale draft conflict
  • remap preview validation path
  • repair references
  • delete created sector/group
  • post-mutation read-model consistency

Validate:

  • created entities are returned by API
  • patched draft records are actually changed
  • bulk update changes persisted fields
  • duplicate ids return 422
  • stale expected_scheme_version_id returns typed 409
  • remap preview without filters returns typed 422
  • post-mutation summary / validation / compare-preview remain readable and deterministic

13. Draft publish preview

  • POST /api/v1/schemes/{scheme_id}/draft/pricing/snapshot -> 200 when scheme is in draft
  • GET /api/v1/schemes/{scheme_id}/draft/publish-preview?refresh=true -> 200
  • GET /api/v1/schemes/{scheme_id}/draft/publish-preview -> 200
  • GET /api/v1/schemes/{scheme_id}/draft/publish-preview?refresh=true&baseline_scheme_version_id={published_version_id} -> 200

Validate:

  • refresh and cached read both succeed
  • preview summary contains is_publishable / has_structure_changes / has_artifacts / snapshot_available
  • pricing_coverage is internally consistent
  • baseline override returns override strategy when explicit baseline is provided
  • preview retention does not grow unbounded for same version+variant

14. Publish readiness and publish flow

For current draft version:

  • GET /api/v1/schemes/{scheme_id}/draft/publish-readiness -> 200
  • POST /api/v1/schemes/{scheme_id}/publish?expected_scheme_version_id={draft_version_id} -> 200 or 409

Validate:

  • readiness explicitly shows snapshot_available and pricing gate state
  • publish with stale expected version returns typed 409
  • publish without draft state returns typed 409
  • publish success updates current status to published
  • audit trail contains scheme.published event

15. Admin / ops

  • GET /api/v1/admin/schemes/{scheme_id}/current/artifacts -> 200
  • GET /api/v1/admin/schemes/{scheme_id}/current/validation -> 200
  • GET /api/v1/admin/artifacts/publish-preview/audit -> 200
  • POST /api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true -> 200
  • POST /api/v1/admin/artifacts/publish-preview/cleanup?dry_run=false -> 200
  • GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview -> 200
  • POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=true -> 200
  • POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=false -> 200

Validate:

  • artifact audit does not report orphan files or missing files for DB rows in normal state
  • healthy publish-preview audit is strict: orphan_files_count=0 and missing_files_for_db_rows_count=0
  • validation report is readable and deterministic
  • pricing cleanup preview returns matched candidates and safe_to_delete_count
  • pricing cleanup dry-run returns deleted_count=0
  • destructive pricing cleanup deletes only safe fixture categories without rules
  • admin role is allowed on admin endpoints
  • operator/viewer are denied with controlled 403 on admin endpoints
  • idempotent cleanup is valid in both states: matched_total=0 with would_delete_count=0, or matched_total>0 with would_delete_count>0
  • smoke does not require cleanup dry-run to always find something to delete
  • admin routes do not produce 500 for healthy scheme state

16. Audit trail

  • GET /api/v1/schemes/{scheme_id}/audit -> 200

Validate:

  • recent publish preview / pricing / version / publish events are present when corresponding operations were run
  • audit total is non-negative
  • event payloads stay JSON-serializable

17. Fail criteria

Regression is considered failed if any of the following happen:

  • health or db ping fails
  • any stable read endpoint returns 500
  • passthrough display endpoint fails on known-good sample
  • publish preview refresh or cached read returns 500
  • publish readiness returns 500
  • editor context or draft ensure returns 500
  • draft summary / structure / validation / compare-preview returns 500
  • editor mutation regression returns non-zero exit code
  • clean upload empty pricing state is treated as a failure
  • pricing bundle or diagnostics contract changes unexpectedly
  • admin audit/cleanup endpoints fail on healthy environment
  • pricing cleanup dry-run mutates data
  • artifact retention grows without bound for repeated preview refresh on same variant

18. Operator note

Run this checklist after:

  • schema changes
  • pricing schema/repository refactors
  • artifact lifecycle changes
  • display pipeline changes
  • route reorganization
  • startup/import/config changes
  • draft lifecycle changes
  • publish readiness changes
  • admin cleanup changes
  • editor mutation changes