22 Commits

Author SHA1 Message Date
greebo
54b36ba76c chore(backend): finalize backend baseline and frontend handoff contract
freeze the current backend contract for frontend integration

document the stabilized backend surface and handoff expectations
mark the current state as the baseline for further frontend work
2026-03-20 16:46:24 +03:00
greebo
5aa35b1d04 feat(backend): enforce admin-only ops endpoints and cover destructive cleanup smoke
restrict ops endpoints to admin-only access

block operator and viewer keys from admin maintenance routes
cover destructive pricing cleanup in smoke execution, not only preview

extend orchestration without regressing existing smoke stages
2026-03-20 16:02:38 +03:00
greebo
210981c953 test(backend): split smoke regression into core and pricing publish flows
separate smoke coverage into core backend checks and pricing publish flow checks

make regression runs more focused and easier to maintain
improve troubleshooting when a smoke stage fails
2026-03-20 13:25:32 +03:00
greebo
239b32a246 fix(core): stabilize editor lifecycle, transactional versions, and runtime config 2026-03-20 12:38:10 +03:00
greebo
0f9c2a1cbd feat(backend): add operational smoke tooling and safe pricing cleanup endpoints
- add backend README and refresh API map and smoke regression docs
- add full backend smoke regression script
- add admin pricing cleanup preview and dry-run endpoints
- add helper script for test pricing cleanup
- verify typed error contracts, draft flow, publish readiness and preview flows
- verify publish preview retention and clean backend startup behavior
2026-03-19 22:54:12 +03:00
greebo
127c5bff71 feat(backend): stabilize draft editor flow and complete smoke regression baseline
- add editor entry flow with editor context and ensure-draft bootstrap
- add draft summary read model and single-record draft read endpoints
- add typed draft, edit and publish conflicts with validation errors
- add pricing diagnostics and publish readiness endpoints
- fix Decimal serialization in seat price and test preview flows
- harden draft lifecycle guards for published vs draft current version
- update API map and smoke regression checklist
- add backend README and smoke regression script
2026-03-19 22:23:46 +03:00
greebo
77496dac46 feat(backend): add editor context, draft bootstrap flow and draft summary endpoints
- add editor context endpoint for published and draft state introspection
- add ensure-draft endpoint to create or reuse the current editable draft
- add draft summary endpoint with validation, diff and publish readiness summary
- align draft read endpoints with optimistic concurrency via expected_scheme_version_id
- fix startup, import and schema issues in editor entry flow
- update smoke regression coverage for editor entry flow
2026-03-19 22:04:31 +03:00
greebo
4c15f4c201 feat(backend): add editor context, draft flow bootstrap, and draft summary endpoints
add backend endpoints for editor context and draft summaries

ensure draft flow bootstrap for editor-driven workflows
improve draft-aware initialization and summary reads for clients
2026-03-19 21:47:38 +03:00
greebo
a266f56ddd feat(backend): harden draft, pricing and publish contracts
- unify typed API errors across draft, pricing and publish flows
- add stale draft and publish-state mutation guards
- add publish readiness contract and guarded publish flow
- add sellability reason codes to test seat preview
- add pricing diagnostics and strengthen snapshot/publish lifecycle consistency
2026-03-19 20:58:14 +03:00
greebo
ac3a62f108 feat(backend): add publish readiness contract and guarded publish flow
add backend readiness contract for publish prechecks

guard publish flow with explicit validation and version-aware checks
make publish behavior more predictable for clients and safer against stale state
2026-03-19 20:41:08 +03:00
greebo
8d4255181b feat(backend): add publish readiness contract and pricing diagnostics
add backend readiness contract for publish prechecks

add pricing diagnostics to explain publish-blocking conditions
make publish decisions more explicit and easier to debug for clients
2026-03-19 20:29:58 +03:00
greebo
7b6c12f924 feat(backend): add publish readiness endpoint and enforce publish gate contract
add backend endpoint for publish readiness checks

enforce publish gate contract before version publication
make publish preconditions explicit and consistent for clients
2026-03-19 20:15:48 +03:00
greebo
2af5e49b8c feat(backend): add pricing coverage, unpriced seats, and explain endpoints
add backend endpoints for pricing coverage analysis and unpriced seat inspection

add explain endpoint to make effective pricing decisions traceable
improve pricing diagnostics for admin and editor workflows
2026-03-19 20:10:14 +03:00
greebo
aab5a51654 feat(backend): add sellability reason codes and string price serialization to test seat preview
extend test seat preview with explicit sellability reason codes

serialize preview price amount as string for a stable API contract
improve diagnosis of non-sellable states for preview consumers
2026-03-19 20:07:19 +03:00
greebo
af175d88dd refactor(api): unify typed error contract across draft pricing and publish flows
standardize typed error responses across draft, pricing and publish endpoints

reduce contract drift between related flows
keep client-side handling more predictable and consistent
2026-03-19 19:54:42 +03:00
greebo
64ec1c5180 feat(backend): add draft validation and single-record draft read endpoints
add backend endpoint for draft validation

add single-record read endpoints for draft entities
support targeted draft inspection and version-aware validation flows
2026-03-19 19:51:21 +03:00
greebo
35fc170cef feat(backend): add single-record draft read endpoints
add backend read endpoints for single draft records

support direct fetch of individual draft entities
improve draft inspection and targeted editor workflows
2026-03-19 19:42:03 +03:00
greebo
56aadf848b feat(backend): add draft validation endpoint with stale version guard
add backend endpoint for draft validation

protect validation requests against stale version state
keep draft checks version-aware and consistent with mutation flows
2026-03-19 19:32:22 +03:00
greebo
d060828256 feat(backend): prevent duplicate draft sector and group bindings
reject duplicate sector and group bindings within draft mutations

harden draft structure consistency at the backend layer
prevent conflicting bindings before they reach publish flow
2026-03-19 19:29:00 +03:00
greebo
62550d5cb5 feat(backend): add stale draft guards and reference validation for draft mutations
add stale draft protection for mutation flows

validate referenced entities before applying draft changes
reduce invalid draft writes caused by stale state and broken references

keep mutation behavior explicit and version-aware
2026-03-19 19:25:44 +03:00
greebo
fbeac890be feat(backend): harden pricing mutation contract and sync backend docs
- add typed response schemas for pricing write endpoints
- add stale draft version guard for pricing mutations
- unify pricing API contract around expected_scheme_version_id
- update API route map
- add smoke regression checklist for backend routes and artifact flows
2026-03-19 19:11:33 +03:00
greebo
c7c9184a71 feat: add optimistic concurrency guards for draft editor, pricing and publish flows
add optimistic concurrency guards via expected scheme version id

protect draft editor, pricing snapshot, remap and publish flows from stale mutations
protect version creation from stale current version state

keep backward compatibility with optional query guards

verify 409 conflict behavior for stale clients and 200 for valid flows
2026-03-19 18:58:03 +03:00
59 changed files with 6597 additions and 649 deletions

backend/README.md (new file, 260 additions)

@@ -0,0 +1,260 @@
# svg-service backend
Backend for SVG scheme upload, draft editing, pricing, diagnostics, publish preview, and publish lifecycle.
## Stack
- Python 3.11
- FastAPI
- SQLAlchemy async
- PostgreSQL 16
- Docker Compose
## Runtime
Default backend port: `9020`
Health check:
- `GET /healthz`
Main API prefix:
- `/api/v1`
Auth header:
- `X-API-Key`
Default local admin key:
- `admin-local-dev-key`
## Core lifecycle
1. Upload SVG
2. Normalize and persist structure
3. Enter editor flow through context + ensure draft
4. Edit sectors / groups / seats in current draft
5. Configure pricing and inspect diagnostics
6. Build pricing snapshot
7. Inspect publish readiness and publish preview
8. Publish current draft
9. If editing is needed after publish, create or ensure a new draft again
## Main concepts
### Scheme
Top-level business entity.
### Scheme version
Concrete version of the scheme. A version can be `draft` or `published`.
### Current version
The version the scheme registry references as the active one.
### Draft
Editable current version. All editor mutations and draft pricing operations must target a current draft version only.
### Published version
Non-editable current version. If the current version is published, the editor flow must first create or ensure a new draft.
### Upload artifacts
Stored technical artifacts, including:
- original svg
- sanitized svg
- normalized json
- display svg
- publish preview json
## Editor entry flow
### 1. Inspect editor state
`GET /api/v1/schemes/{scheme_id}/editor/context`
The response indicates whether:
- current version is draft
- editor is available
- a new draft should be created
- recommended action is `use_current_draft` or `create_draft`
### 2. Ensure editable draft
`POST /api/v1/schemes/{scheme_id}/draft/ensure`
Behavior:
- if current version is already draft: returns it with `created=false`
- if current version is published: clones current version into a new current draft and returns it with `created=true`
The returned `scheme_version_id` should be reused as `expected_scheme_version_id` for draft reads and mutations.
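The two entry steps above can be sketched as a small client-side helper. This is an illustrative sketch, not shipped code: the response field names (`current_is_draft`, `current_scheme_version_id`, `recommended_action`) follow this README's contract, and the helper itself is hypothetical.

```python
def plan_editor_entry(context: dict) -> dict:
    """Decide the next editor step from the /editor/context response.

    Mirrors the recommended_action values documented above:
    reuse the current draft when one exists, otherwise bootstrap
    one via POST /draft/ensure (which clones the published version).
    """
    if context["current_is_draft"]:
        return {
            "action": "use_current_draft",
            "scheme_version_id": context["current_scheme_version_id"],
        }
    # No editable draft yet: the caller should POST /draft/ensure
    # and take scheme_version_id from that response instead.
    return {"action": "create_draft", "scheme_version_id": None}
```

In practice `POST /draft/ensure` is safe to call in both branches, since it reuses an existing draft with `created=false`.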
## Optimistic concurrency
Mutable draft flows support optimistic concurrency through query params:
- `expected_current_scheme_version_id`
- `expected_scheme_version_id`
Typical typed conflicts:
- `stale_current_version`
- `stale_draft_version`
- `draft_not_editable`
- `publish_not_ready`
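A client can route on these typed conflicts. The sketch below assumes a FastAPI-style error envelope (`{"detail": {"code": ...}}`) and treats only the stale-version codes as retryable after re-running `/draft/ensure`; both the envelope shape and the retry policy are assumptions, not backend guarantees.

```python
# Conflicts that a client can usually recover from by re-ensuring
# the draft and retrying with the fresh scheme_version_id.
RETRYABLE_CONFLICTS = {"stale_current_version", "stale_draft_version"}

def should_refresh_draft(status_code: int, body: dict) -> bool:
    """Return True when the client should re-run POST /draft/ensure and retry."""
    if status_code != 409:
        return False
    detail = body.get("detail") or {}
    # draft_not_editable and publish_not_ready need user action,
    # not a blind retry, so they are deliberately excluded here.
    return detail.get("code") in RETRYABLE_CONFLICTS
```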
## Main operator routes
### System
- `GET /healthz`
- `GET /api/v1/ping`
- `GET /api/v1/db/ping`
- `GET /api/v1/manifest`
### Uploads
- `POST /api/v1/schemes/upload`
- `GET /api/v1/uploads`
- `GET /api/v1/uploads/{upload_id}`
- `GET /api/v1/uploads/{upload_id}/normalized`
### Scheme registry
- `GET /api/v1/schemes`
- `GET /api/v1/schemes/{scheme_id}`
- `GET /api/v1/schemes/{scheme_id}/current`
- `GET /api/v1/schemes/{scheme_id}/versions`
- `POST /api/v1/schemes/{scheme_id}/versions`
- `GET /api/v1/schemes/{scheme_id}/publish/validation`
- `GET /api/v1/schemes/{scheme_id}/draft/publish-readiness`
- `POST /api/v1/schemes/{scheme_id}/publish`
- `POST /api/v1/schemes/{scheme_id}/unpublish`
- `POST /api/v1/schemes/{scheme_id}/rollback`
### Editor / draft
- `GET /api/v1/schemes/{scheme_id}/editor/context`
- `POST /api/v1/schemes/{scheme_id}/draft/ensure`
- `GET /api/v1/schemes/{scheme_id}/draft/summary`
- `GET /api/v1/schemes/{scheme_id}/draft/structure`
- `GET /api/v1/schemes/{scheme_id}/draft/validation`
- `GET /api/v1/schemes/{scheme_id}/draft/compare-preview`
- `GET /api/v1/schemes/{scheme_id}/draft/seats/records/{seat_record_id}`
- `GET /api/v1/schemes/{scheme_id}/draft/sectors/records/{sector_record_id}`
- `GET /api/v1/schemes/{scheme_id}/draft/groups/records/{group_record_id}`
- `POST /api/v1/schemes/{scheme_id}/draft/sectors`
- `POST /api/v1/schemes/{scheme_id}/draft/groups`
- `DELETE /api/v1/schemes/{scheme_id}/draft/sectors/records/{sector_record_id}`
- `DELETE /api/v1/schemes/{scheme_id}/draft/groups/records/{group_record_id}`
- `PATCH /api/v1/schemes/{scheme_id}/draft/seats/records/{seat_record_id}`
- `POST /api/v1/schemes/{scheme_id}/draft/seats/bulk`
- `PATCH /api/v1/schemes/{scheme_id}/draft/sectors/records/{sector_record_id}`
- `PATCH /api/v1/schemes/{scheme_id}/draft/groups/records/{group_record_id}`
- `POST /api/v1/schemes/{scheme_id}/draft/repair-references`
### Pricing
- `GET /api/v1/schemes/{scheme_id}/pricing`
- `POST /api/v1/schemes/{scheme_id}/pricing/categories`
- `PUT /api/v1/schemes/{scheme_id}/pricing/categories/{pricing_category_id}`
- `DELETE /api/v1/schemes/{scheme_id}/pricing/categories/{pricing_category_id}`
- `POST /api/v1/schemes/{scheme_id}/pricing/rules`
- `PUT /api/v1/schemes/{scheme_id}/pricing/rules/{price_rule_id}`
- `DELETE /api/v1/schemes/{scheme_id}/pricing/rules/{price_rule_id}`
### Pricing diagnostics
- `GET /api/v1/schemes/{scheme_id}/pricing/coverage`
- `GET /api/v1/schemes/{scheme_id}/pricing/unpriced-seats`
- `GET /api/v1/schemes/{scheme_id}/pricing/explain/{seat_id}`
- `GET /api/v1/schemes/{scheme_id}/pricing/rules/diagnostics`
### Publish preview
- `POST /api/v1/schemes/{scheme_id}/draft/pricing/snapshot`
- `GET /api/v1/schemes/{scheme_id}/draft/publish-preview`
- `POST /api/v1/schemes/{scheme_id}/draft/remap/preview`
- `POST /api/v1/schemes/{scheme_id}/draft/remap/apply`
### Structure read model
- `GET /api/v1/schemes/{scheme_id}/current/sectors`
- `GET /api/v1/schemes/{scheme_id}/current/groups`
- `GET /api/v1/schemes/{scheme_id}/current/seats`
- `GET /api/v1/schemes/{scheme_id}/current/seats/{seat_id}/price`
- `GET /api/v1/schemes/{scheme_id}/current/svg`
- `GET /api/v1/schemes/{scheme_id}/current/svg/display`
- `GET /api/v1/schemes/{scheme_id}/current/svg/display/meta`
### Test mode
- `GET /api/v1/schemes/{scheme_id}/test/seats/{seat_id}`
### Audit
- `GET /api/v1/schemes/{scheme_id}/audit`
### Admin / ops
- `GET /api/v1/admin/schemes/{scheme_id}/current/artifacts`
- `GET /api/v1/admin/schemes/{scheme_id}/current/validation`
- `POST /api/v1/admin/schemes/{scheme_id}/current/display/regenerate`
- `POST /api/v1/admin/display/backfill`
- `GET /api/v1/admin/artifacts/publish-preview/audit`
- `POST /api/v1/admin/artifacts/publish-preview/cleanup`
- `GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview`
- `POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup`
## Cleanup of test pricing data
Cleanup endpoints are intended for removing diagnostic / test categories accidentally accumulated in a shared scheme.
Preview candidates:
`GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview`
Execute cleanup:
`POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup`
Safety notes:
- use `dry_run=true` first
- keep `delete_only_without_rules=true` unless you intentionally want a harder cleanup
- prefer matching by prefixes instead of raw ids for repetitive test artifacts
Helper script:
- `backend/scripts/cleanup_test_pricing_data.sh`
Example:
`SCHEME_ID=... DRY_RUN=true ./backend/scripts/cleanup_test_pricing_data.sh`
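A request body for the cleanup POST can be assembled so the safety defaults above are hard to bypass accidentally. The field names match `PricingCleanupExecuteRequest` from this changeset; the helper itself is a hypothetical convenience, not part of the backend.

```python
def build_cleanup_payload(code_prefixes=None, *, dry_run=True,
                          delete_only_without_rules=True) -> dict:
    """Assemble the POST body for the pricing cleanup endpoint.

    dry_run and delete_only_without_rules default to True, following
    the safety notes above: a harder cleanup must be opted into.
    """
    return {
        "code_prefixes": list(code_prefixes or []),
        "name_prefixes": [],
        "pricing_category_ids": [],
        "delete_only_without_rules": delete_only_without_rules,
        "dry_run": dry_run,
    }
```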
## Typical local flow
### 1. Read current version
`GET /api/v1/schemes/{scheme_id}/current`
### 2. Ensure draft
`POST /api/v1/schemes/{scheme_id}/draft/ensure`
Store returned:
- `scheme_version_id`
### 3. Read draft state
- `GET /draft/summary?expected_scheme_version_id=...`
- `GET /draft/structure?expected_scheme_version_id=...`
- `GET /draft/validation?expected_scheme_version_id=...`
- `GET /draft/compare-preview?expected_scheme_version_id=...`
### 4. Perform editor mutations
Pass:
- `expected_scheme_version_id={draft_scheme_version_id}`
on every mutation route.
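Appending the guard to every mutation URL is easy to get wrong when a route already has query parameters; a small URL helper (illustrative, stdlib-only) keeps it uniform:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def with_version_guard(url: str, draft_scheme_version_id: str) -> str:
    """Append expected_scheme_version_id to a mutation URL,
    preserving any query parameters already present."""
    parts = urlsplit(url)
    query = parse_qsl(parts.query)
    query.append(("expected_scheme_version_id", draft_scheme_version_id))
    return urlunsplit(parts._replace(query=urlencode(query)))
```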
### 5. Inspect pricing quality
- `GET /pricing/coverage`
- `GET /pricing/unpriced-seats`
- `GET /pricing/explain/{seat_id}`
- `GET /pricing/rules/diagnostics`
### 6. Build snapshot and inspect readiness
- `POST /draft/pricing/snapshot`
- `GET /draft/publish-readiness`
- `GET /draft/publish-preview?refresh=true`
### 7. Publish
- `POST /publish?expected_scheme_version_id=...`
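A client may want to gate the final publish call on the readiness response from step 6. The field names below (`ready`, `blocking_reasons`) are assumptions for illustration only and are not taken from the backend schemas; check the actual `publish-readiness` payload before relying on them.

```python
def can_publish(readiness: dict) -> bool:
    """Gate a publish call on a readiness payload.

    Hypothetical field names: assumes a boolean "ready" flag plus a
    list of "blocking_reasons"; adapt to the real contract.
    """
    return bool(readiness.get("ready")) and not readiness.get("blocking_reasons")
```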
## Regression
Main operator regressions:
- `backend/scripts/smoke_regression.sh`
- `backend/scripts/editor_mutation_regression.sh`
Run:
`API_URL=http://127.0.0.1:9020 API_KEY=admin-local-dev-key SCHEME_ID=... ./backend/scripts/smoke_regression.sh`
`API_URL=http://127.0.0.1:9020 API_KEY=admin-local-dev-key SCHEME_ID=... ./backend/scripts/editor_mutation_regression.sh`

@@ -1,9 +1,11 @@
from fastapi import APIRouter
from app.api.routes.admin import router as admin_router
from app.api.routes.admin_cleanup import router as admin_cleanup_router
from app.api.routes.audit import router as audit_router
from app.api.routes.editor import router as editor_router
from app.api.routes.pricing import router as pricing_router
from app.api.routes.pricing_diagnostics import router as pricing_diagnostics_router
from app.api.routes.publish import router as publish_router
from app.api.routes.schemes import router as schemes_router
from app.api.routes.structure import router as structure_router
@@ -17,8 +19,10 @@ router.include_router(uploads_router)
router.include_router(schemes_router)
router.include_router(structure_router)
router.include_router(pricing_router)
router.include_router(pricing_diagnostics_router)
router.include_router(test_mode_router)
router.include_router(audit_router)
router.include_router(admin_router)
router.include_router(admin_cleanup_router)
router.include_router(editor_router)
router.include_router(publish_router)

@@ -5,7 +5,7 @@ from app.repositories.scheme_artifacts import artifact_exists, list_scheme_artif
from app.repositories.scheme_versions import get_current_scheme_version
from app.repositories.schemes import get_scheme_record_by_scheme_id, list_scheme_records
from app.repositories.uploads import get_upload_record_by_upload_id
from app.security.auth import require_api_key
from app.security.auth import require_admin_api_key
from app.services.artifact_maintenance import (
cleanup_publish_preview_storage,
inspect_publish_preview_storage,
@@ -19,7 +19,7 @@ router = APIRouter()
@router.get(f"{settings.api_v1_prefix}/admin/schemes/{{scheme_id}}/current/artifacts")
async def list_current_scheme_artifacts(
scheme_id: str,
role: str = Depends(require_api_key),
role: str = Depends(require_admin_api_key),
):
scheme = await get_scheme_record_by_scheme_id(scheme_id)
version = await get_current_scheme_version(
@@ -50,7 +50,7 @@ async def list_current_scheme_artifacts(
@router.get(f"{settings.api_v1_prefix}/admin/schemes/{{scheme_id}}/current/validation")
async def validate_current_scheme(
scheme_id: str,
role: str = Depends(require_api_key),
role: str = Depends(require_admin_api_key),
):
scheme = await get_scheme_record_by_scheme_id(scheme_id)
version = await get_current_scheme_version(
@@ -74,7 +74,7 @@ async def validate_current_scheme(
async def regenerate_current_display(
scheme_id: str,
mode: str = Query(default="passthrough"),
role: str = Depends(require_api_key),
role: str = Depends(require_admin_api_key),
):
scheme = await get_scheme_record_by_scheme_id(scheme_id)
version = await get_current_scheme_version(
@@ -98,7 +98,7 @@ async def bulk_backfill_display_artifacts(
mode: str = Query(default="passthrough"),
limit: int = Query(default=100, ge=1, le=1000),
only_missing: bool = Query(default=True),
role: str = Depends(require_api_key),
role: str = Depends(require_admin_api_key),
):
schemes = await list_scheme_records(limit=limit, offset=0)
@@ -168,7 +168,7 @@ async def bulk_backfill_display_artifacts(
@router.get(f"{settings.api_v1_prefix}/admin/artifacts/publish-preview/audit")
async def audit_publish_preview_storage(
role: str = Depends(require_api_key),
role: str = Depends(require_admin_api_key),
):
return await inspect_publish_preview_storage()
@@ -176,6 +176,6 @@ async def audit_publish_preview_storage(
@router.post(f"{settings.api_v1_prefix}/admin/artifacts/publish-preview/cleanup")
async def cleanup_publish_preview_artifacts_endpoint(
dry_run: bool = Query(default=True),
role: str = Depends(require_api_key),
role: str = Depends(require_admin_api_key),
):
return await cleanup_publish_preview_storage(dry_run=dry_run)

@@ -0,0 +1,55 @@
from fastapi import APIRouter, Depends, Query
from app.core.config import settings
from app.schemas.admin_cleanup import (
PricingCleanupExecuteRequest,
PricingCleanupExecuteResponse,
PricingCleanupPreviewResponse,
)
from app.security.auth import require_admin_api_key
from app.services.pricing_cleanup import (
build_pricing_cleanup_preview,
execute_pricing_cleanup,
)
router = APIRouter()
@router.get(
f"{settings.api_v1_prefix}/admin/schemes/{{scheme_id}}/pricing/categories/cleanup-preview",
response_model=PricingCleanupPreviewResponse,
)
async def get_pricing_cleanup_preview(
scheme_id: str,
code_prefix: list[str] = Query(default_factory=list),
name_prefix: list[str] = Query(default_factory=list),
pricing_category_id: list[str] = Query(default_factory=list),
delete_only_without_rules: bool = Query(default=True),
role: str = Depends(require_admin_api_key),
):
return await build_pricing_cleanup_preview(
scheme_id=scheme_id,
code_prefixes=code_prefix,
name_prefixes=name_prefix,
pricing_category_ids=pricing_category_id,
delete_only_without_rules=delete_only_without_rules,
)
@router.post(
f"{settings.api_v1_prefix}/admin/schemes/{{scheme_id}}/pricing/categories/cleanup",
response_model=PricingCleanupExecuteResponse,
)
async def post_pricing_cleanup(
scheme_id: str,
payload: PricingCleanupExecuteRequest,
role: str = Depends(require_admin_api_key),
):
return await execute_pricing_cleanup(
scheme_id=scheme_id,
code_prefixes=payload.code_prefixes,
name_prefixes=payload.name_prefixes,
pricing_category_ids=payload.pricing_category_ids,
delete_only_without_rules=payload.delete_only_without_rules,
dry_run=payload.dry_run,
)

@@ -1,10 +1,11 @@
from fastapi import APIRouter, Depends
from fastapi import APIRouter, Depends, Query, Request
from app.core.config import settings
from app.repositories.audit import create_audit_event
from app.repositories.scheme_groups import (
create_scheme_version_group,
delete_scheme_version_group_by_record_id,
get_scheme_version_group_by_record_id,
list_scheme_version_groups,
update_scheme_version_group_by_record_id,
)
@@ -12,15 +13,19 @@ from app.repositories.scheme_seats import (
bulk_update_scheme_version_seats_by_record_id,
cascade_update_seat_group_reference,
cascade_update_seat_sector_reference,
get_scheme_version_seat_by_record_id,
list_scheme_version_seats,
update_scheme_version_seat_by_record_id,
)
from app.repositories.scheme_sectors import (
create_scheme_version_sector,
delete_scheme_version_sector_by_record_id,
get_scheme_version_sector_by_record_id,
list_scheme_version_sectors,
update_scheme_version_sector_by_record_id,
)
from app.repositories.scheme_versions import get_current_scheme_version
from app.repositories.schemes import get_scheme_record_by_scheme_id
from app.schemas.editor import (
BulkSeatPatchRequest,
BulkSeatPatchResponse,
@@ -34,6 +39,8 @@ from app.schemas.editor import (
DraftSeatItem,
DraftSectorItem,
DraftStructureResponse,
DraftSummaryResponse,
EditorContextResponse,
GroupPatchRequest,
GroupPatchResponse,
RepairReferencesResponse,
@@ -47,23 +54,148 @@ from app.schemas.editor import (
from app.security.auth import require_api_key
from app.services.draft_guard import get_current_draft_context
from app.services.editor_validation import (
validate_bulk_seat_patch_references,
validate_bulk_seat_patch_uniqueness,
validate_group_patch_uniqueness,
validate_sector_patch_uniqueness,
validate_single_seat_patch_references,
validate_single_seat_patch_uniqueness,
)
from app.services.publish_readiness import build_publish_readiness
from app.services.scheme_validation import build_scheme_validation_report
from app.services.structure_diff import build_structure_diff
from app.services.structure_sync import repair_structure_references
router = APIRouter()
@router.get(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/structure", response_model=DraftStructureResponse)
async def get_draft_structure(
def _seat_item(row) -> DraftSeatItem:
return DraftSeatItem(
seat_record_id=row.seat_record_id,
scheme_id=row.scheme_id,
scheme_version_id=row.scheme_version_id,
element_id=row.element_id,
seat_id=row.seat_id,
sector_id=row.sector_id,
group_id=row.group_id,
row_label=row.row_label,
seat_number=row.seat_number,
tag=row.tag,
classes_raw=row.classes_raw,
x=row.x,
y=row.y,
cx=row.cx,
cy=row.cy,
width=row.width,
height=row.height,
created_at=row.created_at.isoformat(),
)
def _sector_item(row) -> DraftSectorItem:
return DraftSectorItem(
sector_record_id=row.sector_record_id,
scheme_id=row.scheme_id,
scheme_version_id=row.scheme_version_id,
element_id=row.element_id,
sector_id=row.sector_id,
name=row.name,
classes_raw=row.classes_raw,
created_at=row.created_at.isoformat(),
)
def _group_item(row) -> DraftGroupItem:
return DraftGroupItem(
group_record_id=row.group_record_id,
scheme_id=row.scheme_id,
scheme_version_id=row.scheme_version_id,
element_id=row.element_id,
group_id=row.group_id,
name=row.name,
classes_raw=row.classes_raw,
created_at=row.created_at.isoformat(),
)
@router.get(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/editor/context", response_model=EditorContextResponse)
async def get_editor_context(
scheme_id: str,
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(scheme_id)
scheme = await get_scheme_record_by_scheme_id(scheme_id)
version = await get_current_scheme_version(
scheme_id=scheme.scheme_id,
current_version_number=scheme.current_version_number,
)
current_is_draft = scheme.status == "draft" and version.status == "draft"
return EditorContextResponse(
scheme_id=scheme.scheme_id,
current_scheme_version_id=version.scheme_version_id,
current_version_number=version.version_number,
scheme_status=scheme.status,
scheme_version_status=version.status,
editor_available=True,
current_is_draft=current_is_draft,
create_draft_available=not current_is_draft,
recommended_action="use_current_draft" if current_is_draft else "create_draft",
)
@router.get(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/summary", response_model=DraftSummaryResponse)
async def get_draft_summary(
scheme_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
seats = await list_scheme_version_seats(version.scheme_version_id)
sectors = await list_scheme_version_sectors(version.scheme_version_id)
groups = await list_scheme_version_groups(version.scheme_version_id)
validation = await build_scheme_validation_report(
scheme_id=scheme.scheme_id,
scheme_version_id=version.scheme_version_id,
)
structure_diff = await build_structure_diff(
scheme_id=scheme.scheme_id,
draft_scheme_version_id=version.scheme_version_id,
)
readiness = await build_publish_readiness(
scheme_id=scheme.scheme_id,
scheme_version_id=version.scheme_version_id,
status=version.status,
)
return DraftSummaryResponse(
scheme_id=scheme.scheme_id,
scheme_version_id=version.scheme_version_id,
status=version.status,
total_seats=len(seats),
total_sectors=len(sectors),
total_groups=len(groups),
validation_summary=validation["summary"],
structure_diff_summary=structure_diff["summary"],
publish_readiness=readiness["readiness"],
)
@router.get(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/structure", response_model=DraftStructureResponse)
async def get_draft_structure(
scheme_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
seats = await list_scheme_version_seats(version.scheme_version_id)
sectors = await list_scheme_version_sectors(version.scheme_version_id)
groups = await list_scheme_version_groups(version.scheme_version_id)
@@ -72,67 +204,47 @@ async def get_draft_structure(
scheme_id=scheme.scheme_id,
scheme_version_id=version.scheme_version_id,
status=version.status,
seats=[
DraftSeatItem(
seat_record_id=row.seat_record_id,
scheme_id=row.scheme_id,
scheme_version_id=row.scheme_version_id,
element_id=row.element_id,
seat_id=row.seat_id,
sector_id=row.sector_id,
group_id=row.group_id,
row_label=row.row_label,
seat_number=row.seat_number,
tag=row.tag,
classes_raw=row.classes_raw,
x=row.x,
y=row.y,
cx=row.cx,
cy=row.cy,
width=row.width,
height=row.height,
created_at=row.created_at.isoformat(),
)
for row in seats
],
sectors=[
DraftSectorItem(
sector_record_id=row.sector_record_id,
scheme_id=row.scheme_id,
scheme_version_id=row.scheme_version_id,
element_id=row.element_id,
sector_id=row.sector_id,
name=row.name,
classes_raw=row.classes_raw,
created_at=row.created_at.isoformat(),
)
for row in sectors
],
groups=[
DraftGroupItem(
group_record_id=row.group_record_id,
scheme_id=row.scheme_id,
scheme_version_id=row.scheme_version_id,
element_id=row.element_id,
group_id=row.group_id,
name=row.name,
classes_raw=row.classes_raw,
created_at=row.created_at.isoformat(),
)
for row in groups
],
seats=[_seat_item(row) for row in seats],
sectors=[_sector_item(row) for row in sectors],
groups=[_group_item(row) for row in groups],
total_seats=len(seats),
total_sectors=len(sectors),
total_groups=len(groups),
)
@router.get(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/validation")
async def get_draft_validation(
scheme_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
report = await build_scheme_validation_report(
scheme_id=scheme.scheme_id,
scheme_version_id=version.scheme_version_id,
)
return {
"scheme_id": scheme.scheme_id,
"scheme_version_id": version.scheme_version_id,
"status": version.status,
"report": report,
}
@router.get(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/compare-preview", response_model=StructureDiffResponse)
async def get_draft_compare_preview(
scheme_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(scheme_id)
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
diff = await build_structure_diff(
scheme_id=scheme.scheme_id,
draft_scheme_version_id=version.scheme_version_id,
@@ -148,13 +260,91 @@ async def get_draft_compare_preview(
)
@router.get(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/seats/records/{{seat_record_id}}", response_model=DraftSeatItem)
async def get_draft_seat_by_record_id(
scheme_id: str,
seat_record_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
_scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
row = await get_scheme_version_seat_by_record_id(
scheme_version_id=version.scheme_version_id,
seat_record_id=seat_record_id,
)
return _seat_item(row)
@router.get(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/sectors/records/{{sector_record_id}}", response_model=DraftSectorItem)
async def get_draft_sector_by_record_id(
scheme_id: str,
sector_record_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
_scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
row = await get_scheme_version_sector_by_record_id(
scheme_version_id=version.scheme_version_id,
sector_record_id=sector_record_id,
)
return _sector_item(row)
@router.get(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/groups/records/{{group_record_id}}", response_model=DraftGroupItem)
async def get_draft_group_by_record_id(
scheme_id: str,
group_record_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
_scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
row = await get_scheme_version_group_by_record_id(
scheme_version_id=version.scheme_version_id,
group_record_id=group_record_id,
)
return _group_item(row)
@router.post(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/sectors", response_model=CreateSectorResponse)
async def create_draft_sector(
scheme_id: str,
payload: CreateSectorRequest,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
await validate_sector_patch_uniqueness(
scheme_version_id=version.scheme_version_id,
sector_record_id="__create__",
new_sector_id=payload.sector_id,
)
existing = await list_scheme_version_sectors(version.scheme_version_id)
for row in existing:
if payload.element_id and row.element_id == payload.element_id:
from fastapi import HTTPException, status
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
detail={
"code": "duplicate_sector_element_id",
"message": "Sector element binding already exists in current draft version",
"element_id": payload.element_id,
"conflict_sector_record_id": row.sector_record_id,
},
)
row = await create_scheme_version_sector(
scheme_id=scheme.scheme_id,
@@ -191,9 +381,33 @@ async def create_draft_sector(
async def create_draft_group(
scheme_id: str,
payload: CreateGroupRequest,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
await validate_group_patch_uniqueness(
scheme_version_id=version.scheme_version_id,
group_record_id="__create__",
new_group_id=payload.group_id,
)
existing = await list_scheme_version_groups(version.scheme_version_id)
for row in existing:
if payload.element_id and row.element_id == payload.element_id:
from fastapi import HTTPException, status
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
detail={
"code": "duplicate_group_element_id",
"message": "Group element binding already exists in current draft version",
"element_id": payload.element_id,
"conflict_group_record_id": row.group_record_id,
},
)
row = await create_scheme_version_group(
scheme_id=scheme.scheme_id,
@@ -230,9 +444,13 @@ async def create_draft_group(
async def delete_draft_sector(
scheme_id: str,
sector_record_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
await delete_scheme_version_sector_by_record_id(
scheme_version_id=version.scheme_version_id,
@@ -259,9 +477,13 @@ async def delete_draft_sector(
async def delete_draft_group(
scheme_id: str,
group_record_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
await delete_scheme_version_group_by_record_id(
scheme_version_id=version.scheme_version_id,
@@ -286,27 +508,43 @@ async def delete_draft_group(
@router.patch(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/seats/records/{{seat_record_id}}", response_model=SeatPatchResponse)
async def patch_draft_seat(
request: Request,
scheme_id: str,
seat_record_id: str,
payload: SeatPatchRequest,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
await validate_single_seat_patch_uniqueness(
scheme_version_id=version.scheme_version_id,
seat_record_id=seat_record_id,
new_seat_id=payload.seat_id,
)
await validate_single_seat_patch_references(
scheme_version_id=version.scheme_version_id,
sector_id=payload.sector_id,
group_id=payload.group_id,
)
raw_json = await request.json()
update_data = {k: v for k, v in payload.model_dump(exclude_unset=True).items() if k in raw_json}
for field in ("seat_id", "sector_id", "group_id"):
if field in update_data and (update_data[field] is None or update_data[field] == ""):
from app.services.api_errors import raise_unprocessable
raise_unprocessable(
code="business_identifier_nullification_forbidden",
message=f"{field} cannot be nullified or explicitly cleared",
)
row = await update_scheme_version_seat_by_record_id(
scheme_version_id=version.scheme_version_id,
seat_record_id=seat_record_id,
**update_data,
)
await create_audit_event(
@@ -338,17 +576,39 @@ async def patch_draft_seat(
@router.post(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/seats/bulk", response_model=BulkSeatPatchResponse)
async def bulk_patch_draft_seats(
request: Request,
scheme_id: str,
payload: BulkSeatPatchRequest,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
raw_json = await request.json()
items = []
for i, item in enumerate(payload.items):
item_raw = raw_json.get("items", [])[i] if "items" in raw_json else {}
items.append({k: item.model_dump(exclude_unset=True).get(k) for k in item_raw})
for item in items:
for field in ("seat_id", "sector_id", "group_id"):
if field in item and (item[field] is None or item[field] == ""):
from app.services.api_errors import raise_unprocessable
raise_unprocessable(
code="business_identifier_nullification_forbidden",
message=f"{field} cannot be nullified or explicitly cleared",
)
await validate_bulk_seat_patch_uniqueness(
scheme_version_id=version.scheme_version_id,
items=items,
)
await validate_bulk_seat_patch_references(
scheme_version_id=version.scheme_version_id,
items=items,
)
rows = await bulk_update_scheme_version_seats_by_record_id(
scheme_version_id=version.scheme_version_id,
@@ -386,12 +646,17 @@ async def bulk_patch_draft_seats(
@router.patch(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/sectors/records/{{sector_record_id}}", response_model=SectorPatchResponse)
async def patch_draft_sector(
request: Request,
scheme_id: str,
sector_record_id: str,
payload: SectorPatchRequest,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
await validate_sector_patch_uniqueness(
scheme_version_id=version.scheme_version_id,
@@ -399,20 +664,28 @@ async def patch_draft_sector(
new_sector_id=payload.sector_id,
)
raw_json = await request.json()
update_data = {k: v for k, v in payload.model_dump(exclude_unset=True).items() if k in raw_json}
for field in ("sector_id",):
if field in update_data and (update_data[field] is None or update_data[field] == ""):
from app.services.api_errors import raise_unprocessable
raise_unprocessable(
code="business_identifier_nullification_forbidden",
message=f"{field} cannot be nullified or explicitly cleared",
)
row, old_sector_id = await update_scheme_version_sector_by_record_id(
scheme_version_id=version.scheme_version_id,
sector_record_id=sector_record_id,
**update_data,
)
cascaded_count = 0
if "sector_id" in update_data and update_data["sector_id"] and update_data["sector_id"] != old_sector_id:
cascaded_count = await cascade_update_seat_sector_reference(
scheme_version_id=version.scheme_version_id,
old_sector_id=old_sector_id,
new_sector_id=update_data["sector_id"],
)
repair_result = await repair_structure_references(
scheme_version_id=version.scheme_version_id,
)
await create_audit_event(
scheme_id=scheme.scheme_id,
@@ -421,10 +694,10 @@ async def patch_draft_sector(
object_ref=sector_record_id,
details={
"scheme_version_id": version.scheme_version_id,
"old_sector_id": old_sector_id,
"new_sector_id": payload.sector_id,
"name": payload.name,
"cascaded_seats_count": cascaded_count,
"repair_result": repair_result,
},
)
@@ -439,12 +712,17 @@ async def patch_draft_sector(
@router.patch(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/groups/records/{{group_record_id}}", response_model=GroupPatchResponse)
async def patch_draft_group(
request: Request,
scheme_id: str,
group_record_id: str,
payload: GroupPatchRequest,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
await validate_group_patch_uniqueness(
scheme_version_id=version.scheme_version_id,
@@ -452,20 +730,28 @@ async def patch_draft_group(
new_group_id=payload.group_id,
)
raw_json = await request.json()
update_data = {k: v for k, v in payload.model_dump(exclude_unset=True).items() if k in raw_json}
for field in ("group_id",):
if field in update_data and (update_data[field] is None or update_data[field] == ""):
from app.services.api_errors import raise_unprocessable
raise_unprocessable(
code="business_identifier_nullification_forbidden",
message=f"{field} cannot be nullified or explicitly cleared",
)
row, old_group_id = await update_scheme_version_group_by_record_id(
scheme_version_id=version.scheme_version_id,
group_record_id=group_record_id,
**update_data,
)
cascaded_count = 0
if "group_id" in update_data and update_data["group_id"] and update_data["group_id"] != old_group_id:
cascaded_count = await cascade_update_seat_group_reference(
scheme_version_id=version.scheme_version_id,
old_group_id=old_group_id,
new_group_id=update_data["group_id"],
)
repair_result = await repair_structure_references(
scheme_version_id=version.scheme_version_id,
)
await create_audit_event(
scheme_id=scheme.scheme_id,
@@ -474,10 +760,10 @@ async def patch_draft_group(
object_ref=group_record_id,
details={
"scheme_version_id": version.scheme_version_id,
"old_group_id": old_group_id,
"new_group_id": payload.group_id,
"name": payload.name,
"cascaded_seats_count": cascaded_count,
"repair_result": repair_result,
},
)
@@ -493,9 +779,14 @@ async def patch_draft_group(
@router.post(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/repair-references", response_model=RepairReferencesResponse)
async def repair_draft_references(
scheme_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
result = await repair_structure_references(
scheme_version_id=version.scheme_version_id,
)
@@ -511,7 +802,7 @@ async def repair_draft_references(
return RepairReferencesResponse(
scheme_id=scheme.scheme_id,
scheme_version_id=version.scheme_version_id,
repaired_sector_refs_count=result.get("repaired_sector_refs_count", 0),
repaired_group_refs_count=result.get("repaired_group_refs_count", 0),
details=result,
)
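The PATCH handlers above distinguish fields the client omitted from fields it explicitly sent as null by intersecting the parsed payload with the raw request JSON, then refusing to clear business identifiers. A minimal framework-free sketch of that filter; the helper names here are illustrative, not from the codebase:

```python
def build_update_data(raw_json: dict, parsed: dict) -> dict:
    # Keep only keys the client actually sent in the request body;
    # fields merely absent from the parsed model are not explicit nulls.
    return {k: v for k, v in parsed.items() if k in raw_json}


def reject_identifier_clearing(update_data: dict,
                               fields: tuple = ("seat_id", "sector_id", "group_id")) -> None:
    # Business identifiers may be changed but never nullified or blanked.
    for field in fields:
        if field in update_data and (update_data[field] is None or update_data[field] == ""):
            raise ValueError(f"{field} cannot be nullified or explicitly cleared")


# A client that sends only seat_id leaves sector_id untouched:
update = build_update_data({"seat_id": "A-12"}, {"seat_id": "A-12", "sector_id": None})
reject_identifier_clearing(update)  # passes: seat_id is non-empty
```

The same guard rejects an explicit `"seat_id": null` or `"seat_id": ""` in the body, which is what the typed `business_identifier_nullification_forbidden` error above expresses.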

View File

@@ -1,6 +1,6 @@
from decimal import Decimal
from fastapi import APIRouter, Depends, Query
from app.core.config import settings
from app.repositories.audit import create_audit_event
@@ -14,9 +14,6 @@ from app.repositories.pricing import (
update_price_rule,
update_pricing_category,
)
from app.schemas.pricing import (
PriceRuleCreateRequest,
PriceRuleItem,
@@ -27,27 +24,27 @@ from app.schemas.pricing import (
PricingCategoryUpdateRequest,
)
from app.security.auth import require_api_key
from app.services.api_errors import raise_unprocessable
from app.services.draft_guard import get_current_draft_context
router = APIRouter()
async def _require_current_draft(
scheme_id: str,
expected_scheme_version_id: str | None,
):
return await get_current_draft_context(
scheme_id=scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
@router.get(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/pricing", response_model=PricingBundleResponse)
async def get_pricing_bundle(
scheme_id: str,
role: str = Depends(require_api_key),
):
categories = await list_pricing_categories(scheme_id)
rules = await list_price_rules(scheme_id)
@@ -82,26 +79,35 @@ async def get_pricing_bundle(scheme_id: str, role: str = Depends(require_api_key
async def create_pricing_category_endpoint(
scheme_id: str,
payload: PricingCategoryCreateRequest,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await _require_current_draft(
scheme_id=scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
pricing_category_id = await create_pricing_category(
scheme_id=scheme.scheme_id,
name=payload.name,
code=payload.code,
)
await create_audit_event(
scheme_id=scheme.scheme_id,
event_type="pricing.category.created",
object_type="pricing_category",
object_ref=pricing_category_id,
details={
"scheme_version_id": version.scheme_version_id,
"name": payload.name,
"code": payload.code,
},
)
return {
"pricing_category_id": pricing_category_id,
"scheme_id": scheme.scheme_id,
"name": payload.name,
"code": payload.code,
}
@@ -112,22 +118,31 @@ async def update_pricing_category_endpoint(
scheme_id: str,
pricing_category_id: str,
payload: PricingCategoryUpdateRequest,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await _require_current_draft(
scheme_id=scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
row = await update_pricing_category(
scheme_id=scheme.scheme_id,
pricing_category_id=pricing_category_id,
name=payload.name,
code=payload.code,
)
await create_audit_event(
scheme_id=scheme.scheme_id,
event_type="pricing.category.updated",
object_type="pricing_category",
object_ref=pricing_category_id,
details={
"scheme_version_id": version.scheme_version_id,
"name": row.name,
"code": row.code,
},
)
return {
@@ -142,71 +157,85 @@ async def update_pricing_category_endpoint(
async def delete_pricing_category_endpoint(
scheme_id: str,
pricing_category_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await _require_current_draft(
scheme_id=scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
await delete_pricing_category(
scheme_id=scheme.scheme_id,
pricing_category_id=pricing_category_id,
)
await create_audit_event(
scheme_id=scheme.scheme_id,
event_type="pricing.category.deleted",
object_type="pricing_category",
object_ref=pricing_category_id,
details={"scheme_version_id": version.scheme_version_id},
)
return {
"deleted": True,
"pricing_category_id": pricing_category_id,
}
@router.post(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/pricing/rules")
async def create_price_rule_endpoint(
scheme_id: str,
payload: PriceRuleCreateRequest,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await _require_current_draft(
scheme_id=scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
try:
amount = Decimal(payload.amount)
except Exception:
raise_unprocessable(
code="invalid_amount",
message="Invalid amount",
details={"amount": payload.amount},
)
price_rule_id = await create_price_rule(
scheme_id=scheme.scheme_id,
pricing_category_id=payload.pricing_category_id,
target_type=payload.target_type,
target_ref=payload.target_ref,
amount=amount,
currency=payload.currency,
)
await create_audit_event(
scheme_id=scheme.scheme_id,
event_type="pricing.rule.created",
object_type="price_rule",
object_ref=price_rule_id,
details={
"scheme_version_id": version.scheme_version_id,
"pricing_category_id": payload.pricing_category_id,
"target_type": payload.target_type,
"target_ref": payload.target_ref,
"amount": str(amount),
"currency": payload.currency,
},
)
return {
"price_rule_id": price_rule_id,
"scheme_id": scheme.scheme_id,
"pricing_category_id": payload.pricing_category_id,
"target_type": payload.target_type,
"target_ref": payload.target_ref,
"amount": str(amount),
"currency": payload.currency,
}
@@ -216,18 +245,25 @@ async def update_price_rule_endpoint(
scheme_id: str,
price_rule_id: str,
payload: PriceRuleUpdateRequest,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await _require_current_draft(
scheme_id=scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
try:
amount = Decimal(payload.amount)
except Exception:
raise_unprocessable(
code="invalid_amount",
message="Invalid amount",
details={"amount": payload.amount},
)
row = await update_price_rule(
scheme_id=scheme.scheme_id,
price_rule_id=price_rule_id,
pricing_category_id=payload.pricing_category_id,
target_type=payload.target_type,
@@ -235,20 +271,19 @@ async def update_price_rule_endpoint(
amount=amount,
currency=payload.currency,
)
await create_audit_event(
scheme_id=scheme.scheme_id,
event_type="pricing.rule.updated",
object_type="price_rule",
object_ref=price_rule_id,
details={
"scheme_version_id": version.scheme_version_id,
"pricing_category_id": row.pricing_category_id,
"target_type": row.target_type,
"target_ref": row.target_ref,
"amount": str(row.amount),
"currency": row.currency,
},
)
@@ -267,20 +302,28 @@ async def update_price_rule_endpoint(
async def delete_price_rule_endpoint(
scheme_id: str,
price_rule_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await _require_current_draft(
scheme_id=scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
await delete_price_rule(
scheme_id=scheme.scheme_id,
price_rule_id=price_rule_id,
)
await create_audit_event(
scheme_id=scheme.scheme_id,
event_type="pricing.rule.deleted",
object_type="price_rule",
object_ref=price_rule_id,
details={"scheme_version_id": version.scheme_version_id},
)
return {
"deleted": True,
"price_rule_id": price_rule_id,
}
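The price-rule endpoints above parse `amount` with `Decimal` and map any parse failure to a typed `invalid_amount` error. A standalone sketch of that validation; `parse_amount` is an illustrative name, not a function from the codebase:

```python
from decimal import Decimal, InvalidOperation


def parse_amount(raw) -> Decimal:
    # Decimal preserves strings like "10.50" exactly, avoiding binary
    # float rounding; anything unparsable becomes a typed validation error.
    try:
        return Decimal(raw)
    except (InvalidOperation, TypeError, ValueError):
        raise ValueError("invalid_amount")
```

Catching `InvalidOperation` specifically (rather than a bare `Exception`) keeps unrelated failures from being reported as a bad amount.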

View File

@@ -0,0 +1,196 @@
from fastapi import APIRouter, Depends
from app.core.config import settings
from app.repositories.pricing import find_effective_price_rule
from app.repositories.scheme_seats import get_scheme_version_seat_by_seat_id, list_scheme_version_seats
from app.repositories.scheme_versions import get_current_scheme_version
from app.repositories.schemes import get_scheme_record_by_scheme_id
from app.schemas.pricing_diagnostics import PricingRuleDiagnosticsResponse
from app.security.auth import require_api_key
from app.services.pricing_rule_diagnostics import build_pricing_rule_diagnostics
router = APIRouter()
@router.get(
f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/pricing/rules/diagnostics",
response_model=PricingRuleDiagnosticsResponse,
)
async def get_pricing_rule_diagnostics(
scheme_id: str,
role: str = Depends(require_api_key),
):
scheme = await get_scheme_record_by_scheme_id(scheme_id)
version = await get_current_scheme_version(
scheme_id=scheme.scheme_id,
current_version_number=scheme.current_version_number,
)
payload = await build_pricing_rule_diagnostics(
scheme_id=scheme.scheme_id,
scheme_version_id=version.scheme_version_id,
)
return PricingRuleDiagnosticsResponse(**payload)
@router.get(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/pricing/coverage")
async def get_pricing_coverage(
scheme_id: str,
role: str = Depends(require_api_key),
):
scheme = await get_scheme_record_by_scheme_id(scheme_id)
version = await get_current_scheme_version(
scheme_id=scheme.scheme_id,
current_version_number=scheme.current_version_number,
)
seats = await list_scheme_version_seats(version.scheme_version_id)
priced = 0
unpriced = 0
for seat in seats:
if not seat.seat_id:
unpriced += 1
continue
try:
await find_effective_price_rule(
scheme_id=scheme.scheme_id,
seat_id=seat.seat_id,
group_id=seat.group_id,
sector_id=seat.sector_id,
)
priced += 1
except Exception:
unpriced += 1
total = len(seats)
coverage_percent = round((priced / total) * 100, 2) if total else 100.0
return {
"scheme_id": scheme.scheme_id,
"scheme_version_id": version.scheme_version_id,
"total_seats": total,
"priced_seats": priced,
"unpriced_seats": unpriced,
"coverage_percent": coverage_percent,
}
@router.get(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/pricing/unpriced-seats")
async def get_unpriced_seats(
scheme_id: str,
role: str = Depends(require_api_key),
):
scheme = await get_scheme_record_by_scheme_id(scheme_id)
version = await get_current_scheme_version(
scheme_id=scheme.scheme_id,
current_version_number=scheme.current_version_number,
)
seats = await list_scheme_version_seats(version.scheme_version_id)
items: list[dict] = []
for seat in seats:
if not seat.seat_id:
items.append(
{
"seat_record_id": seat.seat_record_id,
"seat_id": seat.seat_id,
"element_id": seat.element_id,
"sector_id": seat.sector_id,
"group_id": seat.group_id,
"row_label": seat.row_label,
"seat_number": seat.seat_number,
"reason_code": "missing_seat_id",
"reason_message": "Seat has no seat_id and cannot be priced.",
}
)
continue
try:
await find_effective_price_rule(
scheme_id=scheme.scheme_id,
seat_id=seat.seat_id,
group_id=seat.group_id,
sector_id=seat.sector_id,
)
except Exception:
items.append(
{
"seat_record_id": seat.seat_record_id,
"seat_id": seat.seat_id,
"element_id": seat.element_id,
"sector_id": seat.sector_id,
"group_id": seat.group_id,
"row_label": seat.row_label,
"seat_number": seat.seat_number,
"reason_code": "no_price_rule",
"reason_message": "No effective price rule was found for this seat.",
}
)
return {
"scheme_id": scheme.scheme_id,
"scheme_version_id": version.scheme_version_id,
"total": len(items),
"items": items,
}
@router.get(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/pricing/explain/{{seat_id}}")
async def explain_seat_pricing(
scheme_id: str,
seat_id: str,
role: str = Depends(require_api_key),
):
scheme = await get_scheme_record_by_scheme_id(scheme_id)
version = await get_current_scheme_version(
scheme_id=scheme.scheme_id,
current_version_number=scheme.current_version_number,
)
seat = await get_scheme_version_seat_by_seat_id(
scheme_version_id=version.scheme_version_id,
seat_id=seat_id,
)
try:
matched_rule_level, rule = await find_effective_price_rule(
scheme_id=scheme.scheme_id,
seat_id=seat.seat_id,
group_id=seat.group_id,
sector_id=seat.sector_id,
)
return {
"scheme_id": scheme.scheme_id,
"scheme_version_id": version.scheme_version_id,
"seat_id": seat.seat_id,
"element_id": seat.element_id,
"sector_id": seat.sector_id,
"group_id": seat.group_id,
"row_label": seat.row_label,
"seat_number": seat.seat_number,
"has_price": True,
"reason_code": "ok",
"reason_message": "Effective price rule resolved successfully.",
"matched_rule": {
"matched_rule_level": matched_rule_level,
"matched_target_ref": rule["target_ref"],
"pricing_category_id": rule["pricing_category_id"],
"amount": str(rule["amount"]),
"currency": rule["currency"],
},
}
except Exception:
return {
"scheme_id": scheme.scheme_id,
"scheme_version_id": version.scheme_version_id,
"seat_id": seat.seat_id,
"element_id": seat.element_id,
"sector_id": seat.sector_id,
"group_id": seat.group_id,
"row_label": seat.row_label,
"seat_number": seat.seat_number,
"has_price": False,
"reason_code": "no_price_rule",
"reason_message": "No effective price rule was found for this seat.",
"matched_rule": None,
}
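The coverage endpoint above reduces per-seat pricing checks to a single percentage. The rounding and the empty-scheme edge case can be isolated in a small sketch (the function name is illustrative):

```python
def coverage_percent(priced_seats: int, total_seats: int) -> float:
    # An empty scheme counts as fully covered rather than dividing by zero,
    # matching the endpoint's `if total else 100.0` fallback.
    if not total_seats:
        return 100.0
    return round((priced_seats / total_seats) * 100, 2)
```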

View File

@@ -11,9 +11,11 @@ from app.schemas.publish_preview import (
RemapPreviewResponse,
RemapPreviewSeatItem,
)
from app.schemas.publish_readiness import PublishReadinessResponse
from app.security.auth import require_api_key
from app.services.draft_guard import get_current_draft_context
from app.services.publish_preview import get_or_build_publish_preview_bundle
from app.services.publish_readiness import build_publish_readiness
from app.services.remap_service import apply_remap, preview_remap
router = APIRouter()
@@ -22,9 +24,13 @@ router = APIRouter()
@router.post(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/pricing/snapshot")
async def create_draft_pricing_snapshot(
scheme_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
result = await replace_scheme_version_pricing_snapshot(
scheme_id=scheme.scheme_id,
scheme_version_id=version.scheme_version_id,
@@ -45,14 +51,21 @@ async def create_draft_pricing_snapshot(
}
@router.get(
f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/publish-preview",
response_model=PublishPreviewResponse,
)
async def get_publish_preview(
scheme_id: str,
baseline_scheme_version_id: str | None = Query(default=None),
refresh: bool = Query(default=False),
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
bundle = await get_or_build_publish_preview_bundle(
scheme_id=scheme.scheme_id,
scheme_version_id=version.scheme_version_id,
@@ -70,13 +83,41 @@ async def get_publish_preview(
)
@router.get(
f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/publish-readiness",
response_model=PublishReadinessResponse,
)
async def get_publish_readiness(
scheme_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
readiness = await build_publish_readiness(
scheme_id=scheme.scheme_id,
scheme_version_id=version.scheme_version_id,
status=version.status,
)
return PublishReadinessResponse(**readiness)
@router.post(
f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/remap/preview",
response_model=RemapPreviewResponse,
)
async def preview_draft_remap(
scheme_id: str,
payload: RemapPreviewRequest,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
items = await preview_remap(
scheme_version_id=version.scheme_version_id,
seat_record_ids=payload.seat_record_ids,
@@ -94,13 +135,20 @@ async def preview_draft_remap(
)
@router.post(
f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/remap/apply",
response_model=RemapApplyResponse,
)
async def apply_draft_remap(
scheme_id: str,
payload: RemapApplyRequest,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
scheme, version = await get_current_draft_context(
scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
items = await apply_remap(
scheme_version_id=version.scheme_version_id,
seat_record_ids=payload.seat_record_ids,

View File

@@ -2,12 +2,10 @@ from fastapi import APIRouter, Depends, Query
from app.core.config import settings
from app.repositories.audit import create_audit_event
from app.repositories.scheme_versions import (
count_scheme_versions,
create_next_scheme_version_from_current_checked,
ensure_draft_scheme_version_consistent,
get_current_scheme_version,
list_scheme_versions,
)
@@ -28,6 +26,7 @@ from app.schemas.scheme_registry import (
SchemeRollbackResponse,
)
from app.schemas.scheme_versions import (
EnsureDraftResponse,
SchemeVersionCreateResponse,
SchemeVersionListItem,
SchemeVersionListResponse,
@@ -137,27 +136,12 @@ async def get_scheme_versions(
@router.post(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/versions", response_model=SchemeVersionCreateResponse)
async def create_next_scheme_version_endpoint(
scheme_id: str,
expected_current_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
current_scheme = await get_scheme_record_by_scheme_id(scheme_id)
current_version = await get_current_scheme_version(
scheme_id=current_scheme.scheme_id,
current_version_number=current_scheme.current_version_number,
)
new_version = await create_next_scheme_version_from_current(scheme_id)
await clone_scheme_version_sectors(
source_scheme_version_id=current_version.scheme_version_id,
target_scheme_version_id=new_version.scheme_version_id,
)
await clone_scheme_version_groups(
source_scheme_version_id=current_version.scheme_version_id,
target_scheme_version_id=new_version.scheme_version_id,
)
await clone_scheme_version_seats(
source_scheme_version_id=current_version.scheme_version_id,
target_scheme_version_id=new_version.scheme_version_id,
current_version, new_version = await create_next_scheme_version_from_current_checked(
scheme_id=scheme_id,
expected_current_scheme_version_id=expected_current_scheme_version_id,
)
await create_audit_event(
@@ -181,6 +165,52 @@ async def create_next_scheme_version_endpoint(
)
@router.post(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/draft/ensure", response_model=EnsureDraftResponse)
async def ensure_draft_scheme_version(
scheme_id: str,
expected_current_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
current_version, created, source_scheme_version_id = await ensure_draft_scheme_version_consistent(
scheme_id=scheme_id,
expected_current_scheme_version_id=expected_current_scheme_version_id,
)
if not created:
return EnsureDraftResponse(
scheme_id=current_version.scheme_id,
scheme_version_id=current_version.scheme_version_id,
version_number=current_version.version_number,
status=current_version.status,
normalized_storage_path=current_version.normalized_storage_path,
created=False,
source_scheme_version_id=None,
)
await create_audit_event(
scheme_id=scheme_id,
event_type="scheme.version.created",
object_type="scheme_version",
object_ref=current_version.scheme_version_id,
details={
"source_scheme_version_id": source_scheme_version_id,
"version_number": current_version.version_number,
"normalized_storage_path": current_version.normalized_storage_path,
"reason": "ensure_draft",
},
)
return EnsureDraftResponse(
scheme_id=current_version.scheme_id,
scheme_version_id=current_version.scheme_version_id,
version_number=current_version.version_number,
status=current_version.status,
normalized_storage_path=current_version.normalized_storage_path,
created=True,
source_scheme_version_id=source_scheme_version_id,
)
@router.get(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/publish/validation")
async def get_publish_validation(scheme_id: str, role: str = Depends(require_api_key)):
scheme = await get_scheme_record_by_scheme_id(scheme_id)
@@ -200,8 +230,15 @@ async def get_publish_validation(scheme_id: str, role: str = Depends(require_api
@router.post(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/publish")
async def publish_scheme_endpoint(scheme_id: str, role: str = Depends(require_api_key)):
return await publish_current_draft_scheme(scheme_id=scheme_id)
async def publish_scheme_endpoint(
scheme_id: str,
expected_scheme_version_id: str | None = Query(default=None),
role: str = Depends(require_api_key),
):
return await publish_current_draft_scheme(
scheme_id=scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
@router.post(f"{settings.api_v1_prefix}/schemes/{{scheme_id}}/unpublish", response_model=SchemePublishResponse)

View File

@@ -244,7 +244,7 @@ async def get_effective_seat_price(scheme_id: str, seat_id: str, role: str = Dep
matched_rule_level=matched_rule_level,
matched_target_ref=rule["target_ref"],
pricing_category_id=rule["pricing_category_id"],
amount=rule["amount"],
amount=str(rule["amount"]),
currency=rule["currency"],
)

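The `amount=str(rule["amount"])` change above serializes the monetary value as a string. The reason, in a short self-contained sketch: JSON has no decimal type, so routing a `Decimal` through `float` can lose precision and drops the scale, while the string form round-trips exactly:

```python
from decimal import Decimal

amount = Decimal("1999.90")
as_string = str(amount)        # exact textual form, keeps the trailing zero
restored = Decimal(as_string)  # round-trips without loss
lossy = float(amount)          # binary float; no longer an exact decimal value
```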
View File

@@ -30,12 +30,6 @@ async def preview_test_seat(
seat_id=seat_id,
)
if not seat.seat_id:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
detail="Cannot build preview: the seat has no seat_id",
)
matched_rule_level = None
matched_target_ref = None
pricing_category_id = None
@@ -43,6 +37,27 @@ async def preview_test_seat(
currency = None
has_price = False
if not seat.seat_id:
return TestSeatPreviewResponse(
scheme_id=scheme.scheme_id,
scheme_version_id=version.scheme_version_id,
seat_id=seat.seat_id,
element_id=seat.element_id,
sector_id=seat.sector_id,
group_id=seat.group_id,
row_label=seat.row_label,
seat_number=seat.seat_number,
selectable=False,
has_price=False,
matched_rule_level=None,
matched_target_ref=None,
pricing_category_id=None,
amount=None,
currency=None,
reason_code="missing_seat_id",
reason_message="Seat is not sellable because seat_id is missing.",
)
try:
matched_rule_level, rule = await find_effective_price_rule(
scheme_id=scheme.scheme_id,
@@ -52,7 +67,7 @@ async def preview_test_seat(
)
matched_target_ref = rule["target_ref"]
pricing_category_id = rule["pricing_category_id"]
amount = rule["amount"]
amount = str(rule["amount"])
currency = rule["currency"]
has_price = True
except HTTPException as exc:
@@ -66,15 +81,25 @@ async def preview_test_seat(
)
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
detail=f"Failed to build preview: {exc.__class__.__name__}: {exc}",
detail={
"code": "test_preview_failed",
"message": f"Failed to build preview: {exc.__class__.__name__}: {exc}",
},
)
if has_price:
reason_code = "ok"
reason_message = "Seat is sellable."
else:
reason_code = "no_price_rule"
reason_message = "Seat is not sellable because no effective price rule was found."
return TestSeatPreviewResponse(
scheme_id=scheme.scheme_id,
scheme_version_id=version.scheme_version_id,
seat_id=seat.seat_id,
element_id=seat.element_id,
sector_id=seat.sector_id,
group_id=seat.group_id,
row_label=seat.row_label,
seat_number=seat.seat_number,
@@ -85,4 +110,6 @@ async def preview_test_seat(
pricing_category_id=pricing_category_id,
amount=amount,
currency=currency,
reason_code=reason_code,
reason_message=reason_message,
)

View File

@@ -10,8 +10,7 @@ from app.repositories.scheme_artifacts import create_scheme_artifact
from app.repositories.scheme_groups import replace_scheme_version_groups
from app.repositories.scheme_seats import replace_scheme_version_seats
from app.repositories.scheme_sectors import replace_scheme_version_sectors
from app.repositories.scheme_versions import create_initial_scheme_version
from app.repositories.schemes import create_scheme_from_upload
from app.repositories.schemes import create_scheme_from_upload_with_initial_version
from app.repositories.uploads import (
count_upload_records,
create_upload_record,
@@ -202,17 +201,9 @@ async def upload_scheme_svg(
processing_status="completed",
)
scheme_id = await create_scheme_from_upload(
scheme_id, scheme_version_id = await create_scheme_from_upload_with_initial_version(
source_upload_id=upload_id,
name=Path(filename).stem or filename,
normalized_elements_count=summary["elements_count"],
normalized_seats_count=summary["seats_count"],
normalized_groups_count=summary["groups_count"],
normalized_sectors_count=summary["sectors_count"],
)
scheme_version_id = await create_initial_scheme_version(
scheme_id=scheme_id,
normalized_storage_path=normalized_storage_path,
normalized_elements_count=summary["elements_count"],
normalized_seats_count=summary["seats_count"],

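The upload flow above replaces two separate repository calls with a single `create_scheme_from_upload_with_initial_version`. The point of the merge, sketched with an in-memory dict standing in for the database (all names here are illustrative): both rows are staged and committed as one step, so a failure between the two inserts can no longer strand a scheme without its initial version.

```python
def create_scheme_with_initial_version(store: dict, upload_id: str) -> tuple[str, str]:
    scheme_id = f"scheme-{upload_id}"
    version_id = f"{scheme_id}-v1"
    # Stage both records, then publish them to the store in one update.
    staged = {
        scheme_id: {"source_upload_id": upload_id, "current_version_number": 1},
        version_id: {"scheme_id": scheme_id, "version_number": 1, "status": "draft"},
    }
    store.update(staged)
    return scheme_id, version_id
```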
View File

@@ -1,29 +1,32 @@
from pydantic import Field, model_validator
from pydantic_settings import BaseSettings, SettingsConfigDict
class Settings(BaseSettings):
app_name: str = "svg-service"
app_env: str = "development"
app_port: int = 9020
api_v1_prefix: str = "/api/v1"
app_name: str = Field(..., validation_alias="APP_NAME")
app_env: str = Field(..., validation_alias="APP_ENV")
app_port: int = Field(..., validation_alias="BACKEND_PORT")
api_v1_prefix: str = Field(..., validation_alias="API_V1_PREFIX")
auth_header_name: str = "X-API-Key"
admin_api_key: str = "admin-local-dev-key"
viewer_api_key: str = "viewer-local-dev-key"
auth_header_name: str = Field(..., validation_alias="AUTH_HEADER_NAME")
api_keys_admin: str = Field(..., validation_alias="API_KEYS_ADMIN")
api_keys_operator: str = Field(..., validation_alias="API_KEYS_OPERATOR")
api_keys_viewer: str = Field(..., validation_alias="API_KEYS_VIEWER")
postgres_host: str = "postgres"
postgres_port: int = 5432
postgres_db: str = "svg_service"
postgres_user: str = "svg_service"
postgres_password: str = "svg_service_dev_password"
postgres_host: str = Field(..., validation_alias="POSTGRES_HOST")
postgres_port: int = Field(..., validation_alias="POSTGRES_PORT")
postgres_db: str = Field(..., validation_alias="POSTGRES_DB")
postgres_user: str = Field(..., validation_alias="POSTGRES_USER")
postgres_password: str = Field(..., validation_alias="POSTGRES_PASSWORD")
database_url_raw: str | None = Field(default=None, validation_alias="DATABASE_URL")
svg_max_file_size_bytes: int = 10 * 1024 * 1024
svg_max_elements: int = 25000
svg_max_file_size_bytes: int = Field(10 * 1024 * 1024, validation_alias="SVG_MAX_FILE_SIZE_BYTES")
svg_max_elements: int = Field(25000, validation_alias="SVG_MAX_ELEMENTS")
svg_allow_internal_use_references_only: bool = True
svg_forbid_foreign_object_v1: bool = True
svg_forbid_style_v1: bool = False
svg_forbid_image_v1: bool = True
svg_allow_internal_use_references_only: bool = Field(True, validation_alias="SVG_ALLOW_INTERNAL_USE_REFERENCES_ONLY")
svg_forbid_foreign_object_v1: bool = Field(True, validation_alias="SVG_FORBID_FOREIGN_OBJECT_V1")
svg_forbid_style_v1: bool = Field(False, validation_alias="SVG_FORBID_STYLE_V1")
svg_forbid_image_v1: bool = Field(True, validation_alias="SVG_FORBID_IMAGE_V1")
svg_display_enabled: bool = True
svg_display_mode: str = "passthrough"
@@ -34,8 +37,9 @@ class Settings(BaseSettings):
svg_display_force_viewbox: bool = True
svg_display_technical_text_patterns: str = "debug,tech,helper,tmp,service"
storage_root_dir: str = "/data"
storage_root_dir: str = Field(..., validation_alias="STORAGE_ROOT")
publish_preview_retention_per_variant: int = 2
publish_require_full_pricing_coverage: bool = False
model_config = SettingsConfigDict(
env_file=".env",
@@ -44,16 +48,32 @@ class Settings(BaseSettings):
extra="ignore",
)
@model_validator(mode="after")
def validate_database_config(self) -> "Settings":
assembled_database_url = (
f"postgresql+asyncpg://{self.postgres_user}:{self.postgres_password}"
f"@{self.postgres_host}:{self.postgres_port}/{self.postgres_db}"
)
if self.database_url_raw and self.database_url_raw != assembled_database_url:
raise ValueError("DATABASE_URL must match POSTGRES_HOST/PORT/DB/USER/PASSWORD")
return self
@property
def admin_keys(self) -> set[str]:
return {item.strip() for item in self.admin_api_key.split(",") if item.strip()}
return {item.strip() for item in self.api_keys_admin.split(",") if item.strip()}
@property
def operator_keys(self) -> set[str]:
return {item.strip() for item in self.api_keys_operator.split(",") if item.strip()}
@property
def viewer_keys(self) -> set[str]:
return {item.strip() for item in self.viewer_api_key.split(",") if item.strip()}
return {item.strip() for item in self.api_keys_viewer.split(",") if item.strip()}
@property
def database_url(self) -> str:
if self.database_url_raw:
return self.database_url_raw
return (
f"postgresql+asyncpg://{self.postgres_user}:{self.postgres_password}"
f"@{self.postgres_host}:{self.postgres_port}/{self.postgres_db}"

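The `Field(..., validation_alias="...")` changes above turn baked-in defaults into required, env-driven settings: startup now fails loudly when a variable is missing instead of silently running with a development value. A dependency-free sketch of that fail-fast behavior (the alias table below is a small illustrative subset, not the full settings surface):

```python
REQUIRED_ALIASES = {
    "app_name": "APP_NAME",
    "app_port": "BACKEND_PORT",
    "api_keys_admin": "API_KEYS_ADMIN",
}

def load_settings(env: dict[str, str]) -> dict[str, str]:
    # Collect every missing variable so the error names all of them at once.
    missing = sorted(alias for alias in REQUIRED_ALIASES.values() if alias not in env)
    if missing:
        raise RuntimeError(f"missing required env vars: {', '.join(missing)}")
    return {field: env[alias] for field, alias in REQUIRED_ALIASES.items()}
```

Reporting all missing variables in one error, rather than failing on the first, is the same operator-friendly behavior pydantic-settings gives when several required fields are absent.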
View File

@@ -0,0 +1,95 @@
from __future__ import annotations
from importlib import import_module
from sqlalchemy import delete, func, outerjoin, select
from app.db.session import AsyncSessionLocal
def _resolve_model(module_path: str, *candidate_names: str):
module = import_module(module_path)
for name in candidate_names:
model = getattr(module, name, None)
if model is not None:
return model
raise ImportError(
f"Unable to resolve model from {module_path}. "
f"Tried: {', '.join(candidate_names)}"
)
PricingCategoryModel = _resolve_model(
"app.models.pricing_category",
"PricingCategory",
"PricingCategoryRecord",
)
PriceRuleModel = _resolve_model(
"app.models.price_rule",
"PriceRule",
"PriceRuleRecord",
)
async def list_pricing_categories_with_rule_counts(
*,
scheme_id: str,
) -> list[dict]:
async with AsyncSessionLocal() as session:
stmt = (
select(
PricingCategoryModel.pricing_category_id,
PricingCategoryModel.scheme_id,
PricingCategoryModel.name,
PricingCategoryModel.code,
func.count(PriceRuleModel.price_rule_id).label("rules_count"),
)
.select_from(
outerjoin(
PricingCategoryModel,
PriceRuleModel,
PricingCategoryModel.pricing_category_id == PriceRuleModel.pricing_category_id,
)
)
.where(PricingCategoryModel.scheme_id == scheme_id)
.group_by(
PricingCategoryModel.pricing_category_id,
PricingCategoryModel.scheme_id,
PricingCategoryModel.name,
PricingCategoryModel.code,
)
.order_by(
PricingCategoryModel.name.asc(),
PricingCategoryModel.code.asc(),
PricingCategoryModel.pricing_category_id.asc(),
)
)
rows = (await session.execute(stmt)).all()
return [
{
"pricing_category_id": row.pricing_category_id,
"scheme_id": row.scheme_id,
"name": row.name,
"code": row.code,
"rules_count": int(row.rules_count or 0),
}
for row in rows
]
async def delete_pricing_categories_by_ids(
*,
scheme_id: str,
pricing_category_ids: list[str],
) -> int:
if not pricing_category_ids:
return 0
async with AsyncSessionLocal() as session:
stmt = delete(PricingCategoryModel).where(
PricingCategoryModel.scheme_id == scheme_id,
PricingCategoryModel.pricing_category_id.in_(pricing_category_ids),
)
result = await session.execute(stmt)
await session.commit()
return int(result.rowcount or 0)

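The `_resolve_model` helper above tolerates either naming convention for the ORM models by trying candidate attribute names in order. A standalone illustration of the same pattern, demonstrated against a stdlib module rather than the app's models:

```python
from importlib import import_module

def resolve_attr(module_path: str, *candidates: str):
    # Import once, then probe each candidate name; None means "not present".
    module = import_module(module_path)
    for name in candidates:
        attr = getattr(module, name, None)
        if attr is not None:
            return attr
    raise ImportError(
        f"unable to resolve any of {', '.join(candidates)} from {module_path}"
    )
```

Failing with a message that lists every tried name makes a rename in the models package diagnosable from the traceback alone.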
View File

@@ -8,6 +8,49 @@ from app.models.scheme_group import SchemeGroupRecord
from app.models.scheme_seat import SchemeSeatRecord
def _conflict(message: str) -> HTTPException:
return HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
detail={
"code": "group_uniqueness_violation",
"message": message,
},
)
async def _ensure_group_uniqueness(
*,
session,
scheme_version_id: str,
group_id: str | None,
element_id: str | None,
exclude_group_record_id: str | None = None,
) -> None:
if group_id:
stmt = select(SchemeGroupRecord).where(
SchemeGroupRecord.scheme_version_id == scheme_version_id,
SchemeGroupRecord.group_id == group_id,
)
if exclude_group_record_id:
stmt = stmt.where(SchemeGroupRecord.group_record_id != exclude_group_record_id)
existing = (await session.execute(stmt)).scalar_one_or_none()
if existing is not None:
raise _conflict(f"Group with group_id='{group_id}' already exists in current draft version")
if element_id:
stmt = select(SchemeGroupRecord).where(
SchemeGroupRecord.scheme_version_id == scheme_version_id,
SchemeGroupRecord.element_id == element_id,
)
if exclude_group_record_id:
stmt = stmt.where(SchemeGroupRecord.group_record_id != exclude_group_record_id)
existing = (await session.execute(stmt)).scalar_one_or_none()
if existing is not None:
raise _conflict(f"Group with element_id='{element_id}' already exists in current draft version")
async def replace_scheme_version_groups(
*,
scheme_id: str,
@@ -23,13 +66,29 @@ async def replace_scheme_version_groups(
for row in existing_rows:
await session.delete(row)
seen_group_ids: set[str] = set()
seen_element_ids: set[str] = set()
for item in groups:
group_id = item.get("group_id")
element_id = item.get("id")
if group_id:
if group_id in seen_group_ids:
raise _conflict(f"Duplicate group_id='{group_id}' in replacement payload")
seen_group_ids.add(group_id)
if element_id:
if element_id in seen_element_ids:
raise _conflict(f"Duplicate element_id='{element_id}' in replacement payload")
seen_element_ids.add(element_id)
row = SchemeGroupRecord(
group_record_id=item.get("group_record_id") or uuid4().hex,
scheme_id=scheme_id,
scheme_version_id=scheme_version_id,
element_id=item.get("id"),
group_id=item.get("group_id"),
element_id=element_id,
group_id=group_id,
name=item.get("group_id"),
classes_raw=str(item.get("classes")),
)
@@ -44,26 +103,51 @@ async def clone_scheme_version_groups(
target_scheme_version_id: str,
) -> None:
async with AsyncSessionLocal() as session:
result = await session.execute(
select(SchemeGroupRecord).where(SchemeGroupRecord.scheme_version_id == source_scheme_version_id)
await clone_scheme_version_groups_in_session(
session=session,
source_scheme_version_id=source_scheme_version_id,
target_scheme_version_id=target_scheme_version_id,
)
rows = list(result.scalars().all())
for row in rows:
cloned = SchemeGroupRecord(
group_record_id=uuid4().hex,
scheme_id=row.scheme_id,
scheme_version_id=target_scheme_version_id,
element_id=row.element_id,
group_id=row.group_id,
name=row.name,
classes_raw=row.classes_raw,
)
session.add(cloned)
await session.commit()
async def clone_scheme_version_groups_in_session(
*,
session,
source_scheme_version_id: str,
target_scheme_version_id: str,
) -> None:
result = await session.execute(
select(SchemeGroupRecord).where(SchemeGroupRecord.scheme_version_id == source_scheme_version_id)
)
rows = list(result.scalars().all())
seen_group_ids: set[str] = set()
seen_element_ids: set[str] = set()
for row in rows:
if row.group_id:
if row.group_id in seen_group_ids:
raise _conflict(f"Duplicate group_id='{row.group_id}' while cloning draft")
seen_group_ids.add(row.group_id)
if row.element_id:
if row.element_id in seen_element_ids:
raise _conflict(f"Duplicate element_id='{row.element_id}' while cloning draft")
seen_element_ids.add(row.element_id)
cloned = SchemeGroupRecord(
group_record_id=uuid4().hex,
scheme_id=row.scheme_id,
scheme_version_id=target_scheme_version_id,
element_id=row.element_id,
group_id=row.group_id,
name=row.name,
classes_raw=row.classes_raw,
)
session.add(cloned)
async def list_scheme_version_groups(scheme_version_id: str) -> list[SchemeGroupRecord]:
async with AsyncSessionLocal() as session:
result = await session.execute(
@@ -78,8 +162,7 @@ async def update_scheme_version_group_by_record_id(
*,
scheme_version_id: str,
group_record_id: str,
group_id: str | None,
name: str | None,
**update_data,
) -> tuple[SchemeGroupRecord, str | None]:
async with AsyncSessionLocal() as session:
result = await session.execute(
@@ -96,9 +179,20 @@ async def update_scheme_version_group_by_record_id(
detail="Group record not found in current draft version",
)
if "group_id" in update_data:
await _ensure_group_uniqueness(
session=session,
scheme_version_id=scheme_version_id,
group_id=update_data["group_id"],
element_id=row.element_id,
exclude_group_record_id=group_record_id,
)
old_group_id = row.group_id
row.group_id = group_id
row.name = name
if "group_id" in update_data:
row.group_id = update_data["group_id"]
if "name" in update_data:
row.name = update_data["name"]
await session.commit()
await session.refresh(row)
@@ -115,6 +209,13 @@ async def create_scheme_version_group(
classes_raw: str | None,
) -> SchemeGroupRecord:
async with AsyncSessionLocal() as session:
await _ensure_group_uniqueness(
session=session,
scheme_version_id=scheme_version_id,
group_id=group_id,
element_id=element_id,
)
row = SchemeGroupRecord(
group_record_id=uuid4().hex,
scheme_id=scheme_id,
@@ -166,3 +267,26 @@ async def delete_scheme_version_group_by_record_id(
await session.delete(group)
await session.commit()
async def get_scheme_version_group_by_record_id(
*,
scheme_version_id: str,
group_record_id: str,
) -> SchemeGroupRecord:
async with AsyncSessionLocal() as session:
result = await session.execute(
select(SchemeGroupRecord).where(
SchemeGroupRecord.scheme_version_id == scheme_version_id,
SchemeGroupRecord.group_record_id == group_record_id,
)
)
row = result.scalar_one_or_none()
if row is None:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Group record not found in current draft version",
)
return row

View File

@@ -51,36 +51,48 @@ async def clone_scheme_version_seats(
target_scheme_version_id: str,
) -> None:
async with AsyncSessionLocal() as session:
result = await session.execute(
select(SchemeSeatRecord).where(SchemeSeatRecord.scheme_version_id == source_scheme_version_id)
await clone_scheme_version_seats_in_session(
session=session,
source_scheme_version_id=source_scheme_version_id,
target_scheme_version_id=target_scheme_version_id,
)
rows = list(result.scalars().all())
for row in rows:
cloned = SchemeSeatRecord(
seat_record_id=__import__("uuid").uuid4().hex,
scheme_id=row.scheme_id,
scheme_version_id=target_scheme_version_id,
element_id=row.element_id,
seat_id=row.seat_id,
sector_id=row.sector_id,
group_id=row.group_id,
row_label=row.row_label,
seat_number=row.seat_number,
tag=row.tag,
classes_raw=row.classes_raw,
x=row.x,
y=row.y,
cx=row.cx,
cy=row.cy,
width=row.width,
height=row.height,
)
session.add(cloned)
await session.commit()
async def clone_scheme_version_seats_in_session(
*,
session,
source_scheme_version_id: str,
target_scheme_version_id: str,
) -> None:
result = await session.execute(
select(SchemeSeatRecord).where(SchemeSeatRecord.scheme_version_id == source_scheme_version_id)
)
rows = list(result.scalars().all())
for row in rows:
cloned = SchemeSeatRecord(
seat_record_id=uuid4().hex,  # assumes "from uuid import uuid4" at module top
scheme_id=row.scheme_id,
scheme_version_id=target_scheme_version_id,
element_id=row.element_id,
seat_id=row.seat_id,
sector_id=row.sector_id,
group_id=row.group_id,
row_label=row.row_label,
seat_number=row.seat_number,
tag=row.tag,
classes_raw=row.classes_raw,
x=row.x,
y=row.y,
cx=row.cx,
cy=row.cy,
width=row.width,
height=row.height,
)
session.add(cloned)
async def list_scheme_version_seats(scheme_version_id: str) -> list[SchemeSeatRecord]:
async with AsyncSessionLocal() as session:
result = await session.execute(
@@ -114,15 +126,10 @@ async def get_scheme_version_seat_by_seat_id(
return row
async def update_scheme_version_seat_by_record_id(
async def get_scheme_version_seat_by_record_id(
*,
scheme_version_id: str,
seat_record_id: str,
seat_id: str | None,
sector_id: str | None,
group_id: str | None,
row_label: str | None,
seat_number: str | None,
) -> SchemeSeatRecord:
async with AsyncSessionLocal() as session:
result = await session.execute(
@@ -139,11 +146,40 @@ async def update_scheme_version_seat_by_record_id(
detail="Seat record not found in current draft version",
)
row.seat_id = seat_id
row.sector_id = sector_id
row.group_id = group_id
row.row_label = row_label
row.seat_number = seat_number
return row
async def update_scheme_version_seat_by_record_id(
*,
scheme_version_id: str,
seat_record_id: str,
**update_data,
) -> SchemeSeatRecord:
async with AsyncSessionLocal() as session:
result = await session.execute(
select(SchemeSeatRecord).where(
SchemeSeatRecord.scheme_version_id == scheme_version_id,
SchemeSeatRecord.seat_record_id == seat_record_id,
)
)
row = result.scalar_one_or_none()
if row is None:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Seat record not found in current draft version",
)
if "seat_id" in update_data:
row.seat_id = update_data["seat_id"]
if "sector_id" in update_data:
row.sector_id = update_data["sector_id"]
if "group_id" in update_data:
row.group_id = update_data["group_id"]
if "row_label" in update_data:
row.row_label = update_data["row_label"]
if "seat_number" in update_data:
row.seat_number = update_data["seat_number"]
await session.commit()
await session.refresh(row)
@@ -173,11 +209,16 @@ async def bulk_update_scheme_version_seats_by_record_id(
detail=f"Seat record not found in current draft version: {item['seat_record_id']}",
)
row.seat_id = item.get("seat_id")
row.sector_id = item.get("sector_id")
row.group_id = item.get("group_id")
row.row_label = item.get("row_label")
row.seat_number = item.get("seat_number")
if "seat_id" in item:
row.seat_id = item["seat_id"]
if "sector_id" in item:
row.sector_id = item["sector_id"]
if "group_id" in item:
row.group_id = item["group_id"]
if "row_label" in item:
row.row_label = item["row_label"]
if "seat_number" in item:
row.seat_number = item["seat_number"]
updated_rows.append(row)
await session.commit()

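The seat update functions above switch from unconditionally overwriting every column to presence checks on `update_data`. The semantic difference, sketched on a plain dict: an omitted key preserves the stored value, while an explicit `None` clears it.

```python
SEAT_FIELDS = ("seat_id", "sector_id", "group_id", "row_label", "seat_number")

def apply_partial_update(row: dict, update_data: dict) -> dict:
    for field in SEAT_FIELDS:
        if field in update_data:  # "omitted" is distinct from "set to None"
            row[field] = update_data[field]
    return row
```

This is why the endpoints can accept sparse payloads: a bulk remap that only touches `sector_id` no longer wipes `row_label` and `seat_number`.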
View File

@@ -8,6 +8,49 @@ from app.models.scheme_sector import SchemeSectorRecord
from app.models.scheme_seat import SchemeSeatRecord
def _conflict(message: str) -> HTTPException:
return HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
detail={
"code": "sector_uniqueness_violation",
"message": message,
},
)
async def _ensure_sector_uniqueness(
*,
session,
scheme_version_id: str,
sector_id: str | None,
element_id: str | None,
exclude_sector_record_id: str | None = None,
) -> None:
if sector_id:
stmt = select(SchemeSectorRecord).where(
SchemeSectorRecord.scheme_version_id == scheme_version_id,
SchemeSectorRecord.sector_id == sector_id,
)
if exclude_sector_record_id:
stmt = stmt.where(SchemeSectorRecord.sector_record_id != exclude_sector_record_id)
existing = (await session.execute(stmt)).scalar_one_or_none()
if existing is not None:
raise _conflict(f"Sector with sector_id='{sector_id}' already exists in current draft version")
if element_id:
stmt = select(SchemeSectorRecord).where(
SchemeSectorRecord.scheme_version_id == scheme_version_id,
SchemeSectorRecord.element_id == element_id,
)
if exclude_sector_record_id:
stmt = stmt.where(SchemeSectorRecord.sector_record_id != exclude_sector_record_id)
existing = (await session.execute(stmt)).scalar_one_or_none()
if existing is not None:
raise _conflict(f"Sector with element_id='{element_id}' already exists in current draft version")
async def replace_scheme_version_sectors(
*,
scheme_id: str,
@@ -23,13 +66,29 @@ async def replace_scheme_version_sectors(
for row in existing_rows:
await session.delete(row)
seen_sector_ids: set[str] = set()
seen_element_ids: set[str] = set()
for item in sectors:
sector_id = item.get("sector_id")
element_id = item.get("id")
if sector_id:
if sector_id in seen_sector_ids:
raise _conflict(f"Duplicate sector_id='{sector_id}' in replacement payload")
seen_sector_ids.add(sector_id)
if element_id:
if element_id in seen_element_ids:
raise _conflict(f"Duplicate element_id='{element_id}' in replacement payload")
seen_element_ids.add(element_id)
row = SchemeSectorRecord(
sector_record_id=item.get("sector_record_id") or uuid4().hex,
scheme_id=scheme_id,
scheme_version_id=scheme_version_id,
element_id=item.get("id"),
sector_id=item.get("sector_id"),
element_id=element_id,
sector_id=sector_id,
name=item.get("sector_id"),
classes_raw=str(item.get("classes")),
)
@@ -44,26 +103,51 @@ async def clone_scheme_version_sectors(
target_scheme_version_id: str,
) -> None:
async with AsyncSessionLocal() as session:
result = await session.execute(
select(SchemeSectorRecord).where(SchemeSectorRecord.scheme_version_id == source_scheme_version_id)
await clone_scheme_version_sectors_in_session(
session=session,
source_scheme_version_id=source_scheme_version_id,
target_scheme_version_id=target_scheme_version_id,
)
rows = list(result.scalars().all())
for row in rows:
cloned = SchemeSectorRecord(
sector_record_id=uuid4().hex,
scheme_id=row.scheme_id,
scheme_version_id=target_scheme_version_id,
element_id=row.element_id,
sector_id=row.sector_id,
name=row.name,
classes_raw=row.classes_raw,
)
session.add(cloned)
await session.commit()
async def clone_scheme_version_sectors_in_session(
*,
session,
source_scheme_version_id: str,
target_scheme_version_id: str,
) -> None:
result = await session.execute(
select(SchemeSectorRecord).where(SchemeSectorRecord.scheme_version_id == source_scheme_version_id)
)
rows = list(result.scalars().all())
seen_sector_ids: set[str] = set()
seen_element_ids: set[str] = set()
for row in rows:
if row.sector_id:
if row.sector_id in seen_sector_ids:
raise _conflict(f"Duplicate sector_id='{row.sector_id}' while cloning draft")
seen_sector_ids.add(row.sector_id)
if row.element_id:
if row.element_id in seen_element_ids:
raise _conflict(f"Duplicate element_id='{row.element_id}' while cloning draft")
seen_element_ids.add(row.element_id)
cloned = SchemeSectorRecord(
sector_record_id=uuid4().hex,
scheme_id=row.scheme_id,
scheme_version_id=target_scheme_version_id,
element_id=row.element_id,
sector_id=row.sector_id,
name=row.name,
classes_raw=row.classes_raw,
)
session.add(cloned)
async def list_scheme_version_sectors(scheme_version_id: str) -> list[SchemeSectorRecord]:
async with AsyncSessionLocal() as session:
result = await session.execute(
@@ -78,8 +162,7 @@ async def update_scheme_version_sector_by_record_id(
*,
scheme_version_id: str,
sector_record_id: str,
sector_id: str | None,
name: str | None,
**update_data,
) -> tuple[SchemeSectorRecord, str | None]:
async with AsyncSessionLocal() as session:
result = await session.execute(
@@ -96,9 +179,20 @@ async def update_scheme_version_sector_by_record_id(
detail="Sector record not found in current draft version",
)
if "sector_id" in update_data:
await _ensure_sector_uniqueness(
session=session,
scheme_version_id=scheme_version_id,
sector_id=update_data["sector_id"],
element_id=row.element_id,
exclude_sector_record_id=sector_record_id,
)
old_sector_id = row.sector_id
row.sector_id = sector_id
row.name = name
if "sector_id" in update_data:
row.sector_id = update_data["sector_id"]
if "name" in update_data:
row.name = update_data["name"]
await session.commit()
await session.refresh(row)
@@ -115,6 +209,13 @@ async def create_scheme_version_sector(
classes_raw: str | None,
) -> SchemeSectorRecord:
async with AsyncSessionLocal() as session:
await _ensure_sector_uniqueness(
session=session,
scheme_version_id=scheme_version_id,
sector_id=sector_id,
element_id=element_id,
)
row = SchemeSectorRecord(
sector_record_id=uuid4().hex,
scheme_id=scheme_id,
@@ -166,3 +267,26 @@ async def delete_scheme_version_sector_by_record_id(
await session.delete(sector)
await session.commit()
async def get_scheme_version_sector_by_record_id(
*,
scheme_version_id: str,
sector_record_id: str,
) -> SchemeSectorRecord:
async with AsyncSessionLocal() as session:
result = await session.execute(
select(SchemeSectorRecord).where(
SchemeSectorRecord.scheme_version_id == scheme_version_id,
SchemeSectorRecord.sector_record_id == sector_record_id,
)
)
row = result.scalar_one_or_none()
if row is None:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Sector record not found in current draft version",
)
return row

View File

@@ -7,6 +7,125 @@ from sqlalchemy import asc, desc, func, select
from app.db.session import AsyncSessionLocal
from app.models.scheme import SchemeRecord
from app.models.scheme_version import SchemeVersionRecord
from app.repositories.scheme_groups import clone_scheme_version_groups_in_session
from app.repositories.scheme_seats import clone_scheme_version_seats_in_session
from app.repositories.scheme_sectors import clone_scheme_version_sectors_in_session
from app.services.api_errors import raise_conflict
def _raise_current_version_inconsistent(*, scheme_id: str, current_version_number: int) -> None:
raise_conflict(
code="current_version_inconsistent",
message="Scheme current version pointer is inconsistent with scheme_versions state.",
details={
"scheme_id": scheme_id,
"current_version_number": current_version_number,
},
)
def _raise_stale_current_version(*, expected_scheme_version_id: str, actual_scheme_version_id: str) -> None:
raise_conflict(
code="stale_current_version",
message="Current scheme version changed. Reload scheme state before creating a new version.",
details={
"expected_scheme_version_id": expected_scheme_version_id,
"actual_scheme_version_id": actual_scheme_version_id,
},
)
async def _get_scheme_for_update(session, scheme_id: str) -> SchemeRecord:
scheme_result = await session.execute(
select(SchemeRecord)
.where(SchemeRecord.scheme_id == scheme_id)
.with_for_update()
)
scheme = scheme_result.scalar_one_or_none()
if scheme is None:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Scheme not found",
)
return scheme
async def _get_current_scheme_version_for_update(
session,
*,
scheme_id: str,
current_version_number: int,
) -> SchemeVersionRecord:
current_result = await session.execute(
select(SchemeVersionRecord)
.where(
SchemeVersionRecord.scheme_id == scheme_id,
SchemeVersionRecord.version_number == current_version_number,
)
.with_for_update()
)
current_version = current_result.scalar_one_or_none()
if current_version is None:
_raise_current_version_inconsistent(
scheme_id=scheme_id,
current_version_number=current_version_number,
)
return current_version
async def _build_next_draft_version(
session,
*,
scheme: SchemeRecord,
source_version: SchemeVersionRecord,
) -> SchemeVersionRecord:
max_version_result = await session.execute(
select(func.coalesce(func.max(SchemeVersionRecord.version_number), 0)).where(
SchemeVersionRecord.scheme_id == scheme.scheme_id
)
)
next_version_number = int(max_version_result.scalar_one()) + 1
new_version = SchemeVersionRecord(
scheme_version_id=uuid4().hex,
scheme_id=scheme.scheme_id,
version_number=next_version_number,
status="draft",
normalized_storage_path=source_version.normalized_storage_path,
normalized_elements_count=source_version.normalized_elements_count,
normalized_seats_count=source_version.normalized_seats_count,
normalized_groups_count=source_version.normalized_groups_count,
normalized_sectors_count=source_version.normalized_sectors_count,
display_svg_storage_path=source_version.display_svg_storage_path,
display_svg_status=source_version.display_svg_status,
display_svg_generated_at=source_version.display_svg_generated_at,
)
session.add(new_version)
await session.flush()
await clone_scheme_version_sectors_in_session(
session=session,
source_scheme_version_id=source_version.scheme_version_id,
target_scheme_version_id=new_version.scheme_version_id,
)
await clone_scheme_version_groups_in_session(
session=session,
source_scheme_version_id=source_version.scheme_version_id,
target_scheme_version_id=new_version.scheme_version_id,
)
await clone_scheme_version_seats_in_session(
session=session,
source_scheme_version_id=source_version.scheme_version_id,
target_scheme_version_id=new_version.scheme_version_id,
)
scheme.current_version_number = new_version.version_number
scheme.status = "draft"
scheme.published_at = None
scheme.normalized_elements_count = source_version.normalized_elements_count
scheme.normalized_seats_count = source_version.normalized_seats_count
scheme.normalized_groups_count = source_version.normalized_groups_count
scheme.normalized_sectors_count = source_version.normalized_sectors_count
return new_version
async def create_initial_scheme_version(
@@ -75,9 +194,9 @@ async def get_current_scheme_version(scheme_id: str, current_version_number: int
row = result.scalar_one_or_none()
if row is None:
_raise_current_version_inconsistent(
scheme_id=scheme_id,
current_version_number=current_version_number,
)
return row
@@ -113,57 +232,87 @@ async def update_scheme_version_display_artifact(
async def create_next_scheme_version_from_current(scheme_id: str) -> SchemeVersionRecord:
async with AsyncSessionLocal() as session:
async with session.begin():
scheme = await _get_scheme_for_update(session, scheme_id)
current_version = await _get_current_scheme_version_for_update(
session,
scheme_id=scheme.scheme_id,
current_version_number=scheme.current_version_number,
)
new_version = await _build_next_draft_version(
session,
scheme=scheme,
source_version=current_version,
)
await session.refresh(new_version)
return new_version
async def create_next_scheme_version_from_current_checked(
*,
scheme_id: str,
expected_current_scheme_version_id: str | None = None,
) -> tuple[SchemeVersionRecord, SchemeVersionRecord]:
async with AsyncSessionLocal() as session:
async with session.begin():
scheme = await _get_scheme_for_update(session, scheme_id)
current_version = await _get_current_scheme_version_for_update(
session,
scheme_id=scheme.scheme_id,
current_version_number=scheme.current_version_number,
)
if (
expected_current_scheme_version_id
and expected_current_scheme_version_id != current_version.scheme_version_id
):
_raise_stale_current_version(
expected_scheme_version_id=expected_current_scheme_version_id,
actual_scheme_version_id=current_version.scheme_version_id,
)
new_version = await _build_next_draft_version(
session,
scheme=scheme,
source_version=current_version,
)
await session.refresh(current_version)
await session.refresh(new_version)
return current_version, new_version
async def ensure_draft_scheme_version_consistent(
*,
scheme_id: str,
expected_current_scheme_version_id: str | None = None,
) -> tuple[SchemeVersionRecord, bool, str | None]:
async with AsyncSessionLocal() as session:
async with session.begin():
scheme = await _get_scheme_for_update(session, scheme_id)
current_version = await _get_current_scheme_version_for_update(
session,
scheme_id=scheme.scheme_id,
current_version_number=scheme.current_version_number,
)
if (
expected_current_scheme_version_id
and expected_current_scheme_version_id != current_version.scheme_version_id
):
_raise_stale_current_version(
expected_scheme_version_id=expected_current_scheme_version_id,
actual_scheme_version_id=current_version.scheme_version_id,
)
if scheme.status == "draft" and current_version.status == "draft":
await session.refresh(current_version)
return current_version, False, None
new_version = await _build_next_draft_version(
session,
scheme=scheme,
source_version=current_version,
)
source_scheme_version_id = current_version.scheme_version_id
await session.refresh(new_version)
return new_version, True, source_scheme_version_id

View File

@@ -6,6 +6,51 @@ from sqlalchemy import desc, func, select
from app.db.session import AsyncSessionLocal
from app.models.scheme import SchemeRecord
from app.models.scheme_version import SchemeVersionRecord
from app.services.api_errors import raise_conflict
def _raise_current_version_inconsistent(*, scheme_id: str, current_version_number: int) -> None:
raise_conflict(
code="current_version_inconsistent",
message="Scheme current version pointer is inconsistent with scheme_versions state.",
details={
"scheme_id": scheme_id,
"current_version_number": current_version_number,
},
)
async def _get_scheme_for_update(session, scheme_id: str) -> SchemeRecord:
scheme_result = await session.execute(
select(SchemeRecord)
.where(SchemeRecord.scheme_id == scheme_id)
.with_for_update()
)
scheme = scheme_result.scalar_one_or_none()
if scheme is None:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Scheme not found",
)
return scheme
async def _get_current_version_for_scheme(session, scheme: SchemeRecord) -> SchemeVersionRecord:
version_result = await session.execute(
select(SchemeVersionRecord)
.where(
SchemeVersionRecord.scheme_id == scheme.scheme_id,
SchemeVersionRecord.version_number == scheme.current_version_number,
)
.with_for_update()
)
version = version_result.scalar_one_or_none()
if version is None:
_raise_current_version_inconsistent(
scheme_id=scheme.scheme_id,
current_version_number=scheme.current_version_number,
)
return version
async def create_scheme_from_upload(
@@ -37,6 +82,55 @@ async def create_scheme_from_upload(
return scheme_id
async def create_scheme_from_upload_with_initial_version(
*,
source_upload_id: str,
name: str,
normalized_storage_path: str,
normalized_elements_count: int,
normalized_seats_count: int,
normalized_groups_count: int,
normalized_sectors_count: int,
display_svg_storage_path: str | None = None,
display_svg_status: str = "pending",
display_svg_generated_at=None,
) -> tuple[str, str]:
scheme_id = uuid4().hex
scheme_version_id = uuid4().hex
async with AsyncSessionLocal() as session:
scheme = SchemeRecord(
scheme_id=scheme_id,
source_upload_id=source_upload_id,
name=name,
status="draft",
current_version_number=1,
normalized_elements_count=normalized_elements_count,
normalized_seats_count=normalized_seats_count,
normalized_groups_count=normalized_groups_count,
normalized_sectors_count=normalized_sectors_count,
)
version = SchemeVersionRecord(
scheme_version_id=scheme_version_id,
scheme_id=scheme_id,
version_number=1,
status="draft",
normalized_storage_path=normalized_storage_path,
normalized_elements_count=normalized_elements_count,
normalized_seats_count=normalized_seats_count,
normalized_groups_count=normalized_groups_count,
normalized_sectors_count=normalized_sectors_count,
display_svg_storage_path=display_svg_storage_path,
display_svg_status=display_svg_status,
display_svg_generated_at=display_svg_generated_at,
)
session.add(scheme)
session.add(version)
await session.commit()
return scheme_id, scheme_version_id
async def list_scheme_records(limit: int = 50, offset: int = 0) -> list[SchemeRecord]:
async with AsyncSessionLocal() as session:
result = await session.execute(
@@ -72,127 +166,60 @@ async def get_scheme_record_by_scheme_id(scheme_id: str) -> SchemeRecord:
async def publish_scheme(scheme_id: str) -> SchemeRecord:
async with AsyncSessionLocal() as session:
async with session.begin():
scheme = await _get_scheme_for_update(session, scheme_id)
version = await _get_current_version_for_scheme(session, scheme)
scheme.status = "published"
scheme.published_at = func.now()
version.status = "published"
await session.refresh(scheme)
return scheme
async def unpublish_scheme(scheme_id: str) -> SchemeRecord:
async with AsyncSessionLocal() as session:
async with session.begin():
scheme = await _get_scheme_for_update(session, scheme_id)
version = await _get_current_version_for_scheme(session, scheme)
scheme.status = "draft"
scheme.published_at = None
version.status = "draft"
await session.refresh(scheme)
return scheme
async def rollback_scheme_to_version(scheme_id: str, target_version_number: int) -> SchemeRecord:
async with AsyncSessionLocal() as session:
async with session.begin():
scheme = await _get_scheme_for_update(session, scheme_id)
current_version = await _get_current_version_for_scheme(session, scheme)
target_result = await session.execute(
select(SchemeVersionRecord).where(
SchemeVersionRecord.scheme_id == scheme.scheme_id,
SchemeVersionRecord.version_number == target_version_number,
)
)
target_version = target_result.scalar_one_or_none()
if target_version is None:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Target scheme version not found",
)
current_version.status = "draft"
target_version.status = "draft"
scheme.current_version_number = target_version.version_number
scheme.status = "draft"
scheme.published_at = None
scheme.normalized_elements_count = target_version.normalized_elements_count
scheme.normalized_seats_count = target_version.normalized_seats_count
scheme.normalized_groups_count = target_version.normalized_groups_count
scheme.normalized_sectors_count = target_version.normalized_sectors_count
await session.refresh(scheme)
return scheme
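The rollback above moves the scheme's current pointer to the target version, drops both records back to draft, and copies the denormalized counters from the target. A sketch of that state transition on plain dicts (field names only loosely mirror the models; this is not the repository code):

```python
# Rollback state transition: pointer moves, statuses reset to draft,
# counters are copied from the target version.

def rollback_state(scheme: dict, target_version: dict) -> dict:
    scheme = dict(scheme)  # work on a copy
    scheme["current_version_number"] = target_version["version_number"]
    scheme["status"] = "draft"
    scheme["published_at"] = None
    for key in ("normalized_seats_count", "normalized_sectors_count"):
        scheme[key] = target_version[key]
    return scheme
```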

View File

@@ -0,0 +1,51 @@
from pydantic import BaseModel, Field
class PricingCleanupPreviewItem(BaseModel):
pricing_category_id: str
name: str
code: str
rules_count: int = Field(ge=0)
matched_by: list[str]
deletable: bool
class PricingCleanupPreviewResponse(BaseModel):
scheme_id: str
code_prefixes: list[str]
name_prefixes: list[str]
pricing_category_ids: list[str]
delete_only_without_rules: bool
total_candidates: int = Field(ge=0)
safe_to_delete_count: int = Field(ge=0)
items: list[PricingCleanupPreviewItem]
class PricingCleanupExecuteRequest(BaseModel):
code_prefixes: list[str] = Field(default_factory=list)
name_prefixes: list[str] = Field(default_factory=list)
pricing_category_ids: list[str] = Field(default_factory=list)
delete_only_without_rules: bool = True
dry_run: bool = True
class PricingCleanupSkippedItem(BaseModel):
pricing_category_id: str
reason: str
class PricingCleanupExecuteResponse(BaseModel):
scheme_id: str
dry_run: bool
delete_only_without_rules: bool
requested_total: int = Field(ge=0)
matched_total: int = Field(ge=0)
would_delete_count: int = Field(ge=0)
deleted_count: int = Field(ge=0)
skipped_count: int = Field(ge=0)
would_delete_category_ids: list[str]
deleted_category_ids: list[str]
skipped: list[PricingCleanupSkippedItem]
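The schemas above imply a two-phase cleanup contract: matching and safety filtering always run, but deletion is applied only when `dry_run` is false. A sketch of that flow under assumed category shapes (the backend's actual matching logic may differ):

```python
# Dry-run contract sketch: same matching either way, deletion gated
# behind dry_run=False; categories with rules are skipped when
# delete_only_without_rules is set.

def plan_cleanup(categories, code_prefixes,
                 delete_only_without_rules=True, dry_run=True):
    matched = [
        c for c in categories
        if any(c["code"].startswith(p) for p in code_prefixes)
    ]
    safe = [c for c in matched
            if not (delete_only_without_rules and c["rules_count"] > 0)]
    return {
        "dry_run": dry_run,
        "matched_total": len(matched),
        "would_delete_count": len(safe),
        "deleted_count": 0 if dry_run else len(safe),
        "skipped_count": len(matched) - len(safe),
    }
```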

View File

@@ -56,6 +56,30 @@ class DraftStructureResponse(BaseModel):
total_groups: int
class EditorContextResponse(BaseModel):
scheme_id: str
current_scheme_version_id: str
current_version_number: int
scheme_status: str
scheme_version_status: str
editor_available: bool
current_is_draft: bool
create_draft_available: bool
recommended_action: str
class DraftSummaryResponse(BaseModel):
scheme_id: str
scheme_version_id: str
status: str
total_seats: int
total_sectors: int
total_groups: int
validation_summary: dict
structure_diff_summary: dict
publish_readiness: dict
class SeatPatchRequest(BaseModel):
seat_id: str | None = Field(default=None, max_length=128)
sector_id: str | None = Field(default=None, max_length=128)

View File

@@ -1,22 +1,4 @@
from pydantic import BaseModel, Field
class PricingCategoryCreateRequest(BaseModel):
@@ -29,6 +11,22 @@ class PricingCategoryUpdateRequest(BaseModel):
code: str | None = Field(default=None, max_length=128)
class PriceRuleCreateRequest(BaseModel):
pricing_category_id: str = Field(..., max_length=32)
target_type: str = Field(..., pattern="^(seat|group|sector)$")
target_ref: str = Field(..., min_length=1, max_length=128)
amount: str = Field(..., min_length=1, max_length=32)
currency: str = Field(default="RUB", min_length=3, max_length=8)
class PriceRuleUpdateRequest(BaseModel):
pricing_category_id: str = Field(..., max_length=32)
target_type: str = Field(..., pattern="^(seat|group|sector)$")
target_ref: str = Field(..., min_length=1, max_length=128)
amount: str = Field(..., min_length=1, max_length=32)
currency: str = Field(default="RUB", min_length=3, max_length=8)
class PricingCategoryItem(BaseModel):
pricing_category_id: str
scheme_id: str
@@ -37,6 +35,22 @@ class PricingCategoryItem(BaseModel):
created_at: str
class PriceRuleItem(BaseModel):
price_rule_id: str
scheme_id: str
pricing_category_id: str | None
target_type: str
target_ref: str
amount: str
currency: str
created_at: str
class PricingBundleResponse(BaseModel):
categories: list[PricingCategoryItem]
rules: list[PriceRuleItem]
class PricingCategoryCreateResponse(BaseModel):
pricing_category_id: str
scheme_id: str
@@ -51,50 +65,13 @@ class PricingCategoryUpdateResponse(BaseModel):
code: str | None
class PriceRuleCreateResponse(BaseModel):
price_rule_id: str
scheme_id: str
pricing_category_id: str
target_type: str
target_ref: str
amount: str
currency: str
@@ -104,10 +81,16 @@ class PriceRuleUpdateResponse(BaseModel):
pricing_category_id: str | None
target_type: str
target_ref: str
amount: str
currency: str
class DeleteResponse(BaseModel):
deleted: bool
pricing_category_id: str | None = None
price_rule_id: str | None = None
class EffectiveSeatPriceResponse(BaseModel):
scheme_id: str
scheme_version_id: str
@@ -117,15 +100,5 @@ class EffectiveSeatPriceResponse(BaseModel):
matched_rule_level: str
matched_target_ref: str
pricing_category_id: str | None
amount: str
currency: str
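With the pricing contract now carrying amounts as strings rather than `Decimal`, a caller may want to sanity-check an amount before sending it. A sketch of that kind of check (the backend's actual validation may differ):

```python
from decimal import Decimal, InvalidOperation

# Validate a string amount: must parse as a finite, non-negative decimal.
def is_valid_amount(raw: str) -> bool:
    try:
        value = Decimal(raw)
    except (InvalidOperation, ValueError):
        return False
    # is_finite() is checked first: comparing NaN would raise.
    return value.is_finite() and value >= 0
```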

View File

@@ -0,0 +1,55 @@
from pydantic import BaseModel
class PricingCategoryMutationResponse(BaseModel):
pricing_category_id: str
scheme_id: str
name: str
code: str
class PricingCategoryDeleteResponse(BaseModel):
deleted: bool
pricing_category_id: str
class PriceRuleMutationResponse(BaseModel):
price_rule_id: str
scheme_id: str
pricing_category_id: str
target_type: str
target_ref: str
amount: str
currency: str
class PriceRuleDeleteResponse(BaseModel):
deleted: bool
price_rule_id: str
class PricingRuleDiagnosticsItem(BaseModel):
price_rule_id: str
pricing_category_id: str
target_type: str
target_ref: str
amount: str
currency: str
matched_seats_count: int
matched_seat_ids: list[str]
orphan: bool
orphan_reason: str | None
class PricingRuleDiagnosticsSummary(BaseModel):
total_rules: int
orphan_rules_count: int
active_rules_count: int
matched_seats_total: int
class PricingRuleDiagnosticsResponse(BaseModel):
scheme_id: str
scheme_version_id: str
summary: PricingRuleDiagnosticsSummary
items: list[PricingRuleDiagnosticsItem]

View File

@@ -0,0 +1,45 @@
from __future__ import annotations
from pydantic import BaseModel
class PublishReadinessSnapshot(BaseModel):
available: bool
categories_count: int
rules_count: int
class PublishReadinessPricingCoverage(BaseModel):
total_seats: int
priced_seats: int
unpriced_seats: int
coverage_percent: float
class PublishReadinessFlags(BaseModel):
validation_publishable: bool
snapshot_available: bool
require_full_pricing_coverage: bool
full_pricing_coverage: bool
pricing_gate_passed: bool
is_ready_to_publish: bool
class PublishReadinessResponse(BaseModel):
scheme_id: str
scheme_version_id: str
status: str
validation_summary: dict
pricing_coverage: PublishReadinessPricingCoverage
snapshot: PublishReadinessSnapshot
readiness: PublishReadinessFlags
class PublishExecutionResponse(BaseModel):
scheme_id: str
scheme_version_id: str
status: str
current_version_number: int
published_at: str | None
pricing_snapshot: dict
validation_summary: dict

View File

@@ -1,5 +1,3 @@
from pydantic import BaseModel
@@ -17,7 +15,7 @@ class SchemeVersionListItem(BaseModel):
class SchemeVersionListResponse(BaseModel):
items: list[SchemeVersionListItem]
total: int
@@ -27,3 +25,13 @@ class SchemeVersionCreateResponse(BaseModel):
version_number: int
status: str
normalized_storage_path: str
class EnsureDraftResponse(BaseModel):
scheme_id: str
scheme_version_id: str
version_number: int
status: str
normalized_storage_path: str
created: bool
source_scheme_version_id: str | None = None
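`EnsureDraftResponse` encodes the ensure-draft idempotency contract: if the current version is already a draft it comes back with `created=False`; otherwise a new draft is minted from it and `created=True` with the source recorded. A sketch on simplified stand-ins for the real records:

```python
# Ensure-draft contract sketch: idempotent when already a draft,
# otherwise mints a new draft and records its source.

def ensure_draft(current: dict, mint_id: str) -> dict:
    if current["status"] == "draft":
        return {"scheme_version_id": current["scheme_version_id"],
                "created": False, "source_scheme_version_id": None}
    return {"scheme_version_id": mint_id,
            "created": True,
            "source_scheme_version_id": current["scheme_version_id"]}
```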

View File

@@ -1,12 +1,10 @@
from pydantic import BaseModel
class TestSeatPreviewResponse(BaseModel):
scheme_id: str
scheme_version_id: str
seat_id: str | None
element_id: str | None
sector_id: str | None
group_id: str | None
@@ -17,5 +15,7 @@ class TestSeatPreviewResponse(BaseModel):
matched_rule_level: str | None
matched_target_ref: str | None
pricing_category_id: str | None
amount: str | None
currency: str | None
reason_code: str
reason_message: str

View File

@@ -1,4 +1,4 @@
from fastapi import Depends, Header, HTTPException, status
from app.core.config import settings
from app.domain.roles import UserRole
@@ -14,7 +14,9 @@ def resolve_role(api_key: str) -> str | None:
return None
async def require_api_key(
x_api_key: str | None = Header(default=None, alias=settings.auth_header_name),
) -> str:
if not x_api_key:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
@@ -29,3 +31,12 @@ async def require_api_key(x_api_key: str | None = Header(default=None, alias="X-
)
return role
async def require_admin_api_key(role: str = Depends(require_api_key)) -> str:
if role != UserRole.ADMIN.value:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="Admin role required",
)
return role

View File

@@ -0,0 +1,41 @@
from __future__ import annotations
from fastapi import HTTPException, status
def raise_conflict(
*,
code: str,
message: str,
details: dict | None = None,
) -> None:
payload: dict = {
"code": code,
"message": message,
}
if details is not None:
payload["details"] = details
raise HTTPException(
status_code=status.HTTP_409_CONFLICT,
detail=payload,
)
def raise_unprocessable(
*,
code: str,
message: str,
details: dict | None = None,
) -> None:
payload: dict = {
"code": code,
"message": message,
}
if details is not None:
payload.update(details)
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
detail=payload,
)

View File

@@ -1,10 +1,25 @@
from fastapi import HTTPException, status
from app.repositories.scheme_versions import get_current_scheme_version
from app.repositories.schemes import get_scheme_record_by_scheme_id
from app.services.api_errors import raise_conflict
def build_stale_draft_version_detail(
*,
expected_scheme_version_id: str,
actual_scheme_version_id: str,
) -> dict:
return {
"code": "stale_draft_version",
"message": "Draft scheme version is stale. Reload current draft state before applying mutation.",
"expected_scheme_version_id": expected_scheme_version_id,
"actual_scheme_version_id": actual_scheme_version_id,
}
async def get_current_draft_context(
scheme_id: str,
expected_scheme_version_id: str | None = None,
):
scheme = await get_scheme_record_by_scheme_id(scheme_id)
version = await get_current_scheme_version(
scheme_id=scheme.scheme_id,
@@ -12,9 +27,38 @@ async def get_current_draft_context(scheme_id: str):
)
if version.status != "draft" or scheme.status != "draft":
raise_conflict(
code="draft_not_editable",
message="Current scheme version is not editable because it is not in draft state",
details={
"scheme_status": scheme.status,
"scheme_version_status": version.status,
"actual_scheme_version_id": version.scheme_version_id,
},
)
if expected_scheme_version_id and expected_scheme_version_id != version.scheme_version_id:
raise_conflict(
code="stale_draft_version",
message="Draft scheme version is stale. Reload current draft state before applying mutation.",
details={
"expected_scheme_version_id": expected_scheme_version_id,
"actual_scheme_version_id": version.scheme_version_id,
},
)
return scheme, version
async def validate_expected_draft_version_if_provided(
scheme_id: str,
expected_scheme_version_id: str | None,
):
if not expected_scheme_version_id:
return None
scheme, version = await get_current_draft_context(
scheme_id=scheme_id,
expected_scheme_version_id=expected_scheme_version_id,
)
return scheme, version

View File

@@ -1,8 +1,27 @@
from __future__ import annotations
from app.repositories.scheme_groups import list_scheme_version_groups
from app.repositories.scheme_seats import list_scheme_version_seats
from app.repositories.scheme_sectors import list_scheme_version_sectors
from app.services.api_errors import raise_unprocessable
def _raise_uniqueness_error(message: str, detail: dict | None = None) -> None:
if detail:
code = detail.pop("code", "editor_uniqueness_error")
msg = detail.pop("message", message)
raise_unprocessable(code=code, message=msg, details=detail)
else:
raise_unprocessable(code="editor_uniqueness_error", message=message)
def _raise_reference_error(message: str, detail: dict | None = None) -> None:
if detail:
code = detail.pop("code", "editor_reference_error")
msg = detail.pop("message", message)
raise_unprocessable(code=code, message=msg, details=detail)
else:
raise_unprocessable(code="editor_reference_error", message=message)
async def validate_single_seat_patch_uniqueness(
@@ -15,11 +34,18 @@ async def validate_single_seat_patch_uniqueness(
return
seats = await list_scheme_version_seats(scheme_version_id)
for row in seats:
if row.seat_record_id == seat_record_id:
continue
if row.seat_id == new_seat_id:
_raise_uniqueness_error(
f"Seat id already exists in current draft version: {new_seat_id}",
{
"code": "duplicate_seat_id",
"message": "Seat id already exists in current draft version",
"seat_id": new_seat_id,
"conflict_seat_record_id": row.seat_record_id,
},
)
@@ -29,36 +55,50 @@ async def validate_bulk_seat_patch_uniqueness(
items: list[dict],
) -> None:
seats = await list_scheme_version_seats(scheme_version_id)
existing_by_seat_id: dict[str, str] = {
row.seat_id: row.seat_record_id
for row in seats
if row.seat_id
}
seen_new_ids: dict[str, str] = {}
for item in items:
seat_record_id = item["seat_record_id"]
seat_id = item.get("seat_id")
if not seat_id:
continue
existing_record_id = existing_by_seat_id.get(seat_id)
if existing_record_id and existing_record_id != seat_record_id:
_raise_uniqueness_error(
f"Seat id already exists in current draft version: {seat_id}",
{
"code": "duplicate_seat_id",
"message": "Seat id already exists in current draft version",
"seat_id": seat_id,
"conflict_seat_record_id": existing_record_id,
"input_seat_record_id": seat_record_id,
},
)
seen_record_id = seen_new_ids.get(seat_id)
if seen_record_id and seen_record_id != seat_record_id:
_raise_uniqueness_error(
f"Seat id is duplicated inside bulk payload: {seat_id}",
{
"code": "duplicate_seat_id_in_payload",
"message": "Seat id is duplicated inside bulk payload",
"seat_id": seat_id,
"first_seat_record_id": seen_record_id,
"second_seat_record_id": seat_record_id,
},
)
seen_new_ids[seat_id] = seat_record_id
async def validate_sector_patch_uniqueness(
*,
@@ -69,12 +109,19 @@ async def validate_sector_patch_uniqueness(
if not new_sector_id:
return
sectors = await list_scheme_version_sectors(scheme_version_id)
for sector in sectors:
if sector.sector_id == new_sector_id and sector.sector_record_id != sector_record_id:
raise HTTPException(
status_code=status.HTTP_409_CONFLICT,
detail=f"sector_id already exists in draft version: {new_sector_id}",
rows = await list_scheme_version_sectors(scheme_version_id)
for row in rows:
if row.sector_record_id == sector_record_id:
continue
if row.sector_id == new_sector_id:
_raise_uniqueness_error(
f"Sector id already exists in current draft version: {new_sector_id}",
{
"code": "duplicate_sector_id",
"message": "Sector id already exists in current draft version",
"sector_id": new_sector_id,
"conflict_sector_record_id": row.sector_record_id,
},
)
@@ -87,10 +134,216 @@ async def validate_group_patch_uniqueness(
if not new_group_id:
return
groups = await list_scheme_version_groups(scheme_version_id)
for group in groups:
if group.group_id == new_group_id and group.group_record_id != group_record_id:
raise HTTPException(
status_code=status.HTTP_409_CONFLICT,
detail=f"group_id already exists in draft version: {new_group_id}",
rows = await list_scheme_version_groups(scheme_version_id)
for row in rows:
if row.group_record_id == group_record_id:
continue
if row.group_id == new_group_id:
_raise_uniqueness_error(
f"Group id already exists in current draft version: {new_group_id}",
{
"code": "duplicate_group_id",
"message": "Group id already exists in current draft version",
"group_id": new_group_id,
"conflict_group_record_id": row.group_record_id,
},
)
async def validate_create_sector_uniqueness(
*,
scheme_version_id: str,
sector_id: str,
element_id: str | None,
) -> None:
rows = await list_scheme_version_sectors(scheme_version_id)
for row in rows:
if row.sector_id == sector_id:
_raise_uniqueness_error(
f"Sector id already exists in current draft version: {sector_id}",
{
"code": "duplicate_sector_id",
"message": "Sector id already exists in current draft version",
"sector_id": sector_id,
"conflict_sector_record_id": row.sector_record_id,
},
)
if element_id is None:
return
for row in rows:
if row.element_id == element_id:
_raise_uniqueness_error(
f"Sector element binding already exists in current draft version: {element_id}",
{
"code": "duplicate_sector_element_id",
"message": "Sector element binding already exists in current draft version",
"element_id": element_id,
"conflict_sector_record_id": row.sector_record_id,
},
)
async def validate_create_group_uniqueness(
*,
scheme_version_id: str,
group_id: str,
element_id: str | None,
) -> None:
rows = await list_scheme_version_groups(scheme_version_id)
for row in rows:
if row.group_id == group_id:
_raise_uniqueness_error(
f"Group id already exists in current draft version: {group_id}",
{
"code": "duplicate_group_id",
"message": "Group id already exists in current draft version",
"group_id": group_id,
"conflict_group_record_id": row.group_record_id,
},
)
if element_id is None:
return
for row in rows:
if row.element_id == element_id:
_raise_uniqueness_error(
f"Group element binding already exists in current draft version: {element_id}",
{
"code": "duplicate_group_element_id",
"message": "Group element binding already exists in current draft version",
"element_id": element_id,
"conflict_group_record_id": row.group_record_id,
},
)
async def validate_single_seat_patch_references(
*,
scheme_version_id: str,
sector_id: str | None,
group_id: str | None,
) -> None:
sector_ids = {
row.sector_id
for row in await list_scheme_version_sectors(scheme_version_id)
if row.sector_id
}
group_ids = {
row.group_id
for row in await list_scheme_version_groups(scheme_version_id)
if row.group_id
}
if sector_id is not None and sector_id not in sector_ids:
_raise_reference_error(
f"Sector id does not exist in current draft version: {sector_id}",
{
"code": "unknown_sector_id",
"message": "Sector id does not exist in current draft version",
"sector_id": sector_id,
},
)
if group_id is not None and group_id not in group_ids:
_raise_reference_error(
f"Group id does not exist in current draft version: {group_id}",
{
"code": "unknown_group_id",
"message": "Group id does not exist in current draft version",
"group_id": group_id,
},
)
async def validate_bulk_seat_patch_references(
*,
scheme_version_id: str,
items: list[dict],
) -> None:
sector_ids = {
row.sector_id
for row in await list_scheme_version_sectors(scheme_version_id)
if row.sector_id
}
group_ids = {
row.group_id
for row in await list_scheme_version_groups(scheme_version_id)
if row.group_id
}
unknown_sector_refs = sorted(
{
item["sector_id"]
for item in items
if item.get("sector_id") is not None and item["sector_id"] not in sector_ids
}
)
if unknown_sector_refs:
_raise_reference_error(
"Bulk payload contains unknown sector_id values",
{
"code": "unknown_sector_ids",
"message": "Bulk payload contains unknown sector_id values",
"sector_ids": unknown_sector_refs,
},
)
unknown_group_refs = sorted(
{
item["group_id"]
for item in items
if item.get("group_id") is not None and item["group_id"] not in group_ids
}
)
if unknown_group_refs:
_raise_reference_error(
"Bulk payload contains unknown group_id values",
{
"code": "unknown_group_ids",
"message": "Bulk payload contains unknown group_id values",
"group_ids": unknown_group_refs,
},
)
async def validate_remap_target_references(
*,
scheme_version_id: str,
to_sector_id: str | None,
to_group_id: str | None,
) -> None:
sector_ids = {
row.sector_id
for row in await list_scheme_version_sectors(scheme_version_id)
if row.sector_id
}
group_ids = {
row.group_id
for row in await list_scheme_version_groups(scheme_version_id)
if row.group_id
}
if to_sector_id is not None and to_sector_id not in sector_ids:
_raise_reference_error(
f"Target sector_id does not exist in current draft version: {to_sector_id}",
{
"code": "unknown_target_sector_id",
"message": "Target sector_id does not exist in current draft version",
"sector_id": to_sector_id,
},
)
if to_group_id is not None and to_group_id not in group_ids:
_raise_reference_error(
f"Target group_id does not exist in current draft version: {to_group_id}",
{
"code": "unknown_target_group_id",
"message": "Target group_id does not exist in current draft version",
"group_id": to_group_id,
},
)

View File

@@ -0,0 +1,127 @@
from __future__ import annotations
from app.repositories.pricing_cleanup import (
delete_pricing_categories_by_ids,
list_pricing_categories_with_rule_counts,
)
def _matches_any_prefix(value: str | None, prefixes: list[str]) -> list[str]:
if not value:
return []
matches: list[str] = []
lower_value = value.lower()
for prefix in prefixes:
if lower_value.startswith(prefix.lower()):
matches.append(prefix)
return matches
async def build_pricing_cleanup_preview(
*,
scheme_id: str,
code_prefixes: list[str],
name_prefixes: list[str],
pricing_category_ids: list[str],
delete_only_without_rules: bool,
) -> dict:
rows = await list_pricing_categories_with_rule_counts(scheme_id=scheme_id)
requested_ids = set(pricing_category_ids)
items: list[dict] = []
safe_to_delete_count = 0
for row in rows:
matched_by: list[str] = []
for prefix in _matches_any_prefix(row["code"], code_prefixes):
matched_by.append(f"code_prefix:{prefix}")
for prefix in _matches_any_prefix(row["name"], name_prefixes):
matched_by.append(f"name_prefix:{prefix}")
if row["pricing_category_id"] in requested_ids:
matched_by.append("pricing_category_id")
if not matched_by:
continue
deletable = True
if delete_only_without_rules and row["rules_count"] > 0:
deletable = False
if deletable:
safe_to_delete_count += 1
items.append(
{
"pricing_category_id": row["pricing_category_id"],
"name": row["name"],
"code": row["code"],
"rules_count": row["rules_count"],
"matched_by": matched_by,
"deletable": deletable,
}
)
return {
"scheme_id": scheme_id,
"code_prefixes": code_prefixes,
"name_prefixes": name_prefixes,
"pricing_category_ids": pricing_category_ids,
"delete_only_without_rules": delete_only_without_rules,
"total_candidates": len(items),
"safe_to_delete_count": safe_to_delete_count,
"items": items,
}
async def execute_pricing_cleanup(
*,
scheme_id: str,
code_prefixes: list[str],
name_prefixes: list[str],
pricing_category_ids: list[str],
delete_only_without_rules: bool,
dry_run: bool,
) -> dict:
preview = await build_pricing_cleanup_preview(
scheme_id=scheme_id,
code_prefixes=code_prefixes,
name_prefixes=name_prefixes,
pricing_category_ids=pricing_category_ids,
delete_only_without_rules=delete_only_without_rules,
)
deletable_items = [item for item in preview["items"] if item["deletable"]]
skipped_items = [item for item in preview["items"] if not item["deletable"]]
would_delete_ids = [item["pricing_category_id"] for item in deletable_items]
deleted_ids: list[str] = []
if not dry_run and would_delete_ids:
await delete_pricing_categories_by_ids(
scheme_id=scheme_id,
pricing_category_ids=would_delete_ids,
)
deleted_ids = list(would_delete_ids)
return {
"scheme_id": scheme_id,
"dry_run": dry_run,
"delete_only_without_rules": delete_only_without_rules,
"requested_total": len(pricing_category_ids) + len(code_prefixes) + len(name_prefixes),
"matched_total": preview["total_candidates"],
"would_delete_count": len(would_delete_ids),
"deleted_count": 0 if dry_run else len(deleted_ids),
"skipped_count": len(skipped_items),
"would_delete_category_ids": would_delete_ids,
"deleted_category_ids": deleted_ids,
"skipped": [
{
"pricing_category_id": item["pricing_category_id"],
"reason": "category_has_rules",
}
for item in skipped_items
],
}

View File

@@ -0,0 +1,98 @@
from __future__ import annotations
from app.repositories.pricing import list_price_rules
from app.repositories.scheme_groups import list_scheme_version_groups
from app.repositories.scheme_seats import list_scheme_version_seats
from app.repositories.scheme_sectors import list_scheme_version_sectors
async def build_pricing_rule_diagnostics(
*,
scheme_id: str,
scheme_version_id: str,
) -> dict:
rules = await list_price_rules(scheme_id)
seats = await list_scheme_version_seats(scheme_version_id)
sectors = await list_scheme_version_sectors(scheme_version_id)
groups = await list_scheme_version_groups(scheme_version_id)
sector_ids = {row.sector_id for row in sectors if row.sector_id}
group_ids = {row.group_id for row in groups if row.group_id}
seat_ids = {row.seat_id for row in seats if row.seat_id}
items: list[dict] = []
matched_seats_total = 0
orphan_rules_count = 0
for rule in rules:
matched_seat_ids: list[str] = []
orphan = False
orphan_reason: str | None = None
if rule.target_type == "seat":
if rule.target_ref not in seat_ids:
orphan = True
orphan_reason = "target_seat_not_found"
else:
matched_seat_ids = [
seat.seat_id
for seat in seats
if seat.seat_id and seat.seat_id == rule.target_ref
]
elif rule.target_type == "group":
if rule.target_ref not in group_ids:
orphan = True
orphan_reason = "target_group_not_found"
else:
matched_seat_ids = [
seat.seat_id
for seat in seats
if seat.seat_id and seat.group_id == rule.target_ref
]
elif rule.target_type == "sector":
if rule.target_ref not in sector_ids:
orphan = True
orphan_reason = "target_sector_not_found"
else:
matched_seat_ids = [
seat.seat_id
for seat in seats
if seat.seat_id and seat.sector_id == rule.target_ref
]
else:
orphan = True
orphan_reason = "unsupported_target_type"
if orphan:
orphan_rules_count += 1
matched_seats_total += len(matched_seat_ids)
items.append(
{
"price_rule_id": rule.price_rule_id,
"pricing_category_id": rule.pricing_category_id,
"target_type": rule.target_type,
"target_ref": rule.target_ref,
"amount": str(rule.amount),
"currency": rule.currency,
"matched_seats_count": len(matched_seat_ids),
"matched_seat_ids": matched_seat_ids,
"orphan": orphan,
"orphan_reason": orphan_reason,
}
)
return {
"scheme_id": scheme_id,
"scheme_version_id": scheme_version_id,
"summary": {
"total_rules": len(items),
"orphan_rules_count": orphan_rules_count,
"active_rules_count": len(items) - orphan_rules_count,
"matched_seats_total": matched_seats_total,
},
"items": items,
}

View File

@@ -0,0 +1,99 @@
from __future__ import annotations
from app.core.config import settings
from app.repositories.scheme_seats import list_scheme_version_seats
from app.repositories.scheme_version_pricing import (
find_effective_snapshot_price_rule,
list_scheme_version_snapshot_categories,
list_scheme_version_snapshot_rules,
)
from app.services.scheme_validation import build_scheme_validation_report
async def _build_snapshot_pricing_coverage(*, scheme_version_id: str) -> dict:
seats = await list_scheme_version_seats(scheme_version_id)
snapshot_categories = await list_scheme_version_snapshot_categories(scheme_version_id)
snapshot_rules = await list_scheme_version_snapshot_rules(scheme_version_id)
snapshot_available = len(snapshot_categories) > 0 or len(snapshot_rules) > 0
priced_seats = 0
unpriced_seats = 0
for seat in seats:
if not seat.seat_id:
unpriced_seats += 1
continue
if not snapshot_available:
unpriced_seats += 1
continue
try:
await find_effective_snapshot_price_rule(
scheme_version_id=scheme_version_id,
seat_id=seat.seat_id,
group_id=seat.group_id,
sector_id=seat.sector_id,
)
priced_seats += 1
except Exception:
unpriced_seats += 1
total_seats = len(seats)
coverage_percent = round((priced_seats / total_seats) * 100, 2) if total_seats > 0 else 100.0
return {
"snapshot": {
"available": snapshot_available,
"categories_count": len(snapshot_categories),
"rules_count": len(snapshot_rules),
},
"pricing_coverage": {
"total_seats": total_seats,
"priced_seats": priced_seats,
"unpriced_seats": unpriced_seats,
"coverage_percent": coverage_percent,
},
}
async def build_publish_readiness(
*,
scheme_id: str,
scheme_version_id: str,
status: str,
) -> dict:
validation = await build_scheme_validation_report(
scheme_id=scheme_id,
scheme_version_id=scheme_version_id,
)
snapshot_state = await _build_snapshot_pricing_coverage(
scheme_version_id=scheme_version_id,
)
validation_publishable = bool(validation["summary"]["is_publishable"])
snapshot_available = bool(snapshot_state["snapshot"]["available"])
full_pricing_coverage = snapshot_state["pricing_coverage"]["unpriced_seats"] == 0
require_full_pricing_coverage = bool(settings.publish_require_full_pricing_coverage)
pricing_gate_passed = snapshot_available and (
full_pricing_coverage if require_full_pricing_coverage else True
)
is_ready_to_publish = validation_publishable and pricing_gate_passed
return {
"scheme_id": scheme_id,
"scheme_version_id": scheme_version_id,
"status": status,
"validation_summary": validation["summary"],
"pricing_coverage": snapshot_state["pricing_coverage"],
"snapshot": snapshot_state["snapshot"],
"readiness": {
"validation_publishable": validation_publishable,
"snapshot_available": snapshot_available,
"require_full_pricing_coverage": require_full_pricing_coverage,
"full_pricing_coverage": full_pricing_coverage,
"pricing_gate_passed": pricing_gate_passed,
"is_ready_to_publish": is_ready_to_publish,
},
}

View File

@@ -1,15 +1,17 @@
from fastapi import HTTPException, status
from __future__ import annotations
from app.repositories.audit import create_audit_event
from app.repositories.scheme_version_pricing import replace_scheme_version_pricing_snapshot
from app.repositories.scheme_versions import get_current_scheme_version
from app.repositories.schemes import get_scheme_record_by_scheme_id, publish_scheme
from app.services.scheme_validation import build_scheme_validation_report
from app.services.api_errors import raise_conflict
from app.services.publish_readiness import build_publish_readiness
async def publish_current_draft_scheme(
*,
scheme_id: str,
expected_scheme_version_id: str | None = None,
) -> dict:
scheme = await get_scheme_record_by_scheme_id(scheme_id)
version = await get_current_scheme_version(
@@ -18,20 +20,43 @@ async def publish_current_draft_scheme(
)
if scheme.status != "draft" or version.status != "draft":
raise HTTPException(
status_code=status.HTTP_409_CONFLICT,
detail="Current scheme version is not publishable because it is not in draft state",
raise_conflict(
code="publish_not_ready",
message="Current scheme version is not publishable because it is not in draft state.",
details={
"scheme_status": scheme.status,
"scheme_version_status": version.status,
"scheme_version_id": version.scheme_version_id,
},
)
validation = await build_scheme_validation_report(
if expected_scheme_version_id and expected_scheme_version_id != version.scheme_version_id:
raise_conflict(
code="publish_not_ready",
message="Draft scheme version is stale. Reload current draft state before publishing.",
details={
"expected_scheme_version_id": expected_scheme_version_id,
"actual_scheme_version_id": version.scheme_version_id,
},
)
readiness = await build_publish_readiness(
scheme_id=scheme.scheme_id,
scheme_version_id=version.scheme_version_id,
status=version.status,
)
if not validation["summary"]["is_publishable"]:
raise HTTPException(
status_code=status.HTTP_409_CONFLICT,
detail="Scheme is not publishable in current state",
if not readiness["readiness"]["is_ready_to_publish"]:
raise_conflict(
code="publish_not_ready",
message="Scheme is not ready to publish in current draft state.",
details={
"scheme_version_id": version.scheme_version_id,
"readiness": readiness["readiness"],
"validation_summary": readiness["validation_summary"],
"pricing_coverage": readiness["pricing_coverage"],
"snapshot": readiness["snapshot"],
},
)
snapshot = await replace_scheme_version_pricing_snapshot(
@@ -61,5 +86,5 @@ async def publish_current_draft_scheme(
"current_version_number": published_row.current_version_number,
"published_at": published_row.published_at.isoformat() if published_row.published_at else None,
"pricing_snapshot": snapshot,
"validation_summary": validation["summary"],
"validation_summary": readiness["validation_summary"],
}

View File

@@ -1,11 +1,11 @@
from __future__ import annotations
from fastapi import HTTPException, status
from app.repositories.scheme_seats import (
bulk_remap_scheme_version_seats,
list_scheme_version_seats,
)
from app.services.api_errors import raise_unprocessable
from app.services.editor_validation import validate_remap_target_references
def _match_seat(
@@ -34,11 +34,17 @@ async def preview_remap(
to_group_id: str | None,
) -> list[dict]:
if not any([seat_record_ids, from_sector_id, from_group_id]):
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
detail="At least one remap filter must be provided",
raise_unprocessable(
code="remap_filter_required",
message="At least one remap filter must be provided",
)
await validate_remap_target_references(
scheme_version_id=scheme_version_id,
to_sector_id=to_sector_id,
to_group_id=to_group_id,
)
seats = await list_scheme_version_seats(scheme_version_id)
seat_record_id_set = set(seat_record_ids) if seat_record_ids else None

View File

@@ -6,9 +6,8 @@ from app.repositories.scheme_sectors import list_scheme_version_sectors
from app.services.baseline_selector import select_baseline_scheme_version
def _serialize_sector(row) -> dict:
def _sector_compare_value(row) -> dict:
return {
"sector_record_id": row.sector_record_id,
"element_id": row.element_id,
"sector_id": row.sector_id,
"name": row.name,
@@ -16,9 +15,14 @@ def _serialize_sector(row) -> dict:
}
def _serialize_group(row) -> dict:
def _sector_response_value(row) -> dict:
payload = _sector_compare_value(row)
payload["sector_record_id"] = row.sector_record_id
return payload
def _group_compare_value(row) -> dict:
return {
"group_record_id": row.group_record_id,
"element_id": row.element_id,
"group_id": row.group_id,
"name": row.name,
@@ -26,9 +30,14 @@ def _serialize_group(row) -> dict:
}
def _serialize_seat(row) -> dict:
def _group_response_value(row) -> dict:
payload = _group_compare_value(row)
payload["group_record_id"] = row.group_record_id
return payload
def _seat_compare_value(row) -> dict:
return {
"seat_record_id": row.seat_record_id,
"element_id": row.element_id,
"seat_id": row.seat_id,
"sector_id": row.sector_id,
@@ -38,19 +47,33 @@ def _serialize_seat(row) -> dict:
}
def _build_diff(before_map: dict, after_map: dict) -> list[dict]:
keys = sorted(set(before_map.keys()) | set(after_map.keys()))
def _seat_response_value(row) -> dict:
payload = _seat_compare_value(row)
payload["seat_record_id"] = row.seat_record_id
return payload
def _build_diff(
*,
before_compare_map: dict,
after_compare_map: dict,
before_payload_map: dict,
after_payload_map: dict,
) -> list[dict]:
keys = sorted(set(before_payload_map.keys()) | set(after_payload_map.keys()))
result: list[dict] = []
for key in keys:
before = before_map.get(key)
after = after_map.get(key)
before_compare = before_compare_map.get(key)
after_compare = after_compare_map.get(key)
before_payload = before_payload_map.get(key)
after_payload = after_payload_map.get(key)
if before is None and after is not None:
if before_compare is None and after_compare is not None:
status = "added"
elif before is not None and after is None:
elif before_compare is not None and after_compare is None:
status = "removed"
elif before != after:
elif before_compare != after_compare:
status = "changed"
else:
status = "unchanged"
@@ -59,13 +82,22 @@ def _build_diff(before_map: dict, after_map: dict) -> list[dict]:
{
"key": key,
"status": status,
"before": before,
"after": after,
"before": before_payload,
"after": after_payload,
}
)
return result
def _sector_key(row) -> str:
return row.sector_id if row.sector_id else (row.element_id if row.element_id else row.sector_record_id)
def _group_key(row) -> str:
return row.group_id if row.group_id else (row.element_id if row.element_id else row.group_record_id)
def _seat_key(row) -> str:
return row.seat_id if row.seat_id else (row.element_id if row.element_id else row.seat_record_id)
async def build_structure_diff(
*,
scheme_id: str,
@@ -83,32 +115,68 @@ async def build_structure_diff(
draft_seats = await list_scheme_version_seats(draft_scheme_version_id)
if baseline is None:
baseline_sector_map = {}
baseline_group_map = {}
baseline_seat_map = {}
baseline_sector_compare_map = {}
baseline_group_compare_map = {}
baseline_seat_compare_map = {}
baseline_sector_payload_map = {}
baseline_group_payload_map = {}
baseline_seat_payload_map = {}
baseline_scheme_version_id = None
else:
baseline_scheme_version_id = baseline.scheme_version_id
baseline_sector_map = {
row.sector_record_id: _serialize_sector(row)
for row in await list_scheme_version_sectors(baseline.scheme_version_id)
baseline_sectors = await list_scheme_version_sectors(baseline.scheme_version_id)
baseline_groups = await list_scheme_version_groups(baseline.scheme_version_id)
baseline_seats = await list_scheme_version_seats(baseline.scheme_version_id)
baseline_sector_compare_map = {
_sector_key(row): _sector_compare_value(row)
for row in baseline_sectors
}
baseline_group_map = {
row.group_record_id: _serialize_group(row)
for row in await list_scheme_version_groups(baseline.scheme_version_id)
baseline_sector_payload_map = {
_sector_key(row): _sector_response_value(row)
for row in baseline_sectors
}
baseline_seat_map = {
row.seat_record_id: _serialize_seat(row)
for row in await list_scheme_version_seats(baseline.scheme_version_id)
baseline_group_compare_map = {
_group_key(row): _group_compare_value(row)
for row in baseline_groups
}
baseline_group_payload_map = {
_group_key(row): _group_response_value(row)
for row in baseline_groups
}
baseline_seat_compare_map = {
_seat_key(row): _seat_compare_value(row)
for row in baseline_seats
}
baseline_seat_payload_map = {
_seat_key(row): _seat_response_value(row)
for row in baseline_seats
}
draft_sector_map = {row.sector_record_id: _serialize_sector(row) for row in draft_sectors}
draft_group_map = {row.group_record_id: _serialize_group(row) for row in draft_groups}
draft_seat_map = {row.seat_record_id: _serialize_seat(row) for row in draft_seats}
draft_sector_compare_map = {_sector_key(row): _sector_compare_value(row) for row in draft_sectors}
draft_sector_payload_map = {_sector_key(row): _sector_response_value(row) for row in draft_sectors}
draft_group_compare_map = {_group_key(row): _group_compare_value(row) for row in draft_groups}
draft_group_payload_map = {_group_key(row): _group_response_value(row) for row in draft_groups}
draft_seat_compare_map = {_seat_key(row): _seat_compare_value(row) for row in draft_seats}
draft_seat_payload_map = {_seat_key(row): _seat_response_value(row) for row in draft_seats}
sector_diff = _build_diff(baseline_sector_map, draft_sector_map)
group_diff = _build_diff(baseline_group_map, draft_group_map)
seat_diff = _build_diff(baseline_seat_map, draft_seat_map)
sector_diff = _build_diff(
before_compare_map=baseline_sector_compare_map,
after_compare_map=draft_sector_compare_map,
before_payload_map=baseline_sector_payload_map,
after_payload_map=draft_sector_payload_map,
)
group_diff = _build_diff(
before_compare_map=baseline_group_compare_map,
after_compare_map=draft_group_compare_map,
before_payload_map=baseline_group_payload_map,
after_payload_map=draft_group_payload_map,
)
seat_diff = _build_diff(
before_compare_map=baseline_seat_compare_map,
after_compare_map=draft_seat_compare_map,
before_payload_map=baseline_seat_payload_map,
after_payload_map=draft_seat_payload_map,
)
return {
"baseline_scheme_version_id": baseline_scheme_version_id,

View File

@@ -20,6 +20,8 @@
- GET /api/v1/schemes/{scheme_id}/current
- GET /api/v1/schemes/{scheme_id}/versions
- POST /api/v1/schemes/{scheme_id}/versions
- GET /api/v1/schemes/{scheme_id}/publish/validation
- GET /api/v1/schemes/{scheme_id}/draft/publish-readiness
- POST /api/v1/schemes/{scheme_id}/publish
- POST /api/v1/schemes/{scheme_id}/unpublish
- POST /api/v1/schemes/{scheme_id}/rollback
@@ -42,8 +44,58 @@
- PUT /api/v1/schemes/{scheme_id}/pricing/rules/{price_rule_id}
- DELETE /api/v1/schemes/{scheme_id}/pricing/rules/{price_rule_id}
## app/api/routes/pricing_diagnostics.py
- GET /api/v1/schemes/{scheme_id}/pricing/coverage
- GET /api/v1/schemes/{scheme_id}/pricing/unpriced-seats
- GET /api/v1/schemes/{scheme_id}/pricing/explain/{seat_id}
- GET /api/v1/schemes/{scheme_id}/pricing/rules/diagnostics
## app/api/routes/test_mode.py
- GET /api/v1/schemes/{scheme_id}/test/seats/{seat_id}
## app/api/routes/audit.py
- GET /api/v1/schemes/{scheme_id}/audit
## app/api/routes/publish.py
- POST /api/v1/schemes/{scheme_id}/draft/pricing/snapshot
- GET /api/v1/schemes/{scheme_id}/draft/publish-preview
- POST /api/v1/schemes/{scheme_id}/draft/remap/preview
- POST /api/v1/schemes/{scheme_id}/draft/remap/apply
## app/api/routes/editor.py
- GET /api/v1/schemes/{scheme_id}/editor/context
- POST /api/v1/schemes/{scheme_id}/draft/ensure
- GET /api/v1/schemes/{scheme_id}/draft/summary
- GET /api/v1/schemes/{scheme_id}/draft/structure
- GET /api/v1/schemes/{scheme_id}/draft/validation
- GET /api/v1/schemes/{scheme_id}/draft/compare-preview
- GET /api/v1/schemes/{scheme_id}/draft/seats/records/{seat_record_id}
- GET /api/v1/schemes/{scheme_id}/draft/sectors/records/{sector_record_id}
- GET /api/v1/schemes/{scheme_id}/draft/groups/records/{group_record_id}
- POST /api/v1/schemes/{scheme_id}/draft/sectors
- POST /api/v1/schemes/{scheme_id}/draft/groups
- DELETE /api/v1/schemes/{scheme_id}/draft/sectors/records/{sector_record_id}
- DELETE /api/v1/schemes/{scheme_id}/draft/groups/records/{group_record_id}
- PATCH /api/v1/schemes/{scheme_id}/draft/seats/records/{seat_record_id}
- POST /api/v1/schemes/{scheme_id}/draft/seats/bulk
- PATCH /api/v1/schemes/{scheme_id}/draft/sectors/records/{sector_record_id}
- PATCH /api/v1/schemes/{scheme_id}/draft/groups/records/{group_record_id}
- POST /api/v1/schemes/{scheme_id}/draft/repair-references
## app/api/routes/admin.py
- GET /api/v1/admin/schemes/{scheme_id}/current/artifacts
- GET /api/v1/admin/schemes/{scheme_id}/current/validation
- POST /api/v1/admin/schemes/{scheme_id}/current/display/regenerate
- POST /api/v1/admin/display/backfill
- GET /api/v1/admin/artifacts/publish-preview/audit
- POST /api/v1/admin/artifacts/publish-preview/cleanup
## app/api/routes/admin_cleanup.py
- GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview
- POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup
## Notes
- This file is an operational route index, not a generated OpenAPI export.
- Update this map in the same change set when adding, removing, renaming, or moving routes.
- Query guards such as expected_current_scheme_version_id / expected_scheme_version_id are part of the operational contract for optimistic concurrency on mutable flows.
- Draft editor flow starts from editor/context and draft/ensure, not from direct blind mutation calls.
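The optimistic-concurrency note above can be illustrated from the client side. A minimal sketch, assuming the 409 conflict body surfaces the `code` and `details` fields that the service passes to `raise_conflict` (the exact JSON envelope and key paths are assumptions; adjust them to the deployed error contract):

```python
def is_stale_draft_conflict(status_code: int, payload: dict) -> bool:
    """Detect the optimistic-concurrency conflict raised when the
    expected_scheme_version_id query guard no longer matches the
    current draft version, so the client knows to reload draft state.
    """
    if status_code != 409:
        return False
    # The envelope key ("detail") is an assumption about the wire format.
    detail = payload.get("detail", payload)
    if not isinstance(detail, dict):
        return False
    if detail.get("code") != "publish_not_ready":
        return False
    details = detail.get("details", {})
    return (
        "expected_scheme_version_id" in details
        and "actual_scheme_version_id" in details
    )
```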

View File

@@ -0,0 +1,528 @@
# Backend Integration Contract
This document is the frontend handoff contract for the `svg-service` backend. It is written as an integration baseline, not as an internal backend README.
## 1. Base URL and Auth
- Base URL: `http://<host>:9020`
- API prefix: `/api/v1`
- Auth header: `X-API-Key`
All non-`/healthz` routes require an API key.
Auth failure contract:
- missing API key -> `401` with string detail: `Missing API key`
- invalid API key -> `403` with string detail: `Invalid API key`
- valid non-admin key on admin-only route -> `403` with string detail: `Admin role required`
## 2. Roles and Access Boundaries
- `admin`
- full access to protected routes
- required for all `/api/v1/admin/...` routes
- `operator`
- allowed on non-admin protected routes
- denied on admin-only routes
- `viewer`
- allowed on non-admin protected routes
- denied on admin-only routes
Frontend implication:
- admin UI must treat admin routes as optional capabilities gated by role
- frontend must not assume `operator` or `viewer` can call cleanup, audit, backfill, or current-artifact admin routes
## 3. Core Entities
### Upload
Represents one uploaded SVG source and its normalized/sanitized artifacts.
Important fields:
- `upload_id`
- `original_filename`
- `content_type`
- `size_bytes`
- `original_storage_path`
- `sanitized_storage_path`
- `normalized_storage_path`
- `normalized_elements_count`
- `normalized_seats_count`
- `normalized_groups_count`
- `normalized_sectors_count`
### Scheme
Top-level business object created from an upload.
Important fields:
- `scheme_id`
- `source_upload_id`
- `name`
- `status`
- `current_version_number`
- `published_at`
### Scheme Version
Versioned snapshot of the scheme structure and publish state.
Important fields:
- `scheme_version_id`
- `scheme_id`
- `version_number`
- `status`
- `normalized_storage_path`
- `normalized_*_count`
### Sector
Structure entity in a specific `scheme_version`.
Important fields:
- `sector_record_id`
- `sector_id`
- `element_id`
- `name`
Business identity priority:
- use `sector_id` when present
- fallback to `element_id`
- never treat `sector_record_id` as business identity across versions
### Group
Important fields:
- `group_record_id`
- `group_id`
- `element_id`
- `name`
Business identity priority:
- use `group_id` when present
- fallback to `element_id`
- never treat `group_record_id` as business identity across versions
### Seat
Important fields:
- `seat_record_id`
- `seat_id`
- `element_id`
- `sector_id`
- `group_id`
- `row_label`
- `seat_number`
Business identity priority:
- use `seat_id` when present
- fallback to `element_id`
- never treat `seat_record_id` as business identity across versions
### Pricing Category
Important fields:
- `pricing_category_id`
- `scheme_id`
- `name`
- `code`
### Price Rule
Important fields:
- `price_rule_id`
- `scheme_id`
- `pricing_category_id`
- `target_type`
- `target_ref`
- `amount`
- `currency`
### Artifact
Artifact registry row for generated backend files.
Important fields:
- `artifact_id`
- `artifact_type`
- `artifact_variant`
- `storage_path`
- `status`
- `meta_json`
Important artifact types currently exercised by regression:
- `sanitized_svg`
- `normalized_json`
- `display_svg`
- `publish_preview`
## 4. Lifecycle State Machine
### Fresh Upload
Flow:
1. `POST /api/v1/schemes/upload`
2. backend creates:
- `upload`
- `scheme`
- initial `scheme_version`
- structure rows
- initial artifacts
Expected initial state:
- `scheme.status = draft`
- `scheme.current_version_number = 1`
- current version status = `draft`
### Current Draft
If current scheme/version is still draft:
- editor works directly against current version
- `draft/ensure` is idempotent
- `draft/ensure` returns `created=false`
### Ensure Draft From Published Current
If current scheme/version is published:
- `POST /api/v1/schemes/{scheme_id}/draft/ensure`
- backend creates a new draft version
- current pointer switches to the new draft
- version number increments
### Publish
Preconditions:
- current scheme is draft
- current version is draft
- publish readiness must be satisfied
Publish path:
1. optional `draft/pricing/snapshot`
2. `GET draft/publish-readiness`
3. optional `GET draft/publish-preview`
4. `POST /api/v1/schemes/{scheme_id}/publish`
Expected result:
- scheme becomes `published`
- current version becomes `published`
### Rollback
Path:
- `POST /api/v1/schemes/{scheme_id}/rollback`
Effect:
- current pointer switches to requested historical `version_number`
- scheme returns to `draft`
- target version becomes current editable draft
### Unpublish
Path:
- `POST /api/v1/schemes/{scheme_id}/unpublish`
Effect:
- current scheme becomes `draft`
- current version becomes `draft`
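The lifecycle transitions above can be condensed into a small expectation table a client could assert against after each operation (a hedged sketch; the status strings follow the contract, the helper and operation names are illustrative):

```python
# Expected (scheme.status, current_version.status) after each lifecycle
# operation, per the state machine described above. Operation names are
# used here for illustration only; they are not API identifiers.
EXPECTED_STATE_AFTER = {
    "upload": ("draft", "draft"),          # fresh upload creates version 1 as draft
    "publish": ("published", "published"), # publish promotes scheme and current version
    "rollback": ("draft", "draft"),        # rollback returns to an editable draft
    "unpublish": ("draft", "draft"),       # unpublish demotes scheme and current version
}

def check_lifecycle_result(operation: str, scheme_status: str, version_status: str) -> bool:
    """Return True if the observed statuses match the contract for the operation."""
    return EXPECTED_STATE_AFTER.get(operation) == (scheme_status, version_status)
```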
## 5. Editor Flow
### Entry Point
- `GET /api/v1/schemes/{scheme_id}/editor/context`
Use it first to decide whether:
- the current draft can be edited directly
- or a new draft must be created from the published current
Important response fields:
- `current_scheme_version_id`
- `current_version_number`
- `scheme_status`
- `scheme_version_status`
- `current_is_draft`
- `create_draft_available`
- `recommended_action`
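The entry decision can be sketched as a pure function over the response fields listed above (illustrative client logic; the `"read_only"` fallback branch is an assumption for when no editable path is advertised, not a documented contract state):

```python
def decide_editor_action(context: dict) -> str:
    """Decide how to enter the editor from GET .../editor/context.

    Mirrors the contract: edit the current draft directly when it is a draft,
    otherwise create a new draft from the published current via draft/ensure.
    Field names come from the editor context response fields.
    """
    if context["current_is_draft"]:
        return "edit_current_draft"
    if context["create_draft_available"]:
        return "ensure_new_draft"
    return "read_only"  # assumption: no editable path is available
```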
### Draft Read Models
- `POST /api/v1/schemes/{scheme_id}/draft/ensure`
- `GET /api/v1/schemes/{scheme_id}/draft/summary`
- `GET /api/v1/schemes/{scheme_id}/draft/structure`
- `GET /api/v1/schemes/{scheme_id}/draft/validation`
- `GET /api/v1/schemes/{scheme_id}/draft/compare-preview`
Frontend should treat `draft/structure` as the main editable read model.
### Patch Operations
Supported flows:
- single seat patch
- bulk seat patch
- sector create/patch/delete
- group create/patch/delete
- repair references
- remap preview/apply
Frontend rule:
- always send `expected_scheme_version_id` when mutating or reading draft state after editor entry
### Stale Conflict Handling
If the backend returns a stale-version or draft-editability conflict:
- stop optimistic local mutation flow
- re-read:
- `editor/context`
- `draft/summary`
- `draft/structure`
Do not keep editing against stale cached `scheme_version_id`.
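A minimal sketch of the reload decision (the conflict codes come from the typed error catalog later in this document; the helper is illustrative client logic, not an API):

```python
# Conflict codes, per the typed error catalog, that invalidate the cached
# scheme_version_id and require a full editor-state reload.
STALE_CODES = {"stale_draft_version", "stale_current_version", "draft_not_editable"}

def must_reload_editor_state(status: int, detail) -> bool:
    """Return True when the client must drop its cached scheme_version_id and
    re-read editor/context, draft/summary, and draft/structure.

    Inspects the structured detail.code rather than the HTTP status alone,
    since not all 409 responses carry the same meaning.
    """
    if status != 409:
        return False
    code = detail.get("code") if isinstance(detail, dict) else None
    return code in STALE_CODES
```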
## 6. Pricing Flow
### Categories
- `GET /api/v1/schemes/{scheme_id}/pricing`
- `POST /api/v1/schemes/{scheme_id}/pricing/categories`
- `PUT /api/v1/schemes/{scheme_id}/pricing/categories/{pricing_category_id}`
- `DELETE /api/v1/schemes/{scheme_id}/pricing/categories/{pricing_category_id}`
### Rules
- `POST /api/v1/schemes/{scheme_id}/pricing/rules`
- `PUT /api/v1/schemes/{scheme_id}/pricing/rules/{price_rule_id}`
- `DELETE /api/v1/schemes/{scheme_id}/pricing/rules/{price_rule_id}`
### Read Models
- `GET /api/v1/schemes/{scheme_id}/pricing`
- `GET /api/v1/schemes/{scheme_id}/pricing/coverage`
- `GET /api/v1/schemes/{scheme_id}/pricing/unpriced-seats`
- `GET /api/v1/schemes/{scheme_id}/pricing/explain/{seat_id}`
- `GET /api/v1/schemes/{scheme_id}/pricing/rules/diagnostics`
- `GET /api/v1/schemes/{scheme_id}/current/seats/{seat_id}/price`
- `GET /api/v1/schemes/{scheme_id}/test/seats/{seat_id}`
Frontend rule:
- empty pricing on a fresh upload is valid
- do not treat `categories=[]` and `rules=[]` as a backend failure
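The empty-state rule can be made explicit in client code (a sketch; `pricing_state` and its return labels are hypothetical names, while `categories`/`rules` are the bundle fields from the contract):

```python
def pricing_state(bundle: dict) -> str:
    """Classify a pricing bundle read from GET .../pricing.

    Empty categories and rules on a fresh upload is a valid 'unconfigured'
    state, never an error condition.
    """
    if not bundle.get("categories") and not bundle.get("rules"):
        return "unconfigured"
    return "configured"
```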
## 7. Publish Flow
Main endpoints:
- `POST /api/v1/schemes/{scheme_id}/draft/pricing/snapshot`
- `GET /api/v1/schemes/{scheme_id}/draft/publish-readiness`
- `GET /api/v1/schemes/{scheme_id}/draft/publish-preview`
- `POST /api/v1/schemes/{scheme_id}/publish`
Frontend sequencing rule:
1. ensure draft
2. mutate if needed
3. create/refresh pricing
4. build pricing snapshot
5. read publish readiness
6. read publish preview if UI needs preview surface
7. publish
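The sequencing rule above can be expressed as an ordered list a client walks step by step (step names are illustrative labels for the seven numbered steps, not API identifiers):

```python
# Publish sequencing per the contract above, as an ordered list of steps.
PUBLISH_SEQUENCE = [
    "ensure_draft",
    "mutate_if_needed",
    "configure_pricing",
    "pricing_snapshot",
    "read_publish_readiness",
    "read_publish_preview",  # optional, only if the UI needs a preview surface
    "publish",
]

def next_step(completed: list) -> str:
    """Return the next publish step, or '' when the sequence is done."""
    for step in PUBLISH_SEQUENCE:
        if step not in completed:
            return step
    return ""
```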
## 8. Admin/Ops Flow
Admin-only endpoints:
- `GET /api/v1/admin/schemes/{scheme_id}/current/artifacts`
- `GET /api/v1/admin/schemes/{scheme_id}/current/validation`
- `POST /api/v1/admin/schemes/{scheme_id}/current/display/regenerate`
- `POST /api/v1/admin/display/backfill`
- `GET /api/v1/admin/artifacts/publish-preview/audit`
- `POST /api/v1/admin/artifacts/publish-preview/cleanup`
- `GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview`
- `POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup`
Healthy publish-preview audit contract:
- `orphan_files_count = 0`
- `missing_files_for_db_rows_count = 0`
- `db_rows_count == disk_files_count`
Frontend implication:
- admin tools must not be shown as generally available functionality
- admin cleanup/destructive flows must be role-gated on the client and still handle backend `403`
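The healthy-audit contract above is small enough to check mechanically (an illustrative helper over the documented audit counter fields):

```python
def audit_is_healthy(audit: dict) -> bool:
    """Check the healthy publish-preview audit contract:
    no orphan files, no missing files for DB rows, and row/file parity."""
    return (
        audit["orphan_files_count"] == 0
        and audit["missing_files_for_db_rows_count"] == 0
        and audit["db_rows_count"] == audit["disk_files_count"]
    )
```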
## 9. Typed Error Catalog
### Auth
- `401` string detail: `Missing API key`
- `403` string detail: `Invalid API key`
- `403` string detail: `Admin role required`
### Lifecycle / Draft / Publish
- `stale_draft_version`
- `stale_current_version`
- `current_version_inconsistent`
- `draft_not_editable`
- `publish_not_ready`
### Editor Uniqueness / References
- `editor_uniqueness_error`
- `editor_reference_error`
- `duplicate_seat_id`
- `duplicate_seat_id_in_payload`
- `duplicate_sector_id`
- `duplicate_group_id`
- `duplicate_sector_element_id`
- `duplicate_group_element_id`
- `unknown_sector_id`
- `unknown_group_id`
- `unknown_sector_ids`
- `unknown_group_ids`
- `unknown_target_sector_id`
- `unknown_target_group_id`
- `business_identifier_nullification_forbidden`
### Pricing / Remap / Test
- `invalid_amount`
- `remap_filter_required`
- `test_preview_failed`
### Validation Report Codes
These appear inside validation report payloads rather than as top-level HTTP conflict codes:
- `duplicate_seat_ids`
- `missing_seat_contract`
- `seats_without_sector_or_group`
- `seats_without_price`
Frontend rule:
- do not parse only HTTP status
- always inspect structured `detail.code` when `detail` is an object
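Since typed errors carry an object `detail` with a `code` field while auth and some `404` cases still return plain string details, a client needs to handle both shapes. A hedged sketch (the `http_*` fallback key is an illustrative convention, not part of the contract):

```python
def error_code(status: int, detail) -> str:
    """Extract a stable error code from a backend error response.

    Typed errors carry an object detail with a 'code' field; auth and some
    404 cases still return plain string details, so both shapes are handled.
    """
    if isinstance(detail, dict) and "code" in detail:
        return detail["code"]
    return f"http_{status}"  # illustrative fallback key for string-detail errors
```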
## 10. Frontend Obligations
- always handle auth failures `401` and `403`
- always handle stale/conflict responses on draft, publish, and lifecycle operations
- never treat `*_record_id` as stable cross-version business identity
- always prefer business ids:
- seat -> `seat_id`, fallback `element_id`
- sector -> `sector_id`, fallback `element_id`
- group -> `group_id`, fallback `element_id`
- re-read current/draft state after:
- any `409`
- publish
- rollback
- unpublish
- `draft/ensure` returning a newly created draft
- do not assume current version remains stable across concurrent operator sessions
- do not assume publish-preview artifacts or display artifacts are frontend-owned resources
## 11. Non-Persistent Assumptions Frontend Must Avoid
The frontend must not assume that these remain stable forever:
- `scheme_version_id`
- `seat_record_id`
- `sector_record_id`
- `group_record_id`
- artifact `storage_path`
- publish-preview cache artifacts
These are safe to treat as business-stable:
- `scheme_id`
- `version_number` within one scheme
- `seat_id` when present
- `sector_id` when present
- `group_id` when present
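The two identifier lists above translate directly into a client caching rule: persist only the business-stable set and treat everything else as refetchable. A sketch (the helper name is hypothetical; the field sets come from the lists above):

```python
# Identifier stability per the lists above.
BUSINESS_STABLE = {"scheme_id", "version_number", "seat_id", "sector_id", "group_id"}
VERSION_LOCAL = {"scheme_version_id", "seat_record_id", "sector_record_id",
                 "group_record_id", "storage_path"}

def cacheable_keys(record: dict) -> dict:
    """Keep only fields that are safe to persist across versions."""
    return {k: v for k, v in record.items() if k in BUSINESS_STABLE}
```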
## 12. Known Limitations / Deferred Tech Debt
- some lifecycle negative contracts still return mixed styles:
- typed object conflicts for `409`
- plain string details for some `404` and auth cases
- validation warnings and error code families are not yet unified into one single global error envelope
- admin/ops routes are backend-internal tools, not end-user product APIs
- corruption remediation smoke exists only for `publish_preview`, not for every artifact type
## 13. Regression Baseline Frontend Can Rely On
The frontend can rely on the following regression-backed flows:
- fresh upload on clean DB
- current/draft/editor read flow
- editor mutations and stale draft protection
- pricing setup and publish flow
- version lifecycle:
- publish
- ensure draft from published current
- rollback
- unpublish
- admin ops:
- audit
- cleanup
- destructive pricing cleanup for safe fixture categories
- full admin permission matrix on implemented admin endpoints
- controlled `publish_preview` corruption detection and remediation
- negative upload validation
- negative auth matrix
- negative lifecycle matrix
## 14. Recommended Frontend Integration Sequence
For normal editor work:
1. authenticate
2. upload or pick `scheme_id`
3. read `editor/context`
4. call `draft/ensure` if needed
5. read `draft/structure`
6. mutate using current `scheme_version_id`
7. on `409`, reload editor state before retry
8. configure pricing if needed
9. create pricing snapshot
10. read publish readiness / preview
11. publish
For admin UI:
1. verify admin role in client auth state
2. call admin endpoints
3. still handle backend `403`
4. treat cleanup and remediation as explicit operator actions, not background automation

# Smoke regression checklist
This file is the backend manual regression baseline for svg-service.
## Preconditions
- docker compose stack is up
- backend responds on port 9020
- valid admin API key is available
- stable SVG fixture exists in repository, e.g. `sample-contract.svg`
## Environment
Use these variables in shell:
export API_URL="http://127.0.0.1:9020"
export API_KEY="admin-local-dev-key"
export FIXTURE_SVG_PATH="/home/adminko/svg-service/sample-contract.svg"
## Active regression contour
Primary operator regressions:
- `backend/scripts/smoke_core.sh`
- `backend/scripts/smoke_pricing_publish.sh`
- `backend/scripts/smoke_version_lifecycle.sh`
- `backend/scripts/smoke_lifecycle_negative.sh`
- `backend/scripts/smoke_admin_ops.sh`
- `backend/scripts/smoke_auth_negative.sh`
- `backend/scripts/smoke_authz_admin_all.sh`
- `backend/scripts/smoke_artifact_corruption.sh`
- `backend/scripts/smoke_upload_negative.sh`
- `backend/scripts/smoke_regression.sh`
Only this set is part of the active backend regression contour.
The scripts are expected to fail fast on any contract break or unexpected 5xx.
`smoke_regression.sh` is now an orchestration wrapper:
- first runs `smoke_core.sh`
- then runs `smoke_pricing_publish.sh`
- then runs `smoke_version_lifecycle.sh`
- then runs `smoke_lifecycle_negative.sh`
- then runs `smoke_admin_ops.sh`
- then runs `smoke_authz_admin_all.sh`
- then runs `smoke_auth_negative.sh`
- then runs `smoke_artifact_corruption.sh`
- then runs `smoke_upload_negative.sh`
- returns non-zero if any scenario fails
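The orchestration behavior described above (ordered stages, fail fast, non-zero exit on any failure) can be sketched as follows. This is an illustrative Python shape of the same idea; the real wrapper is the shell script `smoke_regression.sh`:

```python
def run_stages(stages) -> int:
    """Fail-fast orchestration: run (name, callable) stages in order and stop
    at the first non-zero exit code, returning it; return 0 when all pass."""
    for name, stage in stages:
        code = stage()
        if code != 0:
            print(f"stage failed: {name} (exit {code})")
            return code
    return 0
```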
## Standalone/manual scripts
- `backend/scripts/editor_mutation_regression.sh`
- `backend/scripts/cleanup_test_pricing_data.sh`
These scripts are intentionally not called by `smoke_regression.sh`.
## Scenario split
### Core smoke on clean DB
Use:
- `backend/scripts/smoke_core.sh`
This scenario is designed for a fully clean database.
It uploads a fresh SVG fixture, resolves the created `scheme_id`, validates current/draft read models, validates empty pricing state, and then runs `editor_mutation_regression.sh` on the same fresh scheme.
Important:
- it does not require pre-existing `scheme_id`
- it does not require pricing categories or price rules
- it does not require publish snapshot or published baseline
- empty pricing on a fresh upload is a valid state, not a failure
### Pricing/publish smoke with fixture setup
Use:
- `backend/scripts/smoke_pricing_publish.sh`
This scenario also uploads a fresh SVG fixture, then prepares its own pricing fixture before validating pricing and publish flow.
Important:
- it creates its own pricing category
- it creates its own pricing rule
- it intentionally checks both a priced seat and an unpriced seat on the same fresh scheme
- it does not rely on historical pricing IDs, rules, or old schemes
### Version lifecycle smoke
Use:
- `backend/scripts/smoke_version_lifecycle.sh`
This scenario uploads a fresh SVG, publishes version 1, creates version 2 from published current, mutates the new draft, publishes version 2, rolls back to version 1, and then runs unpublish on the current scheme.
Important:
- it validates multi-version lifecycle beyond fresh upload
- it checks that `draft/ensure` creates a new draft only after current becomes published
- it verifies rollback switches `current_version_number` to the requested target version
- it verifies the rolled-back current structure matches the target version semantics, not the later mutated draft
- it checks audit trail for `scheme.published`, `scheme.version.created`, `scheme.rolled_back`, and `scheme.unpublished`
### Lifecycle negative smoke
Use:
- `backend/scripts/smoke_lifecycle_negative.sh`
This scenario uses fresh disposable scheme data to verify negative lifecycle contracts without leaving the database in a broken state.
Important:
- it checks rollback to a nonexistent version
- it checks stale current-version guards on `draft/ensure`
- it checks stale expected-version guards on `publish`
- it creates a temporary `current_version_inconsistent` pointer only inside the scenario and restores it before exit
### Admin/ops smoke
Use:
- `backend/scripts/smoke_admin_ops.sh`
This scenario uploads a fresh SVG and prepares its own admin-cleanup fixture inside the scenario before checking current-artifact inspection, validation, publish-preview audit/cleanup, and pricing-category cleanup preview/dry-run.
Important:
- it creates its own pricing categories for cleanup preview
- it creates its own protected pricing rule so cleanup preview has both deletable and skipped categories
- it does not rely on historical orphan artifacts, old schemes, or dirty pricing state
- it checks publish-preview cleanup in both dry-run and execute modes
- it requires the final publish-preview audit state to be healthy: `orphan_files_count=0` and `missing_files_for_db_rows_count=0`
- it executes destructive pricing cleanup only for self-created safe fixture data
### Admin authz smoke
Use:
- `backend/scripts/smoke_authz_admin_all.sh`
This scenario uploads a fresh SVG, prepares its own cleanup fixture data, and then checks permission boundaries for admin/operator/viewer on all currently implemented admin endpoints used by the regression contour.
Important:
- admin must be allowed on tested admin endpoints
- operator and viewer must be denied with controlled 403 responses
- the scenario does not rely on historical scheme ids or dirty pricing state
- destructive pricing cleanup execution is validated with fresh self-created fixture categories only
### Artifact corruption smoke
Use:
- `backend/scripts/smoke_artifact_corruption.sh`
This scenario creates fresh publish-preview artifacts and then simulates two controlled corruption cases only on the artifacts created inside the scenario.
Important:
- case A removes a preview file while leaving its DB row in place
- case B removes a preview DB row while leaving its file on disk
- audit must detect both inconsistencies correctly
- cleanup dry-run must stay readable and non-destructive
- cleanup execute must remediate the introduced inconsistency
- the scenario does not touch historical schemes or unrelated artifact rows/files
### Auth negative smoke
Use:
- `backend/scripts/smoke_auth_negative.sh`
This scenario checks the negative auth matrix on a representative route set.
Important:
- missing API key must return `401`
- invalid API key must return `403`
- valid non-admin key must return `403` only on admin-only endpoints
- the route set includes protected, editor, pricing, admin, and admin-cleanup endpoints
### Negative upload smoke
Use:
- `backend/scripts/smoke_upload_negative.sh`
This scenario checks controlled upload failures for invalid inputs.
Important:
- empty upload must fail with a controlled 4xx
- non-SVG uploads must fail with a controlled 4xx
- invalid extension/content-type combinations must fail with a controlled 4xx
- oversize upload must fail with a controlled 413 when the configured size limit is exceeded
- no negative case is allowed to return 500
## 1. Health / system
- GET /healthz -> 200 (smoke uses a bounded retry/wait loop and fails explicitly if the API never becomes ready)
- GET /api/v1/ping -> 200
- GET /api/v1/db/ping -> 200
- GET /api/v1/manifest -> 200
## 2. Core smoke coverage
`smoke_core.sh` checks:
- GET /healthz -> 200
- GET /api/v1/ping -> 200
- GET /api/v1/db/ping -> 200
- GET /api/v1/manifest -> 200
- POST /api/v1/schemes/upload -> 200
- GET /api/v1/schemes -> 200 and resolves the fresh `scheme_id`
- GET /api/v1/schemes/{scheme_id} -> 200
- GET /api/v1/schemes/{scheme_id}/versions -> 200
- GET /api/v1/schemes/{scheme_id}/current -> 200
- GET /api/v1/schemes/{scheme_id}/editor/context -> 200
- POST /api/v1/schemes/{scheme_id}/draft/ensure -> 200
- GET /api/v1/schemes/{scheme_id}/draft/summary -> 200
- GET /api/v1/schemes/{scheme_id}/draft/structure -> 200
- GET /api/v1/schemes/{scheme_id}/draft/validation -> 200
- GET /api/v1/schemes/{scheme_id}/draft/compare-preview -> 200
- GET draft entities by record id -> 200
- stale `expected_scheme_version_id` conflict -> 409 with typed `stale_draft_version`
- GET current sectors/groups/seats -> 200
- GET current SVG display meta -> 200
- GET pricing bundle -> 200 with empty categories/rules
- GET pricing coverage -> 200 with zero priced seats
- GET pricing explain/{seat_id} -> 200 with `no_price_rule`
- GET pricing rules diagnostics -> 200 with empty state
- GET audit -> 200
- `backend/scripts/editor_mutation_regression.sh` on the same fresh scheme
Validate:
- fresh upload is readable immediately through current/draft/editor endpoints
- empty pricing is accepted as normal state for a newly uploaded scheme
- no endpoint in core smoke returns 500
## 3. Pricing/publish smoke coverage
`smoke_pricing_publish.sh` checks:
- POST /api/v1/schemes/upload -> 200
- GET current / POST draft ensure on the fresh scheme -> 200
- POST pricing category -> 200
- POST price rule -> 200
- GET pricing bundle -> 200 with created fixture data
- GET pricing coverage -> 200 with both priced and unpriced seats present
- GET pricing explain/{priced_seat_id} -> 200 with matched rule
- GET pricing explain/{unpriced_seat_id} -> 200 with `no_price_rule`
- GET current/seats/{priced_seat_id}/price -> 200
- GET test/seats/{priced_seat_id} -> 200
- GET test/seats/{unpriced_seat_id} -> 200
- POST draft/pricing/snapshot -> 200
- GET draft/publish-readiness -> 200
- GET draft/publish-preview?refresh=true -> 200
- GET draft/publish-preview -> 200
- POST publish -> 200
- GET scheme detail/current after publish -> 200 and published state
- GET audit -> 200 and contains `scheme.published`
Validate:
- fixture setup is fully self-contained
- priced-seat checks happen only after explicit pricing fixture creation
- publish flow is validated on a fresh scheme, not on historical DB data
## 4. Version lifecycle smoke coverage
`smoke_version_lifecycle.sh` checks:
- POST /api/v1/schemes/upload -> 200
- GET scheme detail/current immediately after upload -> version 1 draft
- POST draft ensure on version 1 -> 200 and remains same draft
- POST pricing category/rule fixture -> 200
- POST draft/pricing/snapshot on version 1 -> 200
- POST publish on version 1 -> 200
- POST draft ensure from published current -> 200 and creates version 2
- PATCH one draft seat field on version 2 -> 200
- GET draft compare-preview on version 2 -> 200 and shows changed state
- POST draft/pricing/snapshot on version 2 -> 200
- POST publish on version 2 -> 200
- POST rollback to version 1 -> 200
- POST unpublish current -> 200
- GET audit -> 200 with lifecycle events present
Validate:
- version numbering advances from 1 to 2 only when current was published
- current pointer tracks the published version before rollback
- rollback switches current pointer back to the requested target version
- rolled-back current structure matches version 1 semantics after version 2 mutation
- lifecycle audit events are present and JSON-serializable
## 5. Lifecycle negative smoke coverage
`smoke_lifecycle_negative.sh` checks:
- POST /api/v1/schemes/upload -> 200
- GET current on the fresh scheme -> 200
- POST rollback with nonexistent `target_version_number` -> controlled 404
- POST draft/ensure with stale `expected_current_scheme_version_id` -> typed 409
- POST publish with stale `expected_scheme_version_id` -> typed 409
- GET current after temporary `current_version_inconsistent` pointer corruption -> typed 409
- GET current again after scenario restoration -> 200
Validate:
- rollback to missing version stays controlled and non-500
- ensure-draft stale current pointer returns typed `stale_current_version`
- publish stale expected version stays controlled and non-500
- temporary pointer inconsistency returns typed `current_version_inconsistent`
- the temporary inconsistency is restored before the scenario exits
## 6. Admin/ops smoke coverage
`smoke_admin_ops.sh` checks:
- POST /api/v1/schemes/upload -> 200
- POST draft ensure on the fresh scheme -> 200
- POST pricing category fixture for cleanup preview -> 200
- POST protected pricing rule fixture -> 200
- POST draft/pricing/snapshot -> 200
- GET draft/publish-preview?refresh=true -> 200
- GET draft/publish-preview -> 200
- GET /api/v1/admin/schemes/{scheme_id}/current/artifacts -> 200
- GET /api/v1/admin/schemes/{scheme_id}/current/validation -> 200
- GET /api/v1/admin/artifacts/publish-preview/audit -> 200
- POST /api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true -> 200
- POST /api/v1/admin/artifacts/publish-preview/cleanup?dry_run=false -> 200
- GET /api/v1/admin/artifacts/publish-preview/audit after cleanup -> 200
- GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview -> 200
- POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=true -> 200
- POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=false -> 200
- GET /api/v1/schemes/{scheme_id}/pricing after destructive cleanup -> 200
- repeated cleanup preview/dry-run after destructive cleanup -> 200
Validate:
- admin artifact listing stays readable for current draft version
- admin validation stays readable for current draft version
- publish-preview cleanup dry-run stays non-destructive and mirrors pre-clean audit counts
- publish-preview cleanup execute removes all orphan preview files and missing DB rows
- final publish-preview audit is strict healthy state: `orphan_files_count=0`, `missing_files_for_db_rows_count=0`, and `db_rows_count == disk_files_count`
- pricing cleanup preview identifies both deletable and protected categories created inside the scenario
- pricing cleanup dry-run never mutates fixture data
- destructive pricing cleanup deletes only the safe category without rules
- protected pricing category and its rule remain after destructive cleanup
- repeated cleanup state remains stable after destructive cleanup
## 7. Admin authz smoke coverage
`smoke_authz_admin_all.sh` checks:
- POST /api/v1/schemes/upload -> 200
- POST draft ensure on the fresh scheme -> 200
- POST pricing fixture categories/rule for cleanup authz checks -> 200
- POST draft/publish-preview refresh fixture -> 200
- GET /api/v1/admin/schemes/{scheme_id}/current/artifacts as admin -> 200
- GET /api/v1/admin/schemes/{scheme_id}/current/artifacts as operator/viewer -> 403
- GET /api/v1/admin/schemes/{scheme_id}/current/validation as admin -> 200
- GET /api/v1/admin/schemes/{scheme_id}/current/validation as operator/viewer -> 403
- POST /api/v1/admin/schemes/{scheme_id}/current/display/regenerate as admin -> 200
- POST /api/v1/admin/schemes/{scheme_id}/current/display/regenerate as operator/viewer -> 403
- POST /api/v1/admin/display/backfill as admin -> 200
- POST /api/v1/admin/display/backfill as operator/viewer -> 403
- GET /api/v1/admin/artifacts/publish-preview/audit as admin -> 200
- GET /api/v1/admin/artifacts/publish-preview/audit as operator/viewer -> 403
- POST /api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true as admin -> 200
- POST /api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true as operator/viewer -> 403
- GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview as admin -> 200
- GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview as operator/viewer -> 403
- POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=true as admin -> 200
- POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=true as operator/viewer -> 403
- POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=false as operator/viewer -> 403
- POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=false as admin -> 200
Validate:
- expected role matrix is explicit and enforced
- admin endpoints stay available to admin
- operator and viewer are denied without 500
- destructive cleanup execution remains constrained to self-created safe fixture data
## 8. Auth negative smoke coverage
`smoke_auth_negative.sh` checks:
- GET /api/v1/manifest without API key -> 401
- GET /api/v1/manifest with invalid API key -> 403
- GET /api/v1/schemes/{scheme_id}/editor/context without API key -> 401
- GET /api/v1/schemes/{scheme_id}/editor/context with invalid API key -> 403
- GET /api/v1/schemes/{scheme_id}/pricing without API key -> 401
- GET /api/v1/schemes/{scheme_id}/pricing with invalid API key -> 403
- GET /api/v1/admin/artifacts/publish-preview/audit without API key -> 401
- GET /api/v1/admin/artifacts/publish-preview/audit with invalid API key -> 403
- GET /api/v1/admin/artifacts/publish-preview/audit with valid viewer key -> 403
- GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview without API key -> 401
- GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview with invalid API key -> 403
- GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview with valid viewer key -> 403
Validate:
- missing key contract is consistently `401`
- invalid key contract is consistently `403`
- valid non-admin key is denied only on admin-only endpoints
## 9. Artifact corruption smoke coverage
`smoke_artifact_corruption.sh` checks:
- POST /api/v1/schemes/upload -> 200
- POST draft ensure on the fresh scheme -> 200
- GET initial /api/v1/admin/artifacts/publish-preview/audit -> healthy 200
- case A: manually delete fresh preview file while keeping DB row
- GET audit after case A -> reports exactly one missing file for DB row
- POST cleanup dry_run=true after case A -> 200
- POST cleanup dry_run=false after case A -> 200 and deletes the broken DB row
- case B: manually delete fresh preview DB row while keeping file
- GET audit after case B -> reports exactly one orphan file
- POST cleanup dry_run=true after case B -> 200
- POST cleanup dry_run=false after case B -> 200 and deletes the orphan file
- final audit -> healthy 200
Validate:
- audit sees DB-row-without-file and file-without-DB-row separately and correctly
- dry-run remains readable and non-destructive in both corruption cases
- execute cleanup remediates only the inconsistency introduced in the scenario
- final audit is healthy again: `orphan_files_count=0`, `missing_files_for_db_rows_count=0`
## 10. Negative upload smoke coverage
`smoke_upload_negative.sh` checks:
- POST /api/v1/schemes/upload with empty SVG body -> controlled 400
- POST /api/v1/schemes/upload with non-SVG text/plain body -> controlled 400
- POST /api/v1/schemes/upload with SVG body but invalid extension/content-type pair -> controlled 400
- POST /api/v1/schemes/upload with body larger than manifest max_file_size_bytes -> controlled 413
Validate:
- upload validation rejects bad inputs with explicit 4xx contracts
- configured max file size is read from manifest, not hardcoded in the script
- no negative upload case returns 500
## 11. Legacy endpoint families
The sections below remain the API baseline by area, but regression execution is now split between clean-DB core smoke and pricing/publish smoke; their original section numbers are preserved as-is.
## 5. Scheme registry
- GET /api/v1/schemes -> 200
- GET /api/v1/schemes/{scheme_id} -> 200
- GET /api/v1/schemes/{scheme_id}/current -> 200
- GET /api/v1/schemes/{scheme_id}/versions -> 200
Validate:
- scheme_id is stable
- current version exists
- version list contains current version
- status and counts are consistent
## 6. Editor entry flow
- GET /api/v1/schemes/{scheme_id}/editor/context -> 200
- POST /api/v1/schemes/{scheme_id}/draft/ensure -> 200
Validate:
- editor context returns current_scheme_version_id
- editor context distinguishes draft vs published state correctly
- ensure endpoint is idempotent on current draft
- ensure endpoint creates a new draft from published current when needed
- returned scheme_version_id is reusable as expected_scheme_version_id
## 7. Draft read model
Using current draft version id from draft/ensure:
- GET /api/v1/schemes/{scheme_id}/draft/summary?expected_scheme_version_id={draft_version_id} -> 200
- GET /api/v1/schemes/{scheme_id}/draft/structure?expected_scheme_version_id={draft_version_id} -> 200
- GET /api/v1/schemes/{scheme_id}/draft/validation?expected_scheme_version_id={draft_version_id} -> 200
- GET /api/v1/schemes/{scheme_id}/draft/compare-preview?expected_scheme_version_id={draft_version_id} -> 200
Validate:
- summary returns total_seats / total_sectors / total_groups
- summary returns validation_summary / structure_diff_summary / publish_readiness
- structure returns lists for seats / sectors / groups
- validation is deterministic
- compare preview returns stable diff structure
- stale expected_scheme_version_id returns typed 409 conflict
## 8. Draft entity reads
- GET /api/v1/schemes/{scheme_id}/draft/seats/records/{seat_record_id} -> 200
- GET /api/v1/schemes/{scheme_id}/draft/sectors/records/{sector_record_id} -> 200
- GET /api/v1/schemes/{scheme_id}/draft/groups/records/{group_record_id} -> 200
Validate:
- record endpoints return exact draft entities
- unknown record id returns 404
- stale expected_scheme_version_id returns typed 409 conflict
## 9. Structure read model
- GET /api/v1/schemes/{scheme_id}/current/sectors -> 200
- GET /api/v1/schemes/{scheme_id}/current/groups -> 200
- GET /api/v1/schemes/{scheme_id}/current/seats -> 200
Validate:
- total counts are non-negative
- known sample scheme returns expected object lists
- seats contain seat_id / sector_id / group_id contract where applicable
## 10. SVG / display pipeline
- GET /api/v1/schemes/{scheme_id}/current/svg -> 200
- GET /api/v1/schemes/{scheme_id}/current/svg/display -> 200
- GET /api/v1/schemes/{scheme_id}/current/svg/display/meta -> 200
- GET /api/v1/schemes/{scheme_id}/current/svg/display?mode=optimized -> 200 or explicit controlled failure
- GET /api/v1/schemes/{scheme_id}/current/svg/display/meta?mode=optimized -> 200 or explicit controlled failure
Validate:
- response content type for svg endpoints is image/svg+xml
- meta returns scheme_id, scheme_version_id, view_box, width, height
- no 500 on passthrough mode
- unsupported mode returns 422
## 11. Pricing read model
- GET /api/v1/schemes/{scheme_id}/pricing -> 200
- GET /api/v1/schemes/{scheme_id}/pricing/coverage -> 200
- GET /api/v1/schemes/{scheme_id}/pricing/unpriced-seats -> 200
- GET /api/v1/schemes/{scheme_id}/pricing/explain/{seat_id} -> 200
- GET /api/v1/schemes/{scheme_id}/pricing/rules/diagnostics -> 200
- GET /api/v1/schemes/{scheme_id}/current/seats/{seat_id}/price -> 200 only after pricing fixture exists
- GET /api/v1/schemes/{scheme_id}/test/seats/{seat_id} -> 200 for known seat
Validate:
- fresh clean upload is allowed to have `categories=[]` and `rules=[]`
- fresh clean upload is allowed to have zero priced seats and `no_price_rule` explanations
- priced seat checks belong to the pricing/publish smoke and run only after fixture setup
- diagnostics returns stable empty state with zero rules on clean upload
- diagnostics returns matched seat visibility after fixture setup
- priced test seat amount is serialized as string when pricing exists
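The string-serialization check for amounts can be sketched offline. `SAMPLE_PAYLOAD` below is a hypothetical priced-seat body, assumed to match the contract that amounts are JSON strings rather than numbers:

```shell
# Hypothetical priced-seat response; the contract says "amount" is a string like "555.00".
SAMPLE_PAYLOAD='{"seat_id":"s1","price":{"amount":"555.00","currency":"RUB"}}'

python3 - "$SAMPLE_PAYLOAD" <<'PY'
import json
import sys

payload = json.loads(sys.argv[1])
amount = payload["price"]["amount"]
# a JSON number here would deserialize to int/float and lose fixed-point formatting
assert isinstance(amount, str), f"amount must be a string, got {type(amount).__name__}"
print("[OK] amount serialized as string")
PY
```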
## 12. Draft mutation regression
Use:
- `backend/scripts/editor_mutation_regression.sh`
This script checks:
- create sector
- create group
- patch seat
- bulk seat update
- patch sector
- patch group
- duplicate entity validation paths
- stale draft conflict
- remap preview validation path
- repair references
- delete created sector/group
- post-mutation read-model consistency
Validate:
- created entities are returned by API
- patched draft records are actually changed
- bulk update changes persisted fields
- duplicate ids return 422
- stale expected_scheme_version_id returns typed 409
- remap preview without filters returns typed 422
- post-mutation summary / validation / compare-preview remain readable and deterministic
## 13. Draft publish preview
- POST /api/v1/schemes/{scheme_id}/draft/pricing/snapshot -> 200 when scheme is in draft
- GET /api/v1/schemes/{scheme_id}/draft/publish-preview?refresh=true -> 200
- GET /api/v1/schemes/{scheme_id}/draft/publish-preview -> 200
- GET /api/v1/schemes/{scheme_id}/draft/publish-preview?refresh=true&baseline_scheme_version_id={published_version_id} -> 200
Validate:
- refresh and cached read both succeed
- preview summary contains is_publishable / has_structure_changes / has_artifacts / snapshot_available
- pricing_coverage is internally consistent
- baseline override returns override strategy when explicit baseline is provided
- preview retention does not grow unbounded for same version+variant
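The retention rule above means repeated refreshes of the same version+variant reuse one artifact slot instead of appending rows. A minimal sketch of that invariant (the associative `store` is a hypothetical stand-in for the publish-preview artifact table; requires bash 4+):

```shell
declare -A store

# Same version:variant key overwrites the existing entry, so retention stays bounded.
record_preview() {
  local version="$1" variant="$2"
  store["${version}:${variant}"]=1
}

record_preview v1 default
record_preview v1 default
record_preview v1 default
echo "stored previews: ${#store[@]}"
```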
## 14. Publish readiness and publish flow
For current draft version:
- GET /api/v1/schemes/{scheme_id}/draft/publish-readiness -> 200
- POST /api/v1/schemes/{scheme_id}/publish?expected_scheme_version_id={draft_version_id} -> 200 or 409
Validate:
- readiness explicitly shows snapshot_available and pricing gate state
- publish with stale expected version returns typed 409
- publish without draft state returns typed 409
- publish success updates current status to published
- audit trail contains scheme.published event
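The audit-trail check boils down to finding a `scheme.published` event in the audit body. A minimal offline sketch (`AUDIT_BODY` is a hypothetical audit response; only the `event_type` field is taken from the checklist above):

```shell
# Hypothetical /audit response containing a publish event.
AUDIT_BODY='{"total":2,"items":[{"event_type":"scheme.draft_created"},{"event_type":"scheme.published"}]}'

python3 - "$AUDIT_BODY" <<'PY'
import json
import sys

payload = json.loads(sys.argv[1])
events = {item["event_type"] for item in payload.get("items", [])}
assert "scheme.published" in events, "missing scheme.published audit event"
print("[OK] scheme.published present")
PY
```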
## 15. Admin / ops
- GET /api/v1/admin/schemes/{scheme_id}/current/artifacts -> 200
- GET /api/v1/admin/schemes/{scheme_id}/current/validation -> 200
- GET /api/v1/admin/artifacts/publish-preview/audit -> 200
- POST /api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true -> 200
- POST /api/v1/admin/artifacts/publish-preview/cleanup?dry_run=false -> 200
- GET /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup-preview -> 200
- POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=true -> 200
- POST /api/v1/admin/schemes/{scheme_id}/pricing/categories/cleanup with dry_run=false -> 200
Validate:
- artifact audit does not report orphan files or missing files for DB rows in normal state
- healthy publish-preview audit is strict: `orphan_files_count=0` and `missing_files_for_db_rows_count=0`
- validation report is readable and deterministic
- pricing cleanup preview returns matched candidates and safe_to_delete_count
- pricing cleanup dry-run returns deleted_count=0
- destructive pricing cleanup deletes only safe fixture categories without rules
- admin role is allowed on admin endpoints
- operator/viewer are denied with controlled 403 on admin endpoints
- idempotent cleanup is valid in both states: `matched_total=0` with `would_delete_count=0`, or `matched_total>0` with `would_delete_count>0`
- smoke does not require cleanup dry-run to always find something to delete
- admin routes do not produce 500 for healthy scheme state
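The idempotency rule above accepts both cleanup states; only an inconsistent pair is a failure. A minimal sketch of that acceptance logic (the helper and its messages are illustrative; field names mirror the cleanup response contract described above):

```shell
# Both "nothing matched" and "candidates found" are valid dry-run states.
check_cleanup_state() {
  local matched_total="$1" would_delete_count="$2"
  if (( would_delete_count > matched_total )); then
    echo "[FAIL] would_delete_count exceeds matched_total" >&2
    return 1
  fi
  if (( matched_total == 0 )); then
    echo "[OK] idempotent: nothing left to clean"
  else
    echo "[OK] ${matched_total} matched, ${would_delete_count} deletable"
  fi
}

check_cleanup_state 0 0   # repeated run after destructive cleanup
check_cleanup_state 2 1   # fresh fixtures present
```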
## 16. Audit trail
- GET /api/v1/schemes/{scheme_id}/audit -> 200
Validate:
- recent publish preview / pricing / version / publish events are present when corresponding operations were run
- audit total is non-negative
- event payloads stay JSON-serializable
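JSON-serializability of event payloads can be checked as a round-trip. A minimal offline sketch (`EVENT_PAYLOAD` is a hypothetical event body; the round-trip test itself is the point):

```shell
# Hypothetical audit event payload; it must survive dumps -> loads unchanged.
EVENT_PAYLOAD='{"event_type":"scheme.published","scheme_version_id":"abc123"}'

python3 - "$EVENT_PAYLOAD" <<'PY'
import json
import sys

payload = json.loads(sys.argv[1])
round_tripped = json.loads(json.dumps(payload))
assert round_tripped == payload, "audit payload is not JSON-stable"
print("[OK] audit payload is JSON-serializable")
PY
```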
## 17. Fail criteria
Regression is considered failed if any of the following happen:
- health or db ping fails
- any stable read endpoint returns 500
- passthrough display endpoint fails on known-good sample
- publish preview refresh or cached read returns 500
- publish readiness returns 500
- editor context or draft ensure returns 500
- draft summary / structure / validation / compare-preview returns 500
- editor mutation regression returns non-zero exit code
- the legitimately empty pricing state of a clean upload is treated as a failure
- pricing bundle or diagnostics contract changes unexpectedly
- admin audit/cleanup endpoints fail on healthy environment
- pricing cleanup dry-run mutates data
- artifact retention grows without bound for repeated preview refresh on same variant
## 18. Operator note
Run this checklist after:
- schema changes
- pricing schema/repository refactors
- artifact lifecycle changes
- display pipeline changes
- route reorganization
- startup/import/config changes
- draft lifecycle changes
- publish readiness changes
- admin cleanup changes
- editor mutation changes

View File

@@ -0,0 +1,38 @@
#!/usr/bin/env bash
set -euo pipefail
API_URL="${API_URL:-http://127.0.0.1:9020}"
API_KEY="${API_KEY:-admin-local-dev-key}"
SCHEME_ID="${SCHEME_ID:-}"
DRY_RUN="${DRY_RUN:-true}"
if [[ -z "${SCHEME_ID}" ]]; then
echo "SCHEME_ID is required"
exit 1
fi
REQUEST_BODY=$(cat <<JSON
{
"code_prefixes": ["FAIL_", "DIAG_", "AUTO_", "TYPED_"],
"name_prefixes": ["should-fail-", "diag-", "auto ", "typed-response-"],
"pricing_category_ids": [],
"delete_only_without_rules": true,
"dry_run": ${DRY_RUN}
}
JSON
)
echo "===== CLEANUP PREVIEW ====="
curl -sS \
-H "X-API-Key: ${API_KEY}" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup-preview?code_prefix=FAIL_&code_prefix=DIAG_&code_prefix=AUTO_&code_prefix=TYPED_&name_prefix=should-fail-&name_prefix=diag-&name_prefix=auto%20&name_prefix=typed-response-" \
| python3 -m json.tool
echo
echo "===== CLEANUP EXECUTE (DRY_RUN=${DRY_RUN}) ====="
curl -sS -X POST \
-H "Content-Type: application/json" \
-H "X-API-Key: ${API_KEY}" \
-d "${REQUEST_BODY}" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup" \
| python3 -m json.tool

View File

@@ -0,0 +1,274 @@
#!/usr/bin/env bash
set -Eeuo pipefail
API_URL="${API_URL:-http://127.0.0.1:9020}"
API_KEY="${API_KEY:-admin-local-dev-key}"
SCHEME_ID="${SCHEME_ID:-82086336d385427f9d56244f9e1dd772}"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT
log() {
echo
echo "===== $* ====="
}
fail() {
echo
echo "[FAIL] $*" >&2
exit 1
}
request() {
local name="$1"
local method="$2"
local url="$3"
local body="${4:-}"
local expected="${5:-200}"
local body_file="${TMP_DIR}/${name}.body"
local code_file="${TMP_DIR}/${name}.code"
if [[ -n "${body}" ]]; then
curl -sS \
-X "${method}" \
-H "X-API-Key: ${API_KEY}" \
-H "Content-Type: application/json" \
-o "${body_file}" \
-w "%{http_code}" \
--data "${body}" \
"${url}" > "${code_file}"
else
curl -sS \
-X "${method}" \
-H "X-API-Key: ${API_KEY}" \
-o "${body_file}" \
-w "%{http_code}" \
"${url}" > "${code_file}"
fi
local code
code="$(cat "${code_file}")"
echo "[${method}] ${url} -> ${code}"
cat "${body_file}"
echo
if [[ "${code}" != "${expected}" ]]; then
fail "Unexpected HTTP status for ${name}: expected ${expected}, got ${code}"
fi
}
json_get() {
local file="$1"
local expr="$2"
python3 - <<PY
import json
from pathlib import Path

data = json.loads(Path("${file}").read_text())
expr = "${expr}"
value = data
for part in expr.split("."):
    if not part:
        continue
    if part.startswith("[") and part.endswith("]"):
        cond = part[1:-1]
        try:
            if cond.endswith("!=null"):
                k = cond[:-6]
                value = next(item for item in value if item.get(k) is not None)
            elif cond.endswith("==null"):
                k = cond[:-6]
                value = next(item for item in value if item.get(k) is None)
            elif cond == "LAST":
                value = value[-1]
            else:
                value = value[0]
        except StopIteration:
            value = None
    elif part.isdigit():
        value = value[int(part)]
    else:
        value = value[part] if value else None
if value is None:
    print("")
elif isinstance(value, bool):
    print("true" if value else "false")
else:
    print(value)
PY
}
assert_json_eq() {
local file="$1"
local expr="$2"
local expected="$3"
local actual
actual="$(json_get "${file}" "${expr}")"
if [[ "${actual}" != "${expected}" ]]; then
fail "Assertion failed: ${expr} expected '${expected}', got '${actual}'"
fi
echo "[OK] ${expr}=${actual}"
}
extract_current() {
request "current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "" "200"
CURRENT_VERSION_ID="$(json_get "${TMP_DIR}/current.body" "scheme_version_id")"
CURRENT_STATUS="$(json_get "${TMP_DIR}/current.body" "status")"
echo "CURRENT_VERSION_ID=${CURRENT_VERSION_ID}"
echo "CURRENT_STATUS=${CURRENT_STATUS}"
}
ensure_draft() {
request "ensure_draft" "POST" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/ensure" "" "200"
DRAFT_VERSION_ID="$(json_get "${TMP_DIR}/ensure_draft.body" "scheme_version_id")"
DRAFT_CREATED="$(json_get "${TMP_DIR}/ensure_draft.body" "created")"
echo "DRAFT_VERSION_ID=${DRAFT_VERSION_ID}"
echo "DRAFT_CREATED=${DRAFT_CREATED}"
}
read_structure() {
request "draft_structure" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/structure?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"" "200"
SEAT_RECORD_ID="$(json_get "${TMP_DIR}/draft_structure.body" "seats.[seat_id!=null].seat_record_id")"
SEAT_ID="$(json_get "${TMP_DIR}/draft_structure.body" "seats.[seat_id!=null].seat_id")"
ORIG_SEAT_NUMBER="$(json_get "${TMP_DIR}/draft_structure.body" "seats.[seat_id!=null].seat_number")"
echo "SEAT_RECORD_ID=${SEAT_RECORD_ID}"
echo "SEAT_ID=${SEAT_ID}"
echo "ORIG_SEAT_NUMBER=${ORIG_SEAT_NUMBER}"
}
check_read_models() {
request "draft_summary" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/summary?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"" "200"
request "draft_validation" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/validation?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"" "200"
request "draft_compare_preview" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/compare-preview?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"" "200"
assert_json_eq "${TMP_DIR}/draft_summary.body" "scheme_version_id" "${DRAFT_VERSION_ID}"
assert_json_eq "${TMP_DIR}/draft_validation.body" "scheme_version_id" "${DRAFT_VERSION_ID}"
assert_json_eq "${TMP_DIR}/draft_compare_preview.body" "draft_scheme_version_id" "${DRAFT_VERSION_ID}"
}
log "health"
curl -fsS "${API_URL}/healthz" >/dev/null || fail "healthz failed"
echo "[OK] healthz"
log "current + ensure draft"
extract_current
ensure_draft
read_structure
check_read_models
STAMP="$(date +%s)"
TEST_SECTOR_ID="reg-sector-${STAMP}"
TEST_GROUP_ID="reg-group-${STAMP}"
TEST_SECTOR_ELEMENT_ID="reg-sector-element-${STAMP}"
TEST_GROUP_ELEMENT_ID="reg-group-element-${STAMP}"
log "create sector"
request "create_sector" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/sectors?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"{\"element_id\":\"${TEST_SECTOR_ELEMENT_ID}\",\"sector_id\":\"${TEST_SECTOR_ID}\",\"name\":\"${TEST_SECTOR_ID}\"}" \
"200"
CREATE_SECTOR_RECORD_ID="$(json_get "${TMP_DIR}/create_sector.body" "sector_record_id")"
echo "CREATE_SECTOR_RECORD_ID=${CREATE_SECTOR_RECORD_ID}"
log "create group"
request "create_group" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/groups?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"{\"element_id\":\"${TEST_GROUP_ELEMENT_ID}\",\"group_id\":\"${TEST_GROUP_ID}\",\"name\":\"${TEST_GROUP_ID}\"}" \
"200"
CREATE_GROUP_RECORD_ID="$(json_get "${TMP_DIR}/create_group.body" "group_record_id")"
echo "CREATE_GROUP_RECORD_ID=${CREATE_GROUP_RECORD_ID}"
log "patch seat -> bind to new group"
request "patch_seat_group" "PATCH" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/seats/records/${SEAT_RECORD_ID}?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"{\"group_id\":\"${TEST_GROUP_ID}\"}" \
"200"
log "verify seat after patch"
request "seat_after_patch" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/seats/records/${SEAT_RECORD_ID}?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"" "200"
assert_json_eq "${TMP_DIR}/seat_after_patch.body" "group_id" "${TEST_GROUP_ID}"
assert_json_eq "${TMP_DIR}/seat_after_patch.body" "seat_number" "${ORIG_SEAT_NUMBER}"
log "patch group name"
request "patch_group" "PATCH" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/groups/records/${CREATE_GROUP_RECORD_ID}?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"{\"name\":\"${TEST_GROUP_ID}-updated\"}" \
"200"
log "patch sector name"
request "patch_sector" "PATCH" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/sectors/records/${CREATE_SECTOR_RECORD_ID}?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"{\"name\":\"${TEST_SECTOR_ID}-updated\"}" \
"200"
log "verify sector after patch"
request "sector_after_patch" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/sectors/records/${CREATE_SECTOR_RECORD_ID}?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"" "200"
assert_json_eq "${TMP_DIR}/sector_after_patch.body" "name" "${TEST_SECTOR_ID}-updated"
assert_json_eq "${TMP_DIR}/sector_after_patch.body" "sector_id" "${TEST_SECTOR_ID}"
log "bulk seat update validation path"
request "bulk_seats" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/seats/bulk?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"{\"items\":[{\"seat_record_id\":\"${SEAT_RECORD_ID}\",\"row_label\":\"ZZ\",\"seat_number\":\"999\"}]}" \
"200"
log "verify seat after bulk patch"
request "seat_after_bulk" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/seats/records/${SEAT_RECORD_ID}?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"" "200"
assert_json_eq "${TMP_DIR}/seat_after_bulk.body" "row_label" "ZZ"
assert_json_eq "${TMP_DIR}/seat_after_bulk.body" "seat_number" "999"
log "typed error: duplicate sector id"
request "duplicate_sector" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/sectors?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"{\"element_id\":\"dup-${TEST_SECTOR_ELEMENT_ID}\",\"sector_id\":\"${TEST_SECTOR_ID}\",\"name\":\"dup\"}" \
"422"
log "typed error: duplicate group id"
request "duplicate_group" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/groups?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"{\"element_id\":\"dup-${TEST_GROUP_ELEMENT_ID}\",\"group_id\":\"${TEST_GROUP_ID}\",\"name\":\"dup\"}" \
"422"
log "typed error: stale draft version"
request "stale_patch" "PATCH" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/seats/records/${SEAT_RECORD_ID}?expected_scheme_version_id=deadbeefdeadbeefdeadbeefdeadbeef" \
"{\"row_label\":\"STALE\"}" \
"409"
log "typed error: remap preview without filters"
request "remap_preview_invalid" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/remap/preview?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"{}" \
"422"
log "repair references"
request "repair_refs" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/repair-references?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"{}" \
"200"
log "post-mutation read models"
check_read_models
log "done"
echo "[OK] editor mutation regression completed successfully"

View File

@@ -0,0 +1,319 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT
# shellcheck source=backend/scripts/smoke_common.sh
source "${SCRIPT_DIR}/smoke_common.sh"
wait_for_health
create_fresh_scheme_from_upload "smoke-admin-ops"
request "scheme_current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
CURRENT_VERSION_ID="$(json_get "${TMP_DIR}/scheme_current.body" "scheme_version_id")"
echo "CURRENT_VERSION_ID=${CURRENT_VERSION_ID}"
request "ensure_draft" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/ensure?expected_current_scheme_version_id=${CURRENT_VERSION_ID}" \
"200"
DRAFT_VERSION_ID="$(json_get "${TMP_DIR}/ensure_draft.body" "scheme_version_id")"
echo "DRAFT_VERSION_ID=${DRAFT_VERSION_ID}"
request "draft_structure" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/structure?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200"
TARGET_SEAT_ID="$(python3 - "${TMP_DIR}/draft_structure.body" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
seat = next((item for item in payload.get("seats", []) if item.get("seat_id")), None)
if seat is None:
    raise SystemExit("No seat with seat_id found for admin ops smoke")
print(seat["seat_id"])
PY
)"
echo "TARGET_SEAT_ID=${TARGET_SEAT_ID}"
STAMP="$(date +%s)-$$"
CLEANUP_PREFIX="ADMINOPS_CLEAN_${STAMP}_"
DELETE_CATEGORY_NAME="adminops-clean-delete-${STAMP}"
DELETE_CATEGORY_CODE="${CLEANUP_PREFIX}DELETE"
KEEP_CATEGORY_NAME="adminops-clean-keep-${STAMP}"
KEEP_CATEGORY_CODE="${CLEANUP_PREFIX}KEEP"
request "create_delete_category" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/categories?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200" \
"{\"name\":\"${DELETE_CATEGORY_NAME}\",\"code\":\"${DELETE_CATEGORY_CODE}\"}"
DELETE_CATEGORY_ID="$(json_get "${TMP_DIR}/create_delete_category.body" "pricing_category_id")"
echo "DELETE_CATEGORY_ID=${DELETE_CATEGORY_ID}"
request "create_keep_category" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/categories?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200" \
"{\"name\":\"${KEEP_CATEGORY_NAME}\",\"code\":\"${KEEP_CATEGORY_CODE}\"}"
KEEP_CATEGORY_ID="$(json_get "${TMP_DIR}/create_keep_category.body" "pricing_category_id")"
echo "KEEP_CATEGORY_ID=${KEEP_CATEGORY_ID}"
request "create_keep_category_rule" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/rules?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200" \
"{\"pricing_category_id\":\"${KEEP_CATEGORY_ID}\",\"target_type\":\"seat\",\"target_ref\":\"${TARGET_SEAT_ID}\",\"amount\":\"555.00\",\"currency\":\"RUB\"}"
KEEP_RULE_ID="$(json_get "${TMP_DIR}/create_keep_category_rule.body" "price_rule_id")"
echo "KEEP_RULE_ID=${KEEP_RULE_ID}"
request "draft_pricing_snapshot" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/pricing/snapshot?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200"
request "publish_preview_refresh" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/publish-preview?refresh=true&expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200"
request "publish_preview_cached" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/publish-preview?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200"
request "admin_current_artifacts" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/artifacts" \
"200"
assert_json_eq "${TMP_DIR}/admin_current_artifacts.body" "scheme_version_id" "${DRAFT_VERSION_ID}"
assert_json_int_ge "${TMP_DIR}/admin_current_artifacts.body" "total" "4"
assert_file_contains "${TMP_DIR}/admin_current_artifacts.body" "\"artifact_type\":\"publish_preview\""
request "admin_current_validation" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/validation" \
"200"
assert_json_eq "${TMP_DIR}/admin_current_validation.body" "scheme_version_id" "${DRAFT_VERSION_ID}"
assert_file_contains "${TMP_DIR}/admin_current_validation.body" "\"report\":"
request "publish_preview_audit" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"200"
assert_json_eq "${TMP_DIR}/publish_preview_audit.body" "artifact_type" "publish_preview"
assert_json_int_ge "${TMP_DIR}/publish_preview_audit.body" "db_rows_count" "1"
assert_json_int_ge "${TMP_DIR}/publish_preview_audit.body" "disk_files_count" "1"
PRE_CLEANUP_DB_ROWS_COUNT="$(json_get "${TMP_DIR}/publish_preview_audit.body" "db_rows_count")"
PRE_CLEANUP_DISK_FILES_COUNT="$(json_get "${TMP_DIR}/publish_preview_audit.body" "disk_files_count")"
PRE_CLEANUP_ORPHAN_FILES_COUNT="$(json_get "${TMP_DIR}/publish_preview_audit.body" "orphan_files_count")"
PRE_CLEANUP_MISSING_FILES_COUNT="$(json_get "${TMP_DIR}/publish_preview_audit.body" "missing_files_for_db_rows_count")"
echo "PRE_CLEANUP_DB_ROWS_COUNT=${PRE_CLEANUP_DB_ROWS_COUNT}"
echo "PRE_CLEANUP_DISK_FILES_COUNT=${PRE_CLEANUP_DISK_FILES_COUNT}"
echo "PRE_CLEANUP_ORPHAN_FILES_COUNT=${PRE_CLEANUP_ORPHAN_FILES_COUNT}"
echo "PRE_CLEANUP_MISSING_FILES_COUNT=${PRE_CLEANUP_MISSING_FILES_COUNT}"
request "publish_preview_cleanup_dry_run" "POST" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true" \
"200"
assert_json_eq "${TMP_DIR}/publish_preview_cleanup_dry_run.body" "artifact_type" "publish_preview"
assert_json_eq "${TMP_DIR}/publish_preview_cleanup_dry_run.body" "dry_run" "true"
assert_json_int_eq "${TMP_DIR}/publish_preview_cleanup_dry_run.body" "deleted_files_count" "0"
assert_json_int_eq "${TMP_DIR}/publish_preview_cleanup_dry_run.body" "deleted_db_rows_count" "0"
assert_json_int_eq "${TMP_DIR}/publish_preview_cleanup_dry_run.body" "orphan_files_count" "${PRE_CLEANUP_ORPHAN_FILES_COUNT}"
assert_json_int_eq "${TMP_DIR}/publish_preview_cleanup_dry_run.body" "missing_files_for_db_rows_count" "${PRE_CLEANUP_MISSING_FILES_COUNT}"
request "publish_preview_cleanup_execute" "POST" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/cleanup?dry_run=false" \
"200"
assert_json_eq "${TMP_DIR}/publish_preview_cleanup_execute.body" "artifact_type" "publish_preview"
assert_json_eq "${TMP_DIR}/publish_preview_cleanup_execute.body" "dry_run" "false"
assert_json_int_eq "${TMP_DIR}/publish_preview_cleanup_execute.body" "orphan_files_count" "${PRE_CLEANUP_ORPHAN_FILES_COUNT}"
assert_json_int_eq "${TMP_DIR}/publish_preview_cleanup_execute.body" "missing_files_for_db_rows_count" "${PRE_CLEANUP_MISSING_FILES_COUNT}"
assert_json_int_eq "${TMP_DIR}/publish_preview_cleanup_execute.body" "deleted_files_count" "${PRE_CLEANUP_ORPHAN_FILES_COUNT}"
assert_json_int_eq "${TMP_DIR}/publish_preview_cleanup_execute.body" "deleted_db_rows_count" "${PRE_CLEANUP_MISSING_FILES_COUNT}"
request "publish_preview_audit_after_cleanup" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"200"
assert_json_eq "${TMP_DIR}/publish_preview_audit_after_cleanup.body" "artifact_type" "publish_preview"
assert_json_int_eq "${TMP_DIR}/publish_preview_audit_after_cleanup.body" "orphan_files_count" "0"
assert_json_int_eq "${TMP_DIR}/publish_preview_audit_after_cleanup.body" "missing_files_for_db_rows_count" "0"
POST_CLEANUP_DB_ROWS_COUNT="$(json_get "${TMP_DIR}/publish_preview_audit_after_cleanup.body" "db_rows_count")"
POST_CLEANUP_DISK_FILES_COUNT="$(json_get "${TMP_DIR}/publish_preview_audit_after_cleanup.body" "disk_files_count")"
POST_CLEANUP_ORPHAN_FILES_COUNT="$(json_get "${TMP_DIR}/publish_preview_audit_after_cleanup.body" "orphan_files_count")"
POST_CLEANUP_MISSING_FILES_COUNT="$(json_get "${TMP_DIR}/publish_preview_audit_after_cleanup.body" "missing_files_for_db_rows_count")"
echo "POST_CLEANUP_DB_ROWS_COUNT=${POST_CLEANUP_DB_ROWS_COUNT}"
echo "POST_CLEANUP_DISK_FILES_COUNT=${POST_CLEANUP_DISK_FILES_COUNT}"
echo "POST_CLEANUP_ORPHAN_FILES_COUNT=${POST_CLEANUP_ORPHAN_FILES_COUNT}"
echo "POST_CLEANUP_MISSING_FILES_COUNT=${POST_CLEANUP_MISSING_FILES_COUNT}"
if [[ "${POST_CLEANUP_DB_ROWS_COUNT}" != "${POST_CLEANUP_DISK_FILES_COUNT}" ]]; then
fail "publish-preview audit mismatch after cleanup: db_rows_count=${POST_CLEANUP_DB_ROWS_COUNT}, disk_files_count=${POST_CLEANUP_DISK_FILES_COUNT}"
fi
echo "[OK] publish-preview audit is fully consistent after cleanup"
request "pricing_cleanup_preview" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup-preview?code_prefix=${CLEANUP_PREFIX}" \
"200"
assert_json_eq "${TMP_DIR}/pricing_cleanup_preview.body" "scheme_id" "${SCHEME_ID}"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_preview.body" "total_candidates" "2"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_preview.body" "safe_to_delete_count" "1"
python3 - "${TMP_DIR}/pricing_cleanup_preview.body" "${DELETE_CATEGORY_ID}" "${KEEP_CATEGORY_ID}" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
delete_category_id = sys.argv[2]
keep_category_id = sys.argv[3]
items = {item["pricing_category_id"]: item for item in payload.get("items", [])}
if delete_category_id not in items:
    raise SystemExit(f"Delete candidate {delete_category_id} missing from cleanup preview")
if keep_category_id not in items:
    raise SystemExit(f"Protected category {keep_category_id} missing from cleanup preview")
if not items[delete_category_id]["deletable"]:
    raise SystemExit("Delete candidate is expected to be deletable")
if items[keep_category_id]["deletable"]:
    raise SystemExit("Protected category is expected to be skipped because it has rules")
PY
echo "[OK] pricing cleanup preview matched expected deletable/skipped categories"
request "pricing_cleanup_dry_run" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup" \
"200" \
"{\"code_prefixes\":[\"${CLEANUP_PREFIX}\"],\"name_prefixes\":[],\"pricing_category_ids\":[],\"delete_only_without_rules\":true,\"dry_run\":true}"
assert_json_eq "${TMP_DIR}/pricing_cleanup_dry_run.body" "scheme_id" "${SCHEME_ID}"
assert_json_eq "${TMP_DIR}/pricing_cleanup_dry_run.body" "dry_run" "true"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_dry_run.body" "matched_total" "2"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_dry_run.body" "would_delete_count" "1"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_dry_run.body" "deleted_count" "0"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_dry_run.body" "skipped_count" "1"
python3 - "${TMP_DIR}/pricing_cleanup_dry_run.body" "${DELETE_CATEGORY_ID}" "${KEEP_CATEGORY_ID}" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
delete_category_id = sys.argv[2]
keep_category_id = sys.argv[3]
would_delete_ids = set(payload.get("would_delete_category_ids", []))
if would_delete_ids != {delete_category_id}:
    raise SystemExit(
        f"Dry run expected would_delete_category_ids={[delete_category_id]}, got={sorted(would_delete_ids)}"
    )
skipped_ids = {item["pricing_category_id"] for item in payload.get("skipped", [])}
if skipped_ids != {keep_category_id}:
    raise SystemExit(
        f"Dry run expected skipped={[keep_category_id]}, got={sorted(skipped_ids)}"
    )
PY
echo "[OK] pricing cleanup dry-run kept protected category and selected only empty fixture category"
request "pricing_cleanup_execute" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup" \
"200" \
"{\"code_prefixes\":[\"${CLEANUP_PREFIX}\"],\"name_prefixes\":[],\"pricing_category_ids\":[],\"delete_only_without_rules\":true,\"dry_run\":false}"
assert_json_eq "${TMP_DIR}/pricing_cleanup_execute.body" "scheme_id" "${SCHEME_ID}"
assert_json_eq "${TMP_DIR}/pricing_cleanup_execute.body" "dry_run" "false"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_execute.body" "matched_total" "2"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_execute.body" "would_delete_count" "1"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_execute.body" "deleted_count" "1"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_execute.body" "skipped_count" "1"
python3 - "${TMP_DIR}/pricing_cleanup_execute.body" "${DELETE_CATEGORY_ID}" "${KEEP_CATEGORY_ID}" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
delete_category_id = sys.argv[2]
keep_category_id = sys.argv[3]
deleted_ids = set(payload.get("deleted_category_ids", []))
if deleted_ids != {delete_category_id}:
    raise SystemExit(
        f"Cleanup execute expected deleted_category_ids={[delete_category_id]}, got={sorted(deleted_ids)}"
    )
skipped_ids = {item["pricing_category_id"] for item in payload.get("skipped", [])}
if skipped_ids != {keep_category_id}:
    raise SystemExit(
        f"Cleanup execute expected skipped={[keep_category_id]}, got={sorted(skipped_ids)}"
    )
PY
echo "[OK] pricing cleanup execute deleted only safe fixture category"
request "pricing_bundle_after_cleanup" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing" \
"200"
assert_json_len_eq "${TMP_DIR}/pricing_bundle_after_cleanup.body" "categories" "1"
assert_json_len_eq "${TMP_DIR}/pricing_bundle_after_cleanup.body" "rules" "1"
python3 - "${TMP_DIR}/pricing_bundle_after_cleanup.body" "${DELETE_CATEGORY_ID}" "${KEEP_CATEGORY_ID}" "${KEEP_RULE_ID}" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
delete_category_id = sys.argv[2]
keep_category_id = sys.argv[3]
keep_rule_id = sys.argv[4]
category_ids = {item["pricing_category_id"] for item in payload.get("categories", [])}
rule_ids = {item["price_rule_id"] for item in payload.get("rules", [])}
if delete_category_id in category_ids:
    raise SystemExit("Deleted cleanup category still present in pricing bundle")
if keep_category_id not in category_ids:
    raise SystemExit("Protected cleanup category missing after execute cleanup")
if keep_rule_id not in rule_ids:
    raise SystemExit("Protected pricing rule missing after execute cleanup")
PY
echo "[OK] pricing bundle reflects destructive cleanup result"
request "pricing_cleanup_preview_after_cleanup" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup-preview?code_prefix=${CLEANUP_PREFIX}" \
"200"
assert_json_eq "${TMP_DIR}/pricing_cleanup_preview_after_cleanup.body" "scheme_id" "${SCHEME_ID}"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_preview_after_cleanup.body" "total_candidates" "1"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_preview_after_cleanup.body" "safe_to_delete_count" "0"
request "pricing_cleanup_dry_run_after_cleanup" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup" \
"200" \
"{\"code_prefixes\":[\"${CLEANUP_PREFIX}\"],\"name_prefixes\":[],\"pricing_category_ids\":[],\"delete_only_without_rules\":true,\"dry_run\":true}"
assert_json_eq "${TMP_DIR}/pricing_cleanup_dry_run_after_cleanup.body" "scheme_id" "${SCHEME_ID}"
assert_json_eq "${TMP_DIR}/pricing_cleanup_dry_run_after_cleanup.body" "dry_run" "true"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_dry_run_after_cleanup.body" "matched_total" "1"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_dry_run_after_cleanup.body" "would_delete_count" "0"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_dry_run_after_cleanup.body" "deleted_count" "0"
assert_json_int_eq "${TMP_DIR}/pricing_cleanup_dry_run_after_cleanup.body" "skipped_count" "1"
python3 - "${TMP_DIR}/pricing_cleanup_dry_run_after_cleanup.body" "${KEEP_CATEGORY_ID}" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
keep_category_id = sys.argv[2]
would_delete_ids = payload.get("would_delete_category_ids", [])
if would_delete_ids:
    raise SystemExit(f"Expected no deletable categories after cleanup, got={would_delete_ids}")
skipped_ids = {item["pricing_category_id"] for item in payload.get("skipped", [])}
if skipped_ids != {keep_category_id}:
    raise SystemExit(
        f"Post-cleanup dry run expected skipped={[keep_category_id]}, got={sorted(skipped_ids)}"
    )
PY
echo "[OK] repeated cleanup state is stable after destructive cleanup"
echo
echo "===== done ====="
echo "[OK] smoke admin ops completed successfully"
echo "FRESH_SCHEME_ID=${SCHEME_ID}"
echo "DELETE_CATEGORY_ID=${DELETE_CATEGORY_ID}"
echo "KEEP_CATEGORY_ID=${KEEP_CATEGORY_ID}"
echo "KEEP_RULE_ID=${KEEP_RULE_ID}"

View File

@@ -0,0 +1,173 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT
# shellcheck source=backend/scripts/smoke_common.sh
source "${SCRIPT_DIR}/smoke_common.sh"
set -a
source "${REPO_ROOT}/.env"
set +a
wait_for_health
create_fresh_scheme_from_upload "smoke-artifact-corruption"
request "scheme_current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
CURRENT_VERSION_ID="$(json_get "${TMP_DIR}/scheme_current.body" "scheme_version_id")"
echo "CURRENT_VERSION_ID=${CURRENT_VERSION_ID}"
request "ensure_draft" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/ensure?expected_current_scheme_version_id=${CURRENT_VERSION_ID}" \
"200"
DRAFT_VERSION_ID="$(json_get "${TMP_DIR}/ensure_draft.body" "scheme_version_id")"
echo "DRAFT_VERSION_ID=${DRAFT_VERSION_ID}"
request "initial_publish_preview_audit" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"200"
assert_json_int_eq "${TMP_DIR}/initial_publish_preview_audit.body" "orphan_files_count" "0"
assert_json_int_eq "${TMP_DIR}/initial_publish_preview_audit.body" "missing_files_for_db_rows_count" "0"
request "publish_preview_refresh_case_a" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/publish-preview?refresh=true&expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200"
request "admin_current_artifacts_case_a" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/artifacts" \
"200"
read -r CASE_A_ARTIFACT_ID CASE_A_STORAGE_PATH <<EOF
$(python3 - "${TMP_DIR}/admin_current_artifacts_case_a.body" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
items = [item for item in payload.get("items", []) if item.get("artifact_type") == "publish_preview"]
if not items:
    raise SystemExit("No publish_preview artifact found for case A")
item = items[-1]
print(item["artifact_id"], item["storage_path"])
PY
)
EOF
echo "CASE_A_ARTIFACT_ID=${CASE_A_ARTIFACT_ID}"
echo "CASE_A_STORAGE_PATH=${CASE_A_STORAGE_PATH}"
docker compose exec -T svg-service python - "${CASE_A_STORAGE_PATH}" <<'PY'
from pathlib import Path
import sys
path = Path(sys.argv[1])
if not path.exists():
    raise SystemExit(f"Case A preview file missing before manual removal: {path}")
path.unlink()
PY
echo "[OK] case A manually removed preview file while DB row remains"
request "audit_case_a_broken" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"200"
assert_json_int_eq "${TMP_DIR}/audit_case_a_broken.body" "orphan_files_count" "0"
assert_json_int_eq "${TMP_DIR}/audit_case_a_broken.body" "missing_files_for_db_rows_count" "1"
assert_file_contains "${TMP_DIR}/audit_case_a_broken.body" "\"artifact_id\":\"${CASE_A_ARTIFACT_ID}\""
request "cleanup_case_a_dry_run" "POST" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true" \
"200"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_dry_run.body" "orphan_files_count" "0"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_dry_run.body" "missing_files_for_db_rows_count" "1"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_dry_run.body" "deleted_files_count" "0"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_dry_run.body" "deleted_db_rows_count" "0"
request "cleanup_case_a_execute" "POST" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/cleanup?dry_run=false" \
"200"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_execute.body" "orphan_files_count" "0"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_execute.body" "missing_files_for_db_rows_count" "1"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_execute.body" "deleted_files_count" "0"
assert_json_int_eq "${TMP_DIR}/cleanup_case_a_execute.body" "deleted_db_rows_count" "1"
assert_file_contains "${TMP_DIR}/cleanup_case_a_execute.body" "\"${CASE_A_ARTIFACT_ID}\""
request "audit_case_a_healthy" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"200"
assert_json_int_eq "${TMP_DIR}/audit_case_a_healthy.body" "orphan_files_count" "0"
assert_json_int_eq "${TMP_DIR}/audit_case_a_healthy.body" "missing_files_for_db_rows_count" "0"
request "publish_preview_refresh_case_b" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/publish-preview?refresh=true&expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200"
request "admin_current_artifacts_case_b" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/artifacts" \
"200"
read -r CASE_B_ARTIFACT_ID CASE_B_STORAGE_PATH <<EOF
$(python3 - "${TMP_DIR}/admin_current_artifacts_case_b.body" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
items = [item for item in payload.get("items", []) if item.get("artifact_type") == "publish_preview"]
if not items:
    raise SystemExit("No publish_preview artifact found for case B")
item = items[-1]
print(item["artifact_id"], item["storage_path"])
PY
)
EOF
echo "CASE_B_ARTIFACT_ID=${CASE_B_ARTIFACT_ID}"
echo "CASE_B_STORAGE_PATH=${CASE_B_STORAGE_PATH}"
CASE_B_DELETE_COUNT="$(docker compose exec -T postgres psql -U "${POSTGRES_USER}" -d "${POSTGRES_DB}" -Atc \
  "with deleted as (
    delete from scheme_artifacts
    where artifact_id='${CASE_B_ARTIFACT_ID}'
      and artifact_type='publish_preview'
      and scheme_id='${SCHEME_ID}'
    returning 1
  ) select count(*) from deleted;")"
if [[ "${CASE_B_DELETE_COUNT}" != "1" ]]; then
fail "Case B expected to delete exactly one publish_preview DB row, got ${CASE_B_DELETE_COUNT}"
fi
echo "[OK] case B manually removed publish_preview DB row while file remains"
request "audit_case_b_broken" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"200"
assert_json_int_eq "${TMP_DIR}/audit_case_b_broken.body" "orphan_files_count" "1"
assert_json_int_eq "${TMP_DIR}/audit_case_b_broken.body" "missing_files_for_db_rows_count" "0"
assert_file_contains "${TMP_DIR}/audit_case_b_broken.body" "\"${CASE_B_STORAGE_PATH}\""
request "cleanup_case_b_dry_run" "POST" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true" \
"200"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_dry_run.body" "orphan_files_count" "1"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_dry_run.body" "missing_files_for_db_rows_count" "0"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_dry_run.body" "deleted_files_count" "0"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_dry_run.body" "deleted_db_rows_count" "0"
request "cleanup_case_b_execute" "POST" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/cleanup?dry_run=false" \
"200"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_execute.body" "orphan_files_count" "1"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_execute.body" "missing_files_for_db_rows_count" "0"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_execute.body" "deleted_files_count" "1"
assert_json_int_eq "${TMP_DIR}/cleanup_case_b_execute.body" "deleted_db_rows_count" "0"
assert_file_contains "${TMP_DIR}/cleanup_case_b_execute.body" "\"${CASE_B_STORAGE_PATH}\""
request "final_publish_preview_audit" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"200"
assert_json_int_eq "${TMP_DIR}/final_publish_preview_audit.body" "orphan_files_count" "0"
assert_json_int_eq "${TMP_DIR}/final_publish_preview_audit.body" "missing_files_for_db_rows_count" "0"
FINAL_DB_ROWS_COUNT="$(json_get "${TMP_DIR}/final_publish_preview_audit.body" "db_rows_count")"
FINAL_DISK_FILES_COUNT="$(json_get "${TMP_DIR}/final_publish_preview_audit.body" "disk_files_count")"
if [[ "${FINAL_DB_ROWS_COUNT}" != "${FINAL_DISK_FILES_COUNT}" ]]; then
fail "Final publish-preview audit mismatch after remediation: db_rows_count=${FINAL_DB_ROWS_COUNT}, disk_files_count=${FINAL_DISK_FILES_COUNT}"
fi
echo
echo "===== done ====="
echo "[OK] smoke artifact corruption completed successfully"
echo "FRESH_SCHEME_ID=${SCHEME_ID}"
echo "CASE_A_ARTIFACT_ID=${CASE_A_ARTIFACT_ID}"
echo "CASE_B_ARTIFACT_ID=${CASE_B_ARTIFACT_ID}"
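Cases A and B above exercise a simple two-way reconciliation: DB rows without files are "missing", files without DB rows are "orphans". A sketch of that audit computation, reconstructed from the asserted counts (the real backend implementation may differ):

```python
# Sketch (assumption): publish-preview audit reconciliation, reconstructed from
# the orphan/missing counts asserted in cases A and B above.
def audit_publish_previews(db_paths: set, disk_paths: set) -> dict:
    return {
        # Case B: file still on disk, but its DB row was removed.
        "orphan_files_count": len(disk_paths - db_paths),
        # Case A: DB row still present, but its file was removed from disk.
        "missing_files_for_db_rows_count": len(db_paths - disk_paths),
    }

print(audit_publish_previews({"a.svg"}, set()))   # case A: missing=1, orphans=0
print(audit_publish_previews(set(), {"b.svg"}))   # case B: orphans=1, missing=0
```

After cleanup remediates both cases, both counts return to zero, which is exactly what the final audit asserts.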

@@ -0,0 +1,78 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT
# shellcheck source=backend/scripts/smoke_common.sh
source "${SCRIPT_DIR}/smoke_common.sh"
INVALID_API_KEY="${INVALID_API_KEY:-definitely-invalid-api-key}"
VIEWER_API_KEY="${VIEWER_API_KEY:-viewer-local-dev-key}"
wait_for_health
create_fresh_scheme_from_upload "smoke-auth-negative"
request "scheme_current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
CURRENT_VERSION_ID="$(json_get "${TMP_DIR}/scheme_current.body" "scheme_version_id")"
echo "CURRENT_VERSION_ID=${CURRENT_VERSION_ID}"
request_without_api_key "manifest_missing_key" "GET" \
"${API_URL}/api/v1/manifest" \
"401"
request_with_api_key "${INVALID_API_KEY}" "manifest_invalid_key" "GET" \
"${API_URL}/api/v1/manifest" \
"403"
assert_file_contains "${TMP_DIR}/manifest_missing_key.body" "Missing API key"
assert_file_contains "${TMP_DIR}/manifest_invalid_key.body" "Invalid API key"
request_without_api_key "editor_context_missing_key" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/editor/context" \
"401"
request_with_api_key "${INVALID_API_KEY}" "editor_context_invalid_key" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/editor/context" \
"403"
assert_file_contains "${TMP_DIR}/editor_context_missing_key.body" "Missing API key"
assert_file_contains "${TMP_DIR}/editor_context_invalid_key.body" "Invalid API key"
request_without_api_key "pricing_bundle_missing_key" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing" \
"401"
request_with_api_key "${INVALID_API_KEY}" "pricing_bundle_invalid_key" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing" \
"403"
assert_file_contains "${TMP_DIR}/pricing_bundle_missing_key.body" "Missing API key"
assert_file_contains "${TMP_DIR}/pricing_bundle_invalid_key.body" "Invalid API key"
request_without_api_key "admin_audit_missing_key" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"401"
request_with_api_key "${INVALID_API_KEY}" "admin_audit_invalid_key" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"403"
request_with_api_key "${VIEWER_API_KEY}" "admin_audit_wrong_role" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" \
"403"
assert_file_contains "${TMP_DIR}/admin_audit_missing_key.body" "Missing API key"
assert_file_contains "${TMP_DIR}/admin_audit_invalid_key.body" "Invalid API key"
assert_file_contains "${TMP_DIR}/admin_audit_wrong_role.body" "Admin role required"
request_without_api_key "admin_cleanup_preview_missing_key" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup-preview" \
"401"
request_with_api_key "${INVALID_API_KEY}" "admin_cleanup_preview_invalid_key" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup-preview" \
"403"
request_with_api_key "${VIEWER_API_KEY}" "admin_cleanup_preview_wrong_role" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup-preview" \
"403"
assert_file_contains "${TMP_DIR}/admin_cleanup_preview_missing_key.body" "Missing API key"
assert_file_contains "${TMP_DIR}/admin_cleanup_preview_invalid_key.body" "Invalid API key"
assert_file_contains "${TMP_DIR}/admin_cleanup_preview_wrong_role.body" "Admin role required"
echo
echo "===== done ====="
echo "[OK] smoke auth negative completed successfully"
echo "FRESH_SCHEME_ID=${SCHEME_ID}"
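The auth-negative script pins a three-way contract: no key yields 401, an unknown key yields 403, and a known key with the wrong role yields 403 with a role message. A sketch of that decision table (the key-to-role mapping and helper are illustrative, not the backend's code; messages mirror the assertions above):

```python
# Sketch (assumption): the status/message contract asserted by the smoke script.
def auth_decision(api_key, known_keys, required_role=None):
    if api_key is None:
        return 401, "Missing API key"       # no credential presented at all
    role = known_keys.get(api_key)
    if role is None:
        return 403, "Invalid API key"       # credential presented but unknown
    if required_role is not None and role != required_role:
        return 403, "Admin role required"   # authenticated, insufficient role
    return 200, "OK"

KEYS = {"viewer-local-dev-key": "viewer", "admin-local-dev-key": "admin"}
print(auth_decision(None, KEYS))                             # (401, 'Missing API key')
print(auth_decision("definitely-invalid-api-key", KEYS))     # (403, 'Invalid API key')
print(auth_decision("viewer-local-dev-key", KEYS, "admin"))  # (403, 'Admin role required')
```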

@@ -0,0 +1,202 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT
# shellcheck source=backend/scripts/smoke_common.sh
source "${SCRIPT_DIR}/smoke_common.sh"
ADMIN_API_KEY="${ADMIN_API_KEY:-admin-local-dev-key}"
OPERATOR_API_KEY="${OPERATOR_API_KEY:-operator-local-dev-key}"
VIEWER_API_KEY="${VIEWER_API_KEY:-viewer-local-dev-key}"
wait_for_health
create_fresh_scheme_from_upload "smoke-authz-admin-all"
request "scheme_current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
CURRENT_VERSION_ID="$(json_get "${TMP_DIR}/scheme_current.body" "scheme_version_id")"
echo "CURRENT_VERSION_ID=${CURRENT_VERSION_ID}"
request "ensure_draft" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/ensure?expected_current_scheme_version_id=${CURRENT_VERSION_ID}" \
"200"
DRAFT_VERSION_ID="$(json_get "${TMP_DIR}/ensure_draft.body" "scheme_version_id")"
echo "DRAFT_VERSION_ID=${DRAFT_VERSION_ID}"
request "draft_structure" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/structure?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200"
TARGET_SEAT_ID="$(python3 - "${TMP_DIR}/draft_structure.body" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
seat = next((item for item in payload.get("seats", []) if item.get("seat_id")), None)
if seat is None:
    raise SystemExit("No seat with seat_id found for authz admin all smoke")
print(seat["seat_id"])
PY
)"
echo "TARGET_SEAT_ID=${TARGET_SEAT_ID}"
STAMP="$(date +%s)-$$"
CLEANUP_PREFIX="AUTHZ_ADMINALL_${STAMP}_"
DELETE_CATEGORY_NAME="authz-adminall-delete-${STAMP}"
DELETE_CATEGORY_CODE="${CLEANUP_PREFIX}DELETE"
KEEP_CATEGORY_NAME="authz-adminall-keep-${STAMP}"
KEEP_CATEGORY_CODE="${CLEANUP_PREFIX}KEEP"
request "create_delete_category" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/categories?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200" \
"{\"name\":\"${DELETE_CATEGORY_NAME}\",\"code\":\"${DELETE_CATEGORY_CODE}\"}"
DELETE_CATEGORY_ID="$(json_get "${TMP_DIR}/create_delete_category.body" "pricing_category_id")"
echo "DELETE_CATEGORY_ID=${DELETE_CATEGORY_ID}"
request "create_keep_category" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/categories?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200" \
"{\"name\":\"${KEEP_CATEGORY_NAME}\",\"code\":\"${KEEP_CATEGORY_CODE}\"}"
KEEP_CATEGORY_ID="$(json_get "${TMP_DIR}/create_keep_category.body" "pricing_category_id")"
echo "KEEP_CATEGORY_ID=${KEEP_CATEGORY_ID}"
request "create_keep_category_rule" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/rules?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200" \
"{\"pricing_category_id\":\"${KEEP_CATEGORY_ID}\",\"target_type\":\"seat\",\"target_ref\":\"${TARGET_SEAT_ID}\",\"amount\":\"777.00\",\"currency\":\"RUB\"}"
KEEP_RULE_ID="$(json_get "${TMP_DIR}/create_keep_category_rule.body" "price_rule_id")"
echo "KEEP_RULE_ID=${KEEP_RULE_ID}"
request "draft_pricing_snapshot" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/pricing/snapshot?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200"
request "publish_preview_refresh" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/publish-preview?refresh=true&expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200"
request_with_api_key "${ADMIN_API_KEY}" "admin_current_artifacts" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/artifacts" "200"
request_with_api_key "${OPERATOR_API_KEY}" "operator_current_artifacts" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/artifacts" "403"
request_with_api_key "${VIEWER_API_KEY}" "viewer_current_artifacts" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/artifacts" "403"
assert_file_contains "${TMP_DIR}/operator_current_artifacts.body" "Admin role required"
assert_file_contains "${TMP_DIR}/viewer_current_artifacts.body" "Admin role required"
request_with_api_key "${ADMIN_API_KEY}" "admin_current_validation" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/validation" "200"
request_with_api_key "${OPERATOR_API_KEY}" "operator_current_validation" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/validation" "403"
request_with_api_key "${VIEWER_API_KEY}" "viewer_current_validation" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/validation" "403"
assert_file_contains "${TMP_DIR}/operator_current_validation.body" "Admin role required"
assert_file_contains "${TMP_DIR}/viewer_current_validation.body" "Admin role required"
request_with_api_key "${ADMIN_API_KEY}" "admin_display_regenerate" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/display/regenerate?mode=passthrough" "200"
request_with_api_key "${OPERATOR_API_KEY}" "operator_display_regenerate" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/display/regenerate?mode=passthrough" "403"
request_with_api_key "${VIEWER_API_KEY}" "viewer_display_regenerate" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/display/regenerate?mode=passthrough" "403"
assert_file_contains "${TMP_DIR}/operator_display_regenerate.body" "Admin role required"
assert_file_contains "${TMP_DIR}/viewer_display_regenerate.body" "Admin role required"
request_with_api_key "${ADMIN_API_KEY}" "admin_display_backfill" "POST" \
"${API_URL}/api/v1/admin/display/backfill?mode=passthrough&limit=1&only_missing=true" "200"
request_with_api_key "${OPERATOR_API_KEY}" "operator_display_backfill" "POST" \
"${API_URL}/api/v1/admin/display/backfill?mode=passthrough&limit=1&only_missing=true" "403"
request_with_api_key "${VIEWER_API_KEY}" "viewer_display_backfill" "POST" \
"${API_URL}/api/v1/admin/display/backfill?mode=passthrough&limit=1&only_missing=true" "403"
assert_file_contains "${TMP_DIR}/operator_display_backfill.body" "Admin role required"
assert_file_contains "${TMP_DIR}/viewer_display_backfill.body" "Admin role required"
request_with_api_key "${ADMIN_API_KEY}" "admin_publish_preview_audit" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" "200"
request_with_api_key "${OPERATOR_API_KEY}" "operator_publish_preview_audit" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" "403"
request_with_api_key "${VIEWER_API_KEY}" "viewer_publish_preview_audit" "GET" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/audit" "403"
assert_file_contains "${TMP_DIR}/operator_publish_preview_audit.body" "Admin role required"
assert_file_contains "${TMP_DIR}/viewer_publish_preview_audit.body" "Admin role required"
request_with_api_key "${ADMIN_API_KEY}" "admin_publish_preview_cleanup_dry_run" "POST" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true" "200"
request_with_api_key "${OPERATOR_API_KEY}" "operator_publish_preview_cleanup_dry_run" "POST" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true" "403"
request_with_api_key "${VIEWER_API_KEY}" "viewer_publish_preview_cleanup_dry_run" "POST" \
"${API_URL}/api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true" "403"
assert_file_contains "${TMP_DIR}/operator_publish_preview_cleanup_dry_run.body" "Admin role required"
assert_file_contains "${TMP_DIR}/viewer_publish_preview_cleanup_dry_run.body" "Admin role required"
request_with_api_key "${ADMIN_API_KEY}" "admin_pricing_cleanup_preview" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup-preview?code_prefix=${CLEANUP_PREFIX}" "200"
request_with_api_key "${OPERATOR_API_KEY}" "operator_pricing_cleanup_preview" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup-preview?code_prefix=${CLEANUP_PREFIX}" "403"
request_with_api_key "${VIEWER_API_KEY}" "viewer_pricing_cleanup_preview" "GET" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup-preview?code_prefix=${CLEANUP_PREFIX}" "403"
assert_file_contains "${TMP_DIR}/operator_pricing_cleanup_preview.body" "Admin role required"
assert_file_contains "${TMP_DIR}/viewer_pricing_cleanup_preview.body" "Admin role required"
request_with_api_key "${ADMIN_API_KEY}" "admin_pricing_cleanup_dry_run" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup" "200" \
"{\"code_prefixes\":[\"${CLEANUP_PREFIX}\"],\"name_prefixes\":[],\"pricing_category_ids\":[],\"delete_only_without_rules\":true,\"dry_run\":true}"
request_with_api_key "${OPERATOR_API_KEY}" "operator_pricing_cleanup_dry_run" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup" "403" \
"{\"code_prefixes\":[\"${CLEANUP_PREFIX}\"],\"name_prefixes\":[],\"pricing_category_ids\":[],\"delete_only_without_rules\":true,\"dry_run\":true}"
request_with_api_key "${VIEWER_API_KEY}" "viewer_pricing_cleanup_dry_run" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup" "403" \
"{\"code_prefixes\":[\"${CLEANUP_PREFIX}\"],\"name_prefixes\":[],\"pricing_category_ids\":[],\"delete_only_without_rules\":true,\"dry_run\":true}"
assert_file_contains "${TMP_DIR}/operator_pricing_cleanup_dry_run.body" "Admin role required"
assert_file_contains "${TMP_DIR}/viewer_pricing_cleanup_dry_run.body" "Admin role required"
request_with_api_key "${OPERATOR_API_KEY}" "operator_pricing_cleanup_execute" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup" "403" \
"{\"code_prefixes\":[\"${CLEANUP_PREFIX}\"],\"name_prefixes\":[],\"pricing_category_ids\":[],\"delete_only_without_rules\":true,\"dry_run\":false}"
request_with_api_key "${VIEWER_API_KEY}" "viewer_pricing_cleanup_execute" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup" "403" \
"{\"code_prefixes\":[\"${CLEANUP_PREFIX}\"],\"name_prefixes\":[],\"pricing_category_ids\":[],\"delete_only_without_rules\":true,\"dry_run\":false}"
assert_file_contains "${TMP_DIR}/operator_pricing_cleanup_execute.body" "Admin role required"
assert_file_contains "${TMP_DIR}/viewer_pricing_cleanup_execute.body" "Admin role required"
request_with_api_key "${ADMIN_API_KEY}" "admin_pricing_cleanup_execute" "POST" \
"${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup" "200" \
"{\"code_prefixes\":[\"${CLEANUP_PREFIX}\"],\"name_prefixes\":[],\"pricing_category_ids\":[],\"delete_only_without_rules\":true,\"dry_run\":false}"
assert_json_int_eq "${TMP_DIR}/admin_pricing_cleanup_execute.body" "deleted_count" "1"
assert_json_int_eq "${TMP_DIR}/admin_pricing_cleanup_execute.body" "skipped_count" "1"
request "pricing_bundle_after_admin_cleanup_execute" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing" "200"
assert_json_len_eq "${TMP_DIR}/pricing_bundle_after_admin_cleanup_execute.body" "categories" "1"
assert_json_len_eq "${TMP_DIR}/pricing_bundle_after_admin_cleanup_execute.body" "rules" "1"
python3 - "${TMP_DIR}/pricing_bundle_after_admin_cleanup_execute.body" "${DELETE_CATEGORY_ID}" "${KEEP_CATEGORY_ID}" "${KEEP_RULE_ID}" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
delete_category_id = sys.argv[2]
keep_category_id = sys.argv[3]
keep_rule_id = sys.argv[4]
category_ids = {item["pricing_category_id"] for item in payload.get("categories", [])}
rule_ids = {item["price_rule_id"] for item in payload.get("rules", [])}
if delete_category_id in category_ids:
    raise SystemExit("Authz admin-all cleanup left deletable category behind")
if keep_category_id not in category_ids:
    raise SystemExit("Authz admin-all cleanup removed protected category")
if keep_rule_id not in rule_ids:
    raise SystemExit("Authz admin-all cleanup removed protected rule")
PY
echo "[OK] admin cleanup execute remained destructive only for safe fixture category"
echo
echo "===== done ====="
echo "[OK] smoke authz admin all completed successfully"
echo "FRESH_SCHEME_ID=${SCHEME_ID}"
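The `deleted_count=1` / `skipped_count=1` assertions above follow from the `delete_only_without_rules` flag: of the two prefix-matched fixture categories, only the one without a price rule is deletable. A sketch of that selection rule, reconstructed from the asserted outcome (not the backend's actual implementation):

```python
# Sketch (assumption): cleanup selection behind delete_only_without_rules,
# reconstructed from the deleted/skipped counts asserted above.
def select_cleanup(categories, rules, code_prefixes, delete_only_without_rules=True):
    with_rules = {r["pricing_category_id"] for r in rules}
    # Only categories whose code matches one of the requested prefixes are in scope.
    matched = [c for c in categories
               if any(c["code"].startswith(p) for p in code_prefixes)]
    # A matched category with a rule is protected when the flag is set.
    deleted = [c for c in matched
               if not (delete_only_without_rules
                       and c["pricing_category_id"] in with_rules)]
    skipped = [c for c in matched if c not in deleted]
    return deleted, skipped

cats = [{"pricing_category_id": "del-1", "code": "AUTHZ_X_DELETE"},
        {"pricing_category_id": "keep-1", "code": "AUTHZ_X_KEEP"}]
rules = [{"pricing_category_id": "keep-1", "price_rule_id": "r-1"}]
deleted, skipped = select_cleanup(cats, rules, ["AUTHZ_X_"])
print(len(deleted), len(skipped))  # 1 1
```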

@@ -0,0 +1,500 @@
#!/usr/bin/env bash
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
API_URL="${API_URL:-http://127.0.0.1:9020}"
API_KEY="${API_KEY:-admin-local-dev-key}"
FIXTURE_SVG_PATH="${FIXTURE_SVG_PATH:-${REPO_ROOT}/sample-contract.svg}"
HEALTH_MAX_ATTEMPTS="${HEALTH_MAX_ATTEMPTS:-20}"
HEALTH_RETRY_DELAY_SECONDS="${HEALTH_RETRY_DELAY_SECONDS:-1}"
log() {
echo
echo "===== $* ====="
}
fail() {
echo
echo "[FAIL] $*" >&2
exit 1
}
require_fixture_svg() {
if [[ ! -f "${FIXTURE_SVG_PATH}" ]]; then
fail "Fixture SVG not found: ${FIXTURE_SVG_PATH}"
fi
}
wait_for_health() {
log "health"
echo "waiting for API to be ready..."
local health_ready="false"
local health_status=""
for ((i = 1; i <= HEALTH_MAX_ATTEMPTS; i++)); do
health_status="$(curl -sS -o /dev/null -w "%{http_code}" "${API_URL}/healthz" || true)"
if [[ "${health_status}" == "200" ]]; then
health_ready="true"
echo "API is ready"
break
fi
echo "waiting... (${i}/${HEALTH_MAX_ATTEMPTS}) healthz=${health_status}"
sleep "${HEALTH_RETRY_DELAY_SECONDS}"
done
if [[ "${health_ready}" != "true" ]]; then
fail "API did not become ready on ${API_URL}/healthz after ${HEALTH_MAX_ATTEMPTS} attempts"
fi
curl -sS -i "${API_URL}/healthz"
}
request() {
# Identical to request_with_api_key, using the default API_KEY from the environment.
request_with_api_key "${API_KEY}" "$@"
}
request_with_api_key() {
local api_key="$1"
local name="$2"
local method="$3"
local url="$4"
local expected_status="$5"
local body="${6:-}"
local out_file="${TMP_DIR}/${name}.body"
local status_file="${TMP_DIR}/${name}.status"
echo
echo "===== ${name} ====="
if [[ -n "${body}" ]]; then
curl -sS \
-X "${method}" \
-H "X-API-Key: ${api_key}" \
-H "Content-Type: application/json" \
-o "${out_file}" \
-w "%{http_code}" \
"${url}" \
--data "${body}" > "${status_file}"
else
curl -sS \
-X "${method}" \
-H "X-API-Key: ${api_key}" \
-o "${out_file}" \
-w "%{http_code}" \
"${url}" > "${status_file}"
fi
local actual_status
actual_status="$(python3 - "$status_file" <<'PY'
from pathlib import Path
import sys
print(Path(sys.argv[1]).read_text(encoding="utf-8").strip())
PY
)"
echo "[${method}] ${url} -> ${actual_status}"
python3 - "$out_file" <<'PY'
from pathlib import Path
import sys
print(Path(sys.argv[1]).read_text(encoding="utf-8"))
PY
echo
if [[ "${actual_status}" != "${expected_status}" ]]; then
fail "Unexpected HTTP status for ${name}: expected ${expected_status}, got ${actual_status}"
fi
}
request_without_api_key() {
local name="$1"
local method="$2"
local url="$3"
local expected_status="$4"
local body="${5:-}"
local out_file="${TMP_DIR}/${name}.body"
local status_file="${TMP_DIR}/${name}.status"
echo
echo "===== ${name} ====="
if [[ -n "${body}" ]]; then
curl -sS \
-X "${method}" \
-H "Content-Type: application/json" \
-o "${out_file}" \
-w "%{http_code}" \
"${url}" \
--data "${body}" > "${status_file}"
else
curl -sS \
-X "${method}" \
-o "${out_file}" \
-w "%{http_code}" \
"${url}" > "${status_file}"
fi
local actual_status
actual_status="$(python3 - "$status_file" <<'PY'
from pathlib import Path
import sys
print(Path(sys.argv[1]).read_text(encoding="utf-8").strip())
PY
)"
echo "[${method}] ${url} -> ${actual_status}"
python3 - "$out_file" <<'PY'
from pathlib import Path
import sys
print(Path(sys.argv[1]).read_text(encoding="utf-8"))
PY
echo
if [[ "${actual_status}" != "${expected_status}" ]]; then
fail "Unexpected HTTP status for ${name}: expected ${expected_status}, got ${actual_status}"
fi
}
upload_svg() {
local name="$1"
local upload_filename="$2"
local out_file="${TMP_DIR}/${name}.body"
local status_file="${TMP_DIR}/${name}.status"
require_fixture_svg
echo
echo "===== ${name} ====="
curl -sS \
-X POST \
-H "X-API-Key: ${API_KEY}" \
-o "${out_file}" \
-w "%{http_code}" \
-F "file=@${FIXTURE_SVG_PATH};filename=${upload_filename};type=image/svg+xml" \
"${API_URL}/api/v1/schemes/upload" > "${status_file}"
local actual_status
actual_status="$(python3 - "$status_file" <<'PY'
from pathlib import Path
import sys
print(Path(sys.argv[1]).read_text(encoding="utf-8").strip())
PY
)"
echo "[POST] ${API_URL}/api/v1/schemes/upload -> ${actual_status}"
python3 - "$out_file" <<'PY'
from pathlib import Path
import sys
print(Path(sys.argv[1]).read_text(encoding="utf-8"))
PY
echo
if [[ "${actual_status}" != "200" ]]; then
fail "Upload failed for ${upload_filename}: expected 200, got ${actual_status}"
fi
}
upload_file_expect_status() {
local name="$1"
local file_path="$2"
local upload_filename="$3"
local content_type="$4"
local expected_status="$5"
local out_file="${TMP_DIR}/${name}.body"
local status_file="${TMP_DIR}/${name}.status"
if [[ ! -f "${file_path}" ]]; then
fail "Upload fixture file not found: ${file_path}"
fi
echo
echo "===== ${name} ====="
curl -sS \
-X POST \
-H "X-API-Key: ${API_KEY}" \
-o "${out_file}" \
-w "%{http_code}" \
-F "file=@${file_path};filename=${upload_filename};type=${content_type}" \
"${API_URL}/api/v1/schemes/upload" > "${status_file}"
local actual_status
actual_status="$(python3 - "$status_file" <<'PY'
from pathlib import Path
import sys
print(Path(sys.argv[1]).read_text(encoding="utf-8").strip())
PY
)"
echo "[POST] ${API_URL}/api/v1/schemes/upload -> ${actual_status}"
python3 - "$out_file" <<'PY'
from pathlib import Path
import sys
print(Path(sys.argv[1]).read_text(encoding="utf-8"))
PY
echo
if [[ "${actual_status}" != "${expected_status}" ]]; then
fail "Upload failed for ${upload_filename}: expected ${expected_status}, got ${actual_status}"
fi
}
json_get() {
local file="$1"
local expr="$2"
python3 - "$file" "$expr" <<'PY'
import json
import re
import sys
from pathlib import Path
data = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
expr = sys.argv[2]
def apply_selector(value, selector):
    if value is None:
        return None
    if selector == "LAST":
        return value[-1] if value else None
    if selector.isdigit():
        idx = int(selector)
        return value[idx] if len(value) > idx else None
    match = re.fullmatch(r"([^!=]+?)(==|!=)(.+)", selector)
    if not match:
        return value[0] if value else None
    key, op, raw_expected = match.groups()
    key = key.strip()
    raw_expected = raw_expected.strip()
    if raw_expected == "null":
        expected = None
    else:
        expected = raw_expected
    for item in value:
        if not isinstance(item, dict):
            continue
        item_value = item.get(key)
        matched = item_value == expected if op == "==" else item_value != expected
        if matched:
            return item
    return None
value = data
for part in expr.split("."):
    if not part:
        continue
    if part.startswith("[") and part.endswith("]"):
        value = apply_selector(value, part[1:-1])
    elif part.isdigit():
        idx = int(part)
        value = value[idx] if value is not None and len(value) > idx else None
    elif isinstance(value, dict):
        value = value.get(part)
    else:
        value = None
if isinstance(value, bool):
    print("true" if value else "false")
elif value is None:
    print("")
elif isinstance(value, (dict, list)):
    print(json.dumps(value, ensure_ascii=False))
else:
    print(value)
PY
}
json_len() {
local file="$1"
local expr="$2"
python3 - "$file" "$expr" <<'PY'
import json
import sys
from pathlib import Path
data = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
expr = sys.argv[2]
value = data
for part in expr.split("."):
    if not part:
        continue
    if part.isdigit():
        value = value[int(part)]
    else:
        value = value.get(part) if isinstance(value, dict) else None
if value is None:
    print(0)
elif isinstance(value, (list, dict, str)):
    print(len(value))
else:
    print(0)
PY
}
assert_json_eq() {
  local file="$1"
  local expr="$2"
  local expected="$3"
  local actual
  actual="$(json_get "${file}" "${expr}")"
  if [[ "${actual}" != "${expected}" ]]; then
    fail "${expr}: expected '${expected}', got '${actual}'"
  fi
  echo "[OK] ${expr}=${actual}"
}

assert_json_int_eq() {
  local file="$1"
  local expr="$2"
  local expected="$3"
  local actual
  actual="$(json_get "${file}" "${expr}")"
  if ! [[ "${actual}" =~ ^[0-9]+$ ]]; then
    fail "${expr}: expected integer, got '${actual}'"
  fi
  if (( actual != expected )); then
    fail "${expr}: expected ${expected}, got ${actual}"
  fi
  echo "[OK] ${expr}=${actual}"
}

assert_json_int_gt() {
  local file="$1"
  local expr="$2"
  local threshold="$3"
  local actual
  actual="$(json_get "${file}" "${expr}")"
  if ! [[ "${actual}" =~ ^[0-9]+$ ]]; then
    fail "${expr}: expected integer, got '${actual}'"
  fi
  if (( actual <= threshold )); then
    fail "${expr}: expected > ${threshold}, got ${actual}"
  fi
  echo "[OK] ${expr}=${actual} (> ${threshold})"
}

assert_json_int_ge() {
  local file="$1"
  local expr="$2"
  local threshold="$3"
  local actual
  actual="$(json_get "${file}" "${expr}")"
  if ! [[ "${actual}" =~ ^[0-9]+$ ]]; then
    fail "${expr}: expected integer, got '${actual}'"
  fi
  if (( actual < threshold )); then
    fail "${expr}: expected >= ${threshold}, got ${actual}"
  fi
  echo "[OK] ${expr}=${actual} (>= ${threshold})"
}

assert_json_len_eq() {
  local file="$1"
  local expr="$2"
  local expected="$3"
  local actual
  actual="$(json_len "${file}" "${expr}")"
  if (( actual != expected )); then
    fail "len(${expr}): expected ${expected}, got ${actual}"
  fi
  echo "[OK] len(${expr})=${actual}"
}
assert_file_contains() {
  local file="$1"
  local needle="$2"
  if ! python3 - "$file" "$needle" <<'PY'
from pathlib import Path
import sys

haystack = Path(sys.argv[1]).read_text(encoding="utf-8")
needle = sys.argv[2]
if needle not in haystack:
    raise SystemExit(1)
PY
  then
    fail "Expected '${needle}' in ${file}"
  fi
  echo "[OK] found '${needle}'"
}
create_fresh_scheme_from_upload() {
  local scenario_prefix="$1"
  local stamp
  stamp="$(date +%s)-$$"
  FRESH_SCHEME_NAME="${scenario_prefix}-${stamp}"
  local upload_filename="${FRESH_SCHEME_NAME}.svg"
  upload_svg "upload_${scenario_prefix}" "${upload_filename}"
  request "schemes_after_upload_${scenario_prefix}" "GET" "${API_URL}/api/v1/schemes?limit=200&offset=0" "200"
  if ! SCHEME_ID="$(python3 - "${TMP_DIR}/schemes_after_upload_${scenario_prefix}.body" "${FRESH_SCHEME_NAME}" <<'PY'
import json
import sys
from pathlib import Path

payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
target_name = sys.argv[2]
for item in payload.get("items", []):
    if item.get("name") == target_name:
        print(item["scheme_id"])
        raise SystemExit(0)
raise SystemExit(1)
PY
  )"; then
    fail "Unable to resolve uploaded scheme_id for ${FRESH_SCHEME_NAME}"
  fi
  echo "FRESH_SCHEME_NAME=${FRESH_SCHEME_NAME}"
  echo "FRESH_SCHEME_ID=${SCHEME_ID}"
}
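The `json_get` helper above resolves dotted path expressions with numeric list indexes and `[key==value]` selectors. A minimal standalone sketch of that grammar (the sample payload, field names, and `demo_walk` helper are illustrative, not part of the service):

```shell
# Illustrative demo of the json_get-style path grammar:
# "summary.total" walks object keys, "items.1.name" indexes a list, and
# "items.[status==active].name" picks the first matching list element.
DEMO_JSON="$(mktemp)"
cat > "${DEMO_JSON}" <<'JSON'
{"summary": {"total": 2},
 "items": [{"name": "a", "status": "active"},
           {"name": "b", "status": "draft"}]}
JSON
demo_walk() {
  python3 - "${DEMO_JSON}" "$1" <<'PY'
import json
import sys
from pathlib import Path

value = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
for part in sys.argv[2].split("."):
    if part.startswith("[") and part.endswith("]"):
        key, _, raw = part[1:-1].partition("==")
        value = next((i for i in value if i.get(key) == raw), None)
    elif part.isdigit():
        value = value[int(part)]
    else:
        value = value.get(part)
print(value)
PY
}
DEMO_TOTAL="$(demo_walk "summary.total")"
DEMO_NAME="$(demo_walk "items.[status==active].name")"
echo "DEMO_TOTAL=${DEMO_TOTAL}"   # 2
echo "DEMO_NAME=${DEMO_NAME}"     # a
rm -f "${DEMO_JSON}"
```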


@@ -0,0 +1,133 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT
# shellcheck source=backend/scripts/smoke_common.sh
source "${SCRIPT_DIR}/smoke_common.sh"
wait_for_health
request "ping" "GET" "${API_URL}/api/v1/ping" "200"
request "db_ping" "GET" "${API_URL}/api/v1/db/ping" "200"
request "manifest" "GET" "${API_URL}/api/v1/manifest" "200"
create_fresh_scheme_from_upload "smoke-core"
request "scheme_detail" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}" "200"
assert_json_eq "${TMP_DIR}/scheme_detail.body" "scheme_id" "${SCHEME_ID}"
assert_json_eq "${TMP_DIR}/scheme_detail.body" "name" "${FRESH_SCHEME_NAME}"
assert_json_eq "${TMP_DIR}/scheme_detail.body" "status" "draft"
request "scheme_versions" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/versions?limit=20&offset=0" "200"
assert_json_len_eq "${TMP_DIR}/scheme_versions.body" "items" "1"
request "scheme_current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
CURRENT_VERSION_ID="$(json_get "${TMP_DIR}/scheme_current.body" "scheme_version_id")"
CURRENT_STATUS="$(json_get "${TMP_DIR}/scheme_current.body" "status")"
echo "CURRENT_VERSION_ID=${CURRENT_VERSION_ID}"
echo "CURRENT_STATUS=${CURRENT_STATUS}"
assert_json_eq "${TMP_DIR}/scheme_current.body" "status" "draft"
request "editor_context" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/editor/context" "200"
assert_json_eq "${TMP_DIR}/editor_context.body" "current_scheme_version_id" "${CURRENT_VERSION_ID}"
assert_json_eq "${TMP_DIR}/editor_context.body" "current_is_draft" "true"
request "ensure_draft" "POST" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/ensure" "200"
DRAFT_VERSION_ID="$(json_get "${TMP_DIR}/ensure_draft.body" "scheme_version_id")"
DRAFT_CREATED="$(json_get "${TMP_DIR}/ensure_draft.body" "created")"
echo "DRAFT_VERSION_ID=${DRAFT_VERSION_ID}"
echo "DRAFT_CREATED=${DRAFT_CREATED}"
assert_json_eq "${TMP_DIR}/ensure_draft.body" "scheme_version_id" "${CURRENT_VERSION_ID}"
assert_json_eq "${TMP_DIR}/ensure_draft.body" "created" "false"
request "draft_summary" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/summary?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200"
request "draft_structure" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/structure?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200"
request "draft_validation" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/validation?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200"
request "draft_compare_preview" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/compare-preview?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200"
assert_json_eq "${TMP_DIR}/draft_summary.body" "scheme_version_id" "${DRAFT_VERSION_ID}"
assert_json_eq "${TMP_DIR}/draft_structure.body" "scheme_version_id" "${DRAFT_VERSION_ID}"
assert_json_eq "${TMP_DIR}/draft_validation.body" "scheme_version_id" "${DRAFT_VERSION_ID}"
assert_json_eq "${TMP_DIR}/draft_compare_preview.body" "draft_scheme_version_id" "${DRAFT_VERSION_ID}"
TOTAL_SEATS="$(json_get "${TMP_DIR}/draft_summary.body" "total_seats")"
echo "TOTAL_SEATS=${TOTAL_SEATS}"
read -r SEAT_RECORD_ID SECTOR_RECORD_ID GROUP_RECORD_ID EXPLAIN_SEAT_ID <<EOF
$(python3 - "${TMP_DIR}/draft_structure.body" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
seats = payload.get("seats", [])
sectors = payload.get("sectors", [])
groups = payload.get("groups", [])
seat_with_id = next((seat for seat in seats if seat.get("seat_id")), None)
if seat_with_id is None:
    raise SystemExit("No seat with seat_id found in fresh draft structure")
print(
    seat_with_id["seat_record_id"],
    sectors[0]["sector_record_id"],
    groups[0]["group_record_id"],
    seat_with_id["seat_id"],
)
PY
)
EOF
echo "SEAT_RECORD_ID=${SEAT_RECORD_ID}"
echo "SECTOR_RECORD_ID=${SECTOR_RECORD_ID}"
echo "GROUP_RECORD_ID=${GROUP_RECORD_ID}"
echo "EXPLAIN_SEAT_ID=${EXPLAIN_SEAT_ID}"
request "stale_draft_conflict" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/summary?expected_scheme_version_id=deadbeefdeadbeefdeadbeefdeadbeef" "409"
assert_json_eq "${TMP_DIR}/stale_draft_conflict.body" "detail.code" "stale_draft_version"
request "draft_seat_record" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/seats/records/${SEAT_RECORD_ID}?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200"
request "draft_sector_record" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/sectors/records/${SECTOR_RECORD_ID}?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200"
request "draft_group_record" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/groups/records/${GROUP_RECORD_ID}?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200"
request "draft_unknown_record" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/seats/records/deadbeefdeadbeefdeadbeefdeadbeef" "404"
request "current_sectors" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current/sectors" "200"
request "current_groups" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current/groups" "200"
request "current_seats" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current/seats" "200"
request "display_meta" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current/svg/display/meta" "200"
request "pricing_bundle" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing" "200"
assert_json_len_eq "${TMP_DIR}/pricing_bundle.body" "categories" "0"
assert_json_len_eq "${TMP_DIR}/pricing_bundle.body" "rules" "0"
request "pricing_coverage" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/coverage" "200"
assert_json_int_eq "${TMP_DIR}/pricing_coverage.body" "priced_seats" "0"
assert_json_int_eq "${TMP_DIR}/pricing_coverage.body" "unpriced_seats" "${TOTAL_SEATS}"
request "pricing_unpriced" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/unpriced-seats" "200"
assert_json_int_eq "${TMP_DIR}/pricing_unpriced.body" "total" "${TOTAL_SEATS}"
request "pricing_explain_empty" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/explain/${EXPLAIN_SEAT_ID}" "200"
assert_json_eq "${TMP_DIR}/pricing_explain_empty.body" "has_price" "false"
assert_json_eq "${TMP_DIR}/pricing_explain_empty.body" "reason_code" "no_price_rule"
request "pricing_rule_diagnostics" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/rules/diagnostics" "200"
assert_json_int_eq "${TMP_DIR}/pricing_rule_diagnostics.body" "summary.total_rules" "0"
assert_json_int_eq "${TMP_DIR}/pricing_rule_diagnostics.body" "summary.active_rules_count" "0"
assert_json_int_eq "${TMP_DIR}/pricing_rule_diagnostics.body" "summary.matched_seats_total" "0"
assert_json_len_eq "${TMP_DIR}/pricing_rule_diagnostics.body" "items" "0"
request "audit_trail" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/audit" "200"
assert_json_int_ge "${TMP_DIR}/audit_trail.body" "total" "0"
log "editor mutation regression"
API_URL="${API_URL}" API_KEY="${API_KEY}" SCHEME_ID="${SCHEME_ID}" \
bash "${SCRIPT_DIR}/editor_mutation_regression.sh"
echo
echo "===== done ====="
echo "[OK] smoke core completed successfully"
echo "FRESH_SCHEME_ID=${SCHEME_ID}"
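The `read -r ... <<EOF` construction in smoke_core.sh captures several Python-printed fields into shell variables in a single pass. The pattern in isolation (variable names here are illustrative):

```shell
# Standalone sketch of the read-from-heredoc capture used by the smoke scripts:
# one python3 invocation prints space-separated fields, and read -r splits
# them into individual shell variables.
read -r FIRST_FIELD SECOND_FIELD THIRD_FIELD <<EOF
$(python3 -c 'print("alpha", "beta", "gamma")')
EOF
echo "FIRST_FIELD=${FIRST_FIELD}"   # alpha
echo "THIRD_FIELD=${THIRD_FIELD}"   # gamma
```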


@@ -0,0 +1,68 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT
# shellcheck source=backend/scripts/smoke_common.sh
source "${SCRIPT_DIR}/smoke_common.sh"
set -a
source "${REPO_ROOT}/.env"
set +a
wait_for_health
create_fresh_scheme_from_upload "smoke-lifecycle-negative"
request "scheme_current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
CURRENT_VERSION_ID="$(json_get "${TMP_DIR}/scheme_current.body" "scheme_version_id")"
echo "CURRENT_VERSION_ID=${CURRENT_VERSION_ID}"
request "rollback_nonexistent_version" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/rollback" \
"404" \
"{\"target_version_number\":999}"
assert_file_contains "${TMP_DIR}/rollback_nonexistent_version.body" "Target scheme version not found"
request "ensure_draft_stale_current_version" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/ensure?expected_current_scheme_version_id=deadbeefdeadbeefdeadbeefdeadbeef" \
"409"
assert_json_eq "${TMP_DIR}/ensure_draft_stale_current_version.body" "detail.code" "stale_current_version"
request "publish_stale_expected_version" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/publish?expected_scheme_version_id=deadbeefdeadbeefdeadbeefdeadbeef" \
"409"
assert_json_eq "${TMP_DIR}/publish_stale_expected_version.body" "detail.code" "publish_not_ready"
assert_file_contains "${TMP_DIR}/publish_stale_expected_version.body" "\"actual_scheme_version_id\":\"${CURRENT_VERSION_ID}\""
INCONSISTENT_VERSION_NUMBER="999"
UPDATED_VERSION_NUMBER="$(
  docker compose exec -T postgres psql -U "${POSTGRES_USER}" -d "${POSTGRES_DB}" -Atc \
    "update schemes set current_version_number=${INCONSISTENT_VERSION_NUMBER} where scheme_id='${SCHEME_ID}' and current_version_number=1 returning current_version_number;" \
    | python3 -c 'import sys; lines = [line.strip() for line in sys.stdin.read().splitlines() if line.strip()]; print(lines[0] if lines else "")'
)"
if [[ "${UPDATED_VERSION_NUMBER}" != "${INCONSISTENT_VERSION_NUMBER}" ]]; then
fail "Failed to introduce temporary current_version_inconsistent state for ${SCHEME_ID}"
fi
echo "[OK] introduced temporary current_version_inconsistent state for ${SCHEME_ID}"
restore_current_version_pointer() {
docker compose exec -T postgres psql -U "${POSTGRES_USER}" -d "${POSTGRES_DB}" -Atc "update schemes set current_version_number=1 where scheme_id='${SCHEME_ID}' and current_version_number=${INCONSISTENT_VERSION_NUMBER};" >/dev/null
}
trap 'restore_current_version_pointer; rm -rf "${TMP_DIR}"' EXIT
request "current_version_inconsistent" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/current" \
"409"
assert_json_eq "${TMP_DIR}/current_version_inconsistent.body" "detail.code" "current_version_inconsistent"
assert_file_contains "${TMP_DIR}/current_version_inconsistent.body" "\"current_version_number\":${INCONSISTENT_VERSION_NUMBER}"
restore_current_version_pointer
request "scheme_current_restored" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
assert_json_eq "${TMP_DIR}/scheme_current_restored.body" "scheme_version_id" "${CURRENT_VERSION_ID}"
assert_json_int_eq "${TMP_DIR}/scheme_current_restored.body" "version_number" "1"
echo
echo "===== done ====="
echo "[OK] smoke lifecycle negative completed successfully"
echo "FRESH_SCHEME_ID=${SCHEME_ID}"
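smoke_lifecycle_negative.sh restores the version pointer both inline and via a re-armed EXIT trap, so the database is repaired even if an assertion aborts the script mid-run. The guarantee the trap provides, sketched in isolation (the subshell and echoed strings are illustrative):

```shell
# Standalone sketch: an EXIT trap runs its cleanup even when the enclosing
# shell (here, a subshell) finishes after a failing step, mirroring the
# restore_current_version_pointer safety net above.
RESULT="$(
  (
    trap 'echo cleanup' EXIT
    echo work
    false || true   # a failing step; the trap below still fires on exit
  )
)"
echo "${RESULT}"
```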


@@ -0,0 +1,117 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT
# shellcheck source=backend/scripts/smoke_common.sh
source "${SCRIPT_DIR}/smoke_common.sh"
wait_for_health
create_fresh_scheme_from_upload "smoke-pricing-publish"
request "scheme_current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
request "ensure_draft" "POST" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/ensure" "200"
DRAFT_VERSION_ID="$(json_get "${TMP_DIR}/ensure_draft.body" "scheme_version_id")"
echo "DRAFT_VERSION_ID=${DRAFT_VERSION_ID}"
request "draft_structure" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/structure?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200"
read -r PRICED_SEAT_ID UNPRICED_SEAT_ID <<EOF
$(python3 - "${TMP_DIR}/draft_structure.body" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
seat_ids = [seat["seat_id"] for seat in payload.get("seats", []) if seat.get("seat_id")]
if len(seat_ids) < 2:
    raise SystemExit("Fixture must contain at least two seats with seat_id for pricing smoke")
print(seat_ids[0], seat_ids[1])
PY
)
EOF
echo "PRICED_SEAT_ID=${PRICED_SEAT_ID}"
echo "UNPRICED_SEAT_ID=${UNPRICED_SEAT_ID}"
STAMP="$(date +%s)-$$"
PRICING_CATEGORY_NAME="smoke-pricing-${STAMP}"
PRICING_CATEGORY_CODE="SMOKE_${STAMP}"
request "create_pricing_category" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/categories?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200" \
"{\"name\":\"${PRICING_CATEGORY_NAME}\",\"code\":\"${PRICING_CATEGORY_CODE}\"}"
PRICING_CATEGORY_ID="$(json_get "${TMP_DIR}/create_pricing_category.body" "pricing_category_id")"
echo "PRICING_CATEGORY_ID=${PRICING_CATEGORY_ID}"
request "create_price_rule" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/rules?expected_scheme_version_id=${DRAFT_VERSION_ID}" \
"200" \
"{\"pricing_category_id\":\"${PRICING_CATEGORY_ID}\",\"target_type\":\"seat\",\"target_ref\":\"${PRICED_SEAT_ID}\",\"amount\":\"1234.56\",\"currency\":\"RUB\"}"
PRICE_RULE_ID="$(json_get "${TMP_DIR}/create_price_rule.body" "price_rule_id")"
echo "PRICE_RULE_ID=${PRICE_RULE_ID}"
request "pricing_bundle" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing" "200"
assert_json_len_eq "${TMP_DIR}/pricing_bundle.body" "categories" "1"
assert_json_len_eq "${TMP_DIR}/pricing_bundle.body" "rules" "1"
request "pricing_coverage" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/coverage" "200"
assert_json_int_gt "${TMP_DIR}/pricing_coverage.body" "priced_seats" "0"
assert_json_int_gt "${TMP_DIR}/pricing_coverage.body" "unpriced_seats" "0"
request "pricing_explain_priced" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/explain/${PRICED_SEAT_ID}" "200"
assert_json_eq "${TMP_DIR}/pricing_explain_priced.body" "has_price" "true"
assert_json_eq "${TMP_DIR}/pricing_explain_priced.body" "reason_code" "ok"
assert_json_eq "${TMP_DIR}/pricing_explain_priced.body" "matched_rule.matched_rule_level" "seat"
assert_json_eq "${TMP_DIR}/pricing_explain_priced.body" "matched_rule.matched_target_ref" "${PRICED_SEAT_ID}"
request "pricing_explain_unpriced" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/explain/${UNPRICED_SEAT_ID}" "200"
assert_json_eq "${TMP_DIR}/pricing_explain_unpriced.body" "has_price" "false"
assert_json_eq "${TMP_DIR}/pricing_explain_unpriced.body" "reason_code" "no_price_rule"
request "pricing_rule_diagnostics" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/rules/diagnostics" "200"
assert_json_int_eq "${TMP_DIR}/pricing_rule_diagnostics.body" "summary.total_rules" "1"
assert_json_int_gt "${TMP_DIR}/pricing_rule_diagnostics.body" "summary.matched_seats_total" "0"
request "seat_price" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current/seats/${PRICED_SEAT_ID}/price" "200"
assert_json_eq "${TMP_DIR}/seat_price.body" "matched_rule_level" "seat"
assert_json_eq "${TMP_DIR}/seat_price.body" "matched_target_ref" "${PRICED_SEAT_ID}"
assert_json_eq "${TMP_DIR}/seat_price.body" "amount" "1234.56"
request "test_mode_priced" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/test/seats/${PRICED_SEAT_ID}" "200"
assert_json_eq "${TMP_DIR}/test_mode_priced.body" "has_price" "true"
assert_json_eq "${TMP_DIR}/test_mode_priced.body" "selectable" "true"
request "test_mode_unpriced" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/test/seats/${UNPRICED_SEAT_ID}" "200"
assert_json_eq "${TMP_DIR}/test_mode_unpriced.body" "has_price" "false"
assert_json_eq "${TMP_DIR}/test_mode_unpriced.body" "reason_code" "no_price_rule"
request "draft_pricing_snapshot" "POST" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/pricing/snapshot?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200"
request "publish_readiness" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/publish-readiness?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200"
assert_json_eq "${TMP_DIR}/publish_readiness.body" "snapshot.available" "true"
assert_json_eq "${TMP_DIR}/publish_readiness.body" "readiness.is_ready_to_publish" "true"
request "publish_preview_refresh" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/publish-preview?refresh=true&expected_scheme_version_id=${DRAFT_VERSION_ID}" "200"
request "publish_preview_cached" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/publish-preview?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200"
request "publish_scheme" "POST" "${API_URL}/api/v1/schemes/${SCHEME_ID}/publish?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200"
assert_json_eq "${TMP_DIR}/publish_scheme.body" "scheme_version_id" "${DRAFT_VERSION_ID}"
assert_json_eq "${TMP_DIR}/publish_scheme.body" "status" "published"
request "scheme_detail" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}" "200"
assert_json_eq "${TMP_DIR}/scheme_detail.body" "status" "published"
request "scheme_current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
assert_json_eq "${TMP_DIR}/scheme_current.body" "status" "published"
request "audit_trail" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/audit" "200"
assert_file_contains "${TMP_DIR}/audit_trail.body" "\"event_type\":\"scheme.published\""
echo
echo "===== done ====="
echo "[OK] smoke pricing/publish completed successfully"
echo "FRESH_SCHEME_ID=${SCHEME_ID}"


@@ -0,0 +1,43 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo "===== smoke core ====="
bash "${SCRIPT_DIR}/smoke_core.sh"
echo
echo "===== smoke pricing/publish ====="
bash "${SCRIPT_DIR}/smoke_pricing_publish.sh"
echo
echo "===== smoke version lifecycle ====="
bash "${SCRIPT_DIR}/smoke_version_lifecycle.sh"
echo
echo "===== smoke lifecycle negative ====="
bash "${SCRIPT_DIR}/smoke_lifecycle_negative.sh"
echo
echo "===== smoke admin ops ====="
bash "${SCRIPT_DIR}/smoke_admin_ops.sh"
echo
echo "===== smoke authz admin all ====="
bash "${SCRIPT_DIR}/smoke_authz_admin_all.sh"
echo
echo "===== smoke auth negative ====="
bash "${SCRIPT_DIR}/smoke_auth_negative.sh"
echo
echo "===== smoke artifact corruption ====="
bash "${SCRIPT_DIR}/smoke_artifact_corruption.sh"
echo
echo "===== smoke upload negative ====="
bash "${SCRIPT_DIR}/smoke_upload_negative.sh"
echo
echo "===== done ====="
echo "[OK] smoke regression orchestration completed successfully"


@@ -0,0 +1,53 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT
# shellcheck source=backend/scripts/smoke_common.sh
source "${SCRIPT_DIR}/smoke_common.sh"
wait_for_health
require_fixture_svg
request "manifest" "GET" "${API_URL}/api/v1/manifest" "200"
MAX_FILE_SIZE_BYTES="$(json_get "${TMP_DIR}/manifest.body" "svg_limits.max_file_size_bytes")"
echo "MAX_FILE_SIZE_BYTES=${MAX_FILE_SIZE_BYTES}"
EMPTY_SVG_PATH="${TMP_DIR}/empty.svg"
NON_SVG_PATH="${TMP_DIR}/not-svg.txt"
SVG_BODY_WRONG_EXTENSION_PATH="${TMP_DIR}/svg-body.txt"
OVERSIZE_SVG_PATH="${TMP_DIR}/oversize.svg"
: > "${EMPTY_SVG_PATH}"
printf 'plain text payload\n' > "${NON_SVG_PATH}"
cp "${FIXTURE_SVG_PATH}" "${SVG_BODY_WRONG_EXTENSION_PATH}"
python3 - "${OVERSIZE_SVG_PATH}" "${MAX_FILE_SIZE_BYTES}" <<'PY'
import sys
from pathlib import Path
output_path = Path(sys.argv[1])
max_file_size_bytes = int(sys.argv[2])
payload = "<svg xmlns='http://www.w3.org/2000/svg'>" + (" " * max_file_size_bytes) + "</svg>"
output_path.write_text(payload, encoding="utf-8")
if output_path.stat().st_size <= max_file_size_bytes:
    raise SystemExit("Generated oversize SVG is not larger than configured limit")
PY
upload_file_expect_status "upload_empty_file" "${EMPTY_SVG_PATH}" "empty.svg" "image/svg+xml" "400"
assert_file_contains "${TMP_DIR}/upload_empty_file.body" "Uploaded file is empty"
upload_file_expect_status "upload_non_svg_text_plain" "${NON_SVG_PATH}" "not-svg.txt" "text/plain" "400"
assert_file_contains "${TMP_DIR}/upload_non_svg_text_plain.body" "Only SVG files are allowed"
upload_file_expect_status "upload_svg_body_wrong_extension" "${SVG_BODY_WRONG_EXTENSION_PATH}" "valid-svg-body.txt" "text/plain" "400"
assert_file_contains "${TMP_DIR}/upload_svg_body_wrong_extension.body" "Only SVG files are allowed"
upload_file_expect_status "upload_oversize_svg" "${OVERSIZE_SVG_PATH}" "oversize.svg" "image/svg+xml" "413"
assert_file_contains "${TMP_DIR}/upload_oversize_svg.body" "SVG file exceeds configured size limit"
echo
echo "===== done ====="
echo "[OK] smoke upload negative completed successfully"


@@ -0,0 +1,236 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT
# shellcheck source=backend/scripts/smoke_common.sh
source "${SCRIPT_DIR}/smoke_common.sh"
wait_for_health
create_fresh_scheme_from_upload "smoke-version-lifecycle"
request "scheme_detail_initial" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}" "200"
assert_json_eq "${TMP_DIR}/scheme_detail_initial.body" "status" "draft"
assert_json_int_eq "${TMP_DIR}/scheme_detail_initial.body" "current_version_number" "1"
request "scheme_current_initial" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
VERSION1_ID="$(json_get "${TMP_DIR}/scheme_current_initial.body" "scheme_version_id")"
assert_json_int_eq "${TMP_DIR}/scheme_current_initial.body" "version_number" "1"
assert_json_eq "${TMP_DIR}/scheme_current_initial.body" "status" "draft"
echo "VERSION1_ID=${VERSION1_ID}"
request "ensure_draft_v1" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/ensure?expected_current_scheme_version_id=${VERSION1_ID}" \
"200"
assert_json_eq "${TMP_DIR}/ensure_draft_v1.body" "scheme_version_id" "${VERSION1_ID}"
assert_json_eq "${TMP_DIR}/ensure_draft_v1.body" "created" "false"
request "draft_structure_v1" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/structure?expected_scheme_version_id=${VERSION1_ID}" \
"200"
read -r VERSION1_SEAT_RECORD_ID VERSION1_SEAT_ID ORIGINAL_ROW_LABEL ORIGINAL_SEAT_NUMBER <<EOF
$(python3 - "${TMP_DIR}/draft_structure_v1.body" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
seat = next((item for item in payload.get("seats", []) if item.get("seat_id")), None)
if seat is None:
    raise SystemExit("No seat with seat_id found in version 1 draft structure")
print(
    seat["seat_record_id"],
    seat["seat_id"],
    seat.get("row_label") or "__EMPTY__",
    seat.get("seat_number") or "__EMPTY__",
)
PY
)
EOF
echo "VERSION1_SEAT_RECORD_ID=${VERSION1_SEAT_RECORD_ID}"
echo "VERSION1_SEAT_ID=${VERSION1_SEAT_ID}"
echo "ORIGINAL_ROW_LABEL=${ORIGINAL_ROW_LABEL}"
echo "ORIGINAL_SEAT_NUMBER=${ORIGINAL_SEAT_NUMBER}"
STAMP="$(date +%s)-$$"
PRICING_CATEGORY_NAME="lifecycle-publish-${STAMP}"
PRICING_CATEGORY_CODE="LIFECYCLE_${STAMP}"
UPDATED_ROW_LABEL="LC-${STAMP}"
request "create_pricing_category_v1" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/categories?expected_scheme_version_id=${VERSION1_ID}" \
"200" \
"{\"name\":\"${PRICING_CATEGORY_NAME}\",\"code\":\"${PRICING_CATEGORY_CODE}\"}"
PRICING_CATEGORY_ID="$(json_get "${TMP_DIR}/create_pricing_category_v1.body" "pricing_category_id")"
echo "PRICING_CATEGORY_ID=${PRICING_CATEGORY_ID}"
request "create_price_rule_v1" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/rules?expected_scheme_version_id=${VERSION1_ID}" \
"200" \
"{\"pricing_category_id\":\"${PRICING_CATEGORY_ID}\",\"target_type\":\"seat\",\"target_ref\":\"${VERSION1_SEAT_ID}\",\"amount\":\"777.00\",\"currency\":\"RUB\"}"
PRICE_RULE_ID="$(json_get "${TMP_DIR}/create_price_rule_v1.body" "price_rule_id")"
echo "PRICE_RULE_ID=${PRICE_RULE_ID}"
request "draft_pricing_snapshot_v1" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/pricing/snapshot?expected_scheme_version_id=${VERSION1_ID}" \
"200"
request "publish_v1" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/publish?expected_scheme_version_id=${VERSION1_ID}" \
"200"
assert_json_eq "${TMP_DIR}/publish_v1.body" "status" "published"
assert_json_int_eq "${TMP_DIR}/publish_v1.body" "current_version_number" "1"
request "scheme_detail_after_publish_v1" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}" "200"
assert_json_eq "${TMP_DIR}/scheme_detail_after_publish_v1.body" "status" "published"
assert_json_int_eq "${TMP_DIR}/scheme_detail_after_publish_v1.body" "current_version_number" "1"
request "scheme_current_after_publish_v1" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
assert_json_eq "${TMP_DIR}/scheme_current_after_publish_v1.body" "scheme_version_id" "${VERSION1_ID}"
assert_json_int_eq "${TMP_DIR}/scheme_current_after_publish_v1.body" "version_number" "1"
assert_json_eq "${TMP_DIR}/scheme_current_after_publish_v1.body" "status" "published"
request "ensure_draft_v2" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/ensure?expected_current_scheme_version_id=${VERSION1_ID}" \
"200"
VERSION2_ID="$(json_get "${TMP_DIR}/ensure_draft_v2.body" "scheme_version_id")"
echo "VERSION2_ID=${VERSION2_ID}"
assert_json_eq "${TMP_DIR}/ensure_draft_v2.body" "created" "true"
assert_json_eq "${TMP_DIR}/ensure_draft_v2.body" "source_scheme_version_id" "${VERSION1_ID}"
assert_json_int_eq "${TMP_DIR}/ensure_draft_v2.body" "version_number" "2"
request "draft_structure_v2_before_mutation" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/structure?expected_scheme_version_id=${VERSION2_ID}" \
"200"
VERSION2_SEAT_RECORD_ID="$(python3 - "${TMP_DIR}/draft_structure_v2_before_mutation.body" "${VERSION1_SEAT_ID}" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
seat_id = sys.argv[2]
seat = next((item for item in payload.get("seats", []) if item.get("seat_id") == seat_id), None)
if seat is None:
    raise SystemExit("Target seat_id not found in version 2 draft structure")
print(seat["seat_record_id"])
PY
)"
echo "VERSION2_SEAT_RECORD_ID=${VERSION2_SEAT_RECORD_ID}"
request "patch_seat_v2" "PATCH" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/seats/records/${VERSION2_SEAT_RECORD_ID}?expected_scheme_version_id=${VERSION2_ID}" \
"200" \
"{\"row_label\":\"${UPDATED_ROW_LABEL}\"}"
assert_json_eq "${TMP_DIR}/patch_seat_v2.body" "row_label" "${UPDATED_ROW_LABEL}"
request "draft_structure_v2_after_mutation" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/structure?expected_scheme_version_id=${VERSION2_ID}" \
"200"
assert_file_contains "${TMP_DIR}/draft_structure_v2_after_mutation.body" "\"row_label\":\"${UPDATED_ROW_LABEL}\""
request "draft_compare_preview_v2" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/compare-preview?expected_scheme_version_id=${VERSION2_ID}" \
"200"
assert_file_contains "${TMP_DIR}/draft_compare_preview_v2.body" "\"status\":\"changed\""
request "draft_pricing_snapshot_v2" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/pricing/snapshot?expected_scheme_version_id=${VERSION2_ID}" \
"200"
request "publish_v2" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/publish?expected_scheme_version_id=${VERSION2_ID}" \
"200"
assert_json_eq "${TMP_DIR}/publish_v2.body" "status" "published"
assert_json_int_eq "${TMP_DIR}/publish_v2.body" "current_version_number" "2"
request "scheme_detail_after_publish_v2" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}" "200"
assert_json_eq "${TMP_DIR}/scheme_detail_after_publish_v2.body" "status" "published"
assert_json_int_eq "${TMP_DIR}/scheme_detail_after_publish_v2.body" "current_version_number" "2"
request "scheme_current_after_publish_v2" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
assert_json_eq "${TMP_DIR}/scheme_current_after_publish_v2.body" "scheme_version_id" "${VERSION2_ID}"
assert_json_int_eq "${TMP_DIR}/scheme_current_after_publish_v2.body" "version_number" "2"
assert_json_eq "${TMP_DIR}/scheme_current_after_publish_v2.body" "status" "published"
request "scheme_versions_after_publish_v2" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/versions?limit=20&offset=0" "200"
assert_json_len_eq "${TMP_DIR}/scheme_versions_after_publish_v2.body" "items" "2"
request "rollback_to_v1" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/rollback" \
"200" \
"{\"target_version_number\":1}"
assert_json_eq "${TMP_DIR}/rollback_to_v1.body" "status" "draft"
assert_json_int_eq "${TMP_DIR}/rollback_to_v1.body" "current_version_number" "1"
request "scheme_detail_after_rollback" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}" "200"
assert_json_eq "${TMP_DIR}/scheme_detail_after_rollback.body" "status" "draft"
assert_json_int_eq "${TMP_DIR}/scheme_detail_after_rollback.body" "current_version_number" "1"
request "scheme_current_after_rollback" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200"
assert_json_eq "${TMP_DIR}/scheme_current_after_rollback.body" "scheme_version_id" "${VERSION1_ID}"
assert_json_int_eq "${TMP_DIR}/scheme_current_after_rollback.body" "version_number" "1"
assert_json_eq "${TMP_DIR}/scheme_current_after_rollback.body" "status" "draft"
request "editor_context_after_rollback" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/editor/context" "200"
assert_json_eq "${TMP_DIR}/editor_context_after_rollback.body" "current_scheme_version_id" "${VERSION1_ID}"
assert_json_eq "${TMP_DIR}/editor_context_after_rollback.body" "current_is_draft" "true"
request "draft_structure_after_rollback" "GET" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/structure?expected_scheme_version_id=${VERSION1_ID}" \
"200"
python3 - "${TMP_DIR}/draft_structure_after_rollback.body" "${VERSION1_SEAT_ID}" "${ORIGINAL_ROW_LABEL}" "${ORIGINAL_SEAT_NUMBER}" <<'PY'
import json
import sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
seat_id = sys.argv[2]
expected_row_label = sys.argv[3]
expected_seat_number = sys.argv[4]
seat = next((item for item in payload.get("seats", []) if item.get("seat_id") == seat_id), None)
if seat is None:
    raise SystemExit(f"Seat {seat_id} not found after rollback")
actual_row_label = seat.get("row_label") or "__EMPTY__"
actual_seat_number = seat.get("seat_number") or "__EMPTY__"
if actual_row_label != expected_row_label:
    raise SystemExit(
        f"Rollback row_label mismatch: expected {expected_row_label}, got {actual_row_label}"
    )
if actual_seat_number != expected_seat_number:
    raise SystemExit(
        f"Rollback seat_number mismatch: expected {expected_seat_number}, got {actual_seat_number}"
    )
PY
echo "[OK] rollback restored version 1 seat semantics"
request "unpublish_after_rollback" "POST" \
"${API_URL}/api/v1/schemes/${SCHEME_ID}/unpublish" \
"200"
assert_json_eq "${TMP_DIR}/unpublish_after_rollback.body" "status" "draft"
assert_json_int_eq "${TMP_DIR}/unpublish_after_rollback.body" "current_version_number" "1"
request "scheme_detail_after_unpublish" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}" "200"
assert_json_eq "${TMP_DIR}/scheme_detail_after_unpublish.body" "status" "draft"
assert_json_int_eq "${TMP_DIR}/scheme_detail_after_unpublish.body" "current_version_number" "1"
request "audit_trail" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/audit" "200"
assert_file_contains "${TMP_DIR}/audit_trail.body" "\"event_type\":\"scheme.published\""
assert_file_contains "${TMP_DIR}/audit_trail.body" "\"event_type\":\"scheme.version.created\""
assert_file_contains "${TMP_DIR}/audit_trail.body" "\"event_type\":\"scheme.rolled_back\""
assert_file_contains "${TMP_DIR}/audit_trail.body" "\"event_type\":\"scheme.unpublished\""
echo
echo "===== done ====="
echo "[OK] smoke version lifecycle completed successfully"
echo "FRESH_SCHEME_ID=${SCHEME_ID}"
echo "VERSION1_ID=${VERSION1_ID}"
echo "VERSION2_ID=${VERSION2_ID}"


@@ -25,10 +25,11 @@ services:
     container_name: svg-service
     env_file:
       - ./.env
+    command: ["sh", "-c", "uvicorn app.main:app --host 0.0.0.0 --port ${BACKEND_PORT}"]
     ports:
-      - "9020:9020"
+      - "${BACKEND_PORT}:${BACKEND_PORT}"
     volumes:
-      - ./storage:/data
+      - ./storage:${STORAGE_ROOT}
     depends_on:
       postgres:
         condition: service_healthy
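The compose change above derives the uvicorn port, the published port, and the storage mount point from environment variables loaded via `env_file`. A sketch of the matching `.env` entries (variable names come from the diff and the smoke scripts; values are placeholders, with `9020` and `/data` taken from the previously hardcoded lines):

```shell
# Placeholder .env values; only the variable names are from the diff above.
BACKEND_PORT=9020
STORAGE_ROOT=/data
POSTGRES_USER=app
POSTGRES_DB=app
```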