diff --git a/backend/docs/smoke-regression.md b/backend/docs/smoke-regression.md index 27da2f1..1d83f51 100644 --- a/backend/docs/smoke-regression.md +++ b/backend/docs/smoke-regression.md @@ -7,7 +7,7 @@ This file is the backend manual regression baseline for svg-service. - docker compose stack is up - backend responds on port 9020 - valid admin API key is available -- test scheme exists +- stable SVG fixture exists in repository, e.g. `sample-contract.svg` ## Environment @@ -15,17 +15,59 @@ Use these variables in shell: export API_URL="http://127.0.0.1:9020" export API_KEY="admin-local-dev-key" -export SCHEME_ID="82086336d385427f9d56244f9e1dd772" +export FIXTURE_SVG_PATH="/home/adminko/svg-service/sample-contract.svg" ## Main scripts Primary operator regressions: +- `backend/scripts/smoke_core.sh` +- `backend/scripts/smoke_pricing_publish.sh` - `backend/scripts/smoke_regression.sh` - `backend/scripts/editor_mutation_regression.sh` The scripts are expected to fail fast on any contract break or unexpected 5xx. +`smoke_regression.sh` is now an orchestration wrapper: + +- first runs `smoke_core.sh` +- then runs `smoke_pricing_publish.sh` +- returns non-zero if either scenario fails + +## Scenario split + +### Core smoke on clean DB + +Use: + +- `backend/scripts/smoke_core.sh` + +This scenario is designed for a fully clean database. + +It uploads a fresh SVG fixture, resolves the created `scheme_id`, validates current/draft read models, validates empty pricing state, and then runs `editor_mutation_regression.sh` on the same fresh scheme. 
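The upload-then-resolve step above can be sketched in isolation. The listing payload below is a hand-written stand-in for the `GET /api/v1/schemes` response shape this doc implies (`items[].scheme_id`, `items[].name`); only the name-matching logic mirrors what the core smoke does after upload.

```shell
# Resolve a freshly uploaded scheme_id by its unique generated name.
# The listing JSON is a fabricated stand-in, not real API output.
listing='{"items":[{"scheme_id":"aaa111","name":"other"},{"scheme_id":"bbb222","name":"smoke-core-1700000000-123"}]}'
target="smoke-core-1700000000-123"

scheme_id="$(python3 -c '
import json, sys
payload = json.loads(sys.argv[1])
for item in payload.get("items", []):
    if item.get("name") == sys.argv[2]:
        print(item["scheme_id"])
        break
' "$listing" "$target")"

echo "resolved scheme_id=${scheme_id}"   # resolved scheme_id=bbb222
```

Matching on a timestamped unique name (rather than trusting list order) is what makes repeated runs safe against leftover schemes from earlier uploads.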
+ +Important: + +- it does not require pre-existing `scheme_id` +- it does not require pricing categories or price rules +- it does not require publish snapshot or published baseline +- empty pricing on a fresh upload is a valid state, not a failure + +### Pricing/publish smoke with fixture setup + +Use: + +- `backend/scripts/smoke_pricing_publish.sh` + +This scenario also uploads a fresh SVG fixture, then prepares its own pricing fixture before validating pricing and publish flow. + +Important: + +- it creates its own pricing category +- it creates its own pricing rule +- it intentionally checks both a priced seat and an unpriced seat on the same fresh scheme +- it does not rely on historical pricing IDs, rules, or old schemes + ## 1. Health / system - GET /healthz -> 200 (smoke uses a bounded retry/wait loop and fails explicitly if the API never becomes ready) @@ -33,7 +75,76 @@ The scripts are expected to fail fast on any contract break or unexpected 5xx. - GET /api/v1/db/ping -> 200 - GET /api/v1/manifest -> 200 -## 2. Scheme registry +## 2. 
Core smoke coverage + +`smoke_core.sh` checks: + +- GET /healthz -> 200 +- GET /api/v1/ping -> 200 +- GET /api/v1/db/ping -> 200 +- GET /api/v1/manifest -> 200 +- POST /api/v1/schemes/upload -> 200 +- GET /api/v1/schemes -> 200 and resolves the fresh `scheme_id` +- GET /api/v1/schemes/{scheme_id} -> 200 +- GET /api/v1/schemes/{scheme_id}/versions -> 200 +- GET /api/v1/schemes/{scheme_id}/current -> 200 +- GET /api/v1/schemes/{scheme_id}/editor/context -> 200 +- POST /api/v1/schemes/{scheme_id}/draft/ensure -> 200 +- GET /api/v1/schemes/{scheme_id}/draft/summary -> 200 +- GET /api/v1/schemes/{scheme_id}/draft/structure -> 200 +- GET /api/v1/schemes/{scheme_id}/draft/validation -> 200 +- GET /api/v1/schemes/{scheme_id}/draft/compare-preview -> 200 +- GET draft entities by record id -> 200 +- stale `expected_scheme_version_id` conflict -> 409 with typed `stale_draft_version` +- GET current sectors/groups/seats -> 200 +- GET current SVG display meta -> 200 +- GET pricing bundle -> 200 with empty categories/rules +- GET pricing coverage -> 200 with zero priced seats +- GET pricing explain/{seat_id} -> 200 with `no_price_rule` +- GET pricing rules diagnostics -> 200 with empty state +- GET audit -> 200 +- `backend/scripts/editor_mutation_regression.sh` on the same fresh scheme + +Validate: + +- fresh upload is readable immediately through current/draft/editor endpoints +- empty pricing is accepted as normal state for a newly uploaded scheme +- no endpoint in core smoke returns 500 + +## 3. 
Pricing/publish smoke coverage + +`smoke_pricing_publish.sh` checks: + +- POST /api/v1/schemes/upload -> 200 +- GET current / POST draft ensure on the fresh scheme -> 200 +- POST pricing category -> 200 +- POST price rule -> 200 +- GET pricing bundle -> 200 with created fixture data +- GET pricing coverage -> 200 with both priced and unpriced seats present +- GET pricing explain/{priced_seat_id} -> 200 with matched rule +- GET pricing explain/{unpriced_seat_id} -> 200 with `no_price_rule` +- GET current/seats/{priced_seat_id}/price -> 200 +- GET test/seats/{priced_seat_id} -> 200 +- GET test/seats/{unpriced_seat_id} -> 200 +- POST draft/pricing/snapshot -> 200 +- GET draft/publish-readiness -> 200 +- GET draft/publish-preview?refresh=true -> 200 +- GET draft/publish-preview -> 200 +- POST publish -> 200 +- GET scheme detail/current after publish -> 200 and published state +- GET audit -> 200 and contains `scheme.published` + +Validate: + +- fixture setup is fully self-contained +- priced-seat checks happen only after explicit pricing fixture creation +- publish flow is validated on a fresh scheme, not on historical DB data + +## 4. Legacy endpoint families + +The sections below remain the API baseline by area, but regression execution is now split between clean-DB core smoke and pricing/publish smoke. + +## 5. Scheme registry - GET /api/v1/schemes -> 200 - GET /api/v1/schemes/{scheme_id} -> 200 @@ -46,7 +157,7 @@ Validate: - version list contains current version - status and counts are consistent -## 3. Editor entry flow +## 6. Editor entry flow - GET /api/v1/schemes/{scheme_id}/editor/context -> 200 - POST /api/v1/schemes/{scheme_id}/draft/ensure -> 200 @@ -58,7 +169,7 @@ Validate: - ensure endpoint creates a new draft from published current when needed - returned scheme_version_id is reusable as expected_scheme_version_id -## 4. Draft read model +## 7. 
Draft read model Using current draft version id from draft/ensure: @@ -75,7 +186,7 @@ Validate: - compare preview returns stable diff structure - stale expected_scheme_version_id returns typed 409 conflict -## 5. Draft entity reads +## 8. Draft entity reads - GET /api/v1/schemes/{scheme_id}/draft/seats/records/{seat_record_id} -> 200 - GET /api/v1/schemes/{scheme_id}/draft/sectors/records/{sector_record_id} -> 200 @@ -86,7 +197,7 @@ Validate: - unknown record id returns 404 - stale expected_scheme_version_id returns typed 409 conflict -## 6. Structure read model +## 9. Structure read model - GET /api/v1/schemes/{scheme_id}/current/sectors -> 200 - GET /api/v1/schemes/{scheme_id}/current/groups -> 200 @@ -97,7 +208,7 @@ Validate: - known sample scheme returns expected object lists - seats contain seat_id / sector_id / group_id contract where applicable -## 7. SVG / display pipeline +## 10. SVG / display pipeline - GET /api/v1/schemes/{scheme_id}/current/svg -> 200 - GET /api/v1/schemes/{scheme_id}/current/svg/display -> 200 @@ -111,26 +222,25 @@ Validate: - no 500 on passthrough mode - unsupported mode returns 422 -## 8. Pricing read model +## 11. 
Pricing read model - GET /api/v1/schemes/{scheme_id}/pricing -> 200 - GET /api/v1/schemes/{scheme_id}/pricing/coverage -> 200 - GET /api/v1/schemes/{scheme_id}/pricing/unpriced-seats -> 200 - GET /api/v1/schemes/{scheme_id}/pricing/explain/{seat_id} -> 200 - GET /api/v1/schemes/{scheme_id}/pricing/rules/diagnostics -> 200 -- GET /api/v1/schemes/{scheme_id}/current/seats/{seat_id}/price -> 200 for priced seat +- GET /api/v1/schemes/{scheme_id}/current/seats/{seat_id}/price -> 200 only after pricing fixture exists - GET /api/v1/schemes/{scheme_id}/test/seats/{seat_id} -> 200 for known seat Validate: -- pricing bundle contains categories and rules arrays -- coverage values are internally consistent -- unpriced seats list explains reason_code / reason_message -- explain endpoint shows matched rule for priced seat and null for unpriced seat -- diagnostics returns orphan/active rule visibility -- test seat preview explains selectable / has_price state -- priced test seat amount is serialized as string +- fresh clean upload is allowed to have `categories=[]` and `rules=[]` +- fresh clean upload is allowed to have zero priced seats and `no_price_rule` explanations +- priced seat checks belong to pricing/publish smoke after fixture setup +- diagnostics returns stable empty state with zero rules on clean upload +- diagnostics returns matched seat visibility after fixture setup +- priced test seat amount is serialized as string when pricing exists -## 9. Draft mutation regression +## 12. Draft mutation regression Use: - `backend/scripts/editor_mutation_regression.sh` @@ -158,7 +268,7 @@ Validate: - remap preview without filters returns typed 422 - post-mutation summary / validation / compare-preview remain readable and deterministic -## 10. Draft publish preview +## 13. 
Draft publish preview - POST /api/v1/schemes/{scheme_id}/draft/pricing/snapshot -> 200 when scheme is in draft - GET /api/v1/schemes/{scheme_id}/draft/publish-preview?refresh=true -> 200 @@ -172,7 +282,7 @@ Validate: - baseline override returns override strategy when explicit baseline is provided - preview retention does not grow unbounded for same version+variant -## 11. Publish readiness and publish flow +## 14. Publish readiness and publish flow For current draft version: @@ -186,7 +296,7 @@ Validate: - publish success updates current status to published - audit trail contains scheme.published event -## 12. Admin / ops +## 15. Admin / ops - GET /api/v1/admin/schemes/{scheme_id}/current/artifacts -> 200 - GET /api/v1/admin/schemes/{scheme_id}/current/validation -> 200 @@ -204,7 +314,7 @@ Validate: - smoke does not require cleanup dry-run to always find something to delete - admin routes do not produce 500 for healthy scheme state -## 13. Audit trail +## 16. Audit trail - GET /api/v1/schemes/{scheme_id}/audit -> 200 @@ -213,7 +323,7 @@ Validate: - audit total is non-negative - event payloads stay JSON-serializable -## 14. Fail criteria +## 17. Fail criteria Regression is considered failed if any of the following happen: @@ -225,12 +335,13 @@ Regression is considered failed if any of the following happen: - editor context or draft ensure returns 500 - draft summary / structure / validation / compare-preview returns 500 - editor mutation regression returns non-zero exit code +- clean upload empty pricing state is treated as a failure - pricing bundle or diagnostics contract changes unexpectedly - admin audit/cleanup endpoints fail on healthy environment - pricing cleanup dry-run mutates data - artifact retention grows without bound for repeated preview refresh on same variant -## 15. Operator note +## 18. 
Operator note Run this checklist after: - schema changes diff --git a/backend/scripts/smoke_common.sh b/backend/scripts/smoke_common.sh new file mode 100644 index 0000000..aaebbab --- /dev/null +++ b/backend/scripts/smoke_common.sh @@ -0,0 +1,354 @@ +#!/usr/bin/env bash + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)" + +API_URL="${API_URL:-http://127.0.0.1:9020}" +API_KEY="${API_KEY:-admin-local-dev-key}" +FIXTURE_SVG_PATH="${FIXTURE_SVG_PATH:-${REPO_ROOT}/sample-contract.svg}" +HEALTH_MAX_ATTEMPTS="${HEALTH_MAX_ATTEMPTS:-20}" +HEALTH_RETRY_DELAY_SECONDS="${HEALTH_RETRY_DELAY_SECONDS:-1}" + +log() { + echo + echo "===== $* =====" +} + +fail() { + echo + echo "[FAIL] $*" >&2 + exit 1 +} + +require_fixture_svg() { + if [[ ! -f "${FIXTURE_SVG_PATH}" ]]; then + fail "Fixture SVG not found: ${FIXTURE_SVG_PATH}" + fi +} + +wait_for_health() { + log "health" + echo "waiting for API to be ready..." + local health_ready="false" + local health_status="" + + for ((i = 1; i <= HEALTH_MAX_ATTEMPTS; i++)); do + health_status="$(curl -sS -o /dev/null -w "%{http_code}" "${API_URL}/healthz" || true)" + if [[ "${health_status}" == "200" ]]; then + health_ready="true" + echo "API is ready" + break + fi + echo "waiting... 
(${i}/${HEALTH_MAX_ATTEMPTS}) healthz=${health_status}" + sleep "${HEALTH_RETRY_DELAY_SECONDS}" + done + + if [[ "${health_ready}" != "true" ]]; then + fail "API did not become ready on ${API_URL}/healthz after ${HEALTH_MAX_ATTEMPTS} attempts" + fi + + curl -sS -i "${API_URL}/healthz" +} + +request() { + local name="$1" + local method="$2" + local url="$3" + local expected_status="$4" + local body="${5:-}" + local out_file="${TMP_DIR}/${name}.body" + local status_file="${TMP_DIR}/${name}.status" + + echo + echo "===== ${name} =====" + + if [[ -n "${body}" ]]; then + curl -sS \ + -X "${method}" \ + -H "X-API-Key: ${API_KEY}" \ + -H "Content-Type: application/json" \ + -o "${out_file}" \ + -w "%{http_code}" \ + "${url}" \ + --data "${body}" > "${status_file}" + else + curl -sS \ + -X "${method}" \ + -H "X-API-Key: ${API_KEY}" \ + -o "${out_file}" \ + -w "%{http_code}" \ + "${url}" > "${status_file}" + fi + + local actual_status + actual_status="$(python3 - "$status_file" <<'PY' +from pathlib import Path +import sys +print(Path(sys.argv[1]).read_text(encoding="utf-8").strip()) +PY +)" + + echo "[${method}] ${url} -> ${actual_status}" + python3 - "$out_file" <<'PY' +from pathlib import Path +import sys +print(Path(sys.argv[1]).read_text(encoding="utf-8")) +PY + echo + + if [[ "${actual_status}" != "${expected_status}" ]]; then + fail "Unexpected HTTP status for ${name}: expected ${expected_status}, got ${actual_status}" + fi +} + +upload_svg() { + local name="$1" + local upload_filename="$2" + local out_file="${TMP_DIR}/${name}.body" + local status_file="${TMP_DIR}/${name}.status" + + require_fixture_svg + + echo + echo "===== ${name} =====" + + curl -sS \ + -X POST \ + -H "X-API-Key: ${API_KEY}" \ + -o "${out_file}" \ + -w "%{http_code}" \ + -F "file=@${FIXTURE_SVG_PATH};filename=${upload_filename};type=image/svg+xml" \ + "${API_URL}/api/v1/schemes/upload" > "${status_file}" + + local actual_status + actual_status="$(python3 - "$status_file" <<'PY' +from pathlib 
import Path +import sys +print(Path(sys.argv[1]).read_text(encoding="utf-8").strip()) +PY +)" + + echo "[POST] ${API_URL}/api/v1/schemes/upload -> ${actual_status}" + python3 - "$out_file" <<'PY' +from pathlib import Path +import sys +print(Path(sys.argv[1]).read_text(encoding="utf-8")) +PY + echo + + if [[ "${actual_status}" != "200" ]]; then + fail "Upload failed for ${upload_filename}: expected 200, got ${actual_status}" + fi +} + +json_get() { + local file="$1" + local expr="$2" + python3 - "$file" "$expr" <<'PY' +import json +import re +import sys +from pathlib import Path + +data = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8")) +expr = sys.argv[2] + +def apply_selector(value, selector): + if value is None: + return None + if selector == "LAST": + return value[-1] if value else None + if selector.isdigit(): + idx = int(selector) + return value[idx] if len(value) > idx else None + + match = re.fullmatch(r"([^!=]+?)(==|!=)(.+)", selector) + if not match: + return value[0] if value else None + + key, op, raw_expected = match.groups() + key = key.strip() + raw_expected = raw_expected.strip() + if raw_expected == "null": + expected = None + else: + expected = raw_expected + + for item in value: + if not isinstance(item, dict): + continue + item_value = item.get(key) + matched = item_value == expected if op == "==" else item_value != expected + if matched: + return item + return None + +value = data +for part in expr.split("."): + if not part: + continue + if part.startswith("[") and part.endswith("]"): + value = apply_selector(value, part[1:-1]) + elif part.isdigit(): + idx = int(part) + value = value[idx] if value is not None and len(value) > idx else None + elif isinstance(value, dict): + value = value.get(part) + else: + value = None + +if isinstance(value, bool): + print("true" if value else "false") +elif value is None: + print("") +elif isinstance(value, (dict, list)): + print(json.dumps(value, ensure_ascii=False)) +else: + print(value) +PY +} + 
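`json_get` expressions walk the response body with dot-separated parts: object keys, bare numeric indexes, and bracket selectors (`[LAST]`, `[N]`, `[key==value]`, `[key!=value]`). A minimal standalone sketch of the plain dotted-path case (bracket selectors and error handling omitted), using the same python3 dependency the scripts already assume:

```shell
# Standalone sketch of json_get-style dotted-path lookups; smoke_common.sh
# implements the same walk plus the bracket selectors.
body='{"items":[{"name":"a","status":"draft"},{"name":"b","status":"published"}],"total":2}'

lookup() {
  python3 -c '
import json, sys
value = json.loads(sys.argv[1])
for part in sys.argv[2].split("."):
    if not part:
        continue
    if part.isdigit():
        value = value[int(part)]
    elif isinstance(value, dict):
        value = value.get(part)
print("" if value is None else value)
' "$body" "$1"
}

lookup "items.1.status"   # published
lookup "total"            # 2
```

Printing an empty string for missing keys (instead of raising) is what lets the `assert_json_*` helpers report a readable expected-vs-actual diff rather than a Python traceback.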
+json_len() { + local file="$1" + local expr="$2" + python3 - "$file" "$expr" <<'PY' +import json +import sys +from pathlib import Path + +data = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8")) +expr = sys.argv[2] +value = data +for part in expr.split("."): + if not part: + continue + if part.isdigit(): + value = value[int(part)] + else: + value = value.get(part) if isinstance(value, dict) else None + +if value is None: + print(0) +elif isinstance(value, (list, dict, str)): + print(len(value)) +else: + print(0) +PY +} + +assert_json_eq() { + local file="$1" + local expr="$2" + local expected="$3" + local actual + actual="$(json_get "${file}" "${expr}")" + if [[ "${actual}" != "${expected}" ]]; then + fail "${expr}: expected '${expected}', got '${actual}'" + fi + echo "[OK] ${expr}=${actual}" +} + +assert_json_int_eq() { + local file="$1" + local expr="$2" + local expected="$3" + local actual + actual="$(json_get "${file}" "${expr}")" + if ! [[ "${actual}" =~ ^[0-9]+$ ]]; then + fail "${expr}: expected integer, got '${actual}'" + fi + if (( actual != expected )); then + fail "${expr}: expected ${expected}, got ${actual}" + fi + echo "[OK] ${expr}=${actual}" +} + +assert_json_int_gt() { + local file="$1" + local expr="$2" + local threshold="$3" + local actual + actual="$(json_get "${file}" "${expr}")" + if ! [[ "${actual}" =~ ^[0-9]+$ ]]; then + fail "${expr}: expected integer, got '${actual}'" + fi + if (( actual <= threshold )); then + fail "${expr}: expected > ${threshold}, got ${actual}" + fi + echo "[OK] ${expr}=${actual} (> ${threshold})" +} + +assert_json_int_ge() { + local file="$1" + local expr="$2" + local threshold="$3" + local actual + actual="$(json_get "${file}" "${expr}")" + if ! 
[[ "${actual}" =~ ^[0-9]+$ ]]; then + fail "${expr}: expected integer, got '${actual}'" + fi + if (( actual < threshold )); then + fail "${expr}: expected >= ${threshold}, got ${actual}" + fi + echo "[OK] ${expr}=${actual} (>= ${threshold})" +} + +assert_json_len_eq() { + local file="$1" + local expr="$2" + local expected="$3" + local actual + actual="$(json_len "${file}" "${expr}")" + if (( actual != expected )); then + fail "len(${expr}): expected ${expected}, got ${actual}" + fi + echo "[OK] len(${expr})=${actual}" +} + +assert_file_contains() { + local file="$1" + local needle="$2" + if ! python3 - "$file" "$needle" <<'PY' +from pathlib import Path +import sys +haystack = Path(sys.argv[1]).read_text(encoding="utf-8") +needle = sys.argv[2] +if needle not in haystack: + raise SystemExit(1) +PY + then + fail "Expected '${needle}' in ${file}" + fi + echo "[OK] found '${needle}'" +} + +create_fresh_scheme_from_upload() { + local scenario_prefix="$1" + local stamp + stamp="$(date +%s)-$$" + FRESH_SCHEME_NAME="${scenario_prefix}-${stamp}" + local upload_filename="${FRESH_SCHEME_NAME}.svg" + + upload_svg "upload_${scenario_prefix}" "${upload_filename}" + request "schemes_after_upload_${scenario_prefix}" "GET" "${API_URL}/api/v1/schemes?limit=200&offset=0" "200" + + if ! 
SCHEME_ID="$(python3 - "${TMP_DIR}/schemes_after_upload_${scenario_prefix}.body" "${FRESH_SCHEME_NAME}" <<'PY' +import json +import sys +from pathlib import Path + +payload = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8")) +target_name = sys.argv[2] +for item in payload.get("items", []): + if item.get("name") == target_name: + print(item["scheme_id"]) + raise SystemExit(0) +raise SystemExit(1) +PY +)"; then + fail "Unable to resolve uploaded scheme_id for ${FRESH_SCHEME_NAME}" + fi + + echo "FRESH_SCHEME_NAME=${FRESH_SCHEME_NAME}" + echo "FRESH_SCHEME_ID=${SCHEME_ID}" +} diff --git a/backend/scripts/smoke_core.sh b/backend/scripts/smoke_core.sh new file mode 100644 index 0000000..21d2910 --- /dev/null +++ b/backend/scripts/smoke_core.sh @@ -0,0 +1,133 @@ +#!/usr/bin/env bash +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +TMP_DIR="$(mktemp -d)" +trap 'rm -rf "${TMP_DIR}"' EXIT + +# shellcheck source=backend/scripts/smoke_common.sh +source "${SCRIPT_DIR}/smoke_common.sh" + +wait_for_health + +request "ping" "GET" "${API_URL}/api/v1/ping" "200" +request "db_ping" "GET" "${API_URL}/api/v1/db/ping" "200" +request "manifest" "GET" "${API_URL}/api/v1/manifest" "200" + +create_fresh_scheme_from_upload "smoke-core" + +request "scheme_detail" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}" "200" +assert_json_eq "${TMP_DIR}/scheme_detail.body" "scheme_id" "${SCHEME_ID}" +assert_json_eq "${TMP_DIR}/scheme_detail.body" "name" "${FRESH_SCHEME_NAME}" +assert_json_eq "${TMP_DIR}/scheme_detail.body" "status" "draft" + +request "scheme_versions" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/versions?limit=20&offset=0" "200" +assert_json_len_eq "${TMP_DIR}/scheme_versions.body" "items" "1" + +request "scheme_current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200" +CURRENT_VERSION_ID="$(json_get "${TMP_DIR}/scheme_current.body" "scheme_version_id")" +CURRENT_STATUS="$(json_get "${TMP_DIR}/scheme_current.body" "status")" +echo 
"CURRENT_VERSION_ID=${CURRENT_VERSION_ID}" +echo "CURRENT_STATUS=${CURRENT_STATUS}" +assert_json_eq "${TMP_DIR}/scheme_current.body" "status" "draft" + +request "editor_context" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/editor/context" "200" +assert_json_eq "${TMP_DIR}/editor_context.body" "current_scheme_version_id" "${CURRENT_VERSION_ID}" +assert_json_eq "${TMP_DIR}/editor_context.body" "current_is_draft" "true" + +request "ensure_draft" "POST" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/ensure" "200" +DRAFT_VERSION_ID="$(json_get "${TMP_DIR}/ensure_draft.body" "scheme_version_id")" +DRAFT_CREATED="$(json_get "${TMP_DIR}/ensure_draft.body" "created")" +echo "DRAFT_VERSION_ID=${DRAFT_VERSION_ID}" +echo "DRAFT_CREATED=${DRAFT_CREATED}" +assert_json_eq "${TMP_DIR}/ensure_draft.body" "scheme_version_id" "${CURRENT_VERSION_ID}" +assert_json_eq "${TMP_DIR}/ensure_draft.body" "created" "false" + +request "draft_summary" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/summary?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" +request "draft_structure" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/structure?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" +request "draft_validation" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/validation?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" +request "draft_compare_preview" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/compare-preview?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" + +assert_json_eq "${TMP_DIR}/draft_summary.body" "scheme_version_id" "${DRAFT_VERSION_ID}" +assert_json_eq "${TMP_DIR}/draft_structure.body" "scheme_version_id" "${DRAFT_VERSION_ID}" +assert_json_eq "${TMP_DIR}/draft_validation.body" "scheme_version_id" "${DRAFT_VERSION_ID}" +assert_json_eq "${TMP_DIR}/draft_compare_preview.body" "draft_scheme_version_id" "${DRAFT_VERSION_ID}" + +TOTAL_SEATS="$(json_get "${TMP_DIR}/draft_summary.body" "total_seats")" +echo "TOTAL_SEATS=${TOTAL_SEATS}" + +read -r 
SEAT_RECORD_ID SECTOR_RECORD_ID GROUP_RECORD_ID EXPLAIN_SEAT_ID < "${status_file}" - else - curl -sS \ - -X "${method}" \ - -H "X-API-Key: ${API_KEY}" \ - -o "${out_file}" \ - -w "%{http_code}" \ - "${url}" > "${status_file}" - fi - - local actual_status - actual_status="$(cat "${status_file}")" - - echo "[${method}] ${url} -> ${actual_status}" - cat "${out_file}" - echo - - if [[ "${actual_status}" != "${expected_status}" ]]; then - echo "[FAIL] Unexpected HTTP status for ${name}: expected ${expected_status}, got ${actual_status}" >&2 - exit 1 - fi -} - -json_get() { - local file="$1" - local expr="$2" - python3 - "$file" "$expr" <<'PY' -import json -import sys - -path = sys.argv[2].split(".") -value = json.load(open(sys.argv[1], "r", encoding="utf-8")) -for part in path: - if not part: - continue - if part.isdigit(): - value = value[int(part)] - else: - value = value[part] -if isinstance(value, bool): - print("true" if value else "false") -elif value is None: - print("null") -else: - print(value) -PY -} - -assert_json_eq() { - local file="$1" - local expr="$2" - local expected="$3" - local actual - actual="$(json_get "${file}" "${expr}")" - if [[ "${actual}" != "${expected}" ]]; then - echo "[FAIL] ${expr}: expected '${expected}', got '${actual}'" >&2 - exit 1 - fi - echo "[OK] ${expr}=${actual}" -} - -assert_json_int_gt() { - local file="$1" - local expr="$2" - local threshold="$3" - local actual - actual="$(json_get "${file}" "${expr}")" - if ! [[ "${actual}" =~ ^[0-9]+$ ]]; then - echo "[FAIL] ${expr}: expected integer, got '${actual}'" >&2 - exit 1 - fi - if (( actual <= threshold )); then - echo "[FAIL] ${expr}: expected > ${threshold}, got ${actual}" >&2 - exit 1 - fi - echo "[OK] ${expr}=${actual} (> ${threshold})" -} - -echo "===== health =====" -echo "waiting for API to be ready..." 
-health_ready="false" -for ((i = 1; i <= HEALTH_MAX_ATTEMPTS; i++)); do - health_status="$(curl -sS -o /dev/null -w "%{http_code}" "${API_URL}/healthz" || true)" - if [[ "${health_status}" == "200" ]]; then - health_ready="true" - echo "API is ready" - break - fi - echo "waiting... (${i}/${HEALTH_MAX_ATTEMPTS}) healthz=${health_status}" - sleep "${HEALTH_RETRY_DELAY_SECONDS}" -done - -if [[ "${health_ready}" != "true" ]]; then - echo "[FAIL] API did not become ready on ${API_URL}/healthz after ${HEALTH_MAX_ATTEMPTS} attempts" >&2 - exit 1 -fi - -curl -sS -i "${API_URL}/healthz" - -request "ping" "GET" "${API_URL}/api/v1/ping" "200" -request "db_ping" "GET" "${API_URL}/api/v1/db/ping" "200" -request "manifest" "GET" "${API_URL}/api/v1/manifest" "200" - -request "scheme_current" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current" "200" -CURRENT_VERSION_ID="$(json_get "${TMP_DIR}/scheme_current.body" "scheme_version_id")" -CURRENT_STATUS="$(json_get "${TMP_DIR}/scheme_current.body" "status")" -echo "CURRENT_VERSION_ID=${CURRENT_VERSION_ID}" -echo "CURRENT_STATUS=${CURRENT_STATUS}" - -request "editor_context" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/editor/context" "200" -request "ensure_draft" "POST" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/ensure" "200" -DRAFT_VERSION_ID="$(json_get "${TMP_DIR}/ensure_draft.body" "scheme_version_id")" -DRAFT_CREATED="$(json_get "${TMP_DIR}/ensure_draft.body" "created")" -echo "DRAFT_VERSION_ID=${DRAFT_VERSION_ID}" -echo "DRAFT_CREATED=${DRAFT_CREATED}" - -request "draft_summary" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/summary?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" -assert_json_eq "${TMP_DIR}/draft_summary.body" "scheme_version_id" "${DRAFT_VERSION_ID}" -assert_json_eq "${TMP_DIR}/draft_summary.body" "status" "draft" - -request "draft_structure" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/structure?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" -assert_json_eq 
"${TMP_DIR}/draft_structure.body" "scheme_version_id" "${DRAFT_VERSION_ID}" - -SEAT_RECORD_ID="$(json_get "${TMP_DIR}/draft_structure.body" "seats.0.seat_record_id")" -SECTOR_RECORD_ID="$(json_get "${TMP_DIR}/draft_structure.body" "sectors.0.sector_record_id")" -GROUP_RECORD_ID="$(json_get "${TMP_DIR}/draft_structure.body" "groups.0.group_record_id")" -PRICED_SEAT_ID="$(json_get "${TMP_DIR}/draft_structure.body" "seats.0.seat_id")" -UNPRICED_SEAT_ID="$(json_get "${TMP_DIR}/draft_structure.body" "seats.2.seat_id")" - -echo "SEAT_RECORD_ID=${SEAT_RECORD_ID}" -echo "SECTOR_RECORD_ID=${SECTOR_RECORD_ID}" -echo "GROUP_RECORD_ID=${GROUP_RECORD_ID}" -echo "PRICED_SEAT_ID=${PRICED_SEAT_ID}" -echo "UNPRICED_SEAT_ID=${UNPRICED_SEAT_ID}" - -request "draft_validation" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/validation?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" -assert_json_eq "${TMP_DIR}/draft_validation.body" "status" "draft" - -request "draft_compare_preview" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/compare-preview?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" - -request "stale_draft_conflict" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/summary?expected_scheme_version_id=deadbeefdeadbeefdeadbeefdeadbeef" "409" -assert_json_eq "${TMP_DIR}/stale_draft_conflict.body" "detail.code" "stale_draft_version" - -request "draft_seat_record" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/seats/records/${SEAT_RECORD_ID}?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" -request "draft_sector_record" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/sectors/records/${SECTOR_RECORD_ID}?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" -request "draft_group_record" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/groups/records/${GROUP_RECORD_ID}?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" -request "draft_unknown_record" "GET" 
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/seats/records/deadbeefdeadbeefdeadbeefdeadbeef" "404" - -request "current_sectors" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current/sectors" "200" -request "current_groups" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current/groups" "200" -request "current_seats" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current/seats" "200" -request "display_meta" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current/svg/display/meta" "200" - -request "pricing_bundle" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing" "200" -request "pricing_coverage" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/coverage" "200" -request "pricing_unpriced" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/unpriced-seats" "200" -request "pricing_explain_priced" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/explain/${PRICED_SEAT_ID}" "200" -request "pricing_explain_unpriced" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/explain/${UNPRICED_SEAT_ID}" "200" -request "pricing_rule_diagnostics" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/rules/diagnostics" "200" -request "seat_price" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/current/seats/${PRICED_SEAT_ID}/price" "200" -request "test_mode_priced" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/test/seats/${PRICED_SEAT_ID}" "200" -request "test_mode_unpriced" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/test/seats/${UNPRICED_SEAT_ID}" "200" - -request "typed_invalid_amount" "POST" "${API_URL}/api/v1/schemes/${SCHEME_ID}/pricing/rules?expected_scheme_version_id=${DRAFT_VERSION_ID}" "422" '{"pricing_category_id":"4ef9e2b78fe0447f9f5db02714c7cad5","target_type":"sector","target_ref":"vip","amount":"bad","currency":"RUB"}' -assert_json_eq "${TMP_DIR}/typed_invalid_amount.body" "detail.code" "invalid_amount" - -request "typed_remap_filter_required" "POST" 
"${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/remap/preview?expected_scheme_version_id=${DRAFT_VERSION_ID}" "422" '{}' -assert_json_eq "${TMP_DIR}/typed_remap_filter_required.body" "detail.code" "remap_filter_required" - -request "draft_pricing_snapshot" "POST" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/pricing/snapshot?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" -request "publish_readiness" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/publish-readiness?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" -request "publish_preview_refresh" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/publish-preview?refresh=true&expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" -request "publish_preview_cached" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/draft/publish-preview?expected_scheme_version_id=${DRAFT_VERSION_ID}" "200" - -request "admin_artifacts" "GET" "${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/artifacts" "200" -request "admin_validation" "GET" "${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/current/validation" "200" -request "admin_preview_audit" "GET" "${API_URL}/api/v1/admin/artifacts/publish-preview/audit" "200" -request "admin_preview_cleanup_dry_run" "POST" "${API_URL}/api/v1/admin/artifacts/publish-preview/cleanup?dry_run=true" "200" - -request "admin_cleanup_preview" "GET" "${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup-preview?code_prefix=FAIL_&code_prefix=DIAG_&code_prefix=AUTO_&code_prefix=TYPED_&name_prefix=should-fail-&name_prefix=diag-&name_prefix=auto%20&name_prefix=typed-response-" "200" -request "admin_cleanup_dry_run" "POST" "${API_URL}/api/v1/admin/schemes/${SCHEME_ID}/pricing/categories/cleanup" "200" '{"code_prefixes":["FAIL_","DIAG_","AUTO_","TYPED_"],"name_prefixes":["should-fail-","diag-","auto ","typed-response-"],"delete_only_without_rules":true,"dry_run":true}' - -assert_json_eq "${TMP_DIR}/admin_cleanup_dry_run.body" "dry_run" "true" -assert_json_eq 
"${TMP_DIR}/admin_cleanup_dry_run.body" "deleted_count" "0" -MATCHED_TOTAL="$(json_get "${TMP_DIR}/admin_cleanup_dry_run.body" "matched_total")" -WOULD_DELETE="$(json_get "${TMP_DIR}/admin_cleanup_dry_run.body" "would_delete_count")" -if [[ "${MATCHED_TOTAL}" == "0" ]]; then - if [[ "${WOULD_DELETE}" != "0" ]]; then - echo "[FAIL] would_delete_count expected 0 when matched_total is 0, got ${WOULD_DELETE}" >&2 - exit 1 - fi - echo "[OK] matched_total=0, would_delete_count=0 (clean state)" -else - assert_json_int_gt "${TMP_DIR}/admin_cleanup_dry_run.body" "would_delete_count" "0" -fi - -request "audit_trail" "GET" "${API_URL}/api/v1/schemes/${SCHEME_ID}/audit" "200" +echo +echo "===== smoke pricing/publish =====" +bash "${SCRIPT_DIR}/smoke_pricing_publish.sh" echo echo "===== done =====" -echo "[OK] smoke regression completed successfully" +echo "[OK] smoke regression orchestration completed successfully"
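For reference, a typical operator invocation under the environment this doc defines (values copied from the Environment section; `FIXTURE_SVG_PATH` must point at a real checkout of the fixture, and the stack must already be up):

```shell
# Run the split regression against a local stack (values per this doc).
export API_URL="http://127.0.0.1:9020"
export API_KEY="admin-local-dev-key"
export FIXTURE_SVG_PATH="${PWD}/sample-contract.svg"

# Scenarios individually:
bash backend/scripts/smoke_core.sh
bash backend/scripts/smoke_pricing_publish.sh

# Or both via the orchestration wrapper (non-zero exit if either scenario fails):
bash backend/scripts/smoke_regression.sh
```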