Initial import
.cursorignore (new file, 20 lines)

# Infrastructure and databases (exclude binaries and logs)
data/

# Python virtual environment
backend/venv/
venv/
env/
.env/

# Python cache
__pycache__/
*.pyc
*.pyo
*.pyd
.pytest_cache/

# IDEs and OS
.vscode/
.idea/
.DS_Store
.cursorrules (new file, 13 lines)

You are a Senior Fullstack Developer.
Project: a fault-tolerant ticket booking system.

TECHNOLOGIES:
- Backend: Python 3.12, FastAPI, SQLAlchemy 2.0 (asyncpg), Pydantic v2.
- Frontend: Next.js 14 (App Router), TypeScript, Zustand.
- UI/UX: Tailwind CSS (dark theme, indigo-500 accent, lg rounding), lucide-react, shadcn/ui. No custom CSS, only Tailwind utility classes.

CODING RULES:
- Strict typing everywhere.
- All database access is strictly asynchronous.
- 1 task = 1 session. Do not try to do everything in one pass.
- Keep the code modular and do not break existing endpoints (especially seat-lock acquisition in the DB).
.gitignore (new file, vendored, 27 lines)

# Python
__pycache__/
*.py[cod]
*.so
*.egg-info/
build/
dist/
.eggs/

# Virtual envs
.venv/
venv/
backend/venv/

# Env/secrets (never pushed)
.env
.env.*
!.env.example

# Data / volumes / runtime artifacts
data/
*.log

# IDE / OS
.vscode/
.idea/
.DS_Store
ARCHITECTURE.md (new file, 29 lines)

# Architecture and Business Rules (Ticket System)

## 1. Core principles
- **Stack:** FastAPI, SQLAlchemy 2.0 (asyncpg), PostgreSQL, Redis, RabbitMQ.
- **Data models:** defined in `backend/database/models.py`.
- **Key entities:** User, Tournament, Seat, Ticket.

## 2. Ticket lifecycle (state machine)
The ticket status (`TicketStatus` in the `tickets` table) is strictly limited to:
1. `AVAILABLE`: the seat is free for booking.
2. `LOCKED`: the seat is temporarily held by a user (15 minutes to pay).
3. `PAID`: payment completed successfully.
4. `SCANNED`: the ticket was redeemed at the entrance.
5. `REFUNDED`: refunded.

## 3. Scenario A: concurrent booking (critical path)
Seat acquisition must rule out race conditions.
1. On a POST request to lock a seat (`/api/seats/{seat_id}/lock`), FastAPI calls Redis.
2. It tries to set the key `lock:seat:{seat_id}` with `SETNX` and a 15-minute TTL.
3. **Success:** if Redis returned 1, update the ticket status in the DB to `LOCKED`, bind the `user_id`, and return HTTP 200.
4. **Failure:** if the key already exists, immediately return HTTP 409 Conflict. The DB is not touched.

## 4. Scenario B: asynchronous ticket issuance (idempotency)
Protects against double charges and UI stalls during slow PDF generation.
1. The payment gateway sends a webhook about a successful payment.
2. FastAPI checks the `idempotency_key` field in the `tickets` table. If the key has already been processed, the request is ignored.
3. The ticket status is updated to `PAID`.
4. A `ticket_paid` event (containing the `ticket_id` and user data) is published to the RabbitMQ queue, and HTTP 200 is returned to the gateway.
5. A background worker picks up the task, generates the PDF (reportlab), uploads it to MinIO (boto3), and saves the link in the DB.
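The lifecycle in section 2 can be made executable as an explicit transition table. The sketch below is illustrative only (the `ALLOWED` map and `can_transition` helper are not part of this commit); it assumes `LOCKED` may fall back to `AVAILABLE` when the 15-minute lock expires:

```python
from enum import Enum


class TicketStatus(str, Enum):
    AVAILABLE = "AVAILABLE"
    LOCKED = "LOCKED"
    PAID = "PAID"
    SCANNED = "SCANNED"
    REFUNDED = "REFUNDED"


# Allowed transitions of the state machine described above (assumption:
# an expired lock returns the seat to AVAILABLE).
ALLOWED: dict[TicketStatus, set[TicketStatus]] = {
    TicketStatus.AVAILABLE: {TicketStatus.LOCKED},
    TicketStatus.LOCKED: {TicketStatus.PAID, TicketStatus.AVAILABLE},
    TicketStatus.PAID: {TicketStatus.SCANNED, TicketStatus.REFUNDED},
    TicketStatus.SCANNED: set(),
    TicketStatus.REFUNDED: set(),
}


def can_transition(src: TicketStatus, dst: TicketStatus) -> bool:
    """Return True if the state machine permits moving from src to dst."""
    return dst in ALLOWED[src]
```

Guarding every status write with such a check keeps illegal jumps (e.g. `SCANNED` back to `PAID`) out of the DB even if a handler has a bug.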
backend/.dockerignore (new file)

__pycache__/
*.py[cod]
.venv/
venv/
backend/venv/

# project junk
.git/
.gitignore
.env
.env.*
data/
*.log
backend/Dockerfile (new file, 16 lines)

FROM python:3.12-slim

# Disable log buffering and bytecode files
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

WORKDIR /app

# Install system build dependencies for psycopg2 and clean the apt cache
RUN apt-get update && apt-get install -y gcc libpq-dev && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the code
COPY . .
backend/alembic.ini (new file, 149 lines)

# A generic, single database configuration.

[alembic]
# path to migration scripts.
# this is typically a path given in POSIX (e.g. forward slashes)
# format, relative to the token %(here)s which refers to the location of this
# ini file
script_location = %(here)s/migrations

# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
# Uncomment the line below if you want the files to be prepended with date and time
# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file
# for all available tokens
# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s
# Or organize into date-based subdirectories (requires recursive_version_locations = true)
# file_template = %%(year)d/%%(month).2d/%%(day).2d_%%(hour).2d%%(minute).2d_%%(second).2d_%%(rev)s_%%(slug)s

# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory. for multiple paths, the path separator
# is defined by "path_separator" below.
prepend_sys_path = .

# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the tzdata library which can be installed by adding
# `alembic[tz]` to the pip requirements.
# string value is passed to ZoneInfo()
# leave blank for localtime
# timezone =

# max length of characters to apply to the "slug" field
# truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

# version location specification; This defaults
# to <script_location>/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "path_separator"
# below.
# version_locations = %(here)s/bar:%(here)s/bat:%(here)s/alembic/versions

# path_separator; This indicates what character is used to split lists of file
# paths, including version_locations and prepend_sys_path within configparser
# files such as alembic.ini.
# The default rendered in new alembic.ini files is "os", which uses os.pathsep
# to provide os-dependent path splitting.
#
# Note that in order to support legacy alembic.ini files, this default does NOT
# take place if path_separator is not present in alembic.ini. If this
# option is omitted entirely, fallback logic is as follows:
#
# 1. Parsing of the version_locations option falls back to using the legacy
#    "version_path_separator" key, which if absent then falls back to the legacy
#    behavior of splitting on spaces and/or commas.
# 2. Parsing of the prepend_sys_path option falls back to the legacy
#    behavior of splitting on spaces, commas, or colons.
#
# Valid values for path_separator are:
#
# path_separator = :
# path_separator = ;
# path_separator = space
# path_separator = newline
#
# Use os.pathsep. Default configuration used for new projects.
path_separator = os


# set to 'true' to search source files recursively
# in each "version_locations" directory
# new in Alembic version 1.10
# recursive_version_locations = false

# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8

# database URL. This is consumed by the user-maintained env.py script only.
# other means of configuring database URLs may be customized within the env.py
# file.
sqlalchemy.url = driver://user:pass@localhost/dbname


[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples

# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME

# lint with attempts to fix using "ruff" - use the module runner, against the "ruff" module
# hooks = ruff
# ruff.type = module
# ruff.module = ruff
# ruff.options = check --fix REVISION_SCRIPT_FILENAME

# Alternatively, use the exec runner to execute a binary found on your PATH
# hooks = ruff
# ruff.type = exec
# ruff.executable = ruff
# ruff.options = check --fix REVISION_SCRIPT_FILENAME

# Logging configuration. This is also consumed by the user-maintained
# env.py script only.
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARNING
handlers = console
qualname =

[logger_sqlalchemy]
level = WARNING
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
backend/api/__init__.py (new file, empty)
backend/api/deps.py (new file, 36 lines)

import jwt
from fastapi import Depends, HTTPException, status
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from core.security import decode_access_token
from database.models import User
from database.session import get_db

_bearer_scheme = HTTPBearer()


async def get_current_user(
    credentials: HTTPAuthorizationCredentials = Depends(_bearer_scheme),
    db: AsyncSession = Depends(get_db),
) -> User:
    credentials_exception = HTTPException(
        status_code=status.HTTP_401_UNAUTHORIZED,
        detail="Could not validate credentials",
        headers={"WWW-Authenticate": "Bearer"},
    )

    try:
        user_id_str = decode_access_token(credentials.credentials)
        user_id = int(user_id_str)
    except (jwt.PyJWTError, ValueError):
        raise credentials_exception

    result = await db.execute(select(User).where(User.id == user_id))
    user: User | None = result.scalar_one_or_none()

    if user is None:
        raise credentials_exception

    return user
backend/api/routers/__init__.py (new file, empty)
backend/api/routers/auth.py (new file, 41 lines)

from fastapi import APIRouter, Depends, HTTPException, status
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from core.security import create_access_token, hash_password, verify_password
from database.models import User
from database.session import get_db
from schemas.user import TokenResponse, UserLoginRequest, UserRegisterRequest, UserResponse

router = APIRouter(prefix="/api/auth", tags=["auth"])


@router.post("/register", response_model=UserResponse, status_code=status.HTTP_201_CREATED)
async def register(body: UserRegisterRequest, db: AsyncSession = Depends(get_db)) -> User:
    result = await db.execute(select(User).where(User.email == body.email))
    if result.scalar_one_or_none() is not None:
        raise HTTPException(
            status_code=status.HTTP_409_CONFLICT,
            detail="User with this email already exists",
        )

    user = User(email=body.email, hashed_password=hash_password(body.password))
    db.add(user)
    await db.commit()
    await db.refresh(user)
    return user


@router.post("/login", response_model=TokenResponse)
async def login(body: UserLoginRequest, db: AsyncSession = Depends(get_db)) -> TokenResponse:
    result = await db.execute(select(User).where(User.email == body.email))
    user: User | None = result.scalar_one_or_none()

    if user is None or not verify_password(body.password, user.hashed_password):
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid email or password",
            headers={"WWW-Authenticate": "Bearer"},
        )

    return TokenResponse(access_token=create_access_token(subject=user.id))
backend/core/__init__.py (new file, empty)
backend/core/redis.py (new file, 32 lines)

import os
from redis.asyncio import Redis, from_url

# Take the URL from the environment, or default to our docker network
REDIS_URL = os.getenv("REDIS_URL", "redis://redis:6379/0")

# Global connection pool
redis_client: Redis = from_url(REDIS_URL, decode_responses=True)


async def get_redis() -> Redis:
    return redis_client


async def acquire_seat_lock(seat_id: int, user_id: int, ttl_seconds: int = 900) -> bool:
    """
    Tries to acquire a lock on a seat.
    ttl_seconds = 900 (15 minutes to pay, per the spec).
    Returns True if the lock was acquired, otherwise False.
    """
    lock_key = f"lock:seat:{seat_id}"

    # SETNX: Set if Not eXists. If the key already exists, this returns None/False.
    # ex: sets the key's time to live (TTL).
    is_locked = await redis_client.set(lock_key, str(user_id), nx=True, ex=ttl_seconds)

    return bool(is_locked)


async def release_seat_lock(seat_id: int) -> None:
    """
    Forcibly releases the lock (e.g. on cancellation or a DB error).
    """
    lock_key = f"lock:seat:{seat_id}"
    await redis_client.delete(lock_key)
backend/core/security.py (new file, 34 lines)

import os
from datetime import datetime, timedelta, timezone

import jwt
from passlib.context import CryptContext

SECRET_KEY: str = os.environ["JWT_SECRET_KEY"]
ALGORITHM: str = os.getenv("JWT_ALGORITHM", "HS256")
ACCESS_TOKEN_EXPIRE_MINUTES: int = int(os.getenv("JWT_ACCESS_TOKEN_EXPIRE_MINUTES", "60"))

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")


def hash_password(plain: str) -> str:
    return pwd_context.hash(plain)


def verify_password(plain: str, hashed: str) -> bool:
    return pwd_context.verify(plain, hashed)


def create_access_token(subject: int | str) -> str:
    expire = datetime.now(timezone.utc) + timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
    payload = {"sub": str(subject), "exp": expire}
    return jwt.encode(payload, SECRET_KEY, algorithm=ALGORITHM)


def decode_access_token(token: str) -> str:
    """Decodes the token and returns sub (user_id). Raises jwt.PyJWTError on an invalid token."""
    payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
    sub: str | None = payload.get("sub")
    if sub is None:
        raise jwt.InvalidTokenError("Token payload missing 'sub'")
    return sub
backend/database/__init__.py (new file, empty)
backend/database/models.py (new file, 63 lines)

import enum
from datetime import datetime, timezone

from sqlalchemy import String, Integer, ForeignKey, DateTime, Boolean
from sqlalchemy.dialects.postgresql import ENUM as PgEnum
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship


class Base(DeclarativeBase):
    pass


class TicketStatus(str, enum.Enum):
    AVAILABLE = "AVAILABLE"
    LOCKED = "LOCKED"
    PAID = "PAID"
    SCANNED = "SCANNED"
    REFUNDED = "REFUNDED"


class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(String, unique=True, index=True)
    hashed_password: Mapped[str] = mapped_column(String)
    tickets: Mapped[list["Ticket"]] = relationship(back_populates="user")


class Tournament(Base):
    __tablename__ = "tournaments"
    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str] = mapped_column(String)
    event_date: Mapped[datetime] = mapped_column(DateTime(timezone=True))
    is_active: Mapped[bool] = mapped_column(Boolean, default=True)
    seats: Mapped[list["Seat"]] = relationship(back_populates="tournament")


class Seat(Base):
    __tablename__ = "seats"
    id: Mapped[int] = mapped_column(primary_key=True)
    tournament_id: Mapped[int] = mapped_column(ForeignKey("tournaments.id"), index=True)
    sector: Mapped[str] = mapped_column(String)
    row: Mapped[int] = mapped_column(Integer)
    number: Mapped[int] = mapped_column(Integer)
    price: Mapped[int] = mapped_column(Integer)

    tournament: Mapped["Tournament"] = relationship(back_populates="seats")
    ticket: Mapped["Ticket"] = relationship(back_populates="seat", uselist=False)


class Ticket(Base):
    __tablename__ = "tickets"
    id: Mapped[int] = mapped_column(primary_key=True)
    seat_id: Mapped[int] = mapped_column(ForeignKey("seats.id"), unique=True, index=True)
    user_id: Mapped[int | None] = mapped_column(ForeignKey("users.id"), nullable=True, index=True)

    # create_type=False is a PostgreSQL dialect flag, so the PG ENUM type is used here;
    # the enum type itself is created by the Alembic migration, not by the model.
    status: Mapped[TicketStatus] = mapped_column(
        PgEnum(TicketStatus, name="ticket_status_enum", create_type=False),
        default=TicketStatus.AVAILABLE,
        index=True,
    )
    idempotency_key: Mapped[str | None] = mapped_column(String, unique=True, nullable=True)
    created_at: Mapped[datetime] = mapped_column(DateTime(timezone=True), default=lambda: datetime.now(timezone.utc))
    updated_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=True),
        default=lambda: datetime.now(timezone.utc),
        onupdate=lambda: datetime.now(timezone.utc),
    )

    seat: Mapped["Seat"] = relationship(back_populates="ticket")
    user: Mapped["User"] = relationship(back_populates="tickets")
backend/database/session.py (new file, 12 lines)

import os
from collections.abc import AsyncGenerator

from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker, AsyncSession

DATABASE_URL = os.getenv("DATABASE_URL", "postgresql+asyncpg://admin:your_strong_password@postgres:5432/ticket_db")

# Keep echo off in production; in a sandbox you can enable echo=True to see the SQL queries
engine = create_async_engine(DATABASE_URL, echo=False)
async_session = async_sessionmaker(engine, expire_on_commit=False, class_=AsyncSession)


async def get_db() -> AsyncGenerator[AsyncSession, None]:
    async with async_session() as session:
        yield session
backend/main.py (new file, 45 lines)

from fastapi import FastAPI, Depends, HTTPException, status
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select

from database.session import get_db
from database.models import Ticket, TicketStatus
from core.redis import acquire_seat_lock, release_seat_lock
from api.routers.auth import router as auth_router

app = FastAPI(title="Ticketing System API")

app.include_router(auth_router)


@app.post("/api/seats/{seat_id}/lock", status_code=status.HTTP_200_OK)
async def lock_seat(seat_id: int, user_id: int, db: AsyncSession = Depends(get_db)):
    # 1. Check the status in the DB (a dirty read, to filter out tickets already purchased)
    query = select(Ticket).where(Ticket.seat_id == seat_id)
    result = await db.execute(query)
    ticket = result.scalar_one_or_none()

    if ticket and ticket.status != TicketStatus.AVAILABLE:
        raise HTTPException(status_code=409, detail="Seat is already booked or locked in DB")

    # 2. Try to acquire the distributed lock in Redis (15 minutes per the spec)
    locked = await acquire_seat_lock(seat_id=seat_id, user_id=user_id)
    if not locked:
        raise HTTPException(status_code=409, detail="Seat is currently locked by another user")

    # 3. The lock is ours. Write the status to PostgreSQL
    try:
        if not ticket:
            ticket = Ticket(seat_id=seat_id, user_id=user_id, status=TicketStatus.LOCKED)
            db.add(ticket)
        else:
            ticket.status = TicketStatus.LOCKED
            ticket.user_id = user_id

        await db.commit()
        return {"message": "Seat locked successfully", "seat_id": seat_id, "status": "LOCKED"}

    except Exception:
        # Critically important: if the DB fails, release the Redis lock,
        # otherwise the seat hangs for 15 minutes
        await release_seat_lock(seat_id)
        await db.rollback()
        raise HTTPException(status_code=500, detail="Database transaction failed")
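The try/except tail of `lock_seat` is a compensation step: once the Redis lock is taken, any DB failure must undo it. That pattern can be factored into a small reusable async helper. A sketch under stated assumptions (the `compensating_lock` helper and `demo` harness are hypothetical, not part of this commit):

```python
import asyncio
from collections.abc import Awaitable, Callable
from contextlib import asynccontextmanager


@asynccontextmanager
async def compensating_lock(
    acquire: Callable[[], Awaitable[bool]],
    release: Callable[[], Awaitable[None]],
):
    """Run a block under a lock; release the lock only if the block raises."""
    if not await acquire():
        raise RuntimeError("lock busy")
    try:
        yield
    except Exception:
        await release()  # compensation: undo the lock on failure
        raise


async def demo() -> list[str]:
    """Simulate a DB failure after the lock was taken; record what happened."""
    events: list[str] = []

    async def acquire() -> bool:
        events.append("acquire")
        return True

    async def release() -> None:
        events.append("release")

    try:
        async with compensating_lock(acquire, release):
            raise ValueError("db failed")
    except ValueError:
        pass
    return events  # ["acquire", "release"]
```

On the success path the Redis key is deliberately left in place so the 15-minute TTL keeps protecting the seat until payment; only the failure path releases it early.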
backend/migrations/README (new file, 1 line)

Generic single-database configuration with an async dbapi.
backend/migrations/env.py (new file, 61 lines)

import asyncio
import os
import sys
from logging.config import fileConfig

from sqlalchemy import pool
from sqlalchemy.engine import Connection
from sqlalchemy.ext.asyncio import async_engine_from_config

from alembic import context

# Add the project root to sys.path so Python can find the database module
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from database.models import Base

config = context.config

if config.config_file_name is not None:
    fileConfig(config.config_file_name)

target_metadata = Base.metadata


def get_url() -> str:
    return os.getenv("DATABASE_URL", "postgresql+asyncpg://admin:your_strong_password@postgres:5432/ticket_db")


def run_migrations_offline() -> None:
    url = get_url()
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )
    with context.begin_transaction():
        context.run_migrations()


def do_run_migrations(connection: Connection) -> None:
    context.configure(connection=connection, target_metadata=target_metadata)
    with context.begin_transaction():
        context.run_migrations()


async def run_async_migrations() -> None:
    configuration = config.get_section(config.config_ini_section, {})
    configuration["sqlalchemy.url"] = get_url()
    connectable = async_engine_from_config(
        configuration,
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )
    async with connectable.connect() as connection:
        await connection.run_sync(do_run_migrations)
    await connectable.dispose()


def run_migrations_online() -> None:
    asyncio.run(run_async_migrations())


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
backend/migrations/script.py.mako (new file, 28 lines)

"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}

"""
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa
${imports if imports else ""}

# revision identifiers, used by Alembic.
revision: str = ${repr(up_revision)}
down_revision: Union[str, Sequence[str], None] = ${repr(down_revision)}
branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)}
depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)}


def upgrade() -> None:
    """Upgrade schema."""
    ${upgrades if upgrades else "pass"}


def downgrade() -> None:
    """Downgrade schema."""
    ${downgrades if downgrades else "pass"}
80
backend/migrations/versions/762b863b233b_init_models.py
Normal file
80
backend/migrations/versions/762b863b233b_init_models.py
Normal file
@@ -0,0 +1,80 @@
|
|||||||
|
"""Init models
|
||||||
|
|
||||||
|
Revision ID: 762b863b233b
|
||||||
|
Revises:
|
||||||
|
Create Date: 2026-03-03 16:49:28.746943
|
||||||
|
|
||||||
|
"""
|
||||||
|
from typing import Sequence, Union
|
||||||
|
|
||||||
|
from alembic import op
|
||||||
|
import sqlalchemy as sa
|
||||||
|
|
||||||
|
|
||||||
|
# revision identifiers, used by Alembic.
|
||||||
|
revision: str = '762b863b233b'
|
||||||
|
down_revision: Union[str, Sequence[str], None] = None
|
||||||
|
branch_labels: Union[str, Sequence[str], None] = None
|
||||||
|
depends_on: Union[str, Sequence[str], None] = None
|
||||||
|
|
||||||
|
|
||||||
|
def upgrade() -> None:
|
||||||
|
"""Upgrade schema."""
|
||||||
|
# ### commands auto generated by Alembic - please adjust! ###
|
||||||
|
op.create_table('tournaments',
|
||||||
|
    sa.Column('id', sa.Integer(), nullable=False),
    sa.Column('title', sa.String(), nullable=False),
    sa.Column('event_date', sa.DateTime(timezone=True), nullable=False),
    sa.Column('is_active', sa.Boolean(), nullable=False),
    sa.PrimaryKeyConstraint('id')
    )
    op.create_table('users',
    sa.Column('id', sa.Integer(), nullable=False),
    sa.Column('email', sa.String(), nullable=False),
    sa.Column('hashed_password', sa.String(), nullable=False),
    sa.PrimaryKeyConstraint('id')
    )
    op.create_index(op.f('ix_users_email'), 'users', ['email'], unique=True)
    op.create_table('seats',
    sa.Column('id', sa.Integer(), nullable=False),
    sa.Column('tournament_id', sa.Integer(), nullable=False),
    sa.Column('sector', sa.String(), nullable=False),
    sa.Column('row', sa.Integer(), nullable=False),
    sa.Column('number', sa.Integer(), nullable=False),
    sa.Column('price', sa.Integer(), nullable=False),
    sa.ForeignKeyConstraint(['tournament_id'], ['tournaments.id']),
    sa.PrimaryKeyConstraint('id')
    )
    op.create_index(op.f('ix_seats_tournament_id'), 'seats', ['tournament_id'], unique=False)
    op.create_table('tickets',
    sa.Column('id', sa.Integer(), nullable=False),
    sa.Column('seat_id', sa.Integer(), nullable=False),
    sa.Column('user_id', sa.Integer(), nullable=True),
    sa.Column('status', sa.Enum('AVAILABLE', 'LOCKED', 'PAID', 'SCANNED', 'REFUNDED', name='ticket_status_enum'), nullable=False),
    sa.Column('idempotency_key', sa.String(), nullable=True),
    sa.Column('created_at', sa.DateTime(timezone=True), nullable=False),
    sa.Column('updated_at', sa.DateTime(timezone=True), nullable=False),
    sa.ForeignKeyConstraint(['seat_id'], ['seats.id']),
    sa.ForeignKeyConstraint(['user_id'], ['users.id']),
    sa.PrimaryKeyConstraint('id'),
    sa.UniqueConstraint('idempotency_key')
    )
    op.create_index(op.f('ix_tickets_seat_id'), 'tickets', ['seat_id'], unique=True)
    op.create_index(op.f('ix_tickets_status'), 'tickets', ['status'], unique=False)
    op.create_index(op.f('ix_tickets_user_id'), 'tickets', ['user_id'], unique=False)
    # ### end Alembic commands ###


def downgrade() -> None:
    """Downgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_index(op.f('ix_tickets_user_id'), table_name='tickets')
    op.drop_index(op.f('ix_tickets_status'), table_name='tickets')
    op.drop_index(op.f('ix_tickets_seat_id'), table_name='tickets')
    op.drop_table('tickets')
    op.drop_index(op.f('ix_seats_tournament_id'), table_name='seats')
    op.drop_table('seats')
    op.drop_index(op.f('ix_users_email'), table_name='users')
    op.drop_table('users')
    op.drop_table('tournaments')
    # ### end Alembic commands ###
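The `status` column above creates a Postgres enum with five states. A small plain-Python sketch of the lifecycle those states suggest; the transition map here is hypothetical, for illustration only, and is not taken from the project code:

```python
from enum import Enum


class TicketStatus(str, Enum):
    # Mirrors ticket_status_enum from the migration above
    AVAILABLE = "AVAILABLE"
    LOCKED = "LOCKED"
    PAID = "PAID"
    SCANNED = "SCANNED"
    REFUNDED = "REFUNDED"


# Hypothetical allowed transitions (illustrative, not the project's rules):
ALLOWED = {
    TicketStatus.AVAILABLE: {TicketStatus.LOCKED},
    TicketStatus.LOCKED: {TicketStatus.AVAILABLE, TicketStatus.PAID},
    TicketStatus.PAID: {TicketStatus.SCANNED, TicketStatus.REFUNDED},
}


def can_transition(src: TicketStatus, dst: TicketStatus) -> bool:
    """Return True if dst is a legal next state for src."""
    return dst in ALLOWED.get(src, set())


print(can_transition(TicketStatus.AVAILABLE, TicketStatus.LOCKED))  # True
print(can_transition(TicketStatus.SCANNED, TicketStatus.AVAILABLE))  # False
```

Note the unique index on `seat_id` plus the `idempotency_key` unique constraint: together they back the "one ticket per seat" invariant and safe retries of the booking endpoint.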
10
backend/requirements.txt
Normal file
@@ -0,0 +1,10 @@
fastapi
uvicorn
sqlalchemy==2.0.*
alembic
asyncpg
psycopg2-binary
redis
passlib[bcrypt]
PyJWT
pydantic[email]>=2.5.0
0
backend/schemas/__init__.py
Normal file
30
backend/schemas/user.py
Normal file
@@ -0,0 +1,30 @@
from pydantic import BaseModel, EmailStr, field_validator


class UserRegisterRequest(BaseModel):
    email: EmailStr
    password: str

    @field_validator("password")
    @classmethod
    def password_min_length(cls, v: str) -> str:
        if len(v) < 8:
            raise ValueError("Password must be at least 8 characters long")
        return v


class UserLoginRequest(BaseModel):
    email: EmailStr
    password: str


class UserResponse(BaseModel):
    id: int
    email: str

    model_config = {"from_attributes": True}


class TokenResponse(BaseModel):
    access_token: str
    token_type: str = "bearer"
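A quick sketch of how the password rule in `UserRegisterRequest` behaves. To keep it dependency-free, this mirrors the validator body as a plain function rather than importing the project schemas:

```python
def password_min_length(v: str) -> str:
    # Mirrors the @field_validator("password") logic above
    if len(v) < 8:
        raise ValueError("Password must be at least 8 characters long")
    return v


# A short password is rejected at parse time, before it reaches a handler:
try:
    password_min_length("short")
except ValueError as exc:
    print(exc)  # Password must be at least 8 characters long

# A long-enough password passes through unchanged:
print(password_min_length("correct-horse"))  # correct-horse
```

In the real schema, Pydantic wraps this `ValueError` into a 422 validation error on the FastAPI side, so the endpoint code never sees an invalid password.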
96
infra/docker-compose.yml
Normal file
@@ -0,0 +1,96 @@
version: '3.8'

services:
  backend:
    build: ../backend
    container_name: backend
    # Run the API with auto-reload for development
    command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
    ports:
      - "8000:8000"
    environment:
      # DB connection settings (resolvable inside the ticket-network network)
      DATABASE_URL: postgresql+asyncpg://admin:your_strong_password@postgres:5432/ticket_db
      REDIS_URL: redis://redis:6379/0
    volumes:
      - ../backend:/app
    env_file:
      - .env
    depends_on:
      - postgres
      - redis
      - rabbitmq
    networks:
      - ticket-network

  traefik:
    image: traefik:v3.6
    container_name: traefik
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker.httpClientTimeout=300s"
      - "--entrypoints.web.address=:80"
    ports:
      - "8081:80"
      - "8082:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - ticket-network

  postgres:
    image: postgres:15-alpine
    container_name: postgres
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: your_strong_password
      POSTGRES_DB: ticket_db
    ports:
      - "5432:5432"  # expose Postgres to the host machine
    volumes:
      - ~/ticket-system/data/postgres:/var/lib/postgresql/data
    networks:
      - ticket-network

  redis:
    image: redis:7-alpine
    container_name: redis
    command: redis-server --save 60 1 --loglevel warning
    volumes:
      - ~/ticket-system/data/redis:/data
    networks:
      - ticket-network

  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: password
    ports:
      - "15672:15672"
    volumes:
      - ~/ticket-system/data/rabbitmq:/var/lib/rabbitmq
    networks:
      - ticket-network

  minio:
    image: quay.io/minio/minio
    container_name: minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadminpassword
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - ~/ticket-system/data/minio:/data
    networks:
      - ticket-network

networks:
  ticket-network:
    driver: bridge
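One thing the compose file encodes implicitly: the backend reaches Postgres by its compose service name on `ticket-network`, while the published `"5432:5432"` mapping makes the same database reachable from the host. A small stdlib sketch of the two addressing modes (the `localhost` swap is an illustration of connecting from the host, not project code):

```python
from urllib.parse import urlsplit

# DATABASE_URL as passed to the backend service: the host part is the
# compose service name, which only resolves inside ticket-network.
IN_NETWORK_URL = (
    "postgresql+asyncpg://admin:your_strong_password@postgres:5432/ticket_db"
)

parts = urlsplit(IN_NETWORK_URL)
print(parts.hostname, parts.port)  # postgres 5432

# From the host machine, thanks to the published "5432:5432" port,
# the same database is reachable by swapping only the host part:
HOST_URL = IN_NETWORK_URL.replace("@postgres:", "@localhost:")
print(urlsplit(HOST_URL).hostname)  # localhost
```

The same pattern applies to `REDIS_URL` (`redis://redis:6379/0` in-network); Redis is intentionally not published to the host here.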