Compare commits


4 Commits

Author SHA1 Message Date
HoloPanio f56c49e242 fix(migrate): handle existing Company/UnifiSite data in catch-up migration
Two bugs in the catch-up migration that only manifest with real production data:

1. Company (4520 rows): uid was added as TEXT NOT NULL DEFAULT '' causing
   all existing rows to get uid='' which makes the PRIMARY KEY constraint
   fail with 'could not create unique index, Key (uid)=() is duplicated'.
   Fix: add uid as nullable, UPDATE uid = id (copies the existing CUID text
   PK into uid), then SET NOT NULL, then swap PK. Also populate the new
   integer id column from cw_CompanyId (which is fully populated in prod).

2. UnifiSite (180 rows): old approach just dropped the text companyId and
   added a null integer column, destroying all company relationships.
   Fix: add companyId_int, UPDATE via JOIN on Company.uid (= old Company.id
   text), drop old text column, rename integer column.
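The two corrected sequences can be sketched in Prisma-migration-style SQL. Table and column names come from the commit message; the constraint name, the temporary `id_new`/`companyId_int` helper columns, and the exact statement ordering are illustrative assumptions, not the shipped migration.

```sql
-- 1. Company: add uid nullable, backfill from the old text PK, promote it.
ALTER TABLE "Company" ADD COLUMN "uid" TEXT;
UPDATE "Company" SET "uid" = "id";
ALTER TABLE "Company" ALTER COLUMN "uid" SET NOT NULL;

-- Populate the new integer id from cw_CompanyId, then swap the PK to uid.
ALTER TABLE "Company" ADD COLUMN "id_new" INTEGER;
UPDATE "Company" SET "id_new" = "cw_CompanyId";
ALTER TABLE "Company" DROP CONSTRAINT "Company_pkey";
ALTER TABLE "Company" DROP COLUMN "id";
ALTER TABLE "Company" RENAME COLUMN "id_new" TO "id";
ALTER TABLE "Company" ADD CONSTRAINT "Company_pkey" PRIMARY KEY ("uid");

-- 2. UnifiSite: carry relationships across the type change by joining on
--    Company.uid, which now holds the old Company.id text.
ALTER TABLE "UnifiSite" ADD COLUMN "companyId_int" INTEGER;
UPDATE "UnifiSite" s
SET "companyId_int" = c."id"
FROM "Company" c
WHERE c."uid" = s."companyId";
ALTER TABLE "UnifiSite" DROP COLUMN "companyId";
ALTER TABLE "UnifiSite" RENAME COLUMN "companyId_int" TO "companyId";
```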

Also fix the P3009 handler in migrate-entrypoint.sh: Prisma may emit ANSI
color codes even without a TTY, wrapping backticks in escape sequences and
breaking the regex match. Fix: strip ANSI codes with sed before extracting
the migration name. Also simplify the regex from a rigid format match to a
simpler backtick-content grep.
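The strip-then-grep extraction can be demoed standalone. The colorized sample line below is fabricated for illustration; the sed/grep pipeline mirrors the entrypoint fix and assumes GNU sed (which accepts the \x1b escape).

```shell
#!/bin/sh
# Simulate Prisma output with ANSI color codes wrapping the backticks.
DEPLOY_OUTPUT=$(printf 'The \033[1m`20260402000000_fix_severity_typo`\033[0m migration started at ... failed')

# Strip ANSI escape codes, then pull the first backticked token containing a digit.
CLEAN_OUTPUT=$(printf '%s\n' "$DEPLOY_OUTPUT" | sed 's/\x1b\[[0-9;]*[mGKHFJr]//g')
FAILED=$(printf '%s\n' "$CLEAN_OUTPUT" | grep -o '`[^`]*`' | grep '[0-9]' | tr -d '`' | head -1)
echo "$FAILED"
```

Running it prints `20260402000000_fix_severity_typo`; with the old rigid regex, the embedded escape sequences would have made the match fail.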

Production DB manually unblocked (migrate resolve --rolled-back) so the
next deploy will cleanly apply the corrected migration.
2026-04-08 18:07:16 +00:00
HoloPanio 4fa13a1d28 fix(migrate): fix set -e swallowing prisma output on failure
Under set -e, POSIX sh exits the script at the assignment line when a
command substitution exits non-zero -- before the subsequent echo ever runs.

  DEPLOY_OUTPUT=$(cmd 2>&1)   # <- script exits here if cmd fails
  EXIT_CODE=$?
  echo "$DEPLOY_OUTPUT"       # <- never reached

Fix: use the || idiom, which puts the LHS in a compound-command context
where set -e does not apply, and still captures the real exit code:

  EXIT_CODE=0
  DEPLOY_OUTPUT=$(cmd 2>&1) || EXIT_CODE=$?
  echo "$DEPLOY_OUTPUT"       # <- always runs

Applied the same fix to the resolve call.
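The behavior can be verified with a standalone demo; `fail_cmd` is a stand-in for `bunx prisma migrate deploy` that prints output and exits non-zero.

```shell
#!/bin/sh
set -e

# Stand-in for a failing command that still produces useful output.
fail_cmd() { echo "deploy output"; return 3; }

# The || list puts the assignment in a context where set -e does not
# apply, so the script survives and the real exit code is captured.
EXIT_CODE=0
DEPLOY_OUTPUT=$(fail_cmd 2>&1) || EXIT_CODE=$?
echo "$DEPLOY_OUTPUT"           # always runs
echo "exit code: $EXIT_CODE"    # prints: exit code: 3
```

With the naive `DEPLOY_OUTPUT=$(fail_cmd 2>&1)` form, set -e would terminate the script at the assignment and neither echo would run.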
2026-04-08 14:31:22 +00:00
HoloPanio 6b90bab30c fix(api): add catch-up migration to sync db-push schema drift
All schema changes that were applied via 'prisma db push' over the past
several months were never captured in migration files.  When the postgres
pod restarted just before the migration job ran, the database was rebuilt
from the 15 existing migrations -- creating an old schema that was missing
~20 tables and significant structural changes to User, Opportunity,
CatalogItem, and Company.

This migration bridges the gap idempotently:
- New enums: PhoneType, FaxType, BillingMethod, BillingType, GenderType,
  USState, Country, OpportunityInterest
- User: add firstName/lastName/title/active/hidden/cwMemberId/updatedBy;
  drop emailVerified/name; make userId nullable
- CatalogItem: TEXT id → INTEGER id + TEXT uid PK; restructure FK columns
- Company: TEXT id → INTEGER id + TEXT uid PK; drop old CW columns; add
  dateDeleted/deleteFlag/phone/taxExempt/taxId/website/enteredById
- Opportunity: TEXT id → INTEGER id + TEXT uid PK; drop ~25 flat CW
  columns; add typeId/statusId/contactId/siteId/locationId/departmentId/
  closedById/primarySalesRepId/secondarySalesRepId/eneteredBy/updatedBy/
  oppNarrative/taxCodeId/interest; drop cwDateEntered
- UnifiSite: companyId TEXT → INTEGER
- 20+ new tables: CorporateLocation, InternalDepartment, CompanyAddress,
  Contact, CatalogItemType, CatalogCategory, CatalogSubcategory,
  CatalogManufacturer, Warehouse, WarehouseBin, ProductInventory,
  MinimumStockByWarehouse, ProductData, ServiceTicket, ServiceTicketNote,
  ServiceTicketType, ServiceTicketBoard, ServiceTicketLocation,
  ServiceTicketSource, ServiceTicketImpact, ServiceTicketPriority,
  ServiceTicketServerity, ServiceTicketFinalData, OpportunityType,
  OpportunityStatus, ScheduleStatus, ScheduleType, ScheduleSpan,
  Schedule, TaxCode
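The "idempotently" claim above means every statement is guarded so a re-run is a no-op. A minimal sketch of that pattern, where the enum values and the guarded columns/table are illustrative assumptions:

```sql
-- Postgres has no CREATE TYPE IF NOT EXISTS, so the usual guard is to
-- swallow duplicate_object on re-run.
DO $$ BEGIN
    CREATE TYPE "PhoneType" AS ENUM ('HOME', 'WORK', 'MOBILE');  -- values assumed
EXCEPTION
    WHEN duplicate_object THEN NULL;
END $$;

-- Column and table changes use the built-in IF (NOT) EXISTS forms.
ALTER TABLE "User" ADD COLUMN IF NOT EXISTS "firstName" TEXT;
ALTER TABLE "User" DROP COLUMN IF EXISTS "emailVerified";
CREATE TABLE IF NOT EXISTS "TaxCode" (
    "id" SERIAL PRIMARY KEY,
    "name" TEXT NOT NULL  -- columns assumed
);
```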

Verified: all 16 migrations apply cleanly on a fresh DB and produce zero
schema drift (prisma migrate diff outputs '-- This is an empty migration.')

Fixes P2022 ColumnNotFound errors on login and all model queries.
2026-04-08 13:40:29 +00:00
HoloPanio 7914c025a1 chore(migrate): add local migration test harness script 2026-04-08 05:36:41 +00:00
3 changed files with 1632 additions and 5 deletions
+15 -5
@@ -14,8 +14,8 @@ while [ $ATTEMPT -lt $MAX_RETRIES ]; do
   ATTEMPT=$((ATTEMPT + 1))
   echo "[migrate] Running prisma migrate deploy (attempt $ATTEMPT)..."
-  DEPLOY_OUTPUT=$(bunx prisma migrate deploy 2>&1)
-  EXIT_CODE=$?
+  EXIT_CODE=0
+  DEPLOY_OUTPUT=$(bunx prisma migrate deploy 2>&1) || EXIT_CODE=$?
   echo "$DEPLOY_OUTPUT"
   if [ $EXIT_CODE -eq 0 ]; then
@@ -26,11 +26,21 @@ while [ $ATTEMPT -lt $MAX_RETRIES ]; do
   # P3009: a previously-failed migration is blocking deploy.
   # The error message contains the migration name in backticks:
   #   The `20260402000000_fix_severity_typo` migration started at ... failed
-  if echo "$DEPLOY_OUTPUT" | grep -q "P3009"; then
-    FAILED=$(echo "$DEPLOY_OUTPUT" | grep -oE '`[0-9]{14}(_[a-zA-Z_]+)?`' | tr -d '`' | head -1)
+  # Strip ANSI escape codes first (Prisma may colorize output even without TTY),
+  # then use a simple backtick-content regex rather than a rigid format match.
+  CLEAN_OUTPUT=$(printf '%s\n' "$DEPLOY_OUTPUT" | sed 's/\x1b\[[0-9;]*[mGKHFJr]//g')
+  if printf '%s\n' "$CLEAN_OUTPUT" | grep -q "P3009"; then
+    FAILED=$(printf '%s\n' "$CLEAN_OUTPUT" | grep -o '`[^`]*`' | grep '[0-9]' | tr -d '`' | head -1)
     if [ -n "$FAILED" ]; then
       echo "[migrate] Resolving failed migration as rolled-back: $FAILED"
-      bunx prisma migrate resolve --rolled-back "$FAILED"
+      RESOLVE_OUTPUT=""
+      RESOLVE_EXIT=0
+      RESOLVE_OUTPUT=$(bunx prisma migrate resolve --rolled-back "$FAILED" 2>&1) || RESOLVE_EXIT=$?
+      echo "$RESOLVE_OUTPUT"
+      if [ $RESOLVE_EXIT -ne 0 ]; then
+        echo "[migrate] Failed to resolve migration $FAILED (exit $RESOLVE_EXIT). Aborting."
+        exit 1
+      fi
       continue
     fi
   fi
File diff suppressed because it is too large.
+84
@@ -0,0 +1,84 @@
#!/bin/sh
# ---------------------------------------------------------------------------
# Local migration test harness.
# Builds the migration Docker image from the monorepo root, spins up a fresh
# throwaway Postgres container, runs the migration job against it, and tears
# everything down when done — pass or fail.
#
# Usage (from monorepo root):
# sh api/prisma/test-migration-local.sh
#
# Requirements: Docker
# ---------------------------------------------------------------------------
set -e
NETWORK=migrate-test-net
DB_CONTAINER=migrate-test-postgres
MIGRATE_IMAGE=optima-api-migrate-local-test
DB_USER=optima
DB_PASS=testpass
DB_NAME=optima
# ---- Cleanup function — always runs on exit ----
cleanup() {
    echo "[test] Cleaning up..."
    docker rm -f "$DB_CONTAINER" 2>/dev/null || true
    docker network rm "$NETWORK" 2>/dev/null || true
    docker rmi "$MIGRATE_IMAGE" 2>/dev/null || true
}
trap cleanup EXIT
# ---- 1. Create an isolated Docker network ----
echo "[test] Creating Docker network: $NETWORK"
docker network create "$NETWORK"
# ---- 2. Start a fresh Postgres container ----
echo "[test] Starting fresh Postgres container: $DB_CONTAINER"
docker run -d \
    --name "$DB_CONTAINER" \
    --network "$NETWORK" \
    -e POSTGRES_USER="$DB_USER" \
    -e POSTGRES_PASSWORD="$DB_PASS" \
    -e POSTGRES_DB="$DB_NAME" \
    postgres:17
# Wait for Postgres to be ready (up to 30s)
echo "[test] Waiting for Postgres to be ready..."
READY=0
for i in $(seq 1 30); do
    if docker exec "$DB_CONTAINER" pg_isready -U "$DB_USER" -d "$DB_NAME" > /dev/null 2>&1; then
        READY=1
        break
    fi
    sleep 1
done
if [ $READY -eq 0 ]; then
    echo "[test] Postgres did not become ready in 30s. Aborting."
    exit 1
fi
echo "[test] Postgres is ready."
# ---- 3. Build the migration image from the monorepo root ----
echo "[test] Building migration image: $MIGRATE_IMAGE"
# Determine the monorepo root (two levels up from this script's directory)
SCRIPT_DIR=$(cd "$(dirname "$0")" && pwd)
REPO_ROOT=$(cd "$SCRIPT_DIR/../.." && pwd)
docker build \
    -f "$REPO_ROOT/api/Dockerfile" \
    --target migration \
    -t "$MIGRATE_IMAGE" \
    "$REPO_ROOT"
# ---- 4. Run the migration container against the test Postgres ----
echo "[test] Running migration container..."
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_CONTAINER}:5432/${DB_NAME}"
docker run --rm \
    --network "$NETWORK" \
    -e DATABASE_URL="$DATABASE_URL" \
    "$MIGRATE_IMAGE"
echo ""
echo "[test] SUCCESS — all migrations applied cleanly to a fresh database."