
GCP Cloud Run Deployment

Deploy AppArt Agent to Google Cloud Platform using Cloud Run, Cloud SQL, and managed services.

Architecture Overview

flowchart TB
    subgraph Internet["Internet"]
        Users["Users"]
        DNS["DNS<br/>appartagent.com"]
    end

    subgraph GCP["Google Cloud Platform"]
        subgraph LoadBalancer["Global Load Balancer"]
            LB["Application Load Balancer<br/>+ Certificate Manager SSL"]
        end

        subgraph CloudRun["Cloud Run Services"]
            Frontend["Frontend<br/>Next.js"]
            Backend["Backend<br/>FastAPI"]
        end

        subgraph Data["Data Layer (Private Network)"]
            CloudSQL["Cloud SQL<br/>PostgreSQL 15"]
            Redis["Memorystore<br/>Redis 7"]
            GCS["Cloud Storage<br/>Documents & Photos"]
        end

        subgraph AI["AI Services"]
            VertexAI["Vertex AI<br/>Gemini 2.0"]
        end

        subgraph Security["Security"]
            SecretManager["Secret Manager"]
            IAM["IAM Service Accounts"]
        end

        subgraph Network["VPC Network"]
            VPCConnector["VPC Connector"]
            PrivateAccess["Private Service Access"]
        end
    end

    Users --> DNS
    DNS --> LB
    LB --> Frontend
    LB --> Backend
    Frontend --> Backend
    Backend --> VPCConnector
    VPCConnector --> CloudSQL
    VPCConnector --> Redis
    Backend --> GCS
    Backend --> VertexAI
    Backend --> SecretManager

Domain Routing

flowchart LR
    subgraph Domains["Custom Domains"]
        Apex["appartagent.com"]
        WWW["www.appartagent.com"]
        API["api.appartagent.com"]
    end

    subgraph Services["Cloud Run"]
        FE["Frontend Service"]
        BE["Backend Service"]
    end

    Apex --> FE
    WWW --> FE
    API --> BE

Prerequisites

Required Tools

| Tool | Version | Installation |
|------|---------|--------------|
| gcloud CLI | Latest | Install Guide |
| Terraform | >= 1.5.0 | Install Guide |
| Docker | 20.10+ | Install Guide |

GCP Requirements

  • GCP Project with billing enabled
  • Owner or Editor role on the project
  • Domain name (optional, for custom domain)

Cost Estimation

| Service | Development | Production |
|---------|-------------|------------|
| Cloud Run (Frontend) | $0-50/month | $50-100/month |
| Cloud Run (Backend) | $0-100/month | $100-200/month |
| Cloud SQL PostgreSQL | ~$10/month (db-g1-small) | ~$50/month (db-custom-2-4096) |
| Memorystore Redis | ~$35/month (1GB BASIC) | ~$70/month (1GB STANDARD_HA) |
| Cloud Storage | ~$1/month | ~$5/month |
| Load Balancer | ~$20/month | ~$20/month |
| Total | ~$65-115/month | ~$295-445/month |

Cost Optimization

  • Set min_instances = 0 in Terraform to enable scale-to-zero (saves ~$50/month per service; see the command after this list)
  • Use db-g1-small for development/staging environments
  • Consider BASIC Redis tier for non-production workloads
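
A minimal sketch of applying the scale-to-zero setting on a live service without a Terraform run ($REGION is set in the Quick Start below; flags are standard gcloud):

# Let the backend scale to zero when idle
gcloud run services update appart-backend \
  --region $REGION \
  --min-instances 0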

Quick Start Deployment

1. Initial Setup

# Clone repository
git clone https://github.com/benjamin-karaoglan/appart-agent.git
cd appart-agent

# Set environment variables
export PROJECT_ID="your-gcp-project-id"
export REGION="europe-west1"

# Authenticate with GCP
gcloud auth login
gcloud config set project $PROJECT_ID
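
Before moving on, a quick sanity check that the right account and project are active:

# Confirm active credentials and the configured project
gcloud auth list
gcloud config get-value project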

2. Enable Required APIs

gcloud services enable \
  run.googleapis.com \
  sqladmin.googleapis.com \
  redis.googleapis.com \
  secretmanager.googleapis.com \
  artifactregistry.googleapis.com \
  cloudbuild.googleapis.com \
  vpcaccess.googleapis.com \
  servicenetworking.googleapis.com \
  compute.googleapis.com \
  aiplatform.googleapis.com \
  dns.googleapis.com \
  certificatemanager.googleapis.com
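
To confirm the APIs are active, spot-check one of them (the filter value below is an example):

# Verify that the Cloud Run API is enabled
gcloud services list --enabled --filter="name:run.googleapis.com"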

3. Deploy Infrastructure with Terraform

cd infra/terraform

# Create variables file
cat > terraform.tfvars << EOF
project_id = "$PROJECT_ID"
region     = "$REGION"
environment = "production"

# Optional: Custom domain
# domain = "yourdomain.com"
# use_load_balancer = true
EOF

# Initialize and apply
terraform init
terraform plan
terraform apply
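
Once apply finishes, inspect the stack's outputs (lb_ip and dns_nameservers, both used later in this guide, are among them):

# Show all Terraform outputs for the new infrastructure
terraform output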

4. Build and Push Docker Images

# Configure Docker for Artifact Registry
gcloud auth configure-docker $REGION-docker.pkg.dev

# Build and push backend
docker build -t $REGION-docker.pkg.dev/$PROJECT_ID/appart-agent/backend:latest \
  --target production ./backend
docker push $REGION-docker.pkg.dev/$PROJECT_ID/appart-agent/backend:latest

# Build and push frontend
docker build -t $REGION-docker.pkg.dev/$PROJECT_ID/appart-agent/frontend:latest \
  --target production -f ./frontend/Dockerfile.pnpm ./frontend
docker push $REGION-docker.pkg.dev/$PROJECT_ID/appart-agent/frontend:latest
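
Optionally, pushing a git-SHA tag alongside latest gives you immutable image references for rollbacks; a minimal sketch:

# Tag and push a SHA-addressed copy of the backend image
GIT_SHA=$(git rev-parse --short HEAD)
docker tag $REGION-docker.pkg.dev/$PROJECT_ID/appart-agent/backend:latest \
  $REGION-docker.pkg.dev/$PROJECT_ID/appart-agent/backend:$GIT_SHA
docker push $REGION-docker.pkg.dev/$PROJECT_ID/appart-agent/backend:$GIT_SHA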

5. Run Database Migrations

# Execute migration job
gcloud run jobs execute db-migrate --region $REGION --wait

5.5. Import DVF Dataset (Optional)

# Execute DVF import job
gcloud run jobs execute dvf-import --region $REGION --wait

6. Deploy Cloud Run Services

# Deploy backend
gcloud run deploy appart-backend \
  --image $REGION-docker.pkg.dev/$PROJECT_ID/appart-agent/backend:latest \
  --region $REGION

# Deploy frontend
gcloud run deploy appart-frontend \
  --image $REGION-docker.pkg.dev/$PROJECT_ID/appart-agent/frontend:latest \
  --region $REGION
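
To confirm both services came up, fetch their URLs and hit the backend's /health endpoint (documented under Health Checks below):

# Print the deployed service URLs
gcloud run services describe appart-backend --region $REGION --format='value(status.url)'
gcloud run services describe appart-frontend --region $REGION --format='value(status.url)'

# Smoke-test the backend
curl "$(gcloud run services describe appart-backend --region $REGION --format='value(status.url)')/health"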

Infrastructure Details

Terraform Resources

The Terraform configuration creates:

flowchart TD
    subgraph Compute["Compute"]
        CR_FE["Cloud Run: Frontend"]
        CR_BE["Cloud Run: Backend"]
        CR_JOB["Cloud Run Job: Migrations"]
        CR_DVF["Cloud Run Job: DVF Import"]
    end

    subgraph Database["Database & Cache"]
        SQL["Cloud SQL Instance"]
        SQL_DB["Database: appart_agent"]
        SQL_USER["User: appart"]
        REDIS["Memorystore Redis"]
    end

    subgraph Storage["Storage"]
        AR["Artifact Registry"]
        GCS_DOCS["Bucket: documents"]
        GCS_PHOTOS["Bucket: photos"]
    end

    subgraph Secrets["Secrets"]
        SM_DB["Secret: database-url"]
        SM_AUTH["Secret: better-auth-secret"]
        SM_API["Secret: google-cloud-api-key"]
        SM_LOG["Secret: logfire-token"]
    end

    subgraph IAM["Service Accounts"]
        SA_BE["appart-backend"]
        SA_FE["appart-frontend"]
        SA_DEPLOY["appart-deployer"]
        SA_BUILD["appart-cloudbuild"]
    end

    subgraph Network["Networking"]
        VPC["VPC Network"]
        SUBNET["Subnet"]
        CONNECTOR["VPC Connector"]
        PSA["Private Service Access"]
    end

    CR_BE --> SA_BE
    CR_FE --> SA_FE
    SA_BE --> SQL
    SA_BE --> REDIS
    SA_BE --> GCS_DOCS
    SA_BE --> GCS_PHOTOS
    CONNECTOR --> VPC
    SQL --> PSA
    REDIS --> PSA

Terraform Variables

| Variable | Description | Default |
|----------|-------------|---------|
| project_id | GCP Project ID | Required |
| region | GCP Region | europe-west1 |
| environment | Environment name | production |
| posthog_project_token | PostHog project token | "" (disabled) |
| domain | Custom domain | "" (none) |
| use_load_balancer | Use Cloud Load Balancer | true |
| create_dns_zone | Create Cloud DNS zone | true |
| db_tier | Cloud SQL instance tier | db-g1-small |
| redis_tier | Redis tier | BASIC |
| min_instances | Minimum Cloud Run instances | 0 |
| backend_max_concurrency | Max concurrent requests per backend instance | 20 |

Service Account Permissions

flowchart LR
    subgraph Backend["appart-backend"]
        BE_SQL["cloudsql.client"]
        BE_SECRET["secretmanager.secretAccessor"]
        BE_STORAGE["storage.objectAdmin"]
        BE_AI["aiplatform.user"]
        BE_REDIS["redis.editor"]
        BE_LOG["logging.logWriter"]
    end

    subgraph Frontend["appart-frontend"]
        FE_SECRET["secretmanager.secretAccessor"]
        FE_LOG["logging.logWriter"]
    end

    subgraph Deployer["appart-deployer"]
        DEP_RUN["run.admin"]
        DEP_IAM["iam.serviceAccountUser"]
        DEP_AR["artifactregistry.writer"]
        DEP_SECRET["secretmanager.secretAccessor"]
        DEP_STORAGE["storage.admin"]
    end
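
The roles actually bound to a service account can be audited with a standard IAM policy query; a sketch for the backend account:

# List every project-level role bound to the backend service account
gcloud projects get-iam-policy $PROJECT_ID \
  --flatten="bindings[].members" \
  --format="table(bindings.role)" \
  --filter="bindings.members:appart-backend@$PROJECT_ID.iam.gserviceaccount.com"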

Custom Domain Setup

Architecture with Load Balancer

flowchart TB
    subgraph DNS["DNS Resolution"]
        Domain["appartagent.com"]
        WWW["www.appartagent.com"]
        API["api.appartagent.com"]
    end

    subgraph GCP["Google Cloud"]
        subgraph LB["Global Load Balancer"]
            IP["Static IP Address"]
            HTTPS["HTTPS Proxy"]
            HTTP["HTTP Proxy<br/>(Redirect to HTTPS)"]
            URLMap["URL Map"]
        end

        subgraph Certs["Certificate Manager"]
            Cert["Managed SSL Certificate"]
            DNSAuth["DNS Authorization"]
        end

        subgraph NEG["Serverless NEGs"]
            FE_NEG["Frontend NEG"]
            BE_NEG["Backend NEG"]
        end

        subgraph Run["Cloud Run"]
            FE["Frontend"]
            BE["Backend"]
        end
    end

    Domain --> IP
    WWW --> IP
    API --> IP
    IP --> HTTPS
    IP --> HTTP
    HTTPS --> URLMap
    URLMap -->|"api.domain"| BE_NEG
    URLMap -->|"domain, www"| FE_NEG
    FE_NEG --> FE
    BE_NEG --> BE
    Cert --> HTTPS
    DNSAuth --> Cert

Compared with Cloud Run domain mappings (Option 3 below), this approach provides more reliable SSL certificate provisioning and better performance.

Option 1: Using Cloud DNS (Recommended)

Step 1: Verify Domain Ownership

# Verify domain ownership (opens browser)
gcloud domains verify yourdomain.com

Step 2: Configure Terraform

# terraform.tfvars
domain            = "yourdomain.com"
use_load_balancer = true
create_dns_zone   = true
api_subdomain     = "api"

Step 3: Apply Infrastructure

terraform apply

Step 4: Update DNS at Registrar

If using Cloud DNS (recommended):

# Get nameservers
terraform output dns_nameservers

Update your domain registrar to use the nameservers from the output, for example:

  • ns-cloud-a1.googledomains.com.
  • ns-cloud-a2.googledomains.com.
  • ns-cloud-a3.googledomains.com.
  • ns-cloud-a4.googledomains.com.
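
Propagation can be checked before moving on (assumes dig is installed):

# Confirm the registrar change has propagated
dig NS yourdomain.com +short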

Step 5: Verify SSL Certificate

# Check certificate status
gcloud certificate-manager certificates describe appart-agent-cert --location=global

SSL provisioning can take 15-60 minutes after DNS propagation.

Option 2: Using External DNS

If you manage DNS outside GCP (Cloudflare, Namecheap, etc.):

# terraform.tfvars
domain            = "yourdomain.com"
use_load_balancer = true
create_dns_zone   = false  # Don't create Cloud DNS zone

Then configure DNS records at your registrar:

| Type | Name | Value |
|------|------|-------|
| A | @ | <load_balancer_ip> |
| A | www | <load_balancer_ip> |
| A | api | <load_balancer_ip> |

Get the load balancer IP:

terraform output lb_ip

Option 3: Cloud Run Domain Mappings (Simpler, Less Reliable)

For simpler setups without a load balancer:

# terraform.tfvars
domain            = "yourdomain.com"
use_load_balancer = false
create_dns_zone   = false

Configure DNS records:

| Type | Name | Value |
|------|------|-------|
| A | @ | (See domain mapping status) |
| CNAME | www | ghs.googlehosted.com. |
| CNAME | api | ghs.googlehosted.com. |

# Get A record IPs for the apex domain
gcloud run domain-mappings describe \
  --domain yourdomain.com \
  --region $REGION \
  --format='value(status.resourceRecords)'

Environment Variables

Backend Configuration

Environment variables are set via Terraform and Secret Manager:

| Variable | Source | Description |
|----------|--------|-------------|
| ENVIRONMENT | Terraform | production |
| DATABASE_URL | Secret Manager | PostgreSQL connection string |
| SECRET_KEY | Secret Manager | Application secret key |
| GOOGLE_CLOUD_PROJECT | Terraform | GCP project ID |
| GOOGLE_CLOUD_LOCATION | Terraform | GCP region |
| GEMINI_USE_VERTEXAI | Terraform | true (uses Vertex AI) |
| STORAGE_BACKEND | Terraform | gcs |
| GCS_DOCUMENTS_BUCKET | Terraform | Documents bucket name |
| GCS_PHOTOS_BUCKET | Terraform | Photos bucket name |
| REDIS_HOST | Terraform | Redis IP address |
| REDIS_PORT | Terraform | Redis port (6379) |

Frontend Configuration

| Variable | Source | Description |
|----------|--------|-------------|
| NEXT_PUBLIC_API_URL | Terraform | Backend URL (custom domain or Cloud Run URL) |
| NEXT_PUBLIC_APP_URL | Terraform | Frontend URL (custom domain or Cloud Run URL) |
| DATABASE_URL | Secret Manager | PostgreSQL connection string (for Better Auth) |
| BETTER_AUTH_SECRET | Secret Manager | Session signing secret (min 32 chars) |
| NODE_ENV | Terraform | production |
| NEXT_PUBLIC_POSTHOG_PROJECT_TOKEN | Terraform / Build arg | PostHog project token (optional) |
| NEXT_PUBLIC_POSTHOG_HOST | Terraform / Build arg | PostHog host (https://eu.i.posthog.com) |
| GOOGLE_CLIENT_ID | Secret Manager | Google OAuth client ID (optional) |
| GOOGLE_CLIENT_SECRET | Secret Manager | Google OAuth client secret (optional) |

Setting Secrets Manually

If you need to set secrets manually:

# Database URL (automatically set by Terraform)
echo -n "postgresql://..." | gcloud secrets versions add database-url --data-file=-

# Application Secret (automatically set by Terraform)
echo -n "your-secret-key" | gcloud secrets versions add jwt-secret --data-file=-

# Better Auth Secret (for frontend session signing)
echo -n "your-better-auth-secret" | gcloud secrets versions add better-auth-secret --data-file=-

# Google Cloud API Key (optional, for non-Vertex AI usage)
echo -n "your-api-key" | gcloud secrets versions add google-cloud-api-key --data-file=-

# Logfire Token (optional, for observability)
echo -n "your-logfire-token" | gcloud secrets versions add logfire-token --data-file=-

Database Operations

Running Migrations

# Using Cloud Run Job (recommended)
gcloud run jobs execute db-migrate --region $REGION --wait

# Check job logs
gcloud run jobs executions logs db-migrate --region $REGION
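
Past executions and their outcomes can be listed as well:

# Show recent migration executions and whether they succeeded
gcloud run jobs executions list --job db-migrate --region $REGION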

DVF Import

The DVF (Demandes de Valeurs Foncières) dataset contains 20M+ French property transactions and is imported via a dedicated Cloud Run Job.

# Execute DVF import job (downloads and imports full dataset)
gcloud run jobs execute dvf-import --region $REGION --wait

# Check import job logs
gcloud run jobs executions logs dvf-import --region $REGION

# View job configuration
gcloud run jobs describe dvf-import --region $REGION

The dvf-import job:

  • Resources: 8 vCPU, 32 GiB RAM
  • Timeout: 60 minutes
  • Max retries: 0 (fail fast for easier debugging)
  • VPC egress: PRIVATE_RANGES_ONLY — only Cloud SQL traffic is routed through the VPC; the data.gouv.fr download goes directly to the internet, bypassing the VPC
  • Process: Downloads DVF data from data.gouv.fr, extracts .csv.gz, imports via polars + COPY FROM STDIN
  • Duration: ~55 seconds locally, ~25 minutes on Cloud Run for full dataset (4.8M sales, 13.5M lots)
  • Trigger: Manual via GitHub Actions workflow (.github/workflows/dvf-import.yml) or gcloud command

To trigger via GitHub Actions (or from the command line; see the sketch after this list):

  1. Go to Actions tab in GitHub
  2. Select DVF Import workflow
  3. Click Run workflow
  4. Select branch and click Run workflow
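
With the GitHub CLI installed and authenticated, the same workflow can be started from a terminal; a sketch:

# Trigger the DVF import workflow on main
gh workflow run dvf-import.yml --ref main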

Direct Database Access

For debugging or manual operations:

# Connect via Cloud SQL Auth Proxy
gcloud sql instances describe appart-agent-db --format='value(connectionName)'

# Install Cloud SQL Proxy (macOS ARM binary shown; pick the build matching your OS/architecture)
curl -o cloud-sql-proxy https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.8.0/cloud-sql-proxy.darwin.arm64
chmod +x cloud-sql-proxy

# Start proxy (in a separate terminal)
./cloud-sql-proxy $PROJECT_ID:$REGION:appart-agent-db

# Connect with psql
PGPASSWORD=$(gcloud secrets versions access latest --secret=db-password) \
  psql -h localhost -U appart -d appart_agent

Backup and Restore

# Create on-demand backup
gcloud sql backups create --instance=appart-agent-db

# List backups
gcloud sql backups list --instance=appart-agent-db

# Restore from backup
gcloud sql backups restore <BACKUP_ID> \
  --restore-instance=appart-agent-db \
  --backup-instance=appart-agent-db
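
Automated daily backups can also be scheduled on the instance (the start time below is an example; it is interpreted as UTC):

# Enable automated daily backups starting at 03:00 UTC
gcloud sql instances patch appart-agent-db --backup-start-time=03:00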

Monitoring and Logging

View Logs

# Backend logs
gcloud run services logs read appart-backend --region $REGION --limit 100

# Frontend logs
gcloud run services logs read appart-frontend --region $REGION --limit 100

# Real-time log streaming
gcloud run services logs tail appart-backend --region $REGION

Log Explorer Queries

Access Cloud Logging with these filters:

# Backend errors
resource.type="cloud_run_revision"
resource.labels.service_name="appart-backend"
severity>=ERROR

# Slow requests (> 2s)
resource.type="cloud_run_revision"
httpRequest.latency>"2s"

# AI service calls
resource.type="cloud_run_revision"
jsonPayload.message=~"Gemini"

Monitoring Dashboard

flowchart LR
    subgraph Metrics["Key Metrics"]
        Latency["Request Latency<br/>p50, p95, p99"]
        Errors["Error Rate<br/>4xx, 5xx"]
        Instances["Instance Count<br/>Active, Idle"]
        CPU["CPU Utilization"]
        Memory["Memory Usage"]
    end

    subgraph Alerts["Alert Policies"]
        A1["Error Rate > 1%"]
        A2["Latency p95 > 2s"]
        A3["Instance Count = Max"]
        A4["Memory > 80%"]
    end

    Errors --> A1
    Latency --> A2
    Instances --> A3
    Memory --> A4

Set up alerts in Cloud Monitoring:

  1. Navigate to Monitoring > Alerting
  2. Create alerting policies for:
     • Error rate > 1%
     • Latency p95 > 2s
     • Instance count at maximum
     • Memory utilization > 80%

Logfire Integration (Optional)

For enhanced observability:

# Set Logfire token
echo -n "your-logfire-token" | gcloud secrets versions add logfire-token --data-file=-

The backend automatically sends traces and logs to Logfire when LOGFIRE_ENABLED=true.

VPC Egress Configuration

All Cloud Run services and jobs must use PRIVATE_RANGES_ONLY egress (not ALL_TRAFFIC) unless a Cloud NAT gateway is configured on the VPC. Without Cloud NAT, ALL_TRAFFIC routes all outbound requests through the VPC — which blocks access to the public internet (data.gouv.fr downloads, Logfire, Vertex AI, etc.) and causes connection timeouts.

The Terraform configuration uses PRIVATE_RANGES_ONLY on all resources, which routes only internal traffic (Cloud SQL, Redis) through the VPC while allowing external traffic to use the default internet egress.
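
When updating a service by hand, the equivalent flags look like this (the connector name is taken from the Troubleshooting section; verify it against your Terraform state):

# Attach the VPC connector with private-ranges-only egress
gcloud run services update appart-backend \
  --region $REGION \
  --vpc-connector appt-agent-connector \
  --vpc-egress private-ranges-only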

Scaling Configuration

Auto-scaling Settings

flowchart LR
    subgraph Traffic["Incoming Traffic"]
        Low["Low Traffic"]
        Medium["Medium Traffic"]
        High["High Traffic"]
    end

    subgraph Scaling["Auto-scaling"]
        Min["min_instances<br/>(0 or 1)"]
        Scale["Scale based on<br/>CPU/Concurrency"]
        Max["max_instances<br/>(10)"]
    end

    Low --> Min
    Medium --> Scale
    High --> Max

Terraform Configuration

# Scale to zero (cost-efficient, cold starts)
min_instances = 0
max_instances = 10

# Always-on (no cold starts, ~$50/month/service)
min_instances = 1
max_instances = 10

# Backend concurrency (default: 20, Cloud Run default is 80)
# Lower value = autoscale sooner, critical for DB-heavy endpoints
backend_max_concurrency = 20

Cold Start Optimization

To minimize cold start latency:

  1. Set min_instances = 1 for production workloads (see the command after this list)
  2. Optimize container size - use multi-stage builds
  3. Reduce startup time - lazy load heavy dependencies
  4. Use CPU boost - enabled by default on Cloud Run
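
A sketch of applying points 1 and 4 outside Terraform (--cpu-boost is a no-op where startup boost is already enabled by default):

# Keep one warm instance and enable startup CPU boost
gcloud run services update appart-backend \
  --region $REGION \
  --min-instances 1 \
  --cpu-boost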

CI/CD with GitHub Actions

The project includes two GitHub Actions workflows:

  1. .github/workflows/deploy.yml: Main deployment workflow (triggered on push to main)
  2. .github/workflows/dvf-import.yml: DVF import workflow (manual trigger only)

Setup GitHub Actions

  1. Create a service account key:

gcloud iam service-accounts keys create deployer-key.json \
  --iam-account=appart-deployer@$PROJECT_ID.iam.gserviceaccount.com

# Base64 encode for GitHub
cat deployer-key.json | base64

  2. Add GitHub repository secrets:

| Secret | Value |
|--------|-------|
| GCP_PROJECT_ID | Your project ID |
| GCP_REGION | europe-west1 |
| GCP_SA_KEY | Base64-encoded service account key |
| POSTHOG_PROJECT_TOKEN | PostHog project token (optional, for analytics) |
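
These can also be set from a terminal with the GitHub CLI (assumes gh is authenticated against the repository):

# Set repository secrets without the web UI
gh secret set GCP_PROJECT_ID --body "$PROJECT_ID"
gh secret set GCP_REGION --body "$REGION"
base64 deployer-key.json | gh secret set GCP_SA_KEY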

Deployment Workflow

sequenceDiagram
    participant Dev as Developer
    participant GH as GitHub
    participant GA as GitHub Actions
    participant AR as Artifact Registry
    participant CR as Cloud Run

    Dev->>GH: Push to main
    GH->>GA: Trigger workflow
    GA->>GA: Build Docker images
    GA->>AR: Push images
    GA->>CR: Execute db-migrate job
    GA->>CR: Update dvf-import job image
    GA->>CR: Deploy backend
    GA->>CR: Deploy frontend
    GA->>GH: Report status

Security

Security Architecture

flowchart TB
    subgraph Public["Public Internet"]
        Users["Users"]
    end

    subgraph Edge["Edge Security"]
        LB["Load Balancer<br/>DDoS Protection"]
        SSL["TLS 1.3<br/>Managed Certs"]
    end

    subgraph App["Application Security"]
        CORS["CORS Restrictions"]
        Auth["Better Auth Sessions"]
        Validation["Input Validation"]
    end

    subgraph Data["Data Security"]
        Secrets["Secret Manager<br/>Encrypted at Rest"]
        PrivateIP["Private IP<br/>No Public DB Access"]
        IAM["IAM<br/>Least Privilege"]
    end

    Users --> LB
    LB --> SSL
    SSL --> CORS
    CORS --> Auth
    Auth --> Validation
    Validation --> Secrets
    Validation --> PrivateIP

Best Practices

  1. No public database access - Cloud SQL uses private IP only
  2. Secret Manager - All sensitive values stored encrypted
  3. IAM least privilege - Service accounts have minimal permissions
  4. VPC networking - Internal services communicate over private network
  5. Automatic HTTPS - Managed SSL certificates
  6. CORS restrictions - API only accepts requests from known origins

Troubleshooting

Common Issues

Container Won't Start

# Check logs
gcloud run services logs read appart-backend --region $REGION --limit 50

# Check revision status
gcloud run revisions list --service appart-backend --region $REGION

Database Connection Failed

# Verify VPC Connector
gcloud compute networks vpc-access connectors describe \
  appt-agent-connector --region $REGION

# Check Cloud SQL status
gcloud sql instances describe appart-agent-db

# Test connectivity from Cloud Run (note: --command replaces the container
# entrypoint and rolls out a new revision; revert it after testing)
gcloud run services update appart-backend \
  --region $REGION \
  --command "python -c \"import sqlalchemy; print('OK')\""

Redis Connection Issues

# Check Redis instance
gcloud redis instances describe appart-agent-cache --region $REGION

# Verify private service access
gcloud compute networks vpc-access connectors describe \
  appt-agent-connector --region $REGION

SSL Certificate Not Provisioning

# Check certificate status
gcloud certificate-manager certificates describe appart-agent-cert --location=global

# Check DNS authorization status
gcloud certificate-manager dns-authorizations describe appart-agent-dns-auth --location=global

# Verify DNS records
dig yourdomain.com A
dig _acme-challenge.yourdomain.com CNAME

Domain Mapping Issues

# Check domain mapping status
gcloud run domain-mappings describe --domain yourdomain.com --region $REGION

# Verify domain ownership
gcloud domains verify yourdomain.com

Logfire/External Service Unreachable

If you see errors like Network is unreachable for external services (Logfire, external APIs):

# Check current VPC egress setting
gcloud run services describe appart-backend --region $REGION \
  --format='value(spec.template.metadata.annotations."run.googleapis.com/vpc-access-egress")'

# Fix: Change to PRIVATE_RANGES_ONLY to allow external traffic
gcloud run services update appart-backend --region $REGION \
  --vpc-egress=private-ranges-only

The VPC egress options:

  • PRIVATE_RANGES_ONLY (recommended): Only internal traffic goes through VPC, external traffic uses default egress
  • ALL_TRAFFIC: All traffic goes through VPC (requires Cloud NAT for external access)

Health Checks

The backend exposes a /health endpoint:

# Check backend health
curl https://api.yourdomain.com/health

# Expected response
{"status": "healthy", "database": "connected", "redis": "connected"}

Cleanup

Destroy All Resources

cd infra/terraform

# Review what will be destroyed
terraform plan -destroy

# Destroy (requires confirmation)
terraform destroy

Data Loss

This will permanently delete:

  • Cloud SQL database and all data
  • Redis cache
  • Cloud Storage buckets and files
  • All Cloud Run services

Partial Cleanup

# Delete only Cloud Run services (keep data)
gcloud run services delete appart-backend --region $REGION
gcloud run services delete appart-frontend --region $REGION

# Delete only the jobs
gcloud run jobs delete db-migrate --region $REGION
gcloud run jobs delete dvf-import --region $REGION

Next Steps