Cloud Images on Azure
Get operational in no time with the automated deployment code we provide alongside the images.
How it works

Deploy and use with ease
We built the Azure images for TheHive and Cortex with operations and automation in mind. They are DevSecOps-friendly products that would fit in most organizations, no matter how simple or complex their infrastructure.

Production ready
- Dedicated data disks:
TheHive and Cortex data is stored on dedicated disks, not in the root filesystem, which makes operations like instance replacement, upgrades, and backup and restore much easier. Each image requires two persistent data disks attached at launch, on LUN 0 and LUN 1 (see the per-application sections below for recommended sizes).
- Ubuntu-based:
The images are based on the official Ubuntu 20.04 LTS distribution from Canonical.
- Hardened OS:
The underlying Ubuntu OS is hardened, but there is no network filtering inside the image: no iptables surprises that could conflict with your security groups.
- Application only:
The images include the applications only. They are not meant to be public-facing on their own and should be deployed within your virtual network, exposed through the public-facing system of your choice (load balancer, reverse proxy).
How to use
How to use the latest TheHive version on Azure
Basics
- Based on the official Ubuntu 20.04 LTS image from Canonical.
- The image works with any TheHive 5.x release: whenever a new version comes out, simply update a local variable with the desired version number.
- TheHive and Cortex data is stored on two dedicated disks, not in the root filesystem. For that purpose, you must attach two persistent data disks at launch: LUN 0 for the data and LUN 1 for Docker (see the disk-attach sketch after this list). The recommended minimum size is 32 GB per volume. If you install Cortex on a separate instance, apply the same disk configuration to the second instance.
- The underlying Ubuntu OS is hardened, but there is no network filtering inside the image: no iptables surprises that could conflict with your security groups.
- Migration from TheHive v4 is possible using the migration script baked into the image.
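If you prefer to attach the data disks after creating the VM rather than at launch, the Azure CLI can create and attach them in one step. This is a hedged sketch: the resource group, VM name and disk names are placeholders to adapt.
az vm disk attach --resource-group my-secops-rg --vm-name thehive-1 --name thehive-data-disk --new --size-gb 32 --lun 0
az vm disk attach --resource-group my-secops-rg --vm-name thehive-1 --name thehive-docker-disk --new --size-gb 32 --lun 1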
Run context
- TheHive is available on HTTP port 9000 and Cortex, when deployed alongside, on HTTP port 9001 (HTTP, not HTTPS). We encourage you never to open these ports outside your virtual network; use a public-facing load balancer and/or reverse proxy to handle TLS sessions with end users.
- As an incentive to use HTTPS, both TheHive and Cortex are configured to use secure cookies by default, so connecting to their respective UIs over plain HTTP will fail. You can override this in the configuration: for TheHive, set play.http.session.secure = false in /opt/thp_data/nomad/tasks/thehive/application.conf; for Cortex, set play.http.session.secure = false and play.filters.csrf.cookie.secure = false in /opt/thp_data/nomad/tasks/cortex/application.conf. Restart the thehive and/or cortex service(s) for the change to take effect.
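For example, the TheHive override can be applied from the shell. The config path is the one above; the systemd unit name is an assumption based on the service name, so adjust it to your image version.
echo 'play.http.session.secure = false' | sudo tee -a /opt/thp_data/nomad/tasks/thehive/application.conf   # HOCON: later settings override earlier ones
sudo systemctl restart thehive   # unit name assumed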
Launching an instance
- Launch an instance from the image and attach two persistent disks: the data disk on LUN 0 and the Docker disk on LUN 1.
- Set the admin username to azureuser.
- Provide the following cloud-init bootstrap script to configure the instance (you must update at least the application.baseUrl value for TheHive in this example):
#cloud-config
disk_setup:
  /dev/disk/azure/scsi1/lun0:
    table_type: gpt
    layout: True
    overwrite: True
  /dev/disk/azure/scsi1/lun1:
    table_type: gpt
    layout: True
    overwrite: True
fs_setup:
  - device: /dev/disk/azure/scsi1/lun0
    partition: auto
    filesystem: ext4
  - device: /dev/disk/azure/scsi1/lun1
    partition: auto
    filesystem: ext4
write_files:
  - path: /opt/strangebee/ops/templates/nomad/tasks/thehive/application.conf.d/service.conf
    content: |
      application.baseUrl="https://thehive.mydomain.com/thehive"
      play.http.context="/thehive"
  - path: /opt/strangebee/ops/templates/nomad/tasks/cortex/application.conf.d/service.conf
    content: |
      play.http.context="/cortex"
runcmd:
  - [ /opt/strangebee/ops/scripts/ops-launch.sh, "-t 1", "-c 0", "-p /dev/sdh", "-d /dev/sdi" ]
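For reference, here is a minimal launch sketch with the Azure CLI, assuming the cloud-init above is saved as cloud-init.yml; the resource group, VM name, size and image reference are placeholders to adapt.
# --data-disk-sizes-gb creates two data disks and attaches them in order on LUN 0 and LUN 1
az vm create \
  --resource-group my-secops-rg \
  --name thehive-1 \
  --image <thehive-image-urn-or-id> \
  --size Standard_D4s_v3 \
  --admin-username azureuser \
  --custom-data cloud-init.yml \
  --data-disk-sizes-gb 32 32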
You can further customize this script as needed. In the example above we:
- partition and format the data disks attached on LUN 0 and LUN 1
- launch the initialization script with the target data disk mapping names as arguments (/dev/sdh and /dev/sdi); the script automatically mounts the LUN 0 disk presented at /dev/sdh and the LUN 1 disk presented at /dev/sdi
- install TheHive only (because of parameters "-t 1", "-c 0" when calling the ops-launch.sh script)
To install both TheHive and Cortex on the same instance, use "-t 1", "-c 1".
To install Cortex only on another instance, use "-t 0", "-c 1".
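For clarity, here are the three deployment modes as plain invocations, keeping the same argument form and disk arguments as the runcmd line above:
/opt/strangebee/ops/scripts/ops-launch.sh "-t 1" "-c 0" "-p /dev/sdh" "-d /dev/sdi"   # TheHive only
/opt/strangebee/ops/scripts/ops-launch.sh "-t 1" "-c 1" "-p /dev/sdh" "-d /dev/sdi"   # TheHive and Cortex together
/opt/strangebee/ops/scripts/ops-launch.sh "-t 0" "-c 1" "-p /dev/sdh" "-d /dev/sdi"   # Cortex only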
That’s it! TheHive is now available (for your load balancer or reverse proxy) on port 9000, and Cortex on port 9001 if also installed. The default admin account on both applications is admin with password secret (change them immediately!).
Remember to access the apps through an internet-facing HTTPS system such as a reverse proxy or load balancer. In our example, TheHive is available at https://thehive.mydomain.com/thehive and Cortex at https://thehive.mydomain.com/cortex.
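A quick way to confirm reachability from inside the virtual network; the instance IP is a placeholder, and you should expect an HTTP status code rather than a connection error (the UI login itself requires HTTPS, as noted above):
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.1.4:9000/thehive   # TheHive, behind play.http.context="/thehive"
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.1.4:9001/cortex    # Cortex, if installed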
Even easier using Terraform
You can also provision the whole thing using Terraform.
Check our GitHub repository for turnkey deployment code, including a full SecOps vnet fronted by an HTTPS Application Gateway.
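The deployment code follows the standard Terraform workflow; the repository URL below is a placeholder, so substitute the actual repository from our GitHub:
git clone https://github.com/<our-github-org>/<azure-deployment-repo>.git   # placeholder URL
cd <azure-deployment-repo>
terraform init    # downloads the Azure provider
terraform plan    # review the vnet, Application Gateway and instances to be created
terraform apply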
How to use Cortex on Azure
Basics
- Based on the official Ubuntu 20.04 LTS image from Canonical.
- The image is updated whenever a new Cortex version is released: just launch a new instance as if it were a container!
- Cortex data is stored on dedicated disks, not in the root filesystem. You must therefore attach two persistent data disks at launch: LUN 0 for the database and LUN 1 for Docker images (30 GB each recommended).
- The underlying Ubuntu OS is hardened, but there is no network filtering inside the image: no iptables surprises that could conflict with your security groups.
Run context
- The Cortex app runs as an unprivileged user, cortex, and is available on HTTP port 9001 (HTTP, not HTTPS). We encourage you never to open that port outside your virtual network; use a public-facing load balancer and/or reverse proxy to handle TLS sessions with end users. Since many Cortex users also run TheHive and MISP instances alongside, and since the right load balancer or reverse proxy is the one you know best, we chose not to include yet another one in this image.
- The Cortex configuration is set to look for custom analyzers under /opt/cortexneurons/analyzers and for custom responders under /opt/cortexneurons/responders.
- A cron job for user cortex runs every night (@daily) to back up the application configuration and custom analyzers/responders to the data volume (/var/lib/elasticsearch/cortex/). If you plan to launch a new instance from existing data, this job must have run at least once after the initial install so that the application's configuration and custom analyzers/responders can be restored as well.
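You can verify the job is installed by listing the cortex user's crontab; running the command it lists forces an immediate backup, which is handy right before taking disk snapshots:
sudo crontab -u cortex -l   # shows the @daily backup job set up by the image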
Launching an instance with no existing data (new Cortex install)
1. Launch an instance from the image and attach two data disks on LUN 0 and LUN 1.
2. SSH into the instance with a sudoer user.
3. Initialize and format the additional data disks on LUN 0 and LUN 1 (see the sketch after this list).
4. Launch the application initialization script with the target data disk names as arguments. Example: /opt/cortex/ops/scripts/ops-cortex-init.sh /dev/sdh /dev/sdi (the script automatically mounts the LUN 0 and LUN 1 disks presented at /dev/sdh and /dev/sdi).
5. That’s it! Cortex is now available on port 9001. You can create the admin account on the first connection to the app.
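A minimal sketch of steps 3 and 4 done by hand, assuming the data disks appear as /dev/sdh and /dev/sdi (check with lsblk first); the /dev/disk/azure paths are the stable aliases used in the cloud-init example below:
sudo mkfs.ext4 /dev/disk/azure/scsi1/lun0   # LUN 0: database disk
sudo mkfs.ext4 /dev/disk/azure/scsi1/lun1   # LUN 1: Docker disk
sudo /opt/cortex/ops/scripts/ops-cortex-init.sh /dev/sdh /dev/sdi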
Alternatively, you can easily perform steps 3 and 4 by providing a cloud-init bootstrap script at launch. In the following example, we:
- partition and format the data disks attached on LUN 0 and LUN 1
- improve the random seed with pollinate (because we will generate a secret key in the initialization process)
- and finally, we launch the initialization script with the target data disk mapping names as arguments (/dev/sdh and /dev/sdi) – the script will automatically mount the LUN 0 disk at /dev/sdh and LUN 1 at /dev/sdi
#cloud-config
disk_setup:
  /dev/disk/azure/scsi1/lun0:
    table_type: gpt
    layout: True
    overwrite: True
  /dev/disk/azure/scsi1/lun1:
    table_type: gpt
    layout: True
    overwrite: True
fs_setup:
  - device: /dev/disk/azure/scsi1/lun0
    partition: none
    filesystem: ext4
  - device: /dev/disk/azure/scsi1/lun1
    partition: none
    filesystem: ext4
random_seed:
  file: /dev/urandom
  command: ["pollinate", "--server=https://entropy.ubuntu.com/"]
  command_required: true
runcmd:
  - /opt/cortex/ops/scripts/ops-cortex-init.sh /dev/sdh /dev/sdi
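Once cloud-init has run, a quick sanity check from the instance: expect ext4 filesystems on both data disks and an HTTP status code from the app (the UI login itself requires HTTPS, as noted in the run context).
lsblk -f                                                         # both data disks should show ext4 and a mount point
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:9001   # Cortex answers on port 9001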
You can also provision the whole thing using Terraform; check our GitHub repository for sample initialization code.
Launching an instance with existing data (Cortex update, migration, restore)
1. Launch an instance from the image and attach your existing Cortex data disks on LUN 0 and LUN 1 (we recommend you always create disk snapshots first; see the sketch after this list).
2. SSH into the instance with a sudoer user.
3. Launch the Cortex restore script with the data disk names as arguments; these are /dev/sdh and /dev/sdi if you are using the default setup. Example: /opt/cortex/ops/scripts/ops-cortex-restore.sh /dev/sdh /dev/sdi.
4. That’s it! Cortex is now available on port 9001 (or on the custom port you configured) with all your existing configuration, users and data. Custom analyzers and responders stored under /opt/cortexneurons are also restored automatically, and their pip requirements are reinstalled.
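As a hedged example, the snapshots from step 1 can be created with the Azure CLI before you reuse the disks; the resource group and disk names are placeholders:
az snapshot create --resource-group my-secops-rg --name cortex-lun0-snap --source cortex-data-disk-0
az snapshot create --resource-group my-secops-rg --name cortex-lun1-snap --source cortex-data-disk-1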
Alternatively, you can easily perform step 3 by providing a cloud-init bootstrap script at launch. In the following example, we:
- launch the restore script with the data disk names as arguments (/dev/sdh and /dev/sdi)
#cloud-config
runcmd:
- /opt/cortex/ops/scripts/ops-cortex-restore.sh /dev/sdh /dev/sdi
You can also provision the whole thing using Terraform; check our GitHub repository for sample update/migration/restore code.
