Cloud Images on AWS
Easily integrate and set up TheHive and Cortex with our automated deployment code. Respond to incidents hassle-free with maintenance and updates provided by our experts.
How it works

Deploy and use with ease
We built the Amazon Machine Images for TheHive and Cortex with operations and automation in mind. They are DevSecOps-friendly products that fit most organizations, whether their infrastructure is simple or complex.

Don’t worry about updates
Your AMIs are updated automatically whenever newer TheHive and Cortex versions are released. Just launch new instances as if they were containers!

Production ready
- Dedicated data volumes:
TheHive and Cortex data is stored on dedicated EBS volumes, not in the root filesystem, which makes operations such as instance replacement, upgrades, backup and restore much easier. With that in mind, the AMIs create persistent EBS volumes at launch (30 GB for Cortex) that are not deleted when the instance is terminated, so your data isn't accidentally lost. The volumes are encrypted with your default KMS key. You can change the default volume sizes and encryption keys at launch (see the example after this list).
Docker images for the Cortex analyzers and responders are also stored on a dedicated volume so you can adjust its size to your needs; the AMI creates a 20 GB volume by default.
- Ubuntu-based:
The AMI is based on the official Ubuntu 20.04 LTS AMI from Canonical.
- Hardened OS:
The underlying Ubuntu OS is configured to be as secure by default as possible while remaining usable in most contexts. There are no hidden iptables rules inside the cloud image, so the instance's behavior won't conflict with your security groups.
- Application only:
The AMIs include TheHive and Cortex applications only. They are not meant to be public-facing on their own and should be deployed within your VPC and exposed with the public-facing system of your choice (load balancer, reverse proxy). More information on the recommended reference architecture is provided in the user guide below.
How to use it
How to use the AMI for the latest TheHive version
Basics
- You can easily initialize a new instance or restore a previous TheHive instance using scripts included in the image.
- Data is stored on three dedicated volumes: one for the Cassandra database (/var/lib/cassandra), another for file attachment storage (/opt/thp_data/files) and a third for indexes (/opt/thp_data/index). A quick way to check the mounts is shown after this list.
- The AMI is based on the official Ubuntu 20.04 LTS AMI from Canonical.
- The AMI is updated whenever a new TheHive version is released—just launch a new instance as if it were a container.
- Migration from TheHive v3 is a manual operation detailed at length here.
- Upgrading from TheHive v4 AMI is possible using a script provided in the AMI. If you prefer to migrate manually, the overall upgrade process is documented here.
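As a quick sanity check after the first boot, you can confirm from an SSH session that the three data volumes are mounted where TheHive expects them. A minimal sketch, assuming a default AMI setup:

# Confirm the dedicated data volumes are mounted (default AMI layout):
df -h /var/lib/cassandra /opt/thp_data/files /opt/thp_data/index
# List the underlying block devices (they appear under NVMe names on Nitro instances):
lsblk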
Run context
- TheHive app runs as an unprivileged user named thehive and listens on HTTP port 9000 (not HTTPS). We encourage you never to expose that port outside your VPC; instead, use a public-facing load balancer and/or reverse proxy to handle TLS sessions with end users. Since many TheHive users also run Cortex and MISP instances alongside it, and since the right load balancer or reverse proxy is the one you know best, we chose not to bundle yet another one in this AMI.
- As an incentive to use HTTPS, TheHive is configured to use secure cookies by default, so connecting to the UI over plain HTTP will fail. You can override this in the configuration: set play.http.session.secure = false in /etc/thehive/application.conf, then restart the thehive service for the change to take effect (see the sketch after this list).
- The default sudoer user is ubuntu and the ssh service listens on port 22.
- A cronjob for user thehive runs every night (@daily) to back up the application configuration to the data volume (/var/lib/cassandra/thehive/). If you plan to launch a new instance from existing data later, this job must have run at least once after the initial install so that the application's configuration can be restored as well.
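If you really need to allow plain HTTP access (for a quick lab test, for instance), the following sketch applies the secure-cookie override mentioned above. It assumes play.http.session.secure is not already set elsewhere in the file, since with HOCON the last definition wins.

# Disable secure cookies (only do this if TheHive is not yet reachable over HTTPS):
echo 'play.http.session.secure = false' | sudo tee -a /etc/thehive/application.conf
# Restart the service so the change takes effect:
sudo systemctl restart thehive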
Launching an instance with no existing data (new TheHive install)
- Launch an instance from the AMI.
- SSH into the instance with the ubuntu user.
- Initialize and format the additional EBS volumes (/dev/sdh, /dev/sdi and /dev/sdj); see the sketch after these steps. Note that on Nitro-based instances, /dev/sdh might be available as something like /dev/nvme1n1. More information is available in the Amazon EBS and NVMe on Linux Instances documentation.
- Launch the application initialization script with the EBS data volumes' block device names as arguments, which are /dev/sdh, /dev/sdi and /dev/sdj if you are using a default AMI setup. If you are using a Nitro-based instance, do not use the NVMe names (like /dev/nvme1n1). Example: /opt/thehive/ops/scripts/ops-thehive-init.sh /dev/sdh /dev/sdi /dev/sdj
- That’s it! TheHive is now available on port 9000. The default admin account is “admin@thehive.local” with password “secret” (change it!).
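As an illustration of steps 3 and 4 on a Nitro-based instance, here is a minimal sketch assuming three brand-new, empty data volumes; the use of sudo and of mkfs.ext4 directly on the block devices mirrors what the cloud-init example below does with fs_setup, and your exact procedure may differ.

# Expose the NVMe devices under their block device mapping names
# (this helper script ships in the AMI and is also used by the cloud-init example below):
sudo /usr/sbin/nvme-to-block-mapping

# Create an ext4 filesystem on each empty data volume
# (WARNING: this erases anything already stored on the volumes):
for dev in /dev/sdh /dev/sdi /dev/sdj; do
  sudo mkfs.ext4 "$dev"
done

# Initialize TheHive with the block device mapping names, not the NVMe names:
sudo /opt/thehive/ops/scripts/ops-thehive-init.sh /dev/sdh /dev/sdi /dev/sdj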
Alternatively, you can easily perform steps 3 and 4 by providing cloud-init user data to the AMI at launch. In the following example using an m5 instance (Nitro-based), we:
- launch a script that exposes the external volumes, seen by the instance as /dev/nvme1n1, /dev/nvme2n1 and /dev/nvme3n1, under their block device mapping names (/dev/sdh, /dev/sdi and /dev/sdj)
- partition and format the EBS volumes using their block device mapping names (/dev/sdh, /dev/sdi and /dev/sdj)
- launch the initialization script with the EBS block mapping names as arguments (/dev/sdh, /dev/sdi and /dev/sdj)
#cloud-config
bootcmd:
- [ /usr/sbin/nvme-to-block-mapping ]
fs_setup:
- filesystem: ext4
  device: '/dev/sdh'
  partition: auto
  overwrite: false
- filesystem: ext4
  device: '/dev/sdi'
  partition: auto
  overwrite: false
- filesystem: ext4
  device: '/dev/sdj'
  partition: auto
  overwrite: false
runcmd:
- [ /opt/thehive/ops/scripts/ops-thehive-init.sh, /dev/sdh, /dev/sdi, /dev/sdj ]
You can also provision the whole thing using Terraform; check our GitHub repository for detailed sample code.
Launching an instance with existing data (TheHive update, migration, restore)
- Launch an instance from the AMI and base the additional EBS volumes (/dev/sdh, /dev/sdi and /dev/sdj by default) on your existing TheHive EBS volume snapshots for the Cassandra database (/dev/sdh), the file attachments (/dev/sdi) and the database index (/dev/sdj); see the launch example after these steps.
- SSH into the instance with the ubuntu user.
- Launch the restore script with the EBS data volumes' block device names as arguments, which are /dev/sdh, /dev/sdi and /dev/sdj if you are using a default AMI setup. If you are using a Nitro-based instance, do not use the NVMe names (like /dev/nvme1n1). Example: /opt/thehive/ops/scripts/ops-thehive-restore.sh /dev/sdh /dev/sdi /dev/sdj
- That’s it! TheHive is now available on port 9000 (or on the custom port you had configured) with all your existing configuration, users and data.
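For step 1, the data volumes can be based on your snapshots directly in the launch request. A minimal AWS CLI sketch, where the AMI ID, instance type, key pair, subnet and snapshot IDs are placeholders to replace with your own:

# /dev/sdh: Cassandra database snapshot, /dev/sdi: file attachments snapshot,
# /dev/sdj: index snapshot. Replace every ID below with your own.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.xlarge \
  --key-name my-key-pair \
  --subnet-id subnet-0123456789abcdef0 \
  --block-device-mappings '[
    {"DeviceName": "/dev/sdh", "Ebs": {"SnapshotId": "snap-0aaaaaaaaaaaaaaaa", "DeleteOnTermination": false}},
    {"DeviceName": "/dev/sdi", "Ebs": {"SnapshotId": "snap-0bbbbbbbbbbbbbbbb", "DeleteOnTermination": false}},
    {"DeviceName": "/dev/sdj", "Ebs": {"SnapshotId": "snap-0cccccccccccccccc", "DeleteOnTermination": false}}
  ]'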
Alternatively, you can easily perform step 3 by providing cloud-init user data to the AMI at launch. In the following example using an m5 instance (Nitro-based), we:
- launch the restore script with the EBS block mapping names as arguments (/dev/sdh, /dev/sdi and /dev/sdj)
#cloud-config
runcmd:
- [ /opt/thehive/ops/scripts/ops-thehive-restore.sh, /dev/sdh, /dev/sdi, /dev/sdj ]
You can also provision the whole thing using Terraform; check our GitHub repository for detailed sample code.
How to use the Cortex AMI
Basics
- The AMI is based on the official Ubuntu 20.04 LTS AMI from Canonical.
- The AMI is updated whenever a new Cortex version is released—just launch a new instance as if it were a container.
- Cortex data and Docker images (for analyzers and responders) are stored on two dedicated EBS volumes, not in the root filesystem. With that in mind, the AMI will create two persistent EBS data volumes at launch that will not be deleted when the instance is terminated so that your data isn’t accidentally lost. The volumes will be encrypted with your default KMS key.
- The Cortex data volume (/dev/sdh) is sized at 30 GB by default; the Docker volume (/dev/sdi) is sized at 20 GB by default.
Run context
- The Cortex app runs as an unprivileged user named cortex and listens on HTTP port 9001 (not HTTPS). We encourage you never to expose that port outside your VPC; instead, use a public-facing load balancer and/or reverse proxy to handle TLS sessions with end users. Since many Cortex users also run TheHive and MISP instances alongside it, and since the right load balancer or reverse proxy is the one you know best, we chose not to bundle yet another one in this AMI. More information on using the AWS Application Load Balancer or reverse proxies is available in our detailed Cortex AMI user guide.
- The default sudoer user is ubuntu and the ssh service listens on port 22.
- The Cortex configuration is set to look for custom analyzers under /opt/cortexneurons/analyzers and for custom responders under /opt/cortexneurons/responders.
- A cronjob for user cortex runs every night (@daily) to back up the application configuration and custom analyzers/responders to the data volume (/var/lib/elasticsearch/cortex/). If you plan to launch a new instance from existing data later, this job must have run at least once after the initial install so that the application's configuration and custom analyzers/responders can be restored as well (see the check below).
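Before snapshotting the data volume for a later restore, you can verify that the backup job has produced something. A best-effort sketch, since where exactly the cron entry is installed may vary:

# Look for the backup job of the cortex user (it may also live under /etc/cron.d):
sudo crontab -l -u cortex
# Confirm that a configuration backup has landed on the data volume:
sudo ls -l /var/lib/elasticsearch/cortex/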
Launching an instance with no existing data (new Cortex install)
- Launch an instance from the AMI.
- SSH into the instance with the ubuntu user.
- Initialize and format the additional EBS volumes (/dev/sdh and /dev/sdi); see the sketch after these steps. Note that on Nitro-based instances, /dev/sdh and /dev/sdi might be available as something like /dev/nvme1n1 and /dev/nvme2n1. More information is available in the Amazon EBS and NVMe on Linux Instances documentation.
- Launch the application initialization script with the EBS data volumes' block device names as arguments, which are /dev/sdh and /dev/sdi if you are using a default AMI setup. If you are using a Nitro-based instance, do not use the NVMe names (like /dev/nvme1n1). Example: /opt/cortex/ops/scripts/ops-cortex-init.sh /dev/sdh /dev/sdi
- That’s it! Cortex is now available on port 9001. You can create the admin account on the first connection to the app.
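As with TheHive, here is a minimal sketch of steps 3 and 4 on a Nitro-based instance, assuming two brand-new, empty data volumes; the use of sudo and of mkfs.ext4 directly on the block devices mirrors what the cloud-init example below does with fs_setup.

# Expose the NVMe devices under their block device mapping names:
sudo /usr/sbin/nvme-to-block-mapping

# Create an ext4 filesystem on each empty data volume (erases any existing content):
for dev in /dev/sdh /dev/sdi; do
  sudo mkfs.ext4 "$dev"
done

# Initialize Cortex with the block device mapping names, not the NVMe names:
sudo /opt/cortex/ops/scripts/ops-cortex-init.sh /dev/sdh /dev/sdi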
Alternatively, you can easily perform steps 3 and 4 by providing cloud-init user data to the AMI at launch. In the following example using an m5 instance (Nitro-based), we:
- launch a script that will expose the external volumes, seen by the instance as /dev/nvme1n1 and /dev/nvme2n1, with their block device mapping names (/dev/sdh, /dev/sdi)
- partition and format the EBS volumes using their block device mapping names (/dev/sdh, /dev/sdi)
- launch the initialization script with the EBS block mapping names as arguments (/dev/sdh and /dev/sdi, not /dev/nvme1n1 and /dev/nvme2n1)
#cloud-config
bootcmd:
- [ /usr/sbin/nvme-to-block-mapping ]
fs_setup:
- filesystem: ext4
  device: '/dev/sdh'
  partition: auto
  overwrite: false
- filesystem: ext4
  device: '/dev/sdi'
  partition: auto
  overwrite: false
runcmd:
- [ /opt/cortex/ops/scripts/ops-cortex-init.sh, /dev/sdh, /dev/sdi ]
You can also provision the whole thing using Terraform; check our GitHub repository for detailed sample code.
Launching an instance with existing data (Cortex update, migration, restore)
- Launch an instance from the AMI and base the additional EBS volumes (/dev/sdh and /dev/sdi by default) on existing Cortex data and Docker volume snapshots.
- SSH into the instance with the ubuntu user.
- Launch the Cortex restore script with the EBS data volumes' block device names as arguments, which are /dev/sdh and /dev/sdi if you are using a default AMI setup. If you are using a Nitro-based instance, do not use the NVMe names (like /dev/nvme1n1). Example: /opt/cortex/ops/scripts/ops-cortex-restore.sh /dev/sdh /dev/sdi
- That’s it! Cortex is now available on port 9001 (or on the custom port you had configured) with all your existing configuration, users and data. Custom analyzers and responders stored under /opt/cortexneurons are also automatically restored, and their pip requirements are reinstalled.
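A quick post-restore check, run on the instance itself and assuming curl is available:

# Cortex should answer on its default port (expect an HTTP status code such as 200 or a redirect):
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:9001
# Custom analyzers and responders restored from the backup:
ls /opt/cortexneurons/analyzers /opt/cortexneurons/responders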
Alternatively, you can easily perform step 3 by providing cloud-init user data to the AMI at launch. In the following example using an m5 instance (Nitro-based), we:
- launch the restore script with the EBS block mapping names as arguments (/dev/sdh and /dev/sdi, not /dev/nvme1n1 and /dev/nvme2n1)
#cloud-config
runcmd:
- [ /opt/cortex/ops/scripts/ops-cortex-restore.sh, /dev/sdh, /dev/sdi ]
You can also provision the whole thing using Terraform; check our GitHub repository for detailed sample code.
