STIG-compliant OL8
This document describes the process of uploading a VHD image of the Nexthink Appliance to Microsoft Azure. While Nexthink provides a set of scripts and installation packages to streamline the process, some of the required steps are manual.
Uploading the image to Blob Storage
Requirements:
A Microsoft Azure subscription, configured and primed for authentication
A storage account linked to the intended resource group
A Blob container with the intended level of Anonymous access
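The upload itself can also be done with the Azure CLI. The sketch below assumes the authentication from the requirements above is already configured; all resource names (nexthink-rg, nexthinkvhds, vhds, nexthink-appliance.vhd) are placeholders, not values from this document:

```shell
# Sketch only: all names below are assumptions; substitute your own.
RESOURCE_GROUP="nexthink-rg"
STORAGE_ACCOUNT="nexthinkvhds"
CONTAINER="vhds"
VHD_FILE="nexthink-appliance.vhd"

if command -v az >/dev/null 2>&1; then
  # Create the container, then upload the VHD as a page blob
  # (Azure requires page blobs for disk images).
  az storage container create \
    --account-name "$STORAGE_ACCOUNT" --name "$CONTAINER"
  az storage blob upload \
    --account-name "$STORAGE_ACCOUNT" --container-name "$CONTAINER" \
    --name "$VHD_FILE" --file "./$VHD_FILE" --type page
else
  echo "Azure CLI not installed; commands shown for reference only"
fi
```

Note the `--type page` flag: uploading the VHD as the default block blob will prevent Azure from using it as a disk image.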
Distributing the image
Requirements:
The VM generation is set to "Gen 1"
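Creating the image from the uploaded VHD can likewise be scripted. In this sketch the blob URL and names are assumptions; `--hyper-v-generation V1` enforces the "Gen 1" requirement above:

```shell
# Sketch only: the blob URL and names are assumptions.
RESOURCE_GROUP="nexthink-rg"
VHD_URL="https://nexthinkvhds.blob.core.windows.net/vhds/nexthink-appliance.vhd"

if command -v az >/dev/null 2>&1; then
  # Create a Gen 1 managed image from the uploaded VHD.
  az image create \
    --resource-group "$RESOURCE_GROUP" \
    --name nexthink-appliance-image \
    --source "$VHD_URL" \
    --os-type Linux \
    --hyper-v-generation V1
else
  echo "Azure CLI not installed; commands shown for reference only"
fi
```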
Creating the VM
Requirements:
An Azure Virtual Network (VNet) with a corresponding subnet
A security group configured according to the Connectivity requirements
The Networking section of the VM creation menu can fulfil both requirements.
After creating an image, select Create VM to open the VM creation menu.
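The portal workflow described in the following sections can also be approximated with a single Azure CLI call. This is a sketch only: the VM name, size, network names, and image name are all assumptions, and the portal remains the reference procedure:

```shell
# Sketch only: names, size, and image are assumptions; the portal
# workflow described in this document achieves the same result.
VM_NAME="nexthink-portal"

if command -v az >/dev/null 2>&1; then
  az vm create \
    --resource-group nexthink-rg \
    --name "$VM_NAME" \
    --image nexthink-appliance-image \
    --size Standard_D4s_v3 \
    --admin-username nexthink \
    --generate-ssh-keys \
    --vnet-name nexthink-vnet --subnet default \
    --nsg nexthink-nsg
else
  echo "Azure CLI not installed; commands shown for reference only"
fi
```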

Configuring the VM
Basic configuration
Nexthink recommends setting up remote access to your VM using SSH keys. Create a new user or use the nexthink default. Set inbound port rules to None. You will configure these through the security group.
Set license type to Other.
Disk configuration
Set OS disk size to 64GB.
Data disk size must be configured according to the Azure hardware requirements.
Set source type to None (Empty disk).
Network configuration
Create a VNet, subnet, and public IP.
Set NIC network security group to Advanced.
Select the previously created security group, or create a new one following the same requirements.
Change the rest of the settings as needed.
Review and create to finalize the process.
When creating a new set of SSH credentials, Azure prompts you to store the private key. Be sure to do so, as this information cannot be recovered later.
Run the following command to store the private key in your local .ssh folder, for example .ssh/nexthink_portal.pem. The file must be accessible only to the current user:
chmod 400 .ssh/nexthink_portal.pem
Configuring a static private IP
The VHD installation process requires a default dynamic private IP, which must be made static to prevent DHCP lease expiration and subsequent connectivity issues.
From the Virtual Machines tab, select your VM.
Select Networking > Networking Settings.
Select the primary network interface.
Select Settings > IP configurations.
Select the default IP configuration assigned to this interface.
Select Static and insert an IP address that belongs to the VNet's network. You can use the IP given by the DHCP server.
Save and wait for Azure to finish configurations.
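The same change can be made from the Azure CLI. In this sketch the resource group and NIC names are assumptions; `ipconfig1` is the default name Azure gives the primary IP configuration, and supplying `--private-ip-address` switches the allocation to static:

```shell
# Sketch only: group and NIC names are assumptions.
RESOURCE_GROUP="nexthink-rg"
NIC_NAME="nexthink-portal-nic"
PRIVATE_IP="10.0.0.4"   # reuse the address the DHCP server assigned

if command -v az >/dev/null 2>&1; then
  az network nic ip-config update \
    --resource-group "$RESOURCE_GROUP" \
    --nic-name "$NIC_NAME" \
    --name ipconfig1 \
    --private-ip-address "$PRIVATE_IP"
else
  echo "Azure CLI not installed; commands shown for reference only"
fi
```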
Avoid editing network configurations through the Webconsole as you would on a local machine. Doing so can cause loss of connectivity and render the VM unusable, since SSH is the only means of access.
Configuring the VM disks
Run the following command to access the VM from the Nexthink image in your Azure Compute Gallery (Gallery) using SSH:
ssh -i .ssh/nexthink_portal.pem nexthink@<vm_external_ip>
#NB: replace nexthink user if you picked a different one during VM creation
Run the
lsblk
command to ensure the system can use the entire disk. An example output:
[nexthink@test-azure-ol8 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 500M 0 part /boot
├─sda3 8:3 0 100M 0 part /boot/efi
├─sda4 8:4 0 1K 0 part
└─sda5 8:5 0 15G 0 part
├─nxt-var 253:0 0 2.2G 0 lvm /var
├─nxt-root 253:1 0 4.9G 0 lvm /
├─nxt-var_tmp 253:2 0 1G 0 lvm /var/tmp
├─nxt-var_log_audit 253:3 0 2G 0 lvm /var/log/audit
├─nxt-var_log 253:4 0 2G 0 lvm /var/log
├─nxt-home 253:5 0 1G 0 lvm /home
├─nxt-tmp 253:6 0 1G 0 lvm /tmp
└─nxt-swap 253:7 0 1G 0 lvm [SWAP]
sdb 8:16 0 128G 0 disk
sdc 8:32 0 16G 0 disk
└─sdc1 8:33 0 16G 0 part /mnt/resource
sr0 11:0 1 628K 0 rom
In this example, sda is the OS disk, sdb is the data disk, and sdc is a temporary resource disk managed by Azure. The order of these disks can change with each appliance build, so always confirm the device names with lsblk before proceeding.
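Because the device order can change, it is safer to pick the data disk by size than by name. The sketch below parses sample lsblk output for illustration; on the appliance, feed it the real output of `lsblk -dn -o NAME,SIZE`, and note that the 128G size is taken from this example and must match your own data disk sizing:

```shell
# Identify the data disk by its size instead of assuming /dev/sdb.
# Sample output for illustration; on the appliance use:
#   lsblk -dn -o NAME,SIZE
sample="sda 64G
sdb 128G
sdc 16G"
data_disk=$(printf '%s\n' "$sample" | awk '$2 == "128G" {print "/dev/" $1}')
echo "$data_disk"   # prints /dev/sdb for the sample above
```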
Configure the OS disk with the following commands:
# (optional) if growpart is not installed, install it:
# sudo yum install --disablerepo=* --enablerepo=ol8_baseos_latest --enablerepo=ol8_appstream cloud-utils-growpart
sudo growpart /dev/sda 4
sudo growpart /dev/sda 5
sudo pvresize /dev/sda5
sudo lvextend -r -L +3G /dev/mapper/nxt-home
sudo lvextend -r -L +3G /dev/mapper/nxt-tmp
sudo lvextend -r -L +2G /dev/mapper/nxt-var
sudo lvextend -r -L +3G /dev/mapper/nxt-var_tmp
sudo lvextend -r -L +8G /dev/mapper/nxt-var_log
sudo lvextend -r -L +8G /dev/mapper/nxt-var_log_audit
sudo swapoff /dev/nxt/swap
sudo lvextend -L +1G /dev/mapper/nxt-swap
sudo mkswap /dev/nxt/swap
sudo swapon /dev/nxt/swap
sudo lvextend -r -l +100%FREE /dev/mapper/nxt-root
Configure the data disk with the following commands:
sudo bash /root/formatDataDisk.sh /dev/sdb   # use the data disk device identified with lsblk
sudo systemctl daemon-reload
Verify the partition sizes and mounting with the following command:
df -ah
Optional step
Ensure the Azure temporary resource disk is mounted via UUID. This is important because the VM may fail to restart if Azure reorders the disk device names.
Find the UUID value of /mnt/resource with the following command:
lsblk -a -o MOUNTPOINT,UUID | grep "mnt/resource" | cut -d' ' -f3
Modify or add an entry in /etc/fstab with the UUID value returned for the /mnt/resource filesystem:
UUID=<INSERT UUID VALUE HERE> /mnt/resource ext4 rw,relatime,seclabel,nodev 0 0
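The fstab edit can be scripted so it is safe to re-run. This sketch works on a local test copy of the file; on the appliance, point FSTAB at /etc/fstab and run with sudo. The UUID value here is a placeholder for the value returned by the lsblk command above:

```shell
# Sketch: idempotently replace or append the /mnt/resource entry.
FSTAB="./fstab.test"                 # use /etc/fstab on the appliance
UUID="0000-EXAMPLE-UUID"             # placeholder: use the value from lsblk
ENTRY="UUID=$UUID /mnt/resource ext4 rw,relatime,seclabel,nodev 0 0"

# Sample existing entry, for illustration only.
printf '%s\n' "UUID=old /mnt/resource ext4 defaults 0 0" > "$FSTAB"

# Drop any existing /mnt/resource line, then append the new entry.
grep -v ' /mnt/resource ' "$FSTAB" > "$FSTAB.tmp" || true
printf '%s\n' "$ENTRY" >> "$FSTAB.tmp"
mv "$FSTAB.tmp" "$FSTAB"
cat "$FSTAB"
```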
Installing Nexthink on the VM
Use any SCP client to download the
Nexthink-offline-install-6.X.tgz
installation package onto the VM. Alternatively, visit the V6 release notes page to download the package. Create a temporary directory and change into it with the following commands:
mkdir /tmp/nexthink-install   # any temporary directory name works
cd /tmp/nexthink-install
Unpack the package with the following command:
tar -xzvf Nexthink-offline-install-6.X.tgz
Install the package contents with one of the following commands. Use the -p parameter to install the Portal or the -e parameter to install the Engine.
sudo sh installNexthinkInCloud.sh -p
sudo sh installNexthinkInCloud.sh -e
Verify the components are running with the following commands:
sudo systemctl status nxportal
nxinfo info
Change the default root password with the following command:
sudo passwd root
Notes and considerations
Compared to an on-premises installation of Nexthink, the Nexthink Appliance in Azure faces both a public connection and an internal network. For the Portal-to-Engine configuration, both the public and private IP/DNS of the machines must be configured in:
Internal and External DNS on the Webconsole parameters
Portal IP/Hostname on the Engine's Webconsole
Engine DNS name when performing Appliance federation. The same name must resolve to the Engine's:
Internal IP address from the Portal machine.
External IP address from machines running the Finder, so the Finder can access the Engine.
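Before relying on a DNS name for federation, it is worth confirming that it resolves from each side. A minimal sketch, where engine.example.com is a placeholder for your Engine's DNS name:

```shell
# Check whether a hostname resolves on this machine.
check_resolves() {
  if getent hosts "$1" >/dev/null; then
    echo "$1 resolves"
  else
    echo "$1 does NOT resolve"
  fi
}

check_resolves localhost                 # sanity check, should resolve
# check_resolves engine.example.com      # replace with your Engine's DNS name
```

Run the check from the Portal machine (expecting the internal address) and from a Finder machine (expecting the external address).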