STIG-compliant OL8

This document applies to the STIG-based Virtual Hard Disk (VHD) Appliance build. For non-STIG compliant VHDs, refer to the Installing the Appliance on Azure documentation.

This documentation references external sources. Nexthink does not control the accuracy of third-party documentation or any external updates or changes that might create inconsistencies with the information in this documentation.

This document describes the process of uploading a VHD image of the Nexthink Appliance to Microsoft Azure. While Nexthink provides a set of scripts and installation packages to streamline the process, some of the required steps are manual.

Nexthink recommends having basic knowledge of Linux to ensure proper Appliance setup. The Nexthink Appliance is public-facing and requires the Security Hardening protocol.

Uploading the image to Blob Storage

Requirements:

  • A Microsoft Azure subscription, configured and primed for authentication

  • A storage account linked to the intended resource group

  • A Blob container with the intended level of Anonymous access

Nexthink recommends using Azure Storage Explorer to upload the VHD image to the Blob container. Refer to the Upload a VHD file documentation from Microsoft.
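
If you prefer the command line over Storage Explorer, a minimal Azure CLI sketch is shown below. The storage account, container, and file names are placeholders, and it assumes you are already authenticated against your subscription; VHDs must be uploaded as page blobs:

az storage blob upload \
  --account-name <storage_account> \
  --container-name <blob_container> \
  --name nexthink-appliance.vhd \
  --file ./nexthink-appliance.vhd \
  --type page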

Distributing the image

Requirements:

  • The VM generation is set to "Gen 1"

Nexthink recommends using the Azure Compute Gallery for sharing custom VM images within your Azure organization. Refer to the Create an image definition and an image version documentation from Microsoft.
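
As a hedged Azure CLI alternative to the portal flow, the sketch below first wraps the uploaded VHD in a managed image and then publishes it to a Compute Gallery. All resource names, the publisher/offer/SKU values, and the blob URL are placeholders; the "Gen 1" requirement maps to --hyper-v-generation V1:

# create a managed image from the uploaded VHD
az image create --resource-group <rg> --name nexthink-ol8-image \
  --os-type Linux --hyper-v-generation V1 \
  --source https://<storage_account>.blob.core.windows.net/<blob_container>/nexthink-appliance.vhd

# create the image definition in the gallery
az sig image-definition create --resource-group <rg> --gallery-name <gallery> \
  --gallery-image-definition nexthink-ol8 --publisher Nexthink --offer appliance --sku ol8-stig \
  --os-type Linux --hyper-v-generation V1

# publish an image version based on the managed image
az sig image-version create --resource-group <rg> --gallery-name <gallery> \
  --gallery-image-definition nexthink-ol8 --gallery-image-version 1.0.0 \
  --managed-image nexthink-ol8-image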

Creating the VM

Requirements:

  • An Azure Virtual Network (VNet) with a corresponding subnet

  • A security group configured according to the Connectivity requirements

You can fulfil both requirements from the Networking section of the VM creation menu.
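
If you would rather create these resources in advance with the Azure CLI, a minimal sketch follows. Names and address ranges are placeholders, and the security rules themselves must still follow the Connectivity requirements:

az network vnet create --resource-group <rg> --name nexthink-vnet \
  --address-prefixes 10.0.0.0/16 --subnet-name nexthink-subnet --subnet-prefixes 10.0.0.0/24

az network nsg create --resource-group <rg> --name nexthink-nsg
# add one "az network nsg rule create" entry per port listed in the Connectivity requirements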

Check the Appliance VM hardware requirements before creating Nexthink Portal or Engine instances in Azure. Refer to the Hardware Requirements PDF file from the Installing the Appliance on Azure documentation.

After creating the image, select Create VM.

Configuring the VM

Basic configuration

  1. Nexthink recommends setting up remote access to your VM using SSH keys. Create a new user or use the default nexthink user. To supply your own key pair instead of generating one in Azure, see the sketch after this list.

  2. Set inbound port rules to None. You will configure these through the security group.

  3. Set license type to Other.
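
If you prefer to provide your own key pair rather than letting Azure generate one, you can create it locally and paste the contents of the .pub file into the VM creation form. The file name below is only an example:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/nexthink_portal -C "nexthink-appliance"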

Disk configuration

  1. Set OS disk size to 64GB.

  2. Data disk size must be configured according to the Azure hardware requirements.

    • Set source type to None (Empty disk).

Network configuration

  1. Create a VNet, subnet, and public IP.

  2. Set NIC network security group to Advanced.

  3. Select the previously created security group, or create a new one following the same requirements.

  4. Change the rest of the settings as needed.

  5. Review and create to finalize the process.

  6. Store the private key in your local .ssh folder, for example as .ssh/nexthink_portal.pem, and run the following command so that only the current user can access the file:

chmod 400 .ssh/nexthink_portal.pem

Configuring a static private IP

By default, the VHD installation process assigns a dynamic private IP. Make it static to prevent DHCP lease expiration and subsequent connectivity issues.

  1. From the Virtual Machines tab, select your VM.

  2. Select Networking > Networking Settings.

  3. Select the primary network interface.

  4. Select Settings > IP configurations.

  5. Select the default IP configuration assigned to this interface.

  6. Select Static and insert an IP address that belongs to the VNet's network. You can use the IP given by the DHCP server.

  7. Save and wait for Azure to finish configurations.
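
The same change can be made with the Azure CLI. A minimal sketch, assuming the default IP configuration name ipconfig1 and placeholder resource names; supplying --private-ip-address switches the allocation method to static:

az network nic ip-config update --resource-group <rg> --nic-name <nic_name> \
  --name ipconfig1 --private-ip-address <current_dhcp_ip>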

Configuring the VM disks

  1. Run the following command to access the VM created from the Nexthink image in your Azure Compute Gallery (Gallery) using SSH:

ssh -i .ssh/nexthink_portal.pem nexthink@<vm_external_ip>
#NB: replace nexthink user if you picked a different one during VM creation
  2. Run the lsblk command to check the current disk layout and confirm that the system can use the entire disk. An example output:

[nexthink@test-azure-ol8 ~]$ lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                       8:0    0   64G  0 disk
├─sda1                    8:1    0    1M  0 part
├─sda2                    8:2    0  500M  0 part /boot
├─sda3                    8:3    0  100M  0 part /boot/efi
├─sda4                    8:4    0    1K  0 part
└─sda5                    8:5    0   15G  0 part
  ├─nxt-var             253:0    0  2.2G  0 lvm  /var
  ├─nxt-root            253:1    0  4.9G  0 lvm  /
  ├─nxt-var_tmp         253:2    0    1G  0 lvm  /var/tmp
  ├─nxt-var_log_audit   253:3    0    2G  0 lvm  /var/log/audit
  ├─nxt-var_log         253:4    0    2G  0 lvm  /var/log
  ├─nxt-home            253:5    0    1G  0 lvm  /home
  ├─nxt-tmp             253:6    0    1G  0 lvm  /tmp
  └─nxt-swap            253:7    0    1G  0 lvm  [SWAP]
sdb                       8:16   0  128G  0 disk
sdc                       8:32   0   16G  0 disk
└─sdc1                    8:33   0   16G  0 part /mnt/resource
sr0                      11:0    1  628K  0 rom

In this example, sda is the OS disk, sdb is the data disk, and sdc is a temporary operations disk managed by Azure. The order of these devices can change with each appliance build.

  3. Configure the OS disk with the following commands:

# (optional) if growpart is not installed, install it:
# sudo yum install --disablerepo=* --enablerepo=ol8_baseos_latest --enablerepo=ol8_appstream  cloud-utils-growpart
sudo growpart /dev/sda 4
sudo growpart /dev/sda 5
sudo pvresize /dev/sda5
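# extend each logical volume; the -r flag also grows the filesystem inside it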
sudo lvextend -r -L +3G /dev/mapper/nxt-home
sudo lvextend -r -L +3G /dev/mapper/nxt-tmp
sudo lvextend -r -L +2G /dev/mapper/nxt-var
sudo lvextend -r -L +3G /dev/mapper/nxt-var_tmp
sudo lvextend -r -L +8G /dev/mapper/nxt-var_log
sudo lvextend -r -L +8G /dev/mapper/nxt-var_log_audit
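# swap cannot be resized online: deactivate it, extend the volume, then re-create and re-enable it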
sudo swapoff /dev/nxt/swap
sudo lvextend -L +1G /dev/mapper/nxt-swap
sudo mkswap /dev/nxt/swap
sudo swapon /dev/nxt/swap
sudo lvextend -r -l +100%FREE /dev/mapper/nxt-root
  4. Configure the data disk with the following commands:

sudo bash /root/formatDataDisk.sh /dev/sdc
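# NB: adjust /dev/sdc above if your data disk has a different device name (sdb in the earlier lsblk example); ordering can vary between builds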
sudo systemctl daemon-reload
  5. Verify the partition sizes and mounting with the following command:

df -ah

Optional step

Ensure the Azure temporary resource disk is mounted by UUID. This is important because the VM may fail to start if Azure changes the disk device names.

  1. Find the UUID value of /mnt/resource with the following command:

lsblk -a -o MOUNTPOINT,UUID | grep "mnt/resource" | cut -d' ' -f3
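# alternative: print the UUID directly with blkid (device name taken from the example above, adjust as needed)
# sudo blkid /dev/sdc1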
  2. Modify or add an entry into /etc/fstab with the UUID value returned for the /mnt/resource filesystem:

UUID=<INSERT UUID VALUE HERE>        /mnt/resource         ext4        rw,relatime,seclabel,nodev 0 0
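
To confirm the new entry is valid without rebooting, you can re-read /etc/fstab and check the mount point; this verification step is a suggestion rather than part of the original procedure:

sudo mount -a            # re-reads /etc/fstab; any error indicates a bad entry
findmnt /mnt/resource    # confirms the filesystem is mounted at the expected path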

Installing Nexthink on the VM

  1. Use any SCP client to copy the Nexthink-offline-install-6.X.tgz installation package onto the VM, as shown in the example below. Alternatively, download the package directly from the V6 release notes page.
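
A minimal example, assuming the package is in your local working directory and you connect with the nexthink user and the key from the previous steps:

scp -i .ssh/nexthink_portal.pem Nexthink-offline-install-6.X.tgz nexthink@<vm_external_ip>:/tmp/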

  2. Create a temporary directory and change into that directory with the following command:

mkdir /tmp/nexthink-install && cd /tmp/nexthink-install
# NB: the directory name is an example; copy or move the downloaded package into this directory before unpacking
  3. Unpack the package with the following command:

tar -xzvf Nexthink-offline-install-6.X.tgz
  4. Install the package contents with the following commands. Use the -p parameter to install the Portal or the -e parameter to install the Engine.

Portal:
sudo sh installNexthinkInCloud.sh -p
Engine:
sudo sh installNexthinkInCloud.sh -e
  5. Verify the components are running with the following command:

Portal:
sudo systemctl status nxportal

Alternatively, connect to the public IP to verify if the Portal is running. Monitor the logs in /var/nexthink/portal/logs.

Engine:
nxinfo info
  6. Change the default root password with the following command:

sudo passwd root

Notes and considerations

Unlike an on-premises installation of Nexthink, the Nexthink Appliance faces both a public connection and an internal network. For the Portal-to-Engine configuration, both the public and private IP/DNS of the machines must be configured in:

  • Internal and External DNS on the Webconsole parameters

  • Portal IP/Hostname on the Engine's Webconsole

  • Engine DNS name when performing Appliance federation. The same name must resolve to the Engine's:

    • Internal IP address from the Portal machine.

    • External IP address from the Finder, so that the Finder can access the Engine.
