Installation and Configuration Guide
Introduction
This document provides comprehensive information on the installation and configuration of the Nexthink Event Connector, as well as basic maintenance guidelines, and describes in detail the processes needed for a successful installation.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us via the Nexthink support portal.
This document is intended for readers with a detailed understanding of Nexthink technology and of the Splunk, ServiceNow or Azure platforms, as well as some understanding of concepts such as REST messages, the Linux command line and basic security terms.
These configuration instructions should be executed by a Splunk, ServiceNow or Azure certified professional.
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.
Version: 1.5.0
Last Revision: 16/10/2023
Overview
Nexthink Event Connector makes it possible for Nexthink customers to integrate end-user IT data (failed connections, system crashes, etc.) into the Splunk, ServiceNow or Azure Data Lake Storage Gen2 platforms.
You configure the Nexthink Event Connector service by specifying in the service configuration files which data will be sent to the target endpoint and which Nexthink Engines it is retrieved from. Some minimal configuration may also be required on the Splunk, ServiceNow or Azure Data Lake Storage Gen2 instance into which the data will be injected.
Main Components
The Nexthink Event Connector requires several components to be fully operative:
A running implementation of the Nexthink product. At least one instance of Nexthink Engine V6.30 or later is required in order to import data into the target endpoint via the Nexthink Web API V2.0.
A machine with Oracle Linux 8 (OL8) on which to install the connector service. This machine must be able to reach both the Engine(s) from which the data is retrieved and the target endpoint. A separate, dedicated Nexthink appliance with Oracle Linux 8 is recommended; it should not be one of the machines already in use as Nexthink Portal or Engine. The machine must have access to remote or local Oracle Linux standard repositories, in case any dependencies need to be resolved when installing the Nexthink Event Connector service. A proxy with either None or Basic authentication is supported. For usage, please refer to the Installation and Configuration sections.
One of the following target endpoints:
Splunk Instance 6.5 or later conveniently configured to receive Nexthink data sent by the connector. In particular, the following items must be present:
HTTP Event Collector (HEC) standard Splunk application enabled and a previously generated token to communicate with it.
Nexthink Add-on for Splunk. Not strictly necessary, but highly recommended. It contains a set of data models to map the data sent by the connector service. These data models make it possible to manage the gathered information using powerful Splunk built-in techniques (Pivot, etc).
ServiceNow Instance London or later conveniently configured to receive Nexthink data sent by the connector. In particular, the following items must be present:
Event Management plugin installed and activated.
User with the role Event Management Integrator [evt_mgmt_integration].
Azure Data Lake Storage Gen2 instance configured and ready to receive data:
Azure Web App, properly configured (it is used to authenticate against the container). Refer to the section Creation of an Azure App for further information.
Azure Storage Account of the StorageV2 type with Hierarchical namespace enabled. Refer to the section Creation of an Azure Storage Account for further details.
Azure Container created in the storage account with proper permissions for the Azure App previously created. Refer to the section Creation of an Azure Container for further details.
Target Endpoint configuration
Splunk Set-up
Enabling the Splunk HEC (HTTP Event Collector)
The following instructions are adapted from the official Splunk documentation article Set up and use HTTP Event Collector in Splunk Web:
HTTP Event Collector (HEC) must be enabled to receive events through HTTP before it can be used. For Splunk Enterprise, enable HEC through the Edit Global Settings dialog box:
Click Settings > Data Inputs.
Click HTTP Event Collector.
Click Global Settings.
In the All Tokens toggle button, select Enabled.
Select Default Source Type.
(Optional) To have HEC listen and communicate over HTTPS rather than HTTP, click the Enable SSL checkbox.
(Optional) Enter a number in the HTTP Port Number field for HEC to listen on. Confirm that there is no firewall blocking the port number specified in the HTTP Port Number field, either on the client-side or the Splunk instance that hosts HEC.
Click Save.
HEC Token
To use HEC, it is necessary to configure at least one token.
Click Settings > Data Inputs.
Click HTTP Event Collector.
Click New Token.
In the Name field, enter a name for the token.
If it is necessary to enable indexer acknowledgment for this token, click the Enable indexer acknowledgment checkbox.
Click Next.
Confirm the source type and the index for HEC events.
Click Review.
Confirm that all settings for the endpoint are correct.
If they are, click Submit. Otherwise, click < to make changes.
Copy the token value that Splunk Web displays and paste it into another document for reference later.
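Optionally, the token can be verified right away with a plain HTTP request against the collector endpoint. The command below is only a hedged sketch: the host and token are placeholders, 8088 is Splunk's default HEC port, and -k skips certificate validation for self-signed certificates.
curl -k "https://<splunk-host>:8088/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "nexthink event connector connectivity test"}'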
Nexthink Add-on for Splunk
Click App: Search & Reporting > Find More Apps.
Type Nexthink Add-on for Splunk in the top-left search box.
Click Install to install the add-on.
ServiceNow
Event Management plugin
The following instructions are adapted from the official ServiceNow documentation.
Procedure:
In the HI Service Portal, click Service Requests > Activate Plugin.
Fill out the form.
Field | Description |
---|---|
Target instance | Instance on which to activate the plugin. |
Plugin name | Name of the plugin to activate. |
Specify the date and time you would like this plugin to be enabled | Date and time must be at least 2 business days from the current time. Note: Plugins are activated in two batches each business day in the Pacific time zone, once in the morning and once in the evening. If the plugin must be activated at a specific time, enter the request in the Reason/Comments field. |
Reason/Comments | Provide any information that would be helpful for the ServiceNow personnel activating the plugin, for example, if you need the plugin activated at a specific time instead of during one of the default activation windows. |
Click Submit.
To install the Event Management plugin in a developer instance, go to the ServiceNow developer portal, select Manage Instance > Action > Activate plugin and look for Event Management.
Azure Data Lake Storage Gen2
Download and install Azure Storage Explorer
As a prerequisite, you will need to download and install the Desktop version of Azure Storage Explorer from https://azure.microsoft.com/en-us/features/storage-explorer/
Once installed, nothing needs to be done with it until a later step.
Creation of an Azure App
Register server-side web app
The following instructions are adapted from the official Azure Active Directory documentation at https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app
Connect to Azure portal with your account.
Navigate to Azure Active Directory > App registrations > New registration.
Add a name and select the Web type.
Click on Register.
Enable Service Principal for the app
The Overview tab of the newly created application contains details such as the Service Principal, which can be found under the Managed application in local directory item.
Make sure the Service Principal is enabled. Depending on how the app was created, you may need to create a Service Principal for it; follow the official Microsoft documentation.
Create a client secret for the app.
Navigate to the Certificates & secrets tab and create a New client secret. Make sure to note it down as you will need it later.
Creation of an Azure Storage Account
The following instructions are adapted from the official Microsoft Azure Data Lake Storage Gen2 documentation.
Log in to the Azure Portal.
Search for Storage Accounts.
Click on the Add button.
Choose a Resource group or create one.
In the Storage account name field, enter a name for your storage account (the name must be unique).
Make sure the Account kind is StorageV2.
Click on the Advanced tab.
Enable Hierarchical namespace for Data Lake Storage Gen2.
Click the Review + Create button.
Creation of an Azure Container
The following instructions are adapted from the official Microsoft Azure Data Lake Storage Gen2 documentation.
Access your new Storage Account under the Storage Accounts section.
Click Containers in the left menu.
Click the Container button.
In the Name field, enter the name of the container you want to create.
Click the Create button.
Configure permissions for container:
Open Azure Storage Explorer (the Desktop version installed earlier).
Look for your Storage Account and expand it.
Expand Blob containers.
Right-click your recently created container.
Click Manage Access in the context menu.
Click Add.
Type the user that you want to add and click Search.
Select that user and click Add.
Set the required permissions.
Click on OK.
Installing the Connector Service
Once a machine with Oracle Linux 8 is set up to host the connector service, the Event Connector rpm package can be easily installed from a terminal session, as a user with administrative privileges, by executing the following command:
$ rpm -Uvh nxeventconnector-x.x.x-x.el7.noarch.rpm --nosignature
Please note that x.x.x-x specifies the package version for the connector service rpm to be installed.
During a new installation, you will be prompted to state which back-end tool will be the target of the Event Connector, so that the proper configuration files are copied to the target directory.
Only one type of connector (Splunk, ServiceNow or Azure Data Lake Storage Gen2) can be installed on the same appliance.
Do not use CTRL+C to abort the installation process as it might leave the installation in an unstable state.
If the installation has been aborted, the following command will allow the Event Connector to be installed again.
$ rpm -e nxeventconnector
Installation with proxy
To perform a proper installation through a proxy, pip (the Python package manager) should be configured to use the proxy. To do so, create or update the file pip.conf in /etc (/etc/pip.conf) with the following content:
[global]
proxy = <protocol>://<user>:<password>@<host>:<port>
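For example, assuming a hypothetical proxy reachable at proxy.example.com on port 3128 with Basic authentication, /etc/pip.conf would contain:
[global]
proxy = http://proxyuser:proxypassword@proxy.example.com:3128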
Installation in CentOS 7 (deprecated)
Alternatively, if an old appliance based on CentOS 7 is used, the Event Connector rpm package can be installed using a terminal session by a user with administrative privileges by executing the following command:
$ yum install nxeventconnector-x.x.x-x.el7.noarch.rpm
Configuring the Connector Service
General Configuration
The file with the general configuration for the connector service is located at /etc/nxeventconnector/config.conf by default. This file can be edited by the root and nxeventconnector system users to adapt the configuration. The file content may look similar to the following:
The file continues with one of the following sections, depending on the back-end:
For Splunk:
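(Illustrative sketch only; the section name and all values are placeholders, and the attributes are described in the Splunk table further down.)
[SPLUNK]
URI = https://<splunk-hec-host>:8088
token = <hec-token-guid>
ack_indexer = false
index = main
max_records_single_push = 1000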
For ServiceNow:
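(Illustrative sketch only; the section name and all values are placeholders, and the attributes are described in the ServiceNow table further down.)
[SERVICENOW]
URI = https://<instance>.service-now.com
login = <user-with-evt_mgmt_integration-role>
password = <password>
max_records_single_push = 500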
For Azure Data Lake Storage Gen2:
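(Illustrative sketch only; the section name and all values are placeholders. The URI shown follows the standard Data Lake Storage Gen2 DFS endpoint form.)
[AZURE]
URI = https://<storage-account>.dfs.core.windows.net
tenant_id = <azure-ad-tenant-id>
client_id = <app-client-id>
client_secret = <app-client-secret>
filesystem = <container-name>
max_records_single_push = 1000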
Below are more detailed descriptions and possible values for each attribute:
Attribute | Description | Values |
---|---|---|
log_conf_file | Path where the service log configuration file is stored | String with a file path |
log_file | Path where the service log file itself is stored | String with a file path |
log_level | Log level configured for the service logger | CRITICAL, ERROR, WARNING, INFO, DEBUG or NOTSET |
log_format | Format of the log messages | String with a proper format for the Python Logger class |
verify_cert_engine | Check for self-signed certificate in the Nexthink Engine | true/false |
verify_cert_target | Check for self-signed certificate in the back-end instance | true/false |
proxy_enabled | Proxy enabled | true/false |
proxy_server | URI of the proxy | String |
proxy_auth_type | Proxy authentication type. Only None or Basic authentication is supported | None/Basic |
proxy_user | User for the proxy. Only with Basic auth. | String |
proxy_password | Password for the proxy user. Only with Basic auth. | String |
[ENGINES] section | Information about every Engine registered on the service. Note: the default port is 1671 for on-premises (V6.x) and 443 for Cloud | Includes the Engine name, Web API 2.0 endpoint and credentials, as well as its standard timezone. If either the [NEXTHINK] or [NEXTHINK_OAUTH] section is defined, credentials should not be included. |
Specific configuration for the authentication mechanism against Nexthink APIs. It can be configured in either the [NEXTHINK] section (for Basic) or the [NEXTHINK_OAUTH] section (for OAuth).
[NEXTHINK] section:

Attribute | Description | Values |
---|---|---|
uri | URI of the Nexthink Portal | String |
username | Username of a Nexthink account with permissions to make use of the Web API (NXQL) and the List Engines API | String |
password | Password of the previous Nexthink account | String |
[NEXTHINK_OAUTH] section. Please note that using OAuth is only supported starting from V6.30.8/2021.9.

Attribute | Description | Values |
---|---|---|
oauth_provider_uri | Nexthink Cloud endpoint | String |
oauth_client_id | Client ID with scopes to make use of the Web API (NXQL) and the List Engines API. Should be requested through Support | String |
oauth_client_secret | Secret for the previous client ID. Should be requested through Support | String |
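As a hedged illustration, the OAuth alternative to the [NEXTHINK] section might look as follows; all values are placeholders, and the client ID and secret are obtained through Support:
[NEXTHINK_OAUTH]
oauth_provider_uri = https://<nexthink-cloud-endpoint>
oauth_client_id = <client-id>
oauth_client_secret = <client-secret>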
Specific configuration for the Splunk integration:
Attribute | Description | Values |
---|---|---|
URI | URI of the Splunk HEC | String |
token | Splunk HEC token. It must be consistent with the token generated in HEC Token. | String with the GUID associated with the HEC token |
ack_indexer | Splunk indexer acknowledgment enabled. It must be consistent with the selection in HEC Token. | true/false |
index | Splunk index where the events will be stored. It must be consistent with the index selected in HEC Token. Note that this can be whatever index name the Splunk administrator has chosen and enabled in the system (i.e. main, nexthink, my_personal_index, etc.) | String with the index |
max_records_single_push | Maximum number of records to be streamed to Splunk in one single push | Integer |
Specific configuration for the ServiceNow integration:
Attribute | Description | Values |
---|---|---|
URI | URI of the ServiceNow endpoint | String |
login | Username with the role Event Management Integrator [evt_mgmt_integration] | String |
password | Password for the user | String |
max_records_single_push | Maximum number of records to be streamed to ServiceNow in one single push | Integer |
Specific configuration for the Azure Data Lake Storage Gen2 integration:
Attribute | Description | Values |
---|---|---|
URI | URI of the Data Lake instance | String |
tenant_id | ID of the application's Azure Active Directory tenant | String |
client_id | Client ID of the Azure app created in the section Creation of an Azure App | String |
client_secret | Client secret of the Azure app created in the section Creation of an Azure App | String |
filesystem | Container where the events will be stored | String |
max_records_single_push | Maximum number of records to be streamed to Azure Data Lake Storage Gen2 in a single push | Integer > 0 |
Event Configuration
The file with the event configuration for the connector service is located at /etc/nxeventconnector/events.conf by default. This file can be edited by the root and nxeventconnector system users to adapt the configuration. The file content may be similar to the following:
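(Illustrative sketch only: the section name [DEVICE_BOOT] is taken from the attribute table below, the values are placeholders, and the query and mapping must be taken from the shipped sample file.)
[DEVICE_BOOT]
mode = punctual
frequency = 15
delay = 5
platforms = windows
# query: NXQL query retrieving the event data (placeholder, see the sample file)
# query = (select (...) (from ...))
# mapping: dictionary mapping each field tag in the query to the Nexthink and
# back-end data model fields (placeholder, see the sample file)
# mapping = {...}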
For clarity, different types of events are usually listed separately in the file. Punctual events are those whose lifetimes only span the instant when they occur, while long-lasting events can report several updates in addition to the instant they were created. Listing events are those events intended for reporting purposes and are able to query any type of information from the Nexthink database (objects, events, object with event decoration, etc).
Here is a more detailed description and possible values for each attribute:
Attribute | Description | Values |
---|---|---|
Section names (e.g. [DEVICE_BOOT]) | Event name | String with the section named as an event |
mode | Event mode | long_lasting, punctual, listing or listing_advanced |
timestamp | Time of the event to be used as its timestamp. This is only allowed in listing_advanced mode with event-related queries. | time, start_time or end_time, depending on the event table being queried |
query | NXQL query to perform in order to retrieve the data. Mapping and date dynamic fields are enclosed in <> | String with the proper query to retrieve the Nexthink data |
mapping | Dictionary with the mapping associated with each field tag specified in the query. For each field tag, the associated Nexthink and back-end data model fields are specified | String with the proper mapping |
frequency | Minutes between consecutive event data retrievals. It must be greater than or equal to its delay. | Integer > 0 |
severity | For ServiceNow only. The severity of the event. | Integer between 0 and 5 |
description | For ServiceNow only. A reason for event generation. Shows additional details about an issue, for example, a server stack trace or details from a monitoring tool. This field has a maximum length of 4000. | String |
directory | For Azure Data Lake Storage Gen2 only. Path of the base directory that will be created and will contain the CSV files. Note: a sub-directory with the name of the event will be automatically added under the base directory. Complex directory paths are also allowed. | String with the directory name |
date_folders | For Azure Data Lake Storage Gen2 only. Boolean indicating whether a hierarchical date folder structure must be created | true/false |
delay | Minutes of delay to be considered when retrieving event data | Integer >= 0 |
platforms | Tuple with the operating systems that queried devices must belong to | windows, mac_os or mobile |
Note that, if a given dynamic field (like scores, categories, outputs of Remote Actions, etc.) is going to be used in the mapping, the double quotes must be escaped with a backslash ‘\’. Please see the example below:
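For instance, a mapping entry referencing a hypothetical category named Location could look roughly like the line below. The tag name, category name and dictionary layout are purely illustrative; the point is the backslash-escaped double quotes around the dynamic field name:
# Purely illustrative; tag, category name and layout are placeholders
mapping = {"<location>": {"nexthink": "#\"Location\"", "target": "device_location"}}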
Also, as the ServiceNow events table has some columns with special metadata meaning, there are prefixed tags that can be prepended to the column name in the mapping parameter (in the format servicenow_ followed by the column name), clearly stating that the desired Nexthink field will be mapped to one of those special columns in ServiceNow. The columns that can be used with the prefix are:
node: device identifier (name, FQDN, IP or MAC address). If it is not used, the tag will be empty.
resource: application, disk, device, etc. If it is not used, the tag will be empty.
metric_name: if not used, the tag will be populated with the name of the triggering event (i.e., the event name stated as the section header in the events.conf file).
type: if not used, the tag will be populated with the list of tables targeted by the underlying NXQL query.
Logrotate
The Nexthink Event Connector service comes with a default configuration for the logrotate Linux service, in case logrotate is installed. The aim of this service is to prevent the log file from growing indefinitely by rotating it periodically, so that only a limited number of older log files is kept. That configuration can be found at /etc/logrotate.d/nxeventconnector:
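A configuration matching the behaviour described below would look roughly like this; the log file path is an assumption and must match the log_file setting in config.conf:
/var/log/nxeventconnector/nxeventconnector.log {
    size 2M
    rotate 5
    compress
    missingok
    notifempty
}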
This configuration is executed daily and basically rotates the log file when it reaches 2 MB. It will compress the rotated log file and keep a maximum of 5 rotated files. In case any modification is needed, check the man page by typing in a Linux terminal:
man logrotate
Please note that if the logrotate service is not installed along with the Nexthink Event Connector service, some manual maintenance of the log file should be done periodically so that it does not grow excessively large.
Connector Modes – Selecting the appropriate one
Information Units
In the NXQL Data Model of Nexthink two different types of information units exist: events and objects. An event is an occurrence that happens at a defined moment in time, thus having a timestamp. There are several types of events that are at the core of Nexthink technology. These events are the basic unit of information. Each type of event is linked to a well-defined set of objects. Objects, for their part, represent items recognized by Nexthink.
It is vital not to confuse Nexthink events with Splunk or ServiceNow events. Depending on the type, one Nexthink event can be reported several times (updated) to Splunk or ServiceNow, thus creating several Splunk or ServiceNow events for a single Nexthink event.
Connector Modes
The Nexthink Event Connector recognizes the difference between these information units and provides four different modes for reporting the data. As discussed in the High level overview guide, these modes are long_lasting, punctual, listing and listing_advanced, each intended for a specific purpose. Therefore, choosing the mode which best suits your needs is a key aspect of configuring the connector. The following provides more detailed explanations of the differences between the modes:
Long-lasting: Available only for Splunk, the main goal of this mode is to keep track of all the information related to a given durable event. This mode only takes into consideration events either starting or ending during the time window of interest. As the lifetime of long-lasting events is not just an instant but a period, this mode reports several updates for the given event, in addition to reporting data at the instant when the event is created. More precisely, this mode sends a long_lasting_started event to Splunk for each record retrieved by the NXQL query if the event was initiated during the time window of interest. The timestamp of this event corresponds to the Nexthink initial timestamp (start_time field) of the event. In addition to this initial event, one long_lasting_updated event is sent to Splunk along each of the subsequent time windows as long as the Nexthink event is still alive during those windows. Its timestamp is set to the Nexthink current timestamp (end_time field) of the event. Be aware that the Nexthink end_time field is modified after each Collector update to the Engine.
Punctual: This mode is intended to report one-time events, i.e., those events whose life span is just the instant when they occur. As is the case with the previous mode, it only takes into consideration those events occurring during the time window of interest. Therefore, a single punctual event is sent to Splunk/ServiceNow/Azure Data Lake for each record retrieved by the NXQL query. The timestamp of this event corresponds to the exact Nexthink timestamp when the event occurred (start_time or time fields).
Listing: This mode is mainly dedicated to reporting or inventory purposes, and is therefore able to query any type of information from the Nexthink database (objects, events, objects with event decoration, aggregates, etc.). This mode does not perform any time-related checks, so any desired date filtering must be explicitly stated in the NXQL query. As inventory is the main goal, all Nexthink records retrieved by the NXQL query are sent to Splunk/Azure Data Lake with the same timestamp, i.e., the moment when the query was launched.
Listing advanced: This mode is very similar to the previous one, as it does not perform any time-related checks either, forcing any desired date filtering to be explicitly stated in the NXQL query. However, the main goal of this mode is to provide event-related reports/listings. Therefore, the Nexthink event date field (time, start_time or end_time) can be selected to be used as the Splunk/ServiceNow/Azure Data Lake timestamp.
Selecting the Appropriate Mode
Now that we understand the difference between modes, it is time to properly configure the service. Below are some useful tips to help in choosing the best mode for your needs.
When it is necessary to compose a query based on aggregates, the appropriate mode would be either listing or listing_advanced. If this is not the case, you may consider choosing punctual for ServiceNow, or either punctual or long_lasting for Splunk. In order to make the correct choice, the following questions should be asked:
Based on needed information
Do you know the precise question to be answered by Nexthink before choosing the mode?
If the answer is yes, listing mode is likely enough and you will be able to configure queries appropriately. If the answer is no, the best approach is likely to rely on the event modes (long_lasting, used only for Splunk, and punctual), which can provide much more information.
Given the fact that not all modes are available for ServiceNow, it is important to delve into this question by looking at Splunk and ServiceNow cases separately.
Splunk
When doing a capacity study about your network, you might ask: how much daily traffic belongs to Dropbox? This is quite a precise question, so you could simply define an NXQL query with aggregates to retrieve that information on a daily basis, as shown in the code snippet below.
Note that using the dynamic date tags in the between clause allows the information to be obtained between the moment when the query is launched and 24 hours before. The frequency has been set to 1440 minutes, which is equivalent to 24 hours. If you need the information belonging to the entire duration of the previous day, you could simply set the between clause to midnight-1d and midnight.
However, what if you want to know: how much daily incoming traffic belongs to application X on day Y during Z hours? In this case, you could send all Nexthink connection events to Splunk at a given frequency, let's say every 5 minutes, together with the application- and device-related information associated with each connection. This way, all the information needed to answer the question above, as well as many other possible questions, will already be present in Splunk, as shown in the code snippet below.
ServiceNow
Similar to the case for Splunk, if you know exactly what you want to ask Nexthink, choosing the listing or listing_advanced modes seems to be the best option. If you know that certain devices are suffering from, say, Outlook performance issues, you can compose an aggregate query similar to what is shown in the code snippet below:
If, on the other hand, you are not sure which application is suffering from performance issues, the solution would be to configure a more generic query that would belong to the punctual mode, as shown in the code snippet below:
Based on Budget
How much can you spend on sending data?
Splunk
As Splunk pricing is based on the daily amount of data indexed by the instance, the amount of data to be sent to Splunk can impact the decision about what to report from Nexthink. If you have a limited budget for your license, it is best to use listing mode. On the other hand, customers with a bigger license capacity can weigh the additional spending against the powerful benefits of the long_lasting and punctual event modes.
ServiceNow
For ServiceNow, it is best to minimize the amount of data that is sent, since ServiceNow is not a big-data service. The guidelines discussed above for Splunk still apply here.
Azure Data Lake Storage Gen2
The purpose of Azure Data Lake is to work with massive amounts of data. The costs are related to the number of requests made, as well as the amount of data stored, so it is best to balance both.
Azure allows you to define certain storage lifecycle policies to remove old data and only keep necessary data.
You can find more information in the following links:
Tools Supporting Configuration
A set of tools is included with the Python package that contains the Nexthink Event Connector service.
Query Validator
This tool helps users check whether there are any syntax errors in the query specifications. It can be executed by typing the following command in a terminal on the Linux system where the service is installed:
nxeventconnector-check-query [-c <config_file>] [-e <events_file>]
Default general and event configuration can be overridden by passing optional input parameters. If there is no error, the terminal will print something similar to the information listed below:
If any of the generated queries contains syntax errors:
By reviewing the error message, it is clear that there is a typo in the ‘select’ statement which resulted in an incorrect response from the Engine.
Engine Availability
This tool helps users check the availability of the configured Engines. It can be executed by typing the following command in a terminal on the Linux system running the service:
nxeventconnector-check-engine
If there is no error, something similar to the following should print in the terminal:
In this case, there was only one configured Engine and it was reachable.
If the Engine had not been reachable, the command output would have looked as follows:
By reviewing the error message, it is clear that there is a connectivity problem with the Engine since maximum connection attempts were exceeded.
Check Timezone
This tool helps users check standard timezones. It can be executed by typing the following command in a terminal on the Linux system running the service:
nxeventconnector-check-tz [timezone]
If no timezone parameter is passed to the command, the output on the terminal consists of a list with all the available timezones in the system:
nxeventconnector-check-tz
If a timezone parameter is passed as an input parameter to the command, the output message should display that the timezone is correct:
nxeventconnector-check-tz Australia/Sydney
Here is a second example with an incorrect timezone:
nxeventconnector-check-tz Universe/Mars
Update Engines
This tool helps users to configure the list of engines. To do so, it will connect to a configured Nexthink Portal and will retrieve the list of Engines from there. In the event this list of Engines differs from those configured in the config.conf, the user will be asked for confirmation to replace the existing Engines configuration in the file with the new one retrieved from the Portal. For this tool to work, the config file needs to have either the NEXTHINK or NEXTHINK_OAUTH section configured with the appropriate credentials.
The tool can be executed by typing the following command in a terminal on the Linux system running the service. It is necessary to stop the Event Connector service before running the tool.
nxeventconnector-update-engines-list [-c <config_file_path>]
If no config_file_path parameter is passed to the command, the action will be performed over the default config.conf file.
nxeventconnector-update-engines-list
Running the Service
In order to run the connector service, open a terminal session on the machine where it has been installed and execute it as a user holding administrative privileges:
systemctl start nxeventconnector
While the service is up and running, it is possible to check its status by executing the next command in a terminal session:
systemctl status nxeventconnector
Once the service has been started, it is possible to stop it by executing the next command in a terminal session with a user holding administrative privileges:
systemctl stop nxeventconnector
Upgrade Considerations
In order to upgrade the application on a machine that already has it installed, simply use the command:
yum update nxeventconnector-x.x.x-x.el7.noarch.rpm
Note that if the application is upgraded, the current connector service configuration files are kept:
/etc/nxeventconnector/config.conf
/etc/nxeventconnector/events.conf
Service Dimensioning and Performance
Provisioning
The capacity of a service instance installed in a machine may vary depending on a range of variables, such as:
The number of Engines to retrieve data from.
The number of Events configured to retrieve data from all the Engines.