Dimensioning guide
This document introduces the concepts around the Nexthink Chatbot SDK, its API and its use cases. The information contained herein is subject to change without notice and is not guaranteed to be error-free. If you find any errors, please report them to us via the Nexthink Support Portal. This document is intended for readers with a detailed understanding of Nexthink technology.
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.
Dimensioning requirements
Appliance requirements
The table below lists the recommended hardware characteristics for the Chatbot SDK appliance in relation to the number of Engines and devices, together with the estimated number of concurrent user conversations that the Chatbot SDK supports within those parameters.
Size     Engines  Devices  CPUs  Disk   RAM    Concurrent conversations  Requests to Chatbot SDK per second
Small    1        20K      2     10 GB  4 GB   800                       100
Medium   20       200K     2     20 GB  8 GB   1,600                     200
Large    50       500K     4     40 GB  16 GB  3,200                     400
The number of concurrent conversations is estimated on the assumption that every conversation makes 4 requests every 30 seconds. For more information, refer to the Measurement methodology section below.
SSD storage is preferable for the disk because the local cache generates a lot of data movement.
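As an illustration, the sizing tiers above can be encoded and queried programmatically. The helper below is hypothetical and not part of the Chatbot SDK; the figures are taken from the table in this guide.

```python
# Hypothetical sizing helper. Tier figures come from the table above;
# the function itself is illustrative, not a Nexthink tool.

TIERS = [
    # (name, max Engines, max devices, CPUs, disk GB, RAM GB, concurrent conversations)
    ("Small", 1, 20_000, 2, 10, 4, 800),
    ("Medium", 20, 200_000, 2, 20, 8, 1_600),
    ("Large", 50, 500_000, 4, 40, 16, 3_200),
]

def recommend_tier(engines: int, devices: int) -> str:
    """Return the smallest tier that covers the given Engine and device counts."""
    for name, max_engines, max_devices, *_ in TIERS:
        if engines <= max_engines and devices <= max_devices:
            return name
    raise ValueError("Deployment exceeds the Large tier.")

print(recommend_tier(5, 150_000))  # a 5-Engine, 150K-device estate fits "Medium"
```

A deployment only fits a tier if it is within both the Engine and the device limit; for example, 5 Engines rules out Small even for a small device count.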
Discovery duration
The discovery process downloads the required information from the Engines to the local cache. The table below shows the estimated duration of the discovery process, which depends on the number of Engines, the number of devices, and the available bandwidth. The bandwidth between the Chatbot SDK and the Engines is critical to achieving a reasonable discovery time.
Bandwidth  Devices  Engines  Discovery duration
50 Mbps    10K      1        < 1 minute
10 Mbps    10K      1        1 minute
5 Mbps     10K      1        2 minutes
50 Mbps    200K     20       1 minute 35 seconds
10 Mbps    200K     20       3 minutes 15 seconds
5 Mbps     200K     20       5 minutes 30 seconds
50 Mbps    500K     50       4 minutes
10 Mbps    500K     50       8 minutes 15 seconds
5 Mbps     500K     50       15 minutes
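The measured durations above can be captured as a simple lookup for capacity-planning scripts. The sketch below only retrieves the measured values from this guide; it does not model the discovery process itself, and "< 1 minute" is approximated as 60 seconds.

```python
# Measured discovery durations (in seconds), keyed by (bandwidth in Mbps, devices).
# Values are taken from the table above; the lookup function is illustrative.

DISCOVERY_SECONDS = {
    (50, 10_000): 60,      # "< 1 minute", approximated as 60 s
    (10, 10_000): 60,
    (5, 10_000): 120,
    (50, 200_000): 95,
    (10, 200_000): 195,
    (5, 200_000): 330,
    (50, 500_000): 240,
    (10, 500_000): 495,
    (5, 500_000): 900,
}

def discovery_duration(bandwidth_mbps: int, devices: int) -> str:
    """Return the measured discovery duration as a human-readable string."""
    seconds = DISCOVERY_SECONDS[(bandwidth_mbps, devices)]
    minutes, rest = divmod(seconds, 60)
    return f"{minutes} min {rest} s" if rest else f"{minutes} min"

print(discovery_duration(10, 500_000))  # -> "8 min 15 s"
```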
Measurement methodology
Appliance requirements
To validate a scenario with a high number of endpoints, a custom traffic generator was used to interact with the API. The Chatbot SDK was installed on virtual machines, and the Engines were part of a performance environment running on Azure.
The hardware requirements were calculated to avoid peak memory and CPU usage at the point where the generated traffic started to stress the Engines. To avoid any impact on the Engines, the limits were set at the point where their response times began to increase.
To estimate the maximum number of concurrent conversations the following scenario was used:
For each conversation, there were four requests every 30 seconds.
With this estimation, one hundred simultaneous conversations translate to 400 requests in 30 seconds, or around 13 requests per second.
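The arithmetic above can be checked directly. This is a sketch of the conversion described in this guide, not Nexthink tooling:

```python
# Per the methodology above: each conversation issues 4 requests every 30 seconds.
REQUESTS_PER_CONVERSATION = 4
WINDOW_SECONDS = 30

def requests_per_second(conversations: int) -> float:
    """Convert a number of simultaneous conversations into a request rate."""
    return conversations * REQUESTS_PER_CONVERSATION / WINDOW_SECONDS

print(requests_per_second(100))  # ~13.3, matching "around 13 requests per second"
print(requests_per_second(800))  # ~106.7, rounded to 100 in the Small tier
```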
Discovery duration
Different tests were conducted using the same scenario. The variables with the greatest impact on the discovery duration were:
Network latency
Number of Engines
Number of devices