Persona Insight - Troubleshooting - Multiple devices on multiple engines
Multiple-Device / Multiple-Engine Scenarios
Persona Insight and the related packs that use the Persona Insight Score translate various datapoints into scores by "bucketing": each range of input values maps to a score range. For example:
Less than 2 hours (1-7200 seconds): score range 0-2
2-4 hours (7201-14400 seconds): score range 2-4
4-8 hours (14401-28800 seconds): score range 4-8
And so forth.
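As an illustration only, the bucketing above can be sketched in Python. The linear interpolation inside a bucket is an assumption for this sketch; the exact mapping Nexthink applies within a bucket is not documented here.

```python
# Illustrative sketch of score "bucketing": each range of Focus Time
# (in seconds) maps to a score range. Linear interpolation within a
# bucket is an assumption, not documented Nexthink behaviour.
BUCKETS = [
    (1, 7200, 0.0, 2.0),       # <2 hours  -> score 0-2
    (7201, 14400, 2.0, 4.0),   # 2-4 hours -> score 2-4
    (14401, 28800, 4.0, 8.0),  # 4-8 hours -> score 4-8
]

def focus_time_to_score(seconds: int) -> float:
    """Translate a Focus Time value into a score via its bucket."""
    for lo, hi, score_lo, score_hi in BUCKETS:
        if lo <= seconds <= hi:
            fraction = (seconds - lo) / (hi - lo)
            return score_lo + fraction * (score_hi - score_lo)
    return 0.0  # no matching bucket: treat as "not exhibiting the trait"

print(focus_time_to_score(14400))  # top of the 2-4 hour bucket -> 4.0
```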
The data is taken from the usage (via Focus Time) on a particular device, which is assigned to an employee. Where the employee uses multiple devices, care must be taken.
If the employee uses one device only, everything works as expected.
If the employee uses two devices that are on the same Nexthink Engine, the Focus Time values (in seconds) are aggregated, so the total number of seconds is translated into a score, which is also correct. The same principle applies to any trait (such as network traffic, execution duration, and so on).
However, if the employee uses multiple devices and these devices are attached to different Nexthink Engines, the score will be the average, not the sum, of the values across the Engines, which can lead to unexpected results.
For example, consider an employee who uses Microsoft Teams on two devices connected to two different Engines. On Engine 1 the usage is light, giving a score of 1.62. On Engine 2 the usage is heavy, giving a score of 10. Listing all Engines, however, shows a value of 5.81: the average of the two scores, not their combined usage.
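The effect can be reproduced with a small sketch, using the per-Engine scores from the example above. The point being illustrated is that the already-computed scores are averaged across Engines, rather than the underlying seconds being summed:

```python
# Two per-Engine scores for the same employee and application.
engine_scores = [1.62, 10.0]

# Within one Engine, Focus Time seconds are summed before scoring.
# Across Engines, the already-computed scores are averaged instead:
combined = sum(engine_scores) / len(engine_scores)
print(round(combined, 2))  # 5.81 - the average, not a usage-weighted total
```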
If you suspect you are experiencing this situation, there are a number of options that can help.
Use Investigations. Investigations do not have this limitation and are an equivalent way of seeing Persona-type behavior. For example, if the requirement was to see all users who use Teams for 16+ hours per week - the equivalent of the corresponding Persona Trait, which in reality just normalizes datapoint values into a score - we could create a 7-day Investigation on the Teams Focus Time.
Investigations will give you data equivalent to the score, but their limitation is that the data is static (i.e. not refreshed on a schedule). You will have to run the Investigation each time you want to retrieve the data, and Investigations cannot be used in dashboards.
Use Metrics. Metrics will also give the correct aggregated value, but they have a product limitation: they span a 24-hour period, because the Portal dashboard is calculated nightly. If this approach is taken, use Persona Trait thresholds that reflect a 24-hour period, not 7 days.
For example, if we were still looking for heavy Teams consumers, we might consider a 4-hour usage threshold in the metric.
The advantage of this mechanism is that dashboards can be created using widgets based on these metrics. Of course, with a 24-hour timeframe, Persona Traits created in this fashion will be quite dynamic, because they are based on daily activity.
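As a rough illustration (an assumption, not product guidance), a weekly threshold can be pro-rated to a 24-hour period by dividing by 7. Note that the 4-hour figure suggested above is deliberately higher than the strict pro-rata of the 16-hours-per-week trait, so it targets concentrated daily usage:

```python
# Pro-rating a weekly Focus Time threshold to a 24-hour metric window.
# The simple division by 7 is an assumption for this sketch.
weekly_threshold_hours = 16
daily_pro_rata_hours = weekly_threshold_hours / 7
print(round(daily_pro_rata_hours, 2))  # 2.29
```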
Modify the scores to exclude zero activity. This mitigation may help reduce exposure to the condition. The averaging effect occurs only when the same application is used across two devices on different Engines; if you use Word on one device and Excel on the other, for example, you can mitigate against the effect. To see why, look at how each score is constructed: it covers a range of time (using the Focus Time example) but starts at zero, i.e. not in use. This means that if you don't use Excel on the second device, you are in fact a "user with a score of zero" on that device, so an average of zero and the other usage value is taken, which may skew the overall result.
This value of zero is intentional: it identifies people who are genuinely not exhibiting that trait.
If you wish to simply "ignore" people who are not exhibiting the trait, you can remove the zero entry from the score of each trait you wish. The effect is that the trait no longer exists for that device on that Engine, so no averaging occurs. To do this in the Nexthink Score Editor: export the Persona Traits score from the Finder, then edit the relevant traits to remove the zero portion and set the default output value to NULL.
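A sketch of why removing the zero bucket changes the result: a device with no activity then contributes no score at all (modelled here as None), instead of a score of 0 that drags down the average. This illustrates the principle only, not Nexthink internals.

```python
def combined_score(per_engine_scores):
    """Average per-Engine scores, skipping Engines where the trait
    does not exist (None) - the effect of removing the zero bucket."""
    present = [s for s in per_engine_scores if s is not None]
    return sum(present) / len(present) if present else None

# With the zero bucket in place, an idle device reports 0:
print(combined_score([8.0, 0.0]))   # 4.0 - the idle device halves the score

# With the zero bucket removed, the idle device has no trait at all:
print(combined_score([8.0, None]))  # 8.0 - the correct single-device value
```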
Once the score has been re-imported, check your results in the Finder: make sure "list all entities" is checked, and you should see the (correct) combined score value.
Modify the root (or child) Score inclusion criteria. In some cases, the multiple-device / multiple-Engine scenario arises because of virtualization, where a given VM or session is on a different Engine to the employee's primary device. If the usage of the virtual session (or of the physical device) is not of primary importance, the score can be modified to exclude the virtual or physical devices - or to apply any other criteria - by modifying the root NXQL, which scopes the score to only the respective users. This is a flexible approach: you can define any user-based NXQL query that will give you the right data. However, the score will only apply to the scope you have defined, so bear in mind the effect this may have on the "true" value if you have excluded a portion of the landscape.
A similar approach is to duplicate the scores: one for physical and one for virtual devices. This will of course mean there are two values for each employee, but they will be clearly visible in the Finder under different scores, in a way providing more visibility on the behavior of the employee, as you can now see their activity on different platform types.
Finally, you can do the calculation outside Nexthink.
In this approach, take the score values from each Engine separately by running the metric that returns the scores (either using the ones in the Persona Insight pack or by creating your own), then export the results to Excel or another visualization tool, where the data can be sorted and aggregated. This gives the true result, but the data must be exported from each Engine. From the GUI this can be done using the Finder:
1. Connect to the first Engine, run the metric, and export the contents to Excel using the right-click context menu.
2. Switch to the next Engine and repeat.
3. Work in Excel to aggregate the values.
This approach will give the true value but requires more work, as the data must be manipulated in the chosen visualization tool.
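The aggregation step can be sketched as follows, assuming each Engine's export has been reduced to (employee, Focus Time seconds) pairs. The employee names and values here are hypothetical; in practice you would read them from the exported Excel/CSV files:

```python
from collections import defaultdict

# Hypothetical per-Engine exports: (employee, Focus Time in seconds).
engine1 = [("alice", 3600), ("bob", 14400)]
engine2 = [("alice", 10800)]

# Sum (not average) the seconds per employee across all Engines,
# then translate the totals into scores as a final step.
totals = defaultdict(int)
for export in (engine1, engine2):
    for employee, seconds in export:
        totals[employee] += seconds

print(dict(totals))  # {'alice': 14400, 'bob': 14400}
```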