Alarms Frequency types
Datahub supports three frequency types:
- Raw: in this mode, Datahub will validate the value of every datapoint against the Set and Reset values.
- Rolling: in this mode, for each datapoint, Datahub will validate the aggregated value of a specified period preceding and including the datapoint.
- Scheduled: in this mode, Datahub will validate the aggregated value of a specified period at a specified time.
To further explain the implementation and the expected results, we will use the following drawing:
It represents a simple timeline on which:
- The yellow line represents the Set value.
- The orange line represents the Reset value.
- Every green square is a received datapoint.
We will progressively add more information to describe the different features.
N.B.: in this drawing, the alarm has a 'high' type threshold, as the Set value is above the Reset value. To represent a low type threshold, the Set will be below the Reset.
Raw Alarms
In this mode, Datahub will validate the value of every datapoint with a datetime before UTCNow + 65 minutes against the Set and Reset values.
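As an illustration, a raw high-threshold evaluation could be sketched as follows. The function name is hypothetical, and the strict comparisons are an assumption: the examples in this document only say 'higher than Set' and 'lower than Reset', so the behaviour exactly at the thresholds is not specified here.

```python
from datetime import datetime, timedelta, timezone

def evaluate_raw(value, ts, state, set_value, reset_value):
    """Hypothetical sketch of a raw, high-threshold evaluation.

    Returns the new alarm state ('present' / 'not present'); a value
    between Reset and Set, or a datapoint too far in the future,
    leaves the state unchanged.
    """
    # Datapoints dated after UTCNow + 65 minutes are not evaluated.
    if ts > datetime.now(timezone.utc) + timedelta(minutes=65):
        return state
    if value > set_value:
        return "present"
    if value < reset_value:
        return "not present"
    return state

# Example: with Set = 100 and Reset = 50, a value of 120 sets the alarm.
state = evaluate_raw(120, datetime.now(timezone.utc), "not present", 100, 50)
```

For a low-type threshold the two comparisons would simply be mirrored (value below Set sets the alarm, value above Reset resets it).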
Use case 1: Threshold high
The alarm will be triggered four times:
- State = 'present': 'false'
- State = 'present': 'true'
- State = 'present': 'false'
- State = 'present': 'true'
Use case 2: Threshold low
The alarm will be triggered four times:
- State = 'present': 'true'
- State = 'present': 'false'
- State = 'present': 'true'
- State = 'present': 'false'
Rolling Alarms
In this mode, for each datapoint, Datahub will validate the aggregated value of a specified period preceding and including the datapoint.
The period specifies how far in the past Datahub should fetch the data (1 hour, 3 days, 4 weeks, ...).
As the evaluation is applied to a single value, you need to specify how Datahub will aggregate the data. You can choose your aggregation from the following list:
- SUM: returns the sum of the value of all the datapoints in the period
- MIN: returns the minimum value of all the values of the datapoints in the period
- MAX: returns the maximum value of all the values of the datapoints in the period
- AVG: returns the average value of all the values of the datapoints in the period
- COUNT: returns the number of datapoints in the period
- VAR: returns the variance of the values of all the datapoints in the period
- STDEV: returns the standard deviation of the values of all the datapoints in the period
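In plain Python, these aggregations map to standard functions. The mapping below is a sketch; in particular, whether VAR and STDEV are the population or sample variants is an assumption, as the document does not say.

```python
import statistics

# Hypothetical mapping from the aggregation names above to Python functions.
AGGREGATIONS = {
    "SUM": sum,
    "MIN": min,
    "MAX": max,
    "AVG": statistics.mean,
    "COUNT": len,
    "VAR": statistics.pvariance,  # assumed: population variance
    "STDEV": statistics.pstdev,   # assumed: population standard deviation
}

# Aggregate the values of the datapoints in a period:
values = [20, 30, 60]
aggregated = {name: fn(values) for name, fn in AGGREGATIONS.items()}
```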
Use Case 1: Threshold High
Given a rolling alarm with the following properties:
- Threshold high
- Granularity: 1 hour
- Aggregation: SUM
- Set = 100
- Reset = 50
The schema represents datapoints inserted on 02-03-2021, at different times:
The alarm will be triggered three times:
Because 20 is lower than the Reset threshold, the alarm becomes 'not present':
- Date: 02-03-2021T10:00:00
- Value: 20
- State = 'present': 'false'
Because 20+30+60 is higher than the Set threshold, the alarm becomes 'present':
- Date: 02-03-2021T10:30:00
- Value: 110
- State = 'present': 'true'
Because 30+10+5+1 is lower than the Reset threshold, the alarm becomes 'not present':
- Date: 02-03-2021T11:15:00
- Value: 46
- State = 'present': 'false'
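The mechanics of this use case can be sketched as follows. The function and the datapoints are invented for illustration (they are not the exact points of the drawing), and the window bounds (start excluded, end included) follow the convention spelled out in Use Case 2.

```python
from datetime import datetime, timedelta

def rolling_sum_high(datapoints, period, set_value, reset_value):
    """Hypothetical sketch of a rolling, high-threshold SUM alarm.

    Replays datapoints in timestamp order; for each one, sums the
    window (ts - period, ts] and applies Set/Reset hysteresis, so an
    occurrence is created only when the state actually changes.
    """
    state = None
    occurrences = []
    for ts, _ in sorted(datapoints):
        window_sum = sum(v for t, v in datapoints if ts - period < t <= ts)
        if window_sum > set_value and state != "present":
            state = "present"
            occurrences.append((ts, window_sum, state))
        elif window_sum < reset_value and state != "not present":
            state = "not present"
            occurrences.append((ts, window_sum, state))
    return occurrences

# Invented datapoints on 02-03-2021 (Granularity: 1 hour, Set = 100, Reset = 50):
day = datetime(2021, 3, 2)
points = [
    (day.replace(hour=10, minute=0), 20),
    (day.replace(hour=10, minute=30), 30),
    (day.replace(hour=10, minute=45), 60),
    (day.replace(hour=12, minute=0), 10),
]
occ = rolling_sum_high(points, timedelta(hours=1), set_value=100, reset_value=50)
```

With these points, the alarm becomes 'not present' at 10:00 (sum 20), stays unchanged at 10:30 (sum 50, between the thresholds), becomes 'present' at 10:45 (sum 110), and 'not present' again at 12:00 (sum 10).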
Use Case 2: Threshold low
Given a rolling alarm with the following properties:
- Threshold low
- Granularity: 1 day
- Aggregation: MIN
- Set = 10
- Reset = 150
The alarm will be triggered two times:
At the insertion of the datapoint with a value equal to 170, the minimum value inserted in the last day (from 02-03-2021 10:30:00 excluded to 03-03-2021 10:30:00 included) equals 160. Because 160 is higher than the Reset threshold, the alarm becomes 'not present':
- Date: 03-03-2021T10:30:00
- Value: 160
- State = 'present': 'false'
At the insertion of the datapoint with a value equal to 5, the minimum value inserted in the last day (from 04-03-2021 08:00:00 excluded to 05-03-2021 08:00:00 included) equals 5. Because 5 is lower than the Set threshold, the alarm becomes 'present':
- Date: 05-03-2021T08:00:00
- Value: 5
- State = 'present': 'true'
Scheduled Alarms
In this mode, Datahub will validate the aggregated value of a specified period at a specified time.
In other words, this type is identical as rolling alarms but is triggered on a fixed schedule (defined by a cron) and not by the ingestion of a datapoint.
Use Case 1: Threshold High
Given a scheduled alarm with the following properties:
- Threshold high
- Granularity: 1 hour
- Aggregation: SUM
- Set = 100
- Reset = 50
- Cron: check every hour, on the hour (represented by the orange line)
The schema represents datapoints inserted on 02-03-2021, at different times:
The alarm will be triggered three times:
At 10:00, because 20 is lower than the Reset threshold, the alarm becomes 'not present':
- Date: 02-03-2021T10:00:00
- Value: 20
- State = 'present': 'false'
At 12:00, because 5+100+45 is higher than the Set threshold, the alarm becomes 'present':
- Date: 02-03-2021T12:00:00
- Value: 150
- State = 'present': 'true'
At 13:00, because 15 is lower than the Reset threshold, the alarm becomes 'not present':
- Date: 02-03-2021T13:00:00
- Value: 15
- State = 'present': 'false'
Use Case 2: Threshold Low
Given a scheduled alarm with the following properties:
- Threshold low
- Granularity: 1 hour
- Aggregation: SUM
- Set = 50
- Reset = 100
- Cron: check every hour, on the hour (represented by the orange line)
The schema represents datapoints inserted on 02-03-2021, at different times:
The alarm will be triggered two times:
At 10:00, because 20 is lower than the Set threshold, the alarm becomes 'present':
- Date: 02-03-2021T10:00:00
- Value: 20
- State = 'present': 'true'
At 11:00, because 60+130 is higher than the Reset threshold, the alarm becomes 'not present':
- Date: 02-03-2021T11:00:00
- Value: 190
- State = 'present': 'false'
The alarm will not be triggered at 12:00, because the sum of the preceding hour (100+45+5) is higher than the Reset threshold and the alarm is already in the 'not present' state.
The alarm will not be triggered at 13:00 either, as the sum of the preceding hour (60) is between the two thresholds Set and Reset.
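The four hourly checks above, including the two that do not trigger, can be sketched as follows. The individual datapoint timestamps are invented so that the hourly sums match the walkthrough (20, 190, 150, 60); only the check times and thresholds come from the use case.

```python
from datetime import datetime, timedelta

def scheduled_sum_low(datapoints, checks, period, set_value, reset_value):
    """Hypothetical sketch of a scheduled, low-threshold SUM alarm.

    At each scheduled check, sums the window (check - period, check]
    and applies low-threshold hysteresis: an occurrence is created
    only when the state actually changes.
    """
    state = None
    occurrences = []
    for check in checks:
        window_sum = sum(v for t, v in datapoints if check - period < t <= check)
        if window_sum < set_value and state != "present":
            state = "present"
            occurrences.append((check, window_sum, state))
        elif window_sum > reset_value and state != "not present":
            state = "not present"
            occurrences.append((check, window_sum, state))
    return occurrences

# Invented datapoints on 02-03-2021 reproducing the hourly sums above:
day = datetime(2021, 3, 2)
points = [
    (day.replace(hour=9, minute=30), 20),
    (day.replace(hour=10, minute=15), 60),
    (day.replace(hour=10, minute=45), 130),
    (day.replace(hour=11, minute=10), 100),
    (day.replace(hour=11, minute=30), 45),
    (day.replace(hour=11, minute=50), 5),
    (day.replace(hour=12, minute=30), 60),
]
checks = [day.replace(hour=h) for h in (10, 11, 12, 13)]
occ = scheduled_sum_low(points, checks, timedelta(hours=1),
                        set_value=50, reset_value=100)
```

Only the checks at 10:00 (sum 20, 'present') and 11:00 (sum 190, 'not present') create occurrences; 12:00 (sum 150, state unchanged) and 13:00 (sum 60, between the thresholds) do not.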
Processing out of order datapoints
As mentioned before, the alarms functionality is designed to evaluate datapoint streams with (near) real-time, monotonically increasing timestamps.
However, as devices can sometimes encounter connection issues, Datahub can handle a datapoint whose timestamp is earlier than the last one ingested.
Such an out-of-order datapoint is processed only if no more than two datapoints with a greater timestamp already exist on the variable.
If multiple out-of-order datapoints are pushed to a variable subject to alarm detection, the alarm state will be unpredictable.
Use Case 1: reprocessing of an out-of-order datapoint
In the following case, two datapoints with a timestamp greater than the newly ingested datapoint already exist on the variable.
The newly ingested datapoint will be processed, in three steps:
- any existing occurrences related to the two datapoints with a greater timestamp are deleted
- the newly ingested datapoint is processed
- the two existing datapoints are reprocessed.
Use Case 2: no reprocessing of an out-of-order datapoint
In the following case, three datapoints with a timestamp greater than the newly ingested datapoint already exist on the variable.
There will be no reprocessing: the out-of-order datapoint is ignored in the context of the alarm.
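Both use cases can be sketched with a small helper. The function name and the list-based storage are assumptions; in Datahub the deletion and recreation of occurrences involves the alarm pipeline, which is only hinted at in the comments here.

```python
def ingest_late_datapoint(existing, new_point, max_reprocessed=2):
    """Hypothetical sketch of the out-of-order ingestion rule.

    existing: (timestamp, value) pairs already processed, sorted by
    timestamp. Returns the updated timeline and whether the late
    datapoint was taken into account for alarm evaluation.
    """
    later = [p for p in existing if p[0] > new_point[0]]
    if len(later) > max_reprocessed:
        # Use case 2: three or more newer datapoints exist -> ignored.
        return existing, False
    # Use case 1:
    # 1. delete the occurrences linked to the newer datapoints (not shown)
    # 2. process the late datapoint in its correct position
    # 3. reprocess the newer datapoints
    return sorted(existing + [new_point]), True

# Two newer datapoints exist: the late datapoint (timestamp 2) is processed.
timeline, accepted = ingest_late_datapoint([(1, 10), (3, 30), (4, 40)], (2, 20))
```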
Note
Datapoints with a timestamp greater than UTCNow+65 minutes are ignored by the alarms system (and therefore not considered in the count of existing datapoints on the variable).
In the previous case, the out-of-order datapoint will be processed because only two existing datapoints with a greater timestamp are considered.