Monitoring Module Update: New Look and New Relative Change Monitoring
From today, users of the Watching That Monitoring Module will notice a new design and a new Relative Change monitor type.
Over the last few months our users have given us great feedback, and we've listened; this release is the result. And there's more on the way soon, so subscribe to this blog to be the first to know!
Be the First to Know with our Improved Monitor Management Design
We’ve improved the Monitor Management page by moving away from a Tile based design to a List based one.
As an operator you need, at a glance, as much critical information as possible.
You need to determine:
- Are any of my monitors in alarm state?
- If so, how many are in alarm? Is it everything or just an isolated issue?
- Has it been going on for a long time?
- Is it a storm in a teacup or a hurricane hitting my shores?
By moving to a list view with the monitor’s active history right inline, and advanced sorting and filtering controls, you have everything you need to keep across all of your monitors quickly and effortlessly, ensuring you are always the first to know.
Create the Most Effective Monitor First Time Round
Early adopter feedback highlighted the effectiveness of the Create Monitor page. Our first pass wasn't clear enough about how to create effective monitors, which led to some avoidable trial and error.
Collating and analysing all of that feedback has led to this two-pronged update to the Create Monitor page:
- A new, more intuitive, layout makes it simpler to navigate the creation process with added help text to assist you along the way; and
- We’ve added in the calculated thresholds to the preview window so you can see exactly how, and where, breaches and alerts are determined.
Introducing the Relative Change Monitor
One of the most fundamental use cases for an automated monitoring system is to help protect against sudden and sharp shocks. To do this the monitoring algorithm needs to be able to determine the size and duration of any change to “normal” levels of the metric being monitored.
Statistically, one of the best ways to determine the size of a change is to assess it relative to what is normal: express the change as a percentage of the normal value over a given time interval.
So, for example, if you normally deliver 1,000 impressions every ten minutes, a drop to 500 impressions is a sudden and sharp shock of 50%.
The basic formula is:
Percent Change = (Initial Value - Current Value)/Initial Value * 100
In the Watching That Relative Change algorithm we have enhanced this by calculating the Initial Value as the average of a number of preceding consecutive values.
This gives you much more control when applying the monitor to a dynamic metric. So rather than use only the 1,000 impressions value as the initial value (in our example) you can choose to look at the past 6 consecutive values and use the average of them as the initial value.
The benefit of this approach is that a metric might bounce around meaningfully from one value to the next, and averaging lets you smooth that out as needed.
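To make the idea concrete, here is a minimal sketch of the calculation described above. The function name and parameters are illustrative, not the actual Watching That implementation: it computes the percent change of the current reading relative to the average of the preceding `window` consecutive values.

```python
def relative_change(history, current, window=6):
    """Percent change of `current` relative to the average of the
    last `window` values in `history`.

    A positive result means a drop; a negative result means a rise.
    (Illustrative sketch, not the production algorithm.)
    """
    if len(history) < window:
        raise ValueError(f"need at least {window} preceding values")
    # Use the average of the preceding values as the "initial value",
    # smoothing out interval-to-interval noise.
    initial = sum(history[-window:]) / window
    return (initial - current) / initial * 100


# Using the example from the text: a steady 1,000 impressions
# per interval, then a sudden drop to 500.
print(relative_change([1000, 1000, 1000, 1000, 1000, 1000], 500))  # 50.0
```

With a bouncier history, say values that average out to roughly 1,000, the averaged baseline keeps a single noisy interval from dominating the comparison.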
Try out a Relative Change monitor now!
The Smart Monitor is Going Back to the Workshop
After analysing the last 6 months of usage data for the Smart monitor type we have decided to send it back to the workshop for a tune up.
From today this type is no longer available. If you already have some set up, don't worry: they will keep on working. But we think you'll find even better results by swapping to either a Threshold or a Relative Change type.
The Smart Monitor is based on a sigma-based anomaly detection algorithm. It uses standard deviations to determine how far from normal a reading is, and whether it's far enough to warrant alerting the operator. The challenge, it turns out, is that streaming data metrics such as impressions and video views are not only volatile (prone to unexpected changes in behaviour) but also "seasonal": they have a pattern that tends to repeat over regular intervals.
Although a sigma-based algorithm can work in this environment, it performs much better when the metric tends to be constant over a long period of time. So we've decided to bring this algorithm back to the bench and look to improve it to better deal with seasonality, especially over longer intervals.
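For readers curious about the general technique, here is a minimal sketch of sigma-based detection, assuming a simple mean-and-standard-deviation model (this is an illustration of the general approach, not the Smart Monitor's actual code):

```python
import statistics


def sigma_alert(history, current, threshold=3.0):
    """Flag `current` as anomalous when it lies more than `threshold`
    standard deviations from the mean of `history`.

    Illustrative sketch only; a constant history (zero deviation)
    is treated as never alerting.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return False
    return abs(current - mean) / stdev > threshold


# A metric hovering around 100 with small noise: a reading of 500
# is many sigmas away, while 101 is well within normal variation.
history = [100, 102, 98, 101, 99, 100]
print(sigma_alert(history, 500))  # True
print(sigma_alert(history, 101))  # False
```

The weakness the text describes follows directly from this sketch: a seasonal metric inflates the standard deviation (or shifts the mean) depending on where in the cycle you measure, so a fixed sigma threshold either over-alerts or under-alerts.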
Stay tuned for its imminent return!