Within Microsoft OMS, you may sometimes get an error when executing an aggregation-based Log Analytics query (using the "Measure" command with the "Interval" function):
“Intervals for aggregate functions must result in less than 2000 time slices Unexpected ‘measure’ at position ***”.
What does this error mean?
Any data aggregation operation necessarily runs on past data over a particular duration. This duration could be anything, for example the last 1 year / 1 month / 7 days / 1 day / 6 hours / 1 hour / 1 minute, and so on. Once the duration is specified, and the corresponding data for that duration is marked, the next step is to run the aggregation operation on top of this data. If we wish to do a time-based aggregation of this data, we also need to define the smallest unit of time from which sample data will be considered for the overall aggregation (using the "Interval" value). This unit of time is the time slice in this context.
The maximum number of time slices allowed for any duration needs to be fixed beforehand, because any more time slices than this upper limit (which means shorter time ranges per slice) would leave insufficient data per slice for aggregation, impacting both result accuracy and aggregation engine performance. The Log Analytics query engine caps the number of time slices for any aggregation at 2000, presumably based on engine performance optimization tests.
This error occurs when you run an aggregation operation through a Log Analytics query over a particular past duration, and specify an "Interval" value that results in more than 2000 time slices. Whatever Interval value you specify, the total number of seconds in the selected past duration is divided by that value, giving the number of time slices for the query.
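To make the arithmetic concrete, here is a minimal sketch of the calculation described above. The function name and constant are illustrative, not an actual OMS API; the 2000 limit and the division itself come from the post.

```python
# Illustrative sketch: how the time-slice count is derived from the
# selected past duration and the "Interval" value in the query.
MAX_SLICES = 2000  # limit enforced by the Log Analytics query engine

def time_slices(duration_seconds: int, interval_seconds: int) -> int:
    """Number of time slices for a past duration at a given interval."""
    return duration_seconds // interval_seconds

# Querying 1 day of data at a 1-second interval:
slices = time_slices(24 * 60 * 60, 1)
print(slices)               # 86400
print(slices > MAX_SLICES)  # True -> the query is rejected with the error
```

This is exactly why the 1-second query over 1 day in the example below fails: 86,400 slices is far beyond the 2000 limit.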
Note: This is somewhat similar to the time-slicing or preemption concept in OS multitasking, where a time slice is the quantum for which a process runs uninterrupted before the OS scheduler switches to another process. A time slice any longer or shorter than what is defined would impact the multitasking performance of the OS. However, the analogy ends there.
Now let’s see how the total number of time slices in any aggregation operation through Log Analytics Query are calculated:
The screenshot below shows the Time Filter panel on the left-hand side of the Log Search window within OMS. It lets you select any range of past duration; by default, 1 Day is selected.
Now, when you execute the following Log Analytics Query (or any other query having a “Measure” command with an “Interval” function):
Type=Perf CounterName="% Processor Time" | measure avg(CounterValue) by Computer interval 1SECOND
I get the same error defined above, and as shown in the screenshot below:
This error comes because I am querying for data at an Interval of 1 second, which results in a total number of time slices greater than the 2000 limit defined by Log Analytics.
So, for each past duration range (6 Hours / 1 Day / 7 Days, or any other custom duration) selected in the left-hand-side Time Filter panel, how do I calculate the minimum Interval I can query at to stay within the 2000 time slice limit? It's easy, and let me show you how.
Let’s take for example that we want to find out the minimum Interval we can put in a Log Analytics Query for past duration of 1 Day:
- Calculate total number of seconds in the Scope duration selected: 1 Day = 24 Hours * 60 Minutes * 60 Seconds = 86400 Seconds
- Now divide the total number of seconds calculated in the step above by 2000: 86400 Seconds / 2000 = 43.2 seconds
- Now round the result up to the next whole number (because the Interval needs a whole number of seconds): 43.2 seconds rounds up to 44 seconds
Hence, 44 seconds is the minimum Interval you can query for an aggregation operation for a past duration of 1 Day, from within your Log Analytics Query.
Let’s try out the same query as before, but with an Interval of 44 seconds instead of 1 second. You can see the results in the screenshot below:
Note: If you run the same query with a 43-second Interval instead, ignoring the rounding-up step, you will get the same error.
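A quick arithmetic check (an illustrative sketch, not OMS code) shows why 43 seconds still fails while 44 seconds works:

```python
MAX_SLICES = 2000      # limit enforced by the Log Analytics query engine
DAY_SECONDS = 86400    # 1 day of past duration

# 43-second interval: 86400 // 43 = 2009 slices -> over the limit, error
print(DAY_SECONDS // 43)  # 2009

# 44-second interval: 86400 // 44 = 1963 slices -> within the limit, works
print(DAY_SECONDS // 44)  # 1963
```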
Similarly, when you choose the past Duration as 7 Days in the left-hand-side Time Filter Panel, your minimum Interval value changes.
Calculating as per the logic established earlier: 7 Days * 86,400 Seconds per Day = 604,800 Seconds in 7 Days => 604,800 / 2000 = 302.4 Seconds, rounded up to 303 Seconds
Now when you use the same Log Analytics query as above, with a scope duration of 7 days and an Interval of 303 seconds, you get results as shown in the screenshot below. Any Interval below 303 seconds will give you the same error for the selected past duration:
Formula to calculate minimum Interval value for a specific Past Duration:
Minimum Interval for a Past Duration = (Total Number of Seconds in the Past Duration) / 2000, rounded up to the next whole number
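The formula above can be sketched as a small helper. This is an illustrative Python function, not part of any OMS tooling; the function name is my own, and the 2000-slice limit comes from the post.

```python
import math

def min_interval_seconds(duration_seconds: int, max_slices: int = 2000) -> int:
    """Minimum whole-second Interval that keeps an aggregation
    over the given past duration within the time-slice limit."""
    return math.ceil(duration_seconds / max_slices)

print(min_interval_seconds(86400))      # 44  -> 1 day
print(min_interval_seconds(7 * 86400))  # 303 -> 7 days
```

Both worked examples from this post (44 seconds for 1 day, 303 seconds for 7 days) fall out of this one-liner.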
Now, because the minimum interval at which OMS collects data is fixed at 10 seconds, it does not make sense to use an Interval value of less than 10 seconds anyway.
However, if you do try an Interval of less than 10 seconds, for a past duration that allows such a value (1 minute in this case), you will see the chart shown in the screenshot below:
What you see in the screenshot above is sparse data represented as small dots spread across the chart, with around 10 seconds of gap between each dot.
Now, you may also choose to skip the Interval function altogether in your Log Analytics query, and plot a chart explicitly by piping to "display LineChart" at the end of your query. If you do so, the Log Analytics query engine will automatically determine an appropriate Interval for the selected past duration (respecting the 2000 time slice limit) and show the output accordingly.
Hope you found this post useful. If you have any questions or feedback, please mention them in the comments below.