Using SRTM

SRTM (System Response Time Measurement) is an RTAPI tool that measures the timer latency observed by an application. Two versions are supplied: one for the Windows environment (srtm.exe) and one for an RTSS environment (srtm.rtss).

Usage

srtm [/?] [/h] [/s] [/1] [/f] [/n num] seconds_to_sample

Parameters

/h

Display histogram (in addition to summary)

NOTE: The histogram in the Windows version (srtm.exe) may not match the maximum latency value, because Windows is non-deterministic.

/s

Turn on sound (square wave driven by timer)

/1

Use a 10 ms timer period (the default is 1 ms)

/f

Use the fastest available timer (1 ms or better)

/n num

Run with awareness of multiple SRTM instances, where num is the total number of instances

seconds_to_sample

Duration in seconds to sample timer response latencies.

/?

Help on usage


If no parameters are given, the default is srtm /h /s /f 15.

Remarks

SRTM is also provided as sample code.

mSRTM is another sample that shows how to measure the difference between the expected timer interval and the actual timer interval.

The RTSS timer latency observed by an application is made up of hardware and software latency:

System Management Interrupts (SMIs) and bus contention between Windows cores and RTSS cores are major sources of hardware latency. To mitigate the side effects of hardware latency on RTSS timer latency and RTSS time, RTX64 uses a timer tick compensation algorithm. This algorithm uses the Time Stamp Counter (TSC) readings taken at the previous and current ISRs to calculate the number of elapsed ticks. The ISR then uses that calculated number of ticks (instead of 1 tick) to check user timer expiration and to increment RTSS time. If an SMI or bus contention occurs early in a user timer period, the timer's handling routine will still be called on time. If it occurs late in the period, the current call is late, but the subsequent call will occur on time because the number of ticks remaining until expiration is reduced.

SRTM calculates timer latency by subtracting the expected time from the time obtained by calling RtGetClockTime. The expected time is always incremented by the user timer period; it is not the previous observed time plus the timer period. Without timer tick compensation, the time obtained from RtGetClockTime may run slower than universal time. However, because RTSS time always increments by 1 tick, that drift is not reflected in the difference between the expected time and the time obtained from RtGetClockTime, so SRTM results may not fully reflect the side effects of SMI or bus contention. With timer tick compensation, the time obtained from RtGetClockTime is much closer to universal time, so SRTM fully reflects the side effects of SMI or bus contention.

Beginning with RTX64 4.1, timer tick compensation is the default configuration. Timer tick compensation decreases the frequency of user timer latency jitter, but it does not reduce the absolute value of the jitter (for example, when an SMI or bus contention occurs at the last tick of the user timer period). Therefore, SRTM may show a longer tail in its histogram and a larger maximum latency on systems with bus contention between Windows cores and RTSS cores.

If you already have a workaround for timer latency jitter, see the TechNote Real-Time Subsystem Timer Tick Compensation for instructions on how to disable timer tick compensation.

Related topics: