Aeolus Development

Configuring the Timer (Newlib Port)


By Robert Adsett

Monday, February 09, 2004

Choosing parameters for the timer.

Generally the timer should operate correctly without any modification of the source. Any adjustments needed for differing speeds and operating parameters are made through the calls to SetNativeSpeed and StartTimer during system initialization. If you do wish to modify the routines for your own circumstances, however, here is a guide to the parameters that can be used to tune the timer, their defaults, and the reasons for choosing them.

CLOCK_SPEED is the target speed for the timer. The startup code for the timer routines sets the prescaler to get the timer as close to CLOCK_SPEED as possible without going below it (if possible), given the operating speed. There are two constraints on choosing a value for CLOCK_SPEED: resolution and range. Resolution is determined by the speed of the timer; the faster the timer, the higher the resolution. That would seem to suggest that faster is always better. The other constraint, range, works against this: the faster the timer, the shorter the period of time it can cover before overflowing. In this implementation that shortens the time that can be allowed to elapse between calls to the timer before longer periods of time are lost track of.

A third consideration, precision, can be used to narrow the candidate values further. If CLOCK_SPEED is chosen so that an integral number of counts occurs in each microsecond, conversion between counts and microseconds can take place without roundoff error. This implementation defines CLOCK_SPEED as 10000000. That gives 10 counts per microsecond, more than enough to provide the needed microsecond resolution of the timer. At the same time, since the timer is a full 32-bit timer, at a 10 MHz nominal rate it can cover nearly 430 seconds before overflowing.
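The prescale selection described above can be sketched in a few lines. The function below is a hypothetical illustration, not the library's actual startup code: it picks the largest divisor that keeps the timer running at or above the target rate.

```c
#include <stdint.h>

/* Hypothetical illustration: choose the largest prescale divisor that
 * keeps the timer at or above the target rate, so the timer runs as
 * close to CLOCK_SPEED as possible without going below it. */
uint32_t timer_divisor(uint32_t pclk, uint32_t target)
{
    uint32_t d = pclk / target;   /* floor, so pclk / d >= target */
    return d ? d : 1;             /* clock below target: run undivided */
}
```

For a 60 MHz peripheral clock and a 10 MHz target this selects a divisor of 6; for a 14.7456 MHz clock it selects 1, since no divisor can reach 10 MHz without going below it.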

Why provide a resolution greater than is actually needed? The main reason is to provide some protection against the fractional first-count error built into any timer. This error occurs when timing starts part way through a count, so that the first count's whole period is not captured. As a result the actual time will vary from target_counts - 1 to target_counts, where target_counts is the number of timer counts equal to the requested wait. With one count being only a fraction of the resolution we are using for the wait, this error is reduced.
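That protection is easy to quantify: the worst-case error is one count at the timer rate. A small helper (hypothetical, for illustration only) puts numbers on it:

```c
#include <stdint.h>

/* Hypothetical illustration: the worst-case fractional first-count
 * error is one timer count, expressed here in nanoseconds. */
uint32_t first_count_error_ns(uint32_t clock_speed)
{
    return 1000000000u / clock_speed;
}
```

At the default 10 MHz rate the worst case is 100 ns, a tenth of the 1 microsecond wait resolution; a timer run at only one count per microsecond would see the full 1000 ns.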

What if the oscillator frequency is not 10 MHz? In that case wouldn't setting CLOCK_SPEED and COUNTS_PER_US to match the actual frequency yield better results? Not necessarily. Let's take an example. Say we have an oscillator running at 10.5 MHz. If we set CLOCK_SPEED to 10500000 and COUNTS_PER_US to 10, then a call to WaitUs with 10 (for a 10 microsecond wait) sets up a wait of 100 counts. At 10.5 MHz, though, 100 counts is only a little over 9.5 microseconds, nearly 5% too short. On the other hand, if we set COUNTS_PER_US to 11, the 10 microsecond request is converted to 110 counts, or nearly 10.5 microseconds. COUNTS_PER_US simply doesn't have enough digits to give reasonable accuracy.
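The arithmetic in that example can be checked directly. The helper below (hypothetical, not library code) converts a count-based wait at a given timer frequency back to nanoseconds:

```c
#include <stdint.h>

/* Hypothetical illustration: actual duration of a count-based wait,
 * in nanoseconds, at a given timer frequency. The 64-bit intermediate
 * keeps the product from overflowing. */
uint64_t wait_ns(uint32_t counts, uint32_t timer_hz)
{
    return (uint64_t)counts * 1000000000u / timer_hz;
}
```

At 10.5 MHz, 100 counts comes to 9523 ns (about 4.8% short of the requested 10 microseconds), while 110 counts comes to 10476 ns (about 4.8% long).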

If instead CLOCK_SPEED is left at 10000000 and COUNTS_PER_US at 10, the system will compensate. Since the timing routines are told the actual clock frequency (through the SetNativeSpeed call), they know the frequency of the clock feeding the timer and can use the ratio between the actual and desired frequencies as a correction factor. Again using the example of a 10.5 MHz system clock: a WaitUs(10) call first works out a wait of 100 counts. Before that value is used, however, it is scaled by the ratio (actual frequency)/(desired frequency), here 10500000/10000000, to give a count of 105, which works out to a wait of exactly 10 microseconds. In general this calculation can be off by up to one count (another reason to set CLOCK_SPEED to a large value). The calculation also has an error of up to one part in the smaller of the two numbers in the ratio; since these numbers are large, that error source is quite small.
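A sketch of that correction might look like the following. The function name is invented and the library's actual routine may differ; this only shows the microseconds-to-counts conversion followed by the frequency-ratio scaling.

```c
#include <stdint.h>

#define CLOCK_SPEED   10000000u   /* desired timer rate */
#define COUNTS_PER_US 10u

/* Hypothetical sketch: convert a microsecond request to counts, then
 * scale by the ratio of actual to desired timer frequency. 64-bit
 * arithmetic keeps the intermediate product from overflowing. */
uint32_t us_to_counts(uint32_t us, uint32_t actual_hz)
{
    uint64_t counts = (uint64_t)us * COUNTS_PER_US;
    return (uint32_t)(counts * actual_hz / CLOCK_SPEED);
}
```

With a 10.5 MHz clock, a 10 microsecond request becomes 105 counts, exactly 10 microseconds at the actual rate; with a 10 MHz clock the scaling leaves the 100 counts unchanged.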

In release 2 of the library, improvements were made to the timer to account for the overhead of calling the timer and to protect against requested waits that are too short. These changes revealed significant timing variations due to the divide operation used to scale the requested time from microseconds to counts. To reduce this variation the divide was replaced with a series of shifts and adds that takes a fixed amount of time to run. This means that changing CLOCK_SPEED now requires either reverting to the original divide operation (with its variation) or rewriting the conversion routine to use a different series of shifts and adds.[1]
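The technique referred to here is the standard one of replacing arithmetic on a constant with a fixed sequence of shifts and adds. For the 10 counts-per-microsecond case the multiply side looks like this (an illustrative sketch, not the library's actual routine):

```c
#include <stdint.h>

/* Constant-time multiply by 10 using shifts and adds:
 * x * 10 = x * 8 + x * 2 = (x << 3) + (x << 1).
 * Unlike a general divide, this takes the same number of
 * cycles regardless of the operand. */
uint32_t counts_from_us_x10(uint32_t us)
{
    return (us << 3) + (us << 1);
}
```

Changing CLOCK_SPEED changes the constant, and therefore the particular sequence of shifts and adds required, which is why the conversion routine must be rewritten rather than simply re-parameterized.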

Given that discussion, why would anyone ever want to use a different target rate? One reason could be custom timing requirements on one of the timer's match or capture registers not used to implement the timer. If the required frequency allows an integral number of counts per microsecond, the modification is a straightforward change of CLOCK_SPEED and COUNTS_PER_US. If not, more significant work on the scaling routines (between counts and microseconds) will be required. Before taking that route, a couple of other approaches may be worth trying: move the functionality that requires custom timing to the other timer on the microcontroller, or, alternatively, change the timing routines to use the other timer.

[1] See the application note on timer performance for details.
