
Accurate Timing
Volume Number: 10
Issue Number: 4
Column Tag: Useful Tricks

Related Info: Time Manager

A Sophisticate’s Primer
on Accurate Timing

So you think your code is faster? Now you’ve got a tool to be sure.

By Bill Karsh

Note: Source code files accompanying article are located on MacTech CD-ROM or source code disks.

About the author

Bill is a trained experimental physicist with 13 years experience in data analysis and software development at various accelerator laboratories. After 18 months as a project leader on Mac products for PowerCore, he’s now a hopeful entrepreneur working on both a programming tools package and a desktop publishing layout utility.

Our subject is the accurate timing of two alternative versions of similarly functioning code. Is one truly faster? By how much? Is the difference significant? These are the sorts of things one might like to find out for a Programmer’s Challenge entry, or to improve the performance of time-critical operations, getting the edge over competitor products. Obtaining accurate timing information has been an ongoing project of mine for some time. What is presented here is a distillation of the best techniques I have found so far. I apologize in advance to those readers who are knowledgeable on such topics as interrupt processing and elementary statistics. Since these ideas are necessary for the understanding and proper use of the methods, I include brief discussions in deference to the novice.

Sources of Error

The general scheme for timing a function, call it Foo, is to read a clock, execute Foo, then read the clock again. The difference between the two readings ought to be the duration of Foo’s execution. This is more or less true. The single largest flaw in this simple idea is that Foo’s execution is repeatedly interrupted, while the clock continues to run. Since these interrupts occur pseudo-randomly in time, the elapsed time we measure this way fluctuates pseudo-randomly from one measurement to the next. Therefore, any better method is necessarily a statistical one.

It may be worth pointing out why there are interrupts, and why their character appears random. An interrupt is a temporary shifting of the CPU’s attention away from what it is currently doing (usually, executing application code), to a task of some immediate importance (usually an OS housekeeping chore). When the task is completed, the CPU resumes what it was doing as if nothing happened. Note that part of interrupt processing involves saving and restoring the complete machine state. What are these tasks?

Interrupts occur as a result of special conditions, collectively termed exceptions. Exceptions come in three flavors: internal, asynchronous and synchronous. Internal exceptions are mainly generated by software. These are such things as divide-by-zero errors, explicit TRAP instructions, or calls to the Toolbox. There are many others. However, internal exceptions do not concern us, since they are part of the normal execution of code.

Asynchronous exceptions are unscheduled and due to external influences. Among these are mouse movements and serial communications events like a character received or buffer empty message from a modem. The Serial Communications Controller (SCC) chip is largely responsible for catching these and telling the CPU about them. Asynchronous exceptions are of little concern, but one should not move the mouse more than necessary during a timing run.

Synchronous exceptions are scheduled housekeeping operations and occur at regular intervals. Among these are: updating Time, updating Ticks, running tasks scheduled in the VBL and Time Manager queues, and sensing stack collisions with heaps (stack sniffer). The Versatile Interface Adapter (VIA) chip is responsible for most such scheduled activities. These are of importance for us. Most of the synchronous tasks have a fixed period and take a standard amount of time to perform. Still others are scheduled with a fixed period, say every 1/60 s, but are checkup tasks. That is, they regularly check to see if something or other needs to be serviced. How long these take to run depends upon whether servicing is required and how long that servicing takes. Generally, the impact of interrupt processing time on our measurements varies according to these three factors: how many interrupts occur, how long they take and the synchronization of the interrupts with Foo (how much overlap).

If you apply the methods demonstrated here you will observe for yourself how the resulting timing errors are distributed. The overall shape is something like a bell. This is one of the most often encountered distributions for random processes. It’s called a Gaussian or Normal distribution. However, unlike a real Gaussian, which is a continuous curve, our frequencies have a discrete spectrum, coming in only a handful of sizes. Hence, the errors are pseudo-random.

Figure 1 Normal Curve

Timers

Obviously, to make timing measurements we need a suitable timer. The Mac offers several with varying levels of resolution (smallest time interval that can be measured).

Timer                                  Resolution

Time (0x020C), low-memory variable     1 s
Ticks (0x016A), low-memory variable    1/60 s
VIA timer 1 (Snd Driver timer)         1.2766 µs
VIA timer 2 (Disk Driver timer)        1.2766 µs
Time Manager (original)                1 ms
Time Manager (revised or extended)     20 µs

Interestingly, they are virtually all the same thing, much as a single timepiece might have two or more hands running at different speeds. The VIA chip provides a high-frequency heartbeat that is the basis for most timed operations on the Mac. All the above timers derive from the VIA’s clock of 1.2766 µs period. As mentioned above, Time and Ticks are scheduled by the VIA for regular updating. VIA timers 1 and 2 are memory-mapped registers available for applications to poke values into, which are subsequently decremented by the VIA (the method used by THINK C’s profiler package). The Time Manager uses these same registers itself.
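
For reference, a 1.2766 µs period corresponds to a rate of roughly 1 / 1.2766 µs ≈ 783.36 kHz, the VIA timer clock rate usually quoted for these machines.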

So which incarnation to use? Code executes relatively quickly, and what we want to see are small differences. We need the highest resolution timer available. There are two arguments for the extended Time Manager. The TM is likely to continue to be supported by Apple, even if the underlying hardware changes, possibly even away from the VIA chip altogether. More importantly, the TM incorporates a partial interrupt compensation mechanism. It’s somewhat more stable than any other choice.

A Better Method

As discussed above, measurements of execution time are in error by an amount that fluctuates from trial to trial. Since the problem is that of the timer continuing to run, even while Foo is temporarily suspended, the measurements of execution time are typically too large. If we measure over many trials, we collect a distribution of times, with some sort of overall average time. This average time is still not the actual execution time; it is also too large. Thankfully, we do not care about the absolute time, which would be very difficult to extract. Rather, in comparing Foo to FooBar, we take the difference of averages. Most of the error is subtracted away, without ever needing to know a specific magnitude of the error. This works as long as the (unknown) error is close to the same average size for both functions. Don’t be fooled! Without a feeling for whether the fluctuations have been reasonably averaged, and in the same way for both functions, we would know nothing. This is what the tools provided are all about: checking the quality and sameness of the time distributions for Foo and FooBar.
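
To make the cancellation explicit, here is the idea in symbols, treating the average interrupt overhead per trial as a single term e (a sketch, not a rigorous derivation):

 measured_Foo    ≈ true_Foo    + e
 measured_FooBar ≈ true_FooBar + e

 <measured_Foo> - <measured_FooBar> ≈ true_Foo - true_FooBar

The subtraction removes e only to the extent that its average really is about the same in both sets of trials, which is exactly what the shape checks described below are for.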

The process breaks up into two steps (also two files). Step one (GatherTimes.c) is to gather the best raw timing data we can, and accumulate it into the statistical package. Some systematic errors are also corrected in this step. Step two (TStats.c) is to analyze the data using plots of the distributions and appropriate statistical measures.

Gathering Data

GatherTimes.c has a nested loop structure. The outer is the accuracy loop. Each iteration of this loop collects two raw times, time1 and time2, for Foo and FooBar respectively. The times are passed to the TSAccumulate routine of the TStats file. The number of iterations determines the size of the statistical sample. Accuracy in statistical lingo refers to how closely a value approximates the true value, or how closely a sample population represents the true “parent” population. We need enough data to get an accurate portrait of a distribution. Something between 50 and 1000 is a reasonable sample size for most cases.

On a given pass through the accuracy loop, we measure the time for Foo as follows. Initialize any data to be used by Foo, as necessary. Call Foo once, to fill the instruction and data caches as much as possible with Foo stuff. Start the TM clock by installing and priming a TMTask record. Execute Foo one or more times, to be explained shortly. Remove the TMTask. The difference between the tmCount field of the removed record and the time parameter passed to PrimeTime is the elapsed raw time. FooBar gets the same treatment.
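
A minimal sketch of one such measurement, using the same extended Time Manager calls the Loop macro in GatherTimes.c expands to (Precision, arg1 and arg2 are the names used there; depending on your headers, the Time Manager calls may want QElemPtr casts):

#define MaxNeg 0x80000000          // large negative prime value: microsecond mode

 TMTask tmt;
 long   i, rawTime;

 tmt.tmAddr     = nil;             // no completion routine; we only read tmCount
 tmt.tmWakeUp   = 0L;
 tmt.tmReserved = 0L;

 Foo( &arg1, &arg2 );              // one untimed call to warm the caches

 InsXTime( &tmt );                 // install as an extended (microsecond) task
 PrimeTime( &tmt, MaxNeg );        // start the clock

 for( i = 0; i < Precision; i++ )
  Foo( &arg1, &arg2 );             // the code being timed

 RmvTime( &tmt );                  // stop; tmCount holds the unexpired time
 rawTime = tmt.tmCount - MaxNeg;   // elapsed raw time in microseconds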

The inner loop determines how many times to execute Foo while the timer is running. This is the precision loop. Precision is a notion that complements accuracy, expressing how many significant digits there are in a number. Precision is something like the inherent resolving power of an instrument, independent of how carefully one uses the instrument. The times we collect are integers, known (according to the TM documentation) to ±20 µs. The more iterations in the inner loop, the larger our collected times. The more digits we have to work with, the more clearly we can see small relative differences. Choosing a good value for precision is a matter of balancing two things. On the one hand, you want times large enough to see detail of the size you are interested in. You need bigger numbers to confidently see 1% differences than you need for 100% differences. On the other hand, the more iterations, the more fluctuation errors creep in. You have to develop your own sense of what’s happening in your situation. It takes experimentation with different settings of precision; it’s a skill that has to be learned.
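
As a purely illustrative example of the trade-off (the numbers are made up): suppose Foo takes roughly 5 µs per call. A single call is below the ±20 µs resolution and cannot be resolved at all. With Precision set to 100, the timed loop runs about 500 µs and the ±20 µs granularity is roughly 4% of the measurement; with 1000 iterations it shrinks to about 0.4%, but the longer loop also gives interrupts that many more chances to land inside it.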

This is the right time to mention that there is an important pair of experiments you should do whenever beginning timing work on a new set of functions. They are useful for choosing precision wisely. Try timing Foo against Foo (the identical function), where you know the difference is supposed to be zero. You will notice that the difference is usually not zero, but close to it. Alternatively, time Foo against Foo2 (a duplicate function, not the identical one). Here you may find a larger difference. Probably the best you can hope for is a relative difference of about 1%. These experiments help you get a feel for what kinds of results to expect, and what your resolution limit is. This is the reason for the macros Fun1 and Fun2 near the top of GatherTimes.c. They make it simple to define which functions you’re running from trial to trial.

Continuing now with the gathering of raw times, we make a further systematic error correction. GatherTimes assumes that the argument lists for Foo and FooBar are identical (the list has to be defined in the ArgList macro at the top). Since we want to time the guts of these functions, and not the calling time to set up the stack, we also time a dummy function, Overhead. Overhead takes the same arguments as Foo and FooBar, and just returns. Because it is measured with the TM in the same way, its time also includes the cost of simply starting and stopping the TM clock, which is actually the larger of the two errors. This overhead time is subtracted from each of time1 and time2 before sending them to TSAccumulate.

Understanding The Data

Now comes the analysis. The results window is divided into four areas. The lower two are plots of time1 and time2 respectively. The upper-left reports numerical statistics for all of the raw data for both time1 and time2. The upper-right reports similar numerical results, but for a subset of the raw data, the modes. I’ll explain this in a minute. TStats has a number of calls for displaying these areas. We’ll walk through each one. The sequence of these calls that I find most useful is demonstrated in the GatherTimes file. Note that WAIT is a simple macro for while( !Button() ) do nothing. It’s just a quick and dirty way of walking through the displays at your own pace.

Figure 2 Typical Raw Data

TSRawPlots generates graphs of the raw times, in the order they were collected. The times are collected in arrays of longs (raw1 and raw2), by the TSAccumulate function. The horizontal axis is just the index of the entries, from zero to whatever you set for accuracy (the number of entries). The vertical axis is time (µs in this case). This type of plot is the least useful for analysis, but it’s interesting. You can see the different sizes of fluctuations and how often they occur. A red line is drawn at the average time for all entries.

TSRawHistos does the real work. It creates what are called alternatively: histograms, frequency distributions, or frequency spectra of the raw time arrays. Let’s take the array raw1 as an example. What we do is simple. We find the minimum and maximum of all the time entries in raw1. This range is divided into N buckets or bins (a new array called bins1 with N entries), which are initialized to zero. Each bin now represents a narrow time range. As we walk through the entries of raw1, we test which bin the entry falls into, say bin k. We increment bins1[k] by one. Doing this for all of the entries of raw1 results in a description of how frequently times of each size are observed. Hence, we form a frequency distribution. TSRawHistos then plots the bins arrays. The horizontal axis is an index in a bins array, which now represents time. The vertical axis is counts.
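
The bin assignment itself is one line of integer arithmetic, essentially what LongArrayBin does in the listing below (t is one raw time, N is the number of bins, and span = max - min):

 k = ((t - min) * N) / span;   // integer bin index in 0..N
 bins1[k]++;                   // t == max lands in the extra bin N,
                               // which is folded into bin N-1 afterward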

Figure 3 Typical Frequency Distributions

Sometimes the shapes of the frequency distributions are very symmetrical, and close to Normal. Sometimes they are quite asymmetrical, and even have two or more peaks. This is not a big problem. The most important thing to look for is that the two distributions have a similar shape to each other. If one has its peak bin to the left of center, with a long tail on the right, that’s O.K. if the other is skewed the same way. The similarity of shape gives us some confidence that the average error is similar, and will be subtracted away when we take the difference. If the shapes are clearly different, toss the data out and try again. Always run a few trials anyway, because things do change a little.

TSStats calculates and displays numerical results which describe the distributions. Generally, distributions come in many different shapes, depending on the underlying process at work. Some are bell shaped, some are flat, some are triangles... If it looks something like a bell, that is, having a peak, and getting progressively closer to zero as you go away from the peak on either side, then there are some standard characterizations that are meaningful to apply. These are: the location of the peak and the variability, or “width” of the distribution.

There are many possible choices for describing where a peak is: arithmetic mean, geometric mean, harmonic mean, median, mode, ... We make use of two: the arithmetic mean, which is the simple average, and the mode, which singles out the most frequent value, ignoring the rest. Why these? Among the three types of mean, the geometric and harmonic place slightly stronger emphasis on smaller values. We have no reason to discount our larger values on principle. The median divides a distribution in half. Half of the values are smaller, and half larger than the median. This is more appropriate for flatter and broader curves than what we usually see. The mode is probably the best choice. I find that 40% or more of all the values are of one size, so the peak is very sharply defined at the mode. I also find that when timing Foo against itself, the modes of time1 and time2 are often the closest together of all estimators. Lastly, since we throw out all data besides the mode, we are less dependent on the full distributions having similar shapes. Rather than rely on just one estimator, we show results using means in the left numerical results box, and modes in the right box. Also calculated from these are the difference (1-2), and the relative difference (1-2)/2, which is the number we most want to know.
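
As a purely illustrative example (made-up numbers): if mode1 = 5240 µs and mode2 = 5000 µs, then diff = 240 µs and rel diff = 240 / 5000 = 4.8%, meaning function 1 is about 4.8% slower than function 2 by this estimator.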

The other thing to discuss is the width of a distribution. We calculate this in the traditional way, as the standard deviation (sd) of the times. The sd is the square root of another quantity, the variance. The variance is defined as the average of the squared differences from the mean. It tells us how spread out a distribution is. The narrower the distribution, the more precisely we can locate the peak, and hence the more confident we are of the results we have extracted. The boxes show the standard deviations for all of raw1 and raw2 (left box), and for the values in the mode bins only (right box). Try moving the mouse around during a timing run. Watch how the standard deviations in the left box increase as you thereby generate more interrupt activity. Also, your distributions will become multimodal (having several peaks). These peaks reflect the extra work required to recalculate the cursor location.
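
In symbols, this is the form LongArrayMeanDev uses (the N/(N-1) factor gives the usual sample variance):

 mean     = (1/N) * sum( t[i] )
 variance = ( (1/N) * sum( t[i]^2 ) - mean^2 ) * N/(N-1)
 sd       = sqrt( variance )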

Figure 4 Raw Data - Mouse Moved

Figure 5 Distributions- Mouse Moved

Figure 6 Modes- Mouse Moved

The standard deviation is not very interesting by itself, but it is used in calculating a confidence limit (Z) for our results. Z is defined as |difference of means|/sd, where |x| is the absolute value of x and the sd in the denominator is the standard error of that difference, computed from both samples (see below). Z reflects how far apart two values are compared to the intrinsic variability of the values. This is how we can attach a measure of significance to the finding that two means differ from one another. Z is interpreted in the following way. Suppose we are measuring some quantity t. Thanks to a fundamental statistical result called the Central Limit Theorem, whether or not t itself is Normally distributed, its mean <t> is Normal if the sample size is large enough (N > 50 is adequate). Further, the difference of two means is also Normal. Being Normally distributed is a way of saying that <t> fluctuates by chance alone. One computes Z as above, and asks what is the probability that a Z as large as ours would be found just by chance, due to random fluctuations, and not because the means are really and truly different. This is answered by comparing your Z against a table of probabilities for various Zs (a table of integrated tail areas under the Normal curve). The following is a selection of such values.

Z Prob. of Z this large by chance alone

0.2 84%

0.5 62%

1.0 32%

1.5 13%

2.0 5%
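
For the record, the Z that TSStats reports uses the standard error of the difference of the two means in the denominator, computed from both samples:

 Z = | mean1 - mean2 | / sqrt( sd1*sd1/N1 + sd2*sd2/N2 )

where N1 and N2 are the number of raw entries in each sample.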

For example, if your Z is about 2.0, then you proclaim, FooBar is thus and such percent faster than Foo with a 95% confidence limit, and your scientifically minded friends applaud and yell “Publish, Publish!” In most scientific circles, if Z is less than 1.5 or so, then you’ve found squat. This is a pretty good rule of thumb. Your means should be at least 2 standard deviations apart, or you’re probably looking at noise and little else.

TSFilterMode calculates the modes for time1 and time2. It scans input arrays for the values falling in the mode bins, and copies these data to other arrays, work1 and work2. You can specify that input be taken from the raw arrays, which you would do on a first pass only, or from the existing work arrays on subsequent passes. Note that you might have to make several passes at the mode if your number of bins is small (i.e., bin widths are large and capture more than one time value). You can tell if the current pass finished the job by getting a return value of TRUE, or by noting that the standard deviations displayed in the right box are both zero. If you have the true mode, you have a set of values which are all the same, hence, have zero variation. Checking for purity of the mode is the reason for reporting the standard deviations in the right box at all. The Z reported on the right still uses the sd results of the full distribution. All we have changed on the right is how the peaks have been determined.

Summary

Which is better, means or modes? Look for the larger Z, and for the most consistent result from trial to trial. You should also check which gives a better “zero” when timing a function against itself. Finally, you might be asking “Does it really take this much effort to find out something so simple?” Well, is there anything worth knowing that doesn’t?

/* 1 */
Source code Copyright (c) 1993 Bill Karsh. All rights reserved.

/* File GatherTimes
Demo precision timing experiments.  We collect raw time measurements, 
then call the TStats package to display results.
For your functions, you must:
 1) #include their headers.
 2) #define ArgList, Fun1, Fun2 macros with the specs for your functions 
(as shown for Foo and FooBar).
 3) declare and init data needed by your functions in main.
 4) init anything else your functions need before the Loop macro (as 
we set arg1 = arg2 = 1).

You need not modify anything else except the testing parameters {Which, 
Precision, Accuracy}.
*/

#pragma options( !check_ptrs )

#include"TestFuncs.h"
#include"TStats.h"
#include<Timer.h>

// competitors to test

#define First    1
#define Second   2
#define Both     3

#define Which    Both

#define ArgList  ( &arg1, &arg2 )
#define Fun1     Foo
#define Fun2     FooBar

//------------------------------------------------------------

// timing parameters

#define MaxNeg   0x80000000

#define Precision 100
#define Accuracy  1000   // must be > 0

//------------------------------------------------------------

// timing macros and glue

static void Overhead( ... )
{
 // always empty
}

// call once to fill caches
// call repeatedly to gather timing data

#define Loop( F, T ) \
 F ArgList; \
 \
 tmt.tmWakeUp = tmt.tmReserved = 0L; \
 InsXTime( &tmt ); \
 PrimeTime( &tmt, MaxNeg ); \
 for( prc = 0; prc < Precision; prc++ ) {    \
 F ArgList; \
 } \
 RmvTime( &tmt );\
 T = tmt.tmCount - MaxNeg
 
#define WAIT \
 Delay( 20, &dum ); while( !Button() )
 
//------------------------------------------------------------

void main( void )
{
//--- specific args for your functions

 long arg1, arg2;
 
//--- timer args

 long prc, acc;
 long time1 = 0, time2 = 0, timeOv;
 long dum;
 TMTask tmt;
 Boolean done;
 
// initializations

 InitGraf( &thePort );
 InitFonts();
 InitWindows();
 InitMenus();
 TEInit();
 InitDialogs( nil );
 InitCursor();
 
 tmt.tmAddr = nil;
 
 TSInit( nil, Accuracy, Accuracy );

//------------------------------------------------------------

 for( acc = 0; acc < Accuracy; acc++ ) {
 
#if Which & First
 // init data for your Fun1
 arg1 = 1;
 arg2 = 1;
 
 Loop( Fun1, time1 );
#endif


#if Which & Second
 // init data for your Fun2
 arg1 = 1;
 arg2 = 1;
 
 Loop( Fun2, time2 );
#endif

 Loop( Overhead, timeOv );
 if( time1 > timeOv ) time1 -= timeOv;
 if( time2 > timeOv ) time2 -= timeOv;
 
 TSAccumulate( time1, time2 );
 }
 
//------------------------------------------------------------

 TSRawPlots();
 TSStats( kRaw );
 WAIT;
 
 TSRawHistos();
 WAIT;
 
 done = TSFilterMode( kRaw );
 TSStats( kWork );
 WAIT;
 
 if( !done ) {
 do {
 done = TSFilterMode( kWork );
 TSStats( kWork );
 WAIT;
 } while( !done );
 }
 
 TSDispose();
}

/* File TStats -----------------------------------------------
 Accumulate, calculate and display timing data.
*/
 
#pragma options( honor_register, !assign_registers )
#pragma options( !check_ptrs )

#include"TStats.h"
#include"LongArrayStats.h"
#include"PlotLongArray.h"
#include<math.h>
#include<stdio.h>

#define WMargins      5
#define TitleBarHt    18
#define TextLines     3
#define UseSameScales 1

// glue and shorthands

#define Alloc( type, n )  \
 (type*)NewPtr( sizeof(type) * (n) )
 
#define Kill( q )\
 if( ts->q.data ) DisposePtr( ts->q.data )
 
#define Limits( q )\
 LongArrayMinMax( ts->q.data, ts->q.N, \
 &ts->q.min, &ts->q.max )
 
#define Plot( q, r, str ) \
 PlotLongArray( ts->q.data, ts->q.N, \
 0, ts->q.N,ts->q.min, ts->q.max,  \
 &ts->r, str )
 
#define Histo( h, q )\
 LongArrayBin( ts->q.data, ts->q.N,\
 ts->q.min, ts->q.max,    \
 ts->h.data, ts->h.N )

#define SameScales( u, v )\
 if( ts->u.min < ts->v.min )\
 ts->v.min = ts->u.min;   \
 else   \
 ts->u.min = ts->v.min;   \
 \
 if( ts->u.max > ts->v.max )\
 ts->v.max = ts->u.max;   \
 else   \
 ts->u.max = ts->v.max;
 
 
#define MaxBin( q, h, w ) \
 LongArrayGetMaxBin( ts->q.data, ts->q.N,\
 ts->q.min, ts->q.max,    \
 ts->h.data, ts->h.N,\
 ts->w.data, &ts->w.N )
 
#define NewPort()\
 GetPort( &oldPort ); SetPort( ts->w )
 
#define Print( h, str, val, dig )  \
 MoveTo( h, v ); \
 *s = sprintf( s+1, "%."#dig"f", val );\
 DrawString( str ); DrawString( s )

 
static pTS gTS;

/* LayoutWindow ----------------------------------------------
 Arrange data areas of window.
*/

static void LayoutWindow( void )
{
 register pTS  ts = gTS;
 Rect   r;
 FontInfo fi;
 short  lineHt, pad;

 r = ts->w->portRect;
 InsetRect( &r, WMargins, WMargins );

 GetFontInfo( &fi );
 lineHt = fi.ascent + fi.descent + fi.leading;

#define t1 ts->statsR1
#define t2 ts->statsR2
#define p1 ts->plotR1
#define p2 ts->plotR2

// left and right

 t1.left  = p1.left  = p2.left  = r.left;
 t2.right = p1.right = p2.right = r.right;

 t1.right = t1.left  + (r.right - t1.left - WMargins)/2;
 t2.left  = t1.right + WMargins;

// top and bottom

 t1.top    = t2.top    = r.top;
 t1.bottom = t2.bottom = t1.top + TextLines * lineHt + 2;
 
 p1.top    = t1.bottom + WMargins;
 p1.bottom = p1.top + (r.bottom - p1.top - WMargins)/2;
 
 p2.top    = p1.bottom + WMargins;
 p2.bottom = r.bottom;

#undef t1
#undef t2
#undef p1
#undef p2
}

/* AllocateArrays ------------------------------------------*/

static void AllocateArrays( long nData, long nBins )
{
 register pTS  ts = gTS;
 
 ts->maxRaw  = nData;
 ts->bins1.N= ts->bins2.N = nBins;
 
 ts->acc1 = ts->raw1.data = Alloc( long, nData );
 ts->acc2 = ts->raw2.data = Alloc( long, nData );
 
 ts->bins1.data = Alloc( long, nBins+1 );
 ts->bins2.data = Alloc( long, nBins+1 );
 
 ts->work1.data = ts->work2.data = nil;
 
 ts->raw1.N = ts->raw2.N = 0;
}

/* TSInit ----------------------------------------------------
 Allocate window and init structures.
*/

void TSInit( Rect *rGlobal, long nData, long nBins )
{
 register pTS  ts;
 GrafPtr oldPort;
 Rect   *r, R;
 
 gTS = ts = Alloc( TSRec, 1 );
 
 if( !(r = rGlobal) ) { // auto size window
 R = screenBits.bounds;
 InsetRect( &R, 3, 3 );
 R.top += MBarHeight + TitleBarHt;
 R.bottom >>= 1;
 r = &R;
 }
 
 ts->w = NewWindow( nil, r, nil, true,
 noGrowDocProc, (WindowPtr)-1, false, 0L );
 
 NewPort();
 TextFont( geneva );
 TextSize( 9 );
 
 LayoutWindow();
 
 AllocateArrays( nData, nBins );
 
 ts->combRawSd = 0.0;
 
 SetPort( oldPort );
}

/* TSDispose -------------------------------------------------*/

void TSDispose( void )
{
 register pTS  ts = gTS;
 
 if( !ts ) return;
 
 if( ts->w ) DisposeWindow( ts->w );
 
 Kill( raw1 );
 Kill( raw2 );
 Kill( work1 );
 Kill( work2 );
 Kill( bins1 );
 Kill( bins2 );
 
 DisposePtr( ts );
}

/* TSAccumulate ----------------------------------------------
 Add data to arrays.
*/

void TSAccumulate( long time1, long time2 )
{
 register pTS  ts = gTS;

 if( ts->raw1.N < ts->maxRaw && time1 >= 0 ) {
 *ts->acc1++ = time1;
 ++ts->raw1.N;
 }
 
 if( ts->raw2.N < ts->maxRaw && time2 >= 0 ) {
 *ts->acc2++ = time2;
 ++ts->raw2.N;
 }
}


/* TSRawPlots ------------------------------------------------
 Display plots of accumulated raw data.
*/

void TSRawPlots( void )
{
 register pTS  ts = gTS;
 GrafPtr oldPort;
 
 if( !ts->raw1.N ) return;
 
 NewPort();
 
 Limits( raw1 );
 Limits( raw2 );
 
#if UseSameScales == 1
 SameScales( raw1, raw2 );
#endif
 
 Plot( raw1, plotR1, "\pRaw Time1" );
 Plot( raw2, plotR2, "\pRaw Time2" );
 
 SetPort( oldPort );
}


/* TSStats ---------------------------------------------------
 Calculate and display statistics for arrays.
 sourceType is one of the defined constants {kRaw, kWork}.
 */

void TSStats( long sourceType )
{
 register pTS  ts = gTS;
 GrafPtr oldPort;
 Rect   *r;
 double mean1, mean2, sd1, sd2, z1, z2;
 long   *data1, *data2;
 long   N1, N2;
 FontInfo fi;
 Byte   s[36];
 short  lineHt, h1, h2, h3, v;
 
 if( sourceType == kRaw ) {
 data1 = ts->raw1.data;
 data2 = ts->raw2.data;
 N1 = ts->raw1.N;
 N2 = ts->raw2.N;
 r = &ts->statsR1;
 }
 else {
 data1 = ts->work1.data;
 data2 = ts->work2.data;
 N1 = ts->work1.N;
 N2 = ts->work2.N;
 r = &ts->statsR2;
 }
 
 if( !N1 ) return;
 
 NewPort();
 
 ForeColor( blackColor );
 EraseRect( r );
 FrameRect( r );
 
 GetFontInfo( &fi );
 h1 = r->left + 2;
 h3 = (r->right - r->left)/3;
 h2 = r->left + h3;
 h3 += h2;
 v  = r->top + fi.ascent + 1;
 
 lineHt = fi.ascent + fi.descent + fi.leading;
 
 LongArrayMeanDev( data1, N1, &mean1, &sd1 );
 LongArrayMeanDev( data2, N2, &mean2, &sd2 );
 
 if( sourceType == kRaw ) {
 Print( h1, "\pmean1 = ", mean1, 0 );
 Print( h2, "\pmean2 = ", mean2, 0 );
 }
 else {
 Print( h1, "\pmode1 = ", mean1, 0 );
 Print( h2, "\pmode2 = ", mean2, 0 );
 }
 
 v += lineHt;
 
 Print( h1, "\psd1 = ", sd1, 2 );
 Print( h2, "\psd2 = ", sd2, 2 );
 
 v += lineHt;
 
 z1 = mean1 - mean2;
 Print( h1, "\pdiff = ", z1, 0 );
 
 z2 = z1 / mean2 * 100.0;
 Print( h2, "\prel diff = ", z2, 2 );
 DrawChar( '%' );
 
 if( sourceType == kRaw ) {
 
 ts->combRawSd = z2 =
 sqrt( sd1*sd1/ts->raw1.N + sd2*sd2/ts->raw2.N );
 
 if( z2 > 0.0 )
 z1 = fabs( z1 ) / z2;
 else
 z1 = 0.0;
 Print( h3, "\pZ = ", z1, 2 );
 }
 else {
 if( ts->combRawSd > 0.0 )
 z1 = fabs( z1 ) / ts->combRawSd;
 else
 z1 = 0.0;
 Print( h3, "\pZ = ", z1, 2 );
 }
 
 SetPort( oldPort );
}


/* TSRawHistos -----------------------------------------------
 Calculate and display plots of freq data.
*/

void TSRawHistos( void )
{
 register pTS  ts = gTS;
 GrafPtr oldPort;

 if( !ts->raw1.N ) return;
 
 NewPort();
 
 Histo( bins1, raw1 );
 Histo( bins2, raw2 );
 
 Limits( bins1 );
 Limits( bins2 );
 
#if UseSameScales == 1
 SameScales( bins1, bins2 );
#endif

 Plot( bins1, plotR1, "\pFreq Time1" );
 Plot( bins2, plotR2, "\pFreq Time2" );
 
 SetPort( oldPort );
}

/* TSFilterMode ----------------------------------------------
 Calculate and display plots of data only in max bin ( the mode ).
 Places this separated data in work structures.
 sourceType is one of the defined constants {kRaw, kWork}.
 Returns true if max == min for newly filtered data, else returns false.
*/

Boolean TSFilterMode( long sourceType )
{
 register pTS  ts = gTS;
 GrafPtr oldPort;
 
 if( !ts->raw1.N ) return true;
 
 NewPort();
 
 if( !ts->work1.data )
 ts->work1.data = Alloc( long, ts->raw1.N );
 
 if( !ts->work2.data )
 ts->work2.data = Alloc( long, ts->raw1.N );
 
 if( sourceType == kRaw ) {
 MaxBin( raw1, bins1, work1 );
 MaxBin( raw2, bins2, work2 );
 }
 else {
 if( ts->work1.max != ts->work1.min )
 MaxBin( work1, bins1, work1 );
 
 if( ts->work2.max != ts->work2.min )
 MaxBin( work2, bins2, work2 );
 }
 
 Limits( work1 );
 Limits( work2 );
 
 Histo( bins1, work1 );
 Histo( bins2, work2 );
 
 Limits( bins1 );
 Limits( bins2 );
 
 Plot( bins1, plotR1, "\pMost Freq Time1" );
 Plot( bins2, plotR2, "\pMost Freq Time2" );
 
 SetPort( oldPort );
 
 return (ts->work1.max == ts->work1.min &&
 ts->work2.max == ts->work2.min);
}


/* File LongArrayStats -------------------------------------------
 Calculate various statistics for arrays of longs.
*/
 
#pragma options( honor_register, !assign_registers )

#include"LongArrayStats.h"
#include<math.h>


/* LongArrayMinMax -------------------------------------------
 Calculate minimum and maximum values.
*/

void LongArrayMinMax(
 register long *dp,
 register long N,
 long   *min,
 long   *max )
{
 register long mx = 0x80000000, mn = 0x7fffffff, d;
 
 if( !N ) {
 *min = *max = 0;
 return;
 }
 
 do {
 d = *dp++;
 if( d < mn ) mn = d;
 if( d > mx ) mx = d;
 } while( --N );
 
 *min = mn;
 *max = mx;
}


/* LongArrayMeanDev ------------------------------------------
 Calculate array's mean and standard deviation.
*/

void LongArrayMeanDev(
 register long *dp,
 long   N,
 double *mean,
 double *sd )
{
 register long n = N, d;
 register double sumX, sumX2;
 
 sumX = sumX2 = 0;

 if( n ) {
 
 do {
 d = *dp++;
 sumX += d;
 sumX2  += d * d;
 } while( --n );
 
 n = N;
 
 sumX = sumX / n;
 sumX2  = sumX2 / n - sumX * sumX;
 if( n > 1 ) sumX2 *= (double)n / (n - 1);  // sample variance; cast avoids integer division
 sumX2 = sqrt( sumX2 );
 }
 
 *mean = sumX;
 *sd = sumX2;
}


/* LongArrayBin ----------------------------------------------
 Bin data array into a bins array.
  If input bins is nil, this routine allocates the bins array.
*/

long *LongArrayBin(
 register long *data,
 register long N,
 register long min,
 long   max,
 long   *bins,
 register long nBins )
{
 register long *b;
 register long n, span;
 
 if( !bins )
 bins = (long*)NewPtr( sizeof(long)*(nBins + 1) );
 
 if( !(b = bins) ) goto exit;
 
// zero bins array
 
 n = nBins + 1;
 
 do {
 *b++ = 0;
 } while( --n );
 
// bin data
 
 b = bins;

 if( span = max - min ) {
 
 do {
 b[((*data++ - min) * nBins) / span]++;
 } while( --N );
 
 b[nBins - 1] += b[nBins];
 }
 else
 b[0] = N;
 
exit:
 return bins;
}


/* LongArrayGetMaxBin ----------------------------------------
 Using an array {in} of data, and its corresponding array {bins}, return 
in array {out}, only those data falling in maximum height bin.  in and 
out can be the same array.
*/

void LongArrayGetMaxBin(
 long   *in,
 register long nIn,
 register long min,
 long   max,
 long   *bins,
 register long nBins,
 long   *out,
 long   *nOut )
{
 register long *insert, *look;
 register long newN, maxCounts;
 short  maxBinNum, iBin, pad;
 
// quick exits

 if( !nIn ) {
 *nOut = 0;
 return;
 }
 
 if( max == min ) {
 *nOut = nIn;
 if( in != out ) {
 do {
 *out++ = *in++;
 } while( --nIn );
 }
 return;
 }
 
// find max bin

 maxBinNum = 0;
 maxCounts = -1;
 look   = bins;
 
 for( iBin = 0; iBin < (short)nBins; iBin++, look++ ) {
 if( *look > maxCounts ) {
 maxCounts = *look;
 maxBinNum = iBin;
 }
 }

// replace data with data in max bin

 newN = 0;
 look = in;
 insert = out;
 max    -= min;
 
 if( maxBinNum == nBins - 1 ) {
 // edge condition

 do {
 if( ((*look - min)*nBins)/max >= maxBinNum ) {
 *insert++ = *look;
 newN++;
 }
 look++;
 } while( --nIn && newN < maxCounts );
 }
 else {

 do {
 if( ((*look - min)*nBins)/max == maxBinNum ) {
 *insert++ = *look;
 newN++;
 }
 look++;
 } while( --nIn && newN < maxCounts );
 }
 
 *nOut = newN;
}


/* File PlotLongArray ----------------------------------------
 Plot array of longs.
*/

#pragma options( honor_register, !assign_registers )
#include"PlotLongArray.h"
#define TextMargin 1

/* PlotLongArray ---------------------------------------------
 Plot array of N longs.
 min and max values set the scales.
 r bounds the entire plot including labels.
*/
 

void PlotLongArray(
 register long *data,
 long   N,
 long   hMin,
 long   hMax,
 long   vMin,
 long   vMax,
 Rect   *r,
 StringPtr title )
{
 register short  v0, wHi, hScale, vScale;
 register long i, sum;
 short  h, wWid, sMinWid, sMaxWid;
 Rect   R;
 Point  p;
 Byte   sMin[16], sMax[16];
 FontInfo fi;
 
// adjust vMin and vMax

 if( vMin > 0 && vMax > 0 )
 vMin = 0;
 else if( vMin < 0 && vMax < 0 )
 vMax = 0;
 
 R = *r;
 EraseRect( &R );
 
// make room for labels on left and bottom
 
 NumToString( vMin, sMin );
 NumToString( vMax, sMax );
 GetFontInfo( &fi );
 sMinWid = StringWidth( sMin );
 sMaxWid = StringWidth( sMax );
 h = sMaxWid;
 if( sMinWid > h ) h = sMinWid;
 
 R.left += h + TextMargin*2;
 R.bottom -= fi.ascent + fi.descent + TextMargin;
 
 ForeColor( greenColor );
 FrameRect( &R );
 InsetRect( &R, 1, 1 );

// vert labels
 
 ForeColor( blackColor );
 MoveTo( R.left - sMaxWid - TextMargin,
 R.top + fi.ascent );
 DrawString( sMax );
 
 MoveTo( R.left - sMinWid - TextMargin,
 R.bottom - fi.descent );
 DrawString( sMin );
 
// horiz labels
 
 v0 = R.bottom + fi.ascent + TextMargin;
 NumToString( hMin, sMin );
 MoveTo( R.left + TextMargin, v0 );
 DrawString( sMin );
 
 NumToString( hMax, sMax );
 MoveTo( R.right - StringWidth( sMax ) - TextMargin, v0 );
 DrawString( sMax );
 
 if( title ) {
 MoveTo((R.right + R.left - StringWidth(title))/2, v0);
 DrawString( title );
 }
// get ready to plot

 hScale = hMax - hMin;
 vScale = vMax - vMin;

 if( !hScale || !vScale ) return;
 
 wWid = R.right - R.left;
 wHi    = R.bottom - R.top;

// draw abscissa at v = 0
 
 v0 = R.top + (vMax * wHi) / vScale;
 
 MoveTo( R.left, v0 );
 LineTo( R.right - 1, v0 );
 
// draw data

 sum = 0;
 
 for( i = 0; i < N; i++ ) {
 
 h = R.left + (i * wWid) / hScale;
 
 MoveTo( h, v0 );
 LineTo( h, v0 - (*data * wHi) / vScale );
 
 sum += *data;
 data++;
 }

// draw average v line

 v0 -= (sum/N * wHi) / vScale;
 
 ForeColor( redColor );
 MoveTo( R.left, v0 );
 LineTo( R.right - 1, v0 );
 
 ForeColor( blackColor );
}

/* File TestFuncs --------------------------------------------
 Functions to be timed.
*/

#include"TestFuncs.h"

void Foo( long *x, long *y )
{
 long i = 5;
 
 do {
 *x *= *y;
 } while( --i );
}


void FooBar( long *x, long *y )
{
 long i = 5;
 
 do {
 *x *= *y;
 } while( --i );
}







  
 
