
June 94 - BALANCE OF POWER

Enhancing PowerPC Native Speed

DAVE EVANS

When you convert your applications to native PowerPC code, they run lightning fast. To get the most out of RISC processors, however, you need to pay close attention to your code structure and execution. Fast code is no longer measured solely by an instruction timing table. The PowerPC 601 processor includes pipelining, multi-issue and speculative execution, branch prediction, and a set-associative cache. All these things make it hard to know what code will run fastest on a Power Macintosh.

Writing tight code for the PowerPC processor isn't hard, especially with a good optimizing compiler to help you. In this column I'll pass on some of what I've learned about tuning PowerPC code. There are gotchas and coding habits to avoid, and there are techniques for squeezing the most from your speed-critical native code. For a good introduction to RISC pipelining and related concepts that appear in this column, see "Making the Leap to PowerPC" in Issue 16.

MEASURING YOUR SPEED
The power of RISC lies in the ability to execute one or more instructions every machine clock cycle, but RISC processors can do this only in the best of circumstances. At their worst they're as slow as CISC processors. The following loop, for example, averages only one calculation every 2.8 cycles:

float a[], b[], c[], d, e;
for (i=0; i < gArraySize; i++) {
  e = b[i] + c[i] / d;
  a[i] = MySubroutine(b[i], e);
}

By restructuring the code and using other techniques from this column, you can make significant improvements. This next loop generates the same result, yet averages one calculation every 1.9 cycles -- about 50% faster.

reciprocalD = 1 / d;
for (i=0; i < gArraySize; i+=2) {
  float result, localB, localC, localE;
  float result2, localB2, localC2, localE2;

  localB = b[i];
  localC = c[i];
  localB2 = b[i+1];
  localC2 = c[i+1];

  localE = localB + (localC * reciprocalD);
  localE2 = localB2 + (localC2 * reciprocalD);
  InlineSubroutine(&result, localB, localE);
  InlineSubroutine(&result2, localB2, localE2);

  a[i] = result;
  a[i+1] = result2;
}

The rest of this column explains the techniques I just used for that speed gain. They include expanding loops, scoping local variables, using inline routines, and using faster math operations.

UNDERSTANDING YOUR COMPILER
Your compiler is your best friend, and you should try your hardest to understand its point of view. You should understand how it looks at your code and what assumptions and optimizations it's allowed to make. The more you empathize with your compiler, the more you'll recognize opportunities for optimization.

An optimizing compiler reorders instructions to improve speed. Executing your code line by line usually isn't optimal, because the processor stalls to wait for dependent instructions. The compiler tries to move instructions that are independent into the stall points. For example, consider this code:

first = input * numerator;
second = first / denominator;
output = second + adjustment;

Each line depends on the previous line's result, and the compiler will be hard pressed to keep the pipeline full of useful work. This simple example could cause 46 stalled cycles on the PowerPC 601, so the compiler will look at other nearby code for independent instructions to move into the stall points.
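
To see what the compiler has to work with, imagine some unrelated arithmetic sitting near that chain. The sketch below is hypothetical (the extra variables aren't part of the example above), but it shows the kind of independent work the compiler can slide into the divide's stall cycles:

first = input * numerator;
second = first / denominator;    /* long-latency divide */
scaled = other * weight;         /* independent -- can issue while the divide runs */
offset = scaled + adjustment2;   /* independent -- fills more stall cycles */
output = second + adjustment;    /* must wait for the divide to finish */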

EXPANDING YOUR LOOPS
Loops are often your most speed-critical code, and you can improve their performance in several ways. Loop expanding is one of the simplest methods. The idea is to perform more than one independent operation in a loop, so that the compiler can reorder more work in the pipeline and thus prevent the processor from stalling.

For example, in this loop there's too little work to keep the processor busy:

float a[], b[], c[], d;
for (i=0; i < multipleOfThree; i++) {
  a[i] = b[i] + c[i] * d;
}

If we know the data always occurs in increments of a certain size, we can do more steps in each iteration, as in the following:

for (i=0; i < multipleOfThree; i+=3) {
  a[i] = b[i] + c[i] * d;
  a[i+1] = b[i+1] + c[i+1] * d;
  a[i+2] = b[i+2] + c[i+2] * d;
}

On a CISC processor the second loop wouldn't be much faster, but on the PowerPC processor the second loop is twice as fast as the first. This is because the compiler can schedule independent instructions to keep the pipeline constantly moving. (If the data doesn't occur in nice increments, you can still expand the loop; just add a small loop at the end to handle the extra iterations, as in the sketch below.)

Be careful not to expand a loop too much, though. Very large loops won't fit in the cache, causing cache misses for each iteration. In addition, the larger a loop gets, the less work can be done entirely in registers. Expand too much and the compiler will have to use memory to store intermediate results, outweighing your marginal gains. Besides, you get the biggest gains from the first few expansions.
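
Here's one way the mop-up might look when the data isn't an exact multiple of three (a hedged sketch using the same arrays as before, with gArraySize as the total count):

float a[], b[], c[], d;
long i, expanded;

expanded = gArraySize - (gArraySize % 3);   /* largest multiple of three */
for (i=0; i < expanded; i+=3) {
  a[i] = b[i] + c[i] * d;
  a[i+1] = b[i+1] + c[i+1] * d;
  a[i+2] = b[i+2] + c[i+2] * d;
}
for ( ; i < gArraySize; i++)                /* at most two leftovers */
  a[i] = b[i] + c[i] * d;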

SCOPING YOUR VARIABLES
If you're new to RISC, you'll be impressed by the number of registers available on the PowerPC chip -- 32 general registers and 32 floating-point registers. By having so many, the processor can often avoid slow memory operations. Your compiler will take advantage of this when it can, but you can help it by carefully scoping your variables and using lots of local variables.

The "scope" of a variable is the area of code in which it is valid. Your compiler examines the scope of each variable when it schedules registers, and your code can provide valuable information about the usage of each variable. Here's an example:

for (i=0; i < gArraySize; i++) {
  a[i] = MyFirstRoutine(b[i], c[i]);
  b[i] = MySecondRoutine(a[i], c[i]);
} 

In this loop, the global variable gArraySize is scoped for the whole program. Because we call a subroutine in the loop, the compiler can't tell if gArraySize will change during each iteration. Since the subroutine might modify gArraySize, the compiler has to be conservative. It will reload gArraySize from memory on every iteration, and it won't optimize the loop any further. This is wastefully slow.

On the other hand, if we use a local variable, we tell the compiler that gArraySize and c[i] won't be modified and that it's all right to just keep them handy in registers. In addition, we can store data as temporary variables scoped only within the loop. This tells the compiler how we intend to use the data, so that the compiler can use free registers and discard them after the loop. Here's what this would look like:

arraySize = gArraySize;
for (i=0; i < arraySize; i++) {
  float localC;
  localC = c[i];
  a[i] = MyFirstRoutine(b[i], localC);
  b[i] = MySecondRoutine(a[i], localC);
} 

These minor changes give the compiler more information about the data, in this instance accelerating the resulting code by 25%.

STYLING YOUR CODE
Be wary of code that looks complicated. If each line of source code contains complicated dereferences and typecasting, chances are the object code has wasteful memory instructions and inefficient register usage. A great compiler might optimize well anyway, but don't count on it. Judicious use of temporary variables (as mentioned above) will help the compiler understand exactly what you're doing -- plus your code will be easier to read.
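
For instance, here's a hypothetical before-and-after (the record structure, its fields, and the output array are made-up names) showing how temporaries expose loop-invariant values to the compiler:

float output[], scale, bias;
long i;

/* Before: the store into output[i] might alias record->scale or
   record->bias, so the compiler must reload them every iteration. */
for (i=0; i < count; i++)
  output[i] = record->entries[i].value * record->scale + record->bias;

/* After: reading the invariant fields once into locals lets them
   stay in registers for the whole loop. */
scale = record->scale;
bias = record->bias;
for (i=0; i < count; i++)
  output[i] = record->entries[i].value * scale + bias;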

Excessive memory dereferencing is a problem exacerbated by the heavy use of handles on the Macintosh. Code often dereferences a handle on every access, which is necessary when memory can move. But when you can guarantee that memory won't move, use a local pointer, so that you dereference the handle only once. This saves load instructions and allows further optimizations. Casting data types is usually a free operation -- you're just telling the compiler that you know you're copying seemingly incompatible data. But it's not free if the data types have different bit sizes, which adds conversion instructions. Again, avoid this by using local variables for commonly cast data.
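
For example, here's the single-dereference pattern with a local pointer (a sketch only; MyRecord and gDataHandle are hypothetical, and the handle is locked to guarantee the block won't move while the pointer is in use):

typedef struct { long count; float values[1]; } MyRecord;
MyRecord **gDataHandle;   /* hypothetical handle-based data */
MyRecord *p;
long i;

HLock((Handle)gDataHandle);    /* the block must not move while p is live */
p = *gDataHandle;              /* dereference the handle just once */
for (i=0; i < p->count; i++)
  p->values[i] = p->values[i] * 2.0;
HUnlock((Handle)gDataHandle);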

I've heard many times that branches are "free" on the PowerPC processor. It's true that the pipeline can often keep moving when it encounters a branch, because the branch execution unit tries to resolve branches very early in the pipeline or predicts the direction of the branch. Still, the more subroutines you have, the less your compiler will be able to reorder and intelligently schedule instructions. Keep speed-critical code together, so that more of it can be pipelined and the compiler can schedule your registers better. Use inline routines for short operations, as I did in the improved version of the first example loop in this column.
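
In C, a preprocessor macro is one way to get inlining. The InlineSubroutine used earlier could be written along these lines (the body shown here is only an illustration, not the actual routine):

/* A hypothetical macro version of a short routine. Because the body
   expands at the call site, the compiler can schedule its
   instructions together with the caller's. */
#define InlineSubroutine(resultPtr, b, e)   \
  do {                                      \
    *(resultPtr) = ((b) + (e)) * 0.5;       \
  } while (0)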

KNOWING YOUR PROCESSOR
As with all processors, the PowerPC chip has performance tradeoffs you should know about. Some are processor model specific. For example, the PowerPC 601 has 32K of cache, while the 603 has 16K split evenly into an instruction cache and a data cache. But in general you should know about floating-point performance and the virtues of memory alignment.

Floating-point multiplication is wicked fast -- up to nine times the speed of integer multiplication. Use floating-point multiplication if you can. Floating-point division, however, takes about 17 times as long as a multiplication, so when possible multiply by a reciprocal instead of dividing.
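
For example, when a loop divides by the same value on every iteration, compute the reciprocal once outside the loop, as the improved loop at the start of this column did (a minimal sketch; the results can differ in the last bit, so check that your code can tolerate that):

float a[], b[], d, reciprocalD;

reciprocalD = 1 / d;              /* one divide, outside the loop */
for (i=0; i < gArraySize; i++)
  a[i] = b[i] * reciprocalD;      /* a fast multiply each iteration */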

Memory accesses go fastest if addressed on 64-bit memory boundaries. Accesses to unaligned data stall while the processor loads different words and then shifts and splices them. For example, be sure to align floating-point data to 64-bit boundaries, or you'll stall for four cycles while the processor loads 32-bit halves with two 64-bit accesses.
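
One place this shows up is structure layout. Exactly how fields get padded depends on your compiler's alignment settings, so the safe habit is to put the widest fields first (a hedged sketch; the structures and field names are made up):

/* Risky: a leading short can leave the doubles off 64-bit
   boundaries under 68K-style alignment, costing extra accesses. */
struct SlowPoint {
  short flag;
  double position;
  double velocity;
};

/* Safer: the 64-bit fields come first, the small field last. */
struct FastPoint {
  double position;
  double velocity;
  short flag;
};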

MAKING THE DIFFERENCE
Native PowerPC code runs really fast, so in many cases you don't need to worry about tweaking its performance at all. For your speed-critical code, though, these tips I've given you can make the difference between "too slow" and "fast enough."

RECOMMENDED READING

  • High-Performance Computing by Kevin Dowd (O'Reilly & Associates, Inc., 1993).
  • High-Performance Computer Architecture by Harold S. Stone (Addison-Wesley, 1993).
  • PowerPC 601 RISC Microprocessor User's Manual (Motorola, 1993).

DAVE EVANS may be able to tune PowerPC code for Apple, but for the last year he's been repeatedly thwarted when tuning his 1978 Harley-Davidson XLCH motorcycle. Fixing engine stalls, poor timing, and rough starts proved difficult, but he was recently rewarded with the guttural purr of a well-tuned Harley. *

Code examples were compiled with the PPCC compiler using the speed optimization option, and then run on a Power Macintosh 6100/66 for profiling. A PowerPC 601 microsecond timing library is provided on this issue's CD. *

 
