Optimizing for PPC
Volume Number: 12
Issue Number: 5
Column Tag: Book Review
The Need for Speed
Learn the nitty-gritty of PowerPC optimization
By Mike Scanlin
Optimizing PowerPC Code:
Programming the PowerPC Chip in Assembly Language
By Gary Kacmarcik
Addison-Wesley, 1995
ISBN 0-201-40839-2, 694 pages (softback). $39.95.
I'm disappointed. It's just no challenge anymore. It took me years of careful trial, error, repeated error, and determined study to perfect my 680x0 optimizing skills to the point where I really understood the chip from a software point of view. I was looking forward to the same kind of challenge on the PowerPC (scrounging for obscure magazine articles, surfing the net looking for example code, writing and timing code three different ways, disassembling all the programs with good performance to see how they did it, etc.). But now that I've read this book, all the hard theory has been taken care of, and the only thing remaining is to do a few PowerPC assembly language projects and put the theory to the test. Mr. Kacmarcik has cut short my search for knowledge by writing a book that makes plain everything about the PowerPC chip, including the subtle pipeline and cache interactions that a true optimizer wants to know.
This book is intended for programmers with some high-level language experience and at least a little exposure to assembly language. It does not explain what hexadecimal means, for example, but it does define concepts like latency and throughput.
The first nine of the sixteen chapters review in precise detail the entire PowerPC instruction set and architecture. The purpose of these chapters is to broaden the audience for this book. Anyone with PowerPC experience could skim these 170 pages in an hour or so. For the rest, though, it is a reasonable starting point. Unfortunately, there are too few examples for the descriptions of the individual instructions to be meaningful. It's like someone handing you a book on how to write poetry where the first hundred pages are a dictionary explaining all the words you can use in your poems but not really giving you the context or any examples to appreciate them. It's hard to separate the really important stuff (like everyday instructions, registers and concepts) from the stuff that was just put in for the sake of completeness. An uninitiated person who tries to understand it all will probably become overwhelmed. I can accept that these chapters are meant to be an introduction and a bit of a reference (in addition to the complete references in the appendices), but it's a little too much, too soon, in my opinion.
The next seven chapters, and especially Appendix D, are the reason to buy this book. They contain the info that is hard to find elsewhere. The chapter titles will give you a good idea of what you'll find:
10. Memory and Caches
11. Pipelining
12. PowerPC 601 Instruction Timing
13. Programming Model [C calling conventions]
14. Introduction to Optimizing
15. Resource Scheduling
16. More Optimization Techniques
Appendix D. Optimization Summary
The cache discussion reviews how set-associative caches work. This is good info that you can apply to designing your own caches in higher-level languages like C. It is interesting to read that cache simulations have shown nearly identical cache hit rates for caches with random line-replacement algorithms and caches with least-recently-used line-replacement algorithms. There are tidbits of useful information sprinkled throughout this chapter, such as the sentence, "According to the PowerPC ISA, the programmer should assume that the processor has a split (instruction/data) cache, and that the processor will not automatically keep the instruction cache consistent with data written via the store instructions (that is, with the data cache)." Writers of self-modifying code, beware.
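To make the set-associative mechanics concrete, here is a minimal C sketch of my own (not from the book) of a lookup with least-recently-used replacement; the geometry constants and names are illustrative, not a description of any particular PowerPC cache.

/*
 * Minimal sketch of a set-associative cache lookup with LRU
 * replacement.  The geometry below (LINE_SIZE, SETS, WAYS) is
 * illustrative only.
 */
#include <stdint.h>
#include <stdbool.h>

#define LINE_SIZE 32   /* bytes per cache line          */
#define SETS      64   /* number of sets                */
#define WAYS       8   /* lines per set (associativity) */

typedef struct {
    bool     valid;
    uint32_t tag;
    uint32_t last_used;   /* timestamp for LRU */
} CacheLine;

static CacheLine cache[SETS][WAYS];
static uint32_t  clock_ticks;

/* Returns true on a hit; on a miss, the least-recently-used
   (or first invalid) line in the set is replaced. */
bool cache_access(uint32_t addr)
{
    uint32_t set = (addr / LINE_SIZE) % SETS;   /* set index bits */
    uint32_t tag = addr / (LINE_SIZE * SETS);   /* remaining bits */
    int      victim = 0;

    clock_ticks++;
    for (int way = 0; way < WAYS; way++) {
        CacheLine *line = &cache[set][way];
        if (line->valid && line->tag == tag) {
            line->last_used = clock_ticks;      /* hit: refresh LRU info */
            return true;
        }
        if (!cache[set][way].valid ||
            cache[set][way].last_used < cache[set][victim].last_used)
            victim = way;
    }
    /* miss: fill the LRU (or first invalid) way */
    cache[set][victim].valid     = true;
    cache[set][victim].tag       = tag;
    cache[set][victim].last_used = clock_ticks;
    return false;
}

The random-replacement variant the chapter compares against would simply pick the victim way at random instead of tracking last_used.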
Even though the cache discussion is complete, it illustrates a problem that several of the chapters have: it's missing down-to-earth examples. For instance, it says the 601 has a unified 32K, eight-way set-associative cache, and explains what that means technically, but it doesn't go on to tell me how far apart two addresses need to be before they map to the same cache line. If I'm working on an image-filtering application, it is really useful to know what sizes not to use for rowBytes (to avoid thrashing the data cache) if my algorithm visits all the pixels down a vertical column.
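For the curious, the arithmetic is straightforward once you have the geometry. Assuming the 601's 32K, eight-way cache uses 64-byte lines (check the book's tables for the exact figure), the back-of-the-envelope sketch below is mine, not the author's:

/*
 * sets   = 32768 / (8 ways * 64 bytes) = 64
 * stride = sets * line size            = 4096 bytes
 *
 * Two addresses land in the same set whenever they differ by a
 * multiple of 4096, so a rowBytes that is a multiple of 4K means
 * every pixel in a vertical column competes for the same 8 lines.
 */
#include <stdio.h>
#include <stdint.h>

#define CACHE_BYTES 32768u
#define WAYS            8u
#define LINE_BYTES     64u
#define SETS        (CACHE_BYTES / (WAYS * LINE_BYTES))   /* 64 */

static unsigned set_index(uint32_t addr)
{
    return (unsigned)((addr / LINE_BYTES) % SETS);
}

int main(void)
{
    uint32_t base = 0x10000;
    unsigned rowBytes[] = { 4096, 4160 };   /* bad stride vs. padded stride */

    for (int r = 0; r < 2; r++) {
        printf("rowBytes = %u:", rowBytes[r]);
        for (uint32_t row = 0; row < 10; row++)
            printf(" %u", set_index(base + row * rowBytes[r]));
        /* 4096 revisits the same set every row; 4160 spreads the rows out */
        printf("\n");
    }
    return 0;
}

In other words, a column walk with rowBytes equal to a multiple of 4096 has only eight cache lines to live in, while padding each row by one extra cache line spreads consecutive rows across different sets.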
The instruction timing chapter was one of my favorites. Here's an example of the kind of precision you can expect:
The Multiply Low Immediate (mulli) instruction always takes five cycles in IE. The length of time that the other multiply instructions spend in IE is dependent on the data contained in rB. If the upper 16 bits of rB are all sign bits, then the instruction spends five cycles in IE, otherwise it spends nine cycles. This means that the lesser (in magnitude) of the two arguments should be placed in rB because there is a potential savings of four cycles if -2^15 <= rB <= (2^15 - 1).
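That "upper 16 bits are all sign bits" condition is just a signed 16-bit range check, and it is easy to test when hand-scheduling a multiply. Here is a quick C sketch of the test and the operand-ordering decision; the helper names are mine, not the book's:

#include <stdint.h>
#include <stdbool.h>

/* true if v survives a round trip through 16 bits, i.e. its upper
   16 bits are all copies of the sign bit (-32768 <= v <= 32767) */
static bool fits_in_16_bits(int32_t v)
{
    return (int32_t)(int16_t)v == v;
}

/* Decide which operand of a hand-coded multiply should go in rB:
   prefer the one that takes the five-cycle path. */
static int32_t pick_rb(int32_t a, int32_t b)
{
    if (fits_in_16_bits(b)) return b;     /* already fine      */
    if (fits_in_16_bits(a)) return a;     /* swap the operands */
    return b;                             /* both take nine cycles */
}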
All your favorite timing topics are handled here along with micro-examples to illustrate each stage of the pipeline for the entire sequence of instructions. Topics include: branch prediction (taken and not taken), cache hits and misses, pipeline synchronization, pipeline stalls, misaligned data accesses, and more. Here's another example of the kind of details you'll find. This is from the discussion of instruction fetching:
This may seem like a strange thing to affect timing, but the address affects where the data will be stored in the cache, and the cache timing is different when the request is from the upper or lower part of a cache line. If your timings always assume that you'll receive four or eight instructions at a time, you may be surprised when the code is timed on a real system. For a critical loop, it might be worthwhile to place a few nops before the loop so that it fits nicely into a cache line.
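The nop-padding suggestion comes down to a little address arithmetic. Here is a small C sketch of the calculation; the 32-byte boundary and the function name are my own illustrative assumptions, not numbers from the book:

#include <stdint.h>

#define FETCH_LINE_BYTES 32u   /* illustrative cache-line / fetch-block size */
#define INSTR_BYTES       4u   /* every PowerPC instruction is 4 bytes       */

/* How many 4-byte nops would pull a loop that starts at
   loop_start_addr onto the next line boundary?  (Instruction
   addresses are always a multiple of 4, so the division is exact.) */
static unsigned nops_to_align(uint32_t loop_start_addr)
{
    uint32_t offset = loop_start_addr % FETCH_LINE_BYTES;
    if (offset == 0)
        return 0;                                /* already aligned */
    return (FETCH_LINE_BYTES - offset) / INSTR_BYTES;
}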
The programming model chapter was good. I especially liked the explanation of how leaf routines that don't need more than 220 bytes of stack space don't need to allocate a stack frame (because, by convention, interrupt routines know not to use the 220 bytes above the current stack pointer - known as the Red Zone in Inside Macintosh). This chapter also discusses why you should not use the Load and Store Multiple instructions.
I must say I was disappointed that the chapter titled "Introduction to Optimizing" was only eight pages long. I was hoping that after plowing through 300 pages of details I would finally get to see 100 lines of before and after PowerPC assembly. But I didn't. So I kept plowing ahead, and on page 317 I found out that, as a rule of thumb, I should always place two independent instructions between two branches that are taken (jumps to subroutines, perhaps). As I got further and further into the book I would find a gem like this every 20 to 50 pages. I couldn't help but think: "These are the really useful pieces of information; why can't he just list everything like this and give lots of examples?" Then I found Appendix D.
Appendix D begins on page 677 and ends on page 678. But those are the two best pages in the whole book. If you want to apply the 90-10 rule to reading this book and you only have time to read two pages, then you better make it these two - they are the rules of thumb to follow when writing PowerPC assembly code. If you do these things right then a large portion of your optimizing job will be done.
This is a great book. I was frustrated that I had to read almost 700 pages before I found the summary of tricks that I was looking for. But there are lots of little bits sprinkled throughout, such as the table on page 347 that shows how to multiply something by 3 through 10 with no more than three integer shifts, adds, and subtracts. Mechanically, the book is beautiful to read: it is nicely typeset, with well-chosen fonts, font sizes, and diagrams.
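To give a flavor of that table (the exact instruction sequences are the book's; the C expressions below are just my rendering of the underlying arithmetic identities):

#include <stdint.h>

/* Multiplies by the small constants 3 through 10, each expressed
   with at most three shifts, adds, and subtracts. */
static inline int32_t mul3(int32_t x)  { return (x << 1) + x; }
static inline int32_t mul4(int32_t x)  { return  x << 2; }
static inline int32_t mul5(int32_t x)  { return (x << 2) + x; }
static inline int32_t mul6(int32_t x)  { return ((x << 1) + x) << 1; }
static inline int32_t mul7(int32_t x)  { return (x << 3) - x; }
static inline int32_t mul8(int32_t x)  { return  x << 3; }
static inline int32_t mul9(int32_t x)  { return (x << 3) + x; }
static inline int32_t mul10(int32_t x) { return ((x << 2) + x) << 1; }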
My biggest complaint is that I want to see real-world code examples (i.e., more than five instruction sequences) in action. I'd like the author to provide some high-resolution timer code so that I can time my own code and know if I've made a difference (how about a performance workbench to experiment with?). And I'd like to see things like a C program calling some performance bottleneck written in assembly, so I could get a bigger picture of how all this code fits together in a real program. Nevertheless, if you have any interest in writing fast PowerPC code, you should buy this book.