Books
Volume Number: 10
Issue Number: 10
Column Tag: Book Reviews
Scanlin on Books
By Mike Scanlin, Mountain View, CA
POWER and PowerPC
By Weiss and Smith
Morgan Kaufmann Publishers, Inc. 1994.
ISBN 1-55860-279-8.
408 pages (hardback).
If you are working on learning PowerPC assembly language, and you want good, solid technical information that gives you both a base to start with and a reference to return to once you've gotten going, POWER and PowerPC delivers both. It is a very complete look at the POWER and PowerPC architectures and contains information that will interest anyone who really wants to delve into PowerPC 601 programming.
The book covers three main areas: (1) the POWER architecture, (2) the first two POWER implementations, the POWER1 and POWER2, used by IBM in the RS/6000, and (3) the PowerPC architecture and the PowerPC 601 implementation. There are other areas which are interesting but probably aren't as important to most Macintosh programmers, including a comparison of the POWER and PowerPC architectures, a comparison of the PowerPC 601 and the DEC Alpha 21064, and the IEEE 754 floating-point standard.
The best part of this book, in my Mac-centric opinion, is Chapters 7-9, which describe the 601 in detail. These chapters start off with a discussion of the instruction formats and go on to show how some of the less obvious instructions work (like rotate with mask). They then explain the 601's pipelines, branch processing and caches. Within each of these discussions the examples are clearly illustrated with sample code fragments. You'll be able to see where and why pipeline stalls occur (and what you can do to avoid them in some cases), how to optimize your branches and exactly how the combined instruction and data cache works. Understanding these issues is a key part of being able to optimize for the 601 when you need to (in addition to helping you identify why certain code fragments run slower than you would expect).
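To give a flavor of what "rotate with mask" means, here is my own minimal C sketch of the idea (not the book's PowerPC code): the 601's rotate-with-mask instructions rotate a 32-bit value left and then AND it with a mask in a single operation, which makes them handy for extracting bit fields.

#include <stdint.h>

/* Sketch of the "rotate left, then mask" operation behind the PowerPC
   rotate-with-mask instructions. Illustration only, not the book's code. */
static uint32_t rotate_with_mask(uint32_t value, unsigned rotate, uint32_t mask)
{
    rotate &= 31;   /* keep the rotate amount in range */
    uint32_t rotated = rotate ? (value << rotate) | (value >> (32 - rotate))
                              : value;
    return rotated & mask;
}

/* Example: rotate_with_mask(x, 24, 0xFF) pulls bits 8-15 of x down into
   the low byte in one step. */

On the 601 that whole sequence is a single instruction, which is exactly the sort of thing the book's discussion of instruction formats makes clear.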
This book is not a tutorial on how to program in PowerPC assembly language. It is, however, one of those rare technical books that is a pleasure to read for all the right reasons: the examples are clear, the examples are worth studying, the authors know their stuff, and it's presented in a neatly typeset and illustrated manner. I would recommend it to self-motivated people who want to start learning PowerPC assembly language programming or to anyone working in a high-level language who wants to know more about their underlying processor.
Zen of Code Optimization
By Michael Abrash
The Coriolis Group, Inc. 1994.
ISBN 1-883577-03-9.
449 pages (soft cover, w/disk).
The guru who so many years ago brought us The Zen of Assembly Language has returned. He has now released a new and improved version of the ideas and examples contained in that sacred volume. And he has added new tricks for the latest Intel processors, the 486 and the Pentium.
Now, you are probably asking yourself, "Why would I care about a bunch of optimization tricks on Intel processors when I'm a Mac fanatic?" The answer is that there is something for everyone in this book. Even if you ignore all of the Intel assembly code he presents (there is no 68K code at all), you can't help but be impressed with the methodology he used to determine the optimal instructions, as well as the clear (and oftentimes humorous) explanations of the finer points of assembly language programming. Mr. Abrash has a Zen-like understanding of the Intel processors and the environments they live in. He is also a gifted writer who can make an otherwise dry topic come to life. Even though I have personally vowed never to write Intel assembly code, I thoroughly enjoyed both his former Zen book and this latest one.
The author sums up the book's essence rather well: "This book is the diary of a personal passion, my quest for ways to write the fastest possible software for IBM-compatible computers in C, C++, and assembly language. ... it is a journal of my exploration of the flexible mind in action (with, to be sure, a generous leavening of potent low-level optimization tricks)." This book is the summary of years of effort spent studying the subtle behavior of Intel processors, and most of it is presented in easy-to-read, story-like prose that is both fun to read and very educational.
The book starts off by giving us the Zen Timer, a little piece of code used throughout that gives you the most precise timings possible of your Intel code fragments. After all, you have to be able to measure your code accurately to know whether your latest change really improved things or not.
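The Zen Timer itself is Intel-specific assembly, but the discipline it enforces is portable. A rough sketch of the idea in C (my own illustration, not Abrash's code) looks something like this:

#include <time.h>

/* Crude timing harness: run a code fragment enough times to swamp the
   clock's granularity and report the average time per iteration.
   This illustrates the measure-everything idea; it is not the Zen Timer,
   which uses the PC's hardware timer for much finer resolution. */
static double time_fragment(void (*fragment)(void), long iterations)
{
    clock_t start = clock();
    for (long i = 0; i < iterations; i++)
        fragment();
    clock_t end = clock();
    return (double)(end - start) / CLOCKS_PER_SEC / (double)iterations;
}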
The next couple of chapters teach you various low-level things you need to know to really optimize for the Intel processors, such as the prefetch queue cycle-eater, the dynamic RAM refresh cycle-eater, and the display adapter cycle-eater. The interaction of these cycle-eaters leads to some surprising results (like the fact that you can't trust the instruction times in the Intel manuals).
Once the basics are understood (and the reader has accepted "assume nothing; time everything"), the book proceeds to apply that knowledge to some real-world problems. In particular, the Boyer-Moore string searching algorithm is studied and optimized. There are many examples of peephole optimizations (like fast multiplication by 5 or 9 with the LEA instruction). There are also examples of how to manipulate common data structures, such as linked lists, efficiently.
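To show the kind of algorithm the book is squeezing, here is a simplified C sketch of the bad-character-skip idea behind Boyer-Moore searching (essentially the Horspool variant; my own illustration, not the hand-tuned assembly from the book):

#include <stddef.h>
#include <string.h>

/* Simplified Boyer-Moore-style search: a table of skip distances lets the
   search advance by more than one byte on most mismatches. */
static const char *bm_search(const char *text, size_t text_len,
                             const char *pat, size_t pat_len)
{
    if (pat_len == 0 || pat_len > text_len)
        return NULL;

    size_t skip[256];
    for (size_t i = 0; i < 256; i++)
        skip[i] = pat_len;                   /* default: skip the whole pattern */
    for (size_t i = 0; i + 1 < pat_len; i++)
        skip[(unsigned char)pat[i]] = pat_len - 1 - i;

    size_t pos = 0;
    while (pos + pat_len <= text_len) {
        if (memcmp(text + pos, pat, pat_len) == 0)
            return text + pos;
        pos += skip[(unsigned char)text[pos + pat_len - 1]];
    }
    return NULL;
}

For long patterns the search examines only a fraction of the text's bytes, which is exactly the kind of algorithmic win the book then pushes further with instruction-level tricks.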
The last few chapters are devoted to the Pentium. In addition to showing you how many of the 386 and 486 tricks (taught in earlier chapters) will break on the Pentium, there is an in-depth discussion of the Pentium's U-pipe and V-pipe and how to keep them both full most of the time. Sadly, as with the 68K family, it is not possible to optimize simultaneously for all members of the Intel family.
As an optimizing assembly language programmer, I found it refreshing to find someone who is both a true master of assembly language programming and at the same time capable of making all the right trade-offs when coding in mixed C and assembly. I would rate Zen of Code Optimization a 10 out of 10 if you are working on any Intel processor, and an 8 out of 10 for anyone working on the Macintosh who is interested in writing high-performance code.
JPEG: Still Image Data Compression Standard
By Pennebaker and Mitchell
Van Nostrand Reinhold. 1993.
ISBN 0-442-01272-1.
638 pages (hardback).
Ever wonder how those color painting programs manage to store 10MB of pixels in a 1MB file? Well, most of them use JPEG (Joint Photographic Experts Group) image compression. If you've ever wanted to know how it works, or to implement it yourself, then JPEG: Still Image Data Compression Standard is the book for you.
Written by two members of the JPEG standard committee, this book gives many of the hows and whys of JPEG that are not in the official JPEG specification (which is given in an appendix). There is a good chance that this book will tell you more than you really want to know about JPEG. It contains a LOT of information.
The book is written for everyone, from non-technical people to programmers to mathematicians. But don't let the inclusion of some non-technical info dissuade you - there is more technical and mathematical info here than you probably care to read. Each section is marked with one of three technical difficulty symbols: one for non-technical readers, one for people with intermediate technical skills and one for people with advanced technical skills who are either going to implement a JPEG engine or else just like hard math problems.
The beginning of the book goes over some basic imaging concepts for the uninitiated (such as low-pass filters and the difference between luminance and chrominance). It then introduces you to the Discrete Cosine Transform (DCT) that is the heart of JPEG. This discussion is very complete and certainly makes it clear how both one-dimensional and two-dimensional DCTs work (with good illustrations and examples). It also explains some of the blocking effects you sometimes see in JPEG images.
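For the curious, here is the shape of the transform itself: a naive one-dimensional forward DCT on eight samples, written as my own plain C sketch (JPEG applies the DCT in both dimensions to each 8x8 block, and real implementations use a fast factored form rather than this direct double loop):

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Naive 8-point forward DCT:
   out[u] = (C(u)/2) * sum over x of in[x] * cos((2x+1) * u * pi / 16),
   where C(0) = 1/sqrt(2) and C(u) = 1 otherwise. Illustration only. */
static void dct_1d_8(const double in[8], double out[8])
{
    for (int u = 0; u < 8; u++) {
        double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
        double sum = 0.0;
        for (int x = 0; x < 8; x++)
            sum += in[x] * cos((2 * x + 1) * u * M_PI / 16.0);
        out[u] = 0.5 * cu * sum;
    }
}

The useful property is that, for typical image data, most of the energy lands in the first few coefficients, which is what makes the quantization and entropy-coding steps that follow so effective.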
After the DCT, the book goes on to explain the various JPEG modes of operation (sequential, progressive, lossless and hierarchical) and the syntax of the JPEG data stream. If you've read the spec and not been clear on any of those things, then this book's discussion of them will clear them up for you (it certainly did for me).
Once you've run your image data through the DCT and quantized it, the last step of JPEG is to entropy encode the quantized values. JPEG allows two methods of entropy encoding: Huffman or arithmetic. The book spends ample time on both methods (several chapters, in fact, including one on probability estimation) and, depending on how much you like math, you'll come away either really confused or really understanding how it works. (The explanations are clear, but it's difficult material.)
The last part of the book gives comparisons of performance for the different kinds of JPEG compression, a list of JPEG applications and vendors, a history of JPEG, possible future directions of JPEG and a discussion of non-JPEG compression standards (JBIG, MPEG, fax).
This book is as complete as you could possibly want on the subject of JPEG image compression. It is a must-read for JPEG implementors and recommended reading for those people who like to understand how common algorithms work or who want to know more about imaging algorithms in general.