Portability
Volume Number: 9
Issue Number: 10
Column Tag: Software design
Related Info: Memory Manager, File Manager
Beyond the Macintosh
Here's a way to design your code to be portable
By Lala "Red" Dutta, DataViz, Inc.
Here we are... 1993, and the playing field has narrowed down to a handful of machines and architectures. The contenders left on the field are Unix boxes, PCs, and Macs. However, soon to enter the playing field is the PowerPC-based Macintosh. And where are we? Since the majority of MacTech readers are Macintosh programmers, it is safe to assume... We're On A Macintosh! (And I say that while making some Tim Allen grunts.)
But for a moment, let me state a few observations of mine...
Macintosh programmers know what the game really is... Make something obvious and simple, and people will use it.
Wouldn't it be nice if we could show the rest of the world how to write a better application?
Wouldn't it be nice if we made lots of money selling this stuff to the unenlightened?
So with those observations, let me ask the million-dollar questions... Is your code portable? Will you be able to recompile and run on the PowerPC? And what about Windows and OS/2? Can you recompile and run there too? After all, if you can do that, you can make money on many fronts!!! (Mo Money, Mo Money, Mo Money!)
At this time, I want to make a couple of statements about what's up and coming, and what's not! A lot of gratuitous code will not be present! After all, we're all adults, and we're bright enough to code this stuff on our own! (We don't need no stinkin' code!) You can also expect to see quotes from many movies and sitcoms. Also, even if you're not a techie, you should have no problem understanding the gist of what I'll be talking about (although a vague idea about C and C++ would help).
Zen And Portability
Before we can walk down the golden path, we need to have an objective. So here is the line my co-workers are used to hearing me say...
If truly portable code is written, it can be used anywhere, any time, and in any form.
So there's the target we wish to hit. But now, to get back to the real world, we know we can't hit that 100% of the time. So the true key is to separate what can hit that target from what cannot. To get a little more specific, we need to achieve the following:
Create platform independent code that relies on other platform dependent code to do the dirty work.
Have uniform APIs between the platform dependent and independent code.
Ensure that the platform independent code is completely insensitive to the underlying hardware and operating environment.
Ensure that the platform dependent code has a completely general interface and takes full advantage of the underlying hardware and operating environment.
Finally, ensure that portability is retained through maintenance.
Portability Overview
So what does this all mean? Hopefully, this diagram clears things up a little bit:
With this type of model, your core code deals with doing the main task at hand. It then relies on other code to take care of the platform specific stuff. So what kind of things go in each group of code? It really is entirely your choice, but this happens to be my model:
Environment Manager - TApplication, InitApp, MaxApplZone
User Interface Manager - TWindow, TLists, TButton, TStatic, DrawRect, DrawCircle, Line, etc...
I/O Manager - NewFile, OpenFile, CloseFile, Read, Write, Seek, Tell, etc...
Memory Manager - AllocHandle, FreeHandle, AppendToHandle, HandleSize, etc...
Another preference of mine is to have C++ classes and objects in the Environment and User Interface managers. Also, I prefer standard C routines for I/O and Memory Management. However, you can do whatever you are comfortable with.
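To give a feel for what a uniform API means in practice, here is one possible sketch of an I/O Manager header. Only the routine names come from the list above; the IOErr and IORef types and the exact parameters are placeholders, so treat this as an illustration rather than a finished interface:

/* ioManager.h - a sketch of a uniform I/O interface; each platform
   gets its own implementation file behind these prototypes.          */
typedef long  IOErr;     /* placeholder error type                    */
typedef void *IORef;     /* opaque file token; each platform decides
                            what really hides behind it               */

IOErr NewFile  (const char *name, IORef *ref);    /* create and open       */
IOErr OpenFile (const char *name, IORef *ref);    /* open an existing file */
IOErr CloseFile(IORef ref);
IOErr Read     (IORef ref, void *buffer, long count);
IOErr Write    (IORef ref, const void *buffer, long count);
IOErr Seek     (IORef ref, long offset);
long  Tell     (IORef ref);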
Finally, I prefer to keep sub-folders (or sub-directories) that contain the non-portable code. This way my main folder (or directory) contains only the portable code. I also prefer keeping everything on a Macintosh server (because of resource forks). Then, on the PC side, I can have a batch file that mounts the volumes that contain all the code, copies what I need, and starts the makefile. On the Mac, I can just use a makefile.
Portable Data Types
One of the most prevalent problems in portability is data. (Red's proverb on cross-platform computing: one machine's garbage is another machine's data.) What do I mean? Byte order and data size. The Motorola and Intel chips store numbers in reverse byte order from each other. For example, let's say I have a long unsigned integer with the value 110 (hex 6E). On the Motorola, its four bytes would sit in memory as 00 00 00 6E (most significant byte first). On an Intel machine, they would sit as 6E 00 00 00 (least significant byte first).
Another problem to keep in mind is the size of data types. For instance, what is the size of an int under MPW, Think, or Microsoft? Are they all the same size? And moreover, what is the size of a double under all those compilers?
And now for the clinchers... is a Handle always a pointer to a pointer? Is a Handle always four bytes? Does a pointer always point to the right segment?
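If you want to see the byte order and size issues for yourself, here is a quick throwaway test (an illustration only) that you can compile under each of your compilers:

#include <stdio.h>

int main(void)
{
    unsigned long  v = 0x6E;                 /* the value 110 from above */
    unsigned char *b = (unsigned char *) &v;
    int i;

    printf("int: %d bytes, long: %d bytes, double: %d bytes\n",
           (int) sizeof(int), (int) sizeof(long), (int) sizeof(double));

    printf("bytes of 110 in memory:");
    for (i = 0; i < (int) sizeof(v); i++)    /* 00 00 00 6E on the Motorola, */
        printf(" %02X", (unsigned) b[i]);    /* 6E 00 00 00 on the Intel     */
    printf("\n");
    return 0;
}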
So how do we get around these problems? Actually, the solution is pretty easy. What you need is a set of data types that have the same meaning regardless of their native environment. You can achieve this by having an include file (I usually call it defTypes.h) for each platform which defines some standard types:
Byte - Single byte signed character
UByte - Unsigned single byte
Word - Two byte signed integer
UWord - Two byte unsigned integer
Long - Four byte signed integer
ULong - Four byte unsigned integer
Float64 - 64 bit IEEE float
Float80 - 80 bit IEEE float
Pointer - Pointer to an absolute address
Handle - Token for relocatable space
etc...
Now let's take some real world examples. If I were under Think C and I had to use 64 bit IEEE floats, I would typedef Float64 as a short double. If I were under MPW and I needed a two byte integer, I would typedef Word as a short. If I needed a pointer under Borland C, I would typedef Pointer as a (void far *). This way I can have a known data type and definition under each platform.
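To make that concrete, here is what one branch of such a defTypes.h might look like, using Borland C as the example. This is only a sketch of the idea; the Float80 and Handle mappings in particular are educated guesses that you would verify against your own compiler:

/* defTypes.h - one possible Borland C branch (sketch only) */
typedef signed char     Byte;     /* single byte signed character   */
typedef unsigned char   UByte;    /* unsigned single byte           */
typedef short           Word;     /* two byte signed integer        */
typedef unsigned short  UWord;    /* two byte unsigned integer      */
typedef long            Long;     /* four byte signed integer       */
typedef unsigned long   ULong;    /* four byte unsigned integer     */
typedef double          Float64;  /* 64 bit IEEE float              */
typedef long double     Float80;  /* 80 bit IEEE float (assumed)    */
typedef void far       *Pointer;  /* pointer to an absolute address */
typedef void far       *Handle;   /* opaque token - deliberately not
                                     assumed to be a pointer to a
                                     pointer on this platform       */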
Additionally, I have routines that take a Pointer to a certain data type and return the data in its native form. These routines include:
/* 1 */
Word macword(Pointer p);
Long maclong(Pointer p);
Float64 macFloat64(Pointer p);
Float80 macFloat80(Pointer p);
etc...
Word pcword(Pointer p);
Long pclong(Pointer p);
etc...
Each routine describes the type of data it uses as an argument. For example, whenever I use the function pcFloat64, I'm saying that the thing p points to is a 64 bit PC-based IEEE float and I want a native 64 bit float returned. So what does the code look like?
/* 2 */
// Borland Implementation of pcFloat64
Float64 pcFloat64(Pointer p)
{
    return *((Float64 *) p);
}

// MPW Implementation of pcFloat64
Float64 pcFloat64(Pointer p)
{
    Float64 f;

    revmem(p, (Pointer) &f, 8);    // reverses the bytes of p onto f
    return f;
}
Notice how the Macintosh implementation needed to reverse the data in order to use it, while the PC implementation already had the data in its required form and merely passed the value along.
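I won't spell out a production-quality revmem here, but a minimal sketch of such a byte-reversing copy, with the signature inferred from the call above, could be:

/* Copy count bytes from src to dst in reverse order, so the last
   byte of src becomes the first byte of dst. (Sketch only.)        */
void revmem(Pointer src, Pointer dst, Word count)
{
    UByte *s = (UByte *) src;
    UByte *d = (UByte *) dst + (count - 1);

    while (count-- > 0)
        *d-- = *s++;
}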
Finally, how do we manipulate this data? Generally, I have a library of platform independent routines that do the following:
/* 3 */
myGetc(TBuff)             Get one byte from a buffer
myGetN(TBuff, Ptr, Word)  Get n bytes, put it at Ptr
myGetw(TBuff)             Get a word from a buffer
myGetl(TBuff)             Get a long from a buffer
etc...

myPutc(TBuff, Byte)       Put one byte into a buffer
myPutN(TBuff, Ptr, Word)  Put n bytes into a buffer
myPutw(TBuff, Word)       Put a word into a buffer
myPutl(TBuff, Long)       Put a long into a buffer
etc...
// TBuff is a C++ class
Inside, each of these routines reads the data in its stored form and returns the value in its native (or default) form. So let's look at the following platform independent code that fetches a long integer from a PC-based spreadsheet file:
/* 4 */
Long myGetl(TBuff b)
{
    UByte d[4];
    Long  x;

    myGetN(b, (Pointer) d, (Word) 4);
    x = pclong((Pointer) d);
    return x;
}
Remember, this code is platform independent! However, it does call pclong, which has a platform independent interface but platform dependent code behind it.
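I haven't shown TBuff itself; as one possible reading, a bare-bones in-memory version (with invented member names, and myGetN taking the buffer by reference so the read position advances) might look like this:

// A minimal TBuff sketch - a real one would likely sit on top of the
// I/O Manager and refill itself from a file as needed.
class TBuff {
public:
    TBuff(Pointer data, Long size) : fData((UByte *) data), fSize(size), fPos(0) {}
    UByte NextByte() { return (fPos < fSize) ? fData[fPos++] : 0; }
private:
    UByte *fData;   // raw bytes, exactly as they sit in the file
    Long   fSize;
    Long   fPos;    // current read position
};

// Platform independent: pull n raw bytes out of the buffer.
void myGetN(TBuff &b, Pointer dst, Word n)
{
    UByte *d = (UByte *) dst;
    while (n-- > 0)
        *d++ = b.NextByte();
}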
Code Fragments And Re-Entrancy
Of course there is more to life than just applications. In the real world there are code resources, shared libraries, and dynamically loaded libraries. These code fragments are really good for localization or for creating sub-applications. They are also good if you want to create libraries that several applications can share. The topic of re-entrancy becomes prevalent if you are talking about having several applications share code at run-time. At that point, you cannot have any critical global data. (Not gonna do it! Can't do it! Wouldn't be prudent! Thousand Points of Light!)
But back to the issue at hand: we need to break these things down into their portable and non-portable parts and still be able to build them as any of the three types of code fragments. To do that, we need to satisfy the most stringent requirements of all three. So let's discuss some of the requirements of each type:
Code Resources - One main entry point, and must be under 32K. Under MPW a code resource cannot have global data space, while under Think it may.
DLLs - Can have multiple entry points and can have global data.
Shared Libraries - These are equivalent to DLLs, but are only available under MPW. The biggie is that they will be available on the PowerPC.
So what are the overall requirements? We need code that has no global data space and is under 32K. Furthermore, it can have only one entry point. This isn't too bad. If you have ever written a control panel device, you know how to get by without global data and with only one entry point. Furthermore, we can break our big code fragments into several smaller fragments. After all, isn't that the definition of a code fragment? So we're going to use the Macintosh Control Panel as our overall model.
But how do you do it? Well, this takes a little more finesse. Let's go back to our original diagram of platform independent code. But now imagine that we put a platform specific front end on it (and an export module for shared libraries). Also, we must create a standard calling sequence for using these fragments. I prefer the calling sequence described below:
OSErr fragMain (Word instruction, Pointer ctlStruct)
I use instructions for things like fragOpen, fragClose, fragDoTask, etc., and I use ctlStruct as a bucket for whatever I want to pass into and out of a fragment. Here is an example of a ctlStruct:
/* 5 */
typedef struct fragCtl {
    Word    paramCnt;
    Pointer paramList;
    Handle  fragWorkSpace;
} fragCtl;
And the way the fragMain would be laid out is as follows:
/* 6 */
OSErr fragMain (Word instruction, Pointer ctlStruct)
{
    fragCtl *ctl = (fragCtl *) ctlStruct;   /* Pointer is untyped, so cast it */
    OSErr    rc  = noErr;

    switch (instruction) {
        case fragOpen:
            ctl->fragWorkSpace = AllocHandle(sizeof(mySpace));
            rc = MemErr();
            break;
        case fragClose:
            FreeHandle(ctl->fragWorkSpace);
            rc = MemErr();
            break;
        case ...
        case ...
        default:
            rc = unknownFragInstruction;
            break;
    }
    return rc;
}
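To round out the picture, here is one way a host application that linked the fragment directly might drive it. The instruction values below are placeholders; the real ones just have to be shared by both sides:

/* Hypothetical instruction codes shared by the fragment and its caller. */
enum { fragOpen = 0, fragDoTask = 1, fragClose = 2 };

void UseFragment(void)
{
    fragCtl ctl;
    OSErr   rc;

    ctl.paramCnt      = 0;
    ctl.paramList     = (Pointer) 0;
    ctl.fragWorkSpace = (Handle) 0;

    rc = fragMain(fragOpen, (Pointer) &ctl);        /* fragment allocates its workspace */
    if (rc == noErr) {
        rc = fragMain(fragDoTask, (Pointer) &ctl);  /* the real work                    */
        fragMain(fragClose, (Pointer) &ctl);        /* fragment frees its workspace     */
    }
}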
Remember, you are free to have any types of calls you want. I'm just giving you an example of what can be done. At any rate, now that we have platform independent code for a fragment, how do we build it as the fragment type we need? Actually, this is the easy part. I have a header file that gets included that simply states the following:
/* 7 */
// Think C fragment Front End
#ifdef CodeResource
OSErr main (Word instruction, Pointer ctlStruct)
{
    Handle self;
    OSErr  rc;

    RememberA0();
    SetUpA4();
    asm {
        _RecoverHandle
        move.l  a0,self
    }
    HLock(self);    /* Don't move while running */
    rc = fragMain (instruction, ctlStruct);
    HUnlock(self);
    RestoreA4();
    return rc;
}
#endif
And as far as shared libraries are concerned, I have a file called fragMain.exp which contains:
exports = extern fragMain;
For more details on Apple's Shared Library Manager, you can contact Apple's Developer Support.
Non-Portable Code
Let's take a quick look back at our assertion about non-portable code. That assertion was to ensure that the platform dependent code has a completely general interface and takes full advantage of the underlying hardware and operating environment.
Let's also look back at the example of the pcFloat64 code. This is a good example of code that is platform dependent, yet has a completely platform independent interface. This makes life easy for the caller.
Now I want to take that concept one step further. Let's say I am working on the user interface part of my application. What I want is to create a TButton class for each platform that implements a regular push button. After all, I want to use push buttons. To get really specific, let's look at Borland's Object Windows Library.
Wouldn't it be nice if we could have Borland's OWL on all platforms? Actually, we can! All it takes is rolling up our sleeves a little and creating these UI libraries for all of the platforms we wish to support. Overall, we want C++ objects for the following user interface metaphors:
TWindow - Put up a blank window
TMenu - Object for a menu list
TPopUp - Object for a pop up list
TScrollList - Object for a scroll list
TList - Create a list for scrolling, pop ups...
TCheckBox - Standard check boxes
TButton - Standard push buttons
TRadioButton - Standard radio buttons
etc...
These C++ classes should have member functions that do whatever it is that you want. For instance, let's take TButton. For TButton you need a constructor, a destructor, member functions to hilite, enable, and disable the button, and whatever else you may want.
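As a sketch of the idea (with invented member names, and minimal stand-ins for Rect and Point), a uniform TButton interface might look like this, with each platform supplying its own implementation file:

// TButton.h - uniform interface sketch; the real UI Manager would
// declare TWindow and the geometry types properly.
class TWindow;                                      // declared elsewhere
struct Rect  { short top, left, bottom, right; };   // assumed layout
struct Point { short v, h; };                       // assumed layout

class TButton {
public:
    TButton(TWindow *parent, const char *title, const Rect &bounds);
    ~TButton();

    void Hilite();                  // press/flash feedback
    void Enable();
    void Disable();
    int  WasClicked(Point where);   // hit test against the button

private:
    void *fNative;                  // ControlHandle on the Mac, a child
                                    // window on Windows, and so on
};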
You can take the same principles and extrapolate them for file management and memory management. However, with those items, I prefer having straight C routines instead of C++ classes. (Give it to me straight, Doc!)
Portable Code
Believe it or not, at this point, writing platform independent code is almost elementary. The only thing you need to do is to call your UI Manager for user interface work, call your file manager for I/O, call your memory manager for memory usage, and don't call anything platform specific.
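As one last illustration, here is what a piece of truly portable code ends up looking like, leaning on the hypothetical I/O Manager prototypes sketched earlier and the pcword routine from listing 1:

// Platform independent: read the first two bytes of a PC spreadsheet
// file as a record count. Nothing here touches the Toolbox, Windows,
// or OS/2 directly - the I/O Manager and pcword take care of that.
Word ReadRecordCount(const char *name)
{
    IORef file;
    UByte raw[2];
    Word  count = 0;

    if (OpenFile(name, &file) == 0) {        // hypothetical I/O Manager call
        Read(file, (void *) raw, 2);         // two bytes, stored in PC order
        count = pcword((Pointer) raw);       // convert to the native form
        CloseFile(file);
    }
    return count;
}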
I know you're saying "easier said than done." At first it is easier said than done. But once your libraries are created, the problem is much easier the next time around. Moreover, once you get into the habit, it becomes very easy, and the knowledge of what is portable and what is not becomes more apparent. I hope I have given you some ideas to think about and some plans to go cross-platform. And for the big Macintosh payoff, hopefully you can position yourself well so that compiling for the native PowerPC is trivial.