The Difference
Volume Number: 10
Issue Number: 6
Column Tag: Inside Information
The Difference That Makes A Difference
What's valuable depends on your perspective. Will that be changing soon?
By Chris Espinosa, Apple Computer, Inc., MacTech Magazine Regular Contributing Author
Nick Negroponte of the MIT Media Lab defines information as "a difference that makes a difference." On Usenet, you hear about this as the signal-to-noise ratio: the kernels of useful wheat amid the general chaff of questions, misinformation, rumors, and flames. In most other circumstances, though, information in digital form makes a real difference - and this is most true in developing software.
Every bit of your application makes a difference. At the most basic level, each bit has to be a non-buggy bit (as opposed to a buggy bit), or your software will crash, and that could make a big difference to its users and purchasers. A little above that, the bits of your program are carefully compiled to run on a specific family of microprocessors; the system calls in your program are linked to a specific operating system API; and the logical assumptions are based on the performance and capabilities of a certain range of hardware platforms. All of these choices are encoded into your finished product, and they make a substantial difference in who will buy and use it.
Above that, of course, are the features and functions of your product itself. This is supposedly what you're good at, and ostensibly what your customers are paying money for. Of all the investments you make in research and development, the information you learn about how to make your program solve the customer's problem should be the most worthwhile to you and to them, shouldn't it?
But as you're probably aware, your choice of platform often makes more of a difference to your customers than your choice of features or technologies. Everybody in the Mac business has been told more than once that "your product is great, but if it doesn't run on IBMs I can't use it." And you spend much of your time and money simply porting your application from one system version to the next, or from one hardware platform to another - and recently, from one microprocessor to another. The differences are significant, because compiler technology, hardware evolution, and new system APIs are not simple things; but at least they make a difference to your customers.
What will happen if these differences stop making a difference? What if, for example, you didn't have to worry about what instruction set to compile for? In a small way it's true now - if your application is not speed-sensitive, you can just compile it for the 68K, and the emulator on the PowerPC-based Power Macintosh models will run your software automatically. And while emulation is admittedly slower than running native, you could be seeing more processor independence in the future. Apple's Advanced Technology Group and others in the industry have been researching processor-independent object file formats. With these, you compile and link your application into intermediate code, which you ship to customers; then either the Installer or the segment loader transliterates the code into the correct instruction set for each machine. The hardware vendor can use different CPUs, the users get native performance, and you can ship one program that runs on many brands.
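To make that concrete, here is a minimal sketch in C of what a load-time translation step might look like. The IRCodeBlock structure, TranslateIR routine, and CPU names are hypothetical stand-ins, not Apple's actual format; the point is only that the shipped bits stay processor-independent and get turned into native code once per machine.

/* Hypothetical sketch: a loader that translates shipped intermediate
 * code into native code for whatever CPU it finds itself on. */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef enum { kCPU68K, kCPUPowerPC, kCPUx86 } CPUKind;

typedef struct {
    const unsigned char *irBytes;     /* processor-independent code, as shipped */
    size_t               irSize;
    unsigned char       *nativeBytes; /* filled in on this machine */
    size_t               nativeSize;
    CPUKind              nativeFor;
} IRCodeBlock;

static int TranslateIR(IRCodeBlock *block, CPUKind host)
{
    unsigned char *native = malloc(block->irSize);
    if (native == NULL) return -1;

    /* A real translator would emit host instructions here; we just copy
     * the bytes so the control flow stays visible. */
    memcpy(native, block->irBytes, block->irSize);

    free(block->nativeBytes);          /* discard any stale translation */
    block->nativeBytes = native;
    block->nativeSize  = block->irSize;
    block->nativeFor   = host;
    return 0;
}

/* What an Installer or segment loader might do: translate once per
 * machine, then hand back native code each time the segment is loaded. */
static const unsigned char *LoadSegment(IRCodeBlock *block, CPUKind host)
{
    if (block->nativeBytes == NULL || block->nativeFor != host) {
        if (TranslateIR(block, host) != 0) return NULL;
    }
    return block->nativeBytes;
}

int main(void)
{
    static const unsigned char shipped[] = { 0x01, 0x02, 0x03, 0x04 };
    IRCodeBlock block = { shipped, sizeof shipped, NULL, 0, kCPU68K };

    if (LoadSegment(&block, kCPUPowerPC) != NULL)
        printf("segment translated and loaded for this CPU\n");

    free(block.nativeBytes);
    return 0;
}

The design choice worth noticing is that the translation is cached per machine, so the user pays for it at install or first launch rather than on every call.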
And with processors continuing to get faster and cheaper, and multiprocessor designs starting to become available, emulators might be the big win after all. If you can add more processors to run your emulators faster, you might be able to achieve near-native performance through emulation. Just think: if you want to run Windows applications faster, just keep adding more PowerPC chips to your Macintosh until it's fast enough!
Independence from hardware architecture is getting easier as well. In modern OS architectures, a hardware abstraction layer separates the OS kernel from the particular hardware implementation, making it easier to port the OS to different hardware platforms. And developers of new platforms are trying an alternative to the de facto standards of the Macintosh (controlled by Apple) and the Intel-based PC architecture (controlled by nobody in particular). The result is a set of reference platforms: hardware designs that assure certain capabilities across different vendors' designs. The last major reference platform, ACE, was built around Windows NT and the MIPS chip; the current hot platform, PReP, is based on the PowerPC chip and AIX. If reference platforms dominate the landscape in the future, it should be easier to write code that runs indifferently on multiple platforms.
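As an illustration of that separation, here is a minimal sketch in C of a hardware abstraction layer: the kernel code is written once against a table of function pointers, and each board design supplies its own table. The HALOps structure and the two board implementations are hypothetical, not any shipping vendor's interface.

/* Hypothetical sketch: kernel code calls through a HAL table instead of
 * touching board-specific hardware directly. */

#include <stdio.h>

typedef struct {
    const char *boardName;
    void (*enableInterrupts)(void);
    void (*disableInterrupts)(void);
    unsigned long (*readTimer)(void);
} HALOps;

/* One hypothetical board implementation... */
static void boardA_enable(void)  { /* poke this board's interrupt controller */ }
static void boardA_disable(void) { /* ...and mask it here */ }
static unsigned long boardA_timer(void) { return 1000; }

static const HALOps boardA = { "Board A", boardA_enable, boardA_disable, boardA_timer };

/* ...and another, with different hardware underneath. */
static void boardB_enable(void)  { }
static void boardB_disable(void) { }
static unsigned long boardB_timer(void) { return 2000; }

static const HALOps boardB = { "Board B", boardB_enable, boardB_disable, boardB_timer };

/* The kernel is written once, against the table, not against the hardware. */
static void KernelTick(const HALOps *hal)
{
    hal->disableInterrupts();
    printf("%s: timer = %lu\n", hal->boardName, hal->readTimer());
    hal->enableInterrupts();
}

int main(void)
{
    KernelTick(&boardA);
    KernelTick(&boardB);
    return 0;
}

Porting the OS to a new board then means writing a new table, not rewriting the kernel.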
Finally, APIs are crossing the hardware boundaries. Both OpenDoc and OLE 2.0 are cross-platform, though they don't isolate you from other toolbox calls. Hosting layers like XVT and Novell AppWare Foundation add surprisingly little overhead to run the same API on different underlying toolboxes. And future operating systems like Taligent's Pink system and IBM's Workplace Shell are meant to host multiple personalities on one OS kernel, so your choice of hardware vendor doesn't dictate your choice of API, and therefore of application software.
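Seen from the application's side, a hosting layer looks roughly like the sketch below: one portable call routed to whichever toolbox is underneath. PortableBeep and the BUILD_FOR_MAC and BUILD_FOR_WINDOWS symbols are hypothetical names for illustration; real hosting layers such as XVT cover windows, menus, and events the same way.

/* Hypothetical sketch: one portable call, several native back ends. */

#include <stdio.h>

#if defined(BUILD_FOR_MAC)
  /* On the Macintosh this would include <Sound.h> and call SysBeep(). */
  static void NativeBeep(void) { /* SysBeep(30); */ printf("Mac Toolbox beep\n"); }
#elif defined(BUILD_FOR_WINDOWS)
  /* On Windows this would include <windows.h> and call MessageBeep(). */
  static void NativeBeep(void) { /* MessageBeep(0); */ printf("Win32 beep\n"); }
#else
  static void NativeBeep(void) { printf("\a"); }  /* plain C fallback */
#endif

/* The application sees only this one call, on every platform. */
void PortableBeep(void)
{
    NativeBeep();
}

int main(void)
{
    PortableBeep();
    return 0;
}

The overhead is one extra function call per operation, which is why such layers can stay thin.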
So five years from now, our old landmarks - the instruction set, the hardware architecture, and the API - may have rotted and fallen. Will it be a total mix-and-match world? Will people be running Mac code in an emulator box on Windows NT on a Compaq PowerPC platform, or x86 OLE objects wrapped by OpenDoc running on OS/2 on a Macintosh with a Cyrix chip emulating the Pentium in microcode?
I say: yes and no. I expect that the majority of successful commercial software will be (more or less) compiled and built for a specific class of microprocessor, hardware platform, and API. It'll just be easier that way, both technically and in the marketplace. Though the technology might be able to jump through hoops, the channels and customers don't get over such fundamental taboos as incompatibility overnight.
But while compatibility may remain a litmus test, it'll no longer be a barrier. In-house developers will be able to compile something once and deploy it on their Mac, Windows, and UNIX machines, letting adapters and emulators take care of the details. Or you could take a product that's successful on one platform, test-market it in the emulator community on other platforms and, if it sells, then invest in the native port to increase your market share and competitiveness. Or (for extra credit) you could find clever ways to bridge the various environments, perhaps hooking up TAPI in SoftWindows to the GeoPort or AV capabilities on a Power Macintosh.
Old differences die hard. Even after technology has made them irrelevant, the distinctions of architecture will color people's thinking. Most conventional development will probably remain the way it's always been, but there may be some interesting new opportunities when the gaps between platforms are bridged over.