OpenDoc vs. OLE
Volume Number: 10
Issue Number: 8
Column Tag: New Technologies
OpenDoc vs. OLE
When elephants fight, only the ants get hurt
By Jeff Alger, Consulting Methodologist
A monolith of American industry threatens to unilaterally impose a standard by virtue of its sheer size and dominance of the market. Quickly a consortium develops to counter the move; a rival standard is proposed; the hype wars begin. The lumbering giant points to the fait accompli: its technology is already in use throughout the world and will soon be a standard part of its operating systems on millions of machines. The upstarts point to technical advantages of their proposal and darkly hint at the threat to democracy and family values that will loom if the industry titan is allowed to further consolidate its grip. The press eats it up, sensing a David vs. Goliath story in the making. The battleground: component object-oriented software. The industry titan: Microsoft. The upstart consortium: Apple and IBM? Wait a minute. Am I the only one who sees something weird about this alignment? Age-old IBM a frightened upstart? Entrepreneurial Microsoft the lumbering giant? What's going on here?
Unless you've been hiding underground the past few months - not a bad idea, actually - you've been ducking blows in the OLE (Object Linking and Embedding) vs. OpenDoc slugfest. Both address roughly the same issues: how to combine object-oriented software components into heterogeneous documents, how to exchange data between programs, how to store and retrieve objects, how to distribute them across a network, how to allow objects to interoperate across programming languages and machines. Given these stakes, the battle may well be what both sides are claiming: the key to computing in the second half of the '90s. But sorting out which is better and for what purposes is like trying to get directions to the nearest town from two people who are rolling on the ground trading rabbit punches. Let's step back to a safe distance and take a dispassionate look at what's really going on.
Of Objects and Markets
The technology at issue can be divided into four basic tiers of standards.
1. A standard for the binary representation of objects.
2. A standard mechanism for storing and retrieving documents containing arbitrary collections of objects.
3. A standard for how to allow users to combine arbitrary user-visible objects in their documents.
4. A standard mechanism for distributing and sharing objects between applications and over a network of dissimilar machines.
The first issue has to do with how objects are stored in memory. Microsoft's standard is known as the Component Object Model (COM). The standard adopted for OpenDoc is IBM's System Object Model (SOM). There is more than simple storage at issue. How do we allow different programming languages to create and access common objects? For that matter, just what is an object, anyway, and where should the line be drawn between an object at run time and the program source code used to create it? These are fundamental issues, and the winner - if there is a single winner - of the current object wars will have a huge influence on object-oriented language designers for many years to come.
Older concepts of file systems and databases just can't cope with the storage problems we face today. Both OLE and OpenDoc contain new standards, along with libraries that implement them, for storing and retrieving what are known as compound documents: files that contain nested collections of diverse sorts of objects. This, too, is likely to give the winner - if there is a single winner - a lot of throw weight in battles over other issues in the next generation of object systems.
Ah, yes, let's not forget the users. They don't directly see binary objects or compound document files or programming languages. To them an object is something they can point to on the screen or on hard copy and give a name. They rightly expect objects to be consistent and well-behaved no matter where they appear. They get frustrated at the arbitrary distinctions we've traditionally drawn between data that belongs in one sort of application and data that belongs somewhere else. "Wouldn't it be nice," Joe User might well say, "if I could simply copy or drag anything I can select into any other document in any other program and have it still work?" Indeed. That is the stuff of the user interface and portions of OLE and OpenDoc.
Finally, the arcane world of distributed objects. Neither side really has them yet, but in the struggle for control of tomorrow's operating systems this is shaping up as potentially the most important part of the war. In a world of multiple machines and multiple virtual memory spaces, how do we tell one object from another? How are they created and destroyed? Do we store objects in their own little mini-programs with separate address spaces, or combine them into a single memory pool? Can an object be shared between multiple documents or simply copied? Can one document contain objects that live on multiple machines? How do you handle failures of networks, machines and users? Must a programmer write code having already decided which objects are to be distributed, or can those decisions be deferred?
With stakes like these, little wonder that the opposing sides are weighing in so heavily and so early. This is at once the next generations of object-oriented software technology, graphical user interfaces, and network operating systems. Waiting on the sidelines are the anxious software developers whose products must interoperate using one standard or the other; corporate developers hungry to lower their costs; retailers wondering what this means to the ways we package, price and distribute software; and programmers everywhere trembling at the thought that everything they once knew about writing applications might have to be relearned, and knowing that they'd better pick the right horse or be left behind.
This issue contains more detailed descriptions of OpenDoc and OLE elsewhere, so I won't go into a great deal of technical detail here. Instead, I'll focus on comparing and evaluating the two products.
Back To School
Of all that has been said and written about OLE and OpenDoc, the only incontestable claim is that both are complicated pieces of technology.
The OLE documentation numbers in the hundreds of pages, but if you really want to know what's going on you have to read Kraig Brockschmidt's 900+ page book Inside OLE 2.0, along with numerous more recent papers from Microsoft. Come to think of it, Brockschmidt's book doesn't even begin to talk about OLE Automation, the substrate upon which scripting is built in OLE. It wouldn't hurt to attend a few conferences to get a sneak preview of the upcoming Chicago and Cairo operating systems. Since OLE is best used from within the Microsoft Foundation Classes (MFC) class library, allocate a few mental cycles to become familiar with that set of documents as well.
As to OpenDoc, I haven't even been able to estimate the page count of the documentation yet. You can get a quick tour by poring through a couple hundred pages of OpenDoc preliminary - very preliminary - documentation, then wade into the 500+ page System Object Model (SOM) documentation from IBM. Oh, and if you haven't yet had the pleasure, take the opportunity to read up on the Drag and Drop Manager and the Code Fragment Manager (CFM), because they, too, are connected.
If you are trying to compare the two, be prepared to be inundated with all manner of position papers from both sides. The level of sniping in these black papers is exceeded only by the casual attention to the real facts.
Egad! Don't these people know that we all have jobs and families? Learning either of these technologies well enough to write a simple component takes a few days. Learning enough to design a major project looks to me more like weeks of reading and training backed by at least a few months of hacking. Learning enough to intelligently make a long-term strategic choice is nearly impossible, especially since one of them doesn't even exist yet.
Ghosts Of Objects Past, Present and Future
One of the surreal aspects of the debate is that it isn't always clear what's been there for a long time, what's new and what is still vaporware. The OpenDoc consortium has done a brilliant job of positioning OpenDoc, tomorrow's concept, against OLE 2.0, today's product. Microsoft plays into their hands by claiming OLE is better because it's already shipping, then in the same breath turning around and claiming that their vaporware will be better than OpenDoc's vaporware. This stuff gives me a headache. Let's start the comparison by defining our terms. The following is the situation as of this writing in mid-June.
The Past
On the Microsoft side, OLE 1.0 shipped in 1990 and OLE 2.0, a major enhancement, shipped early last year. It is available today in final form on all Windows and Windows NT platforms and on the Macintosh. Microsoft, lover of big numbers, claims there are about 1.5 million customers using OLE-aware applications and that most of the major software vendors are already committed to supporting OLE, most of them already pretty far along the learning curve.
On the OpenDoc side, there are a couple of precursor technologies such as the Drag and Drop Manager, but that's about it. In short, Microsoft is right: as of today, OLE is shipping and OpenDoc isn't.
The Present
As you read this, there are various alpha and beta versions zooming around both worlds, making it horribly confusing to keep score. OpenDoc is in release A6 on the Macintosh as of the Apple Worldwide Developers Conference in May. This includes a very early release of the IBM System Object Model implementation for the Macintosh. OpenDoc has not made its way to any other platforms as of this writing, although one piece of the underlying technology, SOM, is supported by IBM on AIX and OS/2.
Meanwhile, Microsoft has seeded a select group of 5,000 developers with a prerelease version of the distributed version of OLE. They have also begun beta testing of OLE Controls, a small-component extension to OLE that is a close analogy both to OpenDoc parts and to Windows VBX controls.
The Future
Microsoft is busy implementing their Windows API Everywhere strategy. They have licensed the source code to the Windows operating system to at least two other vendors who have demonstrated partial ports to Unix. They have committed to licensing the Windows and OLE source more broadly, although the terms have not been made public. Where Windows goes, so goes OLE. Microsoft is also working with DEC to refine COM somewhat, going from the Component Object Model to the Common Object Model. In addition to driving everyone nuts keeping the two identical acronyms straight, this will enable DEC to port OLE to other operating systems, including OSF/1. Microsoft claims that a port to MVS on IBM mainframes is also under way. It should be made clear what this means: the OLE standard, including the objects themselves, will interoperate across both Windows and non-Windows operating systems.
One of the worst-kept secrets in the computing world is the work Microsoft is doing on their next generations of the Windows and Windows NT operating systems. Daytona, the successor to Windows NT and already in beta testing, will include native support for OLE 2.0 built into the operating system. Chicago, the successor to the Windows operating system and a major enhancement of the user interface, blurs whatever line is left between where the operating system ends and OLE begins. Cairo is an object-oriented operating system built on OLE from the ground up. Another technology Microsoft already has well into the pipeline is an integration of OLE with ODBC database support. All, with the possible exception of Cairo, are scheduled for release within roughly the same time frame as a final release of OpenDoc, so it is fair game to project them into the equation. (I consider anything within six months to be roughly the same time frame.)
The OpenDoc consortium has committed publicly to making the source code to OpenDoc available and to porting it to a variety of platforms. They point to support from systems software vendors Apple, IBM, SunSoft, Taligent, Novell, and Xerox, "with many, many others expected to follow," according to one white paper (italics added). It is unclear from their white papers and documentation exactly what is being supported by all these vendors: OpenDoc, or only pieces such as SOM. The phraseology cryptically lapses into support for "CILabs technologies," CILabs being Component Integration Laboratories, the formal entity behind OpenDoc.
Recently the OpenDoc crowd has announced that OpenDoc will be written in such a way that it can interoperate with OLE. Details to follow.
Binary Object Models
Both products allow you to create objects in such a way that they can be accessed from programs written in other languages. This is the stuff of so-called binary standards for objects, a subject that has kept lots of folks busy over at the Object Management Group for quite a while. The two approaches are Microsoft's COM and IBM's SOM, and they couldn't be more different.
The Component Object Model
COM embodies a sharp separation between binary objects and the object-oriented programming languages used to create them. The COM model is based on delegation, not inheritance. Rather than have one class derive from another, in COM you would likely form what is known as an aggregate of two objects. The container object would delegate to the component object in much the same way that a derived class would pass along messages to its base class. Aggregation can also be used to implement the equivalent of complex data members.
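To make the delegation idea concrete, here is a minimal C++ sketch. The class names are invented, and this shows simple containment-style delegation; true COM aggregation adds a shared controlling IUnknown that I'll skip here.

// Hypothetical sketch of COM-style delegation. HRESULT comes from
// the OLE headers; ISpellCheck and TextPart are invented names.
class ISpellCheck {
public:
    virtual HRESULT CheckWord(const char* word) = 0;
};

class TextPart : public ISpellCheck {
public:
    // Delegation: forward the call to the contained component, much
    // as a derived class would pass a message along to its base.
    virtual HRESULT CheckWord(const char* word)
        { return fChecker->CheckWord(word); }
private:
    ISpellCheck* fChecker;  // the component being wrapped
};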
COM represents a C++ way of thinking of objects at run time. Methods are dispatched using something that you would swear was a vtable if you didn't know better. In fact, you can use C++ derivation to create your aggregates, as long as you add the interfaces to direct a client to the right base class or data member. Yet the model is not exactly C++. While servers may be written in terms of derivation, clients see a flat object space. There is more mathematical consistency than in straight C++; if you do the equivalent of a typecast from derived to base, you are guaranteed to be able to typecast back to the derived class in a typesafe way. This reflexivity is one of several rules of consistency imposed by COM to maintain type safety.
Speaking of type safety, COM is very safe. The way you access an object is through an interface. Each interface is uniquely identified by an interface ID (IID). The way you obtain an interface from an object is simple: you ask it. The specific way one would typecast from derived to base, for example, would be something like this:
Derived* d = (Derived*)object->QueryInterface(kDerivedID);
This returns either a valid pointer or NULL to signify that the object doesn't support that interface. That is, in order to typecast any object you have to go through the object, which can verify that the interface is valid for that object. This is all done in either C or C++. It is easy to wrap existing classes with COM-style interface wrappers.
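For the record, the shipping COM convention is a little more ceremonious than my shorthand above: QueryInterface returns an HRESULT and hands the interface pointer back through an out-parameter, and every interface inherits reference counting from IUnknown. Something like this, where IDerived and IID_IDerived stand in for a real interface and its ID:

// The actual COM calling convention, sketched.
IDerived* d = NULL;
HRESULT hr = object->QueryInterface(IID_IDerived, (void**)&d);
if (SUCCEEDED(hr)) {
    // ... use the interface ...
    d->Release();  // COM objects are reference counted
}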
Normally, you will use interfaces that represent standard C++ class interfaces. COM also provides a special interface called IDispatch that essentially allows you to query an object about its capabilities at run time and dynamically dispatch to it using symbols, not precompiled function calls. This provides low-level support for scripting and for high-level development tools that need a dynamic environment. Because of the delegation model, an object can change its interface and implementation at run time, within certain limitations to ensure consistency.
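In outline, a late-bound call through IDispatch goes in two steps: map a name to a dispatch ID, then invoke it. A rough sketch, with an invented method name and no error checking:

// Dynamic dispatch via IDispatch, sketched. pDisp is an IDispatch*
// obtained via QueryInterface; "Recalculate" is a hypothetical method.
DISPID dispid;
OLECHAR* name = OLESTR("Recalculate");
pDisp->GetIDsOfNames(IID_NULL, &name, 1, LOCALE_SYSTEM_DEFAULT, &dispid);

DISPPARAMS noArgs = { NULL, NULL, 0, 0 };
pDisp->Invoke(dispid, IID_NULL, LOCALE_SYSTEM_DEFAULT,
              DISPATCH_METHOD, &noArgs, NULL, NULL, NULL);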
What is involved in using COM? Basically, you just write the interfaces and the code. If you are using C++, it's no more complicated than writing any other C++ program, but with a couple of extra member functions to write, such as QueryInterface, and perhaps support for OLE Automation. If you are a C programmer, there are some nifty macros that reduce the problem to about the same level of complexity as using C++, although I have a hard time understanding why anyone would want to use C rather than C++ for this particular purpose. If you are using other languages, you have to check with the language vendor. Many are being adapted to work with COM, but it's not a universal standard yet. As they are adapted, however, the objects you create using, say, C++ code will be accessible and manipulable from, say, Smalltalk and vice versa. Learning to use COM involves perhaps a few dozen pages of manuals and less than a day of effort.
The System Object Model
SOM frightens me. It really does. It's as if a bunch of people with the software equivalent of bazookas sat around one day and said, "Hey, let's see if we can solve a problem nobody cares about by taking all the major object-oriented languages and blasting them into smithereens." Brrrr. What's even more frightening is that no one seems to be taking IBM to task for this unbelievable act of engineering hubris - except Microsoft, and that is written off as competitive snarling. The rest of OpenDoc is a reasonably good piece of engineering, but if SOM comes within a hundred yards, duck.
The stated objective behind SOM is to provide language-neutral access to objects. It is based for the most part on the Object Management Group's (OMG) efforts to standardize the binary representation of objects. The run-time model is dynamic, supporting run-time inheritance and polymorphism. Thus, you could use all the features of a Smalltalk or Common Lisp Object System (CLOS), such as creating classes or changing their derivation on the fly.
However laudable these aims, the result is an over-engineered mess. To use it, you have to first describe your objects using yet another language, the SOM Interface Definition Language (IDL). This isn't just a minor detail, but another programming language to master. The syntax is sort-of-Smalltalk, sort-of-C, sort-of-C++, but in the end it is IDL. It is chock full of macros, modifiers, conventions, and -isms.
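To give you the flavor, a trivial class might be declared along these lines - a from-memory sketch with invented names, not verbatim SOM IDL:

#include <somobj.idl>

interface Counter : SOMObject
{
    void increment();
    long value();

    implementation {
        // SOM bookkeeping for binary compatibility across releases
        releaseorder: increment, value;
    };
};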
SOM isn't language-neutral; I would call it C++-hostile and sort of Smalltalk-friendly. The role of C++ is reduced to implementing and calling specific member functions. The C++ notions of class, derivation, constructors, destructors, operators, overloading, etc., are all subsumed by the IDL model. SOM does not support private, protected and public member specifications, unless you consider surrounding a member function with #ifdef __PRIVATE__/#endif to be support. In fact, it is hardly worth bothering to use C++ with SOM, since C does just about as well with what's left. SOM is a slap in the face of the C++ community.
OK, so you don't like C++. If what you really want is dynamic binding and such, Smalltalk or CLOS will do better than IDL at those tasks. Garbage collection is based primarily on reference counting. While this is also true of COM, COM doesn't pretend to provide language neutrality. It is difficult to envision true Smalltalk- or CLOS-style programming using a reference-counted garbage collection scheme. SOM doesn't have anything resembling code blocks from Smalltalk and Objective-C or macros from CLOS. There obviously isn't anywhere near the diversity of libraries for SOM that there is for Smalltalk, Objective-C or CLOS. It isn't worth going further, because once I started I wouldn't stop for several days. To paraphrase the former candidate for Vice President and current Treasury Secretary: I worked with Smalltalk. Smalltalk was my friend. Sir, you are no Smalltalk.
Interoperability? COM is living proof that you can provide an equivalent level of language interoperability without first tossing conventional languages into a Cuisinart. Whether you like COM or not is not the issue; the issue is that SOM is to interoperating what a flamethrower is to mowing the lawn. Yes, the lawn won't grow back for a while, but haven't we forgotten something important about lawns?
The development lifecycle is bizarre. First, decide which objects should be treated as SOM objects and which as normal, local objects. If you later change your mind, it's bad news unless you're paid by the hour, because you'll have to pretty much start over. Next, write a description of your objects in IDL, then compile them using an IDL compiler. This funnels your description through an emitter which spits out template C or C++ or Smalltalk or whatever code. Fill in the blanks. Compile again, this time using your language compiler. On the client side, emit stuff to include with your program. Write your program. Compile. In short, take all your worst nightmares about software management and play them at fast forward with the color knob twisted all the way to green.
The funny thing is, there isn't even a significant problem solved by SOM. Other than the vendors who forked over big bucks to participate in the Object Management Group, I haven't noticed a particular groundswell of opinion that language interoperability is a big issue. Language vendors have managed to do a creditable job at that without having to toss everyone's salad. A binary standard for how C++ objects are represented in memory would achieve 90% of what SOM does with about 5% of the hassle, because then C++ itself could also serve as the object interface description language. Extensions to C++ to handle intermixed dynamic and static binding of methods - permitted by the language spec, by the way, which does not require the use of vtables - would solve the rest. Even without that, as COM demonstrates, idioms for using C++ more intelligently can solve just about any problem a developer will encounter in real life. That's what James Coplien's book Advanced C++ Programming Styles and Idioms is all about.
It is telling that in OpenDoc one is only expected to use SOM for a small subset of objects that have to be accessed externally. The rest is conventional C++, including almost all of the OpenDoc Parts Framework class library.
Documents and Storage
Both products allow you to store arbitrary, nested, versioned objects in compound document files. Both use streaming as the primary means of turning an object into bits and back. Both provide a way to browse through a compound document file and extract only those pieces you care about. Both provide a way to maintain multiple versions of a document or specific objects within it. There are differences in the area of document storage, but not compelling ones in either direction.
The one major difference coming up is that the Structured Storage model of OLE is being built into the file systems of upcoming Microsoft operating systems. This will provide a higher level of protection and performance than one can expect from Bento, the underlying storage technology of OpenDoc. It will also fix a nagging problem with the current version, which uses file path names to locate things: if the user moves a file, OLE might not be able to find it when needed. SOM also uses path names, but OpenDoc apparently will stick to the Macintosh model of file references, which stay the same as a file is moved around.
Structured Storage has slightly better features for recovering from a crash, including a very good feature called Transacted mode, in which all changes are considered temporary until all are committed together. Under OpenDoc, changes are written incrementally, possibly leaving a file in an inconsistent state following a crash, at least as far as the user might be concerned. Structured Storage also makes it a little easier to implement a Revert command without having to duplicate the entire file, only the parts that change; OLE keeps track of which is which.
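To make Transacted mode concrete, here is roughly what the client side looks like against OLE's Structured Storage interfaces (error handling elided; the file and stream names are arbitrary):

// Transacted compound file, sketched: changes are invisible to the
// disk image until Commit, and a crash before Commit leaves the file
// in its last committed state.
IStorage* pStg = NULL;
IStream*  pStm = NULL;

StgCreateDocfile(OLESTR("report.doc"),
                 STGM_READWRITE | STGM_SHARE_EXCLUSIVE |
                 STGM_TRANSACTED | STGM_CREATE,
                 0, &pStg);

pStg->CreateStream(OLESTR("Contents"),
                   STGM_READWRITE | STGM_SHARE_EXCLUSIVE | STGM_CREATE,
                   0, 0, &pStm);

static const char kText[] = "New text for this version";
pStm->Write(kText, sizeof(kText), NULL);

pStg->Commit(STGC_DEFAULT);  // all-or-nothing: this is the Save...
// pStg->Revert();           // ...and this would be Revert

pStm->Release();
pStg->Release();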
The User's Perspective
OpenDoc looks very good at the user level compared to OLE. Apple has done a very good job of thinking through how a user will interact with parts that have different commands and different user interfaces for those commands; i.e., menus. In OpenDoc, the integration of the parts into a single user-visible document is pretty close to seamless. Click on an object and the user interface changes state as appropriate to that object. New menus show up, old ones go away, windoids come and go as they should. The standards for borders and other user feedback seem thoughtfully designed.
Comparing OpenDoc's user interface directly to OLE's is difficult because there are several levels of OLE support possible. Where OpenDoc has essentially one interface and architecture, OLE can potentially support several. This is not so much because the user will appreciate it as that OLE presents a graduated scale of options to the developer, depending on the degree of integration you want or how OLE-capable you are.
A minimal treatment is to turn a normal Windows application into an OLE-aware application, capable of being an OLE server. That is, the application's document becomes a component in some other application's - a container's - document. When the user activates the embedded object, the originating application is fired up and the actual editing takes place in a separate, temporary window. This is a slight extension to the user's concept of using two separate applications: there are still two apps, but now the data is combined. It is not necessary to have the original app around if all you want to do is draw and print, since the PICT or metafile needed to do those is, er, embedded into each OLE server object. (By the way, OLE automatically turns PICTs into Windows metafiles and back when objects migrate between Macs and Windows machines.) Only if the user needs to edit or you want to do something fancy like animation or sound do you need the original application installed. Most of the work involved in becoming an OLE server is in supporting a stream interface for the clipboard and drag and drop to use in moving the data around.
On the receiving end is the OLE container, an application whose documents can contain OLE components. Like becoming a server, turning an arbitrary application into an OLE container is relatively easy although more involved than writing servers. A container-server is simply a combination of the two techniques. At this level one gets the use of drag and drop and OLE-style clipboard use.
It is also possible to create OLE servers that edit in place, rather than opening a separate window. These modify the user interface environment around them in much the same way that OpenDoc parts do: menus and toolbars swap to reflect the currently active server. This is known as in-place activation.
The next and most common treatment is an in-process server. This is a dynamic-link library (DLL) that runs as part of the client application, though it is loaded at run time. This allows lightweight display or editing code to go with the object without having to bring along the entire originating application.
Finally, there is the future: OLE Controls. These are direct analogies to OpenDoc parts. They are scriptable, they activate with single clicks, not double, even on deeply nested objects, and they run as part of the client process. In the long run they look like a good bet to replace VBX controls in the Windows world.
So, which user interface is better? Unfortunately, the comparisons by the combatants aren't always fair or accurate. More than in any other arena, one has to ask, "What version are you comparing to, today's or tomorrow's?" If you compare the user experience of OpenDoc parts to OLE 2.0's out-of-place servers, OpenDoc wins, hands down. Users will appreciate not having to remember when to use a single click and when to use a double click to get the job done, and not having to understand much about the underlying nested structure of their objects.
If you compare OpenDoc parts to OLE Controls, the differences become pretty minor both ways. Both activate in exactly the same way, with a single click, even on nested components. Both activate in place. Both run in the same task space as the client application, although an OLE Control can run in its own space if written that way. OpenDoc still maintains a small edge because of its support for arbitrarily shaped borders around objects; as of today, OLE is limited to rectangular borders. However, Microsoft has stated publicly that they, too, are considering supporting funny-shaped embedded objects, so that distinction may not last for long.
So which user interface is better? Today, OLE 2.0, because OpenDoc isn't here yet; this is one of those issues that has gotten clouded by comparing OLE today to OpenDoc tomorrow. Tomorrow, when OLE Controls and Cairo have come along, I suspect OpenDoc will still have the edge, though both will be excellent treatments.
Distributing the Objects
Finally, let's talk about what happens when we stop looking at individual applications and start looking at entire networks. COM becomes a network-wide distributed object system by simply installing the Networked OLE DLL in place of the OLE 2.0 DLL. That's it. No recompilation, no changes to source. OpenDoc requires the use of the Distributed System Object Model, or DSOM, the distributed extensions to SOM. DSOM is certainly a creditable implementation of distributed objects and the techniques used are very well-established. However, under straight DSOM, you have to decide up front what you want to distribute and what you don't. The interfaces for creating, destroying and in some cases using objects under DSOM are quite different from those under straight SOM. The programming needed to implement DSOM servers adds even more complexity. This is not a trivial retrofit to existing code. Plus, it's SOM++ and all that it implies.
Apple states that OPF will hide the differences between SOM and DSOM, and between local and remote objects. The A6 release code and documentation do not indicate how that will be done. [By the time you see this, Apple may have made this clear through a subsequent release. - Ed. stb]
As Microsoft is apt to point out, this relative simplicity of COM does not mean that the COM implementation is less robust; the opposite seems to be the case. For example, DSOM does not have any built-in capability to detect and resolve deadlocks while COM handles them properly. Programmers must manually design for support of multithreaded object servers under DSOM; not so under COM because of the operating system-level infrastructure underneath.
White Noise
There is a recurring problem with comparisons of the two technologies emanating from both camps. Both approaches are fundamentally sound, and many of the problems can be fixed with little or no effort. For example, one of the most vocal criticisms of OpenDoc from Microsoft is that OpenDoc parts must be accompanied by the corresponding part handler code or they can't be displayed or printed. In other words, if you don't have the right part handlers installed on the user's machine, she will just see big gray boxes where her objects should be. However, the OpenDoc folks could solve that at a stroke by requiring that all OpenDoc parts store a PICT image as a property, thereby allowing displaying and printing with or without the right part handler code.
There are outright distortions that show just how sloppy the discussion has become. For example, one supposed knock against OLE is that to activate a nested object one has to double-click on the outermost object, then keep activating inward one object at a time. There are two problems with this: it is within the OLE spec to implement activation outside-in or inside-out à la OpenDoc, and it is also within the OLE spec to use a single click rather than a double click to activate an object. OLE Controls illustrate these two facts by implementing an OpenDoc-style activation model - inside-out, single-click - while staying within the OLE specification. Apple also claims that the way OLE finds server code, using file pathnames that a user can easily screw up, is a serious flaw in OLE. What Apple doesn't point out is that Cairo, which is supposed to ship not too far from OpenDoc final, fixes all that. Granting equal time, Microsoft has decried OpenDoc's requirement for part viewers on all the machines in a network that might need to access parts. This, it is claimed, requires engineering the software separately for each platform - neglecting to mention that the OpenDoc Parts Framework (OPF) class library is designed to provide source code compatibility across platforms, and that only on machines supporting the Windows API is similar source compatibility assured for OLE. Both sides cite the supposedly extraordinary difficulty of using the other guy's product, forgetting to mention that each has a class library - OPF on the one hand and the Microsoft Foundation Classes on the other - that dramatically simplifies things. Excuse me, my headache is back.
To Market, To Market, Lickety Split
Both OpenDoc and OLE present acceptable user interfaces. There are differences, and as of today's specifications OpenDoc gets the nod. How much of a nod depends on your particular priorities, for while OpenDoc does, in fact, do a better job of improving the user experience, OLE is likely to get you into the component software market faster and is certainly not a bad user interface even today. In the time since Apple first started working on Taligent and later OpenDoc, Microsoft has released a steady stream of new, incrementally improving technologies, bringing the market and the industry along with them slowly but very surely. There are now hundreds of vendors of everything from DLLs to VBX controls to OLE objects that have experience in the marketplace for component software. It will take years for the Apple world to gain that level of experience with how to develop, package, price and support the sort of components needed to make OpenDoc a success.
Microsoft provides a growth path into OLE as seamless as the user interface OpenDoc presents across parts. Companies that already know how to write Windows applications can start shipping OLE-aware applications with an absolute minimum of effort and with little or no rewriting or redesign of existing code. The ramp from there is gently sloped until one is shipping OLE Controls, the full-fledged equivalent of OpenDoc parts. Going from single-process COM to fully distributed objects involves no more than installing a new DLL. There are no new programming languages to learn, as there are with SOM. It is foolish to claim that all of these dollars-and-cents issues should be swallowed up by a single-minded quest for better user interfaces.
There are two different business models at work: Apple's quantum-leap approach to software and Microsoft's continuous incremental refinement. I suspect there is room in the market for both, as long as the technological differences are narrow enough. Eventually, I expect this intense competition will provide a case study in the way market economies are supposed to work: both the OpenDoc consortium and Microsoft are being pushed to adopt the best features of the other, and this is improving the lot of consumers on both sides of the aisle.
The Mid-Term Report Card
OK, let's cut through all this and reduce it to letter grades. To do this, I'm going to use a reasonable extrapolation of what will be available within twelve months; anything else gets too messy for a direct comparison.
Subject                        OLE    OpenDoc
Binary object standard         B      F
Compound document storage      B      B-
User interface                 A(1)   A+
Distributed object support     A+     B(2)
Ease of learning               B      C
Ease of adoption (3)           A      C
Overall grade                  A-     B-
(1) Assumes OLE Controls and Cairo. Without these the grade would be C+ (no pun intended).
(2) This is purely for the DSOM extensions, since SOM itself is graded separately.
(3) Principally the smoothness of the growth path from existing development practices.
Overall grades are weighted evenly across all subjects. I won't quibble with anyone who wants to assign the user interface more weight, which would improve OpenDoc's grade but not lower OLE's. Were OpenDoc to use COM rather than SOM, the overall OpenDoc score would climb dramatically. Well, I can dream, can't I?