Using SourceServer
Volume 9, Number 11
Column Tag: Think Top 10
Managing SourceServer Projects
By Christopher Prinos, THINK Technical Support, Symantec Corp.
With the introduction of SourceServer in version 6 of Think C / Symantec C++, developers have an integrated way of handling source code control from the Think environment. This is especially important for large projects, where the sheer number of source files and the multiple programmers on the development team would otherwise make revision control a real challenge.
While SourceServer aids in the maintenance of such large projects, it won't automate the management completely. Some specific strategies are needed to be successful with larger projects. This article presents some of those techniques, and shows you some tips for keeping projects manageable. Even if you use SourceServer for single-person and/or smaller projects, many of the ideas can still be applied to improve the performance and organization of SourceServer projects.
Version 6.0.1 Notes
There have been a significant number of changes and fixes to the SourceServer interface between versions 6.0 and 6.0.1 of the environment. This article assumes you have 6.0.1. If you're still working with version 6.0, you can get an update from AppleLink, CompuServe, America Online, or by calling Technical Support.
What is a Large Project?
For the purposes of discussion, a large project means any project with too many files for a single SourceServer database. Exactly how many that is depends on a lot of factors, including the speed of the Macintoshes you're using and whether or not you have the database located on a server. Even so, if you've worked with SourceServer you know that it's not long before response becomes frustratingly slow. There is a way to prevent this, however.
Since we'll be talking about large projects, I'll assume that there'll be more than one programmer working on the source. Right off the bat, this means that someone will be accessing databases remotely. (Note: even when using servers, programmers need to access the databases with their own, local copies of SourceServer.) Even if the hardware you have to work with is the latest and greatest, you won't be able to load those 300-file TCL projects into a single database and expect it to work well.
OK, But How Many Files is Too Many?
As a guideline, you should limit the number of files per database to no more than 50. That's a definite upper limit; a better limit would be 25-30. Although this requires more work to set up the projects, the return on investment is good because of the time you and the other developers save in the long run.
One result of using more than one SourceServer project is that it requires some planning ahead of time. The precise setup of your multiple SourceServer projects is up to you, but I'll present one way that can be effective, and that is a logical extension of the way the files exist on disk (as opposed to stored in a SourceServer project).
I'll be using an extended example of a project with several folders of source files. Since the Finder also gets bogged down with too many files in a folder, I generally don't have more than 50 files per folder anyway. This sets up a painless mapping of folders on disk to SourceServer projects: each database represents the contents of one folder. Another way to do it might be according to the segmentation in your TPM project file, i.e., all the files from Segment 2 in one SourceServer project, files from Segment 3 in another, and so on.
No matter which way you do it, the end result is that you end up with lots of little SourceServer projects. To make things complete, you'll want to nest them so that you have access to all projects without individually mounting/dismounting them.
Setting up Nested SourceServer Projects
Creating a set of nested projects is no different from creating a normal project. In the example that we'll be looking at, the top level of the nested SourceServer projects will be called Dino Sources DB. This project will contain main.cp and its header file. The Dino Sources DB folder will contain a ProjectorDB file after the project is created, as do all new SourceServer projects. Later, we'll put other SourceServer projects inside the Dino Sources DB folder as well.
After creating the top level SourceServer project, a new SourceServer project will be created for each sub project. In my example, I have four folders with source and header files, so I'm going to use a total of four SourceServer databases to keep track of them. The picture below shows the file setup after all the databases have been created, but before the nesting has been performed. The folder Dino++ has the Think Project Manager file and two source files in it. Its database will be Dino Sources DB. Under the Dino Sources folder, there are three more sub folders. Core Headers contains a little over 40 files, as does Core Sources. Each of these will have its own SourceServer database. The folder Other has both source and header files, but only about 18 total, so they will all go in one database.
The following picture shows the results of making the SourceServer databases into a nested hierarchy. There's no magic here... the folders were just moved in the Finder. Note the correspondence between source folders and their database folders. Note also that I've nested the Core Headers DB project under Core Sources DB. That doesn't quite match the folder setup of the actual source and header files. In reality, this doesn't much matter. As you'll see in the Check In/Out dialogs later, all nested projects are listed in a single popup anyway. The only thing that nesting does is ensure that all sub projects are mounted at the same time. So, in our current configuration, mounting Dino Sources DB mounts all the projects. If we instead mount Core Sources DB, we will only get Core Headers DB along with it, and not Other DB.
By the way, this is the real reason for nesting the header file database inside the Core Sources DB folder. If we are interested in looking at the Core sources, chances are good we'll want access to the header files also.
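Since the original pictures don't reproduce here, the nested layout just described can be sketched from the text. The folder names are the ones used in this article's running example:

```
Dino++                   (TPM project file, main.cp and its header)
Dino Sources DB          (top-level database; contains the ProjectorDB file)
    Core Sources DB      (a little over 40 source files)
        Core Headers DB  (a little over 40 header files)
    Other DB             (about 18 source and header files together)
```

Mounting Dino Sources DB mounts everything beneath it; mounting Core Sources DB mounts only Core Headers DB along with it.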
Once the databases are properly configured, it's time to go through Check In. For some projects, files are introduced incrementally as the project develops. Here, we are assuming most of the files exist to begin with. Before starting the checking, make sure that the TPM project file is up to date (all files compiled). If it's not, then some of the source or header files may not show up in the Check In dialog. Doing a mass Check In of files into nested databases is a little more time consuming than using a single database, because you'll have to pick and choose your files for each sub database. Notice in the picture below that only the Core_xx.cp files have been selected, and that the Project: popup menu at the upper left indicates Core Sources DB as the current project. Be sure that the current project is correct when you do the Check In.
After the Check In, the customary write-lock icons will appear next to the checked-in files (assuming the default Check In, which keeps a read-only version of the file).
You will want to continue the Check In process for all other types of files and their respective databases, again keeping in mind that the current project must be correct for each phase of the Check In. The Check In dialog below shows the setup after we have checked in all the Other files into Other DB. Note that it contains both source and header files, since we had determined that the total number of files was small enough to avoid separate databases.
Making Use of Named Revisions
When working with a large number of source files, the Check In/Out process can become a lengthy one if you need to pick and choose a subset of particular files or revisions. Using the Named Revisions feature, you can let SourceServer do some of that work for you. The basic idea is that you can define a named set which always refers to the same files. The sets may be defined on a static (locked revision number) or dynamic (most current version) basis. To create a Named Revision, first choose the Name Revisions... command from the SourceServer menu. A dialog like the one below will appear. At the top is a popup menu showing the database that you are viewing. The top scroll list shows files from that database, and the bottom scroll list shows which files you have chosen from the database to appear in a named set. You can move files from the top list to the bottom using the All, Sources, Headers, and Add buttons. Likewise, you can remove files from the bottom list with the Remove button. The popup menu which says Named Set is left at that setting when you are creating a new set of revisions. If you click on it, you'll see a list of all the Named Revisions that you have created, allowing you to modify the contents of any particular revision.

In the example below, all the source files from Other DB have been selected to form a static set named Other .cp (B2). The Rev: popup's other choice is for dynamic revisions. Using a static set in this way allows you to define a particular set of files; in this case, all the Beta-2 copies of Other_xx.cp. This named set will always refer to those versions of the files, regardless of the most recent versions.
The next example shows the creation of a dynamic set, which will probably be used more often. In the Name Revisions dialog below, I'm defining a dynamic set called other.h to refer to all the header files from Other DB. Since this named set is dynamic, it will always refer to the most current version of each of these header files. Dynamic sets are what you want to use to define your working set of source files.
There is a limitation to the Name Revisions command that applies in particular to those using nested sub projects. You cannot define a single named set which has files from more than one SourceServer database, even if those databases are nested. This is important to know, since the dialog will allow you to set up just such a revision. In the Name Revisions dialog below, the currently selected database is Dino Sources DB, but I have files in the set from each of the other databases.
Although this seems like a perfectly natural thing to do, SourceServer won't go for it. If you go ahead, you'll get an error message similar to the one below. Note that it didn't flag the file main.cp as having a problem. That's because main.cp was part of the database selected (Dino Sources DB) when the dialog was dismissed.
If you need a working set that spans sub projects, the only way to get one is to manually define a similar set of names for each sub project. That is, you might have named sets that look like Core .h B1, Core .cp B1, Other B1, etc.
Once a named set is defined, you can use it to replace manual selection in the Check Out dialog. The example below shows a Check Out of the current Other_xx.h files using the other.h set. If a static set were being checked out instead, then the version number of each selected file would appear. For example, instead of Other_1.h being selected, it might show up as Other_1.h,2b2.
Check Out Directories
In the Check Out dialog, there is a Checkout To: popup menu which lets you define which directory files will be placed in once checked out. Normally, you'll leave this at Default Directories, as shown in the last dialog. When Default Directories is chosen, files are checked out to where they last resided on your disk, according to the TPM project file. Note that any file in a TPM project (.c, .cp, .r, etc.) has its location stored by the TPM project. This is in contrast, however, to the way header files are located. When you check out any .h file using Default Directories, a search is done to find a local copy of that file in your TPM project's folder tree. If the file is found, it's replaced by the one SourceServer retrieves from the database.
As an example, let's say that I wanted to modify the file Other_1.h. I would go through the normal Check Out process. I have a local copy of this file in :Dino Sources:Other:, so that's what gets replaced.
It is possible, however, that I don't have a local copy of the file. This would be the case if it was checked in with the Delete my copy option, or if someone else created the file and this is the first time it's being checked out. In either case, Default Directories won't be able to find a local copy of Other_1.h. Instead, it will put up an error alert similar to the one below:
As the alert notes, you still get the file you want, but it appears in your TPM project directory, not necessarily the location where it belongs. The Finder snapshot below shows the result of checking out Other_1.h when no local copy existed. Once the file is checked out, you can go ahead and move it to the Other folder without consequences as far as the way SourceServer or your TPM project uses the file.
You can avoid having to move header files around like this by always keeping a local copy of the header files that you work with. They will serve as place holders for the Check Out command. Alternatively, if you keep all the headers in a single directory, you can set that directory in the Check Out dialog, eliminating the need to keep local copies of the headers as place holders.
Using Newer During Check Out
Most of the time, the majority of the files that you are working with are read-only copies that you need to make your builds. As you make changes to the set of files that you have checked out as modifiable, others will be making changes to other files. The Newer button in the Check Out dialog gives you an easy way to update all the read-only copies. You do this by option-clicking on the Newer button. This selects every file that you have a local copy of, whenever that copy is not the current version. It will not overwrite files you have checked out as modifiable.
If you just click (instead of option-click), the Newer button will select only those files that you don't have local copies of. If you keep place holders for all your files, nothing will be selected by the Newer button.
Mastering the Command Line
If you've used Projector under MPW, then the SourceServer command line will be very familiar to you. In fact, it's exactly the same, with a few exceptions. Version 6.0.1 contains a SourceServer ReadMe file that details the command line commands and options, so I won't go into them all here, but I will touch on some of the more useful commands that cannot otherwise be used from the TPM.
One of the most useful command-line entries is the ProjectInfo command. ProjectInfo lets you retrieve information about which files were checked in or out and when, along with who was responsible and all the comments that were made. You can get that information on a project-wide basis, or narrow it down to certain files or date ranges.
As with all command line entries, you issue the ProjectInfo command by typing ProjectInfo followed by <control><enter>. Generally you'll do this in an untitled window. The example below shows a simple query that retrieves some of the information on files from the Other DB project.
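Since the screenshot isn't reproduced here, a query of this kind might look like the following sketch. The bare form shown (the command name followed by file arguments) follows MPW's Projector; the file names are from our running example, and the filtering options mentioned above are documented in the SourceServer ReadMe:

```
# Typed into an untitled window, then executed with <control><enter>.
# Reports the revision history, authors, and comments for the
# named files in the currently mounted project.
ProjectInfo Other_1.cp Other_1.h
```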
All of the discussion so far has been about files that we want under source control, but sometimes there are files that we want to remove as well. There's no way to do this from the SourceServer menu, but you can use the OrphanFiles command to remove all SourceServer information from a file. It works by removing the CKID resource from the file, thus removing all association with a project. Note that once you do this, you will not be able to check the file back into the revision tree from which it came. If you want to check it back in, it has to be done as a new entry.
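Based on the description above, orphaning a file is just the command name followed by the files to strip. The file name here is borrowed from our running example:

```
# Removes the CKID resource from the local copy of Core_3.cp,
# severing its association with its SourceServer project.
# The file can only re-enter a database as a new entry.
OrphanFiles Core_3.cp
```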
If you want to remove files from a database entirely, you can use the DeleteRevisions command. There are two ways to use this command. One is to remove an entire revision tree:
DeleteRevisions -y -file Core_3.cp
or you can use it to delete old revisions that you no longer need:
DeleteRevisions -y Core_3.cp,8
deletes all revisions up to 8. Note that you can't pick and choose which revisions to delete (3 to 6, for example); you can only do it by specifying all revisions up to a certain number. One last note on DeleteRevisions... you should always use the -y option. This dismisses a confirmation dialog that you would otherwise not be able to access.
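Putting the last two commands together, retiring a file from source control entirely might look like this sketch. Both commands and their options appear earlier in this article; the file name is from our running example:

```
# Strip the CKID resource from the local copy so it no longer
# references the database...
OrphanFiles Core_3.cp
# ...then delete the file's entire revision tree from the database.
# -y suppresses the confirmation dialog, which cannot otherwise
# be dismissed from the Think environment.
DeleteRevisions -y -file Core_3.cp
```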