Symbol Tables
Volume Number:6
Issue Number:9
Column Tag:Language Translation

Symbol Tables

By Clifford Story, Mount Prospect, IL

Note: Source code files accompanying article are located on MacTech CD-ROM or source code disks.

A. Introduction

This month, my series on Language Translation returns to lexical analysis, and I present the amazing new, improved Canon tool.

Parts of this tool are identical (or nearly so) to code presented in my third, fourth and fifth installments, and I will not repeat these parts this month (although they are, of course, included on the code disk).

Specifically, the tool is a filter program; I developed a skeleton filter program in my third installment. It uses no fewer than six state machines for lexical analysis and parsing; lexical analysis and state machines were the subject of my fourth part. And it uses the balanced binary tree routines I developed in my fifth part to implement a symbol table.

B. What the Tool Should Do

The Canon tool functions as follows: the program reads in a dictionary of substitutions, then reads input files, performs the substitutions as required, and writes the result. The difference between this Canon tool and the standard MPW Canon is that it will not perform substitutions within comments or strings.

The tool is controlled by the MPW command line. It takes several possible options, which may be in any order.

B(1). The Dictionary File

The dictionary file must be named on the command line, with the “-d <file name>” option. If no dictionary is named, the tool will abort.

The dictionary file’s format is simple: each substitution is specified on a separate line, with the identifier (according to the language’s definition of identifier) to be replaced first, followed by its replacement (which must also be an identifier). For example:

 blip blop

specifies that the identifier “blip” should be replaced by the identifier “blop” wherever it occurs.

There is a second form of substitution, which consists of only one identifier. All identifiers in the input that match the dictionary identifier will be replaced by the dictionary identifier. This can be used to force canonical capitalization.

Finally, the dictionary can include line comments. The tool will ignore everything between a ‘#’ sign and the end of the line. It also ignores blank lines.
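
Putting these rules together, a small dictionary (the entries here are made up) might look like this:

 # project-wide renamings
 blip blop # replace every blip with blop
 WindowPtr # enforce this capitalization everywhere

The first line is a comment, the second is a two-identifier substitution, and the third is a one-identifier entry that simply canonicalizes capitalization (assuming the tool is run without the “-s” option).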

B(2). The Input Files

Input files may be specified by simply naming them on the command line.

The input files should be either Pascal or C source files. The tool will read them according to their filename extensions: if the file name ends in “.p”, it will be treated as a Pascal file, and as a C file if it ends in “.c”.

If there are several input files, some “.p” and some “.c”, the first one named on the command line controls. If no input file has either a “.p” or a “.c” extension, then Pascal is the default.

If there are no input files named on the command line, the tool will read from standard input. The language will be Pascal.

The “-p” and “-c” options override all of the above language rules and force the language to Pascal or C, respectively. If more than one such option is specified, the last one controls.

B(3). Other Command Options

The “-o <file name>” option names an output file. If no output file is named, the tool will write to standard output.

The “-s” option will make the tool case-sensitive. The default is case-insensitive.

B(4). Example

Here is an example of the Canon command line:

 Canon -d dict file1 file2 -p > dummy

This tells Canon to read the input files “file1” and “file2”, performing substitutions from the dictionary file “dict”. The input will be treated as Pascal source, and the output will be written to standard output, which is in turn redirected to the file “dummy”.
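
A second, hypothetical command line that exercises the remaining options might be:

 Canon -d dict -c -s -o output.c main.c utils.c

Here “-c” forces the language to C (redundantly, since both input files end in “.c”), “-s” makes the substitutions case-sensitive, and the result goes to the file “output.c” rather than to standard output.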

C. Designing the Tool

You may have formed the impression that I like table-driven software. This program has no fewer than eight tables in it: two for character translation, one character classification table, four lexical analyzers and a parser. These are all kept in the resource fork.

Driving a program with tables makes the coding simpler. The price you pay is that the logic is hidden in a table, and consequently rather obscure. If you lose your notes, you may have to re-write the whole table to make a minor change! Assuming you hang onto your notes, however, tables make your program easy to change.

After I had written this program, I realized that I had forgotten about strings. Sure, I had a version of Canon that did not make substitutions within comments, but it still made them within quoted strings. So I added that at the last minute: I added a few lines and columns to the lexical tables in the resource fork, and changed two constants in the code. That was it.

C(1). Main Routine

The main routine reads the command line, sets appropriate flags, reads the dictionary into the symbol table, and finally filters the input file(s).

It reads the command line in two passes. The first pass is for setting flags; the second does the work. I need to set the flags before reading any files because I need to know the source language before I read the dictionary file.

After the first pass, the routine reads in the dictionary, opens the output file (if any), and then goes into the second pass. The second pass reads and filters each input file, writing the result to the output file (or standard output).

C(2). Case Sensitivity

We want the tool to be case-insensitive unless the command line option -s is used. This will require some modifications to last time’s symbol table routines (the only place where string comparisons occur).

One approach would be to transliterate the key strings before calling “strcmp”. I want to minimize changes to the symbol table routines, though, since I don’t intend to reprint them in this article.

Another way, the way I have chosen, is to write a case-insensitive version of “strcmp”. Then all I need to do is change the name of the call in the “insert” and “lookup” routines.

Probably the most efficient way would be to use the first method in “insert” and the second in “lookup”. Since all the comparisons in “lookup” are between a key string and keys in the table, and the table would already be case-insensitive, I’d need only a “half-case-insensitive” comparison routine for “lookup”.

Of course, I still need to allow for case-sensitive lookup, if the -s flag is set. What I’ll do is have two transliteration tables, one converting uppercase to lower, and the other a straight identity table. I’ll set a global pointer to point to the appropriate table for my comparison routine to use.
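
The tool keeps these two tables as ‘TABL’ resources (ID 4001 for the case-folding table, 4002 for the identity table, as the main routine below shows), so there is nothing to compute at run time. Just to make the idea concrete, here is a sketch of how the same two tables could be built in C; the names “buildcasetables”, “foldtable” and “identtable” are mine and do not appear in the tool:

// buildcasetables -- illustration only, not part of the tool
 
extern unsigned char  *CASETABLE; // the tool’s global, declared below
 
static unsigned char  foldtable[256];  // upper case folded to lower
static unsigned char  identtable[256]; // straight identity table
 
void buildcasetables(int sensitive)
 {
 int    index;
 
 for (index = 0; index < 256; index++)
 {
 identtable[index] = (unsigned char) index;
 if ((index >= 'A') && (index <= 'Z'))
 foldtable[index] = (unsigned char) (index - 'A' + 'a');
 else
 foldtable[index] = (unsigned char) index;
 }
 
// point the comparison routine at the right table
 if (sensitive)
 CASETABLE = identtable;
 else
 CASETABLE = foldtable;
 }

Either way, once CASETABLE points at the right table, the comparison routine never needs to know whether it is being case-sensitive or not.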

C(3). Parsing the Dictionary

The first thing the program has to do is read in the dictionary. It does this in two phases: a lexical analyzer breaks the dictionary into tokens (identifiers, carriage returns, and errors), then a parser finds substitutions in the token stream.

C(3)(a). Lexical Analyser

There are two lexical analyzers, one for Pascal and one for C, because C allows underscores in identifiers. (Another, and probably better, way to do this is to have one lexical analyzer and two character tables.) In the interests of brevity, I will limit the discussion to the Pascal version; the C version is identical, except that it adds “underscore” wherever “letter” appears.

The first piece shows that the pound sign is a line comment character: after reading a pound sign, we scan to the next carriage return and then go back to state 0. We push the return character back onto the input, though, since it isn’t part of the comment.

The second segment reads an identifier. Again, the character that ends the identifier isn’t part of it, so it goes back onto the input. The lozenge thing indicates that we are going to return a token (i.e., accept states). The example lexical analyzer in installment 4 was the whole program, and it never returned anything. This one is called by a parser, and it returns one token each time it is called.

Figure 1: Pascal Dictionary Lexical Analyzer

Finally, the scanner eats white space, returns carriage returns, and returns an error if it hits anything else. Given the dictionary line “blip blop”, for example, successive calls return ID (“blip”), then ID (“blop”), then CR.

Here is the class table, which maps each of the 256 possible character codes to one of the 15 character classes:

data 'TABL' (1001) {
$"00 00 00 05 00 00 00 00 00 04 00 00 00 05 00 00"
$"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00"
$"04 00 0D 06 00 00 00 0E 09 0A 0C 00 00 00 00 0B"
$"02 02 02 02 02 02 02 02 02 02 00 00 00 00 00 00"
$"00 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01"
$"01 01 01 01 01 01 01 01 01 01 01 00 00 00 00 00"
$"00 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01"
$"01 01 01 01 01 01 01 01 01 01 01 07 00 08 00 03"
$"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00"
$"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00"
$"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00"
$"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00"
$"00 00 00 00 00 00 00 00 00 00 04 00 00 00 00 00"
$"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00"
$"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00"
$"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00"
};

Here is the state table, with one fifteen-entry row for each of states 0, 1 and 2:

data 'TABL' (2001) {
$"FD 02 FD FD 00 FE 01 FD FD FD FD FD FD FD FD"
$"01 01 01 01 01 00 01 01 01 01 01 01 01 01 01"
$"FF 02 02 FF FF FF FF FF FF FF FF FF FF FF FF"
};

The negative numbers correspond to the lozenges: the entries are signed bytes, so $FF is -1 (return an identifier), $FE is -2 (return a carriage return), and $FD is -3 (return an error).

C(3)(b). Parser

The dictionary parser is a simple hand-made thing, and does not use YACC (that would be like swatting a fly with a hammer). A dictionary is a list of lines; a line may be blank, or it may contain a one-ID specification, or a two-ID specification. That is,

 line -> CR
 line -> ID CR
 line -> ID ID CR

Here’s a state machine to implement that grammar:

Figure 2: Dictionary Parser

Recall that the parser gets its data by calling the lexical analyzer, and thus receives only three tokens: ID, CR and ERR. State 3 is the error recovery state; it reads to the end of the line, and then goes back to state 0 for the next line. The returns to state 0 from states 0, 1 and 2 correspond to the three lines of the grammar above. In the latter two cases, the specification is added to the symbol table.

Here is the state table, with one row per state and one column for each of the three tokens (ID, CR and ERR):

data 'TABL' (1000) {
 $"01 00 03"
 $"02 00 03"
 $"03 00 03"
 $"03 00 03"
};

C(4). Making Substitutions

To make substitutions in the input file, we begin with a lexical analyzer that finds all the identifiers. Again, there are two versions, one for Pascal and one for C. I will discuss the C version only; Pascal does not allow underscores in identifiers, and the two languages have different comment constructs. See the fourth installment of this series for a lexical analyzer that reads Pascal comments.

The first segment reads comments, and is identical to the comment-reader presented in the fourth installment. The next two read strings. (The second segment is also present in the Pascal version, for compatibility, even though Pascal doesn’t use quotation marks for anything.) Canon does not do any syntax checking, and will read strings that go beyond the end of the line.

The fourth segment reads identifiers. When it finds one, the lozenge means “look it up and see if there’s a substitution to be made”. This scanner, unlike the dictionary scanner, doesn’t return anything; it runs until it finds the end of file, making substitutions as appropriate.

Here is the state table (which uses the same class table as the dictionary lexical analyzer):

data 'TABL' (3002) {
$"00 07 00 07 00 00 00 00 00 00 00 02 00 05 06"
$"01 01 01 01 01 00 01 01 01 01 01 01 01 01 01"
$"00 00 00 00 00 00 00 00 00 00 00 01 03 00 00"
$"03 03 03 03 03 03 03 03 03 03 03 03 04 03 03"
$"03 03 03 03 03 03 03 03 03 03 03 00 04 03 03"
$"05 05 05 05 05 05 05 05 05 05 05 05 05 00 05"
$"06 06 06 06 06 06 06 06 06 06 06 06 06 06 00"
$"FF 07 07 07 FF FF FF FF FF FF FF FF FF FF FF"
};

Figure 3: C Source Lexical Analyzer

D. Coding the Tool

What follows does not include all the code for the tool. Parts of it are scattered through my last two articles; refer back to those if you need to see it all. Alternatively, the entire source is included on the MacTutor source code disk.

// Constants and Macros
 
#define nil 0
 
#define stdinfd  0
#define stdoutfd 1
#define stderrfd 2
 
#define stdunit(x)  ((x >= stdinfd) && (x <= stderrfd))
#define notstdunit(x) (x > stderrfd)

#define nombuffsize 1024
#define truebuffsize 1200

#define classcount 15
#define idstate  7
 
// Types
 
typedef enum 
 {false, true} 
logical;

typedef enum
 {nocode, pascalcode, ccode}
codetype;

typedef struct node
 {
 char   *key;
 struct node *left;
 struct node *right;
 int    balance;
 char   *data;
 } node;
 
// Globals
 
 unsigned char   *CASETABLE;
 
// Prototypes
 
void initmac();
int openoutput(char *thename, int output);
int readinput(int input, Handle inbuffer);
int filter(char *inbuffer, 
 int buffersize, int output,
codetype language, node *symbols);
int writeoutput(int output, 
 char *outbuffer, int buffersize);
node *parser(char *dictname, codetype language);
int gettoken(char *buffer, 
 int buffersize, char *thestring,
char *classtable, char *statetable);
node *createnode(char *thekey, char *thedata);
unsigned int insert(node *parent, 
 char *thekey, char *thedata, int depth);
node *lookup(node *thetable, char *thekey);
void destroy(node *thetable);
void dump(node *thetable);
int compare(unsigned char *string1, unsigned char *string2);

D(1). Main Routine

// main
 
int main(int argc, char *argv[])
 {
 int    output;
 logical sensitive;
 codetype language;
 char   *outputname;
 char   *dictname;
 logical errors;
 int    index;
 char   *thetail;
 Handle thehandle;
 node   *symbols;
 int    input;
 int    buffersize;
 
 initmac();
 
// “output” is the fd of the output file, initially stdout
// “sensitive” is the case sensitivity, initially insensitive
// “language” is the language to parse, initially unknown
 
 output = stdoutfd;
 sensitive = false;
 language = nocode;
 
// “outputname” is the name of the output file
// “dictname” is the name of the dictionary file
// “errors” is the error flag, initially indicating no errors
 
 outputname = nil;
 dictname = nil;
 errors = false;
 
// command line interpreter: first pass
 
 for (index = 1; index < argc; index++)
 {
 
 if (argv[index][0] == '-')
 {
 
 switch (argv[index][1])
 {
 
// “-p” and “-c” options set 
// language type; these override 
// any previous setting
 
 case 'C':
 case 'c':
 language = ccode;
 break;
 
 case 'P':
 case 'p':
 language = pascalcode;
 break;
 
// “-s” option makes Canon case sensitive
 
 case 'S':
 case 's':
 sensitive = true;
 break;
 
// “-o” option names the output file; 
// remember the name and read 
// the file later
 
 case 'O':
 case 'o':
 index++;
 if (outputname == nil)
 outputname = argv[index];
 else
 errors = true;
 break;
 
// “-d” option names the dictionary file; 
// remember the name and read 
// the file later
 
 case 'D':
 case 'd':
 index++;
 if (dictname == nil)
 dictname = argv[index];
 else
 errors = true;
 break;
 
 default:
 errors = true;
 break;
 
 }
 
 }
 else if (language == nocode)
 {
// argv[index] is the name of an 
// input file; if “language” has 
// not changed since initialization,
// set “language” according to 
// file name
 thetail = argv[index] 
 + strlen(argv[index]) - 2;
 if (compare(thetail, ".p") == 0)
 language = pascalcode;
 else if (compare(thetail, ".c") == 0)
 language = ccode;
 }
 }
 
// exit if errors were found in the first pass
 if (errors)
 exit(2);
 
// if “language” is still unknown, set it to Pascal
 if (language == nocode)
 language = pascalcode;
 
// load the case table
 if (sensitive)
 thehandle = GetResource('TABL', 4002);
 else
 thehandle = GetResource('TABL', 4001);
 HLock(thehandle);
 CASETABLE = (unsigned char *) *thehandle;
 
// copy the dictionary into the symbol table
 if (dictname == nil)
 exit(2);
 
 symbols = parser(dictname, language);
 if (symbols == nil)
 exit(2);
 
// if “outputname” has been found, open the output file
 if (outputname != nil)
 {
 output = openoutput(outputname, output);
 if (output < 0)
 exit(2);
 }
 
// “input” is the fd of the input file, initially stdin
// “thehandle” is the input buffer, initially empty
// “buffersize” is the size of “thehandle”
 input = stdinfd;
 thehandle = NewHandle(0);
 buffersize = 0;
 
// command line interpreter: second pass
 for (index = 1; index < argc; index++)
 {
// skip all options (read in first pass)
 if (argv[index][0] == '-')
 {
 switch (argv[index][1])
 {
 case 'D':
 case 'O':
 case 'd':
 case 'o':
 index++;
 }
 }
 else
 {
 
// argv[index] is the name of an 
// input file; open the file and 
// read it into the input buffer
 input = open(argv[index], O_RDONLY);
 if (input < 0)
 exit(2);
 
 buffersize = readinput(input, thehandle);
 if (buffersize < 0)
 exit(2);
 
 close(input);
 
// call “filter” to read the input buffer 
// and write filtered output
 HLock(thehandle);
 filter(*thehandle, buffersize, 
 output, language, symbols);
 HUnlock(thehandle);
 }
 }
 
// if “input” is still a standard unit 
// number, then no input file was 
// opened, and input must be from
// standard input
 if (stdunit(input))
 {
 buffersize = readinput(input, thehandle);
 if (buffersize < 0)
 exit(2);
 
// call “filter” to read the input buffer 
// and write filtered output
 HLock(thehandle);
 filter(*thehandle, buffersize, 
 output, language, symbols);
 HUnlock(thehandle);
 }
 
// wrapup:  dispose of the input buffer, 
// close “output” if the program 
// opened it and dispose of the symbol table
 DisposHandle(thehandle);
 
 if (notstdunit(output))
 close(output);
 destroy(symbols);
 
 exit(0);
 }

D(2). Case Sensitivity

This is the string comparison routine to use in place of “strcmp” in the symbol table “insert” and “lookup” routines. The only other change I made to those routines was to rename the local variable “compare” to “difference” (to avoid a conflict with this routine’s name).

The routine functions just like “strcmp”: it returns a negative number if string1 is less than string2, a positive number if string1 is greater than string2, and zero if they’re equal. The actual number returned is simply the difference between the first pair of differing characters. CASETABLE is a global pointer to the appropriate transliteration table.

// compare
int compare(unsigned char *string1, 
 unsigned char *string2)
 {
 register int    char1;
 register int    char2;
 register int    difference;
 
 char1 = *string1++;
 char2 = *string2++;
 
 while (char1 || char2)
 {
 difference = CASETABLE[char1]  - CASETABLE[char2];
 if (difference)
 return(difference);
 
 char1 = *string1++;
 char2 = *string2++;
 }
 return(0);
 }

D(3). Parsing the Dictionary

I parse the dictionary in two steps: lexical analysis and parsing. The “gettoken” routine breaks the input into tokens, which the “parser” routine fits together into substitution specifications.

D(3)(a). Lexical Analyser

This routine is similar to the lexical analyzer I used in my fourth article. The major difference is that it returns tokens as it finds them, rather than keeping control from the beginning to the end of the file. It knows that it has found a token when it reaches a negative state number; it converts that state number into the token number the parser expects, and returns it. (This is probably unduly complex; I should have just let the parser use negative token numbers and avoided the conversion.)

// gettoken
 
int gettoken(char *buffer, 
 int buffersize, char *thestring,
 char *classtable, char *statetable)
 {
 static int position = 0;
 
 int    thestate;
 unsigned char   thechar;
 int    theclass;
 int    newstate;
 
// start the machine in state 0
 thestate = 0;
 
 while (position < buffersize)
 {
// read the next character, look up its 
// class, and get the new state
 thechar = buffer[position++];
 theclass = classtable[thechar];
 newstate = statetable[classcount * thestate + theclass];
 
 switch (newstate)
 {
// -3 => ERR, -2 => CR; in either case, 
// just return the token number
 case -3:
 case -2:
 return(- 1 - newstate);
 
// -1 => ID; return the token number
// and the identifier in “thestring”
 case -1:
 *thestring = '\0';
 position--;
 return(- 1 - newstate);
 
 case 0:
 if (thestate == 1)
 position--;
 break;
 
 case 1:
 break;
 
 case 2:
 *thestring++ = thechar;
 break;
 }
 
 thestate = newstate;
 
 }
 return(-1);
 }

D(3)(b). Parser

The first half of this routine is set-up work. In addition to loading its own state machine, the parser also fetches gettoken’s state machine. It’s easier to do the work once, here, than to repeat it each time I call gettoken. Then the routine opens the dictionary file, reads it in, and so on. Eventually, it gets to do some parsing, and this should look familiar.

There is one complication: gettoken will not only return a token number but will, in the case of an identifier, also return the token’s text. I don’t want to overwrite one identifier when I read the next, so I pass a pointer to one string at the beginning of the line, and then a pointer to a second string when I want to read the next identifier.

// parser
node *parser(char *dictname, 
 codetype language)
 {
 Handle thehandle;
 char   *parsetable;
 char   *classtable;
 char   *statetable;
 int    thefile;
 int    buffersize;
 char   *buffer;
 node   *symbols;
 int    thestate;
 int    newstate;
 int    theline;
 int    errors;
 int    thetoken;
 char   thekey[256];
 char   thedata[256];
 char   dummy[256];
 char   *thestring;
 
// “parsetable” is the parser’s state machine
 thehandle = GetResource('TABL', 1000);
 HLock(thehandle);
 parsetable = *thehandle;
 
// “classtable” is the character class table
 thehandle = GetResource('TABL', 1001);
 HLock(thehandle);
 classtable = *thehandle;
 
// “statetable” is the lexical state machine
 if (language == pascalcode)
 thehandle = GetResource('TABL', 2001);
 else
 thehandle = GetResource('TABL', 2002);
 HLock(thehandle);
 statetable = *thehandle;
 
// open the dictionary file...
 thefile = open(dictname, O_RDONLY);
 if (thefile < 0)
 return(nil);
 
// and read it into the buffer
 thehandle = NewHandle(0);
 buffersize = readinput(thefile, thehandle);
 if (buffersize < 0)
 {
 close(thefile);
 return(nil);
 }
 
 close(thefile);
 
 HLock(thehandle);
 buffer = (char *)*thehandle;
 
// “symbols” is the symbol table
 symbols = createnode("", "");
 
// start the machine in state 0, and on line 1
 thestate = 0;
 theline = 1;
 errors = 0;
 
// read the first identifier into “thekey”
 thestring = thekey;
 thetoken = gettoken(buffer, buffersize, thestring, 
 classtable, statetable);
 
 while (thetoken >= 0)
 {
 newstate = parsetable[ 3 * thestate + thetoken];
 
 switch (newstate)
 {
// if we got here from state 1, then we 
// read only one identifier; if from 
// state 2, we read both “thekey”
// and “thedata”
// state 0 is the beginning of a line, so 
// increment the line counter and set 
// “thestring” to “thekey”
 
 case 0:
 if (thestate == 1)
 thetoken = insert(symbols, thekey, thekey, 0);
 else if (thestate == 2)
 thetoken = insert(symbols, thekey, thedata, 0);
 if (thetoken == 4)
 errors++;
 theline++;
 thestring = thekey;
 break;
 
// having read one identifier into 
// “thekey”, the next one should go 
// into “thedata”
 case 1:
 thestring = thedata;
 break;
 
// having read one identifier into 
// “thekey”, and the next one 
// “thedata”, read anything else into
// “dummy”
 case 2:
 thestring = dummy;
 break;
 
// case 3 is the error case; if we just 
// got here, write an error message
 case 3:
 if (thestate != newstate)
 fprintf(stderr, "");
 errors++;
 break;
 }
 
 thestate = newstate;
 thetoken = gettoken(buffer,
 buffersize, thestring, 
 classtable, statetable);
 }
 
 DisposHandle(thehandle);
 
 if (errors > 0)
 {
 destroy(symbols);
 return(nil);
 }
 
 return(symbols);
 }

D(4). Making Substitutions

This routine should be familiar by now, except for when it finds an identifier. The state table flags identifiers with a state of -1; when the routine reaches that state, it looks up the identifier in the symbol table and performs any required substitution. In all other cases (things other than identifiers, or identifiers with no substitution), the routine simply copies the input to the output.

// filter
int filter(char *inbuffer, 
 int buffersize, int output,
 codetype language, node *symbols)
 {
 
 int    inposition;
 int    outposition;
 int    thetoken;
 node   *thenode;
 int    thelength;
 Handle thehandle;
 char   *classtable;
 char   *statetable;
 char   outbuffer[truebuffsize];
 int    thestate;
 unsigned char   thechar;
 int    theclass;
 int    newstate;
 int    writesize;

// “inposition” is the current read position
// “outposition” is the current write position
// “thetoken” is the position of the 
// beginning of the current identifier
 inposition = 0;
 outposition = 0;
 thetoken = 0;

// “classtable” converts characters into classes
 thehandle = GetResource('TABL', 1001);
 HLock(thehandle);
 classtable = *thehandle;

// “statetable” is the state machine
 if (language == pascalcode)
 thehandle = GetResource('TABL', 3001);
 else
 thehandle = GetResource('TABL', 3002);
 HLock(thehandle);
 statetable = *thehandle;
 
// start the machine in state 0
 thestate = 0;
 while (inposition < buffersize)
 {
// read the next character, find its class and the new state
 thechar = inbuffer[inposition++];
 theclass = classtable[thechar];
 newstate = statetable[classcount * thestate + theclass];
 
 switch (newstate)
 {
// found an identifier:  if it is in the 
// symbol table, replace it with the 
// table’s data.  Then go to state 0.
 case -1:
 inposition--;
 outbuffer[outposition] = '\0';
 thenode = lookup(symbols, &outbuffer[thetoken]);
 if (thenode != nil)
 {
 outposition -= strlen(&outbuffer[thetoken]);
 thelength = strlen(thenode->data);
 BlockMove((Ptr)thenode->data,
 &outbuffer[outposition], thelength);
 outposition += thelength;
 }
 newstate = 0;
 break;

// retract if going from state 2 to state 
// 0; otherwise, copy input to output
 case 0:
 if (thestate == 2)
 inposition--;
 else
 outbuffer[outposition++] = thechar;
 break;

// reading an identifier:  if this is the 
// beginning, record the position for 
// later use.  Then, fall through to 
// the default
 case idstate:
 if (thestate != idstate)
 thetoken = outposition;

// all other cases, copy input to output
 default:
 outbuffer[outposition++] = thechar;
 break;
 }

// if the output buffer fills up, and 
// we’re not in the middle of an 
// identifier, write it to disk
 if ((outposition >= nombuffsize)
 && (thestate != idstate) 
 && (newstate != idstate))
 {
 outposition = writeoutput(
 output, outbuffer, outposition);
 if (outposition < 0)
 return(outposition);
 }
 
 thestate = newstate;
 }

// write the output buffer to disk
 writesize = write(output, outbuffer, outposition);
 return(writesize);
 }

E. Conclusion

The tool, as I have presented it here, is not quite perfect. It is very slow. I ran it using the “cannon.dict” file that comes with MPW; after first finding all the duplicate lines, it took 22 minutes just to load the dictionary! I was stunned.

The problem, it turned out, was the “createnode” routine. There are over 3200 lines in the dictionary file, and “createnode” calls “NewPtr” three times for each line, for a total of almost 10,000 calls to NewPtr. And NewPtr is very slow. When I re-wrote the tool to reduce the 10,000 to a few dozen, the time to load the dictionary dropped to 16 seconds. (Yes, I’m bragging...)

I chose not to present the faster version in this article, because I feel it confuses the issue. The changes I made are not related to the topic, and make the code more complicated. Instead, I’ve included both versions on the source code disk, and I’ll now give a quick description of the differences between the two.

I got rid of two-thirds of the NewPtr calls by leaving the data where I found it. In the above version, I read the file into memory, then find identifiers in the data and copy them into strings, which I pass to “createnode”. Createnode in turn copies these strings into its data structures. In the faster version, I find identifiers in the data and write nulls at their ends, then pass pointers to “createnode”, which simply copies the pointers into the appropriate node fields. So in addition to 6000 NewPtrs, I’ve saved 12,000 string copies.

The complication is writing the null character. There are times when you don’t want to overwrite the following character right away. Suppose it’s a return character...

I reduced the remaining 3000 calls to a handful by allocating the nodes in large arrays. I put new nodes in the free slots of the array until it fills up, with no calls to NewPtr. Once the array is full, I have to use NewPtr to create a new one, but since I use a large array size, this doesn’t happen very often.
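
For the curious, here is a minimal sketch of the two ideas. This is not the code from the source disk; the names “nodepool”, “poolnode”, “poolsize” and “endident” are mine. Nodes are handed out from a large array, so NewPtr runs only when an array fills up, and identifiers are terminated in place by writing a null over the character that ends them, with that character saved so it can be put back:

// poolnode -- sketch only, not the disk version
 
#define poolsize 512 // nodes allocated per call to NewPtr
 
typedef struct nodepool
 {
 struct nodepool *next; // pools are chained, newest first
 int    used;           // slots handed out from this pool so far
 node   slots[poolsize];
 } nodepool;
 
static nodepool *pools = nil;
 
node *poolnode()
 {
 nodepool *thepool;
 
// call NewPtr only when there is no pool, or the current one is full
 if ((pools == nil) || (pools->used >= poolsize))
 {
 thepool = (nodepool *) NewPtr(sizeof(nodepool));
 if (thepool == nil)
 return(nil);
 thepool->next = pools;
 thepool->used = 0;
 pools = thepool;
 }
 
 return(&pools->slots[pools->used++]);
 }
 
// endident -- terminate an identifier in place, returning the character
// the null overwrites so the caller can restore it afterwards (it may
// be the carriage return that ends the dictionary line)
 
char endident(char *buffer, int position)
 {
 char   saved;
 
 saved = buffer[position];
 buffer[position] = '\0';
 return(saved);
 }

A scheme like this also means that the dictionary buffer has to outlive the symbol table, since the keys now point into it, and that “destroy” has to release whole pools rather than individual nodes.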

 
