
Jul 00 Challenge

Volume Number: 16 (2000)
Issue Number: 7
Column Tag: Programming

Programmer's Challenge

by Bob Boonstra, Westford, MA

RAID-5+

Those of you in the Information Systems business have certainly heard of Redundant Array of Independent (or Inexpensive) Disks, or RAID technology. As the name suggests, RAID stores data across multiple disks to provide improved performance and some level of protection against disk failure. Several RAID levels have been implemented, and one of the most common, RAID Level 5, employs disk striping (spreading blocks of data across multiple disks) and parity to optimize disk access for applications that perform random small read/write operations, and to protect against the failure of a single disk.

Our Challenge application, however, is a little more demanding. We're operating a mission critical application, one that cannot afford to lose data even with a double hardware failure. Your Challenge is to implement a RAID-5+ system that has the performance advantages of RAID 5, but can continue functioning when two disk drives fail.
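
As a refresher, the single-failure protection in ordinary RAID 5 comes from an XOR parity block: the parity unit of each stripe is the XOR of that stripe's data units, so any one lost unit can be rebuilt from the survivors. The sketch below illustrates that reconstruction (it is an illustration only, not part of the required interface, and the names are invented for the example). Tolerating a second simultaneous failure requires a second, independent piece of redundancy, for example a Reed-Solomon "Q" block of the kind used in RAID 6.

/*
 * Illustration only: rebuild one lost stripe unit from the survivors of a
 * single-parity stripe. survivors, numSurvivors, unitBytes, and rebuilt are
 * hypothetical names for this sketch, not part of the Challenge interface.
 */
void RebuildLostUnit(
	char *survivors[],	/* surviving units of one stripe, including the parity unit */
	long numSurvivors,	/* number of surviving units */
	long unitBytes,		/* bytes per stripe unit */
	char *rebuilt		/* receives the reconstructed contents of the lost unit */
) {
	for (long i = 0; i < unitBytes; i++) {
		char x = 0;
		for (long d = 0; d < numSurvivors; d++)
			x ^= survivors[d][i];	/* XOR of the survivors equals the missing byte */
		rebuilt[i] = x;
	}
}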

The prototype for the code you should write is:

/*
 * ReadProc and WriteProc are callbacks that allow you to read from and write to
 * multiple drives simultaneously, simulating the effect of striping.
 */
typedef void (* WriteProc) (/* conduct physical writes to multiple drives */
	long startByte[], /* start write at physical byte startByte[n] on disk n */
	long numBytes[], 	/* write numBytes[n] bytes to disk n */
	char *writeBuffer[],	/* write bytes to disk n from writeBuffer[n] */
	Boolean writeErr[]	
	/* returns writeErr[n]==true if disk n has a write error or parameters were bad */
);
typedef void (* ReadProc) (/* conduct physical reads from multiple drives */
	long startByte[], 	/* start read at physical byte startByte[n] on disk n */
			/* bytes startByte[n]..startByte[n]+numBytes[n]-1 must be within 0..diskSize-1 */
	long numBytes[], 		/* read numBytes[n] bytes from disk n */
	char *readBuffer[],	/* read from disk n into readBuffer[n] */
	Boolean readErr[]
		/* returns readErr[n]==true if disk n has a read error or parameters were bad */
);

/*
 * InitRaid provides the problem parameters
 */
void InitRaid(
	long numDisks,	
		/* problem size: you will have numDisks disks of real data, plus 2 disks for parity */
	long diskSize,			/* number of bytes in each disk */
	long failureRate, /* expect 1 failure in each failureRate read/write attempts */
	WriteProc physicalWrite,
		/* procedure that allows you to write to numDisks+2 disks of size diskSize */
	ReadProc physicalRead
		/* procedure that allows you to read from numDisks+2 disks of size diskSize */
);

void RepairDisk(
	long whichDisk /* index of the disk that has been repaired; no more than 2 disks fail at one time */
);

/*
 *  RaidWrite and RaidRead ask you to write to and read from the numDisks*diskSize 
 *  bytes of logical storage. You use WriteProc and ReadProc to actually access the 
 *  (numDisks+2) physical devices, using redundant writes to compensate for the loss 
 *  of up to two disks. RaidWrite and RaidRead bytes are numbered 0..numDisks*diskSize-1.
 *  RaidWrite and RaidRead return true unless there is a problem performing the 
 * write/read.
 */
Boolean RaidWrite(
	long startByte,			/* write starting at this byte */
	long numBytes,			/* number of bytes to write */
	char *buffer				/* write bytes from this buffer */
);

Boolean RaidRead(
	long startByte,			/* read starting at this byte */
	long numBytes,			/* number of bytes to read */
	char *buffer				/* read bytes into this buffer */
);

void TermRaid(void);

This Challenge starts with a call to your InitRaid routine. InitRaid is provided with the number of disks (numDisks) available to your RAID-5+ implementation. numDisks actually represents the amount of data you need to store and protect; you have numDisks+2 disks available, with the additional 2 disks used to provide protection against disk failures. InitRaid is also provided with the size of the identically sized disks in bytes (diskSize), and the approximate failure rate of the disks (1 failure in failureRate read/write attempts). Finally, InitRaid is provided with two callback routines that give you read and write access to the physical disks in our simulated disk array, which we'll discuss below.

The Challenge evaluation consists of a large number of RaidWrite and RaidRead operations. Each RaidWrite call provides a logical address to write to (startByte) in the range 0..numDisks*diskSize-1, a number of bytes to write (numBytes), and a buffer from which to write. Your code must write the data to the simulated disks provided using the WriteProc callback, providing for error correction by writing redundant information to other disks. If WriteProc returns a writeErr[n] value of true, the write to disk n failed, and you may need to compensate. RaidRead provides a logical address to read from, a number of bytes to read, and a buffer to read into. If ReadProc returns a readErr[n] value of true, the corresponding read operation failed, and you will need to use the redundant disks to reconstruct the lost information.
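
One way (certainly not the only way) to organize the logical-to-physical mapping is plain round-robin striping across the data disks. The sketch below is a minimal illustration under that assumption; kStripeUnit is a tuning constant of your own choosing rather than a Challenge parameter, and a real entry also has to decide where the redundant units live.

enum { kStripeUnit = 4096 };	/* bytes per stripe unit (an assumed tuning value) */

/* Sketch: map a logical byte onto a (data disk, physical offset) pair
   under simple round-robin striping. */
void MapLogicalByte(
	long logicalByte,	/* 0..numDisks*diskSize-1 */
	long numDisks,		/* number of data disks, as passed to InitRaid */
	long *whichDisk,	/* which data disk holds this byte */
	long *physByte		/* physical byte offset on that disk */
) {
	long unit   = logicalByte / kStripeUnit;	/* stripe unit index */
	long offset = logicalByte % kStripeUnit;	/* offset within the unit */
	*whichDisk  = unit % numDisks;
	*physByte   = (unit / numDisks) * kStripeUnit + offset;
}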

At the end of the evaluation, your TermRaid routine will be called. You should free any memory you have allocated.

From time to time, a failed disk may be repaired. You'll be notified of such a repair by a call to RepairDisk. When a disk is repaired, you ought to reconstruct any necessary error correction information, because another disk may fail at any time. Up to two disks may be in "failed" mode at any given time.

WriteProc allows you to perform a write operation on all of the disks in the array simultaneously, writing numBytes[n] from writeBuffer[n] starting at physical location startByte[n] on disk n. Similarly, ReadProc allows you to perform a simultaneous read operation on all of the disks, reading numBytes[n] into readBuffer[n] starting from physical location startByte[n] on disk n. Unfortunately, the disk controllers in our simulated system are not very sophisticated: while disk I/O occurs in parallel, our disks are not able to chain sequential I/O operations. An I/O operation on the array requires read/write time proportional to the largest numBytes[n] value passed to any disk.
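
Because an array operation costs time proportional only to the largest per-disk request, it generally pays to gather the per-disk requests for a stripe into a single ReadProc or WriteProc call rather than touching the disks one call at a time. Here is a rough sketch, assuming the callback and disk count were saved away in InitRaid; gRead, gNumDisks, and kMaxDisks are hypothetical names for this illustration.

enum { kMaxDisks = 32 };		/* assumed upper bound, for the sketch only */
static ReadProc	gRead;			/* ReadProc callback saved in InitRaid (assumed) */
static long		gNumDisks;		/* numDisks saved in InitRaid (assumed) */

/* Read the same stripe unit from every physical disk with one callback call. */
void ReadWholeStripe(long stripe, long unitBytes, char *units[])
{
	long	startByte[kMaxDisks], numBytes[kMaxDisks];
	char	*bufs[kMaxDisks];
	Boolean	readErr[kMaxDisks];
	for (long d = 0; d < gNumDisks + 2; d++) {
		startByte[d] = stripe * unitBytes;	/* same physical offset on each disk */
		numBytes[d]  = unitBytes;
		bufs[d]      = units[d];
	}
	gRead(startByte, numBytes, bufs, readErr);
	/* the caller checks readErr[] and reconstructs any unit that failed */
}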

Our disks will have, say, 5msec average seek time, and 30MB/sec transfer rates. Your program's score will be the sum of the seek and transfer times for each read/write operation, plus the execution time required by your program. The winner will be the solution that correctly performs all read/write operations with the lowest program score.
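
To make that cost model concrete, using only the figures above: at a 5msec average seek and a 30MB/sec transfer rate, an operation costs roughly one seek plus the transfer time of the largest per-disk request, so a 4096-byte stripe unit costs about 5.0 + 0.14, or a little over 5 msec. A small sketch of that arithmetic:

/* Rough per-operation cost, from the seek time and transfer rate quoted above. */
double ArrayOpCost(long largestNumBytes)	/* largest numBytes[n] in the call */
{
	const double kSeekMsec     = 5.0;				/* average seek time, msec */
	const double kBytesPerMsec = 30.0e6 / 1000.0;	/* 30MB/sec transfer rate */
	return kSeekMsec + largestNumBytes / kBytesPerMsec;
}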

There are no specific memory restrictions on this Challenge, but you may not allocate memory to store all of the data written to the simulated disk array, or to persistently store any error correction information. You must use the simulated disks and the WriteProc and ReadProc callbacks for that purpose.

This will be a native PowerPC Challenge, using the CodeWarrior Pro 5 environment. Solutions may be coded in C, C++, or Pascal.

Three Months Ago Winner

Congratulations to Ernst Munter (Kanata, Ontario) for taking first place in the April Text Compression Challenge. This Challenge required contestants to process input text and compress it into as few bytes as possible, while minimizing the execution time used to perform the compression. The input text consisted of English-language text; computer programs written in C, C++, or Pascal; and web pages written in HTML. Scoring was based on the number of characters of compressed text, with a 10% penalty added for each 100msec of execution time. Ernst's entry was not the fastest of the 5 correct entries, but it did produce the most highly compressed output.

I used 17 test cases to evaluate the entries, including 8 text files, 3 computer programs, and 6 web pages, totaling ~5.6MB in length. Individual inputs ranged in size from less than 1000 bytes to just over 1MB. Ernst's solution compressed these inputs into about 35% of their original size. While I didn't independently verify this, Ernst observes that his code produces a smaller result for a large text file than a popular Mac compression utility, and does so in less time.

The three entries producing the smallest output all used a variant of Huffman encoding. This technique assigns bit representations of varying lengths to individual tokens in the input, with shorter Huffman codes used for tokens that occur more frequently. Ernst took the additional step of assigning a token code to each input token, and then compressing the token codes using Huffman encoding. This approach resulted in ~50% better compression than that achieved by Jan Schotsman's next best entry (although Jan's entry was submitted after the Challenge deadline and not eligible to win).
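
For readers who would like to see the Huffman step in miniature, here is a toy sketch; it is written in present-day C++ with std::priority_queue rather than with Ernst's own Heap class, and the four symbols and frequencies are invented for the example. It repeatedly merges the two least frequent nodes, so the most frequent symbol ends up with the shortest code.

#include <cstdio>
#include <queue>
#include <string>
#include <vector>

struct HNode {
	long	freq;
	HNode*	child[2];	/* both null for a leaf */
	int		symbol;		/* meaningful for leaves only */
};

struct ByFreq {			/* lowest frequency comes off the queue first */
	bool operator()(const HNode* a, const HNode* b) const { return a->freq > b->freq; }
};

static void PrintCodes(const HNode* n, const std::string& code)
{
	if (!n->child[0]) { std::printf("symbol %d -> %s\n", n->symbol, code.c_str()); return; }
	PrintCodes(n->child[0], code + "0");
	PrintCodes(n->child[1], code + "1");
}

int main()
{
	long freq[4] = { 45, 13, 12, 5 };	/* invented token frequencies */
	std::priority_queue<HNode*, std::vector<HNode*>, ByFreq> q;
	for (int s = 0; s < 4; s++)
		q.push(new HNode{ freq[s], { 0, 0 }, s });
	while (q.size() > 1) {				/* merge the two rarest nodes until one tree remains */
		HNode* a = q.top(); q.pop();
		HNode* b = q.top(); q.pop();
		q.push(new HNode{ a->freq + b->freq, { a, b }, -1 });
	}
	PrintCodes(q.top(), "");			/* symbol 0 (frequency 45) gets the 1-bit code */
	return 0;
}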

The table below lists, for each of the solutions submitted, the cumulative size of the input and output text, the execution time in milliseconds, and the total score achieved for all test cases. It also provides the code size, data size, and programming language used for each entry. As usual, the number in parentheses after the entrant's name is the total number of Challenge points earned in all Challenges prior to this one.

Name                        Input Size (KB)  Output Size (KB)  Time (msec)    Score  Code Size  Data Size  Lang
Ernst Munter (587)                     5595              1912      1863.80  2427037       8764       1216  C++
Randy Boring (123)                     5595              4896       256.35  5186714       1504         56  C++
Sebastian Maurer (101)                 5595              5142      1482.87  5512646       5328       1394  C++
Armin Schmich                          5595              4407      2112.08  5824896       2400        164  C
Jan Schotsman (late entry)             5595              2207      5845.33  3928065      13060        320  C
J. T.                                 crash                                              16348      13586  C

Top Contestants

Listed here are the Top Contestants for the Programmer's Challenge, including everyone who has accumulated 10 or more points during the past two years. The numbers below include points awarded over the 24 most recent contests, including points earned by this month's entrants.

Rank Name Points
1. Munter, Ernst 243
2. Saxton, Tom 139
3. Maurer, Sebastian 78
4. Boring, Randy 56
5. Shearer, Rob 47
6. Rieken, Willeke 41
7. Heathcock, JG 33
8. Taylor, Jonathan 26
9. Brown, Pat 20
10. Downs, Andrew 12
11. Jones, Dennis 12
12. Duga, Brady 10
13. Fazekas, Miklos 10
14. Hewett, Kevin 10
15. Murphy, ACC 10
16. Selengut, Jared 10
17. Strout, Joe 10

There are three ways to earn points: (1) scoring in the top 5 of any Challenge, (2) being the first person to find a bug in a published winning solution, or (3) being the first person to suggest a Challenge that I use. The points you can win are:

1st place 20 points
2nd place 10 points
3rd place 7 points
4th place 4 points
5th place 2 points
finding bug 2 points
suggesting Challenge 2 points

Here is Ernst's winning Text Compression solution:

TextCompression.cp
Copyright © 2000
Ernst Munter

/*
  Task
  ——
  Find and implement an efficient algorithm to compress and expand text. Efficiency 
  is a compromise between compression factor and speed.
  
  Algorithm
  ————-
  The algorithm is based on the recognition that text will consist of alternating "word" 
  and "link" fragments.  Word fragments are character sequences from one character 
  set, links are sequences from a non-overlapping different set.  
  
  Each of the two sets contains 64 characters: alphanumerics + two extra characters 
  ( _ and @ ) to make up the word set, all others fall into the link set.
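  For example, the text "Hello, world!" parses into the alternating fragments
  "Hello" (word), ", " (link), "world" (word), and "!" (link).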
  
  Many of the fragments will be encountered only once, others many times.
  
  The compressed text is primarily a sequence of tokens, where each token represents 
  a fragment, or introduces a new fragment.  Each fragment is defined explicitly only 
  the first time it is encountered.  If the same lexical fragment occurs multiple times in 
  the text, a token code definition is generated for this fragment.  Subsequent 
  occurrences of the same fragment are then "quoted" by their code only.
  
  Token codes are Huffman encoded.  As a result the codes for frequently occurring 
  fragments (such as the single space between words) are coded with fewer bits than 
  less frequently repeating fragments. Two codes are reserved as flags, one for 
  introducing a fragment which only occurs once in the text; a second for introducing 
  a multiply occurring fragment plus its associated code.
  
  Implementation
  ———————
  Compressor:
  
  The text is parsed into words and links. A token cache tracks unique fragments, and 
  records their frequency. At the same time, a quotes list is built, one quote (token 
  pointer) for each fragment.
  
  Actually two caches and token sets are created, one for word fragments and a 
  separate one for link fragments.
  
  The next step is the creation of the Huffman codes for each token. To this end, all 
  tokens with a frequency > 1 are pushed on a heap (priority queue).  A Huffman tree 
  is built with intermediate nodes each holding pointers to two child nodes.  The leaf 
  nodes are the token records.  Traversing the finished tree from the root then
  provides the mechanism of assigning each leaf node (token) its code.
  
  Finally, the list of quotes is scanned, and each token or token code placed in the 
  compressed text array.
  
  A "BitPump" object provides a method of dealing with arbitrary bit size chunks.  
  The bit pump also provides the packing (and unpacking)  of the original 8-bit 
  characters into 6-bits.
  
  Expander:
  
  The expander reads a few length parameters from the compressed text  header and 
  then proceeds to read the compressed text bit by bit to decode token codes.  At the 
  start it is given only the codes for the two flag tokens.  Then, each previously unseen 
  code and fragment is introduced as needed, preceded by a flag code.
  
  The two Huffman decode trees (for decoding word and link token codes) are thus 
  built one code at a time, just-in-time, and linked to the decoded fragments which are 
  placed directly into the expanded text. 
  
  Optimizations
  ——————-
  List of possible optimizations considered but not applied:
  - more efficient Huffman decoder
  - recognize capitalized words as "the same" and flag
  - use a fixed dictionary of likely words, and provide default
    token codes for them which then do not need to be made explicit
  - more efficient bit pump
  - Huffman code the fragment characters
  - do run-length coding where it pays off (e.g. multiple spaces)
  
  All of these can bring additional small percentage improvements in compression at 
  the expense of added complexity and time. I have decided to keep it simple.
  
  Compiler Notes
  ———————
  One would expect the compiler to inline simple class methods such as
  
  	Token* SingleFlag() {return &token[0];}
  
  But the latest version of CodeWarrior (CW-5.3) does not do it with 
  inlining set to "Smart" and branches to a 2-instruction function when SingleFlag() is 
  called. I have replaced such instances with #define macros to get inlining. 
  
  Assumptions
  —————-
  Fragments (i.e. a word, or a sequence of non-word characters) should be less than 
  64K characters in length.  This is pretty well guaranteed in text files.
  
  Input text characters are limited to the range 0x00 to 0x7F inclusive.
  Characters above 0x7F will give incorrect results.
*/

#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include "TextCompression.h"
#include "Heap.h"
#include "Utilities.h"

//*************************  Compressor Code **********************//
//
//***************************************************************//
extern Tables gTables;

enum {
	kTokenSize		= 14,
	kMaxUniqueToken = 1<<kTokenSize,
	// = max different tokens in a single block
	
	kHashBits		= 11,
	kMaxHashTable	= 1<<kHashBits,
	kHashMask		= kMaxHashTable-1
};

struct CRC

// Standard CRC based hash method. Good hashing at some expense in time
static struct CRC {
  	enum {POLYNOMIAL=0x04c11db7L};
  	ulong table[256];
  	CRC() 
  	{
    	long i,j,x;
    	for (i=0;i<256;i++) {
      		x=i<<24;
      		for (j=0;j<8;j++) {
        		if (x<0) x=(x<<1) ^ POLYNOMIAL;
        		else x=(x<<1); 
      		}
      		table[i]=x;
    	}
  	}
	ulong HashFunction(uchar* ufrg,int frgLen)
	{
// Uses CRC on length and up to the first 4 chars of a fragment
  		ulong accum=0;
    	accum=(accum<<8) ^ table[(accum>>24) ^ frgLen]; 
    	accum=(accum<<8) ^ table[(accum>>24) ^ ufrg[0]];
    
    	if (frgLen>1) switch (frgLen)
    	{
default:
case 4:	accum=(accum<<8) ^ table[(accum>>24) ^ ufrg[3]];
case 3:	accum=(accum<<8) ^ table[(accum>>24) ^ ufrg[2]];
case 2:	accum=(accum<<8) ^ table[(accum>>24) ^ ufrg[1]];
    	}
 // returns 24 bits of the CRC + 8 bits from the fragment length
		return (accum & 0xFFFFFF) | (frgLen << 24);
	}
} crc;

struct Token

// Each Token describes a unique fragment of text 
// which occurs freq times in a block
// Fragments alternate as words and links, each with a
// different character set of 64 chars.
struct Token {
	Token*	next;		// linked list
	union	{
		ulong	code;		// codebits at LSB, code length at 5 MSB
		ulong	hashValue;
	};
	uchar*	frag;		// fragment
	ushort	fragLength;
	ushort 	freq;		// number of times in block, 
											// set to 0 after code is published
						
	Token(){}
	
	Token(uchar* frg,int frgLen,int n) :
		frag(frg),
		fragLength(frgLen),
		freq(n),
		next(0),
		code(0),
		hashValue((frg)?crc.HashFunction(frg,frgLen):0)
	{}
	
	void 	SetFrequency(int x){freq=x;}
	
//	int		CodeLength(){return code >> 27;}
#define CodeLength() code >> 27
	
	ulong 	MisMatch(uchar* frg,int frgLen,ulong hvalue)
	{
		ulong d = hvalue ^ hashValue;
		if (d == 0) 
		{
			uchar* 	f1=frg;
			uchar*	f2=frag;
			d=*f1 ^ *f2;
			if (d == 0) for (int i=1;i<frgLen;i++)
			{
				d=*++f1 ^ *++f2;
				if (d) break;
			}
		}	
		return d;
	}
};
typedef Token* TokenPtr;

struct TokenNode

// A TokenNode is used in building the Huffman tree of tokens
static long gMaxLength;
struct TokenNode
{
	Token* t;
	int	weight;
	TokenNode*	parent;			// 
	TokenNode*	child[2];	// subtrees
	
	TokenNode(){}
	
	TokenNode(Token* token) :
		t(token),weight(t->freq),parent(0)
	{
		child[0]=child[1]=0;
	}
	
	TokenNode(TokenNode & ch1,TokenNode & ch2) :
		t(0),
		weight(ch1.weight + ch2.weight),
		parent(0)
	{
		child[0]=&ch1;ch1.parent=this;
		child[1]=&ch2;ch2.parent=this;
	}
	
	int Freq(){return weight;}
	
	void Traverse(ulong code,int length)
// Builds the Huffman codes for each leaf child recursively.
	{
		if (!child[0] && !child[1])	// leaf
		{
			if (length==0) length=1;
			t->code=code | (length<<27);
			if (gMaxLength < length) gMaxLength = length;
		} else
		{
			assert(child[0] && child[1]);
 			child[0]->Traverse(code<<1,length+1);
			child[1]->Traverse((code<<1) | 1,length+1);
			delete child[0];
			delete child[1];
		}
	}
};
typedef TokenNode* TokenNodePtr;

struct HashBucket

struct HashBucket {
	Token*	first;
	Token*	last;
};

struct TokenSet

// TokenSet holds all tokens of one kind (word or link)
struct TokenSet {
	int		fill;					// number of tokens defined
	int		multi;				// number of tokens used multiple times
	int		maxFragLen;	// longest fragment encountered
	int		fragLenSize;// number of bits to represent longest frag length
	int 	codeLenSize;	// number of bits to represent longest code
	int		numCodes;		// number of Huffman codes defined
// the maximum number of unique tokens, and the hash table size
// are defined statically to some reasonable values
	Token 	token[kMaxUniqueToken];
	HashBucket	hashTable[kMaxHashTable];
	
	TokenSet():
		fill(2),multi(0),maxFragLen(0),fragLenSize(0),
		codeLenSize(0)		
	{
		token[0]=Token(0,0,1);// single occurrence flag
		token[1]=Token(0,0,2);// multiple occurrence flag
		memset(hashTable,0,sizeof(hashTable));
	}
	
	Token*	Cache(uchar* frg,int frgLen)
// Finds the fragment in the cache, and increments its frequency.
// If fragment is not found, a new token is defined and cached. 
	{
		int hash=crc.HashFunction(frg,frgLen);
		HashBucket* HB=&hashTable[kHashMask & hash];	
		Token* t=HB->first;							
		while (t)
		{											
			if (0 == t->MisMatch(frg,frgLen,hash))
			{
				if (t->freq == 1)
					multi++;
				t->freq++;							
													
				return t;// found cached token
			}
			t=t->next;
		}
		// add a new token to the cache
		if (fill<kMaxUniqueToken-1)
		{
			if (frgLen>maxFragLen)maxFragLen=frgLen;
			token[fill]=Token(frg,frgLen,1);
			t=&token[fill++];
			
			if (HB->last)
			{
				HB->last->next=t;
			} else HB->first=t;
			HB->last=t;
		}
		return t;// could be 0!
	}
	
// The first two token spaces are reserved for the flags
//	Token* SingleFlag() {return &token[0];}
//	Token* MultiFlag()  {return &token[1];}

#define SingleFlag() (&token[0])
#define MultiFlag() (&token[1])

	void MakeTokenCodes()
// Pushes all relevant tokens on a priority queue, generates
// the Huffman tree, and then traverses it to assign each token a code.
	{
		Finish();
	// collect multi-use tokens on heap
		Heap<TokenNodePtr,int> qmap(3+multi);
		Token* t=token;
		for (int i=0;i<fill;i++,t++){
			if (t->freq > 1)
			qmap.Insert(new TokenNode(t));
		}
		numCodes=qmap.heapSize;
		
	// build the Huffman tree for the token codes
		while(qmap.heapSize > 1) 
		{
			TokenNodePtr ch1=qmap.Pop();
			TokenNodePtr ch2=qmap.Pop();
			TokenNodePtr parent=new TokenNode(*ch1,*ch2);
			qmap.Insert(parent);
		}
	
		TokenNode* root=qmap.Pop();
	
		gMaxLength=0;
		root->Traverse(0,0);
		codeLenSize=BitsNeeded(gMaxLength);
		
		delete root;	
	}
	
	void Finish()
// Updates flag tokens with their frequencies and calculates the number
// of bits needed to send the fragment length when sending fragments
// explicitly.  The longest actually occurring fragment determines this.
	{
		int f0=fill-multi;
		if (f0<2) f0=2;
		token[0].SetFrequency(f0);
		int f1=multi;
		if (f1<2) f1=2;
		token[1].SetFrequency(f1);
		fragLenSize=BitsNeeded(maxFragLen);
	}
	
	void SendParms(BitPump & B)
// Sends the compressed text header, defining all constants needed
// by the expander before decoding can begin.
	{
		B.Send(fragLenSize,5);
		B.Send(codeLenSize,5);
		int numCodesSize=BitsNeeded(numCodes);
		B.Send(numCodesSize,5);
		B.Send(numCodes,numCodesSize);
		B.Send(token[0].CodeLength(),codeLenSize);
		B.Send(token[0].code,token[0].CodeLength());
		B.Send(token[1].CodeLength(),codeLenSize);
		B.Send(token[1].code,token[1].CodeLength());
	}
	
	void Send(Token* t,BitPump & B)
// Sends a single token, either:
//		just the token code (2nd or later occurrence)
//		"SingleFlag" and a single fragment which will not recur
//		"MultiFlag" followed by a fragment and its token code
	{
		if (t->freq==0)
		{
			B.Send(t->code,t->CodeLength());
		} else if (t->freq==1)
		{
			B.Send(SingleFlag()->code,SingleFlag()->CodeLength());
			B.CompressFragment((uchar*)t->frag,
				t->fragLength,fragLenSize); 
		} else
		{
			B.Send(MultiFlag()->code,MultiFlag()->CodeLength());
			B.CompressFragment((uchar*)t->frag,
				t->fragLength,fragLenSize); 
			B.Send(t->CodeLength(),codeLenSize);
			B.Send(t->code,t->CodeLength());
		// set frq=0 so this token will trigger code-only from now on
			t->freq=0;
		}
	}
};

NextLink

// NextLink and NextWord parse the next link or word respectively
// from the input text and return the length of the fragment.
inline int NextLink(uchar* inText,uchar* endText)
{
	uchar* wordStart=inText;
	while ((inText<endText) && 
		!gTables.IsWordChar(*inText)) inText++;
	return inText-wordStart;
}

NextWord

inline int NextWord(uchar* inText,uchar* endText)
{
	uchar* wordStart=inText;
	while ((inText<endText) && 
		gTables.IsWordChar(*inText)) inText++;
	return inText-wordStart;
}

Compress

static uchar* Compress(uchar *text,long length,
	uchar* compressedText,long & compressedLength)
{
// Compresses a block of text, until we either run out of text to compress, 
// unique quote space (fragLimit, allocated as a function of text size),
// or the number of unique tokens required exceeds the static limit.
// In the unlikely case we do run out, we return the compressed block
// and caller will call Compress again to compress the remaining text.
	uchar* ptext=text;
	uchar* textEnd=text+length;
	TokenSet* linkTokens=new TokenSet;
	TokenSet* wordTokens=new TokenSet;
	int estNumQuotePairs=length/5;
	int fragLimit=2*estNumQuotePairs+128;
	TokenPtr* quotes=new TokenPtr[fragLimit+1];
	TokenPtr* q=quotes;
	
// Text might start with a word or a link fragment:
	bool linkFirst=gTables.IsLinkChar(*text);
	int fragPair=0;
	
// scan text and gather all tokens 	
	if (linkFirst)
	{
		int fragLength=NextLink(ptext,textEnd);
		*q++=linkTokens->Cache(ptext,fragLength);
		ptext+=fragLength;
	}
	
	while ((ptext<textEnd)&&(fragPair<fragLimit))
	{
		int fragLength=NextWord(ptext,textEnd);
		
		Token* 
		token=wordTokens->Cache(ptext,fragLength);
		if (0==token)
			break;
		*q++=token;
		if ((ptext+=fragLength) >= textEnd)
			break;
			
		fragLength=NextLink(ptext,textEnd);
		
		token=linkTokens->Cache(ptext,fragLength);
		if (0==token)
			break;
		*q++=token;
		ptext+=fragLength;
		
		fragPair++;
	}
	
	int numQuotes=q-quotes;
	
// Make the codes

	linkTokens->MakeTokenCodes();
	wordTokens->MakeTokenCodes();

// Send all quotes ...	
	q=quotes;
	TokenPtr* endQuotes=quotes+numQuotes;
// using a bit pump	
	BitPump B((uchar*)compressedText,0);

// send the header of a few parameters	
	B.Send(linkFirst,1);
	int numQuotesSize=BitsNeeded(numQuotes);
	B.Send(numQuotesSize,5);
	B.Send(numQuotes,numQuotesSize);
	linkTokens->SendParms(B);
	wordTokens->SendParms(B);

// scan the quotes array which will alternate between word and link tokens	
	if (linkFirst)
	{
		Token* t=*q++;
		assert(t);
		linkTokens->Send(t,B);
	}

	while (q<endQuotes)
	{
		Token* t=*q++;
		assert(t);
		wordTokens->Send(t,B);
		
		if (q>=endQuotes)
			break;
		
		t=*q++;
		assert(t);
		linkTokens->Send(t,B);
	}
	
 	uchar* endOutText=B.Close();
	compressedLength=endOutText-compressedText;

// Clean up explicitly allocated memory.
// Automatic objects (Heap, BitPump) will automatically destruct
	delete [] quotes;
	delete wordTokens;	
	delete linkTokens;		
	
	return ptext;
}

//*************************  Expander Code ************************//
//
//*****************************************************************//

struct Node

struct Node {				// Node in decode tree
	long	fragLength;	// fragLength=0 for non-terminal nodes
	uchar*	frag;			// frag=0 for FLAG nodes
	Node*	child[2];
};

struct Expander

struct Expander {
	int		fragLenSize;
	int 	codeLenSize;
	int		numCodesSize;
	int		numCodes;
	int		flag0size;
	int		flag0code;
	int		flag1size;
	int		flag1code;
	int 	numNodes;
	Node* 	node;
	Node*	nextNode;
	Node*	endNode;
	int		numQuotesRcvd;
	bool	link;
	
// Constructor parses compressed file header to establish needed
// parameters and define the bootstrap nodes of the Huffman tree  
	Expander(BitPump & B,bool lnk) :
		fragLenSize(B.Receive(5)),
		codeLenSize(B.Receive(5)),
		numCodesSize(B.Receive(5)),
		numCodes(B.Receive(numCodesSize)),
		flag0size(B.Receive(codeLenSize)),
		flag0code(B.Receive(flag0size)),
		flag1size(B.Receive(codeLenSize)),
		flag1code(B.Receive(flag1size)),
		numNodes(2*numCodes-1),
		node(new Node[numNodes]),
		nextNode(node),
		endNode(node+numNodes),
		numQuotesRcvd(0),
		link(lnk)
	{
		memset(node,0,numNodes*sizeof(Node));
		nextNode=node+1;
		MakeNode(flag0size,flag0code,0,1);// single flag
		MakeNode(flag1size,flag1code,0,2);// multi flag
	}
	
	~Expander(){delete [] node;}
	
	void MakeNode(int size,ulong code,uchar* frag,int fragLength)
// Traces from root to the node defined by the code, assigning missing
// intermediate nodes as needed.
// Installs fragment length and text pointer in the leaf node.
	{
		Node* n=node;
		int bit;
		for (int i=size-1;i>0;--i)
		{
			bit=1 & (code >> i);
			if (0==n->child[bit]) 
			{
				assert(nextNode < endNode);
				n->child[bit]=nextNode++;
			}
			n=n->child[bit];
		}
		bit=1 & code;
		assert(nextNode < endNode);
		Node* terminalNode=nextNode++;
		assert(0==n->child[bit]);
		n->child[bit]=terminalNode;
		terminalNode->fragLength=fragLength;
		terminalNode->frag=frag;
	}
	
	int GetLinkQuote(BitPump & B,uchar* dest)
// Parses one link quote from the compressed text.
// returns the number of expanded characters decoded
	{
		int bit=B.Receive(1);
		Node* n=node->child[bit];
		int fragLength;
		for (;;)
		{
			int xflag=n->fragLength;
			if (xflag)
			{
				uchar* frag=n->frag;
				if (frag)	// cached quote
				{
					fragLength=xflag;
					
					*dest=*frag;
				while (--xflag) *++dest = *++frag;
						
					break;
				} else	// new quote
				{
					fragLength=B.Receive(fragLenSize);
					B.ExpandLink(dest,fragLength);
					if (xflag==1)	// SingleFlag
						break;
					// else MultiFlag: cache the code
					int codeLength=B.Receive(codeLenSize);
					int code=B.Receive(codeLength);
					MakeNode(codeLength,code,dest,fragLength);
					break;
				}
			} else
			{
				bit=B.Receive1Bit();// trace path in tree, bit by bit
				n=n->child[bit];
				assert(n);
			}
		} 
		return fragLength;
	}
	
	int GetWordQuote(BitPump & B,uchar* dest)
// Parses one word quote from the compressed text.
// returns the number of expanded characters decoded 
	{
		int bit=B.Receive(1);
		Node* n=node->child[bit];
		int fragLength;
		for (;;)
		{
			int xflag=n->fragLength;
			if (xflag)
			{
				uchar* frag=n->frag;
				if (frag)	// cached quote
				{
					fragLength=xflag;
					
					*dest=*frag;
					while (--xflag) *++dest = *++frag;
					
					break;
				} else	// new quote
				{
					fragLength=B.Receive(fragLenSize);
					B.ExpandWord(dest,fragLength);
					if (xflag==1)	// SingleFlag
						break;
					// else MultiFlag: cache the code
					int codeLength=B.Receive(codeLenSize);
					int code=B.Receive(codeLength);
					MakeNode(codeLength,code,dest,fragLength);
					break;
				}
			} else
			{
				bit=B.Receive1Bit();// trace path in tree, bit by bit
				n=n->child[bit];
				assert(n);
			}
		} 
		return fragLength;
	}
};

Expand
static uchar* Expand(
	uchar* compressedText,long compressedLength,
	uchar* expandedText,long & expandedLength)
{	
// Creates expanders for both word and link fragments,
// reads the compressed text header fields, and expands the text
	BitPump B(
		(uchar*)compressedText,
		(uchar*)compressedText+compressedLength);
	bool linkFirst=B.Receive(1);
	int numQuotesSize=B.Receive(5);
	int numQuotes=B.Receive(numQuotesSize);
	Expander linkCodes(B,true);
	Expander wordCodes(B,false);
	uchar* etext=expandedText;
	
// Starting with either a link or a word fragment, get alternating
// link and word quotes from the bit pump, decode into plain text,
// and place results into the expandedText array. 
	if (linkFirst)
	{
		int fragLength=linkCodes.GetLinkQuote(B,etext);
		etext += fragLength;
		numQuotes--;
	}
	
	while (numQuotes)
	{
		int fragLength=wordCodes.GetWordQuote(B,etext);
		etext += fragLength;
		numQuotes--;
		if (numQuotes<=0)
			break;
		
		fragLength=linkCodes.GetLinkQuote(B,etext);
		etext += fragLength;
		numQuotes--;
	}

	expandedLength = etext-expandedText;
// return the state of the compressed text pointer, in case
// the whole text was not compressed as a single block.
	return B.Buffer();	
}

//*********************  External Functions ***********************//
//
//*****************************************************************//

InitCompression
void * /* yourStorage */ InitCompression(void)
{
// No persistent storage needed
	return 0;
}

CompressText
long /* compressedLength */ CompressText(
 	char *inputText,						/* text to be compressed */
	long numInputChars,				/* length of inputText in bytes */
	char *compressedText,		/* return compressedText here */
	const void *yourStorage	/* storage returned by InitCompression */
) {
#pragma unused(yourStorage)

	uchar* it=(uchar*)inputText;
	uchar* ct=(uchar*)compressedText;
	long lit=numInputChars,lct;
	long compressedLength=0;
// Usually, text should compress in a single block.  
// Just in case it does not, this loop will compress text in blocks
	do
	{
		uchar* t2=Compress(it,lit,ct,lct);
		compressedLength += lct;
		lit -= t2 - it;
		it = t2;
		ct += lct;
	} while (lit>0);
	
	return compressedLength;
}

ExpandText
long /* expandedLength */ ExpandText(
	char *compressedText,		/* encoded text to be expanded */
	long compressedLength,		/* length of encoded text in bytes */
	char *expandedText,				/* return expanded text here */
	const void *yourStorage	/* storage returned by InitCompression */
) {
#pragma unused(yourStorage)
	
	uchar* ct=(uchar*)compressedText;
	uchar* et=(uchar*)expandedText;
	long  lct=compressedLength,let;
	long  expandedLength=0;
// Usually, text should come as a single compressed block.  
// Just in case it does not, this loop will expand multiple blocks
	do
	{
		uchar* t2=Expand(ct,lct,et,let);
		expandedLength += let;
		lct -= t2 - ct;
		ct = t2;
		et += let;
	} while (lct > 0);	
	
	return expandedLength;
}

TermCompression
void TermCompression(
	void *yourStorage		/* storage returned by InitCompression */
) {
#pragma unused(yourStorage)
// No persistent storage to destroy
}

Utilities.h

#ifndef UTILITIES_H
#define UTILITIES_H

#define NDEBUG
#include <assert.h>

typedef unsigned char uchar;
typedef uchar* ucharPtr;
typedef unsigned short ushort;
typedef unsigned long ulong;


inline int Caps(uchar* w1,uchar* w2)
// returns true if capitalization of w1 and w2 differs.
{
	return ((*w1 ^ *w2) & 0x20);
}

inline int BitsNeeded(ulong x)
// returns number of bits needed to encode range 0 to x
{
	if (x==0) return 1;
	int n=0;
	do { x >>= 1; n++;} while(x);
	return n;
}

class Tables {
public:
	Tables();
	bool 	IsWordChar(uchar c){return textchars[c];}
	bool 	IsLinkChar(uchar c){return !textchars[c];}
	uchar 	Ascii2Code(uchar c){return ascii2code[c];}
	uchar 	Code2Link(uchar c){return code2link[c];}
	uchar 	Code2Alnum(uchar c){return code2alnum[c];}
private:
	bool 	textchars[256];
	uchar	ascii2code[256];
	uchar	code2link[64];
	uchar	code2alnum[64];	
};


class BitPump {
public:
	BitPump(uchar* text,uchar* end):
		buffer(text),
		endBuffer(end),
		acc(0),
		fill(0)
	{}

	void Send(ulong x,int numBits)
	{
		acc <<= numBits;
		acc |= x & ((1<<numBits)-1);
		fill+=numBits;
		while (fill >= 8)
		{
			fill -= 8;
			*buffer++ = (acc >> fill) & 0xFF;
		}
	}
	
	ulong Receive(int numBits)
	{
		while (fill < numBits)
		{
			if (buffer>=endBuffer) return -1;
			acc = (acc<<8) | *buffer++;
			fill+=8;
		}
		fill -= numBits;
		ulong x = (acc>>fill) & ((1<<numBits)-1);
		return x;
	}
	
	ulong Receive1Bit()
	{
		if (!fill)
		{
			if (buffer>=endBuffer) return -1;
			acc = (acc<<8) | *buffer++;
			fill+=8;
		}
		fill--;
		return (acc>>fill) & 1;
	}
	
	void CompressFragment(uchar* frag,int length,int numBits);
	uchar* ExpandLink(uchar* outText,int length);
	uchar* ExpandWord(uchar* outText,int length);
	uchar* Buffer(){return buffer;}
	uchar* Close();
	
private:
	ulong 	Receive6Bits();
	uchar*	buffer;
	uchar*	endBuffer;
	ulong	acc;
	long	fill;
};

#endif // UTILITIES_H
Utilities.cp
#include <ctype.h>
#include "Utilities.h"

//********************  Utilities Implementation ******************//
//
//*****************************************************************//

Tables gTables;

Tables::Tables()
{
	int numAlnum=0;
	for (int c=0;c<128;c++)
	{
		if (isalnum(c))
		{
			textchars[c] = true;
			numAlnum++;
		} else textchars[c] = false;
	}
		 
	if (!textchars['@']){textchars['@']=true;numAlnum++;}
	if (!textchars['_']){textchars['_']=true;numAlnum++;}
	assert(numAlnum == 64);
		
	int alnumCode=0,linkCode=0;
	for (int c=0;c<128;c++)
	{
		if (textchars[c])
		{
			code2alnum[alnumCode]=c;
			ascii2code[c]=alnumCode++;
		} else
		{
			code2link[linkCode]=c;
			ascii2code[c]=linkCode++;
		}
		textchars[128+c] = textchars[c];
		ascii2code[128+c] = ascii2code[c];
	}
}

inline ulong BitPump::Receive6Bits()
// return -1U if buffer empty
{
	if (fill < 6)
	{
		if (buffer>=endBuffer) return -1;
		acc = (acc<<8) | *buffer++;
		fill+=8;
	}
	fill -= 6;
	return (acc>>fill) & 63;
}

void BitPump::CompressFragment(uchar* frag,int length,int numBits)
{
	uchar* s=frag-1;
	Send(length,numBits);		
	for (int i=0;i<length;i++)
	{
		int c=*++s;
		Send(gTables.Ascii2Code(c),6);// xlate ASCII to 6 bits 
	}
}

uchar* BitPump::ExpandLink(uchar* outText,int length)
{
	for (int i=0;i<length;i++)
	{
		*outText++ =
		gTables.Code2Link(Receive6Bits());// xlate to ASCII
	}
	return outText;
}
uchar* BitPump::ExpandWord(uchar* outText,int length)
{
	for (int i=0;i<length;i++)
	{
		*outText++ =
		gTables.Code2Alnum(Receive6Bits());// xlate to ASCII
	}
	return outText;
}
	
uchar* BitPump::Close()
{
	if (fill)
		Send(0,7&(32-fill));// zero pad last byte
	return buffer;
}
Heap.h
#ifndef HEAP_H
#define HEAP_H

template <class Base,class Value>
// Note: class Base must be a pointer to a class
//		having a member Freq() of type Value
// Value must have the operator > defined
struct Heap {
	Base* 	heapBase;
	int		heapSize;
	int 	maxHeapSize;
	Heap(int size) :
		heapBase(new Base[size+1]),
		heapSize(0),
		maxHeapSize(size){}
	~Heap(){Clear();}
	void Clear(){delete[] heapBase;}
	void Init(int size){
		heapBase=new Base[size+1];
		heapSize=0;
		maxHeapSize=size;
	}
	void Insert(Base k) 
	{
    	int i=++heapSize;
    	Value wk=k->Freq();
    	int j=i>>1;
    	Base z;
    	while (j && (wk < (z=heapBase[j])->Freq())) 
    	{
      		heapBase[i]=z;     
       		i=j;
      		j=i>>1;
    	}
    	heapBase[i]=k;    
  	}
  
  	Base Pop() 
  	{
//the node at heapBase[1] is the lowest weight
//it is removed from the heap and returned
//the heap is readjusted.
    	Base rc=heapBase[1];
    	Base k=heapBase[heapSize--];
    	if (heapSize<=1) {
      		heapBase[1]=k;            
      		return rc;
    	}
    	int i=1,j=2;
    	Value wk=k->Freq();
    	while (j<=heapSize) 
    	{
      		if ((j<heapSize)
      		&& (heapBase[j]->Freq() > heapBase[j+1]->Freq()))
        	j++;
      		if (heapBase[j]->Freq() > wk)
        		break;
      		heapBase[i]=heapBase[j];  
      		i=j;j+=j;
    	}
    	heapBase[i]=k;        
    	return rc;
  	}
};

#endif
 
