
Asynchronous IO
Volume Number: 12
Issue Number: 12
Column Tag: Toolbox Techniques

Building Better Applications
Via Asynchronous I/O

By Richard Clark, General Magic, Inc.

Note: Source code files accompanying this article are located on the MacTech CD-ROM or source code disks.

Have you ever looked at an application and wondered how to make it faster? Sure, you can select better algorithms or rewrite sections in assembly language, but sometimes a fast processor or great algorithm is not enough. Many applications reach a limit where they can process the information faster than they can get it. These applications are said to be I/O bound. Improving such programs is straightforward, once you know something more about how the Macintosh reads and writes information.

Most developers go through several basic stages in getting information in and out of their programs. In the first stage, they use their programming language’s built-in I/O commands - printf and scanf for C, WRITELN and READLN for Pascal. Soon, driven by the derision of their peers, a desire to manipulate something other than text streams, or a feeling they should be using the underlying operating system directly, they will shift over to the Macintosh FSWrite and FSRead routines.

Quite a few Macintosh programmers spend the remainder of their careers using FSRead and FSWrite. Some use FSRead’s “newline mode” to emulate scanf or READLN. Others read their data in as needed, whether they need a single character or an entire structure. The wisest users of FSRead use buffering - they read the data in large blocks and process the information in memory.

All of these techniques have one property in common - they all use “synchronous I/O.” A synchronous I/O operation makes the calling program wait until the operation has been completed. Programmers who want to get the best possible performance out of their applications can eliminate this wait by switching to “asynchronous I/O,” which asks the OS to transfer information while your other code keeps running. There is another reason why advanced Macintosh programmers use asynchronous I/O - it’s the only way to get at some of the more advanced communications features, such as TCP/IP, and to get real-time information from users.

A Programmer’s Look at I/O

We will take a look at the uses of synchronous and asynchronous I/O through a function that counts occurrences of the letter “A” in a text file. The simplest version of this program uses the C Standard I/O Library functions.

int countChars(FSSpecPtr fsp)
{
  // Count the number of times the letter A appears in the file
  FILE *f = NULL;
  int  counter = 0;
  int  currChar;      // int, not char, so the EOF value compares correctly
  char filename[64];

  // Homemade PtoCstr operation which makes a copy of the string
  BlockMove((Ptr)&fsp->name[1], filename, fsp->name[0]);
  filename[fsp->name[0]] = '\0';

  // Count the characters
  f = fopen(filename, "r");
  if (f == NULL) return 0;
  while ((currChar = fgetc(f)) != EOF) {
    if (currChar == 'A') counter += 1;
  }
  fclose(f);
  return counter;
}

While this looks like a simple program, quite a bit is going on behind the scenes. fgetc() does not simply read each character from the disk as the program requests it, but uses a buffering scheme instead. When buffering, the application (or library) reads a block of information into memory all at once, then returns each item from that block of memory. Without buffering, each read would have to position the disk’s read/write head to the proper location on the disk, then wait for the correct area of the disk to rotate into place. Thus the program would spend most of its time waiting for the drive hardware itself.
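If you stay with the Standard I/O library, you can at least enlarge its internal buffer. Here is a minimal sketch using the standard setvbuf call; the file name and the 32K buffer size are arbitrary choices for illustration, not values from the original listing:

  FILE *f = fopen("sample.txt", "r");   // "sample.txt" is a placeholder name
  if (f != NULL) {
    // Ask stdio for full buffering with a 32K buffer. setvbuf must be called
    // before the first read; if it fails, stdio keeps its default buffer.
    setvbuf(f, NULL, _IOFBF, 32768L);
    /* ... read with fgetc() or fread() as usual ... */
    fclose(f);
  }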

Even with buffering, most Standard I/O library implementations are not as fast as going directly to the machine’s own file system. The extra bookkeeping associated with tracking an arbitrary number of files slows things down. We can write a faster program using the “high level” File Manager calls. When we build our new program, we will buffer the data by reading it into memory in large blocks, then process the information directly in memory. The algorithm for our buffered program is as follows.

• Allocate a fixed-size buffer (for best results, the size should be an even multiple of 1K so the Macintosh OS can read entire blocks off the disk)

• Repeat:

• Read one buffer’s worth of data

• Process data

• Until the entire file has been read (charCount == 0 after FSRead)

• Release the memory used by the buffer

And here is the source code:

int countCharsFS(FSSpecPtr fsp)
{
  // Count the number of times the letter A appears in the file,
  // reading the file in blocks
  int   counter = 0;
  char  *buffer, *currChar;
  short refNum;
  long  charCount;
  OSErr err;

  err = FSpOpenDF(fsp, fsRdPerm, &refNum);
  if (err == noErr) {
    buffer = (char *)NewPtr(kBufferSize);
    if (buffer != nil) {
      for (;;) {
        charCount = kBufferSize;
        err = FSRead(refNum, &charCount, (Ptr)buffer);
        if ((err != noErr) && (err != eofErr)) break;
        if (charCount == 0) break;
        currChar = buffer;
        while (charCount-- > 0) {
          if (*currChar++ == 'A') counter++;
        }
      }
      DisposePtr(buffer);
    }
    FSClose(refNum);
  }
  return counter;
}

In the most extreme case, our program could read the whole file in at once before processing it. This would reduce the number of seek operations to an absolute minimum, at the cost of allocating a huge block of memory. This is not always faster than reading a few blocks at a time.
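For the curious, here is a minimal sketch of that extreme case (not from the original listings): it sizes the buffer with GetEOF, issues a single FSRead, and simply gives up if the allocation fails.

  long  fileLength;
  char  *wholeFile;
  short refNum;
  OSErr err;

  err = FSpOpenDF(fsp, fsRdPerm, &refNum);
  if (err == noErr) {
    if (GetEOF(refNum, &fileLength) == noErr) {
      wholeFile = (char *)NewPtr(fileLength);
      if (wholeFile != nil) {
        err = FSRead(refNum, &fileLength, (Ptr)wholeFile);
        // ... scan wholeFile for 'A' exactly as in countCharsFS ...
        DisposePtr(wholeFile);
      }
    }
    FSClose(refNum);
  }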

Let’s compare some real-world timing figures for these routines. We ran these tests on a variety of Macintosh systems, including 680x0 and PowerPC models. The system disk cache was set to 32K for all tests. This article includes only the results from a PowerMac 7100/66, but the other systems showed similar results. If you want to see the values for your own machine, the test application’s sources are available from MacTech NOW.

File Size     countChars     countCharsFS
1000K         1003 ms         218 ms
2000K         2754 ms         661 ms
3000K         4031 ms        1076 ms
4000K         5328 ms        1467 ms
5000K         6608 ms        1885 ms

Shown graphically, the advantage of going directly to the file system becomes even more apparent:

Improving the Program with Asynchronous I/O

In all of these routines the “count characters” code has to wait for the data to arrive from the disk before starting processing. We can make the code even faster by reading in the next buffer at the same time we are processing the current buffer’s contents. Reading in some data while performing other work is known as asynchronous I/O.

Asynchronous I/O works on the basis of scheduling I/O operations. Instead of calling FSRead and waiting until the buffer has been filled, we will pass the system a request to fill a buffer and instructions on how to notify us when the request has been completed. The Macintosh OS puts our request into a list (known as a Queue) and fills the requests in the order they were made.

Here’s how to structure our program using asynchronous I/O:

• Allocate two buffers in memory.

• Tell the Macintosh OS we want the first block of data to go into the first buffer. (This schedules the buffer for filling as soon as possible and returns control immediately.)

• Tell the OS we want another block of data to go into the second buffer.

• Repeat:

• Wait until a full buffer is available

• Process it

• Make another request for data using this buffer as the destination

• Until the entire file has been processed

• Release the memory used by the buffers

Notice that our program may have to wait for a buffer to finish filling, but it also gets to work for part of that time. Since we used to do nothing while waiting for the read to complete, any work we do while waiting now happens “for free.”

Changing Your I/O Model

Taking advantage of asynchronous I/O requires that you break away from the “high level” calls we used in the previous code samples. Fortunately, the operating system provides PBRead and PBWrite as the more flexible “low level” counterparts to FSRead and FSWrite.

The PB calls don’t take their parameters on the stack like the FS calls do. Instead, each PB call takes a pointer to a “parameter block” structure containing all of the required information. You can easily translate an FS call into a PB call by allocating a parameter block and filling in the appropriate fields. In fact, the Macintosh OS basically does this every time you use an FS call.

Converting from FSRead to PBRead

// The original synchronous FS call:
err = FSRead(refNum, &charCount, (Ptr)buffer);

// The equivalent PB call. NewPtrClear zeroes the fields we don't need for this example.
pb = (ParmBlkPtr)NewPtrClear(sizeof(ParamBlockRec));
pb->ioParam.ioRefNum = refNum;
pb->ioParam.ioVRefNum = vRefNum;  // Note: somebody has to supply the volume refNum
pb->ioParam.ioBuffer = (Ptr)buffer;
pb->ioParam.ioReqCount = charCount;
err = PBReadSync(pb);
charCount = pb->ioParam.ioActCount;
DisposePtr((Ptr)pb);

So far it looks like a lot of extra work to use a PBRead call instead of an FSRead call. That is true for basic synchronous I/O, but the PB calls can do more. One of the better aspects of PBRead and PBWrite is the ability to set the positioning mode and offset with each call. If you make a simple FSRead or FSWrite call, the transfer starts at the “mark” - a value which indicates the current position in the file. The PB calls allow you to begin reading or writing from the file mark, at an offset relative to the file mark, or at an offset relative to the start of the file. In addition, PBWrite can perform a “read-verify” operation after writing data to confirm that it went out correctly.
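For example, here is a minimal sketch (reusing the refNum, vRefNum, and buffer from the conversion above; the request size and offset are arbitrary choices, not values from the article) that reads 512 bytes starting 1K into the file by setting the positioning fields explicitly:

  pb = (ParmBlkPtr)NewPtrClear(sizeof(ParamBlockRec));
  pb->ioParam.ioRefNum    = refNum;
  pb->ioParam.ioVRefNum   = vRefNum;
  pb->ioParam.ioBuffer    = (Ptr)buffer;
  pb->ioParam.ioReqCount  = 512;           // arbitrary example size
  pb->ioParam.ioPosMode   = fsFromStart;   // position relative to the start of the file
  pb->ioParam.ioPosOffset = 1024;          // begin reading 1K into the file
  err = PBReadSync(pb);
  DisposePtr((Ptr)pb);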

Our real reason for introducing the PB calls in this article is to use them for asynchronous I/O. We want to place a request with the system to get some data and learn when that request has been fulfilled. The parameter blocks have just the information we need to make this happen: the ioResult and ioCompletion fields.

The ioResult field gives the result code of the operation - either 0 for “no error” or a negative value designating an operating system error. The File Manager places a positive value into this field when the request is posted, and fills in the final result after the data has been transferred. When the value changes - hopefully to 0, meaning “no error”, but possibly to a negative error code - we know the transfer is finished and we can use the data.

Using what we’ve learned so far, we can improve on all of our FSRead-based routines. The code can run as fast as the “read the whole file into memory” version, but uses only two small blocks of memory as buffers. Notice how we use a two-entry table to hold a pair of parameter blocks. This allows us to fill one block while processing the other.

int countCharsAsync(FSSpecPtr fsp)
{
  // Count the number of times the letter A appears in the file,
  // reading asynchronously into two alternating buffers
  int        counter = 0;
  ParmBlkPtr pb[2], currPBPtr;
  int        currPB = 0;
  char       *buffer, *currChar;
  short      refNum;
  long       charCount;
  OSErr      err;

  // Open the file
  err = FSpOpenDF(fsp, fsRdPerm, &refNum);
  if (err == noErr) {
    // Allocate and set up the parameter blocks
    pb[0] = (ParmBlkPtr)NewPtrClear(sizeof(ParamBlockRec));
    pb[1] = (ParmBlkPtr)NewPtrClear(sizeof(ParamBlockRec));
    setup(pb[0], refNum, fsp->vRefNum, kBufferSize);
    setup(pb[1], refNum, fsp->vRefNum, kBufferSize);

    // Start 2 read operations going
    (void) PBReadAsync(pb[0]);
    (void) PBReadAsync(pb[1]);
    currPBPtr = pb[0];

    for (;;) {
      // Wait for the I/O operation to complete
      while (currPBPtr->ioParam.ioResult > 0) {};

      // Stop on any error other than "end of file"
      if ((currPBPtr->ioParam.ioResult != noErr) &&
          (currPBPtr->ioParam.ioResult != eofErr)) break;

      // The data is ready, so count the characters
      buffer = (char *)currPBPtr->ioParam.ioBuffer;
      charCount = currPBPtr->ioParam.ioActCount;
      if (charCount == 0) break;
      currChar = buffer;
      while (charCount-- > 0) {
        if (*currChar++ == 'A') counter++;
      }

      // Put this buffer back into the reading queue
      (void) PBReadAsync(currPBPtr);

      // Switch to the other buffer
      currPB = 1 - currPB;
      currPBPtr = pb[currPB];
      currPBPtr->ioParam.ioPosMode = fsAtMark;
    }
    // Release the memory
    destroy(pb[0]);
    destroy(pb[1]);
    FSClose(refNum);
  }
  return counter;
}

void setup (ParmBlkPtr pb, short refNum,
   short vRefNum, long bufSize)
{
 pb->ioParam.ioCompletion = NULL;
 pb->ioParam.ioResult = 1;
 pb->ioParam.ioRefNum = refNum;
 pb->ioParam.ioVRefNum = vRefNum;
 pb->ioParam.ioReqCount = bufSize;
 pb->ioParam.ioBuffer = NewPtr(bufSize);
 pb->ioParam.ioPosMode = fsAtMark;
 pb->ioParam.ioPosOffset = 0;
}
void destroy (ParmBlkPtr pb)
{
  DisposePtr(pb->ioParam.ioBuffer);
  DisposePtr((Ptr)pb);
}

Let’s look at the timing results for the above code. Again, since the PowerPC and 68K numbers follow the same pattern, we will show only the PowerPC numbers here.

File Size     countCharsFS    PBRead async
1000K          218 ms          167 ms
2000K          661 ms          454 ms
3000K         1076 ms          700 ms
4000K         1467 ms          934 ms
5000K         1885 ms         1183 ms

Graphically, the timing looks like this: (Notice that we’ve changed scale from our previous graph so you can get a better look at the difference between synchronous and asynchronous I/O.)

While not a dramatic change, the last result is still an improvement over synchronous I/O. It’s hard to make this code much faster, but you can take advantage of the available processing time to add features and improve the user’s experience.

Improving the User’s Experience

All of these routines share a common drawback - they don’t allow any time for other programs to run. They sit in tight loops reading and processing information or waiting for the next read to complete. This is OK when writing demonstration code for a magazine article, but it isn’t a reasonable practice in “real” programs. Real applications should arrange for their I/O intensive routines to give time to other applications and to allow the user to cancel at any time.

A well-designed application will give away time while it’s simply waiting for an I/O request to finish. Most applications could do this by calling WaitNextEvent, processing the event, then checking the result code of the pending operation before giving away any more time. The only problem is that when an application gives away time with WaitNextEvent, there’s no telling how soon control will be returned. Applications that need immediate notification at the end of an I/O operation must use completion routines.
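Here is a minimal sketch of such a wait loop, assuming a pending parameter block named pb and an application-supplied HandleEvent routine (a hypothetical name, not part of the Toolbox); the 10-tick sleep value is an arbitrary choice:

  EventRecord theEvent;

  // Give time to other applications while the asynchronous read is pending
  while (pb->ioParam.ioResult > 0) {
    if (WaitNextEvent(everyEvent, &theEvent, 10, NULL))
      HandleEvent(&theEvent);   // process the event, check for a Cancel request
  }
  if (pb->ioParam.ioResult == noErr) {
    // The buffer is full; process it as before
  }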

A completion routine is a function in the program that the Macintosh OS calls when a specific I/O operation ends. (The requesting program supplies the function pointer in the ioCompletion field of the parameter block.) Completion routines run under the same tight restrictions as any other “interrupt time” code: they cannot allocate or move memory, and they cannot rely on the contents of any unlocked relocatable block. Most completion routines are not guaranteed access to their application’s globals, and the information passed into each completion routine varies wildly. For these reasons, we will defer a thorough discussion of completion routines to another article.

The completion environment for PBRead is especially limited: the routine receives no parameters, and the parameter block has been pulled off of the I/O queue by the time the routine is called. The routine does appear to have A5 set up for it, so it can reach the application’s globals, but even then it can’t do much - only set a flag indicating that the I/O has completed, or take an existing parameter block and issue another I/O call with it.

Besides giving away time, there is one other thing a well-behaved application should do, and that is allow the user to cancel an operation. If the user asks to cancel during a synchronous I/O operation, the application simply completes that operation and doesn’t begin another. However, if the user cancels during asynchronous I/O, the application has to remove all of the pending requests. The KillIO() call takes a file or driver reference number and removes all of its pending I/O requests, so applications can kill the pending requests then wait for the current operation to complete before closing the file or driver.
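As a rough sketch (reusing refNum, pb, currPBPtr, and destroy from countCharsAsync above), a cancel handler might look like this:

  // Cancel handler sketch: drop the queued requests, let the request that is
  // currently in progress finish, then clean up.
  (void) KillIO(refNum);                        // remove queued requests
  while (currPBPtr->ioParam.ioResult > 0) {}    // wait for the active request
  destroy(pb[0]);
  destroy(pb[1]);
  FSClose(refNum);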

Conclusion

Developers need to look beyond basic I/O calls if they want to get maximum performance from their programs. Asynchronous I/O, while the most complicated way to read and write information, is one of the best ways to improve your application’s performance. The same techniques that improve the performance of file I/O become critical in near real-time applications, such as TCP/IP networking or serial communications, which cannot afford pauses in their data collection.

 
