
Asynchronous IO
Volume Number:12
Issue Number:12
Column Tag:Toolbox Techniques

Building Better Applications
Via Asynchronous I/O

By Richard Clark, General Magic, Inc.

Note: Source code files accompanying article are located on MacTech CD-ROM or source code disks.

Have you ever looked at an application and wondered how to make it faster? Sure, you can select better algorithms or rewrite sections in assembly language, but sometimes a fast processor or great algorithm is not enough. Many applications reach a limit where they can process the information faster than they can get it. These applications are said to be I/O bound. Improving such programs is straightforward, once you know something more about how the Macintosh reads and writes information.

Most developers go through several basic stages in getting information in and out of their programs. In the first stage, they use their programming language’s built-in I/O commands - printf and scanf for C, WRITELN and READLN for Pascal. Soon, driven by the derision of their peers, a desire to manipulate something other than text streams, or a feeling they should be using the underlying operating system directly, they will shift over to the Macintosh FSWrite and FSRead routines.

Quite a few Macintosh programmers spend the remainder of their careers using FSRead and FSWrite. Some use FSRead’s “newline mode” to emulate scanf or READLN. Others read their data in as needed, whether they need a single character or an entire structure. The wisest users of FSRead use buffering - they read the data in large blocks and process the information in memory.

All of these techniques have one property in common - they all use “synchronous I/O.” A synchronous I/O operation makes the calling program wait until the operation has been completed. Programmers who want to get the best possible performance out of their applications can eliminate this wait by switching to “asynchronous I/O,” which asks the OS to transfer information while the rest of the program keeps running. There is another reason why advanced Macintosh programmers use asynchronous I/O - it’s the only way to get at some of the more advanced communications features, such as TCP/IP, and to get real-time information from users.

A Programmer’s Look at I/O

We will take a look at the uses of synchronous and asynchronous I/O through a function that counts occurrences of the letter “A” in a text file. The simplest version of this program uses the C Standard I/O Library functions.

int countChars(FSSpecPtr fsp)
{
    // Count the number of times the letter A appears in the file
    FILE *f = NULL;
    int  counter = 0;
    int  currChar;      // int, not char, so the EOF comparison works
    char filename[64];

    // Homemade PtoCstr operation which makes a copy of the string
    BlockMove((Ptr)&fsp->name[1], filename, fsp->name[0]);
    filename[fsp->name[0]] = '\0';

    // Count the characters
    f = fopen(filename, "r");
    if (f == NULL) return 0;
    while ((currChar = fgetc(f)) != EOF) {
        if (currChar == 'A') counter += 1;
    }
    fclose(f);
    return counter;
}

While this looks like a simple program, quite a bit is going on behind the scenes. fgetc() does not simply read each character from the disk as the program requests it, but uses a buffering scheme instead. When buffering, the application (or library) reads a block of information into memory all at once, then returns each item from that block of memory. Without buffering, each read would have to position the disk’s read/write head to the proper location on the disk, then wait for the correct area of the disk to rotate into place. Thus the program would spend most of its time waiting for the drive hardware itself.
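The buffering idea itself can be sketched in portable, present-day C - a hypothetical stand-in for the Toolbox code, using stdio rather than the File Manager, with the function name and the 4K buffer size our own choices:

```c
#include <stdio.h>

/* Count occurrences of 'A' by reading the file in 4K blocks,
   then scanning each block in memory. A portable sketch, not
   the article's Toolbox code. */
int countCharsBuffered(const char *path)
{
    FILE  *f = fopen(path, "rb");
    char  buffer[4096];
    size_t n;
    int   counter = 0;

    if (f == NULL) return -1;
    /* Each fread pulls in one block; the inner loop never
       touches the disk. */
    while ((n = fread(buffer, 1, sizeof buffer, f)) > 0) {
        for (size_t i = 0; i < n; i++)
            if (buffer[i] == 'A') counter++;
    }
    fclose(f);
    return counter;
}
```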

Even with buffering, most Standard I/O library implementations are not as fast as going directly to the machine’s own file system. The extra bookkeeping associated with tracking an arbitrary number of files slows things down. We can write a faster program using the “high level” File Manager calls. When we build our new program, we will buffer the data by reading it into memory in large blocks, then process the information directly in memory. The algorithm for our buffered program is as follows.

• Allocate a fixed-size buffer (for best results, the size should be an even multiple of 1K so the Macintosh OS can read entire blocks off the disk)

• Repeat:

• Read one buffer’s worth of data

• Process data

• Until the entire file has been read (charCount == 0 after FSRead)

• Release the memory used by the buffer

And here is the source code:

int countCharsFS(FSSpecPtr fsp)
{
    // Count the number of times the letter A appears in the file,
    // reading the file in blocks
    int   counter = 0;
    char  *buffer, *currChar;
    short refNum;
    long  charCount;
    OSErr err;

    err = FSpOpenDF(fsp, fsRdPerm, &refNum);
    if (err == noErr) {
        buffer = (char *)NewPtr(kBufferSize);
        if (buffer != nil) {
            for (;;) {
                charCount = kBufferSize;
                err = FSRead(refNum, &charCount, (Ptr)buffer);
                if ((err != noErr) && (err != eofErr)) break;
                if (charCount == 0) break;
                currChar = buffer;
                while (charCount-- > 0) {
                    if (*currChar++ == 'A') counter++;
                }
            }
            DisposePtr(buffer);
        }
        FSClose(refNum);
    }
    return counter;
}

In the most extreme case, our program could read the whole file in at once before processing it. This would reduce the number of seek operations to an absolute minimum, at the cost of allocating a huge block of memory. This is not always faster than reading a few blocks at a time.
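In portable C terms, the whole-file approach looks like the sketch below (function name and abbreviated error handling are our own, assuming fseek/ftell suffice to measure the file):

```c
#include <stdio.h>
#include <stdlib.h>

/* Count 'A's by reading the entire file into one big allocation.
   Minimal sketch: one read, one scan, one free. */
int countCharsWholeFile(const char *path)
{
    FILE *f = fopen(path, "rb");
    long  size;
    char *data;
    int   counter = 0;

    if (f == NULL) return -1;
    fseek(f, 0, SEEK_END);     /* measure the file... */
    size = ftell(f);
    rewind(f);
    data = malloc(size);       /* ...and allocate one huge block */
    if (data != NULL && fread(data, 1, size, f) == (size_t)size) {
        for (long i = 0; i < size; i++)
            if (data[i] == 'A') counter++;
    }
    free(data);
    fclose(f);
    return counter;
}
```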

Let’s compare some real-world timing figures for these routines. We ran these tests on a variety of Macintosh systems, including 680x0 and PowerPC models. The system disk cache was set to 32K for all tests. This article includes only the results from a PowerMac 7100/66, but the other systems were similar. If you want to see the values for your own machine, the test application’s sources are available from MacTech.

file size    countChars    countCharsFS
  1000K       1003 ms        218 ms
  2000K       2754 ms        661 ms
  3000K       4031 ms       1076 ms
  4000K       5328 ms       1467 ms
  5000K       6608 ms       1885 ms

Shown graphically, the advantage of going directly to the file system becomes even more apparent:

Improving the Program with Asynchronous I/O

In all of these routines the “count characters” code has to wait for the data to arrive from the disk before starting processing. We can make the code even faster by reading in the next buffer at the same time we are processing the current buffer’s contents. Reading in some data while performing other work is known as asynchronous I/O.

Asynchronous I/O works on the basis of scheduling I/O operations. Instead of calling FSRead and waiting until the buffer has been filled, we will pass the system a request to fill a buffer and instructions on how to notify us when the request has been completed. The Macintosh OS puts our request into a list (known as a queue) and fills the requests in the order they were made.

Here’s how to structure our program using asynchronous I/O:

• Allocate two buffers in memory.

• Tell the Macintosh OS we want the first block of data to go into the first buffer. (This schedules the buffer for filling as soon as possible and returns control immediately.)

• Tell the OS we want another block of data to go into the second buffer.

• Repeat:

• Wait until a full buffer is available,

• Process it, and

• Make another request for data using this buffer as the destination.

• Until the entire file has been processed.

• Release the memory used by the buffers

Notice that our program may have to wait for a buffer to finish filling, but it also gets to work for part of that time. Since we used to do nothing while waiting for the read to complete, any work we do while waiting now happens “for free.”
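The overlap described above can be simulated in portable C with a POSIX reader thread standing in for the File Manager’s queue. This is a hedged sketch, not the article’s Toolbox code: the buffer size, names, and locking scheme are our own.

```c
#include <pthread.h>
#include <stdio.h>

/* Double-buffered reader: one thread fills buffers while the main
   thread counts characters in the buffer filled previously. */
#define BUF_SIZE 4096

typedef struct {
    char   data[BUF_SIZE];
    size_t count;            /* bytes in data; 0 means end of file */
    int    full;             /* 1 when ready to process */
    pthread_mutex_t lock;
    pthread_cond_t  changed;
} Buffer;

static Buffer bufs[2];

static void *reader(void *arg)
{
    FILE *f = (FILE *)arg;
    int i = 0;
    for (;;) {
        Buffer *b = &bufs[i];
        pthread_mutex_lock(&b->lock);
        while (b->full)                       /* wait for an empty buffer */
            pthread_cond_wait(&b->changed, &b->lock);
        b->count = fread(b->data, 1, BUF_SIZE, f);
        b->full = 1;                          /* hand it to the consumer */
        pthread_cond_signal(&b->changed);
        pthread_mutex_unlock(&b->lock);
        if (b->count == 0) break;             /* EOF propagated via count */
        i = 1 - i;
    }
    return NULL;
}

int countCharsAsyncSketch(const char *path)
{
    FILE *f = fopen(path, "rb");
    pthread_t t;
    int counter = 0, i = 0;

    if (f == NULL) return -1;
    for (int j = 0; j < 2; j++) {
        bufs[j].full = 0;
        pthread_mutex_init(&bufs[j].lock, NULL);
        pthread_cond_init(&bufs[j].changed, NULL);
    }
    pthread_create(&t, NULL, reader, f);      /* reads run "in the background" */
    for (;;) {
        Buffer *b = &bufs[i];
        pthread_mutex_lock(&b->lock);
        while (!b->full)                      /* wait for a full buffer */
            pthread_cond_wait(&b->changed, &b->lock);
        size_t n = b->count;
        for (size_t k = 0; k < n; k++)        /* process while the other fills */
            if (b->data[k] == 'A') counter++;
        b->full = 0;                          /* recycle the buffer */
        pthread_cond_signal(&b->changed);
        pthread_mutex_unlock(&b->lock);
        if (n == 0) break;                    /* reader reported EOF */
        i = 1 - i;
    }
    pthread_join(t, NULL);
    fclose(f);
    return counter;
}
```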

Changing Your I/O Model

Taking advantage of asynchronous I/O requires that you break away from the “high level” calls we used in the previous code samples. Fortunately, the operating system provides PBRead and PBWrite as the more flexible “low level” counterparts to FSRead and FSWrite.

The PB calls don’t take their parameters on the stack like the FS calls. Instead, each PB call takes a pointer to a “parameter block” structure containing all of the required information. You can easily translate an FS call into a PB call by allocating a parameter block and filling in the appropriate fields. In fact, the Macintosh OS basically does this every time you use an FS call.

Converting from FSRead to PBRead

err = FSRead(refNum, &charCount, (Ptr)buffer);

// Create a parameter block. We'll use "clear" to zero fields we don't need for this example
pb = (ParmBlkPtr)NewPtrClear(sizeof(ParamBlockRec));
pb->ioParam.ioRefNum = refNum;
pb->ioParam.ioVRefNum = vRefNum;
// Note: Somebody has to supply the volume RefNum
pb->ioParam.ioBuffer = (Ptr)buffer;
pb->ioParam.ioReqCount = charCount;
err = PBReadSync(pb);
charCount = pb->ioParam.ioActCount;
DisposePtr((Ptr)pb);

So far it looks like a lot of extra work to use a PBRead call instead of an FSRead call. That is true for basic synchronous I/O, but the PB calls can do more. One of the better aspects of PBRead and PBWrite is the ability to set the positioning mode and offset each time. If you make a simple FSRead or FSWrite call, the transfer takes place at the “mark” - a value which indicates the current position in the file. The PB calls allow you to begin reading or writing from the file mark, at an offset relative to the file mark, or at an offset relative to the start of the file. In addition, PBWrite can perform a “read-verify” operation after writing data to confirm that it went out correctly.
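Modern POSIX offers a loose analogue of these positioning modes: pread() takes an explicit offset on every call and leaves the file mark untouched, much like reading with fsFromStart. A minimal sketch (the function name is ours):

```c
#include <fcntl.h>
#include <unistd.h>

/* Positioned read: a POSIX analogue of PBRead's fsFromStart mode.
   pread() reads from an explicit offset without moving the file
   mark, much as ioPosMode/ioPosOffset do in a parameter block. */
long readAt(const char *path, char *buffer, size_t len, long offset)
{
    int fd = open(path, O_RDONLY);
    ssize_t n;

    if (fd < 0) return -1;
    n = pread(fd, buffer, len, offset);  /* len bytes starting at offset */
    close(fd);
    return (long)n;
}
```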

Our real reason for introducing the PB calls in this article is to use them for asynchronous I/O. We want to place a request with the system to get some data and learn when that request has been fulfilled. The parameter blocks have just the information we need to make this happen: the ioResult and ioCompletion fields.

The ioResult field gives the result code of the operation - either 0 for “no error” or a negative value designating an operating system error. The File Manager places a positive value into this field when the request is posted, and fills in the final result after the data has been transferred, which gives us one way to learn when the OS is finished. When the value changes - to 0 if all went well, or to a negative error code otherwise - we know the transfer is finished and we can use the data.
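On present-day systems, the same polling pattern appears in POSIX asynchronous I/O, where aio_error() plays the role of ioResult: it answers EINPROGRESS while the request is pending. A hedged sketch (names ours, the spin loop kept deliberately simple to mirror the article’s):

```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Polling for completion: a POSIX AIO analogue of watching ioResult.
   aio_error() returns EINPROGRESS until the transfer finishes, much
   as ioResult stays positive until the File Manager is done. */
long pollRead(const char *path, char *buffer, size_t len)
{
    struct aiocb cb;
    int fd = open(path, O_RDONLY);

    if (fd < 0) return -1;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buffer;
    cb.aio_nbytes = len;
    cb.aio_offset = 0;
    aio_read(&cb);                       /* schedule; returns immediately */
    while (aio_error(&cb) == EINPROGRESS)
        ;                                /* spin, as the article's loop does */
    long n = aio_return(&cb);            /* actual count, like ioActCount */
    close(fd);
    return n;
}
```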

Using what we’ve learned so far, we can improve on all of our FSRead-based routines. The code can run as fast as the “read the whole file into memory” version, but only use two small blocks of memory as buffers. Notice how we use a two-entry table to hold a pair of parameter blocks. This allows us to fill one block while processing the other.

int countCharsAsync(FSSpecPtr fsp)
{
    // Count the number of times the letter A appears in the file,
    // using two asynchronous reads so one buffer fills while the
    // other is being processed
    int        counter = 0;
    ParmBlkPtr pb[2], currPBPtr;
    int        currPB = 0;
    char       *buffer, *currChar;
    short      refNum;
    long       charCount;
    OSErr      err;

    // Open the file
    err = FSpOpenDF(fsp, fsRdPerm, &refNum);
    if (err == noErr) {
        // Allocate and set up the parameter blocks
        pb[0] = (ParmBlkPtr)NewPtrClear(sizeof(ParamBlockRec));
        pb[1] = (ParmBlkPtr)NewPtrClear(sizeof(ParamBlockRec));
        setup(pb[0], refNum, fsp->vRefNum, kBufferSize);
        setup(pb[1], refNum, fsp->vRefNum, kBufferSize);

        // Start 2 read operations going
        (void) PBReadAsync(pb[0]);
        (void) PBReadAsync(pb[1]);
        currPBPtr = pb[0];

        for (;;) {
            // Wait for the I/O operation to complete
            while (currPBPtr->ioParam.ioResult > 0) {};

            // The data is ready, so count the characters
            buffer = currPBPtr->ioParam.ioBuffer;
            charCount = currPBPtr->ioParam.ioActCount;
            if (charCount == 0) break;
            currChar = buffer;
            while (charCount-- > 0) {
                if (*currChar++ == 'A') counter++;
            }

            // Put this buffer back into the reading queue
            (void) PBReadAsync(currPBPtr);

            // Switch to the other buffer
            currPB = 1 - currPB;
            currPBPtr = pb[currPB];
            currPBPtr->ioParam.ioPosMode = fsAtMark;
        }
        // Release the memory
        destroy(pb[0]);
        destroy(pb[1]);
        FSClose(refNum);
    }
    return counter;
}

void setup (ParmBlkPtr pb, short refNum,
            short vRefNum, long bufSize)
{
    pb->ioParam.ioCompletion = NULL;
    pb->ioParam.ioResult = 1;
    pb->ioParam.ioRefNum = refNum;
    pb->ioParam.ioVRefNum = vRefNum;
    pb->ioParam.ioReqCount = bufSize;
    pb->ioParam.ioBuffer = NewPtr(bufSize);
    pb->ioParam.ioPosMode = fsAtMark;
    pb->ioParam.ioPosOffset = 0;
}

void destroy (ParmBlkPtr pb)
{
    DisposePtr(pb->ioParam.ioBuffer);
    DisposePtr((Ptr)pb);
}

Let’s look at the timing results for the above code. Again, since the PowerPC and 68K numbers follow the same pattern, we will show only the PowerPC numbers here.

file size    countCharsFS    PBRead async
  1000K        218 ms          167 ms
  2000K        661 ms          454 ms
  3000K       1076 ms          700 ms
  4000K       1467 ms          934 ms
  5000K       1885 ms         1183 ms

Graphically, the timing looks like this: (Notice that we’ve changed scale from our previous graph so you can get a better look at the difference between synchronous and asynchronous I/O.)

While not a dramatic change, the last result is still an improvement over synchronous I/O. It’s hard to make this code much faster, but you can take advantage of the available processing time to add features and improve the user’s experience.

Improving the User’s Experience

All of these routines share a common drawback - they don’t allow any time for other programs to run. They sit in tight loops reading and processing information or waiting for the next read to complete. This is OK when writing demonstration code for a magazine article, but it isn’t a reasonable practice in “real” programs. Real applications should arrange for their I/O intensive routines to give time to other applications and to allow the user to cancel at any time.

A well-designed application will give away time while it’s simply waiting for an I/O request to finish. Most applications could do this by calling WaitNextEvent, processing the event, then checking the result code of the pending operation before giving away any more time. The only problem is that when an application gives away time with WaitNextEvent, there’s no telling how soon control will be returned. Applications that need immediate notification at the end of an I/O operation must use completion routines.

A completion routine is a function in the program that the Macintosh OS calls when a specific I/O operation ends. (The requesting program supplies the function pointer in the ioCompletion field of the parameter block.) Completion routines run under the same tight restrictions as any other “interrupt time” code: they cannot allocate or move memory, nor rely on the contents of unlocked relocatable blocks. Most completion routines are not guaranteed access to their application’s globals, and the information passed into each completion routine varies wildly. For these reasons, we will defer a thorough discussion of completion routines to another article.

The completion routine for PBRead is especially limited: it receives no parameters, and the parameter block has been pulled off of the I/O queue by the time the routine is called. This routine appears to have A5 set up for it so it can reach the application’s globals, but even then it can’t do much - only set a flag indicating the completion of I/O, or take an existing block and issue another I/O call with it.

Besides giving away time, there is one other thing a well-behaved application should do, and that is allow the user to cancel an operation. If the user asks to cancel during a synchronous I/O operation, the application simply completes that operation and doesn’t begin another. However, if the user cancels during asynchronous I/O, the application has to remove all of the pending requests. The KillIO() call takes a file or driver reference number and removes all of its pending I/O requests, so applications can kill the pending requests then wait for the current operation to complete before closing the file or driver.

Conclusion

Developers need to look beyond basic I/O calls if they want to get maximum performance from their programs. Asynchronous I/O, while the most complicated way to read and write information, is one of the best ways to improve your application’s performance. The same techniques that improve the performance of file I/O become critical in near real-time applications, such as TCP/IP networking or serial communications, that cannot afford pauses in their data collection.

 

All contents are Copyright 1984-2011 by Xplain Corporation. All rights reserved.