
Asynchronous I/O
Volume Number: 12
Issue Number: 12
Column Tag: Toolbox Techniques

Building Better Applications
Via Asynchronous I/O

By Richard Clark, General Magic, Inc.

Note: The source code files accompanying this article are located on the MacTech CD-ROM or source code disks.

Have you ever looked at an application and wondered how to make it faster? Sure, you can select better algorithms or rewrite sections in assembly language, but sometimes a fast processor or great algorithm is not enough. Many applications reach a limit where they can process the information faster than they can get it. These applications are said to be I/O bound. Improving such programs is straightforward, once you know something more about how the Macintosh reads and writes information.

Most developers go through several basic stages in getting information in and out of their programs. In the first stage, they use their programming language’s built-in I/O commands - printf and scanf for C, WRITELN and READLN for Pascal. Soon, driven by the derision of their peers, a desire to manipulate something other than text streams, or a feeling they should be using the underlying operating system directly, they will shift over to the Macintosh FSWrite and FSRead routines.

Quite a few Macintosh programmers spend the remainder of their careers using FSRead and FSWrite. Some use FSRead’s “newline mode” to emulate scanf or READLN. Others read their data in as needed, whether they need a single character or an entire structure. The wisest users of FSRead use buffering - they read the data in large blocks and process the information in memory.

All of these techniques have one property in common - they all use “synchronous I/O.” A synchronous I/O operation makes the calling program wait until the operation has been completed. Programmers who want to get the best possible performance out of their applications can eliminate this wait by switching to “asynchronous I/O,” which asks the OS to transfer information while other code is running. There is another reason why advanced Macintosh programmers use asynchronous I/O - it’s the only way to get at some of the more advanced communications features, such as TCP/IP, and to get real-time information from users.

A Programmer’s Look at I/O

We will take a look at the uses of synchronous and asynchronous I/O through a function that counts occurrences of the letter “A” in a text file. The simplest version of this program uses the C Standard I/O Library functions.

int countChars(FSSpecPtr fsp)
{
  // Count the number of times the letter A appears in the file
  FILE *f = NULL;
  int  counter = 0;
  int  currChar;      // int, not char, so the comparison with EOF works
  char filename[64];

  // Homemade PtoCstr operation which makes a copy of the string
  BlockMove((Ptr)&fsp->name[1], filename, fsp->name[0]);
  filename[fsp->name[0]] = '\0';

  // Count the characters
  f = fopen(filename, "r");
  if (f == NULL) return 0;
  while ((currChar = fgetc(f)) != EOF) {
    if (currChar == 'A') counter += 1;
  }
  fclose(f);
  return counter;
}

While this looks like a simple program, quite a bit is going on behind the scenes. fgetc() does not simply read each character from the disk as the program requests it, but uses a buffering scheme instead. When buffering, the application (or library) reads a block of information into memory all at once, then returns each item from that block of memory. Without buffering, each read would have to position the disk’s read/write head to the proper location on the disk, then wait for the correct area of the disk to rotate into place. Thus the program would spend most of its time waiting for the drive hardware itself.

Even with buffering, most Standard I/O library implementations are not as fast as going directly to the machine’s own file system. The extra bookkeeping associated with tracking an arbitrary number of files slows things down. We can write a faster program using the “high level” File Manager calls. When we build our new program, we will buffer the data by reading it into memory in large blocks, then process the information directly in memory. The algorithm for our buffered program is as follows.

• Allocate a fixed-size buffer (for best results, the size should be an even multiple of 1K so the Macintosh OS can read entire blocks off the disk)

• Repeat:

• Read one buffer’s worth of data

• Process data

• Until the entire file has been read (charCount == 0 after FSRead)

• Release the memory used by the buffer

And here is the source code:

int countCharsFS(FSSpecPtr fsp)
{
  // Count the number of times the letter A appears in the file,
  // reading the file in blocks
  int counter = 0;
  char *buffer, *currChar;
  short refNum;
  long charCount;
  OSErr err;

  err = FSpOpenDF(fsp, fsRdPerm, &refNum);
  if (err == noErr) {
    buffer = (char*)NewPtr(kBufferSize);
    if (buffer != nil) {
      for (;;) {
        charCount = kBufferSize;
        err = FSRead(refNum, &charCount, (Ptr)buffer);
        if ((err != noErr) && (err != eofErr)) break;
        if (charCount == 0) break;
        currChar = buffer;
        while (charCount-- > 0) {
          if (*currChar++ == 'A') counter++;
        }
      }
      DisposePtr(buffer);
    }
    FSClose(refNum);
  }
  return counter;
}

In the most extreme case, our program could read the whole file in at once before processing it. This would reduce the number of seek operations to an absolute minimum, at the cost of allocating a huge block of memory. This is not always faster than reading a few blocks at a time.
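
For reference, here is a minimal sketch of that whole-file approach, reusing the variables and error-handling style of countCharsFS (this is an illustration, not a separately tested routine). GetEOF reports the logical length of the file, which tells us how large a buffer to allocate:

  err = FSpOpenDF(fsp, fsRdPerm, &refNum);
  if (err == noErr) {
    long fileLength;

    GetEOF(refNum, &fileLength);          // logical end-of-file = bytes to read
    buffer = (char*)NewPtr(fileLength);   // one huge buffer - may well fail for big files
    if (buffer != nil) {
      charCount = fileLength;
      err = FSRead(refNum, &charCount, (Ptr)buffer);
      // ... count the characters in buffer, exactly as before ...
      DisposePtr(buffer);
    }
    FSClose(refNum);
  }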

Let’s compare some real-world timing figures for these routines. We ran these tests on a variety of Macintosh systems, including 680x0 and PowerPC models. The system disk cache was set to 32K for all tests. This article includes only the results from a PowerMac 7100/66, but the other systems were similar. If you want to see the values for your own machine, the test application’s sources are available from MacTech.

File size    countChars    countCharsFS
1000K        1003 ms       218 ms
2000K        2754 ms       661 ms
3000K        4031 ms       1076 ms
4000K        5328 ms       1467 ms
5000K        6608 ms       1885 ms

Shown graphically, the advantage of going directly to the file system becomes even more apparent:

Improving the Program with Asynchronous I/O

In all of these routines the “count characters” code has to wait for the data to arrive from the disk before starting processing. We can make the code even faster by reading in the next buffer at the same time we are processing the current buffer’s contents. Reading in some data while performing other work is known as asynchronous I/O.

Asynchronous I/O works on the basis of scheduling I/O operations. Instead of calling FSRead and waiting until the buffer has been filled, we will pass the system a request to fill a buffer and instructions on how to notify us when the request has been completed. The Macintosh OS puts our request into a list (known as a Queue) and fills the requests in the order they were made.

Here’s how to structure our program using asynchronous I/O:

• Allocate two buffers in memory.

• Tell the Macintosh OS we want the first block of data to go into the first buffer. (This schedules the buffer for filling as soon as possible and returns control immediately.)

• Tell the OS we want another block of data to go into the second buffer.

• Repeat:

• Wait until a full buffer is available,

• Process it, and

• Make another request for data using this buffer as the destination, and

• Until the entire file has been processed.

• Release the memory used by the buffers

Notice that our program may have to wait for a buffer to finish filling, but it also gets to work for part of that time. Since we used to do nothing while waiting for the read to complete, any work we do while waiting now happens “for free.”

Changing Your I/O Model

Taking advantage of asynchronous I/O requires that you break away from the “high level” calls we used in the previous code samples. Fortunately, the operating system provides PBRead and PBWrite as the more flexible “low level” counterparts to FSRead and FSWrite.

The PB calls don’t take their parameters on the stack the way the FS calls do. Instead, each PB call takes a pointer to a “parameter block” structure containing all of the required information. You can easily translate an FS call into a PB call by allocating a parameter block and filling in the appropriate fields. In fact, the Macintosh OS basically does this every time you use an FS call.

Converting from FSRead to PBRead

err = FSRead(refNum, &charCount, (Ptr)buffer);

// Create a parameter block. We'll use "clear" to zero fields we don't need for this example
pb = (ParmBlkPtr)NewPtrClear(sizeof(ParamBlockRec));
pb->ioParam.ioRefNum = refNum;
pb->ioParam.ioVRefNum = vRefNum;
// Note: Somebody has to supply the volume RefNum
pb->ioParam.ioBuffer = (Ptr)buffer;
pb->ioParam.ioReqCount = charCount;
err = PBReadSync(pb);
charCount = pb->ioParam.ioActCount;
DisposePtr((Ptr)pb);

So far it looks like a lot of extra work to use a PBRead call instead of an FSRead call. That is true for basic synchronous I/O, but the PB calls can do more. One of the better aspects of PBRead and PBWrite is the ability to set the positioning mode and offset on each call. If you make a simple FSRead or FSWrite call, the transfer starts at the “mark” - a value which indicates the current position in the file. The PB calls allow you to begin reading or writing at the file mark, at an offset relative to the file mark, or at an offset relative to the start of the file. In addition, PBWrite can perform a “read-verify” operation after writing data to confirm that it went out correctly.
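
As a quick sketch of positioned I/O (reusing refNum and buffer from the conversion example above, and not part of the measured routines), fsFromStart plus ioPosOffset selects an absolute position, and adding the rdVerify mask to ioPosMode on a PBWrite requests the read-verify pass:

pb = (ParmBlkPtr)NewPtrClear(sizeof(ParamBlockRec));
pb->ioParam.ioRefNum = refNum;
pb->ioParam.ioBuffer = (Ptr)buffer;
pb->ioParam.ioReqCount = 512;
pb->ioParam.ioPosMode = fsFromStart;     // position relative to the start of the file
pb->ioParam.ioPosOffset = 2048;          // read 512 bytes starting at byte 2048
err = PBReadSync(pb);

// For a write followed by a verification pass:
// pb->ioParam.ioPosMode = fsFromStart + rdVerify;
// err = PBWriteSync(pb);
DisposePtr((Ptr)pb);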

Our real reason for introducing the PB calls in this article is to use them for asynchronous I/O. We want to place a request with the system to get some data and learn when that request has been fulfilled. The parameter blocks have just the information we need to make this happen: the ioResult and ioCompletion fields.

The ioResult field gives the result code of the operation - either 0 for “no error” or a negative value designating an operating system error. The File Manager places a positive value into this field when the request is posted and fills in the final result after the data has been transferred, which gives us one way to learn when the OS is finished. When the value drops to zero or below, we know the transfer is finished and we can use the data - hopefully the field holds 0, meaning “no error,” but it might also hold a negative error code.

Using what we’ve learned so far, we can improve on all of our FSRead-based routines. The code can run as fast as the “read the whole file into memory” version, yet it uses only two small blocks of memory as buffers. Notice how we use a two-entry table to hold a pair of parameter blocks; this allows us to fill one buffer while processing the other.

int countCharsAsync(FSSpecPtr fsp)
{
  // Count the number of times the letter A appears in the file,
  // reading asynchronously into two alternating buffers
  int        counter = 0;
  ParmBlkPtr pb[2], currPBPtr;
  int        currPB = 0;
  char       *buffer, *currChar;
  short      refNum;
  long       charCount;
  OSErr      err;

  // Open the file
  err = FSpOpenDF(fsp, fsRdPerm, &refNum);
  if (err == noErr) {
    // Allocate and set up the parameter blocks
    pb[0] = (ParmBlkPtr)NewPtrClear(sizeof(ParamBlockRec));
    pb[1] = (ParmBlkPtr)NewPtrClear(sizeof(ParamBlockRec));
    setup(pb[0], refNum, fsp->vRefNum, kBufferSize);
    setup(pb[1], refNum, fsp->vRefNum, kBufferSize);

    // Start 2 read operations going
    (void) PBReadAsync(pb[0]);
    (void) PBReadAsync(pb[1]);
    currPBPtr = pb[0];

    for (;;) {
      // Wait for the I/O operation to complete
      while (currPBPtr->ioParam.ioResult > 0) {};

      // The data is ready, so count the characters
      buffer = currPBPtr->ioParam.ioBuffer;
      charCount = currPBPtr->ioParam.ioActCount;
      if (charCount == 0) break;
      currChar = buffer;
      while (charCount-- > 0) {
        if (*currChar++ == 'A') counter++;
      }

      // Put this buffer back into the reading queue
      (void) PBReadAsync(currPBPtr);

      // Switch to the other buffer
      currPB = 1 - currPB;
      currPBPtr = pb[currPB];
      currPBPtr->ioParam.ioPosMode = fsAtMark;
    }

    // Make sure the other (still queued) read has finished
    // before releasing its buffer
    while (pb[1 - currPB]->ioParam.ioResult > 0) {};

    // Release the memory
    destroy(pb[0]);
    destroy(pb[1]);
    FSClose(refNum);
  }
  return counter;
}

void setup(ParmBlkPtr pb, short refNum,
    short vRefNum, long bufSize)
{
  pb->ioParam.ioCompletion = NULL;
  pb->ioParam.ioResult = 1;    // "in progress" until the first read is queued
  pb->ioParam.ioRefNum = refNum;
  pb->ioParam.ioVRefNum = vRefNum;
  pb->ioParam.ioReqCount = bufSize;
  pb->ioParam.ioBuffer = NewPtr(bufSize);
  pb->ioParam.ioPosMode = fsAtMark;
  pb->ioParam.ioPosOffset = 0;
}

void destroy(ParmBlkPtr pb)
{
  DisposePtr(pb->ioParam.ioBuffer);
  DisposePtr((Ptr)pb);
}

Let’s look at the timing results for the above code. Again, since the PowerPC and 68K numbers follow the same pattern, we will show only the PowerPC numbers here.

File size    countCharsFS    countCharsAsync
1000K        218 ms          167 ms
2000K        661 ms          454 ms
3000K        1076 ms         700 ms
4000K        1467 ms         934 ms
5000K        1885 ms         1183 ms

Graphically, the timing looks like this: (Notice that we’ve changed scale from our previous graph so you can get a better look at the difference between synchronous and asynchronous I/O.)

While not a dramatic change, the last result is still an improvement over synchronous I/O. It’s hard to make this code much faster, but you can take advantage of the available processing time to add features and improve the user’s experience.

Improving the User’s Experience

All of these routines share a common drawback - they don’t allow any time for other programs to run. They sit in tight loops reading and processing information or waiting for the next read to complete. This is OK when writing demonstration code for a magazine article, but it isn’t a reasonable practice in “real” programs. Real applications should arrange for their I/O intensive routines to give time to other applications and to allow the user to cancel at any time.

A well-designed application will give away time while it’s simply waiting for an I/O request to finish. Most applications could do this by calling WaitNextEvent, processing the event, then checking the result code of the pending operation before giving away any more time. The only problem is that when an application gives away time with WaitNextEvent, there’s no telling how soon control will be returned. Applications that need immediate notification at the end of an I/O operation must use completion routines.
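
Before turning to completion routines, here is a minimal sketch of the WaitNextEvent polling approach just described. It assumes pb was queued with PBReadAsync as in countCharsAsync, and DoEvent is a hypothetical stand-in for the application’s normal event dispatcher:

while (pb->ioParam.ioResult > 0) {       // still in progress
  EventRecord event;

  // Give other applications time; sleep a few ticks between checks
  if (WaitNextEvent(everyEvent, &event, 5, NULL))
    DoEvent(&event);                     // hypothetical application dispatcher
}
if ((pb->ioParam.ioResult < noErr) && (pb->ioParam.ioResult != eofErr)) {
  // handle the error
}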

A completion routine is a function in the program that the Macintosh OS calls when a specific I/O operation ends. (The requesting program supplies the function pointer in the ioCompletion field of the parameter block.) Completion routines run under the same tight restrictions as any other “interrupt time” code: they must not allocate or move memory, and they must not rely on the contents of unlocked relocatable blocks. Most completion routines are not guaranteed access to their application’s globals, and the information passed into each completion routine varies wildly. For these reasons, we will defer a thorough discussion of completion routines to another article.

The completion routine for PBRead is especially poor, as it receives no parameters and the parameter block has been pulled off the I/O queue by the time the routine is called. This routine appears to have A5 set up for it so it can reach the application’s globals, but even then it can’t do much - it can only set a flag indicating the completion of I/O, or take an existing parameter block and issue another I/O call with it.
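
To give a flavor of the flag-setting case, here is a minimal, hedged sketch of a completion routine. It assumes a build where the routine receives the parameter block as a normal C parameter and where the application’s globals are reachable; the UPP-creation call shown (NewIOCompletionUPP, called NewIOCompletionProc in older Universal Headers) and the gReadDone global are illustrative assumptions, not part of the article’s test code:

static volatile Boolean gReadDone = false;   // hypothetical application global

static pascal void MyReadCompletion(ParmBlkPtr thePB)
{
  // Interrupt time: don't allocate or move memory, don't call the Toolbox.
  // Just record that the transfer finished; the main loop still checks
  // thePB->ioParam.ioResult for the actual result code.
  gReadDone = true;
}

// When posting the request:
pb->ioParam.ioCompletion = NewIOCompletionUPP(MyReadCompletion);
err = PBReadAsync(pb);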

Besides giving away time, there is one other thing a well-behaved application should do, and that is allow the user to cancel an operation. If the user asks to cancel during a synchronous I/O operation, the application simply completes that operation and doesn’t begin another. However, if the user cancels during asynchronous I/O, the application has to remove all of the pending requests. The KillIO() call takes a file or driver reference number and removes all of its pending I/O requests, so an application can kill the pending requests, then wait for the current operation to complete before closing the file or driver.
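
A sketch of that cancel path, assuming the refNum and pb array from countCharsAsync, might look like this:

// User hit Cancel: remove everything still waiting in the queue
(void) KillIO(refNum);

// A request that had already started still runs to completion,
// so wait for both blocks to settle before closing the file.
while ((pb[0]->ioParam.ioResult > 0) || (pb[1]->ioParam.ioResult > 0))
  ;  // or give time away with WaitNextEvent, as above

FSClose(refNum);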

Conclusion

Developers need to look beyond basic I/O calls if they want to get maximum performance from their programs. Asynchronous I/O, while the most complicated way to read and write information, is one of the best ways to improve your application’s performance. The same techniques that improve the performance of file I/O become critical in near real-time applications, such as TCP/IP networking or serial communications, which cannot afford pauses in their data collection.

 
