MACINTOSH C: A Hobbyist's Guide To Programming the Mac OS in C
Version 2.3
© 2000 K. J. Bricknell

SOUND
A link to the associated demonstration program listing is at the bottom of this page

Introduction to Sound
On the Macintosh, the hardware and software aspects of producing and recording sounds are very tightly integrated.
Audio Hardware
The audio hardware includes an internal speaker, a microphone, and one or more integrated circuits that convert digital data to analog signals and analog signals to digital data. The actual integrated circuits that perform these conversions vary between different models of Macintosh computers.
Sound-Related System Software
The sound-related system software managers are as follows:
- The Sound Manager. The Sound Manager provides the ability to:
- Play sounds through the speaker.
- Manipulate sounds, that is, vary such characteristics as loudness, pitch, timbre, and duration.
- Compress sounds so that they occupy less disk space.
- The Sound Manager can work with sounds stored in resources or in a file's data fork. It can also play sounds that are generated dynamically, and not necessarily stored on disk.
- The Sound Input Manager. The Sound Input Manager provides the ability to record sounds through a microphone or other sound input device.
- The Speech Manager. The Speech Manager provides the ability to convert written text into spoken words.
Sound Input and Output Capabilities
The basic audio hardware, together with the sound-related system software, provides for the following sound input and output capabilities:
- Playback of digitally recorded (that is, sampled) sounds.
- Playback of simple sequences of notes or of complex waveforms.
- Recording of sampled sounds.
- Conversion of text to spoken words.
- Mixing and synchronisation of multiple channels of sampled sounds.
- Compression and decompression of sound data to minimise storage space.
The basic audio hardware and system software also provide the ability to integrate and synchronise sound production with the display of other types of information, such as video and still images. For example, QuickTime uses the Sound Manager to handle all the sound data in a QuickTime movie.
Monitors and Sound Control Panel. For playback, the user can select a sound output device, and set certain characteristics of the selected device, using the Monitors and Sound control panel. The Monitors and Sound control panel also allows the user to select the input device for recording sounds.
Basic and Enhanced Sound Capabilities
It's very easy for users to enhance the quality of the sounds they play back or record by substituting different speakers and microphones for the ones built into a Macintosh computer. Audio capabilities may be further enhanced by adding an expansion card containing very high quality digital signal processing (DSP) circuitry, together with sound input or output hardware. Another enhancement option is to add a MIDI interface to one of the serial ports. Fig 1 illustrates the basic sound capabilities of the Macintosh and how those capabilities may be further enhanced and extended.
Sound Data
The Sound Manager can play sounds defined using one of three kinds of sound data:
- Square-Wave Data. Square-wave data is the simplest kind of sound data. Your application can use square-wave data to play a simple sequence of sounds in which each sound is described completely by three factors: frequency (or pitch), amplitude (or volume), and duration.
- Wave-Table Data. To produce more complex sounds than are possible using square-wave data, your application can use wave-table data. Wave-table data is based on a description of a single wave cycle. The wave cycle is represented as an array of 512 bytes that describe the timbre (or tone) of a sound at any point in the cycle.
- Sampled-Sound Data. You can use sampled-sound data to play back sounds that have been digitally recorded (that is, sampled sounds). Sampled sounds are a continuous list of relative voltages over time that allow the Sound Manager to reconstruct an arbitrary analog wave form. They are typically used to play back prerecorded sounds such as speech or special sound effects.
This chapter is oriented primarily towards the recording and playback of sampled sounds.
About Sampled Sound
Two basic characteristics affect the quality of sampled sound. Those characteristics are sample rate and sample size.
Sample Rate
Sample rate, or the rate at which voltage samples are taken, determines the highest possible frequency that can be recorded. Specifically, for a given sample rate, sounds can be sampled up to half that frequency. For example, if the sample rate is 22,254 samples per second (that is, 22,254 hertz, or Hz), the highest frequency that can be recorded is about 11,000 Hz. A commercial compact disc is sampled at 44,100 Hz, providing a frequency response of up to about 20,000 Hz, which is the limit of human hearing.
Sample Size
Sample size, or quantisation, determines the dynamic range of the recording (the difference between the quietest and the loudest sound). If the sample size is eight bits, 256 discrete voltage levels can be recorded. This provides approximately 48 decibels (dB) of dynamic range. A compact disc's sample size is 16 bits, which provides about 96 dB of dynamic range. (Humans with good hearing are sensitive to ranges greater than 100 dB.)
Sound Manager Capabilities
The current Sound Manager supports 16-bit stereo audio samples with sample rates up to 64 kHz, which allows your application to produce CD-quality sound. On Macintosh models that do not have the hardware to output 16-bit sound, the Sound Manager automatically converts 16-bit samples to 8-bit samples.
Storing Sampled Sounds
Sampled-sound data is made up of a series of sample frames, which are stored contiguously in order of increasing time. You can use the Sound Manager to store sampled sounds in one of two ways, either in sound resources or in sound files.
Sound Components
The Sound Manager supports arbitrary modifications of sound data using stand-alone code resources known as sound components. A sound component can perform one or more signal-processing operations on sound data. For example, the Sound Manager includes sound components for compressing and decompressing sound data and for converting sample rates. Sound components may be hooked together in series to perform complex tasks, as shown in the example at Fig 2.
Compression/Decompression Components. Components which compress and decompress sound are called codecs (compression/decompression components). Apple Computer supplies codecs that can handle 3:1 and 6:1 compression and expansion, which are suitable for most audio requirements. The Sound Manager can use any available codec to handle compression and expansion of audio data.
A term closely associated with the subject of codecs is MACE (Macintosh Audio Compression and Expansion). MACE is a collection of Sound Manager functions which provide audio data compression and expansion capabilities in ratios of either 3:1 or 6:1. The Sound Manager uses codecs to handle the MACE capabilities.
In general, your application is unaware of the sound component chain required to produce a sound on the current sound output device. The Sound Manager keeps track of which sound output device the user has selected and constructs a component chain suitable for producing the desired quality of sound on that device. Accordingly, even though the capabilities of the available sound output hardware can vary greatly from one Macintosh to another, the Sound Manager ensures that a given chunk of audio data always sounds as good as possible on the available sound hardware. This means that you can use the same code to play sounds regardless of the actual sound-producing hardware available on a particular machine.
Sound Resources and Sound Files
Sound Resources
A sound resource is a resource of type 'snd ' that contains sound commands (see below) and possibly also sound data. Sound resources are widely used by Macintosh applications that produce sound and provide a simple and portable way for you to incorporate sounds into your application.
Sound Files
Although most sampled sounds that you want your application to produce can be stored as sound resources, there are times when it is preferable to store sounds in sound files. Some reasons for using sound files rather than sound resources are as follows:
- You want your application to play a sampled sound created by another application, or you want other applications to be able to play a sampled sound created by your application. (It is usually easier for different applications to share files than it is to share resources.)
- If you have a very large sampled sound, it might not be possible to create a resource large enough to hold all the audio data. If the sound occupies more than about a half megabyte of space, you should probably store it as a file.
Resources are limited in size by the structure of resource files and, in particular, because offsets to resource data are stored as 24-bit quantities.
Sound File Formats. Apple and several third-party developers have defined two sampled-sound file formats, known as the Audio Interchange File Format (AIFF) and the Audio Interchange File Format Extension for Compression (AIFF-C). The main difference between the AIFF and AIFF-C formats is that AIFF-C allows you to store either compressed or noncompressed audio data, whereas AIFF allows you to store noncompressed audio data only.
Do not confuse AIFF and AIFF-C files (referred to in this chapter as sound files) with Finder sound files. A Finder sound file contains a sound resource that plays when the user double clicks on the file in the Finder. You can create a Finder sound file by creating a file of type 'sfil' with a creator of 'movr' and placing in the file a single sound resource. You can play such a file by using Resource Manager functions to open the Finder sound file and then by using the SndPlay function to play the single sound resource contained in it.
The Sound Manager includes play-from-disk functions that allow you to play AIFF and AIFF-C files continuously from disk even while other tasks are executing.
Sound Production
Sound Channels
A Macintosh produces sound when the Sound Manager sends some data through a sound channel to the available audio hardware. A sound channel is a queue of sound commands (see below), together with other information about the sounds to be played in that channel. The commands placed into the channel might originate from an application or from the Sound Manager itself.
The Sound Manager uses the SndChannel data type to define a sound channel:
struct SndChannel
{
SndChannelPtr nextChan; // Pointer to next channel.
Ptr firstMod; // (Used internally.)
SndCallBackUPP callBack; // Pointer to callback function.
long userInfo; // Free for application's use.
long wait; // (Used internally.)
SndCommand cmdInProgress; // (Used internally.)
short flags; // (Used internally.)
short qLength; // (Used internally.)
short qHead; // (Used internally.)
short qTail; // (Used internally.)
SndCommand queue[128]; // (Used internally.)
};
typedef struct SndChannel SndChannel;
typedef SndChannel *SndChannelPtr;
Multiple Sound Channels
It is possible to have several channels of sound open at one time. The Sound Manager (using the Apple Mixer sound component) mixes together the data coming from all open sound channels and sends a single stream of sound data to the current sound output device. This allows a single application to play two or more sounds at once. It also allows multiple applications to play sounds at the same time.
Sound Commands
When you call the appropriate Sound Manager function to play a sound, the Sound Manager issues one or more sound commands to the audio hardware. A sound command is an instruction to produce sound, modify sound, or otherwise assist in the overall process of sound production. The structure of a sound command is defined by the SndCommand data type:
struct SndCommand
{
unsigned short cmd; // Command number.
short param1; // First parameter.
long param2; // Second parameter.
};
typedef struct SndCommand SndCommand;
The Sound Manager provides a rich set of sound commands, which are defined by constants. Some examples are as follows:
quietCmd = 3 Stop the sound currently playing.
flushCmd = 4 Remove all commands currently queued in specified sound channel.
syncCmd = 14 Synchronise multiple channels of sound.
freqCmd = 42 Change the frequency of the sound. If the sound is not currently
playing, begin playing at the frequency specified in param2.
ampCmd = 43 Change the amplitude of the sound.
soundCmd = 80 Install a sampled sound as a voice in a channel.
bufferCmd = 81 Play a buffer of sampled-sound data.
rateCmd = 82 Set the pitch of a sampled sound.
Sound Commands In 'snd ' Resources
A simple way to issue sound commands is to call the function SndPlay, specifying a sound resource of type 'snd ' that contains the sound commands you want to issue. A sound resource can contain any number of sound commands. As a result, you might be able to satisfy your sound-related requirements simply by creating sound resources and calling SndPlay.
Often, a 'snd ' resource consists only of a single sound command (usually the bufferCmd command) together with data that describes a sampled sound to be played. The following is an example of such a 'snd ' resource, shown in the form of the output of the MPW tool DeRez when applied to the resource:
data 'snd ' (19068,"Looped sound",purgeable)
{
/* Sound resource header */
$"0001" /* Format type. */
$"0001" /* Number of data types. */
$"0005" /* Sampled-sound data. */
$"00000080" /* Initialisation option: initMono. */
/* Sound commands */
$"0001" /* Number of sound commands that follow (1). */
$"8051" /* Command 1 (bufferCmd). */
$"0000" /* param1 = 0. */
$"00000014" /* param2 = offset to sound header (20 bytes). */
/* Sampled sound header (Standard sound header)*/
$"00000000" /* samplePtr Pointer to data (it follows immediately). */
$"00000BB8" /* length Number of bytes in sample (3000 bytes). */
$"56EE8BA3" /* sampleRate Sampling rate of this sound (22 kHz). */
$"000007D0" /* loopStart Starting point of the sample's loop. */
$"00000898" /* loopEnd Ending point of the sample's loop. */
$"00" /* encode Standard sample encoding. */
$"3C" /* baseFrequency Base frequency at which sample was taken. */
/* sampleArea[] Sampled sound data */
$"80 80 81 81 81 81 81 81 80 80 80 80 80 81 82 82"
$"82 83 82 82 81 80 80 7F 7F 7F 7E 7D 7D 7D 7C 7C"
(Rest of sampled sound data.)
};
This resource indicates that the sound is defined using sampled-sound data and includes a call to a single sound command (the bufferCmd command). The offset bit of the command number is set to indicate that the sound data is contained within the resource itself. (Data can also be stored in a buffer separate from a sound resource.) The second parameter to the bufferCmd command indicates the offset from the beginning of the resource to the sampled sound header, which immediately follows the command and its two parameters.
The sampled sound header shown is a standard sound header, which can reference only buffers of monophonic 8-bit sound. The extended sound header is used for 8-bit or 16-bit stereo sound data as well as monophonic sound data. The compressed sound header is used to describe compressed sound data, whether monophonic or stereo.
Note that the first part of the sampled sound header contains important information about the sample and that the sampled sound data is itself part of the sampled sound header. Note also the loopStart and loopEnd fields of the sampled sound header, which are central to the matter of looping a sound indefinitely.
Sending Sound Commands Directly From the Application
You can also send sound commands one at a time into a sound channel by repeatedly calling the SndDoCommand function. The commands are held in a queue and processed in a first-in, first-out order. Alternatively, you can bypass a sound queue altogether by calling the SndDoImmediate function.
Synchronous and Asynchronous Sound
You can play sounds either synchronously or asynchronously.
Synchronous Sound
When you play a sound synchronously, the Sound Manager alone has control over the CPU while it executes commands in a sound channel. Your application does not continue executing until the sound has finished playing.
Asynchronous Sound
When you play a sound asynchronously, your application can continue other processing while the sound is playing. From a programming standpoint, asynchronous sound production is considerably more complex than synchronous sound production.
Playing a Sound
Playing a Sound Resource
You can load a sound resource into memory and then play it using the SndPlay function. As previously stated, a 'snd ' resource contains sound commands that play the desired sound and might also contain sound data. If it does contain sound data, that data might be either compressed or noncompressed. SndPlay decompresses the data, if necessary, to play the sound.
Channel Allocation. When you pass SndPlay a NULL sound channel pointer in its first parameter, the Sound Manager automatically allocates a sound channel for the sound and then disposes of the channel when the sound has completed playing. The sound channel is allocated in the application's heap.
Playing a Sound File
You can play a sampled sound stored in a file of type AIFF or AIFF-C by opening the file and passing its file reference number to the SndStartFilePlay function.
The SndStartFilePlay function works like the SndPlay function but does not require the entire sound to be in RAM at one time. Instead, the Sound Manager uses two buffers, each of which is smaller than the sound itself. The Sound Manager plays one buffer of sound while filling the other with data from disk. After it finishes playing the first buffer, the Sound Manager switches buffers, and plays data in the second while refilling the first. This double-buffering technique minimises RAM usage (at the expense of additional disk overhead). SndStartFilePlay is thus ideal for playing very large sounds.
Channel Allocation. When you pass SndStartFilePlay a NULL sound channel pointer in the first parameter, the Sound Manager automatically allocates a sound channel for the sound.
Checking For Play-From-Disk Capability. The Sound Manager supports play-from-disk only on certain Macintosh computers. Accordingly, you should use the Gestalt function (see Chapter 23 - Miscellany) to check for this capability before calling SndStartFilePlay.
Playing Sounds Asynchronously
The Sound Manager allows you to play sounds asynchronously only if you allocate sound channels yourself. If you use such a technique, your application will need to dispose of a sound channel whenever the application finishes playing a sound. In addition, your application might need to release a sound resource that you played on a sound channel.
The Sound Manager provides certain mechanisms that allow your application to ascertain when a sound finishes playing, so that it can arrange to dispose of, firstly, a sound channel no longer being used and, secondly, other data (such as a sound resource) that is no longer needed after the channel is disposed of. Despite the existence of these mechanisms, the programming aspects of asynchronous sound remain rather complex. For that reason, the demonstration program files associated with this chapter include a library, called AsynchSoundLib (AsynchSoundLib68K for the 680x0 version, AsynchSoundLibPPC for the PowerPC version), which supports asynchronous sound playback and which eliminates the need for your application itself to include source code relating to the more complex aspects of asynchronous sound management.
AsynchSoundLib, which may be used by any application that requires a straightforward and uncomplicated interface for asynchronous sound playback, is documented following the Constants, Data Types, and Functions section of this chapter.
Sound Recording
The Sound Input Manager provides the ability to record and digitally store sounds in a device-independent manner, and provides two high-level functions that allow your application to record sounds from the user and store them in memory or in a file. When you call these functions, the Sound Input Manager presents the sound recording dialog box shown at Fig 3.
Recording a Sound Resource
You can record sounds from the current input device using the SndRecord function. When calling SndRecord, you can pass a handle to a block of memory as the fourth parameter. The incoming data will then be stored in that block, the size of which determines the recording time available. If you pass NULL as the fourth parameter, the Sound Input Manager allocates the largest possible block in the application heap. Either way, the Sound Input Manager resizes the block when the user clicks the Save button.
When you have recorded a sound, you can play it back by calling SndPlay and passing it the handle to the block of memory in which the sound data is stored. That block has the structure of a 'snd ' resource, but its handle is not a handle to an existing resource. To save the recorded data as a resource, you can use the appropriate Resource Manager functions in the usual way.
Recording a Sound File
To record a sound directly into a file, you can call the SndRecordToFile function, which works exactly like SndRecord except that you pass it the file reference number of an open file instead of a handle to a block of memory. When SndRecordToFile exits successfully, that file contains the recorded audio data in AIFF or AIFF-C format. You can then play the recorded sound by passing that file reference number to the SndStartFilePlay function.
Recording Quality
One of the following constants should be passed in the third parameter of both the SndRecord and the SndRecordToFile call so as to specify the recording quality required:
siCDQuality = 'cd ' 44.1kHz, stereo, 16 bit.
siBestQuality = 'best' 22kHz, mono, 8 bit.
siBetterQuality = 'betr' 22kHz, mono, 3:1 compression.
siGoodQuality = 'good' 22kHz, mono, 6:1 compression.
The highest quality sound naturally requires the greatest storage space. Accordingly, be aware that, for most voice recording, you should specify siGoodQuality.
As an example of the storage space required for sounds, one minute of monophonic sound recorded with the fidelity you would expect from a commercial compact disc occupies about 5.3 MB of disk space. Even one minute of telephone-quality speech takes up more than half a megabyte.
Checking For Sound Recording Equipment
Not all Macintosh models support sound recording. Accordingly, before calling SndRecord or SndRecordToFile, you must use the Gestalt function to determine whether sound-recording hardware and software are installed.
Speech
The Speech Manager converts text into sound data, which it passes to the Sound Manager to play through the current sound output device. The Speech Manager's interaction with the Sound Manager is transparent to your application, so you do not need to be familiar with the Sound Manager to take advantage of the Speech Manager's capabilities.
Your application can initiate speech generation by passing a string or a buffer of text to the Speech Manager. The Speech Manager is responsible for sending the text to a speech synthesiser, a component that contains executable code that manages all communication between the Speech Manager and the Sound Manager. A synthesiser is usually contained in a resource in a file within the System folder. A speech synthesiser can include one or more voices, each of which may have different tonal qualities.
Generating Speech From a String
The SpeakString function is used to convert a text string into speech. SpeakString automatically allocates a speech channel, uses that channel to produce speech, and then disposes of the speech channel.
Asynchronous Speech
Speech generation is asynchronous, that is, control returns to your application before SpeakString finishes speaking the string. However, because SpeakString copies the string you pass it into an internal buffer, you are free to release the memory you allocated for the string as soon as SpeakString returns.
Synchronous Speech
If you wish to generate speech synchronously, you can use SpeakString in conjunction with the SpeechBusy function, which returns the number of active speech channels, including the speech channel created by the SpeakString function.
Checking For Speech Capabilities
Because the Speech Manager is not available in all system software versions, your application should always check for speech capabilities, using the Gestalt function, before calling SpeakString or SpeechBusy.

Relevant Constants, Data Types, and Functions
Constants
Gestalt Sound Attributes Selector and Response Bits
gestaltSoundAttr 'snd ' Sound attributes.
gestaltStereoCapability = 0 Sound hardware has stereo capability.
gestaltStereoMixing = 1 Stereo mixing on external speaker.
gestaltSoundIOMgrPresent = 3 Sound I/O Manager is present.
gestaltBuiltInSoundInput = 4 Built-in Sound Input hardware is present.
gestaltHasSoundInputDevice = 5 Sound Input device available.
gestaltPlayAndRecord = 6 Built-in hardware can play & record simultaneously.
gestalt16BitSoundIO = 7 Sound hardware can play and record 16-bit samples.
gestaltStereoInput = 8 Sound hardware can record stereo.
gestaltLineLevelInput = 9 Sound input port requires line level.
gestaltSndPlayDoubleBuffer = 10 SndPlayDoubleBuffer available.
gestaltMultiChannels = 11 Multiple channel support.
gestalt16BitAudioSupport = 12 16 bit audio data supported.
gestaltSpeechAttr 'ttsc' Speech Manager attributes.
gestaltSpeechMgrPresent = 0 Speech Manager exists.
gestaltSpeechHasPPCGlue = 1 Native PPC glue for Speech Manager API exists.
Recording Qualities
siCDQuality = 'cd ' 44.1kHz, stereo, 16 bit.
siBestQuality = 'best' 22kHz, mono, 8 bit.
siBetterQuality = 'betr' 22kHz, mono, MACE 3:1.
siGoodQuality = 'good' 22kHz, mono, MACE 6:1.
Typical Sound Commands
quietCmd = 3 Stop the sound currently playing.
flushCmd = 4 Remove all commands currently queued in the specified sound channel.
syncCmd = 14 Synchronise multiple channels of sound.
freqCmd = 42 Change the frequency of the sound. If the sound is not currently
playing, begin playing indefinitely at the frequency specified in
param2.
ampCmd = 43 Change the amplitude of the sound.
soundCmd = 80 Install a sampled sound as a voice in a channel.
bufferCmd = 81 Play a buffer of sampled-sound data.
rateCmd = 82 Set the pitch of a sampled sound.
Data Types
Sound Channel Structure
struct SndChannel
{
SndChannelPtr nextChan; // Pointer to next channel.
Ptr firstMod; // (Used internally.)
SndCallBackUPP callBack; // Pointer to callback function.
long userInfo; // Free for application's use.
long wait; // (Used internally.)
SndCommand cmdInProgress; // (Used internally.)
short flags; // (Used internally.)
short qLength; // (Used internally.)
short qHead; // (Used internally.)
short qTail; // (Used internally.)
SndCommand queue[128]; // (Used internally.)
};
typedef struct SndChannel SndChannel;
typedef SndChannel *SndChannelPtr;
Sound Command Structure
struct SndCommand
{
unsigned short cmd; // Command number.
short param1; // First parameter.
long param2; // Second parameter.
};
typedef struct SndCommand SndCommand;
Functions
Playing Sound Resources
void SysBeep(short duration);
OSErr SndPlay(SndChannelPtr chan,SndListHandle sndHdl,Boolean async);
Playing From Disk
OSErr SndStartFilePlay(SndChannelPtr chan,short fRefNum,short resNum,
long bufferSize,void *theBuffer,AudioSelectionPtr theSelection,
FilePlayCompletionUPP theCompletion,Boolean async);
OSErr SndPauseFilePlay(SndChannelPtr chan);
OSErr SndStopFilePlay(SndChannelPtr chan,Boolean quietNow);
Allocating and Releasing Sound Channels
OSErr SndNewChannel(SndChannelPtr *chan,short synth,long init,
SndCallBackUPP userRoutine);
OSErr SndDisposeChannel(SndChannelPtr chan,Boolean quietNow);
Sending Commands to a Sound Channel
OSErr SndDoCommand(SndChannelPtr chan,const SndCommand *cmd,Boolean noWait);
OSErr SndDoImmediate(SndChannelPtr chan,const SndCommand *cmd);
Recording Sounds
OSErr SndRecord(ModalFilterUPP filterProc,Point corner,OSType quality,
SndListHandle *sndHandle);
OSErr SndRecordToFile(ModalFilterUPP filterProc,Point corner,OSType quality,
short fRefNum);
Generating Speech
OSErr SpeakString(ConstStr255Param textToBeSpoken);
short SpeechBusy(void);

The AsynchSoundLib Library
The AsynchSoundLib library is intended to provide a straightforward and uncomplicated interface for asynchronous sound playback.
AsynchSoundLib requires that you include a global "attention" flag in your application. At startup, your application must call AsynchSoundLib's initialisation function and provide the address of this attention flag. Thereafter, the application must continually check the attention flag within its main event loop.
AsynchSoundLib's main function is to spawn asynchronous sound tasks, and communication between your application and AsynchSoundLib is carried out on an as-required basis. The basic phases of communication for a typical sound playback sequence are as follows.
- Your application tells AsynchSoundLib to play some sound.
- AsynchSoundLib uses the Sound Manager to allocate a sound channel and begins asynchronous playback of your sound.
- The application continues executing, with the sound playing asynchronously in the background.
- The sound completes playback. AsynchSoundLib has set up a sound command that causes it (AsynchSoundLib) to be informed immediately upon completion of playback. When playback ceases, AsynchSoundLib sets the application's global attention flag.
- The next time through your application's event loop, the application notices that the attention flag is set and calls AsynchSoundLib to free up the sound channel.
When your application terminates, it must call AsynchSoundLib to stop any asynchronous playback in progress at the time.
AsynchSoundLib's method of communication with the application minimises processing overhead. By using the attention flag scheme, your application calls AsynchSoundLib's cleanup function only when it is really necessary.
AsynchSoundLib Functions
The following documents those AsynchSoundLib functions that may be called from an application.
To facilitate an understanding of the following, it is necessary to be aware that AsynchSoundLib associates a data structure, referred to in the following as an ASStructure, with each channel. Each ASStructure includes the following fields:
SndChannel channel; // The sound channel.
SInt32 refNum; // Reference number.
Handle sound; // The sound.
char handleState; // State to which to restore the sound handle.
Boolean inUse; // Is this ASStructure currently in use?
OSErr AS_Initialise (attnFlag,numChannels);
Boolean *attnFlag; Pointer to application's "attention" flag global variable.
SInt16 numChannels; Number of channels required to be open simultaneously. If 0 is
specified, numChannels defaults to 4.
Returns: 0 No errors.
Non-zero results of MemError call.
This function stores the address of the application's "attention" flag global variable and then allocates memory for a number of ASStructures equal to the requested number of sound channels.
OSErr AS_PlayID (resID,refNum);
SInt16 resID Resource ID of the 'snd ' resource.
SInt32 *refNum A pointer to a reference number storage variable. Optional.
Returns: 0 No errors.
1 No channels available.
Non-zero results of ResError call.
Non-zero results of SndNewChannel call.
Non-zero results of SndPlay call.
This function initiates asynchronous playback of the 'snd ' resource with ID resID.
If you pass a pointer to a variable in their refNum parameters, AS_PlayID and its sister function AS_PlayHandle (see below) return a reference number in that parameter. As will be seen, this reference number may be used to gain more control over the playback process. However, if you simply want to trigger a sound and let it run to completion, with no further control over the playback process, you can pass NULL in the refNum parameter. In this case, a reference number will not be returned.
First, AS_PlayID attempts to load the specified 'snd ' resource. If successful, the handle state is saved for later restoration, and the handle is made unpurgeable. The function then gets a reference number and a pointer to the next free ASStructure. A sound channel is then allocated via a call to SndNewChannel and the associated ASStructure is initialised. HLockHi is then called to move the sound handle high in the heap and lock it. SndPlay is then called to start the sound playing, the channel.userInfo field is set to indicate that the sound is playing, and a callback function is queued so that AsynchSoundLib will know when the sound has stopped playing. If all this is successful, AS_PlayID returns the reference number associated with the channel (if the caller wants it).
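The "get a reference number and a pointer to the next free ASStructure" step might look like the following sketch. The search strategy, the incrementing reference-number scheme, and all names here are assumptions rather than the library's actual source, and the Mac OS types are stubbed so the fragment stands alone.

```c
#include <stddef.h>

/* Stand-ins for Mac OS types so the sketch is self-contained. */
typedef long SInt32;
typedef int  Boolean;

typedef struct
{
   SInt32  refNum;   /* Reference number. */
   Boolean inUse;    /* Is this structure currently in use? */
} ASStructure;

/* Assumed globals set up by AS_Initialise. */
static ASStructure gStructures[4];
static SInt32      gNextRefNum = 1;

/* Return a pointer to the next free ASStructure, stamping it with a
   fresh reference number, or NULL if every channel is busy. */
static ASStructure *nextFreeStructure(SInt32 *outRefNum)
{
   int i;

   for (i = 0; i < 4; i++)
   {
      if (!gStructures[i].inUse)
      {
         gStructures[i].inUse  = 1;
         gStructures[i].refNum = gNextRefNum++;
         *outRefNum = gStructures[i].refNum;
         return &gStructures[i];
      }
   }

   return NULL;   /* Caller reports error 1: no channels available. */
}
```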
OSErr AS_PlayHandle(sound,refNum);
Handle sound A handle to the sound to be played.
SInt32 *refNum A pointer to a reference number storage variable. Optional.
Returns: 0 No errors.
1 No channels available.
Non-zero results of SndNewChannel call.
Non-zero results of SndPlay call.
This function initiates asynchronous playback of the sound referred to by sound.
The AS_PlayHandle function is similar to AS_PlayID, except that it supports a special case: You can pass AS_PlayHandle a NULL handle. This causes AS_PlayHandle to open a sound channel but not call SndPlay. Normally, you do this when you want to get a sound channel and then send sound commands directly to that channel yourself. (See AS_GetChannel, below.)
If a handle is provided, its current state is saved for later restoration before it is made unpurgeable. AS_PlayHandle then gets a reference number and a pointer to a free ASStructure. A sound channel is then allocated via a call to SndNewChannel and the associated ASStructure is initialised. Then, if a handle was provided, HLockHi is called to move the sound handle high in the heap and lock it, following which SndPlay is called to start the sound playing, the channel.userInfo field is set to indicate that the sound is playing, and a callback function is queued so that AsynchSoundLib will know when the sound has stopped playing. Finally, the reference number associated with the channel is returned (if the caller wants it).
OSErr AS_GetChannel(refNum,channel);
SInt32 refNum Reference number.
SndChannelPtr *channel A pointer to a SndChannelPtr variable.
Returns: 0 No errors.
2 If refNum does not refer to any current ASStructure.
This function searches for the ASStructure associated with refNum. If one is found, a pointer to the associated sound channel is returned in the channel parameter.
AS_GetChannel is provided so as to allow an application to gain access to the sound channel associated with a specified reference number and thus gain the potential for more control over the playback process. It allows an application to use AsynchSoundLib to handle sound channel management while at the same time retaining the ability to send sound commands to the channel. This is most commonly done to play looped continuous music, for which you will need to provide a sound resource with a loop and a sound command to install the music as a voice. First, you open a channel by calling AS_PlayHandle, specifying NULL in the first parameter. (This causes AS_PlayHandle to open a sound channel but not call SndPlay.) Armed with the returned reference number associated with that channel, you then call AS_GetChannel to get the SndChannelPtr, which you then pass as the first parameter in a call to SndPlay. Finally, you send a freqCmd command to the channel to start the music playing. The playback will keep looping until you send a quietCmd command to the channel.
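The looped-music sequence just described might look like the following fragment. This is a hedged sketch, not code from the demonstration program: kLoopedMusicID and the MIDI note value are assumptions, error handling is omitted, and the fragment will only compile against the classic Mac OS Toolbox headers.

```c
SInt32        refNum;
SndChannelPtr channel;
Handle        soundHandle;
SndCommand    command;
OSErr         err;

// Open a managed channel without starting playback (NULL handle).
err = AS_PlayHandle(NULL, &refNum);

// Retrieve the SndChannelPtr associated with the reference number.
err = AS_GetChannel(refNum, &channel);

// Install the looped 'snd ' resource as the channel's voice.
// kLoopedMusicID is an assumed resource ID.
soundHandle = GetResource('snd ', kLoopedMusicID);
err = SndPlay(channel, (SndListHandle) soundHandle, true);  // true = asynchronous

// Start the music with a freqCmd; param2 holds the MIDI note value.
command.cmd    = freqCmd;
command.param1 = 0;
command.param2 = 60;   // Middle C.
err = SndDoCommand(channel, &command, true);

// ... later, stop the looping playback with a quietCmd:
command.cmd    = quietCmd;
command.param1 = 0;
command.param2 = 0;
err = SndDoCommand(channel, &command, true);
```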
void AS_CloseChannel(void);
This function is called from the application's event loop if the application's "attention" flag is set. It clears the "attention" flag and then performs playback cleanup by iterating through the ASStructures looking for structures which are both in use (that is, the inUse field contains true) and complete (that is, the channel.userInfo field has been set by AsynchSoundLib's callback function to indicate that the sound has stopped playing). It frees up such structures for later use and closes the associated sound channel.
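The cleanup pass can be sketched as follows. The kSoundComplete marker value and all names are assumptions, the userInfo field is hoisted into the ASStructure so the fragment stands alone, and the SndDisposeChannel and handle-restoration calls are reduced to a comment.

```c
/* Stand-ins for Mac OS types so the sketch is self-contained. */
typedef long SInt32;
typedef int  Boolean;

enum { kNumChannels = 4, kSoundComplete = 1 };   /* Assumed marker value. */

typedef struct
{
   SInt32  userInfo;   /* Set to kSoundComplete by the callback function. */
   Boolean inUse;      /* Is this structure currently in use? */
} ASStructure;

static Boolean     gAttnFlag;
static ASStructure gStructures[kNumChannels];

void AS_CloseChannel(void)
{
   int i;

   /* Clear the application's "attention" flag first. */
   gAttnFlag = 0;

   /* Free every structure that is both in use and finished playing. */
   for (i = 0; i < kNumChannels; i++)
   {
      if (gStructures[i].inUse && gStructures[i].userInfo == kSoundComplete)
      {
         /* The real library also calls SndDisposeChannel here and
            restores the sound handle's saved state. */
         gStructures[i].inUse    = 0;
         gStructures[i].userInfo = 0;
      }
   }
}
```

Note that a structure whose sound is still playing is left untouched, so only finished channels are reclaimed.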
void AS_CloseDown(void);
AS_CloseDown checks that AsynchSoundLib was previously initialised, stops all current playback, calls AS_CloseChannel to close open sound channels, and disposes of the associated ASStructures.