Mar 02 QT Toolkit
Volume Number: 18 (2002)
Issue Number: 03
Column Tag: QuickTime Toolkit
Broadcast News
by Tim Monroe
Broadcasting Movies Over a Network
Introduction
The key new technology introduced in QuickTime 4 was support for receiving real-time streamed data. That is to say, QuickTime-savvy applications like QuickTime Player or our own QTShell-based sample applications can receive video, audio, and other kinds of data streamed across the Internet. Real-time streaming, unlike the progressive downloading of movie files that has been available since QuickTime 3, can handle live data and does not require downloading potentially huge files onto the user's computer; this permits QuickTime playback applications to support uses such as video-on-demand and rebroadcast streaming.
The real-time streaming provided by QuickTime 4 was a client-side technology only; it did not provide any means to serve up, or transmit, the data streams. At that time, special software was required to create the streams of data that could be sent out over a network and then received and played back by QuickTime-savvy applications. As we'll see, the protocols used in transmitting the data from the server to the client conform to Internet Engineering Task Force (IETF) standards, but implementing those protocols in a transmitter application required intimate knowledge of a handful of IETF specifications, as well as a good bit of programming.
QuickTime 5 provides a set of broadcasting functions that allow us to create transmitter applications. For example, we can take the audiovisual data captured by a camcorder attached to one computer and broadcast that data to other computers on a network. Together, the transmitter technologies provided by QuickTime 5 and the receiver technologies provided by QuickTime 4 give us the complete set of tools we need to send audiovisual streams from one computer and view them on another. The good news here is that we need to know virtually nothing about the applicable IETF specifications to do all this; the really good news is that the amount of code we need to write to create a broadcasting application is surprisingly small. Indeed, we'll be able to write this application using fewer than a dozen of these new broadcasting functions.
In this article, we're going to see how to use these new APIs to build a sample application that broadcasts live data to other computers. (Let's call this application QTBroadcast.) In a previous QuickTime Toolkit article ("Captured", in MacTech December 2001), we saw how to use the sequence grabber to capture sound and video data from devices attached to our local computer; here we'll see how to send captured data to other computers located remotely on a network.
When it starts up, QTBroadcast automatically opens a monitor window, shown in Figure 1; this is modeled on the monitor window we created in the previous article, but now contains a button to start and stop the broadcasting.
Figure 1: The monitor window of QTBroadcast
The Test menu of QTBroadcast is shown in Figure 2; as you can see, it contains items that allow the user to select an SDP file (which we'll describe later) and to configure the broadcasting settings.
Figure 2: The Test menu of QTBroadcast
We'll begin by surveying the various streaming capabilities provided by QuickTime, and we'll take a brief look at the protocols that underlie those capabilities. Then we'll consider the code we need to add to our basic application shell to allow it to broadcast live data streams. Note that QuickTime's broadcasting APIs are currently available only on Macintosh operating systems (Mac OS 8 and 9, and Mac OS X).
QuickTime Streaming
Streaming is the process whereby one computer (the transmitter) chops a file or sequence of bytes up into discrete chunks (called packets) and sends them across a network to another computer (the receiver or client). The client's job is to reassemble the packets and do the right thing with them. The series of packets is a stream. For present purposes, we are interested only in streams of audiovisual data that can be reassembled and played back as a QuickTime movie.
It's important to keep in mind that QuickTime has supported a kind of streaming, called HTTP streaming or progressive downloading, ever since QuickTime 3 was released in 1998. QuickTime's HTTP streaming allows the QuickTime browser plug-in to begin playing a movie embedded in a web page before the entire movie has downloaded to the local computer. HTTP streaming is essentially a file transfer protocol, in the sense that the entire movie is downloaded to the user's computer. One key advantage of QuickTime's HTTP streaming is that a user can begin interacting with web-based content before the entire movie arrives on the user's computer. As we saw in an earlier article, the movie file embedded on the web page is a standard QuickTime movie file saved in Fast Start format (where the movie atom is the first atom in the movie file).
QuickTime 4 introduced a different kind of streaming, called real-time streaming. With real-time streaming, packets are sent out over the network in real time (so that, for instance, a 10-minute movie takes 10 minutes to transmit). When packets are received by the client application (for instance, by QuickTime Player), they are reassembled into a QuickTime movie and played for the user. In general, packets are discarded as soon as they have been played, so no file is ever created on the user's local storage devices.
QuickTime's real-time streaming uses an open IETF streaming protocol known as the Real-time Transport Protocol (RTP) to send the packets of video and audio data. For control information, such as establishing a connection between the client and server or telling the server to jump to a new time in a video-on-demand movie, QuickTime uses a different protocol known as the Real Time Streaming Protocol (RTSP). Figure 3 shows the basic network protocols used by QuickTime's streaming architecture. RTSP runs over the reliable TCP/IP transport, while RTP runs over the lighter-weight, connectionless UDP/IP transport.
Figure 3: The basic QuickTime streaming protocols
Because RTP uses UDP/IP, it does not guarantee delivery of packets. So it's possible for some packets to get lost in transit and never arrive at the client computer. This packet loss can result in some degradation of the video and sound when the movie is played back on the client computer. If you need guaranteed delivery of packets, you should use HTTP streaming, which uses TCP/IP to ensure that all transmitted packets are in fact received (by retransmitting any lost packets).
QuickTime supports streaming of video, audio, and text data in any of the formats that can be played locally by QuickTime, including AVI, Sorenson, QDesign, MP3, and MIDI. It cannot currently stream movies that contain sprite tracks or that incorporate features that depend on track references, such as QuickTime video effects, chapter lists, and some tweening effects. To handle these sorts of movies, you can either use HTTP streaming or create movies that store some data locally and combine that data with data received via RTP streaming. You can even create movies in which some tracks are delivered via HTTP streaming and others via RTP streaming.
The files that reside on the streaming server are standard QuickTime movie files, with one important addition: each track in the movie that is to be streamed across the network must be accompanied by a hint track. The hint track contains information that tells the server software how to packetize the corresponding streamed track. In other words, the hint track is a sort of blueprint for creating streams of data. Without the hint track, the server software would have to know a great deal about the particular audio or video format contained in the streamed track, so that it would know how best to chop the data up into packets (for instance, possibly duplicating some of the packets to protect against losing important frames of video). The hint track insulates the server software from having to know anything about the actual video or audio data it's serving.
This is especially important because it allows streaming servers to run on operating systems that don't even run the QuickTime client software. Indeed, the first commercially available QuickTime server application, Apple's QuickTime Streaming Server (QTSS), debuted on machines running the Mac OS X Server software. Moreover, Apple has released the source code for the QuickTime Streaming Server under the Apple Public Source License, which conforms to the Open Source community guidelines. Versions of QTSS currently run under Windows and various flavors of UNIX, including Linux.
QuickTime Broadcasting
The broadcasting APIs introduced in QuickTime 5 allow us to create broadcasting applications (or, more simply, broadcasters). A broadcaster is an application that takes data from a source other than a hinted movie, compresses that data (if necessary), packetizes that data into streams, and then sends the streams out over a network. In the simplest case, the streams are targeted at a single specific client on the network. This type of serving is called unicasting. It's also possible to target a set of streams at more than one client on the network, using a special reserved address; this type of serving is called multicasting.
The QuickTime broadcasting APIs support both unicasting and multicasting. Some routers, however, are not configured to pass multicast streams. In that case, we can still reach multiple clients by using a streaming server (such as QTSS): we transmit the streams to a machine running QTSS, which then unicasts distinct streams to any number of remote clients.
The broadcasting APIs can be divided into two general categories, which we'll call the sourcer APIs and the presentation APIs. We can use the sourcer APIs to select and configure a source for the broadcast data. (A sourcer component, or sourcer, is a component that can read data from a specific kind of source.) Currently QuickTime supports broadcasting data from any of these kinds of sources:
- Data captured using a sequence grabber component from audiovisual devices
- Movie files stored locally on the transmitter
- Precompressed media data that is not stored in a movie file
This last kind of source allows us to broadcast virtually any kind of media data; we simply need to pass the appropriate data, a sample description for that data, and (optionally) a timestamp and duration. Keep in mind, however, that the client receiving the broadcast needs to have a media handler for that kind of data. Also, some kinds of data (for instance, sprite media data or Flash media data) do not respond well to packet loss and hence are currently not good candidates for broadcast streaming.
We use the presentation APIs to present (or broadcast) the data provided by a sourcer. A presentation is a collection of one or more streams of data, which can consist of packets of audio, video, text, or other data. You might think of a presentation as the streaming equivalent of a movie and the streams within the presentation as the streaming equivalent of a movie's tracks. The client application receives the presentation and reassembles it into a movie, which it plays back in exactly the same way it plays a local movie.
The sourcer and presentation APIs are a marvel of simplicity. As I mentioned earlier, we will be able to develop a broadcasting application using fewer than a dozen broadcasting functions. In fact, for the moment we won't need to use any of the sourcer functions at all, since we'll rely on QuickTime's default behavior of using the sequence grabber as the source for the broadcast data. So let's get started.
Setting Up For Broadcasting
Before we can begin broadcasting live audio and video data captured from an audiovisual device attached to our computer, we need to initialize our application for broadcasting and then create a presentation. As we've seen, a presentation is analogous to a movie: it consists of one or more streams of data targeted at one or more remote computers. In QTBroadcast, we shall allow at most one active presentation at a time, and the data being broadcast will be displayed in the monitor window (see Figure 1 again), which also contains a button that can be used to start and pause the broadcast. To keep track of the presentation and the monitor window, we'll use these global variables:
QTSPresentation gPresentation = kQTSInvalidPresentation;
QTSNotificationUPP gNotificationUPP = NULL;
DialogPtr gMonitor = NULL;
UserItemUPP gMonitorUserItemProcUPP = NULL;
Boolean gBroadcasting = false;
The gPresentation global variable is an identifier for the single presentation supported by QTBroadcast; as you can see, it's of type QTSPresentation and is initialized to the value kQTSInvalidPresentation. Associated with the presentation is a presentation notification procedure, which we identify using the gNotificationUPP global variable. Our notification procedure is called on specific events involving the presentation, such as when a presentation is first created and when a connection to the client machine occurs.
We use the gMonitor and gMonitorUserItemProcUPP global variables to keep track of the monitor window and the user item in that window (where we draw the video data that's being broadcast). Finally, we'll use the gBroadcasting global variable to keep track of whether we're currently broadcasting data or not.
When QTBroadcast starts up, it calls the QTBC_Init function, defined in Listing 1, to create the monitor window and to allocate the universal procedure pointers gNotificationUPP and gMonitorUserItemProcUPP.
Listing 1: Initializing for broadcasting
OSErr QTBC_Init (void)
{
    OSErr    myErr = noErr;

    // allocate global storage
    gNotificationUPP = (QTSNotificationUPP)
                NewQTSNotificationUPP(QTBC_NotificationProc);
    if (gNotificationUPP == NULL) {
        myErr = paramErr;
        goto bail;
    }

    gMonitorUserItemProcUPP = NewUserItemUPP(QTBC_UserItemProcedure);
    if (gMonitorUserItemProcUPP == NULL) {
        myErr = paramErr;
        goto bail;
    }

    // open the monitor window
    gMonitor = QTBC_CreateMonitorWindow();
    if (gMonitor == NULL) {
        myErr = memFullErr;
        goto bail;
    }

bail:
    // if an error occurred, clean up
    if (myErr != noErr)
        QTBC_Stop();

    return(myErr);
}
QTBC_Init calls NewQTSNotificationUPP and NewUserItemUPP to create the two UPPs, and it calls the application function QTBC_CreateMonitorWindow to create and display the monitor window. We'll consider QTBC_CreateMonitorWindow in depth a bit later (Listing 9); for the moment, it's sufficient to know that it calls GetNewDialog to create a dialog window from information in the application's resources.
If any error occurs in QTBC_Init, we don't want to continue. In that case, we'll call the QTBC_Stop function, which is defined in Listing 2. QTBC_Stop simply deallocates the two UPPs created by QTBC_Init and disposes of the monitor window.
Listing 2: Shutting down broadcasting
void QTBC_Stop (void)
{
    // deallocate any global storage
    if (gNotificationUPP != NULL) {
        DisposeQTSNotificationUPP(gNotificationUPP);
        gNotificationUPP = NULL;
    }

    if (gMonitorUserItemProcUPP != NULL) {
        DisposeUserItemUPP(gMonitorUserItemProcUPP);
        gMonitorUserItemProcUPP = NULL;
    }

    // close the monitor window
    if (gMonitor != NULL)
        DisposeDialog(gMonitor);
}
We also call QTBC_Stop just before QTBroadcast terminates; at that time we will also call the QTBC_StopBroadcasting function, which calls QTSDisposePresentation to dispose of the presentation gPresentation. (QTBC_StopBroadcasting is defined later, in Listing 7.) It's important to dispose of the UPPs and the monitor window only after we dispose of the presentation itself; accordingly, our application-specific shutdown routine, QTApp_Stop, contains these two lines of code:
QTBC_StopBroadcasting();
QTBC_Stop();
Creating a Presentation
If QTBC_Init completes successfully, the monitor window is displayed on the screen but it contains no image and the Start button is inactive (as shown in Figure 4).
Figure 4: The monitor window on application launch
Before we can begin broadcasting data, we need to create a presentation. We do this by calling the QTSNewPresentation function, like this:
myErr = QTSNewPresentation(myPresParamsPtr, &gPresentation);
The second parameter is the address of our global variable gPresentation; if QTSNewPresentation completes successfully, it returns an identifier for the new presentation in that variable. The first parameter is the address of a new presentation parameters structure, which is defined like this:
struct QTSNewPresentationParams {
    OSType                 dataType;
    const void             *data;
    UInt32                 dataLength;
    QTSEditListHandle      editList;
    SInt32                 flags;
    TimeScale              timeScale;
    QTSMediaParams         *mediaParams;
    QTSNotificationUPP     notificationProc;
    void                   *notificationRefCon;
};
So before we can call QTSNewPresentation, we need to allocate a new presentation parameters structure and fill in most of its fields. We allocate this structure by calling NewPtrClear:
myPresParamsPtr = (QTSNewPresentationParams *)
NewPtrClear(sizeof(QTSNewPresentationParams));
The first three fields of the new presentation parameters structure contain information about some of the network settings to be used by the presentation, such as the destination IP address for the broadcast and the ports to use for the data streams. The IETF has defined a standard format for this information, called the Session Description Protocol (SDP). An SDP file (that is, a file that conforms to the SDP) is used to initiate a network connection and the transfer of multimedia streams between a server and its clients. The file describes the types and formats of the media to be transferred, the transport protocols, and the addresses to which the media are to be streamed. Here's a sample SDP file:
v=0
c=IN IP4 224.2.1.2/15/1
m=audio 1000 RTP/AVP 12
m=video 2000 RTP/AVP 101
a=rtpmap:101 H263-1998
The line beginning with "v=" specifies the protocol version, which currently is 0. The line beginning with "c=" specifies the connection information, which consists of a network type and address type (here, "IN IP4" for IP version 4 addressing on the Internet), the destination address (here, "224.2.1.2", which is an address reserved for multicasting), a time-to-live value (here, 15), and the number of contiguous multicast addresses (here, 1). The lines beginning with "m=" indicate which transport protocols and ports to use for specific media types. In the file shown above, the audio data is to be sent via RTP to port 1000 and the video data is to be sent via RTP to port 2000. Finally, the line beginning with "a=" specifies media attributes. In this case, the rtpmap attribute binds the dynamic payload type 101 (used in the video "m=" line) to the H.263-1998 video format.
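For comparison, an SDP file that targets a single client (a unicast) names that client's address in the connection line and omits the multicast time-to-live and address-count values. The address shown here is just a placeholder; you would substitute the actual address of your client machine:
v=0
c=IN IP4 192.168.1.20
m=audio 1000 RTP/AVP 12
m=video 2000 RTP/AVP 101
a=rtpmap:101 H263-1998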
In our sample application QTBroadcast, the user selects an SDP file using the first menu item in the Test menu (namely, "Select SDP File..."). In response to that menu item, we call the QTBC_SetupPresentation function, which in turn uses our standard framework function QTFrame_GetOneFileWithPreview to allow the user to select an SDP file; QTFrame_GetOneFileWithPreview returns a file system specification for that file. We can then fill in the relevant fields of the new presentation parameters structure like this:
myPresParamsPtr->dataType = kQTSFileDataType;
myPresParamsPtr->data = &myFSSpec;
myPresParamsPtr->dataLength = sizeof(myFSSpec);
QuickTime also supports SDP information stored in memory (in which case the dataType field should be set to kQTSSDPDataType). In QTBroadcast, we'll restrict our attention to SDP files only.
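For the sake of illustration, here's a minimal sketch of what passing in-memory SDP data might look like; the mySDPText buffer is purely hypothetical and is not part of QTBroadcast:
// a hypothetical buffer of SDP text; QTBroadcast itself reads its SDP data from a file
char mySDPText[] =
    "v=0\r\n"
    "c=IN IP4 224.2.1.2/15/1\r\n"
    "m=audio 1000 RTP/AVP 12\r\n"
    "m=video 2000 RTP/AVP 101\r\n"
    "a=rtpmap:101 H263-1998\r\n";

// point the presentation parameters at the SDP text instead of at a file
myPresParamsPtr->dataType = kQTSSDPDataType;
myPresParamsPtr->data = mySDPText;
myPresParamsPtr->dataLength = sizeof(mySDPText) - 1;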
The editList field of the new presentation parameters structure is a handle to a QTSEditList structure, which holds information about edits for stored streams. For live broadcasting, we can ignore that field.
The flags field of the new presentation parameters structure contains information for the presentation we are about to create. Currently these flags are defined:
enum {
    kQTSAutoModeFlag          = 0x00000001,
    kQTSDontShowStatusFlag    = 0x00000008,
    kQTSSendMediaFlag         = 0x00010000,
    kQTSReceiveMediaFlag      = 0x00020000
};
The kQTSAutoModeFlag flag indicates that the presentation should be automatically configured to its default settings; most importantly, this means that the default sourcer is to be used, which is the sequence grabber. The kQTSDontShowStatusFlag flag indicates that we do not want QuickTime to create a streaming status handler for the presentation; if we request a status handler, it displays connection and status information in the monitor window (that is, in the window associated with the sequence grabber). For instance, when we first create a presentation, QuickTime attempts to connect to the specified target or targets and displays the status message shown in Figure 5.
Figure 5: The status message while connecting
When the presentation is ready and the requisite network connections have been established, we'll see the message shown in Figure 6. (Keep in mind that these status messages appear on the transmitter only; the broadcasting client also displays some status information while establishing a connection to the transmitter, which we'll encounter later.)
Figure 6: The status message when ready
The remaining two flags, kQTSSendMediaFlag and kQTSReceiveMediaFlag, indicate whether we want to send or receive data with the presentation we're creating. In QTBroadcast, we set our flags like this:
myPresParamsPtr->flags = kQTSAutoModeFlag |
kQTSDontShowStatusFlag | kQTSSendMediaFlag
The timeScale field of the new presentation parameters structure is currently unused (as far as I can determine); in QTBroadcast, we set that field to 0.
The mediaParams field holds a pointer to a media parameters structure, of type QTSMediaParams:
struct QTSMediaParams {
    QTSVideoParams    v;
    QTSAudioParams    a;
};
As you can see, this structure contains a video parameters structure and an audio parameters structure (of types QTSVideoParams and QTSAudioParams respectively); these structures contain information about the video and audio media data being previewed locally — that is, the data being displayed in the monitor window and the sound played through the computer's speakers. In QTBroadcast we want to leave the audio settings at their default values, but we want to configure the video settings so that the preview draws in the correct location in the monitor window. We begin by allocating a new media parameters structure:
myMediaParamsPtr = (QTSMediaParams *)
NewPtrClear(sizeof(QTSMediaParams));
myPresParamsPtr->mediaParams = myMediaParamsPtr;
At this point we want to call the QTSInitializeMediaParams function, which initializes the media parameters structure to some default values:
myErr = QTSInitializeMediaParams(myMediaParamsPtr);
Now let's configure the video previewing. The video parameters structure looks like this:
struct QTSVideoParams {
    Fixed           width;
    Fixed           height;
    MatrixRecord    matrix;
    CGrafPtr        gWorld;
    GDHandle        gdHandle;
    RgnHandle       clip;
    short           graphicsMode;
    RGBColor        opColor;
};
Notice (in Figure 1, once again) that the preview images are drawn in the user item rectangle, which is 176 pixels wide and 144 pixels high and is offset from the window origin by 10 pixels vertically and horizontally. So we'll use these lines of code to place the images in the proper location:
myPresParamsPtr->mediaParams->v.width = Long2Fix(176);
myPresParamsPtr->mediaParams->v.height = Long2Fix(144);
TranslateMatrix(&(myPresParamsPtr->mediaParams->v.matrix),
Long2Fix(10), Long2Fix(10));
And we'll set the port and graphics device like this:
myPresParamsPtr->mediaParams->v.gWorld =
GetDialogPort(gMonitor);
myPresParamsPtr->mediaParams->v.gdHandle = NULL;
Finally, the notificationProc and notificationRefCon fields of the new presentation parameters structure specify the notification procedure and a reference constant that is passed to the notification procedure:
myPresParamsPtr->notificationProc = gNotificationUPP;
myPresParamsPtr->notificationRefCon = 0L;
At long last, we are ready to call QTSNewPresentation and QTSPresPreview:
myErr = QTSNewPresentation(myPresParamsPtr, &gPresentation);
myErr = QTSPresPreview(gPresentation, kQTSAllStreams, NULL,
kQTSNormalForwardRate, 0);
Listing 3 gives the complete definition for QTBC_SetupPresentation.
Listing 3: Setting up a presentation
OSErr QTBC_SetupPresentation (void)
{
    QTSNewPresentationParams    *myPresParamsPtr = NULL;
    QTSMediaParams              *myMediaParamsPtr = NULL;
    FSSpec                      myFSSpec;
    OSType                      myTypeList[] = {kQTFileTypeText};
    short                       myNumTypes = 1;
    QTFrameFileFilterUPP        myFileFilterUPP = NULL;
    OSErr                       myErr = noErr;

#if TARGET_OS_MAC
    myNumTypes = 0;
#endif

    // create a new presentation parameters structure
    myPresParamsPtr = (QTSNewPresentationParams *)
                NewPtrClear(sizeof(QTSNewPresentationParams));
    if (myPresParamsPtr == NULL) {
        myErr = MemError();
        goto bail;
    }

    // create a new media parameters structure
    myMediaParamsPtr = (QTSMediaParams *)
                NewPtrClear(sizeof(QTSMediaParams));
    if (myMediaParamsPtr == NULL) {
        myErr = MemError();
        goto bail;
    }

    // initialize the media parameters to default values
    myErr = QTSInitializeMediaParams(myMediaParamsPtr);
    if (myErr != noErr)
        goto bail;

    // elicit an SDP file from the user
    myFileFilterUPP =
                QTFrame_GetFileFilterUPP((ProcPtr)QTFrame_FilterFiles);
    myErr = QTFrame_GetOneFileWithPreview(myNumTypes,
                (QTFrameTypeListPtr)myTypeList, &myFSSpec, myFileFilterUPP);
    if (myErr != noErr)
        goto bail;

    // start broadcasting from an SDP file
    myPresParamsPtr->dataType = kQTSFileDataType;
    myPresParamsPtr->data = &myFSSpec;
    myPresParamsPtr->dataLength = sizeof(myFSSpec);

    // set the presentation flags: use Sequence Grabber, don't display blue Q movie,
    // and send data
    myPresParamsPtr->flags = kQTSAutoModeFlag |
                kQTSDontShowStatusFlag | kQTSSendMediaFlag;
    myPresParamsPtr->timeScale = 0;
    myPresParamsPtr->mediaParams = myMediaParamsPtr;

    // fill these in to get status notifications
    myPresParamsPtr->notificationProc = gNotificationUPP;
    myPresParamsPtr->notificationRefCon = 0L;

    // define the display size and the default transmission size
    myPresParamsPtr->mediaParams->v.width = Long2Fix(176);
    myPresParamsPtr->mediaParams->v.height = Long2Fix(144);
    TranslateMatrix(&(myPresParamsPtr->mediaParams->v.matrix),
                Long2Fix(10), Long2Fix(10));

    // set the window that Sequence Grabber will draw into
    myPresParamsPtr->mediaParams->v.gWorld = GetDialogPort(gMonitor);
    myPresParamsPtr->mediaParams->v.gdHandle = NULL;

    // create a new presentation
    myErr = QTSNewPresentation(myPresParamsPtr, &gPresentation);
    if (myErr != noErr)
        goto bail;

    myErr = QTSPresPreview(gPresentation, kQTSAllStreams, NULL,
                kQTSNormalForwardRate, 0);

bail:
    if (myPresParamsPtr != NULL)
        DisposePtr((Ptr)myPresParamsPtr);

    if (myMediaParamsPtr != NULL)
        DisposePtr((Ptr)myMediaParamsPtr);

    if (myFileFilterUPP != NULL)
        DisposeNavObjectFilterUPP(myFileFilterUPP);

    return(myErr);
}
Notice that we dispose of the new presentation parameters structure (myPresParamsPtr) and the media parameters structure (myMediaParamsPtr) before returning from QTBC_SetupPresentation. QTSNewPresentation copies all the information we pass it, including any blocks of memory referenced in the new presentation parameters structure. If we had allocated any other data (for instance, a clip region), we would need to dispose of that memory as well.
Broadcasting
Now we've created a presentation and requested that the video data that is to be broadcast be previewed in the monitor window. In order for the previewing and subsequent broadcasting to proceed, we need to grant QuickTime some time to process the presentation. We do this by calling the QTSPresIdle function periodically. As usual, we'll add some code to our application's QTApp_Idle function, which is called whenever we receive a null event:
if (gPresentation != kQTSInvalidPresentation)
QTSPresIdle(gPresentation, NULL);
This just says: if we have an active presentation, grant it some processor time.
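In context, the body of QTApp_Idle might look something like the following sketch; the parameter list shown here is an assumption about the shell's idle routine, and your own shell may pass different arguments or perform other idle-time chores:
// a sketch only; the WindowReference parameter is assumed, not part of the listing above
void QTApp_Idle (WindowReference theWindow)
{
#pragma unused(theWindow)

    // if we have an active presentation, grant it some processor time
    if (gPresentation != kQTSInvalidPresentation)
        QTSPresIdle(gPresentation, NULL);
}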
Starting the Broadcast
So far, however, no broadcasting is happening. In QTBroadcast, we wait until the user clicks the Start button in the monitor window to begin that process. When the user clicks that button, QTBroadcast calls the QTBC_StartBroadcasting function, defined in Listing 4. (Later we'll take a look at the code that handles those button clicks.)
Listing 4: Starting broadcasting
OSErr QTBC_StartBroadcasting (void)
{
    // stop the preview
    QTSPresPreview(gPresentation, kQTSAllStreams, NULL,
                kQTSStoppedRate, 0);

    return(QTSPresPreroll(gPresentation, kQTSAllStreams, 0,
                (Fixed)kQTSNormalForwardRate, 0L));
}
QTBC_StartBroadcasting first calls QTSPresPreview with a rate of kQTSStoppedRate to stop previewing the data; then it calls QTSPresPreroll, which readies the media for subsequent broadcasting and performs any necessary handshaking between the transmitter (that is, the broadcasting computer) and the client computer or computers. (As you've probably guessed, prerolling a presentation with QTSPresPreroll is analogous to prerolling a movie with PrerollMovie.)
QTSPresPreroll returns fairly quickly, because some of its operations, such as establishing network connections, are performed asynchronously. We are informed of the outcome of the prerolling in our presentation notification procedure, by receiving a notification message of type kQTSPrerollAckNotification. Once we receive this acknowledgement, we can begin actually broadcasting data by calling the QTSPresStart function. Listing 5 shows our complete presentation notification procedure.
Listing 5: Handling notification messages
static PASCAL_RTN ComponentResult QTBC_NotificationProc
            (ComponentResult theErr, OSType theNotificationType,
             void *theNotificationParams, void *theRefCon)
{
#pragma unused(theErr, theNotificationParams, theRefCon)

    QTSPresentation    myPresentation = kQTSInvalidPresentation;
    ComponentResult    myErr = noErr;

    switch (theNotificationType) {
        case kQTSNewPresentationNotification:
            // when we get this notification, the presentation has been created
            // and is sent to us in theNotificationParams;
            // if we needed it, we could retrieve it as follows:
            myPresentation = (QTSPresentation)theNotificationParams;
            break;

        case kQTSPrerollAckNotification:
            myErr = QTSPresStart(gPresentation, kQTSAllStreams, 0L);
            break;

        case kQTSStartAckNotification:
        case kQTSStopAckNotification:
            break;

        default:
            break;
    }

    return(myErr);
}
As you can see, we ignore most of the notification messages. We handle the kQTSPrerollAckNotification message, to start broadcasting once the prerolling is complete. We also handle the kQTSNewPresentationNotification message, which is sent whenever a new presentation is created; in this case, the identifier for the new presentation is passed in the theNotificationParams parameter. (We don't actually do anything with that identifier; this code is provided to illustrate how to retrieve it if you need it.)
Finally we're off and running. We've created a presentation, prerolled it, and (hopefully) received a notification that the prerolling was successful — in which case we've called QTSPresStart to start broadcasting data. Now it's up to the client to start receiving the data we're transmitting. This can happen in one of several ways. The easiest way is to provide the client with a copy of the SDP file that we used to initiate the broadcast session. If the user opens this file with a QuickTime-savvy application, QuickTime's SDP importer is called to convert the SDP data into a movie. When the application opens that movie, QuickTime connects to the server, requests that the server start sending data, and then receives the data into a local buffer; while this buffering is happening, the user sees a connection status display like the one shown in Figure 7.
Figure 7: A client status message
The user can then save the movie into a new file; in this case, the new file is a QuickTime movie file that contains a streaming track; the media data for this track is just the text data in the original SDP file. Subsequently opening the movie file is pretty much identical to opening the SDP file.
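Programmatically, a client application can hand the SDP file to QuickTime and let the SDP movie importer do the rest. Here's a minimal sketch, assuming mySDPSpec is an FSSpec for the SDP file; creating an alias data reference is just one way to pass the file to QuickTime, and a shipping application would of course task and dispose of the resulting movie properly:
// a sketch of opening an SDP file on the client; mySDPSpec is assumed to be
// an FSSpec for the SDP file the broadcaster gave us
Movie          myMovie = NULL;
AliasHandle    myAlias = NULL;
OSErr          myErr = noErr;

myErr = NewAliasMinimal(&mySDPSpec, &myAlias);
if (myErr == noErr)
    myErr = NewMovieFromDataRef(&myMovie, newMovieActive, NULL,
                (Handle)myAlias, rAliasType);
if (myErr == noErr)
    StartMovie(myMovie);    // QuickTime connects to the broadcast and begins playback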
Controlling the Broadcast
So we now see how to start broadcasting from a transmitter and how to receive the broadcast stream on a client. If we want to pause the broadcasting, we can call the QTBC_PauseBroadcasting function defined in Listing 6. This function calls QTSPresStop to stop the presentation and then QTSPresPreview with a rate of kQTSNormalForwardRate to resume previewing in the monitor window.
Listing 6: Pausing broadcasting
OSErr QTBC_PauseBroadcasting (void)
{
    OSErr    myErr = noErr;

    myErr = QTSPresStop(gPresentation, kQTSAllStreams, 0L);
    if (myErr != noErr)
        goto bail;

    // restart the preview
    myErr = QTSPresPreview(gPresentation, kQTSAllStreams, NULL,
                kQTSNormalForwardRate, 0);

bail:
    return(myErr);
}
After pausing a broadcast by calling QTBC_PauseBroadcasting, we could later resume the broadcast by calling QTBC_StartBroadcasting once again. If we want to stop broadcasting for good (perhaps because the application is shutting down), we can call the QTBC_StopBroadcasting function defined in Listing 7. This function calls QTSPresStop to stop broadcasting and then QTSDisposePresentation to dispose of the presentation. Notice that we also set the global variable gPresentation to kQTSInvalidPresentation, so that QTSPresIdle is no longer called in QTApp_Idle.
Listing 7: Stopping broadcasting
OSErr QTBC_StopBroadcasting (void)
{
    OSErr    myErr = noErr;

    if (gPresentation != kQTSInvalidPresentation) {
        myErr = QTSPresStop(gPresentation, kQTSAllStreams, 0L);
        if (myErr == noErr)
            myErr = QTSDisposePresentation(gPresentation, 0L);
        gPresentation = kQTSInvalidPresentation;
    }

    return(myErr);
}
Broadcast Settings
Once we've created a new presentation based on an existing SDP file, we want to allow the user to view and configure the presentation's settings. QTBroadcast provides the "Configure Settings..." menu item in its Test menu; when the user selects this item and a presentation is underway, QTBroadcast displays the Transmission Settings dialog box shown in Figure 8.
Figure 8: The Transmission Settings dialog box
This dialog box shows the source, compression algorithm, and packetizer currently being used for the audio and video streams, and also displays the current data rate for each stream. Notice that there is no way for the user to modify any of these settings, since the broadcast has already begun.
If a presentation is not underway, then when the user selects the "Configure Settings..." menu item, QTBroadcast displays the Transmission Settings dialog box shown in Figure 9.
Figure 9: The Transmission Settings dialog box (modifiable)
As you can see, the dialog box now contains some buttons that allow the user to modify various broadcast settings. For instance, if the user clicks the "Source..." button associated with the video stream, the dialog box shown in Figure 10 appears.
Figure 10: The video stream Sourcer Settings dialog box
And if the user clicks the "Packetizer..." button associated with the video stream, the dialog box shown in Figure 11 appears.
Figure 11: The video stream Packetizer dialog box
It's actually quite simple to display all these dialog boxes; indeed, we need to call only a single function, QTSPresSettingsDialog, as shown in Listing 8 (which lists the code we execute in response to the "Configure Settings..." menu item).
Listing 8: Handling the "Configure Settings..." menu item
    case IDM_GET_SETTINGS:
        if (gPresentation != kQTSInvalidPresentation)
            myErr = QTSPresSettingsDialog(gPresentation,
                        kQTSAllStreams, 0, NULL, 0L);
        if (myErr != noErr)
            QTFrame_Beep();
        myIsHandled = true;
        break;
Monitor Window Control
We're just about finished learning how to use QuickTime to broadcast data across a network. We've worked with all the basic functions we need to use to create and preroll presentations, start and stop previewing, and actually start sending streams of data to a remote client. Let's finish off quickly by taking a look at the code we use to create and manage the monitor window.
We create a single monitor window at application launch time by calling the QTBC_CreateMonitorWindow function, defined in Listing 9. This function calls GetNewDialog to open a dialog box whose attributes and items are defined by a 'DLOG' resource of ID kMonitorDLOGID.
Listing 9: Creating the monitor window
DialogPtr QTBC_CreateMonitorWindow (void)
{
    DialogPtr    myDialog = NULL;

    myDialog = GetNewDialog(kMonitorDLOGID, NULL, (WindowPtr)-1L);
    if (myDialog != NULL) {
        short     myItemKind;
        Handle    myItemHandle = NULL;
        Rect      myItemRect;

        MacSetPort(GetDialogPort(myDialog));

        // set the user item drawing procedure
        GetDialogItem(myDialog, kMonitorUserItemID, &myItemKind,
                    &myItemHandle, &myItemRect);
        SetDialogItem(myDialog, kMonitorUserItemID, myItemKind,
                    (Handle)gMonitorUserItemProcUPP, &myItemRect);

        MacShowWindow(GetDialogWindow(myDialog));
        QTBC_UserItemProcedure(myDialog, kMonitorUserItemID);
    }

    return(myDialog);
}
QTBC_CreateMonitorWindow also sets the user item procedure for the user item in the monitor window to be the function QTBC_UserItemProcedure, defined in Listing 10. This user item procedure draws a frame around the user item and also sets the state of the Start/Pause button; if no presentation is active, we want to disable the button so that it cannot be pressed.
Listing 10: Drawing the user item in the monitor window
PASCAL_RTN void QTBC_UserItemProcedure
            (DialogPtr theDialog, short theItem)
{
    short      myItemKind;
    Handle     myItemHandle = NULL;
    Rect       myItemRect;
    GrafPtr    mySavedPort;
    OSErr      myErr = noErr;

    GetPort(&mySavedPort);
    MacSetPort(GetDialogPort(theDialog));

    if (theItem != kMonitorUserItemID)
        goto bail;

    // draw a frame around the user item rectangle
    GetDialogItem(theDialog, kMonitorUserItemID, &myItemKind,
                &myItemHandle, &myItemRect);
    InsetRect(&myItemRect, -1, -1);
    MacFrameRect(&myItemRect);

    // enable or disable the Start/Pause button in the monitor window
    GetDialogItem(theDialog, kMonitorButtonID, &myItemKind,
                &myItemHandle, &myItemRect);
    if (myItemHandle != NULL) {
        if (gPresentation == kQTSInvalidPresentation)
            HiliteControl((ControlHandle)myItemHandle, 255);
        else
            HiliteControl((ControlHandle)myItemHandle, 0);
    }

bail:
    MacSetPort(mySavedPort);
}
We handle clicks on the button in the QTBC_HandleMonitorWindowEvents function (Listing 11), which is called by QTApp_HandleEvent for any events targeted at a dialog box.
Listing 11: Handling events in the monitor window
void QTBC_HandleMonitorWindowEvents
            (DialogPtr theDialog, DialogItemIndex theItemHit)
{
    short     myItemKind;
    Handle    myItemHandle = NULL;
    Rect      myItemRect;

    if ((theDialog == gMonitor) && (theItemHit == kMonitorButtonID)) {
        GetDialogItem(theDialog, kMonitorButtonID, &myItemKind,
                    &myItemHandle, &myItemRect);

        if (gBroadcasting) {
            QTBC_PauseBroadcasting();
            SetControlTitle((ControlHandle)myItemHandle, "\pStart");
        } else {
            QTBC_StartBroadcasting();
            SetControlTitle((ControlHandle)myItemHandle, "\pPause");
        }

        gBroadcasting = !gBroadcasting;
    }
}
Conclusion
The broadcasting support included in QuickTime 5 allows us to write applications that transmit data captured or stored on a local computer to a client (or set of clients) located remotely on a network. It's important to remember that the streams of media data transmitted by QuickTime's broadcasting components conform to the IETF protocols (principally, RTP and RTSP) and hence can be received by any standards-compliant client application. As we've seen, QuickTime 4 provided support for receiving those kinds of streams, so we can use QuickTime APIs to develop both the transmitter and the receiver.
Credits
Thanks are due to Kevin Marks for reviewing this article and providing some comments, and to Anne Jones for providing some helpful background information.
Tim Monroe is a member of the QuickTime engineering team at Apple. You can contact him at monroe@apple.com.