
Performance Sampling

Volume Number: 20 (2004)
Issue Number: 5
Column Tag: Programming

Performance Sampling

by John A. Vink

Making code faster through introspection

Do It

Profiling your code is essential. You can't speed up your code if you don't know what is taking so long. You might think you know where the slowdown is, but most likely you'd be surprised. Engineers from the Safari team recount that they had a perfect record of incorrectly predicting what was slowing down their code. Only after doing some profiling did they discover the real bottlenecks.

The process of profiling is:

    1. profile your code

    2. find the parts of the profile that belong to you and take significant amounts of time

    3. optimize

    4. lather, rinse, repeat

Here I am going to discuss the first two steps of this process. You should already be familiar with "repeat". On the second and subsequent trips through the loop, you also need to verify that the changes you made really did make things faster.

What are you talking about?

Sampling can be done from a command line tool or from a GUI application.

First, let's talk about what sampling actually does.

Sampling is finding out what your application is doing at any given time. About every 10 ms your application is asked, "What are you doing now? How about now? And now?" Your application responds by giving a stack trace each time. These are called samples. When the sampling period has completed, the results are summarized into a call graph.

Actually, that's just a simple way to conceptualize it. What's really happening is that the sampling application suspends the sampled application at periodic times. While the sampled app is suspended, the sampling app walks the stack for each of the sampled process' threads to ascertain the stack trace.
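
For the curious, here is a rough sketch of that suspend/inspect/resume cycle using the Mach APIs. This is not how Sampler itself is necessarily implemented, it assumes you have the privileges task_for_pid() requires, and a real sampler would also read each thread's registers and walk its stack frames while the task is suspended:

#include <mach/mach.h>
#include <stdio.h>
#include <sys/types.h>

/* Rough sketch only: suspend the target task, enumerate its threads,
   then let it run again. A real sampler would call thread_get_state()
   and walk each thread's stack frames between the suspend and resume. */
void TakeOneSample(pid_t pid)
{
    task_t                  task;
    thread_act_array_t      threads;
    mach_msg_type_number_t  threadCount;

    if (task_for_pid(mach_task_self(), pid, &task) != KERN_SUCCESS)
        return;                                   /* requires appropriate privileges */

    task_suspend(task);                           /* freeze the target  */
    task_threads(task, &threads, &threadCount);   /* list its threads   */

    printf("took a sample across %u threads\n", threadCount);

    vm_deallocate(mach_task_self(), (vm_address_t)threads,
                  threadCount * sizeof(thread_act_t));  /* free the thread list */
    task_resume(task);                            /* let it run again   */
}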

As an aside, sampling is very useful when your application appears hung. You can sample your hung application to see exactly where it is hung, giving you clues about how to fix it.

So let's imagine you sampled for 5 seconds, which would mean 500 samples when sampling every 10 ms. When sampling the main thread, the main() function is going to appear in every sample, since it sits at the base of that thread's call stack (it's the thread's entry point). So it'll show up 500 times. Let's say you only have 2 functions in main - KindaQuick() and KindaLong(). KindaQuick() might show up 100 times, and KindaLong() 400 times. So your sample log will show main at 500 samples, and inside that, it will show KindaLong() at 400 samples and KindaQuick() at 100 samples. It would look something like this:

    500 main
      400 KindaLong
      100 KindaQuick
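
For concreteness, hypothetical code that could produce counts like these might look like the following; all that matters is that KindaLong() does roughly four times as much work as KindaQuick():

static volatile double gSink;    /* volatile keeps the compiler from removing the loops */

static void KindaQuick(void)     /* 100 million iterations */
{
    long i;
    for (i = 0; i < 100000000; i++)
        gSink += i * 0.5;
}

static void KindaLong(void)      /* 400 million iterations - four times the work */
{
    long i;
    for (i = 0; i < 400000000; i++)
        gSink += i * 0.5;
}

int main(void)
{
    KindaQuick();
    KindaLong();
    return 0;
}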

One thing to note about samples: if you have a function that can complete between two samples and you call it just once, it might not show up in your sample log at all. Because it started after sample n and completed before sample n + 1, no sample will catch it. But if you call that function a bunch of times, then chances are it will show up in your sample. This shouldn't be of much concern, since if your function runs quickly enough to be invisible to samples, there probably isn't much opportunity for optimization there anyway.
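
To put some numbers on it: a function that runs for 1 ms and is called once has only about a 1-in-10 chance of being caught by any given 10 ms sample, but call it 1,000 times and it accounts for about one second of work and should show up in roughly 100 samples.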

If your thread is sleeping, it is still being sampled. Sampling doesn't concern itself with actual CPU time used. A function can look really inefficient because it shows up in so many samples, when in fact the thread is just sitting around waiting for a reason to wake up. Sleeping threads are a good thing, since they don't take up any CPU time. Your application would be really efficient if all its threads were always sleeping, although it wouldn't do much.

Sampling doesn't tell you when a function appeared in a stack trace - only how often. The only "when" information you can learn is which function called the function you're interested in at a particular point in the sample. You also can't tell how many times a function was called, only the number of times that the function appeared in a stack trace. However, you can learn much of this from gprof, described later.

Sampler

Sampler is the GUI sampling application that lives in /Developer/Applications. You can attach to a running application, or specify an application you want launched and sampled.

Give it a whirl. Run Sampler. Pick Attach... from the File menu. You'll get a list of applications that Sampler is able to attach to. Typically these are applications that are running as the same uid as you. If you need to sample something that is running as another user, you can try running Sampler as root or that other user.

Pick an application and hit OK. You'll get a sampling window which lets you choose the sampling interval. Actual sampling doesn't start until you hit the Start Sampling button. Hit the start button, then play around in the application for a few seconds. Then come back to Sampler and hit the stop button. After a few seconds of processing, it displays the result of your sampling. Look at Figure 1 for an example.


Figure 1. Main Sampler window.

As you click on function names in the left column view, the next column to the right will populate showing all the functions called by the function you just clicked along with the number of samples for each. The right scroller will show you the stack trace up to that function, and the highest sampling functions after that. You can see in this figure that we've drilled down to __CFRunLoopDoSources(). You can see exactly where its parent, __CFRunLoopRun, spent all of its 534 samples. 202 samples were in mach_msg, which, if that path were followed, would reveal that the thread was sleeping. All of the time spent in __CFRunLoopDoSources() was spent in _sendCallbacks. The remaining 104 samples from __CFRunLoopRun were shared among __CFRunLoopDoObservers, __CFRunLoopDoTimers, __CFRunLoopDoSource1, and __CFRunLoopRun.

If you're tracking performance problems, you want to investigate the functions that are taking the most time, ignoring the samples where the thread is sleeping. Keep drilling down until you see something that surprises you. 534 samples in __CFRunLoopRun is not surprising, and neither is 228 samples in __CFRunLoopDoSources, but perhaps 97 samples in WebIconLoader might be, so if that's the case, that's what you want to check out.

sample

sample is the command line tool that allows you to sample a process. This can be useful if you're remotely connected to the machine.

To sample a process, you invoke sample with the PID of the process you're interested in, and the number of seconds to sample for. You can optionally provide the interval between samples. So, first get the PID of the process you're interested in:

[vinkjo:~] jav% ps -aux | grep MyApp
jav    452   0.0  2.2    99616  22624  ??  S    Sun03PM   2:55.17 MyApp
jav   1696   0.0  0.0     1416    308 std  S+    5:49PM   0:00.00 grep MyApp

So now you know the PID you are interested in is 452. Now run the sample command:

[vinkjo:~] jav% sample 452 5
Sampling process 452 each 10 msecs 500 times
Sample analysis of process 452 written to file /tmp/MyApp_452.sample.txt
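
If you want a different interval, pass it as an optional third argument; assuming that argument is the number of milliseconds between samples, sampling process 452 for 5 seconds at 20 ms intervals would look like this:

[vinkjo:~] jav% sample 452 5 20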

Opening the resulting sample file will reveal something like this:

Analysis of sampling pid 452 every 10 milliseconds
Call graph:
    500 main
      400 KindaLong
        400  BlockMoveData [STACK TOP]
      100 KindaQuick
        100  memcpy [STACK TOP]

Sort by top of stack, same collapsed (when >= 5):
        BlockMoveData [STACK TOP]        400
        memcpy [STACK TOP]        100

In this hypothetical example we see that KindaLong took four times as long as KindaQuick. Perhaps this surprises us, since both functions copy the same amount of data. If that's true, we can see that memcpy is much faster than BlockMoveData for the type and size of data we're giving it.

The report marks a function with [STACK TOP] when a sample caught that particular function at the top of the stack. This means that at the time the sample was taken, the code in that function itself was executing - not code in any other function that might be called from it.

You can open the result of the sample command line tool in the Sampler 2.0 GUI application. You can select the sample file from the Open... dialog in Sampler, or open it from the command line like this:

[vinkjo:~] jav% open -a Sampler /tmp/MyApp_452.sample.txt

gprof

Sampler and sample watch your code while it's running. For gprof, you build your code with profiling compiled and linked in, run it, and when you're done, you use gprof to analyze the results. This lets you profile command line tools and programs that finish too quickly to attach a sampler to.

Using gprof requires you to rebuild your code, so it might not be suitable when you're using a lot of third party frameworks whose code you can't recompile. Make a new build style and set the OTHER_CFLAGS and OTHER_LDFLAGS as shown in Figure 2.


Figure 2. Setting compiler options in Project Builder
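
If you're building a command line tool outside of Project Builder, the equivalent is to pass -pg to both the compile and link steps (MyTool here is just a hypothetical name):

> cc -pg -c MyTool.c
> cc -pg -o MyTool MyTool.o
> ./MyTool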

When your program completes, a file named gmon.out will be created in the current working folder from where you launched the application. This can be confusing, since if you launched it from the Finder, the gmon.out file will appear at /.

After you get your gmon.out file, you need to process it with gprof into something readable. To do that, run gprof something like this:

> gprof /BuildResults/MyApp.app/Contents/MacOS/MyApp gmon.out > gprof.out

This will give you a report in the file gprof.out. There are two main sections to this report - the Call Graph and the Flat Profile.

The Flat Profile looks something like this:

granularity: each sample hit covers 4 byte(s) for 1.56% of 0.64 seconds
  %   cumulative   self              self     total           
 time   seconds   seconds    calls  ms/call  ms/call  name    
 12.5       0.08     0.08                             _objc_msgSend [1]
  4.7       0.11     0.03                             _DoLigatureXSubtable [2]
  3.1       0.13     0.02                             _CFHash [3]
  3.1       0.15     0.02                             __class_lookupMethodAndLoadCache [4]
  3.1       0.17     0.02                             _objc_getNilObjectMsgHandler [5]
  3.1       0.19     0.02                             _pthread_getspecific [6]
  1.6       0.20     0.01                             +[NSDictionary 
                                                         dictionaryWithObjectsAndKeys:] [7]
  1.6       0.21     0.01                             -[NSLayoutManager 
                                                         defaultLineHeightForFont:] [8]
  1.6       0.24     0.01                             -[NSString isEqual:] [11]
  1.6       0.25     0.01                             -[NSUnarchiver 
                                                         decodeValuesOfObjCTypes:] [12]
  1.6       0.27     0.01                             _CFAllocatorDeallocate [14]
  1.6       0.28     0.01                             _CFDictionaryGetValue [15]
  1.6       0.29     0.01                             _CFRelease [16]
  1.6       0.30     0.01                             _CFRetain [17]
.
.
.
  0.0       0.64     0.00       20     0.00     0.00  __ZN13BaseConverter15GenericSetValueEtPc 
                                                         [18043]
  0.0       0.64     0.00       10     0.00     0.00  -[ConverterView textFieldType:] [52]
  0.0       0.64     0.00        5     0.00     0.00  -[ConverterView textDidChange:] [53]
  0.0       0.64     0.00        5     0.00     0.00  -[ConverterView 
                                                         updateFieldsWithNewNumbers:] [54]

This shows the amount of time spent in each function, sorted in decreasing order by the number of seconds spent in the function itself (as opposed to time spent in it plus the functions that it calls). Ties are then sorted by the number of calls (which is only available for sources compiled with the -pg flag - so your sources, not the frameworks), and then alphabetically by name.

The % time column is the percentage of total execution time that your program spent in this function itself. The cumulative seconds column is a running total: the self seconds for this function plus the self seconds of every function above it in the table. When the number of calls for a function is available, you can also see the number of milliseconds spent in just this function per call (self ms/call), and the number of milliseconds spent in this function plus any functions it calls per call (total ms/call).
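
For example, in the listing above _objc_msgSend has 0.08 self seconds out of 0.64 total seconds, which is where its 12.5% time comes from; adding _DoLigatureXSubtable's 0.03 self seconds gives the 0.11 in the cumulative column on the next row, and so on down the table.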

Here I can see that the C++ function BaseConverter::GenericSetValue() gets called 20 times. If this is more than I expect, I should look into why it's being called so many times. The flat profile can tell you how many times a particular function was called, which is not easy to learn from the output of sample, and it also separates the time spent in an individual function from the time spent in the functions that it calls.

It's important to distinguish whether a function appears to take a long time because the function itself is slow or because it is called a large number of times. In the above example, _objc_msgSend comes out as the biggest "time sink", which may lead you to believe that it is the performance issue, when in fact it probably isn't. The performance issue, if any, is likely that some code which happens to call _objc_msgSend a lot gets executed too much; instead of focusing on speeding up the leaf routine, find out why the leaf routine is called so often. In sources you compile with the -pg flag this is more obvious, since you get the call count, but keep it in mind for the functions where you don't get a call count.
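
As a purely hypothetical illustration (the names are invented), the fix for an over-called leaf routine usually lives in the caller, not the leaf:

extern int ExpensiveLookup(const char *key);   /* hypothetical leaf routine that dominates the profile */

int CountMatchesSlow(const int *values, int count)
{
    int i, matches = 0;
    for (i = 0; i < count; i++)
        if (values[i] == ExpensiveLookup("threshold"))   /* called count times */
            matches++;
    return matches;
}

int CountMatchesFast(const int *values, int count)
{
    int i, matches = 0;
    int threshold = ExpensiveLookup("threshold");   /* called once, assuming the result can't change during the loop */
    for (i = 0; i < count; i++)
        if (values[i] == threshold)
            matches++;
    return matches;
}

ExpensiveLookup() is exactly the same in both versions; only the number of calls changes, which is what the call counts in the flat profile and call graph help you spot.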

The other part of the gprof report is the Call Graph, which looks something like this:

granularity: each sample hit covers 4 byte(s) for 1.56% of 0.64 seconds
                                  called/total       parents 
index  %time    self descendents  called+self    name           index
                                  called/total       children
                0.00        0.00       5/10          -[ConverterView textDidChange:] [53]
                0.00        0.00       5/10          -[ConverterView 
                                                       updateFieldsWithNewNumbers:] [54]
[52]     0.0    0.00        0.00      10         -[ConverterView textFieldType:] [52]
-----------------------------------------------
                0.00        0.00       5/5           __nsNotificationCenterCallBack [85241]
[53]     0.0    0.00        0.00       5         -[ConverterView textDidChange:] [53]
                0.00        0.00       5/10          -[ConverterView textFieldType:] [52]
                0.00        0.00       5/5           __ZN13BaseConverter14SetUnsignedDecEm 
                                                        [18045]
                0.00        0.00       5/5           -[ConverterView 
                                                         updateFieldsWithNewNumbers:] [54]
-----------------------------------------------
                0.00        0.00       1/1           __start [85480]
[18052]  0.0    0.00        0.00       1         _main [18052]
-----------------------------------------------

Using the call graph, you can see which functions call a particular function, and also see what functions a particular function calls. Looking at the first entry, we can see that -[ConverterView textFieldType:] is called a total of 10 times - 5 times from -[ConverterView textDidChange:] and 5 times from -[ConverterView updateFieldsWithNewNumbers:]. Either -[ConverterView textFieldType:] did not call any other functions, or the functions that it did call were not compiled and linked with the -pg flag.

In the next entry, we can see the functions that -[ConverterView textDidChange:] called. It called -[ConverterView textFieldType:] 5 times out of the 10 times that function was called throughout the program execution. It also called BaseConverter::SetUnsignedDec and -[ConverterView updateFieldsWithNewNumbers:] 5 times each.

With the results you get from gprof, here are some of the things you should be looking for:

    1. Look for functions that use up a lot of self ms/call in the flat profile. A lot of time is spent in these functions, and that time cannot be blamed on other functions that they call.

    2. Take a look at the number of calls that your functions get. If they are larger than you expect, track down why. Some functions may be called redundantly.

    3. Scan over the numbers and see if anything looks surprising or slightly unexpected. A big part of optimization entails looking for things that do not look right.

Which Functions to Optimize

Here are some ideas for finding which functions you should spend some attention on:

    1. If a function takes a long time to execute but only executes once, then tuning that function's code is the best thing you can do. If a function gets run millions of times but spends little time executing, then the best thing you can do is get rid of the need to call it millions of times.

    2. Scan your results to find "things that make you go hmmmm..." Surprising results mean things aren't operating the way you had anticipated. This could mean design issues with your algorithm, functions that are more expensive than you had anticipated, or just implementation mishaps.

    3. Go for the biggest bang. You may have a terribly inefficient function, but if it only takes up 0.1% of the time, then the biggest gain you can possibly get is 0.1%. Go after the function that takes 10% instead.

Summary

Don't guess at what's wrong. Look at what's wrong.

References

For additional information, see Inside Mac OS X : Performance. More information on gprof is available at <http://www.gnu.org/manual/gprof-2.9.1>. Thanks to Yan Arrouye, Robert Bowdidge, Scott Boyd, and John Wendt for reviewing this article.


John A. Vink is one of Apple's most gifted engineers. He currently does performance analysis on code that you, the user, run constantly every day. He hopes you'll read this and make his job easier. It's possible to email him at vink@apple.com.

 
