
Al Williams

Dr. Dobb's Bloggers

The Profiler

July 07, 2014

I've often thought doctors — and maybe most professions — do better at differentiation than engineers do. Almost no one would go to a podiatrist for a head cold. But it seems all too often engineers get lumped together, at least by broad strokes. I'm an electrical engineer, but you don't want me telling you how to wire up a generator or a commercial air conditioner. I'm not that kind of electrical engineer.

However, as someone who develops a lot of embedded systems, I find that people often ask me about things that aren't directly related to embedded work but are tangentially related. In particular, I get a lot of questions about performance.

I think that many programmers today don't get a chance to work at the "bare metal" level, and if you really want to squeeze out performance, there's no substitute for understanding and exploiting the actual hardware available to you.

Of course, if you work with the tiny processors I often use, you have almost no choice. But even embedded systems are now often programmed in C or C++, which insulates you from direct control of the hardware.

Don't get me wrong. Most of the time, a compiler will generate code that is good enough. In fact, I would guess that, overall, a compiler generates better code than a human does on significant programs. Even if your skill level is very high, the compiler tirelessly applies its best code generation everywhere, so the few places where you might beat it get swamped by the places where you didn't do your best.

However, it is well known that a small percentage of your program usually accounts for the lion's share of its execution time. If you can bring your human intelligence to bear on that small percentage, the payoff can be big.

This is especially true because the biggest payoffs of all are things that compilers can't do. Consider, for example, building a table that relates strings to an integer key. You can write a linear search and optimize it by hand (or let the compiler do it). It will never be as fast as even a poorly written hash table search. Picking the right algorithm is almost always the best optimization, and one that compilers are not very good at, at least in general. Humans, on the other hand, can apply heuristics (even if they don’t know exactly what the heuristics are) very easily and often pick a good alternative among many.
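Here's a rough sketch of what I mean (the table contents, hash function, and sizes are arbitrary choices for illustration). Even this crude fixed-size hash table will beat the linear search as the table grows:

#include <stdio.h>
#include <string.h>

#define N 4

static const char *keys[N] = { "red", "green", "blue", "cyan" };
static const int   vals[N] = { 1, 2, 3, 4 };

/* O(n) linear search: compares against every entry until a match. */
int lookup_linear(const char *s)
{
    int i;
    for (i = 0; i < N; i++)
        if (strcmp(keys[i], s) == 0)
            return vals[i];
    return -1;
}

/* A crude hash table: hash to a bucket, then probe linearly. */
#define BUCKETS 16
static const char *hkeys[BUCKETS];
static int         hvals[BUCKETS];

static unsigned hash(const char *s)
{
    unsigned h = 5381;                    /* djb2-style string hash */
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % BUCKETS;
}

void hash_insert(const char *s, int v)
{
    unsigned i = hash(s);
    while (hkeys[i])                      /* linear probing on collision */
        i = (i + 1) % BUCKETS;
    hkeys[i] = s;
    hvals[i] = v;
}

int lookup_hash(const char *s)
{
    unsigned i = hash(s);
    while (hkeys[i]) {
        if (strcmp(hkeys[i], s) == 0)
            return hvals[i];
        i = (i + 1) % BUCKETS;
    }
    return -1;
}

int main(void)
{
    int i;
    for (i = 0; i < N; i++)
        hash_insert(keys[i], vals[i]);
    printf("%d %d\n", lookup_linear("blue"), lookup_hash("blue"));
    return 0;
}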

There are at least three things that are crucial to this kind of optimization:

  1. Understand the execution profile of your software.
  2. Understand what the compiler is doing.
  3. Know the processor and system architecture.

I put these in rough order. If you really know what part of your program consumes the most time (or whatever resource you want to optimize for), you might be able to fix it without worrying much about items 2 and 3. Swapping a slow bubble sort for a fast quicksort might make things better with nothing more than a changed function call.
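For instance, when the interfaces line up, the swap really is a one-call change. In this sketch (the bubble_sort being replaced is hypothetical), the C library's qsort takes over at the same call site:

#include <stdlib.h>

/* Comparison callback for qsort: ascending integers. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);            /* avoids overflow from x - y */
}

void sort_scores(int *scores, size_t n)
{
    /* was: bubble_sort(scores, n);      O(n^2) */
    qsort(scores, n, sizeof scores[0], cmp_int);   /* O(n log n) on average */
}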

However, more often you need to understand the compiler's output and your options on the specific system. But first things first. To satisfy item 1, you need a profiler, or you'll have to instrument your code. Instrumenting your code means adding statements that report on the resource you're profiling (usually execution time). This could be as simple as putting time stamps in a log to show when key subroutines start and finish. On small systems, you might toggle a digital output and read the timing on an oscilloscope.
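A minimal version of the time-stamp approach might look like the sketch below, assuming a POSIX system with clock_gettime (on older glibc you may need to link with -lrt; the routine being timed here is just a stand-in):

#include <stdio.h>
#include <time.h>

/* Stand-in for the key subroutine you want to measure. */
static void key_subroutine(void)
{
    volatile unsigned long i, sum = 0;
    for (i = 0; i < 10000000UL; i++)
        sum += i;
}

/* Read a monotonic clock as a double, in seconds. */
static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    double start = now_seconds();
    key_subroutine();
    /* The "log" here is just stderr; a real system might buffer these. */
    fprintf(stderr, "key_subroutine took %.6f s\n", now_seconds() - start);
    return 0;
}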

However, if your system supports a profiler, that will be easier. If you are running gcc under Linux, you can use the -pg option to have gcc add profiling instrumentation to your code automatically. A program compiled with -pg doesn't appear to behave any differently; you run it as usual. However, it silently writes a file named gmon.out during execution. This file contains execution statistics you can display using gprof.

The gprof output includes a lot of explanatory documentation. The listing below shows a session compiling, running, and profiling a simple program that computes factorials using two different methods: the a_fact function uses a loop, while the b_fact function uses recursion.
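In outline, the program looks something like this (the exact source file is in the online listings; the version here is just a sketch, so treat the argument handling and other details as approximations):

#include <stdio.h>
#include <stdlib.h>

/* Iterative factorial. The result wraps at 64 bits for large n. */
unsigned long long a_fact(unsigned n)
{
    unsigned long long result = 1;
    while (n > 1)
        result *= n--;
    return result;
}

/* Recursive factorial computing the same (wrapped) value. */
unsigned long long b_fact(unsigned n)
{
    if (n <= 1)
        return 1;
    return n * b_fact(n - 1);
}

int main(int argc, char *argv[])
{
    unsigned n = (argc > 1) ? (unsigned)atoi(argv[1]) : 50;
    unsigned long reps = (argc > 2) ? (unsigned long)atol(argv[2]) : 1;
    unsigned long i;
    unsigned long long a = 0, b = 0;

    /* Repeat the work many times so it shows up in the profile. */
    for (i = 0; i < reps; i++) {
        a = a_fact(n);
        b = b_fact(n);
    }
    printf("%llu\n%llu\n", a, b);
    return 0;
}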

gcc -o profiler -pg profiler.c
./profiler 50 1000000
15188249005818642432
15188249005818642432
gprof profiler
Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total           
 time   seconds   seconds    calls  ns/call  ns/call  name    
 66.54      0.29     0.29  1000000   292.76   292.76  b_fact
 32.12      0.43     0.14  1000000   141.33   141.33  a_fact
  2.29      0.44     0.01                             main

 %         the percentage of the total running time of the
time       program used by this function.

cumulative a running sum of the number of seconds accounted
 seconds   for by this function and those listed above it.

 self      the number of seconds accounted for by this
seconds    function alone.  This is the major sort for this
           listing.

calls      the number of times this function was invoked, if
           this function is profiled, else blank.
 
 self      the average number of milliseconds spent in this
ms/call    function per call, if this function is profiled,
	   else blank.

 total     the average number of milliseconds spent in this
ms/call    function and its descendents per call, if this 
	   function is profiled, else blank.

name       the name of the function.  This is the minor sort
           for this listing. The index shows the location of
	   the function in the gprof listing. If the index is
	   in parenthesis it shows where it would appear in
	   the gprof listing if it were to be printed.

Copyright (C) 2012 Free Software Foundation, Inc.

Copying and distribution of this file, with or without modification,
are permitted in any medium without royalty provided the copyright
notice and this notice are preserved.

		     Call graph (explanation follows)


granularity: each sample hit covers 2 byte(s) for 2.25% of 0.44 seconds

index % time    self  children    called     name
                                                 <spontaneous>
[1]    100.0    0.01    0.43                 main [1]
                0.29    0.00 1000000/1000000     b_fact [2]
                0.14    0.00 1000000/1000000     a_fact [3]
-----------------------------------------------
                             49000000             b_fact [2]
                0.29    0.00 1000000/1000000     main [1]
[2]     65.9    0.29    0.00 1000000+49000000 b_fact [2]
                             49000000             b_fact [2]
-----------------------------------------------
                0.14    0.00 1000000/1000000     main [1]
[3]     31.8    0.14    0.00 1000000         a_fact [3]
-----------------------------------------------

 This table describes the call tree of the program, and was sorted by
 the total amount of time spent in each function and its children.

 Each entry in this table consists of several lines.  The line with the
 index number at the left hand margin lists the current function.
 The lines above it list the functions that called this function,
 and the lines below it list the functions this one called.
 This line lists:
     index	A unique number given to each element of the table.
		Index numbers are sorted numerically.
		The index number is printed next to every function name so
		it is easier to look up where the function is in the table.

     % time	This is the percentage of the 'total' time that was spent
		in this function and its children.  Note that due to
		different viewpoints, functions excluded by options, etc,
		these numbers will NOT add up to 100%.

     self	This is the total amount of time spent in this function.

     children	This is the total amount of time propagated into this
		function by its children.

     called	This is the number of times the function was called.
		If the function called itself recursively, the number
		only includes non-recursive calls, and is followed by
		a '+' and the number of recursive calls.

     name	The name of the current function.  The index number is
		printed after it.  If the function is a member of a
		cycle, the cycle number is printed between the
		function's name and the index number.


 For the function's parents, the fields have the following meanings:

     self	This is the amount of time that was propagated directly
		from the function into this parent.

     children	This is the amount of time that was propagated from
		the function's children into this parent.

     called	This is the number of times this parent called the
		function '/' the total number of times the function
		was called.  Recursive calls to the function are not
		included in the number after the '/'.

     name	This is the name of the parent.  The parent's index
		number is printed after it.  If the parent is a
		member of a cycle, the cycle number is printed between
		the name and the index number.

 If the parents of the function cannot be determined, the word
 '<spontaneous>' is printed in the 'name' field, and all the other
 fields are blank.

 For the function's children, the fields have the following meanings:

     self	This is the amount of time that was propagated directly
		from the child into the function.

     children	This is the amount of time that was propagated from the
		child's children to the function.

     called	This is the number of times the function called
		this child '/' the total number of times the child
		was called.  Recursive calls by the child are not
		listed in the number after the '/'.

     name	This is the name of the child.  The child's index
		number is printed after it.  If the child is a
		member of a cycle, the cycle number is printed
		between the name and the index number.

 If there are any cycles (circles) in the call graph, there is an
 entry for the cycle-as-a-whole.  This entry shows who called the
 cycle (as parents) and the members of the cycle (as children.)
 The '+' recursive calls entry shows the number of function calls that
 were internal to the cycle, and the calls entry for each member shows,
 for that member, how many times it was called from other members of
 the cycle.

Copyright (C) 2012 Free Software Foundation, Inc.

Copying and distribution of this file, with or without modification,
are permitted in any medium without royalty provided the copyright
notice and this notice are preserved.

Index by function name

   [3] a_fact                  [2] b_fact                  [1] main

This is a simple example (you can find the source file in the online listings) and the result isn't surprising: the loop version is faster than the recursive version. (Note that I ran each computation a million times to make the execution time more apparent.) What's the effect of using the -O1 optimization when compiling?

On my computer, it made the a_fact subroutine slightly slower (0.16 seconds) and the b_fact subroutine faster (0.15 seconds). I expect the 0.01-second difference in a_fact is just run-to-run variation, but b_fact clearly benefits from -O1. If you try the -O2 option, you'll get a surprise, because such a simple program gives the optimizer a lot of room to transform it.
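If you want to reproduce that, the recompile is just the same session with the optimization flag added:

gcc -O1 -pg -o profiler profiler.c
./profiler 50 1000000
gprof profiler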

Of course, if your program has any interactive or device input, the time spent waiting will perturb the results for the functions that wait. You may want to profile a non-interactive version of your program, or call key functions from a test bed instead of your real program.

With gprof you can identify functions that are called often and those that take a long time to execute, and then focus on the areas where improvement will pay off most. On a deeply embedded system, you might have to be more creative. As I mentioned earlier, you might resort to toggling outputs to read with an oscilloscope or printing time stamps; however you do it, knowing what's happening is the first step.
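On bare metal, the oscilloscope trick can look something like this sketch. The register address and pin bit are made up; the real values come from your part's datasheet:

/* Hypothetical memory-mapped GPIO output register and spare pin. */
#define GPIO_OUT    (*(volatile unsigned int *)0x40020014u)
#define PROFILE_PIN (1u << 5)

extern void routine_under_test(void);   /* the code being measured */

void profiled_call(void)
{
    GPIO_OUT |=  PROFILE_PIN;   /* rising edge at entry triggers the scope */
    routine_under_test();
    GPIO_OUT &= ~PROFILE_PIN;   /* falling edge at exit; pulse width = time */
}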

Sometimes you can identify a simple fix. In my example, I can simply pick the function that works best. Or I could investigate another algorithm or write more efficient C code. If that doesn’t work? I'll talk about that next time.
