Development: Buildbot

From WxWiki

Our Buildbot status page:

There is also a second status page with access to run builds manually:

The Buildbot documentation:

Things to do

  • Set up slaves.
  • Update the wx website to mention the buildbot.
  • Use 'make install' and build and run programs against the installed libs and headers.
  • Test wx-config, try compiling using it and take it through its basic options.
  • Add binary compatibility tests, test programs from one version against libs from another with ldd -r.
  • Recently there was a discussion about GUI tests on wx-dev, running those would be immensely useful.
  • Set up wxPython builds if they are wanted.
  • Upgrade the build master; newer versions of buildbot have some improvements to the waterfall display.
  • Also consider a web view with a simple list of builds and no time axis.
  • We need to be able to see config.log when configure fails.
  • Install an IA64 VC8 build of cppunit on the Testdrive.
  • Install ccache for the Windows testdrive machines if that is possible.
  • Tarball source step for the Windows testdrive machines.
  • Write the schema.

Editing the Configuration

The configuration files can be found in SVN trunk under build/buildbot/config/. For a quick start it should be enough to read example.xml and maybe look at some existing configuration files.

The configuration files checked into trunk cover all the branches; the build/buildbot directory isn't used on the other branches.

Our configuration files are XML rather than the python configuration that is standard for buildbot. The reason for this is that we can then allow the buildbot to automatically reload the configuration when it changes in SVN. We wouldn't be able to do this with a python configuration as it would allow arbitrary commands to be run on the master.

If builds are in progress when the configuration changes in SVN, the master waits for them to finish before reloading; otherwise it reloads right away. Any errors are shown in

To help catch simple errors before check-in, you can verify your changes using the script '' under build/buildbot/tools. For example:

$ cd build/buildbot
$ tools/ config/*.xml

The script requires xmllint and xsltproc to be installed; these tools come with Gnome's libxml2 and libxslt.

Instead of a single configuration file, the configuration is split into one file per slave, plus common.xml for shared objects such as locks and schedulers. This leaves open the option of per-slave access control or signing, though we probably won't need it.

Since the configuration files are hand edited, we need to be able to define reusable snippets of code to avoid repetition. Rather than invent a complex language to do this, it has been kept minimal and the additional features are provided by allowing XSLT to be embedded in the config files. This allows constants and new elements and attributes to be defined in terms of the basic configuration language. These are usually put in include/defs.xml which is then included by all the slave XML files:

<bot xmlns:xi="http://www.w3.org/2001/XInclude">

<xi:include href="include/defs.xml"/>

Whitespace handling: the XML processor removes any text nodes containing only whitespace, and the application strips leading and trailing whitespace from any text nodes.

Buildbot Elements

In general we have an element for each buildbot class we need to be able to use, with the same name barring case (the class names use mixed case, the elements are all lower case). For example we have <slavelock>, <compile> and <shellcommand> corresponding to buildbot's SlaveLock, Compile and ShellCommand classes.

Each of these elements can have children corresponding to the parameters of the class. For example:

    <command>tar xjf wxWidgets.tar.bz2</command>
    <description>extracting tarball</description>
    <descriptionDone>extract tarball</descriptionDone>


          command = "tar xjf wxWidgets.tar.bz2",
          description = "extracting tarball",
          descriptionDone = "extract tarball"
          haltOnFailure = true,
          locks = [ one_per_slave ])



This is a top level element. You have one of these for each build you want to run. Each one adds a column to the status page.


<name> string, required, unique. This is the title in the status page. Must be unique across all slaves.
<sandbox> string. The meaning of <sandbox> elements is slave specific, if it is allowed or required there should be a comment in the slave's configuration file explaining usage.
<builddir> string, required, unique. A directory for the build. This names a subdirectory on the build slave for the build, and also one on the master for the logs, therefore it must be unique across all slaves. It is a good idea to give it a slave specific prefix.
<lock> string, multiple. Each lock element names a <slavelock> or <masterlock> object. The build will acquire all its locks before it runs, allowing serialisation. See Locks below.
<scheduler> string, multiple. Each <scheduler> names a scheduler object that will trigger the build. These are usually defined in common.xml.
<steps> build steps Build step elements (see below) defining the actual commands that the build will execute.
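Putting these parameters together, a minimal <build> might look like the sketch below (the name, builddir and scheduler values are made up for illustration):

```xml
<build>
    <!-- column title in the status page; must be unique across all slaves -->
    <name>Linux GTK+ Trunk gcc</name>
    <!-- slave-prefixed directory, also unique across all slaves -->
    <builddir>mybot-gtk-trunk</builddir>
    <!-- trigger from a scheduler defined in common.xml -->
    <scheduler>trunk_quick</scheduler>
    <steps>
        <checkout/>
        <configure/>
        <compile/>
    </steps>
</build>
```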

Build Steps

The build step elements are <svn>, <testdrive-svn>, <configure>, <compile>, <test> and <shellcommand>.

All these except <testdrive-svn> correspond to buildbot classes of the same names; <testdrive-svn> is an <svn> derived class that checks out to the Testdrive machines.

Instead of using <svn> or <testdrive-svn> directly you usually use <checkout>. This is defined in include/defs.xml (and redefined in include/testdrive.xml for the Testdrive), and behaves as <svn> but provides defaults for the URL and branch.

Another element you will see being used as a build step is <setup>. This can be used by Testdrive unix builds to set up some prerequisites for the build. It is defined in include/testdrive-unix.xml.

All the build steps take the following common parameters:

<name> string.
<haltOnFailure> boolean.
<flunkOnWarnings> boolean.
<flunkOnFailure> boolean.
<warnOnWarnings> boolean.
<warnOnFailure> boolean.
<lock> string, multiple. Each lock element names a <slavelock> or <masterlock> object. The step will acquire all its locks before it runs, allowing serialisation. See Locks below.

For the boolean elements, in addition to 'true' and 'false' they can also be empty, which is taken to mean 'true'.
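For example, the first two of these are equivalent:

```xml
<haltOnFailure>true</haltOnFailure>
<haltOnFailure/>                        <!-- empty also means 'true' -->
<haltOnFailure>false</haltOnFailure>
```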

For full descriptions of the parameters see:


Takes the following parameters in addition to the common parameters above:

<svnurl> string.
<baseURL> string.
<defaultBranch> string.
<workdir> string.
<mode> 'update', 'copy', 'clobber' or 'export'
<alwaysUseLatest> boolean.

The URL to check out can be given with <baseURL> and <defaultBranch>, the concatenation of the two giving the URL, or alternatively using the <svnurl> parameter.

Usually you use <checkout> defined in include/defs.xml instead of <svn>, it takes the same parameters as <svn> but has defaults for <baseURL> and <defaultBranch>.
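As a sketch, the following two steps check out the same URL (the server name is a placeholder, not the real repository address):

```xml
<!-- URL given as the concatenation of baseURL and defaultBranch -->
<svn>
    <baseURL>http://svn.example.org/svn/wx/</baseURL>
    <defaultBranch>trunk</defaultBranch>
</svn>

<!-- the same URL given directly -->
<svn>
    <svnurl>http://svn.example.org/svn/wx/trunk</svnurl>
</svn>
```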

See the buildbot manual for the descriptions of the parameters:


A class derived from <svn> that checks out to the testdrive. It takes the same parameters as <svn> with a couple of exceptions.

  • It has no <mode> parameter, instead it always does an 'update' to a shared checkout directory.
  • It has an additional <command> parameter which executes a post-checkout command on the remote testdrive machine in the shared directory; this can be used to create a private copy of the shared directory so that builds do not need to be serialised (the <testdrive-svn> steps themselves are always serialised).

Usually you use <checkout> instead, defined in include/testdrive.xml. It provides defaults for <baseURL> and <defaultBranch> and runs a post checkout command to make a clean copy of the shared directory in the 'build' subdirectory of the build's <builddir>. The net effect is similar to 'clobber', you always get a non-shared clean checkout.


This allows an arbitrary command to be run on the slave to fetch the sources. For example it could be used to download and unpack a tarball instead of the usual SVN checkout.

It's a custom class, but similar in nature to <shellcommand>; it basically takes the parameters of both of buildbot's standard Source and ShellCommand classes:

In addition to the common parameters above:

<defaultBranch> string.
<workdir> string. Defaults to 'build'.
<mode> 'update', 'copy', 'clobber' or 'export' Defaults to 'clobber'.
<alwaysUseLatest> boolean.
<description> string.
<descriptionDone> string.
<command> string, multiple. If multiple <command> elements are given they are concatenated with a newline between them.

The differences to <shellcommand> are:

It runs outside <workdir> and must create this directory if it doesn't exist (<shellcommand> and company OTOH run inside the <workdir> and won't start if it's not already there).

When <command> is executed, the following environment variables will be predefined if they are not blank:

WORKDIR The value of <workdir>; the directory must be created if it doesn't exist. This variable is always available.
BRANCH The value of <defaultBranch> unless overridden for a manually started build.
REVISION The revision to be fetched. If not set, assume the latest.
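For example, a fetch command for this step might use the variables along these lines (the tarball URL is a placeholder):

```xml
<command>
    mkdir -p "$WORKDIR"
    cd "$WORKDIR"
    wget -q "http://example.org/daily/wxWidgets-${REVISION:-latest}.tar.bz2"
    tar xjf "wxWidgets-${REVISION:-latest}.tar.bz2"
</command>
```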
<shellcommand>, <configure>, <compile> and <test>

These build steps take the following parameters in addition to the common parameters above:

<workdir> string.
<description> string.
<descriptionDone> string.
<command> string, multiple. If multiple <command> elements are given they are concatenated with a newline between them.

Note: commands containing newlines do not work on all platforms, so using multiple <command> elements does not turn out to be useful in most cases. You can't use them in include/defs.xml for example.

See the buildbot manual for the descriptions of the parameters: and

<shellcommand> is the base class for the others, they are simple subclasses each with a different default for <command>. You can use this default either by omitting the <command> element, or using an empty <command/> tag.
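For example, both of the following run <compile>'s default command:

```xml
<!-- no <command> element at all: the default is used -->
<compile>
    <description>compiling</description>
</compile>

<!-- an empty <command/> tag also selects the default -->
<compile>
    <command/>
</compile>
```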


Locks can be used to synchronise builds or build steps that must be serialised. There are two types of locks <slavelock> and <masterlock>.

They are top level elements taking one parameter <name>:



Both types have global visibility, i.e. they are available for all slaves no matter which XML file they are defined in. Therefore if your lock could be generally useful put it in common.xml, otherwise it is a good idea to name it with a slave specific prefix.

The difference between the two types is in the scope of the locking. If two builds try to acquire the same slavelock they will run concurrently if they are on different slaves, but one will block if they are on the same slave. With a masterlock OTOH only one build will run at a time across all slaves.

To use a lock add an element:


to your <build> or build step.
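A sketch of defining and using a lock (the lock name is illustrative):

```xml
<!-- top level, e.g. in common.xml -->
<slavelock>
    <name>one_per_slave</name>
</slavelock>

<!-- inside a <build> or build step -->
<lock>one_per_slave</lock>
```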

For more information about locks see:


Scheduler objects can be used to automatically trigger builds. They are created using the top level elements <scheduler>, <anybranchscheduler>, <periodic>, <nightly> and <dependent>. They have global visibility, therefore they are usually created in common.xml, for example:


To use this scheduler add an element:


as a child of your <build>. You can add more than one to the same build, for example, 'sunday_6am' and 'wednesday_6am' to run a build twice a week. You can also have builds with no scheduler which will only run when started manually.
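For example, a weekly scheduler might be defined and used as sketched below, following the cron-style <nightly> parameters described further down (dayOfWeek 6 is Sunday, since 0 is Monday):

```xml
<!-- in common.xml -->
<nightly>
    <name>sunday_6am</name>
    <hour>6</hour>
    <dayOfWeek>6</dayOfWeek>
</nightly>

<!-- in a <build> -->
<scheduler>sunday_6am</scheduler>
```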

You'll notice in the buildbot documentation that this differs from the way the standard build configuration files work. They make the associations between schedulers and builds the other way around: the scheduler is given a list of builds.

For more information about schedulers see:

<scheduler> and <anybranchscheduler>

These schedulers watch a branch and trigger a build whenever there are relevant changes. They take the same parameters, the only difference being that <scheduler> can only have a single <branch> child, while <anybranchscheduler> can have multiple.

<name> string, unique, required. Name for the scheduler.
<branch> string. A branch to watch for changes.
<treeStableTimer> integer. Wait until the branch stops changing for this many seconds before firing the builds.
<fileIsImportant> string, multiple. Whitespace-separated list of globbing patterns, defaults to everything.
<fileNotImportant> string, multiple. Whitespace-separated list of globbing patterns, defaults to nothing.

<fileIsImportant> and <fileNotImportant> take a list of globbing patterns separated by whitespace, for example:

<fileNotImportant>docs/* distrib/*</fileNotImportant>

The scheduler only fires if a file matching <fileIsImportant> and not matching <fileNotImportant> has changed.

<name> string, unique, required. Name for the scheduler.
<periodicBuildTimer> integer. Fires the builds every time this many seconds elapses.
<branch> string. Override a build's normal branch when triggering it.

Cron style scheduling:

<name> string, unique, required. Name for the scheduler.
<minute> 0-59 defaults to 0.
<hour> 0-23 defaults to *.
<dayOfMonth> 1-31 defaults to *.
<month> 1-12 defaults to *.
<dayOfWeek> 0-6 (0 = Monday) defaults to *.
<branch> string. Override a build's normal branch when triggering it.

The values can be just '*' for everything, single numbers or ranges, or lists of numbers and/or ranges.

For example:

<minute>*</minute>          <!-- every minute -->
<minute>0,15,30,45</minute> <!-- every quarter of an hour -->
<dayOfWeek>0-4</dayOfWeek>  <!-- Monday to Friday -->

<name> string, unique, required. Name for the scheduler.
<upstream> string, required. The name of the upstream scheduler.

For more information see:

Setting up a Slave

Instructions for each platform are here:

Instructions for setting up a Windows slave are here:

Your slave needs to be able to make outgoing connections to the build master on, and to the SVN server or the York tarball site. It doesn't need to be able to accept incoming connections, the slave initiates the connection to the master rather than the other way around. Therefore buildbot slaves don't require a fixed IP address and can be behind a NAT router.

The build master is using SSL, which is not directly supported by buildbot slaves at the moment. However, you can connect to it by installing stunnel.

You choose your own slave name and password and let Vadim know these and the IP address(es) you will be connecting from. Make yourself a configuration file in SVN under trunk/build/buildbot/config, its name must be your chosen slave name + '.xml'. You can use example.xml as a starting point.

In most cases the simplest way to set up the right PATH and environment variables for the compiler you will be using is to do it as part of the slave's startup: either set up the environment before running buildbot, or modify the os.environ dict in buildbot.tac.
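As a sketch of the buildbot.tac approach, a few lines like the following could go near the top of the file (the Watcom paths are placeholders; substitute your own toolchain):

```python
import os

# Put the compiler's environment in place before the slave starts,
# so every command the slave runs inherits it.
# The paths below are placeholders, not real install locations.
compiler_root = "/opt/watcom"
os.environ["WATCOM"] = compiler_root
os.environ["PATH"] = (
    os.path.join(compiler_root, "binl")
    + os.pathsep
    + os.environ.get("PATH", "")
)
```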

If you need more than one environment on the same machine, say for doing both VC++ and Watcom builds, then setting up multiple slaves on the same machine is simplest in most cases.

As the buildbot docs say, the slave should have its own user account. This prevents the buildbot from modifying anyone else's files, or interfering with other processes by doing things like sending signals or using debugging APIs on them.

The build slave will be running scripts and programs out of the wxWidgets SVN unchecked. To help with this the build master prefixes all commands it sends with the word 'sandbox'. This allows you to implement a program or script 'sandbox' on your slave to run the SVN code safely. It can switch to a different user account from the build slave's, so that the code cannot do things like access the build slave's password or reconfigure it to take commands from another build master.

Here are some examples of no-op sandbox scripts you can use initially. Once you have your slave working you can replace the no-op script with one that gives you more security if you require it, and post it here if it will be useful to others.

For unix systems you can use:

#!/bin/sh
exec sh -c "$@"

For Windows you can use sandbox.bat:

@%COMSPEC% /c %*

For Cygwin, you could run buildbot under Cygwin python. In that case it behaves as a unix system and you could use the unix sandbox script above.

Alternatively, you could set buildbot up as a Windows slave running under the Windows version of python, then for sandbox.bat:

@c:\cygwin\bin\sh -c %*

This approach has the advantage that you can use the buildbot Windows service, and would be more efficient especially if you are running more slave processes for other compilers on the same machine.

Similarly for an MSYS sandbox.bat:

@c:\msys\1.0\bin\sh -c %*

Any <sandbox> elements in your builds are sent as extra parameters to your sandbox script, and it is entirely up to you what the sandbox does with them. For example, if you wanted to use multiple compilers on the same machine, say VC++ and Watcom, then the recommended approach is to set up multiple slaves on the machine. An alternative approach with just one slave process would be for your sandbox.bat to call either VC's vcvars32.bat or Watcom's setvars.bat to set up the build environment depending on the value of the first parameter. Then in your builds you could put <sandbox>MSVC</sandbox> or <sandbox>Watcom</sandbox> to select between them.

Commands sent to your build slave for execution as part of a build step are formed as follows: arg[0] is the word 'sandbox', the content of each of the build's <sandbox> elements is appended as a separate argument and all the <command> elements of the build step are concatenated with '\n' and appended as a single argument. This is then executed without shell expansion. The <sandbox> and <command> arguments are in the order they appear in the build.

For example:



then for the <compile/> build step your slave receives:

   'sandbox', 'foo', 'make all', 'bar'

Note that having multiple <command> elements in a step is not usually useful as not all platforms can handle '\n' in parameters.

Sending in Logs from Builds Done Elsewhere

If running a full build slave is not an option (e.g. because you are behind a firewall that does not allow the necessary outgoing connections) but you would still like to do regular test builds, then it is possible to do the builds elsewhere and then upload the logs to the buildbot.

For each column you will require in the waterfall display add something like the following to push.xml in SVN under trunk/build/buildbot/config:

    <name>wxOS2 Stable gcc</name>

        <show log="update"/>
        <show log="configure"/>
        <show log="compile"/>
        <show log="demos"/>
        <show log="samples"/>

There is a 'buildbot' category in the wxWidgets patch manager that you can use if you are doing this by supplying patches; there should be no delay applying changes to your own configuration.

You should have one column for each OS/Toolkit/Branch/Compiler combination you are building. The <name> and <builddir> must be unique across all build slaves.

Then when you do your test builds, you should generate two files for each <show log=...> in your configuration: a '.log' file containing the log itself, and a '.err' file, which is a text file containing just the decimal exit code of that build step.

For example, for <show log="configure"> you should generate configure.log and configure.err.

These should then be zipped or tarred and gzipped into a single archive file.

To upload your logs go to the buildbot admin page, and click on your build's name at the top of its column. In there you will find a form which will allow you to upload your log archive file.

In the required 'revision' field you should supply the SVN revision of the sources you have built. If you are getting the sources with 'svn update', then it is reported at the end of the update. Or you can check it later using 'svn info'. The 'branch' can be left blank since it is indicated in the <name>.

Uploading can also be done programmatically using curl, for example:

$ curl -u user:pass --cacert tt.cacert.pem \
    -F revision=r12345 -F file=@build.tar.gz \

Note that the <name> of your build appears in the url.

The certificate file tt.cacert.pem is:

$ openssl s_client -connect < /dev/null | \
        openssl x509 > tt.cacert.pem
$ cat tt.cacert.pem

Setting up your automatic builds for sending in Logs

While I am setting up some automatic builds, I'll add some notes on what needs to be done, to try and simplify others' lives.

If you are not setting up a build slave, chances are that you are behind a firewall with a HTTP proxy. In that case, you might have other problems as well. For me, the first one was accessing svn. Your proxy needs to support some extensions for svn to work; e.g. squid appears to require an additional configuration line like:


Supposedly, setting up desproxy( might be a solution as well; however, for me it failed to work, always claiming that the connection was refused, so I just used our local proxy, which seemed to work fine.

A completely different and maybe easier to handle solution would be to download the daily tarball(s) of the branch(es) you are interested in, however currently only HEAD seems to be available in this way.

Anyway, once you do have the (initial) sources you want to build, create a build directory (e.g. build/solaris), go to that directory and do your build, e.g. by doing:

  ../../configure --with-motif > configure.log 2>&1
  echo $? > configure.err
  make > make.log 2>&1
  echo $? > make.err
  # Optionally build demos and/or samples
  tar cvf sol-motif.tar configure.log \
      configure.err make.log make.err
      # optionally add more log/err files ...
  gzip -9 sol-motif.tar

(together with a command to update your sources, you could put this in a script to be run by a cron daemon). Finally upload the generated tar.gz to the buildbot admin page as described above.

Notes for specific platforms

OSX (10.5)


Download Twisted 8.0.1 here

Download the buildbot sources and build them according to the README with python ./ install

Install a 4-way (universal) binary of cppunit:

$ ./configure --disable-dependency-tracking CXXFLAGS="-arch ppc -arch i386 -arch ppc64 -arch x86_64 -gdwarf-2 -O2"
$ make AM_LDFLAGS="-XCClinker -arch -XCClinker ppc -XCClinker -arch -XCClinker i386 -XCClinker -arch -XCClinker ppc64 -XCClinker -arch -XCClinker x86_64"
$ sudo make install

In the stunnel 4 configuration file I've had to set a path with write access for the pid file, plus some socket info:

socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1

and then the service configuration for the buildbot

; Service-level configuration

client = yes
accept  = 9989
CAfile = /Users/sandboxuser/stunnel/bb.cert.pem
verify = 2
connect =
sslVersion = SSLv3

I've put both the .conf and the .pem file into a folder stunnel in the user's home, so I can start up the connection using

stunnel ~/stunnel/stunnel.conf

If you have problems, add the following debug settings at the beginning of the file:

debug = 7
foreground = yes

You will then be able to read the output directly in Terminal.


A really clear guide to setting up and running a Windows Buildbot for wxWidgets is available here.

Debian Based Linux Systems

Currently writing the instructions: here.

Embedded XSLT

The configuration files can include XSLT elements. These are evaluated and replaced by any output they generate. This allows us to make the configuration files convenient to use and avoid repetition while keeping the underlying schema simple.

The alternative would have been a much more complex schema. XSLT has the advantage that a lot of devs will already know it; if OTOH someone has to learn it, at least it's useful to know anyway, and there is plenty written about it to learn from.

For most tasks simply following the examples in the next section will be enough. Or if you are familiar with XSLT and want technical details of how it works embedded then see: todo

Please put a comment on any reusable XSLT that you add, with enough information for someone who doesn't read XSLT well to use it.



A named template defines a new element, for example:

<xsl:template name="STABLE_BRANCH">branches/WX_2_8_BRANCH</xsl:template>

This defines an element:


that expands to 'branches/WX_2_8_BRANCH'.

You can see the output of your XSLT with the script under build/buildbot/tools (requires xsltproc) using the option '-p':

$ cat test.xml
<?xml version="1.0"?>
<bot xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xsl:version="1.0">
    <xsl:template name="STABLE_BRANCH">branches/WX_2_8_BRANCH</xsl:template>
</bot>
$ -p test.xml
<?xml version="1.0"?>


constants containing elements

The named template can contain child elements:

<xsl:template name="profile">
for %I in ("%USERPROFILE%") do set BUILDDIR=%~sI\build
</xsl:template>

Now <profile/> expands to:

for %I in ("%USERPROFILE%") do set BUILDDIR=%~sI\build

defining a new element with content

It is also possible for templates to copy any child elements of their invocations into their output. To do this follow this pattern:

<xsl:template name="compile-samples">
    <xsl:param name="content"/>
    <compile-subdir dir="samples">
        <xsl:copy-of select="$content"/>
    </compile-subdir>
</xsl:template>

Begin the template with <xsl:param> as shown; this defines "$content", which will receive the children of the invocation. Then use <xsl:copy-of> to copy these children to the output.

This invocation:


expands to:

    <compile-subdir dir="samples">

templates aren't expanded inside themselves

Named templates are not expanded inside themselves, so for example this is not recursive:

<xsl:template name="sandbox">
    <xsl:param name="content"/>
    <sandbox><xsl:copy-of select="$content"/></sandbox>
    <lock><xsl:copy-of select="$content"/></lock>
</xsl:template>



expands to:


giving an element defaults for missing children

To provide default values for child elements follow this pattern:

<xsl:template name="checkout">
    <xsl:param name="content"/>
    <defaults content="{$content}">
        <xsl:copy-of select="$content"/>
    </defaults>
</xsl:template>

The <defaults> element is a template defined in include/defs.xml. It copies any of its children that don't already exist in $content.



expands to:


<baseURL> was omitted in the invocation so a default was supplied.

giving an element new attributes

The template's first param always receives the invocation's children; any further params define attributes. For example:

<xsl:template name="checkout">
    <xsl:param name="content"/>
    <xsl:param name="branch" select="'trunk'"/>
    <defaults content="{$content}">
        <defaultBranch><xsl:value-of select="$branch"/></defaultBranch>
        <xsl:copy-of select="$content"/>
    </defaults>
</xsl:template>

The second param defines an attribute 'branch' with a default value of 'trunk'. The value of the attribute can be used inside the template with xsl:value-of as shown.


<checkout branch="branches/WX_2_8_BRANCH"/>

expands to:


constants that can be used in attributes

It's not possible to use an element inside an attribute, so for example you can't write:

<checkout branch="<STABLE_BRANCH/>"/>

However you can define a variable at top level:

<xsl:variable name="STABLE_BRANCH"><STABLE_BRANCH/></xsl:variable>

Which does then allow:

<checkout branch="{$STABLE_BRANCH}"/>

overriding templates and variables from include files

A template or variable definition will hide any with the same name that exists in an included file. Therefore a slave can override any of the usual templates from include/defs.xml by simply defining its own version.