Boost C++ Libraries

...one of the most highly regarded and expertly designed C++ library projects in the world.

    Herb Sutter and Andrei Alexandrescu, C++ Coding Standards

This is the documentation for a snapshot of the master branch, built from commit b14f8ca719.

1. Introduction

Want to learn about B2 features? Start with the tutorial and continue with the overview. When you’re ready to try B2 in practice, go to the installation.

Building a project with B2? See the installation and then read the overview.

Setting up B2 on your project? Take a look at the overview and extender manual.

If there’s anything you find unclear in this documentation, report the problem directly in the issue tracker. For more general questions, please post them to our discussion forums (https://github.com/bfgroup/b2/discussions).

Copyright 2018-2021 René Ferdinand Rivera Morell; Copyright 2006, 2014 Vladimir Prus. Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE.txt or copy at https://www.bfgroup.xyz/b2/LICENSE.txt)

2. Installation

To install B2 from an official release, as available on GitHub, follow these steps:

  1. Unpack the release. On the command line, go to the root of the unpacked tree.

  2. Run either .\bootstrap.bat (on Windows), or ./bootstrap.sh (on other operating systems).

  3. Run

    $ ./b2 install --prefix=PREFIX

    where PREFIX is a directory where you want B2 to be installed.

  4. Optionally, add PREFIX/bin to your PATH environment variable.

To test the installation, change to a directory containing a Jamfile (for example, the example/hello/ directory of the unpacked release) and run:

$ PREFIX/bin/b2

A simple executable should be built.

A C++11 capable compiler is needed to build the b2 engine. But using the b2 engine and build system does not require C++11.

3. Tutorial

This section will guide you through the most basic features of B2. We will start with the “Hello, world” example, learn how to use libraries, and finish with the testing and installation features.

3.1. Hello, world

The simplest project that B2 can construct is stored in the example/hello/ directory. The project is described by a file called Jamfile that contains:

exe hello : hello.cpp ;

Even with this simple setup, you can do some interesting things. First of all, just invoking b2 will build the hello executable by compiling and linking hello.cpp. By default, the debug variant is built. Now, to build the release variant of hello, invoke

b2 release

Note that the debug and release variants are created in different directories, so you can switch between variants or even build multiple variants at once, without any unnecessary recompilation. Let us extend the example by adding another line to our project’s Jamfile:

exe hello2 : hello.cpp ;

Now let us build both the debug and release variants of our project again:

b2 debug release

Note that two variants of hello2 are linked. Since we have already built both variants of hello, hello.cpp will not be recompiled; instead the existing object files will just be linked into the corresponding variants of hello2. Now let us remove all the built products:

b2 --clean debug release

It is also possible to build or clean specific targets. The following two commands, respectively, build or clean only the debug version of hello2.

b2 hello2
b2 --clean hello2

3.2. Properties

To represent aspects of target configuration such as debug and release variants, or single- and multi-threaded builds portably, B2 uses features with associated values. For example, the debug-symbols feature can have a value of on or off. A property is just a (feature, value) pair. When a user initiates a build, B2 automatically translates the requested properties into appropriate command-line flags for invoking toolset components like compilers and linkers.

There are many built-in features that can be combined to produce arbitrary build configurations. The following command builds the project’s release variant with inlining disabled and debug symbols enabled:

b2 release inlining=off debug-symbols=on

Properties on the command-line are specified with the syntax:

feature-name=feature-value

The release and debug that we have seen in b2 invocations are just a shorthand way to specify values of the variant feature. For example, the command above could also have been written this way:

b2 variant=release inlining=off debug-symbols=on

variant is so commonly-used that it has been given special status as an implicit feature—B2 will deduce its identity just from the name of one of its values.

A complete description of features can be found in the section called “Features and properties”.

3.2.1. Build Requests and Target Requirements

The set of properties specified on the command line constitutes a build request—a description of the desired properties for building the requested targets (or, if no targets were explicitly requested, the project in the current directory). The actual properties used for building targets are typically a combination of the build request and properties derived from the project’s Jamfile (and its other Jamfiles, as described in the section called “Project Hierarchies”). For example, the locations of #include'd header files are normally not specified on the command line, but described in Jamfiles as target requirements and automatically combined with the build request for those targets. Multi-threaded compilation is another example of a typical target requirement. The Jamfile fragment below illustrates how these requirements might be specified.

exe hello
    : hello.cpp
    : <include>boost <threading>multi
    ;

When hello is built, the two requirements specified above will always be present. If the build request given on the b2 command-line explicitly contradicts a target’s requirements, the target requirements usually override (or, in the case of “free” features like <include>, [1] augment) the build request.

The value of the <include> feature is relative to the location of the Jamfile where it is used.
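
For instance, a sketch of a target whose headers live in a detail/ subdirectory next to its Jamfile (the names here are hypothetical) would be:

exe tool : tool.cpp : <include>detail ;
# "detail" resolves relative to the directory containing this Jamfile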

3.2.2. Project Attributes

If we want the same requirements for our other target, hello2, we could simply duplicate them. However, as projects grow, that approach leads to a great deal of repeated boilerplate in Jamfiles. Fortunately, there’s a better way. Each project can specify a set of attributes, including requirements:

project
    : requirements <include>/home/ghost/Work/boost <threading>multi
    ;

exe hello : hello.cpp ;
exe hello2 : hello.cpp ;

The effect would be as if we specified the same requirement for both hello and hello2.

3.3. Project Hierarchies

So far we have only considered examples with one project, with one user-written Jamfile file. A typical large codebase would be composed of many projects organized into a tree. The top of the tree is called the project root. Every subproject is defined by a file called Jamfile in a descendant directory of the project root. The parent project of a subproject is defined by the nearest Jamfile file in an ancestor directory. For example, in the following directory layout:

top/
  |
  +-- Jamfile
  |
  +-- app/
  |    |
  |    +-- Jamfile
  |    `-- app.cpp
  |
  `-- util/
       |
       +-- foo/
       .    |
       .    +-- Jamfile
       .    `-- bar.cpp

the project root is top/. The projects in top/app/ and top/util/foo/ are immediate children of the root project.

When we refer to a “Jamfile” generically, we mean a file called either Jamfile or Jamroot. When we need to be more specific, we will use the exact filename, “Jamfile” or “Jamroot”.

Projects inherit all attributes (such as requirements) from their parents. Inherited requirements are combined with any requirements specified by the subproject. For example, if top/Jamfile has

<include>/home/ghost/local

in its requirements, then all of its sub-projects will have it in their requirements, too. Of course, any project can add include paths to those specified by its parents. [2] More details can be found in the section called “Projects”.
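
As a minimal sketch (the extras directory is hypothetical), the parent and child Jamfiles could look like this:

# top/Jamfile
project
    : requirements <include>/home/ghost/local
    ;

# top/app/Jamfile: inherits <include>/home/ghost/local and adds its own path
project
    : requirements <include>extras
    ;

exe app : app.cpp ;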

Invoking b2 without explicitly specifying any targets on the command line builds the project rooted in the current directory. Building a project does not automatically cause its sub-projects to be built unless the parent project’s Jamfile explicitly requests it. In our example, top/Jamfile might contain:

build-project app ;

which would cause the project in top/app/ to be built whenever the project in top/ is built. However, targets in top/util/foo/ will be built only if they are needed by targets in top/ or top/app/.

3.4. Dependent Targets

When building a target X that depends on first building another target Y (such as a library that must be linked with X), Y is called a dependency of X and X is termed a dependent of Y.

To get a feeling of target dependencies, let’s continue the above example and see how top/app/Jamfile can use libraries from top/util/foo. If top/util/foo/Jamfile contains

lib bar : bar.cpp ;

then to use this library in top/app/Jamfile, we can write:

exe app : app.cpp ../util/foo//bar ;

While app.cpp refers to a regular source file, ../util/foo//bar is a reference to another target: a library bar declared in the Jamfile at ../util/foo.

Some other build systems have special syntax for listing dependent libraries, for example a LIBS variable. In B2, you just add the library to the list of sources.

Suppose we build app with:

b2 app optimization=full define=USE_ASM

Which properties will be used to build foo? The answer is that some features are propagated — B2 attempts to use dependencies with the same value of propagated features. The <optimization> feature is propagated, so both app and foo will be compiled with full optimization. But <define> is not propagated: its value will be added as-is to the compiler flags for app.cpp, but won’t affect foo.

Let’s improve this project further. The library probably has some headers that must be used when compiling app.cpp. We could manually add the necessary #include paths to the app requirements as values of the <include> feature, but then this work will be repeated for all programs that use foo. A better solution is to modify util/foo/Jamfile in this way:

project
    : usage-requirements <include>.
    ;

lib bar : bar.cpp ;

Usage requirements are applied not to the target being declared but to its dependents. In this case, <include>. will be applied to all targets that directly depend on bar.

Another improvement is using symbolic identifiers to refer to the library, as opposed to Jamfile locations. In a large project, a library can be used by many targets, and if they all use Jamfile locations, a change in directory organization entails much work. The solution is to use project ids—symbolic names not tied to directory layout. First, we need to assign a project id by adding this code to the top-level Jamfile (top/Jamfile):

use-project /library-example/foo : util/foo ;

Second, we modify app/Jamfile to use the project id:

exe app : app.cpp /library-example/foo//bar ;

The /library-example/foo//bar syntax is used to refer to the target bar in the project with id /library-example/foo. We’ve achieved our goal—if the library is moved to a different directory, only top/Jamfile must be modified. Note that project ids are global—two Jamfiles are not allowed to assign the same project id to different directories.

If you want all applications in some project to link to a certain library, you can avoid having to specify directly the sources of every target by using the <library> property. For example, if /boost/filesystem//fs should be linked to all applications in your project, you can add <library>/boost/filesystem//fs to the project’s requirements, like this:

project
   : requirements <library>/boost/filesystem//fs
   ;

3.5. Static and shared libraries

Libraries can be either static, which means they are included in executable files that use them, or shared (a.k.a. dynamic), which are only referred to from executables, and must be available at run time. B2 can create and use both kinds.

The kind of library produced from a lib target is determined by the value of the link feature. The default value is shared; to build a static library, the value should be static. You can request a static build either on the command line:

b2 link=static

or in the library’s requirements:

lib l : l.cpp : <link>static ;

We can also use the <link> property to express linking requirements on a per-target basis. For example, if a particular executable can be correctly built only with the static version of a library, we can qualify the executable’s target reference to the library as follows:

exe important : main.cpp helpers/<link>static ;

No matter what arguments are specified on the b2 command line, important will only be linked with the static version of helpers.

Specifying properties in target references is especially useful if you use a library defined in some other project (one you can’t change) but you still want static (or dynamic) linking to that library in all cases. If that library is used by many targets, you could use target references everywhere:

exe e1 : e1.cpp /other_project//bar/<link>static ;
exe e10 : e10.cpp /other_project//bar/<link>static ;

but that’s far from being convenient. A better approach is to introduce a level of indirection. Create a local alias target that refers to the static (or dynamic) version of foo:

alias foo : /other_project//bar/<link>static ;
exe e1 : e1.cpp foo ;
exe e10 : e10.cpp foo ;

The alias rule is specifically used to rename a reference to a target and possibly change the properties.

When one library uses another, you put the second library in the source list of the first. For example:

lib utils : utils.cpp /boost/filesystem//fs ;
lib core : core.cpp utils ;
exe app : app.cpp core ;

This works no matter what kind of linking is used. When core is built as a shared library, utils is linked directly into it. Static libraries can’t link to other libraries, so when core is built as a static library, its dependency on utils is passed along to core's dependents, causing app to be linked with both core and utils.

Note for non-UNIX systems: typically, shared libraries must be installed to a directory in the dynamic linker’s search path. Otherwise, applications that use shared libraries can’t be started. On Windows, the dynamic linker’s search path is given by the PATH environment variable. This restriction is lifted when you use B2 testing facilities—the PATH variable will be automatically adjusted before running the executable.

3.6. Conditions and alternatives

Sometimes, particular relationships need to be maintained among a target’s build properties. For example, you might want to set a specific #define when a library is built as shared, or when a target’s release variant is built. This can be achieved using conditional requirements.

lib network : network.cpp
    : <link>shared:<define>NETWORK_LIB_SHARED
      <variant>release:<define>EXTRA_FAST
    ;

In the example above, whenever network is built with <link>shared, <define>NETWORK_LIB_SHARED will be in its properties, too. Also, whenever its release variant is built, <define>EXTRA_FAST will appear in its properties.

Sometimes the ways a target is built are so different that describing them using conditional requirements would be hard. For example, imagine that a library actually uses different source files depending on the toolset used to build it. We can express this situation using target alternatives:

lib demangler : dummy_demangler.cpp ;                # (1)
lib demangler : demangler_gcc.cpp : <toolset>gcc ;   # (2)
lib demangler : demangler_msvc.cpp : <toolset>msvc ; # (3)

When building demangler, B2 will compare the requirements for each alternative with the build properties to find the best match. For example, when building with <toolset>gcc, alternative (2) will be selected, and when building with <toolset>msvc, alternative (3) will be selected. In all other cases, the most generic alternative (1) will be built.

3.7. Prebuilt targets

To link to libraries whose build instructions aren’t given in a Jamfile, you need to create lib targets with an appropriate file property. Target alternatives can be used to associate multiple library files with a single conceptual target. For example:

# util/lib2/Jamfile
lib lib2
    :
    : <file>lib2_release.a <variant>release
    ;

lib lib2
    :
    : <file>lib2_debug.a <variant>debug
    ;

This example defines two alternatives for lib2, and for each one names a prebuilt file. Naturally, there are no sources. Instead, the <file> feature is used to specify the file name.

Once a prebuilt target has been declared, it can be used just like any other target:

exe app : app.cpp ../util/lib2//lib2 ;

As with any target, the alternative selected depends on the properties propagated from lib2's dependents. If we build the release and debug versions of app, it will be linked with lib2_release.a and lib2_debug.a, respectively.

System libraries — those that are automatically found by the toolset by searching through some set of predetermined paths — should be declared almost like regular ones:

lib pythonlib : : <name>python22 ;

We again don’t specify any sources, but give a name that should be passed to the compiler. If the gcc toolset were used to link an executable target to pythonlib, -lpython22 would appear in the command line (other compilers may use different options).

We can also specify where the toolset should look for the library:

lib pythonlib : : <name>python22 <search>/opt/lib ;

And, of course, target alternatives can be used in the usual way:

lib pythonlib : : <name>python22 <variant>release ;
lib pythonlib : : <name>python22_d <variant>debug ;

A more advanced use of prebuilt targets is described in the section called “Targets in site-config.jam”.

4. Overview

This section will provide the information necessary to create your own projects using B2. The information provided here is relatively high-level, and the Reference as well as the on-line help system must be used to obtain low-level documentation (see --help).

B2 has two parts — a build engine with its own interpreted language, and B2 itself, implemented in that language. The chain of events when you type b2 on the command line is as follows:

  1. The B2 executable tries to find B2 modules and loads the top-level module. The exact process is described in the section called “Initialization”

  2. The top-level module loads user-defined configuration files, user-config.jam and site-config.jam, which define available toolsets.

  3. The Jamfile in the current directory is read. That in turn might cause reading of further Jamfiles. As a result, a tree of projects is created, with targets inside projects.

  4. Finally, using the build request specified on the command line, B2 decides which targets should be built and how. That information is passed back to Boost.Jam, which takes care of actually running the scheduled build action commands.

So, to be able to successfully use B2, you need to know only a few things, which are covered in the sections that follow.

4.1. Concepts

B2 has a few unique concepts that are introduced in this section. The best way to explain the concepts is by comparison with more classical build tools.

When using any flavor of make, you directly specify targets and the commands used to create them from other targets. The example below creates a.o from a.c using a hardcoded compiler invocation command.

a.o: a.c
    g++ -o a.o -g a.c

This is a rather low-level description mechanism and it’s hard to adjust commands, options, and sets of created targets depending on the compiler and operating system used.

To improve portability, most modern build systems provide a set of higher-level functions that can be used in build description files. Consider this example:

add_program ("a", "a.c")

This is a function call that creates the targets necessary to create an executable file from the source file a.c. Depending on configured properties, different command lines may be used. However, while add_program is higher level, it is still a rather thin layer. All targets are created immediately when the build description is parsed, which makes it impossible to perform multi-variant builds. Often, a change in any build property requires a complete reconfiguration of the build tree.

In order to support true multi-variant builds, B2 introduces the concept of a metatarget (also called a main target) — an object that is created when the build description is parsed and can be called later with specific build properties to generate actual targets.

Consider an example:

exe a : a.cpp ;

When this declaration is parsed, B2 creates a metatarget, but does not yet decide what files must be created, or what commands must be used. After all build files are parsed, B2 considers the properties requested on the command line. Suppose you have invoked B2 with:

b2 toolset=gcc toolset=msvc

In that case, the metatarget will be called twice, once with toolset=gcc and once with toolset=msvc. Both invocations will produce concrete targets that will have different extensions and use different command lines.

Another key concept is build property. A build property is a variable that affects the build process. It can be specified on the command line, and is passed when calling a metatarget. While all build tools have a similar mechanism, B2 differs by requiring that all build properties are declared in advance, and providing a large set of properties with portable semantics.

The final concept is property propagation. B2 does not require that every metatarget is called with the same properties. Instead, the "top-level" metatargets are called with the properties specified on the command line. Each metatarget can elect to augment or override some properties (in particular, using the requirements mechanism, see the section called “Requirements”). Then, the dependency metatargets are called with the modified properties and produce concrete targets that are then used in the build process. Of course, dependency metatargets may in turn modify build properties and have dependencies of their own.
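
As a rough sketch (the target and file names here are hypothetical), a dependency can override a propagated property for itself through its own requirements while still receiving the rest of the build request:

exe app : app.cpp core ;
lib core : core.cpp : <optimization>off ;  # core ignores the requested optimization level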

For a more in-depth treatment of the requirements and concepts, you may refer to the SYRCoSE 2009 B2 article.

4.2. Boost.Jam Language

This section will describe the basics of the Boost.Jam language—just enough for writing Jamfiles. For more information, please see the Boost.Jam documentation.

Boost.Jam has an interpreted, procedural language. On the lowest level, a Boost.Jam program consists of variables and rules (the Jam term for functions). They are grouped into modules—there is one global module and a number of named modules. Besides that, a Boost.Jam program contains classes and class instances.

Syntactically, a Boost.Jam program consists of two kinds of elements—keywords (which have a special meaning to Boost.Jam) and literals. Consider this code:

a = b ;

which assigns the value b to the variable a. Here, = and ; are keywords, while a and b are literals.

All syntax elements, even keywords, must be separated by spaces. For example, omitting the space character before ; will lead to a syntax error.
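
For instance (the variable names are arbitrary):

a = b;   # syntax error: ";" is not separated from the preceding token
a = b ;  # correct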

If you want to use a literal value that is the same as some keyword, the value can be quoted:

a = "=" ;

All variables in Boost.Jam have the same type—list of strings. To define a variable one assigns a value to it, like in the previous example. An undefined variable is the same as a variable with an empty value. Variables can be accessed using the $(variable) syntax. For example:

a = $(b) $(c) ;

Rules are defined by specifying the rule name, the parameter names, and the allowed value list size for each parameter.

rule example
 (
     parameter1 :
     parameter2 ? :
     parameter3 + :
     parameter4 *
 )
 {
    # rule body
 }

When this rule is called, the list passed as the first argument must have exactly one value. The list passed as the second argument can either have one value or be empty. The two remaining arguments can be arbitrarily long, but the third argument may not be empty.
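
A call that satisfies these constraints could look like this (the argument values are arbitrary):

example first : : third-a third-b ;

Here the first list has exactly one element, the second is empty, the third has two elements, and the fourth is omitted, which is the same as passing an empty list.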

The overview of Boost.Jam language statements is given below:

helper 1 : 2 : 3 ;
x = [ helper 1 : 2 : 3 ] ;

This code calls the named rule with the specified arguments. When the result of the call must be used inside some expression, you need to add brackets around the call, as shown on the second line.

if cond { statements } [ else { statements } ]

This is a regular if-statement. The condition is composed of:

  • Literals (true if at least one string is not empty)

  • Comparisons: a operator b where operator is one of =, !=, <, <=, >, or >=. The comparison is done pairwise between each string in the left and the right arguments.

  • Logical operations: ! a, a && b, a || b

  • Grouping: ( cond )

for var in list { statements }

Executes statements for each element in list, setting the variable var to the element value.

while cond { statements }

Repeatedly execute statements while cond remains true upon entry.

return values ;

This statement should be used only inside a rule and returns values to the caller of the rule.

import module ;
import module : rule ;

The first form imports the specified module. All rules from that module are made available using the qualified name: module.rule. The second form imports the specified rules only, and they can be called using unqualified names.
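
For illustration, here is a small sketch combining several of these statements. The module mymodule and its rule greet are hypothetical; ECHO is a builtin rule.

import mymodule ;           # rules are now available as mymodule.greet
import mymodule : greet ;   # greet can now be called unqualified

for word in alpha beta gamma
{
    if $(word) = beta
    {
        ECHO found $(word) ;
    }
}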

Sometimes, you need to specify the actual command lines to be used when creating targets. In the jam language, you use named actions to do this. For example:

actions create-file-from-another
{
    create-file-from-another $(<) $(>)
}

This specifies a named action called create-file-from-another. The text inside braces is the command to invoke. The $(<) variable will be expanded to a list of generated files, and the $(>) variable will be expanded to a list of source files.

To adjust the command line flexibly, you can define a rule with the same name as the action and taking three parameters — targets, sources and properties. For example:

rule create-file-from-another ( targets * : sources * : properties * )
{
   if <variant>debug in $(properties)
   {
       OPTIONS on $(targets) = --debug ;
   }
}
actions create-file-from-another
{
    create-file-from-another $(OPTIONS) $(<) $(>)
}

In this example, the rule checks if a certain build property is specified. If so, it sets the variable OPTIONS that is then used inside the action. Note that the variables set "on a target" will be visible only inside actions building that target, not globally. Were they set globally, using a variable named OPTIONS in two unrelated actions would be impossible.

More details can be found in the Jam reference, the section called “Rules”.

4.3. Configuration

On startup, B2 searches for and reads three configuration files: site-config.jam, user-config.jam, and project-config.jam. The first one is usually installed and maintained by a system administrator, and the second is for the user to modify. You can edit the one in the top-level directory of your B2 installation or create a copy in your home directory and edit the copy. The third is used for project-specific configuration. The following table explains where the files are searched.

Table 1. Search paths for configuration files

            site-config.jam           user-config.jam           project-config.jam

Linux       /etc                      $HOME                     .
            $HOME                     $BOOST_BUILD_PATH         ..
            $BOOST_BUILD_PATH                                   ../..
                                                                ...

Windows     %SystemRoot%              %HOMEDRIVE%%HOMEPATH%     .
            %HOMEDRIVE%%HOMEPATH%     %HOME%                    ..
            %HOME%                    %BOOST_BUILD_PATH%        ../..
            %BOOST_BUILD_PATH%                                  ...

Any of these files may also be overridden on the command line.

You can use the --debug-configuration option to find which configuration files are actually loaded.

Usually, user-config.jam just defines the available compilers and other tools (see the section called “Targets in site-config.jam” for more advanced usage). A tool is configured using the following syntax:

using tool-name : ... ;

The using rule is given the name of a tool and will make that tool available to B2. For example,

using gcc ;

will make the GCC compiler available.

You can put using <tool> ; with no other argument in a Jamfile that needs the tool, provided that the tool supports this usage. In all other cases, the using rule should be in a configuration file. The general principle is that descriptions in Jamfiles should be kept portable, while configuration files are system-specific.

All the supported tools are documented in the section called “Builtin tools”, including the specific options they take. Some general notes that apply to most C++ compilers are below.

For all the C++ compiler toolsets that B2 supports out-of-the-box, the list of parameters to using is the same: toolset-name, version, invocation-command, and options.

If you have a single compiler, and the compiler executable

  • has its “usual name” and is in the PATH, or

  • was installed in a standard “installation directory”, or

  • can be found using a global system like the Windows registry.

then it can be configured simply with:

using tool-name ;

If the compiler is installed in a custom directory, you should provide the command that invokes the compiler, for example:

using gcc : : g++-3.2 ;
using msvc : : "Z:/Programs/Microsoft Visual Studio/vc98/bin/cl" ;

Some B2 toolsets will use that path to take additional actions required before invoking the compiler, such as calling vendor-supplied scripts to set up its required environment variables. When the compiler executables for C and C++ are different, the path to the C++ compiler executable must be specified. The command can be any command allowed by the operating system. For example:

using msvc : : echo Compiling && foo/bar/baz/cl ;

will work.

To configure several versions of a toolset, simply invoke the using rule multiple times:

using gcc : 3.3 ;
using gcc : 3.4 : g++-3.4 ;
using gcc : 3.2 : g++-3.2 ;
using gcc : 5 ;
using clang : 3.9 ;
using msvc : 14.0 ;

Note that in the first call to using, the compiler found in the PATH will be used, and there is no need to explicitly specify the command.

Many toolsets have an options parameter to fine-tune the configuration. All of B2’s standard compiler toolsets accept four options (cflags, cxxflags, compileflags, and linkflags) that specify flags which will always be passed to the corresponding tools. There must not be a space between the tag for the option name and the value. Values of the cflags feature are passed directly to the C compiler, values of the cxxflags feature are passed directly to the C++ compiler, and values of the compileflags feature are passed to both. For example, to configure a gcc toolset so that it always generates 64-bit code, you could write:

using gcc : 3.4 : : <compileflags>-m64 <linkflags>-m64 ;

If multiple options of the same type are needed, they can be concatenated with quotes or given as multiple instances of the option tag.

using gcc : 5 : : <cxxflags>"-std=c++14 -O2" ;
using clang : 3.9 : : <cxxflags>-std=c++14 <cxxflags>-O2 ;

Multiple variations of the same tool can be used for most tools. These are delineated by the version passed in. Because the dash '-' cannot be used here, the convention has become to use the tilde '~' to delineate variations.

using gcc : 5 : g++-5 : ; # default is C++ 98
using gcc : 5~c++03 : g++-5 : <cxxflags>-std=c++03 ; # C++ 03
using gcc : 5~gnu03 : g++-5 : <cxxflags>-std=gnu++03 ; # C++ 03 with GNU
using gcc : 5~c++11 : g++-5 : <cxxflags>-std=c++11 ; # C++ 11
using gcc : 5~c++14 : g++-5 : <cxxflags>-std=c++14 ; # C++ 14

These are then used as normal toolsets:

b2 toolset=gcc-5 stage
b2 toolset=gcc-5~c++14 stage

Although the syntax used to specify toolset options is very similar to the syntax used to specify requirements in Jamfiles, the toolset options are not the same as features. Don’t try to specify a feature value in toolset initialization.

4.4. Invocation

To invoke B2, type b2 on the command line. Three kinds of command-line tokens are accepted, in any order:

options

Options start with either one or two dashes. The standard options are listed below, and each project may add additional options

properties

Properties specify details of what you want to build (e.g. debug or release variant). Syntactically, all command line tokens with an equal sign in them are considered to specify properties. In the simplest form, a property looks like feature=value

target

All tokens that are neither options nor properties specify what targets to build. The available targets entirely depend on the project you are building.

4.4.1. Examples

To build all targets defined in the Jamfile in the current directory with the default properties, run:

b2

To build specific targets, specify them on the command line:

b2 lib1 subproject//lib2

To request a certain value for some property, add property=value to the command line:

b2 toolset=gcc variant=debug optimization=space

4.4.2. Options

B2 recognizes the following command line options.

--help

Invokes the online help system. This prints general information on how to use the help system with additional --help* options.

--clean

Cleans all targets in the current directory and in any sub-projects. Note that unlike the clean target in make, you can use --clean together with target names to clean specific targets.

--clean-all

Cleans all targets, no matter where they are defined. In particular, it will clean targets in parent Jamfiles, and targets defined under other project roots.

--build-dir

Changes the build directories for all project roots being built. When this option is specified, all Jamroot files must declare a project name. The build directory for the project root will be computed by concatenating the value of the --build-dir option, the project name specified in Jamroot, and the build dir specified in Jamroot (or bin, if none is specified). The option is primarily useful when building from read-only media, when you can’t modify Jamroot.

--abbreviate-paths

Compresses target paths by abbreviating each component. This option is useful to keep paths from becoming longer than the filesystem supports. See also the section called “Target Paths”.

--hash

Compresses target paths using an MD5 hash. This option is useful to keep paths from becoming longer than the filesystem supports. This option produces shorter paths than --abbreviate-paths does, but at the cost of making them less understandable. See also the section called “Target Paths”.

--version

Prints information on the B2 and Boost.Jam versions.

-a

Causes all files to be rebuilt.

-n

Do not execute the commands, only print them.

-q

Stop at the first error, as opposed to continuing to build targets that don’t depend on the failed ones.

-j N

Run up to N commands in parallel. Default number of jobs is the number of detected available CPU threads. Note: There are circumstances when that default can be larger than the allocated cpu resources, for instance in some virtualized container installs.

--config=filename

Override all configuration files

--site-config=filename

Override the default site-config.jam

--user-config=filename

Override the default user-config.jam

--project-config=filename

Override the default project-config.jam

--debug-configuration

Produces debug information about the loading of B2 and toolset files.

--debug-building

Prints what targets are being built and with what properties.

--debug-generators

Produces debug output from the generator search process. Useful for debugging custom generators.

-d0

Suppress all informational messages.

-d N

Enable cumulative debugging levels from 1 to n. Values are:

  1. Show the actions taken for building targets, as they are executed (the default).

  2. Show "quiet" actions and display all action text, as they are executed.

  3. Show dependency analysis, and target/source timestamps/paths.

  4. Show arguments and timing of shell invocations.

  5. Show rule invocations and variable expansions.

  6. Show directory/header file/archive scans, and attempts at binding to targets.

  7. Show variable settings.

  8. Show variable fetches, variable expansions, and evaluation of "if" expressions.

  9. Show variable manipulation, scanner tokens, and memory usage.

  10. Show profile information for rules, both timing and memory.

  11. Show parsing progress of Jamfiles.

  12. Show graph of target dependencies.

  13. Show change target status (fate).

-d +N

Enable debugging level N.

-o file

Write the updating actions to the specified file instead of running them.

-s var=value

Set the variable var to value in the global scope of the jam language interpreter, overriding variables imported from the environment.

--command-database=format

Output a compile commands database as format. Currently format can be: json. (See Command Database for details.)

--command-database-out=file

Specify the file path to output the commands database to.

4.4.3. Properties

In the simplest case, the build is performed with a single set of properties that you specify on the command line with elements in the form feature=value. The complete list of features can be found in the section called “Builtin features”. The most common features are summarized below.

Feature         Allowed values               Notes

variant         debug, release

link            shared, static               Determines if B2 creates shared or static libraries.

threading       single, multi                Causes the produced binaries to be thread-safe. This requires proper support in the source code itself.

address-model   32, 64                       Explicitly request either 32-bit or 64-bit code generation. This typically requires that your compiler is appropriately configured. Please refer to the section called “C++ Compilers” and your compiler documentation in case of problems.

toolset         (Depends on configuration)   The C++ compiler to use. See the section called “C++ Compilers” for a detailed list.

include         (Arbitrary string)           Additional include paths for C and C++ compilers.

define          (Arbitrary string)           Additional macro definitions for C and C++ compilers. The string should be either SYMBOL or SYMBOL=VALUE.

cxxflags        (Arbitrary string)           Custom options to pass to the C++ compiler.

cflags          (Arbitrary string)           Custom options to pass to the C compiler.

linkflags       (Arbitrary string)           Custom options to pass to the C++ linker.

runtime-link    shared, static               Determines if the shared or static version of the C and C++ runtimes should be used.

If you have more than one version of a given C++ toolset (e.g. configured in user-config.jam, or autodetected, as happens with msvc), you can request the specific version by passing toolset-version as the value of the toolset feature, for example toolset=msvc-8.0.

If a feature has a fixed set of values, it can be specified more than once on the command line, in which case everything will be built several times — once for each specified value of the feature. For example, if you use

b2 link=static link=shared threading=single threading=multi

Then a total of 4 builds will be performed. For convenience, instead of specifying all requested values of a feature in separate command line elements, you can separate the values with commas, for example:

b2 link=static,shared threading=single,multi

The comma has this special meaning only if the feature has a fixed set of values, so

b2 include=static,shared

is not treated specially.

Multiple features may be grouped by using a forward slash.

b2 gcc/link=shared msvc/link=static,shared

This will build three different variants altogether: gcc with shared linking, and msvc with both static and shared linking.

4.4.4. Targets

All command line elements that are neither options nor properties are the names of the targets to build. See the section called “Target identifiers and references”. If no target is specified, the project in the current directory is built.

4.5. Declaring Targets

A Main target is a user-defined named entity that can be built, for example an executable file. Declaring a main target is usually done using one of the main target rules described in the section called “Builtin rules”. The user can also declare custom main target rules as shown in the section called “Main target rules”.

Most main target rules in B2 have the same common signature:

rule rule-name (
     main-target-name :
     sources + :
     requirements * :
     default-build * :
     usage-requirements * )

  • main-target-name is the name used to request the target on the command line and to use it from other main targets. A main target name may contain alphanumeric characters, dashes (‘-’), and underscores (‘_’).

  • sources is the list of source files and other main targets that must be combined.

  • requirements is the list of properties that must always be present when this main target is built.

  • default-build is the list of properties that will be used unless some other value of the same feature is already specified, e.g. on the command line or by propagation from a dependent target.

  • usage-requirements is the list of properties that will be propagated to all main targets that use this one, i.e. to all its dependents.

Some main target rules have a different list of parameters as explicitly stated in their documentation.

The actual requirements for a target are obtained by refining the requirements of the project where the target is declared with the explicitly specified requirements. The same is true for usage-requirements. More details can be found in the section called “Property refinement”.
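
For example (the define names here are arbitrary), a target declared in a project that has its own requirements is built with the refined combination of both:

project
    : requirements <define>COMMON_DEFINE
    ;

exe app : app.cpp : <define>APP_ONLY ;
# app is compiled with both COMMON_DEFINE and APP_ONLY defined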

4.5.1. Name

The name of a main target has two purposes. First, it’s used to refer to this target from other targets and from the command line. Second, it’s used to compute the names of the generated files. Typically, filenames are obtained from the main target name by appending system-dependent suffixes and prefixes.

The name of a main target can contain alphanumeric characters, dashes, underscores and dots. The entire name is significant when resolving references from other targets. For determining filenames, only the part before the first dot is taken. For example:

obj test.release : test.cpp : <variant>release ;
obj test.debug : test.cpp : <variant>debug ;

will generate two files named test.obj (in two different directories), not two files named test.release.obj and test.debug.obj.

4.5.2. Sources

The list of sources specifies what should be processed to get the resulting targets. Most of the time, it’s just a list of files. Sometimes, you’ll want to automatically construct the list of source files rather than having to spell it out manually, in which case you can use the glob rule. Here are two examples:

exe a : a.cpp ; (1)
exe b : [ glob *.cpp ] ; (2)

  1. a.cpp is the only source file

  2. all .cpp files in this directory are sources

Unless you specify a file with an absolute path, the name is considered relative to the source directory — which is typically the directory where the Jamfile is located, but can be changed as described in the section called “Projects”.

The list of sources can also refer to other main targets. Targets in the same project can be referred to by name, while targets in other projects must be qualified with a directory or a symbolic project name. The directory/project name is separated from the target name by a double forward slash. There is no special syntax to distinguish the directory name from the project name—the part before the double slash is first looked up as project name, and then as directory name. For example:

lib helper : helper.cpp ;
exe a : a.cpp helper ;
exe b : b.cpp ..//utils ; (1)
exe c : c.cpp /boost/program_options//program_options ;

  1. Since all project ids start with a slash, “..” is a directory name.

The first exe uses the library defined in the same project. The second one uses some target (most likely a library) defined by a Jamfile one level higher. Finally, the third target uses a C++ Boost library, referring to it using its absolute symbolic name. More information about target references can be found in the section called “Dependent Targets” and the section called “Target identifiers and references”.

4.5.3. Requirements

Requirements are the properties that should always be present when building a target. Typically, they are includes and defines:

exe hello : hello.cpp : <include>/opt/boost <define>MY_DEBUG ;

There are a number of other features, listed in the section called “Builtin features”. For example, if a library can only be built statically, or a file can’t be compiled with optimization due to a compiler bug, one can use:

lib util : util.cpp : <link>static ;
obj main : main.cpp : <optimization>off ;

Sometimes, particular relationships need to be maintained among a target’s build properties. This can be achieved with conditional requirements. For example, you might want to set specific #defines when a library is built as shared, or when a target’s release variant is built.

lib network : network.cpp
    : <link>shared:<define>NETWORK_LIB_SHARED
     <variant>release:<define>EXTRA_FAST
    ;

In the example above, whenever network is built with <link>shared, <define>NETWORK_LIB_SHARED will be in its properties, too.

You can use several properties in the condition, for example:

lib network : network.cpp
    : <toolset>gcc,<optimization>speed:<define>USE_INLINE_ASSEMBLER
    ;

A more powerful variant of conditional requirements is indirect conditional requirements. You can provide a rule that will be called with the current build properties and can compute additional properties to be added. For example:

lib network : network.cpp
    : <conditional>@my-rule
    ;
rule my-rule ( properties * )
{
    local result ;
    if <toolset>gcc <optimization>speed in $(properties)
    {
        result += <define>USE_INLINE_ASSEMBLER ;
    }
    return $(result) ;
}

This example is equivalent to the previous one, but for complex cases, indirect conditional requirements can be easier to write and understand.

Requirements explicitly specified for a target are usually combined with the requirements specified for the containing project. You can cause a target to completely ignore a specific project requirement by adding a minus sign before the property, for example:

exe main : main.cpp : -<define>UNNECESSARY_DEFINE ;

This syntax is the only way to ignore free properties, such as defines, from a parent. It can be also useful for ordinary properties. Consider this example:

project test : requirements <threading>multi ;
exe test1 : test1.cpp ;
exe test2 : test2.cpp : <threading>single ;
exe test3 : test3.cpp : -<threading>multi ;

Here, test1 inherits the project requirements and will always be built in multi-threaded mode. The test2 target overrides the project’s requirements and will always be built in single-threaded mode. In contrast, the test3 target removes a property from the project requirements and will be built in either single-threaded or multi-threaded mode, depending on which threading value is requested by the user.

Note that the removal of requirements is completely textual: you need to specify exactly the same property to remove it.

4.5.4. Default Build

The default-build parameter is a set of properties to be used if the build request does not otherwise specify a value for features in the set. For example:

exe hello : hello.cpp : : <threading>multi ;

would build a multi-threaded target unless the user explicitly requests a single-threaded version. The difference between the requirements and the default-build is that the requirements cannot be overridden in any way.
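
For example, with the declaration above, running

b2 threading=single

builds the single-threaded version, whereas a <threading>multi requirement could not have been overridden from the command line in this way.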

4.5.5. Additional Information

The ways a target is built can be so different that describing them using conditional requirements would be hard. For example, imagine that a library actually uses different source files depending on the toolset used to build it. We can express this situation using target alternatives:

lib demangler : dummy_demangler.cpp ;                # alternative 1
lib demangler : demangler_gcc.cpp : <toolset>gcc ;   # alternative 2
lib demangler : demangler_msvc.cpp : <toolset>msvc ; # alternative 3

In the example above, when built with gcc or msvc, demangler will use a source file specific to the toolset. Otherwise, it will use a generic source file, dummy_demangler.cpp.

It is possible to declare a target inline, i.e. the "sources" parameter may include calls to other main rules. For example:

exe hello : hello.cpp
    [ obj helpers : helpers.cpp : <optimization>off ] ;

Will cause "helpers.cpp" to be always compiled without optimization. When referring to an inline main target, its declared name must be prefixed by its parent target’s name and two dots. In the example above, to build only helpers, one should run b2 hello..helpers.

When no target is requested on the command line, all targets in the current project will be built. If a target should be built only by explicit request, this can be expressed by the explicit rule:

explicit install_programs ;

4.6. Projects

As mentioned before, targets are grouped into projects, and each Jamfile is a separate project. Projects are useful because they allow us to group related targets together, define properties common to all those targets, and assign a symbolic name to the project that can be used in referring to its targets.

Projects are named using the project rule, which has the following syntax:

project id : attributes ;

Here, attributes is a sequence of rule arguments, each of which begins with an attribute-name and is followed by any number of build properties. The list of attribute names, along with their handling, is shown in the table below. For example, it is possible to write:

project tennis
    : requirements <threading>multi
    : default-build release
    ;

The possible attributes are listed below.

Project id is a short way to denote a project, as opposed to the Jamfile’s pathname. It is a hierarchical path, unrelated to the filesystem, such as "boost/thread". Target references make use of project ids to specify a target.

Source location specifies the directory where sources for the project are located.

Project requirements are requirements that apply to all the targets in the project as well as to all sub-projects.

Default build is the build request that should be used when no build request is specified explicitly.

The default values for those attributes are given in the table below.

Project id
    Attribute name: none
    Default value: none
    Handling by the project rule: assigned from the first parameter of the 'project' rule; it is assumed to denote an absolute project id.

Source location
    Attribute name: source-location
    Default value: the location of the Jamfile for the project
    Handling by the project rule: set to the passed value.

Requirements
    Attribute name: requirements
    Default value: the parent’s requirements
    Handling by the project rule: the parent’s requirements are refined with the passed requirements and the result is used as the project requirements.

Default build
    Attribute name: default-build
    Default value: none
    Handling by the project rule: set to the passed value.

Build directory
    Attribute name: build-dir
    Default value: empty if the parent has no build directory set; otherwise, the parent’s build directory with the relative path from the parent to the current project appended to it.
    Handling by the project rule: set to the passed value, interpreted as relative to the project’s location.

Besides defining projects and main targets, Jamfiles often invoke various utility rules. For the full list of rules that can be directly used in Jamfile see the section called “Builtin rules”.

Each subproject inherits attributes, constants and rules from its parent project, which is defined by the nearest Jamfile in an ancestor directory above the subproject. The top-level project is declared in a file called Jamroot, or Jamfile. When loading a project, B2 looks for either Jamroot or Jamfile. They are handled identically, except that if the file is called Jamroot, the search for a parent project is not performed. A Jamfile without a parent project is also considered the top-level project.

Even when building in a subproject directory, parent project files are always loaded before those of their sub-projects, so that every definition made in a parent project is always available to its children. The loading order of any other projects is unspecified. Even if one project refers to another via the use-project or a target reference, no specific order should be assumed.

Giving the root project the special name “Jamroot” ensures that B2 won’t misinterpret a directory above it as the project root just because the directory contains a Jamfile.

4.7. The Build Process

When you’ve described your targets, you want B2 to run the right tools and create the needed targets. This section will describe two things: how you specify what to build, and how the main targets are actually constructed.

The most important thing to note is that in B2, unlike other build tools, the targets you declare do not correspond to specific files. What you declare in a Jamfile is more like a “metatarget.” Depending on the properties you specify on the command line, each metatarget will produce a set of real targets corresponding to the requested properties. It is quite possible that the same metatarget is built several times with different properties, producing different files.

This means that for B2, you cannot directly obtain a build variant from a Jamfile. There could be several variants requested by the user, and each target can be built with different properties.

4.7.1. Build Request

The command line specifies which targets to build and with which properties. For example:

b2 app1 lib1//lib1 toolset=gcc variant=debug optimization=full

would build two targets, "app1" and "lib1//lib1", with the specified properties. You can refer to any targets using target ids and specify arbitrary properties. Some of the properties are very common, and for them the name of the property can be omitted. For example, the above can be written as:

b2 app1 lib1//lib1 gcc debug optimization=full

The complete syntax, which has some additional shortcuts, is described in the section called “Invocation”.

4.7.2. Building a main target

When you request, directly or indirectly, a build of a main target with specific requirements, the following steps are done. Some brief explanation is provided, and more details are given in the section called “Build process”.

  1. Applying default build. If the default-build property of a target specifies a value of a feature that is not present in the build request, that value is added.

  2. Selecting the main target alternative to use. For each alternative, we look at how many properties are present both in the alternative’s requirements and in the build request. The alternative with the largest number of matching properties is selected.

  3. Determining "common" properties. The build request is refined with the target’s requirements. The conditional properties in requirements are handled as well. Finally, default values of features are added.

  4. Building targets referred to by the sources list and dependency properties. The list of sources and the properties can refer to other targets using target references. For each reference, we take all propagated properties, refine them by the explicit properties specified in the target reference, and pass the resulting properties as a build request to the other target.

  5. Adding the usage requirements produced when building dependencies to the "common" properties. When dependencies are built in the previous step, they return both the set of created "real" targets, and usage requirements. The usage requirements are added to the common properties and the resulting property set will be used for building the current target.

  6. Building the target using generators. To convert the sources to the desired type, B2 uses "generators" — objects that correspond to tools like compilers and linkers. Each generator declares what type of targets it can produce and what type of sources it requires. Using this information, B2 determines which generators must be run to produce a specific target from specific sources. When generators are run, they return the "real" targets.

  7. Computing the usage requirements to be returned. The conditional properties in usage requirements are expanded and the result is returned.

4.7.3. Building a Project

Often, a user builds a complete project, not just one main target. In fact, invoking b2 without arguments builds the project defined in the current directory.

When a project is built, the build request is passed without modification to all main targets in that project. It is possible to prevent implicit building of a target in a project with the explicit rule:

explicit hello_test ;

would cause the hello_test target to be built only if explicitly requested by the user or by some other target.

The Jamfile for a project can include a number of build-project rule calls that specify additional projects to be built.

5. Common tasks

This section describes main targets types that B2 supports out-of-the-box. Unless otherwise noted, all mentioned main target rules have the common signature, described in the section called “Declaring Targets”.

5.1. Programs

Programs are created using the exe rule, which follows the common syntax. For example:

exe hello
    : hello.cpp some_library.lib /some_project//library
    : <threading>multi
    ;

This will create an executable file from the sources—in this case, one C++ file, one library file present in the same directory, and another library that is created by B2. Generally, sources can include C and C++ files, object files and libraries. B2 will automatically try to convert targets of other types.

On Windows, if an application uses shared libraries, and both the application and the libraries are built using B2, it is not possible to immediately run the application, because the PATH environment variable must include the path to the libraries. This means you have to either add the paths manually, or have the build place the application and the libraries into the same directory. See the section called “Installing”.

5.2. Libraries

Library targets are created using the lib rule, which follows the common syntax. For example:

lib helpers : helpers.cpp ;

This will define a library target named helpers built from the helpers.cpp source file. It can be either a static library or a shared library, depending on the value of the <link> feature.

Library targets can represent:

  • Libraries that should be built from source, as in the example above.

  • Prebuilt libraries which already exist on the system. Such libraries can be searched for by the tools using them (typically with the linker’s -l option) or their paths can be known in advance by the build system.

The syntax for prebuilt libraries is given below:

lib z : : <name>z <search>/home/ghost ;
lib compress : : <file>/opt/libs/compress.a ;

The name property specifies the name of the library without the standard prefixes and suffixes. For example, depending on the system, z could refer to a file called z.so, libz.a, or z.lib, etc. The search feature specifies paths in which to search for the library in addition to the default compiler paths. search can be specified several times or it can be omitted, in which case only the default compiler paths will be searched. The file property specifies the file location.

The difference between using the file feature and using a combination of the name and search features is that file is more precise.

The value of the search feature is just added to the linker search path. When linking to multiple libraries, the paths specified by search are combined without regard to which lib target each path came from. Thus, given

lib a : : <name>a <search>/pool/release ;
lib b : : <name>b <search>/pool/debug ;

If /pool/release/a.so, /pool/release/b.so, /pool/debug/a.so, and /pool/debug/b.so all exist, the linker will probably take both a and b from the same directory, instead of finding a in /pool/release and b in /pool/debug. If you need to distinguish between multiple libraries with the same name, it’s safer to use file.

For convenience, the following syntax is allowed:

lib z ;
lib gui db aux ;

which has exactly the same effect as:

lib z : : <name>z ;
lib gui : : <name>gui ;
lib db : : <name>db ;
lib aux : : <name>aux ;

When a library references another library, you should put that other library in its list of sources. This will do the right thing in all cases. For portability, you should specify library dependencies even for searched and prebuilt libraries; otherwise, static linking on Unix will not work. For example:

lib z ;
lib png : z : <name>png ;

When a library has a shared library as a source, or a static library has another static library as a source, then any target linking to the first library will automatically link to its source library as well.

On the other hand, when a shared library has a static library as a source then the first library will be built so that it completely includes the second one.

If you do not want a shared library to include all the libraries specified in its sources (especially statically linked ones), you would need to use the following:

lib b : b.cpp ;
lib a : a.cpp : <use>b : : <library>b ;

This specifies that library a uses library b, and causes all executables that link to a to link to b also. In this case, even for shared linking, the a library will not refer to b.

Usage requirements are often very useful for defining library targets. For example, imagine that you want to build a helpers library whose interface is described in its helpers.hpp header file, located in the same directory as the helpers.cpp source file. Then you could add the following to the Jamfile located in that same directory:

lib helpers : helpers.cpp : : : <include>. ;

which would automatically add the directory where the target has been defined (and where the library’s header file is located) to the compiler’s include path for all targets using the helpers library. This feature greatly simplifies Jamfiles.

5.3. Alias

The alias rule gives an alternative name to a group of targets. For example, to give the name core to a group of three other targets, you can use the following code:

alias core : im reader writer ;

Using core on the command line, or in the source list of any other target is the same as explicitly using im, reader, and writer.

Another use of the alias rule is to change build properties. For example, if you want to link statically to the Boost Threads library, you can write the following:

alias threads : /boost/thread//boost_thread : <link>static ;

and use only the threads alias in your Jamfiles.

You can also specify usage requirements for the alias target. If you write the following:

alias header_only_library : : : :  <include>/usr/include/header_only_library ;

then using header_only_library in sources will only add an include path. Also note that when an alias has sources, their usage requirements are propagated as well. For example:

lib library1 : library1.cpp : : : <include>/library/include1 ;
lib library2 : library2.cpp : : : <include>/library/include2 ;
alias static_libraries : library1 library2 : <link>static ;
exe main : main.cpp static_libraries ;

will compile main.cpp with additional includes required for using the specified static libraries.

5.4. Installing

This section describes various ways to install built targets and arbitrary files.

5.4.1. Basic install

For installing a built target you should use the install rule, which follows the common syntax. For example:

install dist : hello helpers ;

will cause the targets hello and helpers to be moved to the dist directory, relative to the Jamfile’s directory. The directory can be changed using the location property:

install dist : hello helpers : <location>/usr/bin ;

While you can achieve the same effect by changing the target name to /usr/bin, using the location property is better as it allows you to use a mnemonic target name.

The location property is especially handy when the location is not fixed, but depends on the build variant or environment variables:

install dist : hello helpers :
    <variant>release:<location>dist/release
    <variant>debug:<location>dist/debug ;
install dist2 : hello helpers : <location>$(DIST) ;

5.4.2. Installing with all dependencies

Specifying the names of all libraries to install can be boring. The install rule allows you to specify only the top-level executable targets to install, and automatically install all dependencies:

install dist : hello :
    <install-dependencies>on <install-type>EXE
    <install-type>LIB
    ;

will find all targets that hello depends on, and install all of those which are either executables or libraries. More specifically, for each target, other targets that were specified as sources or as dependency properties will be recursively found. One exception is that targets referred to with the use feature are not considered, as that feature is typically used to refer to header-only libraries. If the set of target types is specified, only targets of those types will be installed; otherwise, all found targets will be installed.

5.4.3. Preserving Directory Hierarchy

By default, the install rule will strip paths from its sources. So, if sources include a/b/c.hpp, the a/b part will be ignored. To make the install rule preserve the directory hierarchy you need to use the <install-source-root> feature to specify the root of the hierarchy you are installing. Relative paths from that root will be preserved. For example, if you write:

install headers
    : a/b/c.h
    : <location>/tmp <install-source-root>a
    ;

a file named /tmp/b/c.h will be created.

The glob-tree rule can be used to find all files below a given directory, making it easy to install an entire directory tree.

5.4.4. Installing into Several Directories

The alias rule can be used when targets need to be installed into several directories:

alias install : install-bin install-lib ;
install install-bin : applications : /usr/bin ;
install install-lib : helper : /usr/lib ;

Because the install rule just copies targets, most free features [3] have no effect when used in requirements of the install rule. The only two that matter are dependency and, on Unix, dll-path.

(Unix specific) On Unix, executables built using B2 typically contain the list of paths to all used shared libraries. For installing, this is not desired, so B2 relinks the executable with an empty list of paths. You can also specify additional paths for installed executables using the dll-path feature.

5.5. Testing

B2 has convenient support for running unit tests. The simplest way is the unit-test rule, which follows the common syntax. For example:

unit-test helpers_test : helpers_test.cpp helpers ;

The unit-test rule behaves like the exe rule, but after the executable is created it is also run. If the executable returns an error code, the build system will also return an error and will try running the executable on the next invocation until it runs successfully. This behavior ensures that you cannot miss a unit test failure.

There are a few specialized testing rules, listed below:

rule compile ( sources : requirements * : target-name ? )
rule compile-fail ( sources : requirements * : target-name ? )
rule link ( sources + : requirements * : target-name ? )
rule link-fail ( sources + : requirements * : target-name ? )

They are given a list of sources and requirements. If the target name is not provided, the name of the first source file is used instead. The compile* tests try to compile the passed source. The link* rules try to compile and link an application from all the passed sources. The compile and link rules expect that compilation/linking succeeds. The compile-fail and link-fail rules expect that the compilation/linking fails.
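
For example, a Jamfile fragment like the following (with hypothetical file names) checks that one source compiles, that another fails to compile, and that a pair of sources links into an application:

compile has_feature_x.cpp ;
compile-fail must_not_compile.cpp ;
link app_links.cpp helpers.cpp ;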

There are two specialized rules for running executables, which are more powerful than the unit-test rule. The run rule has the following signature:

rule run ( sources + : args * : input-files * : requirements * : target-name ?
    : default-build * )

The rule builds an application from the provided sources and runs it, passing args and input-files as command-line arguments. The args parameter is passed verbatim, and the values of the input-files parameter are treated as paths relative to the containing Jamfile, and are adjusted if b2 is invoked from a different directory. The run-fail rule is identical to the run rule, except that it expects the run to fail.
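
For example, a hypothetical test that passes a command-line flag and an input file to the built program could be declared as:

run parser_test.cpp : --verbose : testdata/sample.txt ;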

All rules described in this section, if executed successfully, create a special manifest file to indicate that the test passed. For the unit-test rule the file is named target-name.passed and for the other rules it is called target-name.test. The run* rules also capture all output from the program, and store it in a file named target-name.output.

If the preserve-test-targets feature has the value off, then the run and run-fail rules will remove the executable after running it. This somewhat decreases disk space requirements for continuous testing environments. The default value of the preserve-test-targets feature is on.

It is possible to print the list of all test targets (except for unit-test) declared in your project by passing the --dump-tests command-line option. The output will consist of lines of the form:

boost-test(test-type) path : sources

It is possible to process the list of tests, the B2 output, and the presence/absence of the *.test files created when a test passes into a human-readable status table of tests. Such processing utilities are not included in B2.

The following features adjust behavior of the testing metatargets.

testing.arg

Defines an argument to be passed to the target when it is executed, before the list of input files.

unit-test helpers_test
    : helpers_test.cpp helpers
    : <testing.arg>"--foo bar"
    ;

testing.input-file

Specifies a file to be passed to the executable on the command line after the arguments. All files must be specified in alphabetical order due to constraints in the current implementation.
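
For example (the file names are hypothetical and listed alphabetically, as required):

unit-test reader_test
    : reader_test.cpp reader
    : <testing.input-file>data/alpha.txt
      <testing.input-file>data/beta.txt
    ;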

testing.launcher

By default, the executable is run directly. Sometimes, it is desirable to run the executable using some helper command. You should use this property to specify the name of the helper command. For example, if you write:

unit-test helpers_test
    : helpers_test.cpp helpers
    : <testing.launcher>valgrind
    ;

The command used to run the executable will be:

valgrind bin/$toolset/debug/helpers_test

test-info

A description of the test. This is displayed as part of the --dump-tests command-line option.

5.6. Custom commands

For most main target rules, B2 automatically figures out the commands to run. When you want to use new file types or support new tools, one approach is to extend B2 to support them smoothly, as documented in Extender Manual. However, if the new tool is only used in a single place, it might be easier just to specify the commands to run explicitly.

Three main target rules can be used for that. The make rule allows you to construct a single file from any number of source files, by running a command you specify. The notfile rule allows you to run an arbitrary command, without creating any files. And finally, the generate rule allows you to describe a transformation using B2’s virtual targets. This is higher-level than the file names that the make rule operates with and allows you to create more than one target, create differently named targets depending on properties, or use more than one tool.

The make rule is used when you want to create one file from a number of sources using some specific command. The notfile rule is used to unconditionally run a command.

Suppose you want to create the file file.out from the file file.in by running the command in2out. Here is how you would do this in B2:

make file.out : file.in : @in2out ;
actions in2out
{
    in2out $(<) $(>)
}

If you run b2 and file.out does not exist, B2 will run the in2out command to create that file. For more details on specifying actions, see the section called “Boost.Jam Language”.

It could be that you just want to run some command unconditionally, and that command does not create any specific files. For that you can use the notfile rule. For example:

notfile echo_something : @echo ;
actions echo
{
    echo "something"
}

The only difference from the make rule is that the name of the target is not considered a name of a file, so B2 will unconditionally run the action.

The generate rule is used when you want to express transformations using B2’s virtual targets, as opposed to just filenames. The generate rule has the standard main target rule signature, but you are required to specify the generating-rule property. The value of the property should be in the form @rule-name; the named rule should have the following signature:

rule generating-rule ( project name : property-set : sources * )

and will be called with an instance of the project-target class, the name of the main target, an instance of the property-set class containing build properties, and the list of instances of the virtual-target class corresponding to sources. The rule must return a list of virtual-target instances. The interface of the virtual-target class can be learned by looking at the build/virtual-target.jam file. The generate example contained in the B2 distribution illustrates how the generate rule can be used.

5.7. Precompiled Headers

Precompiled headers are a mechanism to speed up compilation by creating a partially processed version of some header files, and then using that version during compilations rather than repeatedly parsing the original headers. B2 supports precompiled headers with the gcc and msvc toolsets.

To use precompiled headers, follow the following steps:

  1. Create a header that includes headers used by your project that you want precompiled. It is better to include only headers that are sufficiently stable — like headers from the compiler and external libraries. B2 will include the header automatically and on-demand.

  2. Declare a new B2 target for the precompiled header and add that precompiled header to the sources of the target whose compilation you want to speed up:

    cpp-pch pch : pch.hpp ;
    exe main : main.cpp pch ;

    You can use the c-pch rule if you want to use the precompiled header in C programs.

The pch example in the B2 distribution can be used as a reference.

Please note the following:

  • The build properties used to compile the source files and the precompiled header must be the same. Consider using project requirements to assure this.

  • Precompiled headers must be used purely as a way to improve compilation time, not to save the number of #include statements. If a source file needs to include some header, explicitly include it in the source file, even if the same header is included from the precompiled header. This makes sure that your project will build even if precompiled headers are not supported.

  • Prior to version 4.2, the gcc compiler did not allow anonymous namespaces in precompiled headers, which limits their utility. See the bug report for details.

  • Previously, B2 did not automatically include the header; the user was required to include the header at the top of every source file the precompiled header would be used with.

5.8. Generated headers

Usually, B2 handles implicit dependencies completely automatically. For example, for C++ files, all #include statements are found and handled. The only aspect where user help might be needed is implicit dependency on generated files.

By default, B2 handles such dependencies within one main target. For example, assume that main target "app" has two sources, "app.cpp" and "parser.y". The latter source is converted into "parser.c" and "parser.h". Then, if "app.cpp" includes "parser.h", B2 will detect this dependency. Moreover, since "parser.h" will be generated into a build directory, the path to that directory will automatically be added to the include path.

Making this mechanism work across main target boundaries is possible, but imposes certain overhead. For that reason, if there is implicit dependency on files from other main targets, the <implicit-dependency> feature must be used, for example:

lib parser : parser.y ;
exe app : app.cpp : <implicit-dependency>parser ;

The above example tells the build system that when scanning all sources of "app" for implicit-dependencies, it should consider targets from "parser" as potential dependencies.

5.9. Cross-compilation

B2 supports cross compilation with the gcc and msvc toolsets.

When using gcc, you first need to specify your cross compiler in user-config.jam (see the section called “Configuration”), for example:

using gcc : arm : arm-none-linux-gnueabi-g++ ;

After that, if the host and target os are the same, for example Linux, you can just request that this compiler version be used:

b2 toolset=gcc-arm

If you want to target a different operating system from the host, you need to additionally specify the value for the target-os feature, for example:

# On a Windows box
b2 toolset=gcc-arm target-os=linux
# On a Linux box
b2 toolset=gcc-mingw target-os=windows

For the complete list of allowed operating system names, please see the documentation for target-os feature.

When using the msvc compiler, it’s only possible to cross-compile to a 64-bit system on a 32-bit host. Please see the section called “64-bit support” for details.

5.10. Package Managers

B2 supports automatic, or manual, loading of generated build files from package managers. For example, when using the Conan package manager, which generates conanbuildinfo.jam files, B2 will load that file automatically when it loads the project at the same location. The included file can define targets and other project declarations in the context of the project it’s being loaded into. Which package manager file is loaded can be controlled with (in order of priority):

  • With the use-packages rule.

  • Command line argument --use-package-manager=X.

  • Environment variable PACKAGE_MANAGER_BUILD_INFO.

  • Built-in detection of the file. Currently this includes: "conan".

use-packages rule:

rule use-packages ( name-or-glob-pattern ? )

The use-packages rule allows one to specify, in the projects themselves, which package definitions to use. It can name one of the built-in package manager integrations, for example:

use-packages conan ;

Or it can give a glob pattern to find the file with the definitions, for instance:

use-packages "packages.jam" ;

--use-package-manager command line option:

The --use-package-manager=NAME command line option allows one to non-intrusively specify per invocation which of the built-in package manager types to use.
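
For example, to explicitly request the built-in Conan support for a single invocation:

b2 --use-package-manager=conan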

PACKAGE_MANAGER_BUILD_INFO variable:

The PACKAGE_MANAGER_BUILD_INFO variable, which is taken from the environment or defined with the -sX=Y option, specifies a glob pattern to use to find the package definitions.
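
For example, to point B2 at a generated file for one invocation (the file name here is hypothetical):

b2 -sPACKAGE_MANAGER_BUILD_INFO=build/packages.jam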

Built-in detection:

There are a number of built-in glob patterns to support popular package managers. Currently the supported one is conan, which loads conanbuildinfo.jam files.

5.11. Searching For Projects

B2 supports automatic searching for referenced global projects. For example, if you have references to /boost/predef, then with some minimal configuration B2 can find the B2 project for it and automatically resolve the reference. The searching supports two modes: finding regular B2 project directories, and package/config style loading of single jam files.

5.11.1. Search Path

To control which and where projects are found one can use different methods:

  • B2_PROJECT_PATH environment variable.

  • --project-search command line argument.

  • The project-search project rule.

The search path in both B2_PROJECT_PATH and --project-search specifies a key-value list of project-ids and paths. The parts of that key-value list are, as the name indicates, separated by the platform’s path delimiter. For example, if we had these two projects we wanted to find, /zlib and /boost/asio, the search paths could look like:

Linux

/zlib:/usr/local/share/zlib:/boost/asio:/home/user/external/boost-1.81/libs/asio

Windows, VxWorks

/zlib;C:/Dev/zlib;/boost/asio;C:/Dev/boost-1.81/libs/asio

VMS

/zlib,X:external.zlib,/boost/asio,X:external.boost181.libs.asio

The project-id in the search path specification maps that project root to the indicated path, which B2 will use to search for any projects and sub-projects with that project-id root.

5.11.2. Search Process

Regardless of how the search path is specified, the search itself happens the same way. Searching involves either searching for a B2 project directory, i.e. a directory containing a jamfile, or searching for a specially named *.jam file to include (similar to how the package manager support includes jam files).

For a given project-id of the form /d1/d2/../dn we search for the following, in this order:

  1. The project at d1/d2/../dn in any path registered for the / root.

  2. The project at dn in any path registered for the /d1/d2/../dn-1 root.

  3. The jamfile dn.jam in any path registered for the /d1/d2/../dn-1 root.

  4. The project at dn-1_dn in any path registered for the /d1/d2/../dn-2 root.

  5. The jamfile dn-1_dn.jam in any path registered for the /d1/d2/../dn-2 root.

  6. And so on until it searches for the project d1_d2_.._dn in any path registered for the / root.

  7. And for the jamfile d1_d2_.._dn.jam in any path registered for the / root.

For example, with these search paths:

  • /boost: /usr/share/boost-1.81.0, /home/user/boost-dev/libs

  • /: /usr/share/b2/external

And given the /boost/core project-id to resolve, we search for:

  1. /usr/share/b2/external/boost/core/<jamfile>

  2. /usr/share/boost-1.81.0/core/<jamfile>

  3. /home/user/boost-dev/libs/core/<jamfile>

  4. /usr/share/boost-1.81.0/core.jam

  5. /home/user/boost-dev/libs/core.jam

  6. /usr/share/boost-1.81.0/boost_core/<jamfile>

  7. /home/user/boost-dev/libs/boost_core/<jamfile>

  8. /usr/share/boost-1.81.0/boost_core.jam

  9. /home/user/boost-dev/libs/boost_core.jam

  10. /usr/share/b2/external/boost_core.jam

The first project jamfile found will be assigned to the project-id, or the first *.jam file found will be loaded.

5.11.3. Loading Process

Whether a project jamfile or a *.jam file is found determines how the project is loaded.

When loading a project jamfile with a project-id and path, it is equivalent to calling use-project project-id : path ; from the context of the project that has the reference.
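
For example, resolving the /boost/asio reference from the earlier search path example is equivalent to a declaration like the following (the path is illustrative):

use-project /boost/asio : /home/user/external/boost-1.81/libs/asio ;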

When loading a *.jam file as the path, it is equivalent to calling use-packages path ; from the context of the project that has the reference. In this case the file will be loaded as part of the referenced project, and hence any bare targets or information it declares will be part of that project.

5.12. Command Database, and IDE Integration

Many IDE programs accept the use of a compile_commands.json file to learn what and how your project builds. B2 supports generating such files for any build you make, through a generic facility to extract commands from the actions it executes. There are two options that control this. The --command-database=format option indicates to generate the file for the given format. It has several effects when specified:

  • It tells B2 to start observing and extracting commands from actions (as specified by the toolset).

  • It disables execution of actions, i.e. it is equivalent to adding the -n option.

  • It enables building of all default and specified targets, i.e. it is equivalent to adding the -a option.

  • It disables all action execution output, i.e. as if the -d0 option was specified.

  • At the end of the main build it writes out the results of what it observed to the database file.

Currently only json is supported as a format; it follows the Clang JSON Compilation Database Format Specification.

The --command-database-out=file option controls the name, and optionally the location, of the generated file. By default the file is named compile_commands.json, following the ecosystem convention, and it is generated in one of the following locations (see the example after this list):

  • Relative to the build-dir of the root project, if it is specified by the project, with the default file name or the one given.

  • At the absolute file path if it is rooted.

  • At the current working directory.
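
For example, a hypothetical invocation that writes a Clang-style compilation database into a build subdirectory could look like:

b2 toolset=clang --command-database=json --command-database-out=build/compile_commands.json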

The following fields are populated in the generated database:

  • directory - This will always be the current directory as B2 makes all paths relative to that (or absolute).

  • file - The first source of each action recorded.

  • command - The quoted, full, command as extracted by the toolset.

  • output - The first target file of each action recorded. As B2 can build multiple variants at once this is required to differentiate between multiple compilations of the same source file.

Only one command database file is generated per b2 invocation, and each time it is generated it overwrites any previous such file.

6. Reference

6.1. General information

6.1.1. Initialization

Immediately upon starting, the B2 engine (b2) loads the Jam code that implements the build system. To do this, it searches for the build system bootstrap.jam file in specific installation locations. The search is based on the location of the b2(.exe) executable.

The default bootstrap.jam, after loading some standard definitions, loads both site-config.jam and user-config.jam.

To maintain backward compatibility, the file called boost-build.jam is loaded if present. The search starts in the invocation directory, then continues in its parent and so forth up to the filesystem root, and finally in the directories specified by the environment variable BOOST_BUILD_PATH. On Unix BOOST_BUILD_PATH defaults to /usr/share/b2.

6.2. Builtin rules

This section contains the list of all rules that can be used in Jamfile — both rules that define new targets and auxiliary rules.

exe

Creates an executable file. See the section called “Programs”.

lib

Creates a library file. See the section called “Libraries”.

install

Installs built targets and other files. See the section called “Installing”.

alias

Creates an alias for other targets. See the section called “Alias”.

unit-test

Creates an executable that will be automatically run. See the section called “Testing”.

compile; compile-fail; link; link-fail; run; run-fail

Specialized rules for testing. See the section called “Testing”.

check-target-builds

The check-target-builds rule allows you to conditionally use different properties depending on whether some metatarget builds or not. This is similar to the functionality of the configure script in autotools projects. The function signature is:

rule check-target-builds ( target message ? : true-properties * : false-properties * )

This function can only be used when passing requirements or usage requirements to a metatarget rule. For example, to make an application link to a library if it’s available, one can use the following:

exe app : app.cpp : [ check-target-builds has_foo "System has foo" : <library>foo : <define>FOO_MISSING=1 ] ;

For another example, the alias rule can be used to consolidate configuration choices and make them available to other metatargets, like so:

alias foobar : : : : [ check-target-builds has_foo "System has foo" : <library>foo : <library>bar ] ;

obj

Creates an object file. Useful when a single source file must be compiled with special properties.

preprocessed

Creates a preprocessed source file. The arguments follow the common syntax.

glob

The glob rule takes a list of shell patterns and returns the list of files in the project’s source directory that match them. For example:

lib tools : [ glob *.cpp ] ;

It is also possible to pass a second argument—the list of exclude patterns. The result will then include the list of files matching any of the include patterns and not matching any of the exclude patterns. For example:

lib tools : [ glob *.cpp : file_to_exclude.cpp bad*.cpp ] ;

glob-tree

The glob-tree rule is similar to glob, except that it operates recursively from the directory of the containing Jamfile. For example:

ECHO [ glob-tree *.cpp : .svn ] ;

will print the names of all C++ files in your project. The .svn exclude pattern prevents the glob-tree rule from entering administrative directories of the Subversion version control system.

project

Declares project id and attributes, including project requirements. See the section called “Projects”.

use-project

Assigns a symbolic project ID to a project at a given path. This rule must be better documented!

explicit

The explicit rule takes a single parameter—a list of target names. The named targets will be marked explicit, and will be built only if they are explicitly requested on the command line, or if their dependents are built. Compare this to ordinary targets, which are built implicitly when their containing project is built.

always

The always function takes a single parameter—a list of metatarget names. The targets produced by the named metatargets will always be considered out of date. Consider this example:

exe hello : hello.cpp ;
exe bye : bye.cpp ;
always hello ;

If a build of hello is requested, then it will always be recompiled. Note that if a build of hello is not requested, for example you specify just bye on the command line, hello will not be recompiled.

constant

Sets a project-wide constant. Takes two parameters: a variable name and a value, and makes the specified variable name accessible in this Jamfile and any child Jamfiles. For example:

constant VERSION : 1.34.0 ;

path-constant

Same as constant, except that the value is treated as a path relative to the Jamfile location. For example, if b2 is invoked in the current directory, and the Jamfile in the helper subdirectory has:

path-constant DATA : data/a.txt ;

then the variable DATA will be set to helper/data/a.txt, and if b2 is invoked from the helper directory, then the variable DATA will be set to data/a.txt.

build-project

Causes some other project to be built. This rule takes a single parameter—a directory name relative to the containing Jamfile. When the containing Jamfile is built, the project located at that directory will be built as well. At the moment, the parameter to this rule should be a directory name; project IDs or general target references are not allowed.

test-suite

This rule is deprecated and equivalent to alias.

import-search
Jam

rule import-search ( reference )

Adds the given reference path to the set of directories that an import will search. The reference can be a plain directory or a known project path. If a project path is given it will be searched for and resolved to include any sub-project path in the reference. If a directory is given it will be rooted relative to the current project location. Example project path usage:

import-search /boost/config/checks ;
import-search /boost/predef/tools/checks ;

6.2.1. b2::require_b2

Jam

rule require-b2 ( minimum : maximum ? )

C++

void require_b2(value_ref minimum, value_ref maximum, bind::context_ref_ context_ref);

Checks that the b2 engine version is at least minimum and strictly less than maximum. If no maximum is given the version is matched to be at least the minimum. If the check fails b2 will exit with an error message and failure.

6.3. Builtin features

This section documents the features that are built-in into B2. For features with a fixed set of values, that set is provided, with the default value listed first.

address-model

Allowed values: 32, 64.

Specifies if 32-bit or 64-bit code should be generated by the compiler. Whether this feature works depends on the used compiler, its version, how the compiler is configured, and the values of the architecture and instruction-set features. Please see the section C++ Compilers for details.

address-sanitizer

Allowed values: on, norecover.

Enables address sanitizer. Value norecover disables recovery for the sanitizer. The feature is optional, thus no sanitizer is enabled by default.

allow

This feature is used to allow specific generators to run. For example, Qt tools can only be invoked when the Qt library is used. In that case, <allow>qt will be in the usage requirements of the library.

architecture

Allowed values: x86, ia64, sparc, power, mips, mips1, mips2, mips3, mips4, mips32, mips32r2, mips64, parisc, arm, s390x, loongarch.

Specifies the general processor family to generate code for.

archiveflags

The value of this feature is passed without modification to the archiver tool when creating static libraries.

asmflags

The value of this feature is passed without modification to the assembler.

asynch-exceptions

Allowed values: off, on.

Selects whether there is support for asynchronous EH (e.g. catching SEGVs).

build

Allowed values: no

Used to conditionally disable the build of a target. If <build>no is in the properties when building a target, the build of that target is skipped. Combined with conditional requirements, this allows you to skip building some target in configurations where the build is known to fail.
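
For example, a hypothetical target known not to build on Windows could be skipped there with a conditional requirement:

exe unix_only_tool : tool.cpp : <target-os>windows:<build>no ;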

cflags; cxxflags; linkflags

The value of these features is passed without modification to the corresponding tools. For cflags that is both the C and C++ compilers, for cxxflags that is the C++ compiler, and for linkflags that is the linker. The features are handy when you are trying to do something special that cannot be achieved by a higher-level feature in B2.

compileflags

The value of this feature is passed without modification to the corresponding tools. The values from compileflags are applied to all compilations of any language for the tools.

conditional

Used to introduce indirect conditional requirements. The value should have the form:

@rulename

where rulename should be a name of a rule with the following signature:

rule rulename ( properties * )

The rule will be called for each target with its properties and should return any additional properties. See also the section Requirements for an example.
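
A minimal sketch of an indirect conditional; the rule name, target and flag are hypothetical:

exe app : app.cpp : <conditional>@app-conditionals ;

# Adds an extra compiler flag, but only when building with gcc.
rule app-conditionals ( properties * )
{
    local result ;
    if <toolset>gcc in $(properties)
    {
        result += <cxxflags>-fno-strict-aliasing ;
    }
    return $(result) ;
}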

coverage

Allowed values: off, on.

Enables code instrumentation to generate coverage data during execution.

cxxflags

See <cflags>.

cxxstd

Allowed values: 98, 03, 0x, 11, 1y, 14, 1z, 17, 2a, 20, 2b, 23, 2c, 26, latest.

Specifies the version of the C++ Standard Language to build with. All the official versions of the standard since "98" are included. It is also possible to specify the experimental, work-in-progress, latest version. Some compilers specified intermediate versions for the experimental versions leading up to the released standard version. Those are included following the GNU nomenclature as 0x, 1y, 1z, 2a, 2b and 2c. Depending on the compiler, latest maps to one of those.

This is an optional feature. Hence, when it is not specified, the compiler’s default behaviour is used.
Please consult the toolset specific documentation for which cxxstd is supported.
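
For example, a hypothetical invocation requesting C++17 from gcc for everything built:

b2 toolset=gcc cxxstd=17
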
cxxstd-dialect

Subfeature of cxxstd

Allowed values: iso, gnu, ms.

Indicates if a non-standard dialect should be used. These usually provide extensions or platform-specific functionality. Not specifying the dialect will default to 'iso', which will attempt to use ISO C++ Standard conformance to the best of the compiler’s ability.

c++abi

Selects a specific variant of C++ ABI if the compiler supports several.

c++-template-depth

Allowed values: Any positive integer.

Allows configuring a C++ compiler with the maximal template instantiation depth parameter. Specific toolsets may or may not provide support for this feature depending on whether their compilers provide a corresponding command-line option.

Due to some internal details in the current B2 implementation it is not possible to have features whose valid values are all positive integers. As a workaround, a large set of allowed values has been defined for this feature and, if a different one is needed, the user can easily add it by calling the feature.extend rule.

debug-symbols

Allowed values: on, off.

Specifies if produced object files, executables, and libraries should include debug information. Typically, the value of this feature is implicitly set by the variant feature, but it can be explicitly specified by the user. The most common usage is to build release variant with debugging information.

define

Specifies a preprocessor symbol that should be defined on the command line. You may either specify just the symbol, which will be defined without any value, or both the symbol and the value, separated by an equals sign.
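
For example (the symbol names are hypothetical):

exe app : app.cpp : <define>ENABLE_LOGGING <define>APP_VERSION=3 ;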

def-file

Provides a means to specify a def-file for Windows DLLs.

dependency

Introduces a dependency on the target named by the value of this feature (so it will be brought up-to-date whenever the target being declared is). The dependency is not used in any other way.

dll-path (Unix only)

Allowed values: A directory path to shared libraries.

Specifies an additional directory where the system should look for shared libraries when the target is run.

Note that a relative path will be prepended with the directory of the relevant jam file - as supplied on the b2 command line - thus severely limiting its practical use!

Please see the FAQ entry on the dll-path and hardcode-dll-paths for details, and the hardcode-dll-paths feature to automatically add paths for development.

embed-manifest

Allowed values: on, off.

This feature is specific to the msvc toolset (see Microsoft Visual C++), and controls whether the manifest files should be embedded inside executables and shared libraries, or placed alongside them. This feature corresponds to the IDE option found in the project settings dialog, under Configuration Properties → Manifest Tool → Input and Output → Embed manifest.

embed-manifest-file

This feature is specific to the msvc toolset (see Microsoft Visual C++), and controls which manifest files should be embedded inside executables and shared libraries. This feature corresponds to the IDE option found in the project settings dialog, under Configuration Properties → Manifest Tool → Input and Output → Additional Manifest Files.

embed-manifest-via

This feature is specific to the msvc toolset (see Microsoft Visual C++), and controls whether a manifest should be embedded via linker or manifest tool.

exception-handling

Allowed values: on, off.

Disables exceptions.

extern-c-nothrow

Allowed values: off, on.

Selects whether all extern "C" functions are considered nothrow by default.

fflags

The value of this feature is passed without modification to the tool when compiling Fortran sources.

file

When used in requirements of a prebuilt library target this feature specifies the path to the library file. See Prebuilt targets for examples.

find-shared-library

Adds a shared library to link to. Usually lib targets should be preferred over using this feature.

find-static-library

Adds a static library to link to. Usually lib targets should be preferred over using this feature.

flags

This feature is used for generic, i.e. non-language specific, flags for tools. The value of this feature is passed without modification to the tool that will build the target.

hardcode-dll-paths (Unix only)

Allowed values: true, false.
Defaults to: true(exe), false(install)
Ignored for: lib

When an executable is built with <hardcode-dll-paths>true (default), the target binary will be linked with an rpath list that contains all the paths to the directories of used shared libraries.

When a target is installed with <hardcode-dll-paths>true, those same paths, and any added with <dll-path>, are propagated through to the resulting binary.

The purpose of this feature is to aid development; the resulting executable (exe - but not install target) can by default be run, without changing system paths to shared libraries or installing the libraries to system paths.
Please see the FAQ entry for details.

implicit-dependency

Indicates that the target named by the value of this feature may produce files that are included by the sources of the target being declared. See the section Generated headers for more information.

force-include

Specifies an include path that is to be included as if #include "file" appeared as the first line of every source file of the target.

The include order is not guaranteed if used multiple times on a single target.

include

Specifies an additional include path that is to be passed to C and C++ compilers.

inlining

Allowed values: off, on, full.

Enables inlining.

install-package

Specifies the name of the package to which installed files belong. This is used for default installation prefix on certain platforms.

install-<name>

Specifies installation prefix for install targets. These named installation prefixes are registered by default:

  • prefix: C:\<package name> if <target-os>windows is in the property set, /usr/local otherwise

  • exec-prefix: (prefix)

  • bindir: (exec-prefix)/bin

  • sbindir: (exec-prefix)/sbin

  • libexecdir: (exec-prefix)/libexec

  • libdir: (exec-prefix)/lib

  • datarootdir: (prefix)/share

  • datadir: (datarootdir)

  • sysconfdir: (prefix)/etc

  • sharedstatedir: (prefix)/com

  • localstatedir: (prefix)/var

  • runstatedir: (localstatedir)/run

  • includedir: (prefix)/include

  • oldincludedir: /usr/include

  • docdir: (datarootdir)/doc/<package name>

  • infodir: (datarootdir)/info

  • htmldir: (docdir)

  • dvidir : (docdir)

  • pdfdir : (docdir)

  • psdir : (docdir)

  • lispdir: (datarootdir)/emacs/site-lisp

  • localedir: (datarootdir)/locale

  • mandir: (datarootdir)/man

If more are necessary, they could be added with stage.add-install-dir.
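
For example, a hypothetical project could install its public headers under the registered includedir prefix:

install install-headers : [ glob include/*.hpp ] : <location>(includedir) ;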

instruction-set

Allowed values: depends on the used toolset.

Specifies for which specific instruction set the code should be generated. The code in general might not run on processors with older/different instruction sets.

While B2 allows a large set of possible values for this feature, whether a given value works depends on which compiler you use. Please see the section C++ Compilers for details.

library

This feature is almost equivalent to the <source> feature, except that it takes effect only for linking. When you want to link all targets in a Jamfile to a certain library, the <library> feature is preferred over <source>X — the latter will add the library to all targets, even those that have nothing to do with libraries.

library-path

Adds to the list of directories which will be used by the linker to search for libraries.

leak-sanitizer

Allowed values: on, norecover.

Enables leak sanitizer. Value norecover disables recovery for the sanitizer. The feature is optional, thus no sanitizer is enabled by default.

linemarkers

Allowed values: off.

On preprocessing targets, changes the behavior to emit or omit line directives such as #line and #linenum.

NOTE: The value doesn’t propagate.

link

Allowed values: shared, static

Controls how libraries are built.

linkflags

See <cflags>.

local-visibility

Allowed values: global, protected, hidden.

This feature has the same effect as the visibility feature but is intended to be used by targets that require a particular symbol visibility. Unlike the visibility feature, local-visibility is not inherited by the target dependencies and only affects the target to which it is applied.

The local-visibility feature supports the same values with the same meaning as the visibility feature. By default, if local-visibility is not specified for a target, the value of the visibility feature is used.

location

Specifies the build directory for a target. The feature is used primarily with the install rule.

location-prefix

Sets the build directory for a target as the project’s build directory prefixed with the value of this feature. See section Target Paths for an example.

mflags

The value of this feature is passed without modification to the tool when compiling Objective C sources.

mmflags

The value of this feature is passed without modification to the tool when compiling Objective C++ sources.

name

When used in requirements of a prebuilt library target this feature specifies the name of the library (the name of the library file without any platform-specific suffixes or prefixes). See Prebuilt targets for examples.

When used in requirements of an <install> target it specifies the name of the target file.

optimization

Allowed values: off, speed, space, minimal, debug.

Enables optimization. speed optimizes for faster code, space optimizes for smaller binary.

profiling

Allowed values: off, on.

Enables generation of extra code to write profile information.

relevant

Allowed values: the name of any feature.

Indicates which other features are relevant for a given target. It is usually not necessary to manage it explicitly, as B2 can deduce it in most cases. Features which are not relevant will not affect target paths, and will not cause conflicts.

  • A feature will be considered relevant if any of the following are true

    • It is referenced by toolset.flags or toolset.uses-features

    • It is used by the requirements of a generator

    • It is a sub-feature of a relevant feature

    • It has a sub-feature which is relevant

    • It is a composite feature, and any composed feature is relevant

    • It affects target alternative selection for a main target

    • It is a propagated feature and is relevant for any dependency

    • It is relevant for any dependency created by the same main target

    • It is used in the condition of a conditional property and the corresponding value is relevant

    • It is explicitly named as relevant

  • Relevant features cannot be automatically deduced in the following cases:

    • Indirect conditionals. Solution: return properties of the form <relevant>result-feature:<relevant>condition-feature

      This isn’t really a conditional, although for most purposes it functions like one. In particular, it does not support multiple comma-separated elements in the condition, and it does work correctly even in contexts where conditional properties are not allowed.
    • Action rules that read properties. Solution: add toolset.uses-features to tell B2 that the feature is actually used.

    • Generators and targets that manipulate property-sets directly. Solution: set <relevant> manually.

rtti

Allowed values: on, off.

Disables run-time type information.

runtime-debugging

Allowed values: on, off.

Specifies whether produced object files, executables, and libraries should include behavior useful only for debugging, such as asserts. Typically, the value of this feature is implicitly set by the variant feature, but it can be explicitly specified by the user. The most common usage is to build release variant with debugging output.

runtime-link

Allowed values: shared, static

Controls if a static or shared C/C++ runtime should be used. There are some restrictions on how this feature can be used; for example, on some compilers an application using static runtime should not use shared libraries at all, and on some compilers, mixing static and shared runtime requires extreme care. Check your compiler documentation for more details.

search

When used in requirements of a prebuilt library target this feature adds to the list of directories to search for the library file. See Prebuilt targets for examples.

source

The <source>X property has the same effect on building a target as putting X in the list of sources. It is useful when you want to add the same source to all targets in the project (you can put <source> in requirements) or to conditionally include a source (using conditional requirements, see the section Conditions and alternatives). See also the <library> feature.

staging-prefix

Specifies the staging prefix for install targets. If present, it will be used instead of the path to the named directory prefix. Example:

project : requirements <install-prefix>x/y/z ;
install a1 : a : <location>(bindir) ; # installs into x/y/z/bin
install a2 : a : <location>(bindir) <staging-prefix>q ; # installs into q/bin

The feature is useful when you cannot (or don’t want to) put build artifacts into their intended locations during the build (such as when cross-compiling), but still need to communicate those intended locations to the build system, e.g. to generate configuration files.

stdlib

Allowed values: native, gnu, gnu11, libc++, sun-stlport, apache.

Specifies C++ standard library to link to and in some cases the library ABI to use:

native

Use compiler’s default.

gnu

Use GNU Standard Library (a.k.a. libstdc++) with the old ABI.

gnu11

Use GNU Standard Library with the new ABI.

libc++

Use LLVM libc++.

sun-stlport

Use the STLport implementation of the standard library provided with the Solaris Studio compiler.

apache

Use the Apache stdcxx version 4 C++ standard library provided with the Solaris Studio compiler.

strip

Allowed values: off, on.

Controls whether the binary should be stripped — that is, have everything not necessary for running it removed.

This feature will show up in target paths of everything, not just binaries.

suppress-import-lib

Suppresses creation of import library by the linker.

tag

Used to customize the name of the generated files. The value should have the form:

@rulename

where rulename should be a name of a rule with the following signature:

rule tag ( name : type ? : property-set )

The rule will be called for each target with the default name computed by B2, the type of the target, and property set. The rule can either return a string that must be used as the name of the target, or an empty string, in which case the default name will be used.

The most typical use of the tag feature is to encode build properties, or the library version, in library target names. You should take care to return a non-empty string from the tag rule only for types you care about — otherwise, you might end up modifying the names of object files, generated header files and other targets for which changing names does not make sense.
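
A minimal sketch of a tag rule that appends a version suffix, but only to library targets; the names and suffix are hypothetical:

lib utils : utils.cpp : <tag>@tag-with-version ;

# Returns a modified name for libraries, and nothing (so the default name is kept) otherwise.
rule tag-with-version ( name : type ? : property-set )
{
    if $(type) in STATIC_LIB SHARED_LIB
    {
        return $(name)-1_0 ;
    }
}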

target-os

Allowed values: aix, android, appletv, bsd, cygwin, darwin, freebsd, haiku, hpux, iphone, linux, netbsd, openbsd, osf, qnx, qnxnto, sgi, solaris, unix, unixware, windows, vms, vxworks, freertos.

Specifies the operating system for which the code is to be generated. The compiler you used should be the compiler for that operating system. This option causes B2 to use naming conventions suitable for that operating system, and adjust build process accordingly. For example, with gcc, it controls if import libraries are produced for shared libraries or not.

See the section Cross-compilation for details of cross-compilation.

threadapi

Allowed values: pthread, win32.

Selects threading implementation. The default is win32 if <target-os> is windows and pthread otherwise.

threading

Allowed values: single, multi

Controls if the project should be built in multi-threaded mode. This feature does not necessarily change code generation in the compiler, but it causes the compiler to link to additional or different runtime libraries, and define additional preprocessor symbols (for example, _MT on Windows and _REENTRANT on Linux). How those symbols affect the compiled code depends on the code itself.

thread-sanitizer

Allowed values: on, norecover.

Enables thread sanitizer. Value norecover disables recovery for the sanitizer. The feature is optional, thus no sanitizer is enabled by default.

toolset

Allowed values: any of the toolset modules.

Selects the toolset that will be used to build binary targets. The full list of toolset modules is in the Builtin tools section.

undef

Specifies a preprocessor symbol to undefine.

undefined-sanitizer

Allowed values: on, norecover.

Enables undefined behavior sanitizer. Value norecover disables recovery for the sanitizer. The feature is optional, thus no sanitizer is enabled by default.

use

Introduces a dependency on the target named by the value of this feature (so it will be brought up-to-date whenever the target being declared is), and adds its usage requirements to the build properties of the target being declared. The dependency is not used in any other way. The primary use case is when you want the usage requirements (such as #include paths) of some library to be applied, but do not want to link to it.

user-interface

Allowed values: console, gui, wince, native, auto.

Specifies the environment for the executable which affects the entry point symbol (or entry point function) that the linker will select. This feature is Windows-specific.

console

console application.

gui

application does not require a console (it is supposed to create its own windows).

wince

application is intended to run on a device that has a version of the Windows CE kernel.

native

application runs without a subsystem environment.

auto

application runs in the POSIX subsystem in Windows.

variant

Allowed values: debug, release, profile.

A feature combining several low-level features, making it easy to request common build configurations.

The value debug expands to

<optimization>off <debug-symbols>on <inlining>off <runtime-debugging>on

The value release expands to

<optimization>speed <debug-symbols>off <inlining>full <runtime-debugging>off

The value profile expands to the same as release, plus:

<profiling>on <debug-symbols>on

Users can define their own build variants using the variant rule from the common module.
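
For example, a hypothetical variant that optimizes for speed but keeps debug symbols might be declared (for example in a Jamroot) as:

variant release-symbols : <optimization>speed <inlining>full <debug-symbols>on <runtime-debugging>off ;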

Runtime debugging is on in debug builds to suit the expectations of people used to various IDEs.

vectorize

Allowed values: off, on, full.

Enables vectorization.

version

This feature isn’t used by any of the builtin tools, but can be used, for example, to adjust target’s name via <tag> feature.

visibility

Allowed values: global, protected, hidden.

Specifies the default symbol visibility in compiled binaries. Not all values are supported on all platforms and on some platforms (for example, Windows) symbol visibility is not supported at all.

The supported values have the following meaning:

global

a.k.a. "default" in gcc documentation. Global symbols are considered public, they are exported from shared libraries and can be redefined by another shared library or executable.

protected

a.k.a. "symbolic". Protected symbols are exported from shared libraries but cannot be redefined by another shared library or executable. This mode is not supported on some platforms, for example OS X.

hidden

Hidden symbols are not exported from shared libraries and cannot be redefined by a different shared library or executable loaded in a process. In this mode, public symbols have to be explicitly marked in the source code to be exported from shared libraries. This is the recommended mode.

By default, the compiler’s default visibility mode is used (no compiler flags are added).

In the Boost super-project Jamroot file, this property is set to the default value of hidden. This means that Boost libraries are built with hidden visibility by default, unless the user overrides it with a different visibility or a library sets a different local-visibility (see below).

warnings

Allowed values: on, all, extra, pedantic, off.

Controls the warning level of compilers.

on

enable default/"reasonable" warning level.

all

enable most warnings.

extra

enable extra, possibly conflicting, warnings.

pedantic

enable likely inconsequential, and conflicting, warnings.

off

disable all warnings.

Default value is all.

warnings-as-errors

Allowed values: off, on.

Makes it possible to treat warnings as errors and abort compilation on a warning.
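
Both warnings and warnings-as-errors are ordinary properties, so they can be put in a target's requirements or passed on the command line. A small sketch (the target and source names are made up):

exe strict-app : strict-app.cpp : <warnings>pedantic <warnings-as-errors>on ;

The same effect can be requested for a whole build with b2 warnings=pedantic warnings-as-errors=on.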

translate-path

Used to introduce custom path feature translation. The value should have the form:

@rulename

where rulename should be the name of a rule with the following signature:

rule rulename ( feature value : properties * : project-id : project-location )

The rule is called for each target, with the feature of a path property, the path property value, the target properties, the target project ID, and the target project location. It should return the translated path value, or return nothing to fall back to the default path translation.
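
A minimal sketch of such a rule (the rule name, matched prefix, and replacement directory are made up; only the signature comes from above):

rule rebase-sandbox ( feature value : properties * : project-id : project-location )
{
    # Reroot any path value that starts with "sandbox/". Returning
    # nothing for everything else falls back to the default translation.
    local rest = [ MATCH "^sandbox/(.*)" : $(value) ] ;
    if $(rest)
    {
        return /var/tmp/sandbox/$(rest) ;
    }
}

A project would then typically add <translate-path>@rebase-sandbox to its requirements.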

lto

Allowed values: on.

Enables link time optimizations (also known as interprocedural optimizations or whole-program optimizations). Currently supported toolsets are GNU C++, clang and Microsoft Visual C++. The feature is optional.

lto-mode

Subfeature of lto

Allowed values: full, thin, fat.

Specifies the type of LTO to use.

full

Use the monolithic LTO: on linking all input is merged into a single module.

thin

Use clang’s ThinLTO: each compiled file contains a summary of the module, and these summaries are merged into a single index. This avoids merging all modules together, which greatly reduces link time.

fat

Produce gcc’s fat LTO objects: compiled files contain both the intermediate language suitable for LTO and object code suitable for regular linking.

response-file

Allowed values: auto, file, contents.

Controls whether a response file is used during the build of the applicable target. For file, a response file is created and its filename is substituted into the action. For contents, the contents (:E=) are substituted into the action and no response file is created. For auto, the choice depends on the length of the contents: if the contents fit within the command-line length limits, they are substituted directly; otherwise a response file is created and its filename is substituted into the action.

Supported for clang-linux and msvc toolsets.
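
Like other features, it can be set per target or for a whole build. For example (the target name is illustrative):

exe bigapp : [ glob src/*.cpp ] : <response-file>file ;

or, on the command line, b2 toolset=msvc response-file=contents.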

6.4. Builtin tools

B2 comes with support for a large number of C++ compilers and other tools. This section documents how to use those tools.

Before using any tool, you must declare your intention, and possibly specify additional information about the tool’s configuration. This is done by calling the using rule, typically in your user-config.jam, for example:

using gcc ;

Additional parameters can be passed just as for other rules, for example:

using gcc : 4.0 : g++-4.0 ;

The options that can be passed to each tool are documented in the subsequent sections.

6.4.1. C++ Compilers

This section lists all B2 modules that support C++ compilers and documents how each one can be initialized. The name of a compiler’s support module is also the value of the toolset feature that can be used to explicitly request that compiler.

HP aC++ compiler

The acc module supports the HP aC++ compiler for the HP-UX operating system.

The module is initialized using the following syntax:

using acc : [version] : [c++-compile-command] : [compiler options] ;

This statement may be repeated several times, if you want to configure several versions of the compiler.

If the command is not specified, the aCC binary will be searched in PATH.

The following options can be provided, using `<option-name>option-value` syntax:

cflags

Specifies additional compiler flags that will be used when compiling C sources.

cxxflags

Specifies additional compiler flags that will be used when compiling C++ sources.

compileflags

Specifies additional compiler flags that will be used when compiling both C and C++ sources.

linkflags

Specifies additional command line options that will be passed to the linker.

Borland C++ Compiler

The borland module supports the 32-bit command line C++ compilers running on Microsoft Windows. This is the bcc32 executable for all versions of Borland C++ and C++ Builder, as well as the command-line compatible compiler bcc32c on later versions of C++ Builder.

The module is initialized using the following syntax:

using borland : [version] : [c++-compile-command] : [compiler options] ;

This statement may be repeated several times, if you want to configure several versions of the compiler.

If the command is not specified, B2 will search for a binary named bcc32 in PATH.

The following options can be provided, using `<option-name>option-value` syntax:

cflags

Specifies additional compiler flags that will be used when compiling C sources.

cxxflags

Specifies additional compiler flags that will be used when compiling C++ sources.

compileflags

Specifies additional compiler flags that will be used when compiling both C and C++ sources.

linkflags

Specifies additional command line options that will be passed to the linker.

user-interface

Specifies the user interface for applications. Valid choices are console for a console application and gui for a Windows application.

Comeau C/C++ Compiler

The como-linux and the como-win modules support the Comeau C/C++ Compiler on Linux and Windows, respectively.

The module is initialized using the following syntax:

using como : [version] : [c++-compile-command] : [compiler options] ;

This statement may be repeated several times, if you want to configure several versions of the compiler.

If the command is not specified, B2 will search for a binary named como in PATH.

The following options can be provided, using `<option-name>option-value` syntax:

cflags

Specifies additional compiler flags that will be used when compiling C sources.

cxxflags

Specifies additional compiler flags that will be used when compiling C++ sources.

compileflags

Specifies additional compiler flags that will be used when compiling both C and C++ sources.

linkflags

Specifies additional command line options that will be passed to the linker.

Before using the Windows version of the compiler, you need to set up the necessary environment variables as described in the compiler’s documentation. In particular, the COMO_XXX_INCLUDE variable should be set, where XXX corresponds to the backend C compiler in use.

Code Warrior

The cw module supports the CodeWarrior compiler, originally produced by Metrowerks and presently developed by Freescale. B2 supports only the versions of the compiler that target x86 processors. All such versions were released by Metrowerks before the acquisition and are no longer sold. The last version known to work is 9.4.

The module is initialized using the following syntax:

using cw : [version] : [c++-compile-command] : [compiler options] ;

This statement may be repeated several times, if you want to configure several versions of the compiler.

If the command is not specified, B2 will search for a binary named mwcc in default installation paths and in PATH.

The following options can be provided, using `<option-name>option-value` syntax:

cflags

Specifies additional compiler flags that will be used when compiling C sources.

cxxflags

Specifies additional compiler flags that will be used when compiling C++ sources.

compileflags

Specifies additional compiler flags that will be used when compiling both C and C++ sources.

linkflags

Specifies additional command line options that will be passed to the linker.

setup

The command that sets up environment variables prior to invoking the compiler. If not specified, cwenv.bat alongside the compiler binary will be used.

compiler

The command that compiles C and C++ sources. If not specified, mwcc will be used. The command will be invoked after the setup script was executed and adjusted the PATH variable.

linker

The command that links executables and dynamic libraries. If not specified, mwld will be used. The command will be invoked after the setup script was executed and adjusted the PATH variable.

Digital Mars C/C++ Compiler

The dmc module supports the Digital Mars C++ compiler.

The module is initialized using the following syntax:

using dmc : [version] : [c++-compile-command] : [compiler options] ;

This statement may be repeated several times, if you want to configure several versions of the compiler.

If the command is not specified, B2 will search for a binary named dmc in PATH.

The following options can be provided, using `<option-name>option-value` syntax:

cflags

Specifies additional compiler flags that will be used when compiling C sources.

cxxflags

Specifies additional compiler flags that will be used when compiling C++ sources.

compileflags

Specifies additional compiler flags that will be used when compiling both C and C++ sources.

linkflags

Specifies additional command line options that will be passed to the linker.

Embarcadero C++ Compiler

The embarcadero module supports the 32-bit command line C++ compiler bcc32x and the 64-bit command line C++ compiler bcc64, both clang-based, running on Microsoft Windows. These are the clang-based Windows compilers for all versions of Embarcadero C++.

The module is initialized using the following syntax:

using embarcadero : [version] : [c++-compile-command] : [compiler options] ;

This statement may be repeated several times, if you want to configure several versions of the compiler.

version:

The version should be the compiler version, if specified. If the version is not specified, B2 will find the latest installed version of Embarcadero C++ and use that as the version. If the version is specified, B2 does not check whether it matches any particular version of Embarcadero C++, so you may use the version as a mnemonic to configure separate 'versions'.

c++-compile-command:

If the c++-compile-command is not specified, B2 will default to the bcc64 compiler. If you specify a compiler option of <address-model>32, the default compiler will be bcc32x. In either case, when the command is not given, B2 will assume the compiler is in the PATH. So it is not necessary to specify a command if you accept the default compiler and the Embarcadero C++ binary directory is in the PATH.

If the command is specified, it will be used as-is to invoke the compiler. If the command has either 'bcc32x(.exe)' or 'bcc64(.exe)' in it, B2 will use the corresponding compiler to configure the toolset. If the command has neither 'bcc32x(.exe)' nor 'bcc64(.exe)' in it, B2 will use the default compiler to configure the toolset. If you have your own command which does not have 'bcc32x(.exe)' in it but invokes the 'bcc32x(.exe)' compiler, specify the <address-model>32 compiler option.

compiler options:

The following options can be provided, using `<option-name>option-value` syntax:

cflags

Specifies additional compiler flags that will be used when compiling C and C++ sources.

cxxflags

Specifies additional compiler flags that will be used when compiling C++ sources.

linkflags

Specifies additional command line options that will be passed to the linker.

asmflags

Specifies additional command line options that will be passed to the assembler.

archiveflags

Specifies additional command line options that will be passed to the archiver, which creates a static library.

address-model

This option can be used to specify the default compiler, as described in the discussion of c++-compile-command above. Otherwise the address model is not used to initialize the toolset.

user-interface

Specifies the user interface for applications. Valid choices are console for a console application and gui for a Windows application.

root

Normally B2 will automatically be able to determine the root of the Embarcadero C++ installation. It does this in various ways, but primarily by checking a registry entry. If you specify the root, that path will be used; the root you specify should be the full path to the Embarcadero C++ installation on your machine (without a trailing \ or /). You should not need to specify this option unless B2 cannot find the Embarcadero C++ root directory.

Examples

using embarcadero ;

Configures the toolset to use the latest version, with bcc64 as the compiler. The bcc64 compiler must be in the PATH.

using embarcadero : 7.40 ;

Configures the toolset to use the 7.40 version, with bcc64 as the compiler. The bcc64 compiler must be in the PATH.

using embarcadero : 7.40 : bcc32x ;
using embarcadero : 7.40 : : <address-model>32 ;

Either of these configures the toolset to use the 7.40 version, with bcc32x as the compiler. The bcc32x compiler must be in the PATH.

using embarcadero : : c:/some_path/bcc64 ;

Configures the toolset to use the latest version, with full command specified.

using embarcadero : : full_command : <address-model>32 ;

Configures the toolset to use the latest version, with full command specified and bcc32x as the compiler.

using embarcadero : : : <root>c:/root_path ;

Configures the toolset to use the latest version, with bcc64 as the compiler and the root directory of the installation specified. The bcc64 compiler must be in the PATH.

GNU C++

The gcc module supports the GNU C++ compiler on Linux, on a number of Unix-like systems including SunOS, and on Windows (either Cygwin or MinGW).

The gcc module is initialized using the following syntax:

using gcc : [version] : [c++-compile-command] : [compiler options] ;

This statement may be repeated several times, if you want to configure several versions of the compiler.

If the version is not explicitly specified, it will be automatically detected by running the compiler with the -v option. If the command is not specified, the g++ binary will be searched in PATH.

The following options can be provided, using `<option-name>option-value` syntax:

asmflags

Specifies additional compiler flags that will be used when compiling assembler sources.

cflags

Specifies additional compiler flags that will be used when compiling C sources.

cxxflags

Specifies additional compiler flags that will be used when compiling C++ sources.

fflags

Specifies additional compiler flags that will be used when compiling Fortran sources.

mflags

Specifies additional compiler flags that will be used when compiling Objective-C sources.

mmflags

Specifies additional compiler flags that will be used when compiling Objective-C++ sources.

compileflags

Specifies additional compiler flags that will be used when compiling any language sources.

linkflags

Specifies additional command line options that will be passed to the linker.

root

Specifies root directory of the compiler installation. This option is necessary only if it is not possible to detect this information from the compiler command—​for example if the specified compiler command is a user script.

archiver

Specifies the archiver command that is used to produce static libraries. Normally, it is autodetected using the gcc -print-prog-name option or defaults to ar, but in some cases you might want to override it, for example to explicitly use a system version instead of the one included with gcc.

rc

Specifies the resource compiler command that will be used with the version of gcc that is being configured. This setting makes sense only for Windows and only if you plan to use resource files. By default windres will be used.

rc-type

Specifies the type of resource compiler. The value can be either windres for msvc resource compiler, or rc for borland’s resource compiler.

In order to compile 64-bit applications, you have to specify address-model=64, and the instruction-set feature should refer to a 64-bit processor. Currently, those include nocona, opteron, athlon64 and athlon-fx.
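
Putting a few of these options together, a user-config.jam entry might look like the following (the version, command, and flags are illustrative):

using gcc : 12 : g++-12 : <cxxflags>-std=c++17 <linkflags>-Wl,--no-undefined ;

A 64-bit build with this toolset could then be requested with b2 toolset=gcc-12 address-model=64 instruction-set=nocona.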

HP C++ Compiler for Tru64 Unix

The hp_cxx module supports the HP C++ Compiler for Tru64 Unix.

The module is initialized using the following syntax:

using hp_cxx : [version] : [c++-compile-command] : [compiler options] ;

This statement may be repeated several times, if you want to configure several versions of the compiler.

If the command is not specified, B2 will search for a binary named hp_cxx in PATH.

The following options can be provided, using `<option-name>option-value` syntax:

cflags

Specifies additional compiler flags that will be used when compiling C sources.

cxxflags

Specifies additional compiler flags that will be used when compiling C++ sources.

compileflags

Specifies additional compiler flags that will be used when compiling both C and C++ sources.

linkflags

Specifies additional command line options that will be passed to the linker.

Intel C++

The intel-* modules support the Intel C++ command-line compiler.

The module is initialized using the following syntax:

using intel : [version] : [c++-compile-command] : [compiler options] ;

This statement may be repeated several times, if you want to configure several versions of the compiler.

If the compiler command is not specified, B2 will look in PATH for an executable icpc (on Linux) or icl.exe (on Windows).

The following options can be provided, using `<option-name>option-value` syntax:

cflags

Specifies additional compiler flags that will be used when compiling C sources.

cxxflags

Specifies additional compiler flags that will be used when compiling C++ sources.

compileflags

Specifies additional compiler flags that will be used when compiling both C and C++ sources.

linkflags

Specifies additional command line options that will be passed to the linker.

root

For the Linux version, specifies the root directory of the compiler installation. This option is necessary only if it is not possible to detect this information from the compiler command, for example if the specified compiler command is a user script. For the Windows version, specifies the directory of the iclvars.bat file (for versions prior to 21, or 2021) or of the setvars.bat file (for versions 21, or 2021, and later) used to configure the compiler. Specifying the root option without specifying the compiler command allows the end user not to worry about whether they are compiling 32-bit or 64-bit code: the toolset will automatically configure the compiler for the appropriate address model and compiler command using the iclvars.bat or setvars.bat batch file.
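
For example, on Windows the toolset might be configured with just the root option, letting it pick the compiler command and address model itself (the path is illustrative):

using intel : : : <root>"C:/Program Files (x86)/Intel/oneAPI" ;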

Microsoft Visual C++

The msvc module supports the Microsoft Visual C++ command-line tools on Microsoft Windows. The supported products and versions of command line tools are listed below:

  • Visual Studio 2022—14.3

  • Visual Studio 2019—14.2

  • Visual Studio 2017—14.1

  • Visual Studio 2015—14.0

  • Visual Studio 2013—12.0

  • Visual Studio 2012—11.0

  • Visual Studio 2010—10.0

  • Visual Studio 2008—9.0

  • Visual Studio 2005—8.0

  • Visual Studio .NET 2003—7.1

  • Visual Studio .NET—7.0

  • Visual Studio 6.0, Service Pack 5—6.5

The user would then invoke the b2 executable with the toolset set to msvc-[version number]. For example, to build with Visual Studio 2019 one could run:

.\b2 toolset=msvc-14.2 target

The msvc module is initialized using the following syntax:

using msvc : [version] : [c++-compile-command] : [compiler options] ;

This statement may be repeated several times, if you want to configure several versions of the compiler.

If the version is not explicitly specified, the most recent version found in the registry will be used instead. If the special value all is passed as the version, all versions found in the registry will be configured. If a version is specified, but the command is not, the compiler binary will be searched in standard installation paths for that version, followed by PATH.

The compiler command should be specified using forward slashes, and quoted.
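
For example (the version and path are illustrative):

using msvc : 14.3 : "C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/bin/Hostx64/x64/cl.exe" ;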

The following options can be provided, using `<option-name>option-value` syntax:

cflags

Specifies additional compiler flags that will be used when compiling C sources.

cxxflags

Specifies additional compiler flags that will be used when compiling C++ sources.

compileflags

Specifies additional compiler flags that will be used when compiling both C and C++ sources.

linkflags

Specifies additional command line options that will be passed to the linker.

assembler

The command that compiles assembler sources. If not specified, ml will be used. The command will be invoked after the setup script was executed and adjusted the PATH variable.

compiler

The command that compiles C and C++ sources. If not specified, cl will be used. The command will be invoked after the setup script was executed and adjusted the PATH variable.

compiler-filter

Command through which to pipe the output of running the compiler. For example to pass the output to STLfilt.

idl-compiler

The command that compiles Microsoft COM interface definition files. If not specified, midl will be used. The command will be invoked after the setup script was executed and adjusted the PATH variable.

linker

The command that links executables and dynamic libraries. If not specified, link will be used. The command will be invoked after the setup script was executed and adjusted the PATH variable.

mc-compiler

The command that compiles Microsoft message catalog files. If not specified, mc will be used. The command will be invoked after the setup script was executed and adjusted the PATH variable.

resource-compiler

The command that compiles resource files. If not specified, rc will be used. The command will be invoked after the setup script was executed and adjusted the PATH variable.

setup

The filename of the global environment setup script to run before invoking any of the tools defined in this toolset. It will not be used if a target-platform-specific script has been explicitly specified for the current target platform. The setup script used will be passed the target platform identifier (x86, x86_amd64, x86_ia64, amd64 or ia64) as a parameter. If not specified, a default script is chosen based on the compiler binary used, e.g. vcvars32.bat or vsvars32.bat.

setup-amd64; setup-i386; setup-ia64

The filename of the target-platform-specific environment setup script to run before invoking any of the tools defined in this toolset. If not specified, the global environment setup script is used.

64-bit support

Starting with version 8.0, Microsoft Visual Studio can generate binaries for 64-bit processors: both the 64-bit flavour of x86 (codenamed AMD64/EM64T) and Itanium (codenamed IA64). In addition, compilers that themselves run in 64-bit mode, for better performance, are provided. The complete list of compiler configurations is as follows (we abbreviate AMD64/EM64T to just AMD64):

  • 32-bit x86 host, 32-bit x86 target

  • 32-bit x86 host, 64-bit AMD64 target

  • 32-bit x86 host, 64-bit IA64 target

  • 64-bit AMD64 host, 64-bit AMD64 target

  • 64-bit IA64 host, 64-bit IA64 target

The 32-bit host compilers can always be used, even on 64-bit Windows. In contrast, the 64-bit host compilers require both a 64-bit host processor and 64-bit Windows, but can be faster. By default, only the 32-bit host, 32-bit target compiler is installed, and additional compilers need to be installed explicitly.

To use 64-bit compilation you should:

  1. Configure your compiler as usual. If you provide a path to the compiler explicitly, provide the path to the 32-bit compiler. If you try to specify the path to any of the 64-bit compilers, configuration will not work.

  2. When compiling, use address-model=64, to generate AMD64 code.

  3. To generate IA64 code, use architecture=ia64.

The (AMD64 host, AMD64 target) compiler will be used automatically when you are generating AMD64 code and are running 64-bit Windows on AMD64. The (IA64 host, IA64 target) compiler will never be used, since nobody has an IA64 machine to test.

It is believed that AMD64 and EM64T targets are essentially compatible. The compiler options /favor:AMD64 and /favor:EM64T, which are accepted only by AMD64-targeting compilers, cause the generated code to be tuned to a specific flavor of 64-bit x86. B2 will make use of those options depending on the value of the instruction-set feature.

Starting with version 14.0, Microsoft Visual Studio can generate binaries using native arm64 tools to compile for x86, x86_64, and arm64.

Windows Runtime support

Starting with version 11.0, Microsoft Visual Studio can produce binaries for Windows Store and Phone in addition to traditional Win32 desktop. To specify which Windows API set to target, use the windows-api feature. Available options are desktop, store, or phone. If not specified, desktop will be used.

When using store or phone the specified toolset determines what Windows version is targeted. The following options are available:

  • Windows 8.0: toolset=msvc-11.0 windows-api=store

  • Windows 8.1: toolset=msvc-12.0 windows-api=store

  • Windows Phone 8.0: toolset=msvc-11.0 windows-api=phone

  • Windows Phone 8.1: toolset=msvc-12.0 windows-api=phone

For example use the following to build for Windows Store 8.1 with the ARM architecture:

.\b2 toolset=msvc-12.0 windows-api=store architecture=arm

Note that when targeting Windows Phone 8.1, version 12.0 did not include the vcvars phone setup scripts. They can be downloaded separately.

Sun Studio

The sun module supports the Sun Studio C++ compilers for the Solaris OS.

The module is initialized using the following syntax:

using sun : [version] : [c++-compile-command] : [compiler options] ;

This statement may be repeated several times, if you want to configure several versions of the compiler.

If the command is not specified, B2 will search for a binary named CC in /opt/SUNWspro/bin and in PATH.

When using this compiler on complex C++ code, such as the Boost C++ libraries (http://boost.org), it is recommended to specify the following options when initializing the sun module:

-library=stlport4 -features=tmplife -features=tmplrefstatic

See the Sun C++ Frontend Tales for details.

The following options can be provided, using `<option-name>option-value` syntax:

cflags

Specifies additional compiler flags that will be used when compiling C sources.

cxxflags

Specifies additional compiler flags that will be used when compiling C++ sources.

compileflags

Specifies additional compiler flags that will be used when compiling both C and C++ sources.

linkflags

Specifies additional command line options that will be passed to the linker.

Starting with Sun Studio 12, you can create 64-bit applications by using the address-model=64 property.

IBM Visual Age

The vacpp module supports the IBM Visual Age C++ Compiler, for the AIX operating system. Versions 7.1 and 8.0 are known to work.

The module is initialized using the following syntax:

using vacpp ;

The module does not accept any initialization options. The compiler should be installed in the /usr/vacpp/bin directory.

Later versions of Visual Age are known as XL C/C++. They were not tested with the vacpp module.

6.4.2. Third-party libraries

B2 provides special support for some third-party C++ libraries, documented below.

STLport library

The STLport library is an alternative implementation of the C++ runtime library. B2 supports using that library on the Windows platform. Linux is hampered by different naming of the libraries in each STLport version and is not officially supported.

Before using STLport, you need to configure it in user-config.jam using the following syntax:

using stlport : version : header-path : library-path ;

Where version is the version of STLport (for example, 5.1.4), header-path is the location where the STLport headers can be found, and library-path is the location where the STLport libraries can be found. The version should always be provided, and the library path should be provided if you’re using STLport’s implementation of iostreams. Note that STLport 5.* always uses its own iostream implementation, so the library path is required.

When STLport is configured, you can build with STLport by requesting stdlib=stlport on the command line.
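
A user-config.jam entry might look like the following (the version and paths are illustrative):

using stlport : 5.1.4 : C:/STLport-5.1.4/stlport : C:/STLport-5.1.4/lib ;

A build using it could then be requested with b2 toolset=msvc stdlib=stlport.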

zlib

Provides support for the zlib library. zlib can be configured either to use precompiled binaries or to build the library from source.

zlib can be initialized using the following syntax

using zlib : version : options : condition : is-default ;

Options for using a prebuilt library:

search

The directory containing the zlib binaries.

name

Overrides the default library name.

include

The directory containing the zlib headers.

If none of these options is specified, then the environment variables ZLIB_LIBRARY_PATH, ZLIB_NAME, and ZLIB_INCLUDE will be used instead.

Options for building zlib from source:

source

The zlib source directory. Defaults to the environment variable ZLIB_SOURCE.

tag

Sets the tag property to adjust the file name of the library. Ignored when using precompiled binaries.

build-name

The base name to use for the compiled library. Ignored when using precompiled binaries.

Examples:

# Find zlib in the default system location
using zlib ;
# Build zlib from source
using zlib : 1.2.7 : <source>/home/steven/zlib-1.2.7 ;
# Find zlib in /usr/local
using zlib : 1.2.7 : <include>/usr/local/include <search>/usr/local/lib ;
# Build zlib from source for msvc and find
# prebuilt binaries for gcc.
using zlib : 1.2.7 : <source>C:/Devel/src/zlib-1.2.7 : <toolset>msvc ;
using zlib : 1.2.7 : : <toolset>gcc ;

bzip2

Provides support for the bzip2 library. bzip2 can be configured either to use precompiled binaries or to build the library from source.

bzip2 can be initialized using the following syntax

using bzip2 : version : options : condition : is-default ;

Options for using a prebuilt library:

search

The directory containing the bzip2 binaries.

name

Overrides the default library name.

include

The directory containing the bzip2 headers.

If none of these options is specified, then the environment variables BZIP2_LIBRARY_PATH, BZIP2_NAME, and BZIP2_INCLUDE will be used instead.

Options for building bzip2 from source:

source

The bzip2 source directory. Defaults to the environment variable BZIP2_SOURCE.

tag

Sets the tag property to adjust the file name of the library. Ignored when using precompiled binaries.

build-name

The base name to use for the compiled library. Ignored when using precompiled binaries.

Examples:

# Find bzip in the default system location
using bzip2 ;
# Build bzip from source
using bzip2 : 1.0.6 : <source>/home/sergey/src/bzip2-1.0.6 ;
# Find bzip in /usr/local
using bzip2 : 1.0.6 : <include>/usr/local/include <search>/usr/local/lib ;
# Build bzip from source for msvc and find
# prebuilt binaries for gcc.
using bzip2 : 1.0.6 : <source>C:/Devel/src/bzip2-1.0.6 : <toolset>msvc ;
using bzip2 : 1.0.6 : : <toolset>gcc ;

Python

Provides support for the python language environment to be linked in as a library.

python can be initialized using the following syntax

using python : [version] : [command-or-prefix] : [includes] : [libraries] : [conditions] : [extension-suffix] ;

Options for using python:

version

The version of Python to use. Should be in Major.Minor format, for example 2.3. Do not include the sub-minor version.

command-or-prefix

Preferably, a command that invokes a Python interpreter. Alternatively, the installation prefix for Python libraries and includes. If empty, will be guessed from the version, the platform’s installation patterns, and the python executables that can be found in PATH.

includes

the include path to Python headers. If empty, will be guessed.

libraries

the path to Python library binaries. If empty, will be guessed. On MacOS/Darwin, you can also pass the path of the Python framework.

conditions

if specified, should be a set of properties that are matched against the build configuration when B2 selects a Python configuration to use.

extension-suffix

A string to append to the name of extension modules before the true filename extension. Ordinarily we would just compute this based on the value of the <python-debugging> feature. However, Ubuntu’s python-dbg package uses the Windows convention of appending _d to debug-build extension modules. We have no way of detecting Ubuntu, or of probing Python for the "_d" requirement, and if you configure and build Python using --with-pydebug, you’ll be using the standard *nix convention. Defaults to "" (or "_d" when targeting Windows and <python-debugging> is set).

Examples:

# Find python in the default system location
using python ;
# 2.7
using python : 2.7 ;
# 3.5
using python : 3.5 ;

# On ubuntu 16.04
using python
: 2.7 # version
: # Interpreter/path to dir
: /usr/include/python2.7 # includes
: /usr/lib/x86_64-linux-gnu # libs
: # conditions
;

using python
: 3.5 # version
: # Interpreter/path to dir
: /usr/include/python3.5 # includes
: /usr/lib/x86_64-linux-gnu # libs
: # conditions
;

# On windows
using python
: 2.7 # version
: C:\\Python27-32\\python.exe # Interpreter/path to dir
: C:\\Python27-32\\include # includes
: C:\\Python27-32\\libs # libs
: <address-model>32 <address-model> # conditions - both 32 and unspecified
;

using python
: 2.7 # version
: C:\\Python27-64\\python.exe # Interpreter/path to dir
: C:\\Python27-64\\include # includes
: C:\\Python27-64\\libs # libs
: <address-model>64 # conditions
;

6.4.3. Documentation tools

B2 support for the Boost documentation tools is documented below.

xsltproc

To use xsltproc, you first need to configure it using the following syntax:

using xsltproc : xsltproc ;

Where xsltproc is the xsltproc executable. If xsltproc is not specified, and the variable XSLTPROC is set, the value of XSLTPROC will be used. Otherwise, xsltproc will be searched for in PATH.

The following options can be provided, using `<option-name>option-value` syntax:

xsl:param

Values should have the form name=value

xsl:path

Sets an additional search path for xi:include elements.

catalog

A catalog file used to rewrite remote URLs to a local copy.

The xsltproc module provides the following rules. Note that these operate on jam targets and are intended to be used by another toolset, such as boostbook, rather than directly by users.

xslt
rule xslt ( target : source stylesheet : properties * )

Runs xsltproc to create a single output file.

xslt-dir
rule xslt-dir ( target : source stylesheet : properties * : dirname )

Runs xsltproc to create multiple outputs in a directory. dirname is unused, but exists for historical reasons. The output directory is determined from the target.

boostbook

To use boostbook, you first need to configure it using the following syntax:

using boostbook : docbook-xsl-dir : docbook-dtd-dir : boostbook-dir ;

docbook-xsl-dir is the DocBook XSL stylesheet directory. If not provided, we use DOCBOOK_XSL_DIR from the environment (if available) or look in standard locations. Otherwise, we let the XML processor load the stylesheets remotely.

docbook-dtd-dir is the DocBook DTD directory. If not provided, we use DOCBOOK_DTD_DIR from the environment (if available) or look in standard locations. Otherwise, we let the XML processor load the DTD remotely.

boostbook-dir is the BoostBook directory with the DTD and XSL sub-dirs.

The boostbook module depends on xsltproc. For pdf or ps output, it also depends on fop.

The following options can be provided, using `<option-name>option-value` syntax:

format

Allowed values: html, xhtml, htmlhelp, onehtml, man, pdf, ps, docbook, fo, tests.

The format feature determines the type of output produced by the boostbook rule.

The boostbook module defines a rule for creating a target following the common syntax.

boostbook
rule boostbook ( target-name : sources * : requirements * : default-build * )

Creates a boostbook target.

doxygen

To use doxygen, you first need to configure it using the following syntax:

using doxygen : name ;

name is the doxygen command. If it is not specified, it will be found in the PATH.

The doxygen module depends on the boostbook module when generating BoostBook XML.

The following options can be provided, using `<option-name>option-value` syntax:

doxygen:param

All the values of doxygen:param are added to the doxyfile.

prefix

Specifies the common prefix of all headers when generating BoostBook XML. Everything before this will be stripped off.

reftitle

Specifies the title of the library-reference section, when generating BoostBook XML.

doxygen:xml-imagedir

When generating BoostBook XML, specifies the directory in which to place the images generated from LaTeX formulae.

The path is interpreted relative to the current working directory, not relative to the Jamfile. This is necessary to match the behavior of BoostBook.

The doxygen module defines a rule for creating a target following the common syntax.

doxygen
rule doxygen ( target : sources * : requirements * : default-build * : usage-requirements * )

Creates a doxygen target. If the target name ends with .html, then this will generate an html directory. Otherwise it will generate BoostBook XML.
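
For instance, a Jamfile might declare the following (the target name, glob pattern, and parameter are illustrative):

import doxygen ;

# Generates an HTML directory, because the target name ends in .html.
doxygen autodoc.html
    : [ glob include/mylib/*.hpp ]
    : <doxygen:param>EXTRACT_ALL=YES
    ;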

quickbook

The quickbook module provides a generator to convert from Quickbook to BoostBook XML.

To use quickbook, you first need to configure it using the following syntax:

using quickbook : command ;

command is the quickbook executable. If it is not specified, B2 will compile it from source. If it is unable to find the source it will search for a quickbook executable in PATH.

fop

The fop module provides generators to convert from XSL formatting objects to Postscript and PDF.

To use fop, you first need to configure it using the following syntax:

using fop : fop-command : java-home : java ;

fop-command is the command to run fop. If it is not specified, B2 will search for it in PATH and FOP_HOME.

Either java-home or java can be used to specify where to find java.

6.5. Builtin modules

This section describes the modules that are provided by B2. The import rule allows rules from one module to be used in another module or Jamfile.

6.5.1. modules module.

The modules module defines basic functionality for handling modules.

A module defines a number of rules that can be used in other modules. Modules can contain code at the top level to initialize the module. This code is executed the first time the module is loaded.

A Jamfile is a special kind of module which is managed by the build system. Although they cannot be loaded directly by users, the other features of modules are still useful for Jamfiles.

Each module has its own namespaces for variables and rules. If two modules A and B both use a variable named X, each one gets its own copy of X. They won’t interfere with each other in any way. Similarly, importing rules into one module has no effect on any other module.

Every module has two special variables. $(file) contains the name of the file that the module was loaded from and $(name) contains the name of the module.

$(file) does not contain the full path to the file. If you need this, use modules.binding.

b2::modules::binding
Jam

rule binding ( module_name )

C++

value_ref binding(std::string module_name);

Returns the filesystem binding of the given module.

For example, a module can get its own location with:

me = [ modules.binding $(__name__) ] ;
b2::modules::record_binding
Jam

rule record-binding ( module_name : binding )

C++

void record_binding(std::string module_name, value_ref value);

This helper is used by load to record the binding (path) of each loaded module.

The module_name is ignored. Instead the internal tracking of the currently loading module is used to record the binding.
b2::modules::poke
Jam

rule poke ( module_name ? : variables + : value * )

C++

void poke(std::string module_name, list_cref variables, list_cref value);

Sets the module-local value of a variable. This is the most reliable way to set a module-local variable in a different module; it eliminates issues of name shadowing due to dynamic scoping.

For example, to set a variable in the global module:

modules.poke : ZLIB_INCLUDE : /usr/local/include ;
b2::modules::peek
Jam

rule peek ( module_name ? : variables + )

C++

list_ref peek(std::string module_name, list_cref variables);

Returns the module-local value of a variable. This is the most reliable way to examine a module-local variable in a different module; it eliminates issues of name shadowing due to dynamic scoping.

For example, to read a variable from the global module:

local ZLIB_INCLUDE = [ modules.peek : ZLIB_INCLUDE ] ;
b2::modules::clone_rules
Jam

rule clone-rules ( source_module target_module )

C++

void clone_rules(std::tuple<std::string, std::string> source_target_modules);

Define exported copies in target_module of all rules exported from source_module. Also make them available in the global module with qualification, so that it is just as though the rules were defined originally in target_module.

b2::modules::call_in
Jam

rule call-in ( module-name ? : rule-name args * : * )

C++

list_ref call_in(value_ref module_name, std::tuple<value_ref, list_ref> rule_name_a1, const lists & rest, bind::context_ref_ context_ref);

Call the given rule locally in the given module. Use this for rules accepting rule names as arguments, so that the passed rule may be invoked in the context of the rule’s caller (for example, if the rule accesses module globals or is a local rule). Note that rules called this way may accept at most 18 parameters.

Example:

rule filter ( f : values * )
{
	local m = [ CALLER_MODULE ] ;
	local result ;
	for v in $(values)
	{
		if [ modules.call-in $(m) : $(f) $(v) ]
		{
			result += $(v) ;
		}
	}
	return result ;
}
b2::modules::call_locally
Jam

rule call-locally ( qualified-rule-name args * : * )

C++

list_ref call_locally(std::tuple<value_ref, list_ref> rule_name_a1, const lists & rest, const bind::context_ref_ & context_ref);

Given a possibly qualified rule name and arguments, remove any initial module qualification from the rule and invoke it in that module. If there is no module qualification, the rule is invoked in the global module. Note that rules called this way may accept at most 18 parameters.

b2::modules::run_tests
Jam

rule run-tests ( m )

C++

void run_tests(value_ref m, bind::context_ref_ context_ref);

Runs internal B2 unit tests for the specified module. The module’s test rule is executed in its own module to eliminate any inadvertent effects of testing module dependencies (such as assert) on the module itself.

b2::modules::load
Jam

rule load ( module-name : filename ? : search * )

C++

void load(value_ref module_name, value_ref filename, list_cref search, bind::context_ref_ context_ref);

Load the indicated module if it is not already loaded.

module-name

Name of module to load.

filename

(partial) path to file; Defaults to $(module-name).jam

search

Directories in which to search for filename. Defaults to $(BOOST_BUILD_PATH).

b2::modules::import
Jam

rule import ( module-names + : rules-opt * : rename-opt * )

C++

void import(list_cref module_names, list_cref rules_opt, list_cref rename_opt, bind::context_ref_ context_ref);

Load the indicated module and import rule names into the current module. Any members of rules-opt will be available without qualification in the caller’s module. Any members of rename-opt will be taken as the names of the rules in the caller’s module, in place of the names they have in the imported module. If rules-opt = *, all rules from the indicated module are imported into the caller’s module. If rename-opt is supplied, it must have the same number of elements as rules-opt.

The import rule is available without qualification in all modules.

Examples:

import path ;
import path : * ;
import path : join ;
import path : native make : native-path make-path ;

6.5.2. class module.

b2::class::make
Jam

rule new ( class args * : * )

C++

std::string make(std::tuple<value_ref, list_ref> name_arg1, const lists & rest, bind::context_ref_ context_ref)

Instantiates a new instance of the given class and calls its init rule with the given arguments. Returns the instance ID.

b2::class::bases
Jam

rule bases ( class )

C++

list_ref bases(std::string class_name);

Returns the base classes of the given class.

b2::class::is_derived
Jam

rule is-derived ( class : bases + )

C++

bool is_derived(value_ref class_name, list_cref class_bases);

Returns true when the given class is derived from the given bases.

b2::class::is_instance
Jam

rule is-instance ( value )

C++

bool is_instance(std::string value);

Returns true if the given value is an instance of any class.

b2::class::is_a
Jam

rule is-a ( instance : type )

C++

bool is_a(std::string instance, value_ref type);

Returns true if the given instance is of the given type.

6.5.3. errors module.

b2::jam::errors::backtrace
Jam

rule backtrace ( skip-frames prefix messages * : * )

C++

void backtrace(std::tuple<int, std::string, list_ref> skip_prefix_messages, const lists & rest, bind::context_ref_ context_ref);

Print a stack backtrace leading to this rule’s caller. Each argument represents a line of output to be printed after the first line of the backtrace.

b2::jam::errors::error_skip_frames
Jam

rule error-skip-frames ( skip-frames messages * : * )

C++

void error_skip_frames(std::tuple<int, list_ref> skip_messages, const lists & rest, bind::context_ref_ context_ref);

b2::jam::errors::try-catch

This is not really an exception-handling mechanism, but it does allow us to perform some error-checking on our error-checking. Errors are suppressed after a try, and the first one is recorded. Use catch to check that the error message matched expectations.

Jam

rule try ( )

C++

void error_try();

Begin looking for error messages.

Jam

rule catch ( messages * : * )

C++

void error_catch(const lists & rest, bind::context_ref_ context_ref);

Stop looking for error messages; generate an error if an argument of messages is not found in the corresponding argument in the error call.
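
A minimal sketch of the intended usage (the error text is made up):

import errors ;

errors.try ;
{
    errors.error "unknown feature value" ;
}
# Passes only if the suppressed error contained the expected text.
errors.catch "unknown feature value" ;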

b2::jam::errors::error
Jam

rule error ( messages * : * )

C++

void error(const lists & rest, bind::context_ref_ context_ref);

Print an error message with a stack backtrace and exit.

b2::jam::errors::user_error
Jam

rule user-error ( messages * : * )

C++

void user_error(const lists & rest, bind::context_ref_ context_ref);

Same as 'error', but the generated backtrace will include only user files.

b2::jam::errors::warning
Jam

rule warning ( messages * : * )

C++

void warning(const lists & rest, bind::context_ref_ context_ref);

Print a warning message with a stack backtrace and exit.

b2::jam::errors::lol_to_list
Jam

rule lol->list ( * )

C++

list_ref lol_to_list(const lists & rest);

Convert an arbitrary argument list into a list with ":" separators and quoted elements representing the same information. This is mostly useful for formatting descriptions of arguments with which a rule was called when reporting an error.

b2::jam::errors::nearest_user_location
Jam

rule nearest-user-location ( )

C++

list_ref nearest_user_location(bind::context_ref_ context_ref);

Return the file:line for the nearest entry in the backtrace which corresponds to a user module.

6.5.4. regex module.

Contains rules for string processing using regular expressions.

  • "x*" matches the pattern "x" zero or more times.

  • "x+" matches "x" one or more times.

  • "x?" matches "x" zero or one time.

  • "[abcd]" matches any of the characters, "a", "b", "c", and "d". A character range such as "[a-z]" matches any character between "a" and "z". "[^abc]" matches any character which is not "a", "b", or "c".

  • "x|y" matches either pattern "x" or pattern "y"

  • (x) matches "x" and captures it.

  • "^" matches the beginning of the string.

  • "$" matches the end of the string.

  • "<" matches the beginning of a word.

  • ">" matches the end of a word.

See also: MATCH

b2::regex_split
Jam

rule split ( string separator )

C++

b2::list_ref b2::regex_split(const std::tuple<b2::value_ref, b2::value_ref> & string_separator);

Returns a list of the following substrings:

  1. from beginning till the first occurrence of 'separator' or till the end,

  2. between each occurrence of 'separator' and the next occurrence,

  3. from the last occurrence of 'separator' till the end.

If no separator is present, the result will contain only one element.
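
For example (a sketch):

import regex ;

local parts = [ regex.split "boost/filesystem/path.hpp" "/" ] ;
# parts is now: boost filesystem path.hpp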

b2::regex_split_each
Jam

rule split-list ( list * : separator )

C++

b2::list_ref b2::regex_split_each(b2::list_cref to_split, b2::value_ref separator);

Returns the concatenated results of applying regex.split to every element of the list, using the separator pattern.

b2::regex_match
Jam

rule match ( pattern : string : indices * )

C++

b2::list_ref regex_match(b2::value_ref pattern, b2::value_ref string, const std::vector<int_t> & indices);

Match string against pattern, and return the elements indicated by indices.

b2::regex_transform
Jam

rule transform ( list * : pattern : indices * )

C++

b2::list_ref regex_transform(b2::list_cref list, b2::value_ref pattern, const std::vector<int_t> & indices);

Matches all elements of list against the pattern and returns a list of elements indicated by indices of all successful matches. If indices is omitted returns a list of first parenthesized groups of all successful matches.

b2::regex_escape
Jam

rule escape ( string : symbols : escape-symbol )

C++

b2::value_ref regex_escape(b2::value_ref string, b2::value_ref symbols, b2::value_ref escape_symbol);

Escapes all of the characters in symbols using the escape symbol escape-symbol for the given string, and returns the escaped string.

b2::regex_replace
Jam

rule replace ( string match replacement )

C++

b2::value_ref regex_replace(const std::tuple<b2::value_ref, b2::value_ref, b2::value_ref> & string_match_replacement);

Replaces occurrences of a match string in a given string and returns the new string. The match string can be a regex expression.

  • string — The string to modify.

  • match — The characters to replace.

  • replacement — The string to replace with.
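
For example, collapsing runs of spaces (a sketch):

import regex ;

local cleaned = [ regex.replace "one  two   three" " +" " " ] ;
# cleaned is now: one two three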

b2::regex_replace_each
Jam

rule replace-list ( list * : match : replacement )

C++

b2::list_ref regex_replace_each(b2::list_cref list, b2::value_ref match, b2::value_ref replacement);

Replaces occurrences of a match string in a given list of strings and returns a list of new strings. The match string can be a regex expression.

  • list — The list of strings to modify.

  • match — The search expression.

  • replacement — The string to replace with.

b2::regex_grep
Jam

rule grep ( directories + : files + : patterns + : result_expressions * : options * )

C++

b2::list_ref regex_grep(b2::list_cref directories, b2::list_cref files, b2::list_cref patterns, list_cref result_expressions, list_cref options);

Match any of the patterns against the globbed files in directories, and return a list of files and the indicated result_expressions (file1, re1, re…, …). The result_expressions are indices from 0 to 10, where 0 is the full match.

6.5.5. set module.

Classes and functions to manipulate sets of unique values.

b2::set

Set class contains unique values.

b2::set::add
Jam

rule add ( elements * )

C++

void b2::set::add(b2::list_cref elements); void b2::set::add(const b2::set & elements);

Add the elements to the set.

b2::set::contains
Jam

rule contains ( element )

C++

bool b2::set::contains(b2::value_ref element) const;

Returns whether the set contains the given element.

b2::set::to_list
Jam

rule list ( )

C++

b2::list_ref b2::set::to_list() const;

Return a list with all the elements of the set.

b2::set::difference
Jam

rule difference ( set1 * : set2 * )

C++

static b2::list_ref b2::set::difference(b2::list_cref set1, b2::list_cref set2);

Returns the elements of set1 that are not in set2.
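
For example (a sketch):

import set ;

local wanted = debug release profile ;
local known = debug release ;
local missing = [ set.difference $(wanted) : $(known) ] ;
# missing is now: profile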

b2::set::intersection
Jam

rule intersection ( set1 * : set2 * )

C++

static b2::list_ref b2::set::intersection(b2::list_cref set1, b2::list_cref set2);

Returns the items that appear in both set1 and set2.

b2::set::equal
Jam

rule equal ( set1 * : set2 * )

C++

static bool b2::set::equal(b2::list_cref set1, b2::list_cref set2);

Returns whether set1 & set2 contain the same elements. Note that this ignores any element ordering differences as well as any element duplication.

6.5.6. string module.

b2::string_whitespace
Jam

rule whitespace ( )

C++

b2::value_ref string_whitespace();

Returns the canonical set of whitespace characters, as a single string.

b2::string_chars
Jam

rule chars ( string )

C++

b2::list_ref string_chars(b2::value_ref s);

Splits the given string into a list of strings composed of each character of the string in sequence.

b2::string_abbreviate
Jam

rule abbreviate ( string )

C++

b2::value_ref string_abbreviate(b2::value_ref s);

Apply a set of standard transformations to string to produce an abbreviation no more than 5 characters long.

b2::string_join
Jam

rule join ( strings * : separator ? )

C++

b2::value_ref string_join(b2::list_cref strings, b2::value_ref separator);

Concatenates the given strings, inserting the given separator between each string.
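
For example (a sketch):

import string ;

local version = [ string.join 1 2 3 : "." ] ;
# version is now: 1.2.3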

b2::string_words
Jam

rule words ( string : whitespace * )

C++

b2::list_ref string_words(std::string s, b2::list_cref whitespace);

Split a string into whitespace separated words.

b2::string_is_whitespace
Jam

rule is-whitespace ( string ? )

C++

bool string_is_whitespace(b2::value_ref s);

Check that the given string is composed entirely of whitespace.

6.5.7. version module.

b2::version_less
Jam

rule version-less ( lhs + : rhs + )

C++

bool version_less(const std::vector<int> & lhs, const std::vector<int> & rhs);

Returns true if the first version, lhs, is semantically less than the second version, rhs.

6.5.8. db module.

Classes and functions to manage structured data.

b2::property_db (property-db)

Container for values structured as a tree with keys that are the tree path to the value. Arrays and objects (named fields) are supported.

b2::property_db::emplace
Jam

rule emplace ( key + : value )

C++

void emplace(list_cref k, value_ref v);

Sets, or adds, an element at path key with the given value. The path can contain two kinds of position items: an array index or an object member. An array index is written as "[]" followed by n, where "[]" indicates a zero-based index into the array and n is that index. Anything else is treated as a member field name.

b2::property_db::write_file
Jam

rule write-file ( filename : format ? )

C++

void write_file(value_ref filename, value_ref format);

Writes out a representation of the data to the file filename, formatted as format. Supported formats are: JSON.

b2::property_db::dump
Jam

rule dump ( format ? )

C++

std::string dump(value_ref format);

Returns a representation of the data as a string, formatted as format. Supported formats are: JSON.

6.5.9. path

Performs various path manipulations. Paths are always in a 'normalized' representation. In it, a path may be either:

  • '.', or

  • ['/'] [ ( '..' '/' )* (token '/')* token ]

In plain English, a path can be rooted, '..' elements are allowed only at the beginning, and it never ends in a slash, except for the path consisting of a slash only.

  1. rule make ( native )

    Converts the native path into normalized form.

  2. rule native ( path )

    Builds the native representation of the path.

  3. rule is-rooted ( path )

    Tests if a path is rooted.

  4. rule has-parent ( path )

    Tests if a path has a parent.

  5. rule basename ( path )

    Returns the path without any directory components.

  6. rule parent ( path )

    Returns the parent directory of the path. If no parent exists, an error is issued.

  7. rule reverse ( path )

    Returns path2 such that [ join path path2 ] = ".". The path may not contain ".." elements or be rooted.

  8. rule join ( elements + )

    Concatenates the passed path elements. Generates an error if any element other than the first one is rooted. Skips any empty or undefined path elements.

  9. rule root ( path root )

    If path is relative, it is rooted at root. Otherwise, it is unchanged.

  10. rule pwd ( )

    Returns the current working directory.

  11. rule glob ( dirs * : patterns + : exclude-patterns * )

    Returns the list of files matching the given pattern in the specified directory. Both directories and patterns are supplied as portable paths. Each pattern should be a non-absolute path, and can’t contain "." or ".." elements. Each slash separated element of a pattern can contain the following special characters:

    • '?' matches any character

    • '*' matches an arbitrary number of characters

      A file $(d)/e1/e2/e3 (where 'd' is in $(dirs)) matches the pattern p1/p2/p3 if and only if e1 matches p1, e2 matches p2 and so on. For example:

      [ glob . : *.cpp ]
      [ glob . : */build/Jamfile ]
  12. rule glob-tree ( roots * : patterns + : exclude-patterns * )

    Recursive version of glob. Builds the glob of files while also searching in the subdirectories of the given roots. An optional set of exclusion patterns will filter out the matching entries from the result. The exclusions also apply to the subdirectory scanning, such that directories that match the exclusion patterns will not be searched.

  13. rule exists ( file )

    Returns true if the specified file exists.

  14. rule all-parents ( path : upper_limit ? : cwd ? )

    Finds the absolute name of path and returns the list of all its parents, starting with the immediate one. Parents are returned as relative names. If upper_limit is specified, directories above it will be pruned.

  15. rule glob-in-parents ( dir : patterns + : upper-limit ? )

    Search for patterns in the parent directories of dir, up to and including upper-limit, if it is specified, or up to the filesystem root otherwise.

  16. rule relative ( child parent : no-error ? )

    Assuming child is a subdirectory of parent, return the relative path from parent to child.

  17. rule relative-to ( path1 path2 )

    Returns the minimal path to path2 that is relative to path1.

  18. rule programs-path ( )

    Returns the list of paths which are used by the operating system for looking up programs.

  19. rule makedirs ( path )

    Creates a directory and all parent directories that do not already exist.
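
For illustration, here is how a few of these rules combine in a Jamfile (a minimal sketch; echo just prints the results):

import path ;

local p = foo/bar/baz.cpp ;
echo [ path.basename $(p) ] ;       # baz.cpp
echo [ path.parent $(p) ] ;         # foo/bar
echo [ path.join .. $(p) ] ;        # ../foo/bar/baz.cpp
echo [ path.root $(p) /usr/src ] ;  # /usr/src/foo/bar/baz.cpp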

6.5.10. sequence

Various useful list functions. Note that algorithms in this module execute largely in the caller’s module namespace, so that local rules can be used as function objects. Also note that most predicates can be multi-element lists. In that case, the rule named by the first element is called, with the remaining elements prepended to the argument that is passed to it.
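
For example, the extra predicate elements are bound as leading arguments, so a partially applied comparison can serve as a predicate (a minimal sketch, using the numbers.less rule mentioned below):

import sequence ;
import numbers ;

# Each element e is tested with [ numbers.less 2 e ], i.e. "is 2 less than e?".
local big = [ sequence.filter numbers.less 2 : 1 2 3 4 ] ;
echo $(big) ;  # 3 4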

  1. rule filter ( predicate + : sequence * )

    Return the elements e of $(sequence) for which [ $(predicate) e ] has a non-null value.

  2. rule transform ( function + : sequence * )

    Return a new sequence consisting of [ $(function) $(e) ] for each element e of $(sequence).

  3. rule reverse ( s * )

    Returns the elements of s in reverse order.

  4. rule insertion-sort ( s * : ordered * )

    Insertion-sort s using the BinaryPredicate ordered.

  5. rule merge ( s1 * : s2 * : ordered * )

    Merge two ordered sequences using the BinaryPredicate ordered.

  6. rule join ( s * : joint ? )

    Join the elements of s into one long string. If joint is supplied, it is used as a separator.

  7. rule length ( s * )

    Find the length of any sequence.

  8. rule unique ( list * : stable ? )

    Removes duplicates from list. If stable is passed, then the order of the elements will be unchanged.

  9. rule max-element ( elements + : ordered ? )

    Returns the maximum element in elements. Uses ordered for comparisons, or numbers.less if none is provided.

  10. rule select-highest-ranked ( elements * : ranks * )

    Returns all elements of elements for which the corresponding element in the parallel list ranks is equal to the maximum value in ranks.

6.5.11. stage

This module defines the install rule, used to copy a set of targets to a single location.

  1. rule add-install-dir ( name : suffix ? : parent ? : options * )

    Defines a named installation directory.

    For example, add-install-dir foo : bar : baz ; creates the feature <install-foo> and adds support for the named directory (foo) to the install rule. The rule will try to use the value of the <install-foo> property if present; otherwise it will fall back to (baz)/bar.

    Arguments:

    • name: the name of the directory.

    • suffix: the path suffix appended to the parent named directory.

    • parent: the optional name of parent named directory.

    • options: special options that modify treatment of the directory. Allowed options:

      • package-suffix: append the package name to the default value. For example:

        add-install-dir foo : bar : baz : package-suffix ;
        install (foo) : a : <install-package>xyz ;

        installs a into (baz)/bar/xyz.

  2. rule install-dir-names ( )

    Returns names of all registered installation directories.

  3. rule get-dir ( name : property-set : package-name : flags * )

    Returns the path to a named installation directory. For a given name=xyz the rule uses the value of the <install-xyz> property if it is present in property-set. Otherwise it tries to construct the default value of the path by recursively getting the path to name’s registered base named directory and appending the relative path. For example:

    stage.add-install-dir foo : bar : baz ;
    
    local ps = [ property-set.create <install-foo>x/y/z ] ;
    echo [ stage.get-dir foo : $(ps) : $(__name__) ] ; # outputs x/y/z
    
    ps = [ property-set.create <install-baz>a/b/c/d ] ;
    echo [ stage.get-dir foo : $(ps) : $(__name__) ] ; # outputs a/b/c/d/bar

    The argument package-name is used to construct the path for named directories that were registered with package-suffix option and also to construct install-prefix when targeting Windows.

    Available flags:

    • staged: take staging-prefix into account.

    • relative: return the path to name relative to its base directory.

  4. rule get-package-name ( property-set : project-module ? )

    Returns the package name that will be used for install targets when constructing installation location. The rule uses the value of <install-package> property if it’s present in property-set. Otherwise it deduces the package name using project-module's attributes. It traverses the project hierarchy up to the root searching for the first project with an id. If none is found, the base name of the root project’s location is used. If project-module is empty, the caller module is used (this allows invoking just [ get-package-name $(ps) ] in project jam files).

6.5.12. type

Deals with target type declaration and defines target class which supports typed targets.

  1. rule register ( type : suffixes * : base-type ? )

    Registers a target type, possibly derived from a base-type. Providing a list of suffixes here is a shortcut for separately calling the register-suffixes rule with the given suffixes and the set-generated-target-suffix rule with the first given suffix.

  2. rule register-suffixes ( suffixes + : type )

    Specifies that files with suffix from suffixes be recognized as targets of type type. Issues an error if a different type is already specified for any of the suffixes.

  3. rule registered ( type )

    Returns true iff type has been registered.

  4. rule validate ( type )

    Issues an error if type is unknown.

  5. rule set-scanner ( type : scanner )

    Sets a scanner class that will be used for this type.

  6. rule get-scanner ( type : property-set )

    Returns a scanner instance appropriate to type and property-set.

  7. rule base ( type )

    Returns a base type for the given type or nothing in case the given type is not derived.

  8. rule all-bases ( type )

    Returns the given type and all of its base types in order of their distance from type.

  9. rule all-derived ( type )

    Returns the given type and all of its derived types in order of their distance from type.

  10. rule is-derived ( type base )

    Returns true if type is equal to base or has base as its direct or indirect base.

  11. rule set-generated-target-suffix ( type : properties * : suffix )

    Sets a file suffix to be used when generating a target of type with the specified properties. Can be called with no properties if no suffix has already been specified for the type. The suffix parameter can be an empty string ("") to indicate that no suffix should be used.

    Note that this does not cause files with the suffix to be automatically recognized as being of the type. Two different types can use the same suffix for their generated files, but only one type can be auto-detected for a file with that suffix. The user should explicitly specify which one using the register-suffixes rule.

  12. rule change-generated-target-suffix ( type : properties * : suffix )

    Change the suffix previously registered for this type/properties combination. If suffix is not yet specified, sets it.

  13. rule generated-target-suffix ( type : property-set )

    Returns the suffix used when generating a file of type with the given properties.

  14. rule set-generated-target-prefix ( type : properties * : prefix )

    Sets a target prefix that should be used when generating targets of type with the specified properties. Can be called with empty properties if no prefix for type has been specified yet.

    The prefix parameter can be empty string ("") to indicate that no prefix should be used.

    Usage example: library names use the "lib" prefix on Unix.

  15. rule change-generated-target-prefix ( type : properties * : prefix )

    Change the prefix previously registered for this type/properties combination. If prefix is not yet specified, sets it.

  16. rule generated-target-prefix ( type : property-set )

    Returns the prefix used when generating a file of type with the given properties.

  17. rule type ( filename )

    Returns the file type given its name. If there are several dots in the filename, tries each suffix. E.g. for the name "file.so.1.2" the suffixes "2", "1", and "so" will be tried.

6.6. Builtin classes

6.6.1. Class abstract-target

Base class for all abstract targets.

class abstract-target {
    rule __init__ ( name : project )
    rule name ( )
    rule project ( )
    rule location ( )
    rule full-name ( )
    rule generate ( property-set )
}

Classes derived from abstract-target:

  • project-target

  • main-target

  • basic-target

  1. rule __init__ ( name : project )

    name

    The name of the target in the Jamfile.

    project

    The project to which this target belongs.

  2. rule name ( )

    Returns the name of this target.

  3. rule project ( )

    Returns the project for this target.

  4. rule location ( )

    Returns the location where the target was declared.

  5. rule full-name ( )

    Returns a user-readable name for this target.

  6. rule generate ( property-set )

    Generates virtual targets for this abstract target using the specified properties, unless a different value of some feature is required by the target. This is an abstract method which must be overridden by derived classes.

    On success, returns:

    • a property-set with the usage requirements to be applied to dependents

    • a list of produced virtual targets, which may be empty.

      If property-set is empty, performs the default build of this target, in a way specific to the derived class.

6.6.2. Class project-target

class project-target : abstract-target {
    rule generate ( property-set )
    rule build-dir ( )
    rule main-target ( name )
    rule has-main-target ( name )
    rule find ( id : no-error ? )

    # Methods inherited from abstract-target
    rule name ( )
    rule project ( )
    rule location ( )
    rule full-name ( )
}

This class has the following responsibilities:

  • Maintaining a list of main targets in this project and building them.

  1. rule generate ( property-set )

    Overrides abstract-target.generate. Generates virtual targets for all the targets contained in this project.

    On success, returns:

    • a property-set with the usage requirements to be applied to dependents

    • a list of produced virtual targets, which may be empty.

  2. rule build-dir ( )

    Returns the root build directory of the project.

  3. rule main-target ( name )

    Returns a main-target class instance corresponding to name. Can only be called after the project has been fully loaded.

  4. rule has-main-target ( name )

    Returns whether a main-target with the specified name exists. Can only be called after the project has been fully loaded.

  5. rule find ( id : no-error ? )

    Find and return the target with the specified id, treated relative to self. The id may specify either a target or a file name, with the target taking priority. Depending on the no-error parameter, may report an error or return nothing if the target is not found.

6.6.3. Class main-target

class main-target : abstract-target {
    rule generate ( property-set )

    # Methods inherited from abstract-target
    rule name ( )
    rule project ( )
    rule location ( )
    rule full-name ( )
}

A main-target represents a named top-level target in a Jamfile.

  1. rule generate ( property-set )

    Overrides abstract-target.generate. Select an alternative for this main target, by finding all alternatives whose requirements are satisfied by property-set and picking the one with the longest requirements set. Returns the result of calling generate on that alternative.

    On success, returns:

    • a property-set with the usage requirements to be applied to dependents

    • a list of produced virtual targets, which may be empty.

6.6.4. Class basic-target

class basic-target : abstract-target {
    rule __init__ ( name : project : sources * : requirements * : default-build * : usage-requirements * )
    rule generate ( property-set )
    rule construct ( name : source-targets * : property-set )

    # Methods inherited from abstract-target
    rule name ( )
    rule project ( )
    rule location ( )
    rule full-name ( )
}

Implements the most standard way of constructing a main target alternative from sources. Allows sources to be either files or other main targets and handles generation of those dependency targets.

  1. rule __init__ ( name : project : sources * : requirements * : default-build * : usage-requirements * )

    name

    The name of the target

    project

    The project in which the target is declared.

  2. rule generate ( property-set )

    Overrides abstract-target.generate. Determines final build properties, generates sources, and calls construct. This method should not be overridden.

    On success, returns:

    • a property-set with the usage requirements to be applied to dependents

    • a list of produced virtual targets, which may be empty.

  3. rule construct ( name : source-targets * : property-set )

    Constructs virtual targets for this abstract target. Returns a usage-requirements property-set and a list of virtual targets. Should be overridden in derived classes.

6.6.5. Class typed-target

class typed-target : basic-target {
    rule __init__ ( name : project : type : sources * : requirements * : default-build * : usage-requirements * )
    rule type ( )
    rule construct ( name : source-targets * : property-set )

    # Methods inherited from abstract-target
    rule name ( )
    rule project ( )
    rule location ( )
    rule full-name ( )

    # Methods inherited from basic-target
    rule generate ( property-set )
  }

typed-target is the most common kind of target alternative. Rules for creating typed targets are defined automatically for each type.

  1. rule __init__ ( name : project : type : sources * : requirements * : default-build * : usage-requirements * )

    name

    The name of the target

    project

    The project in which the target is declared.

    type

    The type of the target.

  2. rule type ( )

    Returns the type of the target.

  3. rule construct ( name : source-targets * : property-set )

    Implements basic-target.construct. Attempts to create a target of the correct type using generators appropriate for the given property-set. Returns a property-set containing the usage requirements and a list of virtual targets.

    This function is invoked automatically by basic-target.generate and should not be called directly by users.

6.6.6. Class property-set

Class for storing a set of properties.

class property-set {
    rule raw ( )
    rule str ( )
    rule propagated ( )
    rule add ( ps )
    rule add-raw ( properties * )
    rule refine ( ps )
    rule get ( feature )
}

There is a one-to-one correspondence between identity and value: no two distinct instances of the class are equal. To maintain this property, the 'property-set.create' rule should be used to create new instances. Instances are immutable.
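
For example (a minimal sketch): property-set.create returns the single instance corresponding to a given set of properties, and the accessors can then be invoked on it:

import property-set ;

local ps = [ property-set.create <optimization>speed <define>FOO ] ;
echo [ $(ps).get <define> ] ;  # FOO
echo [ $(ps).raw ] ;           # the stored properties, e.g. <define>FOO <optimization>speed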

  1. rule raw ( )

    Returns a Jam list of the stored properties.

  2. rule str ( )

    Returns the string representation of the stored properties.

  3. rule propagated ( )

    Returns a property-set containing all the propagated properties in this property-set.

  4. rule add ( ps )

    Returns a new property-set containing the union of the properties in this property-set and in ps.

    If ps contains non-free properties that should override the values in this object, use refine instead.

  5. rule add-raw ( properties * )

    Like add, except that it takes a list of properties instead of a property-set.

  6. rule refine ( ps )

    Refines properties by overriding any non-free and non-conditional properties for which a different value is specified in ps. Returns the resulting property-set.

  7. rule get ( feature )

    Returns all the values of feature.

6.7. Build process

The general overview of the build process was given in the user documentation. This section provides additional details, and some specific rules.

To recap, building a target with specific properties includes the following steps:

  1. applying the default build,

  2. selecting the main target alternative to use,

  3. determining the "common" properties,

  4. building targets referred to by the sources list and dependency properties,

  5. adding the usage requirements produced when building dependencies to the "common" properties,

  6. building the target using generators,

  7. computing the usage requirements to be returned.

6.7.1. Alternative selection

When a target has several alternatives, one of them must be selected. The process is as follows:

  1. For each alternative, its condition is defined as the set of base properties in its requirements. Conditional properties are excluded.

  2. An alternative is viable only if all properties in its condition are present in the build request.

  3. If there’s only one viable alternative, it’s chosen. Otherwise, an attempt is made to find the best alternative. An alternative a is better than another alternative b, if the set of properties in b’s condition is a strict subset of the set of properties of a’s condition. If one viable alternative is better than all the others, it’s selected. Otherwise, an error is reported.
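
For example, given the following two alternatives, a build request containing <toolset>gcc selects the first one and a request containing <toolset>msvc selects the second (the source file names are only illustrative):

exe app : app-gcc.cpp : <toolset>gcc ;
exe app : app-msvc.cpp : <toolset>msvc ;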

6.7.2. Determining common properties

"Common" properties is a somewhat artificial term. This is the intermediate property set from which both the build request for dependencies and the properties for building the target are derived.

Since the default build and alternatives are already handled, we have only two inputs: the build request and the requirements. Here are the rules about common properties.

  1. Non-free features can have only one value.

  2. A non-conditional property in the requirements is always present in common properties.

  3. A property in the build request is present in common properties, unless it is overridden by a property in the requirements.

  4. If either the build request, or the requirements (non-conditional or conditional) include an expandable property (either composite, or with a specified sub-feature value), the behavior is equivalent to explicitly adding all the expanded properties to the build request or the requirements respectively.

  5. If the requirements include a conditional property, and the condition of this property is true in the context of common properties, then the conditional property should be in common properties as well.

  6. If no value for a feature is given by the other rules here, it has its default value in common properties.

These rules are declarative. They don’t specify how to compute the common properties. However, they provide enough information for the user. The important point is the handling of conditional requirements. The condition can be satisfied either by a property in the build request, by non-conditional requirements, or even by another conditional property. For example, the following works as expected:

exe a : a.cpp
      : <toolset>gcc:<variant>release
        <variant>release:<define>FOO ;

6.7.3. Target Paths

Several factors determine the location of a concrete file target. All files in a project are built under the directory bin unless this is overridden by the build-dir project attribute. Under bin is a path that depends on the properties used to build each target. This path is uniquely determined by all non-free, non-incidental properties. For example, given a property set containing: <toolset>gcc <toolset-gcc:version>4.6.1 <variant>debug <warnings>all <define>_DEBUG <include>/usr/local/include <link>static, the path will be gcc-4.6.1/debug/link-static. <warnings> is an incidental feature and <define> and <include> are free features, so they do not affect the path.

Sometimes the paths produced by B2 can become excessively long. There are a couple of command line options that can help with this. --abbreviate-paths reduces each element to no more than five characters. For example, link-static becomes lnk-sttc. The --hash option reduces the path to a single directory using an MD5 hash.
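
For example, either option can be added directly to the build command (the toolset=gcc request is only illustrative):

b2 toolset=gcc --abbreviate-paths
b2 toolset=gcc --hash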

There are two features that affect the build directory. The <location> feature completely overrides the default build directory. For example,

exe a : a.cpp : <location>. ;

builds all the files produced by a in the directory of the Jamfile. This is generally discouraged, as it precludes variant builds.

The <location-prefix> feature adds a prefix to the path, under the project’s build directory. For example,

exe a : a.cpp : <location-prefix>subdir ;

will create the files for a in bin/subdir/gcc-4.6.1/debug.

6.8. Definitions

6.8.1. Features and properties

A feature is a normalized (toolset-independent) aspect of a build configuration, such as whether inlining is enabled. Feature names may not contain the ‘>’ character.

Each feature in a build configuration has one or more associated values. Feature values for non-free features may not contain the punctuation characters pointy bracket (‘<’), colon (‘:’), equal sign (‘=’) and dash (‘-’). Feature values for free features may not contain the pointy bracket (‘<’) character.

A property is a (feature,value) pair, expressed as <feature>value.

A subfeature is a feature that only exists in the presence of its parent feature, and whose identity can be derived (in the context of its parent) from its value. A subfeature’s parent can never be another subfeature. Thus, features and their subfeatures form a two-level hierarchy.

A value-string for a feature F is a string of the form value-subvalue1-subvalue2…​-subvalueN, where value is a legal value for F and subvalue1…​subvalueN are legal values of some of F's subfeatures separated with dashes (‘-’). For example, the properties <toolset>gcc <toolset-version>3.0.1 can be expressed more concisely using a value-string, as <toolset>gcc-3.0.1.

A property set is a set of properties (i.e. a collection without duplicates), for instance: <toolset>gcc <runtime-link>static.

A property path is a property set whose elements have been joined into a single string separated by slashes. A property path representation of the previous example would be <toolset>gcc/<runtime-link>static.

A build specification is a property set that fully describes the set of features used to build a target.

6.8.2. Property Validity

For free features, all values are valid. For all other features, the valid values are explicitly specified, and the build system will report an error for the use of an invalid feature-value. Subproperty validity may be restricted so that certain values are valid only in the presence of certain other subproperties. For example, it is possible to specify that the <gcc-target>mingw property is only valid in the presence of <gcc-version>2.95.2.

6.8.3. Feature Attributes

Each feature has a collection of zero or more of the following attributes. Feature attributes are low-level descriptions of how the build system should interpret a feature’s values when they appear in a build request. We also refer to the attributes of properties, so that an incidental property, for example, is one whose feature has the incidental attribute.

  • incidental

    Incidental features are assumed not to affect build products at all. As a consequence, the build system may use the same file for targets whose build specification differs only in incidental features. A feature that controls a compiler’s warning level is one example of a likely incidental feature.

    Non-incidental features are assumed to affect build products, so the files for targets whose build specification differs in non-incidental features are placed in different directories as described in Target Paths.

  • propagated

    Features of this kind are propagated to dependencies. That is, if a main target is built using a propagated property, the build system attempts to use the same property when building any of its dependencies as part of that main target. For instance, when an optimized executable is requested, one usually wants it to be linked with optimized libraries. Thus, the <optimization> feature is propagated.

  • free

    Most features have a finite set of allowed values, and can only take on a single value from that set in a given build specification. Free features, on the other hand, can have several values at a time and each value can be an arbitrary string. For example, it is possible to have several preprocessor symbols defined simultaneously:

    <define>NDEBUG=1 <define>HAS_CONFIG_H=1
  • optional

    An optional feature is a feature that is not required to appear in a build specification. Every non-optional non-free feature has a default value that is used when a value for the feature is not otherwise specified, either in a target’s requirements or in the user’s build request. A feature’s default value is given by the first value listed in the feature’s declaration.

  • symmetric

    Normally a feature only generates a sub-variant directory when its value differs from its default value, leading to an asymmetric sub-variant directory structure for certain values of the feature. A symmetric feature always generates a corresponding sub-variant directory.

  • path

    The value of a path feature specifies a path. The path is treated as relative to the directory of the Jamfile where the path feature is used, and is translated appropriately by the build system when the build is invoked from a different directory.

  • implicit

    Values of implicit features alone identify the feature. For example, a user is not required to write "<toolset>gcc", but can simply write "gcc". Implicit feature names also don’t appear in variant paths, although the values do. Thus: bin/gcc/…​ as opposed to bin/toolset-gcc/…​. There should typically be only a few such features, to avoid possible name clashes.

  • composite

    Composite features actually correspond to groups of properties. For example, a build variant is a composite feature. When generating targets from a set of build properties, composite features are recursively expanded and added to the build property set, so rules can find them if necessary. Non-composite non-free features override components of composite features in a build property set.

  • dependency

    The value of a dependency feature is a target reference. When used for building a main target, the value of a dependency feature is treated as an additional dependency.

    For example, dependency features allow you to state that library A depends on library B. As a result, whenever an application links to A, it will also link to B. Specifying B as a dependency of A is different from adding B to the sources of A.

Features that are neither free nor incidental are called base features.

6.8.4. Feature Declaration

The low-level feature declaration interface is the feature rule from the feature module:

rule feature ( name : allowed-values * : attributes * )

A feature’s allowed-values may be extended with the feature.extend rule.
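
For example, a feature can be declared and later given an additional allowed value like this (a minimal sketch with a made-up feature name):

import feature ;

feature.feature my-option : foo bar : propagated ;  # declare with two allowed values
feature.extend my-option : baz ;                    # add another allowed value later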

6.8.5. Property refinement

When a target with certain properties is requested, and that target requires some set of properties, it is necessary to determine the set of properties to use for building. This process is called property refinement and is performed by these rules:

  1. Each property in the required set is added to the original property set.

  2. If the original property set includes a property with a different value of a non-free feature, that property is removed.
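
For example, refining a build request against a requirement on the same non-free feature replaces the requested value:

original property set:  <variant>debug <optimization>speed
required properties:    <optimization>space
refined result:         <variant>debug <optimization>space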

6.8.6. Conditional properties

Sometimes it’s desirable to apply certain requirements only for a specific combination of other properties. For example, one of the compilers that you use issues a pointless warning that you want to suppress by passing a command line option to it. You would not want to pass that option to other compilers. Conditional properties allow you to do just that. Their syntax is:

property ( "," property ) * ":" property

For example, the problem above would be solved by:

exe hello : hello.cpp : <toolset>yfc:<cxxflags>-disable-pointless-warning ;

The syntax also allows several properties in the condition, for example:

exe hello : hello.cpp : <os>NT,<toolset>gcc:<link>static ;

6.8.7. Target identifiers and references

A target identifier is used to denote a target. The syntax is:

target-id -> (target-name | file-name | project-id | directory-name)
              | (project-id | directory-name) "//" target-name
project-id -> path
target-name -> path
file-name -> path
directory-name -> path

This grammar allows some elements to be recognized as either:

  • the name of a target declared in the current Jamfile (note that target names may include slashes).

  • a regular file, denoted by an absolute name or a name relative to the project’s sources location.

  • a project id (at this point, all project ids start with a slash).

  • the directory of another project, denoted by an absolute name or a name relative to the current project’s location.

To determine the real meaning, the possible interpretations are checked in this order. For example, valid target ids might be:

a

target in current project

lib/b.cpp

regular file

/boost/thread

project "/boost/thread"

/home/ghost/build/lr_library//parser

target in specific project

../boost_1_61_0

project in specific directory

Rationale: the target is separated from the project by a special separator (not just a slash), because:

  • It emphasizes that projects and targets are different things.

  • It allows main target names to contain slashes.

Target reference is used to specify a source target, and may additionally specify desired properties for that target. It has this syntax:

target-reference -> target-id [ "/" requested-properties ]
requested-properties -> property-path

For example,

exe compiler : compiler.cpp libs/cmdline/<optimization>space ;

would cause the version of the cmdline library optimized for space to be linked in, even if the compiler executable is built with optimization for speed.

7. Utilities

7.1. Debugger

7.1.1. Overview

B2 comes with a debugger for Jamfiles. To run the debugger, start B2 with b2 -dconsole.

$ b2 -dconsole
(b2db) break gcc.init
Breakpoint 1 set at gcc.init
(b2db) run
Starting program: /usr/bin/b2
Breakpoint 1, gcc.init ( ) at /usr/share/b2/tools/gcc.jam:74
74      local tool-command = ;
(b2db) quit

7.1.2. Running the Program

The run command is used to start a new b2 subprocess for debugging. The arguments to run are passed on the command line. If a child process is already running, it will be terminated before the new child is launched.

When the program is paused continue will resume execution. The step command will advance the program by a single statement, stopping on entry to another function or return from the current function. next is like step except that it skips over function calls. finish executes until the current function returns.

The kill command terminates the current child immediately.

7.1.3. Breakpoints

Breakpoints are set using the break command. The location of the breakpoint can be specified as either the name of a function (including the module name) or a file name and line number of the form file:line. When a breakpoint is created it is given a unique id which is used to identify it for other commands.

(b2db) break Jamfile:10
Breakpoint 1 set at Jamfile:10
(b2db) break msvc.init
Breakpoint 2 set at msvc.init

A breakpoint can be temporarily disabled using the disable command. While a breakpoint is disabled, the child will not stop when it is hit. A disabled breakpoint can be activated again with enable.

(b2db) disable 1
(b2db) enable 1

Breakpoints can be removed permanently with delete or clear. The difference between them is that delete takes the breakpoint id while clear takes the location of the breakpoint as originally specified to break.

(b2db) clear Jamfile:10
Deleted breakpoint 1
(b2db) delete 2

7.1.4. Examining the Stack

The backtrace command will print a summary of every frame on the stack.

The print command can be used to show the value of an expression.

(b2db) print [ modules.peek : ARGV ]
/usr/bin/b2 toolset=msvc install
(b2db) print $(__file__)
Jamfile.jam

7.1.5. Miscellaneous Commands

quit exits the debugger. help describes the available commands.

8. Extender Manual

8.1. Introduction

This section explains how to extend B2 to accommodate your local requirements — primarily to add support for non-standard tools you have. Before we start, be sure you have read and understood the concept of a metatarget, described in Concepts, which is critical to understanding the remaining material.

The current version of B2 has three levels of targets, listed below.

metatarget

Object that is created from declarations in Jamfiles. May be called with a set of properties to produce concrete targets.

concrete target

Object that corresponds to a file or an action.

jam target

Low-level concrete target that is specific to the Boost.Jam build engine. Essentially a string — most often the name of a file.

In most cases, you will only have to deal with concrete targets and the process that creates concrete targets from metatargets. Extending the metatarget level is rarely required. The jam targets are typically only used inside the command line patterns.

All of the Boost.Jam target-related builtin functions, like DEPENDS or ALWAYS operate on jam targets. Applying them to metatargets or concrete targets has no effect.

8.1.1. Metatargets

A metatarget is an object that records information specified in a Jamfile, such as the metatarget kind, name, sources and properties, and can be called with specific properties to generate concrete targets. At the code level it is represented by an instance of a class derived from abstract-target. [4]

The generate method takes the build properties (as an instance of the property-set class) and returns a list containing:

  • As the front element — the usage requirements from this invocation (an instance of property-set)

  • As subsequent elements — the created concrete targets (instances of the virtual-target class).

It’s possible to lookup a metatarget by target-id using the targets.resolve-reference function, and the targets.generate-from-reference function can both lookup and generate a metatarget.

The abstract-target class has three immediate derived classes:

  • project-target that corresponds to a project and is not intended for further subclassing. The generate method of this class builds all targets in the project that are not marked as explicit.

  • main-target corresponds to a target in a project and contains one or more target alternatives. This class also should not be subclassed. The generate method of this class selects an alternative to build, and calls the generate method of that alternative.

  • basic-target corresponds to a specific target alternative. This is a base class with a number of derived classes. The generate method processes the target requirements and requested build properties to determine the final properties for the target, builds all sources, and finally calls the abstract construct method with the list of source virtual targets and the final properties.

The instances of the project-target and main-target classes are created implicitly — when loading a new Jamfile, or when a new target alternative with an as-yet unknown name is created. Instances of the classes derived from basic-target are typically created when a Jamfile calls a metatarget rule, such as exe.

It is permissible to create a custom class derived from basic-target and a new metatarget rule that creates instances of that class. However, in the majority of cases, a specific subclass of basic-target, typed-target, is used. That class is associated with a type and relays to generators to construct concrete targets of that type. This process is explained below. When a new type is declared, a new metatarget rule is automatically defined. That rule creates a new instance of typed-target, associated with that type.

8.1.2. Concrete targets

Concrete targets are represented by instances of classes derived from virtual-target. The most commonly used subclass is file-target. A file target is associated with an action that creates it — an instance of the action class. The action, in turn, holds a list of source targets. It also holds the property-set instance with the build properties that should be used for the action.

Here’s an example of creating a target from another target, source:

local a = [ new action $(source) : common.copy : $(property-set) ] ;
local t = [ new file-target $(name) : CPP : $(project) : $(a) ] ;

The first line creates an instance of the action class. The first parameter is the list of sources. The second parameter is the name of a jam-level action. The third parameter is the property-set applying to this action. The second line creates a target. We specify a name, a type and a project. We also pass the action object created earlier. If the action creates several targets, we can repeat the second line several times.

In some cases, code that creates concrete targets may be invoked more than once with the same properties. Returning two different instances of file-target that correspond to the same file will clearly result in problems. Therefore, whenever returning targets you should pass them via the virtual-target.register function. Besides allowing B2 to track which virtual targets got created for each metatarget, this will also replace targets with previously created identical ones, as necessary.[5] Here are a couple of examples:

return [ virtual-target.register $(t) ] ;
return [ sequence.transform virtual-target.register : $(targets) ] ;

8.1.3. Generators

In theory, every kind of metatarget in B2 (like exe, lib or obj) could be implemented by writing a new metatarget class that, independently of the other code, figures what files to produce and what commands to use. However, that would be rather inflexible. For example, adding support for a new compiler would require editing several metatargets.

In practice, most files have specific types, and most tools consume and produce files of specific types. To take advantage of this fact, B2 defines the concepts of target type and generator, and has a special metatarget class typed-target. A target type is merely an identifier. It is associated with a set of file extensions that correspond to that type. A generator is an abstraction of a tool. It advertises the types it produces and, if called with a set of input targets, tries to construct output targets of the advertised types. Finally, typed-target is associated with a specific target type, and relays to the generators for that type.

A generator is an instance of a class derived from generator. The generator class itself is suitable for common cases. You can define derived classes for custom scenarios.

8.2. Example: 1-to-1 generator

Say you’re writing an application that generates C++ code. If you ever did this, you know that it’s not nice. Embedding large portions of C++ code in string literals is very awkward. A much better solution is:

  1. Write the template of the code to be generated, leaving placeholders at the points that will change

  2. Access the template in your application and replace placeholders with appropriate text.

  3. Write the result.

It’s quite easy to achieve. You write special verbatim files that are just C++, except that the very first line of the file contains the name of a variable that should be generated. A simple tool is created that takes a verbatim file and creates a cpp file with a single char* variable whose name is taken from the first line of the verbatim file and whose value is the file’s properly quoted content.

Let’s see what B2 can do.

First off, B2 has no idea about "verbatim files". So, you must register a new target type. The following code does it:

import type ;
type.register VERBATIM : verbatim ;

The first parameter to type.register gives the name of the declared type. By convention, it’s uppercase. The second parameter is the suffix for files of this type. So, if B2 sees code.verbatim in a list of sources, it knows that it’s of type VERBATIM.

Next, you tell B2 that the verbatim files can be transformed into C++ files in one build step. A generator is a template for a build step that transforms targets of one type (or set of types) into another. Our generator will be called verbatim.inline-file; it transforms VERBATIM files into CPP files:

import generators ;
generators.register-standard verbatim.inline-file : VERBATIM : CPP ;

Lastly, you have to inform B2 about the shell commands used to make that transformation. That’s done with an actions declaration.

actions inline-file
{
    "./inline-file.py" $(<) $(>)
}

Now, we’re ready to tie it all together. Put all the code above in file verbatim.jam, add import verbatim ; to Jamroot.jam, and it’s possible to write the following in your Jamfile:

exe codegen : codegen.cpp class_template.verbatim usage.verbatim ;

The listed verbatim files will be automatically converted into C++ source files, compiled and then linked to the codegen executable.

In subsequent sections, we will extend this example, and review all the mechanisms in detail. The complete code is available in the example/customization directory.

8.3. Target types

The first thing we did in the introduction was declaring a new target type:

import type ;
type.register VERBATIM : verbatim ;

The type is the most important property of a target. B2 can automatically generate necessary build actions only because you specify the desired type (using the different main target rules), and because B2 can guess the type of sources from their extensions.

The first two parameters for the type.register rule are the name of new type and the list of extensions associated with it. A file with an extension from the list will have the given target type. In the case where a target of the declared type is generated from other sources, the first specified extension will be used.

Sometimes you want to change the suffix used for generated targets depending on build properties, such as the toolset. For example, some compilers use the extension elf for executable files. You can use the type.set-generated-target-suffix rule:

type.set-generated-target-suffix EXE : <toolset>elf : elf ;

A new target type can be inherited from an existing one.

type.register PLUGIN : : SHARED_LIB ;

The above code defines a new type derived from SHARED_LIB. Initially, the new type inherits all the properties of the base type - in particular, the generators and the suffix. Typically, you’ll change the new type in some way. For example, using type.set-generated-target-suffix you can set the suffix for the new type. Or you can write a special generator for the new type. For example, it can generate additional meta-information for the plugin. Either way, the PLUGIN type can be used whenever SHARED_LIB can. For example, you can directly link plugins to an application.

A type can be defined as "main", in which case B2 will automatically declare a main target rule for building targets of that type. More details can be found later.

8.4. Scanners

Sometimes, a file can refer to other files via some include system. To make B2 track dependencies between included files, you need to provide a scanner. The primary limitation is that only one scanner can be assigned to a target type.

First, we need to declare a new class for the scanner:

class verbatim-scanner : common-scanner
{
    rule pattern ( )
    {
        return "//###include[ ]*\"([^\"]*)\"" ;
    }
}

All the complex logic is in the common-scanner class, and you only need to override the method that returns the regular expression to be used for scanning. The parentheses in the regular expression indicate which part of the string is the name of the included file. Only the first parenthesized group in the regular expression will be recognized; if you can’t express everything you want that way, you can return multiple regular expressions, each of which contains a parenthesized group to be matched.

After that, we need to register our scanner class:

scanner.register verbatim-scanner : include ;

The value of the second parameter, in this case include, specifies the properties that contain the list of paths that should be searched for the included files.

Finally, we assign the new scanner to the VERBATIM target type:

type.set-scanner VERBATIM : verbatim-scanner ;

That’s enough for scanning include dependencies.

8.5. Tools and generators

This section will describe how B2 can be extended to support new tools.

For each additional tool, a B2 object called a generator must be created. That object has specific types of targets that it accepts and produces. Using that information, B2 is able to automatically invoke the generator. For example, if you declare a generator that takes a target of the type D and produces a target of the type OBJ, then placing a file with extension .d in a list of sources will cause B2 to invoke your generator, and then to link the resulting object file into an application. (Of course, this requires that you specify that the .d extension corresponds to the D type.)

Each generator should be an instance of a class derived from the generator class. In the simplest case, you don’t need to create a derived class, but simply create an instance of the generator class. Let’s review the example we’ve seen in the introduction.

import generators ;
generators.register-standard verbatim.inline-file : VERBATIM : CPP ;
actions inline-file
{
    "./inline-file.py" $(<) $(>)
}

We declare a standard generator, specifying its id, the source type and the target type. When invoked, the generator will create a target of type CPP with a source target of type VERBATIM as the only source. But what command will be used to actually generate the file? In B2, actions are specified using named "actions" blocks, and the name of the action block should be specified when creating targets. By convention, generators use the same name for the action block as their own id. So, in the above example, the "inline-file" actions block will be used to convert the source into the target.

There are two primary kinds of generators: standard and composing, which are registered with the generators.register-standard and the generators.register-composing rules, respectively. For example:

generators.register-standard verbatim.inline-file : VERBATIM : CPP ;
generators.register-composing mex.mex : CPP LIB : MEX ;

The first (standard) generator takes a single source of type VERBATIM and produces a result. The second (composing) generator takes any number of sources, which can have either the CPP or the LIB type. Composing generators are typically used for generating top-level target types. For example, the first generator invoked when building an exe target is a composing generator corresponding to the proper linker.

You should also know about two specific functions for registering generators: generators.register-c-compiler and generators.register-linker. The first sets up header dependency scanning for C files, and the second handles various complexities like searched libraries. For that reason, you should always use those functions when adding support for compilers and linkers.

Custom generator classes

The standard generators allow you to specify source and target types, an action, and a set of flags. If you need anything more complex, you need to create a new generator class with your own logic. Then, you have to create an instance of that class and register it. Here’s an example of how you can create your own generator class:

class custom-generator : generator
{
    rule __init__ ( * : * )
    {
        generator.__init__ $(1) : $(2) : $(3) : $(4) : $(5) : $(6) : $(7) : $(8) : $(9) ;
    }

}

generators.register
  [ new custom-generator verbatim.inline-file : VERBATIM : CPP ] ;

This generator will work exactly like the verbatim.inline-file generator we’ve defined above, but it’s possible to customize the behavior by overriding methods of the generator class.

There are two methods of interest. The run method is responsible for the overall process - it takes a number of source targets, converts them to the right types, and creates the result. The generated-targets method is called when all sources are converted to the right types to actually create the result.

The generated-targets method can be overridden when you want to add additional properties to the generated targets or use additional sources. For a real-life example, suppose you have a program analysis tool that should be given the name of an executable and the list of all its sources. Naturally, you don’t want to list all source files manually. Here’s how the generated-targets method can find the list of sources automatically:

class itrace-generator : generator {
...
    rule generated-targets ( sources + : property-set : project name ? )
    {
        local leaves ;
        local temp = [ virtual-target.traverse $(sources[1]) : : include-sources ] ;
        for local t in $(temp)
        {
            if ! [ $(t).action ]
            {
                leaves += $(t) ;
            }
        }
        return [ generator.generated-targets $(sources) $(leaves)
          : $(property-set) : $(project) $(name) ] ;
    }
}
generators.register [ new itrace-generator nm.itrace : EXE : ITRACE ] ;

The generated-targets method will be called with a single source target of type EXE. The call to virtual-target.traverse will return all targets the executable depends on, and we further find files that are not produced from anything. The found targets are added to the sources.

The run method can be overridden to completely customize the way the generator works. In particular, the conversion of sources to the desired types can be completely customized. Here’s another real example. Tests for the Boost Python library usually consist of two parts: a Python program and a C++ file. The C++ file is compiled into a Python extension that is loaded by the Python program. But in the likely case that both files have the same name, the created Python extension must be renamed. Otherwise, the Python program will import itself, not the extension. Here’s how it can be done:

rule run ( project name ? : property-set : sources * )
{
    local python ;
    for local s in $(sources)
    {
        if [ $(s).type ] = PY
        {
            python = $(s) ;
        }
    }

    local libs ;
    for local s in $(sources)
    {
        if [ type.is-derived [ $(s).type ] LIB ]
        {
            libs += $(s) ;
        }
    }

    local new-sources ;
    for local s in $(sources)
    {
        if [ type.is-derived [ $(s).type ] CPP ]
        {
            local name = [ $(s).name ] ;    # get the target's basename
            if $(name) = [ $(python).name ]
            {
                name = $(name)_ext ;        # rename the target
            }
            new-sources += [ generators.construct $(project) $(name) :
              PYTHON_EXTENSION : $(property-set) : $(s) $(libs) ] ;
        }
    }

    return [ construct-result $(python) $(new-sources) : $(project) $(name)
                 : $(property-set) ] ;
}

First, we separate all sources into Python files, libraries and C++ sources. For each C++ source, we create a separate Python extension by calling generators.construct and passing the C++ source and the libraries. At this point, we also change the extension’s name, if necessary.

8.6. Features

Often, we need to control the options passed to the invoked tools. This is done with features. Consider an example:

# Declare a new free feature
import feature : feature ;
feature verbatim-options : : free ;

# Cause the value of the 'verbatim-options' feature to be
# available as 'OPTIONS' variable inside verbatim.inline-file
import toolset : flags ;
flags verbatim.inline-file OPTIONS <verbatim-options> ;

# Use the "OPTIONS" variable
actions inline-file
{
    "./inline-file.py" $(OPTIONS) $(<) $(>)
}

We first define a new feature. Then, the flags invocation says that whenever verbatim.inline-file action is run, the value of the verbatim-options feature will be added to the OPTIONS variable, and can be used inside the action body. You’d need to consult online help (--help) to find all the features of the toolset.flags rule.

Although you can define any set of features and interpret their values in any way, B2 suggests the following coding standard for designing features.

Most features should have a fixed set of values that is portable (tool neutral) across the class of tools they are designed to work with. The user does not have to adjust the values for an exact tool. For example, <optimization>speed has the same meaning for all C++ compilers and the user does not have to worry about the exact options passed to the compiler’s command line.

Besides such portable features there are special 'raw' features that allow the user to pass any value to the command line parameters for a particular tool, if so desired. For example, the <cxxflags> feature allows you to pass any command line options to a C++ compiler. The <include> feature allows you to pass any string preceded by -I and the interpretation is tool-specific. (See Can I get capture external program output using a Boost.Jam variable? for an example of very smart usage of that feature). Of course one should always strive to use portable features, but these are still provided as a backdoor just to make sure B2 does not take away any control from the user.

Using portable features is a good idea because:

  • When a portable feature is given a fixed set of values, you can build your project with two different settings of the feature and B2 will automatically use two different directories for generated files. B2 does not try to separate targets built with different raw options.

  • Unlike with “raw” features, you don’t need to use specific command-line flags in your Jamfile, and it will be more likely to work with other tools.

Steps for adding a feature

Adding a feature requires three steps:

  1. Declaring a feature. For that, the "feature.feature" rule is used. You have to decide on the set of feature attributes:

    • if you want a feature value set for one target to automatically propagate to its dependent targets then make it “propagated”.

    • if a feature does not have a fixed list of values, it must be “free.” For example, the include feature is a free feature.

    • if a feature is used to refer to a path relative to the Jamfile, it must be a “path” feature. Such features will also get their values automatically converted to B2’s internal path representation. For example, include is a path feature.

    • if a feature is used to refer to some target, it must be a “dependency” feature.

  2. Representing the feature value in a target-specific variable. Build actions are command templates modified by Boost.Jam variable expansions. The toolset.flags rule sets a target-specific variable to the value of a feature.

  3. Using the variable. The variable set in step 2 can be used in a build action to form command parameters or files.

Another example

Here’s another example. Let’s see how we can make a feature that refers to a target. For example, when linking dynamic libraries on Windows, one sometimes needs to specify a "DEF file", telling what functions should be exported. It would be nice to use this file like this:

lib a : a.cpp : <def-file>a.def ;

Actually, this feature is already supported, but anyway…​

  1. Since the feature refers to a target, it must be "dependency".

    feature def-file : : free dependency ;
  2. One of the toolsets that cares about DEF files is msvc. The following line should be added to it.

    flags msvc.link DEF_FILE <def-file> ;
  3. Since the DEF_FILE variable is not used by the msvc.link action, we need to modify it to be:

    actions link bind DEF_FILE
    {
        $(.LD) .... /DEF:$(DEF_FILE) ....
    }

    Note the bind DEF_FILE part. It tells B2 to translate the internal target name in DEF_FILE to a corresponding filename in the link action. Without it the expansion of $(DEF_FILE) would be a strange symbol that is not likely to make sense for the linker.

    We are almost done, except for adding the following code to msvc.jam:

    rule link
    {
        DEPENDS $(<) : [ on $(<) return $(DEF_FILE) ] ;
    }

    This is a workaround for a bug in B2 engine, which will hopefully be fixed one day.

Variants and composite features

Sometimes you want to create a shortcut for some set of features. For example, release is a value of <variant> and is a shortcut for a set of features.

It is possible to define your own build variants. For example:

variant crazy : <optimization>speed <inlining>off
                <debug-symbols>on <profiling>on ;

will define a new variant with the specified set of properties. You can also extend an existing variant:

variant super_release : release : <define>USE_ASM ;

In this case, super_release will expand to all properties specified by release, and the additional one you’ve specified.
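
Once defined, a custom variant can be requested like any built-in one, for example on the command line (assuming the definitions above are in your Jamroot.jam):

b2 variant=super_release

Since variant values are implicit, the shorthand b2 super_release also works.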

You are not restricted to using the variant feature only. Here’s an example that defines a brand-new feature:

feature parallelism : mpi fake none : composite link-incompatible ;
feature.compose <parallelism>mpi : <library>/mpi//mpi/<parallelism>none ;
feature.compose <parallelism>fake : <library>/mpi//fake/<parallelism>none ;

This will allow you to specify the value of feature parallelism, which will expand to link to the necessary library.
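
A target can then simply request the desired value in its requirements and the corresponding library from the example above is linked in automatically (the target itself is illustrative):

exe solver : solver.cpp : <parallelism>mpi ;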

8.7. Main target rules

A main target rule (e.g. “exe” or “lib”) creates a top-level target. It is quite likely that you will want to declare your own, and there are two ways to do that.

The first way applies when your target rule should just produce a target of specific type. In that case, a rule is already defined for you! When you define a new type, B2 automatically defines a corresponding rule. The name of the rule is obtained from the name of the type, by down-casing all letters and replacing underscores with dashes. For example, if you create a module obfuscate.jam containing:

import type ;
type.register OBFUSCATED_CPP  : ocpp ;

import generators ;
generators.register-standard obfuscate.file : CPP : OBFUSCATED_CPP ;

and import that module, you’ll be able to use the rule "obfuscated-cpp" in Jamfiles, which will convert source to the OBFUSCATED_CPP type.
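
For instance, a Jamfile could then declare a target of the new type directly (the target and source names are illustrative):

import obfuscate ;
obfuscated-cpp secret : secret.cpp ;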

The second way is to write a wrapper rule that calls any of the existing rules. For example, suppose you have only one library per directory and want all cpp files in the directory to be compiled into that library. You can achieve this effect using:

lib codegen : [ glob *.cpp ] ;

If you want to make it even simpler, you could add the following definition to the Jamroot.jam file:

rule glib ( name : extra-sources * : requirements * )
{
    lib $(name) : [ glob *.cpp ] $(extra-sources) : $(requirements) ;
}

allowing you to reduce the Jamfile to just

glib codegen ;

Note that because you can associate a custom generator with a target type, the logic of building can be rather complicated. For example, the boostbook module declares a target type BOOSTBOOK_MAIN and a custom generator for that type. You can use that as example if your main target rule is non-trivial.

8.8. Toolset modules

If your extensions will be used only on one project, they can be placed in a separate .jam file and imported by your Jamroot.jam. If the extensions will be used on many projects, users will thank you for a finishing touch.

The using rule provides a standard mechanism for loading and configuring extensions. To make it work, your module should provide an init rule. The rule will be called with the same parameters that were passed to the using rule. The set of allowed parameters is determined by you. For example, you can allow the user to specify paths, tool versions, and other options.

Here are some guidelines that help to make B2 more consistent (a minimal init sketch follows this list):

  • The init rule should never fail. Even if the user provided an incorrect path, you should emit a warning and go on. Configuration may be shared between different machines, and wrong values on one machine can be OK on another.

  • Prefer specifying the command to be executed to specifying the tool’s installation path. First of all, this gives more control: it’s possible to specify

    /usr/bin/g++-snapshot
    time g++

    as the command. Second, while some tools have a logical "installation root", it’s better if the user doesn’t have to remember whether a specific tool requires a full command or a path.

  • Check for multiple initialization. A user can try to initialize the module several times. You need to check for this and decide what to do. Typically, unless you support several versions of a tool, duplicate initialization is a user error. If the tool’s version can be specified during initialization, make sure the version is either always specified, or never specified (in which case the tool is initialized only once). For example, if you allow:

    using yfc ;
    using yfc : 3.3 ;
    using yfc : 3.4 ;

    Then it’s not clear if the first initialization corresponds to version 3.3 of the tool, version 3.4 of the tool, or some other version. This can lead to building twice with the same version.

  • If possible, init must be callable with no parameters. In that case, it should try to autodetect all the necessary information, for example by looking for the tool in PATH or in common installation locations. Often this is possible and allows the user to simply write:

    using yfc ;
  • Consider using facilities in the tools/common module. You can take a look at how tools/gcc.jam uses that module in the init rule.
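
As a concrete illustration of these guidelines, here is a minimal sketch of an init rule for a hypothetical yfc module; it uses only core facilities and does not correspond to any real toolset:

rule init ( command * )
{
    # Never fail: warn about duplicate initialization and carry on.
    if $(.initialized)
    {
        ECHO "warning: yfc is already configured" ;
    }
    else
    {
        .initialized = true ;
        # Allow 'using yfc ;' with no parameters by falling back to a
        # command assumed to be found in PATH.
        command ?= yfc ;
        .command = $(command) ;
    }
}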

9. Frequently Asked Questions

9.1. How do I get the current value of feature in Jamfile?

This is not possible, since a Jamfile does not have a "current" value of any feature, be it toolset, build variant or anything else. For a single run of B2, any given main target can be built with several property sets. For example, a user can request two build variants on the command line, or one library may be built as shared when used from one application and as static when used from another. Each Jamfile is read only once, so generally there is no single value of a feature you can access in a Jamfile.

A feature has a specific value only when building a target, and there are two ways you can use that value: through conditional properties (including indirect conditional requirements via the <conditional> property), or through a custom generator and custom main target type.
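
For example, an indirect conditional requirement lets a rule inspect the actual properties a target is being built with. The target and rule names below are illustrative only:

exe app : app.cpp : <conditional>@set-debug-define ;

rule set-debug-define ( properties * )
{
    local result ;
    if <variant>debug in $(properties)
    {
        result += <define>APP_DEBUG ;
    }
    return $(result) ;
}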

9.2. I am getting a "Duplicate name of actual target" error. What does that mean?

The most likely case is that you are trying to compile the same file twice, with almost the same, but differing properties. For example:

exe a : a.cpp : <include>/usr/local/include ;
exe b : a.cpp ;

The above snippet requires two different compilations of a.cpp, which differ only in their include property. Since the include feature is declared as free, B2 does not create a separate build directory for each of its values, so both builds would produce object files in the same build directory. Ignoring this and compiling the file only once would be dangerous, as different includes could potentially cause completely different code to be compiled.

To solve this issue, you need to decide if the file should be compiled once or twice.

  1. To compile the file only once, make sure that properties are the same for both target requests:

    exe a : a.cpp : <include>/usr/local/include ;
    exe b : a.cpp : <include>/usr/local/include ;

    or:

    alias a-with-include : a.cpp : <include>/usr/local/include ;
    exe a : a-with-include ;
    exe b : a-with-include ;

    or, if you do not want the include property to affect how any other sources added to the a and b executables are compiled:

    obj a-obj : a.cpp : <include>/usr/local/include ;
    exe a : a-obj ;
    exe b : a-obj ;

    Note that in both of these cases the include property will be applied only for building these object files and not any other sources that might be added for targets a and b.

  2. To compile the file twice, you can tell B2 to compile it to two separate object files like so:

    obj a_obj : a.cpp : <include>/usr/local/include ;
    obj b_obj : a.cpp ;
    exe a : a_obj ;
    exe b : b_obj ;

    or you can make the object file targets local to the main target:

    exe a : [ obj a_obj : a.cpp : <include>/usr/local/include ] ;
    exe b : [ obj a_obj : a.cpp ] ;

    which will cause B2 to actually change the generated object file names a bit for you and thus avoid any conflicts.

    Note that in both of these cases the include property will be applied only for building these object files and not any other sources that might be added for targets a and b.

A good question is why B2 cannot use some of the above approaches automatically. The problem is that such magic would only help in half of the cases, while in the other half it would be silently doing the wrong thing. It is simpler and safer to ask the user to clarify their intention in such cases.

9.3. Accessing environment variables

Many users would like to use environment variables in Jamfiles, for example, to control the location of external libraries. In many cases it is better to declare those external libraries in the site-config.jam file, as documented in the recipes section. However, if the users already have the environment variables set up, it may not be convenient for them to set up their site-config.jam files as well and using the environment variables might be reasonable.

Boost.Jam automatically imports all environment variables into its built-in .ENVIRON module so users can read them from there directly or by using the helper os.environ rule. For example:

import os ;
local unga-unga = [ os.environ UNGA_UNGA ] ;
ECHO $(unga-unga) ;

or a bit more realistic:

import os ;
local SOME_LIBRARY_PATH = [ os.environ SOME_LIBRARY_PATH ] ;
exe a : a.cpp : <include>$(SOME_LIBRARY_PATH) ;

9.4. How to control properties order?

For internal reasons, B2 sorts all the properties alphabetically. This means that if you write:

exe a : a.cpp : <include>b <include>a ;

then the command line will first mention the a include directory, and then b, even though they are specified in the opposite order. In most cases, the user does not care. But sometimes the order of includes, or other properties, is important. For such cases, a special syntax is provided:

exe a : a.cpp : <include>a&&b ;

The && symbols separate property values and specify that their order should be preserved. You are advised to use this feature only when the order of properties really matters and not as a convenient shortcut. Using it everywhere might negatively affect performance.

9.5. How to control the library linking order on Unix?

On Unix-like operating systems, the order in which static libraries are specified when invoking the linker is important, because, by default, the linker uses one pass through the list of libraries. Passing the libraries in an incorrect order will lead to a link error. Further, this behavior is often used to make one library override symbols from another. So, sometimes it is necessary to force a specific library linking order.

B2 tries to automatically compute the right order. The primary rule is that if library a "uses" library b, then library a will appear on the command line before library b. Library a is considered to use b if b is present among a’s sources or if its use is listed in a’s requirements. To explicitly specify the use relationship one can use the <use> feature. For example, both of the following lines will cause a to appear before b on the command line:

lib a : a.cpp b ;
lib a : a.cpp : <use>b ;

The same approach works for searched libraries as well:

lib z ;
lib png : : <use>z ;
exe viewer : viewer png z ;

9.6. Can I capture external program output using a Boost.Jam variable?

The SHELL builtin rule may be used for this purpose:

local gtk_includes = [ SHELL "gtk-config --cflags" ] ;
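
The captured output ends with a newline by default; the strip-eol option of the SHELL builtin (documented in the Boost.Jam section below) can be used to drop it, for example:

local gtk_includes = [ SHELL "gtk-config --cflags" : strip-eol ] ;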

9.7. How to get the project root (a.k.a. Jamroot) location?

You might want to use your project’s root location in your Jamfiles. To access it just declare a path constant in your Jamroot.jam file using:

path-constant TOP : . ;

After that, the TOP variable can be used in every Jamfile.
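
For example, a Jamfile anywhere in the project can then refer to paths relative to the project root (the target and directories below are illustrative):

exe app : src/app.cpp : <include>$(TOP)/include ;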

9.8. How to change compilation flags for one file?

If one file must be compiled with special options, you need to explicitly declare an obj target for that file and then use that target in your exe or lib target:

exe a : a.cpp b ;
obj b : b.cpp : <optimization>off ;

Of course you can use other properties, for example to specify specific C/C++ compiler options:

exe a : a.cpp b ;
obj b : b.cpp : <cflags>-g ;

You can also use conditional properties for finer control:

exe a : a.cpp b ;
obj b : b.cpp : <variant>release:<optimization>off ;

9.9. Why are the dll-path and hardcode-dll-paths properties useful?

This entry is specific to Unix systems.

Before answering the questions, let us recall a few points about shared libraries. Shared libraries can be used by several applications — or other libraries — without physically including the library in the linked binary. This can greatly decrease the total application size. It is also possible to upgrade a shared library after an application is already installed.

However, in order for an application depending on shared libraries to be started, the OS will need to find the shared libraries. The dynamic linker will search in a system-defined list of paths, load the library and resolve the symbols. This means that you should either change the system-defined list, given by the LD_LIBRARY_PATH environment variable, or install the libraries to a system location. This can be inconvenient when developing, since the libraries are not yet ready to be installed, and cluttering system paths may be undesirable. Luckily, on Unix there is another way.

Using the hardcode-dll-paths and dll-path features, a target can be linked with an additional list of library directory paths that will be searched before the system paths — these are called "runtime library search paths" or "run paths", or "run path list", depending on which platform’s documentation you’re reading — See your platform’s dynamic linker man page or Wikipedia for more.

We’ll just use rpath list for conciseness below.

9.9.1. hardcode-dll-paths

The hardcode-dll-paths feature for exe targets is especially helpful for development: as the build system already knows the paths to all the used shared libraries, it will by default automatically add them to the executable’s rpath list.

When the executable is installed, however, the story is different: installed executables obviously should not contain hardcoded paths into your development tree. The install rule therefore implicitly (i.e. by default) negates the hardcode-dll-paths feature, re-linking the executable without the automatic paths if necessary.

  • For the exe rule:

    • With <hardcode-dll-paths>true (default), the paths to all directories with used shared libraries are automatically added to the target’s rpath list.

    • An explicit <hardcode-dll-paths>false property is needed to disable the automatic adding of directory paths to the shared libraries.

  • For the install rule:

    • If so desired, an explicit <hardcode-dll-paths>true is needed to propagate the rpath list, added to the source targets, through to the install targets. (This includes explicit dll-path entries added to the source targets.)

    • By default, the implicit <hardcode-dll-paths>false property ensures that the source targets' rpath lists are not propagated through to the install targets.

  • The <hardcode-dll-paths> feature is ignored for the lib rule.

9.9.2. dll-path

As an alternative — or in addition — you can use the dll-path feature to add explicit directory paths manually to the rpath list.
For example:

install installed : application : <dll-path>/usr/lib/snake
                                  <location>/usr/bin ;

will allow the application to find libraries placed in the /usr/lib/snake directory.

9.9.3. Conclusion

If you install libraries to a non-standard location and add an explicit path, you get more control over libraries that will be used. A library of the same name in a system location will not be inadvertently used. If you install libraries to a system location and do not add any paths, the system administrator will have more control. Each library can be individually upgraded, and all applications will use the new library.

Which approach is best depends on your situation. If the libraries are relatively standalone and can be used by third party applications, they should be installed in the system location. If you have lots of libraries which can be used only by your application, it makes sense to install them to a non-standard directory and add an explicit path, like the example above shows. Please also note that guidelines for different systems differ in this respect. For example, the Debian GNU guidelines prohibit any additional search paths while Solaris guidelines suggest that they should always be used.

Shared Library Search Path Summary

(Applicable to the client target - i.e. the target that uses the shared library.)

  • exe with <hardcode-dll-paths>true (the default): the absolute paths to the directories of all used shared libraries are added. If these libraries are targets themselves, their build directory paths are added.

  • exe with <hardcode-dll-paths>false: no additions.

  • install with <hardcode-dll-paths>true: the rpath list of the sources (exe and lib targets) is propagated to the installed binary. (This includes explicit dll-path entries added to the source targets.)

  • install with <hardcode-dll-paths>false (the default): no additions.

  • lib with <hardcode-dll-paths> (any value): the feature is ignored, no additions.

  • exe, lib or install with <dll-path> set to an absolute path: the absolute path as specified.

  • exe, lib or install with <dll-path> set to a relative path: a path comprised of the specified relative path, prepended with the path to the jam directory as specified on the command line. ⚠️ WARNING: the resulting path depends on the specific command-line invocation and is therefore of severely limited practical use.

9.10. Targets in site-config.jam

It is desirable to declare standard libraries available on a given system. Putting the target declarations in a specific project’s Jamfile is not ideal, since the locations of the libraries can vary between development machines and such declarations would then need to be duplicated in different projects. The solution is to declare the targets in B2’s site-config.jam configuration file:

project site-config ;
lib zlib : : <name>z ;

Recall that both site-config.jam and user-config.jam are projects, and everything you can do in a Jamfile you can do in those files as well. So, you declare a project id and a target. Now, one can write:

exe hello : hello.cpp /site-config//zlib ;

in any Jamfile.

9.11. Header-only libraries

In modern C++, libraries often consist of just header files, without any source files to compile. To use such libraries, you need to add proper includes and possibly defines to your project. But with a large number of external libraries it becomes problematic to remember which libraries are header only, and which ones you have to link to. However, with B2 a header-only library can be declared as a B2 target and all dependents can use such a library without having to remember whether it is header-only or not.

Header-only libraries may be declared using the alias rule, specifying their include path as a part of its usage requirements, for example:

alias my-lib
    : # no sources
    : # no build requirements
    : # no default build
    : <include>whatever ;

The includes specified in the usage requirements of my-lib are automatically added to all of its dependents' build properties. The dependents need not care whether my-lib is header-only or not, and it is possible to later make my-lib into a regular compiled library without having to add the includes to its dependents' declarations.
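
For example, a dependent target simply lists my-lib as a source and picks up the include path through the usage requirements (an illustrative sketch):

exe app : app.cpp my-lib ;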

If you already have proper usage requirements declared for a project where a header-only library is defined, you do not need to duplicate them for the alias target:

project my : usage-requirements <include>whatever ;
alias mylib ;

9.12. What is the difference between B2, b2, bjam and Perforce Jam?

B2 is the name of the complete build system. The executable that runs it is b2. That executable is written in C and implements performance-critical algorithms, like traversal of the dependency graph and execution of commands. It also implements an interpreted language used to implement the rest of B2. This executable is formally called the "B2 engine".

The B2 engine is derived from an earlier build tool called Perforce Jam. Originally, there were just minor changes, and the filename was bjam. Later on, with more and more changes, the similarity of names became a disservice to users, and as of Boost 1.47.0, the official name of the executable was changed to b2. A copy named bjam is still created for compatibility, but you are encouraged to use the new name in all cases.

Perforce Jam was an important foundation, and we gratefully acknowledge its influence, but for users today, these tools share only some basics of the interpreted language.

10. Extra Tools

10.1. Documentation Tools

10.1.1. Asciidoctor

The asciidoctor tool converts the asciidoc documentation format to various backend formats for either viewing or further processing by documentation tools. This tool supports the baseline asciidoctor distribution (i.e. the Ruby based tool).

Feature: asciidoctor-attribute

Defines arbitrary asciidoctor attributes. The value of the feature should be specified with the CLI syntax for attributes. For example, to use it as a target requirement:

html example : example.adoc :
    <asciidoctor-attribute>idprefix=ex ;

This is a free feature and is not propagated. I.e. it applies only to the target it’s specified on.

Feature: asciidoctor-doctype

Specifies the doctype to use for generating the output format. Allowed doctype values are: article, book, manpage, and inline.

Feature: asciidoctor-backend

Specifies the backend to use to produce output from the source asciidoc. This feature is automatically applied to fit the build target type. For example, when specifying an html target for an asciidoc source:

html example : example.adoc ;

The target will by default acquire the <asciidoctor-backend>html5 requirement. The defaults for each target type are:

  • html: <asciidoctor-backend>html5

  • docbook: <asciidoctor-backend>docbook45

  • man: <asciidoctor-backend>manpage

  • pdf: <asciidoctor-backend>pdf

To override the default, specify the backend as a requirement on the target:

docbook example : example.adoc :
    <asciidoctor-backend>docbook5 ;

Allowed backend values are: html5, docbook45, docbook5, pdf.

Initialization

To use the asciidoctor tool you need to declare it in a configuration file with the using rule. The initialization takes the following arguments:

  • command: The command, with any extra arguments, to execute.

For example you could insert the following in your user-config.jam:

using asciidoctor : "/usr/local/bin/asciidoctor" ;

If no command is given, it defaults to just asciidoctor, with the assumption that asciidoctor is available in the search PATH.

10.2. Miscellaneous Tools

10.2.1. pkg-config

The pkg-config program is used to retrieve information about installed libraries in the system. It retrieves information about packages from special metadata files. These files are named after the package, and have a .pc extension. The package name specified to pkg-config is defined to be the name of the metadata file, minus the .pc extension.

Feature: pkg-config

Selects one of the initialized pkg-config configurations. This feature is propagated to dependencies. Its use is discussed in the Initialization section.

Feature: pkg-config-define

This free feature adds a variable assignment to the pkg-config invocation. For example,

pkg-config.import mypackage : requirements <pkg-config-define>key=value ;

is equivalent to invoking on the command line

pkg-config --define-variable=key=value mypackage ;
Rule: import

Main target rule that imports a pkg-config package. When its consumer targets are built, the pkg-config command will be invoked with arguments that depend on the current property set. The features that have an effect are:

  • <pkg-config-define>: adds a --define-variable argument;

  • <link>: adds --static argument when <link>static;

  • <name>: specifies package name (target name is used instead if the property is not present);

  • <version>: specifies package version range, can be used multiple times and should be a dot-separated sequence of numbers optionally prefixed with =, <, >, or >=.

Example:

pkg-config.import my-package
    : requirements <name>my_package <version><4 <version>>=3.1 ;
Initialization

To use the pkg-config tool you need to declare it in a configuration file with the using rule:

using pkg-config : [config] : [command] ... : [ options ] ... ;
  • config: the name of the initialized configuration. The name can be omitted, in which case the configuration will become the default one.

  • command: the command, with any extra arguments, to execute. If no command is given, the PKG_CONFIG environment variable is checked first, and if it is empty the string pkg-config is used.

  • options: options that modify pkg-config behavior. Allowed options are:

  • <path>: sets the PKG_CONFIG_PATH environment variable; multiple occurrences are allowed.

  • <libdir>: sets the PKG_CONFIG_LIBDIR environment variable; multiple occurrences are allowed.

  • <allow-system-cflags>: sets the PKG_CONFIG_ALLOW_SYSTEM_CFLAGS environment variable; multiple occurrences are allowed.

  • <allow-system-libs>: sets the PKG_CONFIG_ALLOW_SYSTEM_LIBS environment variable; multiple occurrences are allowed.

  • <sysroot>: sets the PKG_CONFIG_SYSROOT_DIR environment variable; multiple occurrences are allowed.

  • <variable>: adds a variable definition argument to the command invocation; multiple occurrences are allowed.

Class pkg-config-target
class pkg-config-target : alias-target-class {
    rule construct ( name : sources * : property-set )
    rule version ( property-set )
    rule variable ( name : property-set )
}

The class of objects returned by import rule. The objects themselves could be useful in situations that require more complicated logic for consuming a package. See Tips for examples.

  1. rule construct ( name : sources * : property-set ) Overrides alias-target.construct.

  2. rule version ( property-set ) Returns the package’s version, in the context of property-set.

  3. rule variable ( name : property-set ) Returns the value of variable name in the package, in the context of property-set.

Tips
Using several configurations

Suppose you have two collections of .pc files: one for platform A, and another for platform B. You can initialize two configurations of the pkg-config tool, each corresponding to a specific collection:

using pkg-config : A : : <libdir>path/to/collection/A ;
using pkg-config : B : : <libdir>path/to/collection/B ;

Then, you can specify that builds for platform A should use configuration A, while builds for B should use configuration B:

project
    : requirements
      <target-os>A-os,<architecture>A-arch:<pkg-config>A
      <target-os>B-os,<architecture>B-arch:<pkg-config>B
    ;

Thanks to the fact that the project-config, user-config and site-config modules are parents of the jamroot module, you can put this in any of those files.

Choosing the package name based on the property set

Since a file for a package should be named after the package suffixed with .pc, some projects came up with naming schemes in order to allow simultaneous installation of several major versions or build variants. In order to pick the specific name corresponding to the build request you can use the <conditional> property in requirements:

pkg-config.import mypackage : requirements <conditional>@infer-name ;

import property ;

rule infer-name ( properties * )
{
    local name = mypackage ;
    local variant = [ property.select <variant> : $(properties) ] ;
    if $(variant:G=) = debug
    {
        name = $(name)-d ;
    }
    return <name>$(name) ;
}

The common.format-name rule can be very useful in this situation.

Modify usage requirements based on package version or variable

Sometimes you need to apply some logic based on a package’s version or a variable that it defines. For that you can use the <conditional> property in usage requirements:

mypackage =
  [ pkg-config.import mypackage : usage-requirements <conditional>@extra-props
  ] ;

rule extra-props ( properties * )
{
    local ps = [ property-set.create $(properties) ] ;
    local prefix = [ $(mypackage).variable name_prefix : $(ps) ] ;
    prefix += [ $(mypackage).version $(ps) ] ;
    return <define>$(prefix:J=_) ;
}

10.2.2. Sass

The sass tool converts SASS and SCSS files into CSS. It explicitly supports both the version written in C (sassc) and the original Ruby implementation (scss), but other variants might also work. In addition to the tool-specific features described in this section, the tool recognizes the features <flags> and <include>.

Feature: sass-style

Sets the output style. Available values are

  • nested: each property is put on its own line, rules are indented based on how deeply they are nested;

  • expanded: each property is put on its own line, rules are not indented;

  • compact: each rule is put on a single line, nested rules occupy adjacent lines, while groups of unrelated rules are separated by newlines;

  • compressed: takes minimum amount of space: all unnecessary whitespace is removed, property values are compressed to have minimal representation.

The feature is optional and is not propagated to dependent targets. If no style is specified, then the compressed style is selected when the property set contains the property <optimization>on, and the nested style is selected otherwise.

Feature: sass-line-numbers

Enables emitting comments showing original line numbers for rules. This can be useful for debugging a stylesheet. Available values are on and off. The feature is optional and is not propagated to dependent targets. If no value for this feature is specified, then one is copied from the feature debug-symbols.

Initialization

To use the sass tool you need to declare it in a configuration file with the using rule. The initialization takes the following arguments:

  • command: the command, with any extra arguments, to execute.

For example you could insert the following in your user-config.jam:

using sass : /usr/local/bin/psass -p2 ; # Perl libsass-based version

If no command is given, sassc is tried, after which scss is tried.

11. Examples

11.1. Introduction

Here we include a collection of fully working examples, from simple to complex, of using Boost Build v2 for various tasks. They cover the gamut from basic to advanced features. If you look through the examples and do not find something you want to see working, please post to our support list and we will try to come up with a solution and add it here for others to learn from.

11.2. Hello

This example shows a very basic Boost Build project set up so it compiles a single executable from a single source file:

hello.cpp
#include <iostream>

int main()
{
    std::cout << "Hello!\n";
}

Our jamroot.jam is minimal and only specifies one exe target for the program:

jamroot.jam
exe hello : hello.cpp ;

Building the example yields:

> cd /example/hello
> b2
...found 8 targets...
...updating 4 targets...
common.mkdir bin/clang-darwin-4.2.1
common.mkdir bin/clang-darwin-4.2.1/debug
clang-darwin.compile.c++ bin/clang-darwin-4.2.1/debug/hello.o
clang-darwin.link bin/clang-darwin-4.2.1/debug/hello
...updated 4 targets...
> bin/clang-darwin-4.2.1/debug/hello
Hello!
The actual paths in the bin sub-directory will depend on your toolset.

11.3. Sanitizers

This example shows how to enable sanitizers when using a clang or gcc toolset.

main.cpp
#include <iostream>

int main()
{
    char* c = nullptr;
    std::cout << "Hello sanitizers\n " << *c;
}

Our jamroot.jam is minimal and only specifies one exe target for the program:

jamroot.jam
exe main : main.cpp ;

Sanitizers can be enabled by passing on or norecover to the appropriate sanitizer feature (e.g. thread-sanitizer=on). The norecover option causes the program to terminate after the first sanitizer issue is detected. The following example shows how to enable address and undefined sanitizers in a simple program:

> cd /example/sanitizers
> b2 toolset=gcc address-sanitizer=norecover undefined-sanitizer=on
...found 10 targets...
...updating 7 targets...
gcc.compile.c++ bin/gcc-7.3.0/debug/address-sanitizer-norecover/undefined-sanitizer-on/main.o
gcc.link bin/gcc-7.3.0/debug/address-sanitizer-norecover/undefined-sanitizer-on/main
...updated 7 targets...

Running the produced program may generate output similar to the following:

> ./bin/gcc-7.3.0/debug/address-sanitizer-norecover/undefined-sanitizer-on/main
Hello sanitizers
main.cpp:6:43: runtime error: load of null pointer of type 'char'
ASAN:DEADLYSIGNAL
=================================================================
==29767==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x55ba7988af1b bp 0x7ffdf3d76560 sp 0x7ffdf3d76530 T0)
==29767==The signal is caused by a READ memory access.
==29767==Hint: address points to the zero page.
    #0 0x55ba7988af1a in main /home/damian/projects/boost/tools/build/example/sanitizers/main.cpp:6
    #1 0x7f42f2ba1b96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
    #2 0x55ba7988adb9 in _start (/home/damian/projects/boost/tools/build/example/sanitizers/bin/gcc-7.3.0/debug/address-sanitizer-norecover/undefined-sanitizer-on/main+0xdb9)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /home/damian/projects/boost/tools/build/example/sanitizers/main.cpp:6 in main
==29767==ABORTING
The actual paths in the bin sub-directory will depend on your toolset and configuration. The presented output may vary depending on your compiler version.

12. Boost.Jam Documentation

Jam is a make(1) replacement that makes building simple things simple and building complicated things manageable.

12.1. Building B2

Installing B2 after building it is simply a matter of copying the generated executables someplace in your PATH. For building the executables there is a set of build bootstrap scripts to accommodate particular environments. The scripts take one optional argument, the name of the toolset to build with. When the toolset is not given, an attempt is made to detect an available toolset and use it. The build scripts accept these arguments:

build [toolset]

Running the scripts without arguments will give you the best chance of success. On Windows platforms from a command console do:

cd jam source location
.\build.bat

On Unix type platforms do:

cd jam source location
sh ./build.sh

For the Boost.Jam source included with the Boost distribution the jam source location is BOOST_ROOT/tools/build/src/engine.

If the scripts fail to detect an appropriate toolset to build with, your particular toolset may not be auto-detectable. In that case, you can specify the toolset as the first argument; this assumes that the toolset is readily available in the PATH.

The toolset used to build Boost.Jam is independent of the toolsets used for B2. Only one version of Boost.Jam is needed to use B2.

The supported toolsets, and whether they are auto-detected, are:

Table 2. Supported Toolsets
Script Platform Toolset Detection and Notes

build.bat

Windows

vc142

Microsoft Visual Studio C++ 2019

  • Uses vswhere utility.

vc141

Microsoft Visual Studio C++ 2017

  • Uses vswhere utility.

  • Common install location: %ProgramFiles%\Microsoft Visual Studio\2017\Enterprise\VC\

  • Common install location: %ProgramFiles%\Microsoft Visual Studio\2017\Professional\VC\

  • Common install location: %ProgramFiles%\Microsoft Visual Studio\2017\Community\VC\

vc14

Microsoft Visual Studio C++ 2015

  • Env var %VS140COMNTOOLS%

  • Common install location: %ProgramFiles%\Microsoft Visual Studio 14.0\VC\

vc12

Microsoft Visual Studio C++ 2013

  • Env var %VS120COMNTOOLS%

  • Common install location: %ProgramFiles%\Microsoft Visual Studio 12.0\VC\

borland

Embarcadero C++Builder

  • bcc32c.exe in PATH

intel-win32

Intel C++ Compiler for Windows

  • icl.exe in PATH

mingw

GNU GCC as the MinGW configuration

  • Common install location: C:\MinGW

como

Comeau Computing C/C++

gcc

GNU GCC

clang

Clang LLVM

gcc-nocygwin

GNU GCC

build.sh

Unix, Linux, Cygwin, Windows Bash, etc.

gcc

GNU GCC

  • g++ in PATH

clang

Clang LLVM

  • clang++ in PATH

intel-linux

Intel C++ (oneAPI) for Linux

  • icpx in PATH

  • icc in PATH

  • icpc in PATH

  • setvars.sh in common install locations: $HOME/intel/oneapi, /opt/intel/oneapi, /opt/intel/inteloneapi

  • iccvars.sh in common install locations: /opt/intel/cc/9.0/bin, /opt/intel_cc_80/bin

mipspro

SGI MIPSpro C++

  • uname is "IRIX" or "IRIX64" and CC in PATH

true64cxx

Compaq C++ Compiler for Tru64 UNIX

  • uname is "OSF1" and cc in PATH

qcc

QNX Neutrino

  • uname is "QNX" and QCC in PATH

xlcpp and vacpp

IBM VisualAge C++

  • uname is "Linux" and xlC_r in PATH (xlcpp or vacpp depending on machine endian)

  • uname is "AIX" and xlC_r in PATH (vacpp)

pgi

PGI Compilers

  • pgc++ in PATH

pathscale

Pathscale C++

  • pathCC in PATH

como

Comeau Computing C/C++

  • como in PATH

kylix

Borland C++

  • bc++ in PATH (kylix)

acc

HP-UX aCC

  • aCC in PATH

sunpro

Sun Workshop 6 C++

  • Standard install location: /opt/SUNWspro/bin/CC

The built executables are placed in the src/engine directory.

The build.sh script supports additional invocation options used to control the build and to use custom compilers (an example invocation follows this list):

build.sh [--option|--option=x] [toolset]
--help

Shows some help information, including these options.

--verbose

Show messages about what this script is doing.

--debug

Builds debugging versions of the executable. The default is to build an optimized executable.

--guess-toolset

Print the toolset we can detect for building. This is used by external scripts, like the Boost Libraries main bootstrap script.

--cxx=CXX

The compiler exec to use instead of the detected compiler exec.

--cxxflags=CXXFLAGS

The compiler flags to use in addition to the flags for the detected compiler.
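
For example, a debug build of the engine with an explicitly chosen compiler might be requested like this (the compiler name is illustrative):

./build.sh --debug --cxx=clang++ clang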

12.2. Language

B2 has an interpreted, procedural language. Statements in b2 are rule (procedure) definitions, rule invocations, flow-of-control structures, variable assignments, and sundry language support.

12.2.1. Lexical Features

B2 treats its input files as whitespace-separated tokens, with two exceptions: double quotes (") can enclose whitespace to embed it into a token, and everything between the matching curly braces ({}) in the definition of a rule action is treated as a single string. A backslash (\) can escape a double quote, or any single whitespace character.

B2 requires whitespace (blanks, tabs, or newlines) to surround all tokens, including the colon (:) and semicolon (;) tokens.

B2 keywords (as mentioned in this document) are reserved and generally must be quoted with double quotes (") to be used as arbitrary tokens, such as variable or target names.

Comments start with the # character and extend to the end of the line. Block comments start with #| and extend until the next |#.
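
A small illustration of these lexical rules; note the whitespace around ; and the quoted string:

# a line comment
#| a block
   comment |#
local greeting = "Hello, world" ;  # the quotes keep the space inside one token
ECHO $(greeting) ;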

12.2.2. Targets

The essential b2 data entity is a target. Build targets are files to be updated. Source targets are the files used in updating built targets. Built targets and source targets are collectively referred to as file targets, and frequently built targets are source targets for other built targets. Pseudo-targets are symbols representing dependencies on other targets, but which are not themselves associated with any real file.

A file target’s identifier is generally the file’s name, which can be absolutely rooted, relative to the directory of b2’s invocation, or simply local (no directory). Most often it is the last case, and the actual file path is bound using the $(SEARCH) and $(LOCATE) special variables. See SEARCH and LOCATE Variables below. A local filename is optionally qualified with grist, a string value used to assure uniqueness. A file target with an identifier of the form file(member) is a library member (usually an ar(1) archive on Unix).

Binding Detection

Whenever a target is bound to a location in the filesystem, Boost Jam will look for a variable called BINDRULE (first "on" the target being bound, then in the global module). If non-empty, $(BINDRULE[1]) names a rule which is called with the name of the target and the path it is being bound to. The signature of the rule named by $(BINDRULE[1]) should match the following:

rule bind-rule ( target : path )

This facility is useful for correct header file scanning, since many compilers will search for #include files first in the directory containing the file doing the #include directive. $(BINDRULE) can be used to make a record of that directory.

12.2.3. Rules

The basic b2 language entity is called a rule. A rule is defined in two parts: the procedure and the actions. The procedure is a body of jam statements to be run when the rule is invoked; the actions are the OS shell commands to execute when updating the built targets of the rule.

Rules can return values, which can be expanded into a list with "[ rule args … ]". A rule’s value is the value of its last statement, though only the following statements have values: 'if' (value of the leg chosen), 'switch' (value of the case chosen), set (value of the resulting variable), and 'return' (value of its arguments).

The b2 statements for defining and invoking rules are as follows:

Define a rule’s procedure, replacing any previous definition.

rule rulename { statements }

Define a rule’s updating actions, replacing any previous definition.

actions [ modifiers ] rulename { commands }

Invoke a rule.

rulename field1 : field2 : ... : fieldN ;

Invoke a rule under the influence of a target’s specific variables.

on target rulename field1 : field2 : ... : fieldN ;

Used as an argument, expands to the return value of the rule invoked.

[ rulename field1 : field2 : ... : fieldN ]
[ on target rulename field1 : field2 : ... : fieldN ]

A rule is invoked with values in field1 through fieldN. They may be referenced in the procedure’s statements as $(1) through $(N) (9 max), and the first two only may be referenced in the action’s commands as $(1) and $(2). $(<) and $(>) are synonymous with $(1) and $(2).

Rules fall into two categories: updating rules (with actions), and pure procedure rules (without actions). Updating rules treat arguments $(1) and $(2) as built targets and sources, respectively, while pure procedure rules can take arbitrary arguments.

When an updating rule is invoked, its updating actions are added to those associated with its built targets ($(1)) before the rule’s procedure is run. Later, to build the targets in the updating phase, commands are passed to the OS command shell, with $(1) and $(2) replaced by bound versions of the target names. See Binding above.
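
As an illustration, here is a small updating rule combining a procedure and actions; it assumes a Unix-like shell providing a cp command:

rule copy-file ( target : source )
{
    # Rebuild the target whenever the source changes.
    DEPENDS $(target) : $(source) ;
}
actions copy-file
{
    cp "$(>)" "$(<)"
}

copy-file hello.txt.bak : hello.txt ;
DEPENDS all : hello.txt.bak ;  # build it as part of the default 'all' target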

Rule invocation may be indirected through a variable:

$(var) field1 : field2 : ... : fieldN ;

on target $(var) field1 : field2 : ... : fieldN ;

[ $(var) field1 : field2 : ... : fieldN ]
[ on target $(var) field1 : field2 : ... : fieldN ]

The variable’s value names the rule (or rules) to be invoked. A rule is invoked for each element in the list of $(var)’s values. The fields field1 : field2 : … are passed as arguments for each invocation. For the [ … ] forms, the return value is the concatenation of the return values for all of the invocations.

Action Modifiers

The following action modifiers are understood:

actions bind vars

$(vars) will be replaced with bound values.

actions existing

$(>) includes only source targets currently existing.

actions ignore

The return status of the commands is ignored.

actions piecemeal

commands are repeatedly invoked with a subset of $(>) small enough to fit in the command buffer on this OS.

actions quietly

The action is not echoed to the standard output.

actions together

The $(>) from multiple invocations of the same action on the same built target are glommed together.

actions updated

$(>) includes only source targets themselves marked for updating.

Argument lists

You can describe the arguments accepted by a rule, and refer to them by name within the rule. For example, the following prints "I’m sorry, Dave" to the console:

rule report ( pronoun index ? : state : names + )
{
    local he.suffix she.suffix it.suffix = s ;
    local I.suffix = m ;
    local they.suffix you.suffix = re ;
    ECHO $(pronoun)'$($(pronoun).suffix) $(state), $(names[$(index)]) ;
}
report I 2 : sorry : Joe Dave Pete ;

Each name in a list of formal arguments (separated by : in the rule declaration) is bound to a single element of the corresponding actual argument unless followed by one of these modifiers:

Symbol Semantics of preceding symbol

?

optional

*

Bind to zero or more unbound elements of the actual argument. When * appears where an argument name is expected, any number of additional arguments are accepted. This feature can be used to implement "varargs" rules.

+

Bind to one or more unbound elements of the actual argument.

The actual and formal arguments are checked for inconsistencies, which cause b2 to exit with an error code:

### argument error
# rule report ( pronoun index ?  : state  : names + )
# called with: ( I 2 foo  : sorry  : Joe Dave Pete )
# extra argument foo
### argument error
# rule report ( pronoun index ?  : state  : names + )
# called with: ( I 2  : sorry )
# missing argument names

If you omit the list of formal arguments, all checking is bypassed as in "classic" Jam. Argument lists drastically improve the reliability and readability of your rules, however, and are strongly recommended for any new Jam code you write.

12.2.4. Built-in Rules

B2 has a growing set of built-in rules, all of which are pure procedure rules without updating actions. They are in three groups: the first builds the dependency graph; the second modifies it; and the third are just utility rules.

Dependency Building
DEPENDS
rule DEPENDS ( targets1 * : targets2 * )

Builds a direct dependency: makes each of targets1 depend on each of targets2. Generally, targets1 will be rebuilt if targets2 are themselves rebuilt or are newer than targets1.

INCLUDES
rule INCLUDES ( targets1 * : targets2 * )

Builds a sibling dependency: makes any target that depends on any of targets1 also depend on each of targets2. This reflects the dependencies that arise when one source file includes another: the object built from the source file depends both on the original and included source file, but the two sources files don’t depend on each other. For example:

DEPENDS foo.o : foo.c ;
INCLUDES foo.c : foo.h ;

foo.o depends on foo.c and foo.h in this example.

Modifying Binding

The six rules ALWAYS, LEAVES, NOCARE, NOTFILE, NOUPDATE, and TEMPORARY modify the dependency graph so that b2 treats the targets differently during its target binding phase. See Binding above. Normally, b2 updates a target if it is missing, if its filesystem modification time is older than any of its dependencies (recursively), or if any of its dependencies are being updated. This basic behavior can be changed by invoking the following rules:

ALWAYS
rule ALWAYS ( targets * )

Causes targets to be rebuilt regardless of whether they are up-to-date (they must still be in the dependency graph). This is used for the clean and uninstall targets, as they have no dependencies and would otherwise appear never to need building. It is best applied to targets that are also NOTFILE targets, but it can also be used to force a real file to be updated as well.

LEAVES
rule LEAVES ( targets * )

Makes each of targets depend only on its leaf sources, and not on any intermediate targets. This makes it immune to its dependencies being updated, as the "leaf" dependencies are those without their own dependencies and without updating actions. This allows a target to be updated only if original source files change.

NOCARE
rule NOCARE ( targets * )

Causes b2 to ignore targets that neither can be found nor have updating actions to build them. Normally for such targets b2 issues a warning and then skips other targets that depend on these missing targets. The HdrRule in Jambase uses NOCARE on the header file names found during header file scanning, to let b2 know that the included files may not exist. For example, if an #include is within an #ifdef, the included file may not actually be around.

For targets with build actions: if their build actions exit with a nonzero return code, dependent targets will still be built.
NOTFILE
rule NOTFILE ( targets * )

Marks targets as pseudo-targets and not real files. No timestamp is checked, and so the actions on such a target are only executed if the target’s dependencies are updated, or if the target is also marked with ALWAYS. The default b2 target all is a pseudo-target. In Jambase, NOTFILE is used to define several additional convenient pseudo-targets.
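
For example, a pseudo-target that always runs its action might be declared like this (a sketch assuming a Unix-like shell):

actions show-date
{
    date
}

show-date timestamp ;
NOTFILE timestamp ;
ALWAYS timestamp ;
DEPENDS all : timestamp ;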

NOUPDATE
rule NOUPDATE ( targets * )

Causes the timestamps on targets to be ignored. This has two effects: first, once the target has been created it will never be updated; second, manually updating the target will not cause other targets to be updated. In Jambase, for example, this rule is applied to directories by the MkDir rule, because MkDir only cares that the target directory exists, not when it was last updated.

TEMPORARY
rule TEMPORARY ( targets * )

Marks targets as temporary, allowing them to be removed after other targets that depend upon them have been updated. If a TEMPORARY target is missing, b2 uses the timestamp of the target’s parent. Jambase uses TEMPORARY to mark object files that are archived in a library after they are built, so that they can be deleted after they are archived.

FAIL_EXPECTED
rule FAIL_EXPECTED ( targets * )

For handling targets whose build actions are expected to fail (e.g. when testing that assertions or compile-time type checking work properly), Boost Jam supplies the FAIL_EXPECTED rule in the same style as NOCARE, et. al. During target updating, the return code of the build actions for arguments to FAIL_EXPECTED is inverted: if it fails, building of dependent targets continues as though it succeeded. If it succeeds, dependent targets are skipped.

RMOLD
rule RMOLD ( targets * )

B2 removes any target files that may exist on disk when the rule used to build those targets fails. However, targets whose dependencies fail to build are not removed by default. The RMOLD rule causes its arguments to be removed if any of their dependencies fail to build.

ISFILE
rule ISFILE ( targets * )

ISFILE marks targets as required to be files. This changes the way b2 searches for the target such that it ignores matches for file system items that are not files, like directories. This makes it possible to avoid #include "exception" matching if one happens to have a directory named exception in the header search path.

This is currently not fully implemented.
Utility

The two rules ECHO and EXIT are utility rules, used only in b2’s parsing phase.

ECHO
rule ECHO ( args * )

Blurts out the message args to stdout.

EXIT
rule EXIT ( message * : result-value ? )

Blurts out the message to stdout and then exits with a failure status if no result-value is given, otherwise it exits with the given result-value.

Echo, echo, Exit, and exit are accepted as aliases for ECHO and EXIT, since it is hard to tell that these are built-in rules and not part of the language, like include.

GLOB

The GLOB rule does filename globbing.

rule GLOB ( directories * : patterns * : downcase-opt ? )

It uses the same wildcards as the patterns in the switch statement. It is invoked by being used as an argument to a rule invocation inside of "[ ]". For example: FILES = [ GLOB dir1 dir2 : *.c *.h ] sets FILES to the list of C source and header files in dir1 and dir2. The resulting filenames are the full pathnames, including the directory, but the pattern is applied only to the file name without the directory.

If downcase-opt is supplied, filenames are converted to all-lowercase before matching against the pattern; you can use this to do case-insensitive matching using lowercase patterns. The paths returned will still have mixed case if the OS supplies them. On Windows NT and Cygwin, and OpenVMS, filenames are always down-cased before matching.

GLOB_ARCHIVE

The GLOB_ARCHIVE rule does name globbing of object archive members.

rule GLOB_ARCHIVE ( archives * : member-patterns * : downcase-opt ? : symbol-patterns ? )

Similarly to GLOB, this rule is used to match names of member files in an archive (static object library). A list of successfully matched members is returned, or an empty list otherwise. The resulting member names are qualified with the pathname of the containing archive in the form archive-path(member-name). Member patterns match the member name only; when no wildcards are specified, an exact match is assumed. Member names generally correspond to object file names and as such are platform-specific; using the platform-defined object suffix in the matching patterns can help portability.

If downcase-opt is supplied, the member names are converted to all-lowercase before matching against the pattern; you can use this to do case-insensitive matching using lowercase patterns. The paths returned will still have mixed case if the OS supplies them. On Windows NT, Cygwin, and OpenVMS, filenames are always down-cased before matching.

Additionally, members can be matched with symbol/function patterns on supported platforms (currently, OpenVMS only). In this case, members containing the matching symbols are returned. Member and symbol patterns are applied as OR conditions, with member patterns taking precedence. On unsupported platforms, null is returned when any symbol patterns are specified.

MATCH

The MATCH rule does pattern matching.

rule MATCH ( regexps + : list * )

Matches the egrep(1) style regular expressions regexps against the strings in list. The result is a list of matching () subexpressions for each string in list, and for each regular expression in regexps.

BACKTRACE
rule BACKTRACE ( )

Returns a list of quadruples: filename line module rulename…​, describing each shallower level of the call stack. This rule can be used to generate useful diagnostic messages from Jam rules.

UPDATE
rule UPDATE ( targets * )

Classic Jam treats any non-option element of the command line as the name of a target to be updated. This prevented more sophisticated handling of the command line. This behavior is now enabled again, but with additional changes to the UPDATE rule that allow the flexibility of changing the list of targets to update. The UPDATE rule has two effects:

  1. It clears the list of targets to update, and

  2. Causes the specified targets to be updated.

If no target was specified with the UPDATE rule, no targets will be updated. To support changing of the update list in more useful ways, the rule also returns the targets previously in the update list. This makes it possible to add targets as such:

local previous-updates = [ UPDATE ] ;
UPDATE $(previous-updates) a-new-target ;
W32_GETREG
rule W32_GETREG ( path : data ? )

Defined only for the win32 platform. It reads the Windows registry. 'path' is the location of the information, and 'data' is the name of the value which we want to get. If 'data' is omitted, the default value of 'path' will be returned. The 'path' value must conform to the MS key path format and must be prefixed with one of the predefined root keys. As usual,

  • HKLM is equivalent to HKEY_LOCAL_MACHINE.

  • HKCU is equivalent to HKEY_CURRENT_USER.

  • HKCR is equivalent to HKEY_CLASSES_ROOT.

Other predefined root keys are not supported.

Currently supported data types: REG_DWORD, REG_SZ, REG_EXPAND_SZ, REG_MULTI_SZ. Data with the REG_DWORD type will be turned into a string, REG_MULTI_SZ into a list of strings, and for data with the REG_EXPAND_SZ type, environment variables in it will be replaced with their defined values. Data with the REG_SZ type and other unsupported types will be put into a string without modification. If the value of the data cannot be retrieved, an empty list is returned. For example,

local PSDK-location =
  [ W32_GETREG HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\MicrosoftSDK\\Directories : "Install Dir" ] ;
W32_GETREGNAMES
rule W32_GETREGNAMES ( path : result-type )

Defined only for the win32 platform. It reads the Windows registry. 'path' is the location of the information, and 'result-type' is either subkeys or values. For more information on the 'path' format and constraints, please see W32_GETREG.

Depending on 'result-type', the rule returns one of the following:

subkeys

Names of all direct sub-keys of 'path'.

values

Names of values contained in registry key given by 'path'. The "default" value of the key appears in the returned list only if its value has been set in the registry.

If 'result-type' is not recognized, or requested data cannot be retrieved, the rule returns an empty list. Example:

local key = "HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\App Paths" ;
local subkeys = [ W32_GETREGNAMES "$(key)" : subkeys ] ;
for local subkey in $(subkeys)
{
    local values = [ W32_GETREGNAMES "$(key)\\$(subkey)" : values ] ;
    for local value in $(values)
    {
        local data = [ W32_GETREG "$(key)\\$(subkey)" : "$(value)" ] ;
        ECHO "Registry path: " $(key)\\$(subkey) ":" $(value) "=" $(data) ;
    }
}
SHELL
rule SHELL ( command : * )

SHELL executes command, and then returns the standard output of command. SHELL only works on platforms with a popen() function in the C library. On platforms without a working popen() function, SHELL is implemented as a no-op. SHELL works on Unix, MacOS X, and most Windows compilers. SHELL is a no-op on Metrowerks compilers under Windows. There is a variable set of allowed options as additional arguments:

exit-status

In addition to the output the result status of the executed command is returned as a second element of the result.

no-output

Don’t capture the output of the command. Instead an empty ("") string value is returned in place of the output.

strip-eol

Remove trailing end-of-line character from output, if any.

Because the Perforce/Jambase defines a SHELL rule which hides the builtin rule, COMMAND can be used as an alias for SHELL in such a case.
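
For example, a sketch assuming a Unix-like host where uname is available:

local os-name = [ SHELL "uname -s" : strip-eol ] ;
local with-status = [ SHELL "uname -s" : exit-status ] ;
# with-status holds the command output followed by its exit status as a second element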

MD5
rule MD5 ( string )

MD5 computes the MD5 hash of the string passed as parameter and returns it.
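
For example:

local hash = [ MD5 "b2" ] ;  # hash is the 32-character hexadecimal digest of "b2"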

SPLIT_BY_CHARACTERS
rule SPLIT_BY_CHARACTERS ( string : delimiters )

SPLIT_BY_CHARACTERS splits the specified string on any delimiter character present in delimiters and returns the resulting list.
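
For example:

local parts = [ SPLIT_BY_CHARACTERS "a,b;c" : ",;" ] ;
# parts -> a b c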

PRECIOUS
rule PRECIOUS ( targets * )

The PRECIOUS rule specifies that each of the targets passed as the arguments should not be removed even if the command updating that target fails.

PAD
rule PAD ( string : width )

If string is shorter than width characters, pads it with whitespace characters on the right, and returns the result. Otherwise, returns string unmodified.

FILE_OPEN
rule FILE_OPEN ( filename : mode )

The FILE_OPEN rule opens the specified file and returns a file descriptor if the mode parameter is either "w" or "r". Note that at present, only the UPDATE_NOW rule can use the resulting file descriptor number. If the mode parameter is "t" this opens the file as a text file and returns the contents as a single string.

UPDATE_NOW
rule UPDATE_NOW ( targets * : log ? : ignore-minus-n ? )

The UPDATE_NOW rule causes the specified targets to be updated immediately. If the update was successful, a non-empty string is returned. The log parameter, if present, specifies a descriptor of a file where all output from building is redirected. If the ignore-minus-n parameter is specified, the targets are updated even if the -n parameter is specified on the command line.
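
A hedged sketch combining FILE_OPEN and UPDATE_NOW; the target and log file names are hypothetical:

local log = [ FILE_OPEN build.log : "w" ] ;
local ok = [ UPDATE_NOW my-target : $(log) ] ;
if $(ok) { ECHO "my-target updated successfully" ; }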

12.2.5. Flow-of-Control

B2 has several simple flow-of-control statements:

for var in list { statements }

Executes statements for each element in list, setting the variable var to the element value.

if cond { statements }
[ else { statements } ]

Does the obvious; the else clause is optional. cond is built of:

a

true if any a element is a non-zero-length string

a = b

list a matches list b string-for-string

a != b

list a does not match list b

a < b

a[i] string is less than b[i] string, where i is first mismatched element in lists a and b

a <= b

every a string is less than or equal to its b counterpart

a > b

a[i] string is greater than b[i] string, where i is first mismatched element

a >= b

every a string is greater than or equal to its b counterpart

a in b

true if all elements of a can be found in b, or if a has no elements

! cond

condition not true

cond && cond

conjunction

cond || cond

disjunction

( cond )

precedence grouping

include file ;

Causes b2 to read the named file. The file is bound like a regular target (see Binding above) but unlike a regular target the include file cannot be built.

The include file is inserted into the input stream during the parsing phase. The primary input file and all the included file(s) are treated as a single file; that is, b2 infers no scope boundaries from included files.

local vars [ = values ] ;

Creates new vars local to the enclosing {} block, obscuring any previous values they might have. The previous values for vars are restored when the current block ends. Any rule called or file included will see the local and not the previous value (this is sometimes called Dynamic Scoping). The local statement may appear anywhere, even outside of a block (in which case the previous value is restored when the input ends). The vars are initialized to values if present, or left uninitialized otherwise.

return values ;

Within a rule body, the return statement sets the return value for an invocation of the rule and returns to the caller.

switch value
{
    case pattern1 : statements ;
    case pattern2 : statements ;
    ...
}

The switch statement executes zero or one of the enclosed statements, depending on which, if any, is the first case whose pattern matches value. The pattern values are not variable-expanded. The pattern values may include the following wildcards:

?

match any single character

*

match zero or more characters

[chars]

match any single character in chars

[^chars]

match any single character not in chars

\x

match x (escapes the other wildcards)

while cond { statements }

Repeatedly execute statements while cond remains true upon entry. (See the description of cond expression syntax under if, above).

break ;

Immediately exits the nearest enclosing while or for loop.

continue ;

Jumps to the top of the nearest enclosing while or for loop.
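
A small illustrative sketch combining several of the statements above; the file names are hypothetical:

local files = hello.cpp util.c readme.txt ;
for local f in $(files)
{
    switch $(f)
    {
        case *.cpp : ECHO $(f) ": C++ source" ;
        case *.c   : ECHO $(f) ": C source" ;
        case *     : ECHO $(f) ": not a source file" ;
    }
}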

12.2.6. Variables

B2 variables are lists of zero or more elements, with each element being a string value. An undefined variable is indistinguishable from a variable with an empty list; however, a defined variable may have one or more elements which are null strings. All variables are referenced as $(variable).

Variables are either global or target-specific. In the latter case, the variable takes on the given value only during the updating of the specific target.

A variable is defined with:

variable = elements ;
variable += elements ;
variable on targets = elements ;
variable on targets += elements ;
variable default = elements ;
variable ?= elements ;

The first two forms set variable globally. The third and fourth forms set a target-specific variable. The = operator replaces any previous elements of variable with elements; the += operation adds elements to variable's list of elements. The final two forms are synonymous: they set variable globally, but only if it was previously unset.

Variables referenced in updating commands will be replaced with their values; target-specific values take precedence over global values. Variables passed as arguments ($(1) and $(2)) to actions are replaced with their bound values; the bind modifier can be used on actions to cause other variables to be replaced with bound values. See Action Modifiers above.

B2 variables are not re-exported to the environment of the shell that executes the updating actions, but the updating actions can reference b2 variables with $(variable).

Variable Expansion

During parsing, b2 performs variable expansion on each token that is not a keyword or rule name. Such tokens with embedded variable references are replaced with zero or more tokens. Variable references are of the form $(v) or $(vm), where v is the variable name, and m are optional modifiers.

Variable expansion in a rule’s actions is similar to variable expansion in statements, except that the action string is tokenized at whitespace regardless of quoting.

The result of a token after variable expansion is the product of the components of the token, where each component is a literal substring or a list substituting a variable reference. For example:

$(X) -> a b c
t$(X) -> ta tb tc
$(X)z -> az bz cz
$(X)-$(X) -> a-a a-b a-c b-a b-b b-c c-a c-b c-c

The variable name and modifiers can themselves contain a variable reference, and this partakes of the product as well:

$(X) -> a b c
$(Y) -> 1 2
$(Z) -> X Y
$($(Z)) -> a b c 1 2

Because of this product expansion, if any variable reference in a token is undefined, the result of the expansion is an empty list. If any variable element is a null string, the result propagates the non-null elements:

$(X) -> a ""
$(Y) -> "" 1
$(Z) ->
-$(X)$(Y)- -> -a- -a1- -- -1-
-$(X)$(Z)- ->

A variable element’s string value can be parsed into grist and filename-related components. Modifiers to a variable are used to select elements, select components, and replace components. The modifiers are:

[n]

Select element number n (starting at 1). If the variable contains fewer than n elements, the result is a zero-element list. n can be negative in which case the element number n from the last leftward is returned.

[n-m]

Select elements number n through m. n and m can be negative in which case they refer to elements counting from the last leftward.

[n-]

Select elements number n through the last. n can be negative in which case it refers to the element counting from the last leftward.

:B

Select filename base — a basename without extension.

:S

Select file extension — a (last) filename suffix.

:M

Select archive member name.

:D

Select directory path.

:P

Select parent directory.

:G

Select grist.

:U

Replace lowercase characters with uppercase.

:L

Replace uppercase characters with lowercase.

:T

Converts all back-slashes ("\") to forward slashes ("/"). For example

x = "C:\\Program Files\\Borland" ; ECHO $(x:T) ;

prints C:/Program Files/Borland

:W

When invoking Windows-based tools from Cygwin it can be important to pass them true windows-style paths. The :W modifier, under Cygwin only, turns a cygwin path into a Win32 path using the cygwin_conv_to_win32_path function. For example

x = "/cygdrive/c/Program Files/Borland" ; ECHO $(x:W) ;

prints C:\Program Files\Borland on Cygwin

Similarly, when used on OpenVMS, the :W modifier translates a POSIX-style path into native VMS-style format using decc$to_vms CRTL function. This modifier is generally used inside action blocks to properly specify file paths in VMS-specific commands. For example

x = "subdir/filename.c" ; ECHO $(x:W) ;

prints [.subdir]filename.c on OpenVMS

On other platforms, the string is unchanged.

:chars

Select the components listed in chars.

For example, :BS selects filename (basename and extension).

:G=grist

Replace grist with grist.

:D=path

Replace directory with path.

:B=base

Replace the base part of file name with base.

:S=suf

Replace the suffix of file name with suf.

:M=mem

Replace the archive member name with mem.

:R=root

Prepend root to the whole file name, if not already rooted.

:E=value

Assign value to the variable if it is unset.

:J=joinval

Concatenate list elements into single element, separated by joinval.

:O=value

Sets semantic options for the evaluation of the variable. The format of the value is specific to either variable or generated file expansion.

On VMS, $(var:P) is the parent directory of $(var:D).

:<=value

After evaluating the expansion of the variable, prefixes the given value to each element of the expanded values.

:>=value

After evaluating the expansion of the variable, appends the given value to each element of the expanded values.
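
For example, a brief sketch of a few of the modifiers above; the values are illustrative:

x = "src/hello.cpp" "src/util.cpp" ;
ECHO $(x:B) ;     # prints: hello util
ECHO $(x:S=.o) ;  # prints: src/hello.o src/util.o
ECHO $(x:J=,) ;   # prints: src/hello.cpp,src/util.cpp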

Local For Loop Variables

Boost Jam allows you to declare a local for loop control variable right in the loop:

x = 1 2 3 ;
y = 4 5 6 ;
for local y in $(x)
{
    ECHO $(y) ; # prints "1", "2", or "3"
}
ECHO $(y) ;     # prints "4 5 6"
Generated File Expansion

During expansion of expressions b2 also looks for subexpressions of the form @(filename:E=filecontents) and replaces the expression with filename after creating the given file with the contents set to filecontents. This is useful for creating compiler response files, and other "internal" files. The expansion works both during parsing and action execution. Hence it is possible to create files during any of the three build phases. This expansion follows the same modifiers as variable expansion. The generated file expansion accepts these (:O=) expansion option values:

F

Always replace the @() reference with the name of the file generated.

C

Always replace the @() reference with the contents, i.e. the value in the :E=value expression.

FC or CF

Replace with either the file or contents depending on the length of the contents (:E=value). It will replace with the contents in an action if the length of the command is shorter than the allowed command length limit. Otherwise the reference is replaced with the filename.
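
As a rough sketch of the typical response-file use: the action name, the tool mytool, and the target-specific variable OPTIONS (assumed to hold a single string of options) are all hypothetical. The :O=F option forces the @() reference to be replaced with the generated file's name.

actions run-with-response
{
    mytool @($(<).rsp:E=$(OPTIONS):O=F) -o $(<)
}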

Built-in Variables

This section discusses variables that have special meaning to b2. All of these must be defined or used in the global module — using those variables inside a named module will not have the desired effect. See Modules.

SEARCH and LOCATE

These two variables control the binding of file target names to locations in the file system. Generally, $(SEARCH) is used to find existing sources while $(LOCATE) is used to fix the location for built targets.

Rooted (absolute path) file targets are bound as is. Unrooted file target names are also normally bound as is, and thus relative to the current directory, but the settings of $(LOCATE) and $(SEARCH) alter this:

  • If $(LOCATE) is set then the target is bound relative to the first directory in $(LOCATE). Only the first element is used for binding.

  • If $(SEARCH) is set then the target is bound to the first directory in $(SEARCH) where the target file already exists.

  • If the $(SEARCH) search fails, the target is bound relative to the current directory anyhow.

Both $(SEARCH) and $(LOCATE) should be set target-specific and not globally. If they were set globally, b2 would use the same paths for all file binding, which is not likely to produce sane results. When writing your own rules, especially ones not built upon those in Jambase, you may need to set $(SEARCH) or $(LOCATE) directly. Almost all of the rules defined in Jambase set $(SEARCH) and $(LOCATE) to sensible values for sources they are looking for and targets they create, respectively.
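
A hedged sketch with hypothetical target names: look for the source in src/ and place the built object in bin/.

SEARCH on hello.cpp = src ;
LOCATE on hello.o = bin ;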

HDRSCAN and HDRRULE

These two variables control header file scanning. $(HDRSCAN) is an egrep(1) pattern, with ()'s surrounding the file name, used to find file inclusion statements in source files. Jambase uses $(HDRPATTERN) as the pattern for $(HDRSCAN). $(HDRRULE) is the name of a rule to invoke with the results of the scan: the scanned file is the target, the found files are the sources. This is the only place where b2 invokes a rule through a variable setting.

Both $(HDRSCAN) and $(HDRRULE) must be set for header file scanning to take place, and they should be set target-specific and not globally. If they were set globally, all files, including executables and libraries, would be scanned for header file include statements.
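
A hedged sketch; the pattern and rule name are illustrative only, and Jambase itself uses $(HDRPATTERN) and its own HdrRule (which also allows tabs in the pattern):

HDRSCAN on hello.cpp = "^[ ]*#[ ]*include[ ]*[<\"]([^\">]*)[\">]" ;
HDRRULE on hello.cpp = my-hdr-rule ;
rule my-hdr-rule ( source : headers * )
{
    # the scanned file is the target, the found names arrive as the sources
    ECHO $(source) includes $(headers) ;
}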

The scanning for header file inclusions is not exact, but it is at least dynamic, so there is no need to run something like makedepend(GNU) to create a static dependency file. The scanning mechanism errs on the side of inclusion (i.e., it is more likely to return filenames that are not actually used by the compiler than to miss include files) because it can’t tell if #include lines are inside #ifdefs or other conditional logic. In Jambase, HdrRule applies the NOCARE rule to each header file found during scanning, so that if the file isn’t present and its absence doesn’t cause the compilation to fail, b2 won’t care.

Also, scanning for regular expressions only works where the included file name is literally in the source file. It can’t handle languages that allow including files using variable names (as the Jam language itself does).

Semaphores

It is sometimes desirable to disallow parallel execution of some actions. For example:

  • Old versions of yacc use files with fixed names. So, running two yacc actions is dangerous.

  • One might want to perform parallel compiling, but not do parallel linking, because linking is i/o bound and only gets slower.

Craig McPeeters has extended Perforce Jam to solve such problems, and that extension was integrated in Boost.Jam.

Any target can be assigned a semaphore, by setting a variable called JAM_SEMAPHORE on that target. The value of the variable is the semaphore name. It must be different from names of any declared target, but is arbitrary otherwise.

The semantics of semaphores are that, in a group of targets which have the same semaphore, only one can be updated at any moment, regardless of the -j option.
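
A hedged sketch that serializes two yacc-style actions; the target and semaphore names are hypothetical:

JAM_SEMAPHORE on parser1.c = yacc-semaphore ;
JAM_SEMAPHORE on parser2.c = yacc-semaphore ;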

Platform Identifier

A number of Jam built-in variables can be used to identify runtime platform:

OS

OS identifier string

OSPLAT

Underlying architecture, when applicable

MAC

true on MAC platform

NT

true on NT platform

OS2

true on OS2 platform

UNIX

true on Unix platforms

VMS

true on VMS platform

Jam Version
JAMDATE

Time and date at b2 start-up as an ISO-8601 UTC value.

JAMUNAME

Output of uname(1) command (Unix only)

JAMVERSION

b2 version, as a semantic triplet "X.Y.Z".

JAM_VERSION

A predefined global variable with two elements indicates the version number of Boost Jam. Boost Jam versions start at 03 00. Earlier versions of Jam do not automatically define JAM_VERSION.

JAMSHELL

When b2 executes a rule’s action block, it forks and execs a shell, passing the action block as an argument to the shell. The invocation of the shell can be controlled by $(JAMSHELL). The default on Unix is, for example:

JAMSHELL = /bin/sh -c % ;

The % is replaced with the text of the action block.

B2 does not directly support building in parallel across multiple hosts, since that is heavily dependent on the local environment. To build in parallel across multiple hosts, you need to write your own shell that provides access to the multiple hosts. You then reset $(JAMSHELL) to reference it.

Just as b2 expands a % to be the text of the rule’s action block, it expands a ! to be the multi-process slot number. The slot number varies between 1 and the number of concurrent jobs permitted by the -j flag given on the command line. Armed with this, it is possible to write a multiple host shell. For example:

#!/bin/sh

# This sample JAMSHELL uses the SunOS on(1) command to execute a
# command string with an identical environment on another host.

# Set JAMSHELL = jamshell ! %
#
# where jamshell is the name of this shell file.
#
# This version handles up to -j6; after that they get executed
# locally.

case $1 in
1|4) on winken sh -c "$2";;
2|5) on blinken sh -c "$2";;
3|6) on nod sh -c "$2";;
*) eval "$2";;
esac
__TIMING_RULE__ and __ACTION_RULE__

The __TIMING_RULE__ and __ACTION_RULE__ variables can be set to the name of a rule for b2 to call after an action completes for a target. Both give diagnostic information about the action that completed. For __TIMING_RULE__ the rule is called as:

rule timing-rule ( args * : target : start end user system )

And __ACTION_RULE__ is called as:

rule action-rule ( args * : target : command status start end user system : output ? )

The arguments for both are:

args

Any values following the rule name in the __TIMING_RULE__ or __ACTION_RULE__ are passed along here.

target

The b2 target that was built.

command

The text of the executed command in the action body.

status

The integer result of the executed command.

start

The starting timestamp of the executed command as an ISO-8601 UTC value.

end

The completion timestamp of the executed command as an ISO-8601 UTC value.

user

The number of user CPU seconds the executed command spent as a floating point value.

system

The number of system CPU seconds the executed command spent as a floating point value.

output

The output of the command as a single string. The content of the output reflects the use of the -pX option.

If both variables are set for a target both are called, first __TIMING_RULE__ then __ACTION_RULE__.
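
A hedged sketch; the rule, extra argument, and target names are hypothetical:

rule report-action ( args * : target : command status start end user system : output ? )
{
    ECHO $(args) ":" $(target) "finished with status" $(status) ;
}
__ACTION_RULE__ on my-target = report-action my-tag ;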

12.2.7. Modules

Boost Jam introduces support for modules, which provide some rudimentary namespace protection for rules and variables. A new keyword, module, was also introduced. The features described in this section are primitives, meaning that they are meant to provide the operations needed to write Jam rules which provide a more elegant module interface.

Declaration
module expression { ... }

Code within the { …​ } executes within the module named by evaluating expression. Rule definitions can be found in the module’s own namespace, and in the namespace of the global module as module-name.rule-name, so within a module, other rules in that module may always be invoked without qualification:

module my_module
{
    rule salute ( x ) { ECHO $(x), world ; }
    rule greet ( ) { salute hello ; }
    greet ;
}
my_module.salute goodbye ;

When an invoked rule is not found in the current module’s namespace, it is looked up in the namespace of the global module, so qualified calls work across modules:

module your_module
{
    rule bedtime ( ) { my_module.salute goodnight ; }
}
Variable Scope

Each module has its own set of dynamically nested variable scopes. When execution passes from module A to module B, all the variable bindings from A become unavailable, and are replaced by the bindings that belong to B. This applies equally to local and global variables:

module A
{
    x = 1 ;
    rule f ( )
    {
        local y = 999 ; # becomes visible again when B.f calls A.g
        B.f ;
    }
    rule g ( )
    {
        ECHO $(y) ;     # prints "999"
    }
}
module B
{
    y = 2 ;
    rule f ( )
    {
        ECHO $(y) ; # always prints "2"
        A.g ;
    }
}

The only way to access another module’s variables is by entering that module:

rule peek ( module-name ? : variables + )
{
    module $(module-name)
    {
        return $($(>)) ;
    }
}

Note that because existing variable bindings change whenever a new module scope is entered, argument bindings become unavailable. That explains the use of $(>) in the peek rule above.

Local Rules
local rule rulename...

The rule is declared locally to the current module. It is not entered in the global module with qualification, and its name will not appear in the result of:

[ RULENAMES module-name ]
The RULENAMES Rule
rule RULENAMES ( module ? )

Returns a list of the names of all non-local rules in the given module. If module is omitted, the names of all non-local rules in the global module are returned.

The VARNAMES Rule
rule VARNAMES ( module ? )

Returns a list of the names of all variable bindings in the given module. If module is omitted, the names of all variable bindings in the global module are returned.

This includes any local variables in rules from the call stack which have not returned at the time of the VARNAMES invocation.
The IMPORT Rule

IMPORT allows rule name aliasing across modules:

rule IMPORT ( source_module ? : source_rules *
            : target_module ? : target_rules * )

The IMPORT rule copies rules from the source_module into the target_module as local rules. If either source_module or target_module is not supplied, it refers to the global module. source_rules specifies which rules from the source_module to import; target_rules specifies the names to give those rules in target_module. If source_rules contains a name which doesn’t correspond to a rule in source_module, or if it contains a different number of items than target_rules, an error is issued. For example,

# import m1.rule1 into m2 as local rule m1-rule1.
IMPORT m1 : rule1 : m2 : m1-rule1 ;
# import all non-local rules from m1 into m2
IMPORT m1 : [ RULENAMES m1 ] : m2 : [ RULENAMES m1 ] ;
The EXPORT Rule

EXPORT marks local rules as exportable so they can be imported by other modules:

rule EXPORT ( module ? : rules * )

The EXPORT rule marks the listed rules in module as non-local (and thus exportable). If an element of rules does not name a rule in module, an error is issued. For example,

module X {
  local rule r { ECHO X.r ; }
}
IMPORT X : r : : r ; # error - r is local in X
EXPORT X : r ;
IMPORT X : r : : r ; # OK.
The CALLER_MODULE Rule
rule CALLER_MODULE ( levels ? )

CALLER_MODULE returns the name of the module scope enclosing the call to its caller (if levels is supplied, it is interpreted as an integer number of additional levels of call stack to traverse to locate the module). If the scope belongs to the global module, or if no such module exists, returns the empty list. For example, the following prints "{Y} {X}":

module X {
    rule get-caller { return [ CALLER_MODULE ] ; }
    rule get-caller's-caller { return [ CALLER_MODULE 1 ] ; }
    rule call-Y { return Y.call-X2 ; }
}
module Y {
    rule call-X { return X.get-caller ; }
    rule call-X2 { return X.get-caller's-caller ; }
}
callers = [ X.get-caller ] [ Y.call-X ] [ X.call-Y ] ;
ECHO {$(callers)} ;
The DELETE_MODULE Rule
rule DELETE_MODULE ( module ? )

DELETE_MODULE removes all of the variable bindings and otherwise-unreferenced rules from the given module (or the global module, if no module is supplied), and returns their memory to the system.

Though it won’t affect rules that are currently executing until they complete, DELETE_MODULE should be used with extreme care because it immediately wipes out the module’s other rules and all of its variables (including locals in that module). Because of the way dynamic binding works, variables which are shadowed by locals will not be destroyed, so the results can be really unpredictable.

12.3. Miscellaneous

12.3.1. Diagnostics

In addition to generic error messages, b2 may emit one of the following:

warning: unknown rule X

A rule was invoked that has not been defined with an actions or rule statement.

using N temp target(s)

Targets marked as being temporary (but nonetheless present) have been found.

updating N target(s)

Targets are out-of-date and will be updated.

can't find N target(s)

Source files can’t be found and there are no actions to create them.

can't make N target(s)

Due to sources not being found, other targets cannot be made.

warning: X depends on itself

A target depends on itself either directly or through its sources.

don't know how to make X

A target is not present and no actions have been defined to create it.

X skipped for lack of Y

A source failed to build, and thus a target cannot be built.

warning: using independent target X

A target that is not a dependency of any other target is being referenced with $(<) or $(>).

X removed

B2 removed a partially built target after being interrupted.

12.3.2. Bugs, Limitations

For parallel building to be successful, the dependencies among files must be properly spelled out, as targets tend to get built in a quickest-first ordering. Also, beware of un-parallelizable commands that drop fixed-named files into the current directory, like yacc(1) does.

A poorly set $(JAMSHELL) is likely to result in silent failure.

12.3.3. Fundamentals

This section is derived from the official Jam documentation and from experience using it and reading the Jambase rules. We repeat the information here mostly because it is essential to understanding and using Jam, but is not consolidated in a single place. Some of it is missing from the official documentation altogether. We hope it will be useful to anyone wishing to become familiar with Jam and the Boost build system.

  • Jam rules are actually simple procedural entities. Think of them as functions. Arguments are separated by colons.

  • A Jam target is an abstract entity identified by an arbitrary string. The built-in DEPENDS rule creates a link in the dependency graph between the named targets.

  • Note that the original Jam documentation for the built-in INCLUDES rule is incorrect: INCLUDES targets1 : targets2 causes everything that depends on a member of targets1 to depend on all members of targets2. It does this in an odd way, by tacking targets2 onto a special tail section in the dependency list of everything in targets1. It seems to be OK to create circular dependencies this way; in fact, it appears to be the "right thing to do" when a single build action produces both targets1 and targets2.

  • When a rule is invoked, if there are actions declared with the same name as the rule, the actions are added to the updating actions for the target identified by the rule’s first argument. It is actually possible to invoke an undeclared rule if corresponding actions are declared: the rule is treated as empty.

  • Targets (other than NOTFILE targets) are associated with paths in the file system through a process called binding. Binding is a process of searching for a file with the same name as the target (sans grist), based on the settings of the target-specific SEARCH and LOCATE variables.

  • In addition to local and global variables, jam allows you to set a variable on a target. Target-specific variable values can usually not be read, and take effect only in the following contexts:

    • In updating actions, variable values are first looked up on the target named by the first argument (the target being updated). Because Jam builds its entire dependency tree before executing actions, Jam rules make target-specific variable settings as a way of supplying parameters to the corresponding actions.

    • Binding is controlled entirely by the target-specific setting of the SEARCH and LOCATE variables, as described here.

    • In the special rule used for header file scanning, variable values are first looked up on the target named by the rule’s first argument (the source file being scanned).

  • The "bound value" of a variable is the path associated with the target named by the variable. In build actions, the first two arguments are automatically replaced with their bound values. Target-specific variables can be selectively replaced by their bound values using the bind action modifier.

  • Note that the term "binding" as used in the Jam documentation indicates a phase of processing that includes three sub-phases: binding (yes!), update determination, and header file scanning. The repetition of the term "binding" can lead to some confusion. In particular, the Modifying Binding section in the Jam documentation should probably be titled "Modifying Update Determination".

  • "Grist" is just a string prefix of the form <characters>. It is used in Jam to create unique target names based on simpler names. For example, the file name test.exe may be used by targets in separate sub-projects, or for the debug and release variants of the "same" abstract target. Each distinct target bound to a file called "test.exe" has its own unique grist prefix. The Boost build system also takes full advantage of Jam’s ability to divide strings on grist boundaries, sometimes concatenating multiple gristed elements at the beginning of a string. Grist is used instead of identifying targets with absolute paths for two reasons:

    1. The location of targets cannot always be derived solely from what the user puts in a Jamfile, but sometimes depends also on the binding process. Some mechanism to distinctly identify targets with the same name is still needed.

    2. Grist allows us to use a uniform abstract identifier for each built target, regardless of target file location (as allowed by setting ALL_LOCATE_TARGET).

  • When grist is extracted from a name with $(var:G), the result includes the leading and trailing angle brackets. When grist is added to a name with $(var:G=expr), existing grist is first stripped. Then, if expr is non-empty, leading <s and trailing >s are added if necessary to form an expression of the form <expr2>; <expr2> is then prepended.

  • When Jam is invoked it imports all environment variable settings into corresponding Jam variables, followed by all command-line (-s…​) variable settings. Variables whose name ends in PATH, Path, or path are split into string lists on OS-specific path-list separator boundaries (e.g. ":" for UNIX and ";" for Windows). All other variables are split on space (" ") boundaries. Boost Jam modifies that behavior by allowing variables to be quoted.

  • A variable whose value is an empty list or which consists entirely of empty strings has a negative logical value. Thus, for example, code like the following allows a sensible non-empty default which can easily be overridden by the user:

    MESSAGE ?= starting jam... ;
    if $(MESSAGE) { ECHO The message is: $(MESSAGE) ; }

    If the user wants a specific message, he invokes jam with -sMESSAGE=message text. If he wants no message, he invokes jam with -sMESSAGE= and nothing at all is printed.

  • The parsing of command line options in Jam can be rather unintuitive compared with how other Unix programs accept options. There are two variants accepted as valid for an option:

    1. -xvalue, and

    2. -x value.

13. Implementation Reference

This includes reference documentation for the internals of the B2 engine code. Even though the Jam-facing build system interfaces are also implemented in the engine, they are documented in the general reference section. This section goes deeper and covers parts of the engine's mechanisms and data structures. It is meant for those making changes to the engine itself.

13.1. b2::list_cref

Container of b2 values that is a non-owning reference to a LIST. It mostly follows random-access container behavior.

13.1.1. b2::list_cref Overview

struct list_cref
{
	// types
	struct iterator;
	using size_type = int32_t;
	using value_type = OBJECT *;

	// construct/copy/destroy
	list_cref() = default;
	list_cref(const list_cref &) = default;
	list_cref(list_cref && other);
	explicit list_cref(LIST * l);
	list_cref & operator=(const list_cref &) = default;

	// iterators
	iterator begin() const;
	iterator end() const;

	// capacity
	bool empty() const B2_NOEXCEPT;
	size_type length() const B2_NOEXCEPT;
	size_type size() const B2_NOEXCEPT;

	// element access
	value_type & operator[](size_type i) const;

	// list operations
	bool contains(value_ref a) const;
	list_ref slice(size_type i, size_type j = -1) const;
	bool operator==(const list_cref & b) const;
	bool operator==(const list_ref & b) const;

	// data access
	LIST * data() const B2_NOEXCEPT;
	LIST * operator*() const B2_NOEXCEPT;

	protected:
	friend struct iterator;
	LIST * list_obj = nullptr;
};

13.1.2. b2::list_cref Construct/Copy/Destroy

b2::list_cref::list_cref
inline list_cref::list_cref(list_cref && other)
inline list_cref::list_cref(LIST * l)

13.1.3. b2::list_cref Iterators

b2::list_cref::begin
inline list_cref::iterator list_cref::begin() const
b2::list_cref::end
inline list_cref::iterator list_cref::end() const

13.1.4. b2::list_cref Capacity

b2::list_cref::empty
inline bool list_cref::empty() const B2_NOEXCEPT
b2::list_cref::length
inline list_cref::size_type list_cref::length() const B2_NOEXCEPT
inline list_cref::size_type list_cref::size() const B2_NOEXCEPT

13.1.5. b2::list_cref Element Access

b2::list_cref::operator[]
inline list_cref::value_type & list_cref::operator[](
	list_cref::size_type i) const

13.1.6. b2::list_cref List Operations

b2::list_cref::contains
inline bool list_cref::contains(value_ref a) const
b2::list_cref::slice
inline list_ref list_cref::slice(
	list_cref::size_type i, list_cref::size_type j) const
b2::list_cref::operator==
inline bool list_cref::operator==(const list_cref & b) const
inline bool list_cref::operator==(const list_ref & b) const

13.1.7. b2::list_cref Data Access

b2::list_cref::data, b2::list_cref::operator*
inline LIST * list_cref::data() const B2_NOEXCEPT
inline LIST * list_cref::operator*() const B2_NOEXCEPT

13.2. b2::list_ref

Container of b2 values that is an owning reference to a LIST. It mostly follows random-access container behavior and, as an owning reference, allocates, copies, and moves LIST objects as needed.

13.2.1. b2::list_ref Overview

struct list_ref : private list_cref
{
	// types
	using list_cref::iterator;
	using list_cref::size_type;
	using list_cref::value_type;

	using list_cref::begin;
	using list_cref::end;
	using list_cref::empty;
	using list_cref::length;
	using list_cref::size;
	using list_cref::operator[];
	using list_cref::contains;
	using list_cref::operator==;
	using list_cref::data;
	using list_cref::operator*;

	// construct/copy/destroy
	list_ref() = default;
	list_ref(list_ref && other);
	list_ref(const list_cref & other);
	list_ref(const list_ref & other);
	explicit list_ref(value_ref o);
	explicit list_ref(LIST * l, bool own = false);
	list_ref(iterator i, const iterator & e);
	~list_ref();

	// modifiers
	LIST * release();
	void reset(LIST * new_list = nullptr);
	list_ref & append(const list_ref & other);
	list_ref & append(list_cref other);
	list_ref & operator+(const list_ref & other);
	list_ref & operator+(const list_cref & other);
	list_ref & push_back(OBJECT * value);
	template <typename... T>
	list_ref & push_back(T... value);
	template <typename T>
	list_ref & operator+(T value);
	list_ref & pop_front();
	list_ref & operator=(list_ref && other);

	// list operations
	inline list_ref & slice(size_type i, size_type j = -1);
	inline list_cref cref() const;
};

13.2.2. b2::list_ref Construct/Copy/Destroy

b2::list_ref::list_ref
inline list_ref::list_ref(list_ref && other) // (1)
inline list_ref::list_ref(const list_cref & other) // (2)
inline list_ref::list_ref(const list_ref & other) // (2)
inline list_ref::list_ref(value_ref o) // (3)
inline list_ref::list_ref(LIST * l, bool own) // (4)
inline list_ref::list_ref(iterator i, const iterator & e) // (5)
  1. The data for the list is moved from other.

  2. Makes a copy of the list from other.

  3. Makes a new list with the initial given element value.

  4. If own == true takes ownership of the data l, otherwise makes a copy of it.

  5. Fills the new list with the elements from the [i,e) range.

13.2.3. b2::list_ref Modifiers

b2::list_ref::release

inline LIST * list_ref::release()

Returns the list data relinquishing ownership of it. This list is left in an empty valid state.

b2::list_ref::reset

inline void list_ref::reset(LIST * new_list)

Replaces the list data with the given new_list. The current list is freed along with the elements in the list.

b2::list_ref::append

inline list_ref & list_ref::append(const list_ref & other)
inline list_ref & list_ref::append(list_cref other)
inline list_ref & list_ref::operator+(const list_ref & other)
inline list_ref & list_ref::operator+(const list_cref & other)

Adds the elements from the other list, making copies, to the end of this list. All the functions return a reference to the list to allow for chaining. For example: list_ref() + "one" + "two".

b2::list_ref::push_back

inline list_ref & list_ref::push_back(OBJECT * value)
template <typename... T>
inline list_ref & list_ref::push_back(T... value) // (1)
template <typename... T>
inline list_ref & list_ref::operator+(T value) // (2)
  1. Adds a value constructed from the given arguments. I.e. by calling value::make(value…​).

  2. Adds the value that is convertible to value_type.

Adds a single value to the end of the list. The list is returned to allow for chaining.

b2::list_ref::pop_front

inline list_ref & list_ref::pop_front()
b2::list_ref::operator=, assign
inline list_ref & list_ref::operator=(list_ref && other) // (1)
  1. Moves the data from other to this list.

b2::list_ref::slice
inline list_ref & list_ref::slice(size_type i, size_type j)

Replaces, in-place, the list with the indicated subrange slice [i,j]. Negative values of j index from the end() position.

13.3. b2::lists

Container of a "list of list" that is owning an instance of LOL object. The interface allows for inline composition of the LOL.

13.3.1. b2::lists Overview

struct lists
{
	// types
	using size_type = int32_t;

	// construct/copy/destroy
	lists();
	lists(lists && other);
	~lists();

	// capacity
	bool empty() const B2_NOEXCEPT;
	size_type size() const B2_NOEXCEPT;
	size_type length() const B2_NOEXCEPT;
	size_type max_size() const B2_NOEXCEPT;
	size_type capacity() const B2_NOEXCEPT;

	// element access
	list_cref operator[](size_type i) const;

	// modifiers
	void push_back(const list_cref & l);
	void push_back(list_ref && l);
	lists & operator|(const list_cref & l);
	lists & operator|(LIST * l);
	lists & operator|(list_ref && l);
	lists & operator|=(list_ref && l);
	lists & append(const lists & lol);
	lists & operator|(const lists & lol);
	void swap(LOL & other);
	void clear();

	// display
	void print() const;

	// data access
	LOL * data() const B2_NOEXCEPT;
	operator LOL *() const B2_NOEXCEPT;

	private:
	mutable LOL lol;
};

13.3.2. b2::lists Construct/Copy/Destroy

b2::lists::lists
inline lists::lists()
inline lists::lists(lists && other)
b2::lists::~lists
inline lists::~lists()

13.3.3. b2::lists Capacity

13.3.4. b2::lists Element Access

b2::lists::operator[]
inline list_cref lists::operator[](int32_t i) const

Returns a constant reference, i.e. list_cref, to the list at the given index i.

13.3.5. b2::lists Modifiers

b2::lists::push_back

inline void lists::push_back(const list_cref & l)
inline void lists::push_back(list_ref && l)

Adds the given list to the end of the LOL.

b2::lists::operator|

inline lists & lists::operator|(const list_cref & l)
inline lists & lists::operator|(LIST * l)
inline lists & lists::operator|(list_ref && l)
inline lists & lists::operator|=(list_ref && l)

Adds the given list to the end of the LOL. This returns this object making it possible to chain additions into a single statement.

b2::lists::swap

inline void lists::swap(LOL & other)

Swaps the data from other with this.

b2::lists::clear

inline void lists::clear()

Deallocates any list items and resets the length to zero.

13.3.6. b2::lists Display

b2::lists::print

inline void lists::print() const

Outputs, to cout, the lists as colon (:) separated and space separated elements.

13.3.7. b2::lists Data Access

b2::lists::data

inline LOL * lists::data() const B2_NOEXCEPT
inline lists::operator LOL *() const B2_NOEXCEPT

Returns the underlying LOL object.

14. History

14.1. Version 5.2.1

This patch reverts the change to define _HAS_EXCEPTIONS=0 for the Dinkumware std library. It had the undesired effect of changing the ABI. It is better for user code to handle the combination of turning off exceptions while treating warnings as errors, and to have users silence the resulting std library warnings themselves.

This patch also fixes the case of asking to initialize any msvc toolset versions (using msvc ;) when there are already versions initialized. Instead of erroring to say that a version is already in use, it considers the set of already initialized msvc toolsets as satisfying the request to generally initialize msvc.

14.2. Version 5.2.0

Many fixes in this release from regular contributors Nikita and Dmitry. There are also a couple of new features. First, the ability to have dll-path in searched libraries, which makes it possible to better support system/external libraries. The second new feature is the addition of generating compile_commands.json for IDE and tool integration; that has been a wish item for a long time.

  • New: Add support for generating compile_commands.json command database for some IDE integration. — René Ferdinand Rivera Morell

  • New: Addition of a module (db) for structured data management. Has a property-db class that can write out as JSON data. — René Ferdinand Rivera Morell

  • New: Allow adding dll-path to usage requirements of searched libraries. Which makes it possible to adjust search paths for dynamic libs on the platform. — Dmitry Arkhipov

  • Fix incorrect recursive loading of modules when doing recursive importing of modules. The recursive loading would cause stack overflows. — René Ferdinand Rivera Morell

  • Default to building with Clang on FreeBSD and OpenBSD. — Nikita Kniazev

  • Fix Solaris/SunOS detection in engine for OS=SUNOS and OS=SOLARIS. — Nikita Kniazev

  • Allow some tools to be default initialized with using multiple times. The tools are asciidoctor, fop, gettext, pkg-config, python, sass, and saxonhe. — Dmitry Arkhipov

  • Fix using relevant features in ac module that would cause incorrect configure checks. — Dmitry Arkhipov

  • Fix application of -m31, -m32, -m64 for address model on certain hardware and avoid adding them when not supported, generally. — Nikita Kniazev

  • Fix incorrectly attaching manifest when not targeting Windows for engine bootstrap build. — Nikita Kniazev

  • Fix embed-manifest-via=linker for clang-win toolset. — Nikita Kniazev

  • Fix incorrect unknown OS name to use the correct none, which is needed for correct emscripten toolset building. — Nikita Kniazev

  • Fix: For msvc toolset, register only the versions that have been asked to init. — Nikita Kniazev

  • Fix: Suppress spurious 1 file(s) copied. message after every test. — Nikita Kniazev

  • Fix: Improve handling of install directories. — Dmitry Arkhipov

  • Fix: For msvc toolset, recognize Visual Studio 2022 v17.10.0, which uses toolchain version 14.40.33807. — Dmitry Andric

  • Fix: For msvc toolset, exception-handling=off should define _HAS_EXCEPTIONS=0 for Dinkumware/MSSTL. — Nikita Kniazev

14.3. Version 5.1.0

This is mostly a bugfix release to account for issues impacting Boost Libraries. There is one "big" change though. It can be rather difficult to find build failures when running larger builds. To facilitate figuring out problems the brief summary output at the end of a build is now less brief. It now includes a sorted list of the targets that got skipped and failed. The output of those lists mirrors the general skipped/failed items. Hence it’s possible to search for the same strings in the rest of the output quickly.

  • New: Add listing of failed and skipped targets to end of build summary to make it easier to find what fails. — René Ferdinand Rivera Morell

  • New: Add mpi.run-flags to mpi toolset that allows for arbitrary flags applied to running mpi targets. This allows, for example, adding --oversubscribe flag to make it possible to run tests where the tasks are more than the nodes available. — René Ferdinand Rivera Morell

  • Fix spurious errors when the header scanning tries to scan empty file names. — René Ferdinand Rivera Morell

  • Make C/C++/ObjC include directive scanning pattern more strict to avoid trying to scan for empty file names. — Andrey Semashev

  • Fix mingw linker commands to always replace backslashes with forward slashes. — Christian Seiler

  • Fix QCC debug build flag. The QCC toolset was using an old, no longer supported, debug symbols option. — John McFarlane

14.4. Version 5.0.1

  • Fix compile errors for older versions of GCC and Clang toolset for the engine. We now support building the engine with GCC 4.7 and Clang 3.6 onward. — René Ferdinand Rivera Morell

  • Fix import-search failing to find imports on Windows because of incorrect native vs. non-native path handling. — René Ferdinand Rivera Morell

  • Support cross-compile install of B2, using target-os=xyz. — René Ferdinand Rivera Morell

14.5. Version 5.0.0

This is a new era in B2. The drive of this new major version is to move the core build system from being implemented in Jam to C++. This initial release is only a start in this move by having some minimal aspects implemented in C++ using a new Jam/C++ native binding system. Even though this is a major release, the goal is to still have backward compatibility for existing project build files. But that backward compatibility is not guaranteed for other Jam files.

  • New: Support for Jam native variant values of string (original Jam value type), number (floating point numbers), and object (instances of classes). — René Ferdinand Rivera Morell

  • New: Port modules class, errors, modules, regex, set, string, and sysinfo to C++. — René Ferdinand Rivera Morell

  • New: Port bootstrap.jam to C++ and instead use build-system.jam as key file to find build files. — René Ferdinand Rivera Morell

  • New: Add require-b2 builtin rule to verify the B2 version a particular Jam file needs. — René Ferdinand Rivera Morell

  • New: Add regex.grep builtin that does parallel (where available) file content search with regex matching. — René Ferdinand Rivera Morell

  • New: Make parts of internals thread safe to support parallel built-ins. Currently includes Jam values, hash tables, and filesystem. — René Ferdinand Rivera Morell

  • New: Add import-search project rule to declare additional search paths for import that refer to searched project locations, or other directories. — René Ferdinand Rivera Morell

  • Fix consistent use of OPT_SEMAPHORE and documentation of JAM_SEMAPHORE. — Thomas Brown

  • Fix archive actions failing for mingw. — René Ferdinand Rivera Morell

Building B2 with Visual Studio 2013, i.e. MSVC 12, is no longer supported or tested. The effort to engineer workarounds for the missing C++11 features became too much and was taking away from other improvements.

14.6. Version 4.10.1

  • Silence warnings for using standard deprecated functions by Apple clang toolset in b2 build. — René Ferdinand Rivera Morell

14.7. Version 4.10.0

This release contains many bug fixes but along the way also cleanup and refactoring of many toolsets, thanks to Nikita.

  • New: Scan assembler files for C Preprocessor includes. — Nikita Kniazev

  • Fix: Inherit generator overrides from a base toolset. — Nikita Kniazev

  • New: Add linemarkers feature that on preprocessing targets changes behavior to emit/omit line directives like #line and #<linenum>. — Nikita Kniazev

  • Fix compiler name for QNX. — James Choi

  • Fix openssl name handling. — Dmitry Arkhipov

  • Fix clang-win assembler path deduction. — Nikita Kniazev

  • Fix toolset sub-feature requirements inheritance. — Nikita Kniazev

  • Unify compile and link of clang-linux toolset with gcc toolset. — Nikita Kniazev

  • Fix same directory pch header generation for msvc toolset. — Nikita Kniazev

  • Implement --durations which reports top targets by execution time. — Nikita Kniazev

  • Change clang-darwin to inherit from clang-linux and unify compile commands. — Nikita Kniazev

  • Fix clang-linux to not override RPATH_OPTION. — Nikita Kniazev

  • Fix inadvertently running configuration checks that shouldn’t (as reported by Alexander Grund). By changing <build>no conditionals evaluation to short circuit. — Nikita Kniazev

  • Fix same toolset overrides (inherit-overrides). — Nikita Kniazev

  • New: Add using the C processors for assembly source files. — Nikita Kniazev

  • Many improvements and cleanup of internal testing. — Nikita Kniazev

  • Unify gcc and clang-linux soname option handling and disable it on Windows. — Nikita Kniazev

  • Unify gcc/mingw linking of shared and import libs. — Nikita Kniazev

  • Fix pdb generation ordering and naming issues. — Nikita Kniazev

  • Unify clang-darwin linking with gcc. — Nikita Kniazev

  • Fix mingw/msys/cygwin, winthreads/pthread inconsistencies to correct compiler flags. — Nikita Kniazev

  • Unify clang-vxworks by inheriting from clang-linux. — Nikita Kniazev

  • Don’t store empty config cache and log. — Nikita Kniazev

  • Fix generator custom rule name inheritance. This affects cygwin/mingw linking. — Nikita Kniazev

  • Fix testing.execute=off to correct run-fail behavior. — Nikita Kniazev

  • Fix use-project with native paths. — René Ferdinand Rivera Morell

  • Fix msvc auto config version priority. Now msvc toolsets are configured in correct newest to oldest regardless of being found from the registry or not. — René Ferdinand Rivera Morell

  • New: Add support for automatic searching of external projects for global target and project references. — René Ferdinand Rivera Morell

14.8. Version 4.9.6

  • Fix version check for winsdk on clang-win toolset. — Nikita Kniazev

14.9. Version 4.9.5

  • Improve alternative match error message to include more context. — René Ferdinand Rivera Morell

  • Fix errors when doing use-project inside projects that get included from another use-project. — René Ferdinand Rivera Morell

  • Support native msvc compilers on ARM64. — Stephen Just

  • PCH fixes: fix msvc pch include dir; fix msvc pch header name; fix missing gcc -ftemplate-depth when building pch. — Nikita Kniazev

  • New: clang-win searches for the compiler executable in default install locations when it is not found in PATH. — Nikita Kniazev

  • Fix clang-win to support versioned winsdk bin location. — Nikita Kniazev

14.10. Version 4.9.4

  • Fix crash on some platforms/compilers from invalid garbage reads of varargs end marker being an int instead of a nullptr.

  • Don’t force Windows path separators for GCC when on Windows. As it confuses Cygwin GCC’s relative include path handling. — René Ferdinand Rivera Morell

  • Added common-requirements to project declaration to shorthand as declaring the same for both requirements and usage-requirements. — René Ferdinand Rivera Morell

  • Add to pass in targets to project explicit rule to reduce duplication of explicit targets when there are many. — René Ferdinand Rivera Morell

  • Make coverage feature non-incidental and link-incompatible. — Thomas Brown

  • Use PATH-based lookup for sh. For things such as Gentoo Prefix, we want to use the Bourne shell from the prefix and not the potentially ancient version from the main system. — David Seifert

14.11. Version 4.9.3

  • Updated cxxstd for 23 and 26 versions of recent gcc and clang. (#184) — Andrey Semashev

14.12. Version 4.9.2

  • Fix too long msvc link actions. — René Ferdinand Rivera Morell

14.13. Version 4.9.1

  • Fix bad calculation of initial dev-only path to bootstrap file within the b2 dev tree. — René Ferdinand Rivera Morell

  • Fix bad path calculation in final fallback for loading bootstrap file from path specified in boost-build rule. — René Ferdinand Rivera Morell

14.14. Version 4.9.0

This release has mostly internal cleanups and restructuring. The most significant are: fixing all memory leaks, and removing automatic build system startup with the boost-build rule, the Jam engine Python interfaces, and the unmaintained Python build system port.

  • Add minimal and debug options for optimization feature. — René Ferdinand Rivera Morell

  • Add Rocket Lake, Alder Lake, Sapphire Rapids and Zen 3 instruction sets. — Andrey Semashev

  • Remove all, on-exit, memory leaks and fix all ASAN errors. — René Ferdinand Rivera Morell

  • Remove use of boost-build.jam as an initialization configuration file. — René Ferdinand Rivera Morell

  • Remove the incomplete build system port and Jam engine Python support extensions. — René Ferdinand Rivera Morell

  • Fix not being able to do combined arm+x86 builds on macOS with darwin and clang toolsets. — René Ferdinand Rivera Morell

  • Fix not being able to do cross-compiles on macOS with clang toolset. — René Ferdinand Rivera Morell

  • Fix errors when collecting a large number of object files with long names into a static archive for gcc and clang toolsets. — René Ferdinand Rivera Morell

  • Fix detection of QCC in build.sh engine build script. — René Ferdinand Rivera Morell

  • Fix missing assembly flags for intel-win toolset. — René Ferdinand Rivera Morell

  • Fix possible command line length limit exceeded error with msvc toolset for link actions. — René Ferdinand Rivera Morell

  • New: Add a "t" mode to FILE_OPEN built-in rule that gives one the contents of a file when evaluated. — René Ferdinand Rivera Morell

This release removes the use of boost-build.jam and the boost-build rule for initialization. The boost-build.jam file is still searched for and loaded so as not to break existing setups, but it is considered deprecated and will be removed in a future release.

14.15. Version 4.8.2

  • Fix crash on exit cleanup of target lists caused by recursive destruction and incorrect target list pop unlinking. — René Ferdinand Rivera Morell

14.16. Version 4.8.1

  • Fix build of engine on old macOS/XCode versions prior to 9.0 because of missing EXIT_SUCCESS and EXIT_FAILURE macros. — René Ferdinand Rivera Morell

14.17. Version 4.8.0

  • New: Add support for LoongArch. — Zhang Na

  • Change the engine build to use static Intel libs, if available, instead of static C++ runtime libs, to fix systems where the static C++ runtime is not available. — Alain Miniussi

  • Reorder msvc cflags and cxxflags, and add compileflags, to fix inability to override flags by users. — Peter Dimov

  • Don’t quote RPATH on clang-linux, fixing the use of double-quotes and making it possible to use $ORIGIN. — Dimitry Andric

  • Fix b2 executable detection on kFreeBSD. — Laurent Bigonville

  • Add the .ipp extension to header scanning and as a valid C++ file type. — Jim King

  • Fix missing install targets when build=no is in source target usage requirements. — Dmitry Arkhipov

  • Add some future versions of C++ to cxxstd feature. — René Ferdinand Rivera Morell

  • Fix many memory leaks in engine. — René Ferdinand Rivera Morell

  • Change abort/exit calls to clean exception handling to allow for memory cleanup in engine. — René Ferdinand Rivera Morell

14.18. Version 4.7.2

  • Fix errors configuring intel-linux toolset if icpx is not in the PATH but icpc is in the PATH. — Mark E. Hamilton

  • Add cxxstd=20 to msvc toolset now that VS 2019 onward supports it. — Peter Dimov
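
For example, with a suitable Visual Studio 2019 (or later) install, the standard version can be requested directly on the command line:

b2 toolset=msvc cxxstd=20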

14.19. Version 4.7.1

  • Fix regression for linking with clang-win toolset. — Peter Dimov

14.20. Version 4.7.0

Many, many fixes and internal cleanups in this release, plus auto-detection and bootstrap support for the VS 2022 preview toolset.

  • New: Add vc143, aka VS2022, aka cl.exe 17.x toolset support. Includes building engine and automatic detection of the prerelease toolset. — Sergei Krivonos

  • Allow alias targets to continue even if <build>no is in the usage requirements, which allows composition of alias targets that may contain optional targets, like tests. — Dmitry Arkhipov

  • Fix use of JAMSHELL in gcc toolset. — René Ferdinand Rivera Morell

  • Fix compiling the b2 engine so that it works when run in a cross-architecture emulation context, i.e. when running ARM binaries on a 64-bit QEMU host. — René Ferdinand Rivera Morell

  • Default to 64-bit MSVC on 64-bit hosts. — Matt Chambers

  • Remove the /NOENTRY option for resource-only DLLs to allow correct linking. — gnaggnoyil

  • Fix redefinition error of unix when compiling engine on OpenBSD. — Brad Smith

  • Fix building with clang on iOS and AppleTV having extra unrecognized compiler options. — Konstantin Ivlev

  • Add missing Boost.JSON to boost support module. — Dmitry Arkhipov

  • Add arm/arm64 target support in clang-win toolset. — Volo Zyko

  • Avoid warnings about threading model for qt5. — psandana

  • Unify Clang and GCC PCH creation. — Nikita Kniazev

  • Move Objective-C support to GCC toolset. — Nikita Kniazev

  • Support values for instruction-set feature for Xilinx ZYNQ. — Thomas Brown

  • MIPS: add generic mips architecture. — YunQiang Su

  • Fix preprocessing on MSVC compiler. — Nikita Kniazev

14.21. Version 4.6.1

  • Fix building b2 engine with cygwin64. — René Ferdinand Rivera Morell

  • Fix version detection of clang toolset from compiler exec. — Nikita Kniazev

14.22. Version 4.6.0

This release wraps up a few new features that make using some toolsets easier (thanks to Nikita). It’s now also possible to specify empty flag features on the command line, like cxxflags=, and have them be ignored. This helps make CI scripts shorter, as they don’t need to handle those cases specially. And as usual there are many bug fixes and adjustments. Thanks to everyone who contributed to this release.
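
For instance, a CI script can select a specific clang version and pass flag variables straight through; when a variable is empty the property is simply ignored instead of causing an error. A sketch, assuming a POSIX shell and an installed clang 12:

b2 toolset=clang-12 cxxflags="$CXXFLAGS"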

  • New: Allow clang toolset to be auto-configured to a specific version by using toolset=clang-xx on the command line. — Nikita Kniazev

  • New: Include pch header automatically and on-demand on gcc and msvc toolset to mirror clang functionality. — Nikita Kniazev

  • New: Features that are marked as 'free' and 'optional' will now be ignored when the value specified on the command line is empty. Hence one can specify cxxflags= on the command line without errors. — René Ferdinand Rivera Morell

  • Preserve bootstrap.sh invoke arguments to forward to the build.sh script. — tkoecker

  • Remove use of local in build.sh to be compatible with some, not fully capable, shells. — Tanzinul Islam

  • Workaround shell array ref error in build.sh on busybox shells. — tkoecker

  • Check for needing -pthread to build engine with gcc on some platforms. — tkoecker

  • Default to using clang on macOS. — Stéphan Kochen

  • Add /python//numpy target to use as a dependency to communicate version specific properties. — Peter Dimov

  • Add default value for cxx and cxxflags from env vars CXX and CXXFLAGS when using the custom cxx toolset to build the engine. — Samuel Debionne and René Ferdinand Rivera Morell

  • Fix detection of intel-linux toolset installation when only the compiler executable is in the PATH. — René Ferdinand Rivera Morell

  • Fix b2 executable path determination for platforms that don’t have a native method of getting the path to executables, like OpenBSD. — René Ferdinand Rivera Morell

  • Fix property.find error message. — Thomas Brown

14.23. Version 4.5.0

Some minor fixes to improve some old issues.

  • Reenable ability of generators to return property-set as first item. — Andrew McCann

  • Fix examples to return 0 on success. — Mateusz Łoskot

  • Handle spaces in CXX path in config_toolset.bat.

  • Fix Conan b2 generator link, and pkg-config doc build error. — René Ferdinand Rivera Morell

14.24. Version 4.4.2

This release is the first of the new home for B2 at Build Frameworks Group.

  • Change references in documentation and sources of boost.org to point at equivalent bfgroup resources. — René Ferdinand Rivera Morell

  • New theme for B2 site and documentation. — René Ferdinand Rivera Morell

14.25. Version 4.4.1

Minor patch to correct missing fix for macOS default engine compiler.

  • Fix engine build defaulting to gcc instead of clang on macOS/Xcode. — René Ferdinand Rivera Morell

14.26. Version 4.4.0

Along with a variety of fixes this version introduces "dynamic" response file support for some toolsets. This means that under most circumstances, if supported by the toolset, response files are not generated. Instead the command is expanded to include the options directly.
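
As a sketch of the response-file feature introduced below (the target is hypothetical, and the value names file, contents, and auto are assumed), a target can explicitly request a classic response file:

exe big_app : [ glob src/*.cpp ] : <response-file>file ;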

  • New: Add response-file feature to control the kind of response file usage in toolset action. — René Ferdinand Rivera Morell

  • New: Add :O=value variable modifier for @() expansion. — René Ferdinand Rivera Morell

  • New: Add :<=value and :>=value variable modifiers for prefix and postfix values after the complete expansion of variable references. — René Ferdinand Rivera Morell

  • New: Implement PCH on clang-win and clang-darwin. — Nikita Kniazev

  • New: Add support for Intel oneAPI release to intel-linux toolset. — René Ferdinand Rivera Morell

  • New: Add support for Intel oneAPI release to intel-windows toolset. — Edward Diener

  • Remove the one-at-a-time linking limit. Once upon a time this was a performance tweak, as hardware and software were not up to doing multiple links at once; common setups today are better equipped. — René Ferdinand Rivera Morell

  • Fix building engine with GCC on AIX. — René Ferdinand Rivera Morell

  • Support building engine as either 32 or 64 bit addressing model. — René Ferdinand Rivera Morell

  • Basic support for building b2 engine on GNU/Hurd. — Pino Toscano

  • Update "borland" toolset to bcc32c for building B2. — Tanzinul Islam

  • Ensure Embarcadero toolset name is only "embtc". — Tanzinul Islam

  • Adapt for Emscripten 2.0 change of default behavior for archives. — Basil Fierz

  • Fix path to bootstrap for backward compatibility. — René Ferdinand Rivera Morell

  • Add missing BOOST_ROOT to bootstrap search. — René Ferdinand Rivera Morell

  • Fix for engine compile on FreeBSD. — René Ferdinand Rivera Morell

  • Default MSVC to a native platform, and remove ambiguous implicit address-model ARM/ARM64 values. — Nikita Kniazev

  • Fix detection of MIPS32 for b2 engine build. — Ivan Melnikov

  • Enable building b2 engine with clang on Windows. — Gei0r

  • Fix building b2 engine with Intel Linux icpc. — Alain Miniussi

  • Rework build.sh to fix many bugs and to avoid use of common env vars. — René Ferdinand Rivera Morell

  • Remove limitation of relevant features for configure checks. — René Ferdinand Rivera Morell

  • Reformat the configure check output to report the variants of the checks in a reasonably brief form. — René Ferdinand Rivera Morell

  • Support building engine on Windows Bash with Mingw. — René Ferdinand Rivera Morell

14.27. Version 4.3.0

There are many individual fixes in this release. Many thanks for the contributions. Special thanks to Nikita for the many improvements to msvc and the general plugging of support holes in all the compilers.

There are some notable new features from Dmitry, Edward, and Nikita:

  • New: Add force-include feature to include headers before all sources (see the sketch after this list). — Nikita Kniazev

  • New: Partial support for Embarcadero C++ compilers based on clang-5. — Edward Diener

  • New: Implement configurable installation prefixes that use features. — Dmitry Arkhipov

  • New: Add translate-path feature. The translate-path feature allows for custom path handling, with a provided rule, on a per target basis. This can be used to support custom path syntax. — René Ferdinand Rivera Morell

  • New: Add a portable B2 system install option. This allows the b2 executable and the build system files to live side by side, and hence to be (re)located anywhere on disk. It will soon be used to support Windows and other installers. This removes the need for the boost-build.jam file for bootstrap, making it easier for users to get started. — René Ferdinand Rivera Morell

  • Unbreak building from VS Preview command prompt. — Marcel Raad

  • Fix compiler version check on macOS darwin toolset. — Bo Anderson

  • Remove pch target naming restriction on GCC. — Nikita Kniazev

  • Select appropriate QNX target platform. — Alexander Karzhenkov

  • Various space & performance improvements to the b2 engine build on Windows. — Nikita Kniazev

  • Fill extra and pedantic warning options for every compiler. — Nikita Kniazev

  • Include OS error reason for engine IO failures. — Nikita Kniazev

  • Use /Zc:inline and /Zc:throwingNew flags for better language conformance. — Nikita Kniazev

  • Add cxxstd value 20 for C++20. — Andrey Semashev

  • Parallel B2 engine compilation on MSVC. — Nikita Kniazev

  • Updated instruction-set feature with new x86 targets. — Andrey Semashev

  • Pass /nologo to rc on Windows compilers. — Nikita Kniazev

  • Fixed negation in conditional properties. — Nikita Kniazev

  • Remove leftover manifest generation early exiting. — Nikita Kniazev

  • Fix timestamp delta calculation. — Nikita Kniazev

  • Add missing assembler options to clang-win.jam, to enable Context to build. — Peter Dimov

  • Updated scarce :chars documentation with :BS example. — Nikita Kniazev

  • Fix link statically against boost-python on linux. — Joris Carrier

  • Ongoing cleanup of engine build warnings. — René Ferdinand Rivera Morell

  • Allow self-testing of toolsets that use response files. — René Ferdinand Rivera Morell

  • Port Jambase to native C++, removing one of the oldest parts of the original Jam bootstrap process. — René Ferdinand Rivera Morell
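
A hedged sketch of the force-include feature noted at the top of this list (the target and header names are hypothetical):

exe app : app.cpp : <force-include>config.hpp ;

The named header is expected to be included before all sources, e.g. via -include on gcc/clang and /FI on msvc.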

14.28. Version 4.2.0

This release is predominantly minor fixes and cleanup of the engine. In particular, the bootstrap/build process now clearly communicates the C++11 requirement.

  • Add saxonhe_dir action. — Richard Hodges

  • Add CI testing for historical Boost versions on Windows MSVC. — René Ferdinand Rivera Morell

  • Check for C++11 support when building the engine, including an informative error message when it is missing. — René Ferdinand Rivera Morell

  • Update Jam grammar parser with latest bison version. — René Ferdinand Rivera Morell

  • Allow the root b2 engine build to work even if the bison grammar generator is not available. — René Ferdinand Rivera Morell

  • Warning free engine build on at least Windows, macOS, and Linux. — René Ferdinand Rivera Morell

  • Sanitize Windows engine build to consistently use ANSI Win32 API. — Mateusz Loskot

  • Fix b2 engine not exiting, with error, early when it detects a Jam language error. — Mateusz Loskot

  • Print help for local modules, i.e. current dir. — Thomas Brown

14.29. Version 4.1.0

Many small bug fixes in this release. But there are some new features also. There’s now an lto feature to specify the use of LTO, and what kind. The existing stdlib feature now has real values and corresponding options for some toolsets. But most importantly there’s new documentation for all the features.
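
For example (a sketch, assuming the clang toolset with libc++ available), both can be requested as ordinary properties on the command line:

b2 toolset=clang lto=on stdlib=libc++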

Thanks to all the users who contributed to this release with these changes:

  • Support for VS2019 for intel-win 19.0. — Edward Diener

  • Fix compiler warnings about -std=gnu11 when building b2 on Cygwin. — Andrey Semashev

  • Add example of creating multiple PCHs for individual headers. — René Ferdinand Rivera Morell

  • Add QNX threading flags for GCC toolset. — Aurelien Chartier

  • Fix the version option for IBM and Sun compilers when building the b2 engine. — Juan Alday

  • Rename strings.h to jam_strings.h in b2 engine to avoid clash with POSIX strings.h header. — Andrey Semashev

  • Add options for cxxstd feature for IBM compiler. — Edward Diener

  • Many fixes to the intel-win toolset. — Edward Diener

  • Add z15 instruction set for gcc based toolsets. — Neale Ferguson

  • Improve using MSVC from a Cygwin shell. — Michael Haubenwallner

  • Add LTO feature and corresponding support for gcc and clang toolsets. — Dmitry Arkhipov

  • Fix errors when a source doesn’t have a type. — Peter Dimov

  • Add documentation for features. — Dmitry Arkhipov

  • Enhance stdlib feature, and corresponding documentation, for clang, gcc, and sun toolsets. — Dmitry Arkhipov

  • Install rule now makes explicit only the immediate targets it creates. — Dmitry Arkhipov

  • Add armasm (32 and 64) support for msvc toolset. — Michał Janiszewski

  • Fix errors with custom un-versioned gcc toolset specifications. — Peter Dimov

  • Allow arflags override in gcc toolset specifications. — hyc

  • Fix found libs not making it to the clang-win link command line. — Peter Dimov

  • Updated intel-win toolset to support Intel C++ 19.1. — Edward Diener

  • Detect difference between MIPS32 and MIPS64 for OS in b2 engine. — YunQiang Su

14.30. Version 4.0.1

This patch release fixes a minor issue when trying to configure toolsets that override the toolset version with a non-version tag. Currently this is only known to be a problem if you (a) configure a toolset version to something like “tot”, and (b) use Boost 1.72.0 when it creates cmake install artifacts. The fix for this was provided by Peter Dimov.
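
A configuration of that form, placed in user-config.jam, looks roughly like this (the “tot” tag and compiler command are placeholders):

using clang : tot : clang++ ;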

14.31. Version 4.0.0

After even more years of development the landscape of build systems has changed considerably, and so has the landscape of compilers. This version marks the start of B2’s transition to a C++ implementation. Initially this means that the engine is compiled as C++ source, but that source is still the base C implementation. Over time it will transform into a C++ code base in both the engine and the build system. Some changes that start this transition:

  • Requires C++11 to build the engine.

  • Simplified build scripts to make it easier to maintain.

  • Building with C++ optimizations gives an immediate performance improvement.

Other changes in this release:

  • Add support for using prebuilt OpenSSL. — Damian Jarek

  • Define the riscv architecture feature. — Andreas Schwab

  • Add ARM64 as a valid architecture for MSVC. — Marc Sweetgall

  • Set coverage flags, from coverage feature, for gcc and clang. — Damian Jarek

  • Add s390x CPU and support in gcc/clang. — Neale Ferguson

  • Support importing pkg-config packages. — Dmitry Arkhipov

  • Support for leak sanitizer. — Damian Jarek

  • Fix missing /manifest option in clang-win to fix admin elevation for exes with "update" in the name. — Peter Dimov

  • Add freertos to os feature. — Thomas Brown

  • Default parallel jobs (-jX) to the available CPU threads. — René Ferdinand Rivera Morell

  • Simpler coverage feature. — Hans Dembinski

  • Better stacks for sanitizers. — James E. King III

The default number of parallel jobs has changed in this release from "1" to the number of cores. There are circumstances when that default can be larger than the allocated CPU resources, for instance in some virtualized container installs.
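
The default can still be overridden with an explicit job count on the command line, for example to cap a container build at two jobs:

b2 -j2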

Appendix A: Licenses

In addition to B2 being licensed under the Boost Software License - Version 1.0, B2 makes use of additional libraries that are differently licensed as follows.

A.1. MIT License

MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

JSON for Modern C++

Copyright (c) 2013-2022 Niels Lohmann


1. See the section called “Feature Attributes”.
2. Many features will be overridden, rather than added to, in sub-projects. See the section called “Feature Attributes” for more information.
3. See the definition of "free" in the section called “Feature Attributes”.
4. This name is historic and will eventually be changed to metatarget.
5. This create-then-register pattern is caused by limitations of the Boost.Jam language. The Python port is likely never to create duplicate targets.