# Cross compilation
Meson has full support for cross compilation. Since cross compiling is more complicated than native building, let's first go over some nomenclature. The three most important definitions are traditionally called *build*, *host* and *target*. This is confusing because those terms are used for many different things. To simplify the issue, we are going to call these the *build machine*, *host machine* and *target machine*. Their definitions are the following:
* *build machine* is the computer that is doing the actual compiling
* *host machine* is the machine on which the compiled binary will run
* *target machine* is the machine on which the compiled binary's output will run (this is only meaningful for programs such as compilers that, when run, produce object code for a different CPU than what the program is being run on)
The `tl;dr` summary is the following: if you are doing regular cross compilation, you only care about *build_machine* and *host_machine*. Just ignore *target_machine* altogether and you will be correct 99% of the time. If your needs are more complex or you are interested in the actual details, do read on.
This might be easier to understand through examples. Let's start with the regular, not cross-compiling case. In these cases all of these three machines are the same. Simple so far.
Let's next look at the most common cross-compilation setup. Let's suppose you are on a 64 bit OSX machine and you are cross compiling a binary that will run on a 32 bit ARM Linux board. In this case your *build machine* is 64 bit OSX and both your *host* and *target machines* are 32 bit ARM Linux. This should be quite understandable as well.
It gets a bit trickier when we think about how the cross compiler was generated. It was built and it runs on a specific platform but the output it generates is for a different platform. In this case *build* and *host machines* are the same, but *target machine* is different.
The most complicated case is when you cross-compile a cross compiler. As an example you can, on a Linux machine, generate a cross compiler that runs on Windows but produces binaries for MIPS Linux. In this case the *build machine* is x86 Linux, the *host machine* is x86 Windows and the *target machine* is MIPS Linux. This setup is known as the [Canadian Cross](https://en.wikipedia.org/wiki/Cross_compiler#Canadian_Cross). As a side note, be careful when reading cross compilation articles on Wikipedia or the net in general. It is very common for them to get build, host and target mixed up, even in consecutive sentences, which can leave you puzzled until you figure it out.
If you did not understand all of the details, don't worry. For most people it takes a while to wrap their head around these concepts. Don't panic, it might take a while to click, but you will get the hang of it eventually.
## Defining the environment
Meson requires you to write a cross build definition file. It defines various properties of the cross build environment. The cross file consists of different sections. The first one is the list of executables that we are going to use. A sample snippet might look like this:
```ini
[binaries]
c = '/usr/bin/i586-mingw32msvc-gcc'
cpp = '/usr/bin/i586-mingw32msvc-g++'
ar = '/usr/bin/i586-mingw32msvc-ar'
strip = '/usr/bin/i586-mingw32msvc-strip'
exe_wrapper = 'wine' # A command used to run generated executables.
```
The entries are pretty self explanatory but the last line is special. It defines a *wrapper command* that can be used to run executables for this host. In this case we can use Wine, which runs Windows applications on Linux. Other choices include running the application with qemu or a hardware simulator. If you have this kind of a wrapper, these lines are all you need to write. Meson will automatically use the given wrapper when it needs to run host binaries. This happens e.g. when running the project's test suite.
The next section lists properties of the cross compiler and thus of the target system. It looks like this:
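The exact contents depend on your toolchain and target. A minimal sketch of such a section (the size, alignment and argument values below are purely illustrative):

```ini
[properties]
sizeof_int = 4
sizeof_wchar_t = 4
alignment_char = 1
alignment_double = 4

c_args = ['-DCROSS=1', '-DSOMETHING=3']  # extra arguments passed to the C compiler
```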
In most cases you don't need the size and alignment settings; Meson will detect all these by compiling and running some sample programs. If your build requires some piece of data that is not listed here, Meson will stop and write an error message describing how to fix the issue. If you need extra compiler arguments to be used during cross compilation you can set them with `[langname]_args = [args]`. Just remember to specify the args as an array and not as a single string (i.e. not as `'-DCROSS=1 -DSOMETHING=3'`).
One important thing to note, if you did not define an `exe_wrapper` in the previous section, is that Meson will make a best-effort guess at whether it can run the generated binaries on the build machine. It determines whether this is possible by looking at the `system` and `cpu_family` of build vs host. There will however be cases where they do match up, but the build machine is actually not compatible with the host machine. Typically this will happen if the libcs used by the build and host machines are incompatible, or the code relies on kernel features not available on the build machine. One concrete example is a macOS build machine producing binaries for an iOS Simulator x86-64 host. They're both `darwin` and the same architecture, but their binaries are not actually compatible. In such cases you may use the `needs_exe_wrapper` property to override the auto-detection:
```ini
[properties]
needs_exe_wrapper = true
```
The last bit is the definition of host and target machines. Every cross build definition must have one or both of them. If it had neither, the build would not be a cross build but a native build. You do not need to define the build machine, as all necessary information about it is extracted automatically. The definitions for host and target machines look the same. Here is a sample for a host machine.
```ini
[host_machine]
system = 'windows'
cpu_family = 'x86'
cpu = 'i686'
endian = 'little'
```
These values define the machines sufficiently for cross compilation purposes. The corresponding target definition would look the same but have `target_machine` in the header. These values are available in your Meson scripts. There are three predefined variables called, surprisingly, `build_machine`, `host_machine` and `target_machine`. Determining the operating system of your host machine is simply a matter of calling `host_machine.system()`.
There are two different values for the CPU. The first one is `cpu_family`. It is a general type of the CPU. Common values might include `x86`, `arm` or `x86_64`. The second value is `cpu`, which is a more specific subtype for the CPU. Typical values for the `x86` CPU family might include `i386` or `i586`, and for the `arm` family `armv5` or `armv7hl`. Note that CPU type strings are very system dependent. You might get a different value for the same hardware under different operating systems.
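These values can then be used to make decisions in your build files; a minimal sketch (the compiler define and the message are illustrative, not from the original document):

```meson
if host_machine.system() == 'windows'
  add_project_arguments('-DWIN32_LEAN_AND_MEAN', language : 'c')
endif

if host_machine.cpu_family() == 'arm'
  message('Building for an ARM host')
endif
```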
If you do not define your host machine, it is assumed to be the build machine. Similarly if you do not specify target machine, it is assumed to be the host machine.
## Starting a cross build
Once you have the cross file, starting a build is simple.
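With a reasonably recent Meson this is typically just `meson setup builddir --cross-file cross_file.txt` run from the source directory (the cross file name here is illustrative; use whatever you named your definition file).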
Sometimes you need to build a tool which is used to generate source files. These are then compiled for the actual target. For this you would want to build some targets with the system's native compiler. This requires only one extra keyword argument.
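A sketch of what this looks like, assuming the tool is a single-file C generator (the names are illustrative):

```meson
native_exe = executable('mygen', 'mygen.c', native : true)
```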
You can then take `native_exe` and use it as part of a generator rule or anything else you might want.
## Using a custom standard library
Sometimes in cross compilation you need to build your own standard library instead of using the one provided by the compiler. Meson has built-in support for switching standard libraries transparently. The invocation to use in your cross file is the following:
```ini
[properties]
c_stdlib = ['mylibc', 'mylibc_dep'] # Subproject name, dependency name
```
This specifies that the C standard library is provided by the Meson subproject `mylibc`, in the internal dependency variable `mylibc_dep`. It is used for every cross-built C target in the entire source tree (including subprojects), and the compiler's own standard library is disabled. The build definitions of these targets do not need any modification.
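Inside the `mylibc` subproject, the named variable is an ordinary internal dependency. A minimal sketch of what that subproject might declare (the source and directory names are illustrative):

```meson
# subprojects/mylibc/meson.build
project('mylibc', 'c')

libc = static_library('mylibc', 'stdio.c', 'malloc.c')
mylibc_dep = declare_dependency(
  link_with : libc,
  include_directories : include_directories('include'))
```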
## Changing cross file settings
Cross file settings are only read when the build directory is set up the first time. Any changes to them after the fact will be ignored. This is the same as regular compiles where you can't change the compiler once a build tree has been set up. If you need to edit your cross file, then you need to wipe your build tree and recreate it from scratch.
## Custom data
You can store arbitrary data in `properties` and access them from your Meson files. As an example, if your cross file has this:
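```ini
[properties]
mykey = 'myvalue'   # illustrative key and value
```

then you can fetch the value in your build files with something like `meson.get_cross_property('mykey')`.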
This is the original design rationale for Meson. The syntax it describes does not match the released version.
A software developer's most important tool is the editor. If you talk to coders about the editors they use, you are usually met with massive enthusiasm and praise. You will hear how Emacs is the greatest thing ever or how vi is so elegant or how Eclipse's integration features make you so much more productive. You can sense the enthusiasm and affection that the people feel towards these programs.
The second most important tool, even more important than the compiler, is the build system.
Those are pretty much universally despised.
The most positive statement on build systems you can usually get (and it might require some coaxing) is something along the lines of *well, it's a terrible system, but all other options are even worse*. It is easy to see why this is the case. For starters, commonly used free build systems have obtuse syntaxes. For the most part they use global variables that are set in random locations, so you can never really be sure what a given line of code does. They do strange and unpredictable things at every turn.
Let's illustrate this with a simple example. Suppose we want to run a program built with GNU Autotools under GDB. The instinctive thing to do is to just run `gdb programname`. The problem is that this may or may not work. In some cases the executable file is a binary whereas at other times it is a wrapper shell script that invokes the real binary which resides in a hidden subdirectory. GDB invocation fails if the binary is a script but succeeds if it is not. The user has to remember the type of each one of his executables (which is an implementation detail of the build system) just to be able to debug them. Several other such pain points can be found in [this blog post](http://voices.canonical.com/jussi.pakkanen/2011/09/13/autotools/).
Given these idiosyncrasies it is no wonder that most people don't want to have anything to do with build systems. They'll just copy-paste code that works (somewhat) in one place to another and hope for the best. They actively go out of their way not to understand the system because the mere thought of it is repulsive. Doing this also provides a kind of inverse job security. If you don't know tool X, there's less chance of finding yourself responsible for its use in your organisation. Instead you get to work on more enjoyable things.
This leads to a vicious circle. Since people avoid the tools and don't want to deal with them, very few work on improving them. The result is apathy and stagnation.
Can we do better?
--
At its core, building C and C++ code is not a terribly difficult task. In fact, writing a text editor is a lot more complicated and takes more effort. Yet we have lots of very high quality editors but only a few build systems, and those are of questionable quality and usability.
So, in the grand tradition of own-itch-scratching, I decided to run a scientific experiment. The purpose of this experiment was to explore what it would take to build a "good" build system. What kind of syntax would suit this problem? What sort of problems would this application need to solve? What sort of solutions would be the most appropriate?
To get things started, here is a list of requirements any modern cross-platform build system needs to meet.
### 1. Must be simple to use
One of the great virtues of Python is the fact that it is very readable. It is easy to see what a given block of code does. It is concise, clear and easy to understand. The proposed build system must be syntactically and semantically clean. Side effects, global state and interrelations must be kept at a minimum or, if possible, eliminated entirely.
### 2. Must do the right thing by default
Most builds are done by developers working on the code. Therefore the defaults must be tailored towards that use case. As an example the system shall build objects without optimization and with debug information. It shall make binaries that can be run directly from the build directory without linker tricks, shell scripts or magic environment variables.
### 3. Must enforce established best practices
There really is no reason to compile source code without the equivalent of `-Wall`. So enable it by default. A different kind of best practice is the total separation of source and build directories. All build artifacts must be stored in the build directory. Writing stray files in the source directory is not permitted under any circumstances.
### 4. Must have native support for platforms that are in common use
A lot of free software projects can be used on non-free platforms such as Windows or OSX. The system must provide native support for the tools of choice on those platforms. In practice this means native support for Visual Studio and XCode. Having those IDEs invoke external builder binaries does not count as native support.
### 5. Must not add complexity due to obsolete platforms
Work on this build system started during the Christmas holidays of 2012. This provides a natural hard cutoff line of 2012/12/24. Any platform, tool or library that was not in active use at that time is explicitly not supported. These include Unixes such as IRIX, SunOS, OSF-1, Ubuntu versions older than 12/10, GCC versions older than 4.7 and so on. If these old versions happen to work, great. If they don't, not a single line of code will be added to the system to work around their bugs.
### 6. Must be fast
Running the configuration step on a moderate sized project must not take more than five seconds. Running the compile command on a fully up to date tree of 1000 source files must not take more than 0.1 seconds.
### 7. Must provide easy to use support for modern sw development features
An example is precompiled headers. Currently no free software build system provides native support for them. Other examples could include easy integration of Valgrind and unit tests, test coverage reporting and so on.
### 8. Must allow override of default values
Sometimes you just have to compile files with only given compiler flags and no others, or install files in weird places. The system must allow the user to do this if he really wants to.
Overview of the solution
--
Going over these requirements it becomes quite apparent that the only viable approach is roughly the same as taken by CMake: having a domain specific language to declare the build system. Out of this declaration a configuration is generated for the backend build system. This can be a Makefile, Visual Studio or XCode project or anything else.
The difference between the proposed DSL and existing ones is that the new one is declarative. It also tries to work on a higher level of abstraction than existing systems. As an example, using external libraries in current build systems means manually extracting and passing around compiler flags and linker flags. In the proposed system the user just declares that a given build target uses a given external dependency. The build system then takes care of passing all flags and settings to their proper locations. This means that the user can focus on his own code rather than marshalling command line arguments from one place to another.
A DSL is more work than the approach taken by SCons, which is to provide the system as a Python library. However it allows us to make the syntax more expressive and prevent certain types of bugs by e.g. making certain objects truly immutable. The end result is again the same: less work for the user.
The backend for Unix requires a bit more thought. The default choice would be Make. However it is extremely slow. It is not uncommon on large code bases for Make to take several minutes just to determine that nothing needs to be done. Instead of Make we use [Ninja](https://ninja-build.org/), which is extremely fast. The backend code is abstracted away from the core, so other backends can be added with relatively little effort.
Sample code
--
Enough design talk, let's get to the code. Before looking at the examples we would like to emphasize that this is not in any way the final code. It is proof of concept code that works in the system as it currently exists (February 2013), but may change at any time.
Let's start simple. Here is the code to compile a single executable binary.
```meson
project('compile one', 'c')
executable('program', 'prog.c')
```
This is about as simple as one can get. First you declare the project name and the languages it uses. Then you specify the binary to build and its sources. The build system will do all the rest. It will add proper suffixes (e.g. '.exe' on Windows), set the default compiler flags and so on.
Usually programs have more than one source file. Listing them all in the function call can become unwieldy. That is why the system supports keyword arguments. They look like this.
```meson
sourcelist = ['main.c', 'file1.c', 'file2.c']   # the list of source files
libdep = dependency('extlibrary', required : true)
executable('program', sources : sourcelist, dep : libdep)
```
In other build systems you have to manually add the compile and link flags from external dependencies to targets. In this system you just declare that extlibrary is mandatory and that the generated program uses that. The build system does all the plumbing for you.
Here's a slightly more complicated definition. It should still be understandable.
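Roughly, in present-day Meson syntax, such a definition might look like this (target and file names are illustrative):

```meson
lib = shared_library('foobar', 'foobar.c', install : true)
exe = executable('foobar_test', 'foobar_test.c', link_with : lib)
test('foobar', exe)
```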
First we build a shared library named foobar. It is marked installable, so running `ninja install` installs it to the library directory (the system knows which one so the user does not have to care). Then we build a test executable which is linked against the library. It will not be installed, but instead it is added to the list of unit tests, which can be run with the command `ninja test`.
Above we mentioned precompiled headers as a feature not supported by other build systems. Here's how you would use them.
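In present-day Meson this is a single keyword argument on the target; a minimal sketch (the header path is illustrative):

```meson
executable('program', 'prog.c', c_pch : 'pch/program_pch.h')
```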
The main reason other build systems can not provide pch support this easily is because they don't enforce certain best practices. Due to the way include paths work, it is impossible to provide pch support that always works with both in-source and out-of-source builds. Mandating separate build and source directories makes this and many other problems a lot easier.
Get the code
--
The code for this experiment can be found at [the Meson repository](https://sourceforge.net/p/meson/code/). It should be noted that it is not a build system. It is only a proposal for one. It does not work reliably yet. You probably should not use it as the build system of your project.
All that said I hope that this experiment will eventually turn into a full blown build system. For that I need your help. Comments and especially patches are more than welcome.
# Manual
This is the user manual for Meson. It currently tracks the state of Git head. If you are using an older version, some of the information here might not work for you.
> Reproducible builds are a set of software development practices that create a verifiable path from human readable source code to the binary code used by computers.
Roughly what this means is that if two different people compile the project from source, their outputs are bitwise identical to each other. This allows people to verify that binaries downloadable from the net actually come from the corresponding sources and have not, for example, had malware added to them.
Meson aims to support reproducible builds out of the box with zero additional work (assuming the rest of the build environment is set up for reproducibility). If you ever find a case where this is not happening, it is a bug. Please file an issue with as much information as possible and we'll get it fixed.
There are several things you need to take into consideration when writing a Meson build definition for a project. This is especially true when the project will be used as a subproject. This page lists a few things to consider when writing your definitions.
## Do not put config.h in external search path
Many projects use a `config.h` header file for configuring the project internally. These files are never installed alongside the system headers, so there are no inclusion collisions. This is not the case with subprojects: your project tree may have an arbitrary number of configuration files, so we need to ensure they don't clash.
The basic problem is that the users of the subproject must be able to include subproject headers without seeing its `config.h` file. The most correct solution is to rename the `config.h` file into something unique, such as `foobar-config.h`. This is usually not feasible unless you are the maintainer of the subproject in question.
The pragmatic solution is to put the config header in a directory that has no other header files and then hide that from everyone else. One way is to create a top level subdirectory called `internal` and use that to build your own sources, like this:
```meson
subdir('internal') # create config.h in this subdir
```
Many projects keep their `config.h` in the top level directory that has no other source files in it. In that case you don't need to move it but can just do this instead:
```meson
internal_inc = include_directories('.') # At top level meson.build
```
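Your own targets can then pick up the header explicitly while other consumers never see it; a minimal sketch (the target name is illustrative):

```meson
executable('myprog', 'myprog.c', include_directories : internal_inc)
```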
## Make libraries buildable both as static and shared
Some platforms (e.g. iOS) require linking everything in your main app statically. In other cases you might want shared libraries. They are also faster during development due to Meson's relinking optimization. However, building both library types on all builds is slow and wasteful.
Your project should provide a toggle specifying which type of library it should build. As an example if you have a Meson option called `shared_lib` then you could do this:
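A minimal sketch of such a toggle, assuming a boolean option named `shared_lib` declared in `meson_options.txt` (the library name and source file are illustrative):

```meson
# meson_options.txt:
#   option('shared_lib', type : 'boolean', value : false)

if get_option('shared_lib')
  mylib = shared_library('mylib', 'mylib.c')
else
  mylib = static_library('mylib', 'mylib.c')
endif
```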
Meson's Ninja backend works differently from Make and other systems. Rather than processing things directory by directory, it looks at the entire build definition at once and runs the individual compile jobs in what might look from the outside like a random order.
The reason for this is that it is much more efficient, so your builds finish faster. The downside is that you have to be careful with your dependencies. The most common problem here is headers that are generated at compile time, e.g. by code generators. If these headers are needed when building code that uses them, the compile job might be run before the code generation step. The fix is to make the dependency explicit, like this:
```meson
myheader = custom_target(...)
executable('mytarget', 'main.c', sources : [myheader]) # listing the generated header as a source makes the dependency explicit
```

Meson will ensure that the header file has been built before compiling `main.c`.
## Avoid exposing compilable source files in declare_dependency
The main use for the `sources` argument in `declare_dependency` is to construct the correct dependency graph for the backends, as demonstrated in the previous section. It is extremely important to note that it should *not* be used to directly expose compilable sources (`.c`, `.cpp`, etc.) of dependencies, and should rather only be used for header/config files. The following example will illustrate what can go wrong if you accidentally expose compilable source files.
So you've read about unity builds and how Meson natively supports them. You decide to expose the sources of dependencies in order to have unity builds that include their dependencies. For your support library you do
```meson
my_support_sources = files(...)

mysupportlib = shared_library(
  'mysupportlib',
  sources : my_support_sources,
  ...)
mysupportlib_dep = declare_dependency(
  link_with : mysupportlib,
  sources : my_support_sources,
  ...)
```

and then, in the targets that use it, something like

```meson
mylibrary = shared_library(
  'mylibrary',
  dependencies : mysupportlib_dep,
  ...)
myexe = executable(
  'myexe',
  link_with : mylibrary,
  dependencies : mysupportlib_dep,
  ...)
```
This is extremely dangerous. When building, `mylibrary` will build and link the support sources `my_support_sources` into the resulting shared library. Then, for `myexe`, these same support sources will be compiled again and linked into the resulting executable, in addition to already being present in `mylibrary`. This can quickly run afoul of the [One Definition Rule (ODR)](https://en.wikipedia.org/wiki/One_Definition_Rule) in C++, as you have more than one definition of a symbol, yielding undefined behavior. While C does not have a strict ODR rule, there is no language in the standard which guarantees such behavior to work. Violations of the ODR can lead to weird idiosyncratic failures such as segfaults. In the overwhelming majority of cases, exposing library sources via the `sources` argument in `declare_dependency` is thus incorrect. If you wish to get full cross-library performance, consider building `mysupportlib` as a static library instead and employing LTO.
There are exceptions to this rule. If there are some natural constraints on how your library is to be used, you can expose sources. For instance, the WrapDB module for GoogleTest directly exposes the sources of GTest and GMock. This is valid, as GTest and GMock will only ever be used in *terminal* link targets. A terminal target is the final target in a dependency link chain, for instance `myexe` in the last example, whereas `mylibrary` is an intermediate link target. For most libraries this rule is not applicable though, as you cannot in general control how others consume your library, and as such should not expose sources.
One of the major problems of multiplatform development is wrangling all your dependencies. This is easy on Linux where you can use system packages but awkward on other platforms. Most of those do not have a package manager at all. This has been worked around by having third party package managers. They are not really a solution for end user deployment, because you can't tell them to install a package manager just to use your app. On these platforms you must produce self-contained applications.
The traditional approach to this has been to bundle dependencies inside your own project. Either as prebuilt libraries and headers or by embedding the source code inside your source tree and rewriting your build system to build them as part of your project.
This is both tedious and error prone because it is always done by hand. The Wrap dependency system of Meson aims to provide an automated way to do this.
## How it works
Meson has a concept of [subprojects](Subprojects.md). They are a way of nesting one Meson project inside another. Any project that builds with Meson can detect that it is built as a subproject and build itself in a way that makes it easy to use (usually this means as a static library).
To use this kind of a project as a dependency you could just copy and extract it inside your project's `subprojects` directory. However there is a simpler way. You can specify a Wrap file that tells Meson how to download it for you. An example wrap file would look like this and should be put in `subprojects/foobar.wrap`:
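A sketch of such a wrap file (the directory name, URL and hash are placeholders):

```ini
[wrap-file]
directory = libfoobar-1.0

source_url = https://example.com/libfoobar-1.0.tar.gz
source_filename = libfoobar-1.0.tar.gz
source_hash = <sha256 checksum of the source archive>
```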
If you then use this subproject in your build, Meson will automatically download and extract it during build. This makes subproject embedding extremely easy.
Unfortunately most software projects in the world do not build with Meson. Because of this Meson allows you to specify a patch URL. This works in much the same way as Debian's distro patches. That is, they are downloaded and automatically applied to the subproject. These files contain a Meson build definition for the given subproject. A wrap file with an additional patch URL would look like this.
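The extra entries go in the same `[wrap-file]` section shown above (again, the URL and hash are placeholders):

```ini
patch_url = https://example.com/libfoobar-1.0-meson.zip
patch_filename = libfoobar-1.0-meson.zip
patch_hash = <sha256 checksum of the patch archive>
```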
In this example the Wrap manager would download the patch and unzip it in libfoobar's directory.
This approach makes it extremely simple to embed dependencies that require build system changes. You can write the Meson build definition for the dependency in total isolation. This is a lot better than doing it inside your own source tree, especially if it contains hundreds of thousands of lines of code. Once you have a working build definition, just zip up the Meson build files (and others you have changed) and put them somewhere where you can download them.
## Branching subprojects directly from git
The above mentioned scheme assumes that your subproject is working off packaged files. Sometimes you want to check code out directly from Git. Meson supports this natively. All you need to do is to write a slightly different wrap file.
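A sketch of a git-based wrap file (the directory and URL are placeholders):

```ini
[wrap-git]
directory = libfoobar
url = https://git.example.com/libfoobar.git
revision = head
```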
The format is straightforward. The only thing to note is the revision element that can have one of two values. The first is `head` which will cause Meson to track the master head (doing a repull whenever the build definition is altered). The second type is a commit hash. In this case Meson will use the commit specified (with `git checkout [hash id]`).
Note that in this case you cannot specify an extra patch file to use. The git repo must contain all necessary Meson build definitions.
Usually you would use subprojects as read only. However in some cases you want to do commits to subprojects and push them upstream. For these cases you can specify the upload URL by adding the following at the end of your wrap file:
```ini
push-url=git@git.example.com:projects/someproject.git # Supported since version 0.37.0
```

To use a subproject simply do this in your top level `meson.build`.

```meson
foobar_sp = subproject('foobar')
```
Usually dependencies consist of some header files plus a library to link against. To do this you would declare this internal dependency like this:
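A sketch of that declaration, assuming the subproject exposes a dependency variable named `foobar_dep` (the executable name is illustrative):

```meson
foobar_dep = foobar_sp.get_variable('foobar_dep')

executable('toplevel_exe', 'prog.c',
  dependencies : foobar_dep)
```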
Note that the subproject object is *not* used as the dependency, but rather you need to get the declared dependency from it with `get_variable` because a subproject may have multiple declared dependencies.
## Toggling between distro packages and embedded source
When building distro packages it is very important that you do not embed any sources. Some distros have a rule forbidding embedded dependencies so your project must be buildable without them or otherwise the packager will hate you.
Doing this with Meson and Wrap is simple. Here's how you would use distro packages and fall back to embedding if the dependency is not available.
The `fallback` keyword argument takes two items, the name of the
subproject and the name of the variable that holds the dependency. If
you need to do something more complicated, such as extract several
different variables, then you need to do it yourself with the manual
method described above.
With this setup when foobar is provided by the system, we use it. When that is not the case we use the embedded version. Note that `foobar_dep` can point to an external or an internal dependency but you don't have to worry about their differences. Meson will take care of the details for you.
## Getting wraps
Usually you don't want to write your wraps by hand. There is an online repository called [WrapDB](Using-the-WrapDB.md) that provides many dependencies ready to use.
In order to get a package in the Wrap database it must be reviewed and accepted by someone with admin rights. Here is a list of items to check in the review. If some item is not met it does not mean that the package is rejected. What should be done will be determined on a case-by-case basis. Similarly meeting all these requirements does not guarantee that the package will get accepted. Use common sense.