On some systems, tests may take over an hour to run, only to reveal that an
unintended Meson version was used (e.g. a release instead of the development
version). This change prints the Meson version at the start of the
run_*tests*.py scripts.
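For illustration, the scripts can do something along these lines near the top
(a sketch only; the exact import used may differ):

    from mesonbuild.coredata import version as meson_version
    print('Meson version:', meson_version)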
Also, `raise SystemExit(main())` is generally preferred over
`sys.exit(main())`.
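A minimal sketch of the preferred entry-point pattern:

    def main() -> int:
        # ... real work happens here ...
        return 0

    if __name__ == '__main__':
        # Raising SystemExit directly avoids importing sys just for the exit
        # call and makes the intent explicit.
        raise SystemExit(main())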
D lang compilers have an option -release (or similar) which turns off
asserts, contracts, and other runtime type checking. This patch wires
that up to the b_ndebug flag.
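Roughly, the wiring amounts to a per-compiler mapping like the sketch below
(illustrative only; the helper name and exact flag spellings are assumptions,
not Meson's actual internals):

    # Map the boolean b_ndebug option to each D compiler's "release" flag;
    # flag spellings are approximate and differ per compiler.
    D_NDEBUG_ARGS = {
        'dmd': ['-release'],
        'ldc': ['--release'],
        'gdc': ['-frelease'],
    }

    def get_ndebug_args(compiler_id: str, b_ndebug: bool) -> list:
        return D_NDEBUG_ARGS.get(compiler_id, []) if b_ndebug else []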
Fixes #7082
Currently, colourize_console is a constant, set at process
initialization.
To allow the actual stdout to be easily compared with the expected output when
running tests, we want colourization to be on for the test driver, but not for
the in-process configure done by run_configure, whose stdout is redirected
from a tty to a pipe.
v2:
Cache _colorize_console per file object
v3:
Reset cache on setup_console()
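A minimal sketch of the caching scheme, with hypothetical helpers rather than
the actual implementation:

    import os

    _colorize_cache = {}  # keyed by the file object's id

    def colorize_console(f) -> bool:
        key = id(f)
        if key not in _colorize_cache:
            # Decide per file object, so a stdout redirected to a pipe is not
            # colourized even while the test driver's real tty is.
            _colorize_cache[key] = f.isatty() and os.environ.get('TERM', 'dumb') != 'dumb'
        return _colorize_cache[key]

    def setup_console():
        # v3: clear the cache so the decision is re-evaluated after redirection.
        _colorize_cache.clear()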
The current, rather untyped storage of options is one of the things that
contributes to the options code being so complex. This takes a small step
toward tightening that up by storing the compiler options in dicts per
language.
Future work might be replacing the language strings with an enum, and the
defaultdict with a custom struct, just like `PerMachine` and
`MachineChoice`.
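Illustratively, the storage moves in this direction (option names and values
are just examples):

    from collections import defaultdict

    # Before: one flat, untyped mapping with keys like 'c_std', 'cpp_std', ...
    # After: a dict of per-language option dicts.
    compiler_options = defaultdict(dict)
    compiler_options['c']['std'] = 'c99'
    compiler_options['cpp']['std'] = 'c++17'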
This is more correct, and forces the target(s) to be rebuilt if the
PDB files are missing. Increases the minimum required Ninja to 1.7,
which is available in Ubuntu 16.04 under backports.
We can't do the same for import libraries, because it is impossible
for us to know at configure time whether or not an import library will
be generated for a given DLL.
Pull the crossfile specification out of run_test.py so it can be
specified in the CI job configuration.
Also make some fixes to output ordering in run_test.py.
* intel-cl tests: more rigorous detection of intent to use Intel Windows compilers
* fortran coarray test: make skipping more robust by verifying with .run() that the underlying MPI stack works
This is useful for any Fortran coarray work, and especially for intel-cl, where multiple Intel compiler
versions are often installed and the wrong underlying MPI library may be dynamically linked,
so a runtime check is needed to exercise the MPI stack underlying Fortran coarrays.
This is done by:
    fc.run('sync all; end', dependencies: coarray)
* pep8
Meson itself *almost* only cares about the build and host platforms. The
exception is that it takes a `target_machine` in the cross file and exposes
it to the user, but it doesn't do anything else with it. It's therefore
overkill to put target in `PerMachine` and `MachineChoice`. Instead, we
make a `PerThreeMachine` only for the machine infos.
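A simplified illustration of the distinction, not the real class definitions:

    class PerMachine:
        """What almost all of Meson needs: build and host only."""
        def __init__(self, build, host):
            self.build = build
            self.host = host

    class PerThreeMachine(PerMachine):
        """Used only for the machine infos, which also carry target_machine."""
        def __init__(self, build, host, target):
            super().__init__(build, host)
            self.target = target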
Additionally fix a few other things that were bugging me in the process:
- Get rid of the `MachineInfos` class. Since `envconfig.py` was created, it
has no methods that couldn't just go on `PerMachine`.
- Make `default_missing` and `miss_defaulting` work functionally. That
means we can bind the "unfrozen" configuration locally rather than as
class variables. This helps prevent bugs where one forgets to freeze a
configuration.
Instead use coredata.compiler_options.<machine>. This brings the cross
and native code paths closer together, since both now use that.
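Illustratively, both paths now read the same per-machine structure (the
stand-in object and option keys below are just examples):

    from types import SimpleNamespace

    # Stand-in for coredata with the new layout (keys/values are examples):
    coredata = SimpleNamespace(compiler_options=SimpleNamespace(
        build={'c': {'args': ['-O2']}},
        host={'c': {'args': ['-O2', '-g']}},
    ))

    host_c_opts = coredata.compiler_options.host['c']
    build_c_opts = coredata.compiler_options.build['c']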
Command line options are interpreted just as before, for backwards
compatibility. This does introduce some funny conditionals. In the
future, I'd like to change the interpretation of command line options so that:
- The logic is cross-agnostic, i.e. there are no conditions affected by
`is_cross_build()`.
- Compiler args for both the build and host machines can always be
controlled by the command line.
- Compiler args for both machines can always be controlled separately.
macOS provides the tool `lipo` to check the architectures supported by an
object (executable, static library, dylib, etc.). This is especially
useful for fat archives, but it also helps with thin archives.
Without this, the linker will fail to link to the library we mistakenly
'found' like so:
ld: warning: ignoring file /path/to/libfoo.a, missing required architecture armv7 in file /path/to/libfoo.a
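A rough sketch of the kind of check this enables (hypothetical helper; the
real detection lives in Meson's dependency code):

    import subprocess

    def archs_of(path: str) -> set:
        out = subprocess.check_output(['lipo', '-info', path],
                                      universal_newlines=True).strip()
        # Fat files:  "Architectures in the fat file: <path> are: x86_64 arm64"
        # Thin files: "Non-fat file: <path> is architecture: x86_64"
        if 'are:' in out:
            return set(out.split('are:')[-1].split())
        return {out.split('architecture:')[-1].strip()}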
Instead of only doing a naive filesystem search, also run the linker
so that it can tell us whether the -F path specified actually contains
the framework we're looking for.
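Conceptually, the extra step is a small link test per candidate path, sketched
here with a direct clang invocation (the real code goes through Meson's
compiler objects):

    import os
    import subprocess
    import tempfile

    def framework_links(name: str, fpath: str) -> bool:
        """True if linking a trivial program with -F<fpath> -framework <name> succeeds."""
        with tempfile.TemporaryDirectory() as d:
            src = os.path.join(d, 't.c')
            with open(src, 'w') as f:
                f.write('int main(void) { return 0; }\n')
            cmd = ['clang', src, '-o', os.path.join(d, 't'),
                   '-F' + fpath, '-framework', name]
            return subprocess.call(cmd, stdout=subprocess.DEVNULL,
                                   stderr=subprocess.DEVNULL) == 0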
Unfortunately, `extraframework` searching is still not 100% correct, because
we want to search in either /Library/Frameworks or /System/Library/Frameworks
but not in both. The -Z flag disables searching in those prefixes and would in
theory allow this, but then you cannot force the linker to look in them by
manually adding -F args, so that doesn't work.
Also ensure that the test's no-pkg-config codepath will always be run,
even on the CI where we always have pkg-config available.
This counts as a test case for #4728
This has the advantage that "meson --help" shows a list of all commands,
making them discoverable. It also reduces the manual parsing of
arguments to the strict minimum needed for backward compatibility.
We used to immediately try to use whatever exe_wrapper was defined in
the cross file, but some people generate the cross file once and use
it for several projects, most of which do not even need an exe wrapper
to build.
Now we're a bit more resilient. We quietly fall back to using
non-exe-wrapper paths for compiler checks and skip the sanity check.
However, if some code needs the exe wrapper, e.g. if you run a built
executable using custom_target() or run_target(), we will error out
during setup.
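A rough sketch of the resulting decision logic (hypothetical helper names, not
Meson's actual code):

    def command_for_host_binary(cmd, exe_wrapper, needs_wrapper, purpose):
        """Return the command used to run a host binary, or None to quietly skip."""
        if not needs_wrapper:
            return cmd
        if exe_wrapper is not None:
            return [exe_wrapper] + cmd
        if purpose in ('compiler check', 'sanity check'):
            return None  # fall back / skip instead of failing setup
        # e.g. custom_target() or run_target() really has to execute the binary
        raise RuntimeError('exe_wrapper from the cross file was not found, '
                           'but {} needs to run a host binary'.format(purpose))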
Tests will, of course, continue to error out when you run them if the
exe wrapper was not found. We don't want people's tests to silently
"pass" (aka skip) because of a bad CI setup.
Closes https://github.com/mesonbuild/meson/issues/3562
This commit also adds a test for the behaviour of exe_wrapper in these
cases, and refactors the unit tests a bit for it.
There is a lot of overhead for each Travis job, because docker pull
takes 3 minutes. Each cross test takes 3-4 minutes.
To make things worse, Docker Hub is sometimes slow and docker pull takes
longer than 3 minutes.
This simplifies a lot of code and centralizes "key=value" parsing in a
single place.
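A minimal sketch of the centralized parsing (illustrative only):

    def parse_key_values(args):
        """Parse a list of 'key=value' strings into a dict."""
        options = {}
        for arg in args:
            if '=' not in arg:
                raise ValueError('Option {!r} must have the form key=value'.format(arg))
            key, value = arg.split('=', 1)
            options[key] = value
        return options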
Unknown command line options now become a hard error instead of merely
printing a warning message; it has been warned for a while that this would
become a hard error. There are exceptions, though: any unknown option
starting with "<lang>_" or "b_" is ignored, because those depend on which
languages get added and which compilers get selected. Options for unknown
subprojects are also ignored, because they depend on which subprojects
actually get built.
Also write more command line parsing tests. The "19 bad command line
options" test is removed because bad command line options are now a hard
error, and this is covered by the new tests in "30 command line".
Instead of using fragile guessing to figure out how to invoke meson,
set the value when meson is run. Also rework how we pass
meson_script_launcher to regenchecker.py; it wasn't even being used.
With this change, we only need to guess the meson path when running
the tests, and in that case:
1. If MESON_EXE is set in the env, we know how to run meson
for project tests.
2. MESON_EXE is not set, which means we run the configure in-process
for project tests and need to guess what meson to run, so either
- meson.py is found next to run_tests.py, or
- meson, meson.py, or meson.exe is in PATH
Otherwise, you can invoke meson in the following ways:
1. meson is installed, and mesonbuild is available in PYTHONPATH:
- meson, meson.py, meson.exe from PATH
- python3 -m mesonbuild.mesonmain
- python3 /path/to/meson.py
- meson is a shell wrapper to meson.real
2. meson is not installed, and is run from git:
- Absolute path to meson.py
- Relative path to meson.py
- Symlink to meson.py
All these are tested in test_meson_commands.py, except meson.exe since
that involves building the meson msi and installing it.
There are cases when it is useful to wrap the main meson executable with
a script that sets up environment variables, passes --cross-file, etc.
For example, in a Yocto SDK, we need to point to the right meson.cross
so that everything "just works", and we need to alter CC, CXX, etc. In
such cases, it can happen that the "meson" found in the path is actually
a wrapper script that invokes the real meson, which may be in another
location (e.g. "meson.real" or similar).
Currently, in such a situation, meson gets confused because it tries to
invoke itself using the "meson" executable (which points to the wrapper
script) instead of the actual meson (which may be called "meson.real" or
similar). In fact, the wrapper script is not necessarily even Python, so
the whole thing fails.
Fix this by using Python imports to directly find mesonmain.py instead
of trying to detect it heuristically. In addition to fixing the wrapper
issue, this should make the detection logic much more robust.
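Conceptually, the detection becomes something like this (simplified; assumes
mesonbuild is importable):

    import sys
    import mesonbuild.mesonmain

    # The module's location identifies the real Meson, regardless of which
    # "meson" wrapper script happens to be first in PATH.
    meson_command = [sys.executable, mesonbuild.mesonmain.__file__]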