Instead, use coredata.compiler_options.<machine>. This brings the cross
and native code paths closer together, since both now use the same
per-machine storage.
Command line options are interpreted just as before, for backwards
compatibility. This does introduce some funny conditionals. In the
future, I'd like to change the interpretation of command line options so
that:
- The logic is cross-agnostic, i.e. there are no conditions affected by
`is_cross_build()`.
- Compiler args for both the build and host machines can always be
controlled by the command line.
- Compiler args for both machines can always be controlled separately.
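For illustration, a minimal sketch of the kind of per-machine container
coredata.compiler_options.<machine> refers to (the class below is
illustrative, not Meson's exact code):

    # Illustrative only: one slot per machine, so the same lookup works
    # for both native and cross builds.
    class PerMachine:
        def __init__(self, build, host):
            self.build = build
            self.host = host

    compiler_options = PerMachine(build={}, host={})
    # Cross and native code paths both go through the same structure:
    compiler_options.host['c_args'] = ['-O2']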
macOS provides the tool `lipo` to check the archs supported by an
object (executable, static library, dylib, etc). This is especially
useful for fat archives, but it also helps with thin archives.
Without this, the linker will fail to link to the library we mistakenly
'found' like so:
ld: warning: ignoring file /path/to/libfoo.a, missing required architecture armv7 in file /path/to/libfoo.a
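For illustration, a rough sketch of such a lipo-based filter (the output
parsing here is simplified):

    import subprocess

    def archs_in_object(path):
        # 'lipo -info' prints the architectures contained in the file,
        # for both thin and fat objects.
        out = subprocess.check_output(['lipo', '-info', path],
                                      universal_newlines=True)
        # Output ends in '... are: armv7 arm64' (fat) or
        # '... is architecture: x86_64' (thin).
        return set(out.strip().split(':')[-1].split())

    def supports_arch(path, arch):
        return arch in archs_in_object(path)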
Instead of only doing a naive filesystem search, also run the linker
so that it can tell us whether the -F path specified actually contains
the framework we're looking for.
Unfortunately, `extraframework` searching is still not 100% correct in
the case where we want to search in either /Library/Frameworks or in
/System/Library/Frameworks but not in both. The -Z flag disables
searching in those prefixes and would in theory allow this, but then
you cannot force the linker to look in those by manually adding -F
args, so that doesn't work.
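A sketch of the link-time check, assuming a clang-like driver; the exact
flags and caching Meson uses differ:

    import os
    import subprocess
    import tempfile

    def framework_links(compiler, fw_path, fw_name):
        # Try to link a trivial program against the framework; if the
        # linker cannot find it under fw_path, this returns False.
        with tempfile.TemporaryDirectory() as d:
            src = os.path.join(d, 'check.c')
            with open(src, 'w') as f:
                f.write('int main(void) { return 0; }\n')
            cmd = [compiler, src, '-F', fw_path, '-framework', fw_name,
                   '-o', os.path.join(d, 'check')]
            return subprocess.call(cmd) == 0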
Also ensure that the test's no-pkg-config codepath will always be run,
even in CI, where pkg-config is always available.
This counts as a test case for #4728
This has the advantage that "meson --help" shows a list of all commands,
making them discoverable. It also reduces the manual parsing of
arguments to the strict minimum needed for backwards compatibility.
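For illustration, a minimal argparse-based dispatch of this shape (the
real command set and options are much larger):

    import argparse

    parser = argparse.ArgumentParser(prog='meson')
    subparsers = parser.add_subparsers(dest='command')

    setup = subparsers.add_parser('setup', help='Configure a build directory')
    setup.add_argument('builddir')

    configure = subparsers.add_parser('configure', help='Change project options')
    configure.add_argument('builddir')

    # 'meson --help' now lists every registered command automatically.
    args = parser.parse_args()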
We used to immediately try to use whatever exe_wrapper was defined in
the cross file, but some people generate the cross file once and use
it for several projects, most of which do not even need an exe wrapper
to build.
Now we're a bit more resilient. We quietly fall back to using
non-exe-wrapper paths for compiler checks and skip the sanity check.
However, if some code needs the exe wrapper, e.g. if you run a built
executable using custom_target() or run_target(), we will error out
during setup.
Tests will, of course, continue to error out when you run them if the
exe wrapper was not found. We don't want people's tests to silently
"pass" (aka skip) because of a bad CI setup.
Closes https://github.com/mesonbuild/meson/issues/3562
This commit also adds a test for the behaviour of exe_wrapper in these
cases, and refactors the unit tests a bit for it.
There is a lot of overhead for each Travis job because `docker pull`
alone takes 3 minutes, and each cross test takes 3-4 minutes. To make
things worse, Docker Hub is sometimes slow and `docker pull` takes even
longer than that.
This simplifies a lot of code and centralizes "key=value" parsing in a
single place.
Unknown command line options now become a hard error instead of merely
printing a warning message; Meson has been warning for a while that this
would become a hard error. There are exceptions, though: any unknown
option starting with "<lang>_" or "b_" is ignored, because those depend
on which languages get added and which compilers get selected. Options
for unknown subprojects are also ignored, because they depend on which
subprojects actually get built.
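A condensed, illustrative sketch of that centralized parsing; the real
implementation handles many more cases:

    KNOWN_LANGS = {'c', 'cpp', 'objc', 'objcpp', 'fortran', 'rust'}

    def is_deferred(optname):
        # '<lang>_' and 'b_' options can only be validated once we know
        # which languages and compilers are in use.
        return (optname.startswith('b_') or
                optname.split('_', 1)[0] in KNOWN_LANGS)

    def parse_cmd_line_options(args, known_options, known_subprojects):
        options = {}
        for arg in args:
            key, sep, value = arg.partition('=')
            if not sep:
                raise SystemExit('Option %r is not in key=value form' % arg)
            subproject, _, optname = key.rpartition(':')
            if subproject and subproject not in known_subprojects:
                continue  # that subproject may never get built
            if optname not in known_options and not is_deferred(optname):
                raise SystemExit('Unknown option %r' % key)
            options[key] = value
        return options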
Also write more command line parsing tests. The "19 bad command line
options" test is removed because bad command line options are now a hard
error, and that behaviour is covered by new tests in "30 command line".
Instead of using fragile guessing to figure out how to invoke meson,
set the value when meson is run. Also rework how we pass
meson_script_launcher to regenchecker.py; it wasn't even being used.
With this change, we only need to guess the meson path when running
the tests, and in that case:
1. If MESON_EXE is set in the env, we know how to run meson
for project tests.
2. MESON_EXE is not set, which means we run the configure in-process
for project tests and need to guess what meson to run, so either
- meson.py is found next to run_tests.py, or
- meson, meson.py, or meson.exe is in PATH
Otherwise, you can invoke meson in the following ways:
1. meson is installed, and mesonbuild is available in PYTHONPATH:
- meson, meson.py, meson.exe from PATH
- python3 -m mesonbuild.mesonmain
- python3 /path/to/meson.py
- meson is a shell wrapper to meson.real
2. meson is not installed, and is run from git:
- Absolute path to meson.py
- Relative path to meson.py
- Symlink to meson.py
All these are tested in test_meson_commands.py, except meson.exe since
that involves building the meson msi and installing it.
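A sketch of that guessing order for the test runner (names are
illustrative):

    import os
    import shutil
    import sys

    def guess_meson_command():
        if 'MESON_EXE' in os.environ:
            return os.environ['MESON_EXE'].split()
        here = os.path.dirname(os.path.abspath(__file__))
        candidate = os.path.join(here, 'meson.py')
        if os.path.exists(candidate):
            return [sys.executable, candidate]
        for name in ('meson', 'meson.py', 'meson.exe'):
            found = shutil.which(name)
            if found:
                return [found]
        raise RuntimeError('Could not find meson to run project tests')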
There are cases when it is useful to wrap the main meson executable with
a script that sets up environment variables, passes --cross-file, etc.
For example, in a Yocto SDK, we need to point to the right meson.cross
so that everything "just works", and we need to alter CC, CXX, etc. In
such cases, it can happen that the "meson" found in the path is actually
a wrapper script that invokes the real meson, which may be in another
location (e.g. "meson.real" or similar).
Currently, in such a situation, meson gets confused because it tries to
invoke itself using the "meson" executable (which points to the wrapper
script) instead of the actual meson (which may be called "meson.real" or
similar). In fact, the wrapper script is not necessarily even Python, so
the whole thing fails.
Fix this by using Python imports to directly find mesonmain.py instead
of trying to detect it heuristically. In addition to fixing the wrapper
issue, this should make the detection logic much more robust.
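The core idea, sketched (the real code keeps more fallbacks):

    import sys
    import mesonbuild.mesonmain

    # Re-invoke meson as 'python3 /path/to/mesonmain.py ...' so a shell
    # wrapper around the 'meson' entry point can never confuse us.
    meson_command = [sys.executable, mesonbuild.mesonmain.__file__]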
QuLogic discovered that HFS+ only stores timestamps as uint32 seconds
since its epoch, so ninja cannot get sub-1s resolution timestamps there.
Sometime in the future APFS will become widely available and we will
have to add a filesystem check at startup.
https://developer.apple.com/legacy/library/technotes/tn/tn1150.html#HFSPlusDates
This way we get some testing for the patches, and speed up our builds.
My server is hosted on a UK Linode, so it should have good uptime.
However, we should likely move this into the Docker image at least
for Linux, and perhaps put it in a CI cache for the rest.
It is not feasible to test all failure modes by creating projects in
`test cases/failing`; that would be an explosion of files, and that
mechanism is too coarse anyway: we have no way to ensure that the
expected error is being raised.
See FailureTests.test_dependency for an example.
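Roughly what such a test looks like; this is a self-contained sketch,
not the actual helper from run_unittests.py:

    import os
    import subprocess
    import tempfile
    import unittest

    class FailureTests(unittest.TestCase):
        def assertMesonRaises(self, contents, match):
            # Configure a one-line project and assert that setup fails
            # with an error matching `match`.
            with tempfile.TemporaryDirectory() as srcdir:
                with open(os.path.join(srcdir, 'meson.build'), 'w') as f:
                    f.write("project('failure test')\n" + contents + '\n')
                proc = subprocess.run(
                    ['meson', 'setup', os.path.join(srcdir, 'build')],
                    cwd=srcdir, stdout=subprocess.PIPE,
                    stderr=subprocess.STDOUT, universal_newlines=True)
                self.assertNotEqual(proc.returncode, 0)
                self.assertRegex(proc.stdout, match)

        def test_dependency(self):
            self.assertMesonRaises("dependency('doesnotexist-fail')",
                                   'not found')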
This class now consolidates a lot of the logic that each external
dependency was duplicating in its class definition.
All external dependencies now set:
* self.version
* self.compile_args and self.link_args
* self.is_found (if found)
* self.sources
* etc
And the abstract ExternalDependency class defines the methods that
will fetch those properties. Some classes still override that for
various reasons, but those should also be migrated to properties as
far as possible.
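An abridged sketch of the consolidated shape (the real class carries
more state and logic):

    class ExternalDependency:
        def __init__(self, name, environment, kwargs):
            self.name = name
            self.is_found = False
            self.version = None
            self.compile_args = []
            self.link_args = []
            self.sources = []

        # Subclasses fill in the fields above; these accessors no longer
        # need to be reimplemented in every dependency class.
        def found(self):
            return self.is_found

        def get_version(self):
            return self.version

        def get_compile_args(self):
            return self.compile_args

        def get_link_args(self):
            return self.link_args

        def get_sources(self):
            return self.sources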
Next step is to consolidate and standardize the way in which we call
'configuration binaries' such as sdl2-config, llvm-config, pkg-config,
etc. Currently each class has to duplicate code involved with that
even though the format is very similar.
Currently only pkg-config supports multiple version requirements, and
some classes don't even properly check the version requirement. That
will also become easier now.
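One possible shape for that shared helper (hypothetical, since the
consolidation has not landed yet):

    import subprocess

    def config_tool_output(tool, args):
        # e.g. config_tool_output('sdl2-config', ['--cflags'])
        out = subprocess.check_output([tool] + args,
                                      universal_newlines=True)
        return out.strip().split()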
This actually caught a cached-dependency related bug for me that the
test-time regen did not. I also increased the ninja wait time to
1 second because that's actually how long you need to sleep to be
guaranteed that a change will be detected.
Must poke upstream about https://github.com/ninja-build/ninja/issues/371
Also sets more groundwork for running unit tests with backends other
than Ninja.
Transferring global state to executors is totally broken in Python 3.4,
so just serialize all the commands.
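A sketch of the workaround: each work item carries its full command, so
workers need nothing from the parent's global state:

    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    def run_one(cmd):
        # Everything the worker needs arrives pickled in 'cmd'.
        return subprocess.call(cmd)

    def run_all(commands):
        with ProcessPoolExecutor() as ex:
            return list(ex.map(run_one, commands))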