In QEMU a single set of source files is built against many different
configurations in order to generate many executables. Each executable
includes a different but overlapping subset of the source files; some
of the files are compiled separately for each output, others are
compiled just once.
Using Makefiles, this is achieved with a complicated mechanism involving
a combination of non-recursive and recursive make; Meson can do better,
but because there are hundreds of such conditional rules, it's important
to keep meson.build files brief and easy to follow. Therefore, this
commit adds a new module to satisfy this use case while preserving
Meson's declarative nature.
Configurations are mapped to a configuration_data object, and a new
"source set" object is used to store all the rules; the desired set of
sources, together with their dependencies, can then be retrieved from it.
The test case shows how extract_objects can be used to satisfy both
cases, i.e. when the object files are shared across targets and when
they have to be separate. In the real-world case, a project would use
two source set objects for the two cases and then do
"executable(..., sources: ... , objects: ...)". The next commit
adds such an example.
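As a rough Python sketch of the idea (the class and method names here are
hypothetical and only illustrate the rule-storage/apply scheme described
above, not the module's real API):

    # Hypothetical sketch: rules are stored declaratively and only
    # resolved once a configuration is applied to the source set.
    import typing as T

    class Rule(T.NamedTuple):
        keys: T.List[str]     # config symbols that must all be enabled
        sources: T.List[str]  # sources added when the rule fires
        deps: T.List[str]     # dependencies added when the rule fires

    class SourceSet:
        def __init__(self) -> None:
            self.rules: T.List[Rule] = []

        def add(self, keys: T.List[str], sources: T.List[str],
                deps: T.Optional[T.List[str]] = None) -> None:
            """Declare sources/deps guarded by configuration keys."""
            self.rules.append(Rule(keys, sources, deps or []))

        def apply(self, config: T.Dict[str, bool]) -> T.Tuple[T.List[str], T.List[str]]:
            """Resolve the rules against one configuration."""
            sources: T.List[str] = []
            deps: T.List[str] = []
            for rule in self.rules:
                if all(config.get(k, False) for k in rule.keys):
                    sources.extend(rule.sources)
                    deps.extend(rule.deps)
            return sources, deps

    ss = SourceSet()
    ss.add([], ['main.c'])                          # built into every target
    ss.add(['CONFIG_FOO'], ['foo.c'], ['foo_dep'])  # only when CONFIG_FOO
    print(ss.apply({'CONFIG_FOO': True}))  # (['main.c', 'foo.c'], ['foo_dep'])
    print(ss.apply({}))                    # (['main.c'], [])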
Switch from build.compiler to environment.coredata.compiler, and likewise
from build.cross_compiler to environment.coredata.cross_compiler, in the
backends. (This only seems to actually be used in the Ninja backend.)
It doesn't make much sense to have this without also having the
cross-compilers (so any use of it is already pretty suspect, and
probably wrong when cross-compiling).
This information is accessible anyhow via environment.coredata.
Handling the PKG_CONFIG_PATH variable in Meson introduces a new problem
for caching dependencies. We want the pkg_config_path (or
cross_pkg_config_path if we're cross compiling) to be part of the cache
key, but we don't want to put it into the key for non-pkg-config
dependencies, to avoid spurious cache misses (since pkg_config_path
isn't relevant to CMake, for example). However, on a cache lookup we can't
know that a dependency is a pkg-config dependency until we've looked in
the cache.
My solution is a two-layer cache: the first layer remains the same as
before, while the second layer is a dict-like object that encapsulates
the dependency type information and uses pkg_config_path and
cross_pkg_config_path as a sub-key (and could easily be extended for
other types). A new object type is introduced to encapsulate this, so
that callers don't need to be aware of the implementation details.
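A rough Python sketch of the two-layer shape (the class and method names
are illustrative, not Meson's actual ones):

    # Hypothetical sketch of the two-layer dependency cache; only the
    # keying scheme is shown, not the real lookup/creation logic.
    import typing as T

    class DepSubCache:
        """Second layer: keyed by type-specific data."""
        def __init__(self, dep_type: str) -> None:
            self.dep_type = dep_type
            self.entries: T.Dict[T.Tuple[str, ...], object] = {}

    class DependencyCache:
        """First layer: keyed by the dependency name, as before."""
        def __init__(self, pkg_config_path: T.Tuple[str, ...]) -> None:
            self.pkg_config_path = pkg_config_path
            self.by_name: T.Dict[str, DepSubCache] = {}

        def _subkey(self, dep_type: str) -> T.Tuple[str, ...]:
            # Only pkg-config entries depend on the path, so changing
            # pkg_config_path never invalidates e.g. cmake entries.
            return self.pkg_config_path if dep_type == 'pkgconfig' else ()

        def get(self, name: str) -> T.Optional[object]:
            sub = self.by_name.get(name)
            if sub is None:
                return None
            # The dependency type is only known once the first layer has
            # been hit, which is why the path lives in the sub-key.
            return sub.entries.get(self._subkey(sub.dep_type))

        def put(self, name: str, dep_type: str, dep: object) -> None:
            sub = self.by_name.setdefault(name, DepSubCache(dep_type))
            sub.entries[self._subkey(dep_type)] = dep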
It was using ':' as the path separator while GCC on Windows uses ';',
resulting in bogus paths being returned. Instead, assume that the
compiler uses the platform's native separator.
The previous splitting code still worked sometimes, because splitting
"C:/foo;C:/bar" resulted in the last part "/bar" being valid if
"<DriveOfCWD>:/bar" existed.
The fix also exposes a clang bug on Windows where it uses the wrong
separator: https://reviews.llvm.org/D61121 . Use a regex to fix those
separators first.
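A minimal sketch of the splitting approach (the helper name and the exact
regex are illustrative, not necessarily the code that landed):

    # Illustrative only: split compiler search-dir output on the native
    # separator, after repairing clang's ':'-separated output on Windows.
    import os
    import re
    import typing as T

    def split_search_dirs(line: str, pathsep: str = os.pathsep) -> T.List[str]:
        if pathsep == ';':
            # A ':' not followed by a slash cannot be a drive separator,
            # so it must be clang's bogus path separator; fix it up.
            line = re.sub(r':([^/\\])', r';\1', line)
        return [p for p in line.split(pathsep) if p]

    # On Windows (pathsep ';'), correct GCC output and buggy clang
    # output now split into the same paths:
    print(split_search_dirs('C:/foo;C:/bar', ';'))  # ['C:/foo', 'C:/bar']
    print(split_search_dirs('C:/foo:C:/bar', ';'))  # ['C:/foo', 'C:/bar']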
This resulted in linker errors when statically linking against a library which
had an external dependency linking against system libs.
Fixes #5386
Currently this test assumes that the user doesn't have XDG_DATA_HOME
set in their environment, but this isn't a good assumption, and it can
result in the test not actually testing what it is meant to.
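One way to make such a test independent of the caller's environment
(illustrative, not necessarily how the test suite does it) is to patch
the variable away for the duration of the test:

    # Run the test body with XDG_DATA_HOME guaranteed to be unset, so
    # the result does not depend on the developer's shell environment.
    import os
    import unittest
    from unittest import mock

    class InstallDataTests(unittest.TestCase):
        def test_default_data_dir(self) -> None:
            with mock.patch.dict(os.environ):
                os.environ.pop('XDG_DATA_HOME', None)
                # ... invoke the code under test here ...
                self.assertNotIn('XDG_DATA_HOME', os.environ)

    if __name__ == '__main__':
        unittest.main()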
Apparently we have no tests for this, because it is broken pretty
badly. This extends the basic test to actually check for the correct
free-form argument, so that the behaviour is exercised.
There are two distinct bugs here:
1) If PKG_CONFIG_PATH is unset then the variable will be set to [''],
   which causes it to behave wrongly in some truthiness tests (see the
   sketch below).
2) We read PKG_CONFIG_PATH on invocations after the first, which
   causes it to overwrite the value saved into the coredata.dat file.
Putting the two bugs together, -Dpkg_config_path doesn't work with
`meson configure`.
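The first bug boils down to how an empty string splits; a minimal
illustration (not Meson's actual code):

    # str.split never returns an empty list, so an unset PKG_CONFIG_PATH
    # naively becomes [''], which is truthy, instead of [].
    import os

    raw = os.environ.get('PKG_CONFIG_PATH', '')
    broken = raw.split(os.pathsep)                    # [''] when unset
    fixed = [p for p in raw.split(os.pathsep) if p]   # []   when unset

    print(bool(broken), bool(fixed))  # True False when the variable is unset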
Meson itself *almost* only cares about the build and host platforms. The
exception is that it takes a `target_machine` in the cross file and
exposes it to the user, but it doesn't do anything else with it. It's
therefore
overkill to put target in `PerMachine` and `MachineChoice`. Instead, we
make a `PerThreeMachine` only for the machine infos.
Additionally fix a few other things that were bugging me in the process:
- Get rid of the `MachineInfos` class. Since `envconfig.py` was
  created, it has no methods that couldn't just go on `PerMachine`.
- Make `default_missing` and `miss_defaulting` work functionally, as
  sketched below. That means we can just locally bind, rather than
  bind as class vars, the "unfrozen" configuration. This helps prevent
  bugs where one forgets to freeze a configuration.
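A rough Python sketch of the intended shape (field and function names
are illustrative, not Meson's exact ones):

    # Target only exists for the machine infos; everything else stays
    # build/host only, and defaulting returns a new frozen value.
    import typing as T
    from dataclasses import dataclass

    _T = T.TypeVar('_T')

    @dataclass(frozen=True)
    class PerMachine(T.Generic[_T]):
        build: _T
        host: _T

    @dataclass(frozen=True)
    class PerThreeMachine(PerMachine[_T]):
        target: _T

    def default_missing(m: PerMachine) -> PerMachine:
        # "Functional" defaulting: build a new frozen value instead of
        # mutating an unfrozen one that somebody might forget to freeze.
        host = m.host if m.host is not None else m.build
        return PerMachine(m.build, host)

    print(default_missing(PerMachine(build='gcc', host=None)))
    # PerMachine(build='gcc', host='gcc')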
This started out with a bug report of mtest trying to add bytes + str,
at which I thought "Oh, mypy can help!", and it turned into an entire
day of awful code traversal and trying to figure out why attributes were
changing type. Hopefully this makes everything cleaner and easier to
follow.
This avoids the duplication where the option is stored in a dict at
its name, yet also contains its own name. In general, the maxim in
programming is that things shouldn't know their own name, so remove
the name field and just leave the option's position in the dictionary
as its name.
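An illustrative before/after of that duplication (the real option
classes carry more fields than shown here):

    import typing as T

    class UserOptionOld:
        def __init__(self, name: str, description: str, value: object) -> None:
            self.name = name        # duplicated: also the dict key below
            self.description = description
            self.value = value

    class UserOption:
        def __init__(self, description: str, value: object) -> None:
            self.description = description
            self.value = value

    # The dict key is now the single source of truth for the name.
    options: T.Dict[str, UserOption] = {
        'buildtype': UserOption('Build type to use', 'debug'),
    }
    for name, opt in options.items():
        print(name, opt.value)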
Also, we should at some point decide whether we're going to use
American spelling (serialize/serialization) or British
(serialise/serialisation), because we currently have both, and having
both is always worse than picking one or the other.
This function is currently set up with keyword arguments defaulting to
None. However, it is never called without passing all of its arguments
explicitly, and only one of its arguments would actually be valid as
None. So just drop the defaults, make them all positional, and
annotate them.
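Schematically (the function itself is internal to Meson; the parameter
names below are made up for illustration):

    import typing as T

    # Before: keyword arguments defaulting to None, even though every
    # caller passes all of them and only one may legitimately be None.
    def load_old(build_dir=None, options=None, cmd_line_file=None):
        ...

    # After: positional and annotated, with None allowed only where it
    # is actually a valid value.
    def load_new(build_dir: str, options: T.Dict[str, str],
                 cmd_line_file: T.Optional[str]) -> None:
        ...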
I wanted to look at the imports for annotations but was having a hard
time reading them because they're just all over the place. This is purely a
human readability issue.
Because the Intel compiler behaves significantly differently on
Windows than it does on Linux and macOS, I've decided it would be
better to follow the clang/clang-cl split and make the ID "intel-cl"
on Windows (leaving "intel" alone on Linux and Mac). Since we've never
supported ICL and it hasn't worked in the past, I think this is an
okay change to make.
Intel helpfully provides a cl.exe that is indistinguishable from
Microsoft's cl.exe in its output, but behaves like icl.exe. Since icl
and ifort will only be present in your path if you've started an Intel
command prompt, search for them first.
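A sketch of why the probing order matters (simplified; real compiler
detection also inspects the tool's output rather than just its presence
on PATH):

    # Illustrative only: in an Intel command prompt both icl.exe and
    # Intel's cl.exe are on PATH, and that cl.exe cannot be told apart
    # from Microsoft's by its output, so probe icl first and fall back
    # to cl only when icl is absent.
    import shutil

    def pick_windows_c_compiler() -> str:
        for candidate in ('icl', 'cl'):
            if shutil.which(candidate):
                return candidate
        raise RuntimeError('no C compiler found on PATH')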