PR #6363 made it so our interpretation of env vars no longer clashed
with Autoconf's: if both Meson and Autoconf would read an env var, both
would do the same thing with the value they read.
However, there were still cases where Autoconf would read an env var
when Meson wouldn't:
- Autoconf would use `CC` in cross builds too
- Autoconf would use `CC_FOR_BUILD` in native builds too.
There's no reason Meson can't also do this: if native files overwrite
rather than replace env vars, cross files can do the same.
Because variables like `CC` are so ubiquitous, and because ignoring them
in cross builds just makes those builds liable to break (and things more
complicated in general), we bring Meson's behavior in line with
Autoconf's.
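To illustrate the lookup order this describes, here is a minimal
sketch; it is not Meson's actual code and the helper name is
hypothetical:

```python
import os

def lookup_c_compiler(machine_file_binaries: dict, for_build_machine: bool) -> str:
    # Machine files overwrite the environment rather than being replaced by it.
    if 'c' in machine_file_binaries:
        return machine_file_binaries['c']
    # Plain CC is consulted for the host machine even in a cross build;
    # CC_FOR_BUILD is consulted for the build machine even in a native build.
    env_var = 'CC_FOR_BUILD' if for_build_machine else 'CC'
    return os.environ.get(env_var, 'cc')
```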
The Rust code is ugly, because Rust is annoying. It doesn't invoke a
linker directly (unless that linker is link.exe or lld-link.exe);
instead it invokes the C compiler (usually gcc or clang) to do its
linking. Meson doesn't have good abstractions for this, though we
probably should, because some of the D compilers do the same thing.
Either that, or we should just call the C compiler directly, like Vala
does.
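For context, the shape of that invocation looks roughly like the
following sketch; rustc's `-C linker=` flag is real, but the helper is
hypothetical and not part of Meson:

```python
def rustc_link_invocation(source: str, c_compiler: str) -> list:
    # rustc is told which C compiler driver to use for the link step,
    # rather than being handed a linker directly.
    return ['rustc', source, '-C', 'linker=' + c_compiler]

print(rustc_link_invocation('main.rs', 'gcc'))
```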
This changes the public interface for Meson, which we don't do unless
we absolutely have to. In this case I think we need to. A fair number
of projects have already been using 'ld' in their cross/native files to
get the ld binary and call it directly in custom_targets or generators,
and we broke that. While we could hit this problem again, names like
`c_ld` and `cpp_ld` are far less likely to cause collisions than `ld`.
Additionally, this gives a way to set the linker on a per-compiler
basis, which is probably very useful in itself.
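As an illustration of the per-compiler selection, here is a hedged
sketch; the helper is hypothetical and the dict merely stands in for a
machine file's binaries entries:

```python
def linker_for(lang: str, binaries: dict, default: str) -> str:
    # Each language gets its own <lang>_ld entry, so `c_ld`/`cpp_ld` can't
    # collide with a project's own `ld` binary entry the way `ld` did.
    return binaries.get(lang + '_ld', default)

binaries = {'c': 'gcc', 'c_ld': 'gold', 'cpp': 'g++', 'cpp_ld': 'lld'}
assert linker_for('c', binaries, 'bfd') == 'gold'
assert linker_for('rust', binaries, 'bfd') == 'bfd'
```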
Fixes #6442
A compiler or other tool with an empty string as its name does not make
sense: it cannot be used anyway, and it causes a failure later in
parse_entry.
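A sketch of the kind of early check meant here; the names are
illustrative, not the actual implementation:

```python
def validate_tool_name(entry: str) -> str:
    # Reject the empty string up front instead of letting parse_entry fail later.
    if not entry.strip():
        raise ValueError('Tool and compiler entries must not be empty strings.')
    return entry
```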
Fix #5451
Meson itself *almost* only cares about the build and host platforms. The
exception is that it takes a `target_machine` from the cross file and
exposes it to the user, but it doesn't do anything else with it. It's
therefore overkill to put the target in `PerMachine` and `MachineChoice`.
Instead, we add a `PerThreeMachine` used only for the machine infos.
Additionally, fix a few other things that were bugging me in the process:
- Get rid of the `MachineInfos` class. Since `envconfig.py` was created,
it has no methods that couldn't just go on `PerMachine`.
- Make `default_missing` and `miss_defaulting` work functionally, so the
"unfrozen" configuration can be bound locally rather than as class vars.
This helps prevent bugs where one forgets to freeze a configuration
(see the sketch after this list).
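A rough sketch of the shapes described above, heavily simplified; the
real classes carry more machinery:

```python
import typing as T

_T = T.TypeVar('_T')

class PerMachine(T.Generic[_T]):
    def __init__(self, build: _T, host: _T) -> None:
        self.build = build
        self.host = host

    def default_missing(self) -> 'PerMachine[_T]':
        # Functional style: return a new, "frozen" copy instead of mutating
        # self, so a caller can't forget to freeze the unfrozen configuration.
        host = self.host if self.host is not None else self.build
        return PerMachine(self.build, host)

class PerThreeMachine(PerMachine[_T]):
    # Only the machine infos need the target machine, which Meson merely
    # exposes to the user without otherwise using it.
    def __init__(self, build: _T, host: _T, target: _T) -> None:
        super().__init__(build, host)
        self.target = target
```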
Mypy knows what to do with these and isn't confused, but some versions
of Python 3.5 (at least 3.5.2) can't handle these annotations. By making
them strings, the Python interpreter won't try to evaluate them.
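For example (a hypothetical function; the quoting is the only point
being illustrated):

```python
import typing as T

def get_flags() -> 'T.List[str]':  # quoted, so the interpreter never evaluates it
    return []
```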
Fixes #5326
Python 3.5 doesn't support some things, like `method[...](...)` or
`x: ... = ...`, so I made them comments instead, with the intention that
they can someday be made into real annotations.
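For instance, with an illustrative variable rather than one from the
codebase:

```python
default_libdir = 'lib'  # type: str
# Someday, once Python 3.5 support is dropped:
# default_libdir: str = 'lib'
```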
Currently this takes a return value, sets it in the error case, and
then has a separate if to handle that. Instead, just set the return
value in the error handler.
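The pattern being described, as a standalone sketch with a hypothetical
function rather than the code in question:

```python
import shutil

# Before: the error case sets the return value, and a separate `if` handles it.
def find_tool_before(name: str) -> str:
    result = shutil.which(name)
    if result is None:
        result = ''
    if result == '':
        return 'not-found'
    return result

# After: the return value is set directly where the error is handled.
def find_tool_after(name: str) -> str:
    result = shutil.which(name)
    if result is None:
        return 'not-found'
    return result
```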
We'll need this in the llvm-config logic to determine the right
llvm-config to call on Fedora 30+, but this feels like the sort of
information that might be useful elsewhere. This does not expose the
information as part of the public API; it's only accessible at the
Python layer.
1. They (and the others) all use PerMachineDefaultable. It's not the
best class, but consistency comes first. (It and all of them can be
improved across the board later.)
2. They use `None` as the default argument so as not to mutate what's
effectively a global variable. (Thanks @dcbaker!)
3. They have a `fallback` field to centralize authority on when
environment variables should be consulted (see the sketch after this
list).
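A simplified sketch of those conventions, using a hypothetical table
class in the spirit of the real ones:

```python
import typing as T

class ToolTable:
    def __init__(self, tools: T.Optional[T.Dict[str, T.List[str]]] = None,
                 fallback: bool = True) -> None:
        # `None` as the default avoids sharing (and accidentally mutating) one
        # dict across every instance, which a `tools={}` default would do.
        self.tools = tools if tools is not None else {}
        # `fallback` is the single place that decides whether environment
        # variables may be consulted when an entry is missing.
        self.fallback = fallback
```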
First of all, I'd like compilers and other modules that environment.py
currently imports to be able to use these types without creating
hard-to-follow module cycles.
Second of all, environment.py's exact purpose seems a bit obscured.
Splitting the data types (and basic pure functions) from the more
complex logic that infers that data seems like a good way to separate
concerns.
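A sketch of the layering this aims for; the module names come from the
text above, while the class shown is only illustrative of what moves:

```python
# envconfig.py: plain data types and basic pure functions.
class MachineInfo:
    def __init__(self, system: str, cpu_family: str, cpu: str, endian: str) -> None:
        self.system = system
        self.cpu_family = cpu_family
        self.cpu = cpu
        self.endian = endian

# environment.py (and compilers, etc.) import the data types, not the
# other way around, so no module cycle forms:
#     from .envconfig import MachineInfo
```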