Although it's not especially common, there are certainly cases where it's
useful to pass the path to an external program to a test program.
Fixes: https://github.com/mesonbuild/meson/issues/3552
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
create_target_linker_introspection is only ever called from one place,
which passes in CompilerArgs. This was implemented in commit
5eb55075ba, which performed the conversion as a preventative measure,
since its type was not obvious (thereby modifying the *type* of the
variable in place).
We have a function that wraps two others because it first checks whether
the input is static or shared. By the same token, it has to return types
valid for either. We already *know* that we are dealing with a shared
library, so we can and should call the real function directly. This is
both a micro-optimization (avoiding the extra function call) and a fix
for a mypy "union-attr" error.
This is a bit of a hack, since the rule is added outside of the
`__init__` method, and that's probably bad. But at least we can get some
additional help by telling type checkers what it will be.
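A small sketch of the idea, with illustrative names rather than the actual attributes involved:

```python
class NinjaRule:
    def __init__(self, name: str) -> None:
        self.name = name

class Backend:
    # Annotation-only declaration: nothing is assigned here, but type
    # checkers now know what the attribute will be, even though it is only
    # set later, outside __init__.
    depscan_rule: NinjaRule

def generate_rules(backend: Backend) -> None:
    # The attribute is filled in well after construction.
    backend.depscan_rule = NinjaRule('depscan')

b = Backend()
generate_rules(b)
print(b.depscan_rule.name)
```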
At the OS level, Unix-like OSes usually have very large or even
unlimited command-line length limits. In practice, however, many
applications do not handle such long command lines (intentionally or
otherwise). Notably, Wine has the same limit Windows does: 32,768
characters. Because we previously double-counted most characters, we
papered over most situations in which we would need an RSP file on
Unix-like OSes with Wine.
To fix this, I have set the command-line limit to 32k. This is still a
massive command line to pass without an RSP file, and it will only
cause an RSP file to be used where it is not strictly necessary in a
small number of cases, but it fixes the Wine case. Projects that wish
to avoid an RSP file can still set the MESON_RSP_THRESHOLD environment
variable to a very large number instead.
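Roughly, the intended behaviour looks like this (a simplified sketch, not the actual ninja backend code):

```python
import os

DEFAULT_RSP_THRESHOLD = 32_000  # ~32k characters, matching the Windows/Wine limit

def rsp_threshold() -> int:
    # MESON_RSP_THRESHOLD still lets projects effectively opt out by
    # setting a very large value.
    override = os.environ.get('MESON_RSP_THRESHOLD')
    return int(override) if override else DEFAULT_RSP_THRESHOLD

def needs_rsp_file(estimated_cmdline_length: int) -> bool:
    return estimated_cmdline_length >= rsp_threshold()
```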
Fixes: #13414
Fixes: cf0fecfce ("backend/ninja: Fix bug in NinjaRule.length_estimate")
This causes us to not count the spaces between arguments, thereby
undercounting the estimated command-line length. This is extra important
because we previously double-counted all actual characters, which
covered this issue up.
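A correct estimate has to charge one separator per argument, roughly like this (illustrative, not the exact NinjaRule code):

```python
def length_estimate(args: list[str]) -> int:
    # Each argument contributes its own length plus the space that
    # separates it from the next one; dropping the separators would
    # undercount the real command-line length.
    return sum(len(a) + 1 for a in args)

print(length_estimate(['gcc', '-c', 'foo.c']))  # 13, vs. 10 without the spaces
```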
Fixes: cf0fecfce ("backend/ninja: Fix bug in NinjaRule.length_estimate")
c1076241af changed the logic in multiple places; in particular, it
looks like it was assumed that is_cross is always the same as
need_exe_wrapper(), but that's not true.
Also, the commit only talks about mypy, so this was definitely not intended.
This reverts all the cases where need_exe_wrapper() was introduced back to
is_cross.
The change in backends.py could be a correct simplification, but I don't know
the code base well enough, so I am reverting that too.
See #13403 and #13410
Fix incorrect map access to custom_target_output_buildfile in CustomTargetIndex
case. Map keys are file names with the path appended, but the keys used
for lookup were just the file name. This led to crashes.
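Sketch of the mismatch (paths are made up for the example):

```python
import os

custom_target_output_buildfile: dict[str, str] = {}

# Entries are keyed by the output file name together with its path:
key = os.path.join('sub/dir', 'generated.h')
custom_target_output_buildfile[key] = 'sub/dir/meson.build'

# Looking the entry up by the bare file name misses it, which is what crashed:
assert 'generated.h' not in custom_target_output_buildfile
assert custom_target_output_buildfile[key] == 'sub/dir/meson.build'
```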
An Xcode project file contains dictionary-like structures. Strings for the
mapped values have to be quoted if they include special characters.
Previously this quoting happened in place, before adding a new value to
the dictionary. This produced large amounts of boilerplate, error-prone
code: for example, if there are targets whose names contain special
characters (grpc++ in my case), the produced file will be invalid.
This moves the checking and quoting subroutine to the PbxDictItem class,
eliminating the need to do this on the caller side.
The [Kin](https://github.com/Serchinastico/Kin) tool was used to validate
the produced PBXProj files.
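A rough sketch of the approach; the constructor signature and quoting rule here are simplified, not the exact implementation:

```python
import re

class PbxDictItem:
    def __init__(self, key: str, value: str) -> None:
        # Quoting now happens once, inside the item, instead of at every
        # call site that adds a value to a dictionary.
        self.key = self._quote_if_needed(key)
        self.value = self._quote_if_needed(value)

    @staticmethod
    def _quote_if_needed(s: str) -> str:
        # Plain identifiers can stay bare; anything with special characters
        # (e.g. 'grpc++') must be quoted to keep the PBXProj file valid.
        if re.fullmatch(r'[A-Za-z0-9_.]+', s):
            return s
        return f'"{s}"'

print(PbxDictItem('name', 'grpc++').value)  # "grpc++"
```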
In commit 2cb7350d16 we added a special
case for environment() objects to allow skipping `meson --internal exe`
overhead when /usr/bin/env exists and can be used to set environment
variables instead of using a pickled wrapper.
This special case assumed that environment() is only used for setting
variables, but it is also possible, albeit less common, to
append/prepend to them, in which case `meson --internal exe` computes
the final value relative to whatever environment variables are
inherited from ninja at run time. That cannot be precomputed when
generating command lines for ninja, so it cannot be expressed as
arguments to /usr/bin/env; all /usr/bin/env can do is set (override) an
environment variable.
In this case, we have to use the Python wrapper and cannot optimize it
away. Add a tracking bit to the env object and propagate it to the
backend.
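The distinction, sketched with simplified stand-ins for Meson's internals (the attribute name for the tracking bit is hypothetical):

```python
class EnvSpec:
    def __init__(self) -> None:
        self.values: dict[str, str] = {}
        self.needs_wrapper = False  # hypothetical tracking bit

    def set(self, name: str, value: str) -> None:
        self.values[name] = value

    def append(self, name: str, value: str) -> None:
        # The final value depends on whatever environment ninja passes in
        # at run time, so it cannot be precomputed into env arguments.
        self.values[name] = value
        self.needs_wrapper = True

def build_command(env: EnvSpec, cmd: list[str]) -> list[str]:
    if env.needs_wrapper:
        # Must fall back to the pickled `meson --internal exe` wrapper.
        raise NotImplementedError('cannot be expressed as /usr/bin/env arguments')
    return ['/usr/bin/env'] + [f'{k}={v}' for k, v in env.values.items()] + cmd

env = EnvSpec()
env.set('FOO', 'bar')
print(build_command(env, ['mytool']))  # ['/usr/bin/env', 'FOO=bar', 'mytool']
```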
* Ensure private directory exists for custom targets
Some custom target commands will expect `@PRIVATE_DIR@` to already exist, such as with `make -C @PRIVATE_DIR@ ...`.
* Prefer `exist_ok` over catching exception
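In practice this amounts to something like the following (simplified):

```python
import os

def ensure_private_dir(private_dir: str) -> None:
    # exist_ok=True avoids the try/except FileExistsError dance when the
    # directory is already present from a previous build.
    os.makedirs(private_dir, exist_ok=True)
```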
In large monolithic codebases with many intertwined shared libraries, test
serialization can cause configuration times to balloon massively from a single
test() invocation, due to concatenating the same paths many times over.
This commit adds an initial set in which all dependencies of a test target are
stored before their paths are concatenated to form the test target's
LD_LIBRARY_PATH. In testing on a particularly large codebase, this change
reduced total configuration time by almost 8x.
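A simplified illustration of the deduplication (the data structures are invented for the example):

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass(frozen=True)
class Lib:
    name: str
    outdir: str
    deps: tuple['Lib', ...] = ()

def collect_deps(target: Lib, seen: set[Lib] | None = None) -> set[Lib]:
    # Each dependency is visited once, no matter how many edges reach it.
    seen = set() if seen is None else seen
    for dep in target.deps:
        if dep not in seen:
            seen.add(dep)
            collect_deps(dep, seen)
    return seen

def ld_library_path(test_target: Lib) -> str:
    return ':'.join(sorted({d.outdir for d in collect_deps(test_target)}))

core = Lib('core', 'build/core')
a = Lib('a', 'build/a', (core,))
b = Lib('b', 'build/b', (core,))
exe = Lib('exe', 'build/exe', (a, b))
print(ld_library_path(exe))  # build/a:build/b:build/core
```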
Instead of invoking javac for every .java file, pass all of the sources
for a jar target to a single javac invocation. This massively improves
first compilation time and doesn't meaningfully affect incremental builds
(it can even be faster in some cases).
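Conceptually, the batched invocation is just (paths and flags illustrative):

```python
import subprocess

def compile_jar_sources(javac: str, outdir: str, sources: list[str]) -> None:
    # One javac process for every .java file belonging to the jar target,
    # instead of spawning one process per source file.
    subprocess.check_call([javac, '-d', outdir] + sources)
```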
The old approach also had issues where files would not always get recompiled
even though they should have been, necessitating a clean rebuild in order to
see changes reflected in the build output.
Multiple invocations seem to only make sense if:
- issues with files not getting flagged for rebuild are investigated and fixed
- something like the javaserver buildtool from openjdk sources is used
instead of directly spawning javac processes
- the number of Java files per jar is so large that it is faster to compile
files one by one than to compile them all at once (batching may still make
sense to get a reasonable balance)
Some settings require "objectVersion" to be set to a certain value, or
Xcode will ignore them. To fix this, we set it to the highest possible
value, determined by the detected version of Xcode. We also set
"compatibilityVersion", but mainly so it lines up with "objectVersion".
At the same time, we should not be generating Xcode 3.2-compatible
projects by default anyway.
C#, Java, and Swift targets were manually collecting compiler arguments
rather than using the helper function for this purpose. This caused each
target type to add arguments in a different order, and to omit some
arguments:
- C#: /nologo, -warnaserror
- Java: warning level (-nowarn, -Xlint:all, -Xdoclint:all), debug arguments (-g, -g:none), -Werror
- Swift: -warnings-as-errors
Fix this. Also fix up some no-longer-unused argument processing in the
Compiler implementations.
The code would create a dictionary of type `str : list[str] | str |
None`, and would later try to call `len(' '.join(dict[key]))`.
This resulted in two different bugs:
1. If the value was `None`, it raised an exception, since None isn't
   iterable and cannot be converted to a string.
2. If the value was a string, it returned nearly double the length of
   the actual string, because a space was added between each character.
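Both failure modes are easy to reproduce:

```python
value_list = ['-O2', '-g']
value_str = '-O2'
value_none = None

print(len(' '.join(value_list)))  # 6: '-O2 -g', as intended
print(len(' '.join(value_str)))   # 5: '- O 2', nearly double the real length of 3
# len(' '.join(value_none))       # TypeError: can only join an iterable
```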
When getting debug file arguments we can sometimes pass None where None
is unexpected. This becomes a particular problem in the Cuda compiler,
where the output is unconditionally concatenated with a static string,
resulting in an uncaught exception. This is easy to spot once the
functions in question are annotated, since a static type checker like
mypy flags the issue immediately.
This commit adds those annotations, and then fixes the resulting error.
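The shape of the fix, heavily simplified (the real methods live on Meson's compiler classes and take more context):

```python
from typing import Optional

def get_link_debugfile_name(outputname: str) -> Optional[str]:
    # Only some toolchains emit a separate debug file; the rest return None.
    if outputname.endswith('.exe'):
        return outputname[:-4] + '.pdb'
    return None

def debugfile_args(outputname: str) -> list[str]:
    name = get_link_debugfile_name(outputname)
    if name is None:
        # Without this check, concatenating None with a static string
        # raises an uncaught TypeError.
        return []
    return ['/DEBUG', '/PDBALTPATH:' + name]

print(debugfile_args('app.exe'))
print(debugfile_args('libfoo.so'))
```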
Fixes: #12997
There's a known Ninja bug
(https://github.com/ninja-build/ninja/issues/1952) where running this
with dyndeps results in Ninja deleting implicit outputs from the
dyndeps, leading to pointless rebuilds. For reference, this is what
CMake does as well.
We already have to decide whether to scan a file at configure time, so
we don't want to have to do it again at compile time, every time the
depscan rule is run. We can avoid that by saving the language to use in
the pickle and passing it along, so depscan doesn't have to re-calculate
it. As an added bonus, this removes an import from depscan.
We don't need to write and pass two separate files to the depscanner.
I've used the pickle because the pickle serializer/deserializer should
be faster than JSON, though I haven't tested that.
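Conceptually, the depscan input becomes a single pickled object that already carries the per-source language decided at configure time (field names and paths here are illustrative):

```python
import pickle

# Written once at generation time:
scan_info = {
    'src/mod_a.f90': 'fortran',
    'src/consumer.cpp': 'cpp',
}
with open('depscan.dat', 'wb') as f:
    pickle.dump(scan_info, f)

# Loaded by the depscan rule at build time; no need to re-detect the
# language from the file name, and no second JSON file to pass around.
with open('depscan.dat', 'rb') as f:
    sources_and_languages = pickle.load(f)
print(sources_and_languages)
```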