During evaluation of codeblocks, we start off with an iteration of
nodes, and then while evaluating them we may update the global
self.current_node context. When catching and formatting errors, we
didn't take into account that the node might be updated from the
original top-level iteration.
Switch to formatting errors using self.current_node instead, so that we
point at the most accurate likely cause of an error.
Also update the current node in a few more places, so that function
calls always see the function call as the current node, even if the most
recently parsed node was an argument to the function call.
Fixes #11643
Extend the "common/include order" test to ensure that the build
directory is preferred over the source directory. For example,
when using `configure_file()`, the resulting file should be
preferred over a file with the same name in the source directory.
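For instance, a sketch of the shape being tested (file and target names
here are illustrative): a stale `config.h` in the source tree must not
shadow the generated one.
```meson
cdata = configuration_data()
cdata.set_quoted('VERSION', '1.0')
configure_file(output : 'config.h', configuration : cdata)
# '.' resolves to both the build and the source directory; the build
# directory must come first so the generated header wins
executable('app', 'main.c', include_directories : include_directories('.'))
```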
Allows getting closer to `./run_project_tests.py -- -Dwerror=true`.
- when argc and argv are not *both* used, there's a standard, compliant
mechanism to mark the variable as unused
- generated code should not be built with -Werror
- more thoroughly comment out some commented-out code
Because we base the pickled data name on the name property of the
command being run... and for built targets, `exe.name` is always just
the name. However, for an ExternalProgram this is whatever string we
searched for, so NOT necessarily just the basename.
This became a bigger issue once we started using generator() with the
actual program in commit 6aeec80836,
rather than first casting it to a string, because the VS backend
*always* uses the meson_exe approach for various reasons related to VS
being VS.
Outside of that, it's difficult to actually get an ExternalProgram
object passed to meson_exe -- CustomTarget lowers it to a string,
capture is handled via argparse instead of pickling, etc.
Fixes #11593
This test is intended to test really long output, so it prints 100k
lines of stdout/stderr. It completes in two seconds on my machine, but
the default 30-second timeout is apparently not enough for CI: on
Windows we often get flaky tests due to this, e.g. we'll get within 200
lines of the end before timing out.
Bump the timeout by 2x. We know this test isn't doing anything
particularly surprising, and allowing it to request another 30 seconds
won't hang the CI. But it will save us from some spurious failures and
restarted jobs.
In commit eaf365cb3e we explicitly sorted
them for neatness, with the rationale that we were restoring intentional
behavior and we only need a set for stylistic purposes.
This actually wasn't true, because we never sorted them to begin with
(we did sort the version numbers), but sorting them is fine. The bigger
issue is that we actually used a set to avoid printing the same feature
type multiple times. Now we do print them multiple times -- because each
registered feature includes the unique node.
Fix this by using both sorted() and a set.
Fix tests that should, in retrospect, have flagged this as an issue.
They were added later in the same series to check something else
entirely, happen to cover this too, and were presumably copied directly
from stdout as-is...
We need to know the project minimum version before evaluating the rest
of the function. There are three basic approaches:
- try to set it inside KwargInfo
- just run a minimal version of func_project for this, then load
everything after
- drop down to the AST and set it before anything else
In order to handle FeatureNew emitted by a FunctionNode evaluated
before project() due to being inlined, such as `version: run_command()`,
only option 3 suffices; the rest all happen way too late. Since we have
just added AST handling support for erroring out, we can use that to set
the version as well.
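For example, a sketch of the inlined shape (script name illustrative):
```meson
project('foo', 'c',
  # run_command() is evaluated while the project() arguments are being
  # built, i.e. before meson_version has been seen by the interpreter,
  # hence the need to pull the version out of the AST first
  version : run_command('version.sh', check : true).stdout().strip(),
  meson_version : '>=0.62.0')
```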
We do not need the python module's find_installation() for this, as it
does various things to set up building and installing python modules
(pure python and C-API). That functionality is already tested in the
python tests.
Elsewhere, when we just need an interpreter capable of running python
scripts in order to guarantee a useful scripting language for custom
commands, it suffices to use find_program(), which does not run an
introspection script or do module imports, and is thus faster and
a bit cleaner.
Either way, both methods are guaranteed to find the python3 interpreter,
deferring to mesonlib.python_command for that guarantee.
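A sketch of the cheaper lookup (program and file names illustrative):
```meson
python = find_program('python3')
custom_target('gen-sources',
  output : 'generated.c',
  command : [python, files('gen.py'), '@OUTPUT@'])
```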
Test "71 summary" can sometimes return the python command with the
".exe" part all uppercased for mysterious Windows reasons. Smooth this
over with ExternalProgram.
This adds two new methods that are conceptually related in the same way
that `enable_auto_if` and `disable_auto_if` are. They are different,
however, in that they will always replace an `auto` value with an
`enabled` or `disabled` value, or error if the feature is in the
opposite state (calling `feature(disabled).enable_if(true)`, for
example). This matters when the feature will be passed to
`dependency(required : …)`, which has different behavior when passed an
enabled feature than an auto one.
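A sketch of that difference (option and dependency names illustrative):
```meson
f = get_option('feat').enable_if(host_machine.system() == 'linux')
# if 'feat' started out as auto, it is now enabled on linux, so a
# missing dependency is a hard error there instead of being silently
# treated as not required
dep = dependency('foo', required : f)
```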
The `disable_if` method will be controversial, I'm sure, since it
can be expressed via `feature.require()` (`feature.require(not
condition) == feature.disable_if(condition)`). I have two defences of
this:
1) `feature.require` is difficult to reason about; I would expect
require to be equivalent to `feature.enable_if(condition)`, not to
`feature.disable_if(not condition)`.
2) mixing `enable_if` and `disable_if` in the same call chain is much
clearer than mixing `require` and `enable_if`:
```meson
get_option('feat') \
.enable_if(foo) \
.disable_if(bar) \
.enable_if(opt)
```
vs
```meson
get_option('feat') \
.enable_if(foo) \
.require(not bar) \
.enable_if(opt)
```
In the first chain it's immediately obvious what is happening, in the
second, not so much, especially if you're not familiar with what
`require` means.
It's always been strange to me that we don't have an opposite of the
`disable_auto_if` method, but I've been pressed to find a case where we
_need_ one, because `disable_auto_if` can usually be logically contorted
to work. I finally found the case where they're not equivalent: when you
don't want to convert to a boolean:
```meson
f = get_option('feat').disable_auto_if(not foo)
g = get_option('feat').enable_auto_if(foo)
dep1 = dependency('foo', required : f)
dep2 = dependency('foo', required : g)
```
Currently Meson allows the following (Muon does not):
```meson
option('foo', type : 'boolean', value : 'true')
option('bar', type : 'integer', value : '42')
```
This is possibly a holdover from very old code, but it's a bad idea and
we should stop doing it. This deprecation is the first stop on that
journey.
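The non-deprecated spelling passes values of the declared type:
```meson
option('foo', type : 'boolean', value : true)
option('bar', type : 'integer', value : 42)
```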
The code below this already handles being passed an Executable or
ExternalProgram, and it does so correctly: it handles host binaries
that need an exe_wrapper, while the code in the generator paths doesn't.
The xcode backend is, as always, problematic: it doesn't handle things
the same way as the ninja and vs backends, and generates a shell script
instead of using meson as a wrapper when needed (it seems likely that
just forcing the meson path for xcode would be better). I don't have a
working mac to develop a fix on, so I've left a todo comment there.
Fixes: #11264
Hook this up to installed dependency manifests. This is often needed
above and beyond just an SPDX string -- e.g. many licenses have custom
copyright lines.
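Presumably along the lines of this sketch (file and manifest names
illustrative):
```meson
project('foo', 'c',
  license : 'MIT AND BSD-3-Clause',
  license_files : ['COPYING.MIT', 'COPYING.BSD'])
meson.install_dependency_manifest('foo-manifest.json')
```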
At least, if you tried to use it when passing an install_dir. Because
T.Sequence is horrible and we should never use it, and the annotations
are a lie that produces bugs.
So, fix the annotations on CustomTarget to never allow this to happen
again, and fix the function too. Move some definitions elsewhere inline
to satisfy the linter.
Fixes #11157
When auto-generating e.g. a `clang-format` target, we first check to see
if the user has already defined one, and if so we don't bother creating
our own. We check for two things:
- if a ninja target already exists, skip
- if a run_target was defined, skip
The second check is *obviously* a duplicate of the first check. But the
first check never actually worked, because all_outputs was only
generated *after* generating all utility rules and actually writing out
the build.ninja file. The check itself compares against nothing, and
always evaluates to false no matter what.
Fix this by reordering the target creation logic so we track outputs
immediately, but only error about them later. Now, we no longer need to
special-case run_target at all, so we can drop that whole logic from
build.py and interpreter.py, and simplify the tracked state.
Fixes defining an `alias_target()` for a utility, which tried to
auto-generate another rule and errored out. Also fixes doing the same
thing with a `custom_target()`, although I cannot imagine why anyone
would want to produce an output file named `clang-format` (unless clang
itself decided to migrate to Meson, which would be cool but feels
unlikely).
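For instance, a sketch of the now-working shape (target names, command,
and the assumption that the alias may wrap a run target are all
illustrative); previously the alias collided with the rule meson tried
to auto-generate:
```meson
fmt = run_target('format-all',
  command : [find_program('clang-format'), '-i', my_sources])
alias_target('clang-format', fmt)
```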
Regression test: libccpp has both C and C++ sources. The executable only
has C sources. It should still link using the C++ compiler. When using
both_libraries, the static library has no sources and thus no compilers,
resulting in the executable linking with the C compiler.
https://github.com/Netflix/vmaf/issues/1107
When using both_libraries(), or library() with default_library=both, we
remove all sources from args and kwargs when building the static
library, and replace them by the objects from the shared library. But
sources could also come from any InternalDependency, in which case we
currently build them twice (not efficient) and link both objects into
the static library.
It also means that when we needlessly build those sources for the static
library, they miss the order-only dependency on the generated headers
that we removed from args/kwargs, which can cause build errors when a
source from the static lib is compiled before the header in the shared
lib gets generated.
This happened in GLib:
https://gitlab.gnome.org/GNOME/glib/-/merge_requests/2917.
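A sketch of the problematic shape (names illustrative):
```meson
gen = find_program('gen-header.py')
hdr = custom_target('gen-h',
  output : 'gen.h',
  command : [gen, '@OUTPUT@'])
idep = declare_dependency(sources : [hdr, 'helper.c'],
  include_directories : include_directories('.'))
# before the fix, helper.c was compiled a second time for the static
# half, without the order dependency on gen.h
both_libraries('foo', 'foo.c', dependencies : idep)
```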
A subproject could have a sub-subproject as a git submodule, or part of
the subproject's release tarball, and still have a wrap file for it
(e.g. needed for the [provide] section). In that case we need to use the
source tree for the sub-subproject in place, instead of downloading a
new copy into the main project.
This is the case with GLib 2.74: it has a subproject "gvdb" as a git
submodule that is also part of the release tarball, and it ships a
gvdb.wrap file as well.
This is generally a bad idea; e.g. it causes an OSError on FreeBSD.
It also gets ignored by Solaris and thus causes unit test failures.
The proper solution is to simply reject any attempt to set this, and log
a warning.
The install_emptydir function does apply the mode as well, and since it
is a directory it actually does something. This is the only place where
we don't reset the mode.
Although install_subdir also installs directories, and in theory it
could set the mode as well, that would be a new feature. Also it doesn't
provide much granularity and has mixed semantics with files. Better to
let people use install_emptydir + install_subdir.
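E.g. (paths and mode illustrative):
```meson
# the mode is applied to the directory itself, and no longer reset
install_emptydir(get_option('localstatedir') / 'cache/foo',
  install_mode : 'rwxr-x---')
# for populated trees, fall back to install_subdir
install_subdir('data', install_dir : get_option('datadir') / 'foo')
```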
Fixes #5902
Compiled languages are Meson's bread and butter, but hardly required.
This is convenient, because many test cases specifically do not care
about testing compiler interactions.
In such cases, we can skip doing compiler lookups which aren't used, as
they only slow down test setup.
`configure_file` is both an extremely complicated implementation and
a strange place for copying to live. It's a bit of a historical
artifact, since the fs module didn't yet exist. It makes more sense to
move this to the fs module and deprecate the `configure_file` version.
This new version works at build time rather than configure time, which
has the disadvantage that it can't be passed to `run_command`, but the
advantage that changes to the input don't require a full reconfigure.
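The replacement looks roughly like this (file names illustrative):
```meson
# deprecated spelling, copies at configure time:
configure_file(input : 'data.txt', output : 'data.txt', copy : true)

# new spelling, copies at build time:
fs = import('fs')
copied = fs.copyfile('data.txt')
```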
install_mode can include the setuid bit, which has the special property
(mentioned in the set_mode logic for minstall itself) of needing to come
last, because it "will get wiped by chmod" (or at least chown).
In fact, it's not just chown that wipes setuid, but other changes as
well, such as the file contents. This is not an issue for install_data /
custom_target, but for compiled outputs, we run depfixer to handle
rpaths. This may or may not cause edits to the binary, depending on
whether we have a build rpath to wipe, or an install rpath to add. (We
also may run `strip`, but that external program already has its own mode
restoration logic.)
Fix this by switching the order of operations around, so that setting
the permissions happens last.
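E.g. a setuid binary whose rpath gets touched by depfixer (a sketch;
the mode triple is illustrative):
```meson
executable('helper', 'helper.c',
  install : true,
  install_rpath : '$ORIGIN/../lib',
  install_mode : ['rwsr-xr-x', 'root', 'root'])
```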
Fixes https://github.com/void-linux/void-packages/issues/38682
This was never meant to work; it's only an implementation detail of
using `importlib.import_module`, and of our modules formerly being named
`unstable_foo.py`, that it ever worked at all.
Thanks to `ModuleInfo`, all modules are just named `foo.py` instead of
`unstable_foo.py`, which simplifies the import method a bit. This also
allows for accurate FeatureNew/FeatureDeprecated use, as we know when
the module was added and if/when it was stabilized.
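E.g. (a sketch; `keyval` is one module that graduated from the unstable
prefix):
```meson
kv = import('keyval')           # the stable, intended spelling
kv = import('unstable_keyval')  # only ever worked as an accident of
                                # the old file naming
```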
Instead of using FeatureNew/FeatureDeprecated in each module.
The goal here is to be able to handle information about modules in a
single place, instead of having each module handle it separately. Each
module simply defines some metadata, and then the interpreter handles
the rest.
In order to reliably link to static libraries or individual object
files, we need to take their languages into account as well. For static
libraries this is easy: we just add the static library's list of
compilers to the build target. For extracted objects, we need to only
add the ones for the objects we use.
But we did this really inefficiently -- in fact, downright terribly. We
iterated over all source files from the extracted objects, then tried to
look up a new compiler for them. Even though the extracted objects
already had a list of compilers! This broke once compilers were made
per-subproject, because while the extracted objects have a reference to
all the compilers they need (just like static archives do, actually),
we might not actually be able to look up those compilers from scratch
inside the current subproject.
Fix this by asking the extracted objects to categorize all their own
sources and return the compilers we want.
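A sketch of the shape that broke (names illustrative), e.g. with the
static library living in a subproject whose compilers differ from the
caller's:
```meson
lib = static_library('helper', 'helper-c.c', 'helper-cpp.cpp')
executable('app', 'main.c',
  objects : lib.extract_objects('helper-cpp.cpp'))
```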
Fixes #10579