Caching Compiler.run() seems likely to cause problems, but for some callers, like
.sizeof(), we know enough about the program being run to make it safe.
This commit just adds Compiler.cached_run(); a subsequent commit makes use
of it.
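For illustration, a sketch of the kind of check that benefits (an ordinary
meson.build snippet, not part of this change):
```meson
cc = meson.get_compiler('c')
# .sizeof() compiles and runs a tiny probe program; its inputs fully
# determine the result, so repeating the same check can safely reuse
# a cached run instead of re-executing the probe.
if cc.sizeof('long') == 8
  add_project_arguments('-DHAVE_64BIT_LONG', language : 'c')
endif
```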
These are necessary for projects outside Meson itself that want to
extend the 'meson install' functionality as meson-python does to
assemble Python package wheels from Meson projects.
Fixes #11426.
It seems this happens because at some point setuptools imports gettext,
and we have a script by the same name.
In general, this path injection by default is bad news for our use case.
Python 3.11 introduced -P for this reason, but we cannot depend on that.
Instead, check for the injected path first, and delete it, before doing more imports.
The version lookup should be silent. While we're at it, the version
lookup should not be happening more than once, which printing multiple
messages indicated we were doing. Pass the version into the per-file
function rather than looking it up fresh each time.
Fixes https://github.com/mesonbuild/meson/pull/11054#issuecomment-1430169280
Rustc, as of version 1.61.0, has support for controlling when
whole-archive linking takes place; prior to that it tried to make a
good guess about what you wanted, which worked most of the time. This is
now implemented.
Additionally, rustc makes some assumptions about library names
(specifically static library names) that Meson does not follow. This can
be fixed with rustc 1.67, where a new +verbatim modifier has been added;
we can then force rustc to use the name we give it. Before that, we can
sneak through `/WHOLEARCHIVE:` in cases of dynamic linking (into a dll or
exe), but we can't force the archiver to do what we want (rustc
considers the archiver to be an implementation detail). The only
solution I can come up with is to copy the library to the format that
rustc expects. I've run into some issues with that as well, so we warn
in that case.
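For context, a minimal sketch of the project shape that hits this path
(target and file names are hypothetical):
```meson
# A Rust static library linked into a C executable: this is where
# whole-archive handling and, on MSVC, static library naming matter.
rlib = static_library('rthing', 'lib.rs')
executable('app', 'main.c', link_with : rlib)
```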
The decision to leave static-into-static linking broken on MSVC for rustc 1.61–1.66
was made because:
1) The workaround is non-trivial, and we would have to support that
workaround for a long time
2) The number of users of Rust in Meson is small
3) The number of users of Rust in Meson on Windows with MSVC is tiny
4) Using rustup to update rustc on Windows is trivial, and solves the
problem completely
Fixes: #10723
Fixes: #11247
Co-authored-by: Nirbheek Chauhan <nirbheek@centricular.com>
This workaround was never exclusive to python2, and in fact only just
got fixed in the upcoming python 3.12 release. Extend the version
comparison to cover all those other cases.
Only search for and provide linkage to libpython, if the dependency
expects to be linked to it. Fixes overlinking on Linux / macOS when
pkg-config isn't installed and the sysconfig lookup is used instead.
This was correctly handled for pkg-config rather than deferring it until
use, since commit bf83274344 -- but that
handling neglected to cover sysconfig dependencies. And sysconfig would
always try to link to libpython, it just respected the dependency
configuration barely enough to allow falling back to "don't link" if
both link_libpython=False and the library wasn't found.
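A sketch of the affected setup, assuming the generic python3 dependency
lookup (names are illustrative):
```meson
# An extension module should not link libpython on Linux/macOS; without
# pkg-config, the sysconfig fallback previously linked it anyway.
py_dep = dependency('python3', embed : false)
shared_module('ext', 'ext.c', dependencies : py_dep, name_prefix : '')
```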
We have two copies of this code, and the python module one is vastly
superior, not just because it allows choosing which python executable to
base itself on. Unify this. Fixes various issues including non-Windows
support for sysconfig, and pypy edge cases.
In preparation for wholly merging the dependency handling from the
python module into dependencies.*, move the unique class definitions
from there into their new home in dependencies.python, which is
semantically convenient.
In preparation for handling more work inside dependencies.*, we need to
be able to run a PythonExternalProgram from the python dependency. Move
most of the definition -- but only the parts that have no interest in a
ModuleState -- and subclass a bit of sanity checking that we need to
handle specially when used in the module.
It can go directly inside the function which immediately uses it.
There's no purpose in doing the lookup outside the function and
complicating the function signature in order to pass it in as an
argument, when it is looked up exactly once and used exactly once.
We write this out as an embedded string to a tempfile in order to run
it, which is pretty awkward. And usually Meson's files are already files
on disk, not packed into a zip, so we can simply run it directly. Since
python 3.7, which is our new minimum, we can handle this well via the
stdlib. (There's also mesonbuild.mesondata, but we do not need
persistence in the builddir.)
This also solves the problem that has always been there, of giant python
programs inside strings occasionally confusing syntax highlighters. Or
even, it would be nice if we had syntax highlighting for this
introspection program. :D
We pass around a PythonInstallation into the depths of the dependency
factory, solely so that we can get at is_pypy in one particular branch.
We don't need the DSL functions, we don't need access to the
interpreter, let's just use the enhanced ExternalProgram object on its
own.
Add is_pypy to the object, and modify other references to get
information from .info['...'] instead of direct access.
Do not allow hanging forever.
It's fine to use selectors here, which are unix-only, because we require
Unix + a system that has e.g. sudo installed, anyway.
If the user runs `sudo meson install` this may run ninja to build
everything that gets installed. This naturally happens as root also, by
default, which is bad. Instead, detect root elevation tools and drop the
uid/gid of the child ninja process back to the original invoking user
before doing anything.
There are a couple of issues with the current approach:
- pkexec is an unusual elevation method; the standard is sudo
- it tries to elevate even in automated workflows
- the user may not want to automatically rerun as root; that might be
badly behaved
Do some upfront checks instead, first to make sure it even makes sense
to try becoming root, and then to ask the user "do you really want
this". Also check for a couple common approaches to root elevation,
including doas.
Fixes #7345
Fixes #7809
We just finished running rebuild_all, and then actually running
the install routine failed because of permission errors, so we magically
reran as root. But we should *know* that all prerequisite targets are up
to date already; we don't need to run it *again*.
Nevertheless, when running meson install directly we could end up
without --no-rebuild in the original argv. This meant that it wouldn't
be in the re-executed argv either. Add it on just in case -- repeating
it twice doesn't hurt anyway.
This method allows meson.build to introspect on the changed options.
It works by merely exposing the same set of data that is logged by
MesonApp._generate.
Fixes#10898
Suppressing all exceptions was hiding even syntax errors in compiler
source code. If a compiler cannot be found, a MesonException is raised;
we should only expect that type.
This can raise any OSError, but we only caught two specific subclasses
that indicate a particular failure case. Also catch the generic form,
with a more generic message.
This produces better error messages in cases where e.g. exclusive lock
is not supported.
The current check results in *any* value of `export_dynamic` generating
vala import targets, even `false`. This is pretty clearly wrong, as it
really wants to treat an unset export_dynamic as false.
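A sketch of the case in question (hypothetical target; the point is only
that an explicit `false` should behave like an unset value):
```meson
# Before the fix, even this explicit 'false' triggered the extra vala
# targets that only make sense when export_dynamic is actually enabled.
executable('app', 'app.vala', export_dynamic : false)
```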
This adds two new methods, that are conceptually related in the same way
that `enable_auto_if` and `disable_auto_if` are. They are different
however, in that they will always replace an `auto` value with an
`enabled` or `disabled` value, or error if the feature is in the
opposite state (calling `feature(disabled).enable_if(true)`, for
example). This matters when the feature will be passed to
`dependency(required : …)`, which has different behavior when passed an
enabled feature than an auto one.
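For example (a sketch, assuming a boolean `foo` computed earlier in the
build file):
```meson
# enable_if turns an 'auto' feature into 'enabled' (and errors if it is
# already 'disabled'), so dependency() treats it as a hard requirement.
feat = get_option('feat').enable_if(foo)
dep = dependency('somedep', required : feat)
```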
The `disable_if` method will be controversial, I'm sure, since it
can be expressed via `feature.require()` (`feature.require(not
condition) == feature.disable_if(condition)`). I have two defences of
this:
1) `feature.require` is difficult to reason about; I would expect
require to be equivalent to `feature.enable_if(condition)`, not to
`feature.disable_if(not condition)`.
2) mixing `enable_if` and `disable_if` in the same call chain is much
clearer than mixing `require` and `enable_if`:
```meson
get_option('feat') \
.enable_if(foo) \
.disable_if(bar) \
.enable_if(opt)
```
vs
```meson
get_option('feat') \
.enable_if(foo) \
.require(not bar) \
.enable_if(opt)
```
In the first chain it's immediately obvious what is happening, in the
second, not so much, especially if you're not familiar with what
`require` means.
It's always been strange to me that we don't have an opposite of the
`disable_auto_if` method, but I've been hard-pressed to find a case where we
_need_ one, i.e. where `disable_auto_if` can't be logically contorted to
work. I finally found the case where they're not equivalent: when you
don't want to convert to a boolean:
```meson
f = get_option('feat').disable_auto_if(not foo)
g = get_option('feat').enable_auto_if(foo)
dep1 = dependency('foo', required : f)
dep2 = dependency('foo', required : g)
```
#8259 introduced a regression, causing Meson 0.57.0 and upward to
stop printing the outputs of scripts added using `meson.add_*_script()`.
This makes _find_source_scripts() mark those executables as verbose
in meson_exe.
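That is, output from scripts registered like this (hypothetical script
path) should be printed again:
```meson
# With the fix, anything this script writes to stdout/stderr is shown
# during 'meson install' instead of being swallowed.
meson.add_install_script('scripts/post_install.py')
```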
This is generally a good idea, and the tempfile is already instructed to
not auto-delete on close. It also fixes a bug on PyPy, where the file
isn't valid because it's not explicitly closed. This is probably due to
the garbage collection modes -- in CPython, the object goes out of scope
and gets automatically closed before we actually attempt to unpack it.
Fixes #11246
This solves rebuild issues when e.g. importing a .pxd header from a .pyx
file, just like C/C++ source headers. The transpiler needs to run again
in this case.
This functionality is present in the 3.0.0 alphas of cython, and is also
backported to 0.29.33.
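A sketch of the layout this affects (hypothetical module names, using the
python module; assumes 'cython' is among the project languages):
```meson
py = import('python').find_installation()
# mod.pyx cimports declarations from mod.pxd; with depfile support the
# cython transpile step reruns when the .pxd changes.
py.extension_module('mod', 'mod.pyx')
```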
Fixes #9049
We want to use as much default ninja behavior as we can, so reuse $out
instead of repeating the output file as a string in $ARGS, and raise
that into the build rule so it is only listed once.
If someone specifies a binary in a machine file, but the resulting
prog.found() is false because it doesn't actually exist on disk, then
the user was probably trying to disable finding that program. But
find_program() currently doesn't distinguish between a machine file
lookup returning a not-found program, and returning a dummy program
because there's no entry at all.
Explicitly check for a dummy program, rather than checking if the
program was found, before deciding whether to discard the lookup results
and continue trying other program lookup methods.
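A sketch of the intended behavior (hypothetical program name; the
machine-file entry is shown in a comment):
```meson
# Suppose the machine file contains:  [binaries]  someprog = '/nonexistent'
# i.e. the user is deliberately pointing at a path that does not exist.
prog = find_program('someprog', required : false)
# The not-found machine-file entry is now honored rather than being
# discarded in favour of a PATH search, so this holds:
assert(not prog.found())
```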
This was causing a ninja issue where the native headers were always
being generated because io.github.hse-project.hse_Hse.h was being
expected, but io.github.hse_project.hse_Hse.h was actually generated.