The `meson setup -Dfoo=bar builddir` command used to return success while
ignoring the new option values.
It now also updates options. This is useful because it means
`meson setup -Dfoo=bar builddir && ninja -C builddir` works regardless of
whether builddir already exists or not, and when run from a script,
changing options in the script will automatically trigger a reconfigure
if needed. This was already possible by always passing the --reconfigure
argument, but that triggers a reconfigure even when options did not
change.
Move this message up, before we attempt to change anything in the file
system (in this case, creating the directory structure).
If an error occurs, it will thus appear immediately after the message,
allowing us to debug what failed to install.
The most notable problem this causes is that when running `meson setup
--reconfigure` the build.ninja file is erroneously seen as out of date,
so ninja immediately tries to regenerate it again as it didn't see the
file get updated.
There are two problems.
The first problem is that we looked for the wrong file. Ninja creates a
few internal files, and one of them is the one we care about:
`.ninja_log`, which contains stat'ed timestamps for build outputs to aid
in checking when things are out of date. But what we actually checked
for was `.ninja_deps`, a file that contains a compressed database of
depfile outputs. If the latter exists, then the former surely exists
too.
Checking for the wrong file meant that we would only restat outputs when
some previously built build edges had depfile outputs.
The second problem is that we checked for this in os.getcwd() instead of
the configured build directory. This very easily fails to be correct,
except when reconfigure is triggered directly by ninja itself, in which
case we didn't need the restat to begin with.
Every time I update meson, I spend about 20 minutes on frustrated googling
to figure out how to update my build directory to work with the new version.
I'm forgetful, okay? Ease this pain point by suggesting a potential fix in
the error message.
This reverts commit f52bcaa27f.
It did not pass CI, and was merged anyway because there were two CI
errors in the same cygwin job. The other error was not the fault of this
commit, and since cygwin errors were glossed over because they were
"expected", the presence of a new error *added* by this commit was
overlooked.
Per the meson development policy, PRs which result in CI errors
can/should be reverted at will, no questions asked.
Commit f52bcaa27f introduced a few issues:
- a doc typo
- imports from utils.universal, which is not intended to be used
  directly; it is an internal wrapper that exists solely to make mesonlib
  work as it always did while simultaneously allowing `meson --internal`
  codepaths to avoid importing anything other than an extremely stripped
  down core
- a type-annotation-specific import was imported at runtime scope
This commit adds a new keyword argument to extension_module() that
enables a user to target the Python Limited API, declaring which version
of the Limited API they wish to use.
Two new unittests have been added to test this functionality.
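A minimal usage sketch, assuming the new keyword is called `limited_api`
and takes the targeted version as a string (the project, module name and
source file are placeholders):
```
project('limited-api-demo', 'c')

py = import('python').find_installation()

# Build the extension against the Python Limited API (stable ABI),
# declaring the oldest Python version whose Limited API we target.
py.extension_module('mod', 'mod.c',
  limited_api: '3.7',  # assumed keyword name and value format
  install: true,
)
```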
Performed using https://github.com/ilevkivskyi/com2ann
This has no actual effect on the codebase, as type checkers (still)
support both, and a negligible effect on runtime performance, since
__future__ annotations ameliorates that. Technically, the bytecode would
be bigger for non-function-local annotations, of which we have many
either way.
So if it doesn't really matter, why do a large-scale refactor? Simple:
because people keep wanting to, but it's getting nickel-and-dimed. If
we're going to do this, we might as well do it consistently in one shot,
using tooling that guarantees repeatability and correctness.
Repeat with:
```
com2ann mesonbuild/
```
Make them into real type annotations. These are the only ones that, if
automatically rewritten, would cause flake8 to error out with the
message: "E128 continuation line under-indented for visual indent".
ExternalProgram and CustomTarget have some use cases for producing
subclassed interpreter holders with more specific types and methods. In
order for those subclasses to properly refer to their held_object, we
need a shared base class that is still generic, though bound.
For the derived held objects, inherit from the base class and specify
the final types as the module-specific type.
- allow defines with leading whitespace
- always do replacement for cmakedefine
- output boolean value for cmakedefine01
- correct unittests for cmakedefine
- add cmakedefine-specific unittests
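A hedged sketch of the behaviour these changes affect, assuming a
cmake-style template `config.h.in` that contains the lines
`#cmakedefine HAVE_FOO` and `#cmakedefine01 HAVE_BAR`:
```
conf = configuration_data()
conf.set('HAVE_FOO', true)
conf.set('HAVE_BAR', false)

# format: 'cmake' makes configure_file() interpret #cmakedefine and
# #cmakedefine01 lines in the template.
configure_file(
  input: 'config.h.in',
  output: 'config.h',
  configuration: conf,
  format: 'cmake',
)
```
With the rules above, the `#cmakedefine` line is always replaced (here it
would be expected to become `#define HAVE_FOO`), and the `#cmakedefine01`
line would be expected to produce a boolean value, `#define HAVE_BAR 0`.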
clang has supported the gcc syntax since version 3.3.0, released 10
years ago. It's better than clang's own flag because it takes a "when"
argument, which allows us to explicitly ask for "auto". This is useful
when overriding flags that came from elsewhere.
Before this patch, meson was just treating b_colorout="auto" as "always".
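For context, a hedged sketch of how a project would opt into the
automatic behaviour (the project name and language are placeholders;
`b_colorout` is the existing base option discussed above):
```
project('color-demo', 'c',
  # 'auto' now asks the compiler itself to decide (e.g. by passing
  # -fdiagnostics-color=auto to clang) instead of forcing 'always'.
  default_options: ['b_colorout=auto'],
)
```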
meson tests enable PYTHONWARNDEFAULTENCODING by default and
make EncodingWarning fatal too.
Starting with Python 3.11, CPython warns not only if no encoding is
passed to open() but also for things like subprocess.check_output(). This
made the call in vsenv.py fail and in turn made test_vsenv_option fail.
check_output() here calls a .bat file which in turn calls vcvars. I don't
know what encoding is supposed to be used there, so just be explicit with
the locale encoding to silence the warning.
When an installed static library A links to an internal static library B
built using a custom_target(), raise an error instead of a warning. This
is because, to be usable, A needs to contain B, which would require
extracting the archive to get its object files.
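A minimal sketch of the setup that now errors out at configure time (the
`build_libb.sh` helper and the file names are hypothetical):
```
# B: an internal static library produced by a custom_target(), so Meson
# cannot extract its object files.
libb = custom_target('libb',
  input: 'b.c',
  output: 'libb.a',
  command: [find_program('build_libb.sh'), '@INPUT@', '@OUTPUT@'],
)

# A: an installed static library linking to B. To be usable after
# installation, A would have to contain B's objects, hence the error.
liba = static_library('a', 'a.c',
  link_with: libb,
  install: true,
)
```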
This used to work, but was printing a warning and was installing a
broken static library, because we used to overlink in many cases, and
that got fixed in Meson 1.2.0. It now fails at link time with symbols
from the custom target not being defined. It's better to turn the
warning into a hard error at configure time.
While at it, we noticed this situation can happen for any internal custom
or Rust target we link to, recursively.
get_internal_static_libraries_recurse() could be called on CustomTarget
objects which do not implement it, and even if we did not call that
method, it would still fail when trying to call extract_all_objects() on
it.
Fixes: #12006
Projects that prefer GNU C but can fall back to ISO C can now set, for
example, `default_options: 'c_std=gnu11,c11'`, and Meson will use gnu11
when available, falling back to c11 otherwise. It is an error only if
none of the values are supported by the current compiler.
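For illustration (the project name and language are placeholders; the
option value is taken from the example above):
```
project('cstd-demo', 'c',
  # gnu11 is used when the compiler supports it; otherwise Meson falls
  # back to c11, and only errors if neither value is supported.
  default_options: ['c_std=gnu11,c11'],
)
```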
This allows deprecating gnuXX values for the MSVC compiler, which means
that `default_options: 'c_std=gnu11'` will now print a warning with MSVC
but still fall back to the 'c11' value. No warning is printed if at least
one of the values is valid, e.g. `default_options: 'c_std=gnu11,c11'`.
In the future that deprecation warning will become a hard error, because
`c_std=gnu11` should mean GNU C is required, for projects that cannot be
built with MSVC for example.
Improve the error message when a build target is assigned as a
dependency of another build target, which makes it easier to pinpoint
where the issue lies.
In the example that follows, modules/meson.build:294 is in a for loop
creating library targets from an array of dictionaries, and doesn't point
to the location where interop_sw_plugin is assigned vlc_opengl:
Before:
modules/meson.build:294:17: ERROR: Tried to use a build target as a dependency.
You probably should put it in link_with instead.
After:
modules/meson.build:294:17: ERROR: Tried to use a build target vlc_opengl as a dependency of target interop_sw_plugin.
You probably should put it in link_with instead.
It would probably be best to pinpoint directly where the assignment was
made, but that is harder, so start simple by stating what is affected by
the error.
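As a hedged sketch of the kind of mistake the message points at (target
names follow the example above; the sources are placeholders):
```
vlc_opengl = library('vlc_opengl', 'vlc_opengl.c')

# Wrong: a build target passed through dependencies: triggers the error.
interop_sw_plugin = library('interop_sw_plugin', 'interop_sw.c',
  dependencies: [vlc_opengl],
)
```
The fix is to pass vlc_opengl via `link_with:` instead, or to wrap it in
`declare_dependency(link_with: vlc_opengl)` if a dependency object is
really wanted.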
If we have a build- or host-architecture compiler, we can detect mips64
vs. mips by the fact that mips64 compilers define __mips64. However,
machine_info_can_run() doesn't provide any compilers, because it is
interested in the architecture of the underlying kernel. If we don't
return mips64 when running on a mips64 kernel, machine_info_can_run()
will wrongly say that we can't run mips64 binaries.
If we're running a complete 32-bit mips user-space on a mips64 kernel,
it's OK to return mips64 in the absence of any compilers, as a result
of the previous commit "environment: Assume that mips64 can run 32-bit
mips binaries".
Resolves: https://github.com/mesonbuild/meson/issues/12017
Bug-Debian: https://bugs.debian.org/1041499
Fixes: 6def03c7 "detect_cpu: Fix mips32 detection on mips64"
Signed-off-by: Simon McVittie <smcv@debian.org>
The relationship between mips64 and mips is similar to the relationship
between x86_64 and x86. Representing it here is necessary so that we
will not require an exe_wrapper when cross-compiling for 32-bit mips on
mips64, or when a complete 32-bit mips user-space runs on a 64-bit kernel
without using linux32 to alter uname(2) to pretend to be 32-bit.
Signed-off-by: Simon McVittie <smcv@debian.org>