This makes it much easier to see what we're ignoring, as well as
allowing pylint to automatically enforce any lints that currently pass
but aren't yet in the allow list.
This used to be fine until imports were removed from this file. Now a
function annotated as T.NoReturn doesn't actually tell mypy that it
cannot return, so we have to tell it manually.
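For background, a generic illustration (not the actual meson code) of what a
T.NoReturn annotation normally tells mypy:

```python
import typing as T

def bail(msg: str) -> T.NoReturn:
    # NoReturn tells mypy that control never continues past a call to this function.
    raise SystemExit(msg)

def check(value: T.Optional[str]) -> str:
    if value is None:
        bail('value is required')
    # If mypy can see the NoReturn annotation, it knows `value` is a str here;
    # if it cannot, it will complain that an Optional[str] may be returned.
    return value
```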
We no longer need these upfront at all, since we now import the ones we
need for the language we are detecting, at the time of actual detection.
This avoids importing 28 files, consisting of just under 9,000 lines of
code, at interpreter startup. Now it is only imported for whichever
languages are actually invoked by add_languages, which may be none at
all. And even if we do end up importing a fair chunk of it for
C/C++ projects, spreading the import cost around the interpreter runtime
helps responsiveness.
Only import the ones we need for the language we are detecting, once we
actually detect that language.
This will allow finally dropping the main imports of these files in a
follow-up commit.
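A minimal sketch of the pattern, with hypothetical module and function names
rather than meson's actual detection code:

```python
def detect_compiler_for_language(lang: str):
    # Import the per-language detection code only once that language is
    # actually requested, instead of importing every module at startup.
    if lang == 'c':
        from .detect_c import detect_c_compiler        # hypothetical names
        return detect_c_compiler()
    if lang == 'rust':
        from .detect_rust import detect_rust_compiler  # hypothetical names
        return detect_rust_compiler()
    raise ValueError(f'unsupported language: {lang}')
```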
Instead of comparing against specific compiler classes, check the
logical compiler id or language etc.
In a couple of cases we seem to have been missing things by being a bit
too strict about the exact class type.
We want to optimize out some internal codepaths used at build time by
avoiding work such as argparse. This doesn't work particularly well when
the argparse arguments are imported before those codepaths run: between
them, they indirectly import pretty much all code anywhere, and msetup
alone imports most of it.
Also make sure the regenerate internal script goes directly to msetup.
This is wasteful and generally unneeded, since code can just use the
compiler they detected instead of manually poking at the internals of
this subpackage.
It also avoids importing an absolute ton of code the instant one runs
`from . import compilers`
It turns out we don't generally need to proxy every compiler ever
through the top-level package. The number of times we directly poke at
one is negligible and direct imports are pretty clean.
In various situations we want to figure out what type of compiler we
have, because we want to know stuff like "is it the pgi one", or "does
it use msvc style". The compiler object has this property already, via
an API specifically designed to communicate this info, but instead we
performed isinstance checks on a compiler class.
This is confusing and indirect, and has the side effect of requiring
more imports everywhere. We should do away with it.
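For example, something along these lines (a sketch, not an exact call site);
get_id() and get_argument_syntax() are the compiler APIs meant to communicate
this information:

```python
# Before: import a concrete compiler class just to ask a question the
# compiler object can already answer (illustrative import path).
#     from ..compilers import PGICompiler
#     if isinstance(compiler, PGICompiler): ...

def extra_args(compiler) -> list:
    # After: ask the compiler object directly.
    args = []
    if compiler.get_id() == 'pgi':
        args.append('-Mpreprocess')        # flag chosen only for illustration
    if compiler.get_argument_syntax() == 'msvc':
        args.append('/nologo')             # flag chosen only for illustration
    return args
```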
In commit 47426f3663 we migrated to
typed_kwargs, but the validator received a list and checked whether the
list itself was a single Dependency object.
Fixes #10813
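The shape of the bug, as a sketch (the stand-in names below are illustrative,
not the exact typed_kwargs API):

```python
class Dependency:   # stand-in for meson's Dependency type
    pass

def validate_deps(value):
    # Broken: `value` is always a list here, so checking the list itself
    # against Dependency is always False and the check is meaningless.
    #     if not isinstance(value, Dependency):
    #         return 'dependencies must be Dependency objects'

    # Fixed: check each element of the list instead.
    for dep in value:
        if not isinstance(dep, Dependency):
            return f'expected a Dependency, got {type(dep).__name__}'
    return None
```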
Save off the hash of the wrap file when first configuring a subproject.
When reconfiguring a subproject, check the hash of the wrap file
against the stored hash. If they don't match then warn the user.
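A minimal sketch of the idea (hash choice and storage are illustrative, not
the actual wrap-resolver internals):

```python
import hashlib

def wrap_file_hash(path: str) -> str:
    with open(path, 'rb') as f:
        return hashlib.sha256(f.read()).hexdigest()

def check_wrap_unchanged(wrap_path: str, stored_hash: str) -> None:
    # On reconfigure, compare the current wrap file against the hash saved
    # when the subproject was first configured, and warn on mismatch.
    if wrap_file_hash(wrap_path) != stored_hash:
        print(f'WARNING: {wrap_path} has changed since the subproject was '
              'first configured.')
```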
The comment, and some settings that appear to be related to it, were
introduced in f21685a833, and the comment
appears to document some of the fields added in that commit. This commit
moves the comment back to the field it appears to document.
Add empty functions for all commands defined in the autocompletion script.
When these functions are not defined, bash raises the following error:

    $ meson init <TAB>
    -bash: _meson-init: command not found
Signed-off-by: Liam Beguin <liambeguin@gmail.com>
Move _meson-introspect() to follow the command list defined at the top
of the script, which follows the order of the help message.
Signed-off-by: Liam Beguin <liambeguin@gmail.com>
In the debug logs, always log if a dependency lookup raises a
DependencyException. In the `required: false` case, this information
would otherwise disappear forever, and we would just not even log that
we tried it -- it doesn't appear in "(tried x, y and z)".
In the `required: true` case, we would re-raise the first exception if
the dependency failed to be detected. Update the raise message with the same
information we print to the debug logs, indicating which dependency and
which method was used in the failing attempt.
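Roughly the shape of the change, as a self-contained sketch (the logger and
exception class stand in for meson's own):

```python
import logging
log = logging.getLogger(__name__)          # stands in for meson's debug log

class DependencyException(Exception):      # stand-in for meson's exception type
    pass

def find_dependency(name, candidates, required):
    """candidates: iterable of (method_name, lookup_callable) pairs."""
    first_error = None
    for method, lookup in candidates:
        try:
            dep = lookup(name)
            if dep is not None:
                return dep
        except DependencyException as e:
            msg = f'Dependency lookup for {name} with method {method!r} failed: {e}'
            # Always log the failure, even when required=False, so the attempt
            # does not silently disappear.
            log.debug(msg)
            if first_error is None:
                first_error = DependencyException(msg)
    if required and first_error is not None:
        # Re-raise the first failure, carrying the same context as the log.
        raise first_error
    return None
```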
functools.partial preserves information about how the method was
created, lambdas do not. Also, we just want to freeze the first argument
and forward the rest anyway.
This also lets us get rid of a mypy error that was being ignored.
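A small generic illustration of the difference (not the actual call site):

```python
import functools

def detect(env, language, extra=None):
    return (env, language, extra)

# With a lambda, introspection only sees an anonymous function.
bound = lambda *args, **kwargs: detect('host-env', *args, **kwargs)

# With functools.partial, the wrapped callable and the frozen first argument
# stay visible -- and freezing the first argument is all we wanted anyway.
bound = functools.partial(detect, 'host-env')
print(bound.func, bound.args)   # <function detect ...> ('host-env',)
```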
A bunch of SystemDependency subclasses overrode log_tried() even though
they used the same function anyway. Delete them -- they still print
the same thing either way.
When at least one Rust target is present, we now generate a
rust-project.json file, which can be consumed by rust-analyzer. This is
placed in the build directory, and the editor must be configured to look
for this (as it is not a default search path).
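For reference, the kind of structure rust-analyzer reads from that file, shown
here as the Python dict a generator might serialize; the field selection is a
minimal illustration, not meson's exact output:

```python
import json

rust_project = {
    'sysroot_src': '/path/to/rust/library',   # illustrative path
    'crates': [
        {
            'root_module': '../src/lib.rs',   # relative to the build directory
            'edition': '2018',
            'deps': [],                        # indices into 'crates', plus extern names
            'cfg': [],
        },
    ],
}

with open('rust-project.json', 'w') as f:
    json.dump(rust_project, f, indent=2)
```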
In the long run collections is the right thing to use, but until we can
require 3.9.x as the minimum we cannot annotate the collections.abc
version, only the typing version. This allows correct type annotations
in users of `CompilerArgs`, as mypy (and friends) can determine that
`CompilerArgs` is ~= `T.Sequence[str]`
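A minimal sketch of the pattern (not the real CompilerArgs implementation):

```python
import typing as T

class CompilerArgs(T.MutableSequence[str]):
    # Subclassing the typing alias (rather than collections.abc.MutableSequence,
    # which is not subscriptable before Python 3.9) lets mypy treat instances
    # as roughly a T.Sequence[str].
    def __init__(self, args: T.Iterable[str] = ()) -> None:
        self._data = list(args)

    def __getitem__(self, index):
        return self._data[index]

    def __setitem__(self, index, value) -> None:
        self._data[index] = value

    def __delitem__(self, index) -> None:
        del self._data[index]

    def __len__(self) -> int:
        return len(self._data)

    def insert(self, index: int, value: str) -> None:
        self._data.insert(index, value)
```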
Gettext should search for input files relative to the (sub)project
source root, not the global source root.
This change exposes a root_subdir member in ModuleState.
When calculating the output filename for a compiled object, we sanitize
the whole input path, more or less. In cases where the input path is
very long, this can overflow the max length of an individual filename
component.
At the same time, we do want unique names so people can recognize what
these outputs actually are. Compromise (sketched below):
- for filepaths with more than 5 components (which are a lot more likely
to cause problems, and simultaneously less likely to have crucial
information that far back in the filepath)
- if a SHA-1 hash of the full path, replacing all *but* those last 5
components, produces a path that is *shorter* than the original path
... then use that hash-based canonicalization of the path. Because a
hash is used, it's unique enough to guarantee correct builds; because
we keep the last 5 components intact, it's easy to tell what the output
file is compiled from.
Fixes building in ecosystems such as spack, where the build environment
is a very long path containing repetitions of
`__spack_path_placeholder__/` for... reasons of making the path long.
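A hedged sketch of the scheme (function name, hash truncation, and separator
handling are illustrative):

```python
import hashlib
from pathlib import PurePath

def canonicalize_object_path(path: str, keep: int = 5) -> str:
    parts = PurePath(path).parts
    if len(parts) <= keep:
        return path
    # Replace everything but the last `keep` components with a hash of the
    # full path, so the result is both unique and still recognizable.
    digest = hashlib.sha1(path.encode()).hexdigest()[:16]
    candidate = '/'.join((digest,) + parts[-keep:])
    # Only use the hashed form when it is actually shorter than the original.
    return candidate if len(candidate) < len(path) else path
```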
%% survived into the output since 038b31e72b. That failed to fail, at
least in the common cases, because the whole command sequence is unnecessary /
redundant - it appears to come from cmake, which executes multiple commands
within a single CustomBuild element.
GLib installs a few executables that are not needed by applications that
use the glib libraries, but are used either by build systems or by user
scripts. Debian splits them into libglib2.0-dev-bin and libglib2.0-bin
packages. Another example is GStreamer tools (e.g. gst-launch-1.0) that
Debian packages separately in gstreamer1.0-tools.
It is common enough that Meson documentation should recommend a tag for
consistency across projects.
In commit d326c87fe2 we added a special
holder object for cached compilation results, with some broken
attributes:
- "command", that was never set, but used to print the log
- "args", that was set to some, but not all, of the information a fresh
compilation would log, but never used
Remove the useless args attribute, call it command, and use it properly.
We compared a Visual Studio (the IDE) version, but we wanted an MSVC (the
compiler) version. This caused the option to be passed to a few too many
versions of MSVC, emitting a "D9002 : ignoring unknown option" warning on
those systems.
Compare the correct version using the version mapping from
https://walbourn.github.io/vs-2017-15-7-update/
Fixes #10787
Co-authored-by: CorrodedCoder <38778644+CorrodedCoder@users.noreply.github.com>
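Illustratively, the fix amounts to something like the following sketch
(assuming meson's version_compare helper; the flag is hypothetical):

```python
from mesonbuild.mesonlib import version_compare

def new_option_args(compiler) -> list:
    args = []
    # compiler.version holds the cl.exe (MSVC) version, e.g. '19.14.26428'.
    # Comparing it against the IDE version '15.7' is true for every modern
    # cl.exe, which is how the option leaked onto too many MSVC versions.
    # Per the mapping above, VS 2017 15.7 corresponds to MSVC 19.14:
    if version_compare(compiler.version, '>=19.14'):
        args.append('/some-new-option')    # hypothetical flag for illustration
    return args
```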
When we do wrap resolution, we do a case-insensitive lookup because
keys, i.e. `dep_name = variable_name`, are case insensitive. In order to
check whether we should process a subproject, we need to know if the
lowercased dependency name matches.
We do this by looking up the lowercase name, and assuming that the
stored name is also lowercase. But dependency_names is not stored in
lowercase, so we need to manually force it to be consistent.
Likewise, when looking up the wrap name (which works like
dependency_names and doesn't need a provide section at all), the on-disk
filename of the wrap file is case sensitive, but needs to be manually
forced to lowercase for consistency.
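A minimal sketch of the invariant being enforced (illustrative names, not the
actual wrap resolver):

```python
class WrapIndex:
    # Case-insensitive lookups: every key is forced to lowercase on the way in,
    # so a single .lower() on the way out is enough.
    def __init__(self) -> None:
        self._provided = {}   # lowercased dependency/wrap name -> wrap

    def add(self, name: str, wrap) -> None:
        # dependency_names and on-disk wrap filenames are not guaranteed to be
        # lowercase, so normalize here rather than trusting the caller.
        self._provided[name.lower()] = wrap

    def lookup(self, dep_name: str):
        return self._provided.get(dep_name.lower())
```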