An access(2) check for X_OK can return true even when the file is not
actually executable. Check the stat(2) mode bits explicitly instead.
This fixes builds and installs executed as root on Solaris and illumos
that contain non-executable scripts.
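A minimal sketch of the idea in Python (an illustration, not Meson's actual helper): inspect the stat(2) mode bits rather than trusting access(2), which can report X_OK for root regardless of the bits.

    import os
    import stat

    def is_executable(path: str) -> bool:
        # os.access(path, os.X_OK) may return True for root even when no
        # execute bit is set, so inspect the stat(2) mode bits directly.
        st = os.stat(path)
        return bool(st.st_mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH))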
Meson's documentation about cross-compilation finally made me understand
the source of the typical confusion about machine names. Thanks, but let's
make it even better. Don't wait until the very end of the section to reveal
the most important information: that machine names are relative. For
suspense we already have TV shows; spill the beans much earlier.
Also fix the first, simplest cross-compilation example: target is
irrelevant.
cmake: get language from Meson project if not specified as dependency(..., language: ...)
deps: add threads method:cmake
dependency('threads', method: 'cmake') is useful for cmake unit tests
and for those who just want to find threads using CMake.
cmake: project(... Fortran) generally also requires C language
This default behavior can have surprising and time-consuming outcomes.
I was wondering why certain tests using several external, fixed libraries
would fail only under Meson and not with CMake or manual runs.
It turned out that mtest.py enables MALLOC_PERTURB_ by default, which is
surprising (a topic for another Issue/PR).
At least this surprising default is now documented, along with workarounds.
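As I understand it, mtest.py picks a random MALLOC_PERTURB_ value when none is set in the environment, and setting MALLOC_PERTURB_=0 yourself disables the perturbation. A hedged sketch of that pattern, not the actual mtest.py code:

    import os
    import random

    def build_test_env(base_env: dict) -> dict:
        # Sketch of the mtest-style behaviour: pick a random MALLOC_PERTURB_
        # value unless the caller already set one (e.g. MALLOC_PERTURB_=0
        # to disable the glibc allocator perturbation).
        env = dict(base_env)
        if 'MALLOC_PERTURB_' not in env:
            env['MALLOC_PERTURB_'] = str(random.randint(1, 255))
        return env

    env = build_test_env(dict(os.environ))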
wrap: add imposter URL test
This test shows that the meson wrap subsystem historically allowed
imposter URLs like https://wrapdb.mesonwrap.com.evil/v1/foo.zip,
while the new code does not.
In my opinion, we should not fall back to http:// from the SSL HSTS WrapDB URL
**for systems that have Python SSL**, as that defeats the point
of HSTS + SSL.
Systems that do not have Python SSL continue to work, with a
colored mlog.warning instead of only a plain stderr print.
attempt to stop masquerading URLs containing wrapdb.mesonbuild.com.evil.stuff.com
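The danger is naive substring matching: a host such as wrapdb.mesonbuild.com.evil.stuff.com contains the trusted name without being it. A hedged sketch of an exact hostname comparison (illustrative only, not the actual wrap code; the trusted host name is an assumption here):

    import urllib.parse

    TRUSTED_HOST = 'wrapdb.mesonbuild.com'

    def is_trusted_wrapdb_url(url: str) -> bool:
        # Compare the parsed hostname exactly rather than with a substring
        # check, so masquerading hosts are rejected.
        parsed = urllib.parse.urlparse(url)
        return parsed.scheme == 'https' and parsed.hostname == TRUSTED_HOST

    assert not is_trusted_wrapdb_url('https://wrapdb.mesonbuild.com.evil.stuff.com/v1/foo.zip')
    assert is_trusted_wrapdb_url('https://wrapdb.mesonbuild.com/v1/foo.zip')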
This partially reverts commit fe853ee516.
In particular it reverts the changes to the DynamicLinker __init__
methods. Frankly, the reverted change is *bad* because it allows a mixin
class (which should not be directly instantiated) to be instantiated
directly, and it complicates the init process. It also increases the
amount of code for zero gain and makes the code less resilient to refactors.
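To illustrate the concern with hypothetical names (not Meson's actual classes): a mixin should only contribute behaviour to a concrete class, so it should not carry the __init__ that makes it independently instantiable.

    import typing

    class PosixLinkerMixinExample:
        # Hypothetical mixin: contributes shared behaviour only, owns no
        # state, and deliberately has no __init__, so instantiating it on
        # its own makes no sense.
        def get_output_args(self, outputname: str) -> typing.List[str]:
            return ['-o', outputname]

    class ExampleDynamicLinker(PosixLinkerMixinExample):
        # The concrete class owns __init__ and the full state.
        def __init__(self, exelist: typing.List[str]) -> None:
            self.exelist = exelist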
There are two awful things about CompilerArgs. One is that it directly
inherits from list, and there are a lot of subtle gotchas with
inheriting from builtin types. The second is that the class allows its
arguments to be passed in any order; that's bad. This also fully
annotates the CompilerArgs class, so mypy can type check it for us.
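One of those gotchas, sketched with a hypothetical stand-in class: operators defined by list return plain lists, silently dropping the subclass and whatever logic it adds.

    class CompilerArgsLike(list):
        # Hypothetical stand-in for a list subclass such as CompilerArgs.
        pass

    args = CompilerArgsLike(['-I/usr/include', '-O2'])
    combined = args + ['-Wall']
    print(type(combined))  # <class 'list'>: list.__add__ returns a plain
                           # list, so any de-duplication or ordering logic
                           # in the subclass is silently bypassed.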
`from foo import` should be used sparingly because of namespace
pollution, especially since those names will be exported
unconditionally. For typing this is extra annoying because any time
someone wants to use another symbol from the typing module they have to
add it to the import line. Use `import typing` to avoid all of this.
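A short sketch of the two styles:

    # Discouraged: every new annotation means touching this line, and the
    # imported names leak into (and are re-exported from) the module namespace.
    from typing import List, Optional

    # Preferred: one import, symbols referenced through the module.
    import typing

    def split_args(cmd: typing.Optional[str]) -> typing.List[str]:
        return cmd.split() if cmd else []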
There are three problems:
1) Dunders like `__lt__` and `__gt__` don't return bool; they return
either a bool or the NotImplemented singleton to signal that they don't
know how to do the comparison.
2) They don't take type `object`, they take `typing.Any`.
3) They need to return NotImplemented if the comparison is not
implemented; this allows Python to try the inverse dunder on the
other object. If that object returns NotImplemented as well, a
TypeError is raised.
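A hedged sketch of a comparison dunder following these points, using a hypothetical Version class; the return annotation is omitted here because there was no clean spelling for "bool or NotImplemented" before Python 3.10 added types.NotImplementedType.

    import typing

    class Version:
        def __init__(self, parts: typing.Sequence[int]) -> None:
            self.parts = tuple(parts)

        # 2) the other operand is annotated typing.Any, not a concrete type
        def __lt__(self, other: typing.Any):
            # 1) + 3) return the NotImplemented singleton (do not raise) when
            # the comparison is unsupported; Python then tries the inverse
            # dunder on the other object and only raises TypeError if that
            # also returns NotImplemented.
            if not isinstance(other, Version):
                return NotImplemented
            return self.parts < other.parts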
The 'output' field of the subprocess.CalledProcessError exception is
populated only when subprocess.check_output() is used; trying to access it
after calling subprocess.check_call() results in an unwanted exception
when a command returns a non-zero exit code, e.g.:
-----------------------------------------------------------------------
$ meson subprojects foreach false
Executing command in ./subprojects/sqlite-amalgamation-3250100
-> Not downloaded yet
Executing command in ./subprojects/gstreamer
Traceback (most recent call last):
  File "/home/ao2/meson/meson/mesonbuild/msubprojects.py", line 177, in foreach
    subprocess.check_call([options.command] + options.args, cwd=repo_dir)
  File "/usr/lib/python3.7/subprocess.py", line 363, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['false']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ao2/meson/meson/mesonbuild/mesonmain.py", line 129, in run
    return options.run_func(options)
  File "/home/ao2/meson/meson/mesonbuild/msubprojects.py", line 248, in run
    options.subprojects_func(wrap, repo_dir, options)
  File "/home/ao2/meson/meson/mesonbuild/msubprojects.py", line 180, in foreach
    out = e.output.decode().strip()
AttributeError: 'NoneType' object has no attribute 'decode'
-----------------------------------------------------------------------
Use subprocess.check_output() instead, and behave more like git commands
in handling stderr.
This makes it possible to actually run commands on all subprojects,
allowing them to fail on some subprojects and succeed on others.
Also catch the case of a missing command and print an error message in
that case as well.
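A hedged sketch of the approach (not the exact msubprojects.py code): with check_output() the exception carries the output, stderr is folded into it git-style, and a missing command surfaces as FileNotFoundError.

    import subprocess

    def run_in_subproject(cmd: list, repo_dir: str) -> bool:
        # check_output() populates CalledProcessError.output on failure;
        # check_call() leaves it as None, which is what caused the
        # AttributeError in the traceback above.
        try:
            out = subprocess.check_output(cmd, cwd=repo_dir,
                                          stderr=subprocess.STDOUT)
        except subprocess.CalledProcessError as e:
            print(e.output.decode().strip())
            return False
        except FileNotFoundError:
            print('Command not found:', cmd[0])
            return False
        print(out.decode().strip())
        return True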
Some slight refactoring of the dependency classes, and I switched the
Elbrus compiler to GnuLikeCompiler.
This is also the correct usage according to the GnuLikeCompiler
documentation.