* add support for cups dependencies
libcups has its own cups-config tool rather than using pkg-config.
This adds support for cups-config, based on pcap-config and
sdl2-config implementations.
This change also includes a unit test and documentation for the cups
dependency object, and adds the libcups2 dependency to the CI image.
Libpcap has its own pcap-config tool rather than using pkg-config. Add
support for pcap-config, based on the existing sdl2-config
implementation.
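For reference, the *-config pattern shared by cups-config, pcap-config
and sdl2-config boils down to asking the tool for a version, compile
args and link args. A standalone sketch (the helper name is mine, and
it assumes the tool is installed and on PATH):

  import shlex
  import subprocess

  def config_tool_values(tool):
      """Query a *-config style tool (e.g. cups-config, pcap-config,
      sdl2-config) for version, compile args and link args."""
      def run(*args):
          out = subprocess.run([tool, *args], check=True,
                               capture_output=True, text=True).stdout
          return out.strip()

      return {
          'version': run('--version'),
          'compile_args': shlex.split(run('--cflags')),
          'link_args': shlex.split(run('--libs')),
      }

  # Example (requires the tool on PATH):
  # print(config_tool_values('cups-config'))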
It is not feasible to test all failure modes by creating projects in
`test cases/failing`; that would be an explosion of files, and that
mechanism is too coarse anyway: we have no way to ensure that the
expected error is being raised.
See FailureTests.test_dependency for an example.
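The approach, sketched below with a throwaway harness rather than the
real FailureTests class (the 'meson setup' invocation and the
error-message pattern are assumptions), is to write a tiny meson.build
into a temporary directory and assert that configuration fails with
the expected message:

  import subprocess
  import tempfile
  import unittest
  from pathlib import Path

  class DependencyFailureTest(unittest.TestCase):
      """Generate a tiny project on the fly and check that configuration
      fails with the expected message, instead of adding one directory
      per failure mode under 'test cases/failing'.  Assumes 'meson' is
      on PATH and a C compiler is available."""

      def assertMesonRaises(self, contents, regex):
          with tempfile.TemporaryDirectory() as srcdir:
              Path(srcdir, 'meson.build').write_text(
                  "project('failure probe', 'c')\n" + contents)
              result = subprocess.run(
                  ['meson', 'setup', str(Path(srcdir, 'build')), srcdir],
                  capture_output=True, text=True)
              self.assertNotEqual(result.returncode, 0)
              self.assertRegex(result.stdout + result.stderr, regex)

      def test_missing_dependency(self):
          self.assertMesonRaises(
              "dependency('no-such-dependency-xyz')",
              r'[Dd]ependency.*not found')

  if __name__ == '__main__':
      unittest.main()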
This class now consolidates a lot of the logic that each external
dependency was duplicating in its class definition.
All external dependencies now set:
* self.version
* self.compile_args and self.link_args
* self.is_found (if found)
* self.sources
* etc
And the abstract ExternalDependency class defines the methods that
will fetch those properties. Some classes still override that for
various reasons, but those should also be migrated to properties as
far as possible.
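A condensed sketch of the resulting shape (illustrative only, not the
exact Meson code):

  class ExternalDependency:
      """Base class: subclasses fill in the attributes in their
      constructor and inherit the accessors instead of redefining
      them."""

      def __init__(self, name, kwargs):
          self.name = name
          self.is_found = False
          self.version = None
          self.compile_args = []
          self.link_args = []
          self.sources = []
          self.required = kwargs.get('required', True)

      def found(self):
          return self.is_found

      def get_version(self):
          return self.version

      def get_compile_args(self):
          return self.compile_args

      def get_link_args(self):
          return self.link_args

      def get_sources(self):
          return self.sources

  class FooConfigDependency(ExternalDependency):
      """Hypothetical subclass: only the detection logic lives here."""

      def __init__(self, kwargs):
          super().__init__('foo', kwargs)
          # Detection would set self.is_found, self.version,
          # self.compile_args and self.link_args here.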
Next step is to consolidate and standardize the way in which we call
'configuration binaries' such as sdl2-config, llvm-config, pkg-config,
etc. Currently each class has to duplicate code involved with that
even though the format is very similar.
Currently only pkg-config supports multiple version requirements, and
some classes don't even properly check the version requirement. That
will also become easier now.
Use the compiler object to find gtest libraries. This fixes the
following cases:
- cross compiling looked in host library paths
- static libgtest was not supported
Change-Id: If42cdf873db38676b99adca60752f652aff097a2
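The underlying idea, shown here as a plain subprocess link check
rather than Meson's compiler object API (the compiler name, flags and
helper are assumptions): ask the configured, possibly cross, compiler
whether it can link against gtest instead of scanning the build
machine's library directories by hand.

  import subprocess
  import tempfile
  from pathlib import Path

  def compiler_finds_gtest(cxx='c++', extra_args=()):
      """Return True if the given C++ compiler can link a trivial
      program against gtest.  The search is delegated to the compiler,
      so a cross compiler looks in its own library paths; passing
      '-static' via extra_args makes the link fail unless static
      libraries (including libgtest) are available."""
      with tempfile.TemporaryDirectory() as d:
          srcfile = Path(d, 'probe.cc')
          srcfile.write_text('int main(void) { return 0; }\n')
          cmd = [cxx, str(srcfile), '-o', str(Path(d, 'probe')),
                 *extra_args, '-lgtest', '-lpthread']
          return subprocess.run(cmd, capture_output=True).returncode == 0

  # print(compiler_finds_gtest())
  # print(compiler_finds_gtest(extra_args=['-static']))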
The old caching was a mess of spaghetti code layered over pasta code.
The new code is well-commented, is clear about what it's trying to do,
and uses a blacklist of keyword arguments instead of a whitelist while
generating identifiers for dep caching, which makes it much more robust
against future changes.
The only side-effect of forgetting about a new keyword argument would
be that the dependency would not be cached unless the value of that
keyword argument were the same in the cached and new dependency.
There are also more tests which identify scenarios that were broken
earlier.
All our cached_dep magic was totally useless since we ended up using
the same identifier for native and cross deps. Just nuke all this
cached_dep code since it is very error-prone and improve the
identifier generation instead.
For instance, this is broken *right now* with the `type_name` kwarg.
Add a bunch of tests to ensure that all this actually works...
Closes https://github.com/mesonbuild/meson/issues/1736
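A sketch of the blacklist-based identifier described above; the exact
set of excluded keyword arguments here is an assumption, not Meson's
real list:

  def dep_identifier(name, kwargs, want_cross):
      """Build a hashable cache key for a dependency lookup.

      Only keyword arguments known to be irrelevant for detection are
      excluded (a blacklist); anything unknown is included, so at worst
      an unrecognised kwarg prevents cache sharing rather than letting
      two different dependencies collide.  Native and cross lookups
      always get distinct identifiers."""
      blacklist = {'required', 'fallback', 'default_options'}  # assumed
      ident = [name, 'cross' if want_cross else 'native']
      for key in sorted(kwargs):
          if key in blacklist:
              continue
          value = kwargs[key]
          if isinstance(value, list):
              value = tuple(value)  # lists are unhashable; freeze them
          ident.append((key, value))
      return tuple(ident)

  # dep_identifier('llvm', {'modules': ['core'], 'required': False}, False)
  # and the same call with required=True yield the same key; the cross
  # variant yields a different one.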
This adds a set of private flags that are removed from the output of
`llvm-config --cppflags` before storing them. This is unfortunately
necessary since some versions of LLVM include absolute garbage like
'-DNDEBUG' in their CPP flags, and we don't want to pass those.
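Roughly, the filtering amounts to this (only '-DNDEBUG' is taken from
the text above; the other blacklist entries are placeholders, not the
real private-flag list):

  import shlex
  import subprocess

  # Flags we never want to inherit from `llvm-config --cppflags`.
  PRIVATE_LLVM_FLAGS = {'-DNDEBUG', '-fno-exceptions'}  # placeholders

  def llvm_cpp_flags(llvm_config='llvm-config'):
      """Return `llvm-config --cppflags` with the private flags
      stripped out before they are stored."""
      out = subprocess.run([llvm_config, '--cppflags'], check=True,
                           capture_output=True, text=True).stdout
      return [f for f in shlex.split(out) if f not in PRIVATE_LLVM_FLAGS]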
Refactor to use ExternalProgram for the command instead of duplicating
that code (badly). Also improve messages to say "or not executable"
when a script/command is not found.
Also allow ExternalPrograms to be passed as arguments to
run_command(). The only thing we're doing by preventing that is
forcing people to use prog.path().
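The improved message boils down to distinguishing 'not found' from
'found but not executable'. A standalone sketch of that check (not
Meson's ExternalProgram code):

  import os
  import shutil

  def describe_program(name):
      """Resolve a command the way an ExternalProgram lookup would and
      report why it is unusable, distinguishing 'not found' from
      'found but not executable'."""
      path = shutil.which(name)
      if path is not None:
          return f'Program {name!r} found at {path}'
      # shutil.which() already checks the execute bit, so look again
      # without it to tell the two failure modes apart.
      path = shutil.which(name, mode=os.F_OK)
      if path is not None:
          return f'Program {name!r} found at {path} but is not executable'
      return f'Program {name!r} not found'

  # print(describe_program('python3'))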
This adds a dependency wrapper for llvm-config, based on the wxwidgets
dependency. It handles libs, version, include dir, and LLVM's unique
concept of components. These components are individual pieces of the
LLVM library that may or may not be available depending on the platform.
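A sketch of the component check (assumes llvm-config is on PATH;
'--components' lists everything the installed LLVM provides):

  import subprocess

  def missing_llvm_components(wanted, llvm_config='llvm-config'):
      """Return the requested components that this LLVM build does not
      provide, by comparing against `llvm-config --components`."""
      out = subprocess.run([llvm_config, '--components'], check=True,
                           capture_output=True, text=True).stdout
      available = set(out.split())
      return sorted(set(c.lower() for c in wanted) - available)

  # missing_llvm_components(['core', 'bitreader', 'nvptx'])
  # -> ['nvptx'] on an LLVM built without the NVPTX backend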
Meson has a common pattern of using 'if len(foo) == 0:' or
'if len(foo) != 0:'; however, this is a well-known anti-pattern in
Python. Tests for emptiness/non-emptiness should instead be done with a
simple 'if foo:' or 'if not foo:'.
Consider the following:
>>> import timeit
>>> timeit.timeit('if len([]) == 0: pass')
0.10730923599840025
>>> timeit.timeit('if not []: pass')
0.030033907998586074
>>> timeit.timeit("if len(['a', 'b', 'c', 'd']) == 0: pass")
0.1154778649979562
>>> timeit.timeit("if not ['a', 'b', 'c', 'd']: pass")
0.08259823200205574
>>> timeit.timeit('if len("") == 0: pass')
0.089759664999292
>>> timeit.timeit('if not "": pass')
0.02340641999762738
>>> timeit.timeit('if len("foo") == 0: pass')
0.08848102600313723
>>> timeit.timeit('if not "foo": pass')
0.04032287199879647
There is one additional case: 'if len(foo.strip()) == 0', which can be
replaced with 'if foo.isspace()' (for non-empty strings):
>>> timeit.timeit('if len(" ".strip()) == 0: pass')
0.15294511600222904
>>> timeit.timeit('if " ".isspace(): pass')
0.09413968399894657
>>> timeit.timeit('if len(" abc".strip()) == 0: pass')
0.2023209120015963
>>> timeit.timeit('if " abc".isspace(): pass')
0.09571301700270851
In other words, it's always a win not to use len() when you don't
actually want to check the length.
When cross compiling and looking for moc/uic/rcc, you really want the
host binary.
Still fall back to QT_INSTALL_BINS as it appears that's the only
variable available with qt4.
Ideally, all dependency objects should support this, but it's a lot of
work and isn't supported by all dependency types (like frameworks and
pkg-config), so for now just enable it for external libraries.
... based on the compiler object
This addresses issue #1569
Need to be careful about using -isystem with the standard include
dirs (thanks to the unit tests for catching this). It may be
worthwhile trying to detect what the include dirs are.
Other dependencies (GTest) just avoid adding the include dir for those
system includes. Do the same here.
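The -isystem caution amounts to never re-adding the compiler's default
include directories as system dirs. A sketch, with the default-dir
list passed in explicitly since detecting it is compiler-specific:

  import os

  def include_args(include_dirs, default_dirs, is_system):
      """Turn include directories into compiler arguments, skipping
      the compiler's own default dirs: re-adding those with -isystem
      can reorder the search path and break libc/libstdc++ headers."""
      defaults = {os.path.normpath(d) for d in default_dirs}
      prefix = '-isystem' if is_system else '-I'
      args = []
      for d in include_dirs:
          if os.path.normpath(d) in defaults:
              continue  # already searched by default; leave it alone
          args.append(prefix + d)
      return args

  # include_args(['/usr/include', '/opt/foo/include'],
  #              ['/usr/include', '/usr/local/include'],
  #              is_system=True)
  # -> ['-isystem/opt/foo/include']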
Add a 'method' keyword argument to configure a detection method for
those dependency types that have more than one means of detection.
The default detection methods are unchanged if 'method' is not
specified, and all dependencies support the method 'auto', which is the
same as not specifying a method.
The dependencies which do support multiple detection methods
additionally support other values, depending on the dependency.
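In outline, the 'method' handling looks like this (a sketch, not the
real lookup code; the helper and error text are mine):

  def choose_detection_methods(requested, supported):
      """Return the detection methods to try, in order.

      'auto' (the default) tries every method the dependency supports;
      anything else must be one of the methods that dependency
      declares, e.g. 'pkg-config' or 'sdl2-config'."""
      if requested == 'auto':
          return list(supported)
      if requested not in supported:
          raise ValueError(
              f'Unsupported detection method {requested!r}; '
              f'expected one of: auto, {", ".join(supported)}')
      return [requested]

  # choose_detection_methods('auto', ['pkg-config', 'sdl2-config'])
  # -> ['pkg-config', 'sdl2-config']
  # choose_detection_methods('sdl2-config', ['pkg-config', 'sdl2-config'])
  # -> ['sdl2-config']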
Valgrind is a bit of a strange beast: in general use, one isn't supposed
to link against Valgrind's libs (they're non-PIC static libs); instead,
including the headers does the magic needed to make Valgrind work.
This patch implements a simple ValgrindDependency class, subclassed from
PkgConfigDependency, that (effectively) overrides only the
get_link_args method to always return an empty list. This solution may
seem strange, but I think that it follows the principle of least
surprise, and simplifies the most common use case for valgrind.
Essentially without this every valgrind consumer would be forced to
implement the following code to have a usable valgrind dependency
object:
  _dep = dependency('valgrind', required : false)
  if _dep.found()
    valgrind_dep = declare_dependency(
      compile_args : _dep.get_pkgconfig_variable('Cflags')
    )
  else
    valgrind_dep = []
  endif
While the above is workable, it's surprising behavior, and the code
snippet becomes boilerplate that everyone needs to copy into their
meson files.
Fixes #826
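The override itself is tiny. A minimal sketch of the class described
above; the constructor signatures are assumptions and the
PkgConfigDependency here is just a stand-in for the real one:

  class PkgConfigDependency:          # stand-in for Meson's real class
      def __init__(self, name, environment, kwargs):
          self.name = name
          # ... pkg-config lookup would populate compile/link args ...

      def get_link_args(self):
          return ['-lvalgrind-placeholder']   # whatever pkg-config gave

  class ValgrindDependency(PkgConfigDependency):
      """Valgrind is header-only in normal use: reuse the pkg-config
      lookup for compile args, but never emit its non-PIC static
      libs."""

      def __init__(self, environment, kwargs):
          super().__init__('valgrind', environment, kwargs)

      def get_link_args(self):
          return []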
Ever since we changed how we do library searching, the full path to the
library has not been available under `.fullpath`. This has been broken
for at least a year...
And actually test that prog.path() works. The earlier test was just
running the command without checking if it succeeded.
Also make everything use prog.get_command() or get_path() instead of
accessing the internal member prog.fullpath directly.
We also need to check whether the program found in PATH can be executed
directly by Windows or if we need to figure out what the interpreter is
and add it to the list.
Also add `msc` to the list of extensions that can be executed natively.
Includes a project test and a unit test for this and all expected
behaviours on Windows.
At the same time, also fix the order in which compile arguments are
added. Detailed comments have been added concerning the priority and
order of the arguments.
Also adds a unit test and an integration test for the same.
While reading shebangs, when we detect an attempt to run 'python3', use
sys.executable instead. For example:
#!/usr/bin/python3
#!python3
#!/usr/bin/env python3
Fixes:
Traceback (most recent call last):
File "mesonbuild/mesonmain.py", line 295, in run
app.generate()
File "mesonbuild/mesonmain.py", line 177, in generate
intr.run()
File "mesonbuild/interpreter.py", line 2444, in run
super().run()
File "mesonbuild/interpreterbase.py", line 124, in run
self.evaluate_codeblock(self.ast, start=1)
File "mesonbuild/interpreterbase.py", line 145, in evaluate_codeblock
raise e
File "mesonbuild/interpreterbase.py", line 139, in evaluate_codeblock
self.evaluate_statement(cur)
File "mesonbuild/interpreterbase.py", line 152, in evaluate_statement
return self.assignment(cur)
File "mesonbuild/interpreterbase.py", line 546, in assignment
value = self.evaluate_statement(node.value)
File "mesonbuild/interpreterbase.py", line 150, in evaluate_statement
return self.function_call(cur)
File "mesonbuild/interpreterbase.py", line 371, in function_call
return self.funcs[func_name](node, self.flatten(posargs), kwargs)
File "mesonbuild/interpreter.py", line 1876, in func_dependency
found = cached_dep.get_version()
File "mesonbuild/dependencies.py", line 369, in get_version
return self.modversion
AttributeError: 'WxDependency' object has no attribute 'modversion'
ninja: error: rebuilding 'build.ninja': subcommand failed
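A standalone sketch of the substitution (not the actual Meson shebang
parser):

  import shlex
  import sys

  def shebang_to_command(first_line):
      """Turn a script's shebang line into a command list, replacing
      any request for 'python3' with the interpreter running Meson so
      that '#!/usr/bin/python3', '#!python3' and
      '#!/usr/bin/env python3' all work even when 'python3' is not on
      PATH under that name."""
      if not first_line.startswith('#!'):
          return None
      parts = shlex.split(first_line[2:].strip())
      if parts and len(parts) > 1 and (
              parts[0] == 'env' or parts[0].endswith('/env')):
          parts = parts[1:]            # drop the 'env' indirection
      if parts and parts[0].rsplit('/', 1)[-1] == 'python3':
          parts[0] = sys.executable
      return parts

  # shebang_to_command('#!/usr/bin/env python3')
  # -> ['/path/to/the/running/python']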