If the required LLVM modules can't be found, skip the LLVM framework
test, rather than successfully doing nothing.
(This optionality is a leftover from before #7379)
(At the moment, openSUSE provides dynamic-only LLVM. The cmake method
still finds LLVM when a static LLVM is requested, but fails to find any
modules. This might be a bug in the cmake method of the LLVM
dependency.)
This is useful both from the perspective of optional functionality that
requires a module, and also as I continue to progress with Meson++,
which will probably not implement all of the modules that Meson itself
does.
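As a rough illustration (not the test itself; the module name is just an
example), an optional LLVM lookup with modules looks something like this:
llvm_dep = dependency('llvm', modules: ['bitwriter'], required: false, static: true)
if llvm_dep.found()
  # With a dynamic-only LLVM the lookup can now fail (and the test be
  # skipped) instead of silently succeeding without any modules.
  executable('uses-llvm', 'main.cpp', dependencies: llvm_dep)
endif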
This commit introduces a new type of `HoldableObject`: The
`SecondLevelHolder`. The primary purpose of this class is
to handle cases where two (or more) `HoldableObject`s are
stored at the same time (with one default object). The
best (and currently only) example here is the `BothLibraries`
class.
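On the build-file side the corresponding object comes from
`both_libraries()`; a minimal sketch of how the two held objects and the
default are used:
both = both_libraries('foo', 'foo.c')
# Passing `both` itself uses the default library; these methods select one
# of the two held objects explicitly.
executable('app_static', 'main.c', link_with: both.get_static_lib())
executable('app_shared', 'main.c', link_with: both.get_shared_lib())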
I guess the intent was that tests of thread/debug library variants only
get run with MSVC, but currently this test isn't getting run at all in
our Windows CI (since boost got removed from the VM image [1], and we
didn't notice, more on which anon).
[1] https://github.com/actions/virtual-environments/pull/2843
* backends: Add a Visual Studio 2013 backend
This is more-or-less a quick port from the VS2015 backend, except that
we update the Visual Studio version strings and toolset versions
accordingly. Also correct the generator string for Visual Studio 2015
in mesonbuild/cmake/common.py.
* backend: Add VS2012 backend
Similar to what we did for Visual Studio 2013, add a Visual Studio 2012
backend.
* vs2010backend.py: Implement `link_whole:` if needed
We actually need Visual Studio 2015 Update 2 to use `/WHOLEARCHIVE:`,
which is what we are currently using for `link_whole:` on Visual Studio.
For Visual Studio versions before that, we need to expand from the
static targets that were indicated by `link_whole:`, and any of the
sub-dependent targets that were pulled in via the dependent target's
`link_whole:`. This will ensure that `link_whole:` actually works in
such cases (a sketch of the affected target graph follows this list of
changes).
* vs2010backend.py: Handle objects from generated sources
Unfortunately, we can't use backends.determine_ext_objs() reliably, as
the Visual Studio backends handle this differently.
* vs2010backend.py: Fix generating VS2010 projects
Visual Studio 2010 (at least the Express Edition) does not set the envvar
%VisualStudioVersion% in its command prompt, so fix generating VS2010
projects by taking this into account, so that we can determine the location
of vcvarsall.bat correctly.
* whole archive test: Disable on vs2012/2013 backends too
The Visual Studio 2012/2013 IDE has problems handling the items that would be
generated from this test case, so skip this test when using
--backend=vs[2012|2013]. This test does work for the Ninja backend when
VS2012 or VS2013 is used, though.
Consolidate this error message with the Xcode and vs2010 backends.
* docs: Add the new vs2012 and vs2013 backends
Let people know that we have backends for vs2012 and 2013. Also let
people know that generating Visual Studio 2010 projects has been fixed
and the pre-vs2015 backends now handle the `link_whole:` project option.
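To illustrate the `link_whole:` expansion mentioned above for pre-2015
Visual Studio, here is the kind of target graph involved (hypothetical
names):
a = static_library('a', 'a.c')
b = static_library('b', 'b.c', link_whole: a)
# Without /WHOLEARCHIVE: (VS < 2015 Update 2) the backend must expand b
# and, transitively, a into the objects linked into app.
executable('app', 'main.c', link_whole: b)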
As seen in the testcase, passing objects to custom_target does not work
if headers are passed to extract_objects(), or if extract_all_objects() is
used and the sources include any header files. To fix this, use the code
that already exists for unity builds to filter out the nonexistent ".h.o"
files. This also gives us the handling of genlist for free, which was
mentioned in a TODO comment.
QEMU would like to use the result of extract_objects in a custom_target;
examples are using objcopy, or using the object files as the key to look
up command line arguments in compile_commands.json. This is slightly
peculiar and not covered by the test suite, but it works; in order to avoid
regressions, add a test case and document it.
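A sketch of that use case (target names and the objcopy invocation are
illustrative, not taken from QEMU):
objcopy = find_program('objcopy')
lib = static_library('bar', ['bar.c', 'bar.h'])
# Extracted objects (even from a target whose sources include headers) can
# now be used as custom_target input.
custom_target('bar-prefixed',
  input: lib.extract_objects('bar.c'),
  output: 'bar-prefixed.o',
  command: [objcopy, '--prefix-symbols=qemu_', '@INPUT@', '@OUTPUT@'])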
As a side effect of #8885, `find_program()` now returns `Executable`
objects when `meson.override_find_program` is called with an
executable target. To resolve this conflict, the missing methods
from `ExternalProgram` are added to `BuildTarget`.
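Roughly the situation in question (hypothetical names):
gen = executable('mygen', 'mygen.c', native: true)
meson.override_find_program('mygen', gen)
# find_program() now returns the Executable object, so it needs
# ExternalProgram methods such as found() and full_path().
prog = find_program('mygen')
message(prog.full_path())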
Tests that we find something sensible for intl, capable of producing
binaries using gettext() to translate stuff.
No more need to manually check headers and *maybe* include the intl
library, which we were doing before; the new dependency actually
simplifies the existing test, and should simplify users' build files
too...
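So instead of hand-rolled header checks plus a conditional -lintl, a build
file can (roughly) just do:
intl_dep = dependency('intl')
executable('hello', 'hello.c', dependencies: intl_dep)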
Since we pass a method: 'foo' to every one of these
config-tool/pkg-config dependencies, we do not ever need to check which
type_name it has; change these to asserts instead.
In the process, we discover a bug! We kept checking for type
'configtool' instead of 'config-tool', so these tests all
short-circuited and checked nothing. Once moved to an assert, the
asserts failed.
Add a new lookup for a known system dependency and make it assert that
too.
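Expressed in build-file terms (the actual checks live in the Python test
suite; the dependency name here is illustrative), once a method: is given
explicitly there is nothing left to branch on, so an assert is enough:
# Illustrative only: with an explicit method the resulting type is known.
dep = dependency('zlib', method: 'pkg-config')
assert(dep.type_name() == 'pkg-config')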
The dependency lookup is a lot of complex code. This refactors it all
into a single file/class outside of the main interpreter class. The new
design allows adding more fallback candidates in the future (e.g. using
cc.find_library()) but does not yet add any extra API.
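The kind of hand-written fallback chain the new design could eventually
absorb (a sketch, with a hypothetical dependency name):
foo_dep = dependency('foo', required: false)
if not foo_dep.found()
  # Manual fallback; the refactored lookup could offer this automatically later.
  foo_dep = meson.get_compiler('c').find_library('foo', required: false)
endif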
This is a follow-up to gh-8706, which contained the initial fix
to ninjabackend.py but somehow lost it. This re-applies the fix
and adds a test for it.
Without the fix, the error is:
ninja: error: 'ct2.pyx', needed by 'libdir/ct2.cpython-39-x86_64-linux-gnu.so.p/ct2.pyx.c',
missing and no known rule to make it
Add a method to downgrade an option to disabled if it is not used.
This is useful to avoid unnecessary searches for dependencies;
for example
dep = dependency('dep', required: get_option('feature').disable_auto_if(not foo))
can be used instead of the more verbose and complex
if get_option('feature').auto() and not foo
dep = dependency('', required: false)
else
dep = dependency('dep', required: get_option('feature'))
endif
or to avoid unnecessary dependency searches:
dep1 = dependency('dep1', required: get_option('foo'))
# dep2 is only used together with dep1
dep2 = dependency('dep2', required: get_option('foo').disable_auto_if(not dep1.found()))
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a method to perform a logical AND on a feature object. The method
also takes care of raising an error if 'enabled' is ANDed with false.
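For example (hypothetical option name), something along these lines:
use_gui = get_option('gui').require(host_machine.system() != 'darwin',
  error_message: 'the GUI is not supported on macOS')
If the option was 'enabled' and the condition is false, this raises the
given error; an 'auto' option is downgraded to disabled in that case.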
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This method simplifies the conversion of Feature objects to booleans.
Often, one has to use the "not" operator in order to treat "auto"
and "enabled" the same way.
"allowed()" also works well in conjunction with the require method that
is introduced in the next patch. For example,
if get_option('foo').require(host_machine.system() == 'windows').allowed()
src += ['foo.c']
config.set10('HAVE_FOO', 1)
endif
can be used instead of
if host_machine.system() != 'windows'
if get_option('foo').enabled()
error('...')
endif
endif
if not get_option('foo').disabled()
src += ['foo.c']
config.set10('HAVE_FOO', 1)
endif
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We need to pass any generated sources down to the CustomTarget
initializers so that they will generate a dependency correctly;
otherwise we get race conditions.
This partially reverts commit add502c648.
In the 'linkshared' test, annotate cppfunc() as imported, so an indirection
through an import stub is generated, avoiding a relocation size error
when building using gcc for Cygwin with LTO on.
Align with the example of how to write this portably in [1].
The 'c' language part of that test already gets this right.
[1] http://gcc.gnu.org/wiki/Visibility