The problem with the earlier position of the generation code was that
the results could not be cached: the full list of link_deps differed
from target to target, even though it shared large subsets with other
build targets.
Generating the set of subdirs required for linking alongside the link
dependencies makes it possible to cache the result.
This reduces the build file generation time for efl from 1 min. down to
20 sec. and cuts the number of calls down from 30872534; that alone
saves ~40 sec.
With this it is now possible to do
foobar = executable('foobar', ...)
meson.override_find_program('foobar', foobar)
This is convenient for a project like protobuf which produces both a
dependency and a tool. If protobuf is updated to use
override_find_program, it can be used as
protobuf_dep = dependency('protobuf', version : '>=3.3.1',
fallback : ['protobuf', 'protobuf_dep'])
protoc_prog = find_program('protoc')
We now use the soversion to set compatibility_version and
current_version by default. This is the only sane thing we can do by
default because of the restrictions on the values that can be used for
compatibility and current version.
Users can override this value with the `darwin_versions:` kwarg, which
can be a single value or a two-element list of values. The first one
is the compatibility version and the second is the current version.
Fixes https://github.com/mesonbuild/meson/issues/3555
Fixes https://github.com/mesonbuild/meson/issues/1451
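For illustration only (the library name and version numbers below are
made up), overriding both values might look like:
shared_library('foo', 'foo.c',
  version : '1.2.3',
  soversion : '1',
  darwin_versions : ['1', '1.2.3'])
A single value would be used for both the compatibility and the current
version.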
Ninja buffers all commands and prints them only after they are
complete. Because of this, long-running commands such as `cargo
build` show no output at all and it's impossible to know if the
command is merely taking too long or is stuck somewhere.
To cater to such use-cases, Ninja has a 'pool' with depth 1 called
'console', and all processes in this pool have the following
properties:
1. stdout is connected to the terminal, so output can be seen in
   real time
2. The output of all other commands is buffered and displayed after
a command in this pool finishes running
3. Commands in this pool are executed serially (normal commands
continue to run in the background)
This feature is available since Ninja v1.5
https://ninja-build.org/manual.html#_the_literal_console_literal_pool
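If this is exposed via a `console` keyword argument on custom_target()
(the argument name, target and command below are assumptions for
illustration), a long-running step could stream its output:
custom_target('cargo-build',
  output : 'libfoo.a',
  command : ['cargo', 'build', '--release'],
  console : true)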
We now pass the current subproject to every FeatureNew and
FeatureDeprecated call. This requires a bunch of rework to:
1. Ensure that we have access to the subproject in the list of
arguments when used as a decorator (see _get_callee_args).
2. Pass the subproject to .use() when it's called manually.
3. We also can't do feature checks for new features in
meson_options.txt because that's parsed before we know the
meson_version from project()
* Use _get_callee_args to unwrap function call arguments, needed for
module functions.
* Move some FeatureNewKwargs from build.py to interpreter.py
* Print a summary for featurenew only if conflicts were found. The
summary now only prints conflicting features.
* Report and store featurenew/featuredeprecated only once
* Fix version comparison: use le/ge and resize arrays to not fail on
'0.47.0>=0.47'
Closes https://github.com/mesonbuild/meson/issues/3660
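For illustration (project names and version constraints below are
hypothetical), a subproject that declares an older meson_version but
uses a newer keyword argument now gets a FeatureNew warning attributed
to that subproject rather than to the main project:
# subprojects/sub/meson.build
project('sub', 'c', meson_version : '>=0.40.0')
# install_mode on executable() is newer than the declared 0.40.0, so
# configuring reports a FeatureNew warning attributed to 'sub'
executable('subtool', 'tool.c',
  install : true,
  install_mode : 'rwxr-xr-x')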
If the external program is a string that is meant to be searched in
PATH, we can't add a dependency on it at configure time because we don't
know where it will be at compile time.
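For example (program and file names are made up), a dependency can only
be added for the second program below, which is built by the project
itself:
sed_prog = find_program('sed')        # searched in PATH: no dependency can be added
gen_prog = executable('gen', 'gen.c') # built by this project: a target dependency is added
custom_target('generated',
  input : 'data.in',
  output : 'data.c',
  command : [gen_prog, '@INPUT@', '@OUTPUT@'])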
D is not a 'c-like' language, but it can link to C libraries. The same
might be true of Rust in the future and Go when we add support for it.
This contains no functionality changes.
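A minimal sketch (file names made up) of a D executable linking against
a C library:
project('dapp', 'd', 'c')
cutils = static_library('cutils', 'cutils.c')
executable('dapp', 'app.d', link_with : cutils)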
Since `build_always` also adds a target to the set of default targets,
this option is marked deprecated in favour of the new option
`build_always_stale`.
`build_always_stale` *only* marks the target to be always considered out
of date, but does *not* add it to the set of default targets.
The old behaviour can still be achieved by combining
`build_always_stale` with `build_by_default`.
Fixes #1942
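A sketch of the new behaviour (the script and output names are made
up); combining it with build_by_default restores what build_always used
to do:
gen = find_program('gen-version.sh')
custom_target('version-header',
  output : 'version.h',
  command : [gen, '@OUTPUT@'],
  build_always_stale : true,   # always considered out of date
  build_by_default : true)     # also part of the default targets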
On Windows, if we are going to link with a shared module, we need the
implib.
Use case: The Xorg server builds some X protocol extensions as modules. The
implibs for these modules need to be shipped as part of the SDK, to enable
building of 3rd party extensions which reference symbols in (and hence on
Windows, need to be linked with) these modules.
Refine #3277
According to what I read on the internet, on OSX, both MH_BUNDLE (module)
and MH_DYLIB (shared library) can be dynamically loaded using dlopen(), but
it is not possible to link against MH_BUNDLE files as if they were
shared libraries.
Mention this as an issue in the documentation.
Emitting a warning, and then going on to fail during the build with
mysterious errors in symbolextractor isn't very helpful, so make attempting
this an error on OSX.
Add a test for that.
See also:
https://docstore.mik.ua/orelly/unix3/mac/ch05_03.htm
https://stackoverflow.com/questions/2339679/what-are-the-differences-between-so-and-dylib-on-osx
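As a sketch with made-up names: on Windows the link below works because
the implib for the module is now produced, while on OSX the same
link_with is rejected with an error instead of failing later in
symbolextractor:
xext = shared_module('xext', 'xext.c', install : true)
executable('thirdparty_ext', 'ext.c', link_with : xext)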
This makes it possible to customize permissions of all installable
targets, such as executable(), libraries, man pages, header files and
custom or generated targets.
This is useful, for instance, to install setuid/setgid binaries, which
was hard to accomplish without access to this attribute.
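For instance, a setuid root helper (hypothetical names) can now be
expressed as:
executable('myhelper', 'helper.c',
  install : true,
  install_mode : ['rwsr-xr-x', 'root', 'root'])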
Libraries that have been linked with link_whole: are internal
implementation details and should never be exposed to the outside
world in either Libs: or Libs.private: of the generated pkg-config file.
Closes https://github.com/mesonbuild/meson/issues/3509
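A sketch with made-up names; 'internal' does not show up in the
generated mylib.pc:
internal = static_library('internal', 'internal.c')
mylib = library('mylib', 'mylib.c', link_whole : internal)
pkg = import('pkgconfig')
pkg.generate(libraries : mylib,
  name : 'mylib',
  version : '1.0',
  description : 'Library with an internal, fully linked-in part')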
To maintain backward compatibility we cannot add recursive objects by
default. Print a warning when there are recursive objects to be pulled
and the argument is not set. After a while we'll start pulling
recursive objects by default.
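Assuming the argument in question is a `recursive` keyword on
extract_all_objects() (target names below are made up), usage would
look like:
other = static_library('other', 'other.c')
liba = static_library('a', 'a.c', link_with : other)
# with recursive : true, the objects of 'other' are pulled in as well
libb = static_library('b', 'b.c',
  objects : liba.extract_all_objects(recursive : true))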
- determine_ext_objs: What matters is if extobj.target is a unity build,
not if the target using those objects is a unity build.
- determine_ext_objs: Return one object file per compiler, taking into
account generated sources.
- object_filename_from_source: No need to special-case unity build, it
does the same thing in both code paths.
- check_unity_compatible: For each compiler we must extract either none
or all its sources, taking into account generated sources.
When flattening the chained dependencies of an object, we don't need to
create any new internal dependencies if all the fields to be added to it
are empty.
For projects with a lot of libraries and dependency objects this can lead
to noticeable performance improvements.
When getting dependencies, we don't need to get the same dependencies and
dependency chains multiple times. If library a depends on x, y and z, and
library b depends on a, then we should not have to iterate through x, y and
z multiple times. Pruning at the stage of scanning the dependencies leads
to significant time savings when running meson.
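The shape of the problem, with made-up names: x, y and z are reachable
both directly through a and again through b, but each only needs to be
visited once:
x = declare_dependency(compile_args : ['-DX'])
y = declare_dependency(compile_args : ['-DY'])
z = declare_dependency(compile_args : ['-DZ'])
a = declare_dependency(dependencies : [x, y, z])
b = declare_dependency(dependencies : [a])
executable('app', 'main.c', dependencies : [a, b])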