The D stdlib function std.process.spawnProcessPosix suffers from a bug
that causes any program that calls it to OOM if the nofile ulimit is
large.
This doesn't affect CI currently, but running or building the containers
locally will most likely crash when calling dub, as docker defaults to
something like 1073741816 for both the hard and soft limits. Lowering the
soft limit is enough to make dub behave.
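A rough sketch of the workaround (shown in python for illustration; the
4096 cap is an arbitrary illustrative value, not a required number):
```
# Sketch: lower only the soft RLIMIT_NOFILE before invoking dub, leaving the
# hard limit alone.
import resource
import subprocess

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if soft > 4096:
    resource.setrlimit(resource.RLIMIT_NOFILE, (4096, hard))

subprocess.run(['dub', 'build'], check=True)
```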
Signed-off-by: Andrei Horodniceanu <a.horodniceanu@proton.me>
Qt 6 now has stable keywords (and has for a while). The recent
stabilisation of Plasma 6 pulls Qt 6 into the image builder, so
'frameworks: 4 qt' fails as qttools is missing.
Signed-off-by: Sam James <sam@gentoo.org>
makepkg can do this if you enable debug when building packages from
source. This is apparently enabled in the /etc/makepkg.conf shipped in the
docker containers, which means building AUR packages now requires
installing debugedit and then bloating your container with
/usr/src/debug. We really do not want that.
Reconfigure so that we do not, in fact, need that.
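A rough sketch of the reconfiguration (shown in python for illustration;
the regex and the assumption of a single-line stock OPTIONS=(...) array
are illustrative, and the real change could just as well be a one-line
edit in install.sh):
```
# Sketch: turn the default "debug" entry in OPTIONS=(...) into "!debug" so
# makepkg neither needs debugedit nor installs /usr/src/debug.
import re
from pathlib import Path

conf = Path('/etc/makepkg.conf')
text = conf.read_text()
text = re.sub(r'^(OPTIONS=\([^)]*?)(?<![!\w])debug\b', r'\1!debug',
              text, flags=re.MULTILINE)
conf.write_text(text)
```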
Due to reasons, Arch has chosen to split the glib2 package, moving a small
handful of python programs into a new glib2-devel package. The main glib2
package still contains references to these programs, so meson fails a lot
in CI with e.g.
```
test cases/frameworks/7 gnome/gdbus/meson.build:1:18: ERROR: Dependency 'gio-2.0' tool variable 'gdbus_codegen' contains erroneous value: '/usr/bin/gdbus-codegen'
This is a distributor issue -- please report it to your gio-2.0 provider.
```
Arch profile.d scripts were converted to use an appending function that
disappears when /etc/profile exits, and overall they are simply no longer
suitable for sourcing individually.
(I will freely admit I'm not really sure what the overall goal of
refraining from sourcing /etc/profile itself is. Arguably it's kind of
misuse of the profile...)
This silently broke the cuda tests, which never ran because the cuda
compiler was not detected as available.
While we are at it, I guess we can convert gentoo to use the same trick
of appending it in install.sh.
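A rough sketch of that trick (shown in python for illustration; the path
and the exported content are placeholders, not what the images actually
write):
```
# Sketch: at image build time, append the environment bits we need to
# env_vars.sh, so tests do not depend on sourcing profile.d snippets at
# run time.
from pathlib import Path

env_vars = Path('/ci/env_vars.sh')  # illustrative path
with env_vars.open('a') as f:
    f.write('export PATH="/opt/cuda/bin:$PATH"\n')  # illustrative content
```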
```
# Ubuntu sources have moved to the /etc/apt/sources.list.d/ubuntu.sources
# file, which uses the deb822 format. Use deb822-formatted .sources files
# to manage package sources in the /etc/apt/sources.list.d/ directory.
# See the sources.list(5) manual page for details.
```
Adapt to the new format.
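As a rough illustration of the new format (shown in python; every field
value here is a placeholder, not necessarily what the image uses):
```
# Sketch: write a deb822-style ubuntu.sources stanza.
from pathlib import Path

SOURCES = '''\
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates
Components: main universe
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
'''

Path('/etc/apt/sources.list.d/ubuntu.sources').write_text(SOURCES)
```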
We need to load various environment variables from /etc/profile. We
cannot unconditionally load it, because opensuse sources env_vars and
their /etc/profile has a fatal bug in it that causes it to return
nonzero and abort under `set -e` (which is *amazing* as a thing to have
in /etc/profile specifically -- just saying).
Alas, even /etc/profile.env is not enough, since Java support depends on
profile.d logic. Re-conditionalize this so that the check is only added to
env_vars.sh for the image named "gentoo".
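A minimal sketch of the shape of that conditional (names are illustrative,
not the builder's actual ones):
```
# Sketch: only the image named "gentoo" gets /etc/profile sourced from
# env_vars.sh; other images rely on the narrower environment loading.
def extra_env_lines(image_name: str) -> list[str]:
    if image_name == 'gentoo':
        return ['source /etc/profile\n']
    return []
```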
FAILED unittests/linuxliketests.py::LinuxlikeTests::test_install_strip - FileNotFoundError: [Errno 2] No such file or directory: 'file'
Signed-off-by: Sam James <sam@gentoo.org>
Signed-off-by: Eli Schwartz <eschwartz93@gmail.com>
We may want to consider our own binpkg cache in the future to speed things up,
in addition to the ones provided by Gentoo's own binhost.
Signed-off-by: Sam James <sam@gentoo.org>
Signed-off-by: Eli Schwartz <eschwartz93@gmail.com>
The former (jsonschema) has rust dependencies, which lead to capping its
maximum version on Cygwin since there is no rust compiler there. But it
turns out there are other disadvantages of jsonschema:
- it involves installing 5 wheels, instead of just 1
- it is much slower
To give some perspective to the latter issue, this is what it looks like
when I test with jsonschema:
```
===== 1 passed, 509 deselected in 3.07s =====
Total time: 3.341 seconds
```
And here's what it looks like when I test with fastjsonschema:
```
===== 1 passed, 509 deselected, 1 warning in 0.28s =====
Total time: 0.550 seconds
```
I cannot think of a good reason to use the former, although in order to
keep working on old CI images, we'll support it as a fallback mechanism.
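A minimal sketch of that fallback (a small wrapper function for
illustration, not meson's exact code):
```
# Sketch: prefer fastjsonschema, but keep a jsonschema-based path so old CI
# images without the new wheel still validate test.json.
try:
    import fastjsonschema

    def validate_schema(schema: dict, data: dict) -> None:
        fastjsonschema.validate(schema, data)
except ImportError:
    import jsonschema

    def validate_schema(schema: dict, data: dict) -> None:
        jsonschema.validate(data, schema=schema)
```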
Regression in commit 0af126fec7. We added
support for some "test cases/" stuff that actually relies on test files
being symlinks, but when testing the image builder, we copied the meson
repository contents into the docker container without telling python
that it is in fact super important to copy symlinks as symlinks.
As a result, the tests themselves ran fine when merging, but caused the
image builder to thereafter fail.
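A minimal sketch of the fix, with placeholder paths:
```
# Sketch: copy the meson checkout into the docker build context with
# symlinks preserved instead of dereferenced.
import shutil

shutil.copytree('meson', 'build-context/meson', symlinks=True)
```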
They do not appear to have 20 in their repos anymore, and no traces can
be found of it in the history, as usual. They do have 11, 17, and 21.
Last time we chose one randomly and hoped it wouldn't keep changing
value. This dismally failed: 20 was upgraded to 21. It looks like 17 may
have some staying power.
They do not appear to have 15 in their repos anymore, and no traces can
be found of it in the history, as usual. They do have 11, 17, and 20, so
choose one randomly and hope it doesn't keep changing value.
In this case, PEP 668 was created to allow a thing that Debian wanted,
which is for `pip install foobar` to not break the system python. This
despite the fact that the system python is fine, unless you use sudo pip
which is discouraged for separate reasons, and it is in fact quite
natural to install additional packages to the user site-packages.
It isn't even the job of the operating system to decide whether the user
site-packages is broken (whether or not it gets that answer right) -- it
is the job of the operating system to decide whether the operating system
itself is broken, and that can be solved by e.g. enforcing a shebang
policy for distribution-packaged software, as distros like Fedora do:
mandating not only that python shebangs do not contain `/usr/bin/env`,
but that they *do* contain `-s`.
Anyway, this entire kerfuffle is mostly just a bit of pointless
interactive churn, but it bites pretty hard for our use case, which is a
container image which is fortunately tested before deployment, so
instead of failing to deploy because of theoretical conflicts with the
base system (we specifically need base system integration...) we fail to
deploy because 5 minutes into pulling apt updates at the very beginning,
pip refuses point-blank to work. I especially do not know why it is the
job of the operating system to throw errors intended for interactive
users at people baking "appliance" containers who cannot "break" the
system python anyway.
Fix this by doing what Debian and Ubuntu should both have done from the
beginning, and opting containers out of this questionable feature
entirely.
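A minimal sketch of the opt-out, assuming the EXTERNALLY-MANAGED marker
mechanism (exporting PIP_BREAK_SYSTEM_PACKAGES=1 would be an alternative);
the glob is illustrative:
```
# Sketch: drop the PEP 668 EXTERNALLY-MANAGED marker so pip inside the
# container behaves as it always used to.
from pathlib import Path

for marker in Path('/usr/lib').glob('python3.*/EXTERNALLY-MANAGED'):
    marker.unlink()
```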
Note that CI images may still not actually complete their build/test
cycle and be updated, because of e.g. the LLVM 16 issues tracked by #11642
or the glib ASAN issues tracked by #11754.
The bionic image is really old and mainly exists to test that Meson
itself still works on really old distros (and really old python).
Ideally we'd avoid depending too much on it.
We can get a very modern pypy3 automatically this way, and potentially
use it for more stuff too.
It's already run on other distros. This one fails though, due to missing
debug info.
```
valgrind: Possible fixes: (1, short term): install glibc's debuginfo
valgrind: package on this machine. (2, longer term): ask the packagers
valgrind: for your Linux distribution to please in future ship a non-
valgrind: stripped ld.so (or whatever the dynamic linker .so is called)
```
It doesn't seem possible to have this work out of the box. The debuginfo
packages aren't reliably available, and debuginfod servers -- even if
they worked, which they apparently don't -- would not help anyway, since
old version pruning can result in symbols disappearing before the image
is rebuilt, thereby causing failures.
It's not really critical to test this, since as mentioned we already
have coverage of Meson's side in other distro ciimages.
From the Zen of Python: "Explicit is better than implicit."
As it turns out, it's no longer a safe assumption that pip uses
setuptools??? Well, anyway, install it properly regardless.
A recent CI image builder update successfully ran the tests, but not the
cross tests, and then updated the image used by the regular CI cross
tests. Somehow this resulted in a bunch of tests
now failing because zlib could not be picked up. We probably dropped a
transitive dependency somewhere. Anyway, it's correct to explicitly
specify it if we need it.
We've never used it for anything; it was originally added for #3776, but
that never got finished, so it's just a waste.
This also prevents successful regeneration of the build image, because
nim is not available for Ubuntu rolling. It's available in 20.04 and
22.10, but vanished in between for reasons best known to Ubuntu.
The pip package is for python 3.6, but installs pip for all versions of
python. Apparently. Including python 3.7.
So do all other packages, especially the ones where it doesn't work but
pip thinks it is installed anyway. Force a reinstall.
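A minimal sketch of forcing the reinstall under the interpreter we
actually care about (the package names here are purely illustrative):
```
# Sketch: run pip through the specific interpreter and force a reinstall so
# stale "already installed" records from another python don't short-circuit it.
import subprocess

subprocess.check_call(
    ['python3.7', '-m', 'pip', 'install', '--force-reinstall',
     'pip', 'setuptools'])
```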
This has been removed as an explicit package in impish. It seems that
having pkg-config installed and adding arm as an arch will cause it to
be generated automatically.
Remove the hard-coded framework test skip logic in skippable(); instead,
annotate test.json with the environments in which a skip is expected.
(Mainly this is done by testing the value of MESON_CI_JOBNAME, which is
now set for linux jobs.)
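A rough sketch of the shape of that check (the test.json key name and the
matching rule here are illustrative, not meson's exact schema):
```
# Sketch: a test declares the CI job names on which a skip is expected, and
# the runner compares them with MESON_CI_JOBNAME.
import os

def skip_expected(test_def: dict) -> bool:
    jobname = os.environ.get('MESON_CI_JOBNAME', '')
    return any(name in jobname
               for name in test_def.get('expect_skip_on_jobname', []))
```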