It started failing CI as soon as the default shifted to 1.78. Something
is broken and it prevents running stable CI; a tracking issue has been
opened.
We pin the version because that is how we already handle CI for Linux
-- with the difference that Linux CI can upgrade itself as soon as we
fix the issues causing the CI Image Builder to jam itself, whereas
Windows will unfortunately need to be manually unpinned. But such is
life as a Windows supporter.
Bug: #13236
Arch's profile.d scripts were converted to use an appending function that
disappears when /etc/profile exits, and are overall simply no longer
suitable for sourcing individually.
(I will freely admit I'm not really sure what the overall goal of
refraining from sourcing /etc/profile itself is. Arguably it's kind of a
misuse of the profile...)
This silently broke the CUDA tests, which never ran because the CUDA
compiler was not detected as available.
While we are at it, we can convert Gentoo to use the same trick of
appending it in install.sh.
```
# Ubuntu sources have moved to the /etc/apt/sources.list.d/ubuntu.sources
# file, which uses the deb822 format. Use deb822-formatted .sources files
# to manage package sources in the /etc/apt/sources.list.d/ directory.
# See the sources.list(5) manual page for details.
```
Adapt to the new format.
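For reference, a minimal deb822 stanza of the kind that lives in
/etc/apt/sources.list.d/ubuntu.sources looks roughly like this (the
mirror URL, suites, and keyring path are illustrative, not copied from
the image):
```
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-backports
Components: main universe restricted multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
```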
We need to load various environment variables from /etc/profile. We
cannot unconditionally load it, because openSUSE sources env_vars and
its /etc/profile has a fatal bug in it that causes it to return
nonzero and abort under `set -e` (which is *amazing* as a thing to have
in /etc/profile specifically -- just saying).
Alas, even /etc/profile.env is not enough, since Java support depends on
profile.d logic. Re-conditionalize this so that it is only added to
env_vars.sh for the image named "gentoo".
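A minimal sketch of what that conditional might look like, assuming the
builder assembles env_vars.sh from Python (the function and variable
names here are just for illustration, not the builder's actual API):
```python
def gen_env_vars(image_name: str) -> str:
    # Hypothetical sketch: only the "gentoo" image gets the full
    # /etc/profile (its Java support depends on profile.d logic);
    # other images skip it, e.g. openSUSE, whose /etc/profile returns
    # nonzero and would abort a `set -e` script.
    lines = []
    if image_name == 'gentoo':
        lines.append('source /etc/profile')
    return '\n'.join(lines) + '\n'
```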
```
FAILED unittests/linuxliketests.py::LinuxlikeTests::test_install_strip - FileNotFoundError: [Errno 2] No such file or directory: 'file'
```
Signed-off-by: Sam James <sam@gentoo.org>
Signed-off-by: Eli Schwartz <eschwartz93@gmail.com>
We may want to consider our own binpkg cache in the future to speed things
up, in addition to the ones provided by Gentoo's own binhost.
Signed-off-by: Sam James <sam@gentoo.org>
Signed-off-by: Eli Schwartz <eschwartz93@gmail.com>
This replaces all of the Apache blurbs at the start of each file with an
`# SPDX-License-Identifier: Apache-2.0` string. It also fixes existing
uses to be consistent in capitalization, and to be placed above any
copyright notices.
This removes nearly 3000 lines of boilerplate from the project (Python
files only), which no developer cares to look at.
SPDX is in common use, particularly in the Linux kernel, and is the
recommended format for Meson's own `project(license: )` field.
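For illustration, a converted file header ends up looking something like
this (the copyright line is a placeholder):
```python
# SPDX-License-Identifier: Apache-2.0
# Copyright <years> The Meson development team
```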
The former has Rust dependencies, which force capping its maximum version
on Cygwin since there is no Rust compiler there. But it turns out there
are other disadvantages of jsonschema:
- it involves installing 5 wheels, instead of just 1
- it is much slower
To give some perspective to the latter issue, this is what it looks like
when I test with jsonschema:
```
===== 1 passed, 509 deselected in 3.07s =====
Total time: 3.341 seconds
```
And here's what it looks like when I test with fastjsonschema:
```
===== 1 passed, 509 deselected, 1 warning in 0.28s =====
Total time: 0.550 seconds
```
I cannot think of a good reason to use the former, although in order to
keep working on old CI images, we'll support it as a fallback mechanism.
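A sketch of that fallback, assuming all the test needs is a
validate-or-raise helper (the exact wiring in the test suite may differ):
```python
try:
    import fastjsonschema

    def validate_json(schema: dict, data: object) -> None:
        # fastjsonschema compiles the schema down to a plain Python
        # function, which is where most of the speedup comes from.
        fastjsonschema.compile(schema)(data)
except ImportError:
    import jsonschema

    def validate_json(schema: dict, data: object) -> None:
        # Fallback for old CI images that only have jsonschema.
        jsonschema.validate(instance=data, schema=schema)
```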
Regression in commit 0af126fec7. We added
support for some "test cases/" stuff that actually relies on test files
being a symlink, but when testing the image builder, we copied the Meson
repository contents into the Docker container without telling Python
that it is in fact super important to copy symlinks as symlinks.
As a result, the tests themselves ran fine when merging, but caused the
image builder to thereafter fail.
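Presumably the fix boils down to preserving symlinks when copying the
tree, e.g. with shutil.copytree(symlinks=True); a minimal sketch with
placeholder paths:
```python
import shutil

# Copy symlinks as symlinks instead of dereferencing them, so that
# "test cases/" entries which must be symlinks survive the copy into
# the Docker build context.
shutil.copytree('meson', 'build-context/meson', symlinks=True)
```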
They do not appear to have 20 in their repos anymore, and no traces can
be found of it in the history, as usual. They do have 11, 17, and 21.
Last time we chose one randomly and hoped it wouldn't keep changing.
That dismally failed: 20 was upgraded to 21. It looks like 17 may have
some staying power.
They do not appear to have 15 in their repos anymore, and no traces can
be found of it in the history, as usual. They do have 11, 17, and 20, so
choose one randomly and hope it doesn't keep changing value.
In this case, PEP 668 was created to allow a thing that Debian wanted,
which is for `pip install foobar` not to break the system Python. This
is despite the fact that the system Python is fine unless you use sudo
pip, which is discouraged for separate reasons, and it is in fact quite
natural to install additional packages to the user site-packages.
It isn't even the job of the operating system to decide whether the user
site-packages is broken, whether or not the operating system gets that
answer correct -- it is the job of the operating system to decide
whether the operating system itself is broken. That can be solved by
e.g. enforcing a shebang policy for distribution-packaged software, as
distros like Fedora do: mandate not only that Python shebangs do not
contain `/usr/bin/env`, but that they *do* contain `-s`.
Anyway, this entire kerfuffle is mostly just a bit of pointless
interactive churn, but it bites pretty hard for our use case: a
container image, which is fortunately tested before deployment. So
instead of failing to deploy because of theoretical conflicts with the
base system (we specifically need base system integration...), we fail
to deploy because, five minutes into pulling apt updates at the very
beginning, pip refuses point-blank to work. I especially do not know why
it is the job of the operating system to throw errors intended for
interactive users at people baking "appliance" containers who cannot
"break" the system Python anyway.
Fix this by doing what Debian and Ubuntu should both have done from the
beginning, and opting containers out of this questionable feature
entirely.
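As a sketch, one way to do that is to delete the PEP 668 marker file when
baking the image (setting PIP_BREAK_SYSTEM_PACKAGES=1 would also work);
the glob below assumes the stock Debian/Ubuntu layout:
```python
from pathlib import Path

# PEP 668 is implemented as a marker file in the stdlib directory;
# removing it from the container restores the old pip behaviour.
for marker in Path('/usr/lib').glob('python3.*/EXTERNALLY-MANAGED'):
    marker.unlink()
```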
Note that CI images may still not actually complete their build/test
cycle and be updated, because of e.g. the LLVM 16 issues tracked by
#11642 or the glib ASAN issues tracked by #11754.
The bionic image is really old and mainly exists to test that Meson
itself still works on really old distros (and really old python).
Ideally we'd avoid depending too much on it.
We can get a very modern pypy3 automatically this way, and potentially
use it for more stuff too.
It's already run on other distros. This one fails though, due to missing
debug info.
```
valgrind: Possible fixes: (1, short term): install glibc's debuginfo
valgrind: package on this machine. (2, longer term): ask the packagers
valgrind: for your Linux distribution to please in future ship a non-
valgrind: stripped ld.so (or whatever the dynamic linker .so is called)
```
It doesn't seem possible to have this work out of the box. The debuginfo
packages aren't reliably available, and debuginfod servers -- even if
they worked, which they apparently don't -- would not help anyway, since
old version pruning can result in symbols disappearing before the image
is rebuilt, causing failures.
It's not really critical to test this, since, as mentioned, we already
have coverage of Meson's side in other distro CI images.
From the Zen of Python: "Explicit is better than implicit."
As it turns out, it's no longer a safe assumption that pip uses
setuptools??? Well, anyway, install it properly regardless.
The coverage report was always the final section of the main test run.
This made it hard to scroll around and find exactly what went wrong --
particularly as not everyone realizes that coverage isn't part of the
test run, but also because the output from coverage is... excessively
long.
This mirrors what we do in our other workflows.
A recent CI image builder update successfully ran the tests but didn't
run the cross tests, and then updated the image that gets used by the
regular CI cross tests. Somehow this resulted in a bunch of tests
now failing because zlib could not be picked up. We probably dropped a
transitive dependency somewhere. Anyway, it's correct to explicitly
specify it if we need it.
We've never used it for anything; it was originally added for #3776, but
that never got finished, so it's just a waste.
This also prevents successful regeneration of the build image, because
nim is not available for Ubuntu rolling. It's available in 20.04 and
22.10, but vanished in between for reasons best known to Ubuntu.
This enables the Fortran tests for Azure.
We only test on x64, because:
- ifort isn't arm64 compatible
- x86 may in theory exist, but Meson reports it cannot compile
executables
The pip package is for Python 3.6, but apparently installs pip for all
versions of Python, including Python 3.7.
So do all the other packages, especially the ones where the module
doesn't actually work but pip thinks it is installed anyway. Force a
reinstall.
This has been removed as an explicit package in impish. It seems that
having pkg-config installed and adding arm as an arch will cause it to
be generated automatically.
Set MESON_CI_JOBNAME for all CI jobs which run project tests.
(Note that ${{ github.job }} is the literal job.id used in the YAML, not
any name given to the job with job.id.name, and so is the same for all
matrix entries, and thus not suitable for our purposes there.)
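A hypothetical workflow fragment illustrating the point (the names are
made up, not our real jobs):
```yaml
name: ci-sketch
on: [push]
jobs:
  tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        cfg: [ubuntu, fedora]
    env:
      # ${{ github.job }} is "tests" for both matrix entries, so the
      # job name has to be derived from the matrix instead.
      MESON_CI_JOBNAME: linux-${{ matrix.cfg }}
    steps:
      - run: echo "$MESON_CI_JOBNAME"
```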