Test '230 external project' uses make, which is too dumb to follow the
platform conventions for shared library names that shared_lib expects in
the installed files, so special-case that.
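For reference, a rough sketch of the naming conventions in play
(illustrative only -- the exact prefixes/suffixes the test special-cases
may differ, e.g. MSVC vs. MinGW import libraries):
```
# Rough mapping of installed shared-library names per platform; make-based
# projects tend to hardcode the Linux convention instead.
import sys

def shared_lib_name(basename):
    if sys.platform == 'win32':
        return basename + '.dll'           # MSVC style; MinGW often uses lib<name>.dll
    if sys.platform == 'darwin':
        return 'lib' + basename + '.dylib'
    if sys.platform == 'cygwin':
        return 'cyg' + basename + '.dll'
    return 'lib' + basename + '.so'
```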
Version 1.9 removes support for python 3.7, so we either need to pin the
version to 1.8 as long as we support Python 3.7, or we need to drop
linting for 3.7.
Followup to commit 5c479d7a13.
In this case, PEP 668 was created to allow a thing that Debian wanted,
which is for `pip install foobar` to not break the system python. This
despite the fact that the system python is fine, unless you use sudo pip,
which is discouraged for separate reasons, and it is in fact quite
natural to install additional packages into the user site-packages.
It isn't even the job of the operating system to decide whether the user
site-packages is broken, whether the operating system gets the answer
correct or not -- it is the job of the operating system to decide
whether the operating system is broken, and that can be solved by e.g.
enforcing a shebang policy for distribution-packaged software, which
distros like Fedora do, and mandating not only that python shebangs do
not contain `/usr/bin/env`, but that they *do* contain `-s`.
Anyway, this entire kerfuffle is mostly just a bit of pointless
interactive churn, but it bites pretty hard for our use case, which is a
container image, so instead of failing to run because of theoretical
conflicts with the base system (we specifically need base system
integration...), we fail to run because, 5 minutes into pulling homebrew
updates at the very beginning, pip refuses point-blank to work. I
especially do not know why it is the job of the operating system to
throw errors intended for interactive users at people designing system
integration containers who cannot "break" the system python anyway, as it
is thrown away after every use.
Fix this by doing what homebrew should have done from the beginning, and
opting containers out of this questionable feature entirely.
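A minimal sketch of the opt-out, assuming we do it by deleting the PEP 668
marker while building the image (pip also accepts --break-system-packages /
PIP_BREAK_SYSTEM_PACKAGES as an alternative):
```
# Remove the PEP 668 marker so pip behaves normally inside the throwaway
# container; per the PEP, the marker lives next to the stdlib.
import pathlib
import sysconfig

marker = pathlib.Path(sysconfig.get_path('stdlib')) / 'EXTERNALLY-MANAGED'
if marker.exists():
    marker.unlink()
```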
The following workflows have been updated so that they are
triggered when any of the test harnesses are updated:
macos,
os-comp,
msys2
Previously only changes to `run_unittests.py` caused these
workflows to be executed.
We need to load various environment variables from /etc/profile. We
cannot unconditionally source it, because opensuse sources env_vars.sh,
and their /etc/profile has a fatal bug in it that causes it to return
nonzero and abort under `set -e` (which is *amazing* as a thing to have
in /etc/profile specifically -- just saying).
Alas, even /etc/profile.env is not enough, since Java support depends on
profile.d logic. Re-conditionalize this so that it is only added to
env_vars.sh for the image named "gentoo".
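Roughly, the conditional looks like this (names are illustrative, not the
actual image builder API):
```
def extra_env_lines(image_name):
    # Only the gentoo image gets /etc/profile sourced; opensuse's would
    # abort under `set -e`, and the others don't need it.
    if image_name == 'gentoo':
        return ['source /etc/profile\n']
    return []
```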
We may want to consider our own binpkg cache in the future to speed things up,
in addition to the ones provided by Gentoo's own binhost.
Signed-off-by: Sam James <sam@gentoo.org>
Signed-off-by: Eli Schwartz <eschwartz93@gmail.com>
Commit 191449f608 has numerous issues, and
being completely invalid yml syntax was just the tip of the iceberg.
In this case, it fails the github schema, which requires that env be
adjunct to a job or step definition, rather than its own thing. It did
not even make sense in context, since the purpose of the variable is to
modify brew.
Fixes: #12644
Fixes: #12681
We're getting errors that we don't care about on the CI like:
```
==> Pouring node@18--18.19.0.monterey.bottle.tar.gz
The formula built, but is not symlinked into /usr/local
Error: The `brew link` step did not complete successfully
Could not symlink lib/node_modules/npm/docs/content/commands/npm-sbom.md
Target /usr/local/lib/node_modules/npm/docs/content/commands/npm-sbom.md
already exists. You may want to remove it:
rm '/usr/local/lib/node_modules/npm/docs/content/commands/npm-sbom.md'
```
We don't care about node; the only reason it's getting updated is that
it's already installed on the image and brew is auto-updating it. So
let's disable auto-update.
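The change itself is just an environment variable in the workflow; as an
illustration of what it does, here is a hedged sketch of driving brew with
auto-update suppressed (the package name is only an example):
```
# brew skips its implicit `brew update` when HOMEBREW_NO_AUTO_UPDATE is set.
import os
import subprocess

env = dict(os.environ, HOMEBREW_NO_AUTO_UPDATE='1')
subprocess.run(['brew', 'install', 'ninja'], check=True, env=env)
```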
The former (jsonschema) has rust dependencies, which led to capping its
maximum version on Cygwin since there is no rust compiler there. But it
turns out there are other
disadvantages of jsonschema:
- it involves installing 5 wheels, instead of just 1
- it is much slower
To give some perspective to the latter issue, this is what it looks like
when I test with jsonschema:
```
===== 1 passed, 509 deselected in 3.07s =====
Total time: 3.341 seconds
```
And here's what it looks like when I test with fastjsonschema:
```
===== 1 passed, 509 deselected, 1 warning in 0.28s =====
Total time: 0.550 seconds
```
I cannot think of a good reason to use the former, although in order to
work on old CI images, we'll support it as a fallback mechanism.
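A hedged sketch of what that fallback can look like (the wrapper name and
call sites in the test suite are assumptions, but the library calls are real):
```
try:
    import fastjsonschema

    def validate_schema(schema, data):
        # fastjsonschema takes (definition, data)
        fastjsonschema.validate(schema, data)
except ImportError:
    import jsonschema

    def validate_schema(schema, data):
        # jsonschema takes (instance, schema=...)
        jsonschema.validate(data, schema=schema)
```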
If an annotation could not be resolved, it was classified as a "missing
import" and our configuration ignored it:
```
Skipping analyzing "mesonbuild.backends": module is installed, but missing library stubs or py.typed marker
```
As far as mypy is concerned, this library may or may not exist, but it
doesn't have any typing information at all (may need to be installed
first).
We ignored this because of our docs/ and tools/ third-party dependencies,
but we really should not. It is trivial to install them, and then
enforce that this "just works".
By enforcing it, we also make sure typos get caught.
GitHub Actions supports colorized output fine, but flake8/mypy misdetect
it. Even though pylint defaults to text instead of colorized output, we
might as well do the right thing here too.
This allows verifying that meson is type-safe under older versions of
Python, which it currently is. Different versions of Python sometimes
have different supported types for an API.
Verify this in CI.
(We flush output to ensure CI prints lines in the right order.)
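A minimal sketch of what checking under several interpreter versions can
look like, using mypy's Python API (the version range and file list are
illustrative):
```
import sys
from mypy import api

for minor in (7, 8, 9, 10, 11):
    report, errors, status = api.run(['--python-version', f'3.{minor}', 'mesonbuild/'])
    sys.stdout.write(report)
    sys.stdout.flush()  # flush so CI interleaves the lines in the right order
    if status != 0:
        sys.exit(status)
```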
On average, this saves 20 seconds for a job that may take 1.5 or 2
minutes, mostly by not recompiling the same 3 wheels again and again,
which avoids pointless CPU waste.
This has issues on Windows with msys2/cygwin, where we need to build it
ourselves since binary wheels for those platforms aren't available on
PyPI. And we don't have a rust compiler available for either one -- we may
not even be *able* to get one for cygwin?
For msys2, the solution is pretty easy, just rely on the official msys2
packages for jsonschema, which handle both it and its dependencies for
us and don't require us to compile anything. Currently they still have
an older jsonschema that doesn't use rust deps at all, but that's
because the new jsonschema was released today. We'll automatically catch
up at some point.
For cygwin, there is no rust compiler in the cygwin repository, and
jsonschema there is old as the hills. I do not know if there's a good
answer here, but an adequate answer is to cap jsonschema at the version
we were testing with yesterday.
In this case, we have the secret available, and the workflow ran even
though it wasn't on branch "master" because of the pull request trigger.
Since the change hasn't landed on master, though, we do not want to
update the website. So check for pushes to master, specifically.
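For illustration, the same guard expressed against the environment GitHub
Actions exposes (the real check belongs in the workflow's `if:` condition):
```
import os

is_master_push = (
    os.environ.get('GITHUB_EVENT_NAME') == 'push'
    and os.environ.get('GITHUB_REF') == 'refs/heads/master'
)
```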
cache/restore and cache/save now exist, and close the issue linked in
the workflow comment. The new save action runs when invoked, rather than
as a post action.
This fixes instability in the precise version of the tools, which
resulted in some runners getting a downgraded version and producing
spuriously fixed/reintroduced CodeQL alerts.
This is a no-op change from v2 to v3, but github complains that nodejs
is outdated if you don't upgrade. It's not obvious why this required a
major version bump...
However, half of our uses are still on v1, and upgrading those does get a
decent fix: failure to upload artifacts now constitutes a step failure.
Not much changes, really, other than it now sets PKG_CONFIG_PATH to
point to the python it just installed. This should generally not be a
problem (Meson's python module sets that anyway based on the
executable's introspection data).
lgtm.com was acquired by github. It is deprecated and on its way out,
because they've integrated the functionality into github itself. Take a
look at what its official replacement can do.
This does run as yet another Actions job, on top of an already fairly
excessive number, but the average runtime seems to be about 5 minutes, so
that's not too bad...