_Float16 support has been available on arm/aarch64 for a while, and with
gcc 12 it was enabled on x86 as long as SSE2 is supported.
If the target arch supports f16c, gcc emits fairly efficient assembly
taking advantage of it. This is the case on x86-64-v3 or higher.
The same goes for arm, which has native float16 support.
On x86 without f16c, gcc emulates _Float16 in software using SSE2
instructions, which has been shown to perform rather poorly:
_Float16 full SSE2 emulation:
frame=50074 fps=848 q=-0.0 size=N/A time=00:33:22.96 bitrate=N/A speed=33.9x
_Float16 f16c accelerated (Zen2, --cpu=znver2):
frame=50636 fps=1965 q=-0.0 Lsize=N/A time=00:33:45.40 bitrate=N/A speed=78.6x
classic half2float full software implementation:
frame=49926 fps=1605 q=-0.0 Lsize=N/A time=00:33:17.00 bitrate=N/A speed=64.2x
Hence an additional check was introduced that only enables the use of
_Float16 on x86 if f16c is being utilized.
On aarch64, a similar uplift in performance is seen:
RPi4 half2float full software implementation:
frame= 6088 fps=126 q=-0.0 Lsize=N/A time=00:04:03.48 bitrate=N/A speed=5.06x
RPi4 _Float16:
frame= 6103 fps=158 q=-0.0 Lsize=N/A time=00:04:04.08 bitrate=N/A speed=6.32x
Since arm/aarch64 always natively supports 16-bit floats, _Float16 can
always be considered fast there.
I'm not aware of any additional platforms that currently support
_Float16. And if there are, they should be considered non-fast until
proven fast.
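As a rough illustration of what the check amounts to (the actual
configure test and macro names differ; FLOAT16_IS_FAST is purely
hypothetical):

    #if defined(__x86_64__) || defined(__i386__)
    #    ifdef __F16C__
    #        define FLOAT16_IS_FAST 1  /* hardware vcvtph2ps/vcvtps2ph available */
    #    else
    #        define FLOAT16_IS_FAST 0  /* SSE2 soft emulation: slower than half2float */
    #    endif
    #elif defined(__aarch64__) || defined(__arm__)
    #    define FLOAT16_IS_FAST 1      /* native float16 support */
    #else
    #    define FLOAT16_IS_FAST 0      /* unknown platform: non-fast until proven fast */
    #endif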
IEEE-754 differentiates between two kinds of NaNs, quiet and signaling
ones. They are distinguished by the MSB of the mantissa.
For whatever reason, actual hardware conversion of half to single always
sets that bit to 1 if the mantissa is != 0, and to 0 if it's 0.
So our code has to follow suit, or FATE-testing hardware float16 will be
impossible.
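A minimal sketch of the conversion behaviour being matched, assuming the
usual IEEE-754 bit layouts (this is not the actual half2float code):

    #include <stdint.h>

    /* widen a half-precision NaN/Inf (exponent all ones) to single precision,
     * forcing the quiet bit the way hardware conversion does */
    static uint32_t half_naninf_to_float_bits(uint16_t h)
    {
        uint32_t sign     = (uint32_t)(h >> 15) << 31;
        uint32_t mantissa = h & 0x3FF;
        uint32_t f        = sign | 0x7F800000u | (mantissa << 13);
        if (mantissa)       /* NaN: MSB of the float mantissa forced to 1 */
            f |= 1u << 22;
        return f;           /* mantissa == 0: infinity, bit stays 0 */
    }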
Convert the input from a scatter to a gather instead,
which is faster and better for SIMD.
Also, add a pre-shuffled exptab version to avoid
gathering there at all. This doubles the exptab size,
but the speedup makes it worth it. In SIMD, the
exptab will likely be purged to a higher cache
anyway because of the FFT in the middle, and
the number of loads stays identical.
For a 960-point inverse MDCT, the speedup is 10%.
This makes it possible to write sane and fast SIMD
versions of inverse MDCTs.
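In generic terms, the access-pattern change is the following
(illustration only, not the actual MDCT code):

    /* scatter: sequential reads, indexed writes -- the old pattern */
    static void reorder_scatter(float *tmp, const float *in, const int *map, int n)
    {
        for (int i = 0; i < n; i++)
            tmp[map[i]] = in[i];
    }

    /* gather: indexed reads, sequential writes -- much friendlier to SIMD */
    static void reorder_gather(float *tmp, const float *in, const int *map, int n)
    {
        for (int i = 0; i < n; i++)
            tmp[i] = in[map[i]];
    }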
In oneVPL, MFXLoad() and MFXCreateSession() are required to create a
workable mfx session [1].
Add config filters for D3D9/D3D11 session (galinart)
The default device is changed to d3d11va for oneVPL when both d3d11va
and dxva2 are enabled on Microsoft Windows
This is in preparation for oneVPL support
[1] https://spec.oneapi.io/versions/latest/elements/oneVPL/source/programming_guide/VPL_prg_session.html#onevpl-dispatcher
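A bare-bones sketch of that dispatcher flow (error handling mostly
omitted, and the D3D9/D3D11 config filter properties are not shown; see
[1] for the full sequence):

    #include <vpl/mfxdispatcher.h>   /* assumed oneVPL header locations */
    #include <vpl/mfxvideo.h>

    static int create_session_sketch(void)
    {
        mfxLoader  loader  = MFXLoad();
        mfxConfig  cfg     = MFXCreateConfig(loader);
        mfxSession session = NULL;

        (void)cfg;  /* D3D9/D3D11 filter properties would be set on cfg here */
        if (MFXCreateSession(loader, 0, &session) != MFX_ERR_NONE) {
            MFXUnload(loader);
            return -1;
        }
        /* ... use the session ... */
        MFXClose(session);
        MFXUnload(loader);
        return 0;
    }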
Co-authored-by: galinart <artem.galin@intel.com>
Signed-off-by: galinart <artem.galin@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The following Cflags line has been added to libmfx.pc, so the mfx/ prefix
is no longer needed when including mfx headers in FFmpeg:
Cflags: -I${includedir} -I${includedir}/mfx
Some old versions of libmfx have the following Cflags in libmfx.pc:
Cflags: -I${includedir}
We may add -I${includedir}/mfx to CFLAGS when running 'configure
--enable-libmfx' for old versions of libmfx; if so, mfx headers without
the mfx/ prefix can be included too.
If libmfx comes without pkg-config support, we may make a small change to
the environment settings (e.g. set -I/opt/intel/mediasdk/include/mfx
instead of -I/opt/intel/mediasdk/include in CFLAGS) so that the build can
find the mfx headers without the mfx/ prefix.
After applying this change, we won't need to change the #include
directives for mfx headers when they are installed under a new directory.
This is in preparation for oneVPL support (mfx headers in oneVPL are
installed under vpl directory)
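For illustration, with the extra include path in place the unprefixed
form is sufficient:

    #include <mfxvideo.h>         /* works once -I${includedir}/mfx is in CFLAGS */
    /* #include <mfx/mfxvideo.h>     old form, needing only -I${includedir}      */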
Poisoning returned buffers is based on the implicit assumption
that the contents of said buffers are transient. Yet this is not true
for the buffer pools used by the various hardware contexts, which store
important state in there that needs to be preserved.
Furthermore, the current code is also based on the assumption
that the complete buffer pointed to by AVBuffer->data coincides with
AVBufferRef->data; yet an implementation might store some data of its
own before the actual user-visible data (accessible via AVBufferRef)
which would be broken by the current code.
(This is of course yet more proof that the AVBuffer API is not the right
tool for the hardware contexts.)
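Purely as a conceptual illustration of the second point (this is not
FFmpeg's actual layout), an implementation could keep private
bookkeeping ahead of the user-visible bytes:

    #include <stdint.h>

    struct pool_entry {
        struct pool_entry *next;   /* private state: must never be poisoned   */
        uint8_t            data[]; /* what the returned data pointer points at */
    };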
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Directly branch into the special 64-point deinterleave
subroutine rather than going through the general deinterleave.
64-point transform timings on Zen 3:
Before:
1974 decicycles in av_tx (fft),16776864 runs, 352 skips
After:
1956 decicycles in av_tx (fft),16775378 runs, 1838 skips
__lasx_xvldx does not accept a pointer to const (in fact,
no function in lasxintrin.h does so), although it is not allowed
to modify the pointed-to buffer. Therefore this commit adds a wrapper
for it in order to constify the H264Chroma API in a later commit.
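The wrapper boils down to something like this (illustrative name; the
point is accepting const and casting it away for the intrinsic, which
only reads through the pointer):

    #include <stdint.h>
    #include <lasxintrin.h>

    static inline __m256i xvldx_c(const void *src, long offset)
    {
        /* safe: the load intrinsic never writes through src + offset */
        return __lasx_xvldx((void *)(uintptr_t)src, offset);
    }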
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
__lsx_vldx does not accept a pointer to const (in fact,
no function in lsxintrin.h does so), although it is not allowed
to modify the pointed-to buffer. Therefore this commit adds a wrapper
for it in order to constify the HEVC DSP functions in a later commit.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The "AYUV" format is defined by Microsoft as their preferred format for
4:4:4 content, and so it is the format used by Intel VAAPI and QSV.
As Microsoft likes to define its byte ordering in little-endian
fashion, the memory order is reversed, and so our pix_fmt, which
follows memory order, has a reversed name (VUYA).
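Concretely, the packing described above is (illustrative helper, not
code from the tree):

    #include <stdint.h>

    /* pack one 4:4:4 sample into the layout described above */
    static void pack_vuya(uint8_t dst[4], uint8_t y, uint8_t u, uint8_t v, uint8_t a)
    {
        dst[0] = v;  /* lowest address  */
        dst[1] = u;
        dst[2] = y;
        dst[3] = a;  /* highest address */
        /* viewed as a little-endian 32-bit word, A is the most significant
         * byte, which is how Microsoft arrives at the name "AYUV". */
    }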
The only duration field currently present in AVFrame is pkt_duration,
which is semantically restricted to those frames that are output by
decoders.
Add a new field that stores the frame's duration without regard for how
that frame was produced. Deprecate pkt_duration.
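During the transition, consumers would read it roughly like this (a
sketch; frame is an AVFrame pointer and FF_API_PKT_DURATION is assumed
to be the deprecation guard):

    int64_t dur = frame->duration;      /* new field, set regardless of origin */
    #if FF_API_PKT_DURATION             /* assumed deprecation guard           */
        if (!dur)
            dur = frame->pkt_duration;  /* deprecated fallback                 */
    #endif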
av_fast_realloc() and av_fast_mallocz() store the size of
the objects they allocate in an unsigned. Yet they overallocate
and currently they can allocate more than UINT_MAX bytes
in case a user has requested a size of about UINT_MAX * 16 / 17
or more if SIZE_MAX > UINT_MAX (and if the user increased
max_alloc_size via av_max_alloc()). In this case it is impossible
to store the true size of the buffer via the unsigned*;
future requests are likely to use the (re)allocation codepath
even if the buffer is actually large enough because of
the incorrect size.
Fix this by ensuring that the actually allocated size
always fits into an unsigned. (This entails erroring out
in case the user requested more than UINT_MAX.)
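To see where the UINT_MAX * 16 / 17 threshold comes from, assuming the
usual grow-by-roughly-1/16 overallocation policy:

    request   = UINT_MAX * 16 / 17 = 4042322160   (UINT_MAX = 4294967295)
    allocated ~ request * 17 / 16  = 4294967295   = UINT_MAX

Anything slightly larger therefore overallocates past UINT_MAX, and the
unsigned size can no longer represent the real buffer size.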
Reviewed-by: Tomas Härdin <tjoppen@acc.umu.se>
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The doxy for av_channel_layout_describe() states that the user should look
at the return value to check if the string was truncated. Returning an error
code in this scenario goes against this and is an API break.
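For reference, the documented usage pattern is roughly (a sketch; on
success the return value is the number of bytes needed, so a value
larger than the buffer size signals truncation):

    AVChannelLayout layout = AV_CHANNEL_LAYOUT_STEREO;
    char buf[64];
    int ret = av_channel_layout_describe(&layout, buf, sizeof(buf));
    if (ret > (int)sizeof(buf)) {
        /* truncated: ret is the buffer size that would have been needed */
    } else if (ret < 0) {
        /* a genuine error */
    }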
A proper fix for the timeout was applied to the Matroska demuxer in 94901a9518.
This reverts commit 8154cb7c2f.
When compiling decklink, this header is included from
a C++ file (albeit inside 'extern "C"') and this
causes compilation failures because of an implicit
void* -> char* conversion. So add an explicit cast.
Fixes ticket #9819.
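The change amounts to a cast of this kind (illustrative line, not the
actual one from the header; frame_size is a placeholder):

    char *buf = (char *)av_malloc(frame_size);  /* was: char *buf = av_malloc(frame_size); */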
Reviewed-by: Martin Storsjö <martin@martin.st>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Use the proper header for the PPC CPU detection code. sys/param.h includes
sys/types.h, but sys/types.h is the more appropriate header to be used
here.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
x64 always has MMX, MMXEXT, SSE and SSE2 and this means
that some functions for MMX, MMXEXT, SSE and 3dnow are always
overridden by other functions (unless one e.g. explicitly
disables SSE2). So given that the only systems which benefit
from ff_vector_fmul_window_3dnowext are truly ancient 32-bit
AMD x86s, it is removed.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
x64 always has MMX, MMXEXT, SSE and SSE2 and this means
that some functions for MMX, MMXEXT, SSE and 3dnow are always
overridden by other functions (unless one e.g. explicitly
disables SSE2). So given that the only systems which benefit
from the 8x8 MMX (overridden by MMXEXT) or the 16x16 MMXEXT
(overridden by SSE2) versions are truly ancient 32-bit x86s, they are removed.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
wchartoutf8() converts strings returned by WinAPI into UTF-8,
which is FFmpeg's preferred encoding.
Some external dependencies, such as AviSynth, are still
not Unicode-enabled. utf8toansi() converts UTF-8 strings
into ANSI in two steps: UTF-8 -> wchar_t -> ANSI.
wchartoansi() is responsible for the second step of the conversion.
Conversion in just one step is not supported by WinAPI.
Since these character-converting functions allocate a buffer of the
necessary size, they also facilitate the removal of the MAX_PATH limit
in places where fixed-size ANSI/WCHAR strings were used
as filename buffers.
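Roughly, utf8toansi() chains the existing and the new helper like this
(a sketch assuming the usual allocate-and-return-int convention of
these helpers):

    static int utf8toansi_sketch(const char *src, char **dst)
    {
        wchar_t *w = NULL;
        int ret = utf8towchar(src, &w);  /* step 1: UTF-8 -> wchar_t      */
        if (ret < 0 || !w)
            return ret;
        ret = wchartoansi(w, dst);       /* step 2: wchar_t -> ANSI (new) */
        av_free(w);
        return ret;
    }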
On Windows, getenv_utf8() wraps _wgetenv() converting its input from
and its output to UTF-8. Strings returned by getenv_utf8()
must be freed by freeenv_utf8().
On all other platforms getenv_utf8() is a wrapper around getenv(),
and freeenv_utf8() is a no-op.
The value returned by plain getenv() cannot be modified;
av_strdup() is usually used when modifications are required.
However, on Windows, av_strdup() after getenv_utf8() leads to an
unnecessary allocation. getenv_dup() is introduced to avoid
such an allocation. The value returned by getenv_dup() must be freed
with av_free().
Because of cleanup complexities, in places that only test the existence
of an environment variable or compare its value with a string
consisting entirely of ASCII characters, the use of plain getenv()
is still preferred. (libavutil/log.c check_color_terminal()
is an example of such a place.)
Plain getenv() is also preferred in UNIX-only code,
such as bktr.c, fbdev_common.c, oss.c in libavdevice
or af_ladspa.c in libavfilter.
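Typical usage then looks like this (sketch; FFREPORT is just an example
variable):

    char *val = getenv_utf8("FFREPORT");    /* UTF-8 on every platform          */
    if (val) {
        /* ... read-only use of val ... */
        freeenv_utf8(val);                  /* no-op outside Windows            */
    }

    char *copy = getenv_dup("FFREPORT");    /* when a modifiable copy is needed */
    /* ... modify and use copy ... */
    av_free(copy);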
Signed-off-by: Martin Storsjö <martin@martin.st>
For SSE2 and SSE3, there are four states that the two flags
involved (AV_CPU_FLAG_SSE[23] and AV_CPU_FLAG_SSE[23]SLOW) can convey.
When ordered from worst to best they are:
1. both flags unset (SSE[23] unavailable)
2. the slow flag set, the ordinary flag unset (this is designed
for cases where SSE2 is available, but so slow that MMX(EXT)/SSE
code is usually faster)
3. both flags set (SSE2 is available, but there might be scenarios
where MMX(EXT)/SSE code is faster)
4. the ordinary flag set, the slow flag unset (this is the normal case)
The ordinary macros for checking cpuflags return true
in the latter two cases; the fast macros only return true for
the latter case. Yet the macros to check for slow currently
only return true in case three.
This seems unintended. In fact, the only uses of the slow macros
are all of the form
if (EXTERNAL_SSE2(cpu_flags) || EXTERNAL_SSE2_SLOW(cpu_flags))
where the check for EXTERNAL_SSE2_SLOW is completely redundant.
Even more importantly, it is not what was intended. Before
6369ba3c9c, the checks passed
in cases 2 to 4. Said commit changed this to something that
only passes for the third case. Commits
7fb758cd8e and
c1913064e3 restored the old behaviour,
yet merging 4efab89332 (in commit
ac774cfa57) broke this again
by changing it to what it is now.*
This commit changes the macros to make the slow macros check
whether a specific instruction is supported, even if slow.
This restores the intended meaning to all uses of the SLOW macros
and is generally more natural.
*: Libav only checks for EXTERNAL_SSE2_SLOW, i.e. for the third case
only.
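In terms of the raw flags, the intended semantics can be summarized like
this (simplified; the real EXTERNAL_*/INLINE_* macros also check for
assembler availability):

    /* case 4 only: available and not flagged slow */
    #define SSE2_FAST(f) (((f) & (AV_CPU_FLAG_SSE2 | AV_CPU_FLAG_SSE2SLOW)) == AV_CPU_FLAG_SSE2)
    /* cases 3 and 4: the ordinary check */
    #define SSE2(f)       ((f) & AV_CPU_FLAG_SSE2)
    /* cases 2 to 4: available, possibly slow -- what the SLOW macros now mean */
    #define SSE2_SLOW(f)  ((f) & (AV_CPU_FLAG_SSE2 | AV_CPU_FLAG_SSE2SLOW))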
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is more spec-compliant because it does not rely
on dead-code elimination by the compiler. Especially
MSVC has problems with this, as can be seen in
https://ffmpeg.org/pipermail/ffmpeg-devel/2022-May/296373.html
or
https://ffmpeg.org/pipermail/ffmpeg-devel/2022-May/297022.html
This commit does not eliminate every instance where we rely
on dead code elimination: It only tackles branching to
the initialization of arch-specific dsp code, not e.g. all
uses of CONFIG_ and HAVE_ checks. But maybe it is already
enough to compile FFmpeg with MSVC with whole-program optimization
enabled (if one does not disable too many components).
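The pattern being replaced looks like this (illustrative;
ff_foodsp_init_x86() is a placeholder name):

    /* before: relies on the compiler eliminating the dead call */
    if (ARCH_X86)
        ff_foodsp_init_x86(c);

    /* after: the reference disappears at preprocessing time */
    #if ARCH_X86
        ff_foodsp_init_x86(c);
    #endif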
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Those always show up on Patchwork when FATE tests are failing,
covering up some possibly more useful information.
The volatile keyword was used as a workaround for an eight-year-old
clang version.
Signed-off-by: softworkz <softworkz@hotmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Normally, both the source and dest frame would have only the old API fields
set, only the new API fields set, or both set. But in some cases, like when
calling av_frame_ref() using a non-reference-counted source frame where only
the old channel layout API fields were populated, the result would be the dst
frame having both the new and old fields populated.
This commit takes this into account and fixes the checks by calling
av_channel_layout_compare() only if the source frame has the new API fields
set, and by doing sanity checks on the source frame's old API fields if the
new ones are not set.
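The fixed logic is roughly of this shape (simplified sketch, not the
exact code; src and dst are AVFrame pointers):

    if (src->ch_layout.nb_channels) {
        /* source uses the new API: compare the new-style layouts */
        if (av_channel_layout_compare(&dst->ch_layout, &src->ch_layout))
            return AVERROR(EINVAL);
    } else if (src->channels != dst->channels ||
               src->channel_layout != dst->channel_layout) {
        /* source only filled the deprecated fields: sanity-check those */
        return AVERROR(EINVAL);
    }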
Signed-off-by: James Almer <jamrial@gmail.com>
This commit moves some of the functionality from avfilter/colorspace
into avutil/csp and exposes it as a public API so it can be used by
libavcodec and/or libavformat. It also converts those structs from
double values to AVRational to make regression testing easier and
more consistent.
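Conceptually, the struct conversion is this kind of change (names
approximate the involved structs and may differ in detail):

    /* before (avfilter/colorspace): floating point */
    struct LumaCoefficients   { double     cr, cg, cb; };

    /* after (avutil/csp): exact rationals, easier to regression-test */
    struct AVLumaCoefficients { AVRational cr, cg, cb; };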
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>