This was added alongside the original SSE(one) DSP function in
0a68cd876e without rationale. It was
presumably faster on x87, which is no longer relevant since we pretty
much assume SSE2 or later on x86.
Meanwhile, this function is ~2.5x slower than the normal floating-point
one on SiFive-U74.
Given that source and destination can alias, the compiler was forced to
perform each read-modify-write sequentially. We cannot use the `restrict`
qualifier to avoid this here because the AC-3 encoder uses the function
in-place. Instead, this commit provides an explicit guarantee to the
compiler that batches of 8 elements will not overlap, so that it can
interleave calculations.
In practice, contemporary optimising compilers are able to unroll and keep
the temporary array in FPU registers (without spilling).
On SiFive-U74, this speeds up the same-signs branch by 4x and the
opposite-signs branch by 1.5x.
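A rough sketch of the pattern (hypothetical function, not the actual DSP
routine; assumes len is a multiple of 8):

    /* src and dst may be the exact same buffer (the AC-3 encoder calls
     * this in-place), so `restrict` cannot be used.  Reading a whole
     * batch of 8 elements into a local array before writing any of them
     * back guarantees the accesses within a batch do not interfere,
     * letting the compiler interleave the arithmetic. */
    static void scale_batched(float *dst, const float *src, float scale,
                              int len)
    {
        for (int i = 0; i < len; i += 8) {
            float tmp[8];
            for (int j = 0; j < 8; j++)
                tmp[j] = src[i + j] * scale;
            for (int j = 0; j < 8; j++)
                dst[i + j] = tmp[j];
        }
    }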
The current code assumes that rows are unaligned, which hurts on
platforms with slower unaligned accesses. (Also, aligning them lets the
compiler unroll on its own, which it seems to do in practice.)
This preserves T1 whilst calling the instrumented function. In a Sci-Fi
setting where type-based Control Flow Integrity (CFI) is supported, the
calling code (i.e., the `checkasm` test case) will set T1 to the expected
value of the landing pad label (LPL) of the instrumented function.
The call wrapper will always use LPL zero, which is a wildcard. We should
preserve the value of T1 at least until the indirect call to the
instrumented function. Of course this is Sci-Fi, because:
1) there is no hardware (or even QEMU) support yet,
2) all our assembler functions currently use LPL zero anyway.
This uses T3 rather than T2 because indirect branches via T2 are reserved
for notionally direct calls made with an indirect call instruction (e.g.
due to GOT indirection) and are exempted from forward-edge CFI checks.
The previous fix was not sufficient.
To make things easier to reason about, split the function and
add the guards there instead of further complicating the call site.
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Frame threading in the FFV1 decoder works in a very unusual way - the
state that needs to be propagated from the previous frame is not decoded
pixels(¹), but each slice's entropy coder state after decoding the slice.
For that purpose, the decoder's update_thread_context() callback stores
a pointer to the previous frame thread's private data. Then, when
decoding each slice, the frame thread uses the standard progress
mechanism to wait for the corresponding slice in the previous frame to
be completed, then copies the entropy coder state from the
previously-stored pointer.
This approach is highly dubious, as update_thread_context() should be
the only point where frame-thread contexts come into direct contact.
There are no guarantees that the stored pointer will be valid at all, or
will contain any particular data after update_thread_context() finishes.
More specifically, this code can break due to the fact that keyframes
reset entropy coder state and thus do not need to wait for the previous
frame. As an example, consider a decoder process with 2 frame threads -
thread 0 with its context 0, and thread 1 with context 1 - decoding a
previous frame P, current frame F, followed by a keyframe K. Then
consider concurrent execution consistent with the following sequence of
events:
* thread 0 starts decoding P
* thread 0 reads P's slice header, then calls
ff_thread_finish_setup() allowing the next frame thread to start
* main thread calls update_thread_context() to transfer state from
context 0 to context 1; context 1 stores a pointer to context 0's private
data
* thread 1 starts decoding F
* thread 1 reads F's slice header, then calls
ff_thread_finish_setup() allowing the next frame thread to start
decoding
* thread 0 finishes decoding P
* thread 0 starts decoding K; since K is a keyframe, it does not
wait for F and reallocates the arrays holding entropy coder state
* thread 0 finishes decoding K
* thread 1 reads entropy coder state from its stored pointer to context
0, but it finds state from K rather than from P
This execution is currently prevented by special-casing FFV1 in the
generic frame threading code, however that is supremely ugly. It also
involves unnecessary copies of the state arrays, when in fact they can
only be used by one thread at a time.
This commit addresses these deficiencies by changing the array of
PlaneContext (each of which contains the allocated state arrays)
embedded in FFV1SliceContext into a RefStruct object. This object can
then be propagated across frame threads in the standard manner. Since the
code structure guarantees only one thread accesses it at a time, no
copies are necessary. It is also re-created for keyframes, solving the
above issue cleanly.
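A rough sketch of the idea, assuming the ff_refstruct_*() helpers; field
and function names below are illustrative, not the actual FFV1 code:

    typedef struct PlaneContext {
        /* entropy coder state arrays etc. elided */
        uint8_t (*state)[32];
    } PlaneContext;

    typedef struct FFV1SliceContext {
        PlaneContext *plane;  /* RefStruct object instead of an embedded array */
    } FFV1SliceContext;

    /* In update_thread_context(): take a reference to the previous
     * thread's object instead of storing a raw pointer into its private
     * data.  Only one thread uses the object at a time, so no copy. */
    static void propagate_slice_state(FFV1SliceContext *dst,
                                      const FFV1SliceContext *src)
    {
        ff_refstruct_replace(&dst->plane, src->plane);
    }

    /* Keyframes reset the entropy coder state, so a fresh object is
     * created and the reference to the old one is dropped. */
    static int reset_slice_state(FFV1SliceContext *sc, int plane_count)
    {
        ff_refstruct_unref(&sc->plane);
        sc->plane = ff_refstruct_allocz(plane_count * sizeof(*sc->plane));
        return sc->plane ? 0 : AVERROR(ENOMEM);
    }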
Special-casing of FFV1 in the generic frame threading code will be
removed in a later commit.
(¹) except in the case of a damaged slice, when previous frame's pixels
are used directly
In all cases except decoding version 1, it is either not used or contains
a copy of a table from quant_tables, which we can just as well use
directly.
When decoding version 1, we can just as well decode into
quant_tables[0], which would otherwise be unused.
The FFV1 decoder and encoder currently use the same struct - FFV1Context -
both as codec private data and as per-slice context. For this purpose
FFV1Context contains an array of pointers to per-slice FFV1Context
instances.
This pattern is highly confusing, as it is not clear which fields are
per-slice and which per-codec.
Address this by adding a new struct storing only per-slice data. Start
by moving slice_{x,y,width,height} to it.
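For illustration, the new struct initially carries just those fields
(presumably extended by later commits):

    typedef struct FFV1SliceContext {
        int slice_width;
        int slice_height;
        int slice_x;
        int slice_y;
    } FFV1SliceContext;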
Fixes: out of array access
Fixes: 70741/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SNOW_fuzzer-5703668010647552
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The snow encoder uses block-based motion estimation, which can read out of
the array if insufficient alignment is used.
It may be better to only apply this for the encoder, as it would save a few
bytes of memory for the decoder. Until then, this fixes the issue in a
simple way.
Fixes: out of array access
Fixes: 68963/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SNOW_fuzzer-4979988435632128
Fixes: 68969/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SNOW_fuzzer-6239933667803136.fuzz
Fixes: 70497/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SNOW_fuzzer-5751882631413760
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: division by zero
Fixes: 70561/clusterfuzz-testcase-minimized-ffmpeg_IO_DEMUXER_fuzzer-6199435013455872
Fixes: 70565/clusterfuzz-testcase-minimized-ffmpeg_dem_MOV_fuzzer-5783790316748800
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Callers of ff_framesync_get_frame() generally do not expect the result
to be writable; those that do (e.g. ff_framesync_dualinput_get_writable())
ensure writability themselves.
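For illustration, a filter that does modify the frame should request
writability explicitly; a minimal sketch (the function name is made up,
not taken from an actual filter):

    static int do_blend(FFFrameSync *fs)
    {
        AVFilterContext *ctx = fs->parent;
        AVFrame *mainpic, *second;
        int ret;

        /* ask for a writable main frame instead of assuming
         * ff_framesync_get_frame() returned one */
        ret = ff_framesync_dualinput_get_writable(fs, &mainpic, &second);
        if (ret < 0)
            return ret;
        /* ... modify mainpic in place using second ... */
        return ff_filter_frame(ctx->outputs[0], mainpic);
    }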
Significantly reduces memory consumption in complex graphs with
framesync-based filters (e.g. scale, ssim).
Reported-By: Mark Shwartzman