Motivated by a desire to use vf_libplacebo as a GPU-accelerated
cropping/padding/zooming filter, this commit adds support for setting
the `input/target.crop` fields as dynamic expressions.
Re-use the same generic variables available to other scale- and crop-type
filters, and also add a few more that we can now afford because these
properties can be set dynamically.
It's worth pointing out that `out_t/ot` is currently redundant with
`in_t/t` since it will always contain the same PTS values, but I plan on
changing this in the near future.
I decided to also expose `crop_w/crop_h` and `pos_w/pos_h` as variables
in the expression parser itself, since this enables the fairly common
use case of determining dimensions first and then placing the image
appropriately, as is done by the default behavior, which centers the
cropped/placed region.
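For example (a sketch; the option names are assumed to mirror the variables
described above, and `iw`/`ih` are the usual input-size variables), a centered
half-size crop could be written as:

    libplacebo=crop_w=iw/2:crop_h=ih/2:crop_x=(iw-crop_w)/2:crop_y=(ih-crop_h)/2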
In some circumstances, libplacebo will clear the background as a result
of cropping/padding. Currently, this uses the hard-coded default fill
color of black. Add an option to make this color configurable.
As with the earlier bswap change, all versions of GCC and Clang that
support RISC-V support the popcount built-ins, so we can just use them
instead of inline assembler.
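For reference, deferring to the built-ins amounts to something like this
(a minimal sketch, not the exact patch):

    #include <stdint.h>

    static inline int popcount32(uint32_t x)
    {
        /* Lowered to a single cpop-style instruction when Zbb is enabled,
         * otherwise to a generic fallback. */
        return __builtin_popcount(x);
    }

    static inline int popcount64(uint64_t x)
    {
        return __builtin_popcountll(x);
    }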
av_bswapXX() are used in contexts that expect exact-size types, notably
variable arguments to av_log(). On Linux RV64, uint_fast32_t is an
unsigned long, so the current inline assembler does not work properly.
Since GCC and Clang gained their byte-swap built-ins before they
supported RISC-V, we can simply defer to them. As an added bonus, the
compiler can do instruction scheduling, which it couldn't with the Zbb
inline assembler.
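Sketched in terms of the built-ins (illustrative only; the point is the
exact-width argument and return types):

    #include <stdint.h>

    static inline uint32_t bswap32(uint32_t x)
    {
        /* Exact 32-bit result, safe to pass through av_log() varargs;
         * the compiler can schedule whatever Zbb sequence it emits. */
        return __builtin_bswap32(x);
    }

    static inline uint64_t bswap64(uint64_t x)
    {
        return __builtin_bswap64(x);
    }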
Currently those are set in different ways depending on whether the
stream is decoded or not, using some values from the decoder if it is.
This is wrong, because there may be an arbitrary amount of delay between
input packets and output frames (depending e.g. on the thread count when
frame threading is used).
Always use the path that was previously used only for streamcopy. This
should not cause any issues, because these values are now used only for
streamcopy and discontinuity handling.
This change will allow decoupling discontinuity processing from decoding
and moving it to ffmpeg_demux. It also makes the code simpler.
Changes output in fate-cover-art-aiff-id3v2-remux and
fate-cover-art-mp3-id3v2-remux, where attached pictures are now written
in the correct order. This happens because InputStream.dts is no longer
reset to AV_NOPTS_VALUE after decoding, so streamcopy actually sees
valid dts values.
This was added in 380db56928 as a
temporary crutch that is not needed anymore. The only case where this
code can be triggered is the very first frame, for which InputStream.pts
is always equal to 0.
Stop using InputStream.dts for generating missing timestamps for decoded
frames, because it contains pre-decoding timestamps and there may be an
arbitrary amount of delay between input packets and output frames (e.g.
depending on the thread count when frame threading is used). It is also
in AV_TIME_BASE (i.e. microseconds), which may introduce unnecessary
rounding issues.
The new code maintains a timebase that is the inverse of the LCM of all
the sample rates seen so far, and thus can accurately represent every audio
sample. This timebase is used to generate missing timestamps after
decoding.
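The bookkeeping amounts to roughly the following (a simplified sketch using
av_gcd(); the function name and overflow handling are illustrative):

    #include <stdint.h>
    #include "libavutil/mathematics.h"
    #include "libavutil/rational.h"

    /* Fold a newly seen sample rate into the timebase so that its
     * denominator remains the LCM of all sample rates seen so far. */
    static AVRational fold_sample_rate(AVRational tb, int sample_rate)
    {
        int64_t lcm;
        if (tb.den == 0)
            return (AVRational){ 1, sample_rate };
        lcm = tb.den / av_gcd(tb.den, sample_rate) * sample_rate;
        return (AVRational){ 1, (int)lcm }; /* overflow check omitted */
    }

With e.g. 44100 Hz and 48000 Hz inputs this yields 1/7056000, in which one
44100 Hz sample is exactly 160 units and one 48000 Hz sample exactly 147.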
Changes the results of the following FATE tests:
* pcm_dvd-16-5.1-96000
* lavf-smjpeg
* adpcm-ima-smjpeg
In all of these the timestamps now better correspond to actual frame
durations.
If input packets have timestamps, they will be propagated to output
frames by the decoder, so at best this block does not do anything.
There can also be an arbitrary amount of delay between packets sent to
the decoder and decoded frames (e.g. due to the decoder's intrinsic delay or
frame threading), so deriving any timestamps from packet properties is
wrong.
Use an audio timebase that is fine enough to represent all DV audio
sample rates. Audio packet durations are now sample-accurate.
This largely undoes commit 76fbb0052d. To avoid reintroducing the issue
fixed by that commit, resync audio timestamps against video if they drift
apart by more than one frame. The sample from
issue #8762 still works correctly after this commit.
Slightly changes the results of the lavf-dv seektest, due to the audio
timebase being more granular.
Current code will call avpriv_set_pts_info() for each video frame,
possibly setting a different timebase if the stream framerate changes.
This violates API conventions, as the timebase is supposed to stay
constant after stream creation.
Change the demuxer to set a single timebase that is fine enough to
handle all supported DV framerates.
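As an illustration (the concrete value is an example, not necessarily the one
this commit picks), one fixed timebase that is fine enough for all DV frame
rates could be set once at stream creation:

    #include "libavformat/avformat.h"
    #include "libavformat/internal.h"

    /* Hypothetical helper: a 1/60000 timebase represents every DV frame
     * duration exactly:
     *   25 fps         -> 2400 ticks per frame
     *   30000/1001 fps -> 2002 ticks per frame
     *   50 fps         -> 1200 ticks per frame
     *   60000/1001 fps -> 1001 ticks per frame */
    static void dv_set_video_timebase(AVStream *st)
    {
        avpriv_set_pts_info(st, 64, 1, 60000);
    }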
The seek tests change slightly because the new timebase is more
granular.
Fixes: signed integer overflow: -2147483648 - 5 cannot be represented in type 'int'
Fixes: 58066/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_HEVC_fuzzer-5312995835379712
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This makes the worst case much faster.
Fixes: Timeout
Fixes: 51363/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_BONK_fuzzer-5660734784143360
Fixes: 57957/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_BONK_fuzzer-5874095467397120
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This allows using codecs in builds that have a parser but no decoder,
for remuxing scenarios with raw sources.
Signed-off-by: James Almer <jamrial@gmail.com>