There is no reason to postpone it until opening the encoder. Also, abort
when the input stream is unknown, rather than disregard an explicit
request from the user.
The code currently adds a small offset to avoid exact midpoints, but
this can cause inexact results when a float timestamp is exactly
representable as an integer.
Fixes an off-by-one error in the first frame duration in multiple FATE tests.
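As an illustration of the failure mode, here is a minimal C sketch (the
bias term and the 90 kHz timebase are assumptions for illustration, not
the actual fftools code): a bias added to dodge exact midpoints also
perturbs timestamps that are already exact integers, and a later
rescaling step can turn that into an off-by-one.

    /* Hedged sketch; bias and timebase are illustrative only. */
    #include <math.h>
    #include <stdint.h>

    static int64_t scale_biased(double ts)
    {
        ts += (ts >= 0 ? 1.0 : -1.0) / (1 << 17); /* midpoint-avoidance bias */
        return llrint(ts * 90000);                /* rescale to 90 kHz */
    }

    /* For ts == 7.0, which is exactly representable as an integer, the
     * biased value scales to 630000.686..., rounding to 630001 instead
     * of the exact 630000. */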
Fixes: signed integer overflow: 9079256848778919936 - -288230376151711746 cannot be represented in type 'long'
Fixes: 58248/clusterfuzz-testcase-minimized-ffmpeg_dem_OGG_fuzzer-6326851353313280
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: -182838 * 32768 cannot be represented in type 'int'
Fixes: 58179/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_RKA_fuzzer-5333265899978752
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: left shift of 538976288 by 13 places cannot be represented in type 'int'
Fixes: 56148/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_RKA_fuzzer-6257370708967424
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
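Across the three overflow reports above, a hedged C sketch of the
typical remedies (the constants are copied from the reports; the actual
patches may differ): do the arithmetic in a wider or unsigned type, or
saturate.

    #include <stdint.h>
    #include <libavutil/common.h>

    static void overflow_fixes(void)
    {
        /* Subtraction: saturate instead of overflowing int64_t. */
        int64_t diff = av_sat_sub64(9079256848778919936, -288230376151711746);

        /* Multiplication: widen one operand before multiplying. */
        int64_t scaled = -182838 * (int64_t)32768;

        /* Left shift: do it in unsigned arithmetic to sidestep signed
         * overflow. */
        unsigned shifted = (unsigned)538976288 << 13;

        (void)diff; (void)scaled; (void)shifted;
    }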
In certain use cases, controlling the maximum frame size is critical. An
example is when transmitting AAC packets over Bluetooth A2DP.
While the spec allows packets to be fragmented (though this is not
recommended), in practice most headsets neither recognize nor
reassemble such packets.
In this patch, we allow setting `bit_rate_tolerance` to 0 to indicate
that the specified bit rate should be treated as an upper bound at the
frame level.
Signed-off-by: Jeremy Wu <jrwu@chromium.org>
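As an illustration from the API side, a minimal sketch (all encoder
parameters other than bit_rate_tolerance are assumptions):

    #include <libavcodec/avcodec.h>

    static AVCodecContext *open_aac_capped(void)
    {
        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_AAC);
        AVCodecContext *ctx  = codec ? avcodec_alloc_context3(codec) : NULL;

        if (!ctx)
            return NULL;
        ctx->sample_rate        = 48000;
        ctx->ch_layout          = (AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO;
        ctx->sample_fmt         = AV_SAMPLE_FMT_FLTP;
        ctx->bit_rate           = 256000; /* sized to fit one A2DP packet */
        ctx->bit_rate_tolerance = 0;      /* bit_rate is a hard upper bound */
        if (avcodec_open2(ctx, codec, NULL) < 0)
            avcodec_free_context(&ctx);
        return ctx;
    }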
Today, cuda is not able to import multiplane images, and cuda requires
images to be imported whether you are trying to import to cuda or
export from cuda (in the latter case, the image is imported and the
frame is then copied into it on the cuda side). So any interop between
cuda and vulkan requires that multiplane be disabled.
The existing option for this is not sufficient, because when deriving
devices it is not possible to specify any options.
And it is necessary to derive the Vulkan device, because any pipeline
that involves uploading from cuda to vulkan and then back to cuda must
use the same cuda context on both sides, and the only way to propagate
the cuda context all the way through is to derive the device at each
stage.
i.e.:
-vf hwupload=derive_device=vulkan,<filters>,hwupload=derive_device=cuda
Added support for more VideoToolbox encoder options:
- qmin and qmax options are now applied
- max_slice_bytes: maximum number of bytes per H.264 slice
- max_ref_frames: limit the number of reference frames
- cgop flag: disable open GOP when set
- power_efficient: enable power-efficient mode
Signed-off-by: Rick Kern <kernrj@gmail.com>
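A hedged sketch of setting these through the AVOptions API (ctx is
assumed to be an already-allocated h264_videotoolbox context; the
numeric values are arbitrary examples):

    #include <libavutil/opt.h>
    #include <libavcodec/avcodec.h>

    static int apply_vt_options(AVCodecContext *ctx)
    {
        int ret;

        ctx->qmin   = 20;                       /* now honored by the encoder */
        ctx->qmax   = 40;
        ctx->flags |= AV_CODEC_FLAG_CLOSED_GOP; /* disables open GOP */

        if ((ret = av_opt_set_int(ctx->priv_data, "max_slice_bytes", 1400, 0)) < 0 ||
            (ret = av_opt_set_int(ctx->priv_data, "max_ref_frames",  1,    0)) < 0 ||
            (ret = av_opt_set_int(ctx->priv_data, "power_efficient", 1,    0)) < 0)
            return ret;
        return 0;
    }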
The contents of this field are not defined for decoding. Use
pkt_timebase, which is the timebase of demuxed packets.
Drop a tautological av_packet_rescale_ts() call, as the stream and
decoder timebases are the same.
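A minimal sketch of the decode-side convention (a hypothetical helper;
dec_ctx and frame are assumed to come from the usual decode loop):

    #include <libavcodec/avcodec.h>
    #include <libavutil/mathematics.h>

    /* pkt_timebase is the timebase of demuxed packets, set from
     * AVStream.time_base before opening the decoder. */
    static int64_t frame_time_us(const AVCodecContext *dec_ctx,
                                 const AVFrame *frame)
    {
        return av_rescale_q(frame->pts, dec_ctx->pkt_timebase, AV_TIME_BASE_Q);
    }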
Current code marks the output stream as finished and waits for a flush
packet, but that is both unnecessary and suspect, as in theory nothing
should be sent to a finished stream - not even flush packets.
Make all relevant state per-filtergraph input, rather than per-input
stream. Refactor the code to make it work and avoid leaking memory when
a single subtitle stream is sent to multiple filters.
Set them in ifilter_parameters_from_dec(), similarly to audio/video
streams. This reduces the extent to which sub2video filters need to be
treated specially.
This function should not take an InputStream, as it only uses it to get
the InputFile and the timebase. Pass those directly instead and avoid
confusion over dealing with multiple InputStreams.
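Assuming the function in question is sub2video_heartbeat(), the change
amounts to roughly the following (a sketch; the real parameter list may
differ):

    -static void sub2video_heartbeat(InputStream *ist, int64_t pts);
    +static void sub2video_heartbeat(InputFile *infile, int64_t pts, AVRational tb);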
This queue should be associated with a specific filtergraph input - if
a subtitle stream is sent to multiple filters then each should have its
own queue.
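A hypothetical sketch of the resulting ownership (names invented for
illustration; the real fftools structures differ):

    #include <libavutil/fifo.h>

    /* Each filtergraph input owns its own queue, so a subtitle stream
     * feeding two filters no longer funnels into one shared queue. */
    typedef struct FilterGraphInput {
        AVFifo *frame_queue; /* queued subtitle frames for this input only */
    } FilterGraphInput;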
This code is a sub2video analogue of ifilter_send_frame(), so it
properly belongs to the filtering code.
Note that using sub2video with more than one target for a given input
subtitle stream is currently broken and this commit does not change
that. It will be addressed in following commits.
When the filtergraph has no inputs, it can be configured immediately
when all its outputs are bound to output streams. This will simplify
the handling of some corner cases.