The segment duration currently uses the vs duration, which is computed
from the frames per second and therefore cannot handle VFR video
streams. So compute the duration when splitting the segment: set the
segment target duration to the current packet pts minus the previous
segment's end pts.
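A minimal sketch of the idea in hls_write_packet() terms (can_split, vs,
st and vs->end_pts are illustrative stand-ins for hlsenc.c's own names):

    if (can_split && pkt->pts != AV_NOPTS_VALUE) {
        /* Derive the finished segment's duration from timestamps instead
         * of a frame-rate based estimate, so VFR input is timed correctly. */
        vs->duration = (double)(pkt->pts - vs->end_pts) *
                       st->time_base.num / st->time_base.den;
        vs->end_pts  = pkt->pts;   /* the next segment starts here */
    }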
Reported-by: Zhao Jun <barryjzhao@tencent.com>
Reviewed-by: Zhao Jun <barryjzhao@tencent.com>
Signed-off-by: Steven Liu <liuqi05@kuaishou.com>
Fixes: shift exponent 32 is too large for 32-bit type 'int'
Fixes: 21647/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WAVPACK_fuzzer-5686168323883008
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: David Bryant <david@wavpack.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
and make them static again.
These functions have been moved from nutenc to aviobuf and internal.h
in f8280ff4c0 in order to use them in a
forthcoming patch in utils.c. Said patch never happened, so this commit
moves them back and makes them static, effectively reverting said
commit as well as f8280ff4c0 (which added
the ff-prefix to these functions).
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
It allows combining several ffio_free_dyn_buf() calls.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
NUT uses variable-length integers for length fields.
Therefore the NUT muxer often writes data into a dynamic buffer in order
to get its length, then writes the length field using the smallest
number of bytes needed. To do this, a new dynamic buffer was opened,
used and freed for each element, which involves lots of allocations. This
commit changes this: the dynamic buffers are now reset and reused.
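A sketch of the reuse pattern, assuming the avio helpers behave as their
names suggest; write_element(), have_more_elements() and bc are
illustrative stand-ins for the actual NUT muxer code, and put_v() is the
(now static) variable-length integer writer:

    AVIOContext *dyn_bc = NULL;
    int ret = avio_open_dyn_buf(&dyn_bc);      /* allocated once */
    if (ret < 0)
        return ret;

    while (have_more_elements()) {              /* illustrative loop */
        uint8_t *buf;
        int size;

        write_element(dyn_bc);                  /* serialize into the buffer */
        size = avio_get_dyn_buf(dyn_bc, &buf);  /* peek at data and length   */
        put_v(bc, size);                        /* variable-length length    */
        avio_write(bc, buf, size);
        ffio_reset_dyn_buf(dyn_bc);             /* reuse instead of free+alloc */
    }
    ffio_free_dyn_buf(&dyn_bc);                 /* freed once at the end */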
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
calculate_checksum in put_packet() is always 1.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
According to the frame size with refs semantics (SPEC 7.2.5), it is a
requirement of bitstream conformance that at least one reference frame
has valid dimensions.
Modify the check so that the decoder works correctly in SINGLE_REFERENCE
mode even when not all reference frames have valid dimensions.
Check and error out if an invalid reference frame is used in inter_recon.
One of the failure cases is a 480x272 inter frame (SINGLE_REFERENCE mode)
with the following reference pool:
0. 960x544 LAST valid
1. 1920x1088 GOLDEN invalid, but not used in single reference mode
2. 1920x1088 ALTREF invalid, but not used in single reference mode
3~7 ... Unused
Identical logic in libvpx:
<https://github.com/webmproject/libvpx/blob/master/vp9/decoder/vp9_decodeframe.c#L736>
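A hedged sketch of the relaxed header check; the helper and field names
(get_ref_frame(), ref_dims_usable(), s, w, h) are illustrative, not the
exact ones in libavcodec/vp9.c:

    /* Only one reference needs valid dimensions for the frame header to be
     * conformant; references that are never used (e.g. GOLDEN and ALTREF in
     * SINGLE_REFERENCE mode) may be invalid. "Usable" follows the spec:
     * 2*w >= ref_w, 2*h >= ref_h, w <= 16*ref_w, h <= 16*ref_h.           */
    int valid_ref = 0;
    for (int i = 0; i < 3; i++) {
        const AVFrame *ref = get_ref_frame(s, i);
        if (ref && ref->buf[0] &&
            ref_dims_usable(ref->width, ref->height, w, h))
            valid_ref = 1;
    }
    if (!valid_ref) {
        av_log(avctx, AV_LOG_ERROR, "No valid reference frame dimensions\n");
        return AVERROR_INVALIDDATA;
    }
    /* inter_recon additionally errors out if the reference that is actually
     * used turns out to be invalid. */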
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
Hard-coded parameters for qmin and qmax are currently used to initialize
the v4l2_m2m device. This commit uses the values from avctx->{qmin,qmax}
if they are set.
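A sketch of the intent, assuming the encoder's option table defaults
qmin/qmax to -1 so that a non-negative value means the user set it; the
defaults and the wrapper name are illustrative:

    /* Fall back to the previous hard-coded defaults only when the user did
     * not specify quantizer bounds. */
    int qmin = avctx->qmin >= 0 ? avctx->qmin : DEFAULT_QMIN;
    int qmax = avctx->qmax >= 0 ? avctx->qmax : DEFAULT_QMAX;

    v4l2_set_quantizer_range(s, qmin, qmax);   /* illustrative wrapper around
                                                  the V4L2 ext-ctrl calls */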
Reviewed-by: Ming Qian <ming.qian@nxp.com>
Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
ff_vaapi_encode_close() is not enough to free resources such as cbs
if an initialization failure happens after codec->configure (except for
vp8/vp9).
We need to call avctx->codec->close() to deallocate them, otherwise a
memory leak happens.
Add FF_CODEC_CAP_INIT_CLEANUP for the VAAPI encoders so that the
resources are deallocated at free_and_end inside avcodec_open2().
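Roughly, the change amounts to adding the internal capability flag to
each VAAPI encoder definition (illustrative excerpt; surrounding fields
abbreviated):

    AVCodec ff_h264_vaapi_encoder = {
        .name           = "h264_vaapi",
        /* ... */
        .init           = &vaapi_encode_h264_init,
        .close          = &vaapi_encode_h264_close,
        /* Let avcodec_open2() call .close() when init fails, so cbs and the
         * other resources allocated during init are released. */
        .caps_internal  = FF_CODEC_CAP_INIT_CLEANUP,
    };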
Reviewed-by: Timo Rothenpieler <timo@rothenpieler.org>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
Since commit 979b5b8959, reverting the
Matroska ContentCompression is no longer done inside
matroska_parse_frame() (the function that creates AVPackets out of the
parsed data (unless we are dealing with certain codecs that need special
handling)), but instead in matroska_parse_block(). As a consequence,
the data that matroska_parse_frame() receives is no longer always owned
by an AVBuffer; it is owned by an AVBuffer iff no ContentCompression needed
to be reversed; otherwise the data is independently allocated and needs
to be freed on error.
Whether the data is owned by an AVBuffer or not is indicated by a variable
buf of type AVBufferRef *: If it is NULL, the data is independently
allocated, if not it is owned by the underlying AVBuffer (and is used to
avoid copying the data when creating the AVPackets).
Because the allocation of the buffer holding the uncompressed data happens
outside of matroska_parse_frame() (if a ContentCompression needs to be
reversed), the data is passed as uint8_t ** in order to not leave any
dangling pointers behind in matroska_parse_block() should the data need to
be freed: In case of errors, said uint8_t ** would be av_freep()'ed in
case buf indicated the data to be independently allocated.
Yet there is a problem with this: Some codecs (namely WavPack and
ProRes) need special handling: Their packets are only stored in
Matroska in a stripped form to save space and the demuxer reconstructs
full packets. This involved allocating a new, enlarged buffer. And if
an error happens when trying to wrap this new buffer into an AVBuffer,
this buffer needs to be freed; yet instead the given uint8_t ** (holding
the uncompressed, yet still stripped form of the data) would be freed
(av_freep()'ed) which certainly leads to a memleak of the new buffer;
even worse, in case the track does not use ContentCompression the given
uint8_t ** must not be freed as the actual data is owned by an AVBuffer
and the data given to matroska_parse_frame() is not the start of the
actual allocated buffer at all.
Both of these issues are fixed by always freeing the current data in
case it is independently allocated. Furthermore, while it would be
possible to track whether the pointer from matroska_parse_block() needs
to be reset or not, there is no gain in doing so, as the pointer is not
used at all afterwards and the semantics are clear: If the data passed
to matroska_parse_frame() is independently allocated, then ownership
of the data passes to matroska_parse_frame(). So don't pass the data
via uint8_t **.
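In code terms, the rule inside matroska_parse_frame() is now roughly as
follows (simplified; err, buf and data stand in for the actual locals):

    /* Ownership rule after this change:
     *   buf != NULL: data points into buf's AVBuffer; never free it here.
     *   buf == NULL: data is independently allocated and owned by this
     *                function, which frees the current pointer (which may
     *                meanwhile be the repacked WavPack/ProRes buffer) on
     *                every error path, instead of having the caller free it
     *                through a uint8_t **.                                 */
    if (err < 0) {
        if (!buf)
            av_free(data);
        return err;
    }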
Fixes Coverity ID 1462661 (the issue as described by Coverity is btw
a false positive: It thinks that this error can be triggered by ProRes
with a size of zero after reconstructing the original packets, but the
reconstructed packets can't have a size of zero).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Fixes ticket #8622
Reviewed-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
Not requiring this leads to unexpected results, since Rav1e's current
two-pass API has no way to fail in such a case.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
The previous code here did not handle passing a frames context when
ffmpeg itself did not know about the device it came from (for example,
because it was created by device derivation inside a filter graph), which
would break encoders requiring that input. Fix that by checking for HW
frames and device context methods independently, and prefer to use a
frames context method if possible. At the same time, revert the encoding
additions to the device matching function because the additional
complexity was not relevant to decoding.
Also fixes #8637, which is the same case but with the device creation
hidden in the ad-hoc libmfx setup code.
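A sketch of the selection order in terms of the public hwconfig API
(frames_ref, dev and ost are illustrative locals standing in for
ffmpeg_hw.c's own state):

    const AVCodecHWConfig *config;
    for (int i = 0; (config = avcodec_get_hw_config(enc, i)); i++) {
        /* Prefer a frames-context method: it works even when ffmpeg never
         * created the device itself (e.g. it was derived in a filter graph). */
        if (frames_ref &&
            config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX &&
            config->pix_fmt == ((AVHWFramesContext*)frames_ref->data)->format) {
            ost->enc_ctx->hw_frames_ctx = av_buffer_ref(frames_ref);
            return ost->enc_ctx->hw_frames_ctx ? 0 : AVERROR(ENOMEM);
        }
        if (dev && config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX &&
            config->device_type == dev->type) {
            ost->enc_ctx->hw_device_ctx = av_buffer_ref(dev->device_ref);
            return ost->enc_ctx->hw_device_ctx ? 0 : AVERROR(ENOMEM);
        }
    }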
Extradata included in packet side data is meant to replace the codec context
extradata. So without this change, when muxing to MP4 for example and
extradata is present in packet side data, the result will be that the
parameter sets present in keyframes will be filtered, but the parameter sets
ultimately included in the av1C box will not.
This is especially important for AV1 as both currently supported encoders don't
export the Sequence Header in the codec context extradata, but as packet side
data instead.
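A hedged sketch of picking up the side-data extradata at muxing time
(pkt and par are illustrative locals; the exact location in the muxer is
not shown, and the side-data size parameter is an int in the libavcodec
API of this time):

    int side_size = 0;
    uint8_t *side = av_packet_get_side_data(pkt, AV_PKT_DATA_NEW_EXTRADATA,
                                            &side_size);
    if (side && side_size > 0) {
        /* Use the extradata carried in the packet (the Sequence Header for
         * AV1) for the av1C box instead of the stale codecpar extradata. */
        int ret = ff_alloc_extradata(par, side_size);
        if (ret < 0)
            return ret;
        memcpy(par->extradata, side, side_size);
    }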
Signed-off-by: James Almer <jamrial@gmail.com>
This has previously only been checked if the chapters were initially
available, but not if they were only written in the trailer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now ff_vorbiscomment_write() used the bytestream API to write
VorbisComments. Therefore the caller had to provide a sufficiently large
buffer to write the output.
Yet two of the three callers (namely the FLAC and the Matroska muxer)
actually want the output to be written via an AVIOContext; therefore
they allocated buffers of the right size just for this purpose (i.e.
they get freed immediately afterwards). Only the Ogg muxer actually
wants a buffer. But given that it is easy to wrap a buffer into an
AVIOContext this commit changes ff_vorbiscomment_write() to use an
AVIOContext for its output.
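As for the "easy to wrap" claim, this is all it takes with public API
(a sketch; buf and buf_size are the caller's existing buffer, and within
lavf the Ogg muxer can use an internal helper to avoid the extra context
allocation):

    /* Wrap a caller-owned, fixed-size buffer so that avio_write() and
     * friends write straight into it. */
    AVIOContext *pb = avio_alloc_context(buf, buf_size, 1 /* write */,
                                         NULL, NULL, NULL, NULL);
    if (!pb)
        return AVERROR(ENOMEM);
    /* ... pass pb to ff_vorbiscomment_write() ... */
    avio_context_free(&pb);   /* frees only the context; the wrapped buffer
                                 remains the caller's to manage */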
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
ff_vorbiscomment_write() used an AVDictionary ** parameter for a
dictionary whose contents ought to be written; yet this can be replaced
by AVDictionary * since commit 042ca05f0fdc5f4d56a3e9b94bc9cd67bca9a4bc;
and this in turn can be replaced by const AVDictionary * to indicate
that the dictionary isn't modified; the latter also applies to
ff_vorbiscomment_length().
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
If a FLAC track uses an unconventional channel layout, the Matroska
muxer adds a WAVEFORMATEXTENSIBLE_CHANNEL_MASK VorbisComment to the
CodecPrivate to preserve this information. And given that FLAC uses
24-bit length fields, the muxer checks whether the length exceeds what
such a field can hold and errors out if it does.
Yet this can never happen, because we create the AVDictionary that is
the source for the VorbisComment. It only contains exactly one entry
that can't grow infinitely large (in fact, the length of the
VorbisComment is <= 4 + 33 + 1 + 18 + strlen(LIBAVFORMAT_IDENT)).
So we can simply assert the size to be < (1 << 24) - 4.
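For a rough sense of the numbers: the single entry presumably breaks down
as a 4-byte length field, the 33-byte key "WAVEFORMATEXTENSIBLE_CHANNEL_MASK",
the 1-byte '=' separator and a mask value of at most 18 bytes ("0x" plus up
to 16 hex digits), plus the vendor string. That is a few dozen bytes plus
strlen(LIBAVFORMAT_IDENT), nowhere near (1 << 24) - 4 = 16777212.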
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Commit 6fd300ac6c added support for WebM
Chunk livestreaming; in this case, both the header as well as each
Cluster is written to a file of its own, so that even if the AVIOContext
seems seekable, the muxer has to behave as if it were not. Yet one of
the added checks makes no sense: It ensures that no SeekHead is written
preliminarily (and hence no SeekHead is written at all) if the option
for livestreaming is set, although one should write the SeekHead in this
case when writing the Header. E.g. the WebM-DASH specification [1]
never forbids writing a SeekHead and in some instances (that don't apply
here) even requires it (if Cues are written after the Clusters).
[1]: https://sites.google.com/a/webmproject.org/wiki/adaptive-streaming/webm-dash-specification
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Since commit 4aa0665f39, the dynamic
buffer destined for the contents of the current Cluster is no longer
constantly allocated, reallocated and then freed after writing the
content; instead it is reset and reused when closing a Cluster.
Yet the code in mkv_write_trailer() still checked for whether a Cluster
is open by checking whether the pointer to the dynamic buffer is NULL or
not (instead of checking whether the position of the current Cluster is
-1 or not). If a Cluster was not open, an empty Cluster would be output.
One usually does not run into this issue, because unless there are
errors, there are only three possibilities to not have an opened Cluster
at the end of writing a packet:
The first is if one sent an audio packet to the muxer. It might trigger
closing and outputting the old Cluster, but because the muxer caches
audio packets internally, it would not be output immediately and
therefore no new Cluster would be opened.
The second is an audio packet that does not contain data (such packets
are sometimes sent for side-data only, e.g. by the FLAC encoder). The
only difference to the first scenario is that such packets are not
cached.
The third is if one explicitly flushes the muxer by sending a NULL
packet via av_write_frame().
If one also allows for errors, then there is also another possibility:
Caching the audio packet may fail in the first scenario.
If one calls av_write_trailer() after the first scenario, the cached
audio packet will be output when writing the trailer, for which
a Cluster is opened and everything is fine; because flushing the muxer
currently does not output the cached audio packet (if one is cached),
the issue also does not exist if an audio packet has been cached before
flushing. The issue only exists in one of the other scenarios.
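A sketch of the corrected condition in mkv_write_trailer() terms
(simplified; mkv_end_cluster() is an illustrative helper name):

    /* The Cluster's dynamic buffer now stays allocated between Clusters, so
     * "dyn buf != NULL" no longer implies that a Cluster is open. Key the
     * check on the Cluster position instead. */
    if (mkv->cluster_pos != -1) {
        ret = mkv_end_cluster(s);   /* close and write out the open Cluster */
        if (ret < 0)
            return ret;
    }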
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
It's based on the following specs:
RDD 45:2017 - SMPTE Registered Disclosure Doc - Interoperable Master Format - Application ProRes
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
It's based on the following specs:
RDD 36:2015 - SMPTE Registered Disclosure Doc - Apple ProRes Bitstream Syntax and Decoding Process
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>