ff_vaapi_encode_close() is not enough to free resources like the cbs
context if an initialization failure happens after codec->configure()
(except for vp8/vp9).
We need to call avctx->codec->close() to deallocate them, otherwise a
memory leak occurs.
Add FF_CODEC_CAP_INIT_CLEANUP for the VAAPI encoders and deallocate the
resources at free_and_end inside avcodec_open2().
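With FF_CODEC_CAP_INIT_CLEANUP set, the generic code calls the codec's
close() callback on the init error path. A minimal sketch of what this
looks like in an encoder definition (field values are illustrative, not
the exact h264_vaapi ones):

    AVCodec ff_h264_vaapi_encoder = {
        .name           = "h264_vaapi",
        /* ... */
        .init           = &vaapi_encode_h264_init,
        .close          = &vaapi_encode_h264_close,
        /* make avcodec_open2() call close() even when init() fails */
        .caps_internal  = FF_CODEC_CAP_INIT_CLEANUP,
    };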
Reviewed-by: Timo Rothenpieler <timo@rothenpieler.org>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
Since commit 979b5b8959, reverting the Matroska ContentCompression is
no longer done inside matroska_parse_frame() (the function that creates
AVPackets out of the parsed data, unless we are dealing with certain
codecs that need special handling), but instead in
matroska_parse_block(). As a consequence, the data that
matroska_parse_frame() receives is no longer always owned by an
AVBuffer; it is owned by an AVBuffer iff no ContentCompression needed
to be reversed; otherwise the data is independently allocated and needs
to be freed on error.
Whether the data is owned by an AVBuffer or not is indicated by a
variable buf of type AVBufferRef *: If it is NULL, the data is
independently allocated; otherwise it is owned by the underlying
AVBuffer (and is used to avoid copying the data when creating the
AVPackets).
Because the allocation of the buffer holding the uncompressed data
happens outside of matroska_parse_frame() (if a ContentCompression
needs to be reversed), the data is passed as uint8_t ** in order not to
leave any dangling pointers behind in matroska_parse_block() should the
data need to be freed: In case of errors, said uint8_t ** would be
av_freep()'ed if buf indicated that the data was independently
allocated.
Yet there is a problem with this: Some codecs (namely WavPack and
ProRes) need special handling: Their packets are only stored in
Matroska in a stripped form to save space and the demuxer reconstructs
the full packets. This involves allocating a new, enlarged buffer, and
if an error happens when trying to wrap this new buffer into an
AVBuffer, this buffer needs to be freed; yet instead the given
uint8_t ** (holding the uncompressed, yet still stripped form of the
data) would be av_freep()'ed, which certainly leads to a memleak of the
new buffer. Even worse, in case the track does not use
ContentCompression, the given uint8_t ** must not be freed, as the
actual data is owned by an AVBuffer and the data given to
matroska_parse_frame() is not the start of the actual allocated buffer
at all.
Both of these issues are fixed by always freeing the current data in
case it is independently allocated. Furthermore, while it would be
possible to track whether the pointer from matroska_parse_block() needs
to be reset or not, there is no gain in doing so, as the pointer is not
used at all afterwards and the semantics are clear: If the data passed
to matroska_parse_frame() is independently allocated, then ownership
of the data passes to matroska_parse_frame(). So don't pass the data
via uint8_t **.
Fixes Coverity ID 1462661 (the issue as described by Coverity is btw
a false positive: It thinks that this error can be triggered by ProRes
with a size of zero after reconstructing the original packets, but the
reconstructed packets can't have a size of zero).
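A hedged sketch of the resulting ownership rule (helper name and
simplifications are mine, not the actual matroska_parse_frame() code):
reference the AVBuffer when one exists, otherwise take ownership and
free the data on any failure.

    static int put_data_into_packet(AVPacket *pkt, AVBufferRef *buf,
                                    uint8_t *data, int size)
    {
        if (buf) {
            /* data points into buf's underlying buffer: just reference it */
            pkt->buf = av_buffer_ref(buf);
            if (!pkt->buf)
                return AVERROR(ENOMEM);
            pkt->data = data;
            pkt->size = size;
            return 0;
        }
        /* independently allocated: ownership has passed to this function,
         * so free the data here on failure instead of relying on the
         * caller's (possibly stale) pointer */
        if (av_packet_from_data(pkt, data, size) < 0) {
            av_free(data);
            return AVERROR(ENOMEM);
        }
        return 0;
    }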
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Fixes ticket #8622
Reviewed-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
Not requiring this leads to unexpected results, since Rav1e's current
two-pass API has no way to fail in such a case.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
The previous code here did not handle passing a frames context when
ffmpeg itself did not know about the device it came from (for example,
because it was created by device derivation inside a filter graph), which
would break encoders requiring that input. Fix that by checking for HW
frames and device context methods independently, preferring a frames
context method if possible. At the same time, revert the encoding
additions to the device matching function because the additional
complexity was not relevant to decoding.
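A hedged sketch of the resulting preference (variable names are
illustrative, not the actual ffmpeg_hw.c code): an incoming HW frames
context wins, and a device context is only used as a fallback.

    if (frames_ref) {
        /* input frames carry their own frames context: pass it on even if
         * the device it was created on is unknown to ffmpeg itself */
        avctx->hw_frames_ctx = av_buffer_ref(frames_ref);
        if (!avctx->hw_frames_ctx)
            return AVERROR(ENOMEM);
    } else if (device_ref) {
        /* otherwise fall back to a matching device context */
        avctx->hw_device_ctx = av_buffer_ref(device_ref);
        if (!avctx->hw_device_ctx)
            return AVERROR(ENOMEM);
    }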
Also fixes #8637, which is the same case but with the device creation
hidden in the ad-hoc libmfx setup code.
Extradata included in packet side data is meant to replace the codec
context extradata. So when muxing to MP4, for example, without this
change, if extradata is present in a packet's side data, the result
will be that the parameter sets present in keyframes will be filtered,
but the parameter sets ultimately included in the av1C box will not be.
This is especially important for AV1, as both currently supported
encoders don't export the Sequence Header in the codec context
extradata, but as packet side data instead.
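A hedged sketch of the idea (not the exact muxer change): when writing
the global header, check for extradata that arrived as packet side data
and prefer it over the codecpar extradata.

    size_t side_size = 0; /* an int in older libavcodec versions */
    uint8_t *side = av_packet_get_side_data(pkt, AV_PKT_DATA_NEW_EXTRADATA,
                                            &side_size);
    if (side && side_size > 0) {
        /* use this as the source of the Sequence Header for the av1C box */
    }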
Signed-off-by: James Almer <jamrial@gmail.com>
This was previously only checked if the chapters were initially
available, but not if they were only written in the trailer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now ff_vorbiscomment_write() used the bytestream API to write
VorbisComments. Therefore the caller had to provide a sufficiently large
buffer to write the output.
Yet two of the three callers (namely the FLAC and the Matroska muxer)
actually want the output to be written via an AVIOContext; therefore
they allocated buffers of the right size just for this purpose (i.e.
buffers that get freed again immediately afterwards). Only the Ogg
muxer actually wants a buffer. But given that it is easy to wrap a
buffer into an AVIOContext, this commit changes ff_vorbiscomment_write()
to use an AVIOContext for its output.
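For the Ogg muxer, which still wants a plain buffer, a hedged sketch of
how such a buffer can be obtained from an AVIOContext via the public
dynamic-buffer helpers:

    AVIOContext *pb;
    uint8_t *buf;
    int ret, size;

    ret = avio_open_dyn_buf(&pb);
    if (ret < 0)
        return ret;
    /* ff_vorbiscomment_write(pb, ...) now writes into pb */
    size = avio_close_dyn_buf(pb, &buf);
    /* use buf/size as before, then av_free(buf) when done */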
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
ff_vorbiscomment_write() used an AVDictionary ** parameter for a
dictionary whose contents ought to be written; yet this can be replaced
by AVDictionary * since commit 042ca05f0fdc5f4d56a3e9b94bc9cd67bca9a4bc;
and this in turn can be replaced by const AVDictionary * to indicate
that the dictionary isn't modified; the latter also applies to
ff_vorbiscomment_length().
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
If a FLAC track uses an unconventional channel layout, the Matroska
muxer adds a WAVEFORMATEXTENSIBLE_CHANNEL_MASK VorbisComment to the
CodecPrivate to preserve this information. And given that FLAC uses
24-bit length fields, the muxer checks whether the length exceeds this
limit and errors out if it does.
Yet this can never happen, because we create the AVDictionary that is
the source for the VorbisComment. It only contains exactly one entry
that can't grow infinitely large (in fact, the length of the
VorbisComment is <= 4 + 33 + 1 + 18 + strlen(LIBAVFORMAT_IDENT)).
So we can simply assert the size to be < (1 << 24) - 4.
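A hedged sketch of the resulting check (assert macro and variable name
approximate):

    av_assert1(vorbiscomment_length < (1 << 24) - 4);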
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Commit 6fd300ac6c added support for WebM
Chunk livestreaming; in this case, both the header and each Cluster are
written to a file of their own, so that even if the AVIOContext seems
seekable, the muxer has to behave as if it were not. Yet one of
the added checks makes no sense: It ensures that no SeekHead is written
preliminarily (and hence no SeekHead is written at all) if the option
for livestreaming is set, although one should write the SeekHead in this
case when writing the Header. E.g. the WebM-DASH specification [1]
never forbids writing a SeekHead and in some instances (that don't apply
here) even requires it (if Cues are written after the Clusters).
[1]: https://sites.google.com/a/webmproject.org/wiki/adaptive-streaming/webm-dash-specification
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Since commit 4aa0665f39, the dynamic
buffer destined for the contents of the current Cluster is no longer
constantly allocated, reallocated and then freed after writing the
content; instead it is reset and reused when closing a Cluster.
Yet the code in mkv_write_trailer() still checked whether a Cluster is
open by checking whether the pointer to the dynamic buffer is NULL or
not (instead of checking whether the position of the current Cluster is
-1 or not). If a Cluster was not open, an empty Cluster would be output.
One usually does not run into this issue, because unless there are
errors, there are only three scenarios in which no Cluster is open at
the end of writing a packet:
The first is if one sent an audio packet to the muxer. It might trigger
closing and outputting the old Cluster, but because the muxer caches
audio packets internally, it would not be output immediately and
therefore no new Cluster would be opened.
The second is an audio packet that does not contain data (such packets
are sometimes sent for side-data only, e.g. by the FLAC encoder). The
only difference to the first scenario is that such packets are not
cached.
The third is if one explicitly flushes the muxer by sending a NULL
packet via av_write_frame().
If one also allows for errors, then there is also another possibility:
Caching the audio packet may fail in the first scenario.
If one calls av_write_trailer() after the first scenario, the cached
audio packet will be output when writing the trailer, for which
a Cluster is opened and everything is fine; because flushing the muxer
currently does not output the cached audio packet (if one is cached),
the issue also does not exist if an audio packet has been cached before
flushing. The issue only exists in one of the other scenarios.
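A hedged sketch of the corrected check (field name approximate): decide
whether a Cluster is open from the recorded Cluster position, not from
the existence of the dynamic buffer.

    /* before (sketch): a Cluster was considered open whenever the dynamic
     * buffer existed; now only when a Cluster position was recorded */
    if (mkv->cluster_pos != -1) {
        /* close the Cluster and write its contents out */
    }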
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
It's based on the following specs:
RDD 45:2017 - SMPTE Registered Disclosure Doc - Interoperable Master Format - Application ProRes
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
It's based on the following specs:
RDD 36:2015 - SMPTE Registered Disclosure Doc - Apple ProRes Bitstream Syntax and Decoding Process
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Fixes: signed integer overflow: -193177 * 11585 cannot be represented in type 'int'
Fixes: 20557/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_VP9_fuzzer-5704852816789504
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: left shift of negative value -1
Fixes: 21390/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_ALAC_fuzzer-6242539519868928
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: left shift of negative value -8321365
Fixes: 20506/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WMALOSSLESS_fuzzer-4798062906310656
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: -16 * 134217879 cannot be represented in type 'int'
Fixes: 20492/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DST_fuzzer-5639509530378240
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
linear. Instead of mixing these in the calculations, convert the former
first to have all following calculations in the same unit.
Signed-off-by: Kyle Swanson <k@ylo.ph>
The old approach used some highly complex delta computation math and
output-delaying.
I do not remember what the initial reasoning behind that was, but given
that we can just offset the dts by the number of B-frames, it seems
wholly unnecessary.
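A hedged sketch of the simpler approach (not the exact nvenc.c change,
assuming constant frame rate and a hypothetical dequeue_oldest_input_pts()
helper that returns the queued input-order pts): shift dts back by the
B-frame delay.

    /* delay of max_b_frames frames, expressed in stream time-base units */
    int64_t delay = FFMAX(avctx->max_b_frames, 0) *
                    av_rescale_q(1, av_inv_q(avctx->framerate),
                                 avctx->time_base);
    /* hypothetical helper: oldest pts queued when frames were submitted */
    pkt->dts = dequeue_oldest_input_pts(ctx) - delay;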
This leaves open an issue with VFR content, for which some more complex
logic might be needed.
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
Due to a typo, it was impossible to write 0.595 / -4.5 dB for
ltrt_cmixlev, ltrt_surmixlev, loro_cmixlev and loro_surmixlev; instead,
0.841 / -1.5 dB was written to the file without any error.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: out of array write
Fixes: Regression since f619e1ec66
Reviewed-by: Lynne <dev@lynne.ee>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Rav1e currently uses the time base given to it only for ratecontrol... where
the inverse is taken and used as a framerate. So, do what we do in other wrappers
and use the framerate if we can.
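A hedged sketch of that pattern (not the exact librav1e.c change): derive
the value handed to the encoder from the framerate when it is known, and
only fall back to avctx->time_base otherwise.

    AVRational tb = avctx->framerate.num && avctx->framerate.den
                  ? av_inv_q(avctx->framerate)
                  : avctx->time_base;
    /* tb is what gets passed on to rav1e as its time base */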
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Sequence numbers of segments should be unique. If an encoder uses
segments shorter than 1 second and is restarted, then future segments
will reuse already used sequence numbers if the initial sequence number
is based on the number of seconds since the epoch rather than on
microseconds.
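A hedged sketch (not the exact hlsenc.c change): av_gettime() returns
microseconds since the epoch, so using it directly keeps restarts within
the same second from producing colliding sequence numbers.

    /* before (sketch): whole seconds since the epoch */
    int64_t start_seq = av_gettime() / 1000000;
    /* after (sketch): microsecond resolution avoids reuse after a restart
     * that happens within the same second */
    int64_t start_seq_us = av_gettime();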
Signed-off-by: Marton Balint <cus@passwd.hu>
Fixes problems when non-rational options were set using rational
expressions, causing rounding errors and preventing the option range
limits from being enforced properly.
For example,
ffmpeg -f lavfi -i "sine=r=96000/2"
caused an assertion failure with assert level 2.
Signed-off-by: Marton Balint <cus@passwd.hu>