When either output[i] or expected_output is NaN, the unit test will
always pass.
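For context, a minimal sketch of the failure mode (illustrative code,
not the actual test source; EPS and the loop index are placeholders):

    /* Any ordered comparison involving NaN evaluates to false, so a
     * check of this form never reports a mismatch when either value
     * is NaN, and the test "passes": */
    if (fabsf(output[i] - expected_output) > EPS)
        return 1;   /* never reached if a NaN is involved */
    /* Handling NaN properly needs an explicit isnan() check from <math.h>. */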
Signed-off-by: Ting Fu <ting.fu@intel.com>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
broken since:
aa5c6f382b avcodec/libaomenc: Add command-line options to control the use of partition tools
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: James Zern <jzern@google.com>
Otherwise it would cause a null pointer dereference if
size < sizeof(uint32_t); also, in case tc[0] > 3, the code will report
an error directly.
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Fixes: signed integer overflow: 8683744 * 256 cannot be represented in type 'int'
Fixes: 23527/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_APE_fuzzer-5679885932822528
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: 155 + 2147483647 cannot be represented in type 'int'
Fixes: 23421/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_LOCO_fuzzer-5652849097965568
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Currently the next thread's context is updated from the previous one's
if the codec descriptor is not marked as intra-only. That is not
entirely correct, since that property does not necessarily imply
anything about how a specific decoder implementation behaves.
Instead, use the presence of the update_thread_context() callback to
decide whether an update should be performed. This fixes races in
CFHD and should cause no behaviour change in any other decoder.
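Roughly, the decision point becomes the following (simplified sketch,
not the exact pthread_frame.c change):

    /* Update the next thread's context only if the decoder actually
     * provides an update_thread_context() callback, instead of keying
     * off the intra-only property of the codec descriptor. */
    if (codec->update_thread_context) {
        err = codec->update_thread_context(dst, src);
        if (err < 0)
            return err;
    }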
Add probeaudiostream to get the audio stream's codec_name,
codec_time_base, sample_fmt, channels and channel_layout.
Signed-off-by: Steven Liu <lq@chinaffmpeg.org>
An important part of this algorithm is the double threshold step:
pixels above the "high" threshold are kept, pixels below the "low"
threshold are dropped, and pixels in between (weak edges) are kept
only if they neighbor "high" pixels.
The weak edge check uses a neighboring context and should not be
applied on the plane's border. The condition was incorrect and is
fixed by this commit.
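For reference, a minimal sketch of the double threshold step with the
border guard (illustrative only; the names and the 4-neighbourhood are
simplified and this is not the filter's actual code):

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int v = mag[y * w + x];

            if (v > high)
                out[y * w + x] = 255;   /* strong edge: keep */
            else if (v < low)
                out[y * w + x] = 0;     /* below low: drop   */
            else if (y > 0 && y < h - 1 && x > 0 && x < w - 1 && /* not on the border */
                     (mag[(y - 1) * w + x] > high || mag[(y + 1) * w + x] > high ||
                      mag[y * w + x - 1]  > high || mag[y * w + x + 1]  > high))
                out[y * w + x] = 255;   /* weak edge next to a strong one */
            else
                out[y * w + x] = 0;
        }
    }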
Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
Reviewed-by: Andriy Gelman <andriy.gelman@gmail.com>
Currently, both bsfs used the same CodedBitstreamContext for reading and
writing; as a consequence, the state of the writer's context at the
beginning of writing a fragment is exactly the state of the reader after
having read the fragment; in particular, the writer might not have
encountered one of its active parameter sets yet.
This is not nice and may lead to invalid output even when the input
is completely spec-compliant: Think of an access unit containing
a primary coded picture referencing a PPS with id id (that is known from
an earlier access unit/from extradata), then a new version of the PPS
with id id and then a redundant coded picture that is also referencing
the PPS with id id. This is spec-compliant, as the standard allows
overwriting a PPS with a different PPS in between coded pictures and not
only at the beginning of an access unit. In this scenario, the reader
would read the primary coded picture with the old PPS and the redundant
coded picture with the new PPS (as it should); yet the writer would
write both with the new PPS as extradata which might lead to errors or
to invalid data being output without any error (e.g. if the two PPS
differed in redundant_pic_cnt_present_flag).
The above scenario does not directly translate to HEVC as long as one
restricts oneself to input with nuh_layer_id == 0 only (as cbs_h265
does: it currently strips away any NAL unit with nuh_layer_id > 0 when
decomposing); if one doesn't, the same issue as above can happen.
If one also allowed input packets to contain more than one access
unit, issues like the above could happen even without redundant coded
pictures/multiple layers.
Therefore this commit uses separate contexts for reader and writer.
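The shape of the change is roughly the following (sketch only; the
struct field names are illustrative, not the exact bsf code):

    /* Two independent contexts: reading a new parameter set can no
     * longer retroactively change how already-read access units are
     * written back. */
    err = ff_cbs_init(&ctx->input, codec_id, bsf);
    if (err < 0)
        return err;
    err = ff_cbs_init(&ctx->output, codec_id, bsf);
    if (err < 0)
        return err;
    /* ... per-packet processing ... */
    err = ff_cbs_read_packet(ctx->input, frag, pkt);   /* reader state */
    /* ... */
    err = ff_cbs_write_packet(ctx->output, pkt, frag); /* writer state */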
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Several cbs functions had an unused CodedBitstreamContext parameter.
This commit removes these parameters.
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Commit 601c238854 added support for AV_PKT_DATA_NEW_EXTRADATA, but
only for avcC extradata.
This commit adds support for sps/pps extradata as well, making the
handling of new extradata consistent with the types of extradata that
can be passed when initializing the decoder.
Signed-off-by: Oliver Woodman <ollywoodman@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
We should not silently allocate an incorrectly sized buffer.
Fixes trac issue #8718.
Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Fixes: out of array access
Fixes: 23888/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_TIFF_fuzzer-6021365974171648.fuzz
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Add qmax/qmin support for HEVC software bitrate control (SWBRC).
Limitations:
- RateControlMethod != MFX_RATECONTROL_CQP
- with EXTBRC ON
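For callers, the intended usage looks roughly like this (illustrative
only; the "extbrc" option name and the exact constraints are taken
from the limitation list above and are assumptions, not verified
against this patch):

    const AVCodec *enc = avcodec_find_encoder_by_name("hevc_qsv");
    AVCodecContext *avctx = avcodec_alloc_context3(enc);
    avctx->bit_rate = 2000000;                        /* not CQP        */
    avctx->qmin     = 20;                             /* lower QP clamp */
    avctx->qmax     = 35;                             /* upper QP clamp */
    av_opt_set_int(avctx->priv_data, "extbrc", 1, 0); /* EXTBRC on      */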
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
Signed-off-by: Zhong Li <zhongli_dev@126.com>
Instead use ffio_read_size to read data into a buffer. Also check that
the desired size was actually successfully read and combine the check
with the check for reading the extradata.
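The intended pattern looks roughly like this (simplified sketch; the
buffer and size names are placeholders):

    ret = ffio_read_size(pb, buf, size);  /* errors out on short reads */
    if (ret < 0)
        return ret;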
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Allocating two arrays with the same number of elements together
simplifies freeing them.
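The pattern is roughly the following (illustrative only; the array
names are not the actual fields):

    /* One allocation backs both arrays, so a single av_freep()
     * releases them. */
    uint32_t *sizes = av_calloc(nb_frames, sizeof(*sizes) + sizeof(uint8_t));
    uint8_t  *flags;
    if (!sizes)
        return AVERROR(ENOMEM);
    flags = (uint8_t *)(sizes + nb_frames); /* second array follows the first */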
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
A Smacker file can contain up to seven audio tracks. Up until now,
the pts for the i-th audio packet contained in a Smacker frame was
simply the end timestamp of the last i-th audio packet contained in
an earlier Smacker frame.
The problem with this is that a Smacker stream need not contain data
in every Smacker frame, so the current i-th audio packet present may
come from a different underlying stream than the last i-th audio
packet contained in an earlier frame.
The sample hypnotix.smk* exhibits this. It has three audio tracks;
the first of the three has a longer first packet, so that its audio
is contained in only 235 packets contained in the first 235 Smacker
frames, and the end timestamp of this track is 166696 (about 7.56s
at a timebase of 1/22050). The other two audio tracks both have 253
packets contained in the first 253 Smacker frames. Up until now, the
236th packet of the second track, being the first audio packet in
the 236th Smacker frame, would get the end timestamp of the last
first audio packet of the last Smacker frame containing a first
audio packet; that packet is the first audio packet of the 235th
Smacker frame and belongs to the first audio track, so the timestamp
is 166696. In contrast, the 236th packet of the third track (whose
packets contain the same number of samples as the packets of the
second track) got a timestamp of 156116, because its timestamp is
derived from the end timestamp of the 235th packet of the second
audio track. As a result, the second track ended up being
177360/22050 s = 8.044s long, whereas the third track was
166780/22050 s = 7.56s long, which also coincides with the video.
This commit fixes this by not using timestamps from other tracks for
a packet's pts.
*: https://samples.ffmpeg.org/game-formats/smacker/wetlands/hypnotix.smk
Reviewed-by: Timotej Lazar <timotej.lazar@araneo.si>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The layout of a Smacker frame is as follows: For some frames, the
beginning of the frame contains a palette for the video stream; then
there are potentially several audio frames, followed by the data for
the video stream.
The Smacker demuxer used to read the palette, then cache every audio
frame in a buffer (which was reallocated to the desired size every
time a frame was read into it), and then read and return the video
frame (together with the palette). The cached audio frames were then
returned by copying the data into freshly allocated buffers; once
none were left, the next frame was read.
This commit changes this: At the beginning of a frame, the palette is
read and cached as before. But audio frames are no longer cached at
all; they are returned immediately. This gets rid of copying and also
allows removing the code for the buffer-to-AVStream correspondence.
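A very rough sketch of the new flow (not the actual demuxer code; the
flag and field names are illustrative):

    if (frame_flags & (1 << (track + 1))) {      /* track present in frame */
        int size = avio_rl32(pb) - 4;            /* audio payload size     */
        if ((err = av_get_packet(pb, pkt, size)) < 0)
            return err;
        pkt->stream_index = smk->indexes[track];
        return 0;                                /* no caching, no memcpy  */
    }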
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>