Fixes: overread by 1
Fixes: 21880/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_ADPCM_IMA_CUNNING_fuzzer-5717917221257216.fuzz
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
We need at least a few bits of entropy to determine the start index of each
queue, in order to let filters run in parallel as much as possible. However,
rand() is not thread safe and disrupts any external API's usage of rand(),
so replace it with av_get_random_seed().
While it has more overhead than rand(), we only run it once per filter upon init.
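As a rough sketch of the intended usage (the helper name and the nb_queues parameter are illustrative, not the actual filter code), the start index can be derived once at init time like this:
#include "libavutil/random_seed.h"
/* Hypothetical helper: pick a starting queue index once per filter at init.
 * av_get_random_seed() is thread safe and leaves the libc rand() state
 * untouched. */
static unsigned pick_start_queue(unsigned nb_queues)
{
    return av_get_random_seed() % nb_queues;
}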
Up until now, the HLS muxer uses av_strtok() to split an input string
controlling parameters of the VariantStreams and then duplicates
parts of this string containing parameters such as the language or the
name of the VariantStream. But these parts are proper zero-terminated
strings of their own that are never modified later on, so one can simply
use the substring as-is without creating a copy. This commit implements
this.
The same also happened for the string controlling the closed caption
groups.
Furthermore, add const to indicate that the pointers to these substrings
are not used to modify them and also to indicate that these strings are
not allocated on their own.
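A minimal sketch of the idea (the function and field names are illustrative, not the muxer's actual ones): av_strtok() already yields zero-terminated substrings of the caller's writable buffer, so const pointers into that buffer are enough and no av_strdup() is needed.
#include "libavutil/avstring.h"
/* Hypothetical example: split "language=en,name=main" in place and keep
 * const pointers into the original buffer instead of duplicating fields. */
static void parse_variant_opts(char *str)
{
    char *saveptr = NULL;
    const char *language = NULL, *name = NULL;
    for (char *tok = av_strtok(str, ",", &saveptr); tok;
         tok = av_strtok(NULL, ",", &saveptr)) {
        if (av_strstart(tok, "language=", &language)) /* points into str */
            continue;
        if (av_strstart(tok, "name=", &name))
            continue;
    }
    /* language/name stay valid as long as str does; no copies, no frees. */
}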
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now, the HLS muxer duplicated a string for every VariantStream,
although neither the original nor the copies are ever modified. So use
the original directly and stop copying.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
more math unary operations will be added here
It can be tested with the model file generated with the Python script below:
import tensorflow as tf
import numpy as np
import imageio
# read the input image and normalize it to [0, 1]
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
# build a graph that computes abs(x - 0.5)
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.subtract(x, 0.5)
x2 = tf.abs(x1)
y = tf.identity(x2, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# freeze the graph and write it to image_process.pb
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
# run the graph and save the result for comparison
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
high resolutions with only small blocks appear to be rather
slow with the fuzzer + sanitizers.
A solution which makes this run faster is welcome.
Fixes: Timeout (did not wait -> 17sec)
Fixes: 21006/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_HEVC_fuzzer-6002552539971584
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This combination skips allocating large padding, which can lead to reads out of the array.
Fixes: 20978/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_H264_fuzzer-5746381832847360
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The old resync logic had some bugs; for example, the packet size could get
stuck at 192 bytes, because pos47_full was not updated for every packet, and for
unseekable inputs the resync logic simply skipped some 0x47 sync bytes,
therefore the calculated distance between sync bytes was a multiple of 188
bytes.
AVIO only buffers a single packet (for UDP/mpegts, that usually means 1316
bytes), so for every ten consecutive 188-byte MPEGTS packets there was always a
seek failure, which caused the old code to fail to find the 188-byte pattern
across 10 consecutive packets.
This patch changes the custom logic to the one which is used when probing to
determine the packet size. This was already proposed as a FIXME a long time
ago...
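For reference, the probing approach amounts to scoring candidate packet sizes by how regularly 0x47 sync bytes recur; the sketch below is a simplified illustration (assuming buf starts on a sync byte), not the actual mpegts.c code.
#include <stdint.h>
/* Hypothetical packet size guesser: count 0x47 bytes found at multiples of
 * each candidate size and pick the best-scoring one. */
static int guess_ts_packet_size(const uint8_t *buf, int len)
{
    static const int sizes[] = { 188, 192, 204, 208 };
    int best = 188, best_score = 0;
    for (int i = 0; i < 4; i++) {
        int score = 0;
        for (int pos = 0; pos < len; pos += sizes[i])
            score += buf[pos] == 0x47;
        if (score > best_score) {
            best_score = score;
            best = sizes[i];
        }
    }
    return best;
}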
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Uses ff_get_wav_header() in riffdec.c
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Then ff_h264_free_tables() and h264_decode_end() can be removed
from h264_decode_init() when it fails.
The FF_CODEC_CAP_INIT_CLEANUP flag is needed for the single-threaded case; for
frame multithreading, cleanup is still performed via the AV_CODEC_CAP_FRAME_THREADS path if that capability is set.
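For context, a hedged sketch of where the flag goes (the codec name and callbacks are illustrative, not the actual h264 decoder entry): with FF_CODEC_CAP_INIT_CLEANUP in caps_internal, the generic layer calls the close callback when init returns an error, so init does not have to free its partial state itself.
/* Hypothetical decoder declaration; example_decode_init/end stand in for
 * the codec's usual init and close callbacks. */
AVCodec ff_example_decoder = {
    .name          = "example",
    .type          = AVMEDIA_TYPE_VIDEO,
    .id            = AV_CODEC_ID_NONE,
    .init          = example_decode_init,  /* may simply return an error */
    .close         = example_decode_end,   /* called on init failure, too */
    .caps_internal = FF_CODEC_CAP_INIT_CLEANUP,
};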
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Then ff_mpv_encode_end() will be unnecessary in ff_mpv_encode_init()
when it fails.
The FF_CODEC_CAP_INIT_CLEANUP flag is needed for the single-threaded case; for
frame multithreading, cleanup is still performed via the AV_CODEC_CAP_FRAME_THREADS path if that capability is set.
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Then adpcm_encode_close() can be removed from adpcm_encode_init() when it fails,
so the goto error label becomes unnecessary and can be removed later.
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
7546ac2fee made it so that the start_time for mp3 files is
adjusted for skip_samples. However, this appears incorrect, because
subsequent packet timestamps are not adjusted and skip_samples is
applied by deleting data from a packet without changing its timestamp.
E.g., we are told the start_time is ~25 ms and we get a packet with a
timestamp of 0 that has had the skip_samples discarded from it. As such,
rendering engines may incorrectly discard everything prior to
25 ms, thinking that is where playback should officially start. Since the
samples were deleted without adjusting timestamps though, the true
start_time is still 0.
Other formats like MP4 with edit lists will adjust both the start
time and the timestamps of subsequent packets to avoid this issue.
Signed-off-by: Dale Curtis <dalecurtis@chromium.org>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Currently find_ref_idx() triggers 2 scans of the DPB to find the
requested POC:
1. First, ignore the MSB of ref->poc and search for the requested POC;
2. Second, compare the entire ref->poc with the requested POC.
For a long term reference, we may check only the LSB if the MSB is not
present (i.e. delta_poc_msb_present_flag == 0). However, for a short
term reference, we must never ignore the POC's MSB and the comparison
should be bit-exact. (Details in 8.3.2)
Otherwise this leads to decoding failures like:
[hevc @ 0x5638f4328600] Error constructing the frame RPS.
[hevc @ 0x5638f4328600] Error parsing NAL unit #2.
[hevc @ 0x5638f4338a80] Could not find ref with POC 21
Error while decoding stream #0:0: Invalid data found when processing input
Search for the requested POC based on whether the MSB is used, and avoid
the two-pass scan of the DPB. This benefits both the native HEVC
decoder and integrated HW decoders.
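A hedged sketch of the single-pass, MSB-aware lookup (field names follow hevcdec.h loosely; this is not a verbatim copy of the patch): when the MSB is not to be used, only the log2_max_poc_lsb low bits are compared, otherwise the full POC must match.
/* Illustrative version of an MSB-aware reference search. */
static HEVCFrame *find_ref_idx(HEVCContext *s, int poc, uint8_t use_msb)
{
    int mask = use_msb ? ~0 : (1 << s->ps.sps->log2_max_poc_lsb) - 1;
    for (int i = 0; i < FF_ARRAY_ELEMS(s->DPB); i++) {
        HEVCFrame *ref = &s->DPB[i];
        if (ref->frame && ref->sequence == s->seq_decode &&
            (ref->poc & mask) == (poc & mask))
            return ref;
    }
    return NULL;
}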
Signed-off-by: Xu Guangxin <guangxin.xu@intel.com>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
delta_poc_msb_present_flag is needed in find_ref_idx() to
indicate whether MSB of POC should be taken into account.
Details in 8.3.2.
Signed-off-by: Xu Guangxin <guangxin.xu@intel.com>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
Including codecapi.h and uuids.h in UWP mode doesn't define all defines
properly, ending up with constructs that MSVC silently tolerates, but
that clang errors out on, like this:
DEFINE_GUIDEX(CODECAPI_AVEncCommonFormatConstraint);
Just avoid including codecapi.h completely and hardcode the last few
enum values we use from there. We already use local versions of most
enums from there, due to older mingw-w64 headers being incomplete.
Signed-off-by: Martin Storsjö <martin@martin.st>
This might have been used originally for the decoder parts of
the MediaFoundation wrapper, which aren't merged yet.
Signed-off-by: Martin Storsjö <martin@martin.st>
We want to copy the smallest number of bytes per line, but while the buffer
stride is sanitized, the src/dst stride can be negative, and a negative number
of bytes does not make a lot of sense.
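A hedged sketch of the resulting behaviour (not the exact patch): take the absolute value of each stride before deciding how many bytes per line to copy, while still stepping through the planes by the signed stride.
#include <string.h>
#include "libavutil/common.h"
/* Hypothetical plane copy tolerating negative (bottom-up) linesizes. */
static void copy_plane(uint8_t *dst, int dst_linesize,
                       const uint8_t *src, int src_linesize,
                       int bytewidth, int height)
{
    bytewidth = FFMIN3(bytewidth, FFABS(dst_linesize), FFABS(src_linesize));
    for (int i = 0; i < height; i++) {
        memcpy(dst, src, bytewidth);
        dst += dst_linesize;   /* signed step preserves bottom-up layouts */
        src += src_linesize;
    }
}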