Up until now, the HLS muxer used av_strtok() to split an input string
controlling the parameters of the VariantStreams and then duplicated
the parts of this string containing parameters such as the language or
the name of the VariantStream. But these parts are proper
zero-terminated strings of their own that are never modified later on,
so one can simply use the substrings as-is without creating copies.
This commit implements this.
The same also happened for the string controlling the closed caption
groups.
Furthermore, add const to indicate that these pointers are not used to
modify the substrings and also to indicate that these strings are not
allocated on their own.
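A minimal sketch of the idea (field and variable names here are
illustrative, not necessarily the muxer's actual ones):

// Each token returned by av_strtok() is already a zero-terminated
// substring of the original buffer, so it can be referenced directly.
char *saveptr = NULL;
char *tok = av_strtok(varstream_spec, ",", &saveptr);
while (tok) {
    const char *val;
    if (av_strstart(tok, "language:", &val))
        vs->language = val;  /* const char *: no av_strdup(), no av_free() */
    tok = av_strtok(NULL, ",", &saveptr);
}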
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now, the HLS muxer duplicated a string for every VariantStream,
although neither the original nor the copies are ever modified. So use
the original directly and stop copying.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
More unary math operations will be added here.
It can be tested with the model file generated by the Python script
below:
import tensorflow as tf
import numpy as np
import imageio

# Read the input image and normalize it to [0, 1].
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32) / 255.0
in_data = in_img[np.newaxis, :]

# Build a tiny graph: subtract 0.5 from the input and take the absolute value.
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.subtract(x, 0.5)
x2 = tf.abs(x1)
y = tf.identity(x2, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Freeze the graph and write it to image_process.pb.
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

# Run the graph to produce the reference output image.
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
High resolutions with only small blocks appear to be rather slow with
the fuzzer + sanitizers.
A solution that makes this run faster is welcome.
Fixes: Timeout (did not wait -> 17sec)
Fixes: 21006/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_HEVC_fuzzer-6002552539971584
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This combination skips allocating the large padding, which can lead to out-of-array reads
Fixes: 20978/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_H264_fuzzer-5746381832847360
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The old resync logic had some bugs: for example, the packet size could
get stuck at 192 bytes because pos47_full was not updated for every
packet, and for unseekable inputs the resync logic simply skipped some
0x47 sync bytes, so the calculated distance between sync bytes ended up
being a multiple of 188 bytes.
AVIO only buffers a single packet (for UDP/mpegts, that usually means
1316 bytes), so for every ten consecutive 188-byte MPEGTS packets there
was always a seek failure, which caused the old code to not find the
188-byte pattern across ten consecutive packets.
This patch replaces the custom logic with the one used when probing to
determine the packet size. This was already proposed in a FIXME a long
time ago...
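A rough sketch of that probing-style detection (simplified and with
illustrative names, not the actual mpegts.c code):

#include <stdint.h>

// Score each candidate TS packet size by counting positions where a
// 0x47 sync byte is followed by another one exactly that many bytes
// later, and pick the best-scoring size.
static int guess_packet_size(const uint8_t *buf, int size)
{
    static const int sizes[] = { 188, 192, 204 };
    int best = 0, best_score = 0;

    for (int i = 0; i < 3; i++) {
        int score = 0;
        for (int pos = 0; pos + sizes[i] < size; pos++)
            if (buf[pos] == 0x47 && buf[pos + sizes[i]] == 0x47)
                score++;
        if (score > best_score) {
            best_score = score;
            best       = sizes[i];
        }
    }
    return best; // 0 means no plausible sync pattern was found
}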
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Uses ff_get_wav_header() in riffdec.c
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Then the calls to ff_h264_free_tables() and h264_decode_end() can be
removed from h264_decode_init() when it fails.
The FF_CODEC_CAP_INIT_CLEANUP flag is needed for the single-threaded
case; with frame threading (AV_CODEC_CAP_FRAME_THREADS), cleanup on
init failure is already performed.
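Roughly, the pattern looks like this (a simplified sketch, not the
exact h264dec.c code):

AVCodec ff_h264_decoder = {
    .name          = "h264",
    .init          = h264_decode_init,
    .close         = h264_decode_end,
    /* With this flag set, the generic layer calls close() when init()
     * fails, so init() can simply return an error without manually
     * freeing whatever it has allocated so far. */
    .caps_internal = FF_CODEC_CAP_INIT_CLEANUP,
};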
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Then calling ff_mpv_encode_end() in ff_mpv_encode_init() will be
unnecessary when it fails.
As above, the FF_CODEC_CAP_INIT_CLEANUP flag is needed for the
single-threaded case; with frame threading (AV_CODEC_CAP_FRAME_THREADS),
cleanup on init failure is already performed.
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Then we can remove the call to adpcm_encode_close() in
adpcm_encode_init() when it fails, so the goto error label becomes
unnecessary and can be removed later.
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
7546ac2fee made it so that the start_time for mp3 files is
adjusted for skip_samples. However, this appears incorrect because
subsequent packet timestamps are not adjusted and skip_samples are
applied by deleting data from a packet without changing the timestamp.
E.g., we are told the start_time is ~25ms and we get a packet with a
timestamp of 0 that has had the skip_samples discarded from it. As
such, rendering engines may incorrectly discard everything prior to
25ms, thinking that is where playback should officially start. Since
the samples were deleted without adjusting the timestamps, though, the
true start_time is still 0.
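(For reference: assuming the typical LAME encoder delay of 576 samples
plus the usual 529 samples of decoder delay, i.e. 1105 skipped samples,
at 44.1 kHz that is 1105 / 44100 ≈ 25 ms, which is presumably where the
~25ms figure above comes from.)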
Other formats like MP4 with edit lists will adjust both the start
time and the timestamps of subsequent packets to avoid this issue.
Signed-off-by: Dale Curtis <dalecurtis@chromium.org>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Currently find_ref_idx() triggers two scans of the DPB to find the
requested POC:
1. First, ignore the MSB of ref->poc and search for the requested POC;
2. Second, compare the entire ref->poc with the requested POC.
For long-term references, we only need to check the LSB if the MSB is
not present (i.e. delta_poc_msb_present_flag == 0). However, for
short-term references, we should never ignore the MSB of the POC; the
comparison should be bit-exact. (Details in 8.3.2.)
Otherwise this leads to decoding failures like:
[hevc @ 0x5638f4328600] Error constructing the frame RPS.
[hevc @ 0x5638f4328600] Error parsing NAL unit #2.
[hevc @ 0x5638f4338a80] Could not find ref with POC 21
Error while decoding stream #0:0: Invalid data found when processing input
Search for the requested POC based on whether the MSB is used, and
avoid scanning the DPB twice. This benefits both the native HEVC
decoder and integrated HW decoders.
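A simplified sketch of the single-pass lookup (illustrative only, not
necessarily the exact hevc_refs.c code):

// Compare either the full POC or only its LSBs, depending on whether
// delta_poc_msb_present_flag was signalled for this reference.
static HEVCFrame *find_ref(HEVCContext *s, int poc, int use_msb)
{
    int mask = use_msb ? ~0 : (1 << s->ps.sps->log2_max_poc_lsb) - 1;

    for (int i = 0; i < FF_ARRAY_ELEMS(s->DPB); i++) {
        HEVCFrame *ref = &s->DPB[i];
        if (ref->frame && (ref->poc & mask) == (poc & mask))
            return ref;
    }
    return NULL;
}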
Signed-off-by: Xu Guangxin <guangxin.xu@intel.com>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
delta_poc_msb_present_flag is needed in find_ref_idx() to indicate
whether the MSB of the POC should be taken into account.
Details in 8.3.2.
Signed-off-by: Xu Guangxin <guangxin.xu@intel.com>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
Including codecapi.h and uuids.h in UWP mode doesn't set up all the
defines properly, ending up with constructs that MSVC silently
tolerates but that clang errors out on, like this:
DEFINE_GUIDEX(CODECAPI_AVEncCommonFormatConstraint);
Just avoid including codecapi.h completely and hardcode the last few
enum values we use from there. We already use local versions of most
enums from there, due to older mingw-w64 headers being incomplete.
Signed-off-by: Martin Storsjö <martin@martin.st>
This might have been used originally for the decoder parts of
the MediaFoundation wrapper, which aren't merged yet.
Signed-off-by: Martin Storsjö <martin@martin.st>
We want to copy the lowest number of bytes per line, but while the
buffer stride is sanitized, the src/dst stride can be negative, and a
negative number of bytes does not make much sense.
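A minimal sketch of the idea (names are illustrative): base the
per-line byte count on the magnitude of the strides.

// Negative linesizes are valid (bottom-up images), but the number of
// bytes copied per line has to be derived from their absolute values.
int bytewidth = FFMIN(FFABS(src_linesize), FFABS(dst_linesize));
bytewidth     = FFMIN(bytewidth, buf_linesize);

for (int y = 0; y < height; y++)
    memcpy(dst + y * dst_linesize, src + y * src_linesize, bytewidth);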
By default now, if AV_EF_CRCCHECK or AV_EF_IGNORE_ERR is enabled, the
decoder will skip the chunk and carry on with the next one. This should
allow the decoder to decode more corrupt files, because the functions
that decode individual chunks will very likely error out when fed
invalid data and stop the decoding of the entire image.
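For example, a caller would opt in via the standard error-resilience
flags roughly like this (hypothetical snippet):

// Ask the decoder to verify embedded checksums and to keep going
// after errors instead of aborting on the first corrupt chunk.
AVCodecContext *avctx = avcodec_alloc_context3(codec);
avctx->err_recognition = AV_EF_CRCCHECK | AV_EF_IGNORE_ERR;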
A lot of files have a CRC included.
The CRC only covers at most 34 bytes of the frame, but that should
still be enough for some amount of error detection.
The CRC flag is only signalled once every few minutes, but the CRC is
still always present, so the patch uses the file version instead.
CRC on 24-bit files expects non-padded samples, so skip such files.
Some corrupt samples may have been output before the final check
depending on the -max_samples setting.
The size of a single allocation performed by av_malloc() or av_realloc()
is supposed to be bounded by max_alloc_size, which defaults to INT_MAX
and can be set by the user; yet currently this is not completely
honoured: The actual value used is max_alloc_size - 32. How this came
to be can only be understood historically:
a) 0ecca7a49f disallowed allocations > INT_MAX. At that time the size
parameter of av_malloc() was an unsigned and the commentary added
("lets disallow possible ambiguous cases") indicates that this was done
as a precaution against calling the functions with negative int values.
allocations to INT_MAX doesn't seem to have been the intention given
that at this time the memalign hack introduced in commit
da9b170c6f (which when enabled increased
the size of allocations slightly so that one can return a correctly
aligned pointer that actually does not point to the beginning of the
allocated buffer) was already present.
b) Said memalign hack allocated 17 bytes more than actually desired, yet
allocating 16 bytes more is actually enough and so this was changed in
a9493601638b048c44751956d2360f215918800c; this commit also replaced
INT_MAX by INT_MAX - 16 (and made the limit therefore a limit on the size
of the allocated buffer), but kept the comment, although there is nothing
ambiguous about allocating (INT_MAX - 16)..INT_MAX.
c) 13dfce3d44 then increased 16 to 32 for
AVX, 6b4c0be558 replaced INT_MAX by
MAX_MALLOC_SIZE (which was of course defined to be INT_MAX) and
5a8e994287 added max_alloc_size and made
it user-selectable.
d) 4fb311c804 then dropped the memalign
hack, yet it kept the -32 (probably because the comment about ambiguous
cases was still present?), although it is no longer needed at all after
this commit. Therefore this commit removes it and uses max_alloc_size
directly.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>