Since release 4.2, FFmpeg fails to detect the correct streams in an RTMP
stream that contains a |RtmpSampleAccess AMF object prior to the
onMetaData AMF object. In the debug log it would show "[flv] Unknown
type |RtmpSampleAccess".
This functionality broke in commit d7638d8dfc, which makes unknown
metadata packets result in an opaque data stream; the |RtmpSampleAccess
packet falls into that "unknown" category.
With this change the RTMP streams are correctly detected when there
is a |RtmpSampleAccess object prior to the onMetaData object.
Signed-off-by: Peter van der Spek <p.vanderspek@bluebillywig.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: https://github.com/intel/media-driver/issues/317
The root cause is that update_dimensions() is called multiple times when
the decoder uses more than one thread, and its call to get_pixel_format()
in each decode thread triggers hwaccel_uninit/hwaccel_init more than once,
although only one hwaccel should be shared by all decode threads.
In the current code there are three situations in update_dimensions():
1. First call. Whether in single-threaded or multithreaded decoding,
   get_pixel_format() should be called after the dimensions are set;
2. Dimension change at runtime. The dimensions need to be updated when
   macroblocks_base is already allocated, and get_pixel_format() should
   be called to recreate the frames according to the updated dimensions;
3. First call from the other threads in multithreaded decoding. After
   decoder init, the other threads call update_dimensions() once to
   allocate macroblocks_base and set the dimensions, but
   get_pixel_format() should not be called because the low-level frames
   and context have already been created.
With this fix, update_dimensions() is only called when needed.
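A rough sketch of the resulting gating, with placeholder names and
conditions rather than the literal vp8.c change:

    /* Sketch only: get_pixel_format() (and thus hwaccel init/uninit)
     * must run at most once across the decode threads. */
    int first_call  = (s->macroblocks_base == NULL);
    int dim_changed = (avctx->coded_width  != width ||
                       avctx->coded_height != height);

    if (first_call || dim_changed) {
        /* allocate macroblocks_base and store the new dimensions */
    }

    if (dim_changed || (first_call && is_main_thread)) {
        /* only here may the pixel format be renegotiated */
        avctx->pix_fmt = get_pixel_format(s);
    }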
Signed-off-by: Wang, Shaofei <shaofei.wang@intel.com>
Reviewed-by: Jun, Zhao <jun.zhao@intel.com>
Reviewed-by: Haihao Xiang <haihao.xiang@intel.com>
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
As it was brought up that the current documentation reads as specific
to YCbCr only, ICtCp and RGB are now mentioned as well. Additionally,
the specifications in which these definitions of narrow and full range
appear are now referenced.
This way, the documentation of AVColorRange should match how most
people already interpret it, and thus flagging RGB AVFrames as full
range is valid not only according to common sense, but also according
to the enum definition.
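A minimal example of what this makes unambiguous, assuming an
already-allocated AVFrame named frame:

    frame->color_range = AVCOL_RANGE_JPEG; /* full range */
    /* AVCOL_RANGE_MPEG would mark narrow (limited) range instead */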
The earlier code would first attempt to allocate two buffers, then
attempt to allocate an AVIOContext using one of the new buffers as its
I/O buffer, and only then check the allocations. On success, the
z_stream used in the AVIOContext's read_packet callback was initialized
afterwards.
There are two problems with this: If the allocation of the I/O buffer
fails, avio_alloc_context() is given a NULL read buffer with a
size > 0. This happens to work right now, but it is fragile. The second
problem is that the z_stream used in the read_packet callback is not
yet functional when the AVIOContext is allocated (and
avio_alloc_context() might already fill the buffer in the future). This
commit fixes both problems by reordering the operations.
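A sketch of the reordered setup; the buffer size, context fields and
the read callback name are placeholders, not the actual code:

    if (inflateInit(&c->zstream) != Z_OK)      /* read_packet needs this */
        return AVERROR(ENOMEM);

    buf = av_malloc(BUF_SIZE);                 /* BUF_SIZE: placeholder */
    if (!buf) {
        inflateEnd(&c->zstream);
        return AVERROR(ENOMEM);
    }

    pb = avio_alloc_context(buf, BUF_SIZE, 0, c,
                            read_packet_gz /* hypothetical */, NULL, NULL);
    if (!pb) {
        av_free(buf);
        inflateEnd(&c->zstream);
        return AVERROR(ENOMEM);
    }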
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This causes some decoding error since the ADPCM predictors aren't
known, but the difference is negligible and not audible.
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
If the average bit rate cannot be calculated, such as in the case
of streamed fragmented mp4, fall back to other available parameters
in priority order.
Tests are updated where the output of the esds, btrt or ISML manifest
boxes changes.
This is utilized by various media ingests to figure out the bit
rate of the content being pushed towards them, so write it for
video, audio and subtitle tracks if at least one nonzero value
is available. In QTFF it is only mentioned for timed metadata sample
descriptions, so limit writing it to ISOBMFF (MODE_MP4) mode.
Updates the FATE tests whose results change due to the 20 extra
bytes being written per track.
In some cases (for example, super resolution) the DNN model changes
the frame size, which impacts the filter behavior, so the filter needs
to know the output frame size at the very beginning.
Currently, the filter reuses DNNModule.execute_model to query the
output frame size; this is not clear from an interface perspective, so
add a new explicit interface DNNModel.get_output for such queries.
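A hedged sketch of the new query entry point; the exact parameter list
in dnn_interface.h may differ:

    typedef int DNNReturnType;  /* stand-in for the real return type */

    typedef struct DNNModel {
        void *model;            /* backend-private handle */
        /* report the output size for a given input size without
         * running a full inference (signature approximated) */
        DNNReturnType (*get_output)(void *model, const char *input_name,
                                    int input_width, int input_height,
                                    const char *output_name,
                                    int *output_width, int *output_height);
    } DNNModel;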
Suppose we have a detect filter and a classify filter in the future:
the detect filter generates some bounding boxes (BBox) as AVFrame side
data, and the classify filter executes the DNN model for each BBox. For
each BBox we need to crop the AVFrame, copy data to the DNN model input
and run the model. So far the in_frame had to be saved at
DNNModel.set_input and used at DNNModule.execute_model; such saving is
not feasible once async execute_model is supported.
This patch passes the in_frame as an execute_model parameter, so all
the information for each inference is kept together within the same
function. It also makes it easy to support async BBox inference.
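Approximately, the execute entry point now carries the frame itself;
the types and parameter list below are stand-ins, not copied from
dnn_interface.h:

    #include <stdint.h>

    typedef struct AVFrame  AVFrame;   /* forward-declared stand-ins */
    typedef struct DNNModel DNNModel;
    typedef int DNNReturnType;

    /* each call is self-contained, which also suits async execution */
    typedef DNNReturnType (*dnn_execute_model_fn)(const DNNModel *model,
                                                  const char *input_name,
                                                  AVFrame *in_frame,
                                                  const char **output_names,
                                                  uint32_t nb_output,
                                                  AVFrame *out_frame);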
Currently, every filter needs to provide code to transfer data from
AVFrame* to the model input (DNNData*), and also from the model output
(DNNData*) back to AVFrame*. Such transfers can instead be implemented
within the DNN module, so that each filter can focus on its own
business logic.
The DNN module also exports the function pointers pre_proc and
post_proc in struct DNNModel, in case a filter has special logic to
transfer data between AVFrame* and DNNData*. The default implementation
within the DNN module is used if the filter does not set
pre_proc/post_proc.
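A hedged sketch of the exported hooks; the parameter lists are
approximate:

    struct AVFrame;
    struct AVFilterContext;
    typedef struct DNNData DNNData;

    typedef struct DNNModel {
        /* optional per-filter overrides; if left NULL, the DNN module's
         * default AVFrame <-> DNNData conversion is used */
        int (*pre_proc)(struct AVFrame *frame_in, DNNData *model_input,
                        struct AVFilterContext *filter_ctx);
        int (*post_proc)(struct AVFrame *frame_out, DNNData *model_output,
                         struct AVFilterContext *filter_ctx);
    } DNNModel;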
Fixes: signed integer overflow: 2 * 2132811776 cannot be represented in type 'int'
Fixes: 25722/clusterfuzz-testcase-minimized-ffmpeg_IO_DEMUXER_fuzzer-6221704077246464
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: timeout (243sec -> a few ms)
Fixes: 25716/clusterfuzz-testcase-minimized-ffmpeg_IO_DEMUXER_fuzzer-5764093666131968
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: 6000 * -2147483648 cannot be represented in type 'int'
Fixes: 25700/clusterfuzz-testcase-minimized-ffmpeg_DEMUXER_fuzzer-6578316302352384
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
get_content_url() allocates two buffers for temporary strings, and when
one of them can't be allocated, it simply returns, although the other
allocation could have succeeded and would leak in this scenario. This
is fixed by avoiding one of the temporary buffers.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
1. Perform the necessary reindentations after the last few commits.
2. Adapt switches to the ordinary indentation style.
3. Now that the effective lifetimes of the variables containing
the freshly allocated strings used when parsing the representation
are disjoint, the variables can be replaced by a single variable.
Doing so has the advantage of making it more clear that these are
throwaway variables, hence it has been done.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This allows reducing the level of indentation for parsing the supported
representations (audio, video and subtitles). It also allows avoiding
some allocations and frees for unsupported representations.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This commit removes two always-true checks as well as a dead default
case of a switch. The check when parsing manifests is always true,
because we now jump to the cleaning code in case the format of the
representation is unknown. The default case of the switch is dead,
because the type of the representation is already checked at the
beginning of parse_manifest_representation(). The check when reading
the header is dead, because we error out if an error happened before.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now, the DASH demuxer used av_dynarray_add() to add
audio/video/subtitles representations to arrays. Yet av_dynarray_add()
frees the array upon failure, leading to leaks of its elements;
furthermore, the element to be added leaks, too.
This has been fixed by using av_dynarray_add_nofree() instead and by
freeing the elements that could not be added to the list. Errors from
adding are now also checked and returned.
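A sketch of the fixed pattern; free_representation() and the field
names are placeholders for the demuxer's own types:

    ret = av_dynarray_add_nofree(&c->videos, &c->n_videos, rep);
    if (ret < 0) {
        free_representation(rep); /* element not owned by the array yet */
        return ret;
    }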
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
These languages are normally freed after having been added as metadata
to their respective AVStreams. Yet if one never reaches said point, they
leak. This can happen as a result of an error when reading the header or
as a result of refreshing the manifests.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The DASH demuxer currently extracts several strings at once from an xml
document before processing them one by one; these strings are allocated,
stored in local variables and need to be freed by the demuxer itself.
So if an error happens when processing one of them, all strings need to
be freed before returning. This has simply not been done, leading to
leaks.
A simple fix would be to add the necessary code for freeing; yet there is
a better solution: Avoid having several strings at the same time by
extracting a string, processing it and immediately freeing it. That way
one only has to free at most one string on error.
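An illustration of the new pattern with libxml2; the attribute name and
target field are examples only:

    #include <libxml/tree.h>
    #include <stdlib.h>

    xmlChar *val = xmlGetProp(representation_node, (const xmlChar *)"bandwidth");
    if (val) {
        rep->bandwidth = strtoll((const char *)val, NULL, 10);
        xmlFree(val);   /* freed before the next attribute is fetched */
    }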
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
If parsing a representation fails, it is not added to the list of
representations and is therefore not freed in dash_close(); it thus
leaked in most error paths of parse_manifest_representation() (some
error paths had (incomplete) code for freeing). This commit fixes this
by freeing the representation in these cases.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
It is always zero. Also remove other unused elements.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
open_url() in the DASH as well as in the HLS demuxer shares a common
bug: it modifies an AVDictionary (i.e. sets a new entry) given to it as
AVDictionary *, yet if this new entry leads to reallocation and
relocation of the AVDictionary, the caller's pointer becomes dangling,
leading to use-after-frees. So pass an AVDictionary ** instead.
(With the current implementation of AVDictionary the above can only
happen if the AVDictionary was empty initially (in which case the
new AVDictionary leaks); furthermore if the I/O is ordinary (i.e. opened
by avio_open2() or ffio_open_whitelist()), the dict is never empty (it
contains an rw_timeout entry from save_avio_options()). So this issue
could only happen if the caller sets a nondefault io_open callback, but
no AVIOContext (the AVFMT_FLAG_CUSTOM_IO flag won't be set in this
case). In case of the HLS demuxer, it was also necessary that setting
the "seekable" entry failed. Yet one should simply not rely on internals
of the AVDict API.)
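A sketch of the changed signature; the argument list is shortened and
the entry being set is just an example:

    /* was: ..., AVDictionary *opts; a reallocation in av_dict_set()
     * could then leave the caller's pointer dangling */
    static int open_url(AVFormatContext *s, AVIOContext **pb, const char *url,
                        AVDictionary **opts)
    {
        av_dict_set(opts, "seekable", "0", 0); /* may reallocate *opts */
        return s->io_open(s, pb, url, AVIO_FLAG_READ, opts);
    }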
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Just postpone the allocation of the dict until it is really needed
(after the checks that can fail).
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This currently doesn't cause any trouble, because the only caller did
not clean up the representation upon error at all; but fixing this is
a prerequisite for doing so.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The code in question seems to have been copied from about 70 lines
above; yet the code here is only executed if some of the variables
(namely representation_segmenttemplate_node and fragment_template_node)
are NULL, so it makes no sense to check them for a child element.
Also remove a redundant resetting of a pointer to an AVFormatContext
after avformat_close_input() (which already sets the pointer to NULL).
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
When using one of the AV_DICT_DONT_STRDUP_KEY/VAL flags, av_dict_set()
already frees the key/value on error, so that freeing it again would
lead to a double free.
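For illustration; the key name and surrounding code are arbitrary:

    char *val = av_strdup(str);                 /* str: placeholder */
    if (!val)
        return AVERROR(ENOMEM);
    ret = av_dict_set(&dict, "id", val, AV_DICT_DONT_STRDUP_VAL);
    if (ret < 0)
        return ret; /* av_dict_set() already freed val; an extra
                       av_free(val) here would be a double free */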
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The AAX demuxer reads a 32-bit number containing the number of entries
of an array and stores it in a uint32_t. Yet when iterating over this
array, a loop counter of type int is used. This leads to undefined
behaviour if the number of entries exceeds the range of int; to avoid
this, it is generally good to use the same type for the loop counter as
for the variable it is compared to. This is done in one of the two
loops affected by this.
In the other loop, the undefined behaviour can begin even earlier: Here
the loop counter is multiplied by a uint16_t, which can overflow as
soon as the loop counter exceeds 2^15. Using an unsigned type would
avoid the undefined behaviour, but truncation would still be possible,
so a uint64_t is used.
Also use a uint32_t for a variable containing an index into said array.
This fixes Coverity issue #1466767.
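A minimal sketch of the type choices; the names are illustrative and do
not match the AAX demuxer exactly:

    uint32_t nb_entries = avio_rb32(pb);        /* 32-bit count from file */
    uint16_t block_size = avio_rb16(pb);

    /* counter of the same type as its bound: no signed overflow */
    for (uint32_t i = 0; i < nb_entries; i++) {
        /* ... */
    }

    /* this counter enters a multiplication with a uint16_t, so a 64-bit
     * counter avoids both overflow and truncation of the product */
    for (uint64_t i = 0; i < nb_entries; i++) {
        uint64_t offset = i * block_size;
        /* ... use offset ... */
    }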
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>