contained in Vorbis comments in the CodecPrivate of flac tracks.
Moreover, it tests header removal compression.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
This test contains a track with zlib compressed CodecPrivate in addition
to compressed frames; the former was unchecked before.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Fixes: fate-fitsdec-bitpix-64
Possibly Fixes: -nan is outside the range of representable values of type 'unsigned short'
Possibly Fixes: 17769/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_FITS_fuzzer-5678314672357376
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Unlike other tf.*.conv2d layers, tf.nn.conv2d does not create many nodes
(within a scope) in the graph; it acts like a plain op. tf.nn.conv2d creates
only one node in the graph, and no internal nodes such as 'kernel' are
created.
The format of the native model file is also changed: a flag named
has_bias is added, so the version number is bumped.
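As a rough illustration of what the new flag means for loading (a hypothetical
sketch only; the field names and layout below are made up and do not mirror the
actual native-model code), the bias array is read from the model file only when
has_bias is set:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical layout for illustration; the real format lives in the
     * native DNN backend sources and may differ. */
    typedef struct ConvParamsSketch {
        int32_t in_channels, out_channels, kernel_size;
        int32_t has_bias;   /* the new flag that required the version bump */
        float  *kernel;     /* kernel_size^2 * in_channels * out_channels floats */
        float  *biases;     /* out_channels floats, present only if has_bias */
    } ConvParamsSketch;

    static int read_conv_params(FILE *f, ConvParamsSketch *p)
    {
        size_t nb_kernel;

        if (fread(&p->in_channels,  sizeof(int32_t), 1, f) != 1 ||
            fread(&p->out_channels, sizeof(int32_t), 1, f) != 1 ||
            fread(&p->kernel_size,  sizeof(int32_t), 1, f) != 1 ||
            fread(&p->has_bias,     sizeof(int32_t), 1, f) != 1)
            return -1;

        nb_kernel = (size_t)p->kernel_size * p->kernel_size *
                    p->in_channels * p->out_channels;
        p->kernel = malloc(nb_kernel * sizeof(*p->kernel));
        if (!p->kernel ||
            fread(p->kernel, sizeof(*p->kernel), nb_kernel, f) != nb_kernel)
            return -1;

        p->biases = NULL;
        if (p->has_bias) {  /* bias data exists in the file only when flagged */
            p->biases = malloc(p->out_channels * sizeof(*p->biases));
            if (!p->biases ||
                fread(p->biases, sizeof(*p->biases), p->out_channels, f) !=
                    (size_t)p->out_channels)
                return -1;
        }
        return 0;
    }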
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Allows the creation of the sdtp atom while remuxing MP4 to MP4. This
atom is required by Apple devices (iPhone, Apple TV) in order to accept
2160p media.
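For reference, the sdtp box carries one byte per sample; a minimal sketch of
how such an entry is packed, following the SampleDependencyTypeBox definition
in ISO/IEC 14496-12:

    #include <stdint.h>

    /* 2 bits each for is_leading, sample_depends_on, sample_is_depended_on
     * and sample_has_redundancy, from most to least significant. */
    static uint8_t sdtp_entry(unsigned is_leading, unsigned depends_on,
                              unsigned is_depended_on, unsigned has_redundancy)
    {
        return (uint8_t)((is_leading     & 3) << 6 |
                         (depends_on     & 3) << 4 |
                         (is_depended_on & 3) << 2 |
                         (has_redundancy & 3));
    }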
The flac parser uses a fifo to buffer its data. Consequently, when
searching for sync codes of flac packets, one needs to take care of
the possibility of wraparound. This is done by using an optimized start
code search that works on each of the continuous buffers separately and
by explicitly checking whether the last pre-wrap byte and the first
post-wrap byte constitute a valid sync code.
Moreover, the last MAX_FRAME_HEADER_SIZE - 1 bytes ought not to be searched
for (the start of) a sync code, because a header found in this region
might not be completely available yet. These bytes ought to be searched
later, when more data is available or when flushing.
Unfortunately there was an off-by-one error in the calculation of the
length to search in the post-wrap buffer: it was too large, because the
calculation was based on the number of bytes available in the fifo from
the last pre-wrap byte onwards. This meant that a header might be
parsed twice (once prematurely and once regularly when more data is
available); it could also mean that an invalid header is treated as
valid (namely if the length of said invalid header is
MAX_FRAME_HEADER_SIZE and the invalid byte that is treated as the
last byte of this potential header happens to be the right CRC-8).
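To make the wraparound handling concrete, here is a simplified, self-contained
sketch (not the actual flac_parser.c code; the sync-code test is reduced to the
two-byte pattern) of how the searchable length can be derived so that the last
MAX_FRAME_HEADER_SIZE - 1 bytes are excluded and the wrap boundary is checked
explicitly:

    #include <stddef.h>
    #include <stdint.h>

    #define MAX_FRAME_HEADER_SIZE 16   /* illustrative; matches flac_parser.c */

    /* Simplified check: 14-bit FLAC sync code, i.e. 0xFF followed by 0xF8/0xF9. */
    static int is_sync_code(uint8_t a, uint8_t b)
    {
        return a == 0xFF && (b & 0xFE) == 0xF8;
    }

    /* Count sync-code candidates in a circular buffer that is split into a
     * pre-wrap and a post-wrap segment.  The last MAX_FRAME_HEADER_SIZE - 1
     * bytes of the combined data are excluded: a header starting there might
     * not be completely available yet and must be searched later. */
    static size_t count_candidates(const uint8_t *pre,  size_t pre_len,
                                   const uint8_t *post, size_t post_len)
    {
        size_t avail      = pre_len + post_len;
        size_t searchable = avail >= MAX_FRAME_HEADER_SIZE
                          ? avail - (MAX_FRAME_HEADER_SIZE - 1) : 0;
        size_t n = 0;

        /* 1. the pre-wrap segment */
        for (size_t i = 0; i + 1 < pre_len && i < searchable; i++)
            n += is_sync_code(pre[i], pre[i + 1]);

        /* 2. the wrap boundary: last pre-wrap byte + first post-wrap byte */
        if (pre_len && post_len && pre_len - 1 < searchable)
            n += is_sync_code(pre[pre_len - 1], post[0]);

        /* 3. the post-wrap segment; the off-by-one fixed here made this limit
         * one byte too large, so a candidate could be accepted before all of
         * its bytes were actually available. */
        for (size_t i = 0; i + 1 < post_len && pre_len + i < searchable; i++)
            n += is_sync_code(post[i], post[i + 1]);

        return n;
    }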
Should a header be parsed twice, the second instance will be the best child
of the first instance; the first instance's score will be
FLAC_HEADER_BASE_SCORE - FLAC_HEADER_CHANGED_PENALTY (= 3) higher than
the second instance's score. So the frame belonging to the first
instance will be output, and it will be output as a zero-length frame
(the length being the difference between the header's offset and the
child's offset). This has serious consequences when flushing, as
returning a zero-length buffer signals to the caller that no more data
will be output; consequently the last frames not yet output will be
dropped.
Furthermore, a "sample/frame number mismatch in adjacent frames" warning
was emitted when returning the zero-length frame belonging to the first
header, because the child's sample/frame number of course didn't match
the expected sample/frame number given its parent.
filter/hdcd-mix.flac from the FATE suite was affected by this (the last
frame was omitted), which is why several FATE tests needed to be
updated.
Fixes ticket #5937.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
A threshold of 1 is sufficient for simple_dump_cut.webm; 10 is used
just to be sure the next truncated file doesn't cause the same issue.
Obvious alternative fixes are to simply accept that the file is broken,
to write some advanced error concealment, or to simply accept that the
decoder won't stop at the end of input.
Fixes: Ticket 8069 (the artifacts, not the differing md5 which was there before 1afd246960)
Fixes: simple_dump_cut.webm
Fixes: regression of 1afd246960
fate-vp5 changes because the last frame is truncated and now handled
differently.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Because the lavf_container is sometimes called with only 2 arguments,
fate tests produce bash errors like this:
tests/fate-run.sh: 299: test: =: unexpected operator
This commit fixes that.
Reviewed-by: Limin Wang <lance.lmwang@gmail.com>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Right now, the concat filter does not set the frame_rate value on any of
the out links. As a result, the default ffmpeg behaviour kicks in: the
framerate of the first input is copied to the outputs.
If a later input is higher framerate, this results in dropped frames; if
a later input is lower framerate it might cause judder.
This patch checks whether all of the video inputs have the same framerate;
if not, it sets the out link to use '1/0' as the frame rate, the value
meaning "unknown/VFR".
A test is added to verify the VFR behaviour. The existing test for CFR
behaviour passes unchanged.
The info can be saved in the dnn operand object without being regenerated
again and again, and it is also needed for layer split/merge and for
memory reuse.
To take things step by step, this patch just focuses on the C code; the
change within the python script will be added later.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
This makes the code bitexact between platforms.
Intermediate timestamps between frames are preserved.
The timebase is simplified.
Rounding differs from doubles in cases where timestamps/durations
are "funny".
Suggested-by: jb
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This reverts commit a9dacdeea6.
The reverted patch effectively made the decoder output VFR content from
samples where CFR is expected.
Addresses ticket #7880.
Signed-off-by: James Almer <jamrial@gmail.com>
The packet-count-based approach caused excessive SDT/PAT/PMT repetition for
VBR, so let's use a timestamp-based approach instead, similar to how we
emit PCRs.
SDT/PAT/PMT period should be consistent for both VBR and CBR from now on.
Also change the type of sdt_period and pat_period to AV_OPT_TYPE_DURATION so no
floating point math is necessary.
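A rough sketch of the timestamp-driven idea (the struct and names below are
illustrative, not the actual mpegtsenc.c fields): a section is rewritten
whenever at least one period has elapsed since it was last written, independent
of how many packets were muxed in between:

    #include <stdint.h>

    typedef struct PeriodicSection {
        int64_t period;   /* AV_OPT_TYPE_DURATION value, i.e. microseconds */
        int64_t last;     /* when the section was last written; INT64_MIN = never */
    } PeriodicSection;

    /* "now" is the current mux timestamp in the same unit as period. */
    static int should_emit(PeriodicSection *s, int64_t now)
    {
        if (s->last == INT64_MIN || now - s->last >= s->period) {
            s->last = now;
            return 1;   /* time to rewrite the SDT/PAT/PMT */
        }
        return 0;
    }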
Fixes ticket #3714.
Signed-off-by: Marton Balint <cus@passwd.hu>
Background:
DNN (deep neural network) is a submodule of libavfilter, and FATE/dnn
contains the unit tests for the DNN module, one unit test per dnn layer.
The unit tests are not based on the APIs exported by libavfilter; they
directly call into the functions within the DNN submodule.
There is an issue when running the following commands:
build$ ../ffmpeg/configure --disable-static --enable-shared
make
make fate-dnn-layer-pad
And part of the error message:
tests/dnn/dnn-layer-pad-test.o: In function `test_with_mode_symmetric':
/work/media/ffmpeg/build/src/tests/dnn/dnn-layer-pad-test.c:73: undefined reference to `dnn_execute_layer_pad'
The root cause is that the function dnn_execute_layer_pad is a LOCAL symbol
in libavfilter.so, so the linker cannot find it when building dnn-layer-pad-test.
To check it, just run: readelf -s libavfilter/libavfilter.so | grep dnn
So, add a dependency on the ffmpeg static libraries in the fate/dnn
Makefile. This is the same method used in fate/checkasm.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>