We need to check for malloc failure before using the result, so adjust
the location in the code accordingly.
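As a generic sketch of the pattern (illustrative only, not the patch's
actual code; copy_data is a hypothetical helper):

    #include <stdint.h>
    #include <string.h>
    #include "libavutil/error.h"
    #include "libavutil/mem.h"

    static int copy_data(uint8_t **out, const uint8_t *src, size_t size)
    {
        uint8_t *buf = av_malloc(size);
        if (!buf)
            return AVERROR(ENOMEM); /* check the allocation ...        */
        memcpy(buf, src, size);     /* ... before the buffer is used   */
        *out = buf;
        return 0;
    }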
Reviewed-by: Nicolas George <nicolas.george@normalesup.org>
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
After the last few commits, the functions for writing master elements
with CRC-32 elements didn't really make use of the ebml_master
structure any more, so remove these parameters from the functions.
The only things that still need to be kept are the positions of the
level 1 elements that are written preliminarily and updated later.
These positions are stored in the MatroskaMuxContext and
replace the corresponding ebml_master structures.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Up until now, the log messages output when writing blocks reported a
block's offset relative to its cluster as "the offset". Given that the
real offset from the beginning of the file cannot be known at this
point (it is not yet known how many bytes will be used for the
containing cluster's length field), both the relative offset within the
cluster and the offset of the containing cluster are reported from now
on.
Furthermore, the TrackNumber of the written block has been added to the
log output.
Also, the log message for writing vtt blocks has been brought in line
with the message for normal blocks.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Up until now, the length field of most level 1 elements has been written
using eight bytes, although it is known in advance how much space the
content of these elements will take up, so the minimal number of bytes
for the length field could be determined instead. This commit changes
this.
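As a rough sketch of the arithmetic involved (an illustrative helper,
not necessarily the muxer's actual code): an EBML length field carries
seven usable bits per byte, with the all-ones pattern reserved for
"unknown length", so the minimal number of bytes for a known size can
be computed as follows.

    #include <stdint.h>

    /* Minimal number of bytes needed to code "size" as an EBML length
     * field; each byte contributes 7 usable bits and the all-ones value
     * of a given width is reserved for "unknown length". */
    static int ebml_length_bytes(uint64_t size)
    {
        int bytes = 1;
        while ((size + 1) >> (7 * bytes) && bytes < 8)
            bytes++;
        return bytes;
    }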
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Fixes indentation, whitespace and a typo, and renames a variable
(dyn_bc -> cluster_bc) to make its meaning clearer and to bring it
more in line with the naming of similar variables.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Given that dynamic buffers are used to write level 1 elements in both
the seekable and the non-seekable mode, and that no seeks are used in
the seekable case any more, the two modes can be combined; as a
consequence, the non-seekable mode automatically inherits the ability to
write CRC-32 elements.
There are no differences if the output is seekable; when it is not and
writing CRC-32 elements is disabled, there can still be minor
differences, because before this commit the EBML ID and length field
were counted towards the cluster size limit, whereas now they no longer
are.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Up until now, the writing process for level 1 elements (those elements
for which CRC-32 elements are written by default) was as follows when
the output was seekable: write the EBML ID, write an "unknown length"
EBML number of the desired length, write the element into a dynamic
buffer, then write the dynamic buffer (after possibly calculating and
writing the CRC-32 element), then seek back to the size element and
overwrite the unknown-size element with the real size. The seeking and
overwriting part has been eliminated by not writing the size initially.
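Sketched with hypothetical helper names (write_payload, put_ebml_id and
put_ebml_length are placeholders; only the avio dynamic-buffer calls are
the public API), the new flow looks roughly like this:

    #include "libavformat/avio.h"
    #include "libavutil/mem.h"

    /* Buffer the element first, then emit ID, exact size and payload in
     * one pass -- no seeking back needed. */
    static int write_level1_element(AVIOContext *pb, uint32_t id,
                                    void (*write_payload)(AVIOContext *))
    {
        AVIOContext *dyn;
        uint8_t *buf;
        int size, ret;

        if ((ret = avio_open_dyn_buf(&dyn)) < 0)
            return ret;
        write_payload(dyn);                   /* fill the element's payload */
        size = avio_close_dyn_buf(dyn, &buf); /* now the real size is known */
        put_ebml_id(pb, id);                  /* hypothetical EBML helpers  */
        put_ebml_length(pb, size);            /* exact size, not "unknown"  */
        avio_write(pb, buf, size);
        av_free(buf);
        return 0;
    }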
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
A Matroska EBML ID can be at most four bytes long, so make the
variables denoting EBML IDs uint32_t instead of unsigned int to
better reflect this.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
All places where end_ebml_master_crc32_preliminary is used already
check whether the output is seekable, so the check in the function
is redundant.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Since 4e3bdf729a there is no reason any more to treat the seekable and
non-seekable cases separately with regard to the log message for a new
cluster. This effectively reverts d41aeea8a6.
Also improve the log message: "pts 80dts 0" -> "pts 80, dts 0".
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Up until now, the check for whether to write CRC-32 elements was always
mkv->write_crc && mkv->mode != MODE_WEBM. This is equivalent to simply
setting write_crc to zero in WebM mode, which is what this commit does.
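Sketched (field names as given in the commit text, surrounding code
illustrative only):

    /* Once, at init time (mkv is the muxer context): */
    if (mkv->mode == MODE_WEBM)
        mkv->write_crc = 0;

    /* Every call site then only needs: */
    if (mkv->write_crc) {
        /* write the CRC-32 element */
    }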
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Up until e7ddafd515 the Matroska muxer wrote a secondary seek head
referencing all the clusters. When this was changed, a (now completely
wrong) comment remained and the only remaining seek head was still
called main_seekhead. This has been changed.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Up until now the EBML Header length field has been written with eight
bytes, although the EBML Header is always so small that only one byte
is needed for it. This patch saves seven bytes for every Matroska/WebM
file.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
The upper bounds currently used for determining the size of a CuePoint's
length field can be improved somewhat; as a result, a CuePoint
containing three CueTrackPositions will now only need a one-byte
length field.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
The earlier code included the size of the BlockGroup's length field and
the EBML ID in the calculation of the size of the payload, and it
ignored the size of the duration's length field. This meant that
BlockGroups corresponding to packets of size 2^(7n) - 17 - n - i,
i = 0,..., n - 1, n = 1,..., 8 (i.e. 110, 16364, 16365,
2097130..2097132, ...) were written with length fields that are
unnecessarily long.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
At this point, ts already includes ts_offset, so that the timestamp
written in the block relative to the cluster is already given by
ts - mkv->cluster_pts. It is this number that needs to fit into an
int16_t.
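Sketched as a tiny illustrative check (not the muxer's actual code):

    #include <stdint.h>

    /* The timestamp stored in a (Simple)Block is relative to the cluster
     * timestamp and is a signed 16-bit value. */
    static int rel_ts_fits(int64_t ts, int64_t cluster_pts)
    {
        int64_t rel = ts - cluster_pts;
        return rel >= INT16_MIN && rel <= INT16_MAX;
    }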
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Currently, only float is supported as the model input data type;
actually, there are other data types, so this patch adds uint8.
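The idea, sketched with illustrative names (not the actual
dnn_interface.h definitions): the input descriptor carries a data type
so that backends can accept uint8 data in addition to float.

    #include <stdint.h>

    typedef enum { DT_FLOAT32, DT_UINT8 } DataType;

    typedef struct InputData {
        void    *data;   /* points at float or uint8_t samples */
        DataType dt;     /* which of the two it is             */
        int      width, height, channels;
    } InputData;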
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Some models, such as ssd and yolo, have more than one output.
The cleanup code in this patch is a little complex because
set_input_output_tf can be called many times together with
ff_dnn_execute_model_tf, so we have to release resources for the case
where the two interfaces are called in an interleaved fashion.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Currently, within the set_input_output interface, the dims/memory of the
TensorFlow DNN model output is determined by executing the model with
zero input. However, the output dims might vary with different input
data for networks such as the object detection models faster-rcnn, ssd
and yolo.
This patch moves the logic from set_input_output to execute_model, which
is suitable for all cases. Since the interface changed, dnn_backend_native
changes as well.
In vf_sr.c, the filter determines whether the model is srcnn or espcn by
executing the model with zero input, so execute_model has to be called
in config_props.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Remove the requirement that the names of the DNN model input/output
be "x"/"y".
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Remove 'else' since there is always a 'return' in the 'if' scope; this
keeps the code cleaner for later maintenance.
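A generic illustration of the pattern (handle_error and do_work are
hypothetical):

    static int process(int error)
    {
        if (error)
            return handle_error(); /* this branch always returns ...  */
        return do_work();          /* ... so no 'else' is needed here */
    }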
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Otherwise, the following check will return an error if layer_add_res
contains a random, uninitialized value.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Add support for building FFmpeg with HW-accelerated decode and encode
on the PPC64 little-endian architecture.
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
Cuvid only supports clips up to a maximum number of macroblocks. This
check was missing after the cuvidGetDecoderCaps API call, allowing
unsupported clips to proceed. Add the missing check, the same as the
one in the nvdec hwaccel implementation.
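Sketched along the lines of the nvdec check (nMaxMBCount is the relevant
CUVIDDECODECAPS field; check_mb_count and the surrounding code are
illustrative, not the exact patch):

    static int check_mb_count(AVCodecContext *avctx,
                              const CUVIDDECODECAPS *caps)
    {
        unsigned int mb_count = (avctx->coded_width  / 16) *
                                (avctx->coded_height / 16);
        /* Reject streams whose macroblock count exceeds what the decoder
         * reports as supported. */
        if (caps->nMaxMBCount && mb_count > caps->nMaxMBCount) {
            av_log(avctx, AV_LOG_ERROR,
                   "Video macroblock count %u exceeds maximum of %u\n",
                   mb_count, caps->nMaxMBCount);
            return AVERROR(EINVAL);
        }
        return 0;
    }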
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
Instead of processing each column one by one, processing several columns
together gives about 30% better performance.
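A generic sketch of the idea (not the filter's actual code): handling
several adjacent columns per vertical sweep keeps neighbouring memory
accesses together, which is friendlier to the cache and to vectorization
than a strict column-by-column walk.

    #include "libavutil/macros.h"

    #define COLS_PER_PASS 8

    /* Placeholder per-sample operation; the real filter does its own. */
    static void filter_columns(float *dst, const float *src,
                               int width, int height, int stride)
    {
        for (int x = 0; x < width; x += COLS_PER_PASS) {
            int cols = FFMIN(COLS_PER_PASS, width - x);
            for (int y = 0; y < height; y++)
                for (int c = 0; c < cols; c++)
                    dst[y * stride + x + c] = 0.5f * src[y * stride + x + c];
        }
    }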
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Ruiling Song <ruiling.song@intel.com>
Currently the profile mapping is hard-coded and not flexible enough to
do an exact mapping (e.g. libmfx treats H264 constrained baseline as
baseline profile). The vaapi profile mapping function provides a better
solution than the current qsv mapping.
Signed-off-by: Zhong Li <zhong.li@intel.com>
It is helpful to know why decoding of some clips failed. Ticket #7330
is a good example: with this patch it is easy to see that the
bitstream's codec level is out of the supported range.
Signed-off-by: Zhong Li <zhong.li@intel.com>
Reference: Table 8: Interpretation of valid BITPIX value from FITS standard 4.0
Fixes: runtime error: division by zero
Fixes: 14581/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_FITS_fuzzer-5652382425284608
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
10 bytes (the size of an id3v2 header) were being read before any checks
were made on the bitstream. The result was that we were overreading into
the next frame if the current one was 8 or 9 bytes long.
Fixes tickets #7271 and #7869.
Signed-off-by: James Almer <jamrial@gmail.com>
The latest generation video decoder on the Turing chips supports
decoding HEVC 4:4:4. This change adds AV_PIX_FMT_VDPAU as a valid format
for HEVC 4:4:4 8 bit.
Pass SPS, PPS range extensions to VDPAU layer via
VdpPictureInfoHEVC444. Added VdpPictureInfoHEVC444 struct to
VdpPictureInfo union to populate the range extension params. Mapped
FF_PROFILE_HEVC_REXT to VDP_DECODER_PROFILE_HEVC_MAIN_444.
The new VdpYCbCr formats VDP_YCBCR_FORMAT_Y_U_V_444 and
VDP_YCBCR_FORMAT_Y_UV_444 have been added to VDPAU with libvdpau-1.2
to be used in get/put bits for YUV 4:4:4 surfaces. The earlier mapping
of AV_PIX_FMT_YUV444P to VDP_YCBCR_FORMAT_YV12 is not valid.
Hence this change maps AV_PIX_FMT_YUV444P to VDP_YCBCR_FORMAT_Y_U_V_444
to access the YUV 4:4:4 surface via the read-back APIs of VDPAU.
Apparently, in the new SDK one cannot query whether VANC output is
supported, so we fall back to non-VANC output if enabling the video
output with VANC fails.
Fixes ticket #7867.
Signed-off-by: Marton Balint <cus@passwd.hu>