These fields are ad-hoc and will be deprecated. Use the recently-added
AV_CODEC_FLAG_COPY_OPAQUE to pass arbitrary user data from packets to
frames.
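As a rough sketch of that replacement (helper and variable names are
illustrative, error handling is trimmed, and AV_CODEC_FLAG_COPY_OPAQUE is
assumed to have been set on the decoder context before avcodec_open2()):
    #include <libavcodec/avcodec.h>
    #include <libavutil/buffer.h>
    /* attach arbitrary user data to a packet and read it back from the
     * frames decoded from that packet */
    static int decode_with_opaque(AVCodecContext *avctx, AVPacket *pkt,
                                  AVFrame *frame, AVBufferRef *user_data)
    {
        int ret;
        pkt->opaque_ref = av_buffer_ref(user_data);
        if (!pkt->opaque_ref)
            return AVERROR(ENOMEM);
        if ((ret = avcodec_send_packet(avctx, pkt)) < 0)
            return ret;
        while ((ret = avcodec_receive_frame(avctx, frame)) >= 0) {
            /* frame->opaque_ref now references the user data attached to
             * the packet this frame was decoded from */
            av_frame_unref(frame);
        }
        return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
    }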
Changes the result of the flcl1905 test, which uses ffprobe to decode
wmav2 with multiple frames per packet. Such packets are handled
internally by calling the decoder's decode callback multiple times,
offsetting the internal packet's data pointer and decreasing its size
after each call. The output pkt_size value before this commit is then
the remaining internal packet size at the time of each internal decode
call.
After this commit, output pkt_size is simply the size of the full packet
submitted by the caller to the decoder. This is more correct, since
internal packets are never seen by the caller and should have no
observable outside effects.
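Roughly, the internal handling looks like this (illustrative pseudo-code,
not the actual decode_simple_internal() code):
    /* the decode callback returns the number of bytes it consumed */
    while (pkt->size > 0) {
        consumed = decode_cb(avctx, frame, &got_frame, pkt);
        if (consumed < 0)
            break;
        pkt->data += consumed;
        pkt->size -= consumed;
        /* before: pkt_size reported the shrinking pkt->size seen here;
         * after:  it reports the size of the packet submitted by the caller */
    }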
ISOBMFF (14496-12) made this field ('channelcount') in the
AudioSampleEntry structure non-template¹ somewhere before the
release of the 2022 edition. As for ETSI TS 126 244 AKA 3GPP
file format (V16.1.0, 2020-10), it does not seem to contain any
references limiting the channelcount entry in AudioSampleEntry
or in its own definition of EVSSampleEntry.
The fate-mov-mp4-chapters test had to be adjusted as it outputs a
mono Vorbis stream, which is now properly marked as such
in the container.
1: As per 14496-12:
Fields shown as “template” in the box descriptions are fields
which are coded with a default value unless a derived
specification defines their use and permits writers to use
other values than the default.
Splits the currently handled subtitle at random access point
packets that can be configured to follow a specific output stream.
Currently only subtitle streams which are directly mapped into the
same output in which the heartbeat stream resides are affected.
This way the subtitle - which is known to be shown at this time -
can be split and passed to the muxer before its full duration is
known. This is also a drawback, as this essentially outputs
multiple subtitles from a single input subtitle that continues
over multiple random access points. Thus this feature should not
be utilized in cases where subtitle output latency does not matter.
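An illustrative command line (the per-stream output option is assumed here
to be spelled -fix_sub_duration_heartbeat and to behave like other boolean
flags; check the documentation for the exact spelling):
    ffmpeg -fix_sub_duration -i input.mkv \
        -map 0:v -map 0:s -c:v libx264 -c:s ass \
        -fix_sub_duration_heartbeat:v:0 \
        out.mkv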
Co-authored-by: Andrzej Nadachowski <andrzej.nadachowski@24i.com>
Co-authored-by: Bernard Boulay <bernard.boulay@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
The cHRM chunk is descriptive. That is, it describes the primaries that should
be used to interpret the pixel data in the PNG file. This is notably different
from Mastering Display Metadata, which describes which subset of the presented
gamut is relevant. MDM describes a gamut and says colors outside the gamut are
not required to be preserved, but it does not actually describe the gamut that
the pixel data from the frame resides in. Thus, to decode a cHRM chunk present
in a PNG file to Mastering Display Metadata is incorrect.
This commit changes this behavior so the cHRM chunk, if present, is decoded to
color metadata. For example, if the cHRM chunk describes BT.709 primaries, the
resulting AVFrame will be tagged with AVCOL_PRI_BT709, as a description of its
pixel data. To do this, it utilizes libavutil/csp.h, which exposes a function,
av_csp_primaries_id_from_desc, to detect which enum value accurately describes
the white point and primaries represented by the cHRM chunk.
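Roughly, the mapping can be sketched as follows (helper and variable names
are illustrative; cHRM stores white point and primary x/y coordinates
scaled by 100000, in the order white, red, green, blue):
    #include <stdint.h>
    #include <libavutil/csp.h>
    static enum AVColorPrimaries chrm_to_primaries(const uint32_t chrm[8])
    {
        AVColorPrimariesDesc desc = {
            .wp   = { av_make_q(chrm[0], 100000), av_make_q(chrm[1], 100000) },
            .prim = {
                .r = { av_make_q(chrm[2], 100000), av_make_q(chrm[3], 100000) },
                .g = { av_make_q(chrm[4], 100000), av_make_q(chrm[5], 100000) },
                .b = { av_make_q(chrm[6], 100000), av_make_q(chrm[7], 100000) },
            },
        };
        /* e.g. AVCOL_PRI_BT709 if the coordinates match BT.709 closely
         * enough, AVCOL_PRI_UNSPECIFIED otherwise */
        return av_csp_primaries_id_from_desc(&desc);
    }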
This commit also changes pngenc.c to utilize the libavutil/csp.h API, since it
previously duplicated code contained in that API. Instead, taking advantage of
the API that exists makes more sense. pngenc.c does properly utilize the color
tags rather than incorrectly using MDM, so that required no change.
Signed-off-by: Leo Izen <leo.izen@gmail.com>
segment_time and segment_times are defined as duration specifications, not
timestamps, so calculation of segment duration must account for initial
timestamp. Fixed.
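As an illustration (numbers made up, not taken from the test): with
-segment_time 10 and a first packet timestamp of 100 seconds, segments
should now be cut near 110 s, 120 s and so on; measuring against the raw
timestamps instead cuts the first segment far too early.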
FATE ref for segment-mp4-to-ts changed on account of avoiding premature
segment cut at the end of the first segment.
Defined by H.274, this SEI message is utilized by iPhones to save
the nominal ambient viewing environment for the display of recorded
HDR content. The contents of the message are exposed to API users
as AVFrame side data containing AVAmbientViewingEnvironment.
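Reading it from the API side could look roughly like this (sketch only;
the helper name is made up):
    #include <libavutil/ambient_viewing_environment.h>
    #include <libavutil/frame.h>
    #include <libavutil/log.h>
    static void log_ambient_env(const AVFrame *frame)
    {
        const AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_AMBIENT_VIEWING_ENVIRONMENT);
        if (!sd)
            return;
        const AVAmbientViewingEnvironment *env =
            (const AVAmbientViewingEnvironment *)sd->data;
        /* illuminance of the environment plus CIE 1931 x/y chromaticity of
         * the ambient light, all stored as AVRational */
        av_log(NULL, AV_LOG_INFO, "ambient: %d/%d, x=%d/%d, y=%d/%d\n",
               env->ambient_illuminance.num, env->ambient_illuminance.den,
               env->ambient_light_x.num,     env->ambient_light_x.den,
               env->ambient_light_y.num,     env->ambient_light_y.den);
    }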
As the DV RPU test sample is from an iPhone and includes Ambient
Viewing Environment SEI messages, its test result gets updated.
Parsing should probably be enabled for all codecs, at least for headers,
but e.g. the AAC parser produces 1-byte packets of zero padding with it,
so I'm just enabling it for EAC3 for the moment.
Current code may, depending on the muxer, decide to use VSYNC_VFR tagged
with the specified framerate, without actually performing framerate
conversion. This is clearly wrong and against the documentation, which
states unambiguously that -r should produce CFR output for video
encoding.
FATE test changes:
* nuv-rtjpeg: replace -r with '-enc_time_base -1', which keeps the
original timebase. Output frames are now produced with proper
durations.
* filter-mpdecimate: just drop the -r option, it is unnecessary
* filter-fps-r: remove, this test makes no sense and actually
produces broken VFR output (with incorrect frame durations).
Commit 18f24527eb accidentally made side data only packets be handled like a
flush request. Fix this regression by effectively ignoring them as was the
original intention.
Signed-off-by: James Almer <jamrial@gmail.com>
Currently, in case of equality on the first color channel, the order of
the ref colors is defined by the hashing function. This commit makes the
sorting deterministic and improves the hierarchical ordering.
Some encoders, like flac, can send side data only packets at the end.
Eventually, said extradata update should ideally be used to update the header
when writing to seekable output, but for now, ignore them.
Should fix the undefined behavior of passing NULL to memcpy().
Signed-off-by: James Almer <jamrial@gmail.com>
PFM (aka Portable FloatMap) encodes its scanlines from bottom-to-top,
not from top-to-bottom, unlike other NetPBM formats. Without this
patch, FFmpeg ignores this exception and decodes/encodes PFM images
mirrored vertically from their proper orientation.
For reference, see the NetPBM tool pfmtopam, which encodes a .pam
from a .pfm, using the correct orientation (and which FFmpeg reads
correctly). Also compare ffplay to magick display, which shows the
correct orientation as well.
See: http://www.pauldebevec.com/Research/HDR/PFM/ and see:
https://netpbm.sourceforge.net/doc/pfm.html for descriptions of this
image format.
Signed-off-by: Leo Izen <leo.izen@gmail.com>
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: James Almer <jamrial@gmail.com>
This filter, when used in the "pad" mode, currently makes the
distinction between limited and full range solely by testing for YUVJ
pixel formats at link setup time. This is deprecated and should be
improved to perform the detection based on the per-frame metadata.
In order to make this distinction based on color range metadata, which
is only known at the time of filtering frames, we simply allocate two
copies of the "black" frame - one for limited range and the other for
full range metadata. This could be done more dynamically (e.g.
as-needed or simply by blitting the appropriate pixel value directly),
but this change is relatively simple and preserves the structure of the
existing code.
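The per-frame selection then boils down to something like this
(illustrative sketch; the names in the actual filter context differ):
    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>
    /* pick the pre-allocated padding frame matching the input's color range */
    static AVFrame *select_black_frame(const AVFrame *in,
                                       AVFrame *black_limited,
                                       AVFrame *black_full)
    {
        return in->color_range == AVCOL_RANGE_JPEG ? black_full : black_limited;
    }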
This commit actually fixes a bug in FATE - the new output is correct for
the first time. The previous md5 ref was of a frame that incorrectly
combined full-range pixel data with limited-range black fields. The
corresponding result has been updated.
Signed-off-by: Niklas Haas <git@haasn.dev>
The idea behind last_pkt_props was to store the properties of the last packet
fed to the decoder. Any sort of queueing required by CODEC_CAP_DELAY decoders
that consume several packets before they start outputting frames should be done
by the decoders in question. An example of this is libdav1d.
This is required for the following commits that will fix last_pkt_props in
frame threading scenarios, as well as maintain its contents during flush.
This reverts commit 022a12b306.
Signed-off-by: James Almer <jamrial@gmail.com>
The Encoding field (and the \fe tag) allows limiting font selection to
only those fonts declaring support for the specified codepage in the
"Code Page Character Range" field of their OS/2 table.
In particular, Encoding=0 means only fonts declaring support for "ANSI",
or rather "Latin (Western European)", are allowed to be selected.
Specifying Encoding=1 allows all fonts to be considered.
We do not want to limit font selection, so specify Encoding=1.
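For illustration, Encoding is the last field of a V4+ style definition, so
a style line along these lines (example values, not the exact line written
by the muxer) ends in ",1":
    [V4+ Styles]
    Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
    Style: Default,Arial,16,&Hffffff,&Hffffff,&H0,&H0,0,0,0,0,100,100,0,0,1,1,0,2,10,10,10,1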
NB: at the time of writing libass only partially supports this field,
thus hiding the issue in any libass-based renderer. A VSFilter-based
DirectShow filter or XySubFilter will reveal the issue when a font not
declaring support for latin characters is specified in a style.
Colour values used in ASS files without a "YCbCr Matrix" header set to
"None" are subject to colour mangling, due to how ASS was historically
conceived. A more in-depth description can be found in the documentation
inside libass' public ass_types.h header. The important part is, if this
header is not set to "None", the final output colours can deviate from
the literal value specified in the file. When converting from non-ASS
formats we do not want any colour shift to happen, so let's set the
appropriate header.
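Concretely, this means emitting something along these lines in the script
header (illustrative snippet):
    [Script Info]
    ScriptType: v4.00+
    YCbCr Matrix: None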
NB: ffmpeg's subtitle filter does not follow libass' documentation
regarding colour mangling, thus hiding the bug. Anything based on
VSFilter or XySubFilter, or e.g. mpv, does follow it and might show the issue.
(Of course native ASS subs, which _do_ rely on colour mangling won't
work properly with the subtitle filter, but this can be fixed another
time)
It is valid for HEVC; in fact, the ATSC-HEVC spec [1] simply
refers to the relevant H.264 spec.
It is also trivial to implement now: Just move applying AFD
to ff_h2645_sei_to_frame() and stop ignoring AFD when parsing
a HEVC SEI containing it.
A FATE-test for this has been added.
[1]: https://www.atsc.org/atsc-documents/a3412017-video-hevc/
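API users can then read AFD from decoded HEVC frames the same way as for
H.264, roughly as follows (sketch, helper name made up):
    #include <libavutil/frame.h>
    static int get_afd(const AVFrame *frame)
    {
        const AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_AFD);
        /* AFD side data is a single byte holding the active format value */
        return sd ? sd->data[0] : -1;
    }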
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
floating point uses a slightly different predictor technique described here
http://chriscox.org/TIFFTN3d1.pdf
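The gist for 32-bit floats, following the technote's description rather
than the lavc implementation (illustrative names):
    #include <stdint.h>
    #include <string.h>
    /* row: one scanline of predictor-3 data (4 * nsamples bytes), already
     * decompressed; out: nsamples native floats */
    static void undo_fp_predictor(uint8_t *row, float *out, int nsamples)
    {
        /* 1. undo the byte-wise horizontal differencing */
        for (int i = 1; i < 4 * nsamples; i++)
            row[i] += row[i - 1];
        /* 2. reassemble each float's bit pattern from its byte planes
         *    (plane 0 holds the most significant byte) */
        for (int j = 0; j < nsamples; j++) {
            uint32_t v = (uint32_t)row[0 * nsamples + j] << 24 |
                         (uint32_t)row[1 * nsamples + j] << 16 |
                         (uint32_t)row[2 * nsamples + j] <<  8 |
                                   row[3 * nsamples + j];
            memcpy(&out[j], &v, sizeof(v));
        }
    }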
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This patch replaces the transform used in AAC with lavu/tx and removes
the limitation on only being able to decode 960-sample files
with the float decoder.
This commit also removes a whole bunch of unnecessary and slow
lifting steps the decoder did to compensate for the poor accuracy
of the old integer transformation code.
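For reference, an inverse MDCT set up through lavu/tx looks roughly like
this (illustrative, not the exact aacdec call sites; buffer sizes as per
the tx.h documentation):
    #include <libavutil/tx.h>
    static int run_imdct(float *dst, float *src, int len)
    {
        AVTXContext *tx = NULL;
        av_tx_fn     fn = NULL;
        float scale = 1.0f; /* illustrative; the decoder uses its own scaling */
        int ret = av_tx_init(&tx, &fn, AV_TX_FLOAT_MDCT, 1 /* inverse */, len,
                             &scale, 0);
        if (ret < 0)
            return ret;
        fn(tx, dst, src, sizeof(float)); /* stride is in bytes */
        av_tx_uninit(&tx);
        return 0;
    }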
Overall float decoder speedup on Zen 3 for 64kbps: 32%
Fixes ticket #128.
The SVQ1 interframe mean VLC symbols -128 and 128 are incorrectly swapped
in our SVQ1 implementation, resulting in visible artifacts for some videos.
This patch unswaps the order of these two symbols.
The most noticeable example of the artifacts caused by this error can be observed in
https://trac.ffmpeg.org/attachment/ticket/128/svq1_set.7z '352_288_k_50.mov'.
The artifacts are not observed when using the reference decoder
(QuickTime 7.7.9 x86 binary).
As a result of this patch, the reference data for the fate-svq1 test
($SAMPLES/svq1/marymary-shackles.mov) must be modified. For this file, our
decoder output is now bitwise identical to the reference decoder's. I have
tested the patch with various other samples and they are all now bitwise identical.
The data in SGI images is stored planar, so exporting
it via planar pixel formats is natural.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This check is intended to avoid buffer overflows,
yet there are four problems with it:
1. It has an in-built off-by-one error: len == out_end - out
is perfectly fine and nothing to worry about (see the simplified
illustration after the footnotes below).
This off-by-one error led to the pixel in the lower-right corner
not being set properly for the back frame of the sample from
the rl2 FATE-test. This pixel is copied to every frame which
is the reason for the update to the reference file of said test.
With this patch, the output of the decoder matches the output
as captured from the reference decoder* (apart from the fact
that said reference somehow lacks the top part of the frame
(copied over from the background frame)).
2. Given that the stride of the buffer may be different
from the width of the video (despite one pixel taking one byte),
there is a second check later on making the first check redundant
(if one returns immediately; a simple break at the second check
is not sufficient, because it only exits the inner loop).
3. The check is based around the assumption of the stride being
positive (it has this in common with the other check which
will be fixed in a future commit).
4. Even after fixing the off-by-one error, the check in
question is still triggered by all the non-background frames
in the FATE sample as well as by A1100100.RL2. In all these
cases, they use len == 255 and val == 128. For videos with
background frame this just means "copy from the background
frame", which would be done anyway lateron.* Yet for videos
without it copying it is necessary to avoid leaving
uninitialized parts in the video.
*: Available in https://samples.mplayerhq.hu/game-formats/voyeur-rl2/
**: Due to this, the code that copies the rest from the
back frame is no longer executed for any of the samples
available on the sample server. Given that these are only
the files from the demo version of this game, I don't know
whether this code is executed for any file in existence or not.
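For clarity, the off-by-one from point 1 in isolation (simplified; as per
points 2-4 the patch removes the check entirely rather than adjusting it):
    /* out_end - out bytes remain, so a run of len bytes fits iff
     * len <= out_end - out; only this would actually overflow: */
    if (len > out_end - out)
        return AVERROR_INVALIDDATA;
    /* the old bound also rejected len == out_end - out, which is fine */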
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>