Matroska requires pts to be >= 0 with a slight exception:
It has a mechanism to deal with codec delay, i.e. with
the data added at the beginning that does not correspond
to actual input data and should be discarded by the player.
Only the audio actually intended to be output needs to have
a timestamp >= 0.
In order to avoid unnecessary timestamp shifting, this patch
allows muxers to inform the shifting code of such relaxed
requirements so that it can take them into account.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Matroska generally requires timestamps to be nonnegative, but
there is an exception: Data that corresponds to encoder delay
and is not supposed to be output anyway can have a negative
timestamp. This is achieved by using the CodecDelay header
field: The demuxer has to subtract this value from the raw
(nonnegative) timestamps of the corresponding track.
Therefore the muxer has to add this value first to write
this raw timestamp.
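As a rough illustration of this offset arithmetic (plain C with
made-up example values, everything kept in one clock for simplicity;
this is not the actual muxer/demuxer code):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        int64_t codec_delay = 6500000;       /* CodecDelay, e.g. an Opus pre-skip */
        int64_t pts         = -6500000;      /* the first packet starts at -delay */

        /* muxer: add the delay so that the raw timestamp written
         * to the Block is nonnegative                             */
        int64_t raw_ts = pts + codec_delay;  /* == 0 */

        /* demuxer: subtract CodecDelay again, recovering the
         * original (possibly negative) timestamp                  */
        int64_t out_ts = raw_ts - codec_delay;

        assert(out_ts == pts);
        return 0;
    }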
Support for writing CodecDelay has been added in FFmpeg commit
d92b1b1bab and in Libav commit
a1aa37dd0b. The former simply
wrote the header field and did not apply any timestamp offsets,
leading to desynchronisation (if one uses multiple tracks).
The latter applied it at two places, but not at the one where
it actually matters, namely in mkv_write_block(), leading to
the same desynchronisation as with the former commit. Furthermore,
it used the wrong timebase to convert the delay to the
stream's timebase: the conversion used the timebase from
before avpriv_set_pts_info().
When the latter was merged in 82e4f39883,
it was only done in a deactivated state that still did not
offset the timestamps when muxing, due to "assertion failures
and av sync errors". a1aa37dd0b
did indeed make it more likely to run into assertion failures
(namely if the relative block timestamp does not fit into an int16_t).
Yet all of the above issues have been fixed (in commits
962d631573,
5d3953a5dc and
4ebeab15b0). This commit therefore
enables applying CodecDelay, fixing ticket #7182.
There is just one slight regression from this: If one has input
with encoder delay where the first timestamp is negative, but
the pts of the part of the data that is actually intended to be
output is nonnegative, then the timestamps will currently by default
be shifted to make them nonnegative before they reach the muxer;
the muxer will then ensure that the shifted timestamps are retained.
Before this commit, the muxer did not ensure this; instead the
timestamps that the demuxer will output were effectively shifted
down by the CodecDelay, and if the first timestamp of the actually
intended output was zero before shifting, this unintentional shift
just cancels the shift performed before the packets reached the muxer.
(But notice that this only applies if all the tracks use the same
CodecDelay, or the relative sync between tracks will be impaired.)
This happens in the matroska-opus-remux and matroska-ogg-opus-remux
FATE tests. Future commits will forward the information that
the Matroska muxer has a limited capability to handle negative
timestamps so that the shifting in libavformat can take advantage
of it.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Opus can be decoded to multiple sample rates (namely 48kHz, 24kHz,
16kHz, 12kHz and 8kHz); libopus as well as our encoder wrapper
support these sample rates. The OpusHead contains a field for
this original sample rate. Yet the pre-skip (and the granule position
in the Ogg-Opus mapping in general) is always in the 48kHz clock,
irrespective of the original sample rate.
Before commit c3c22bee63, our libopus
encoder was buggy: It did not account for the fact that the pre-skip
field is always according to a 48kHz clock and wrote too small
a value when the encoder is used with a sample rate other than 48kHz;
this discrepancy between CodecDelay and OpusHead led to Firefox
rejecting such streams.
In order to account for that, said commit made the muxer always use
48kHz instead of the actual sample rate to convert the initial_padding
(in samples at the stream's sample rate) to ns. This meant that both
fields were now off by the same factor, so Firefox was happy.
Then commit f4bdeddc3c fixed the issue
in libopusenc; so the OpusHead is correct, but the CodecDelay is
still off*. This commit fixes this by effectively reverting
c3c22bee63.
*: Firefox no longer seems to abort when CodecDelay and OpusHead
disagree.
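To illustrate the conversion in question (a sketch using the public
av_rescale_q(); the 16kHz stream and the 104-sample delay are made-up
example values):

    #include <stdint.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/rational.h>

    static void example(void)
    {
        /* 104 samples of encoder delay at a 16kHz stream sample rate */
        int64_t initial_padding = 104;

        /* conversion using the actual sample rate (what this commit
         * restores): 104 / 16000 s = 6.5 ms = 6500000 ns, matching
         * the pre-skip of the fixed OpusHead                         */
        int64_t delay_ns = av_rescale_q(initial_padding,
                                        (AVRational){ 1, 16000 },
                                        (AVRational){ 1, 1000000000 });

        /* conversion using the hardcoded 48kHz clock (the behaviour
         * being reverted): about 2166667 ns, too small by 48000/16000 */
        int64_t wrong_ns = av_rescale_q(initial_padding,
                                        (AVRational){ 1, 48000 },
                                        (AVRational){ 1, 1000000000 });

        (void)delay_ns; (void)wrong_ns;
    }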
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It is possible for the trailing padding to be zero,
e.g. if the AV_PKT_DATA_SKIP_SAMPLES side data is only used
for leading padding. Matroska supports this (use a negative
DiscardPadding), but players do not; at least Firefox refuses
to play such a file. So for now only write DiscardPadding
if it is trailing padding and nonzero.
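A sketch of the resulting condition (not the literal matroskaenc.c
code; write_discard_padding() is a hypothetical helper, and the
conversion of the sample count to nanoseconds is omitted):

    #include <stdint.h>
    #include <libavcodec/packet.h>
    #include <libavformat/avio.h>
    #include <libavutil/intreadwrite.h>

    void write_discard_padding(AVIOContext *pb, int64_t samples); /* hypothetical */

    static void maybe_write_discard_padding(AVIOContext *pb, const AVPacket *pkt)
    {
        /* AV_PKT_DATA_SKIP_SAMPLES layout: u32le samples to skip at the
         * start, u32le samples to skip at the end, u8 reason, u8 reason */
        size_t size;
        const uint8_t *sd = av_packet_get_side_data(pkt, AV_PKT_DATA_SKIP_SAMPLES,
                                                    &size);
        int64_t trailing = 0;

        if (sd && size >= 10)
            trailing = AV_RL32(sd + 4);

        /* only nonzero trailing padding leads to a DiscardPadding element;
         * leading padding (a negative DiscardPadding) is not written      */
        if (trailing > 0)
            write_discard_padding(pb, trailing);
    }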
The fate-matroska-ogg-opus-remux test was affected by this.
(I wish CodecDelay did not exist and DiscardPadding were
instead used to trim the codec delay away (with the Block
timestamp corresponding to the time at which the actually
intended audio is output).)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Creating CFHD RL VLC tables works by first extending
the codes by the sign, followed by creating a VLC,
followed by deriving the RL VLC from this VLC (which
is then discarded). Extending the codes uses stack arrays.
The tables used to initialize the VLC are already sorted
from left-to-right in the tree. This means that the
corresponding VLC entries are generally also ascending,
but not always: Entries from subtables always follow
the corresponding main table although it is possible
for the right-most node to fit into the main table.
This suggests that one can try to use the final destination
buffer as scratch buffer for the tables with sign included.
Unfortunately it works for neither of the tables if one
uses the right-most part of the RL VLC buffer as scratch buffer;
using the left-most part of the RL VLC buffer as scratch buffer
might work if one traverses the VLC entries from end to start.
But it works only for the little RL VLC (table 9), not for table 18.
Therefore this patch uses the RL VLC buffer for table 9
as scratch buffer for creating the bigger table 18.
Afterwards the left part of the buffer for table 9 is
used as scratch buffer to create table 9.
This fixes the cfhd part of ticket #9399 (if it is not already fixed).
Notice that I do not consider the previous stack usage excessive.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The VLC is only used to initialize the RL VLC.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
cfhddata.c initializes an RL VLC table via code tables and
corresponding tables for length, run and level. The code and length
tables are used to initialize a VLC; no symbol table is used.
Afterwards the symbols of said VLC are just the indices of
the corresponding entries in the code and length table that
were used for initialization; they can therefore be used
to get the matching level and run entry and they are not used
for anything else. Therefore one can just permute these tables
without changing the resulting RL VLC tables.
This commit does just this. It permutes these tables so that
the code tables are ordered from left to right in the resulting
tree and then switches to ff_init_vlc_from_lengths(), which
allows removing the code tables altogether.
Given that these tables are constructed on the stack, this
also reduces stack usage, potentially fixing part of ticket #9399.
(The size of the tables on the stack decreases from 4752 to
2640 bytes.)
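For illustration, once the entries are in left-to-right tree order
the codes can be recomputed from the lengths alone, roughly as follows
(a sketch of the principle, not the actual libavcodec code):

    #include <stdint.h>

    /* lens[] must be in left-to-right tree order, with 1 <= lens[i] <= 32 */
    static void codes_from_lengths(uint32_t *codes, const uint8_t *lens, int nb)
    {
        uint32_t code = 0;                       /* left-aligned in 32 bits       */
        for (int i = 0; i < nb; i++) {
            codes[i] = code >> (32 - lens[i]);   /* right-aligned code of entry i */
            code    += 1u << (32 - lens[i]);     /* advance past this leaf        */
        }
    }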
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
cfhd.c checked for level being equal to a certain codebook-
dependent constant and for run being two. The first check is
actually redundant, as all codebooks contain only one (real)
entry with run == 2 (as is usual with VLCs, this one real entry
has several corresponding entries in the table). But given
that no entry has a run of zero (except incomplete entries,
which just signal that one needs to do another round of parsing),
one can actually use a run of zero as the sentinel. This patch does so.
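A minimal sketch of the sentinel idea (the entry layout and names are
illustrative, not the actual cfhd.c code):

    #include <stdint.h>

    typedef struct CFHDRLEntry { int16_t level; uint8_t run; } CFHDRLEntry;

    /* before (conceptually): a codebook-dependent level constant is needed */
    static int is_sentinel_old(const CFHDRLEntry *e, int16_t codebook_level)
    {
        return e->run == 2 && e->level == codebook_level;
    }

    /* after (conceptually): no real entry has run == 0, so the run alone
     * can serve as the sentinel                                           */
    static int is_sentinel_new(const CFHDRLEntry *e)
    {
        return e->run == 0;
    }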
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Demuxers are not allowed to do this and few callers, if any, will handle
this correctly. Send the AV_SIDE_DATA_PARAM_CHANGE_SAMPLE_RATE side data
instead.
Demuxers are not supposed to update AVCodecParameters after the stream
was seen by the caller. This value is not important enough to support
dynamic updates for.
The mov demuxer only returns DV audio; video packets are discarded.
It first reads the data to be parsed into a packet. Then both this
packet and the pointer to its data are passed together to
avpriv_dv_produce_packet(), which parses the data and partially
overwrites the packet. This is confusing and potentially dangerous, so
just pass NULL and avoid pointless packet modification.
The struct is quite small and the decoder and the encoder use different
fields from it, so the benefits from reusing it are small.
This allows making the buf field const.
The function contains only two assignments, setting DVVideoContext.avctx
and AVCodecContext.chroma_sample_location. However, the decoder does not
use the former, and the encoder should not be setting the latter.
Therefore move the first assignment to dvenc and the second to dvdec.
Make the encoder warn if the user-signalled chroma sample location does
not match the supported one, and return an error on higher compliance
levels.
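A sketch of what such a check can look like (the supported location,
the message and the exact threshold are assumptions, not the actual
dvenc code):

    #include <libavcodec/avcodec.h>
    #include <libavutil/error.h>
    #include <libavutil/log.h>
    #include <libavutil/pixdesc.h>

    static int check_chroma_location(AVCodecContext *avctx)
    {
        /* AVCHROMA_LOC_TOPLEFT is only a placeholder for whatever
         * location the encoder actually supports                  */
        if (avctx->chroma_sample_location != AVCHROMA_LOC_UNSPECIFIED &&
            avctx->chroma_sample_location != AVCHROMA_LOC_TOPLEFT) {
            const char *name = av_chroma_location_name(avctx->chroma_sample_location);
            av_log(avctx, AV_LOG_WARNING,
                   "Chroma sample location %s is not supported\n",
                   name ? name : "unknown");
            if (avctx->strict_std_compliance > FF_COMPLIANCE_NORMAL)
                return AVERROR(EINVAL);
        }
        return 0;
    }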
Fixes: out of array access
Fixes: 50014/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SPEEDHQ_fuzzer-4748914632294400
Alternatively the buffer size can be increased
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Initialized to 1:1, but if the script sets these properties, it
will be set to those instead (0:0 disables it, apparently).
Signed-off-by: Stephen Hutchinson <qyot27@gmail.com>
If we want to be able to map between VAAPI and Vulkan (to do Vulkan
filtering), we need to have matching formats on each side.
The mappings here are not exact. In the same way that P010 is still
mapped to full 16 bit formats, P012 has to be mapped that way as well.
Similarly, VUYX has to be mapped to an alpha-equipped format, and XV36
has to be mapped to a fully 16bit alpha-equipped format.
While Vulkan seems to fundamentally lack formats with an undefined,
but physically present, alpha channel, it does have 10X6 and 12X4
formats that you could imagine using for P010, P012 and XV36, but these
formats don't support the STORAGE usage flag. Today, hwcontext_vulkan
requires all formats to be storable because it wants to be able to use
them to create writable images. Until that changes, which might happen,
we have to restrict the set of formats we use.
Finally, when mapping a Vulkan image back to vaapi, I observed that
the VK_FORMAT_R16G16B16A16_UNORM format we have to use for XV36 going
to Vulkan is mapped to Y416 when going to vaapi (which makes sense as
it's the exact matching format) so I had to add an entry for it even
though we don't use it directly.
With the necessary pixel formats defined, we can now expose support for
the remaining 10/12bit combinations that VAAPI can handle.
Specifically, we are adding support for:
* HEVC
** 12bit 420
** 10bit 422
** 12bit 422
** 10bit 444
** 12bit 444
* VP9
** 10bit 444
** 12bit 444
These obviously require actual hardware support to be usable, but where
that exists, it is now enabled.
Note that unlike YUVA/YUVX, the Intel driver does not formally expose
support for the alphaless formats XV30 and XV36, and so we are
implicitly discarding the alpha from the decoder and passing undefined
values for the alpha to the encoder. If a future encoder iteration were
to actually do something with the alpha bits, we would need to use a
formal alpha capable format or the encoder would need to explicitly
accept the alphaless format.
These are the formats we want/need to use when dealing with the Intel
VAAPI decoder for 12bit 4:2:0, 12bit 4:2:2, 10bit 4:4:4 and 12bit 4:4:4
respectively.
As with the already supported Y210 and YUVX (XVUY) formats, they are
based on formats Microsoft picked as their preferred 4:2:2 and 4:4:4
video formats, and Intel ran with it.
P012 and Y212 are simply an extension of the 10 bit formats to say
12 bits will be used, with 4 unused bits instead of 6.
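In other words, each 12-bit sample sits in the upper bits of a 16-bit
word (a sketch of the sample packing only; the component order matches
the corresponding 10-bit formats):

    #include <stdint.h>

    /* 12-bit samples are stored MSB-aligned in 16 bits, leaving the
     * 4 LSBs unused (the 10-bit formats leave 6 LSBs unused)        */
    static uint16_t pack12(uint16_t value)  { return (uint16_t)(value << 4); }
    static uint16_t unpack12(uint16_t word) { return (uint16_t)(word >> 4);  }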
XV30, and XV36, as exotic as they sound, are variants of Y410 and Y412
where the alpha channel is left formally undefined. We prefer these
over the alpha versions because the hardware cannot actually do
anything with the alpha channel and respecting it is just overhead.
Y412/XV36 is a normal looking packed 4 channel format where each
channel is 16 bits wide but only the 12 MSBs are used (like P012).
Y410/XV30 packs three 10bit channels in 32bits with 2bits of alpha,
like A/X2RGB10 style formats. This annoying layout forced me to define
the BE version as a bitstream format. It seems like our pixdesc
infrastructure can handle the LE version being byte-defined, but not
when it's reversed. If there's a better way to handle this, please
let me know. Our existing X2 formats all have the 2 bits at the MSB
end, but this format places them at the LSB end and that seems to be
the root of the problem.
There are no particular reasons to force the compiler to use the same
register as output and input operand. This forces an extra MOV
instruction if the input value needs to be reused after the swap.
In most cases, this makes no difference, as the compiler will select
the same register for both operands either way.
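For illustration, an AArch64-flavoured sketch of the two constraint
styles (simplified, not the actual libavutil code):

    #include <stdint.h>

    static inline uint32_t bswap32_tied(uint32_t x)
    {
        /* "+r" ties input and output to the same register */
        __asm__("rev %w0, %w0" : "+r"(x));
        return x;
    }

    static inline uint32_t bswap32_untied(uint32_t x)
    {
        uint32_t y;
        /* separate "=r"/"r" operands let the compiler keep x alive
         * in its original register if it is needed again afterwards */
        __asm__("rev %w0, %w1" : "=r"(y) : "r"(x));
        return y;
    }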
Signed-off-by: Martin Storsjö <martin@martin.st>
There are no particular reasons to force the compiler to use the same
register as output and input operand. This forces an extra MOV
instruction if the input value needs to be reused after the swap.
In most cases, this makes no difference, as the compiler will select
the same register for both operands either way.
Signed-off-by: Martin Storsjö <martin@martin.st>
It is advantageous for ff_crop_tab, as the base pointer used to
access this table is not the first element of it. But the real
base pointer is still at a constant offset from the code/the GOT
and can therefore be accessed relative to the instruction pointer
(if supported by the arch) or relative to the GOT; without this,
one has to first load the address of ff_crop_tab (potentially via
the GOT) and then offset it manually (which is what the earlier code
did).
Reviewed-by: Martin Storsjö <martin@martin.st>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It reduces typing: Before this patch, there were 11 callbacks
that exceeded the 80 char line length limit; now there are zero.
It also allows removing ONLY_IF_THREADS_ENABLED() in
libavutil/internal.h.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>