The general demuxing API uses parsers and decoders. Therefore
FFStream contains pointers to AVCodecContexts and
AVCodecParserContext and lavf/internal.h includes lavc/avcodec.h.
Yet actually only a few files really use these, and it is best
when this number stays small. Therefore this commit uses opaque
structs in lavf/internal.h for these contexts and stops including
avcodec.h.
This also avoids implicitly including lavc/codec_desc.h. All the other
headers are still implicitly included as before (mostly through codec.h).
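A minimal sketch of the idea (simplified; not the actual contents of
lavf/internal.h):

    /* Forward declarations suffice for pointer members, so
     * lavc/avcodec.h no longer needs to be included here. */
    struct AVCodecContext;
    struct AVCodecParserContext;

    typedef struct FFStream {
        /* ... */
        struct AVCodecParserContext *parser;
        struct AVCodecContext *avctx;
        /* ... */
    } FFStream;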
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The field is not specific to Opus.
The mp2fixed encoder signals initial_padding and is used
by both the matroska-encoding-delay test and the lavf-mkv tests,
which necessitated several FATE ref changes.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Matroska generally requires timestamps to be nonnegative, but
there is an exception: Data that corresponds to encoder delay
and is not supposed to be output anyway can have a negative
timestamp. This is achieved by using the CodecDelay header
field: The demuxer has to subtract this value from the raw
(nonnegative) timestamps of the corresponding track.
The muxer therefore has to add this value to the packet timestamps
in order to obtain the raw timestamps it writes.
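Illustratively (sketch only; the variable names are not the actual
ones), the muxer's side amounts to:

    /* The demuxer will compute pts = raw_ts - codec_delay, so the muxer
     * stores raw_ts = pts + codec_delay, all in the track's timebase. */
    int64_t codec_delay = av_rescale_q(par->initial_padding,
                                       (AVRational){ 1, par->sample_rate },
                                       st->time_base);
    int64_t raw_ts = pkt->pts + codec_delay; /* must end up >= 0 */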
Support for writing CodecDelay has been added in FFmpeg commit
d92b1b1bab and in Libav commit
a1aa37dd0b. The former simply
wrote the header field and did not apply any timestamp offsets,
leading to desynchronisation (if one uses multiple tracks).
The latter applied it at two places, but not at the one where
it actually matters, namely in mkv_write_block(), leading to
the same desynchronisation as with the former commit. It furthermore
used the wrong timebase to convert the delay to the stream's timebase,
as the conversion used the timebase from before avpriv_set_pts_info().
When the latter was merged in 82e4f39883,
it was only done in a deactivated state that still did not
offset the timestamps when muxing due to "assertion failures
and av sync errors". a1aa37dd0b
made it definitely more likely to run into assertion failures
(namely if the relative block timestamp doesn't fit into an int16_t).
Yet all of the above issues have been fixed (in commits
962d631573,
5d3953a5dc and
4ebeab15b0). This commit therefore
enables applying CodecDelay, fixing ticket #7182.
There is just one slight regression from this: If one has input
with encoder delay where the first timestamp is negative, but
the pts of the part of the data that is actually intended to be
output is nonnegative, then the timestamps will currently by default
be shifted to make them nonnegative before they reach the muxer;
the muxer will then ensure that the shifted timestamps are retained.
Before this commit, the muxer did not ensure this; instead, the
timestamps that the demuxer will output ended up shifted, and
if the first timestamp of the actually intended output was zero
before shifting, this unintentional shift just cancelled
the shift performed before the packets reached the muxer.
(But notice that this only applies if all the tracks use the same
CodecDelay, or the relative sync between tracks will be impaired.)
This happens in the matroska-opus-remux and matroska-ogg-opus-remux
FATE tests. Future commits will forward the information that
the Matroska muxer has a limited capability to handle negative
timestamps so that the shifting in libavformat can take advantage
of it.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Opus can be decoded to multiple sample rates (namely 48 kHz, 24 kHz,
16 kHz, 12 kHz and 8 kHz); libopus as well as our encoder wrapper
support these sample rates. The OpusHead contains a field for
this original sample rate. Yet the pre-skip (and the granule position
in the Ogg-Opus mapping in general) is always in the 48 kHz clock,
irrespective of the original sample rate.
Before commit c3c22bee63, our libopus
encoder was buggy: It did not account for the fact that the pre-skip
field always uses the 48 kHz clock and wrote too small a value
when the encoder was used with a sample rate other than 48 kHz;
this discrepancy between CodecDelay and OpusHead led to Firefox
rejecting such streams.
In order to account for that, said commit made the muxer always use
48 kHz instead of the actual sample rate to convert the initial_padding
(given in samples at the stream's sample rate) to ns. This meant that
both fields were then off by the same factor, so Firefox was happy.
Then commit f4bdeddc3c fixed the issue
in libopusenc; so the OpusHead is correct, but the CodecDelay is
still off*. This commit fixes this by effectively reverting
c3c22bee63.
*: Firefox no longer seems to abort when CodecDelay and OpusHead
disagree.
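As a concrete example of the mismatch (assuming libopus' usual 6.5 ms
lookahead): when encoding at 16 kHz, initial_padding is 104 samples,
while the (now correct) OpusHead pre-skip is 312, as it is always in
48 kHz units. Both describe the same delay:

    CodecDelay from the actual rate:   104 / 16000 s = 312 / 48000 s = 6500000 ns
    CodecDelay with hard-coded 48 kHz: 104 / 48000 s ~= 2166667 ns (off by 3x)

With the old, too small pre-skip of 104, both fields were off by the
same factor of three; now (i.e. before this commit) only the CodecDelay
was still wrong.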
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It is possible for the trailing padding to be zero, e.g. if
the AV_PKT_DATA_SKIP_SAMPLES side data is used for leading
padding. Matroska supports this (via a negative DiscardPadding),
but players do not; at least Firefox refuses to play such a file.
So for now, only write DiscardPadding if it is trailing padding
and nonzero.
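A sketch of the resulting logic (helper and variable names
approximate; error handling omitted):

    /* AV_PKT_DATA_SKIP_SAMPLES: bytes 0-3 hold the samples to skip from
     * the start of the packet, bytes 4-7 those to skip from the end. */
    uint32_t end_skip = AV_RL32(skip_side_data + 4);
    if (end_skip) { /* only trailing, nonzero padding is written */
        int64_t discard_padding =
            av_rescale_q(end_skip, (AVRational){ 1, par->sample_rate },
                                   (AVRational){ 1, 1000000000 });
        put_ebml_sint(pb, MATROSKA_ID_DISCARDPADDING, discard_padding);
    }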
The fate-matroska-ogg-opus-remux test was affected by this.
(I wish CodecDelay did not exist and DiscardPadding were instead
used to trim the codec delay away, with the Block timestamp
corresponding to the time at which the audio that is actually
output begins.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Regression since 67eea6cf02.
Affects only WebVTT when muxing WebM. (This is covered
by the webm-webvtt-remux FATE test which fails for several
FATE boxes on fate-ffmpeg.org.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Up until now, only the first four bytes (the ones preceding
the OBU) were written, because not enough space had been reserved
for the complete CodecPrivate. This commit changes this
by increasing the space reserved for the CodecPrivate (it is big
enough for every sane sequence header plus something extra);
the code falls back to writing four bytes in case the increased
space turns out to be insufficient.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Up until now, updating extradata was very ad-hoc: The amount of
space reserved for extradata was not recorded when writing the
header; instead the AAC code simply presumed that it was enough.
This commit changes this by recording how much space is available.
As a consequence, the code for writing (and reserving space for)
the CodecPrivate diverges from the code for updating it; the two are
therefore split. This also allows putting other common tasks, like
seeking to the right offset and writing padding (in case the new
extradata does not fill the whole reserved space), into a common
function.
The code for filling up the reserved space is smarter than the code
it replaces; therefore it is no longer necessary to reserve more
than necessary just to be sure that one can add an EBML Void element
(whose minimum size is two) later on. This is the reason for the change
to the aac-autobsf-adtstoasc test.
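Roughly, the update path now works like this (sketch; the field and
helper names here are approximations):

    /* Seek back to the recorded position and overwrite the CodecPrivate. */
    avio_seek(pb, track->codecpriv_offset, SEEK_SET);
    put_ebml_binary(pb, MATROSKA_ID_CODECPRIVATE, extradata, extradata_size);
    /* Pad the rest of the reserved range with an EBML Void; the actual
     * code can also consume a one-byte remainder (e.g. via a longer
     * length field), so nothing extra needs to be reserved for this. */
    int64_t remaining = track->codecpriv_reserved_size -
                        (avio_tell(pb) - track->codecpriv_offset);
    if (remaining >= 2)
        put_ebml_void(pb, remaining);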
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Instead pass extradata and extradata_size explicitly.
(It is not perfect, as ff_put_(wav|bmp)_header() still uses
the extradata embedded in codecpar, but this is not an issue
as long as their CodecPrivate isn't updated.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is in preparation for splitting writing and updating
extradata more thoroughly later.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Not all metadata is written as Matroska Tags, so the Matroska muxer
filters out the entries that are not exported as tags.
To this end, the code first checks whether a Tag master element
needs to be opened at all for a given stream's, chapter's,
attachment's or the global metadata. If the answer turns out to be
yes, it checks again for each AVDictionaryEntry whether it is
written as a tag.
This commit changes this: The Tag element is opened unconditionally
and in case it turns out that it was unneeded, it is discarded again.
This is possible because the Tag element is written into its own
dynamic buffer.
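Sketch of the new flow (simplified; tags_pb stands for the output the
surviving Tag is appended to):

    AVIOContext *dyn_bc;
    uint8_t *buf;
    int ret = avio_open_dyn_buf(&dyn_bc);
    if (ret < 0)
        return ret;
    /* ... open the Tag and write its Targets into dyn_bc ... */
    int targets_size = avio_tell(dyn_bc);
    /* ... add a SimpleTag for every entry that is exported as a tag ... */
    int size = avio_get_dyn_buf(dyn_bc, &buf);
    if (size > targets_size)            /* at least one SimpleTag written */
        avio_write(tags_pb, buf, size); /* keep the Tag */
    ffio_free_dyn_buf(&dyn_bc);         /* otherwise simply discard it */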
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is made possible by using a dynamic buffer to write them;
said dynamic buffer is (re)used and reset as appropriate.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It is no longer converted since mkv_write_chapters() is called
before mkv_write_tags(), which has been the case since commit
4ebfc13c33. Given that
chapters can also be written late, mkv_write_chapters() has to
convert the metadata itself.
Fixes ticket #9812.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Before this patch the muxer writes an invalid file
(namely one in which the Projection master is a child of
the Colour element) if the following conditions are met:
a) The stream contains AVMasteringDisplayMetadata without primaries
and luminance (i.e. useless AVMasteringDisplayMetadata).
b) The stream contains AV_PKT_DATA_SPHERICAL side data.
c) All the colour elements of the stream are equal to default
(i.e. unknown).
Fortunately these conditions are very unlikely to be met.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Add a parameter to omit seq header when generating the av1C atom.
For now, this does not change any behavior. This will be used by a
follow-up patch to add AVIF support.
Signed-off-by: Vignesh Venkatasubramanian <vigneshv@google.com>
This avoids unnecessary rebuilds of most source files if only the
list of enabled components has changed, but not the other build
properties set in config.h.
Signed-off-by: Martin Storsjö <martin@martin.st>
According to the documentation, the ISOBMFF 'equi' box must
be present for equirectangular projections.
Reviewed-by: Hendrik Leppkes <h.leppkes@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This avoids copying the data in small chunks (1024 bytes) into
the dynamic buffer's small buffer before finally writing it
into the "big" buffer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Up until now, the WebM variant of WebVTT subtitles has been handled
specially: It had its own writing function, because the data has
to be reformatted before being written. But given that other codecs
also need reformatting, this is no good reason to duplicate the
generic code for writing Block(Group)s.
This commit therefore uses an ordinary reformatting function for
this task; writing WebVTT subtitles now goes through the generic code
and therefore automatically uses the smallest possible number of bytes
for its BlockGroup length fields, whereas the earlier code used
an overestimate for the length of the Duration element.
This is the reason for the changes to the webm-webvtt-remux FATE test.
(This commit does not implement support for Matroska's way of muxing
WebVTT; it also does not add checks to ensure that WebM-style subtitles
don't get muxed in Matroska. But the function for reformatting gets a
webm prefix to indicate that this is for WebM.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This commit uses the new EbmlWriter API to write the length fields
of the BlockGroup and of those of its descendants that are themselves
Master elements (namely BlockAdditions and BlockMore) on the smallest
possible number of bytes.
This fixes regressions introduced when the special code for writing
general subtitles was removed. Accordingly, the binsub-mksenc and
matroska-zero-length-block FATE tests have now been reverted
to their old state again; the advantages of this approach are evident
with the matroska-vp8-alpha-remux test, which up until now wrote
all the length fields of all BlockGroups, BlockAdditions and BlockMore
on eight bytes.
Using the EbmlWriter API also made it possible to improve locality in
mkv_write_block(): E.g. both DiscardPadding and the
BlockAdditional side data are now used directly to add elements
to the writer, whereas the earlier code had to first check
whether a BlockGroup should be used and then check again
(after the place where a BlockGroup would be opened if one were
used) whether there is DiscardPadding or BlockAdditional
side data to write.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Add a field to mkv_track that is set to the offset instead
of checking whether the track is ProRes when writing
the Block. This makes writing the Block independent
of the AVCodecParameters.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This e.g. avoids recalculating ts.
Also pass the AVFormatContext as a pointer to void, as it is only used
for logging.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Once upon a time, mkv_write_block() only wrote a (Simple)Block,
not a BlockGroup, which is needed for subtitles to convey
the duration. But with the introduction of support for writing
BlockAdditions and DiscardPadding (both of which require a BlockGroup),
mkv_write_block() can also open and close a BlockGroup of its own. This
naturally led to some code duplication, which is removed in this commit.
The new code leads to one regression: It always uses eight bytes for
the BlockGroup's length field, whereas the earlier code usually used the
smallest number of bytes needed. This will be fixed in a future commit.
This temporary regression is also the reason for the changes to the
binsub-mksenc and matroska-zero-length-block FATE tests.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Do this by using the new NALUList API. This avoids the allocation
of a dynamic buffer per packet as well as the (re)allocations of
its internal buffer and the copying of the data.
This improves performance: The time for one call to write_packet
decreased from 703501 to 357900 decicycles when remuxing a 5 min
14000 kb/s H.264 transport stream.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Matroska does not have different profiles that allow or disallow
in-band extradata, so one can just use the ordinary H.264 function
for H.265, too. (Both use ff_avc_parse_nal_units() internally anyway.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This avoids allocations+copies in all cases, not only those
in which the desired OBUs are contiguous in the input buffer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
WavPack's blocks use a length field, so parsing them is fast.
Therefore it makes sense to parse the data twice, once to get
the length of the output packet and once to write the actual data,
instead of writing the data into a temporary buffer in a single pass.
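For illustration, the block boundaries can be found as follows
(sketch; the actual reformatting also drops header bytes that
Matroska does not store, so the real size computation differs):

    /* Each WavPack block starts with "wvpk" followed by a little-endian
     * 32-bit ckSize (the size of the block minus 8), so walking the
     * blocks and summing up the output size is cheap. */
    static int wv_total_size(const uint8_t *data, int size)
    {
        int total = 0;
        while (size >= 8) {
            uint32_t block_size = AV_RL32(data + 4) + 8;
            if (memcmp(data, "wvpk", 4) || block_size > (uint32_t)size)
                return AVERROR_INVALIDDATA;
            total += block_size;
            data  += block_size;
            size  -= block_size;
        }
        return total;
    }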
This speeds up muxing from 1597092 to 761850 decicycles per
write_packet call for a 2000 kb/s stereo WavPack file muxed to /dev/null
with writing CRC-32 disabled.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Matroska uses variable-length elements, and in order not to waste
bytes on length fields, the length of the data to write needs to
be known before writing the length field. Annex B H.264/5 and
WavPack need to be reformatted before muxing, and getting the
resulting length currently involves writing the data into temporary
buffers; AV1 sometimes suffers from this as well.
This commit aims to solve this by adding a callback that is called
twice per packet: Once to get the size and once to actually write
the data. In case of WavPack and AV1 (where parsing is cheap due
to length fields) both calls will just parse the data, with only
the second call writing anything. For H.264/5, the positions
of the NALUs will need to be stored so that they can be written later on.
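Such a callback could look roughly like this (illustrative only;
the exact signature and helper names may differ):

    /* Called twice per packet: with pb == NULL, only *size is set to the
     * number of bytes the reformatting will produce; with a real
     * AVIOContext, exactly that many bytes are written. */
    typedef int (*mkv_reformat_func)(void *logctx, AVIOContext *pb,
                                     const AVPacket *pkt, int *size);

    int size;
    ret = track->reformat(s, NULL, pkt, &size); /* first call: size only  */
    put_ebml_num(pb, size, 0);                  /* minimal length field   */
    ret = track->reformat(s, pb,   pkt, &size); /* second call: write data */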
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Avoids the surprise of using pb for the main AVIOContext
at the beginning and end of mkv_write_header() and for
the dynamic buffer opened for the Info element
in the middle of mkv_write_header().
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Using start/end_ebml_master() to write an EBML Master element
relies on seeks under the hood. This does not work if the output is
unseekable and the AVIOContext's buffer is very small (the size of
the currently written Matroska EBML header is 40 bytes) or the
AVIOContext is in direct mode, because then the seek can't be
performed within the AVIOContext's buffer.
So an approach that does not rely on seeking at all is preferable;
this is achieved by switching to EbmlWriter.
Also factor writing the EBML header out into a function of its own.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also check the (user-provided) tags for being overlong; the earlier
code had an implicit unchecked size_t->int conversion.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This muxer currently uses two ways to ensure that no bytes
are wasted by writing unnecessarily long EBML length fields
for Master elements and the (Simple)Block element
(all the other elements are fine, as either the right length
is already known or getting the actual length is easy
and necessary anyway):
Either use an upper bound that is good enough in case one
is available or write the data into a dynamic buffer first
to get the length; the former approach is impossible in
lots of cases, whereas the latter incurs allocations and
memcpying. It is therefore unfeasible to use the latter
for e.g. the attachments or the BlockGroups.
This patch adds a third alternative to complement the other two:
It consists of an EbmlWriter to which one can add EBML elements
and which writes them when ebml_writer_write() is called;
this function first traverses the added elements recursively
and calculates the length of each element; then a second pass
is performed in which all the elements are written directly
(without any seeks).
This new API also checks for overlong elements;
this is in contrast to put_ebml_string(), which simply performs
a size_t->int conversion even for strings originating from the user.
The new API is designed to have very low overhead: It uses
stack arrays and performs no allocations; this also comes
at a price: Right now, it can only be used in contexts in which
there is a compile-time upper bound for the number of elements.
It is also incompatible with storing the offset of an element
in order to update this field later. Furthermore, it puts
the onus of memory management (i.e. ensuring that pointers stay valid)
on the user.
These restrictions might be overcome in the future.
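Usage could look roughly like this (hypothetical sketch; the actual
names and signatures may differ):

    /* Stage the elements in a stack array, then write them in one go:
     * the first pass computes every length, the second pass outputs
     * the bytes without any seeking. */
    EbmlElement elems[4];
    EbmlWriter  writer = ebml_writer_init(elems, 4);

    ebml_writer_open_master(&writer, MATROSKA_ID_BLOCKGROUP);
    ebml_writer_add_binary(&writer, MATROSKA_ID_BLOCK, block_payload, block_size);
    ebml_writer_add_sint(&writer, MATROSKA_ID_DISCARDPADDING, discard_padding);
    ebml_writer_close_master(&writer);

    ret = ebml_writer_write(&writer, pb);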
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This would happen if non-WebVTT subtitles had BlockAdditional
or DiscardPadding side data. Given that these are not accounted for
in the length of the outer BlockGroup (which is a quite sharp upper
bound), it is possible for the outer BlockGroup to use an insufficient
number of bytes, which leads to an assert in end_ebml_master().
Fix this by not opening a second BlockGroup inside an already opened
BlockGroup.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is similar to the faststart option of the mov muxer, yet
in contrast to it, it works together with reserve_index_space
(the equivalent of reserved_moov_size): If the reserved space
does not suffice, the data is shifted; if it does, the Cues are
written at the front without shifting the data.
Several tests that cover (not only) this have been added.
Implements #7017.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The Matroska muxer has quite a lot of dependencies, many of which
are unnecessary for WebM. By disabling the Matroska-only code
at compile time, one can get rid of them.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>