The high-level summary of RCWT can be delegated to doc/muxers, which
makes it easier to maintain and more consistent with the documentation
of the demuxer.
Signed-off-by: Marth64 <marth64@proxyid.net>
RCWT (Raw Captions With Time) is a format native to ccextractor,
a commonly used OSS tool for processing 608/708 Closed Captions (CC).
RCWT can be used to archive the original extracted CC bitstream.
The muxer was added in January 2024. In this commit, add the demuxer.
One can now demux RCWT files for rendering in ccaption_dec or interop
with ccextractor (which produces RCWT). Using the muxer/demuxer combo,
the CC bits can be kept for processing or rendering with either tool.
This can be an effective way to back up an original CC stream, including
format extensions like EIA-708 and the overall original presentation.
Signed-off-by: Marth64 <marth64@proxyid.net>
If ff_subtitles_queue_insert() were given a NULL buffer
with 0 length, it would still attempt to grow the packet
or memcpy, depending on whether the merge option is enabled.
In this commit, allow passing a NULL buffer with 0 length
without performing those operations. This way, if a
subtitle demuxer happens to pass an empty cue or wants to
use av_get_packet() to read bytes, there are no unnecessary
operations on the packet after it is allocated.
Signed-off-by: Marth64 <marth64@proxyid.net>
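As a rough illustration of the behaviour described above, here is a minimal
sketch of the guard, using a simplified stand-in packet type instead of
FFmpeg's actual FFDemuxSubtitlesQueue/AVPacket handling; DemoPacket and
demo_queue_insert are made-up names for illustration only.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Simplified stand-in for the packet being built; not FFmpeg's AVPacket. */
    typedef struct DemoPacket {
        uint8_t *data;
        size_t   size;
    } DemoPacket;

    static int demo_queue_insert(DemoPacket *pkt, const uint8_t *event,
                                 size_t len, int merge)
    {
        /* The guard: an empty event (in particular a NULL buffer with
         * 0 length) is a no-op, so the packet is neither grown nor
         * memcpy'd into. */
        if (!len)
            return 0;

        if (merge && pkt->data) {
            /* merge: grow the previous packet and append this event */
            uint8_t *tmp = realloc(pkt->data, pkt->size + len);
            if (!tmp)
                return -1;
            memcpy(tmp + pkt->size, event, len);
            pkt->data  = tmp;
            pkt->size += len;
        } else {
            /* no merge: start a fresh packet holding just this event */
            uint8_t *buf = malloc(len);
            if (!buf)
                return -1;
            memcpy(buf, event, len);
            free(pkt->data);
            pkt->data = buf;
            pkt->size = len;
        }
        return 0;
    }

With a guard of this shape, a demuxer that passes an empty cue, or that
reads its payload separately via av_get_packet(), no longer triggers a
pointless grow or memcpy on the freshly allocated packet.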
Yet another probesize, used to get the durations when
estimate_timings_from_pts is required. It is aimed at users interested
in better duration probing for its own sake, or who use
avformat_find_stream_info indirectly and require exact values: for
concatdec, for example, especially when stream copying on top of it.
The current code is a performance trade-off that can fail to get the
video stream durations in a scenario with high bitrates and buffering,
for files ending cleanly (as opposed to live captures): in such a case,
the physical gap between the last video packet and the last audio
packet is very large.
The default behaviour is unchanged: 250k up to 250k << 6 (step by step).
Setting this new option has two effects:
- override the maximum probesize (currently 250k << 6)
- reduce the number of steps to 1 instead of 6; this avoids detecting
the audio "too early" and failing to reach a video packet.
Even if a single audio stream duration is found but the other
audio/video stream durations are not, there will be a retry, so in the
end the full user-overridden probesize will be used, as the user expects.
Signed-off-by: Nicolas Gaullier <nicolas.gaullier@cji.paris>
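A minimal sketch of the probing schedule described above, under the
assumption of a user option here called user_probesize; the macro names
mirror the 250k / 250k << 6 figures from the commit message but are
otherwise illustrative, not FFmpeg's actual code.

    #include <stdint.h>

    #define DURATION_DEFAULT_PROBESIZE 250000LL
    #define DURATION_DEFAULT_RETRIES   6

    /* Byte budget for a given retry of the duration-probing loop.
     * user_probesize > 0 stands in for the new option: one step at the
     * full user-requested size instead of 250k, 500k, ... 250k << 6. */
    static int64_t duration_probe_budget(int64_t user_probesize, int retry)
    {
        if (user_probesize > 0)
            return user_probesize;
        return DURATION_DEFAULT_PROBESIZE << retry;
    }

    /* Number of escalation steps: 1 when the option is set, 6 by default. */
    static int duration_probe_retries(int64_t user_probesize)
    {
        return user_probesize > 0 ? 1 : DURATION_DEFAULT_RETRIES;
    }

Keeping the schedule to a single full-size step avoids stopping at an
early audio packet before any video packet has been reached, as noted
above.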
Both samples rely on a feature our decoder doesn't currently support.
This should fix FATE failures on some systems where not even the single
frame could be generated.
Signed-off-by: James Almer <jamrial@gmail.com>
We only support mdta as the type, yet we were not skipping other types,
but rather reading key_size worth of bytes twice per entry.
Signed-off-by: James Almer <jamrial@gmail.com>
Stop reading keys and return AVERROR_INVALIDDATA if key_size
is larger than the amount of space left in the atom.
Bug: https://crbug.com/41496983
Signed-off-by: Eugene Zemtsov <eugene@chromium.org>
Signed-off-by: James Almer <jamrial@gmail.com>
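The two fixes above can be pictured with the following sketch of the
keys-atom entry loop: entries whose type is not 'mdta' are skipped
exactly once, and a key_size larger than the bytes remaining in the atom
aborts parsing with AVERROR_INVALIDDATA. This is a simplified reading of
the parser, not the exact mov.c code; demo_read_keys is a made-up name.

    #include <libavformat/avio.h>
    #include <libavutil/error.h>
    #include <libavutil/macros.h>

    /* Simplified per-entry parsing of a keys atom holding 'count' entries
     * within 'bytes_left' bytes (illustrative, not the actual mov.c loop). */
    static int demo_read_keys(AVIOContext *pb, uint32_t count, int64_t bytes_left)
    {
        for (uint32_t i = 0; i < count; i++) {
            uint32_t key_size = avio_rb32(pb);
            uint32_t type     = avio_rl32(pb);

            /* Reject entries claiming to be larger than the space left. */
            if (key_size < 8 || key_size > bytes_left)
                return AVERROR_INVALIDDATA;
            bytes_left -= key_size;
            key_size   -= 8;

            if (type != MKTAG('m','d','t','a')) {
                /* Unsupported type: skip its payload exactly once. */
                avio_skip(pb, key_size);
                continue;
            }

            /* ... read key_size bytes of key name for this mdta entry ... */
            avio_skip(pb, key_size);
        }
        return 0;
    }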
VLC_MULTI_ELEM contains a uint8_t array that is supposed
to be treated as an array of uint16_t when the symbols used
have a size of two; otherwise it should be treated as just
an array of uint8_t, but it was not always treated that way:
vlc_multi_gen() initialized the first entry of the array
by writing the symbol via AV_WN16; on big-endian systems,
the intended value was instead written into the second entry
of the array (where it would likely be overwritten later on
during initialization).
read_vlc_multi() also treated this case incorrectly: in case
the code is so long that it needs a classical multi-stage lookup,
the symbol was written to the destination as if via AV_WN16.
On little-endian systems, this sets the correct first symbol and
clobbers (zeroes) the next one, but the next one will be overwritten
later on anyway, so the clobbering goes unnoticed. But on big-endian
systems, the first symbol will be set to zero and the actually read
symbol will be put into the slot for the next one (where it will be
overwritten later on).
This commit fixes this, which in turn fixes the magicyuv and utvideo
FATE tests on big-endian arches.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
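A minimal sketch of the kind of fix described above; the helper name and
its surroundings are stand-ins for illustration, not the actual VLC code.
The point is that the symbol is stored with the element width the array
is really used with, rather than always writing 16 bits.

    #include <stdint.h>
    #include <string.h>

    /* Store one decoded symbol into the multi-symbol array. With two-byte
     * symbols the array is genuinely a uint16_t[] and the value is written
     * in native order (what AV_WN16 produces); with one-byte symbols only a
     * single byte is written, so the neighbouring entry is neither clobbered
     * nor, on big endian, the one that mistakenly receives the value. */
    static void put_multi_symbol(uint8_t *dst, unsigned idx,
                                 unsigned sym, int symbol_size)
    {
        if (symbol_size == 2) {
            uint16_t val = sym;
            memcpy(dst + 2 * idx, &val, sizeof(val));
        } else {
            dst[idx] = sym;
        }
    }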
The fits decoder decodes to native pixel formats, so
the fitsdec-gbrap16be FATE test failed on BE despite
its name, because the reference file is LE.
This patch fixes this by forcing a pixel format;
the forced pixel format is BE, causing a change
in the reference file.
The fitsdec-gbrp16be test was not affected, because
its source file (lena-rgb48.png from the FATE suite)
is actually bi-endian (as if someone had multiplied
8-bit content by 257, so every 16-bit value's two bytes are identical).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The format and the first scale filter ensure that the filter
processing actually happens in high bit depth; the second
scale filter is only necessary for big-endian arches.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Precludes the usage of the altivec IDCT, which fixes
the avid-meridian FATE test on ppc64be here.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>