This patch adds another ARIB caption decoder, using the external
libaribcaption library.
Unlike libaribb24, it supports three types of subtitle output:
* text: plain text
* ass: ASS formatted text
* bitmap: bitmap image
The default subtitle type is ass, the same as with libaribb24.
Advantages over libaribb24 for ASS subtitles are:
* Subtitle positioning.
* Multi-rect subtitles: some captions are displayed at different
  positions at the same time.
* More stability and reproducibility.
To compile with this feature:
* the libaribcaption external library has to be pre-installed:
https://github.com/xqq/libaribcaption
* configure with the `--enable-libaribcaption` option.
The `--enable-libaribb24` and `--enable-libaribcaption` options are
not mutually exclusive. If both are enabled, libaribcaption takes
precedence, following the order listed in `libavcodec/allcodecs.c`.
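As an illustration, building with the new decoder and converting an ARIB
caption stream to ASS might look like the following sketch (the sub_type
decoder option name and the file names are assumptions here; ass is already
the default type):

    ./configure --enable-libaribcaption
    ffmpeg -sub_type ass -i input.ts -map 0:s:0 -c:s ass captions.ass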
Signed-off-by: rcombs <rcombs@rcombs.me>
Many filters accept user-provided data that is cumbersome to provide as
text strings, e.g. binary files or very long text. For that reason, such
filters typically provide an option whose value is the path from which
the filter loads the actual data.
However, filters doing their own IO internally is a layering violation
that the callers may not expect, and is thus best avoided. With the
recently introduced graph segment parsing API, loading option values
from files can now be handled by the caller.
This commit makes use of the new API in ffmpeg CLI. Any option name in
the filtergraph syntax can now be prefixed with a slash '/'. This will
cause ffmpeg to interpret the value as the path to load the actual value
from.
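For example, a hypothetical invocation (filter, option, and file names are
only illustrative) could load the drawtext text option from a file:

    ffmpeg -i in.mp4 -vf "drawtext=/text=greeting.txt:fontsize=32" out.mp4

Here ffmpeg reads greeting.txt and uses its contents as the value of the
text option.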
Analogous to -enc_stats*, but happens right before muxing. Useful
because bitstream filters and the sync queue can modify packets after
encoding and before muxing. Also has access to the muxing timebase.
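Assuming the new option is exposed as -mux_stats (by analogy with
-enc_stats_post), usage might look like:

    ffmpeg -i input.mkv -c:v libx264 -mux_stats muxed_packets.log output.mp4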
Splits the currently handled subtitle at random access point
packets that can be configured to follow a specific output stream.
Currently only subtitle streams which are directly mapped into the
same output in which the heartbeat stream resides are affected.
This way the subtitle, which is known to be shown at this time, can be
split and passed to the muxer before its full duration is known. This is
also a drawback, as it essentially outputs multiple subtitles from a
single input subtitle that continues over multiple random access points.
Thus, this feature should not be utilized in cases where subtitle output
latency does not matter.
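A hedged usage sketch, assuming the option is exposed per output stream as
-fix_sub_duration_heartbeat and the first output video stream acts as the
heartbeat source (file names are illustrative; codec choices are omitted,
and the subtitle stream must be one ffmpeg can decode and re-encode for the
chosen output):

    ffmpeg -fix_sub_duration -i input.ts -map 0 \
        -fix_sub_duration_heartbeat:v:0 output.ts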
Co-authored-by: Andrzej Nadachowski <andrzej.nadachowski@24i.com>
Co-authored-by: Bernard Boulay <bernard.boulay@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
Customized SSIM for various projections (and stereo formats) of 360 images and videos.
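A minimal sketch comparing a distorted 360 video against its reference with
the new filter (assuming it is exposed as ssim360; file names are
illustrative):

    ffmpeg -i distorted.mp4 -i reference.mp4 -lavfi ssim360 -f null -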
Further contributions by:
Ashok Mathew Kuruvilla
Matthieu Patou
Yu-Hui Wu
Anton Khirnov
Suggested-By: ffmpeg@fb.com
Signed-off-by: Anton Khirnov <anton@khirnov.net>
It is a URL rewriter for IPFS gateways, not an actual implementation of
IPFS, and naming it as such was both incorrect and misleading.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Starting with an h264 implementation. Can be extended to support other codecs.
A few caveats:
- OpenGOP streams are currently not supported. The first packet must be an IDR
  frame.
- In some streams, a few frames at the end may not get a reordered PTS when
  they reference frames past EOS. The code added to derive timestamps from
  previous frames needs to be extended.
Addresses ticket #502.
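A hedged example of applying the new filter during a stream copy (assuming it
is registered as a bitstream filter named dts2pts; file names are
illustrative):

    ffmpeg -i input.h264 -c:v copy -bsf:v dts2pts output.mp4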
Signed-off-by: James Almer <jamrial@gmail.com>
With the necessary pixel formats defined, we can now expose support for
the remaining 10/12bit combinations that VAAPI can handle.
Specifically, we are adding support for:
* HEVC
** 12bit 420
** 10bit 422
** 12bit 422
** 10bit 444
** 12bit 444
* VP9
** 10bit 444
** 12bit 444
These obviously require actual hardware support to be usable, but where
that exists, it is now enabled.
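Where the hardware and driver support it, decoding one of these combinations
through VAAPI might look like this sketch (the input file is illustrative;
-hwaccel_output_format vaapi keeps the decoded frames on the GPU):

    ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi \
        -i hevc_12bit_422.mp4 -f null -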
Note that unlike YUVA/YUVX, the Intel driver does not formally expose
support for the alphaless formats XV30 and XV36, and so we are
implicitly discarding the alpha from the decoder and passing undefined
values for the alpha to the encoder. If a future encoder iteration were
to actually do something with the alpha bits, we would need to use a
formally alpha-capable format, or the encoder would need to explicitly
accept the alphaless format.