This is something of a hack. It allocates a new hwframe context for
the target format, then maps it back to the source link and overwrites
the input link's hw_frames_ctx so that the previous filter will receive
the frames we want from ff_get_video_buffer(). It may fail if
the previous filter imposes any additional constraints on the frames
it wants to use as output.
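A minimal sketch of the mechanism (simplified from the real filter code;
error handling omitted and variable names assumed):

    AVBufferRef *target = av_hwframe_ctx_alloc(device_ref);
    AVHWFramesContext *frames = (AVHWFramesContext*)target->data;
    frames->format    = outlink->format;        /* the target hw format */
    frames->sw_format = in_frames->sw_format;
    frames->width     = in_frames->width;
    frames->height    = in_frames->height;
    av_hwframe_ctx_init(target);

    /* Map the new frames context back to the source format, then
     * overwrite the input link so the previous filter allocates its
     * output frames from it. */
    AVBufferRef *source = NULL;
    av_hwframe_ctx_create_derived(&source, inlink->format,
                                  frames->device_ref, target,
                                  AV_HWFRAME_MAP_READ |
                                  AV_HWFRAME_MAP_WRITE);
    av_buffer_unref(&inlink->hw_frames_ctx);
    inlink->hw_frames_ctx = source;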
Use the flags argument of av_hwframe_ctx_create_derived() to pass the
mapping flags which will be used on allocation. Also, set the format
and hardware context on the allocated frame automatically - the user
should not be required to do this themselves.
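For example, a caller deriving frames onto another device can now request
direct, read-only mappings (the surrounding variables here are assumed):

    AVBufferRef *derived = NULL;
    int err = av_hwframe_ctx_create_derived(&derived, AV_PIX_FMT_VAAPI,
                                            dst_device_ref, src_frames_ref,
                                            AV_HWFRAME_MAP_READ |
                                            AV_HWFRAME_MAP_DIRECT);
    if (err < 0)
        return err;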
Some frames contexts are not usable without additional format-specific
state in hwctx. This change adds new functions frames_derive_from and
frames_derive_to to initialise this state appropriately when deriving
a frames context that will require it.
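The new entry points are per-backend callbacks in HWContextType, roughly
of this shape (internal API, so the exact signatures may differ):

    int (*frames_derive_to)  (AVHWFramesContext *dst_ctx,
                              AVHWFramesContext *src_ctx, int flags);
    int (*frames_derive_from)(AVHWFramesContext *dst_ctx,
                              AVHWFramesContext *src_ctx, int flags);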
This only supports one device globally, but more can be used by
passing them with input streams in hw_frames_ctx or by deriving new
devices inside a filter graph with hwmap.
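At the API level, deriving a device as hwmap does corresponds to
av_hwdevice_ctx_create_derived(); a minimal sketch, with the device type
chosen purely for illustration:

    AVBufferRef *derived_device = NULL;
    int err = av_hwdevice_ctx_create_derived(&derived_device,
                                             AV_HWDEVICE_TYPE_VAAPI,
                                             existing_device_ref, 0);
    if (err < 0)
        return err;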
If -hwaccel foo is supplied without any other device options, and the
foo hwaccel expects a device, try to create such a device with default
parameters for the hwaccel to use.
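In API terms, "default parameters" just means creating the device context
with no device string and no options (a sketch; type stands for the
hwaccel's AVHWDeviceType):

    AVBufferRef *device_ref = NULL;
    int err = av_hwdevice_ctx_create(&device_ref, type, NULL, NULL, 0);
    if (err < 0)
        return err;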
This was left over from an earlier version which created the new
packet inside the current frame structure. Now it just leaks an
unused packet, so remove the allocation entirely.
The function currently accepts a PutBitContext and a GetBitContext,
which hardcodes their sizes into the lavc ABI. Since the function is
quite small and only called in a few places, the simplest solution is
making it inline, thus avoiding a runtime dependency completely.
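A sketch of the inline pattern (illustrative only, not the actual
function body):

    static inline void copy_bits(PutBitContext *pb, GetBitContext *gb,
                                 int bits)
    {
        while (bits >= 32) {
            put_bits32(pb, get_bits_long(gb, 32));
            bits -= 32;
        }
        if (bits > 0)
            put_bits(pb, bits, get_bits_long(gb, bits));
    }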
Signed-off-by: Diego Biurrun <diego@biurrun.de>
request_channel_layout is a decoder option and it makes no sense
to have it in a parser.
This feature was needed in the past when the decoder was allowed
to reuse the avctx from the demuxer. Nowadays the decoder receives
only the parameters from it, already containing the real channel
layout (and the correct request_channel_layout option).
After initialization the decoder overwrites the channel layout
with the downmixed one that is actually output, so there is no need
to preserve this functionality in the parser.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
The non-H.26[45] codecs already use this form. Since we don't
currently generate I frames for codecs which support them separately
from IDR, the p_per_i variable is set to infinity by default so that it
doesn't interfere with any other calculation. (All the code for I
frames still exists, and it works for H.264 if set manually.)
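A hedged sketch of the default (field name taken from this message; the
use of INT_MAX as "infinity" is an assumption):

    /* I frames never interrupt the P-frame cadence unless this is
     * lowered manually. */
    ctx->p_per_i = INT_MAX;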
The same warning is issued when actual unknown spherical metadata is
found further down in the function.
Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Inlining public functions hardcodes their implementation into the ABI,
so it should be avoided unless there is a very good reason for it. No
such reason exists in this case.
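To illustrate with a hypothetical public function:

    /* Hypothetical example: once this lives in an installed header,
     * the body is compiled into every application that calls it. */
    static inline int av_do_thing(int x)
    {
        return x * 2 + 1;
    }

Changing the body in a later release then has no effect on already-built
applications, so the implementation is effectively frozen into the ABI.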
Fill out the default/unset parameters with the ones actually in use,
matching the current MediaSDK example code.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>