The current qsv deinterlace module does not work at all because MSDK
requires the user to pass extra parameters to enable hint functions such
as denoise, deinterlace, composition and so on.
Usage: -hwaccel qsv -r 25 -c:v h264_qsv -i in -vf deinterlace_qsv=bob -b 2M
-maxrate 3M -c:v h264_qsv -y out.h264
Signed-off-by: ChaoX A Liu <chaox.a.liu@gmail.com>
Signed-off-by: Zhengxu Huang <zhengxu.maxwell@gmail.com>
Signed-off-by: Andrew Zhang <huazh407@gmail.com>
Change-Id: I9e7ddcf884f2788c2820f6c98affacfb9d8f3287
Signed-off-by: Maxym Dmytrychenko <maxim.d33@gmail.com>
This mode apparently does not support decoding of HEVC Main (8 bit).
With D3D11 and Intel drivers on Windows 10 I get green corruption, while
using DXVA2_ModeHEVC_VLD_Main works.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Otherwise the first decoded frame will still be tagged with the
original transfer instead of the alternative one.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
glibc introduced _DEFAULT_SOURCE in version 2.19 to replace _BSD_SOURCE and
_SVID_SOURCE, which were deprecated in version 2.20. Add _DEFAULT_SOURCE
where the latter two are used to be forwards-compatible and avoid warnings
about the use of deprecated definitions.
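For illustration, a minimal sketch of the pattern, assuming the macros are
added before the first system header (the header and function mentioned are
just examples, not taken from this patch):

    /* Keep _BSD_SOURCE for older glibc and add _DEFAULT_SOURCE so that
     * glibc >= 2.20 does not emit its deprecation warning. Feature test
     * macros must be defined before any system header is included. */
    #define _BSD_SOURCE
    #define _DEFAULT_SOURCE
    #include <string.h>   /* e.g. strsep() is guarded by these macros */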
Do not use skip_remaining() to fully wipe the cache, as this could shift
a 64-bit variable by 64 bits, which is undefined behavior in C. Instead,
set the related variables to zero directly.
Thanks to Uoti for pointing out the problem.
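A hedged illustration of the pitfall and the fix; the struct and field
names below are made up for the example and are not the actual bitstream
reader fields:

    #include <stdint.h>

    typedef struct BitCache {
        uint64_t cache;     /* 64-bit bit cache */
        unsigned bits_left; /* number of valid bits in the cache */
    } BitCache;

    static void wipe_cache(BitCache *bc)
    {
        /* "bc->cache <<= bc->bits_left" would be undefined behavior when
         * bits_left == 64, since a shift count must be strictly smaller
         * than the width of the shifted type; reset the state directly. */
        bc->cache     = 0;
        bc->bits_left = 0;
    }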
CC: libav-stable@libav.org
Avoid anonymous typedefs for structs and enums, drop an unused context
member, fix a small wording mishap, change sizeof(type) to
sizeof(*variable), drop a needlessly verbose log message, and use
av_malloc_array() where appropriate.
Now it is possible to adjust compression speed vs R/D when needed
and also skip vintage player compatibility at will.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
version 2013-02-08 Rl
- fixes/optimization in multistrip encoding and codebook size choice,
quality/bitrate is now better than that of the binary proprietary encoder
version 2013-02-12 Rl
- separated codebook training sets, avoided the transfer of wasted bytes,
which yields both better quality and smaller files
- now using the correct colorspace (TODO: move conversion to libswscale)
version 2013-02-14 Rl "Valentine's Day" version:
- made strip division more robust
- minimized brute-forcing of the number of strips
(costs some R/D but speeds up compression a lot); the heuristic
assumption is that the score as a function of the number of strips has
one wide minimum which moves slowly, which is of course not fully true
- simplified codebook generation,
the old code was meant for other optimizations than we actually do
- optimized the codebook generation / error estimation for MODE_MC
version 2013-04-28 Rl
- bugfixed codebook optimization logic
version 2014-01-20 Rl
- made the encoder compatible with vintage decoders
and added some as-yet-unused code for possible future
incremental codebook updates
- fixed a small memory leak
version 2014-01-21 Rl
- believe it or not, now we get even smaller files, with better quality
(which means I missed an optimization earlier :)
Signed-off-by: Diego Biurrun <diego@biurrun.de>
If using the winstore compat library, a fallback LoadLibrary
function does exist, but it only calls LoadPackagedLibrary
(which doesn't work for dynamically loading d3d11 DLLs).
Therefore explicitly check the targeted API family instead.
Make this check a reusable HAVE_* component which other parts
of the libraries can check when necessary as well.
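A minimal sketch of such an API-family check using the standard
winapifamily.h macros; the HAVE_*-style name below is a placeholder, not
the actual component added by this patch:

    #include <winapifamily.h>

    /* Only the desktop API family provides a real LoadLibrary(); the
     * winstore compat fallback only wraps LoadPackagedLibrary(), which
     * cannot load system DLLs such as d3d11.dll. */
    #if WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_DESKTOP)
    #define HAVE_LOADLIBRARY_EXAMPLE 1
    #else
    #define HAVE_LOADLIBRARY_EXAMPLE 0
    #endif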
Signed-off-by: Martin Storsjö <martin@martin.st>
Currently, the tags enforced and set at the segmenter muxer level do not
match what the mp4/ismv muxer uses (since 713efb2c0d).
Skip setting the codec_tag altogether here, to let the user (try to) set
whichever codec/tag is preferred; the individual chained muxer will
reject invalid codecs anyway.
Signed-off-by: Martin Storsjö <martin@martin.st>
The use of this SEI is for backward compatibility in HLG HDR systems:
older devices that cannot interpret the "arib-std-b67" transfer will
get the compatible transfer (usually bt709 or bt2020) from the VUI,
while newer devices that can interpret HDR will read the SEI and use
its value instead.
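As a hedged illustration of the decoder-side effect (the helper and its
parameters are invented for this example):

    #include "libavcodec/avcodec.h"
    #include "libavutil/pixdesc.h"

    /* If the alternative transfer characteristics SEI is present and holds
     * a known value, tag the output with it (e.g. arib-std-b67), while the
     * VUI keeps carrying the compatible transfer for older devices. */
    static void apply_alternative_transfer(AVCodecContext *avctx, int sei_present,
                                           enum AVColorTransferCharacteristic preferred)
    {
        if (sei_present && av_color_transfer_name(preferred))
            avctx->color_trc = preferred;
    }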
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This fixes a segfault (originally found in Movian, but traced to libav)
when decoding subtitles: only the array of rect pointers is allocated,
but not the actual structs they point to. The issue was probably
introduced in commit 2383323, where the loop allocating the individual
rects in the array was dropped.
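A hedged sketch of the allocation pattern that needs to be restored
(num_rects and the helper name are illustrative):

    #include "libavcodec/avcodec.h"
    #include "libavutil/error.h"
    #include "libavutil/mem.h"

    /* Allocate both the array of rect pointers and each AVSubtitleRect it
     * points to; allocating only the array leaves dangling pointers. */
    static int alloc_subtitle_rects(AVSubtitle *sub, unsigned num_rects)
    {
        unsigned i;

        sub->rects = av_mallocz(num_rects * sizeof(*sub->rects));
        if (!sub->rects)
            return AVERROR(ENOMEM);
        for (i = 0; i < num_rects; i++) {
            sub->rects[i] = av_mallocz(sizeof(*sub->rects[i]));
            if (!sub->rects[i])
                return AVERROR(ENOMEM);
            sub->num_rects++;
        }
        return 0;
    }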
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Hardware accelerated decoding generally uses AVHWFramesContext for pool
allocation of hardware surfaces. These are set up to allocate surfaces
aligned to hardware and hwaccel API requirements. Due to the
architecture, av_hwframe_get_buffer() will return AVFrames with
the dimensions set to the aligned sizes.
This causes some decoders (like hevc) to return these aligned sizes as
the final frame size, instead of cropping them to the video's actual
dimensions. To make sure this doesn't happen, crop the frame to the
size the decoder expects when ff_get_buffer() is called.
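A hedged sketch of the cropping step, assuming a small helper in the
get_buffer path (the function name is illustrative):

    #include "libavcodec/avcodec.h"
    #include "libavutil/frame.h"
    #include "libavutil/hwcontext.h"

    static int get_hw_buffer_cropped(AVCodecContext *avctx, AVFrame *frame)
    {
        /* The hw frame pool hands out surfaces with aligned dimensions... */
        int ret = av_hwframe_get_buffer(avctx->hw_frames_ctx, frame, 0);
        if (ret < 0)
            return ret;
        /* ...so restore the size the decoder actually asked for. */
        frame->width  = avctx->width;
        frame->height = avctx->height;
        return 0;
    }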
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Some devices (some phones, apparently) will support only this opaque
format. Of course this won't work with the CLI, because copying the data
directly is not supported.
Automatic frame allocation (setting AVCodecContext.hw_device_ctx) does
not support this mode, even if it's the only supported mode. But since
opaque surfaces are generally less useful, that's probably ok.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
This makes it easier to deal with formats that cannot be used for
staging textures (DXGI_FORMAT_420_OPAQUE). It also saves memory if the
staging texture is never needed, so this is a good thing.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
It appears that in this case, frames_uninit is called twice (once by
av_hwframe_ctx_init(), and again by unreffing the frames ctx ref).
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Some of these have existed since forever, some are new.
The cast in get_surface() is silly, but unless we change the av_log
function signature, or all callers of ff_dxva2_get_surface_index(), it
is needed to silence the const warning.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Make the supported codec profiles part of each dxva_modes entry. Every
DXVA2 mode is representative of a codec with a subset of supported
profiles, so reflecting that in dxva_modes seems appropriate.
In practice, this will check MPEG2 profiles more strictly, stop relying
on the surface format checks for selecting the correct HEVC profile, and
remove the verbose messages for mismatching H264/HEVC profiles. Instead
of the latter, it will now print the more nebulous "No decoder device
for codec found" verbose message.
This also respects AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH. Move the
Main10 HEVC entry before the normal one to make this work better.
Originally inspired by VLC's code.
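A hedged sketch of what a profile-aware mode table entry might look like;
the struct layout and array names are assumptions, not the actual code:

    #include <guiddef.h>
    #include "libavcodec/avcodec.h"

    typedef struct dxva_mode_example {
        const GUID     *guid;     /* decoder device GUID */
        enum AVCodecID  codec;    /* codec this mode decodes */
        const int      *profiles; /* supported profiles, terminated by
                                   * FF_PROFILE_UNKNOWN; NULL means any */
    } dxva_mode_example;

    /* Listing Main10 before plain Main lets the Main10 device be picked
     * first when AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH is set. */
    static const int prof_hevc_main10[] = { FF_PROFILE_HEVC_MAIN_10,
                                            FF_PROFILE_UNKNOWN };
    static const int prof_hevc_main[]   = { FF_PROFILE_HEVC_MAIN,
                                            FF_PROFILE_UNKNOWN };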
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
These variables might be set from a previous probe run, but one or the
other of the programs probed for may not understand the flags, resulting
in errors during assembly when the values of those variables are passed
to the assembler.
The previous default set the number of allocated surfaces to 32 unless
it was overridden by the user or the lookahead parameter was set.
Change the surface calculation for the default, B-frames and lookahead
scenarios.
Also employ this mechanism to pass $libdir to the runtime library search
path if rpath is enabled. This fixes underlinking of some test binaries
on some systems.
Check the existing flags in the cc/cflags/cppflags/ldflags for
occurrences of -isysroot; if none is found but --sysroot was specified,
set -isysroot to the same value as --sysroot.
This simplifies configuring cross-builds for iOS, if the global
environment variable SDKROOT isn't set.
Signed-off-by: Martin Storsjö <martin@martin.st>
The rtmp protocol uses nonblocking reads, to poll for incoming
messages from the server while publishing a stream.
Prior to 94599a6de3 and
d13b124eaf, the tls protocol
handled the nonblocking flag, mostly as a side effect from not
using custom IO callbacks for reading from the socket. When custom
IO callbacks were taken into use in
d15eec4d6b, the handling of a nonblocking
socket wasn't necessary for the default blocking mode any longer.
The code was simplified, since it was overlooked that other code
within libavformat actually used the tls protocol in nonblocking mode.
This fixes publishing over rtmps, with the gnutls backend.
Signed-off-by: Martin Storsjö <martin@martin.st>
The rtmp protocol uses nonblocking reads, to poll for incoming
messages from the server while publishing a stream.
Prior to 94599a6de3 and
d13b124eaf, the tls protocol
handled the nonblocking flag, mostly as a side effect from not
using custom IO callbacks for reading from the socket. When custom
IO callbacks were taken into use in
d15eec4d6b, the handling of a nonblocking
socket wasn't necessary for the default blocking mode any longer.
The code was simplified, since it was overlooked that other code
within libavformat actually used the tls protocol in nonblocking mode.
This fixes publishing over rtmps, with the openssl backend.
Signed-off-by: Martin Storsjö <martin@martin.st>
mux.c init_muxer() already sets codec_tag correctly in the cases
simplified here.
This also adds the capability to support alternative tags for the
same codec_id.