Don't use "static const" for compile time float constants, but use
defines. This fixes the following error:
src/libavfilter/vf_ssim360.c(549): error C2099: initializer is not a constant
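The pattern looks like this (hypothetical names, not the actual
vf_ssim360.c code): in C, a "static const" object is not a constant
expression, so it cannot be used where one is required, while a #define
expands to a literal, which always is one.

    /* Not valid C: a 'static const' object is not a constant
     * expression; MSVC reports error C2099 here. */
    #if 0
    static const float EXPANSION_COEF = 1.01f;
    static const float border_coefs[2] = { EXPANSION_COEF, 2 * EXPANSION_COEF };
    #endif

    /* Accepted everywhere: the macro expands to a float literal. */
    #define EXPANSION_COEF 1.01f
    static const float border_coefs[2] = { EXPANSION_COEF, 2 * EXPANSION_COEF };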
Signed-off-by: Martin Storsjö <martin@martin.st>
Customized SSIM for various projections (and stereo formats) of 360 images and videos.
Further contributions by:
Ashok Mathew Kuruvilla
Matthieu Patou
Yu-Hui Wu
Anton Khirnov
Suggested-By: ffmpeg@fb.com
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Since those fields will be overridden by videotoolbox_start(), they
should never be set by the user, as doing so can trigger memory leaks.
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
In the code path of av_videotoolbox_default_init/init2(),
avctx->internal->hwaccel_priv_data is NULL when it is passed to
decoder_cb.decompressionOutputRefCon, and it is later dereferenced
inside videotoolbox_decoder_callback().
Delay videotoolbox_start() until ff_videotoolbox_common_init() to
fix the bug.
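A standalone illustration of the failure mode (simplified, not the
actual FFmpeg code): the callback blindly dereferences the opaque
pointer it was registered with, so registering it before
hwaccel_priv_data is allocated crashes.

    #include <stddef.h>

    typedef struct VTContext { int nb_frames; } VTContext;

    /* Loosely mirrors videotoolbox_decoder_callback(): 'opaque' is the
     * value stored in decompressionOutputRefCon at session creation. */
    static void decoder_callback(void *opaque)
    {
        VTContext *ctx = opaque;
        ctx->nb_frames++;   /* NULL dereference if start ran too early */
    }

    int main(void)
    {
        void *hwaccel_priv_data = NULL; /* not allocated yet, as in the bug */
        decoder_callback(hwaccel_priv_data);
        return 0;
    }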
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
It's the minimum of all child protocols' max_packet_size. It can be used
like this:
ffmpeg -re -i cctv.mp4 -c copy -f mpegts \
-protocol_whitelist 'tee,file,udp' \
'tee:out.ts|udp://127.0.0.1:6666?pkt_size=1316'
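The computation amounts to something like this sketch (names are
illustrative, not the actual tee protocol code):

    #include <limits.h>

    /* The combined limit is the smallest child limit, so any packet the
     * tee accepts fits every child protocol (0 means "no limit"). */
    static int tee_max_packet_size(const int *child_sizes, int nb_children)
    {
        int min_size = INT_MAX;
        for (int i = 0; i < nb_children; i++)
            if (child_sizes[i] > 0 && child_sizes[i] < min_size)
                min_size = child_sizes[i];
        return min_size == INT_MAX ? 0 : min_size;
    }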
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
An AVMediaCodecDeviceContext without a surface or native_window is
useless; it shouldn't be created at all. Such a dummy AVHWDeviceContext
was allowed before, and it's used by the mpv player. Creating an ANativeWindow
automatically breaks such use cases.
So disable creating an ANativeWindow by default. It can be enabled
via the create_window flag, or by setting the AVDictionary passed to
av_hwdevice_ctx_create(). The downside is that
ffmpeg -hwaccel mediacodec -i input.mp4 \
-c:a copy -c:v hevc_mediacodec output.mp4
uses ByteBuffer mode, which isn't as efficient as before. The upside
is that libavfilter works now, which should be less surprising.
To enable create_window on ffmpeg command line, use
ffmpeg -hwaccel mediacodec \
-init_hw_device mediacodec=mediacodec,create_window=1 \
-i input.mp4 -c:a copy -c:v hevc_mediacodec output.mp4
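From application code, the same option can be passed through the
dictionary argument of av_hwdevice_ctx_create(); a minimal sketch
(error handling trimmed):

    #include <libavutil/dict.h>
    #include <libavutil/hwcontext.h>

    /* Request automatic ANativeWindow creation via the options dict. */
    static int create_mediacodec_device(AVBufferRef **device_ref)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "create_window", "1", 0);
        ret = av_hwdevice_ctx_create(device_ref, AV_HWDEVICE_TYPE_MEDIACODEC,
                                     NULL, opts, 0);
        av_dict_free(&opts);
        return ret;
    }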
Users should understand what enabling create_window implies; it should
be OK to take some time to figure out the option, and there are comments
inside hwcontext_mediacodec.h to help users figure it out.
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
This commit adds both decode and encode support for cICP chunks, which
allow a PNG image's pixel data to be tagged by any of the enum values in
H.273, without an ICC profile.
Upon decode, if a cICP chunk is present, the PNG decoder will tag output
AVFrames with the resulting enum color, and ignore iCCP, sRGB, gAMA, and
cHRM chunks, as per the spec.
Upon encode, if the color space is known and specified, and it is not sRGB,
the PNG encoder will output a cICP chunk containing the color space. If the
color space is sRGB, then it will output an sRGB chunk instead of a cICP
chunk. If the color space of the input is unspecified, it will not output
a cICP chunk tagging the PNG as unspecified.
In either the sRGB case or the non-sRGB case, gAMA and cHRM are still written
as fallbacks, provided the info is known.
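The chunk payload itself is small; a sketch of the layout and the
decision described above (illustrative names, not the actual pngenc.c
code):

    #include <stdint.h>

    /* cICP payload: four one-byte fields carrying H.273 enum values.
     * Matrix coefficients must be 0 (RGB) for PNG. */
    typedef struct cICP {
        uint8_t colour_primaries;    /* e.g. 1  = BT.709 */
        uint8_t transfer_function;   /* e.g. 13 = sRGB (IEC 61966-2-1) */
        uint8_t matrix_coefficients; /* always 0 for PNG */
        uint8_t video_full_range_flag;
    } cICP;

    /* Which chunk to emit, per the rules above; NULL means neither. */
    static const char *chunk_to_write(int known, int is_srgb)
    {
        if (!known)
            return NULL;
        return is_srgb ? "sRGB" : "cICP";
    }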
Signed-off-by: Leo Izen <leo.izen@gmail.com>
If an sRGB chunk is present in the PNG file, this commit will cause the
PNG decoder to ignore the cHRM and gAMA chunks and tag the resulting AVFrames
with BT.709 primaries and ISO/IEC 61966-2-1 transfer. If these tags are
present in the AVFrame, pngenc.c already writes this chunk, so no change was
needed on the encode side.
The PNG spec does not define what happens if sRGB and iCCP are present at
the same time; it just recommends that this not happen. As of this patch,
the decoder will have the ICC profile take precedence, and it will not tag
the pixel data as sRGB.
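A sketch of the decode-side tagging (structure and names are
assumptions, not the actual pngdec.c code):

    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>

    /* Tag a decoded frame according to an sRGB chunk, unless an ICC
     * profile was seen, in which case iCCP takes precedence. */
    static void tag_srgb(AVFrame *frame, int have_iccp)
    {
        if (have_iccp)
            return;
        frame->color_primaries = AVCOL_PRI_BT709;
        frame->color_trc       = AVCOL_TRC_IEC61966_2_1;
    }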
Signed-off-by: Leo Izen <leo.izen@gmail.com>
The cHRM chunk is descriptive. That is, it describes the primaries that should
be used to interpret the pixel data in the PNG file. This is notably different
from Mastering Display Metadata, which describes which subset of the presented
gamut is relevant. MDM describes a gamut and says colors outside the gamut are
not required to be preserved, but it does not actually describe the gamut that
the pixel data from the frame resides in. Thus, to decode a cHRM chunk present
in a PNG file to Mastering Display Metadata is incorrect.
This commit changes this behavior so the cHRM chunk, if present, is decoded to
color metadata. For example, if the cHRM chunk describes BT.709 primaries, the
resulting AVFrame will be tagged with AVCOL_PRI_BT709, as a description of its
pixel data. To do this, it utilizes libavutil/csp.h, which exposes a function,
av_csp_primaries_id_from_desc(), to detect which enum value accurately describes
the white point and primaries represented by the cHRM chunk.
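For illustration, feeding the BT.709 chromaticities through that
function returns AVCOL_PRI_BT709 (a hedged sketch; the conversion from
the raw cHRM integers is omitted):

    #include <libavutil/csp.h>
    #include <libavutil/rational.h>

    /* BT.709 white point (D65) and primaries, expressed as rationals. */
    static enum AVColorPrimaries chrm_to_primaries(void)
    {
        AVColorPrimariesDesc desc = {
            .wp   = { av_make_q(3127, 10000), av_make_q(3290, 10000) },
            .prim = {
                .r = { av_make_q(64, 100), av_make_q(33, 100) },
                .g = { av_make_q(30, 100), av_make_q(60, 100) },
                .b = { av_make_q(15, 100), av_make_q( 6, 100) },
            },
        };
        return av_csp_primaries_id_from_desc(&desc);
    }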
This commit also changes pngenc.c to utilize the libavutil/csp.h API, since it
previously duplicated code contained in that API. Instead, taking advantage of
the API that exists makes more sense. pngenc.c does properly utilize the color
tags rather than incorrectly using MDM, so that required no change.
Signed-off-by: Leo Izen <leo.izen@gmail.com>
Gamma 2.2 and Gamma 2.8 are tagged in the file as 0.45455 and 0.35714,
respectively (i.e. 1/2.2 and 1/2.8). Trying to identify them as 2.2 and
2.8 instead of these values will cause the transfer function not to be
recognized properly. This patch fixes this.
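A sketch of the corrected matching (illustrative, not the actual
pngdec.c code); the gAMA chunk stores the encoding exponent scaled by
100000, so 1/2.2 and 1/2.8 are the values to compare against:

    #include <stdint.h>
    #include <libavutil/pixfmt.h>

    static enum AVColorTransferCharacteristic gama_to_trc(uint32_t gama)
    {
        switch (gama) {
        case 45455:  return AVCOL_TRC_GAMMA22; /* 1/2.2, not 2.2 */
        case 35714:  return AVCOL_TRC_GAMMA28; /* 1/2.8, not 2.8 */
        case 100000: return AVCOL_TRC_LINEAR;
        default:     return AVCOL_TRC_UNSPECIFIED;
        }
    }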
bits_*_signed(0) will currently invoke an undefined shift by
8 * sizeof(int).
Add bits_*_signed_nz() that only works for n>0, analogous to
bits_read_nz(). Add an explicit check for n=0 in bits_*_signed().
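A sketch of the hazard and the split (simplified; the real helpers live
in the bits API): sign-extending an n-bit value shifts by (32 - n),
which is an undefined shift by the full type width when n == 0.

    #include <stdint.h>

    /* Valid only for 0 < n <= 32; n == 0 would shift by 32. */
    static int32_t read_signed_nz(uint32_t val, unsigned n)
    {
        return (int32_t)(val << (32 - n)) >> (32 - n);
    }

    /* General variant: handle n == 0 explicitly instead of shifting. */
    static int32_t read_signed(uint32_t val, unsigned n)
    {
        return n ? read_signed_nz(val, n) : 0;
    }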
Found-by: James Almer
This change improves the performance and multicore scalability of the VP9
codec for streaming single-pass encoded videos by taking advantage of up
to 64 cores in the system. The current thread limit for ffmpeg codecs is 16
(MAX_AUTO_THREADS in pthread_internal.h) due to a limitation in the H.264 codec
that prevents more than 16 threads from being used.
Experiments show that increasing the thread limit to 64 for VP9 improves
the performance of encoding 4K raw videos for streaming by up to 47%
compared to 16 threads, and by 20-30% for 32 threads, with the same quality
as measured by the VMAF score.
Rationale for this change:
VP9 uses tiling to split the video frame into multiple columns; tiles must
be at least 256 pixels wide, so there is a limit to how many tiles can be
used. The tiles can be processed in parallel, and more tiles mean more CPU
threads can be used. 4K videos can make use of 16 threads, and 8K videos
can use 32. Row-mt can double the number of threads, so 64 threads can be used.
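For example (a hedged illustration using standard libvpx-vp9 options,
not taken from the commit itself), an 8K encode can request 32 tile
columns plus row-mt to make use of the higher thread limit:
ffmpeg -i input_8k.y4m -c:v libvpx-vp9 -tile-columns 5 -row-mt 1 \
-threads 64 output.webm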
Signed-off-by: James Zern <jzern@google.com>