Also remove the ancient reference to libmpcodecs while at it.
Signed-off-by: James Almer <jamrial@gmail.com>
Reviewed-by: Paul B Mahol <onemda@gmail.com>
This commit adds a chromatic aberration filter for Vulkan that attempts to
emulate a lens chromatic aberration effect.
For YUV frames, it instead shifts the chroma channels, providing a
simple approximation of the effect.
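As a usage sketch (assuming the filter is registered as chromaber_vulkan,
that it exposes dist_x/dist_y strength options, and that ffmpeg was built
with Vulkan support):
ffmpeg -init_hw_device vulkan -i INPUT \
    -vf 'hwupload,chromaber_vulkan=dist_x=0.01:dist_y=0.01,hwdownload,format=yuv420p' OUTPUT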
This commit adds a Vulkan filtering infrastructure for libavfilter.
It attempts to abstract as much of the Vulkan API as possible away from
filters. The way the hwcontext and the framework are designed allows for
parallel, non-CPU-blocking filtering throughout, with the exception of
uploading, downloading and mapping.
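As a command-line sketch of the intended usage, only the hwupload/hwdownload
(and mapping) stages involve the CPU; the filtering between them stays on the
GPU. This assumes ffmpeg was built with --enable-vulkan and that a Vulkan
filter such as avgblur_vulkan is available:
ffmpeg -init_hw_device vulkan -i INPUT \
    -vf 'hwupload,avgblur_vulkan,hwdownload,format=yuv420p' OUTPUT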
It performs HDR (High Dynamic Range) to SDR (Standard Dynamic Range) conversion
with tone-mapping. For now, only HDR10 input is supported.
An example command to use this filter with vaapi codecs:
ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
-i INPUT -vf 'tonemap_vaapi=format=p010' -c:v hevc_vaapi -profile 2 OUTPUT
Signed-off-by: Xinpeng Sun <xinpeng.sun@intel.com>
Signed-off-by: Zachary Zhou <zachary.zhou@intel.com>
Signed-off-by: Ruiling Song <ruiling.song@intel.com>
This filter accepts any DNN network that does image processing.
Currently, frames in the rgb24 and bgr24 formats are supported; other
formats such as gray and YUV will be supported next. The network can
accept data in float32 or uint8 format, and it may change the frame
size.
The following is a Python script that halves the value of the first
channel of each pixel. It demonstrates how to set up and execute a DNN
model with Python and TensorFlow, and it generates the .pb file that
ffmpeg will use.
import tensorflow as tf
import numpy as np
import imageio

# Read the input image and normalize it to [0, 1].
in_img = imageio.imread('in.bmp')
in_img = in_img.astype(np.float32)/255.0
# Add a batch dimension: shape becomes (1, H, W, 3).
in_data = in_img[np.newaxis, :]

# A 1x1 convolution whose 3x3 channel-mixing matrix halves the first
# channel and passes the other two through unchanged.
filter_data = np.array([0.5, 0, 0, 0, 1., 0, 0, 0, 1.]).reshape(1,1,3,3).astype(np.float32)
filter = tf.Variable(filter_data)

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())
output = sess.run(y, feed_dict={x: in_data})

# Freeze the variables into constants and write the graph as
# halve_first_channel.pb, which ffmpeg can load.
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'halve_first_channel.pb', as_text=False)

# Save the script's own result for comparison with ffmpeg's output.
output = output * 255.0
output = output.astype(np.uint8)
imageio.imsave("out.bmp", np.squeeze(output))
To do the same thing with ffmpeg:
- generate halve_first_channel.pb with the above script
- generate halve_first_channel.model with tools/python/convert.py
- try the following commands:
./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=native -y out.native.png
./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.pb:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=tensorflow -y out.tf.png
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
It is expected that there will be more files to support the native mode,
so put all the DNN code under libavfilter/dnn.
The main change of this patch is the file locations; see below:
modified: libavfilter/Makefile
new file: libavfilter/dnn/Makefile
renamed: libavfilter/dnn_backend_native.c -> libavfilter/dnn/dnn_backend_native.c
renamed: libavfilter/dnn_backend_native.h -> libavfilter/dnn/dnn_backend_native.h
renamed: libavfilter/dnn_backend_tf.c -> libavfilter/dnn/dnn_backend_tf.c
renamed: libavfilter/dnn_backend_tf.h -> libavfilter/dnn/dnn_backend_tf.h
renamed: libavfilter/dnn_interface.c -> libavfilter/dnn/dnn_interface.c
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Remove rain from the input image/video by applying derain
methods based on convolutional neural networks. Training scripts
as well as scripts for model generation are provided in the
repository at https://github.com/XueweiMeng/derain_filter.git.
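A hedged usage sketch (derain.model here is a placeholder name for a file
produced by those scripts and converted for the native backend; the option
names follow the filter's documented dnn_backend/model syntax):
ffmpeg -i INPUT -vf derain=model=derain.model OUTPUT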
Signed-off-by: Xuewei Meng <xwmeng96@gmail.com>