ff_format_io_close() checks the AVIOContext pointer pb itself, so drop
the unnecessary check before calling ff_format_io_close().
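A minimal sketch of the caller-side simplification this allows (illustrative
only, not one of the actual patched call sites; ff_format_io_close() is an
internal helper declared in libavformat/internal.h inside the FFmpeg tree):

#include "avformat.h"
#include "internal.h"   /* ff_format_io_close() */

static void close_sub_io(AVFormatContext *s, AVIOContext **pb)
{
    /* Before: callers guarded the call themselves:
     *     if (*pb)
     *         ff_format_io_close(s, pb);
     * After: ff_format_io_close() checks the pointer itself, so the
     * call can simply be made unconditionally. */
    ff_format_io_close(s, pb);
}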
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Enable the probesize/max_analyze_duration setting when opening the sub-demuxer;
it will be used to minimize the initial delay.
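A hedged sketch of the idea (the function name and structure are illustrative,
not the actual hls/dash code): copy the caller's probing limits from the parent
context onto the sub-demuxer context before avformat_open_input(), so probing
of each sub-stream is bounded the same way and a small probesize/analyzeduration
actually shortens startup.

#include "avformat.h"

/* Illustrative only: open a sub-demuxer with the parent's probing limits. */
static int open_sub_demuxer(AVFormatContext *parent, AVFormatContext **child,
                            const char *url)
{
    int ret;

    *child = avformat_alloc_context();
    if (!*child)
        return AVERROR(ENOMEM);

    /* Propagate the user-supplied limits so the initial probe of the
     * sub-stream does not reintroduce a long startup delay. */
    (*child)->probesize            = parent->probesize;
    (*child)->max_analyze_duration = parent->max_analyze_duration;

    ret = avformat_open_input(child, url, NULL, NULL);
    if (ret < 0)
        return ret;

    return avformat_find_stream_info(*child, NULL);
}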
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Add probesize/max_analyze_duration support when opening the sub-demuxer;
it will be used to minimize the initial delay.
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
This filter accepts all dnn networks that do image processing.
Currently, frames with the formats rgb24 and bgr24 are supported. Other
formats such as gray and YUV will be supported next. The dnn network
can accept data in float32 or uint8 format, and the dnn network can
change the frame size.
The following is a python script that halves the value of the first
channel of every pixel. It demonstrates how to set up and execute a dnn
model with python+tensorflow. It also generates the .pb file which will
be used by ffmpeg.
import tensorflow as tf
import numpy as np
import imageio

# Read the input image and normalize it to float32 in [0, 1], shape (1, H, W, 3).
in_img = imageio.imread('in.bmp')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

# 1x1 convolution kernel: multiply the first channel by 0.5, keep the other two unchanged.
filter_data = np.array([0.5, 0, 0, 0, 1., 0, 0, 0, 1.]).reshape(1,1,3,3).astype(np.float32)
filter = tf.Variable(filter_data)
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')

# Run the graph once, then freeze the variables and save the model as a .pb file.
sess=tf.Session()
sess.run(tf.global_variables_initializer())
output = sess.run(y, feed_dict={x: in_data})
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'halve_first_channel.pb', as_text=False)

# Write the result back as an 8-bit image for reference.
output = output * 255.0
output = output.astype(np.uint8)
imageio.imsave("out.bmp", np.squeeze(output))
To do the same thing with ffmpeg:
- generate halve_first_channel.pb with the above script
- generate halve_first_channel.model with tools/python/convert.py
- try with the following commands
./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=native -y out.native.png
./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.pb:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=tensorflow -y out.tf.png
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Before this commit an av_assert0 would fail if a v4l2 device did not
support a target format.
For example,
./ffmpeg -f v4l2 -codec:v h264 -i /dev/video0 -f mpegts -
would signal an abort if /dev/video0 did not support h264.
The new behaviour is to return an AVERROR(EINVAL) error code. An
av_assert0 has been added to verify this return.
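A minimal sketch of the behavioural change (the helper name and flag are
illustrative, not the actual v4l2_m2m code):

#include "libavutil/error.h"   /* AVERROR(), EINVAL via errno.h */

/* Illustrative only: when the driver does not support the requested format,
 * return an error the caller can handle instead of aborting the process. */
static int select_capture_format(int format_supported)
{
    /* Before: av_assert0(format_supported);  -> abort() on such devices */
    if (!format_supported)
        return AVERROR(EINVAL);
    return 0;
}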
Fixes #6629
Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
If the alpha decoder's init failed, we presented the error message from the
normal decoder. This change should prevent such mistakes.
Signed-off-by: Marton Balint <cus@passwd.hu>
Signed-off-by: James Zern <jzern@google.com>
Compared to the other suggestions, this is cleaner and easier to understand,
keeping the condition in the if() simple.
This affects a lot of fate tests.
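The underlying pattern, as a generic hedged example (not the nutenc code
itself): memcmp() must not be called with a NULL pointer, even when the
compared size is 0, so the pointer check has to short-circuit inside the
same condition.

#include <stdint.h>
#include <string.h>

/* Returns non-zero when the two buffers differ; treats "both absent" as equal. */
static int buffers_differ(const uint8_t *a, size_t a_size,
                          const uint8_t *b, size_t b_size)
{
    /* Check the pointers first so memcmp() is never reached with NULL. */
    if (!a || !b)
        return a != b || a_size != b_size;
    return a_size != b_size || memcmp(a, b, a_size);
}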
See: [FFmpeg-devel] [PATCH 05/11] avformat/nutenc: Don't pass NULL to memcmp
See: [FFmpeg-devel] [PATCH]lavf/nutenc: Do not call memcmp() with NULL argument
Fixes: Ticket 7980
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>