The first stats line is printed only after the initial stats_period has
elapsed. With a large period, it may appear that ffmpeg has frozen at
startup.
The initial stats line is now printed after the first transcode_step
instead.
At present, progress stats are updated at a hardcoded interval of
half a second. For long processes, this can lead to bloated
logs and progress reports.
Users can now set a custom period using the option -stats_period.
The default is kept at 0.5 seconds.
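For example (illustrative command line, file names are placeholders),
progress updates could be limited to one every 10 seconds with:
ffmpeg -stats_period 10 -i input.mkv output.mkv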
They add considerable complexity to the frame-threading implementation,
which includes an unavoidably leaking error path, while the advantages
of this option to users are highly dubious.
It should always be possible and desirable for callers to make their
get_buffer2() implementation thread-safe, so deprecate this option.
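As a minimal illustration (not part of this change; the locking scheme
and names are assumptions), a caller's get_buffer2() can be made
thread-safe by guarding any shared caller state with a mutex and
delegating the actual allocation to avcodec_default_get_buffer2(),
which is itself thread-safe:

    #include <pthread.h>
    #include <libavcodec/avcodec.h>

    /* Hypothetical caller-side state shared between threads. */
    static pthread_mutex_t buffer_lock = PTHREAD_MUTEX_INITIALIZER;
    static int64_t buffers_handed_out;

    static int thread_safe_get_buffer2(AVCodecContext *avctx,
                                       AVFrame *frame, int flags)
    {
        /* Touch shared state only while holding the lock... */
        pthread_mutex_lock(&buffer_lock);
        buffers_handed_out++;
        pthread_mutex_unlock(&buffer_lock);

        /* ...and let the thread-safe default allocator do the work. */
        return avcodec_default_get_buffer2(avctx, frame, flags);
    }

The callback would then be installed with
avctx->get_buffer2 = thread_safe_get_buffer2; before avcodec_open2().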
We now have the possibility of getting AVFrames here, and we should
not touch the muxer's codecpar after writing the header.
Results of FATE tests change as the MXF and Matroska muxers actually
write down the field/frame coding type of a stream in their
respective headers. Before this change, these values in codecpar
would only be set after the muxer was initialized. Now, the
information is also available for encoder and muxer initialization.
Additionally, reap the first rewards by being able to set the
color related encoding values based on the passed AVFrame.
The only tests whose results change with this seem to be the MXF
tests. There, the muxer writes the limited/full range flag to the
output container if the value set by the encoder is not "unspecified".
- For video, this means a single initialization point in do_video_out.
- For audio we unfortunately need to do it in two places just
before the buffer sink is utilized (if av_buffersink_get_samples
would still work according to its specification after a call to
avfilter_graph_request_oldest was made, we could at least remove
the one in transcode_step).
Other adjustments to make things work:
- As the AVFrame PTS adjustment to the encoder time base needs the
encoder to be initialized, it is now moved to do_{video,audio}_out,
right after the encoder has been initialized (see the sketch after
this list). Due to this, the additional parameter in do_video_out is
removed as it is no longer necessary.
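A hedged sketch of that rescaling step (the function and variable
names are illustrative, not the actual ffmpeg.c code):

    #include <libavutil/mathematics.h>
    #include <libavutil/rational.h>

    /* Convert a frame PTS from the filter-graph output time base to
     * the encoder time base; only possible once the encoder is
     * initialized and its time_base is known. */
    static int64_t rescale_to_encoder_tb(int64_t pts, AVRational filter_tb,
                                         AVRational enc_tb)
    {
        return av_rescale_q(pts, filter_tb, enc_tb);
    }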
This way the old behavior based on the maximum queue size limit is
kept for streams where each individual packet is large, while for
smaller streams more packets can be buffered (the current default is
50 megabytes per stream).
For some explanation: by default, ffmpeg copies packets from before
the appointed seek point/start time and puts them into the local
muxing queue. Before, this queue was much less likely to be utilized,
since as soon as the filter chain was initialized, the encoder (and
thus the output stream) was also initialized.
Now, since we will be pushing the encoder initialization to when the
first AVFrame is decoded and filtered - which only happens after
the exact seek point is hit as packets are ignored until then -
this queue will be seeing much more usage.
In layman's terms, this attempts to fix cases such as the following:
- seek point ends up being 5 seconds before requested time.
- audio is set to copy, and thus immediately begins filling the
muxing queue.
- video is being encoded, and thus all received packets are skipped
until the requested time is hit.
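A hypothetical command line that hits exactly this pattern (file
names, codec and seek position are placeholders):
ffmpeg -ss 00:05:00 -i input.mkv -c:v libx264 -c:a copy output.mkv
Here the demuxer may seek to a point several seconds before the
requested time; the copied audio packets from that span fill the
muxing queue, while the encoded video's packets are skipped until the
requested time is reached.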
The user has no business modifying the underlying AVCodec.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The AVFilterInOuts normally get freed in init_output_filter() when
the corresponding streams get created; yet if an error happens before
one reaches said point, they leak. Therefore this commit makes
ffmpeg_cleanup free them, too.
Fixes ticket #8267.
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Threaded input can increase the smoothness of e.g. x11grab
significantly. Before this patch, in order to activate threaded input
the user had to specify a "dummy" additional input; with this change,
that is no longer required.
Signed-off-by: Marton Balint <cus@passwd.hu>
This is useful because, by ffprobe's very nature, you use it to probe
a file and find out what it is. Requiring every format private option
to be known to the demuxer forces one to run ffprobe twice if one
wants to use ffprobe in a generic way.
For example, say one wants to probe all user-uploaded files, while
also ignoring edit lists for any MP4s that are uploaded. Currently,
you'd have to run ffprobe twice: once to identify the format, and
once again to actually probe the metadata you want. After this
patch, you could set -ignore_editlist 1 on every call and only
probe once.
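For instance (an illustrative command; the uploaded file may or may
not be an MP4), a single generic call could then be:
ffprobe -ignore_editlist 1 -show_format -show_streams uploaded_file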
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Currently, ffmpeg inserts a scale filter by default in the filter
graph to force the whole decoded stream to be scaled to the same size
as the first frame. This does not quite make sense in
resolution-changing cases if the user wants the raw video without any
scaling.
Add autoscale/noautoscale as an output option to indicate whether to
automatically insert the scale filter into the filter graph:
-noautoscale or -autoscale 0:
disable the default automatic insertion of the scale filter.
ffmpeg -y -i input.mp4 out1.yuv -noautoscale out2.yuv -autoscale 0 out3.yuv
Update docs.
Suggested-by: Mark Thompson <sw@jkqxz.net>
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: U. Artie Eoff <ullysses.a.eoff@intel.com>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
The previous code here did not handle passing a frames context when
ffmpeg itself did not know about the device it came from (for example,
because it was created by device derivation inside a filter graph), which
would break encoders requiring that input. Fix that by checking for HW
frames and device context methods independently, and prefer to use a
frames context method if possible. At the same time, revert the encoding
additions to the device matching function because the additional
complexity was not relevant to decoding.
Also fixes #8637, which is the same case but with the device creation
hidden in the ad-hoc libmfx setup code.
This can support encoders which want frames and/or device contexts. For
the device case, it currently picks the first initialised device of the
desired type to give to the encoder - a new option would be needed if it
were necessary to choose between multiple devices of the same type.
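As a hedged example of the device case (device name, input and
encoder choice are illustrative), a device created with
-init_hw_device could then be given to an encoder of a matching type
without further plumbing:
ffmpeg -init_hw_device qsv=qs -i input.mp4 -c:v h264_qsv output.mp4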
The data of an attachment file is put into an AVCodecParameters'
extradata. The corresponding size field has type int, yet there was no
check that the size fits into an int. As a consequence, it was
possible to create extradata with a negative size (by using a big
enough max_alloc).
Other errors were also possible: If SIZE_MAX < INT64_MAX (e.g. on 32bit
systems) then the file size might be truncated before the allocation;
and avio_read() takes an int, too, so one would not have read as much
as one desired.
Furthermore, the extradata is now padded as is required.
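A sketch of the kind of check this implies (the function and its
parameters are hypothetical, not the actual demuxer code):

    #include <errno.h>
    #include <limits.h>
    #include <string.h>
    #include <libavcodec/avcodec.h>
    #include <libavformat/avio.h>
    #include <libavutil/error.h>
    #include <libavutil/mem.h>

    /* Read an attachment of 'size' bytes into codecpar extradata,
     * rejecting sizes that do not fit into the int extradata_size
     * field and adding the required zeroed padding. */
    static int read_attachment(AVIOContext *pb, AVCodecParameters *par,
                               int64_t size)
    {
        if (size < 0 || size >= INT_MAX - AV_INPUT_BUFFER_PADDING_SIZE)
            return AVERROR(ERANGE);

        par->extradata = av_malloc(size + AV_INPUT_BUFFER_PADDING_SIZE);
        if (!par->extradata)
            return AVERROR(ENOMEM);
        memset(par->extradata + size, 0, AV_INPUT_BUFFER_PADDING_SIZE);
        par->extradata_size = (int)size;

        if (avio_read(pb, par->extradata, (int)size) != (int)size)
            return AVERROR(EIO);
        return 0;
    }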
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
When QSV is enabled in FFmpeg, the command "ffmpeg -hwaccels" shows a
duplicate entry in acceleration methods for QSV:
Hardware acceleration methods:
vaapi
qsv
drm
opencl
qsv
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>