Currently, the code doing this is spread over several places and may
behave in unexpected ways. E.g. automatic 'default' marking is only done
for streams fed by complex filtergraphs. It is also applied in the order
in which the output streams are initialized, which is effectively
random.
Move processing the dispositions to the end of open_output_file(), when
we already have all the necessary information.
Apply the automatic default marking only if no explicit -disposition
options were supplied by the user, and apply it to the first stream of
each type (excluding attached pics) when there is more than one stream
of that type and no default markings were copied from the input streams.
Explicitly document the new behavior.
Changes the results of some tests, where the output file gets a default
disposition, while it previously did not.
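For illustration (file names are hypothetical), given an input with two
audio streams and no default markings copied from the input:
ffmpeg -i input.mkv -map 0 -c copy output.mkv
would now mark the first audio stream of the output as default, provided
no explicit -disposition option is supplied.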
UPD: Rebased the last patch set over current master; DX9 is used as the default device type.
The dxva2/DX9 device type is selected by default, as before, with explicit d3d11va/DX11 usage available to cover more HW configurations.
Added a warning message that the default device type is expected to change in the future.
Fixes TGL / AV1 decode, which requires DX11, via explicit DX11 type
selection.
Add headless/multi adapter support and fixes:
https://trac.ffmpeg.org/ticket/7511
https://trac.ffmpeg.org/ticket/6827
http://ffmpeg.org/pipermail/ffmpeg-trac/2017-November/041901.html
https://trac.ffmpeg.org/ticket/7933
338fbcd5bb
https://github.com/jellyfin/jellyfin/issues/2626#issuecomment-602153952
Any other fixes are welcome, including an OpenCL interop patch, since I don't have a proper setup to validate that use case.
Decoding, encoding and transcoding have been validated.
The child_device_type option is responsible for d3d11va/dxva2 device selection.
Usage examples:
DirectX 11:
-init_hw_device qsv:hw,child_device_type=d3d11va
-init_hw_device qsv:hw,child_device_type=d3d11va,child_device=0
OR
-init_hw_device d3d11va=dx -init_hw_device qsv@dx
DirectX 9 is still supported but requires explicit selection:
-init_hw_device qsv:hw,child_device_type=dxva2
OR
-init_hw_device dxva2=dx -init_hw_device qsv@dx
Signed-off-by: Artem Galin <artem.galin@intel.com>
At present, progress stats are updated at a hardcoded interval of
half a second. For long processes, this can lead to bloated
logs and progress reports.
Users can now set a custom period using the -stats_period option.
Default is kept at 0.5 seconds.
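For example (file names are illustrative):
ffmpeg -stats_period 10 -i input.mkv output.mkv
updates the progress stats every 10 seconds instead of every 0.5 seconds.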
This way, the old behavior based on the maximum queue size limit is kept
for streams where each individual packet is large, while for streams
with smaller packets more packets can be buffered (the current default
is 50 megabytes per stream).
For some explanation: by default ffmpeg copies packets from before
the appointed seek point/start time and puts them into the local
muxing queue. Previously, this queue was unlikely to see much use,
since as soon as the filter chain was initialized, the encoder
(and thus output stream) was also initialized.
Now, since we will be pushing the encoder initialization to when the
first AVFrame is decoded and filtered - which only happens after
the exact seek point is hit as packets are ignored until then -
this queue will be seeing much more usage.
In layman's terms, this attempts to fix cases such as where:
- seek point ends up being 5 seconds before requested time.
- audio is set to copy, and thus immediately begins filling the
muxing queue.
- video is being encoded, and thus all received packets are skipped
until the requested time is hit.
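An illustrative command that exercises this path (file names and codecs
are only examples):
ffmpeg -ss 00:05:00 -i input.mkv -c:a copy -c:v libx264 output.mkv
Here the copied audio packets from before the requested time pile up in
the muxing queue, while video packets are skipped until the requested
time is reached.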
Threaded input can increase the smoothness of e.g. x11grab significantly. Before
this patch, the user had to specify a "dummy" additional input in order to
activate threaded input; with this change that is no longer required.
Signed-off-by: Marton Balint <cus@passwd.hu>
Currently, ffmpeg inserts a scale filter into the filter graph by default
to force the whole decoded stream to be scaled to the same size as the
first frame. That does not make much sense in resolution-changing cases
when the user wants the raw video without any scaling.
Add autoscale/noautoscale as an output option to indicate whether the
scale filter is auto-inserted into the filter graph:
-noautoscale or -autoscale 0:
disable the default auto-insertion of the scale filter.
ffmpeg -y -i input.mp4 out1.yuv -noautoscale out2.yuv -autoscale 0 out3.yuv
Update docs.
Suggested-by: Mark Thompson <sw@jkqxz.net>
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: U. Artie Eoff <ullysses.a.eoff@intel.com>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
The "-deinterlace" was deprecated since d7edd35, over eight years
ago.
Refer to deinterlacing filters instead.
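For example (file names are illustrative), the yadif filter can be used
instead:
ffmpeg -i interlaced.mkv -vf yadif output.mkv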
Signed-off-by: Moritz Barsnick <barsnick@gmx.net>
Also documents all options supported by the hwdevice.
This lets users enable all extensions they need without writing their own
instance initialization code.
Add two FAQs about running FFmpeg in the background.
The first explains the use of the -nostdin option in
a straightforward way. Text revised based on review.
The second FAQ starts from a confusing error message,
and leads to the solution, use of the -nostdin option.
The purpose of the second FAQ is to attract web searches
from people having the problem, and offer them a solution.
Add an anchor to the Main Options section of the ffmpeg
documentation, so that the FAQs can link directly there.
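A typical invocation covered by these FAQs (file names are illustrative):
ffmpeg -nostdin -i input.mkv output.mkv &
where -nostdin keeps the backgrounded ffmpeg from trying to read from the
terminal.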
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
For some strange reason, the "-t" option was the only one implemented
for input files, while both "-t" and "-to" were available
for output files. This made extracting a range from an
input file inconvenient.
This patch enables the -to option for input, so one can do
ffmpeg -ss 1:23:20 -to 1:27:22.3 -i myinput.mkv ...
Signed-off-by: Vitaly _Vi Shukela <vi0oss@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The -map option allows for a trailing ? so that an error is not thrown if
the input stream does not exist.
This capability is extended to the -map_channel option.
This allows an ffmpeg command not to break if an input channel does not
exist, which can be of use (for instance, in scripts processing audio
channels from sources with an unknown number of audio channels).
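For example (file name and channel index are illustrative):
ffmpeg -i input.mov -map_channel 0.0.0 -map_channel 0.0.5? output.wav
does not fail even if the first audio stream has fewer than six channels.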
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This only supports one device globally, but more can be used by
passing them with input streams in hw_frames_ctx or by deriving new
devices inside a filter graph with hwmap.
(cherry picked from commit e669db7610)
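As an illustration, assuming this global device is the one referenced by
-filter_hw_device (device type and filter chain are only examples):
ffmpeg -init_hw_device vaapi=va -filter_hw_device va -i input.mkv -vf 'format=nv12,hwupload,scale_vaapi=w=1280:h=720,hwdownload,format=nv12' output.mkv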
Add a per-stream option for setting the encoder timebase.
The following values are allowed:
0 - for video, use 1/frame_rate, for audio use 1/sample_rate (this is
the default)
-1 - match the input timebase (when possible)
>0 - set the timebase to the provided number
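For illustration, assuming the option is exposed as -enc_time_base with
the usual stream specifiers (file names are illustrative):
ffmpeg -i input.mkv -c:v libx264 -enc_time_base:v -1 output.mkv
asks the video encoder to use the input stream's timebase when possible.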
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>