This option flag only carries nontrivial information for options that
call a function; in all other cases its presence can be inferred from
the option type (bool options do not take arguments, all other types do),
so it is nothing but useless clutter.
Change the option parsing code to infer its value when it can, and drop
the flag from options where it's not needed.
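Roughly, the inference could look like this (a sketch only - the type and
flag names below are illustrative, not the actual fftools definitions):

    /* Whether an option takes an argument follows from its type;
     * only function-backed options need to say so explicitly. */
    typedef enum OptionType {
        OPT_TYPE_FUNC,      /* handled by a callback */
        OPT_TYPE_BOOL,      /* never takes an argument */
        OPT_TYPE_STRING,    /* this and every remaining type takes one */
        OPT_TYPE_INT,
    } OptionType;

    #define OPT_FUNC_ARG 0x1    /* meaningful only for OPT_TYPE_FUNC */

    static int opt_has_arg(OptionType type, unsigned flags)
    {
        if (type == OPT_TYPE_BOOL)
            return 0;
        if (type == OPT_TYPE_FUNC)
            return !!(flags & OPT_FUNC_ARG);
        return 1;
    }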
It causes those options to be parsed as either
* -autofoo 0/1 (with an argument)
* -noautofoo (without an argument)
This is unnecessary, confusing, and against the documentation; these are
also the only two bool options that take an argument.
This should not affect users, as these options are on by default and are
supposed to be used as -nofoo per the documentation.
Otherwise an uninitialized stack value would be copied to FPSConvContext.
As it is then never used, this tends not to be a problem in practice;
however, it is UB and some compilers warn about it.
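The pattern being fixed, reduced to a hypothetical sketch (the struct and
field below are stand-ins, not the real FPSConvContext layout):

    #include <stdint.h>

    typedef struct FPSConvContext { int64_t last_pts; } FPSConvContext;

    void broken(FPSConvContext *fps)
    {
        int64_t last_pts;           /* never written on some paths */
        /* ... code that only sets last_pts conditionally ... */
        fps->last_pts = last_pts;   /* UB: copies an indeterminate value */
    }

    void fixed(FPSConvContext *fps)
    {
        int64_t last_pts = 0;       /* always has a defined value now */
        /* ... */
        fps->last_pts = last_pts;
    }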
On Windows, long is a signed 32-bit type, while nb_stream{s,_groups} are
both unsigned int. In a realistic scenario this won't make a difference,
but it's still proper.
While at it, also ensure the parsed string is an integer.
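A sketch of the safer parse, using the usual strtoul() pattern (the helper
name and signature are made up for illustration):

    #include <errno.h>
    #include <limits.h>
    #include <stdlib.h>

    /* Parse a decimal index into an unsigned int, rejecting a leading
     * minus, empty input, trailing garbage and out-of-range values. */
    static int parse_index(const char *s, unsigned *out)
    {
        char *end;
        unsigned long v;

        errno = 0;
        v = strtoul(s, &end, 10);
        if (errno || end == s || *end || *s == '-' || v > UINT_MAX)
            return -1;
        *out = (unsigned)v;
        return 0;
    }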
Signed-off-by: James Almer <jamrial@gmail.com>
In close_output(), a dummy frame with format NONE is created and passed
to enc_open(), which isn't prepared for it. The NULL pointer
dereference happened at
av_pix_fmt_desc_get(enc_ctx->pix_fmt)->comp[0].depth.
When fgt.graph is NULL, skip fg_output_frame() since there is
nothing to output.
frame #0: 0x0000005555bc34a4 ffmpeg_g`enc_open(opaque=0xb400007efe2db690, frame=0xb400007efe2d9f70) at ffmpeg_enc.c:235:44
frame #1: 0x0000005555bef250 ffmpeg_g`enc_open(sch=0xb400007dde2d4090, enc=0xb400007e4e2daad0, frame=0xb400007efe2d9f70) at ffmpeg_sched.c:1462:11
frame #2: 0x0000005555bee094 ffmpeg_g`send_to_enc(sch=0xb400007dde2d4090, enc=0xb400007e4e2daad0, frame=0xb400007efe2d9f70) at ffmpeg_sched.c:1571:19
frame #3: 0x0000005555bee01c ffmpeg_g`sch_filter_send(sch=0xb400007dde2d4090, fg_idx=0, out_idx=0, frame=0xb400007efe2d9f70) at ffmpeg_sched.c:2154:12
frame #4: 0x0000005555bcf124 ffmpeg_g`close_output(ofp=0xb400007e4e2d85b0, fgt=0x0000007d1790eb08) at ffmpeg_filter.c:2225:15
frame #5: 0x0000005555bcb000 ffmpeg_g`fg_output_frame(ofp=0xb400007e4e2d85b0, fgt=0x0000007d1790eb08, frame=0x0000000000000000) at ffmpeg_filter.c:2317:16
frame #6: 0x0000005555bc7e48 ffmpeg_g`filter_thread(arg=0xb400007eae2ce7a0) at ffmpeg_filter.c:2836:15
frame #7: 0x0000005555bee568 ffmpeg_g`task_wrapper(arg=0xb400007d8e2db478) at ffmpeg_sched.c:2200:21
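The guard amounts to something like this (simplified sketch; the exact
call site in ffmpeg_filter.c differs). av_pix_fmt_desc_get() returns NULL
for AV_PIX_FMT_NONE, hence the crash above:

    /* If the graph was never configured, nothing is buffered and there
     * is no valid format, so skip fg_output_frame() rather than hand
     * enc_open() a dummy frame with format AV_PIX_FMT_NONE. */
    if (fgt->graph) {
        ret = fg_output_frame(ofp, fgt, NULL);
        if (ret < 0)
            goto fail;
    }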
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Being signed one-bit bit-fields, they cannot store 1, only 0 and -1.
Avoids warnings such as:
implicit truncation from 'int' to a one-bit wide bit-field changes value from 1 to -1 [-Wsingle-bit-bitfield-constant-conversion]
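A self-contained reproduction of what the warning is about (plain int
bit-fields are signed on the usual ABIs):

    #include <stdio.h>

    struct s { int      seen : 1; };    /* can represent only 0 and -1 */
    struct u { unsigned seen : 1; };    /* can represent 0 and 1 */

    int main(void)
    {
        struct s a = { .seen = 1 };     /* truncates: reads back as -1 */
        struct u b = { .seen = 1 };
        printf("%d %d\n", a.seen, b.seen); /* prints "-1 1" */
        return 0;
    }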
Should fix valgrind warnings about "Conditional jump or move depends on
uninitialised value".
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: James Almer <jamrial@gmail.com>
Change the main loop and every component (demuxers, decoders, filters,
encoders, muxers) to use the previously added transcode scheduler. Every
instance of every such component was already running in a separate
thread, but now they can actually run in parallel.
Changes the results of ffmpeg-fix_sub_duration_heartbeat, which JEEB
tested and confirmed to be more correct and deterministic.
See the comment block at the top of fftools/ffmpeg_sched.h for more
details on what this scheduler is for.
This commit adds the scheduling code itself, along with minimal
integration with the rest of the program:
* allocating and freeing the scheduler
* passing it throughout the call stack in order to register the
individual components (demuxers/decoders/filtergraphs/encoders/muxers)
with the scheduler
The scheduler is not actually used as of this commit, so it should not
result in any change in behavior. That will change in future commits.
As with the analogous decoding change, this is only a preparatory step toward
a fully threaded architecture and does not yet make encoding truly
parallel. The main thread will currently submit a frame and wait until
it has been fully processed by the encoder before moving on. That will
change in future commits after filters are moved to threads and a
thread-aware scheduler is added.
This code suffers from a known issue - if an encoder with a sync queue
receives EOF it will terminate after processing everything it currently
has, even though the sync queue might still be triggered by other
threads. That will be fixed in following commits.
* the code is made shorter and simpler
* avoids constantly allocating and freeing AVPackets, thanks to
ThreadQueue integration with ObjPool
* is consistent with decoding/filtering/muxing
* reduces the diff in the future switch to thread-aware scheduling
This makes ifile_get_packet() always block. Any potential issues caused
by this will be resolved by the switch to thread-aware scheduling in
future commits.
Otherwise they'd be silently ignored if received by the filtering thread
before the filtergraph can be initialized, which would make the output
dependent on the order in which frames from different inputs arrive.
As previously for decoding, this is merely "scaffolding" for moving to a
fully threaded architecture and does not yet make filtering truly
parallel - the main thread will currently wait for the filtering thread
to finish its work before continuing. That will change in future commits
after encoders are also moved to threads and a thread-aware scheduler is
added.
Avoid making decisions based on current graph input state, which makes
the output dependent on the order in which the frames from different
inputs are interleaved.
Makes the output of fate-filter-overlay-dvdsub-2397 more correct - the
subtitle appears two frames later, which is closer to its PTS as stored
in the file.
Declaring the function argument as const fixes a warning further down the
line about const being stripped from the parameter. We don't modify this
argument.
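For illustration, a hypothetical function showing the kind of change (the
point is just the const qualifier on the pointee):

    #include <libavutil/frame.h>

    /* With const, a caller holding a const AVFrame * no longer sees the
     * qualifier stripped, and the signature documents read-only access. */
    static int frame_is_key(const AVFrame *frame)
    {
        return !!(frame->flags & AV_FRAME_FLAG_KEY);
    }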
Signed-off-by: Leo Izen <leo.izen@gmail.com>
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Current code tracks min/max pts for each stream separately; then when
the file ends it combines them with the last frame's duration to compute the
total duration of each stream; finally it selects the longest of those
durations as the file duration.
This is incorrect - the total file duration is the largest timestamp
difference between any frames, regardless of the stream.
Also change the way the last frame information is reported from decoders
to the muxer - previously it would be just the last frame's duration,
now the end timestamp is sent, which is simpler.
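For example, with stream A covering [0, 10) and stream B covering [5, 15)
in the same time base, the longest per-stream duration is 10, but the
actual file duration is 15 - 0 = 15. A sketch of the corrected computation
(helper and array names are illustrative):

    #include <stdint.h>

    /* first_pts[i] is the smallest pts seen on stream i, last_end[i]
     * the largest pts + duration; one shared time base is assumed. */
    static int64_t total_duration(const int64_t *first_pts,
                                  const int64_t *last_end, int nb_streams)
    {
        int64_t start_min = INT64_MAX, end_max = INT64_MIN;

        for (int i = 0; i < nb_streams; i++) {
            if (first_pts[i] < start_min) start_min = first_pts[i];
            if (last_end[i]  > end_max)   end_max   = last_end[i];
        }
        return end_max - start_min; /* not the max per-stream duration */
    }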
Changes the result of the fate-ffmpeg-streamloop-transcode-av test,
where the timestamps are shifted slightly forward. Note that the
matroska demuxer does not return the first audio packet after seeking
(due to a buggy interaction between the generic code and the demuxer), so
there is a gap in audio.